Compare commits


243 commits

Author SHA1 Message Date
Kubernetes Prow Robot
01919d0ef1
Merge pull request #686 from kubernetes-sigs/dependabot/go_modules/golang.org/x/crypto-0.31.0
build(deps): bump golang.org/x/crypto from 0.22.0 to 0.31.0
2025-04-01 03:34:37 -07:00
dependabot[bot]
21ea0ab279
build(deps): bump golang.org/x/crypto from 0.22.0 to 0.31.0
Bumps [golang.org/x/crypto](https://github.com/golang/crypto) from 0.22.0 to 0.31.0.
- [Commits](https://github.com/golang/crypto/compare/v0.22.0...v0.31.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-12-12 00:01:25 +00:00
Kubernetes Prow Robot
c2ae4cdaf1
Merge pull request #672 from chc5/go-upgrade-cve-fix
Upgrade Go version to 1.22.5 to fix CVEs.
2024-07-23 10:06:51 -07:00
Chieh Chen
26d05b7ae9 Upgrade Go version to 1.22.5 to fix CVEs. 2024-07-17 20:13:25 +00:00
Damien Grisonnet
17cef511b1
Merge pull request #660 from dgrisonnet/cut-0.12.0
Cut release 0.12.0
2024-05-16 20:14:56 +02:00
Damien Grisonnet
9988fd3e91 *: cut release 0.12.0
Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
2024-05-16 20:01:59 +02:00
Kubernetes Prow Robot
06e1d3913e
Merge pull request #659 from dgrisonnet/bump-1.30
Bump to Kubernetes 1.30
2024-05-16 09:57:38 -07:00
Damien Grisonnet
39ef9fa0e7 test: use official image of kind
Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
2024-05-16 14:26:37 +02:00
Damien Grisonnet
01b29a6578 *: fix openapi-gen options
Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
2024-05-16 14:26:37 +02:00
Damien Grisonnet
1d31a46aa1 *: update-lint
Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
2024-05-16 14:26:37 +02:00
Damien Grisonnet
d3784c5725 test: bump test dependencies
Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
2024-05-16 14:26:37 +02:00
Damien Grisonnet
aba25ac4aa cmd: fix OpenAPI
Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
2024-05-16 14:26:37 +02:00
Damien Grisonnet
fdde189945 *: bump deps
Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
2024-05-16 14:26:36 +02:00
Kubernetes Prow Robot
63bd3e8d44
Merge pull request #653 from logicalhan/patch-1
Update OWNERS (add myself and dashpole to OWNERs)
2024-04-19 10:20:08 -07:00
Kubernetes Prow Robot
1692f124d3
Merge pull request #651 from kubernetes-sigs/dependabot/go_modules/google.golang.org/grpc-1.58.3
build(deps): bump google.golang.org/grpc from 1.58.2 to 1.58.3
2024-04-19 10:10:45 -07:00
Han Kang
11d7d2bb05
Update OWNERS (add myself and dashpole to OWNERs) 2024-04-18 10:11:01 -07:00
dependabot[bot]
b224085e86
build(deps): bump google.golang.org/grpc from 1.58.2 to 1.58.3
Bumps [google.golang.org/grpc](https://github.com/grpc/grpc-go) from 1.58.2 to 1.58.3.
- [Release notes](https://github.com/grpc/grpc-go/releases)
- [Commits](https://github.com/grpc/grpc-go/compare/v1.58.2...v1.58.3)

---
updated-dependencies:
- dependency-name: google.golang.org/grpc
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-04-05 15:31:29 +00:00
Kubernetes Prow Robot
5d9b01a57a
Merge pull request #649 from machine424/uppps
deps: upgrade github.com/golang/protobuf to v1.5.4 for better compati…
2024-04-05 08:30:48 -07:00
machine424
4d5c98d364
deps: upgrade github.com/golang/protobuf to v1.5.4 for better compatibility, see https://github.com/golang/protobuf/issues/1596#issuecomment-1981208282
upgrade go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp to v0.49.0 to address CVE-2023-45142 even though prometheus-adapter isn't using it directly and isn't exposing any traces.
2024-04-04 09:48:53 +02:00
Kubernetes Prow Robot
b48bff400e
Merge pull request #599 from bogo-y/fix
Fix metric unregistered
2024-03-26 09:15:19 -07:00
Kubernetes Prow Robot
9156bf3fbc
Merge pull request #614 from kubernetes-sigs/dependabot/go_modules/google.golang.org/grpc-1.56.3
build(deps): bump google.golang.org/grpc from 1.53.0 to 1.56.3
2023-11-28 18:40:38 +01:00
Kubernetes Prow Robot
27cf936f32
Merge pull request #608 from jaybooth4/release-112
Cut release v0.11.2
2023-11-13 15:58:49 +01:00
Kubernetes Prow Robot
ed795c1ae2
Merge pull request #620 from jaybooth4/master
Update prometheus-adapter go patch versions
2023-11-08 18:22:35 +01:00
Jason
a64d132d91 Update prometheus-adapter go patch versions 2023-11-07 02:48:54 +00:00
dependabot[bot]
a01b094a63
build(deps): bump google.golang.org/grpc from 1.53.0 to 1.56.3
Bumps [google.golang.org/grpc](https://github.com/grpc/grpc-go) from 1.53.0 to 1.56.3.
- [Release notes](https://github.com/grpc/grpc-go/releases)
- [Commits](https://github.com/grpc/grpc-go/compare/v1.53.0...v1.56.3)

---
updated-dependencies:
- dependency-name: google.golang.org/grpc
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-11-06 21:56:01 +00:00
Kubernetes Prow Robot
f588141f08
Merge pull request #609 from kubernetes-sigs/dependabot/go_modules/golang.org/x/net-0.17.0
build(deps): bump golang.org/x/net from 0.8.0 to 0.17.0
2023-11-06 22:54:46 +01:00
Kubernetes Prow Robot
98e716c7d3
Merge pull request #618 from machine424/d-http2
Add a toggle to disable HTTP/2 on the server to mitigate CVE-2023-44487
2023-10-31 17:16:11 +01:00
machine424
ba77337ae4
Add a toggle to disable HTTP/2 on the server to mitigate CVE-2023-44487
until the Go standard library and golang.org/x/net are fully fixed.
2023-10-30 09:48:56 +01:00
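The mitigation this commit describes relies on a quirk of Go's net/http package: installing a non-nil, empty TLSNextProto map on the server prevents HTTP/2 from being negotiated over TLS. A minimal sketch (illustrative only; the helper name and flag plumbing are assumptions, not the adapter's actual code):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
)

// newServer builds an HTTP server; when disableHTTP2 is true it installs a
// non-nil, empty TLSNextProto map, which tells net/http not to negotiate
// HTTP/2 over TLS -- the mitigation for the CVE-2023-44487 rapid-reset attack.
func newServer(addr string, disableHTTP2 bool) *http.Server {
	srv := &http.Server{Addr: addr}
	if disableHTTP2 {
		srv.TLSNextProto = make(map[string]func(*http.Server, *tls.Conn, http.Handler))
	}
	return srv
}

func main() {
	srv := newServer(":6443", true)
	// A non-nil but empty map disables HTTP/2 without affecting HTTP/1.1.
	fmt.Println(srv.TLSNextProto != nil && len(srv.TLSNextProto) == 0)
}
```

Leaving TLSNextProto nil restores the default behavior, which is why a toggle works: the flag only decides whether the empty map is installed.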
bogo
a5bcb39046
replace "endpoint" with "path" 2023-10-23 11:15:51 +08:00
bogo
2a4a4316dd run make update-lint and set EnabledMetrics=false in the server config 2023-10-13 20:14:34 +08:00
dependabot[bot]
f82ee9d1dc
build(deps): bump golang.org/x/net from 0.8.0 to 0.17.0
Bumps [golang.org/x/net](https://github.com/golang/net) from 0.8.0 to 0.17.0.
- [Commits](https://github.com/golang/net/compare/v0.8.0...v0.17.0)

---
updated-dependencies:
- dependency-name: golang.org/x/net
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-10-11 23:39:51 +00:00
bogo
a53ee9eed1
replace "endpoint" with "path"
Co-authored-by: Simon Pasquier <spasquie@redhat.com>
2023-10-07 10:40:52 +08:00
Jason Booth
7ba3c13bb6 Cut release v0.11.2 2023-09-08 13:26:38 -04:00
Kubernetes Prow Robot
891c52fe00
Merge pull request #606 from jaybooth4/upversion
Update prometheus-adapter go patch version to 1.20.7
2023-09-08 10:12:15 -07:00
bogo
e772844ed8
normalize import order 2023-09-04 11:01:25 +08:00
Jason Booth
6a1ba321da Update prometheus-adapter go patch versions
Upgrade to 1.20.7 to address multiple CVE security findings
    CVE-2023-29404
    CVE-2023-29402
    CVE-2023-29405
    CVE-2023-29403
    CVE-2023-39533
    CVE-2023-29409
    CVE-2023-29406
2023-08-30 13:53:44 -04:00
bogo
0032610ace change the latency metric and dependency inject prometheus registry 2023-08-29 17:19:47 +08:00
bogo
fda3dad49b
Merge pull request #1 from kubernetes-sigs/master
update my dev branch
2023-08-29 15:32:10 +08:00
Damien Grisonnet
4cc5de93cb
Merge pull request #604 from dgrisonnet/cut-release-0.11.1
Cut release v0.11.1
2023-08-24 15:30:25 +02:00
Damien Grisonnet
a4100f047a Cut release v0.11.1
Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
2023-08-24 15:13:50 +02:00
Damien Grisonnet
cb883fb789
Merge pull request #601 from dgrisonnet/fix-multiarch
Fix multiarch image build
2023-08-22 18:18:47 +02:00
Damien Grisonnet
198c469805 Fix multiarch image build
Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
2023-08-22 18:01:46 +02:00
bogo
7cf3ac5d90
Update metric buckets 2023-08-17 20:57:47 +08:00
bogo
966ef227fe Fix metric unregistered 2023-08-17 20:23:01 +08:00
Kubernetes Prow Robot
74ba84b76e
Merge pull request #592 from kubernetes-sigs/dependabot/go_modules/google.golang.org/grpc-1.53.0
build(deps): bump google.golang.org/grpc from 1.40.0 to 1.53.0
2023-07-27 05:56:09 -07:00
Damien Grisonnet
36fbcc78f1
Merge pull request #596 from dgrisonnet/release-0.11.0
Cut release 0.11.0
2023-07-25 14:23:33 +02:00
Damien Grisonnet
0a6e74a5b3 *: cut release 0.11.0
Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
2023-07-25 14:06:46 +02:00
dependabot[bot]
147dacee4a
build(deps): bump google.golang.org/grpc from 1.40.0 to 1.53.0
Bumps [google.golang.org/grpc](https://github.com/grpc/grpc-go) from 1.40.0 to 1.53.0.
- [Release notes](https://github.com/grpc/grpc-go/releases)
- [Commits](https://github.com/grpc/grpc-go/compare/v1.40.0...v1.53.0)

---
updated-dependencies:
- dependency-name: google.golang.org/grpc
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-07-25 09:38:23 +00:00
Damien Grisonnet
f733e2f74d
Merge pull request #586 from dgrisonnet/k8s-1.27
*: bump go to 1.20 and k8s deps to 0.27.2
2023-07-25 11:37:17 +02:00
Damien Grisonnet
b50333c035 cmd/adapter: add support for openapi v3
Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
2023-06-22 13:52:06 +02:00
Damien Grisonnet
f69aae4c78
Merge pull request #587 from dgrisonnet/update-lint
Update golangci-lint to 1.53.2
2023-06-20 18:42:42 +02:00
Damien Grisonnet
8579be6c7b manifests: remove deprecated klog flag
Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
2023-06-20 18:39:51 +02:00
Damien Grisonnet
e69388346f *: bump go to 1.20 and k8s deps to 0.27.2
Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
2023-06-20 18:39:50 +02:00
Damien Grisonnet
86efb37019 Update golangci-lint to 1.53.2
Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
2023-06-13 15:02:46 +02:00
Kubernetes Prow Robot
c8caa11da1
Merge pull request #583 from brcorey/master
fix: use dl.k8s.io, not kubernetes-release bucket
2023-05-17 22:30:33 -07:00
Corey
b3a3d97596 fix: use dl.k8s.io, not kubernetes-release bucket
Signed-off-by: Corey <brant742@gmail.com>
2023-05-11 20:55:13 +00:00
Kubernetes Prow Robot
27eb607509
Merge pull request #565 from kubernetes-sigs/dependabot/go_modules/golang.org/x/net-0.7.0
build(deps): bump golang.org/x/net from 0.4.0 to 0.7.0
2023-03-16 07:15:22 -07:00
dependabot[bot]
d58cdcee93
build(deps): bump golang.org/x/net from 0.4.0 to 0.7.0
Bumps [golang.org/x/net](https://github.com/golang/net) from 0.4.0 to 0.7.0.
- [Release notes](https://github.com/golang/net/releases)
- [Commits](https://github.com/golang/net/compare/v0.4.0...v0.7.0)

---
updated-dependencies:
- dependency-name: golang.org/x/net
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-03-09 19:18:02 +00:00
Kubernetes Prow Robot
aab718746b
Merge pull request #569 from kubernetes-sigs/dependabot/go_modules/golang.org/x/crypto-0.1.0
build(deps): bump golang.org/x/crypto from 0.0.0-20220214200702-86341886e292 to 0.1.0
2023-03-09 11:13:52 -08:00
dependabot[bot]
8528f29516
build(deps): bump golang.org/x/crypto
Bumps [golang.org/x/crypto](https://github.com/golang/crypto) from 0.0.0-20220214200702-86341886e292 to 0.1.0.
- [Release notes](https://github.com/golang/crypto/releases)
- [Commits](https://github.com/golang/crypto/commits/v0.1.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-03-07 01:39:44 +00:00
Kubernetes Prow Robot
9a1ffb7b17
Merge pull request #559 from asherf/docs-yaml
Fix yaml in sample config & docs
2023-01-26 02:54:25 -08:00
Asher Foa
6a7f2b5ce1 Fix yaml in sample config & docs 2023-01-25 12:31:52 -05:00
Kubernetes Prow Robot
f607905cf6
Merge pull request #539 from olivierlemasle/e2e
Add initial e2e tests
2023-01-19 03:30:34 -08:00
Olivier Lemasle
7bdb7f14b9 manifests: use node_ metrics 2023-01-19 12:20:42 +01:00
Olivier Lemasle
1145dbfe93 Add initial e2e tests 2023-01-19 12:20:42 +01:00
Kubernetes Prow Robot
307795482f
Merge pull request #553 from gburton1/patch-golang-1.18.9
Patch upgrade of Golang to 1.18.9
2023-01-02 03:09:31 -08:00
Greg Burton
d341e8f67b Patch upgrade of Golang to 1.18.9
Signed-off-by: Greg Burton <9094087+gburton1@users.noreply.github.com>
2022-12-27 16:15:18 -08:00
Kubernetes Prow Robot
b03cc3e7c8
Merge pull request #551 from olivierlemasle/update-golang.org-x-net
Bump golang.org/x/net to v0.4.0 for GO-2022-1144
2022-12-16 08:16:18 -08:00
Kubernetes Prow Robot
b233597358
Merge pull request #518 from sillyfrog/master
Fix broken links in README.md
2022-12-13 11:19:34 -08:00
Olivier Lemasle
5d24df6353 Bump golang.org/x/net to v0.4.0 for GO-2022-1144 2022-12-12 12:11:21 +01:00
Sillyfrog
411763b355 Fix broken links in README.md 2022-12-11 07:09:03 +10:00
Kubernetes Prow Robot
062c42eccc
Merge pull request #550 from olivierlemasle/stylecheck
golangci-lint: Add stylecheck linter
2022-12-09 06:58:19 -08:00
Kubernetes Prow Robot
e18cc18201
Merge pull request #546 from olivierlemasle/log-flags
Refactor adding logging flags
2022-12-09 06:58:12 -08:00
Kubernetes Prow Robot
70604d2f54
Merge pull request #547 from olivierlemasle/GO-2022-0969
Fix GO-2022-0969
2022-12-09 06:52:13 -08:00
Olivier Lemasle
03cd31007e Add stylecheck linter 2022-12-08 23:12:26 +01:00
Olivier Lemasle
3d590269aa Update golang.org/x/net - Fix GO-2022-0969 2022-12-06 21:52:17 +01:00
Olivier Lemasle
09cc27e609 Refactor adding logging flags 2022-12-05 10:10:51 +01:00
Kubernetes Prow Robot
a5faf9f920
Merge pull request #544 from olivierlemasle/minversion
Set MinVersion: tls.VersionTLS12 in prometheus client's TLSClientConfig
2022-11-29 09:11:24 -08:00
Olivier Lemasle
dc0c0058d0 Set MinVersion: tls.VersionTLS12 in prometheus client's TLSClientConfig
Having no explicit MinVersion is reported by [gosec] as G402 (CWE-295):
`TLS MinVersion too low`

Using MinVersion: tls.VersionTLS12 because it's what client-go uses:
cf 1ac8d45935/transport/transport.go (L92)

That way, the Kubernetes API client and the Prometheus client in
prometheus-adapter use the same TLS config MinVersion.

[gosec]: https://github.com/securego/gosec
2022-11-29 17:24:50 +01:00
Kubernetes Prow Robot
8958457968
Merge pull request #540 from olivierlemasle/verify
Use golangci-lint
2022-11-29 08:15:24 -08:00
Olivier Lemasle
0ea1c1b8d3 Use Golangci-lint 2022-11-28 23:17:16 +01:00
Kubernetes Prow Robot
85e2d2052d
Merge pull request #542 from dgrisonnet/add-olivierlemasle
Add olivierlemasle as reviewer
2022-11-28 06:22:08 -08:00
Damien Grisonnet
d5c45b27b0 Add olivierlemasle as reviewer
Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
2022-11-28 15:10:40 +01:00
Kubernetes Prow Robot
fdfecc8d7f
Merge pull request #538 from olivierlemasle/fix-token-file
Fix segfault when using --prometheus-token-file
2022-11-22 04:12:13 -08:00
Olivier Lemasle
e740fee947 Fix segfault when using --prometheus-token-file 2022-11-09 18:06:38 +01:00
Kubernetes Prow Robot
268b2a8ec2
Merge pull request #531 from JoaoBraveCoding/426
Updates deploy/manifest to latest version in sync with kube-prom
2022-11-08 22:44:13 -08:00
Joao Marcal
372dfc9d3a
Updates README, docs/walkthrough and deploy/
Signed-off-by: JoaoBraveCoding <jmarcal@redhat.com>
2022-11-08 15:36:35 +00:00
Joao Marcal
3afe2c74bc
Updates deploy/manifest to latest version in sync with kube-prom
Issue https://github.com/kubernetes-sigs/prometheus-adapter/issues/426
2022-09-02 17:16:33 +01:00
Damien Grisonnet
dd75b55557
Merge pull request #529 from dgrisonnet/update-registry-location
Update registry location to registry.k8s.io
2022-08-31 14:54:01 +02:00
Kubernetes Prow Robot
4767a63a67
Merge pull request #526 from dgrisonnet/update-owners
Update OWNERS
2022-08-31 05:45:01 -07:00
Damien Grisonnet
204d5996a4 *: update registry location to registry.k8s.io
Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
2022-08-31 14:32:12 +02:00
Damien Grisonnet
d4d0a69514
Merge pull request #528 from dgrisonnet/fix-image
Fix image location in manifests
2022-08-31 14:28:38 +02:00
Damien Grisonnet
e5ad3d8903 deploy: fix image location
Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
2022-08-31 14:16:23 +02:00
Damien Grisonnet
465e4153f9 *: update OWNERS
Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
2022-08-23 17:29:11 +02:00
Damien Grisonnet
f23e67113a
Merge pull request #524 from dgrisonnet/recover-klog-flags
cmd/adapter: recover klog flags
2022-08-12 18:53:30 +02:00
Damien Grisonnet
303ac6fd45 cmd/adapter: recover klog flags
Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
2022-08-12 18:50:41 +02:00
Damien Grisonnet
7b4ba08b5d
Merge pull request #523 from dgrisonnet/cut-release-0.10
Cut release 0.10.0
2022-08-12 17:52:11 +02:00
Damien Grisonnet
56b57a0b0e VERSION: update to v0.10.0
Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
2022-08-12 17:45:28 +02:00
Kubernetes Prow Robot
dd85956fbf
Merge pull request #509 from ksauzz/feature/query-verb
Add --prometheus-verb to support POST requests to prometheus servers
2022-08-12 05:34:43 -07:00
Kazuhiro Suzuki
65abf73917 Update help about --prometheus-verb option 2022-08-12 12:45:22 +09:00
Damien Grisonnet
47ca16ef50
Merge pull request #521 from dgrisonnet/bump-k8s-deps-1.24
Update dependencies
2022-08-11 16:30:52 +02:00
Damien Grisonnet
cca107d97c *: support new MetricsGetter interface
Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
2022-08-11 15:04:45 +02:00
Damien Grisonnet
9321bf0162 zz_generated.openapi.go: regenerate
Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
2022-08-11 15:04:45 +02:00
Damien Grisonnet
d2ae4c1569 go.mod: bump golang and k8s deps to 0.24.3
Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
2022-08-11 15:04:43 +02:00
Kubernetes Prow Robot
c6e518beac
Merge pull request #491 from Ruwan-Ranganath/master
Update README.md with Helm-3 Command
2022-07-07 07:11:34 -07:00
Kubernetes Prow Robot
508b82b712
Merge pull request #494 from grzesuav/patch-2
Change apiregistration.k8s.io to v1
2022-07-07 07:07:34 -07:00
Kazuhiro Suzuki
a8742cff28 Add --prometheus-verb to support POST requests to prometheus servers 2022-06-28 18:23:52 +09:00
Kubernetes Prow Robot
9008b12a01
Merge pull request #498 from lokichoggio/master
fix: close file
2022-04-27 07:38:11 -07:00
lokichoggio
df3080de31
fix: close file 2022-04-11 17:54:51 +08:00
Grzegorz Głąb
00920756a4
Change apiregistration.k8s.io to v1 2022-03-23 20:48:46 +01:00
Ruwan Ranganath
e85e426ee0
Update README.md 2022-03-10 14:38:05 +05:30
Kubernetes Prow Robot
bf33cafefc
Merge pull request #482 from peizhouyu/Validate_OWNERS_files
Validate OWNERS files
2022-01-26 01:56:26 -08:00
peizhouyu
0aaf002fbc Validate OWNERS files 2022-01-25 11:21:29 +08:00
Kubernetes Prow Robot
2cc6362964
Merge pull request #476 from dims/drop-unused-alias-in-owners-aliases
Drop unused alias in OWNERS_ALIASES
2022-01-10 08:25:13 -08:00
Davanum Srinivas
8441ee2f74
Drop unused alias in OWNERS_ALIASES
Signed-off-by: Davanum Srinivas <davanum@gmail.com>
2021-12-24 17:03:30 -05:00
Kubernetes Prow Robot
c9e69613d3
Merge pull request #472 from spiffxp/use-k8s-infra-for-gcb-image
images: use k8s-staging-test-infra/gcb-docker-gcloud
2021-12-01 00:09:17 -08:00
Aaron Crickenberger
b877e9d1bb images: use k8s-staging-test-infra/gcb-docker-gcloud 2021-11-30 13:04:55 -08:00
Kubernetes Prow Robot
bd568beea0
Merge pull request #461 from dgrisonnet/version-v0.9.1
*: merge changes from v0.9.1
2021-11-09 08:39:47 -08:00
Kubernetes Prow Robot
57a6fda6b1
Merge pull request #465 from mbutkereit/typo-sample-config
Add s to metricQuery
2021-11-09 06:23:47 -08:00
mbutkereit
6720d67d3a Add s to metricQuery 2021-10-29 08:13:50 +02:00
Damien Grisonnet
4f58885c9a *: merge changes from v0.9.1
Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
2021-10-15 18:30:08 +02:00
Kubernetes Prow Robot
3206c65b47
Merge pull request #455 from leoskyrocker/master
Fix external metrics provider not respecting metrics-max-age
2021-10-06 08:12:34 -07:00
Leo Lei
bb4722e38b Fix external metrics provider not respecting metrics-max-age 2021-09-24 18:42:24 +08:00
Kubernetes Prow Robot
dd107a714b
Merge pull request #454 from spiffxp/follow-k8sio-default-branch
docs: follow kubernetes/k8s.io branch rename:
2021-09-16 02:19:45 -07:00
Aaron Crickenberger
d76d3eaa49 docs: follow kubernetes/k8s.io branch rename: 2021-09-15 15:32:26 -07:00
Kubernetes Prow Robot
12309c9d1d
Merge pull request #438 from fpetkovski/bug-template
Add bug template
2021-09-13 03:40:07 -07:00
fpetkovski
3288fb9d41 Add collapsible blocks 2021-08-23 10:44:30 +02:00
Kubernetes Prow Robot
56df87890c
Merge pull request #447 from dgrisonnet/gcr.k8s.io
README: improve gcr.k8s.io instructions
2021-08-18 05:18:09 -07:00
Kubernetes Prow Robot
ae458c4464
Merge pull request #448 from aw1cks-forks/master
v0.9.0: Bump version file to reflect new release
2021-08-18 01:32:08 -07:00
Alex Wicks
1ef79d0a86 Bump VERSION file to reflect latest release 2021-08-17 17:36:07 +01:00
Damien Grisonnet
7040f70905 README: improve gcr.k8s.io instructions
Images hosted on gcr.k8s.io aren't browsable by users, so linking to
the website results in a confusing 403 HTTP error.

Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
2021-08-17 17:54:25 +02:00
Kubernetes Prow Robot
0a9c781e5c
Merge pull request #442 from leoskyrocker/master
Update documentation to include metrics-max-age
2021-08-16 02:35:49 -07:00
Leo Lei
11ee7ee7e1 Fix typo 2021-08-12 21:58:33 +08:00
Kubernetes Prow Robot
0f60f49639
Merge pull request #444 from dgrisonnet/context
Propagate metric providers context
2021-08-11 08:44:46 -07:00
Damien Grisonnet
8b85c68c9e pkg: propagate metric providers context
In custom-metrics-apiserver v1.22.0, contexts were added to the metric
providers. We can benefit from that by propagating the context given to
the provider down to the requests.

Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
2021-08-11 17:04:01 +02:00
Kubernetes Prow Robot
4264c97f7b
Merge pull request #443 from dgrisonnet/bump-k8s-deps-1.22
Update golang dependencies
2021-08-11 05:28:46 -07:00
Damien Grisonnet
4eb6c313a1 go.mod: update custom-metrics-apiserver to v1.22.0
Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
2021-08-11 13:56:45 +02:00
Damien Grisonnet
cc5d3b8ed2 go.mod: update dependencies
* Bump Kubernetes dependencies to v0.22.0
* Bump ginkgo to v1.16.4
* Bump gomega to v1.15.0
* Bump cobra to v1.2.1

Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
2021-08-11 13:35:42 +02:00
Leo Lei
0a2c697e0b Update documentation to include metrics-max-age 2021-08-11 13:28:50 +08:00
Kubernetes Prow Robot
c6f774e28a
Merge pull request #440 from dgrisonnet/remove-travis-deploy
Remove unused travis deploy file
2021-08-10 02:11:18 -07:00
Damien Grisonnet
20a5b7a80d chore: remove unused travis deploy file
Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
2021-08-10 10:58:32 +02:00
fpetkovski
ac814833e1 Add bug template 2021-07-27 13:50:42 +02:00
Kubernetes Prow Robot
eef6b8fef1
Merge pull request #436 from arajkumar/openapi-for-external-and-custom-metrics
fix: add openapi spec for custom and external metrics types
2021-07-19 07:36:52 -07:00
Arunprasad Rajkumar
6cea5b88ca
fix: add openapi spec for custom and external metrics types
Signed-off-by: Arunprasad Rajkumar <arajkuma@redhat.com>
2021-07-19 19:27:43 +05:30
Kubernetes Prow Robot
97236f92ed
Merge pull request #432 from discordianfish/prometheus-request-headers
Support setting headers on requests to Prometheus
2021-07-19 05:34:51 -07:00
Johannes 'fish' Ziemke
d84340cc85 Support setting headers on requests to Prometheus 2021-07-17 14:44:35 +02:00
Damien Grisonnet
3fde77674e
Merge pull request #431 from dgrisonnet/neg-resource-metrics
Prevent prometheus-adapter from returning negative resource metrics
2021-07-16 16:41:09 +02:00
Kubernetes Prow Robot
95995bcf4b
Merge pull request #434 from fpetkovski/improve-docs
Document image registries
2021-07-15 09:08:48 -07:00
Filip Petkovski
5cf9dc3427
Apply suggestions from code review
Co-authored-by: Damien Grisonnet <damien.grisonnet@epita.fr>
2021-07-15 17:39:19 +02:00
Kubernetes Prow Robot
71ab6c4d90
Merge pull request #435 from arajkumar/fix-incorrect-type-used-for-swagger
fix: incorrect type used for openapi spec
2021-07-15 08:32:47 -07:00
Arunprasad Rajkumar
aed49ff54f
fix: incorrect type used for openapi spec
Prior to this fix, openapi spec for prometheus-adapter apiextension was based on the type "k8s.io/sample-apiserver/pkg/apiserver" which is incorrect. Due to the incorrect type, `kubectl explain podmetrics` (or nodemetrics) wasn't showing any doc for any resources from metrics.k8s.io/v1beta1.

This changeset fixes the problem by using the right type (sigs.k8s.io/metrics-server/pkg/api) for the openapi generation.

This also helped to remove the sample-apiserver dependency from
prometheus-adapter.

Signed-off-by: Arunprasad Rajkumar <arajkuma@redhat.com>
2021-07-15 19:39:59 +05:30
fpetkovski
134774884c Document image registries
Signed-off-by: fpetkovski <filip.petkovsky@gmail.com>
2021-07-15 09:18:13 +02:00
Damien Grisonnet
0b3ac78d19 pkg/resourceprovider: guard from negative metrics
When serving the resource metrics API, prometheus-adapter may return
negative values for pods/nodes memory and CPU usage. This happens
because Prometheus sees counter resets which results in Prometheus
interpolating data incorrectly to avoid the counter value going down.
To prevent that, we need to add some guards in prometheus-adapter to
replace the negative value by zero whenever it detects one.

Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
2021-07-13 12:19:02 +02:00
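The guard this commit describes is a simple clamp: any negative usage value produced by interpolation around a Prometheus counter reset is replaced with zero. A sketch of the idea (the helper name is hypothetical, not the adapter's actual code):

```go
package main

import "fmt"

// clampNegative replaces negative samples with 0 so that interpolation
// artifacts around Prometheus counter resets never surface as negative
// pod/node CPU or memory usage in the resource metrics API.
func clampNegative(v float64) float64 {
	if v < 0 {
		return 0
	}
	return v
}

func main() {
	fmt.Println(clampNegative(-0.25), clampNegative(1.5))
}
```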
Damien Grisonnet
93450fc29f
Merge pull request #424 from dgrisonnet/cloudbuild-timeout
Increase cloudbuild timeout to 1h
2021-07-06 17:01:57 +02:00
Damien Grisonnet
c8ee46b6b4
Merge pull request #423 from dgrisonnet/fix-push-multi-arch
Fix push-multi-arch image deployment
2021-07-06 17:01:36 +02:00
Damien Grisonnet
4a22d18a5d cloudbuild: increase timeout to 1h
Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
2021-07-06 16:52:52 +02:00
Damien Grisonnet
9fd8918914 image: fix push-multi-arch image deployment
Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
2021-07-06 16:51:17 +02:00
Damien Grisonnet
731e852494
Merge pull request #420 from dgrisonnet/fix-prow-deploy
Stop populating IMAGE env variable
2021-07-06 15:18:15 +02:00
Kubernetes Prow Robot
2dbb46f158
Merge pull request #421 from ashishranjan1457/issue-412-ashishranjan1457
Fix external rule tag in documentation
2021-07-02 08:06:13 -07:00
Ashish Ranjan
4256683587
Fix the tag for external rules 2021-07-02 16:23:28 +05:30
Ashish Ranjan
0ceb09085c
Fix external rule tag in documentation
Replaced external: by externalRules: in documentation for external rules
2021-07-02 12:17:59 +05:30
Damien Grisonnet
670b3def30 Makefile: stop populating IMAGE env variable
The IMAGE env variable is used by prow when building images, so it was
overriding what we expected to be the `prometheus-adapter` image with
the container image used to build the prometheus-adapter image.

Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
2021-07-01 15:18:12 +02:00
Kubernetes Prow Robot
7cd63baccf
Merge pull request #418 from dgrisonnet/remove-travis
Remove travis in favor of prow.k8s.io
2021-07-01 04:29:54 -07:00
Kubernetes Prow Robot
89425b72cc
Merge pull request #419 from dgrisonnet/default-gcr
Default images to the official k8s.gcr.io and gcr.io registries
2021-07-01 04:25:54 -07:00
Kubernetes Prow Robot
5e59822274
Merge pull request #417 from dgrisonnet/release
RELEASE.md: update with gcr promotion guidelines
2021-07-01 04:15:54 -07:00
Damien Grisonnet
467f24d45c image: default to gcr.io instead of hub.docker.com
Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
2021-06-30 19:27:04 +02:00
Damien Grisonnet
231446751c deploy: remove travis in favor of prow.k8s.io
Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
2021-06-30 19:19:17 +02:00
Damien Grisonnet
a057c04b09 RELEASE.md: update with gcr promotion guidelines
Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
2021-06-30 18:54:15 +02:00
Kubernetes Prow Robot
ae1765153a
Merge pull request #416 from dgrisonnet/cloudbuild
Add cloudbuild.yaml
2021-06-30 06:31:03 -07:00
Kubernetes Prow Robot
e4d11e44e3
Merge pull request #415 from dgrisonnet/container-push
Improve container push rules
2021-06-30 06:27:03 -07:00
Damien Grisonnet
06e41b486c deploy: add cloudbuild.yaml
Add cloudbuild.yaml file required by the image pushing job.

Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
2021-06-30 15:07:54 +02:00
Damien Grisonnet
046b970edb deploy: improve container push rules
Improve and cleanup container push rules to prepare for the move to the
official gcr.k8s.io registry.
As part of the improvements, I replaced the non-cross-platform busybox
image with gcr.io/distroless/static:latest, which is platform-agnostic.

Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
2021-06-30 15:03:49 +02:00
Kubernetes Prow Robot
70418fdbf8
Merge pull request #410 from dgrisonnet/fix-pod-informer
Fix pod lister by running the pod informer
2021-06-07 07:54:40 -07:00
Damien Grisonnet
91b9b7afc2 .travis.yml: update go version to 1.16
Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
2021-06-07 16:17:37 +02:00
Damien Grisonnet
152cf3bbaa cmd: run pod informer
Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
2021-06-07 16:10:13 +02:00
Kubernetes Prow Robot
407217728b
Merge pull request #407 from dgrisonnet/localvendor
Remove localvendor directory
2021-06-03 08:23:38 -07:00
Kubernetes Prow Robot
82a71ebb6f
Merge pull request #408 from paulfantom/version-file
*: add version file
2021-06-03 07:07:38 -07:00
paulfantom
aae4ef6b51
*: add version file
Signed-off-by: paulfantom <pawel@krupa.net.pl>
2021-06-03 16:02:21 +02:00
Kubernetes Prow Robot
89b6c7e31c
Merge pull request #406 from dgrisonnet/fix-build
Makefile: consolidate docker-build
2021-06-03 06:49:38 -07:00
Kubernetes Prow Robot
a7ff3cb9c2
Merge pull request #405 from dgrisonnet/filter-pods
Filter non-running pods
2021-06-03 06:45:38 -07:00
Damien Grisonnet
ef7bb58ff2 go.mod: remove localvendor
The latest version of metrics-server, v0.5.0, doesn't seem to require
having kubelet dependencies locally anymore. Thus, we can safely remove
the localvendor directory that was used to store them.

Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
2021-06-02 17:26:03 +02:00
Damien Grisonnet
03e8eb8ddb Makefile: consolidate docker-build
As part of this commit, I upgraded the golang image used for building to
1.16 and consolidated how the docker-build rule was working. Previously,
it was failing in master's CI because the go modules were not downloaded
in the build image. To improve that, I replaced the combination of
docker run and docker build by a multi-stage Dockerfile responsible for
building the adapter and running it.

In addition to that, I removed the `_output` directory completely as it
wasn't really meaningful to have it anymore. I also removed the
`build-local-image` rule as it was a duplicate of the `docker-build`
rule, with the only difference of using a scratch base image.

Also, since all the base images that we are using by default are based
on busybox, I changed the UID used in the image to 65534, which
corresponds to the nobody user in busybox.

Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
2021-06-02 17:12:27 +02:00
Damien Grisonnet
cf45915a4a pkg: only account for Pods in running state
Create the pod lister based on a filtered informer factory that will
filter non-running pods so that prometheus-adapter doesn't expect metrics
from them.

Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
2021-06-02 11:38:09 +02:00
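The real change wires the filtering into a filtered shared informer factory (client-go), but the predicate itself is simple: only pods in the Running phase are listed. A self-contained sketch of that predicate, with a pared-down stand-in for corev1.Pod (types and names here are illustrative assumptions):

```go
package main

import "fmt"

// Pod is a pared-down stand-in for corev1.Pod; the commit above filters
// at the informer level so non-running pods never reach the lister.
type Pod struct {
	Name  string
	Phase string
}

// runningOnly keeps only pods in the Running phase, so the adapter never
// expects metrics from pending, succeeded, or failed pods.
func runningOnly(pods []Pod) []Pod {
	var out []Pod
	for _, p := range pods {
		if p.Phase == "Running" {
			out = append(out, p)
		}
	}
	return out
}

func main() {
	pods := []Pod{{"a", "Running"}, {"b", "Succeeded"}, {"c", "Pending"}}
	fmt.Println(len(runningOnly(pods)))
}
```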
Kubernetes Prow Robot
815fa20931
Merge pull request #404 from dgrisonnet/module
Move prometheus-adapter to sigs.k8s.io golang package
2021-06-02 02:30:06 -07:00
Damien Grisonnet
9dfbca09ca go.mod: move to sigs.k8s.io golang package
Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
2021-06-01 17:35:45 +02:00
Kubernetes Prow Robot
215cb0c292
Merge pull request #395 from dgrisonnet/panic-ms
Prevent metrics-server panics on GetContainerMetrics and GetNodeMetrics
2021-06-01 07:58:27 -07:00
Damien Grisonnet
76e61d47f6 pkg/resourceprovider: prevent metrics-server panic
The code that we are reusing from metrics-server to call
GetContainerMetrics and GetNodeMetrics requires that both functions
return slices whose lengths match the number of pods/nodes given as
arguments. In some cases, prometheus-adapter was returning nil, which
caused panics in metrics-server code.
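The shape of that contract can be sketched like this (illustrative names, not the adapter's actual resourceprovider code): the function always returns a slice with one entry per requested pod, so a caller that indexes `result[i]` never hits a nil slice.

```go
package main

import "fmt"

// PodMetrics is a hypothetical stand-in for the metrics-server result type.
type PodMetrics struct {
	Pod string
	CPU int64
}

// getPodMetrics always returns a slice with one entry per requested pod,
// even when no data is available, matching the metrics-server contract.
func getPodMetrics(pods []string, data map[string]int64) []PodMetrics {
	res := make([]PodMetrics, len(pods)) // never nil, len == len(pods)
	for i, p := range pods {
		res[i] = PodMetrics{Pod: p, CPU: data[p]} // zero value if missing
	}
	return res
}

func main() {
	res := getPodMetrics([]string{"a", "b"}, map[string]int64{"a": 100})
	fmt.Println(len(res)) // 2
}
```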

Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
2021-06-01 16:32:58 +02:00
Kubernetes Prow Robot
09334d3a6d
Merge pull request #399 from dgrisonnet/deps
go.mod: bump dependencies
2021-06-01 07:22:27 -07:00
Damien Grisonnet
c67e8f5956 go.mod: bump dependencies
Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
2021-06-01 15:25:56 +02:00
Kubernetes Prow Robot
dd7a263002
Merge pull request #401 from dgrisonnet/vendor
Remove vendor directory
2021-06-01 06:22:27 -07:00
Damien Grisonnet
9148122308 chore: update gitignore
Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
2021-05-26 09:45:52 +02:00
Damien Grisonnet
39b782bce9 chore: remove vendor directory
Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
2021-05-26 09:41:03 +02:00
Kubernetes Prow Robot
f3aafa7c8f
Merge pull request #400 from dgrisonnet/openapi
hack/tools: remove openapi-gen install in vendor
2021-05-26 00:17:21 -07:00
Damien Grisonnet
7bc0f0473d hack/tools: remove openapi-gen install in vendor
Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
2021-05-25 19:13:16 +02:00
Kubernetes Prow Robot
c893b1140c
Merge pull request #380 from carsonoid/issue-324-carsonoid
Allow metrics to be defined as `namespaced: false`
2021-05-12 06:12:17 -07:00
Carson Anderson
6c1d85ccf9 Split ExternalMetricsQuery and MetricsQuery funcs 2021-05-11 14:49:11 -06:00
Carson Anderson
c0ae5d6dd4 fix tests 2021-04-23 11:39:47 -06:00
Carson Anderson
fa5f8cd742 Add requested fixes 2021-04-23 11:27:38 -06:00
Carson Anderson
510c3724ce Add docs, tests, and move namespaced to metricsQuery 2021-04-08 11:07:50 -06:00
Kubernetes Prow Robot
4b40c1796d
Merge pull request #391 from andrewpollack/fix-docu-links-deploy
Updating deploy/README.md to fix links
2021-04-06 01:53:35 -07:00
andrewpkq
f4d5bc9045 Updating deploy/README.md to fix links 2021-04-03 18:55:16 -07:00
Kubernetes Prow Robot
b67ac3e747
Merge pull request #389 from dgrisonnet/signal-handler
Add signal handler
2021-03-29 08:40:46 -07:00
Damien Grisonnet
9db8d2f731 cmd/adapter: add signal handler
Add a signal handler stopping the adapter if it receives a SIGINT or
SIGTERM signal. This prevents the prometheus-adapter pod from getting stuck
in the "Terminating" state.

Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
2021-03-29 17:24:49 +02:00
Kubernetes Prow Robot
c30c69de09
Merge pull request #387 from dgrisonnet/remove-travis-verify
.travis.yml: remove verify job
2021-03-25 10:15:36 -07:00
Kubernetes Prow Robot
7151cd83b9
Merge pull request #382 from dgrisonnet/test
Makefile: include tests from cmd directory
2021-03-25 10:15:29 -07:00
Kubernetes Prow Robot
c41a99a529
Merge pull request #386 from aackerman/aackerman/fix-metric-labels
Fix documented metrics labels to work for k8s 1.16+
2021-03-23 11:07:36 -07:00
aackerman
0e105eeeb1 Update tests to use container and pod labels instead of container_name and pod_name 2021-03-23 12:41:44 -05:00
aackerman
54cd969594 Fixup default config generation to use pod and container instead of pod_name and container_name; those labels were removed in Kubernetes 1.16 2021-03-23 10:59:40 -05:00
Damien Grisonnet
df347a1427 .travis.yml: remove verify job
The verify job has been moved to prow, so we can remove the one from
travis.

Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
2021-03-23 16:52:30 +01:00
aackerman
dce6abfba9 Fix documented metrics labels to work for k8s 1.16+ 2021-03-23 09:01:56 -05:00
Kubernetes Prow Robot
2c1be65011
Merge pull request #384 from lilic/remove-myself
OWNERS: Remove myself from the OWNERS
2021-03-19 08:38:35 -07:00
Lili Cosic
9231ff996d OWNERS: Remove myself from the OWNERS 2021-03-19 16:33:24 +01:00
Kubernetes Prow Robot
976c38aee4
Merge pull request #379 from TheKangaroo/fix/walkthrough
fix walkthrough example
2021-03-10 08:01:14 -08:00
Damien Grisonnet
737c8232e1 Makefile: include tests from cmd directory
Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
2021-03-10 16:53:26 +01:00
Kubernetes Prow Robot
ef33937e43
Merge pull request #377 from dgrisonnet/update-owners
Add dgrisonnet to the OWNERS
2021-03-09 08:05:00 -08:00
Carson Anderson
3ae38c7417 Allow metrics to be defined as namespaced: false
When set to false, no namespace label will be set by the adapter based on the namespace
portion of the URL in the request path.

This allows individual consumers to set namespace independent of the source kubernetes resource.

---

Example:

Given an adapter config like this:

```
    externalRules:
    - seriesQuery: 'nsq_topic_depth'
      resources:
        namespaced: false
```

An HPA could target a different namespace by setting it in the selector:

```
  - type: External
    external:
      metric:
        name: nsq_topic_depth
        selector:
          labelSelector:
            topic: my-topic
            namespace: nsq
```

This is useful for scaling on metrics from services that run in a different namespace than the source resource.
2021-03-05 15:25:37 -07:00
Till Adam
3bd8b54ad5
fix walkthrough example
The service created by this command did not match the port name used
later on in the walkthrough. Therefore, it is defined explicitly here.
2021-03-05 13:06:40 +01:00
Damien Grisonnet
dcf0ece4ea Add dgrisonnet to the OWNERS
Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
2021-03-02 12:02:26 +01:00
Kubernetes Prow Robot
7e11fe30ee
Merge pull request #372 from paulfantom/json
pkg/config: allow configuration to be read from json schema
2021-03-01 05:22:46 -08:00
Kubernetes Prow Robot
019a27f200
Merge pull request #354 from iyalang/feature/add-cert-auth
add TLS auth for accessing Prometheus
2021-03-01 05:18:42 -08:00
Kubernetes Prow Robot
e087f72404
Merge pull request #319 from kehao95/update-walkthrough
Update Prometheus Operator Doc location
2021-03-01 01:18:39 -08:00
Kubernetes Prow Robot
30f6e2fd07
Merge pull request #374 from paulfantom/imports
*: move all imports to github.com/kubernetes-sigs/prometheus-adapter
2021-02-23 01:22:03 -08:00
Iya Lang
808bd76c5a add TLS auth for accessing Prometheus 2021-02-22 21:39:58 +01:00
paulfantom
cd55a67b89
*: move all imports to github.com/kubernetes-sigs/prometheus-adapter
Signed-off-by: paulfantom <pawel@krupa.net.pl>
2021-02-22 15:49:03 +01:00
paulfantom
1cc7bed020
pkg/config: allow configuration to be read from json schema 2021-02-22 10:17:27 +01:00
Nikhita Raghunath
dd841a6e5e Fix SECURIY_CONTACTS filename 2021-01-30 11:01:32 +05:30
Nikhita Raghunath
079f67825f Fix OWNERS, OWNERS_ALIASES file names 2021-01-30 10:54:21 +05:30
Sergiusz Urbaniak
12d1fb4a72
Merge pull request #362 from dgrisonnet/fix-auth-webhook-panic
Fix authorizer webhook panic
2021-01-22 10:28:16 +01:00
Damien Grisonnet
78eec11706 vendor: revendor
Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
2021-01-21 17:40:42 +01:00
Damien Grisonnet
61a30408f6 go.mod: bump apiserver to fix auth webhook panics
Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
2021-01-21 17:39:56 +01:00
Damien Grisonnet
b0423f39ac go.mod: bump kubernetes dependencies to v0.20.2
Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
2021-01-21 17:22:19 +01:00
Sergiusz Urbaniak
f604e07020
Merge pull request #359 from dgrisonnet/add-notice
Add NOTICE to comply with the CNCF rules
2021-01-21 16:53:18 +01:00
Damien Grisonnet
48e9f418fa Add NOTICE to comply with the CNCF rules
If (a) contributor(s) have not signed the CLA and could not be reached,
a NOTICE file should be added referencing section 7 of the CLA with a
list of the developers who could not be reached.

Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
2021-01-18 10:09:16 +01:00
Sergiusz Urbaniak
f33fc94229
Merge pull request #348 from dgrisonnet/populate-selector
Populate metric selector for custom metrics
2020-12-15 15:54:52 +01:00
Sergiusz Urbaniak
aa77eca551
Merge pull request #352 from s-urbaniak/bump-1.20
bump to k8s 1.20, go 1.15
2020-12-14 13:26:04 +01:00
Sergiusz Urbaniak
96cdc4d143 Makefile: bump to Go 1.15 2020-12-14 12:54:22 +01:00
Sergiusz Urbaniak
147b5c8858
pkg/api/generated: regenerate 2020-12-14 12:43:59 +01:00
Sergiusz Urbaniak
9f0440be0f
vendor: revendor 2020-12-14 12:43:28 +01:00
Sergiusz Urbaniak
269295a414
go.mod: bump to k8s 1.20 2020-12-14 12:43:07 +01:00
Damien Grisonnet
76020f6618 pkg/custom-provider: populate metric selector
When querying custom-metrics, the metric label selectors weren't
populated in the resulting values.

Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
2020-12-10 16:14:22 +01:00
Sergiusz Urbaniak
be35274475
Merge pull request #344 from dgrisonnet/bump-metrics-server
Make NodeMetrics and PodMetrics APIs match K8s conventions
2020-12-03 14:37:19 +01:00
Damien Grisonnet
6d8a82f423 go.mod: bump metrics-server
Include a fix to the NodeMetrics and PodMetrics APIs to match k8s
conventions.
- NodeMetrics and PodMetrics now have labels as they can be filtered by
them
- Field selector is now applied to NodeMetrics and PodMetrics instead
of Nodes and Pods

Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
2020-12-02 18:28:56 +01:00
Hao Ke
43043ced4a
Update Prometheus Operator Doc location 2020-09-28 14:29:15 -04:00
4034 changed files with 6289 additions and 1291591 deletions

52
.github/ISSUE_TEMPLATE/bug_report.md vendored Normal file

@@ -0,0 +1,52 @@
---
name: Bug report
about: Report a bug encountered while running prometheus-adapter
title: ''
labels: kind/bug
assignees: ''
---
<!-- Please use this template while reporting a bug and provide as much info as possible. Not doing so may result in your bug not being addressed in a timely manner. Thanks!
If the matter is security related, please disclose it privately see https://github.com/kubernetes/kube-state-metrics/blob/master/SECURITY.md
-->
**What happened?**:
**What did you expect to happen?**:
**Please provide the prometheus-adapter config**:
<details open>
<summary>prometheus-adapter config</summary>
<!--- INSERT config HERE --->
</details>
**Please provide the HPA resource used for autoscaling**:
<details open>
<summary>HPA yaml</summary>
<!--- INSERT yaml HERE --->
</details>
**Please provide the HPA status**:
**Please provide the prometheus-adapter logs with -v=6 around the time the issue happened**:
<details open>
<summary>prometheus-adapter logs</summary>
<!--- INSERT logs HERE --->
</details>
**Anything else we need to know?**:
**Environment**:
- prometheus-adapter version:
- prometheus version:
- Kubernetes version (use `kubectl version`):
- Cloud provider or hardware configuration:
- Other info:

5
.gitignore vendored

@@ -1,4 +1,5 @@
*.swp
*~
_output
deploy/adapter
/vendor
/adapter
.e2e

39
.golangci.yml Normal file

@@ -0,0 +1,39 @@
run:
deadline: 5m
linters:
disable-all: true
enable:
- bodyclose
- dogsled
- dupl
- errcheck
- exportloopref
- gocritic
- gocyclo
- gofmt
- goimports
- gosec
- goprintffuncname
- gosimple
- govet
- ineffassign
- misspell
- nakedret
- nolintlint
- revive
- staticcheck
- stylecheck
- typecheck
- unconvert
- unused
- whitespace
linters-settings:
goimports:
local-prefixes: sigs.k8s.io/prometheus-adapter
revive:
rules:
- name: exported
arguments:
- disableStutteringCheck


@@ -1,9 +0,0 @@
#!/bin/bash
set -x
docker login -u "$DOCKER_USERNAME" -p "$DOCKER_PASSWORD"
if [[ -n $TRAVIS_TAG ]]; then
make push VERSION=${TRAVIS_TAG}
else
make push-amd64
fi


@@ -1,26 +0,0 @@
language: go
go:
- '1.13'
# blech, Travis downloads with capitals in DirectXMan12, which confuses go
go_import_path: github.com/directxman12/k8s-prometheus-adapter
script: make verify && git diff --exit-code
sudo: required
services:
- docker
env:
- GO111MODULE=on
deploy:
- provider: script
script: bash .travis-deploy.sh
on:
branch: master
- provider: script
script: bash .travis-deploy.sh
on:
tags: true

22
Dockerfile Normal file

@@ -0,0 +1,22 @@
ARG ARCH
ARG GO_VERSION
FROM golang:${GO_VERSION} as build
WORKDIR /go/src/sigs.k8s.io/prometheus-adapter
COPY go.mod .
COPY go.sum .
RUN go mod download
COPY pkg pkg
COPY cmd cmd
COPY Makefile Makefile
ARG ARCH
RUN make prometheus-adapter
FROM gcr.io/distroless/static:latest-$ARCH
COPY --from=build /go/src/sigs.k8s.io/prometheus-adapter/adapter /
USER 65534
ENTRYPOINT ["/adapter"]

160
Makefile

@@ -1,90 +1,118 @@
REGISTRY?=directxman12
IMAGE?=k8s-prometheus-adapter
REGISTRY?=gcr.io/k8s-staging-prometheus-adapter
IMAGE=prometheus-adapter
ARCH?=$(shell go env GOARCH)
ALL_ARCH=amd64 arm arm64 ppc64le s390x
ML_PLATFORMS=linux/amd64,linux/arm,linux/arm64,linux/ppc64le,linux/s390x
OUT_DIR?=$(PWD)/_output
GOPATH:=$(shell go env GOPATH)
OPENAPI_PATH=./vendor/k8s.io/kube-openapi
VERSION=$(shell cat VERSION)
TAG_PREFIX=v
TAG?=$(TAG_PREFIX)$(VERSION)
VERSION?=latest
GOIMAGE=golang:1.13
GO111MODULE=on
export GO111MODULE
GO_VERSION?=1.22.5
GOLANGCI_VERSION?=1.56.2
ifeq ($(ARCH),amd64)
BASEIMAGE?=busybox
endif
ifeq ($(ARCH),arm)
BASEIMAGE?=armhf/busybox
endif
ifeq ($(ARCH),arm64)
BASEIMAGE?=aarch64/busybox
endif
ifeq ($(ARCH),ppc64le)
BASEIMAGE?=ppc64le/busybox
endif
ifeq ($(ARCH),s390x)
BASEIMAGE?=s390x/busybox
endif
.PHONY: all
all: prometheus-adapter
.PHONY: all docker-build push-% push test verify-gofmt gofmt verify build-local-image
# Build
# -----
all: $(OUT_DIR)/$(ARCH)/adapter
SRC_DEPS=$(shell find pkg cmd -type f -name "*.go")
src_deps=$(shell find pkg cmd -type f -name "*.go")
$(OUT_DIR)/%/adapter: $(src_deps)
CGO_ENABLED=0 GOARCH=$* go build -tags netgo -o $(OUT_DIR)/$*/adapter github.com/directxman12/k8s-prometheus-adapter/cmd/adapter
prometheus-adapter: $(SRC_DEPS)
CGO_ENABLED=0 GOARCH=$(ARCH) go build sigs.k8s.io/prometheus-adapter/cmd/adapter
docker-build: $(OUT_DIR)/Dockerfile
docker run -it -v $(OUT_DIR):/build -v $(PWD):/go/src/github.com/directxman12/k8s-prometheus-adapter -e GOARCH=$(ARCH) $(GOIMAGE) /bin/bash -c "\
CGO_ENABLED=0 go build -tags netgo -o /build/$(ARCH)/adapter github.com/directxman12/k8s-prometheus-adapter/cmd/adapter"
.PHONY: container
container:
docker build -t $(REGISTRY)/$(IMAGE)-$(ARCH):$(TAG) --build-arg ARCH=$(ARCH) --build-arg GO_VERSION=$(GO_VERSION) .
docker build -t $(REGISTRY)/$(IMAGE)-$(ARCH):$(VERSION) --build-arg ARCH=$(ARCH) --build-arg BASEIMAGE=$(BASEIMAGE) $(OUT_DIR)
# Container push
# --------------
$(OUT_DIR)/Dockerfile: deploy/Dockerfile
mkdir -p $(OUT_DIR)
cp deploy/Dockerfile $(OUT_DIR)/Dockerfile
PUSH_ARCH_TARGETS=$(addprefix push-,$(ALL_ARCH))
build-local-image: $(OUT_DIR)/Dockerfile $(OUT_DIR)/$(ARCH)/adapter
docker build -t $(REGISTRY)/$(IMAGE)-$(ARCH):$(VERSION) --build-arg ARCH=$(ARCH) --build-arg BASEIMAGE=scratch $(OUT_DIR)
.PHONY: push
push: container
docker push $(REGISTRY)/$(IMAGE)-$(ARCH):$(TAG)
push-%:
$(MAKE) ARCH=$* docker-build
docker push $(REGISTRY)/$(IMAGE)-$*:$(VERSION)
push-all: $(PUSH_ARCH_TARGETS) push-multi-arch;
push: ./manifest-tool $(addprefix push-,$(ALL_ARCH))
./manifest-tool push from-args --platforms $(ML_PLATFORMS) --template $(REGISTRY)/$(IMAGE)-ARCH:$(VERSION) --target $(REGISTRY)/$(IMAGE):$(VERSION)
.PHONY: $(PUSH_ARCH_TARGETS)
$(PUSH_ARCH_TARGETS): push-%:
ARCH=$* $(MAKE) push
./manifest-tool:
curl -sSL https://github.com/estesp/manifest-tool/releases/download/v0.5.0/manifest-tool-linux-amd64 > manifest-tool
chmod +x manifest-tool
.PHONY: push-multi-arch
push-multi-arch: export DOCKER_CLI_EXPERIMENTAL = enabled
push-multi-arch:
docker manifest create --amend $(REGISTRY)/$(IMAGE):$(TAG) $(shell echo $(ALL_ARCH) | sed -e "s~[^ ]*~$(REGISTRY)/$(IMAGE)\-&:$(TAG)~g")
@for arch in $(ALL_ARCH); do docker manifest annotate --arch $${arch} $(REGISTRY)/$(IMAGE):$(TAG) $(REGISTRY)/$(IMAGE)-$${arch}:$(TAG); done
docker manifest push --purge $(REGISTRY)/$(IMAGE):$(TAG)
vendor:
go mod tidy
go mod vendor
# Test
# ----
.PHONY: test
test:
CGO_ENABLED=0 go test ./pkg/...
CGO_ENABLED=0 go test ./cmd/... ./pkg/...
verify-gofmt:
./hack/gofmt-all.sh -v
.PHONY: test-e2e
test-e2e:
./test/run-e2e-tests.sh
gofmt:
./hack/gofmt-all.sh
go-mod:
go mod tidy
go mod vendor
# Static analysis
# ---------------
.PHONY: verify
verify: verify-lint verify-deps verify-generated
.PHONY: update
update: update-lint update-generated
# Format and lint
# ---------------
HAS_GOLANGCI_VERSION:=$(shell $(GOPATH)/bin/golangci-lint version --format=short)
.PHONY: golangci
golangci:
ifneq ($(HAS_GOLANGCI_VERSION), $(GOLANGCI_VERSION))
curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s -- -b $(GOPATH)/bin v$(GOLANGCI_VERSION)
endif
.PHONY: verify-lint
verify-lint: golangci
$(GOPATH)/bin/golangci-lint run --modules-download-mode=readonly || (echo 'Run "make update-lint"' && exit 1)
.PHONY: update-lint
update-lint: golangci
$(GOPATH)/bin/golangci-lint run --fix --modules-download-mode=readonly
# Dependencies
# ------------
.PHONY: verify-deps
verify-deps:
go mod verify
go mod tidy
@git diff --exit-code -- go.mod go.sum
verify: verify-gofmt go-mod test
# Generation
# ----------
pkg/api/generated/openapi/zz_generated.openapi.go: go.mod go.sum
go run $(OPENAPI_PATH)/cmd/openapi-gen/openapi-gen.go --logtostderr \
-i k8s.io/metrics/pkg/apis/custom_metrics,k8s.io/metrics/pkg/apis/custom_metrics/v1beta1,k8s.io/metrics/pkg/apis/custom_metrics/v1beta2,k8s.io/metrics/pkg/apis/external_metrics,k8s.io/metrics/pkg/apis/external_metrics/v1beta1,k8s.io/metrics/pkg/apis/metrics,k8s.io/metrics/pkg/apis/metrics/v1beta1,k8s.io/apimachinery/pkg/apis/meta/v1,k8s.io/apimachinery/pkg/api/resource,k8s.io/apimachinery/pkg/version,k8s.io/api/core/v1 \
-h ./hack/boilerplate.go.txt \
-p ./pkg/api/generated/openapi \
-O zz_generated.openapi \
-o ./ \
-r /dev/null
generated_files=pkg/api/generated/openapi/zz_generated.openapi.go
.PHONY: verify-generated
verify-generated: update-generated
@git diff --exit-code -- $(generated_files)
.PHONY: update-generated
update-generated:
go install -mod=readonly k8s.io/kube-openapi/cmd/openapi-gen
$(GOPATH)/bin/openapi-gen --logtostderr \
--go-header-file ./hack/boilerplate.go.txt \
--output-pkg ./pkg/api/generated/openapi \
--output-file zz_generated.openapi.go \
--output-dir ./pkg/api/generated/openapi \
-r /dev/null \
"k8s.io/metrics/pkg/apis/custom_metrics" "k8s.io/metrics/pkg/apis/custom_metrics/v1beta1" "k8s.io/metrics/pkg/apis/custom_metrics/v1beta2" "k8s.io/metrics/pkg/apis/external_metrics" "k8s.io/metrics/pkg/apis/external_metrics/v1beta1" "k8s.io/metrics/pkg/apis/metrics" "k8s.io/metrics/pkg/apis/metrics/v1beta1" "k8s.io/apimachinery/pkg/apis/meta/v1" "k8s.io/apimachinery/pkg/api/resource" "k8s.io/apimachinery/pkg/version" "k8s.io/api/core/v1"

16
NOTICE Normal file

@@ -0,0 +1,16 @@
When donating the k8s-prometheus-adapter project to the CNCF, we could not
reach all the contributors to make them sign the CNCF CLA. As such, according
to the CNCF rules to donate a repository, we must add a NOTICE referencing
section 7 of the CLA with a list of developers who could not be reached.
`7. Should You wish to submit work that is not Your original creation, You may
submit it to the Foundation separately from any Contribution, identifying the
complete details of its source and of any license or other restriction
(including, but not limited to, related patents, trademarks, and license
agreements) of which you are personally aware, and conspicuously marking the
work as "Submitted on behalf of a third-party: [named here]".`
Submitted on behalf of a third-party: Andrew "thisisamurray" Murray
Submitted on behalf of a third-party: Duane "duane-ibm" D'Souza
Submitted on behalf of a third-party: John "john-delivuk" Delivuk
Submitted on behalf of a third-party: Richard "rrtaylor" Taylor


@@ -1,15 +1,17 @@
# See the OWNERS docs at https://go.k8s.io/owners
owners:
- s-urbaniak
- lilic
approvers:
- prometheus-adapter-approvers
- dgrisonnet
- logicalhan
- dashpole
reviewers:
- prometheus-adapter-approvers
- prometheus-adapter-reviewers
- dgrisonnet
- olivierlemasle
- logicalhan
- dashpole
emeritus_approvers:
- brancz
- directxman12
- lilic
- s-urbaniak


@@ -1,7 +0,0 @@
# See the OWNERS docs at https://go.k8s.io/owners#owners_aliases
aliases:
prometheus-adapter-approvers:
- s-urbaniak
- lilic
prometheus-adapter-reviewers: []


@@ -1,11 +1,7 @@
# Prometheus Adapter for Kubernetes Metrics APIs
[![Build Status](https://travis-ci.org/DirectXMan12/k8s-prometheus-adapter.svg?branch=master)](https://travis-ci.org/DirectXMan12/k8s-prometheus-adapter)
This repository contains an implementation of the Kubernetes
[resource metrics](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/resource-metrics-api.md),
[custom metrics](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/custom-metrics-api.md), and
[external metrics](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/external-metrics-api.md) APIs.
This repository contains an implementation of the Kubernetes Custom, Resource and External
[Metric APIs](https://github.com/kubernetes/metrics).
This adapter is therefore suitable for use with the autoscaling/v2 Horizontal Pod Autoscaler in Kubernetes 1.6+.
It can also replace the [metrics server](https://github.com/kubernetes-incubator/metrics-server) on clusters that already run Prometheus and collect the appropriate metrics.
@@ -23,11 +19,26 @@ If you're a helm user, a helm chart is listed on prometheus-community repository
To install it with the release name `my-release`, run this Helm command:
For Helm2
```console
$ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
$ helm repo update
$ helm install --name my-release prometheus-community/prometheus-adapter
```
For Helm3 ( as name is mandatory )
```console
$ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
$ helm repo update
$ helm install my-release prometheus-community/prometheus-adapter
```
Official images
---
All official images for releases after v0.8.4 are available in `registry.k8s.io/prometheus-adapter/prometheus-adapter:$VERSION`. The project also maintains a [staging registry](https://console.cloud.google.com/gcr/images/k8s-staging-prometheus-adapter/GLOBAL/) where images for each commit from the master branch are published. You can use this registry if you need to test a version from a specific commit, or if you need to deploy a patch while waiting for a new release.
Images for versions v0.8.4 and prior are only available in unofficial registries:
* https://quay.io/repository/coreos/k8s-prometheus-adapter-amd64
* https://hub.docker.com/r/directxman12/k8s-prometheus-adapter/
Configuration
-------------
@@ -38,7 +49,7 @@ will attempt to using [Kubernetes in-cluster
config](https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/#accessing-the-api-from-a-pod)
to connect to the cluster.
It takes the following addition arguments specific to configuring how the
It takes the following additional arguments specific to configuring how the
adapter talks to Prometheus and the main Kubernetes cluster:
- `--lister-kubeconfig=<path-to-kubeconfig>`: This configures
@@ -47,11 +58,26 @@ adapter talks to Prometheus and the main Kubernetes cluster:
in-cluster config.
- `--metrics-relist-interval=<duration>`: This is the interval at which to
update the cache of available metrics from Prometheus. Since the adapter
only lists metrics during discovery that exist between the current time and
the last discovery query, your relist interval should be equal to or larger
than your Prometheus scrape interval, otherwise your metrics will
occaisonally disappear from the adapter.
update the cache of available metrics from Prometheus. By default, this
value is set to 10 minutes.
- `--metrics-max-age=<duration>`: This is the max age of the metrics to be
loaded from Prometheus. For example, when set to `10m`, it will query
Prometheus for metrics since 10m ago, and only those that has datapoints
within the time period will appear in the adapter. Therefore, the metrics-max-age
should be equal to or larger than your Prometheus' scrape interval,
or your metrics will occaisonally disappear from the adapter.
By default, this is set to be the same as metrics-relist-interval to avoid
some confusing behavior (See this [PR](https://github.com/kubernetes-sigs/prometheus-adapter/pull/230)).
Note: We recommend setting this only if you understand what is happening.
For example, this setting could be useful in cases where the scrape duration is
over a network call, e.g. pulling metrics from AWS CloudWatch, or Google Monitoring,
more specifically, Google Monitoring sometimes have delays on when data will show
up in their system after being sampled. This means that even if you scraped data
frequently, they might not show up soon. If you configured the relist interval to
a short period but without configuring this, you might not be able to see your
metrics in the adapter in certain scenarios.
- `--prometheus-url=<url>`: This is the URL used to connect to Prometheus.
It will eventually contain query parameters to configure the connection.


@@ -4,6 +4,10 @@ prometheus-adapter is released on an as-needed basis. The process is as follows:
1. An issue is proposing a new release with a changelog since the last release
1. At least one [OWNERS](OWNERS) must LGTM this release
1. An OWNER runs `git tag -s $VERSION` and inserts the changelog and pushes the tag with `git push $VERSION`
1. The release issue is closed
1. A PR that bumps version hardcoded in code is created and merged
1. An OWNER creates a draft Github release
1. An OWNER creates a release tag using `git tag -s $VERSION`, inserts the changelog and pushes the tag with `git push $VERSION`. Then waits for [prow.k8s.io](https://prow.k8s.io) to build and push new images to [gcr.io/k8s-staging-prometheus-adapter](https://gcr.io/k8s-staging-prometheus-adapter)
1. A PR in [kubernetes/k8s.io](https://github.com/kubernetes/k8s.io/blob/main/k8s.gcr.io/images/k8s-staging-prometheus-adapter/images.yaml) is created to release images to `k8s.gcr.io`
1. An OWNER publishes the GitHub release
1. An announcement email is sent to `kubernetes-sig-instrumentation@googlegroups.com` with the subject `[ANNOUNCE] prometheus-adapter $VERSION is released`
1. The release issue is closed


@@ -10,5 +10,5 @@
# DO NOT REPORT SECURITY VULNERABILITIES DIRECTLY TO THESE NAMES, FOLLOW THE
# INSTRUCTIONS AT https://kubernetes.io/security/
dgrisonnet
s-urbaniak
lilic

1
VERSION Normal file

@@ -0,0 +1 @@
0.12.0

11
cloudbuild.yaml Normal file

@@ -0,0 +1,11 @@
# See https://cloud.google.com/cloud-build/docs/build-config
timeout: 3600s
options:
substitution_option: ALLOW_LOOSE
steps:
- name: 'gcr.io/k8s-staging-test-infra/gcb-docker-gcloud:v20211118-2f2d816b90'
entrypoint: make
env:
- TAG=$_PULL_BASE_REF
args:
- push-all


@@ -19,36 +19,38 @@ package main
import (
"crypto/tls"
"crypto/x509"
"flag"
"fmt"
"io/ioutil"
"net/http"
"net/url"
"os"
"strings"
"time"
"k8s.io/apimachinery/pkg/util/wait"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
openapinamer "k8s.io/apiserver/pkg/endpoints/openapi"
genericapiserver "k8s.io/apiserver/pkg/server"
"k8s.io/client-go/metadata"
"k8s.io/client-go/metadata/metadatainformer"
"k8s.io/client-go/rest"
"k8s.io/client-go/tools/clientcmd"
"k8s.io/client-go/transport"
"k8s.io/component-base/logs"
"k8s.io/klog/v2"
"k8s.io/sample-apiserver/pkg/apiserver"
basecmd "github.com/kubernetes-sigs/custom-metrics-apiserver/pkg/cmd"
"github.com/kubernetes-sigs/custom-metrics-apiserver/pkg/provider"
customexternalmetrics "sigs.k8s.io/custom-metrics-apiserver/pkg/apiserver"
basecmd "sigs.k8s.io/custom-metrics-apiserver/pkg/cmd"
"sigs.k8s.io/custom-metrics-apiserver/pkg/provider"
"sigs.k8s.io/metrics-server/pkg/api"
generatedopenapi "github.com/directxman12/k8s-prometheus-adapter/pkg/api/generated/openapi"
prom "github.com/directxman12/k8s-prometheus-adapter/pkg/client"
mprom "github.com/directxman12/k8s-prometheus-adapter/pkg/client/metrics"
adaptercfg "github.com/directxman12/k8s-prometheus-adapter/pkg/config"
cmprov "github.com/directxman12/k8s-prometheus-adapter/pkg/custom-provider"
extprov "github.com/directxman12/k8s-prometheus-adapter/pkg/external-provider"
"github.com/directxman12/k8s-prometheus-adapter/pkg/naming"
resprov "github.com/directxman12/k8s-prometheus-adapter/pkg/resourceprovider"
generatedopenapi "sigs.k8s.io/prometheus-adapter/pkg/api/generated/openapi"
prom "sigs.k8s.io/prometheus-adapter/pkg/client"
mprom "sigs.k8s.io/prometheus-adapter/pkg/client/metrics"
adaptercfg "sigs.k8s.io/prometheus-adapter/pkg/config"
cmprov "sigs.k8s.io/prometheus-adapter/pkg/custom-provider"
extprov "sigs.k8s.io/prometheus-adapter/pkg/external-provider"
"sigs.k8s.io/prometheus-adapter/pkg/naming"
resprov "sigs.k8s.io/prometheus-adapter/pkg/resourceprovider"
)
type PrometheusAdapter struct {
@@ -62,15 +64,24 @@ type PrometheusAdapter struct {
PrometheusAuthConf string
// PrometheusCAFile points to the file containing the ca-root for connecting with Prometheus
PrometheusCAFile string
// PrometheusClientTLSCertFile points to the file containing the client TLS cert for connecting with Prometheus
PrometheusClientTLSCertFile string
// PrometheusClientTLSKeyFile points to the file containing the client TLS key for connecting with Prometheus
PrometheusClientTLSKeyFile string
// PrometheusTokenFile points to the file that contains the bearer token when connecting with Prometheus
PrometheusTokenFile string
// PrometheusHeaders is a k=v list of headers to set on requests to PrometheusURL
PrometheusHeaders []string
// PrometheusVerb is a verb to set on requests to PrometheusURL
PrometheusVerb string
// AdapterConfigFile points to the file containing the metrics discovery configuration.
AdapterConfigFile string
// MetricsRelistInterval is the interval at which to relist the set of available metrics
MetricsRelistInterval time.Duration
// MetricsMaxAge is the period to query available metrics for
MetricsMaxAge time.Duration
// DisableHTTP2 indicates that http2 should not be enabled.
DisableHTTP2 bool
metricsConfig *adaptercfg.MetricsDiscoveryConfig
}
@@ -80,10 +91,14 @@ func (cmd *PrometheusAdapter) makePromClient() (prom.Client, error) {
return nil, fmt.Errorf("invalid Prometheus URL %q: %v", baseURL, err)
}
if cmd.PrometheusVerb != http.MethodGet && cmd.PrometheusVerb != http.MethodPost {
return nil, fmt.Errorf("unsupported Prometheus HTTP verb %q; supported verbs: \"GET\" and \"POST\"", cmd.PrometheusVerb)
}
var httpClient *http.Client
if cmd.PrometheusCAFile != "" {
prometheusCAClient, err := makePrometheusCAClient(cmd.PrometheusCAFile)
prometheusCAClient, err := makePrometheusCAClient(cmd.PrometheusCAFile, cmd.PrometheusClientTLSCertFile, cmd.PrometheusClientTLSKeyFile)
if err != nil {
return nil, err
}
@@ -99,16 +114,19 @@ func (cmd *PrometheusAdapter) makePromClient() (prom.Client, error) {
}
if cmd.PrometheusTokenFile != "" {
data, err := ioutil.ReadFile(cmd.PrometheusTokenFile)
data, err := os.ReadFile(cmd.PrometheusTokenFile)
if err != nil {
return nil, fmt.Errorf("failed to read prometheus-token-file: %v", err)
}
httpClient.Transport = transport.NewBearerAuthRoundTripper(string(data), httpClient.Transport)
wrappedTransport := http.DefaultTransport
if httpClient.Transport != nil {
wrappedTransport = httpClient.Transport
}
httpClient.Transport = transport.NewBearerAuthRoundTripper(string(data), wrappedTransport)
}
genericPromClient := prom.NewGenericAPIClient(httpClient, baseURL)
genericPromClient := prom.NewGenericAPIClient(httpClient, baseURL, parseHeaderArgs(cmd.PrometheusHeaders))
instrumentedGenericPromClient := mprom.InstrumentGenericAPIClient(genericPromClient, baseURL.String())
return prom.NewClientForAPI(instrumentedGenericPromClient), nil
return prom.NewClientForAPI(instrumentedGenericPromClient, cmd.PrometheusVerb), nil
}
func (cmd *PrometheusAdapter) addFlags() {
@@ -120,15 +138,28 @@ func (cmd *PrometheusAdapter) addFlags() {
"kubeconfig file used to configure auth when connecting to Prometheus.")
cmd.Flags().StringVar(&cmd.PrometheusCAFile, "prometheus-ca-file", cmd.PrometheusCAFile,
"Optional CA file to use when connecting with Prometheus")
cmd.Flags().StringVar(&cmd.PrometheusClientTLSCertFile, "prometheus-client-tls-cert-file", cmd.PrometheusClientTLSCertFile,
"Optional client TLS cert file to use when connecting with Prometheus, auto-renewal is not supported")
cmd.Flags().StringVar(&cmd.PrometheusClientTLSKeyFile, "prometheus-client-tls-key-file", cmd.PrometheusClientTLSKeyFile,
"Optional client TLS key file to use when connecting with Prometheus, auto-renewal is not supported")
cmd.Flags().StringVar(&cmd.PrometheusTokenFile, "prometheus-token-file", cmd.PrometheusTokenFile,
"Optional file containing the bearer token to use when connecting with Prometheus")
cmd.Flags().StringArrayVar(&cmd.PrometheusHeaders, "prometheus-header", cmd.PrometheusHeaders,
"Optional header to set on requests to prometheus-url. Can be repeated")
cmd.Flags().StringVar(&cmd.PrometheusVerb, "prometheus-verb", cmd.PrometheusVerb,
"HTTP verb to set on requests to Prometheus. Possible values: \"GET\", \"POST\"")
cmd.Flags().StringVar(&cmd.AdapterConfigFile, "config", cmd.AdapterConfigFile,
"Configuration file containing details of how to transform between Prometheus metrics "+
"and custom metrics API resources")
cmd.Flags().DurationVar(&cmd.MetricsRelistInterval, "metrics-relist-interval", cmd.MetricsRelistInterval, ""+
cmd.Flags().DurationVar(&cmd.MetricsRelistInterval, "metrics-relist-interval", cmd.MetricsRelistInterval,
"interval at which to re-list the set of all available metrics from Prometheus")
cmd.Flags().DurationVar(&cmd.MetricsMaxAge, "metrics-max-age", cmd.MetricsMaxAge, ""+
cmd.Flags().DurationVar(&cmd.MetricsMaxAge, "metrics-max-age", cmd.MetricsMaxAge,
"period for which to query the set of available metrics from Prometheus")
cmd.Flags().BoolVar(&cmd.DisableHTTP2, "disable-http2", cmd.DisableHTTP2,
"Disable HTTP/2 support")
// Add logging flags
logs.AddFlags(cmd.Flags())
}
func (cmd *PrometheusAdapter) loadConfig() error {
@@ -196,13 +227,13 @@ func (cmd *PrometheusAdapter) makeExternalProvider(promClient prom.Client, stopC
}
// construct the provider and start it
emProvider, runner := extprov.NewExternalPrometheusProvider(promClient, namers, cmd.MetricsRelistInterval)
emProvider, runner := extprov.NewExternalPrometheusProvider(promClient, namers, cmd.MetricsRelistInterval, cmd.MetricsMaxAge)
runner.RunUntil(stopCh)
return emProvider, nil
}
func (cmd *PrometheusAdapter) addResourceMetricsAPI(promClient prom.Client) error {
func (cmd *PrometheusAdapter) addResourceMetricsAPI(promClient prom.Client, stopCh <-chan struct{}) error {
if cmd.metricsConfig.ResourceRules == nil {
// bail if we don't have rules for setting things up
return nil
@@ -218,19 +249,48 @@ func (cmd *PrometheusAdapter) addResourceMetricsAPI(promClient prom.Client) erro
return fmt.Errorf("unable to construct resource metrics API provider: %v", err)
}
informers, err := cmd.Informers()
rest, err := cmd.ClientConfig()
if err != nil {
return err
}
client, err := metadata.NewForConfig(rest)
if err != nil {
return err
}
podInformerFactory := metadatainformer.NewFilteredSharedInformerFactory(client, 0, corev1.NamespaceAll, func(options *metav1.ListOptions) {
options.FieldSelector = "status.phase=Running"
})
podInformer := podInformerFactory.ForResource(corev1.SchemeGroupVersion.WithResource("pods"))
informer, err := cmd.Informers()
if err != nil {
return err
}
config, err := cmd.Config()
if err != nil {
return err
}
config.GenericConfig.EnableMetrics = false
server, err := cmd.Server()
if err != nil {
return err
}
if err := api.Install(provider, informers.Core().V1(), server.GenericAPIServer); err != nil {
metricsHandler, err := mprom.MetricsHandler()
if err != nil {
return err
}
server.GenericAPIServer.Handler.NonGoRestfulMux.HandleFunc("/metrics", metricsHandler)
if err := api.Install(provider, podInformer.Lister(), informer.Core().V1().Nodes().Lister(), server.GenericAPIServer, nil); err != nil {
return err
}
go podInformer.Informer().Run(stopCh)
return nil
}
@@ -242,20 +302,28 @@ func main() {
// set up flags
cmd := &PrometheusAdapter{
PrometheusURL: "https://localhost",
PrometheusVerb: http.MethodGet,
MetricsRelistInterval: 10 * time.Minute,
}
cmd.Name = "prometheus-metrics-adapter"
cmd.OpenAPIConfig = genericapiserver.DefaultOpenAPIConfig(generatedopenapi.GetOpenAPIDefinitions, openapinamer.NewDefinitionNamer(apiserver.Scheme))
cmd.OpenAPIConfig.Info.Title = "prometheus-metrics-adapter"
cmd.OpenAPIConfig.Info.Version = "1.0.0"
cmd.addFlags()
cmd.Flags().AddGoFlagSet(flag.CommandLine) // make sure we get the klog flags
if err := cmd.Flags().Parse(os.Args); err != nil {
klog.Fatalf("unable to parse flags: %v", err)
}
if cmd.OpenAPIConfig == nil {
cmd.OpenAPIConfig = genericapiserver.DefaultOpenAPIConfig(generatedopenapi.GetOpenAPIDefinitions, openapinamer.NewDefinitionNamer(api.Scheme, customexternalmetrics.Scheme))
cmd.OpenAPIConfig.Info.Title = "prometheus-metrics-adapter"
cmd.OpenAPIConfig.Info.Version = "1.0.0"
}
if cmd.OpenAPIV3Config == nil {
cmd.OpenAPIV3Config = genericapiserver.DefaultOpenAPIV3Config(generatedopenapi.GetOpenAPIDefinitions, openapinamer.NewDefinitionNamer(api.Scheme, customexternalmetrics.Scheme))
cmd.OpenAPIV3Config.Info.Title = "prometheus-metrics-adapter"
cmd.OpenAPIV3Config.Info.Version = "1.0.0"
}
// if --metrics-max-age is not set, make it equal to --metrics-relist-interval
if cmd.MetricsMaxAge == 0*time.Second {
cmd.MetricsMaxAge = cmd.MetricsRelistInterval
@@ -272,8 +340,11 @@ func main() {
klog.Fatalf("unable to load metrics discovery config: %v", err)
}
// stop channel closed on SIGTERM and SIGINT
stopCh := genericapiserver.SetupSignalHandler()
// construct the provider
cmProvider, err := cmd.makeProvider(promClient, wait.NeverStop)
cmProvider, err := cmd.makeProvider(promClient, stopCh)
if err != nil {
klog.Fatalf("unable to construct custom metrics provider: %v", err)
}
@@ -284,7 +355,7 @@ func main() {
}
// construct the external provider
emProvider, err := cmd.makeExternalProvider(promClient, wait.NeverStop)
emProvider, err := cmd.makeExternalProvider(promClient, stopCh)
if err != nil {
klog.Fatalf("unable to construct external metrics provider: %v", err)
}
@@ -295,12 +366,20 @@ func main() {
}
// attach resource metrics support, if it's needed
if err := cmd.addResourceMetricsAPI(promClient); err != nil {
if err := cmd.addResourceMetricsAPI(promClient, stopCh); err != nil {
klog.Fatalf("unable to install resource metrics API: %v", err)
}
// disable HTTP/2 to mitigate CVE-2023-44487 until the Go standard library
// and golang.org/x/net are fully fixed.
server, err := cmd.Server()
if err != nil {
klog.Fatalf("unable to fetch server: %v", err)
}
server.GenericAPIServer.SecureServingInfo.DisableHTTP2 = cmd.DisableHTTP2
// run the server
if err := cmd.Run(wait.NeverStop); err != nil {
if err := cmd.Run(stopCh); err != nil {
klog.Fatalf("unable to run custom metrics adapter: %v", err)
}
}
@@ -324,7 +403,7 @@ func makeKubeconfigHTTPClient(inClusterAuth bool, kubeConfigPath string) (*http.
loader := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(loadingRules, &clientcmd.ConfigOverrides{})
authConf, err = loader.ClientConfig()
if err != nil {
return nil, fmt.Errorf("unable to construct auth configuration from %q for connecting to Prometheus: %v", kubeConfigPath, err)
}
} else {
var err error
@@ -340,8 +419,8 @@ func makeKubeconfigHTTPClient(inClusterAuth bool, kubeConfigPath string) (*http.
return &http.Client{Transport: tr}, nil
}
func makePrometheusCAClient(caFilename string) (*http.Client, error) {
data, err := ioutil.ReadFile(caFilename)
func makePrometheusCAClient(caFilePath string, tlsCertFilePath string, tlsKeyFilePath string) (*http.Client, error) {
data, err := os.ReadFile(caFilePath)
if err != nil {
return nil, fmt.Errorf("failed to read prometheus-ca-file: %v", err)
}
@@ -351,11 +430,41 @@ func makePrometheusCAClient(caFilename string) (*http.Client, error) {
return nil, fmt.Errorf("no certs found in prometheus-ca-file")
}
if (tlsCertFilePath != "") && (tlsKeyFilePath != "") {
tlsClientCerts, err := tls.LoadX509KeyPair(tlsCertFilePath, tlsKeyFilePath)
if err != nil {
return nil, fmt.Errorf("failed to read TLS key pair: %v", err)
}
return &http.Client{
Transport: &http.Transport{
TLSClientConfig: &tls.Config{
RootCAs: pool,
Certificates: []tls.Certificate{tlsClientCerts},
MinVersion: tls.VersionTLS12,
},
},
}, nil
}
return &http.Client{
Transport: &http.Transport{
TLSClientConfig: &tls.Config{
RootCAs: pool,
MinVersion: tls.VersionTLS12,
},
},
}, nil
}
func parseHeaderArgs(args []string) http.Header {
headers := make(http.Header, len(args))
for _, h := range args {
parts := strings.SplitN(h, "=", 2)
value := ""
if len(parts) > 1 {
value = parts[1]
}
headers.Add(parts[0], value)
}
return headers
}

cmd/adapter/adapter_test.go (new file, +211)

@@ -0,0 +1,211 @@
/*
Copyright 2016 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package main
import (
"net/http"
"os"
"path/filepath"
"reflect"
"testing"
)
const certsDir = "testdata"
func TestMakeKubeconfigHTTPClient(t *testing.T) {
tests := []struct {
kubeconfigPath string
inClusterAuth bool
success bool
}{
{
kubeconfigPath: filepath.Join(certsDir, "kubeconfig"),
inClusterAuth: false,
success: true,
},
{
kubeconfigPath: filepath.Join(certsDir, "kubeconfig"),
inClusterAuth: true,
success: false,
},
{
kubeconfigPath: filepath.Join(certsDir, "kubeconfig-error"),
inClusterAuth: false,
success: false,
},
{
kubeconfigPath: "",
inClusterAuth: false,
success: true,
},
}
os.Setenv("KUBERNETES_SERVICE_HOST", "prometheus")
os.Setenv("KUBERNETES_SERVICE_PORT", "8080")
for _, test := range tests {
t.Logf("Running test for: inClusterAuth %v, kubeconfigPath %v", test.inClusterAuth, test.kubeconfigPath)
kubeconfigHTTPClient, err := makeKubeconfigHTTPClient(test.inClusterAuth, test.kubeconfigPath)
if test.success {
if err != nil {
t.Errorf("Error is %v, expected nil", err)
continue
}
if kubeconfigHTTPClient.Transport == nil {
if test.inClusterAuth || test.kubeconfigPath != "" {
t.Error("HTTP client Transport is nil, expected http.RoundTripper")
}
}
} else if err == nil {
t.Errorf("Error is nil, expected %v", err)
}
}
}
func TestMakePrometheusCAClient(t *testing.T) {
tests := []struct {
caFilePath string
tlsCertFilePath string
tlsKeyFilePath string
success bool
tlsUsed bool
}{
{
caFilePath: filepath.Join(certsDir, "ca.pem"),
tlsCertFilePath: filepath.Join(certsDir, "tlscert.crt"),
tlsKeyFilePath: filepath.Join(certsDir, "tlskey.key"),
success: true,
tlsUsed: true,
},
{
caFilePath: filepath.Join(certsDir, "ca-error.pem"),
tlsCertFilePath: filepath.Join(certsDir, "tlscert.crt"),
tlsKeyFilePath: filepath.Join(certsDir, "tlskey.key"),
success: false,
tlsUsed: true,
},
{
caFilePath: filepath.Join(certsDir, "ca.pem"),
tlsCertFilePath: filepath.Join(certsDir, "tlscert-error.crt"),
tlsKeyFilePath: filepath.Join(certsDir, "tlskey.key"),
success: false,
tlsUsed: true,
},
{
caFilePath: filepath.Join(certsDir, "ca.pem"),
tlsCertFilePath: "",
tlsKeyFilePath: "",
success: true,
tlsUsed: false,
},
}
for _, test := range tests {
t.Logf("Running test for: caFilePath %v, tlsCertFilePath %v, tlsKeyFilePath %v", test.caFilePath, test.tlsCertFilePath, test.tlsKeyFilePath)
prometheusCAClient, err := makePrometheusCAClient(test.caFilePath, test.tlsCertFilePath, test.tlsKeyFilePath)
if test.success {
if err != nil {
t.Errorf("Error is %v, expected nil", err)
continue
}
if prometheusCAClient.Transport.(*http.Transport).TLSClientConfig.RootCAs == nil {
t.Error("RootCAs is nil, expected *x509.CertPool")
continue
}
if test.tlsUsed {
if prometheusCAClient.Transport.(*http.Transport).TLSClientConfig.Certificates == nil {
t.Error("TLS certificates is nil, expected []tls.Certificate")
continue
}
} else {
if prometheusCAClient.Transport.(*http.Transport).TLSClientConfig.Certificates != nil {
t.Errorf("TLS certificates is %+v, expected nil", prometheusCAClient.Transport.(*http.Transport).TLSClientConfig.Certificates)
}
}
} else if err == nil {
t.Errorf("Error is nil, expected %v", err)
}
}
}
func TestParseHeaderArgs(t *testing.T) {
tests := []struct {
args []string
headers http.Header
}{
{
headers: http.Header{},
},
{
args: []string{"foo=bar"},
headers: http.Header{
"Foo": []string{"bar"},
},
},
{
args: []string{"foo"},
headers: http.Header{
"Foo": []string{""},
},
},
{
args: []string{"foo=bar", "foo=baz", "bux=baz=23"},
headers: http.Header{
"Foo": []string{"bar", "baz"},
"Bux": []string{"baz=23"},
},
},
}
for _, test := range tests {
got := parseHeaderArgs(test.args)
if !reflect.DeepEqual(got, test.headers) {
t.Errorf("Expected %#v but got %#v", test.headers, got)
}
}
}
func TestFlags(t *testing.T) {
cmd := &PrometheusAdapter{
PrometheusURL: "https://localhost",
}
cmd.addFlags()
flags := cmd.FlagSet
if flags == nil {
t.Fatalf("FlagSet should not be nil")
}
expectedFlags := []struct {
flag string
defaultValue string
}{
{flag: "v", defaultValue: "0"}, // logging flag (klog)
{flag: "prometheus-url", defaultValue: "https://localhost"}, // default is set in cmd
}
for _, e := range expectedFlags {
flag := flags.Lookup(e.flag)
if flag == nil {
t.Errorf("Flag %q expected to be present, was absent", e.flag)
continue
}
if flag.DefValue != e.defaultValue {
t.Errorf("Expected default value %q for flag %q, got %q", e.defaultValue, e.flag, flag.DefValue)
}
}
}

cmd/adapter/testdata/ca-error.pem (new vendored file, +16)

@@ -0,0 +1,16 @@
-----BEGIN CERTIFICATE-----
MIIDdjCCAl4CCQDdbOsYxSKoeDANBgkqhkiG9w0BAQsFADB9MQswCQYDVQQGEwJV
UzENMAsGA1UEBwwEVGVzdDEaMBgGA1UECgwRUHJvbWV0aGV1c0FkYXB0ZXIxJDAi
BgNVBAMMG2s4cy1wcm9tZXRoZXVzLWFkYXB0ZXIudGVzdDEdMBsGCSqGSIb3DQEJ
ARYOdGVzdEB0ZXN0LnRlc3QwHhcNMjEwMjA5MTE0NzUwWhcNMjYwMjA4MTE0NzUw
WjB9MQswCQYDVQQGEwJVUzENMAsGA1UEBwwEVGVzdDEaMBgGA1UECgwRUHJvbWV0
aGV1c0FkYXB0ZXIxJDAiBgNVBAMMG2s4cy1wcm9tZXRoZXVzLWFkYXB0ZXIudGVz
dDEdMBsGCSqGSIb3DQEJARYOdGVzdEB0ZXN0LnRlc3QwggEiMA0GCSqGSIb3DQEB
AQUAA4IBDwAwggEKAoIBAQC24TDfTWLtYZPLDXqEjF7yn4K7oBOltX5Nngsk7LNd
AQELBQADggEBAD/bbeAZuyvtuEwdJ+4wkhBsHYXQ4OPxff1f3t4buIQFtnilWTXE
S60K3SEaQS8rOw8V9eHmzCsh3mPuVCoM7WsgKhp2mVhbGVZoBWBZ8kPQXqtsw+v4
tqTuJXnFPiF4clXb6Wp96Rc7nxzRAfn/6uVbSWds4JwRToUVszVOxe+yu0I84vuB
SHrRa077b1V+UT8otm+C5tC3jBZ0/IPRNWoT/rVcSoVLouX0fkbtxNF7c9v+PYg6
849A9T8cGKWKpKPGNEwBL9HYwtK6W0tTJr8A8pnAJ/UlniHA6u7SMHN+NoqBfi6M
bqq9lQ4QhjFrN2B1z9r3ak+EzQX1711TQ8w=
-----END CERTIFICATE-----

cmd/adapter/testdata/ca.pem (new vendored file, +20)

@@ -0,0 +1,20 @@
-----BEGIN CERTIFICATE-----
MIIDLjCCAhYCCQDlnNCOw7JHFDANBgkqhkiG9w0BAQsFADBYMQswCQYDVQQGEwJV
UzENMAsGA1UECgwEVGVzdDEbMBkGA1UEAwwScHJvbWV0aGV1cy1hZGFwdGVyMR0w
GwYJKoZIhvcNAQkBFg50ZXN0QHRlc3QudGVzdDAgFw0yMTAyMjIyMDMxNTBaGA80
NzU5MDEyMDIwMzE1MFowWDELMAkGA1UEBhMCVVMxDTALBgNVBAoMBFRlc3QxGzAZ
BgNVBAMMEnByb21ldGhldXMtYWRhcHRlcjEdMBsGCSqGSIb3DQEJARYOdGVzdEB0
ZXN0LnRlc3QwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCvn9HQlfhw
qDH77+eEFU+N+ztqCtat54neVez+sFa4dfYxuvVYK+nc+oh4E7SS4u+17eKV+QFb
ZhRhrOTNI+fmuO+xDPKyU1MuYUDfwasRfMDcUpssea2fO/SrsxHmX9oOam0kgefJ
8aSwI9TYw4N4kIpG+EGatogDlR2KXhrqsRfx5PUB4npFaCrdoglyvvAQig83Iq5L
+bCknSe6NUMiqtL9CcuLzzRKB3DMOrvbB0tJdb4uv/gS26sx/Hp/1ri73/tv4I9z
GLLoUUoff7vfvxrhiGR9i+qBOda7THbbmYBD54y+SR0dBa2uuDDX0JbgNNfXtjiG
52hvAnc1/wv7AgMBAAEwDQYJKoZIhvcNAQELBQADggEBACCysIzT9NKaniEvXtnx
Yx/jRxpiEEUGl8kg83a95X4f13jdPpUSwcn3/iK5SAE/7ntGVM+ajtlXrHGxwjB7
ER0w4WC6Ozypzoh/yI/VXs+DRJTJu8CBJOBRQEpzkK4r64HU8iN2c9lPp1+6b3Vy
jfbf3yfnRUbJztSjOFDUeA2t3FThVddhqif/oxj65s5R8p9HEurcwhA3Q6lE53yx
jgee8qV9HXAqa4V0qQQJ0tjcpajhQahDTtThRr+Z2H4TzQuwHa3dM7IIF6EPWsCo
DtbUXEPL7zT3EBH7THOdvNsFlD/SFmT2RwiQ5606bRAHwAzzxjxjxFTMl7r4tX5W
Ldc=
-----END CERTIFICATE-----

cmd/adapter/testdata/kubeconfig (new vendored file, +17)

@@ -0,0 +1,17 @@
apiVersion: v1
kind: Config
clusters:
- name: test
cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURMakNDQWhZQ0NRRGxuTkNPdzdKSEZEQU5CZ2txaGtpRzl3MEJBUXNGQURCWU1Rc3dDUVlEVlFRR0V3SlYKVXpFTk1Bc0dBMVVFQ2d3RVZHVnpkREViTUJrR0ExVUVBd3dTY0hKdmJXVjBhR1YxY3kxaFpHRndkR1Z5TVIwdwpHd1lKS29aSWh2Y05BUWtCRmc1MFpYTjBRSFJsYzNRdWRHVnpkREFnRncweU1UQXlNakl5TURNeE5UQmFHQTgwCk56VTVNREV5TURJd016RTFNRm93V0RFTE1Ba0dBMVVFQmhNQ1ZWTXhEVEFMQmdOVkJBb01CRlJsYzNReEd6QVoKQmdOVkJBTU1FbkJ5YjIxbGRHaGxkWE10WVdSaGNIUmxjakVkTUJzR0NTcUdTSWIzRFFFSkFSWU9kR1Z6ZEVCMApaWE4wTG5SbGMzUXdnZ0VpTUEwR0NTcUdTSWIzRFFFQkFRVUFBNElCRHdBd2dnRUtBb0lCQVFDdm45SFFsZmh3CnFESDc3K2VFRlUrTit6dHFDdGF0NTRuZVZleitzRmE0ZGZZeHV2VllLK25jK29oNEU3U1M0dSsxN2VLVitRRmIKWmhSaHJPVE5JK2ZtdU8reERQS3lVMU11WVVEZndhc1JmTURjVXBzc2VhMmZPL1Nyc3hIbVg5b09hbTBrZ2VmSgo4YVN3STlUWXc0TjRrSXBHK0VHYXRvZ0RsUjJLWGhycXNSZng1UFVCNG5wRmFDcmRvZ2x5dnZBUWlnODNJcTVMCitiQ2tuU2U2TlVNaXF0TDlDY3VMenpSS0IzRE1PcnZiQjB0SmRiNHV2L2dTMjZzeC9IcC8xcmk3My90djRJOXoKR0xMb1VVb2ZmN3ZmdnhyaGlHUjlpK3FCT2RhN1RIYmJtWUJENTR5K1NSMGRCYTJ1dUREWDBKYmdOTmZYdGppRwo1Mmh2QW5jMS93djdBZ01CQUFFd0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFDQ3lzSXpUOU5LYW5pRXZYdG54Cll4L2pSeHBpRUVVR2w4a2c4M2E5NVg0ZjEzamRQcFVTd2NuMy9pSzVTQUUvN250R1ZNK2FqdGxYckhHeHdqQjcKRVIwdzRXQzZPenlwem9oL3lJL1ZYcytEUkpUSnU4Q0JKT0JSUUVwemtLNHI2NEhVOGlOMmM5bFBwMSs2YjNWeQpqZmJmM3lmblJVYkp6dFNqT0ZEVWVBMnQzRlRoVmRkaHFpZi9veGo2NXM1UjhwOUhFdXJjd2hBM1E2bEU1M3l4CmpnZWU4cVY5SFhBcWE0VjBxUVFKMHRqY3BhamhRYWhEVHRUaFJyK1oySDRUelF1d0hhM2RNN0lJRjZFUFdzQ28KRHRiVVhFUEw3elQzRUJIN1RIT2R2TnNGbEQvU0ZtVDJSd2lRNTYwNmJSQUh3QXp6eGp4anhGVE1sN3I0dFg1VwpMZGM9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
server: test.test
contexts:
- name: test
context:
cluster: test
user: test-user
current-context: test
users:
- name: test-user
user:
token: abcde12345

cmd/adapter/testdata/kubeconfig-error (new vendored file, +18)

@@ -0,0 +1,18 @@
apiVersion: v1
kind: Config
clusters:
- name: test
cluster:
certificate-authority-data: abcde12345
server: test.test
contexts:
- name: test
context:
cluster: test
user: test-user
current-context: test
users:
- name: test-user
user:
token: abcde12345

cmd/adapter/testdata/tlscert-error.crt (new vendored file, +24)

@@ -0,0 +1,24 @@
-----BEGIN CERTIFICATE-----
MIIFdjCCA14CCQC+svUhDVv51DANBgkqhkiG9w0BAQsFADB9MQswCQYDVQQGEwJV
UzENMAsGA1UEBwwEVGVzdDEaMBgGA1UECgwRUHJvbWV0aGV1c0FkYXB0ZXIxJDAi
BgNVBAMMG2s4cy1wcm9tZXRoZXVzLWFkYXB0ZXIudGVzdDEdMBsGCSqGSIb3DQEJ
ARYOdGVzdEB0ZXN0LnRlc3QwHhcNMjEwMjA5MTE0NDMyWhcNMjIwMjA5MTE0NDMy
WjB9MQswCQYDVQQGEwJVUzENMAsGA1UEBwwEVGVzdDEaMBgGA1UECgwRUHJvbWV0
aGV1c0FkYXB0ZXIxJDAiBgNVBAMMG2s4cy1wcm9tZXRoZXVzLWFkYXB0ZXIudGVz
dDEdMBsGCSqGSIb3DQEJARYOdGVzdEB0ZXN0LnRlc3QwggIiMA0GCSqGSIb3DQEB
AQUAA4ICDwAwggIKAoICAQDtLqKuIJqRETLOUSMDBtUDmIBaD2pG9Qv+cOBhQbVS
apZRWk8uKZKxqBOxgQ3UxY1szeVkx1Dphe3RN6ndmofiRc23ns1qncbDllgbtflk
GFvLKGcVBa6Z/lZ6FCZDWn6K6mJb0a7jtkOMG6+J/5eJHfZ23u/GYL1RKxH+qPPc
AwIDAQABMA0GCSqGSIb3DQEBCwUAA4ICAQC0oE4/Yel5GHeuKJ5+U9KEBsLCrBjj
jUW8or7SIZ1Gl4J2ak3p3WabhZjqOe10rsIXCNTaC2rEMvkNiP5Om0aeE6bx14jV
e9FIfJ7ytAL/PISaZXINgml05m4Su3bUaxCQpajBgqneNp4w57jfeFhcPt35j0H4
bxGk/hnIY1MmRULSOFBstmxNZSDNsGlTcZoN3+0KtuqLg6vTNuuJIyx1zd9/QT8t
RJ4fgrffJcPJscvq6hEdWmtcJhaDLWOEblsbFfN0J+zK07hHhqRavQrnwaBZgFWa
OIqAo6NfZONhCFy9mWFxLvQky1NXr60y220+N1GkEiLRQES7+p1pcKgn0v+f2EfW
uN6+LCppWX7FqtkB3OhZkHM6nbE/9GP5T76Kj30Fed/nHkTJ3QORRMQUTs4J6LNk
BD1i14MZMCn3UBZh8wX+d63xJHtfMvfac7L655GwLEnWW8JM8h8DDfRYM7JuEURG
pSbvoaygyvddT0FKRLcFGhfI7aBSWGcJH5rHdEcUQ+mnloD1RioQqTC+kxUSddJI
QNjgYivl9kwW9cJV1jzmKd8GQfg+j1X+jR9icNT5cacvclwnL0Mim0w/ZLfWQYmJ
q2ud+GS9+5RtPzWwHR60+Qs3dr8oQGh5wO12qUJ8d5MI+4YGWRjKRyYdio6g1Bhi
9WInD4va9cC7fw==
-----END CERTIFICATE-----

cmd/adapter/testdata/tlscert.crt (new vendored file, +30)

@@ -0,0 +1,30 @@
-----BEGIN CERTIFICATE-----
MIIFLjCCAxYCCQDMlabDYYlDKzANBgkqhkiG9w0BAQsFADBYMQswCQYDVQQGEwJV
UzENMAsGA1UECgwEVGVzdDEbMBkGA1UEAwwScHJvbWV0aGV1cy1hZGFwdGVyMR0w
GwYJKoZIhvcNAQkBFg50ZXN0QHRlc3QudGVzdDAgFw0yMTAyMjIyMDMwMTNaGA80
NzU5MDEyMDIwMzAxM1owWDELMAkGA1UEBhMCVVMxDTALBgNVBAoMBFRlc3QxGzAZ
BgNVBAMMEnByb21ldGhldXMtYWRhcHRlcjEdMBsGCSqGSIb3DQEJARYOdGVzdEB0
ZXN0LnRlc3QwggIiMA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQDIJOO8apS3
84bssvnTVHp1VAiPg1tX+E6wPjayVPx+S4LgMKA4QM2kNKoPLQtvGV+lyqfhYp0H
cdGzCPVQ6aBlbHuOhInusJaXjgOlNTalThgigzky0t1jFxqaNjFtXqv6ME1Zcb9H
VrGMreEfNj8/Ijp7cfsBe7Jv2rBc06aidx9j66+oPTC98XcNnURUGO95UF8SjTQt
oi10m17uA7z/JUUSBvDJAg5Z62myZPU2stz38cuthROyEyXRBWimHh7bD17rwqhc
WRCfkFORWvwz9GMV5KfFCfWm2D2pm1f3ZWm5/FQbQrlxgxUagwDMoma+F6hQp84a
/sYPqqkWDRUK0NGZzWwxjfra8r8H2xFab+5ZFVr9+FhMgy6eelZ1JJc860s35Qpk
ZrSRH8RNMqLRG1cnDwHn9Md6joCZgLJhEW9L5xjpCVWkLXK59yA9ry5Jau9/2zDs
wlRzYI4TNazbVa84KliEjt3nZ6DgQh3PRtxHDrqJIQSSYu1MtUmPArLtEDDP/BqD
fGWCayc/SdxSWW9qU/aOq+D4KMKQXV44qc22f6rd/LKt/fcvDpbfcexXbeeNABQg
x1rAnhA8L/rYc4WTTbTrb8jwhaUoqJve6XOsHPVbk/L4CS9ReP1UwvhMM1C+Ast6
rr/a2bZkoMK+jmkA2QTwUsjvt/8G4dtVOwIDAQABMA0GCSqGSIb3DQEBCwUAA4IC
AQAvrWslCR21l8XRGI+l4z6bpZ7+089KQRemYcHWOKZ/nDTYcydDWQMdMtDZS43d
B2Wuu4UfOnK27YwuP4Ojf2hAzeaDBgv7xOcKRZ1K+zOCm+VqbtWV49c/Ow0Rc5KU
N7rApohdpXeJBp5TB1qQJsKcBv3gveLAivCFTeD0LiLMVdxjRRl9ZbMXtD3PABDC
KKFtE/n2MV/0wroMD9hHs9LckcNjHSrIFaQEy9cESn8q3kngFf3wvc2oM77yCOZ1
5y0AN+9ZXyMHHlMjye7GuW0Mpiwo1O4tW2brC0boqSmvSFNW9KRogKvu6Oij9Pm6
jJpuUsM0KOnID8m9jJ+Xb+DGC9cgLGHRJc+zw74X2KMQnH4/pZDNbIGG7d8xEoPn
RS/EbCoALmUbI2kqflVN88kN4ZUchsoHly5gIdidfo9yjeOihTF0xEEou/tzGW+K
AYxwy9uIYhz4lmH894H5nqJWPY/aLxD4M9nFW0yxczCQ8tpjwVYmP3/dCKp1IUXy
0h9TjyBRPv9O3JrTLTYBPLisLNqiU+YOZM6wgqmZTPtTCxKMmNlhGWKa8prAhMdb
GRxwkO6ylyL/j3J3HgcDHEC22/685L21HVFv8z/DMuj/eba4yyn1FBVXOU9hgLWS
LVLoVFFp7RaGSIECcqTyXldoZZpZrA89XDVuqSvHDiCOrg==
-----END CERTIFICATE-----

cmd/adapter/testdata/tlskey.key (new vendored file, +52)

@@ -0,0 +1,52 @@
-----BEGIN PRIVATE KEY-----
MIIJQwIBADANBgkqhkiG9w0BAQEFAASCCS0wggkpAgEAAoICAQDIJOO8apS384bs
svnTVHp1VAiPg1tX+E6wPjayVPx+S4LgMKA4QM2kNKoPLQtvGV+lyqfhYp0HcdGz
CPVQ6aBlbHuOhInusJaXjgOlNTalThgigzky0t1jFxqaNjFtXqv6ME1Zcb9HVrGM
reEfNj8/Ijp7cfsBe7Jv2rBc06aidx9j66+oPTC98XcNnURUGO95UF8SjTQtoi10
m17uA7z/JUUSBvDJAg5Z62myZPU2stz38cuthROyEyXRBWimHh7bD17rwqhcWRCf
kFORWvwz9GMV5KfFCfWm2D2pm1f3ZWm5/FQbQrlxgxUagwDMoma+F6hQp84a/sYP
qqkWDRUK0NGZzWwxjfra8r8H2xFab+5ZFVr9+FhMgy6eelZ1JJc860s35QpkZrSR
H8RNMqLRG1cnDwHn9Md6joCZgLJhEW9L5xjpCVWkLXK59yA9ry5Jau9/2zDswlRz
YI4TNazbVa84KliEjt3nZ6DgQh3PRtxHDrqJIQSSYu1MtUmPArLtEDDP/BqDfGWC
ayc/SdxSWW9qU/aOq+D4KMKQXV44qc22f6rd/LKt/fcvDpbfcexXbeeNABQgx1rA
nhA8L/rYc4WTTbTrb8jwhaUoqJve6XOsHPVbk/L4CS9ReP1UwvhMM1C+Ast6rr/a
2bZkoMK+jmkA2QTwUsjvt/8G4dtVOwIDAQABAoICAQCFd9RG+exjH4uCnXfsbhGb
3IY47igj6frPnS1sjzAyKLkGOGcgHFcGgfhGVouhcxJNxW9e5hxBsq1c70RoyOOl
v0pGKCyzeB90wce8jFf8tK9zlH64XdY1FlsvK6Sagt+84Ck01J3yPOX6IppV7h8P
Qwws9j2lJ5A+919VB++/uCC+yZVCZEv03um9snq2ekp4ZBiCjpeVNumJMXOE1glb
PMdq1iYMZcqcPFkoFhtQdsbUsfJZrL0Nq6c0VJ8M6Fk7TGzIW+9aZiqnvd98t2go
XXkWSH148MNYmCvGx0lKOd7foF2WMFDqWbfhDiuiS0qoya3822qepff+ypgnlGHK
vr+9pLsWT7TG8pBfbXj47a7TwYAXkRMi+78vFQwoaeiKdehJM1YXZg9vBVS8BV3r
+0wYNE4WpdxUvX3aAnJO6ntRU6KCz3/D1+fxUT/w1rKX2Z1uTH5x2UxB6UUGDSF9
HiJfDp6RRtXHbQMR6uowM6UYBn0dl9Aso21oc2K4Gpx5QlsZaPi9M6BBMbPUhFcx
QH+w7fLmccwneJVGxjHkYOcLVLF7nuH5C2DsffrMubrgwuhSw2b8zy7ZpZ0eJ83D
CjJN9EgqwbmH0Or5N91YyVdR0Zm4EtODAo615O1kEMCKasKjpolOx/t9cgtbdkiq
pbLruOS+8jEG1erA7nYkQQKCAQEA4yba38hSkfIMUzfrlgF7AkXHbU4iINgKpHti
A9QrvEL9W4VHRiA5UTezzblyfMck9w/Hhx74pQjjGLj76L+8ZssCFI8ingNo3ItL
/AX3MN68NT4heiy8EvKRwRNWV05WEehZg9tTUKexIDRcDSr/9E+qG/cW5KOIQpYl
RIsKW2RUNFd3TVCQVUIzwe/0n6LuO2b7Btow+nfJ7U3mWQmHGYu7+SYjVlvIoQ68
jFGviGRineu/J7EiPND7qQzj78AtnXkULf+mjK2JdapRcn2EBNL34QepVCyjfXZf
QWm/ykI9nVOKRy1F38OhRHKrBICfWhN2Bgyvw3PPhGcb8EdknwKCAQEA4Y/2bpiz
S0H8LPUUsOLZWCadpp8yzvTerB/vuigkQiHM8w4QUwEpL2iXSF36MD8yV/c4ilVN
8m1p5prp1YtasTxsOWv7FDEs4oZfum1w3QsAvlrFRhctsACsZ1i4i3mvxQWJ955q
zZxs5vhO5CL24rVoQYGVQj/uCSHlyK7ko9AA8XkejTlZMJ5h0Mip+oWNxz3M/VTa
sJlYkQrbP0cWxCjKJLEmtVlVSCMeHoILGZzLcol6RVPbaAb57i27SRwY9YIFt1A+
OMpHFs4fgDa4A1IlobBwhhd1dAw3CL5QJN+ylDnBYsm1bwBRHx/AKUjpRv+7ZXQb
H9ngSivFHrXN5QKCAQBAqzUw9LUdO83qe0ck47MDiJ4oLlBlDVyqSz4yXNs+s8ux
nJYYDuCCkNstvJgtkfyiIenqPBUJ1yfgR/nf34ZhtXYYKE/wsIPQFhBB5ejkDuWC
OvgI8mdw9YItd7XjEThLzNx/P5fOpI823fE/BnjsMyn44DWyTiRi4KAnjXYbYsre
Q/CBIGiW/UwC8K+yKw6r9ruMzd2X0Ta5yq3Dt4Sw7ylK22LAGU1bHPjs8eyJZhr1
XsKDKFjY+55KGJNkFFBoPqpSFjByaI1z5FNfxwAo528Or8GzZyn8dBDWbKbfjFBC
VCBP90GnXOiytfqeQ4gaeuPlAQOhH3168mfv1kN9AoIBABOZzgFYVaRBjKdfeLfS
Tq7BVEvJY8HmN39fmxZjLJtukn/AhhygajLLdPH98KLGqxpHymsC9K4PYfd/GLjM
zkm+hW0L/BqKF2tr39+0aO1camkgPCpWE0tLE7A7XnYIUgTd8VpKMt/BKxl7FGfw
veF/gBrJJu5F3ep/PpeM0yOFDL/vFX+SLzTxXnClL1gsyOA6d5jACez0tmSMO/co
t0q+fKpploKFy8pj+tcN1+cW3/sJBU4G9nb4vDk9UhwNTAHxlYuTdoS61yidKtGa
b60iM1D0oyKT4Un/Ubz5xL8fjUYiKrLp8lE+Bs6clLdBtbvMtz0etMi0xy/K0+tS
Qx0CggEBALfe2TUfAt9aMqpcidaFwgNFTr61wgOeoLWLt559OeRIeZWKAEB81bnz
EJLxDF51Y2tLc/pEXrc0zJzzrFIfk/drYe0uD5RnJjRxE3+spwin6D32ZOZW3KSX
1zReW1On80o/LJU6nyDJrNJvay2eL9PyWi47nBdO7MRZi53im72BmmwxaAKXf40l
StykjloyFdI+eyGyQUqcs4nFHd3WWmV+lLIDhGDlF5EBUgueCJz1xO54oPj1PKGl
vDs7JXdJiS3HDf20GREGwvL1y1kewX+KqdO7aBZhLN3Rx/fZnS/UFC3xCtbikuG4
LeU1NmvuCRmWmrgEkqiKs3jgjbEPVQI=
-----END PRIVATE KEY-----


@@ -8,7 +8,7 @@ import (
"github.com/spf13/cobra"
yaml "gopkg.in/yaml.v2"
"github.com/directxman12/k8s-prometheus-adapter/cmd/config-gen/utils"
"sigs.k8s.io/prometheus-adapter/cmd/config-gen/utils"
)
func main() {


@@ -4,65 +4,66 @@ import (
"fmt"
"time"
prom "github.com/directxman12/k8s-prometheus-adapter/pkg/client"
. "github.com/directxman12/k8s-prometheus-adapter/pkg/config"
pmodel "github.com/prometheus/common/model"
prom "sigs.k8s.io/prometheus-adapter/pkg/client"
"sigs.k8s.io/prometheus-adapter/pkg/config"
)
// DefaultConfig returns a configuration equivalent to the former
// pre-advanced-config settings. This means that "normal" series labels
// will be of the form `<prefix><<.Resource>>`, cadvisor series will be
// of the form `container_`, and have the label `pod_name`. Any series ending
// of the form `container_`, and have the label `pod`. Any series ending
// in total will be treated as a rate metric.
func DefaultConfig(rateInterval time.Duration, labelPrefix string) *MetricsDiscoveryConfig {
return &MetricsDiscoveryConfig{
Rules: []DiscoveryRule{
func DefaultConfig(rateInterval time.Duration, labelPrefix string) *config.MetricsDiscoveryConfig {
return &config.MetricsDiscoveryConfig{
Rules: []config.DiscoveryRule{
// container seconds rate metrics
{
SeriesQuery: string(prom.MatchSeries("", prom.NameMatches("^container_.*"), prom.LabelNeq("container_name", "POD"), prom.LabelNeq("namespace", ""), prom.LabelNeq("pod_name", ""))),
Resources: ResourceMapping{
Overrides: map[string]GroupResource{
SeriesQuery: string(prom.MatchSeries("", prom.NameMatches("^container_.*"), prom.LabelNeq("container", "POD"), prom.LabelNeq("namespace", ""), prom.LabelNeq("pod", ""))),
Resources: config.ResourceMapping{
Overrides: map[string]config.GroupResource{
"namespace": {Resource: "namespace"},
"pod_name": {Resource: "pod"},
"pod": {Resource: "pod"},
},
},
Name: NameMapping{Matches: "^container_(.*)_seconds_total$"},
MetricsQuery: fmt.Sprintf(`sum(rate(<<.Series>>{<<.LabelMatchers>>,container_name!="POD"}[%s])) by (<<.GroupBy>>)`, pmodel.Duration(rateInterval).String()),
Name: config.NameMapping{Matches: "^container_(.*)_seconds_total$"},
MetricsQuery: fmt.Sprintf(`sum(rate(<<.Series>>{<<.LabelMatchers>>,container!="POD"}[%s])) by (<<.GroupBy>>)`, pmodel.Duration(rateInterval).String()),
},
// container rate metrics
{
SeriesQuery: string(prom.MatchSeries("", prom.NameMatches("^container_.*"), prom.LabelNeq("container_name", "POD"), prom.LabelNeq("namespace", ""), prom.LabelNeq("pod_name", ""))),
SeriesFilters: []RegexFilter{{IsNot: "^container_.*_seconds_total$"}},
Resources: ResourceMapping{
Overrides: map[string]GroupResource{
SeriesQuery: string(prom.MatchSeries("", prom.NameMatches("^container_.*"), prom.LabelNeq("container", "POD"), prom.LabelNeq("namespace", ""), prom.LabelNeq("pod", ""))),
SeriesFilters: []config.RegexFilter{{IsNot: "^container_.*_seconds_total$"}},
Resources: config.ResourceMapping{
Overrides: map[string]config.GroupResource{
"namespace": {Resource: "namespace"},
"pod_name": {Resource: "pod"},
"pod": {Resource: "pod"},
},
},
Name: NameMapping{Matches: "^container_(.*)_total$"},
MetricsQuery: fmt.Sprintf(`sum(rate(<<.Series>>{<<.LabelMatchers>>,container_name!="POD"}[%s])) by (<<.GroupBy>>)`, pmodel.Duration(rateInterval).String()),
Name: config.NameMapping{Matches: "^container_(.*)_total$"},
MetricsQuery: fmt.Sprintf(`sum(rate(<<.Series>>{<<.LabelMatchers>>,container!="POD"}[%s])) by (<<.GroupBy>>)`, pmodel.Duration(rateInterval).String()),
},
// container non-cumulative metrics
{
SeriesQuery: string(prom.MatchSeries("", prom.NameMatches("^container_.*"), prom.LabelNeq("container_name", "POD"), prom.LabelNeq("namespace", ""), prom.LabelNeq("pod_name", ""))),
SeriesFilters: []RegexFilter{{IsNot: "^container_.*_total$"}},
Resources: ResourceMapping{
Overrides: map[string]GroupResource{
SeriesQuery: string(prom.MatchSeries("", prom.NameMatches("^container_.*"), prom.LabelNeq("container", "POD"), prom.LabelNeq("namespace", ""), prom.LabelNeq("pod", ""))),
SeriesFilters: []config.RegexFilter{{IsNot: "^container_.*_total$"}},
Resources: config.ResourceMapping{
Overrides: map[string]config.GroupResource{
"namespace": {Resource: "namespace"},
"pod_name": {Resource: "pod"},
"pod": {Resource: "pod"},
},
},
Name: NameMapping{Matches: "^container_(.*)$"},
MetricsQuery: `sum(<<.Series>>{<<.LabelMatchers>>,container_name!="POD"}) by (<<.GroupBy>>)`,
Name: config.NameMapping{Matches: "^container_(.*)$"},
MetricsQuery: `sum(<<.Series>>{<<.LabelMatchers>>,container!="POD"}) by (<<.GroupBy>>)`,
},
// normal non-cumulative metrics
{
SeriesQuery: string(prom.MatchSeries("", prom.LabelNeq(fmt.Sprintf("%snamespace", labelPrefix), ""), prom.NameNotMatches("^container_.*"))),
SeriesFilters: []RegexFilter{{IsNot: ".*_total$"}},
Resources: ResourceMapping{
SeriesFilters: []config.RegexFilter{{IsNot: ".*_total$"}},
Resources: config.ResourceMapping{
Template: fmt.Sprintf("%s<<.Resource>>", labelPrefix),
},
MetricsQuery: "sum(<<.Series>>{<<.LabelMatchers>>}) by (<<.GroupBy>>)",
@@ -71,9 +72,9 @@ func DefaultConfig(rateInterval time.Duration, labelPrefix string) *MetricsDisco
// normal rate metrics
{
SeriesQuery: string(prom.MatchSeries("", prom.LabelNeq(fmt.Sprintf("%snamespace", labelPrefix), ""), prom.NameNotMatches("^container_.*"))),
SeriesFilters: []RegexFilter{{IsNot: ".*_seconds_total"}},
Name: NameMapping{Matches: "^(.*)_total$"},
Resources: ResourceMapping{
SeriesFilters: []config.RegexFilter{{IsNot: ".*_seconds_total"}},
Name: config.NameMapping{Matches: "^(.*)_total$"},
Resources: config.ResourceMapping{
Template: fmt.Sprintf("%s<<.Resource>>", labelPrefix),
},
MetricsQuery: fmt.Sprintf("sum(rate(<<.Series>>{<<.LabelMatchers>>}[%s])) by (<<.GroupBy>>)", pmodel.Duration(rateInterval).String()),
@@ -82,38 +83,38 @@ func DefaultConfig(rateInterval time.Duration, labelPrefix string) *MetricsDisco
// seconds rate metrics
{
SeriesQuery: string(prom.MatchSeries("", prom.LabelNeq(fmt.Sprintf("%snamespace", labelPrefix), ""), prom.NameNotMatches("^container_.*"))),
Name: NameMapping{Matches: "^(.*)_seconds_total$"},
Resources: ResourceMapping{
Name: config.NameMapping{Matches: "^(.*)_seconds_total$"},
Resources: config.ResourceMapping{
Template: fmt.Sprintf("%s<<.Resource>>", labelPrefix),
},
MetricsQuery: fmt.Sprintf("sum(rate(<<.Series>>{<<.LabelMatchers>>}[%s])) by (<<.GroupBy>>)", pmodel.Duration(rateInterval).String()),
},
},
ResourceRules: &ResourceRules{
CPU: ResourceRule{
ResourceRules: &config.ResourceRules{
CPU: config.ResourceRule{
ContainerQuery: fmt.Sprintf("sum(rate(container_cpu_usage_seconds_total{<<.LabelMatchers>>}[%s])) by (<<.GroupBy>>)", pmodel.Duration(rateInterval).String()),
NodeQuery: fmt.Sprintf("sum(rate(container_cpu_usage_seconds_total{<<.LabelMatchers>>, id='/'}[%s])) by (<<.GroupBy>>)", pmodel.Duration(rateInterval).String()),
Resources: ResourceMapping{
Overrides: map[string]GroupResource{
Resources: config.ResourceMapping{
Overrides: map[string]config.GroupResource{
"namespace": {Resource: "namespace"},
"pod_name": {Resource: "pod"},
"pod": {Resource: "pod"},
"instance": {Resource: "node"},
},
},
ContainerLabel: fmt.Sprintf("%scontainer_name", labelPrefix),
ContainerLabel: fmt.Sprintf("%scontainer", labelPrefix),
},
Memory: ResourceRule{
Memory: config.ResourceRule{
ContainerQuery: "sum(container_memory_working_set_bytes{<<.LabelMatchers>>}) by (<<.GroupBy>>)",
NodeQuery: "sum(container_memory_working_set_bytes{<<.LabelMatchers>>,id='/'}) by (<<.GroupBy>>)",
Resources: ResourceMapping{
Overrides: map[string]GroupResource{
Resources: config.ResourceMapping{
Overrides: map[string]config.GroupResource{
"namespace": {Resource: "namespace"},
"pod_name": {Resource: "pod"},
"pod": {Resource: "pod"},
"instance": {Resource: "node"},
},
},
ContainerLabel: fmt.Sprintf("%scontainer_name", labelPrefix),
ContainerLabel: fmt.Sprintf("%scontainer", labelPrefix),
},
Window: pmodel.Duration(rateInterval),
},

View file

@@ -1,6 +0,0 @@
ARG BASEIMAGE
FROM ${BASEIMAGE}
ARG ARCH
COPY ${ARCH}/adapter /
USER 1001:1001
ENTRYPOINT ["/adapter"]

View file

@@ -1,20 +1,11 @@
Example Deployment
==================
1. Make sure you've built the included Dockerfile with `make docker-build`. The image should be tagged as `directxman12/k8s-prometheus-adapter:latest`.
1. Make sure you've built the included Dockerfile with `TAG=latest make container`. The image should be tagged as `registry.k8s.io/prometheus-adapter/staging-prometheus-adapter:latest`.
2. Create a secret called `cm-adapter-serving-certs` with two values:
`serving.crt` and `serving.key`. These are the serving certificates used
by the adapter for serving HTTPS traffic. For more information on how to
generate these certificates, see the [auth concepts
documentation](https://github.com/kubernetes-incubator/apiserver-builder/blob/master/docs/concepts/auth.md)
in the apiserver-builder repository.
The kube-prometheus project published two scripts [gencerts.sh](https://github.com/coreos/prometheus-operator/blob/master/contrib/kube-prometheus/experimental/custom-metrics-api/gencerts.sh)
and [deploy.sh](https://github.com/coreos/prometheus-operator/blob/master/contrib/kube-prometheus/experimental/custom-metrics-api/deploy.sh) to create the `cm-adapter-serving-certs` secret.
3. `kubectl create namespace custom-metrics` to ensure that the namespace that we're installing
2. `kubectl create namespace monitoring` to ensure that the namespace that we're installing
the custom metrics adapter in exists.
4. `kubectl create -f manifests/`, modifying the Deployment as necessary to
3. `kubectl create -f manifests/`, modifying the Deployment as necessary to
point to your Prometheus server, and the ConfigMap to contain your desired
metrics discovery configuration.

View file

@@ -0,0 +1,17 @@
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
labels:
app.kubernetes.io/component: metrics-adapter
app.kubernetes.io/name: prometheus-adapter
app.kubernetes.io/version: 0.12.0
name: v1beta1.metrics.k8s.io
spec:
group: metrics.k8s.io
groupPriorityMinimum: 100
insecureSkipTLSVerify: true
service:
name: prometheus-adapter
namespace: monitoring
version: v1beta1
versionPriority: 100

View file

@@ -0,0 +1,22 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
app.kubernetes.io/component: metrics-adapter
app.kubernetes.io/name: prometheus-adapter
app.kubernetes.io/version: 0.12.0
rbac.authorization.k8s.io/aggregate-to-admin: "true"
rbac.authorization.k8s.io/aggregate-to-edit: "true"
rbac.authorization.k8s.io/aggregate-to-view: "true"
name: system:aggregated-metrics-reader
namespace: monitoring
rules:
- apiGroups:
- metrics.k8s.io
resources:
- pods
- nodes
verbs:
- get
- list
- watch

View file

@@ -0,0 +1,17 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
app.kubernetes.io/component: metrics-adapter
app.kubernetes.io/name: prometheus-adapter
app.kubernetes.io/version: 0.12.0
name: resource-metrics:system:auth-delegator
namespace: monitoring
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:auth-delegator
subjects:
- kind: ServiceAccount
name: prometheus-adapter
namespace: monitoring

View file

@@ -2,6 +2,9 @@ apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: hpa-controller-custom-metrics
labels:
app.kubernetes.io/component: metrics-adapter
app.kubernetes.io/name: prometheus-adapter
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole

View file

@@ -0,0 +1,17 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
app.kubernetes.io/component: metrics-adapter
app.kubernetes.io/name: prometheus-adapter
app.kubernetes.io/version: 0.12.0
name: prometheus-adapter
namespace: monitoring
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: prometheus-adapter
subjects:
- kind: ServiceAccount
name: prometheus-adapter
namespace: monitoring

View file

@@ -0,0 +1,15 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
app.kubernetes.io/component: metrics-adapter
app.kubernetes.io/name: prometheus-adapter
app.kubernetes.io/version: 0.12.0
name: resource-metrics-server-resources
rules:
- apiGroups:
- metrics.k8s.io
resources:
- '*'
verbs:
- '*'

View file

@@ -0,0 +1,20 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
app.kubernetes.io/component: metrics-adapter
app.kubernetes.io/name: prometheus-adapter
app.kubernetes.io/version: 0.12.0
name: prometheus-adapter
rules:
- apiGroups:
- ""
resources:
- nodes
- namespaces
- pods
- services
verbs:
- get
- list
- watch

View file

@@ -0,0 +1,53 @@
apiVersion: v1
data:
config.yaml: |-
"resourceRules":
"cpu":
"containerLabel": "container"
"containerQuery": |
sum by (<<.GroupBy>>) (
irate (
container_cpu_usage_seconds_total{<<.LabelMatchers>>,container!="",pod!=""}[4m]
)
)
"nodeQuery": |
sum by (<<.GroupBy>>) (
irate(
node_cpu_usage_seconds_total{<<.LabelMatchers>>}[4m]
)
)
"resources":
"overrides":
"namespace":
"resource": "namespace"
"node":
"resource": "node"
"pod":
"resource": "pod"
"memory":
"containerLabel": "container"
"containerQuery": |
sum by (<<.GroupBy>>) (
container_memory_working_set_bytes{<<.LabelMatchers>>,container!="",pod!=""}
)
"nodeQuery": |
sum by (<<.GroupBy>>) (
node_memory_working_set_bytes{<<.LabelMatchers>>}
)
"resources":
"overrides":
"node":
"resource": "node"
"namespace":
"resource": "namespace"
"pod":
"resource": "pod"
"window": "5m"
kind: ConfigMap
metadata:
labels:
app.kubernetes.io/component: metrics-adapter
app.kubernetes.io/name: prometheus-adapter
app.kubernetes.io/version: 0.12.0
name: adapter-config
namespace: monitoring

View file

@@ -1,51 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: custom-metrics-apiserver
name: custom-metrics-apiserver
namespace: custom-metrics
spec:
replicas: 1
selector:
matchLabels:
app: custom-metrics-apiserver
template:
metadata:
labels:
app: custom-metrics-apiserver
name: custom-metrics-apiserver
spec:
serviceAccountName: custom-metrics-apiserver
containers:
- name: custom-metrics-apiserver
image: directxman12/k8s-prometheus-adapter-amd64
args:
- --secure-port=6443
- --tls-cert-file=/var/run/serving-cert/serving.crt
- --tls-private-key-file=/var/run/serving-cert/serving.key
- --logtostderr=true
- --prometheus-url=http://prometheus.prom.svc:9090/
- --metrics-relist-interval=1m
- --v=10
- --config=/etc/adapter/config.yaml
ports:
- containerPort: 6443
volumeMounts:
- mountPath: /var/run/serving-cert
name: volume-serving-cert
readOnly: true
- mountPath: /etc/adapter/
name: config
readOnly: true
- mountPath: /tmp
name: tmp-vol
volumes:
- name: volume-serving-cert
secret:
secretName: cm-adapter-serving-certs
- name: config
configMap:
name: adapter-config
- name: tmp-vol
emptyDir: {}

View file

@@ -1,12 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: custom-metrics-resource-reader
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: custom-metrics-resource-reader
subjects:
- kind: ServiceAccount
name: custom-metrics-apiserver
namespace: custom-metrics

View file

@@ -1,5 +0,0 @@
kind: ServiceAccount
apiVersion: v1
metadata:
name: custom-metrics-apiserver
namespace: custom-metrics

View file

@@ -1,11 +0,0 @@
apiVersion: v1
kind: Service
metadata:
name: custom-metrics-apiserver
namespace: custom-metrics
spec:
ports:
- port: 443
targetPort: 6443
selector:
app: custom-metrics-apiserver

View file

@@ -1,42 +0,0 @@
apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
name: v1beta1.custom.metrics.k8s.io
spec:
service:
name: custom-metrics-apiserver
namespace: custom-metrics
group: custom.metrics.k8s.io
version: v1beta1
insecureSkipTLSVerify: true
groupPriorityMinimum: 100
versionPriority: 100
---
apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
name: v1beta2.custom.metrics.k8s.io
spec:
service:
name: custom-metrics-apiserver
namespace: custom-metrics
group: custom.metrics.k8s.io
version: v1beta2
insecureSkipTLSVerify: true
groupPriorityMinimum: 100
versionPriority: 200
---
apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
name: v1beta1.external.metrics.k8s.io
spec:
service:
name: custom-metrics-apiserver
namespace: custom-metrics
group: external.metrics.k8s.io
version: v1beta1
insecureSkipTLSVerify: true
groupPriorityMinimum: 100
versionPriority: 100
---

View file

@@ -1,10 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: custom-metrics-server-resources
rules:
- apiGroups:
- custom.metrics.k8s.io
- external.metrics.k8s.io
resources: ["*"]
verbs: ["*"]

View file

@@ -1,117 +0,0 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: adapter-config
namespace: custom-metrics
data:
config.yaml: |
rules:
- seriesQuery: '{__name__=~"^container_.*",container_name!="POD",namespace!="",pod_name!=""}'
seriesFilters: []
resources:
overrides:
namespace:
resource: namespace
pod_name:
resource: pod
name:
matches: ^container_(.*)_seconds_total$
as: ""
metricsQuery: sum(rate(<<.Series>>{<<.LabelMatchers>>,container_name!="POD"}[1m])) by (<<.GroupBy>>)
- seriesQuery: '{__name__=~"^container_.*",container_name!="POD",namespace!="",pod_name!=""}'
seriesFilters:
- isNot: ^container_.*_seconds_total$
resources:
overrides:
namespace:
resource: namespace
pod_name:
resource: pod
name:
matches: ^container_(.*)_total$
as: ""
metricsQuery: sum(rate(<<.Series>>{<<.LabelMatchers>>,container_name!="POD"}[1m])) by (<<.GroupBy>>)
- seriesQuery: '{__name__=~"^container_.*",container_name!="POD",namespace!="",pod_name!=""}'
seriesFilters:
- isNot: ^container_.*_total$
resources:
overrides:
namespace:
resource: namespace
pod_name:
resource: pod
name:
matches: ^container_(.*)$
as: ""
metricsQuery: sum(<<.Series>>{<<.LabelMatchers>>,container_name!="POD"}) by (<<.GroupBy>>)
- seriesQuery: '{namespace!="",__name__!~"^container_.*"}'
seriesFilters:
- isNot: .*_total$
resources:
template: <<.Resource>>
name:
matches: ""
as: ""
metricsQuery: sum(<<.Series>>{<<.LabelMatchers>>}) by (<<.GroupBy>>)
- seriesQuery: '{namespace!="",__name__!~"^container_.*"}'
seriesFilters:
- isNot: .*_seconds_total
resources:
template: <<.Resource>>
name:
matches: ^(.*)_total$
as: ""
metricsQuery: sum(rate(<<.Series>>{<<.LabelMatchers>>}[1m])) by (<<.GroupBy>>)
- seriesQuery: '{namespace!="",__name__!~"^container_.*"}'
seriesFilters: []
resources:
template: <<.Resource>>
name:
matches: ^(.*)_seconds_total$
as: ""
metricsQuery: sum(rate(<<.Series>>{<<.LabelMatchers>>}[1m])) by (<<.GroupBy>>)
resourceRules:
cpu:
containerQuery: sum(rate(container_cpu_usage_seconds_total{<<.LabelMatchers>>}[1m])) by (<<.GroupBy>>)
nodeQuery: sum(rate(container_cpu_usage_seconds_total{<<.LabelMatchers>>, id='/'}[1m])) by (<<.GroupBy>>)
resources:
overrides:
instance:
resource: node
namespace:
resource: namespace
pod_name:
resource: pod
containerLabel: container_name
memory:
containerQuery: sum(container_memory_working_set_bytes{<<.LabelMatchers>>}) by (<<.GroupBy>>)
nodeQuery: sum(container_memory_working_set_bytes{<<.LabelMatchers>>,id='/'}) by (<<.GroupBy>>)
resources:
overrides:
instance:
resource: node
namespace:
resource: namespace
pod_name:
resource: pod
containerLabel: container_name
window: 1m
externalRules:
- seriesQuery: '{__name__=~"^.*_queue_(length|size)$",namespace!=""}'
resources:
overrides:
namespace:
resource: namespace
name:
matches: ^.*_queue_(length|size)$
as: "$0"
metricsQuery: max(<<.Series>>{<<.LabelMatchers>>})
- seriesQuery: '{__name__=~"^.*_queue$",namespace!=""}'
resources:
overrides:
namespace:
resource: namespace
name:
matches: ^.*_queue$
as: "$0"
metricsQuery: max(<<.Series>>{<<.LabelMatchers>>})

View file

@@ -1,15 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: custom-metrics-resource-reader
rules:
- apiGroups:
- ""
resources:
- pods
- nodes
- nodes/stats
verbs:
- get
- list
- watch

View file

@@ -0,0 +1,89 @@
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app.kubernetes.io/component: metrics-adapter
app.kubernetes.io/name: prometheus-adapter
app.kubernetes.io/version: 0.12.0
name: prometheus-adapter
namespace: monitoring
spec:
replicas: 2
selector:
matchLabels:
app.kubernetes.io/component: metrics-adapter
app.kubernetes.io/name: prometheus-adapter
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
template:
metadata:
labels:
app.kubernetes.io/component: metrics-adapter
app.kubernetes.io/name: prometheus-adapter
app.kubernetes.io/version: 0.12.0
spec:
automountServiceAccountToken: true
containers:
- args:
- --cert-dir=/var/run/serving-cert
- --config=/etc/adapter/config.yaml
- --metrics-relist-interval=1m
- --prometheus-url=https://prometheus.monitoring.svc:9090/
- --secure-port=6443
- --tls-cipher-suites=TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA
image: registry.k8s.io/prometheus-adapter/prometheus-adapter:v0.12.0
livenessProbe:
failureThreshold: 5
httpGet:
path: /livez
port: https
scheme: HTTPS
initialDelaySeconds: 30
periodSeconds: 5
name: prometheus-adapter
ports:
- containerPort: 6443
name: https
readinessProbe:
failureThreshold: 5
httpGet:
path: /readyz
port: https
scheme: HTTPS
initialDelaySeconds: 30
periodSeconds: 5
resources:
requests:
cpu: 102m
memory: 180Mi
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
readOnlyRootFilesystem: true
terminationMessagePolicy: FallbackToLogsOnError
volumeMounts:
- mountPath: /tmp
name: tmpfs
readOnly: false
- mountPath: /var/run/serving-cert
name: volume-serving-cert
readOnly: false
- mountPath: /etc/adapter
name: config
readOnly: false
nodeSelector:
kubernetes.io/os: linux
securityContext: {}
serviceAccountName: prometheus-adapter
volumes:
- emptyDir: {}
name: tmpfs
- emptyDir: {}
name: volume-serving-cert
- configMap:
name: adapter-config
name: config

View file

@@ -0,0 +1,21 @@
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
labels:
app.kubernetes.io/component: metrics-adapter
app.kubernetes.io/name: prometheus-adapter
app.kubernetes.io/version: 0.12.0
name: prometheus-adapter
namespace: monitoring
spec:
egress:
- {}
ingress:
- {}
podSelector:
matchLabels:
app.kubernetes.io/component: metrics-adapter
app.kubernetes.io/name: prometheus-adapter
policyTypes:
- Egress
- Ingress

View file

@@ -0,0 +1,15 @@
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
labels:
app.kubernetes.io/component: metrics-adapter
app.kubernetes.io/name: prometheus-adapter
app.kubernetes.io/version: 0.12.0
name: prometheus-adapter
namespace: monitoring
spec:
minAvailable: 1
selector:
matchLabels:
app.kubernetes.io/component: metrics-adapter
app.kubernetes.io/name: prometheus-adapter

View file

@@ -1,7 +1,11 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: custom-metrics-auth-reader
labels:
app.kubernetes.io/component: metrics-adapter
app.kubernetes.io/name: prometheus-adapter
app.kubernetes.io/version: 0.12.0
name: resource-metrics-auth-reader
namespace: kube-system
roleRef:
apiGroup: rbac.authorization.k8s.io
@@ -9,5 +13,5 @@ roleRef:
name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
name: custom-metrics-apiserver
namespace: custom-metrics
name: prometheus-adapter
namespace: monitoring

View file

@@ -0,0 +1,10 @@
apiVersion: v1
automountServiceAccountToken: false
kind: ServiceAccount
metadata:
labels:
app.kubernetes.io/component: metrics-adapter
app.kubernetes.io/name: prometheus-adapter
app.kubernetes.io/version: 0.12.0
name: prometheus-adapter
namespace: monitoring

View file

@@ -0,0 +1,17 @@
apiVersion: v1
kind: Service
metadata:
labels:
app.kubernetes.io/component: metrics-adapter
app.kubernetes.io/name: prometheus-adapter
app.kubernetes.io/version: 0.12.0
name: prometheus-adapter
namespace: monitoring
spec:
ports:
- name: https
port: 443
targetPort: 6443
selector:
app.kubernetes.io/component: metrics-adapter
app.kubernetes.io/name: prometheus-adapter

View file

@@ -31,13 +31,13 @@ might look like:
```yaml
rules:
# this rule matches cumulative cAdvisor metrics measured in seconds
- seriesQuery: '{__name__=~"^container_.*",container_name!="POD",namespace!="",pod_name!=""}'
- seriesQuery: '{__name__=~"^container_.*",container!="POD",namespace!="",pod!=""}'
resources:
# skip specifying generic resource<->label mappings, and just
# attach only pod and namespace resources by mapping label names to group-resources
overrides:
namespace: {resource: "namespace"},
pod_name: {resource: "pod"},
namespace: {resource: "namespace"}
pod: {resource: "pod"}
# specify that the `container_` and `_seconds_total` suffixes should be removed.
# this also introduces an implicit filter on metric family names
name:
@@ -48,7 +48,7 @@ rules:
# This is a Go template where the `.Series` and `.LabelMatchers` string values
# are available, and the delimiters are `<<` and `>>` to avoid conflicts with
# the prometheus query language
metricsQuery: "sum(rate(<<.Series>>{<<.LabelMatchers>>,container_name!="POD"}[2m])) by (<<.GroupBy>>)"
metricsQuery: "sum(rate(<<.Series>>{<<.LabelMatchers>>,container!="POD"}[2m])) by (<<.GroupBy>>)"
```
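The `metricsQuery` template above can be sketched with Go's `text/template` and custom `<<`/`>>` delimiters. This is an illustration, not the adapter's actual implementation, and the series, matcher, and group-by values below are made up:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// expandQuery expands an adapter-style metricsQuery template. The
// << >> delimiters keep the template text from clashing with PromQL's
// braces and Go's default {{ }} delimiters.
func expandQuery(metricsQuery, series, labelMatchers, groupBy string) string {
	tmpl := template.Must(template.New("q").Delims("<<", ">>").Parse(metricsQuery))
	var buf bytes.Buffer
	tmpl.Execute(&buf, map[string]string{
		"Series":        series,
		"LabelMatchers": labelMatchers,
		"GroupBy":       groupBy,
	})
	return buf.String()
}

func main() {
	// Hypothetical inputs; the real adapter derives these from the rule
	// and from the incoming custom-metrics API request.
	fmt.Println(expandQuery(
		`sum(rate(<<.Series>>{<<.LabelMatchers>>,container!="POD"}[2m])) by (<<.GroupBy>>)`,
		"container_network_receive_bytes_total",
		`namespace="default",pod="sample-app-abc123"`,
		"pod",
	))
}
```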
Discovery
@@ -83,7 +83,7 @@ For example:
```yaml
# match all cAdvisor metrics that aren't measured in seconds
seriesQuery: '{__name__=~"^container_.*_total",container_name!="POD",namespace!="",pod_name!=""}'
seriesQuery: '{__name__=~"^container_.*_total",container!="POD",namespace!="",pod!=""}'
seriesFilters:
- isNot: "^container_.*_seconds_total"
```
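The `isNot` entry is an RE2 regular expression matched against series names; series that match are dropped. A small sketch of the effect (the helper name is illustrative, not part of the adapter's API):

```go
package main

import (
	"fmt"
	"regexp"
)

// keepSeries reports whether a series survives an `isNot` filter:
// the filter is a regular expression, and matching names are dropped.
func keepSeries(isNot, name string) bool {
	return !regexp.MustCompile(isNot).MatchString(name)
}

func main() {
	filter := "^container_.*_seconds_total"
	for _, name := range []string{
		"container_cpu_usage_seconds_total",     // dropped by the filter
		"container_network_receive_bytes_total", // kept
	} {
		fmt.Printf("%s kept=%v\n", name, keepSeries(filter, name))
	}
}
```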
@@ -211,5 +211,5 @@ For example:
```yaml
# convert cumulative cAdvisor metrics into rates calculated over 2 minutes
metricsQuery: "sum(rate(<<.Series>>{<<.LabelMatchers>>,container_name!="POD"}[2m])) by (<<.GroupBy>>)"
metricsQuery: "sum(rate(<<.Series>>{<<.LabelMatchers>>,container!="POD"}[2m])) by (<<.GroupBy>>)"
```

84
docs/externalmetrics.md Normal file
View file

@ -0,0 +1,84 @@
External Metrics
===========
It's possible to configure [autoscaling on metrics not related to Kubernetes objects](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#autoscaling-on-metrics-not-related-to-kubernetes-objects) through the External Metrics API. Using external metrics with the adapter requires you to configure dedicated `external` rules in the configuration.
The configuration for `external` metrics rules is almost identical to the normal `rules`:
```yaml
externalRules:
- seriesQuery: '{__name__="queue_consumer_lag",name!=""}'
metricsQuery: sum(<<.Series>>{<<.LabelMatchers>>}) by (name)
resources:
overrides: { namespace: {resource: "namespace"} }
```
Namespacing
-----------
All Kubernetes HorizontalPodAutoscaler (HPA) resources are namespaced. When you create an HPA that
references an external metric, the adapter automatically adds a `namespace` label to the `seriesQuery` you have configured.
This is done because the External Metrics API specification *requires* a namespace component in the URL:
```shell
kubectl get --raw "/apis/external.metrics.k8s.io/v1beta1/namespaces/default/queue_consumer_lag"
```
Cross-Namespace or No Namespace Queries
---------------------------------------
A semi-common scenario is a workload in one namespace that needs to scale based on a metric from a different namespace. This is normally not
possible with `external` rules because the `namespace` label is set to match that of the source workload.
However, you can explicitly disable the automatic addition of the HPA's namespace to the query, and instead either omit the namespace entirely or target a different one.
This is done by setting `namespaced: false` in the `resources` section of the `external` rule:
```yaml
# rules: ...
externalRules:
- seriesQuery: '{__name__="queue_depth",name!=""}'
metricsQuery: sum(<<.Series>>{<<.LabelMatchers>>}) by (name)
resources:
namespaced: false
```
Given the `external` rules defined above, any `External` metric query for `queue_depth` will simply ignore the source namespace of the HPA. This lets you explicitly omit the namespace from an external query, or set it to one that differs from the HPA's.
```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
name: external-queue-scaler
# the HPA and scaleTargetRef must exist in a namespace
namespace: default
annotations:
# The "External" metric below targets a metricName that has namespaced=false
# and this allows the metric to explicitly query a different
# namespace than that of the HPA and scaleTargetRef
autoscaling.alpha.kubernetes.io/metrics: |
[
{
"type": "External",
"external": {
"metricName": "queue_depth",
"metricSelector": {
"matchLabels": {
"namespace": "queue",
"name": "my-sample-queue"
}
},
"targetAverageValue": "50"
}
}
]
spec:
maxReplicas: 5
minReplicas: 1
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: my-app
```
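With `namespaced: false`, the `matchLabels` from the HPA's metric selector become the only label matchers in the resulting query, so the `namespace` label comes entirely from the selector. The helper below is hypothetical (not the adapter's real code) and only illustrates that translation:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// externalSeriesSelector renders an HPA metricSelector's matchLabels
// as a PromQL series selector. With `namespaced: false` no namespace
// matcher is injected automatically, so the selector may name any
// namespace (or none) itself.
func externalSeriesSelector(series string, matchLabels map[string]string) string {
	keys := make([]string, 0, len(matchLabels))
	for k := range matchLabels {
		keys = append(keys, k)
	}
	sort.Strings(keys) // deterministic ordering for readability
	matchers := make([]string, 0, len(keys))
	for _, k := range keys {
		matchers = append(matchers, fmt.Sprintf("%s=%q", k, matchLabels[k]))
	}
	return series + "{" + strings.Join(matchers, ",") + "}"
}

func main() {
	// The matchLabels from the HPA manifest above.
	fmt.Println(externalSeriesSelector("queue_depth", map[string]string{
		"namespace": "queue",
		"name":      "my-sample-queue",
	}))
}
```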

View file

@@ -10,13 +10,13 @@ rules:
# can be found in pkg/config/default.go
# this rule matches cumulative cAdvisor metrics measured in seconds
- seriesQuery: '{__name__=~"^container_.*",container_name!="POD",namespace!="",pod_name!=""}'
- seriesQuery: '{__name__=~"^container_.*",container!="POD",namespace!="",pod!=""}'
resources:
# skip specifying generic resource<->label mappings, and just
# attach only pod and namespace resources by mapping label names to group-resources
overrides:
namespace: {resource: "namespace"},
pod_name: {resource: "pod"},
namespace: {resource: "namespace"}
pod: {resource: "pod"}
# specify that the `container_` and `_seconds_total` suffixes should be removed.
# this also introduces an implicit filter on metric family names
name:
@@ -27,19 +27,19 @@ rules:
# This is a Go template where the `.Series` and `.LabelMatchers` string values
# are available, and the delimiters are `<<` and `>>` to avoid conflicts with
# the prometheus query language
metricsQuery: "sum(rate(<<.Series>>{<<.LabelMatchers>>,container_name!="POD"}[2m])) by (<<.GroupBy>>)"
metricsQuery: "sum(rate(<<.Series>>{<<.LabelMatchers>>,container!="POD"}[2m])) by (<<.GroupBy>>)"
# this rule matches cumulative cAdvisor metrics not measured in seconds
- seriesQuery: '{__name__=~"^container_.*_total",container_name!="POD",namespace!="",pod_name!=""}'
- seriesQuery: '{__name__=~"^container_.*_total",container!="POD",namespace!="",pod!=""}'
resources:
overrides:
namespace: {resource: "namespace"},
pod_name: {resource: "pod"},
namespace: {resource: "namespace"}
pod: {resource: "pod"}
seriesFilters:
# since this is a superset of the query above, we introduce an additional filter here
- isNot: "^container_.*_seconds_total$"
name: {matches: "^container_(.*)_total$"}
metricsQuery: "sum(rate(<<.Series>>{<<.LabelMatchers>>,container_name!="POD"}[2m])) by (<<.GroupBy>>)"
metricsQuery: "sum(rate(<<.Series>>{<<.LabelMatchers>>,container!="POD"}[2m])) by (<<.GroupBy>>)"
# this rule matches cumulative non-cAdvisor metrics
- seriesQuery: '{namespace!="",__name__!~"^container_.*"}'
@@ -52,7 +52,7 @@ rules:
# Group will be converted to a form acceptable for use as a label automatically.
template: "<<.Resource>>"
# if we wanted to, we could also specify overrides here
metricsQuery: "sum(rate(<<.Series>>{<<.LabelMatchers>>,container_name!="POD"}[2m])) by (<<.GroupBy>>)"
metricsQuery: "sum(rate(<<.Series>>{<<.LabelMatchers>>,container!="POD"}[2m])) by (<<.GroupBy>>)"
# this rule matches only a single metric, explicitly naming it something else
# Its series query *must* return only a single metric family
@@ -63,7 +63,21 @@ rules:
overrides:
# this should still resolve in our cluster
brand: {group: "cheese.io", resource: "brand"}
metricQuery: 'count(cheddar{sharp="true"})'
metricsQuery: 'count(cheddar{sharp="true"})'
# external rules are not tied to a Kubernetes resource and can reference any metric
# https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#autoscaling-on-metrics-not-related-to-kubernetes-objects
externalRules:
- seriesQuery: '{__name__="queue_consumer_lag",name!=""}'
metricsQuery: sum(<<.Series>>{<<.LabelMatchers>>}) by (name)
- seriesQuery: '{__name__="queue_depth",topic!=""}'
metricsQuery: sum(<<.Series>>{<<.LabelMatchers>>}) by (name)
# Kubernetes metric queries include a namespace in the query by default
# but you can explicitly disable namespaces if needed with "namespaced: false"
# this is useful if you have an HPA with an external metric in namespace A
# but want to query for metrics from namespace B
resources:
namespaced: false
# TODO: should we be able to map to a constant instance of a resource
# (e.g. `resources: {constant: [{resource: "namespace", name: "kube-system"}]}`)?

View file

@@ -34,20 +34,24 @@ significantly different.
In order to follow this walkthrough, you'll need container images for
Prometheus and the custom metrics adapter.
The [Prometheus Operator](https://coreos.com/operators/prometheus/docs/latest/),
The [Prometheus Operator](https://github.com/prometheus-operator/prometheus-operator),
makes it easy to get up and running with Prometheus. This walkthrough
will assume you're planning on doing that -- if you've deployed it by hand
instead, you'll need to make a few adjustments to the way you expose
metrics to Prometheus.
The adapter has different images for each arch, which can be found at
`directxman12/k8s-prometheus-adapter-${ARCH}`. For instance, if you're on
an x86_64 machine, use the `directxman12/k8s-prometheus-adapter-amd64`
image.
`gcr.io/k8s-staging-prometheus-adapter/prometheus-adapter-${ARCH}`. For
instance, if you're on an x86_64 machine, use
`gcr.io/k8s-staging-prometheus-adapter/prometheus-adapter-amd64` image.
If you're feeling adventurous, you can build the latest version of the
custom metrics adapter by running `make docker-build` or `make
build-local-image`.
There is also an official multi arch image available at
`registry.k8s.io/prometheus-adapter/prometheus-adapter:${VERSION}`.
If you're feeling adventurous, you can build the latest version of
prometheus-adapter by running `make container` or get the latest image from the
staging registry `gcr.io/k8s-staging-prometheus-adapter/prometheus-adapter`.
Special thanks to [@luxas](https://github.com/luxas) for providing the
demo application for this walkthrough.
@@ -94,9 +98,33 @@ spec:
</details>
<details>
<summary>sample-app.service.yaml</summary>
```yaml
apiVersion: v1
kind: Service
metadata:
labels:
app: sample-app
name: sample-app
spec:
ports:
- name: http
port: 80
protocol: TCP
targetPort: 8080
selector:
app: sample-app
type: ClusterIP
```
</details>
```shell
$ kubectl create -f sample-app.deploy.yaml
$ kubectl create service clusterip sample-app --tcp=80:8080
$ kubectl create -f sample-app.service.yaml
```
Now, check your app, which exposes metrics and counts the number of
@@ -114,11 +142,11 @@ a HorizontalPodAutoscaler like this to accomplish the autoscaling:
<details>
<summary>sample-app-hpa.yaml</summary>
<summary>sample-app.hpa.yaml</summary>
```yaml
kind: HorizontalPodAutoscaler
apiVersion: autoscaling/v2beta1
apiVersion: autoscaling/v2
metadata:
name: sample-app
spec:
@@ -137,10 +165,13 @@ spec:
- type: Pods
pods:
# use the metric that you used above: pods/http_requests
metricName: http_requests
metric:
name: http_requests
# target 500 milli-requests per second,
# which is 1 request every two seconds
targetAverageValue: 500m
target:
type: Value
averageValue: 500m
```
</details>
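The `averageValue: 500m` target means 0.5 requests per second per pod. The scaling arithmetic behind an average-value target can be sketched as follows (a simplification of the HPA controller's real algorithm, which also applies a tolerance band, readiness checks, and stabilization windows):

```go
package main

import (
	"fmt"
	"math"
)

// desiredReplicas sketches simplified HPA math for an AverageValue
// target: divide the metric total across pods by the per-pod target
// and round up.
func desiredReplicas(metricTotal, targetPerPod float64) int {
	return int(math.Ceil(metricTotal / targetPerPod))
}

func main() {
	// With averageValue: 500m (0.5 req/s per pod), a total of 2 req/s
	// across all pods asks for ceil(2 / 0.5) = 4 replicas.
	fmt.Println(desiredReplicas(2.0, 0.5))
}
```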
@@ -148,7 +179,7 @@ spec:
If you try creating that now (and take a look at your controller-manager
logs), you'll see that the HorizontalPodAutoscaler controller is
attempting to fetch metrics from
`/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/*/http_requests?selector=app%3Dsample-app`,
`/apis/custom.metrics.k8s.io/v1beta2/namespaces/default/pods/*/http_requests?selector=app%3Dsample-app`,
but right now, nothing's serving that API.
Before you can autoscale your application, you'll need to make sure that
@@ -165,15 +196,15 @@ Prometheus adapter to serve metrics out of Prometheus.
### Launching Prometheus
First, you'll need to deploy the Prometheus Operator. Check out the
[getting started
guide](https://coreos.com/operators/prometheus/docs/latest/user-guides/getting-started.html)
[quick start
guide](https://github.com/prometheus-operator/prometheus-operator#quickstart)
for the Operator to deploy a copy of Prometheus.
This walkthrough assumes that Prometheus is deployed in the `prom`
This walkthrough assumes that Prometheus is deployed in the `monitoring`
namespace. Most of the sample commands and files are namespace-agnostic,
but there are a few commands or pieces of configuration that rely on that
namespace. If you're using a different namespace, simply substitute that
in for `prom` when it appears.
in for `monitoring` when it appears.
### Monitoring Your Application
@@ -185,7 +216,7 @@ service:
<details>
<summary>service-monitor.yaml</summary>
<summary>sample-app.monitor.yaml</summary>
```yaml
kind: ServiceMonitor
@@ -205,12 +236,12 @@ spec:
</details>
```shell
$ kubectl create -f service-monitor.yaml
$ kubectl create -f sample-app.monitor.yaml
```
Now, you should see your metrics appear in your Prometheus instance. Look
Now, you should see your metrics (`http_requests_total`) appear in your Prometheus instance. Look
them up via the dashboard, and make sure they have the `namespace` and
`pod` labels.
`pod` labels. If not, check that the labels on the ServiceMonitor match the ones selected by the Prometheus custom resource.
### Launching the Adapter
@@ -228,7 +259,46 @@ the steps to deploy the adapter. Note that if you're deploying on
a non-x86_64 (amd64) platform, you'll need to change the `image` field in
the Deployment to be the appropriate image for your platform.
The default adapter configuration should work for this walkthrough and
However, an update to the adapter config is necessary in order to
expose custom metrics.
<details>
<summary>prom-adapter.config.yaml</summary>
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: adapter-config
  namespace: monitoring
data:
  config.yaml: |-
    "rules":
    - "seriesQuery": |
        {namespace!="",__name__!~"^container_.*"}
      "resources":
        "template": "<<.Resource>>"
      "name":
        "matches": "^(.*)_total"
        "as": ""
      "metricsQuery": |
        sum by (<<.GroupBy>>) (
          irate (
            <<.Series>>{<<.LabelMatchers>>}[1m]
          )
        )
```
</details>
```shell
$ kubectl apply -f prom-adapter.config.yaml
# Restart prom-adapter pods
$ kubectl rollout restart deployment prometheus-adapter -n monitoring
```
This adapter configuration should work for this walkthrough together with
a standard Prometheus Operator configuration, but if you've got custom
relabelling rules, or your labels above weren't exactly `namespace` and
`pod`, you may need to edit the configuration in the ConfigMap. The
@@ -237,11 +307,36 @@ overview of how configuration works.
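To make the rename in the `name` rule concrete: `matches: "^(.*)_total"` with an empty `as` falls back to the first capture group (per the adapter's config conventions), so `http_requests_total` becomes `http_requests`. A rough Python sketch of that renaming, not the adapter's actual Go implementation:

```python
import re

def rename_series(name, matches=r"^(.*)_total", as_=""):
    """Mimic the adapter's `name` rule: when `as` is empty, fall back to
    the first capture group if the pattern has one, else the whole match."""
    pattern = re.compile(matches)
    replacement = as_ or (r"\1" if pattern.groups else r"\g<0>")
    return pattern.sub(replacement, name)

print(rename_series("http_requests_total"))  # http_requests
```

Series that don't match the pattern (e.g. a gauge without the `_total` suffix) pass through unchanged.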
### The Registered API
We also need to register the custom metrics API with the API aggregator (part of
the main Kubernetes API server). To do so, create an APIService resource:
<details>
<summary>api-service.yaml</summary>
```yaml
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1beta2.custom.metrics.k8s.io
spec:
  group: custom.metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: prometheus-adapter
    namespace: monitoring
  version: v1beta2
  versionPriority: 100
```
</details>
```shell
$ kubectl create -f api-service.yaml
```
The API is registered as `custom.metrics.k8s.io/v1beta2`, and you can find
more information about aggregation at [Concepts:
Aggregation](https://github.com/kubernetes-incubator/apiserver-builder/blob/master/docs/concepts/aggregation.md).
@@ -252,7 +347,7 @@ With that all set, your custom metrics API should show up in discovery.
Try fetching the discovery information for it:
```shell
$ kubectl get --raw /apis/custom.metrics.k8s.io/v1beta2
```
Since you've set up Prometheus to collect your app's metrics, you should
@@ -266,12 +361,12 @@ sends a raw GET request to the Kubernetes API server, automatically
injecting auth information:
```shell
$ kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta2/namespaces/default/pods/*/http_requests?selector=app%3Dsample-app"
```
Because of the adapter's configuration, the cumulative metric
`http_requests_total` has been converted into a rate metric,
`pods/http_requests`, which measures requests per second over a 1 minute
interval. The value should currently be close to zero, since there's no
traffic to your app, except for the regular metrics collection from
Prometheus.
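As a sanity check on what `irate(...[1m])` is doing in the adapter's `metricsQuery`: it takes the last two samples of the cumulative counter within the window and computes a per-second rate. A simplified sketch of that calculation (ignoring counter resets), not the PromQL engine itself:

```python
def irate(samples):
    """Approximate PromQL irate(): per-second rate from the last two
    (timestamp, value) samples of a cumulative counter.
    Counter resets are ignored for brevity."""
    (t1, v1), (t2, v2) = samples[-2:]
    return (v2 - v1) / (t2 - t1)

# Two scrapes 30s apart; the counter grew by 15 requests -> 0.5 req/s.
print(irate([(0.0, 100.0), (30.0, 115.0)]))  # 0.5
```

With no traffic, consecutive samples differ only by Prometheus's own scrapes, so the reported rate stays near zero.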
@@ -322,7 +417,7 @@ and make decisions based on it.
If you didn't create the HorizontalPodAutoscaler above, create it now:
```shell
$ kubectl create -f sample-app.hpa.yaml
```
Wait a little bit, and then examine the HPA:
@@ -368,4 +463,4 @@ setting different labels or using the `Object` metric source type.
For more information on how metrics are exposed by the Prometheus adapter,
see [config documentation](/docs/config.md), and check the [default
configuration](/deploy/manifests/config-map.yaml).

go.mod

@@ -1,30 +1,118 @@
module github.com/directxman12/k8s-prometheus-adapter
module sigs.k8s.io/prometheus-adapter
go 1.13
go 1.22.1
toolchain go1.22.2
require (
github.com/NYTimes/gziphandler v1.0.1 // indirect
github.com/go-openapi/spec v0.19.8
github.com/imdario/mergo v0.3.8 // indirect
github.com/kubernetes-sigs/custom-metrics-apiserver v0.0.0-20201110135240-8c12d6d92362
github.com/onsi/ginkgo v1.11.0
github.com/onsi/gomega v1.7.0
github.com/prometheus/client_golang v1.7.1
github.com/prometheus/common v0.10.0
github.com/spf13/cobra v1.0.0
github.com/stretchr/testify v1.6.1
gopkg.in/yaml.v2 v2.2.8
k8s.io/api v0.19.3
k8s.io/apimachinery v0.19.3
k8s.io/apiserver v0.19.3
k8s.io/client-go v0.19.3
k8s.io/component-base v0.19.3
k8s.io/klog/v2 v2.3.0
k8s.io/kube-openapi v0.0.0-20200805222855-6aeccd4b50c6
k8s.io/metrics v0.19.3
k8s.io/sample-apiserver v0.18.5
sigs.k8s.io/metrics-server v0.3.7-0.20201028092756-2a1d1385123b
github.com/onsi/ginkgo v1.16.5
github.com/onsi/gomega v1.33.1
github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring v0.73.2
github.com/prometheus-operator/prometheus-operator/pkg/client v0.73.2
github.com/prometheus/client_golang v1.18.0
github.com/prometheus/common v0.46.0
github.com/spf13/cobra v1.8.0
github.com/stretchr/testify v1.9.0
gopkg.in/yaml.v2 v2.4.0
k8s.io/api v0.30.0
k8s.io/apimachinery v0.30.0
k8s.io/apiserver v0.30.0
k8s.io/client-go v0.30.0
k8s.io/component-base v0.30.0
k8s.io/klog/v2 v2.120.1
k8s.io/kube-openapi v0.0.0-20240430033511-f0e62f92d13f
k8s.io/metrics v0.30.0
sigs.k8s.io/custom-metrics-apiserver v1.30.0
sigs.k8s.io/metrics-server v0.7.1
)
// forced by the inclusion of sigs.k8s.io/metrics-server's use of this in their go.mod
replace k8s.io/kubernetes/pkg/kubelet/apis/stats/v1alpha1 => ./localvendor/k8s.io/kubernetes/pkg/kubelet/apis/stats/v1alpha1
require (
github.com/NYTimes/gziphandler v1.1.1 // indirect
github.com/antlr/antlr4/runtime/Go/antlr/v4 v4.0.0-20230305170008-8188dc5388df // indirect
github.com/asaskevich/govalidator v0.0.0-20230301143203-a9d515a09cc2 // indirect
github.com/beorn7/perks v1.0.1 // indirect
github.com/blang/semver/v4 v4.0.0 // indirect
github.com/cenkalti/backoff/v4 v4.2.1 // indirect
github.com/cespare/xxhash/v2 v2.2.0 // indirect
github.com/coreos/go-semver v0.3.1 // indirect
github.com/coreos/go-systemd/v22 v22.5.0 // indirect
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect
github.com/emicklei/go-restful/v3 v3.12.0 // indirect
github.com/evanphx/json-patch v5.9.0+incompatible // indirect
github.com/felixge/httpsnoop v1.0.4 // indirect
github.com/fsnotify/fsnotify v1.7.0 // indirect
github.com/go-logr/logr v1.4.1 // indirect
github.com/go-logr/stdr v1.2.2 // indirect
github.com/go-openapi/jsonpointer v0.21.0 // indirect
github.com/go-openapi/jsonreference v0.21.0 // indirect
github.com/go-openapi/swag v0.23.0 // indirect
github.com/gogo/protobuf v1.3.2 // indirect
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect
github.com/golang/protobuf v1.5.4 // indirect
github.com/google/cel-go v0.17.8 // indirect
github.com/google/gnostic-models v0.6.8 // indirect
github.com/google/go-cmp v0.6.0 // indirect
github.com/google/gofuzz v1.2.0 // indirect
github.com/google/uuid v1.6.0 // indirect
github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0 // indirect
github.com/grpc-ecosystem/grpc-gateway/v2 v2.18.1 // indirect
github.com/imdario/mergo v0.3.16 // indirect
github.com/inconshreveable/mousetrap v1.1.0 // indirect
github.com/josharian/intern v1.0.0 // indirect
github.com/json-iterator/go v1.1.12 // indirect
github.com/mailru/easyjson v0.7.7 // indirect
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
github.com/modern-go/reflect2 v1.0.2 // indirect
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
github.com/nxadm/tail v1.4.8 // indirect
github.com/pkg/errors v0.9.1 // indirect
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect
github.com/prometheus/client_model v0.5.0 // indirect
github.com/prometheus/procfs v0.12.0 // indirect
github.com/spf13/pflag v1.0.5 // indirect
github.com/stoewer/go-strcase v1.3.0 // indirect
go.etcd.io/etcd/api/v3 v3.5.11 // indirect
go.etcd.io/etcd/client/pkg/v3 v3.5.11 // indirect
go.etcd.io/etcd/client/v3 v3.5.11 // indirect
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.46.1 // indirect
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.46.1 // indirect
go.opentelemetry.io/otel v1.21.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.21.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.21.0 // indirect
go.opentelemetry.io/otel/metric v1.21.0 // indirect
go.opentelemetry.io/otel/sdk v1.21.0 // indirect
go.opentelemetry.io/otel/trace v1.21.0 // indirect
go.opentelemetry.io/proto/otlp v1.0.0 // indirect
go.uber.org/multierr v1.11.0 // indirect
go.uber.org/zap v1.26.0 // indirect
golang.org/x/crypto v0.31.0 // indirect
golang.org/x/exp v0.0.0-20231226003508-02704c960a9b // indirect
golang.org/x/mod v0.17.0 // indirect
golang.org/x/net v0.25.0 // indirect
golang.org/x/oauth2 v0.18.0 // indirect
golang.org/x/sync v0.10.0 // indirect
golang.org/x/sys v0.28.0 // indirect
golang.org/x/term v0.27.0 // indirect
golang.org/x/text v0.21.0 // indirect
golang.org/x/time v0.5.0 // indirect
golang.org/x/tools v0.21.1-0.20240508182429-e35e4ccd0d2d // indirect
google.golang.org/appengine v1.6.8 // indirect
google.golang.org/genproto v0.0.0-20231212172506-995d672761c0 // indirect
google.golang.org/genproto/googleapis/api v0.0.0-20231212172506-995d672761c0 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20231212172506-995d672761c0 // indirect
google.golang.org/grpc v1.60.1 // indirect
google.golang.org/protobuf v1.33.0 // indirect
gopkg.in/inf.v0 v0.9.1 // indirect
gopkg.in/natefinch/lumberjack.v2 v2.2.1 // indirect
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
k8s.io/apiextensions-apiserver v0.29.3 // indirect
k8s.io/gengo/v2 v2.0.0-20240228010128-51d4e06bde70 // indirect
k8s.io/kms v0.30.0 // indirect
k8s.io/utils v0.0.0-20240423183400-0849a56e8f22 // indirect
sigs.k8s.io/apiserver-network-proxy/konnectivity-client v0.29.0 // indirect
sigs.k8s.io/controller-runtime v0.17.2 // indirect
sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd // indirect
sigs.k8s.io/structured-merge-diff/v4 v4.4.1 // indirect
sigs.k8s.io/yaml v1.4.0 // indirect
)

go.sum

File diff suppressed because it is too large

@@ -1,41 +0,0 @@
#!/bin/bash
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -o errexit
set -o nounset
set -o pipefail
verify=0
if [[ ${1:-} = "--verify" || ${1:-} = "-v" ]]; then
verify=1
fi
find_files() {
find . -not \( \( \
-wholename './_output' \
-o -wholename './vendor' \
\) -prune \) -name '*.go'
}
if [[ $verify -eq 1 ]]; then
diff=$(find_files | xargs gofmt -s -d 2>&1)
if [[ -n "${diff}" ]]; then
echo "gofmt -s -w $(echo "${diff}" | awk '/^diff / { print $2 }' | tr '\n' ' ')"
exit 1
fi
else
find_files | xargs gofmt -s -w
fi


@@ -1,10 +1,10 @@
// Copyright 2016 The etcd Authors
// Copyright 2021 The Kubernetes Authors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
@@ -12,15 +12,13 @@
// See the License for the specific language governing permissions and
// limitations under the License.
package fileutil
//go:build tools
// +build tools
// Package tools tracks dependencies for tools that are used in the build process.
// See https://github.com/golang/go/wiki/Modules
package tools
import (
"errors"
"os"
_ "k8s.io/kube-openapi/cmd/openapi-gen"
)
var (
ErrLocked = errors.New("fileutil: file already locked")
)
type LockedFile struct{ *os.File }


@@ -1 +0,0 @@
module k8s.io/kubernetes/pkg/kubelet/apis/stats/v1alpha1


@@ -1,332 +0,0 @@
/*
Copyright 2015 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1alpha1
import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
// Summary is a top-level container for holding NodeStats and PodStats.
type Summary struct {
// Overall node stats.
Node NodeStats `json:"node"`
// Per-pod stats.
Pods []PodStats `json:"pods"`
}
// NodeStats holds node-level unprocessed sample stats.
type NodeStats struct {
// Reference to the measured Node.
NodeName string `json:"nodeName"`
// Stats of system daemons tracked as raw containers.
// The system containers are named according to the SystemContainer* constants.
// +optional
// +patchMergeKey=name
// +patchStrategy=merge
SystemContainers []ContainerStats `json:"systemContainers,omitempty" patchStrategy:"merge" patchMergeKey:"name"`
// The time at which data collection for the node-scoped (i.e. aggregate) stats was (re)started.
StartTime metav1.Time `json:"startTime"`
// Stats pertaining to CPU resources.
// +optional
CPU *CPUStats `json:"cpu,omitempty"`
// Stats pertaining to memory (RAM) resources.
// +optional
Memory *MemoryStats `json:"memory,omitempty"`
// Stats pertaining to network resources.
// +optional
Network *NetworkStats `json:"network,omitempty"`
// Stats pertaining to total usage of filesystem resources on the rootfs used by node k8s components.
// NodeFs.Used is the total bytes used on the filesystem.
// +optional
Fs *FsStats `json:"fs,omitempty"`
// Stats about the underlying container runtime.
// +optional
Runtime *RuntimeStats `json:"runtime,omitempty"`
// Stats about the rlimit of system.
// +optional
Rlimit *RlimitStats `json:"rlimit,omitempty"`
}
// RlimitStats are stats rlimit of OS.
type RlimitStats struct {
Time metav1.Time `json:"time"`
// The max PID of OS.
MaxPID *int64 `json:"maxpid,omitempty"`
// The number of running process in the OS.
NumOfRunningProcesses *int64 `json:"curproc,omitempty"`
}
// RuntimeStats are stats pertaining to the underlying container runtime.
type RuntimeStats struct {
// Stats about the underlying filesystem where container images are stored.
// This filesystem could be the same as the primary (root) filesystem.
// Usage here refers to the total number of bytes occupied by images on the filesystem.
// +optional
ImageFs *FsStats `json:"imageFs,omitempty"`
}
const (
// SystemContainerKubelet is the container name for the system container tracking Kubelet usage.
SystemContainerKubelet = "kubelet"
// SystemContainerRuntime is the container name for the system container tracking the runtime (e.g. docker) usage.
SystemContainerRuntime = "runtime"
// SystemContainerMisc is the container name for the system container tracking non-kubernetes processes.
SystemContainerMisc = "misc"
// SystemContainerPods is the container name for the system container tracking user pods.
SystemContainerPods = "pods"
)
// PodStats holds pod-level unprocessed sample stats.
type PodStats struct {
// Reference to the measured Pod.
PodRef PodReference `json:"podRef"`
// The time at which data collection for the pod-scoped (e.g. network) stats was (re)started.
StartTime metav1.Time `json:"startTime"`
// Stats of containers in the measured pod.
// +patchMergeKey=name
// +patchStrategy=merge
Containers []ContainerStats `json:"containers" patchStrategy:"merge" patchMergeKey:"name"`
// Stats pertaining to CPU resources consumed by pod cgroup (which includes all containers' resource usage and pod overhead).
// +optional
CPU *CPUStats `json:"cpu,omitempty"`
// Stats pertaining to memory (RAM) resources consumed by pod cgroup (which includes all containers' resource usage and pod overhead).
// +optional
Memory *MemoryStats `json:"memory,omitempty"`
// Stats pertaining to network resources.
// +optional
Network *NetworkStats `json:"network,omitempty"`
// Stats pertaining to volume usage of filesystem resources.
// VolumeStats.UsedBytes is the number of bytes used by the Volume
// +optional
// +patchMergeKey=name
// +patchStrategy=merge
VolumeStats []VolumeStats `json:"volume,omitempty" patchStrategy:"merge" patchMergeKey:"name"`
// EphemeralStorage reports the total filesystem usage for the containers and emptyDir-backed volumes in the measured Pod.
// +optional
EphemeralStorage *FsStats `json:"ephemeral-storage,omitempty"`
}
// ContainerStats holds container-level unprocessed sample stats.
type ContainerStats struct {
// Reference to the measured container.
Name string `json:"name"`
// The time at which data collection for this container was (re)started.
StartTime metav1.Time `json:"startTime"`
// Stats pertaining to CPU resources.
// +optional
CPU *CPUStats `json:"cpu,omitempty"`
// Stats pertaining to memory (RAM) resources.
// +optional
Memory *MemoryStats `json:"memory,omitempty"`
// Metrics for Accelerators. Each Accelerator corresponds to one element in the array.
Accelerators []AcceleratorStats `json:"accelerators,omitempty"`
// Stats pertaining to container rootfs usage of filesystem resources.
// Rootfs.UsedBytes is the number of bytes used for the container write layer.
// +optional
Rootfs *FsStats `json:"rootfs,omitempty"`
// Stats pertaining to container logs usage of filesystem resources.
// Logs.UsedBytes is the number of bytes used for the container logs.
// +optional
Logs *FsStats `json:"logs,omitempty"`
// User defined metrics that are exposed by containers in the pod. Typically, we expect only one container in the pod to be exposing user defined metrics. In the event of multiple containers exposing metrics, they will be combined here.
// +patchMergeKey=name
// +patchStrategy=merge
UserDefinedMetrics []UserDefinedMetric `json:"userDefinedMetrics,omitempty" patchStrategy:"merge" patchMergeKey:"name"`
}
// PodReference contains enough information to locate the referenced pod.
type PodReference struct {
Name string `json:"name"`
Namespace string `json:"namespace"`
UID string `json:"uid"`
}
// InterfaceStats contains resource value data about interface.
type InterfaceStats struct {
// The name of the interface
Name string `json:"name"`
// Cumulative count of bytes received.
// +optional
RxBytes *uint64 `json:"rxBytes,omitempty"`
// Cumulative count of receive errors encountered.
// +optional
RxErrors *uint64 `json:"rxErrors,omitempty"`
// Cumulative count of bytes transmitted.
// +optional
TxBytes *uint64 `json:"txBytes,omitempty"`
// Cumulative count of transmit errors encountered.
// +optional
TxErrors *uint64 `json:"txErrors,omitempty"`
}
// NetworkStats contains data about network resources.
type NetworkStats struct {
// The time at which these stats were updated.
Time metav1.Time `json:"time"`
// Stats for the default interface, if found
InterfaceStats `json:",inline"`
Interfaces []InterfaceStats `json:"interfaces,omitempty"`
}
// CPUStats contains data about CPU usage.
type CPUStats struct {
// The time at which these stats were updated.
Time metav1.Time `json:"time"`
// Total CPU usage (sum of all cores) averaged over the sample window.
// The "core" unit can be interpreted as CPU core-nanoseconds per second.
// +optional
UsageNanoCores *uint64 `json:"usageNanoCores,omitempty"`
// Cumulative CPU usage (sum of all cores) since object creation.
// +optional
UsageCoreNanoSeconds *uint64 `json:"usageCoreNanoSeconds,omitempty"`
}
// MemoryStats contains data about memory usage.
type MemoryStats struct {
// The time at which these stats were updated.
Time metav1.Time `json:"time"`
// Available memory for use. This is defined as the memory limit - workingSetBytes.
// If memory limit is undefined, the available bytes is omitted.
// +optional
AvailableBytes *uint64 `json:"availableBytes,omitempty"`
// Total memory in use. This includes all memory regardless of when it was accessed.
// +optional
UsageBytes *uint64 `json:"usageBytes,omitempty"`
// The amount of working set memory. This includes recently accessed memory,
// dirty memory, and kernel memory. WorkingSetBytes is <= UsageBytes
// +optional
WorkingSetBytes *uint64 `json:"workingSetBytes,omitempty"`
// The amount of anonymous and swap cache memory (includes transparent
// hugepages).
// +optional
RSSBytes *uint64 `json:"rssBytes,omitempty"`
// Cumulative number of minor page faults.
// +optional
PageFaults *uint64 `json:"pageFaults,omitempty"`
// Cumulative number of major page faults.
// +optional
MajorPageFaults *uint64 `json:"majorPageFaults,omitempty"`
}
// AcceleratorStats contains stats for accelerators attached to the container.
type AcceleratorStats struct {
// Make of the accelerator (nvidia, amd, google etc.)
Make string `json:"make"`
// Model of the accelerator (tesla-p100, tesla-k80 etc.)
Model string `json:"model"`
// ID of the accelerator.
ID string `json:"id"`
// Total accelerator memory.
// unit: bytes
MemoryTotal uint64 `json:"memoryTotal"`
// Total accelerator memory allocated.
// unit: bytes
MemoryUsed uint64 `json:"memoryUsed"`
// Percent of time over the past sample period (10s) during which
// the accelerator was actively processing.
DutyCycle uint64 `json:"dutyCycle"`
}
// VolumeStats contains data about Volume filesystem usage.
type VolumeStats struct {
// Embedded FsStats
FsStats
// Name is the name given to the Volume
// +optional
Name string `json:"name,omitempty"`
// Reference to the PVC, if one exists
// +optional
PVCRef *PVCReference `json:"pvcRef,omitempty"`
}
// PVCReference contains enough information to describe the referenced PVC.
type PVCReference struct {
Name string `json:"name"`
Namespace string `json:"namespace"`
}
// FsStats contains data about filesystem usage.
type FsStats struct {
// The time at which these stats were updated.
Time metav1.Time `json:"time"`
// AvailableBytes represents the storage space available (bytes) for the filesystem.
// +optional
AvailableBytes *uint64 `json:"availableBytes,omitempty"`
// CapacityBytes represents the total capacity (bytes) of the filesystems underlying storage.
// +optional
CapacityBytes *uint64 `json:"capacityBytes,omitempty"`
// UsedBytes represents the bytes used for a specific task on the filesystem.
// This may differ from the total bytes used on the filesystem and may not equal CapacityBytes - AvailableBytes.
// e.g. For ContainerStats.Rootfs this is the bytes used by the container rootfs on the filesystem.
// +optional
UsedBytes *uint64 `json:"usedBytes,omitempty"`
// InodesFree represents the free inodes in the filesystem.
// +optional
InodesFree *uint64 `json:"inodesFree,omitempty"`
// Inodes represents the total inodes in the filesystem.
// +optional
Inodes *uint64 `json:"inodes,omitempty"`
// InodesUsed represents the inodes used by the filesystem
// This may not equal Inodes - InodesFree because this filesystem may share inodes with other "filesystems"
// e.g. For ContainerStats.Rootfs, this is the inodes used only by that container, and does not count inodes used by other containers.
InodesUsed *uint64 `json:"inodesUsed,omitempty"`
}
// UserDefinedMetricType defines how the metric should be interpreted by the user.
type UserDefinedMetricType string
const (
// MetricGauge is an instantaneous value. May increase or decrease.
MetricGauge UserDefinedMetricType = "gauge"
// MetricCumulative is a counter-like value that is only expected to increase.
MetricCumulative UserDefinedMetricType = "cumulative"
// MetricDelta is a rate over a time period.
MetricDelta UserDefinedMetricType = "delta"
)
// UserDefinedMetricDescriptor contains metadata that describes a user defined metric.
type UserDefinedMetricDescriptor struct {
// The name of the metric.
Name string `json:"name"`
// Type of the metric.
Type UserDefinedMetricType `json:"type"`
// Display Units for the stats.
Units string `json:"units"`
// Metadata labels associated with this metric.
// +optional
Labels map[string]string `json:"labels,omitempty"`
}
// UserDefinedMetric represents a metric defined and generated by users.
type UserDefinedMetric struct {
UserDefinedMetricDescriptor `json:",inline"`
// The time at which these stats were updated.
Time metav1.Time `json:"time"`
// Value of the metric. Float64s have 53 bit precision.
// We do not foresee any metrics exceeding that value.
Value float64 `json:"value"`
}

File diff suppressed because it is too large


@@ -1,3 +1,4 @@
//go:build codegen
// +build codegen
/*


@@ -21,10 +21,10 @@ import (
"encoding/json"
"fmt"
"io"
"io/ioutil"
"net/http"
"net/url"
"path"
"strings"
"time"
"github.com/prometheus/common/model"
@@ -47,17 +47,31 @@ type GenericAPIClient interface {
type httpAPIClient struct {
client *http.Client
baseURL *url.URL
headers http.Header
}
func (c *httpAPIClient) Do(ctx context.Context, verb, endpoint string, query url.Values) (APIResponse, error) {
u := *c.baseURL
u.Path = path.Join(c.baseURL.Path, endpoint)
u.RawQuery = query.Encode()
req, err := http.NewRequest(verb, u.String(), nil)
var reqBody io.Reader
if verb == http.MethodGet {
u.RawQuery = query.Encode()
} else if verb == http.MethodPost {
reqBody = strings.NewReader(query.Encode())
}
req, err := http.NewRequestWithContext(ctx, verb, u.String(), reqBody)
if err != nil {
return APIResponse{}, fmt.Errorf("error constructing HTTP request to Prometheus: %v", err)
}
req.WithContext(ctx)
for key, values := range c.headers {
for _, value := range values {
req.Header.Add(key, value)
}
}
if verb == http.MethodPost {
req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
}
resp, err := c.client.Do(req)
defer func() {
@@ -86,7 +100,7 @@ func (c *httpAPIClient) Do(ctx context.Context, verb, endpoint string, query url
var body io.Reader = resp.Body
if klog.V(8).Enabled() {
data, err := ioutil.ReadAll(body)
data, err := io.ReadAll(body)
if err != nil {
return APIResponse{}, fmt.Errorf("unable to log response body: %v", err)
}
@@ -113,10 +127,11 @@ func (c *httpAPIClient) Do(ctx context.Context, verb, endpoint string, query url
}
// NewGenericAPIClient builds a new generic Prometheus API client for the given base URL and HTTP Client.
func NewGenericAPIClient(client *http.Client, baseURL *url.URL) GenericAPIClient {
func NewGenericAPIClient(client *http.Client, baseURL *url.URL, headers http.Header) GenericAPIClient {
return &httpAPIClient{
client: client,
baseURL: baseURL,
headers: headers,
}
}
@@ -128,20 +143,22 @@ const (
// queryClient is a Client that connects to the Prometheus HTTP API.
type queryClient struct {
api GenericAPIClient
api GenericAPIClient
verb string
}
// NewClientForAPI creates a Client for the given generic Prometheus API client.
func NewClientForAPI(client GenericAPIClient) Client {
func NewClientForAPI(client GenericAPIClient, verb string) Client {
return &queryClient{
api: client,
api: client,
verb: verb,
}
}
// NewClient creates a Client for the given HTTP client and base URL (the location of the Prometheus server).
func NewClient(client *http.Client, baseURL *url.URL) Client {
genericClient := NewGenericAPIClient(client, baseURL)
return NewClientForAPI(genericClient)
func NewClient(client *http.Client, baseURL *url.URL, headers http.Header, verb string) Client {
genericClient := NewGenericAPIClient(client, baseURL, headers)
return NewClientForAPI(genericClient, verb)
}
func (h *queryClient) Series(ctx context.Context, interval model.Interval, selectors ...Selector) ([]Series, error) {
@@ -157,7 +174,7 @@ func (h *queryClient) Series(ctx context.Context, interval model.Interval, selec
vals.Add("match[]", string(selector))
}
res, err := h.api.Do(ctx, "GET", seriesURL, vals)
res, err := h.api.Do(ctx, h.verb, seriesURL, vals)
if err != nil {
return nil, err
}
@@ -177,7 +194,7 @@ func (h *queryClient) Query(ctx context.Context, t model.Time, query Selector) (
vals.Set("timeout", model.Duration(timeout).String())
}
res, err := h.api.Do(ctx, "GET", queryURL, vals)
res, err := h.api.Do(ctx, h.verb, queryURL, vals)
if err != nil {
return QueryResult{}, err
}
@@ -204,7 +221,7 @@ func (h *queryClient) QueryRange(ctx context.Context, r Range, query Selector) (
vals.Set("timeout", model.Duration(timeout).String())
}
res, err := h.api.Do(ctx, "GET", queryRangeURL, vals)
res, err := h.api.Do(ctx, h.verb, queryRangeURL, vals)
if err != nil {
return QueryResult{}, err
}
@@ -218,7 +235,7 @@ func (h *queryClient) QueryRange(ctx context.Context, r Range, query Selector) (
// when present
func timeoutFromContext(ctx context.Context) (time.Duration, bool) {
if deadline, hasDeadline := ctx.Deadline(); hasDeadline {
return time.Now().Sub(deadline), true
return time.Since(deadline), true
}
return time.Duration(0), false


@@ -20,8 +20,9 @@ import (
"context"
"fmt"
prom "github.com/directxman12/k8s-prometheus-adapter/pkg/client"
pmodel "github.com/prometheus/common/model"
prom "sigs.k8s.io/prometheus-adapter/pkg/client"
)
// FakePrometheusClient is a fake instance of prom.Client


@@ -13,6 +13,7 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package client
import (


@@ -13,6 +13,7 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package client
import (


@@ -13,34 +13,51 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package metrics
import (
"context"
"net/http"
"net/url"
"time"
"github.com/prometheus/client_golang/prometheus"
"github.com/directxman12/k8s-prometheus-adapter/pkg/client"
apimetrics "k8s.io/apiserver/pkg/endpoints/metrics"
"k8s.io/component-base/metrics"
"k8s.io/component-base/metrics/legacyregistry"
"sigs.k8s.io/prometheus-adapter/pkg/client"
)
var (
// queryLatency is the total latency of any query going through the
// various endpoints (query, range-query, series). It includes some deserialization
// overhead and HTTP overhead.
queryLatency = prometheus.NewHistogramVec(
prometheus.HistogramOpts{
Name: "cmgateway_prometheus_query_latency_seconds",
Help: "Prometheus client query latency in seconds. Broken down by target prometheus endpoint and target server",
Buckets: prometheus.ExponentialBuckets(0.0001, 2, 10),
queryLatency = metrics.NewHistogramVec(
&metrics.HistogramOpts{
Namespace: "prometheus_adapter",
Subsystem: "prometheus_client",
Name: "request_duration_seconds",
Help: "Prometheus client query latency in seconds. Broken down by target prometheus endpoint and target server",
Buckets: prometheus.DefBuckets,
},
[]string{"endpoint", "server"},
[]string{"path", "server"},
)
)
func init() {
prometheus.MustRegister(queryLatency)
func MetricsHandler() (http.HandlerFunc, error) {
registry := metrics.NewKubeRegistry()
err := registry.Register(queryLatency)
if err != nil {
return nil, err
}
apimetrics.Register()
return func(w http.ResponseWriter, req *http.Request) {
legacyregistry.Handler().ServeHTTP(w, req)
metrics.HandlerFor(registry, metrics.HandlerOpts{}).ServeHTTP(w, req)
}, nil
}
// instrumentedClient is a client.GenericAPIClient which instruments calls to Do,
@@ -62,7 +79,7 @@ func (c *instrumentedGenericClient) Do(ctx context.Context, verb, endpoint strin
return
}
}
queryLatency.With(prometheus.Labels{"endpoint": endpoint, "server": c.serverName}).Observe(endTime.Sub(startTime).Seconds())
queryLatency.With(prometheus.Labels{"path": endpoint, "server": c.serverName}).Observe(endTime.Sub(startTime).Seconds())
}()
var resp client.APIResponse

View file

@ -25,10 +25,10 @@ type ErrorType string
const (
ErrBadData ErrorType = "bad_data"
ErrTimeout = "timeout"
ErrCanceled = "canceled"
ErrExec = "execution"
ErrBadResponse = "bad_response"
ErrTimeout ErrorType = "timeout"
ErrCanceled ErrorType = "canceled"
ErrExec ErrorType = "execution"
ErrBadResponse ErrorType = "bad_response"
)
// Error is an error returned by the API.
@ -46,7 +46,7 @@ type ResponseStatus string
const (
ResponseSucceeded ResponseStatus = "succeeded"
ResponseError = "error"
ResponseError ResponseStatus = "error"
)
// APIResponse represents the raw response returned by the API.

View file

@ -9,9 +9,9 @@ type MetricsDiscoveryConfig struct {
// custom metrics API resources. The rules are applied independently,
// and thus must be mutually exclusive. Rules with the same SeriesQuery
// will make only a single API call.
Rules []DiscoveryRule `yaml:"rules"`
ResourceRules *ResourceRules `yaml:"resourceRules,omitempty"`
ExternalRules []DiscoveryRule `yaml:"externalRules,omitempty"`
Rules []DiscoveryRule `json:"rules" yaml:"rules"`
ResourceRules *ResourceRules `json:"resourceRules,omitempty" yaml:"resourceRules,omitempty"`
ExternalRules []DiscoveryRule `json:"externalRules,omitempty" yaml:"externalRules,omitempty"`
}
// DiscoveryRule describes a set of rules for transforming Prometheus metrics to/from
@ -19,32 +19,32 @@ type MetricsDiscoveryConfig struct {
type DiscoveryRule struct {
// SeriesQuery specifies which metrics this rule should consider via a Prometheus query
// series selector query.
SeriesQuery string `yaml:"seriesQuery"`
SeriesQuery string `json:"seriesQuery" yaml:"seriesQuery"`
// SeriesFilters specifies additional regular expressions to be applied on
// the series names returned from the query. This is useful for constraints
// that can't be represented in the SeriesQuery (e.g. series matching `container_.+`
// but not matching `container_.+_total`). A filter will be automatically appended to
// match the form specified in Name.
SeriesFilters []RegexFilter `yaml:"seriesFilters"`
SeriesFilters []RegexFilter `json:"seriesFilters" yaml:"seriesFilters"`
// Resources specifies how associated Kubernetes resources should be discovered for
// the given metrics.
Resources ResourceMapping `yaml:"resources"`
Resources ResourceMapping `json:"resources" yaml:"resources"`
// Name specifies how the metric name should be transformed between custom metric
// API resources, and Prometheus metric names.
Name NameMapping `yaml:"name"`
Name NameMapping `json:"name" yaml:"name"`
// MetricsQuery specifies modifications to the metrics query, such as converting
// cumulative metrics to rate metrics. It is a template where `.LabelMatchers` is
// the comma-separated base label matchers, `.Series` is the series name, and
// `.GroupBy` is the comma-separated expected group-by label names. The delimiters
// are `<<` and `>>`.
MetricsQuery string `yaml:"metricsQuery,omitempty"`
MetricsQuery string `json:"metricsQuery,omitempty" yaml:"metricsQuery,omitempty"`
}
// RegexFilter is a filter that matches positively or negatively against a regex.
// Only one field may be set at a time.
type RegexFilter struct {
Is string `yaml:"is,omitempty"`
IsNot string `yaml:"isNot,omitempty"`
Is string `json:"is,omitempty" yaml:"is,omitempty"`
IsNot string `json:"isNot,omitempty" yaml:"isNot,omitempty"`
}
// ResourceMapping specifies how to map Kubernetes resources to Prometheus labels
@ -54,16 +54,18 @@ type ResourceMapping struct {
// the `.Group` and `.Resource` fields. The `.Group` field will have
// dots replaced with underscores, and the `.Resource` field will be
// singularized. The delimiters are `<<` and `>>`.
Template string `yaml:"template,omitempty"`
Template string `json:"template,omitempty" yaml:"template,omitempty"`
// Overrides specifies exceptions to the above template, mapping label names
// to group-resources
Overrides map[string]GroupResource `yaml:"overrides,omitempty"`
Overrides map[string]GroupResource `json:"overrides,omitempty" yaml:"overrides,omitempty"`
// Namespaced ignores the source namespace of the requester and requires one in the query
Namespaced *bool `json:"namespaced,omitempty" yaml:"namespaced,omitempty"`
}
// GroupResource represents a Kubernetes group-resource.
type GroupResource struct {
Group string `yaml:"group,omitempty"`
Resource string `yaml:"resource"`
Group string `json:"group,omitempty" yaml:"group,omitempty"`
Resource string `json:"resource" yaml:"resource"`
}
// NameMapping specifies how to convert Prometheus metrics
@ -72,38 +74,38 @@ type NameMapping struct {
// Matches is a regular expression that is used to match
// Prometheus series names. It may be left blank, in which
// case it is equivalent to `.*`.
Matches string `yaml:"matches"`
Matches string `json:"matches" yaml:"matches"`
// As is the name used in the API. Captures from Matches
// are available for use here. If not specified, it defaults
// to $0 if no capture groups are present in Matches, or $1
// if only one is present, and will error if multiple are.
As string `yaml:"as"`
As string `json:"as" yaml:"as"`
}
// ResourceRules describe the rules for querying resource metrics
// API results. It's assumed that the same metrics can be used
// to aggregate across different resources.
type ResourceRules struct {
CPU ResourceRule `yaml:"cpu"`
Memory ResourceRule `yaml:"memory"`
CPU ResourceRule `json:"cpu" yaml:"cpu"`
Memory ResourceRule `json:"memory" yaml:"memory"`
// Window is the window size reported by the resource metrics API. It should match the value used
// in your containerQuery and nodeQuery if you use a `rate` function.
Window pmodel.Duration `yaml:"window"`
Window pmodel.Duration `json:"window" yaml:"window"`
}
// ResourceRule describes how to query metrics for some particular
// system resource metric.
type ResourceRule struct {
// Container is the query used to fetch the metrics for containers.
ContainerQuery string `yaml:"containerQuery"`
ContainerQuery string `json:"containerQuery" yaml:"containerQuery"`
// NodeQuery is the query used to fetch the metrics for nodes
// (for instance, simply aggregating by node label is insufficient for
// cadvisor metrics -- you need to select the `/` container).
NodeQuery string `yaml:"nodeQuery"`
NodeQuery string `json:"nodeQuery" yaml:"nodeQuery"`
// Resources specifies how associated Kubernetes resources should be discovered for
// the given metrics.
Resources ResourceMapping `yaml:"resources"`
Resources ResourceMapping `json:"resources" yaml:"resources"`
// ContainerLabel indicates the name of the Prometheus label containing the container name
// (since "container" is not a resource, this can't go in the `resources` block, but is similar).
ContainerLabel string `yaml:"containerLabel"`
ContainerLabel string `json:"containerLabel" yaml:"containerLabel"`
}
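The struct tags above define the on-disk config shape. A minimal, hypothetical discovery rule exercising `seriesQuery`, `resources.overrides`, `name`, and `metricsQuery` (metric and label names are illustrative, and the `<<`/`>>` delimiters come from the `MetricsQuery` comment above) could look like:

```yaml
rules:
- seriesQuery: 'http_requests_total{namespace!="",pod!=""}'
  resources:
    overrides:
      namespace: {resource: "namespace"}
      pod: {resource: "pod"}
  name:
    matches: "^(.*)_total$"
    as: "${1}_per_second"
  metricsQuery: 'sum(rate(<<.Series>>{<<.LabelMatchers>>}[2m])) by (<<.GroupBy>>)'
```

Because the types now carry both `json` and `yaml` tags, the same field names round-trip through either codec.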

View file

@ -2,7 +2,7 @@ package config
import (
"fmt"
"io/ioutil"
"io"
"os"
yaml "gopkg.in/yaml.v2"
@ -11,11 +11,11 @@ import (
// FromFile loads the configuration from a particular file.
func FromFile(filename string) (*MetricsDiscoveryConfig, error) {
file, err := os.Open(filename)
defer file.Close()
if err != nil {
return nil, fmt.Errorf("unable to load metrics discovery config file: %v", err)
}
contents, err := ioutil.ReadAll(file)
defer file.Close()
contents, err := io.ReadAll(file)
if err != nil {
return nil, fmt.Errorf("unable to load metrics discovery config file: %v", err)
}

View file

@ -22,9 +22,8 @@ import (
"math"
"time"
"github.com/kubernetes-sigs/custom-metrics-apiserver/pkg/provider"
"github.com/kubernetes-sigs/custom-metrics-apiserver/pkg/provider/helpers"
pmodel "github.com/prometheus/common/model"
apierr "k8s.io/apimachinery/pkg/api/errors"
apimeta "k8s.io/apimachinery/pkg/api/meta"
"k8s.io/apimachinery/pkg/api/resource"
@ -37,8 +36,11 @@ import (
"k8s.io/klog/v2"
"k8s.io/metrics/pkg/apis/custom_metrics"
prom "github.com/directxman12/k8s-prometheus-adapter/pkg/client"
"github.com/directxman12/k8s-prometheus-adapter/pkg/naming"
"sigs.k8s.io/custom-metrics-apiserver/pkg/provider"
"sigs.k8s.io/custom-metrics-apiserver/pkg/provider/helpers"
prom "sigs.k8s.io/prometheus-adapter/pkg/client"
"sigs.k8s.io/prometheus-adapter/pkg/naming"
)
// Runnable represents something that can be run until told to stop.
@ -78,7 +80,7 @@ func NewPrometheusProvider(mapper apimeta.RESTMapper, kubeClient dynamic.Interfa
}, lister
}
func (p *prometheusProvider) metricFor(value pmodel.SampleValue, name types.NamespacedName, info provider.CustomMetricInfo) (*custom_metrics.MetricValue, error) {
func (p *prometheusProvider) metricFor(value pmodel.SampleValue, name types.NamespacedName, info provider.CustomMetricInfo, metricSelector labels.Selector) (*custom_metrics.MetricValue, error) {
ref, err := helpers.ReferenceFor(p.mapper, name, info)
if err != nil {
return nil, err
@ -90,18 +92,29 @@ func (p *prometheusProvider) metricFor(value pmodel.SampleValue, name types.Name
} else {
q = resource.NewMilliQuantity(int64(value*1000.0), resource.DecimalSI)
}
return &custom_metrics.MetricValue{
metric := &custom_metrics.MetricValue{
DescribedObject: ref,
Metric: custom_metrics.MetricIdentifier{
Name: info.Metric,
},
// TODO(directxman12): use the right timestamp
Timestamp: metav1.Time{time.Now()},
Timestamp: metav1.Time{Time: time.Now()},
Value: *q,
}, nil
}
if !metricSelector.Empty() {
sel, err := metav1.ParseToLabelSelector(metricSelector.String())
if err != nil {
return nil, err
}
metric.Metric.Selector = sel
}
return metric, nil
}
func (p *prometheusProvider) metricsFor(valueSet pmodel.Vector, info provider.CustomMetricInfo, namespace string, names []string) (*custom_metrics.MetricValueList, error) {
func (p *prometheusProvider) metricsFor(valueSet pmodel.Vector, namespace string, names []string, info provider.CustomMetricInfo, metricSelector labels.Selector) (*custom_metrics.MetricValueList, error) {
values, found := p.MatchValuesToNames(info, valueSet)
if !found {
return nil, provider.NewMetricNotFoundError(info.GroupResource, info.Metric)
@ -113,7 +126,7 @@ func (p *prometheusProvider) metricsFor(valueSet pmodel.Vector, info provider.Cu
continue
}
value, err := p.metricFor(values[name], types.NamespacedName{Namespace: namespace, Name: name}, info)
value, err := p.metricFor(values[name], types.NamespacedName{Namespace: namespace, Name: name}, info, metricSelector)
if err != nil {
return nil, err
}
@ -125,14 +138,14 @@ func (p *prometheusProvider) metricsFor(valueSet pmodel.Vector, info provider.Cu
}, nil
}
func (p *prometheusProvider) buildQuery(info provider.CustomMetricInfo, namespace string, metricSelector labels.Selector, names ...string) (pmodel.Vector, error) {
func (p *prometheusProvider) buildQuery(ctx context.Context, info provider.CustomMetricInfo, namespace string, metricSelector labels.Selector, names ...string) (pmodel.Vector, error) {
query, found := p.QueryForMetric(info, namespace, metricSelector, names...)
if !found {
return nil, provider.NewMetricNotFoundError(info.GroupResource, info.Metric)
}
// TODO: use an actual context
queryResults, err := p.promClient.Query(context.TODO(), pmodel.Now(), query)
queryResults, err := p.promClient.Query(ctx, pmodel.Now(), query)
if err != nil {
klog.Errorf("unable to fetch metrics from prometheus: %v", err)
// don't leak implementation details to the user
@ -147,9 +160,9 @@ func (p *prometheusProvider) buildQuery(info provider.CustomMetricInfo, namespac
return *queryResults.Vector, nil
}
func (p *prometheusProvider) GetMetricByName(name types.NamespacedName, info provider.CustomMetricInfo, metricSelector labels.Selector) (*custom_metrics.MetricValue, error) {
func (p *prometheusProvider) GetMetricByName(ctx context.Context, name types.NamespacedName, info provider.CustomMetricInfo, metricSelector labels.Selector) (*custom_metrics.MetricValue, error) {
// construct a query
queryResults, err := p.buildQuery(info, name.Namespace, metricSelector, name.Name)
queryResults, err := p.buildQuery(ctx, info, name.Namespace, metricSelector, name.Name)
if err != nil {
return nil, err
}
@ -175,10 +188,10 @@ func (p *prometheusProvider) GetMetricByName(name types.NamespacedName, info pro
}
// return the resulting metric
return p.metricFor(resultValue, name, info)
return p.metricFor(resultValue, name, info, metricSelector)
}
func (p *prometheusProvider) GetMetricBySelector(namespace string, selector labels.Selector, info provider.CustomMetricInfo, metricSelector labels.Selector) (*custom_metrics.MetricValueList, error) {
func (p *prometheusProvider) GetMetricBySelector(ctx context.Context, namespace string, selector labels.Selector, info provider.CustomMetricInfo, metricSelector labels.Selector) (*custom_metrics.MetricValueList, error) {
// fetch a list of relevant resource names
resourceNames, err := helpers.ListObjectNames(p.mapper, p.kubeClient, namespace, selector, info)
if err != nil {
@ -188,13 +201,13 @@ func (p *prometheusProvider) GetMetricBySelector(namespace string, selector labe
}
// construct the actual query
queryResults, err := p.buildQuery(info, namespace, metricSelector, resourceNames...)
queryResults, err := p.buildQuery(ctx, info, namespace, metricSelector, resourceNames...)
if err != nil {
return nil, err
}
// return the resulting metrics
return p.metricsFor(queryResults, info, namespace, resourceNames)
return p.metricsFor(queryResults, namespace, resourceNames, info, metricSelector)
}
type cachingMetricsLister struct {
@ -243,7 +256,7 @@ func (l *cachingMetricsLister) updateMetrics() error {
}
selectors[sel] = struct{}{}
go func() {
series, err := l.promClient.Series(context.TODO(), pmodel.Interval{startTime, 0}, sel)
series, err := l.promClient.Series(context.TODO(), pmodel.Interval{Start: startTime, End: 0}, sel)
if err != nil {
errs <- fmt.Errorf("unable to fetch metrics for query %q: %v", sel, err)
return

View file

@ -19,17 +19,19 @@ package provider
import (
"time"
"github.com/kubernetes-sigs/custom-metrics-apiserver/pkg/provider"
. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"
pmodel "github.com/prometheus/common/model"
"k8s.io/apimachinery/pkg/runtime/schema"
fakedyn "k8s.io/client-go/dynamic/fake"
config "github.com/directxman12/k8s-prometheus-adapter/cmd/config-gen/utils"
prom "github.com/directxman12/k8s-prometheus-adapter/pkg/client"
fakeprom "github.com/directxman12/k8s-prometheus-adapter/pkg/client/fake"
"github.com/directxman12/k8s-prometheus-adapter/pkg/naming"
pmodel "github.com/prometheus/common/model"
"sigs.k8s.io/custom-metrics-apiserver/pkg/provider"
config "sigs.k8s.io/prometheus-adapter/cmd/config-gen/utils"
prom "sigs.k8s.io/prometheus-adapter/pkg/client"
fakeprom "sigs.k8s.io/prometheus-adapter/pkg/client/fake"
"sigs.k8s.io/prometheus-adapter/pkg/naming"
)
const fakeProviderUpdateInterval = 2 * time.Second
@ -45,13 +47,13 @@ func setupPrometheusProvider() (provider.CustomMetricsProvider, *fakeprom.FakePr
prov, _ := NewPrometheusProvider(restMapper(), fakeKubeClient, fakeProm, namers, fakeProviderUpdateInterval, fakeProviderStartDuration)
containerSel := prom.MatchSeries("", prom.NameMatches("^container_.*"), prom.LabelNeq("container_name", "POD"), prom.LabelNeq("namespace", ""), prom.LabelNeq("pod_name", ""))
containerSel := prom.MatchSeries("", prom.NameMatches("^container_.*"), prom.LabelNeq("container", "POD"), prom.LabelNeq("namespace", ""), prom.LabelNeq("pod", ""))
namespacedSel := prom.MatchSeries("", prom.LabelNeq("namespace", ""), prom.NameNotMatches("^container_.*"))
fakeProm.SeriesResults = map[prom.Selector][]prom.Series{
containerSel: {
{
Name: "container_some_usage",
Labels: pmodel.LabelSet{"pod_name": "somepod", "namespace": "somens", "container_name": "somecont"},
Labels: pmodel.LabelSet{"pod": "somepod", "namespace": "somens", "container": "somecont"},
},
},
namespacedSel: {
@ -85,7 +87,7 @@ var _ = Describe("Custom Metrics Provider", func() {
By("ensuring that no metrics are present before we start listing")
Expect(prov.ListAllMetrics()).To(BeEmpty())
By("setting the acceptible interval to now until the next update, with a bit of wiggle room")
By("setting the acceptable interval to now until the next update, with a bit of wiggle room")
startTime := pmodel.Now().Add(-1*fakeProviderUpdateInterval - fakeProviderUpdateInterval/10)
fakeProm.AcceptableInterval = pmodel.Interval{Start: startTime, End: 0}
@ -96,16 +98,16 @@ var _ = Describe("Custom Metrics Provider", func() {
By("listing all metrics, and checking that they contain the expected results")
Expect(prov.ListAllMetrics()).To(ConsistOf(
provider.CustomMetricInfo{schema.GroupResource{Resource: "services"}, true, "ingress_hits"},
provider.CustomMetricInfo{schema.GroupResource{Group: "extensions", Resource: "ingresses"}, true, "ingress_hits"},
provider.CustomMetricInfo{schema.GroupResource{Resource: "pods"}, true, "ingress_hits"},
provider.CustomMetricInfo{schema.GroupResource{Resource: "namespaces"}, false, "ingress_hits"},
provider.CustomMetricInfo{schema.GroupResource{Resource: "services"}, true, "service_proxy_packets"},
provider.CustomMetricInfo{schema.GroupResource{Resource: "namespaces"}, false, "service_proxy_packets"},
provider.CustomMetricInfo{schema.GroupResource{Group: "extensions", Resource: "deployments"}, true, "work_queue_wait"},
provider.CustomMetricInfo{schema.GroupResource{Resource: "namespaces"}, false, "work_queue_wait"},
provider.CustomMetricInfo{schema.GroupResource{Resource: "namespaces"}, false, "some_usage"},
provider.CustomMetricInfo{schema.GroupResource{Resource: "pods"}, true, "some_usage"},
provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "services"}, Namespaced: true, Metric: "ingress_hits"},
provider.CustomMetricInfo{GroupResource: schema.GroupResource{Group: "extensions", Resource: "ingresses"}, Namespaced: true, Metric: "ingress_hits"},
provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "pods"}, Namespaced: true, Metric: "ingress_hits"},
provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "namespaces"}, Namespaced: false, Metric: "ingress_hits"},
provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "services"}, Namespaced: true, Metric: "service_proxy_packets"},
provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "namespaces"}, Namespaced: false, Metric: "service_proxy_packets"},
provider.CustomMetricInfo{GroupResource: schema.GroupResource{Group: "extensions", Resource: "deployments"}, Namespaced: true, Metric: "work_queue_wait"},
provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "namespaces"}, Namespaced: false, Metric: "work_queue_wait"},
provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "namespaces"}, Namespaced: false, Metric: "some_usage"},
provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "pods"}, Namespaced: true, Metric: "some_usage"},
))
})
})

View file

@ -20,14 +20,16 @@ import (
"fmt"
"sync"
"github.com/kubernetes-sigs/custom-metrics-apiserver/pkg/provider"
pmodel "github.com/prometheus/common/model"
apimeta "k8s.io/apimachinery/pkg/api/meta"
"k8s.io/apimachinery/pkg/labels"
prom "github.com/directxman12/k8s-prometheus-adapter/pkg/client"
"github.com/directxman12/k8s-prometheus-adapter/pkg/naming"
pmodel "github.com/prometheus/common/model"
"k8s.io/klog/v2"
"sigs.k8s.io/custom-metrics-apiserver/pkg/provider"
prom "sigs.k8s.io/prometheus-adapter/pkg/client"
"sigs.k8s.io/prometheus-adapter/pkg/naming"
)
// NB: container metrics sourced from cAdvisor don't consistently follow naming conventions,

View file

@ -20,10 +20,11 @@ import (
"fmt"
"time"
"github.com/kubernetes-sigs/custom-metrics-apiserver/pkg/provider"
. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"
pmodel "github.com/prometheus/common/model"
coreapi "k8s.io/api/core/v1"
extapi "k8s.io/api/extensions/v1beta1"
apimeta "k8s.io/apimachinery/pkg/api/meta"
@ -31,9 +32,11 @@ import (
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apimachinery/pkg/selection"
config "github.com/directxman12/k8s-prometheus-adapter/cmd/config-gen/utils"
prom "github.com/directxman12/k8s-prometheus-adapter/pkg/client"
"github.com/directxman12/k8s-prometheus-adapter/pkg/naming"
"sigs.k8s.io/custom-metrics-apiserver/pkg/provider"
config "sigs.k8s.io/prometheus-adapter/cmd/config-gen/utils"
prom "sigs.k8s.io/prometheus-adapter/pkg/client"
"sigs.k8s.io/prometheus-adapter/pkg/naming"
)
// restMapper creates a RESTMapper with just the types we need for
@ -65,23 +68,23 @@ var seriesRegistryTestSeries = [][]prom.Series{
{
{
Name: "container_some_time_seconds_total",
Labels: pmodel.LabelSet{"pod_name": "somepod", "namespace": "somens", "container_name": "somecont"},
Labels: pmodel.LabelSet{"pod": "somepod", "namespace": "somens", "container": "somecont"},
},
},
{
{
Name: "container_some_count_total",
Labels: pmodel.LabelSet{"pod_name": "somepod", "namespace": "somens", "container_name": "somecont"},
Labels: pmodel.LabelSet{"pod": "somepod", "namespace": "somens", "container": "somecont"},
},
},
{
{
Name: "container_some_usage",
Labels: pmodel.LabelSet{"pod_name": "somepod", "namespace": "somens", "container_name": "somecont"},
Labels: pmodel.LabelSet{"pod": "somepod", "namespace": "somens", "container": "somecont"},
},
},
{
// guage metrics
// gauge metrics
{
Name: "node_gigawatts",
Labels: pmodel.LabelSet{"kube_node": "somenode"},
@ -156,35 +159,35 @@ var _ = Describe("Series Registry", func() {
// container metrics
{
title: "container metrics gauge / multiple resource names",
info: provider.CustomMetricInfo{schema.GroupResource{Resource: "pods"}, true, "some_usage"},
info: provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "pods"}, Namespaced: true, Metric: "some_usage"},
namespace: "somens",
resourceNames: []string{"somepod1", "somepod2"},
metricSelector: labels.Everything(),
expectedQuery: "sum(container_some_usage{namespace=\"somens\",pod_name=~\"somepod1|somepod2\",container_name!=\"POD\"}) by (pod_name)",
expectedQuery: "sum(container_some_usage{namespace=\"somens\",pod=~\"somepod1|somepod2\",container!=\"POD\"}) by (pod)",
},
{
title: "container metrics counter",
info: provider.CustomMetricInfo{schema.GroupResource{Resource: "pods"}, true, "some_count"},
info: provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "pods"}, Namespaced: true, Metric: "some_count"},
namespace: "somens",
resourceNames: []string{"somepod1", "somepod2"},
metricSelector: labels.Everything(),
expectedQuery: "sum(rate(container_some_count_total{namespace=\"somens\",pod_name=~\"somepod1|somepod2\",container_name!=\"POD\"}[1m])) by (pod_name)",
expectedQuery: "sum(rate(container_some_count_total{namespace=\"somens\",pod=~\"somepod1|somepod2\",container!=\"POD\"}[1m])) by (pod)",
},
{
title: "container metrics seconds counter",
info: provider.CustomMetricInfo{schema.GroupResource{Resource: "pods"}, true, "some_time"},
info: provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "pods"}, Namespaced: true, Metric: "some_time"},
namespace: "somens",
resourceNames: []string{"somepod1", "somepod2"},
metricSelector: labels.Everything(),
expectedQuery: "sum(rate(container_some_time_seconds_total{namespace=\"somens\",pod_name=~\"somepod1|somepod2\",container_name!=\"POD\"}[1m])) by (pod_name)",
expectedQuery: "sum(rate(container_some_time_seconds_total{namespace=\"somens\",pod=~\"somepod1|somepod2\",container!=\"POD\"}[1m])) by (pod)",
},
// namespaced metrics
{
title: "namespaced metrics counter / multidimensional (service)",
info: provider.CustomMetricInfo{schema.GroupResource{Resource: "service"}, true, "ingress_hits"},
info: provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "service"}, Namespaced: true, Metric: "ingress_hits"},
namespace: "somens",
resourceNames: []string{"somesvc"},
metricSelector: labels.Everything(),
@ -193,7 +196,7 @@ var _ = Describe("Series Registry", func() {
},
{
title: "namespaced metrics counter / multidimensional (service) / selection using labels",
info: provider.CustomMetricInfo{schema.GroupResource{Resource: "service"}, true, "ingress_hits"},
info: provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "service"}, Namespaced: true, Metric: "ingress_hits"},
namespace: "somens",
resourceNames: []string{"somesvc"},
metricSelector: labels.NewSelector().Add(
@ -203,7 +206,7 @@ var _ = Describe("Series Registry", func() {
},
{
title: "namespaced metrics counter / multidimensional (ingress)",
info: provider.CustomMetricInfo{schema.GroupResource{Group: "extensions", Resource: "ingress"}, true, "ingress_hits"},
info: provider.CustomMetricInfo{GroupResource: schema.GroupResource{Group: "extensions", Resource: "ingress"}, Namespaced: true, Metric: "ingress_hits"},
namespace: "somens",
resourceNames: []string{"someingress"},
metricSelector: labels.Everything(),
@ -212,7 +215,7 @@ var _ = Describe("Series Registry", func() {
},
{
title: "namespaced metrics counter / multidimensional (pod)",
info: provider.CustomMetricInfo{schema.GroupResource{Resource: "pod"}, true, "ingress_hits"},
info: provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "pod"}, Namespaced: true, Metric: "ingress_hits"},
namespace: "somens",
resourceNames: []string{"somepod"},
metricSelector: labels.Everything(),
@ -221,7 +224,7 @@ var _ = Describe("Series Registry", func() {
},
{
title: "namespaced metrics gauge",
info: provider.CustomMetricInfo{schema.GroupResource{Resource: "service"}, true, "service_proxy_packets"},
info: provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "service"}, Namespaced: true, Metric: "service_proxy_packets"},
namespace: "somens",
resourceNames: []string{"somesvc"},
metricSelector: labels.Everything(),
@ -230,7 +233,7 @@ var _ = Describe("Series Registry", func() {
},
{
title: "namespaced metrics seconds counter",
info: provider.CustomMetricInfo{schema.GroupResource{Group: "extensions", Resource: "deployment"}, true, "work_queue_wait"},
info: provider.CustomMetricInfo{GroupResource: schema.GroupResource{Group: "extensions", Resource: "deployment"}, Namespaced: true, Metric: "work_queue_wait"},
namespace: "somens",
resourceNames: []string{"somedep"},
metricSelector: labels.Everything(),
@ -240,7 +243,7 @@ var _ = Describe("Series Registry", func() {
// non-namespaced series
{
title: "root scoped metrics gauge",
info: provider.CustomMetricInfo{schema.GroupResource{Resource: "node"}, false, "node_gigawatts"},
info: provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "node"}, Namespaced: false, Metric: "node_gigawatts"},
resourceNames: []string{"somenode"},
metricSelector: labels.Everything(),
@ -248,7 +251,7 @@ var _ = Describe("Series Registry", func() {
},
{
title: "root scoped metrics counter",
info: provider.CustomMetricInfo{schema.GroupResource{Resource: "persistentvolume"}, false, "volume_claims"},
info: provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "persistentvolume"}, Namespaced: false, Metric: "volume_claims"},
resourceNames: []string{"somepv"},
metricSelector: labels.Everything(),
@ -256,7 +259,7 @@ var _ = Describe("Series Registry", func() {
},
{
title: "root scoped metrics seconds counter",
info: provider.CustomMetricInfo{schema.GroupResource{Resource: "node"}, false, "node_fan"},
info: provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "node"}, Namespaced: false, Metric: "node_fan"},
resourceNames: []string{"somenode"},
metricSelector: labels.Everything(),
@ -278,23 +281,23 @@ var _ = Describe("Series Registry", func() {
It("should list all metrics", func() {
Expect(registry.ListAllMetrics()).To(ConsistOf(
provider.CustomMetricInfo{schema.GroupResource{Resource: "pods"}, true, "some_count"},
provider.CustomMetricInfo{schema.GroupResource{Resource: "namespaces"}, false, "some_count"},
provider.CustomMetricInfo{schema.GroupResource{Resource: "pods"}, true, "some_time"},
provider.CustomMetricInfo{schema.GroupResource{Resource: "namespaces"}, false, "some_time"},
provider.CustomMetricInfo{schema.GroupResource{Resource: "pods"}, true, "some_usage"},
provider.CustomMetricInfo{schema.GroupResource{Resource: "namespaces"}, false, "some_usage"},
provider.CustomMetricInfo{schema.GroupResource{Resource: "services"}, true, "ingress_hits"},
provider.CustomMetricInfo{schema.GroupResource{Group: "extensions", Resource: "ingresses"}, true, "ingress_hits"},
provider.CustomMetricInfo{schema.GroupResource{Resource: "pods"}, true, "ingress_hits"},
provider.CustomMetricInfo{schema.GroupResource{Resource: "namespaces"}, false, "ingress_hits"},
provider.CustomMetricInfo{schema.GroupResource{Resource: "services"}, true, "service_proxy_packets"},
provider.CustomMetricInfo{schema.GroupResource{Resource: "namespaces"}, false, "service_proxy_packets"},
provider.CustomMetricInfo{schema.GroupResource{Group: "extensions", Resource: "deployments"}, true, "work_queue_wait"},
provider.CustomMetricInfo{schema.GroupResource{Resource: "namespaces"}, false, "work_queue_wait"},
provider.CustomMetricInfo{schema.GroupResource{Resource: "nodes"}, false, "node_gigawatts"},
provider.CustomMetricInfo{schema.GroupResource{Resource: "persistentvolumes"}, false, "volume_claims"},
provider.CustomMetricInfo{schema.GroupResource{Resource: "nodes"}, false, "node_fan"},
provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "pods"}, Namespaced: true, Metric: "some_count"},
provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "namespaces"}, Namespaced: false, Metric: "some_count"},
provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "pods"}, Namespaced: true, Metric: "some_time"},
provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "namespaces"}, Namespaced: false, Metric: "some_time"},
provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "pods"}, Namespaced: true, Metric: "some_usage"},
provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "namespaces"}, Namespaced: false, Metric: "some_usage"},
provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "services"}, Namespaced: true, Metric: "ingress_hits"},
provider.CustomMetricInfo{GroupResource: schema.GroupResource{Group: "extensions", Resource: "ingresses"}, Namespaced: true, Metric: "ingress_hits"},
provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "pods"}, Namespaced: true, Metric: "ingress_hits"},
provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "namespaces"}, Namespaced: false, Metric: "ingress_hits"},
provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "services"}, Namespaced: true, Metric: "service_proxy_packets"},
provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "namespaces"}, Namespaced: false, Metric: "service_proxy_packets"},
provider.CustomMetricInfo{GroupResource: schema.GroupResource{Group: "extensions", Resource: "deployments"}, Namespaced: true, Metric: "work_queue_wait"},
provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "namespaces"}, Namespaced: false, Metric: "work_queue_wait"},
provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "nodes"}, Namespaced: false, Metric: "node_gigawatts"},
provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "persistentvolumes"}, Namespaced: false, Metric: "volume_claims"},
provider.CustomMetricInfo{GroupResource: schema.GroupResource{Resource: "nodes"}, Namespaced: false, Metric: "node_fan"},
))
})
})

View file

@ -21,11 +21,12 @@ import (
"fmt"
"time"
pmodel "github.com/prometheus/common/model"
"k8s.io/klog/v2"
prom "github.com/directxman12/k8s-prometheus-adapter/pkg/client"
"github.com/directxman12/k8s-prometheus-adapter/pkg/naming"
prom "sigs.k8s.io/prometheus-adapter/pkg/client"
"sigs.k8s.io/prometheus-adapter/pkg/naming"
pmodel "github.com/prometheus/common/model"
)
// Runnable represents something that can be run until told to stop.
@ -99,7 +100,7 @@ func (l *basicMetricLister) ListAllMetrics() (MetricUpdateResult, error) {
}
selectors[sel] = struct{}{}
go func() {
series, err := l.promClient.Series(context.TODO(), pmodel.Interval{startTime, 0}, sel)
series, err := l.promClient.Series(context.TODO(), pmodel.Interval{Start: startTime, End: 0}, sel)
if err != nil {
errs <- fmt.Errorf("unable to fetch metrics for query %q: %v", sel, err)
return


@ -16,12 +16,13 @@ package provider
import (
"sync"
"github.com/kubernetes-sigs/custom-metrics-apiserver/pkg/provider"
"k8s.io/apimachinery/pkg/labels"
"k8s.io/klog/v2"
prom "github.com/directxman12/k8s-prometheus-adapter/pkg/client"
"github.com/directxman12/k8s-prometheus-adapter/pkg/naming"
"sigs.k8s.io/custom-metrics-apiserver/pkg/provider"
prom "sigs.k8s.io/prometheus-adapter/pkg/client"
"sigs.k8s.io/prometheus-adapter/pkg/naming"
)
// ExternalSeriesRegistry acts as the top-level converter for transforming Kubernetes requests
@ -102,7 +103,6 @@ func (r *externalSeriesRegistry) filterAndStoreMetrics(result MetricUpdateResult
r.metrics = apiMetricsCache
r.metricsInfo = rawMetricsCache
}
func (r *externalSeriesRegistry) ListAllMetrics() []provider.ExternalMetricInfo {


@ -17,12 +17,15 @@ import (
"errors"
"fmt"
prom "github.com/directxman12/k8s-prometheus-adapter/pkg/client"
"github.com/kubernetes-sigs/custom-metrics-apiserver/pkg/provider"
"github.com/prometheus/common/model"
"k8s.io/apimachinery/pkg/api/resource"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/metrics/pkg/apis/external_metrics"
"sigs.k8s.io/custom-metrics-apiserver/pkg/provider"
prom "sigs.k8s.io/prometheus-adapter/pkg/client"
)
// MetricConverter provides a unified interface for converting the results of
@ -58,7 +61,7 @@ func (c *metricConverter) convertSample(info provider.ExternalMetricInfo, sample
singleMetric := external_metrics.ExternalMetricValue{
MetricName: info.Metric,
Timestamp: metav1.Time{
sample.Timestamp.Time(),
Time: sample.Timestamp.Time(),
},
Value: *resource.NewMilliQuantity(int64(sample.Value*1000.0), resource.DecimalSI),
MetricLabels: labels,
@ -130,7 +133,7 @@ func (c *metricConverter) convertScalar(info provider.ExternalMetricInfo, queryR
{
MetricName: info.Metric,
Timestamp: metav1.Time{
toConvert.Timestamp.Time(),
Time: toConvert.Timestamp.Time(),
},
Value: *resource.NewMilliQuantity(int64(toConvert.Value*1000.0), resource.DecimalSI),
},


@ -69,9 +69,9 @@ func (l *periodicMetricLister) updateMetrics() error {
return err
}
//Cache the result.
// Cache the result.
l.mostRecentResult = result
//Let our listeners know we've got new data ready for them.
// Let our listeners know we've got new data ready for them.
l.notifyListeners()
return nil
}
@ -85,5 +85,7 @@ func (l *periodicMetricLister) notifyListeners() {
}
func (l *periodicMetricLister) UpdateNow() {
l.updateMetrics()
if err := l.updateMetrics(); err != nil {
utilruntime.HandleError(err)
}
}


@ -17,7 +17,8 @@ import (
"testing"
"time"
prom "github.com/directxman12/k8s-prometheus-adapter/pkg/client"
prom "sigs.k8s.io/prometheus-adapter/pkg/client"
"github.com/stretchr/testify/require"
)


@ -18,19 +18,18 @@ import (
"fmt"
"time"
"k8s.io/apimachinery/pkg/runtime/schema"
pmodel "github.com/prometheus/common/model"
"k8s.io/klog/v2"
"github.com/kubernetes-sigs/custom-metrics-apiserver/pkg/provider"
apierr "k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/labels"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/klog/v2"
"k8s.io/metrics/pkg/apis/external_metrics"
prom "github.com/directxman12/k8s-prometheus-adapter/pkg/client"
"github.com/directxman12/k8s-prometheus-adapter/pkg/naming"
"sigs.k8s.io/custom-metrics-apiserver/pkg/provider"
prom "sigs.k8s.io/prometheus-adapter/pkg/client"
"sigs.k8s.io/prometheus-adapter/pkg/naming"
)
type externalPrometheusProvider struct {
@ -40,7 +39,7 @@ type externalPrometheusProvider struct {
seriesRegistry ExternalSeriesRegistry
}
func (p *externalPrometheusProvider) GetExternalMetric(namespace string, metricSelector labels.Selector, info provider.ExternalMetricInfo) (*external_metrics.ExternalMetricValueList, error) {
func (p *externalPrometheusProvider) GetExternalMetric(ctx context.Context, namespace string, metricSelector labels.Selector, info provider.ExternalMetricInfo) (*external_metrics.ExternalMetricValueList, error) {
selector, found, err := p.seriesRegistry.QueryForMetric(namespace, info.Metric, metricSelector)
if err != nil {
@ -52,7 +51,7 @@ func (p *externalPrometheusProvider) GetExternalMetric(namespace string, metricS
return nil, provider.NewMetricNotFoundError(p.selectGroupResource(namespace), info.Metric)
}
// Here is where we're making the query, need to be before here xD
queryResults, err := p.promClient.Query(context.TODO(), pmodel.Now(), selector)
queryResults, err := p.promClient.Query(ctx, pmodel.Now(), selector)
if err != nil {
klog.Errorf("unable to fetch metrics from prometheus: %v", err)
@ -78,9 +77,9 @@ func (p *externalPrometheusProvider) selectGroupResource(namespace string) schem
}
// NewExternalPrometheusProvider creates an ExternalMetricsProvider capable of responding to Kubernetes requests for external metric data
func NewExternalPrometheusProvider(promClient prom.Client, namers []naming.MetricNamer, updateInterval time.Duration) (provider.ExternalMetricsProvider, Runnable) {
func NewExternalPrometheusProvider(promClient prom.Client, namers []naming.MetricNamer, updateInterval time.Duration, maxAge time.Duration) (provider.ExternalMetricsProvider, Runnable) {
metricConverter := NewMetricConverter()
basicLister := NewBasicMetricLister(promClient, namers, updateInterval)
basicLister := NewBasicMetricLister(promClient, namers, maxAge)
periodicLister, _ := NewPeriodicMetricLister(basicLister, updateInterval)
seriesRegistry := NewExternalSeriesRegistry(periodicLister)
return &externalPrometheusProvider{


@ -22,7 +22,6 @@ import (
"regexp"
"text/template"
apimeta "k8s.io/apimachinery/pkg/api/meta"
"k8s.io/apimachinery/pkg/runtime/schema"
pmodel "github.com/prometheus/common/model"
@ -34,7 +33,6 @@ type labelGroupResExtractor struct {
resourceInd int
groupInd *int
mapper apimeta.RESTMapper
}
// newLabelGroupResExtractor creates a new labelGroupResExtractor for labels whose form
@ -42,7 +40,10 @@ type labelGroupResExtractor struct {
// so anything in the template which limits resource or group name length will cause issues.
func newLabelGroupResExtractor(labelTemplate *template.Template) (*labelGroupResExtractor, error) {
labelRegexBuff := new(bytes.Buffer)
if err := labelTemplate.Execute(labelRegexBuff, schema.GroupResource{"(?P<group>.+?)", "(?P<resource>.+?)"}); err != nil {
if err := labelTemplate.Execute(labelRegexBuff, schema.GroupResource{
Group: "(?P<group>.+?)",
Resource: "(?P<resource>.+?)"},
); err != nil {
return nil, fmt.Errorf("unable to convert label template to matcher: %v", err)
}
if labelRegexBuff.Len() == 0 {


@ -24,8 +24,8 @@ import (
"k8s.io/apimachinery/pkg/labels"
"k8s.io/apimachinery/pkg/runtime/schema"
prom "github.com/directxman12/k8s-prometheus-adapter/pkg/client"
"github.com/directxman12/k8s-prometheus-adapter/pkg/config"
prom "sigs.k8s.io/prometheus-adapter/pkg/client"
"sigs.k8s.io/prometheus-adapter/pkg/config"
)
// MetricNamer knows how to convert Prometheus series names and label names to
@ -131,8 +131,6 @@ func (n *metricNamer) QueryForSeries(series string, resource schema.GroupResourc
}
func (n *metricNamer) QueryForExternalSeries(series string, namespace string, metricSelector labels.Selector) (prom.Selector, error) {
//test := prom.Selector()
//return test, nil
return n.metricsQuery.BuildExternal(series, namespace, "", []string{}, metricSelector)
}
@ -155,7 +153,13 @@ func NamersFromConfig(cfg []config.DiscoveryRule, mapper apimeta.RESTMapper) ([]
return nil, err
}
metricsQuery, err := NewMetricsQuery(rule.MetricsQuery, resConv)
// queries are namespaced by default unless the rule specifically disables it
namespaced := true
if rule.Resources.Namespaced != nil {
namespaced = *rule.Resources.Namespaced
}
metricsQuery, err := NewExternalMetricsQuery(rule.MetricsQuery, resConv, namespaced)
if err != nil {
return nil, fmt.Errorf("unable to construct metrics query associated with series query %q: %v", rule.SeriesQuery, err)
}
@ -190,13 +194,14 @@ func NamersFromConfig(cfg []config.DiscoveryRule, mapper apimeta.RESTMapper) ([]
if nameAs == "" {
// check if we have an obvious default
subexpNames := nameMatches.SubexpNames()
if len(subexpNames) == 1 {
switch len(subexpNames) {
case 1:
// no capture groups, use the whole thing
nameAs = "$0"
} else if len(subexpNames) == 2 {
case 2:
// one capture group, use that
nameAs = "$1"
} else {
default:
return nil, fmt.Errorf("must specify an 'as' value for name matcher %q associated with series query %q", rule.Name.Matches, rule.SeriesQuery)
}
}


@ -27,7 +27,7 @@ import (
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apimachinery/pkg/selection"
prom "github.com/directxman12/k8s-prometheus-adapter/pkg/client"
prom "sigs.k8s.io/prometheus-adapter/pkg/client"
)
// MetricsQuery represents a compiled metrics query for some set of
@ -59,6 +59,27 @@ func NewMetricsQuery(queryTemplate string, resourceConverter ResourceConverter)
return &metricsQuery{
resConverter: resourceConverter,
template: templ,
namespaced: true,
}, nil
}
// NewExternalMetricsQuery constructs a new MetricsQuery by compiling the given Go template.
// The delimiters on the template are `<<` and `>>`, and it may use the following fields:
// - Series: the series in question
// - LabelMatchers: a pre-stringified form of the label matchers for the resources in the query
// - LabelMatchersByName: the raw map-form of the above matchers
// - GroupBy: the group-by clause to use for the resources in the query (stringified)
// - GroupBySlice: the raw slice form of the above group-by clause
func NewExternalMetricsQuery(queryTemplate string, resourceConverter ResourceConverter, namespaced bool) (MetricsQuery, error) {
templ, err := template.New("metrics-query").Delims("<<", ">>").Parse(queryTemplate)
if err != nil {
return nil, fmt.Errorf("unable to parse metrics query template %q: %v", queryTemplate, err)
}
return &metricsQuery{
resConverter: resourceConverter,
template: templ,
namespaced: namespaced,
}, nil
}
@ -68,6 +89,7 @@ func NewMetricsQuery(queryTemplate string, resourceConverter ResourceConverter)
type metricsQuery struct {
resConverter ResourceConverter
template *template.Template
namespaced bool
}
// queryTemplateArgs contains the arguments for the template used in metricsQuery.
@ -150,7 +172,7 @@ func (q *metricsQuery) BuildExternal(seriesName string, namespace string, groupB
// Build up the query parts from the selector.
queryParts = append(queryParts, q.createQueryPartsFromSelector(metricSelector)...)
if namespace != "" {
if q.namespaced && namespace != "" {
namespaceLbl, err := q.resConverter.LabelForResource(NsGroupResource)
if err != nil {
return "", err
@ -261,9 +283,8 @@ func (q *metricsQuery) processQueryParts(queryParts []queryPart) ([]string, map[
}
func (q *metricsQuery) selectMatcher(operator selection.Operator, values []string) (func(string, string) string, error) {
numValues := len(values)
if numValues == 0 {
switch len(values) {
case 0:
switch operator {
case selection.Exists:
return prom.LabelNeq, nil
@ -272,7 +293,7 @@ func (q *metricsQuery) selectMatcher(operator selection.Operator, values []strin
case selection.Equals, selection.DoubleEquals, selection.NotEquals, selection.In, selection.NotIn:
return nil, ErrMalformedQuery
}
} else if numValues == 1 {
case 1:
switch operator {
case selection.Equals, selection.DoubleEquals:
return prom.LabelEq, nil
@ -283,7 +304,7 @@ func (q *metricsQuery) selectMatcher(operator selection.Operator, values []strin
case selection.DoesNotExist, selection.NotIn:
return prom.LabelNotMatches, nil
}
} else {
default:
// Since labels can only have one value, providing multiple
// values results in a regex match, even if that's not what the user
// asked for.
@ -299,8 +320,8 @@ func (q *metricsQuery) selectMatcher(operator selection.Operator, values []strin
}
func (q *metricsQuery) selectTargetValue(operator selection.Operator, values []string) (string, error) {
numValues := len(values)
if numValues == 0 {
switch len(values) {
case 0:
switch operator {
case selection.Exists, selection.DoesNotExist:
// Return an empty string when values are equal to 0
@ -312,7 +333,7 @@ func (q *metricsQuery) selectTargetValue(operator selection.Operator, values []s
case selection.Equals, selection.DoubleEquals, selection.NotEquals, selection.In, selection.NotIn:
return "", ErrMalformedQuery
}
} else if numValues == 1 {
case 1:
switch operator {
case selection.Equals, selection.DoubleEquals, selection.NotEquals, selection.In, selection.NotIn:
// Pass the value through as-is.
@ -325,7 +346,7 @@ func (q *metricsQuery) selectTargetValue(operator selection.Operator, values []s
case selection.Exists, selection.DoesNotExist:
return "", ErrQueryUnsupportedValues
}
} else {
default:
switch operator {
case selection.Equals, selection.DoubleEquals, selection.NotEquals, selection.In, selection.NotIn:
// Pass the value through as-is.


@ -20,11 +20,13 @@ import (
"fmt"
"testing"
prom "github.com/directxman12/k8s-prometheus-adapter/pkg/client"
pmodel "github.com/prometheus/common/model"
labels "k8s.io/apimachinery/pkg/labels"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apimachinery/pkg/selection"
prom "sigs.k8s.io/prometheus-adapter/pkg/client"
pmodel "github.com/prometheus/common/model"
)
type resourceConverterMock struct {
@ -272,7 +274,15 @@ func TestBuildSelector(t *testing.T) {
func TestBuildExternalSelector(t *testing.T) {
mustNewQuery := func(queryTemplate string) MetricsQuery {
mq, err := NewMetricsQuery(queryTemplate, &resourceConverterMock{true})
mq, err := NewExternalMetricsQuery(queryTemplate, &resourceConverterMock{true}, true)
if err != nil {
t.Fatal(err)
}
return mq
}
mustNewNonNamespacedQuery := func(queryTemplate string) MetricsQuery {
mq, err := NewExternalMetricsQuery(queryTemplate, &resourceConverterMock{true}, false)
if err != nil {
t.Fatal(err)
}
@ -348,6 +358,19 @@ func TestBuildExternalSelector(t *testing.T) {
hasSelector("default [foo bar]"),
),
},
{
name: "multiple GroupBySlice values with namespace disabled",
mq: mustNewNonNamespacedQuery(`<<index .LabelValuesByName "namespaces">> <<.GroupBySlice>>`),
namespace: "default",
groupBySlice: []string{"foo", "bar"},
metricSelector: labels.NewSelector(),
check: checks(
hasError(nil),
hasSelector(" [foo bar]"),
),
},
{
name: "single LabelMatchers value",


@ -21,7 +21,7 @@ import (
"github.com/stretchr/testify/require"
"github.com/directxman12/k8s-prometheus-adapter/pkg/config"
"sigs.k8s.io/prometheus-adapter/pkg/config"
)
func TestReMatcherIs(t *testing.T) {


@ -23,14 +23,16 @@ import (
"sync"
"text/template"
pmodel "github.com/prometheus/common/model"
apimeta "k8s.io/apimachinery/pkg/api/meta"
"k8s.io/apimachinery/pkg/runtime/schema"
prom "github.com/directxman12/k8s-prometheus-adapter/pkg/client"
"github.com/directxman12/k8s-prometheus-adapter/pkg/config"
"github.com/kubernetes-sigs/custom-metrics-apiserver/pkg/provider"
pmodel "github.com/prometheus/common/model"
"k8s.io/klog/v2"
"sigs.k8s.io/custom-metrics-apiserver/pkg/provider"
prom "sigs.k8s.io/prometheus-adapter/pkg/client"
"sigs.k8s.io/prometheus-adapter/pkg/config"
)
var (


@ -19,28 +19,30 @@ package resourceprovider
import (
"context"
"fmt"
"math"
"sync"
"time"
corev1 "k8s.io/api/core/v1"
apimeta "k8s.io/apimachinery/pkg/api/meta"
"k8s.io/apimachinery/pkg/api/resource"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/labels"
"k8s.io/apimachinery/pkg/runtime/schema"
apitypes "k8s.io/apimachinery/pkg/types"
"k8s.io/klog/v2"
metrics "k8s.io/metrics/pkg/apis/metrics"
"sigs.k8s.io/metrics-server/pkg/api"
"github.com/directxman12/k8s-prometheus-adapter/pkg/client"
"github.com/directxman12/k8s-prometheus-adapter/pkg/config"
"github.com/directxman12/k8s-prometheus-adapter/pkg/naming"
"sigs.k8s.io/prometheus-adapter/pkg/client"
"sigs.k8s.io/prometheus-adapter/pkg/config"
"sigs.k8s.io/prometheus-adapter/pkg/naming"
pmodel "github.com/prometheus/common/model"
)
var (
nodeResource = schema.GroupResource{Resource: "nodes"}
nsResource = schema.GroupResource{Resource: "ns"}
podResource = schema.GroupResource{Resource: "pods"}
)
@ -69,7 +71,6 @@ func newResourceQuery(cfg config.ResourceRule, mapper apimeta.RESTMapper) (resou
nodeQuery: nodeQuery,
containerLabel: cfg.ContainerLabel,
}, nil
}
// resourceQuery represents query information for querying resource metrics for some resource,
@ -119,10 +120,12 @@ type nsQueryResults struct {
err error
}
// GetContainerMetrics implements the api.MetricsProvider interface. It may return nil, nil, nil.
func (p *resourceProvider) GetContainerMetrics(pods ...apitypes.NamespacedName) ([]api.TimeInfo, [][]metrics.ContainerMetrics) {
// GetPodMetrics implements the api.MetricsProvider interface.
func (p *resourceProvider) GetPodMetrics(pods ...*metav1.PartialObjectMetadata) ([]metrics.PodMetrics, error) {
resMetrics := make([]metrics.PodMetrics, 0, len(pods))
if len(pods) == 0 {
return nil, nil
return resMetrics, nil
}
// TODO(directxman12): figure out how well this scales if we go to list 1000+ pods
@ -162,39 +165,40 @@ func (p *resourceProvider) GetContainerMetrics(pods ...apitypes.NamespacedName)
// convert the unorganized per-container results into results grouped
// together by namespace, pod, and container
resTimes := make([]api.TimeInfo, len(pods))
resMetrics := make([][]metrics.ContainerMetrics, len(pods))
for i, pod := range pods {
p.assignForPod(pod, resultsByNs, &resMetrics[i], &resTimes[i])
for _, pod := range pods {
podMetric := p.assignForPod(pod, resultsByNs)
if podMetric != nil {
resMetrics = append(resMetrics, *podMetric)
}
}
return resTimes, resMetrics
return resMetrics, nil
}
// assignForPod takes the resource metrics for all containers in the given pod
// from resultsByNs, and places them in MetricsProvider response format in resMetrics,
// also recording the earliest time in resTime. It will return without operating if
// any data is missing.
func (p *resourceProvider) assignForPod(pod apitypes.NamespacedName, resultsByNs map[string]nsQueryResults, resMetrics *[]metrics.ContainerMetrics, resTime *api.TimeInfo) {
func (p *resourceProvider) assignForPod(pod *metav1.PartialObjectMetadata, resultsByNs map[string]nsQueryResults) *metrics.PodMetrics {
// check to make sure everything is present
nsRes, nsResPresent := resultsByNs[pod.Namespace]
if !nsResPresent {
klog.Errorf("unable to fetch metrics for pods in namespace %q, skipping pod %s", pod.Namespace, pod.String())
return
return nil
}
cpuRes, hasResult := nsRes.cpu[pod.Name]
if !hasResult {
klog.Errorf("unable to fetch CPU metrics for pod %s, skipping", pod.String())
return
return nil
}
memRes, hasResult := nsRes.mem[pod.Name]
if !hasResult {
klog.Errorf("unable to fetch memory metrics for pod %s, skipping", pod.String())
return
return nil
}
earliestTs := pmodel.Latest
containerMetrics := make(map[string]metrics.ContainerMetrics)
earliestTS := pmodel.Latest
// organize all the CPU results
for _, cpu := range cpuRes {
@ -206,8 +210,8 @@ func (p *resourceProvider) assignForPod(pod apitypes.NamespacedName, resultsByNs
}
}
containerMetrics[containerName].Usage[corev1.ResourceCPU] = *resource.NewMilliQuantity(int64(cpu.Value*1000.0), resource.DecimalSI)
if cpu.Timestamp.Before(earliestTs) {
earliestTs = cpu.Timestamp
if cpu.Timestamp.Before(earliestTS) {
earliestTS = cpu.Timestamp
}
}
@ -221,8 +225,8 @@ func (p *resourceProvider) assignForPod(pod apitypes.NamespacedName, resultsByNs
}
}
containerMetrics[containerName].Usage[corev1.ResourceMemory] = *resource.NewMilliQuantity(int64(mem.Value*1000.0), resource.BinarySI)
if mem.Timestamp.Before(earliestTs) {
earliestTs = mem.Timestamp
if mem.Timestamp.Before(earliestTS) {
earliestTS = mem.Timestamp
}
}
@ -237,40 +241,50 @@ func (p *resourceProvider) assignForPod(pod apitypes.NamespacedName, resultsByNs
}
}
// store the time in the final format
*resTime = api.TimeInfo{
Timestamp: earliestTs.Time(),
Window: p.window,
podMetric := &metrics.PodMetrics{
ObjectMeta: metav1.ObjectMeta{
Name: pod.Name,
Namespace: pod.Namespace,
Labels: pod.Labels,
CreationTimestamp: metav1.Now(),
},
// store the time in the final format
Timestamp: metav1.NewTime(earliestTS.Time()),
Window: metav1.Duration{Duration: p.window},
}
// store the container metrics in the final format
containerMetricsList := make([]metrics.ContainerMetrics, 0, len(containerMetrics))
podMetric.Containers = make([]metrics.ContainerMetrics, 0, len(containerMetrics))
for _, containerMetric := range containerMetrics {
containerMetricsList = append(containerMetricsList, containerMetric)
podMetric.Containers = append(podMetric.Containers, containerMetric)
}
*resMetrics = containerMetricsList
return podMetric
}
// GetNodeMetrics implements the api.MetricsProvider interface. It may return nil, nil.
func (p *resourceProvider) GetNodeMetrics(nodes ...string) ([]api.TimeInfo, []corev1.ResourceList) {
// GetNodeMetrics implements the api.MetricsProvider interface.
func (p *resourceProvider) GetNodeMetrics(nodes ...*corev1.Node) ([]metrics.NodeMetrics, error) {
resMetrics := make([]metrics.NodeMetrics, 0, len(nodes))
if len(nodes) == 0 {
return nil, nil
return resMetrics, nil
}
now := pmodel.Now()
// run the actual query
qRes := p.queryBoth(now, nodeResource, "", nodes...)
if qRes.err != nil {
klog.Errorf("failed querying node metrics: %v", qRes.err)
return nil, nil
nodeNames := make([]string, 0, len(nodes))
for _, node := range nodes {
nodeNames = append(nodeNames, node.Name)
}
resTimes := make([]api.TimeInfo, len(nodes))
resMetrics := make([]corev1.ResourceList, len(nodes))
// run the actual query
qRes := p.queryBoth(now, nodeResource, "", nodeNames...)
if qRes.err != nil {
klog.Errorf("failed querying node metrics: %v", qRes.err)
return resMetrics, nil
}
// organize the results
for i, nodeName := range nodes {
for i, nodeName := range nodeNames {
// skip if any data is missing
rawCPUs, gotResult := qRes.cpu[nodeName]
if !gotResult {
@ -286,28 +300,30 @@ func (p *resourceProvider) GetNodeMetrics(nodes ...string) ([]api.TimeInfo, []co
rawMem := rawMems[0]
rawCPU := rawCPUs[0]
// store the results
resMetrics[i] = corev1.ResourceList{
corev1.ResourceCPU: *resource.NewMilliQuantity(int64(rawCPU.Value*1000.0), resource.DecimalSI),
corev1.ResourceMemory: *resource.NewMilliQuantity(int64(rawMem.Value*1000.0), resource.BinarySI),
}
// use the earliest timestamp available (in order to be conservative
// when determining if metrics are tainted by startup)
if rawMem.Timestamp.Before(rawCPU.Timestamp) {
resTimes[i] = api.TimeInfo{
Timestamp: rawMem.Timestamp.Time(),
Window: p.window,
}
} else {
resTimes[i] = api.TimeInfo{
Timestamp: rawCPU.Timestamp.Time(),
Window: 1 * time.Minute,
}
ts := rawCPU.Timestamp.Time()
if ts.After(rawMem.Timestamp.Time()) {
ts = rawMem.Timestamp.Time()
}
// store the results
resMetrics = append(resMetrics, metrics.NodeMetrics{
ObjectMeta: metav1.ObjectMeta{
Name: nodes[i].Name,
Labels: nodes[i].Labels,
CreationTimestamp: metav1.Now(),
},
Usage: corev1.ResourceList{
corev1.ResourceCPU: *resource.NewMilliQuantity(int64(rawCPU.Value*1000.0), resource.DecimalSI),
corev1.ResourceMemory: *resource.NewMilliQuantity(int64(rawMem.Value*1000.0), resource.BinarySI),
},
Timestamp: metav1.NewTime(ts),
Window: metav1.Duration{Duration: p.window},
})
}
return resTimes, resMetrics
return resMetrics, nil
}
// queryBoth queries for both CPU and memory metrics on the given
@ -387,13 +403,17 @@ func (p *resourceProvider) runQuery(now pmodel.Time, queryInfo resourceQuery, re
// associate the results back to each given pod or node
res := make(queryResults, len(*rawRes.Vector))
for _, val := range *rawRes.Vector {
if val == nil {
// skip empty values
for _, sample := range *rawRes.Vector {
// skip empty samples
if sample == nil {
continue
}
resKey := string(val.Metric[resourceLbl])
res[resKey] = append(res[resKey], val)
// replace NaN and negative values by zero
if math.IsNaN(float64(sample.Value)) || sample.Value < 0 {
sample.Value = 0
}
resKey := string(sample.Metric[resourceLbl])
res[resKey] = append(res[resKey], sample)
}
return res, nil


@ -17,22 +17,25 @@ limitations under the License.
package resourceprovider
import (
"math"
"time"
. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"
corev1 "k8s.io/api/core/v1"
apimeta "k8s.io/apimachinery/pkg/api/meta"
"k8s.io/apimachinery/pkg/api/resource"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/labels"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apimachinery/pkg/types"
"k8s.io/metrics/pkg/apis/metrics"
"sigs.k8s.io/metrics-server/pkg/api"
config "github.com/directxman12/k8s-prometheus-adapter/cmd/config-gen/utils"
prom "github.com/directxman12/k8s-prometheus-adapter/pkg/client"
fakeprom "github.com/directxman12/k8s-prometheus-adapter/pkg/client/fake"
config "sigs.k8s.io/prometheus-adapter/cmd/config-gen/utils"
prom "sigs.k8s.io/prometheus-adapter/pkg/client"
fakeprom "sigs.k8s.io/prometheus-adapter/pkg/client/fake"
. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"
pmodel "github.com/prometheus/common/model"
)
@ -49,9 +52,9 @@ func restMapper() apimeta.RESTMapper {
func buildPodSample(namespace, pod, container string, val float64, ts int64) *pmodel.Sample {
return &pmodel.Sample{
Metric: pmodel.Metric{
"namespace": pmodel.LabelValue(namespace),
"pod_name": pmodel.LabelValue(pod),
"container_name": pmodel.LabelValue(container),
"namespace": pmodel.LabelValue(namespace),
"pod": pmodel.LabelValue(pod),
"container": pmodel.LabelValue(container),
},
Value: pmodel.SampleValue(val),
Timestamp: pmodel.Time(ts),
@ -119,10 +122,10 @@ var _ = Describe("Resource Metrics Provider", func() {
})
It("should be able to list metrics pods across different namespaces", func() {
pods := []types.NamespacedName{
{Namespace: "some-ns", Name: "pod1"},
{Namespace: "some-ns", Name: "pod3"},
{Namespace: "other-ns", Name: "pod27"},
pods := []*metav1.PartialObjectMetadata{
{ObjectMeta: metav1.ObjectMeta{Namespace: "some-ns", Name: "pod1"}},
{ObjectMeta: metav1.ObjectMeta{Namespace: "some-ns", Name: "pod3"}},
{ObjectMeta: metav1.ObjectMeta{Namespace: "other-ns", Name: "pod27"}},
}
fakeProm.QueryResults = map[prom.Selector]prom.QueryResult{
mustBuild(cpuQueries.contQuery.Build("", podResource, "some-ns", []string{cpuQueries.containerLabel}, labels.Everything(), "pod1", "pod3")): buildQueryRes("container_cpu_usage_seconds_total",
@ -146,27 +149,34 @@ var _ = Describe("Resource Metrics Provider", func() {
}
By("querying for metrics for some pods")
times, metricVals := prov.GetContainerMetrics(pods...)
podMetrics, err := prov.GetPodMetrics(pods...)
Expect(err).NotTo(HaveOccurred())
By("verifying that metrics have been fetched for all the pods")
Expect(podMetrics).To(HaveLen(3))
By("verifying that the reported times for each are the earliest times for each pod")
Expect(times).To(Equal([]api.TimeInfo{
{Timestamp: pmodel.Time(10).Time(), Window: 1 * time.Minute},
{Timestamp: pmodel.Time(10).Time(), Window: 1 * time.Minute},
{Timestamp: pmodel.Time(270).Time(), Window: 1 * time.Minute},
}))
Expect(podMetrics[0].Timestamp.Time).To(Equal(pmodel.Time(10).Time()))
Expect(podMetrics[0].Window.Duration).To(Equal(time.Minute))
Expect(podMetrics[1].Timestamp.Time).To(Equal(pmodel.Time(10).Time()))
Expect(podMetrics[1].Window.Duration).To(Equal(time.Minute))
Expect(podMetrics[2].Timestamp.Time).To(Equal(pmodel.Time(270).Time()))
Expect(podMetrics[2].Window.Duration).To(Equal(time.Minute))
By("verifying that the right metrics were fetched")
Expect(metricVals).To(HaveLen(3))
Expect(metricVals[0]).To(ConsistOf(
Expect(podMetrics).To(HaveLen(3))
Expect(podMetrics[0].Containers).To(ConsistOf(
metrics.ContainerMetrics{Name: "cont1", Usage: buildResList(1100.0, 3100.0)},
metrics.ContainerMetrics{Name: "cont2", Usage: buildResList(1110.0, 3110.0)},
))
Expect(metricVals[1]).To(ConsistOf(
Expect(podMetrics[1].Containers).To(ConsistOf(
metrics.ContainerMetrics{Name: "cont1", Usage: buildResList(1300.0, 3300.0)},
metrics.ContainerMetrics{Name: "cont2", Usage: buildResList(1310.0, 3310.0)},
))
Expect(metricVals[2]).To(ConsistOf(
Expect(podMetrics[2].Containers).To(ConsistOf(
metrics.ContainerMetrics{Name: "cont1", Usage: buildResList(2200.0, 4200.0)},
))
})
@ -184,22 +194,65 @@ var _ = Describe("Resource Metrics Provider", func() {
}
By("querying for metrics for some pods, one of which is missing")
times, metricVals := prov.GetContainerMetrics(
types.NamespacedName{Namespace: "some-ns", Name: "pod1"},
types.NamespacedName{Namespace: "some-ns", Name: "pod-nonexistant"},
podMetrics, err := prov.GetPodMetrics(
&metav1.PartialObjectMetadata{ObjectMeta: metav1.ObjectMeta{Namespace: "some-ns", Name: "pod1"}},
&metav1.PartialObjectMetadata{ObjectMeta: metav1.ObjectMeta{Namespace: "some-ns", Name: "pod-nonexistant"}},
)
Expect(err).NotTo(HaveOccurred())
By("verifying that the missing pod had nil metrics")
Expect(metricVals).To(HaveLen(2))
Expect(metricVals[1]).To(BeNil())
By("verifying that the missing pod had no metrics")
Expect(podMetrics).To(HaveLen(1))
By("verifying that the rest of time metrics and times are correct")
Expect(metricVals[0]).To(ConsistOf(
Expect(podMetrics[0].Timestamp.Time).To(Equal(pmodel.Time(10).Time()))
Expect(podMetrics[0].Window.Duration).To(Equal(time.Minute))
Expect(podMetrics[0].Containers).To(ConsistOf(
metrics.ContainerMetrics{Name: "cont1", Usage: buildResList(1100.0, 3100.0)},
metrics.ContainerMetrics{Name: "cont2", Usage: buildResList(1110.0, 3110.0)},
))
Expect(times).To(HaveLen(2))
Expect(times[0]).To(Equal(api.TimeInfo{Timestamp: pmodel.Time(10).Time(), Window: 1 * time.Minute}))
})
It("should return metrics of value zero when pod metrics have NaN or negative values", func() {
fakeProm.QueryResults = map[prom.Selector]prom.QueryResult{
mustBuild(cpuQueries.contQuery.Build("", podResource, "some-ns", []string{cpuQueries.containerLabel}, labels.Everything(), "pod1", "pod3")): buildQueryRes("container_cpu_usage_seconds_total",
buildPodSample("some-ns", "pod1", "cont1", -1100.0, 10),
buildPodSample("some-ns", "pod1", "cont2", math.NaN(), 20),
buildPodSample("some-ns", "pod3", "cont1", -1300.0, 10),
buildPodSample("some-ns", "pod3", "cont2", 1310.0, 20),
),
mustBuild(memQueries.contQuery.Build("", podResource, "some-ns", []string{cpuQueries.containerLabel}, labels.Everything(), "pod1", "pod3")): buildQueryRes("container_memory_working_set_bytes",
buildPodSample("some-ns", "pod1", "cont1", 3100.0, 11),
buildPodSample("some-ns", "pod1", "cont2", -3110.0, 21),
buildPodSample("some-ns", "pod3", "cont1", math.NaN(), 11),
buildPodSample("some-ns", "pod3", "cont2", -3310.0, 21),
),
}
By("querying for metrics for some pods")
podMetrics, err := prov.GetPodMetrics(
&metav1.PartialObjectMetadata{ObjectMeta: metav1.ObjectMeta{Namespace: "some-ns", Name: "pod1"}},
&metav1.PartialObjectMetadata{ObjectMeta: metav1.ObjectMeta{Namespace: "some-ns", Name: "pod3"}},
)
Expect(err).NotTo(HaveOccurred())
By("verifying that metrics have been fetched for all the pods")
Expect(podMetrics).To(HaveLen(2))
By("verifying that the reported times for each are the earliest times for each pod")
Expect(podMetrics[0].Timestamp.Time).To(Equal(pmodel.Time(10).Time()))
Expect(podMetrics[0].Window.Duration).To(Equal(time.Minute))
Expect(podMetrics[1].Timestamp.Time).To(Equal(pmodel.Time(10).Time()))
Expect(podMetrics[1].Window.Duration).To(Equal(time.Minute))
By("verifying that NaN and negative values were replaced by zero")
Expect(podMetrics[0].Containers).To(ConsistOf(
metrics.ContainerMetrics{Name: "cont1", Usage: buildResList(0, 3100.0)},
metrics.ContainerMetrics{Name: "cont2", Usage: buildResList(0, 0)},
))
Expect(podMetrics[1].Containers).To(ConsistOf(
metrics.ContainerMetrics{Name: "cont1", Usage: buildResList(0, 0)},
metrics.ContainerMetrics{Name: "cont2", Usage: buildResList(1310.0, 0)},
))
})
It("should be able to list metrics for nodes", func() {
@ -214,19 +267,24 @@ var _ = Describe("Resource Metrics Provider", func() {
),
}
By("querying for metrics for some nodes")
times, metricVals := prov.GetNodeMetrics("node1", "node2")
nodeMetrics, err := prov.GetNodeMetrics(
&corev1.Node{ObjectMeta: metav1.ObjectMeta{Name: "node1"}},
&corev1.Node{ObjectMeta: metav1.ObjectMeta{Name: "node2"}},
)
Expect(err).NotTo(HaveOccurred())
By("verifying that the reported times for each are the earliest times for each pod")
Expect(times).To(Equal([]api.TimeInfo{
{Timestamp: pmodel.Time(10).Time(), Window: 1 * time.Minute},
{Timestamp: pmodel.Time(12).Time(), Window: 1 * time.Minute},
}))
By("verifying that metrics have been fetched for all the nodes")
Expect(nodeMetrics).To(HaveLen(2))
By("verifying that the reported times for each are the earliest times for each node")
Expect(nodeMetrics[0].Timestamp.Time).To(Equal(pmodel.Time(10).Time()))
Expect(nodeMetrics[0].Window.Duration).To(Equal(time.Minute))
Expect(nodeMetrics[1].Timestamp.Time).To(Equal(pmodel.Time(12).Time()))
Expect(nodeMetrics[1].Window.Duration).To(Equal(time.Minute))
By("verifying that the right metrics were fetched")
Expect(metricVals).To(Equal([]corev1.ResourceList{
buildResList(1100.0, 2100.0),
buildResList(1200.0, 2200.0),
}))
Expect(nodeMetrics[0].Usage).To(Equal(buildResList(1100.0, 2100.0)))
Expect(nodeMetrics[1].Usage).To(Equal(buildResList(1200.0, 2200.0)))
})
It("should return nil metrics for missing nodes, but still return partial results", func() {
@ -241,22 +299,54 @@ var _ = Describe("Resource Metrics Provider", func() {
),
}
By("querying for metrics for some nodes, one of which is missing")
times, metricVals := prov.GetNodeMetrics("node1", "node2", "node3")
nodeMetrics, err := prov.GetNodeMetrics(
&corev1.Node{ObjectMeta: metav1.ObjectMeta{Name: "node1"}},
&corev1.Node{ObjectMeta: metav1.ObjectMeta{Name: "node2"}},
&corev1.Node{ObjectMeta: metav1.ObjectMeta{Name: "node3"}},
)
Expect(err).NotTo(HaveOccurred())
By("verifying that the missing pod had nil metrics")
Expect(metricVals).To(HaveLen(3))
Expect(metricVals[2]).To(BeNil())
By("verifying that the missing node had no metrics")
Expect(nodeMetrics).To(HaveLen(2))
By("verifying that the rest of time metrics and times are correct")
Expect(metricVals).To(Equal([]corev1.ResourceList{
buildResList(1100.0, 2100.0),
buildResList(1200.0, 2200.0),
nil,
}))
Expect(times).To(Equal([]api.TimeInfo{
{Timestamp: pmodel.Time(10).Time(), Window: 1 * time.Minute},
{Timestamp: pmodel.Time(12).Time(), Window: 1 * time.Minute},
{},
}))
Expect(nodeMetrics[0].Usage).To(Equal(buildResList(1100.0, 2100.0)))
Expect(nodeMetrics[0].Timestamp.Time).To(Equal(pmodel.Time(10).Time()))
Expect(nodeMetrics[0].Window.Duration).To(Equal(time.Minute))
Expect(nodeMetrics[1].Usage).To(Equal(buildResList(1200.0, 2200.0)))
Expect(nodeMetrics[1].Timestamp.Time).To(Equal(pmodel.Time(12).Time()))
Expect(nodeMetrics[1].Window.Duration).To(Equal(time.Minute))
})
It("should return metrics of value zero when node metrics have NaN or negative values", func() {
fakeProm.QueryResults = map[prom.Selector]prom.QueryResult{
mustBuild(cpuQueries.nodeQuery.Build("", nodeResource, "", nil, labels.Everything(), "node1", "node2")): buildQueryRes("container_cpu_usage_seconds_total",
buildNodeSample("node1", -1100.0, 10),
buildNodeSample("node2", 1200.0, 14),
),
mustBuild(memQueries.nodeQuery.Build("", nodeResource, "", nil, labels.Everything(), "node1", "node2")): buildQueryRes("container_memory_working_set_bytes",
buildNodeSample("node1", 2100.0, 11),
buildNodeSample("node2", math.NaN(), 12),
),
}
By("querying for metrics for some nodes")
nodeMetrics, err := prov.GetNodeMetrics(
&corev1.Node{ObjectMeta: metav1.ObjectMeta{Name: "node1"}},
&corev1.Node{ObjectMeta: metav1.ObjectMeta{Name: "node2"}},
)
Expect(err).NotTo(HaveOccurred())
By("verifying that metrics have been fetched for all the nodes")
Expect(nodeMetrics).To(HaveLen(2))
By("verifying that the reported times for each are the earliest times for each node")
Expect(nodeMetrics[0].Timestamp.Time).To(Equal(pmodel.Time(10).Time()))
Expect(nodeMetrics[0].Window.Duration).To(Equal(time.Minute))
Expect(nodeMetrics[1].Timestamp.Time).To(Equal(pmodel.Time(12).Time()))
Expect(nodeMetrics[1].Window.Duration).To(Equal(time.Minute))
By("verifying that NaN and negative values were replaced by zero")
Expect(nodeMetrics[0].Usage).To(Equal(buildResList(0, 2100.0)))
Expect(nodeMetrics[1].Usage).To(Equal(buildResList(1200.0, 0)))
})
})

test/README.md (new file)

@ -0,0 +1,41 @@
# End-to-end tests
## With [kind](https://kind.sigs.k8s.io/)
[`kind`](https://kind.sigs.k8s.io/) and `kubectl` are automatically downloaded
unless `SKIP_INSTALL=true` is set.
A `kind` cluster is automatically created before the tests, and deleted after
the tests.
The `prometheus-adapter` container image is built locally and imported
into the cluster.
```bash
KIND_E2E=true make test-e2e
```
## With an existing Kubernetes cluster
If you already have a Kubernetes cluster, you can use:
```bash
KUBECONFIG="/path/to/kube/config" REGISTRY="my.registry/prefix" make test-e2e
```
- The cluster should not have a namespace `prometheus-adapter-e2e`.
The namespace will be created and deleted as part of the E2E tests.
- `KUBECONFIG` is the path of the [`kubeconfig` file].
**Optional**, defaults to `${HOME}/.kube/config`
- `REGISTRY` is the image registry where the container image should be pushed.
**Required**.
[`kubeconfig` file]: https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/
## Additional environment variables
These environment variables may also be used (with any non-empty value):
- `SKIP_INSTALL`: skip the installation of `kind` and `kubectl` binaries;
- `SKIP_CLEAN_AFTER`: skip the deletion of resources (`Kind` cluster or
Kubernetes namespace) and of the temporary directory `.e2e`;
- `CLEAN_BEFORE`: clean before running the tests, e.g. if `SKIP_CLEAN_AFTER`
was used on the previous run.
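As a quick illustration of how the script resolves its inputs, the snippet below mirrors the variable-defaulting pattern used by `test/run-e2e-tests.sh` (the names come from that script; the echoed values are only for demonstration):

```shell
#!/usr/bin/env sh
# Sketch of the defaulting applied by test/run-e2e-tests.sh:
# KUBECONFIG is optional and falls back to the usual location;
# the test namespace is fixed and is created/deleted by the tests.
KUBECONFIG=${KUBECONFIG:-"${HOME}/.kube/config"}
NAMESPACE="prometheus-adapter-e2e"

echo "using kubeconfig: ${KUBECONFIG}"
echo "test namespace: ${NAMESPACE}"
```

With this pattern, exporting `KUBECONFIG` before invoking `make test-e2e` overrides the default, while leaving it unset still yields a working configuration on a typical workstation.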

test/e2e/e2e_test.go (new file)

@ -0,0 +1,213 @@
/*
Copyright 2022 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package e2e
import (
"context"
"fmt"
"log"
"os"
"testing"
"time"
monitoringv1 "github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1"
monitoring "github.com/prometheus-operator/prometheus-operator/pkg/client/versioned"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/util/wait"
clientset "k8s.io/client-go/kubernetes"
"k8s.io/client-go/tools/clientcmd"
metricsv1beta1 "k8s.io/metrics/pkg/apis/metrics/v1beta1"
metrics "k8s.io/metrics/pkg/client/clientset/versioned"
)
const (
ns = "prometheus-adapter-e2e"
prometheusInstance = "prometheus"
deployment = "prometheus-adapter"
)
var (
client clientset.Interface
promOpClient monitoring.Interface
metricsClient metrics.Interface
)
func TestMain(m *testing.M) {
kubeconfig := os.Getenv("KUBECONFIG")
if len(kubeconfig) == 0 {
log.Fatal("KUBECONFIG not provided")
}
var err error
client, promOpClient, metricsClient, err = initializeClients(kubeconfig)
if err != nil {
log.Fatalf("Cannot create clients: %v", err)
}
ctx := context.Background()
err = waitForPrometheusReady(ctx, ns, prometheusInstance)
if err != nil {
log.Fatalf("Prometheus instance 'prometheus' not ready: %v", err)
}
err = waitForDeploymentReady(ctx, ns, deployment)
if err != nil {
log.Fatalf("Deployment prometheus-adapter not ready: %v", err)
}
exitVal := m.Run()
os.Exit(exitVal)
}
func initializeClients(kubeconfig string) (clientset.Interface, monitoring.Interface, metrics.Interface, error) {
cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
if err != nil {
return nil, nil, nil, fmt.Errorf("error during client configuration: %v", err)
}
clientSet, err := clientset.NewForConfig(cfg)
if err != nil {
return nil, nil, nil, fmt.Errorf("error during client creation: %v", err)
}
promOpClient, err := monitoring.NewForConfig(cfg)
if err != nil {
return nil, nil, nil, fmt.Errorf("error during monitoring client creation: %v", err)
}
metricsClientSet, err := metrics.NewForConfig(cfg)
if err != nil {
return nil, nil, nil, fmt.Errorf("error during metrics client creation: %v", err)
}
return clientSet, promOpClient, metricsClientSet, nil
}
func waitForPrometheusReady(ctx context.Context, namespace string, name string) error {
return wait.PollUntilContextTimeout(ctx, 5*time.Second, 120*time.Second, true, func(ctx context.Context) (bool, error) {
prom, err := promOpClient.MonitoringV1().Prometheuses(ns).Get(ctx, name, metav1.GetOptions{})
if err != nil {
return false, err
}
var reconciled, available *monitoringv1.Condition
for _, condition := range prom.Status.Conditions {
cond := condition
if cond.Type == monitoringv1.Reconciled {
reconciled = &cond
} else if cond.Type == monitoringv1.Available {
available = &cond
}
}
if reconciled == nil {
log.Printf("Prometheus instance '%s': Waiting for reconciliation status...", name)
return false, nil
}
if reconciled.Status != monitoringv1.ConditionTrue {
log.Printf("Prometheus instance '%s': Reconciled = %v. Waiting for reconciliation (reason %s, %q)...", name, reconciled.Status, reconciled.Reason, reconciled.Message)
return false, nil
}
specReplicas := *prom.Spec.Replicas
availableReplicas := prom.Status.AvailableReplicas
if specReplicas != availableReplicas {
log.Printf("Prometheus instance '%s': %v/%v pods are ready. Waiting for all pods to be ready...", name, availableReplicas, specReplicas)
return false, nil
}
if available == nil {
log.Printf("Prometheus instance '%s': Waiting for Available status...", name)
return false, nil
}
if available.Status != monitoringv1.ConditionTrue {
log.Printf("Prometheus instance '%s': Available = %v. Waiting for Available status... (reason %s, %q)", name, available.Status, available.Reason, available.Message)
return false, nil
}
log.Printf("Prometheus instance '%s': Ready.", name)
return true, nil
})
}
func waitForDeploymentReady(ctx context.Context, namespace string, name string) error {
return wait.PollUntilContextTimeout(ctx, 5*time.Second, 30*time.Second, true, func(ctx context.Context) (bool, error) {
deploy, err := client.AppsV1().Deployments(namespace).Get(ctx, name, metav1.GetOptions{})
if err != nil {
return false, err
}
if deploy.Status.ReadyReplicas == *deploy.Spec.Replicas {
log.Printf("Deployment %s: %v/%v pods are ready.", name, deploy.Status.ReadyReplicas, *deploy.Spec.Replicas)
return true, nil
}
log.Printf("Deployment %s: %v/%v pods are ready. Waiting for all pods to be ready...", name, deploy.Status.ReadyReplicas, *deploy.Spec.Replicas)
return false, nil
})
}
func TestNodeMetrics(t *testing.T) {
ctx := context.Background()
var nodeMetrics *metricsv1beta1.NodeMetricsList
err := wait.PollUntilContextTimeout(ctx, 2*time.Second, 30*time.Second, true, func(ctx context.Context) (bool, error) {
var err error
nodeMetrics, err = metricsClient.MetricsV1beta1().NodeMetricses().List(ctx, metav1.ListOptions{})
if err != nil {
return false, err
}
nonEmptyNodeMetrics := len(nodeMetrics.Items) > 0
if !nonEmptyNodeMetrics {
t.Logf("Node metrics empty... Retrying.")
}
return nonEmptyNodeMetrics, nil
})
require.NoErrorf(t, err, "Node metrics should not be empty")
for _, nodeMetric := range nodeMetrics.Items {
positiveMemory := nodeMetric.Usage.Memory().CmpInt64(0)
assert.Positivef(t, positiveMemory, "Memory usage for node %s is %v, should be > 0", nodeMetric.Name, nodeMetric.Usage.Memory())
positiveCPU := nodeMetric.Usage.Cpu().CmpInt64(0)
assert.Positivef(t, positiveCPU, "CPU usage for node %s is %v, should be > 0", nodeMetric.Name, nodeMetric.Usage.Cpu())
}
}
func TestPodMetrics(t *testing.T) {
ctx := context.Background()
var podMetrics *metricsv1beta1.PodMetricsList
err := wait.PollUntilContextTimeout(ctx, 2*time.Second, 30*time.Second, true, func(ctx context.Context) (bool, error) {
var err error
podMetrics, err = metricsClient.MetricsV1beta1().PodMetricses(ns).List(ctx, metav1.ListOptions{})
if err != nil {
return false, err
}
nonEmptyPodMetrics := len(podMetrics.Items) > 0
if !nonEmptyPodMetrics {
t.Logf("Pod metrics empty... Retrying.")
}
return nonEmptyPodMetrics, nil
})
require.NoErrorf(t, err, "Pod metrics should not be empty")
for _, pod := range podMetrics.Items {
for _, containerMetric := range pod.Containers {
positiveMemory := containerMetric.Usage.Memory().CmpInt64(0)
assert.Positivef(t, positiveMemory, "Memory usage for pod %s/%s is %v, should be > 0", pod.Name, containerMetric.Name, containerMetric.Usage.Memory())
}
}
}


@ -1,12 +1,12 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: custom-metrics:system:auth-delegator
name: prometheus
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:auth-delegator
name: prometheus
subjects:
- kind: ServiceAccount
name: custom-metrics-apiserver
namespace: custom-metrics
name: prometheus
namespace: prometheus-adapter-e2e


@ -0,0 +1,24 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: prometheus
rules:
- apiGroups: [""]
resources:
- nodes
- nodes/metrics
- services
- endpoints
- pods
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources:
- configmaps
verbs: ["get"]
- apiGroups:
- networking.k8s.io
resources:
- ingresses
verbs: ["get", "list", "watch"]
- nonResourceURLs: ["/metrics"]
verbs: ["get"]


@ -0,0 +1,9 @@
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
name: prometheus
namespace: prometheus-adapter-e2e
spec:
replicas: 2
serviceAccountName: prometheus
serviceMonitorSelector: {}


@ -0,0 +1,5 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: prometheus
namespace: prometheus-adapter-e2e


@ -0,0 +1,25 @@
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
labels:
app.kubernetes.io/name: kubelet
name: kubelet
namespace: prometheus-adapter-e2e
spec:
endpoints:
- bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
honorLabels: true
honorTimestamps: false
interval: 10s
path: /metrics/resource
port: https-metrics
scheme: https
tlsConfig:
insecureSkipVerify: true
jobLabel: app.kubernetes.io/name
namespaceSelector:
matchNames:
- kube-system
selector:
matchLabels:
app.kubernetes.io/name: kubelet


@ -0,0 +1,14 @@
apiVersion: v1
kind: Service
metadata:
name: prometheus
namespace: prometheus-adapter-e2e
spec:
ports:
- name: web
port: 9090
targetPort: web
selector:
app.kubernetes.io/instance: prometheus
app.kubernetes.io/name: prometheus
sessionAffinity: ClientIP

test/run-e2e-tests.sh (new executable file)

@ -0,0 +1,134 @@
#!/usr/bin/env bash
# Copyright 2022 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -x
set -o errexit
set -o nounset
# Tool versions
K8S_VERSION=${KUBERNETES_VERSION:-v1.30.0} # cf https://hub.docker.com/r/kindest/node/tags
KIND_VERSION=${KIND_VERSION:-v0.23.0} # cf https://github.com/kubernetes-sigs/kind/releases
PROM_OPERATOR_VERSION=${PROM_OPERATOR_VERSION:-v0.73.2} # cf https://github.com/prometheus-operator/prometheus-operator/releases
# Variables; set to empty if unbound/empty
REGISTRY=${REGISTRY:-}
KIND_E2E=${KIND_E2E:-}
SKIP_INSTALL=${SKIP_INSTALL:-}
SKIP_CLEAN_AFTER=${SKIP_CLEAN_AFTER:-}
CLEAN_BEFORE=${CLEAN_BEFORE:-}
# KUBECONFIG - will be overridden if a cluster is deployed with Kind
KUBECONFIG=${KUBECONFIG:-"${HOME}/.kube/config"}
# A temporary directory used by the tests
E2E_DIR="${PWD}/.e2e"
# The namespace where prometheus-adapter is deployed
NAMESPACE="prometheus-adapter-e2e"
if [[ -z "${REGISTRY}" && -z "${KIND_E2E}" ]]; then
echo -e "Either REGISTRY or KIND_E2E should be set."
exit 1
fi
function clean {
if [[ -n "${KIND_E2E}" ]]; then
kind delete cluster || true
else
kubectl delete -f ./deploy/manifests || true
kubectl delete -f ./test/prometheus-manifests || true
kubectl delete namespace "${NAMESPACE}" || true
fi
rm -rf "${E2E_DIR}"
}
if [[ -n "${CLEAN_BEFORE}" ]]; then
clean
fi
function on_exit {
local error_code="$?"
echo "Obtaining prometheus-adapter pod logs..."
kubectl logs -l app.kubernetes.io/name=prometheus-adapter -n "${NAMESPACE}" || true
if [[ -z "${SKIP_CLEAN_AFTER}" ]]; then
clean
fi
test "${error_code}" == 0 && return;
}
trap on_exit EXIT
if [[ -d "${E2E_DIR}" ]]; then
echo -e "${E2E_DIR} already exists."
exit 1
fi
mkdir -p "${E2E_DIR}"
if [[ -n "${KIND_E2E}" ]]; then
# Install kubectl and kind, if we did not set SKIP_INSTALL
if [[ -z "${SKIP_INSTALL}" ]]; then
BIN="${E2E_DIR}/bin"
mkdir -p "${BIN}"
curl -Lo "${BIN}/kubectl" "https://dl.k8s.io/release/${K8S_VERSION}/bin/linux/amd64/kubectl" && chmod +x "${BIN}/kubectl"
curl -Lo "${BIN}/kind" "https://kind.sigs.k8s.io/dl/${KIND_VERSION}/kind-linux-amd64" && chmod +x "${BIN}/kind"
export PATH="${BIN}:${PATH}"
fi
kind create cluster --image "kindest/node:${K8S_VERSION}"
REGISTRY="localhost"
KUBECONFIG="${E2E_DIR}/kubeconfig"
kind get kubeconfig > "${KUBECONFIG}"
fi
# Create the test namespace
kubectl create namespace "${NAMESPACE}"
export REGISTRY
IMAGE_NAME="${REGISTRY}/prometheus-adapter-$(go env GOARCH)"
IMAGE_TAG="v$(cat VERSION)"
if [[ -n "${KIND_E2E}" ]]; then
make container
kind load docker-image "${IMAGE_NAME}:${IMAGE_TAG}"
else
make push
fi
# Install prometheus-operator
kubectl apply -f "https://github.com/prometheus-operator/prometheus-operator/releases/download/${PROM_OPERATOR_VERSION}/bundle.yaml" --server-side
# Install and setup prometheus
kubectl apply -f ./test/prometheus-manifests --server-side
# Customize prometheus-adapter manifests
# TODO: use Kustomize or generate manifests from Jsonnet
cp -r ./deploy/manifests "${E2E_DIR}/manifests"
prom_url="http://prometheus.${NAMESPACE}.svc:9090/"
sed -i -e "s|--prometheus-url=.*$|--prometheus-url=${prom_url}|g" "${E2E_DIR}/manifests/deployment.yaml"
sed -i -e "s|image: .*$|image: ${IMAGE_NAME}:${IMAGE_TAG}|g" "${E2E_DIR}/manifests/deployment.yaml"
find "${E2E_DIR}/manifests" -type f -exec sed -i -e "s|namespace: monitoring|namespace: ${NAMESPACE}|g" {} \;
# Deploy prometheus-adapter
kubectl apply -f "${E2E_DIR}/manifests" --server-side
PROJECT_PREFIX="sigs.k8s.io/prometheus-adapter"
export KUBECONFIG
go test "${PROJECT_PREFIX}/test/e2e/" -v -count=1


@ -1 +0,0 @@
*.swp


@ -1,6 +0,0 @@
language: go
go:
- 1.7
- 1.8
- tip


@ -1,75 +0,0 @@
---
layout: code-of-conduct
version: v1.0
---
This code of conduct outlines our expectations for participants within the **NYTimes/gziphandler** community, as well as steps to reporting unacceptable behavior. We are committed to providing a welcoming and inspiring community for all and expect our code of conduct to be honored. Anyone who violates this code of conduct may be banned from the community.
Our open source community strives to:
* **Be friendly and patient.**
* **Be welcoming**: We strive to be a community that welcomes and supports people of all backgrounds and identities. This includes, but is not limited to members of any race, ethnicity, culture, national origin, colour, immigration status, social and economic class, educational level, sex, sexual orientation, gender identity and expression, age, size, family status, political belief, religion, and mental and physical ability.
* **Be considerate**: Your work will be used by other people, and you in turn will depend on the work of others. Any decision you take will affect users and colleagues, and you should take those consequences into account when making decisions. Remember that we're a world-wide community, so you might not be communicating in someone else's primary language.
* **Be respectful**: Not all of us will agree all the time, but disagreement is no excuse for poor behavior and poor manners. We might all experience some frustration now and then, but we cannot allow that frustration to turn into a personal attack. It's important to remember that a community where people feel uncomfortable or threatened is not a productive one.
* **Be careful in the words that we choose**: we are a community of professionals, and we conduct ourselves professionally. Be kind to others. Do not insult or put down other participants. Harassment and other exclusionary behavior aren't acceptable.
* **Try to understand why we disagree**: Disagreements, both social and technical, happen all the time. It is important that we resolve disagreements and differing views constructively. Remember that we're different. The strength of our community comes from its diversity, people from a wide range of backgrounds. Different people have different perspectives on issues. Being unable to understand why someone holds a viewpoint doesn't mean that they're wrong. Don't forget that it is human to err and blaming each other doesn't get us anywhere. Instead, focus on helping to resolve issues and learning from mistakes.
## Definitions
Harassment includes, but is not limited to:
- Offensive comments related to gender, gender identity and expression, sexual orientation, disability, mental illness, neuro(a)typicality, physical appearance, body size, race, age, regional discrimination, political or religious affiliation
- Unwelcome comments regarding a person's lifestyle choices and practices, including those related to food, health, parenting, drugs, and employment
- Deliberate misgendering. This includes deadnaming or persistently using a pronoun that does not correctly reflect a person's gender identity. You must address people by the name they give you when not addressing them by their username or handle
- Physical contact and simulated physical contact (eg, textual descriptions like “*hug*” or “*backrub*”) without consent or after a request to stop
- Threats of violence, both physical and psychological
- Incitement of violence towards any individual, including encouraging a person to commit suicide or to engage in self-harm
- Deliberate intimidation
- Stalking or following
- Harassing photography or recording, including logging online activity for harassment purposes
- Sustained disruption of discussion
- Unwelcome sexual attention, including gratuitous or off-topic sexual images or behaviour
- Pattern of inappropriate social contact, such as requesting/assuming inappropriate levels of intimacy with others
- Continued one-on-one communication after requests to cease
- Deliberate “outing” of any aspect of a person's identity without their consent except as necessary to protect others from intentional abuse
- Publication of non-harassing private communication
Our open source community prioritizes marginalized people's safety over privileged people's comfort. We will not act on complaints regarding:
- Reverse -isms, including reverse racism, reverse sexism, and cisphobia
- Reasonable communication of boundaries, such as “leave me alone,” “go away,” or “I'm not discussing this with you”
- Refusal to explain or debate social justice concepts
- Communicating in a tone you don't find congenial
- Criticizing racist, sexist, cissexist, or otherwise oppressive behavior or assumptions
### Diversity Statement
We encourage everyone to participate and are committed to building a community for all. Although we will fail at times, we seek to treat everyone both as fairly and equally as possible. Whenever a participant has made a mistake, we expect them to take responsibility for it. If someone has been harmed or offended, it is our responsibility to listen carefully and respectfully, and do our best to right the wrong.
Although this list cannot be exhaustive, we explicitly honor diversity in age, gender, gender identity or expression, culture, ethnicity, language, national origin, political beliefs, profession, race, religion, sexual orientation, socioeconomic status, and technical ability. We will not tolerate discrimination based on any of the protected
characteristics above, including participants with disabilities.
### Reporting Issues
If you experience or witness unacceptable behavior—or have any other concerns—please report it by contacting us via **code@nytimes.com**. All reports will be handled with discretion. In your report please include:
- Your contact information.
- Names (real, nicknames, or pseudonyms) of any individuals involved. If there are additional witnesses, please
include them as well. Your account of what occurred, and if you believe the incident is ongoing. If there is a publicly available record (e.g. a mailing list archive or a public IRC logger), please include a link.
- Any additional information that may be helpful.
After filing a report, a representative will contact you personally, review the incident, follow up with any additional questions, and make a decision as to how to respond. If the person who is harassing you is part of the response team, they will recuse themselves from handling your incident. If the complaint originates from a member of the response team, it will be handled by a different member of the response team. We will respect confidentiality requests for the purpose of protecting victims of abuse.
### Attribution & Acknowledgements
We all stand on the shoulders of giants across many open source communities. We'd like to thank the communities and projects that established code of conducts and diversity statements as our inspiration:
* [Django](https://www.djangoproject.com/conduct/reporting/)
* [Python](https://www.python.org/community/diversity/)
* [Ubuntu](http://www.ubuntu.com/about/about-ubuntu/conduct)
* [Contributor Covenant](http://contributor-covenant.org/)
* [Geek Feminism](http://geekfeminism.org/about/code-of-conduct/)
* [Citizen Code of Conduct](http://citizencodeofconduct.org/)
This Code of Conduct was based on https://github.com/todogroup/opencodeofconduct

Some files were not shown because too many files have changed in this diff.