When serving the resource metrics API, prometheus-adapter may return
negative values for pod/node memory and CPU usage. This happens because
Prometheus sees counter resets and, to keep counters from appearing to
decrease, extrapolates the data in a way that can dip below zero. To
prevent that, add guards in prometheus-adapter that replace a negative
value with zero whenever one is detected.
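
A minimal sketch of such a guard, with a hypothetical helper name and
call site (the real patch applies this wherever the adapter converts
Prometheus samples into pod/node usage values):

    package provider

    import "k8s.io/apimachinery/pkg/api/resource"

    // clampToZero replaces a negative quantity with zero. A negative
    // usage value can only come from bad extrapolation around a
    // counter reset, so zero is the closest meaningful value.
    func clampToZero(q *resource.Quantity) {
            if q.Sign() < 0 {
                    *q = resource.Quantity{}
            }
    }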
Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
The code that we reuse from metrics-server to call GetContainerMetrics
and GetNodeMetrics requires that both functions return slices whose
length equals the number of pods/nodes given as arguments. In some
cases, prometheus-adapter was returning nil, which caused panics in the
metrics-server code.
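
The contract looks roughly like this; the snippet is a sketch with
hypothetical names and a simplified signature, not the actual provider
interface:

    package provider

    import (
            apitypes "k8s.io/apimachinery/pkg/types"
            metrics "k8s.io/metrics/pkg/apis/metrics"
    )

    // lookupPodSamples stands in for the adapter's Prometheus query;
    // it may legitimately find no data for a pod.
    func lookupPodSamples(pod apitypes.NamespacedName) []metrics.ContainerMetrics {
            return nil
    }

    // getContainerMetrics sketches the contract: the outer slice must
    // always have exactly len(pods) entries and never be nil, because
    // metrics-server indexes it by position. A nil inner slice simply
    // means "no data for this pod".
    func getContainerMetrics(pods ...apitypes.NamespacedName) [][]metrics.ContainerMetrics {
            res := make([][]metrics.ContainerMetrics, len(pods))
            for i, pod := range pods {
                    res[i] = lookupPodSamples(pod)
            }
            return res
    }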
Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
Pick up the Kubernetes 1.17 changes to custom-metrics-apiserver and the
latest changes in metrics-server, which let us show table results for
podmetrics and nodemetrics. Fix import and interface changes as
necessary.
The localvendor directory is an artifact of a change in sigs.k8s.io:
sigs.k8s.io/metrics-server now requires this dependency for module
resolution to succeed, even though we do not use the scraper package.
Without it, module resolution fails with:

    go: sigs.k8s.io/metrics-server@v0.3.7 requires
        k8s.io/kubernetes/pkg/kubelet/apis/stats/v1alpha1@v0.0.0: reading
        k8s.io/kubernetes/pkg/kubelet/apis/stats/v1alpha1/pkg/kubelet/apis/stats/v1alpha1/go.mod
        at revision pkg/kubelet/apis/stats/v1alpha1/v0.0.0: unknown
        revision pkg/kubelet/apis/stats/v1alpha1/v0.0.0
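
The usual shape of this workaround is a go.mod replace directive
pointing at the local copy; the exact path below is illustrative, not
the actual directive in the repository:

    replace k8s.io/kubernetes/pkg/kubelet/apis/stats/v1alpha1 => ./localvendor/k8s.io/kubernetes/pkg/kubelet/apis/stats/v1alpha1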
This introduces support for the resource metrics API in the adapter.
The individual queries and window sizes are fully customizable via a new
config section. The adapter uses just the generic machinery from
metrics-server to serve the API.
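
For illustration, the new config section might look roughly like this;
field names and queries are a sketch of the shape, not a canonical
example:

    resourceRules:
      cpu:
        containerQuery: sum(rate(container_cpu_usage_seconds_total{<<.LabelMatchers>>}[1m])) by (<<.GroupBy>>)
        nodeQuery: sum(rate(container_cpu_usage_seconds_total{<<.LabelMatchers>>, id='/'}[1m])) by (<<.GroupBy>>)
        resources:
          overrides:
            instance: {resource: "node"}
            namespace: {resource: "namespace"}
            pod: {resource: "pod"}
        containerLabel: container
      memory:
        containerQuery: sum(container_memory_working_set_bytes{<<.LabelMatchers>>}) by (<<.GroupBy>>)
        nodeQuery: sum(container_memory_working_set_bytes{<<.LabelMatchers>>, id='/'}) by (<<.GroupBy>>)
        resources:
          overrides:
            instance: {resource: "node"}
            namespace: {resource: "namespace"}
            pod: {resource: "pod"}
        containerLabel: container
      window: 1m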