
zalando-incubator / kube-metrics-adapter

507 stars · 18 watchers · 112 forks · 1.74 MB

General purpose metrics adapter for Kubernetes HPA metrics

License: MIT License

Dockerfile 0.16% Makefile 1.28% Go 97.61% Shell 0.96%
Topics: kubernetes, custom metrics, external, hpa, horizontal pod autoscaling, zalando, prometheus

kube-metrics-adapter's People

Contributors

adutchak-x, aermakov-zalando, affo, arjunrn, csenol, daftping, demoncoder95, dependabot-preview[bot], dependabot[bot], dilinade, doyshinda, gargravarr, jfuechsl, jiri-pinkava, jonathanbeber, jrake-revelant, katyanna, linki, lucastt, mikkeloscar, miniland1333, muaazsaleem, njuettner, owengo, perploug, prune998, szuecs, tanersener, tomaspinho, zaklawrencea


kube-metrics-adapter's Issues

Docker image tags

Hi,
Is there a list of all currently available tags of the kube-metrics-adapter Docker image? Currently I can only find the latest tag.

I want to try out kube-metrics-adapter, but I'd like to stay on a fixed version for a while instead of always pulling latest without knowing when the image was built.

I also tried stable, but that tag doesn't seem to exist.

Adapter failing to scrape Prometheus metrics

I've been trying to work through this example of HPA on Istio metrics via Prometheus, but the example is pretty old and its Docker container was built quite some time ago. The example does work correctly as published. However, upgrading the image to v0.1.5 through the Banzai Helm chart yields an error:

 time="2020-06-09T22:55:49Z" level=info msg="Event(v1.ObjectReference{Kind:\"HorizontalPodAutoscaler\", Namespace:\"test\", Name:\"podinfo\", UID:\"26036502-3285-4359-a2f4-606dbc5d32ed\", APIVersion:\"autoscaling/v2beta2\", ResourceVersion:\"73634\", FieldPath:\"\"}): type: 'Warning' reason: 'CreateNewMetricsCollector' Failed to create new metrics collector: no plugin found for {Object {istio-requests-total nil}}"

I've verified that this metric exists in Prometheus. I've also tried building v0.1.2 myself, but dependencies fail.

Expected Behavior

The adapter scrapes metrics from istio-prometheus and provides them to the metrics server.

Actual Behavior

The adapter does not scrape metrics from Istio's Prometheus.

Steps to Reproduce the Problem

  1. Using the v0.1.1 Banzai Helm chart, install the adapter, setting the image to v0.1.5 in the deployment
  2. from the example repository, deploy the test pod and load simulator
  3. Tail the logs from the adapter, observing the errors.

Specifications

  • Version: v0.1.5 (built from source on internal image)
  • Platform: AWS
  • Subsystem: Kubernetes 1.15.7

Additional Logs:

I0609 22:51:18.975548       6 serving.go:306] Generated self-signed cert (apiserver.local.config/certificates/apiserver.crt, apiserver.local.config/certificates/apiserver.key)
W0609 22:51:19.424396       6 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found
W0609 22:51:19.424432       6 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found
time="2020-06-09T22:51:19Z" level=info msg="Looking for HPAs" provider=hpa
I0609 22:51:19.438024       6 configmap_cafile_content.go:205] Starting client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0609 22:51:19.438030       6 configmap_cafile_content.go:205] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0609 22:51:19.438047       6 shared_informer.go:197] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0609 22:51:19.438047       6 shared_informer.go:197] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0609 22:51:19.438163       6 dynamic_serving_content.go:129] Starting serving-cert::apiserver.local.config/certificates/apiserver.crt::apiserver.local.config/certificates/apiserver.key
I0609 22:51:19.439245       6 secure_serving.go:178] Serving securely on [::]:443
I0609 22:51:19.439269       6 tlsconfig.go:219] Starting DynamicServingCertificateController
time="2020-06-09T22:51:19Z" level=info msg="Removing previously scheduled metrics collector: {istio-ingressgateway istio-system}" provider=hpa
time="2020-06-09T22:51:19Z" level=info msg="Removing previously scheduled metrics collector: {istio-pilot istio-system}" provider=hpa
time="2020-06-09T22:51:19Z" level=info msg="Removing previously scheduled metrics collector: {istio-policy istio-system}" provider=hpa
time="2020-06-09T22:51:19Z" level=info msg="Removing previously scheduled metrics collector: {istio-telemetry istio-system}" provider=hpa
time="2020-06-09T22:51:19Z" level=info msg="Removing previously scheduled metrics collector: {podinfo test}" provider=hpa
time="2020-06-09T22:51:19Z" level=info msg="Found 5 new/updated HPA(s)" provider=hpa
time="2020-06-09T22:51:19Z" level=info msg="Event(v1.ObjectReference{Kind:\"HorizontalPodAutoscaler\", Namespace:\"test\", Name:\"podinfo\", UID:\"26036502-3285-4359-a2f4-606dbc5d32ed\", APIVersion:\"autoscaling/v2beta2\", ResourceVersion:\"73538\", FieldPath:\"\"}): type: 'Warning' reason: 'CreateNewMetricsCollector' Failed to create new metrics collector: no plugin found for {Object {istio-requests-total nil}}"
I0609 22:51:19.538176       6 shared_informer.go:204] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0609 22:51:19.538176       6 shared_informer.go:204] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
time="2020-06-09T22:51:49Z" level=info msg="Looking for HPAs" provider=hpa
time="2020-06-09T22:51:49Z" level=info msg="Removing previously scheduled metrics collector: {podinfo test}" provider=hpa
time="2020-06-09T22:51:49Z" level=info msg="Found 1 new/updated HPA(s)" provider=hpa
time="2020-06-09T22:51:49Z" level=info msg="Event(v1.ObjectReference{Kind:\"HorizontalPodAutoscaler\", Namespace:\"test\", Name:\"podinfo\", UID:\"26036502-3285-4359-a2f4-606dbc5d32ed\", APIVersion:\"autoscaling/v2beta2\", ResourceVersion:\"73634\", FieldPath:\"\"}): type: 'Warning' reason: 'CreateNewMetricsCollector' Failed to create new metrics collector: no plugin found for {Object {istio-requests-total nil}}"

v2beta2 HPA w/prometheus query: 404 logs & <unknown>

Expected Behavior

I was creating a test HPA (v2beta2) with a simple Prometheus query (just to test, not to get a valid scaling value), and it doesn't seem to get populated in the custom metrics. The HPA target is therefore stuck at the <unknown> value.

Actual Behavior

I'm getting a 404 error in the kube-metrics-adapter logs (below) and the HPA is never getting updated.
I'm probably missing something obvious but couldn't find enough examples to figure it out.

Steps to Reproduce the Problem

HPA:

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: hpa-test-v2beta2
  namespace: test
  annotations: 
    metric-config.external.prometheus-query.prometheus/prometheus-server: http://prometheus.kube-system-monitoring.svc.cluster.local:9090
    metric-config.external.prometheus-query.prometheus/rabbitmq_queue_messages_ready: rabbitmq_queue_messages_ready{queue="task_priority_apiworker"}
spec:
  scaleTargetRef:
   #apiVersion: extensions/v1beta1
    apiVersion: apps/v1
    kind: Deployment
    name: apiworker
  minReplicas: 1
  maxReplicas: 15
  metrics:
  - type: External
    external:
      metric:
        name: prometheus-query
        selector:
          matchLabels:
            query-name: rabbitmq_queue_messages_ready
      target:
        type: AverageValue
        averageValue: 1

Logs:

2020-05-20T21:04:16+00:00 localhost docker/k8s_kube-metrics-adapter_kube-metrics-adapter-8474d5546f-nv7wk_kube-system_abe181ea-ad95-46d6-9fa1-662d486a1cf8_0[51530]: time="2020-05-20T21:04:16Z" level=info msg="Looking for HPAs" provider=hpa
2020-05-20T21:04:16+00:00 localhost docker/k8s_kube-metrics-adapter_kube-metrics-adapter-8474d5546f-nv7wk_kube-system_abe181ea-ad95-46d6-9fa1-662d486a1cf8_0[51530]: time="2020-05-20T21:04:16Z" level=info msg="Removing previously scheduled metrics collector: {hpa-test-v2beta2 test}" provider=hpa
2020-05-20T21:04:16+00:00 localhost docker/k8s_kube-metrics-adapter_kube-metrics-adapter-8474d5546f-nv7wk_kube-system_abe181ea-ad95-46d6-9fa1-662d486a1cf8_0[51530]: time="2020-05-20T21:04:16Z" level=info msg="Adding new metrics collector: *collector.PrometheusCollector" provider=hpa
2020-05-20T21:04:16+00:00 localhost docker/k8s_kube-metrics-adapter_kube-metrics-adapter-8474d5546f-nv7wk_kube-system_abe181ea-ad95-46d6-9fa1-662d486a1cf8_0[51530]: time="2020-05-20T21:04:16Z" level=info msg="Found 1 new/updated HPA(s)" provider=hpa
2020-05-20T21:04:16+00:00 localhost docker/k8s_kube-metrics-adapter_kube-metrics-adapter-8474d5546f-nv7wk_kube-system_abe181ea-ad95-46d6-9fa1-662d486a1cf8_0[51530]: time="2020-05-20T21:04:16Z" level=error msg="Failed to collect metrics: client_error: client error: 404" provider=hpa
2020-05-20T21:04:16+00:00 localhost docker/k8s_kube-metrics-adapter_kube-metrics-adapter-8474d5546f-nv7wk_kube-system_abe181ea-ad95-46d6-9fa1-662d486a1cf8_0[51530]: time="2020-05-20T21:04:16Z" level=info msg="Collected 0 new metric(s)" provider=hpa

kubectl get hpa --all-namespaces -o wide

NAMESPACE   NAME               REFERENCE                     TARGETS             MINPODS   MAXPODS   REPLICAS   AGE
test        hpa-test-v2beta2   Deployment/warden-apiworker   <unknown>/1 (avg)   1         15        15         8m

kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1/ | jq

{
  "kind": "APIResourceList",
  "apiVersion": "v1",
  "groupVersion": "custom.metrics.k8s.io/v1beta1",
  "resources": []
}

Specifications

  • Version: K8s 1.18.2
  • Platform: Ubuntu 20.04
  • Subsystem: Prometheus 2.18.1

Cheers! :)

CreateNewMetricsCollector : Failed to create new metrics collector

Expected Behavior

The external and custom metrics APIs should return some output, but there is none.

Actual Behavior

kubectl get --raw "/apis/external.metrics.k8s.io/v1beta1" | jq .
{
  "kind": "APIResourceList",
  "apiVersion": "v1",
  "groupVersion": "external.metrics.k8s.io/v1beta1",
  "resources": []
}

Steps to Reproduce the Problem

  1. Using the manifests from (https://github.com/zalando-incubator/kubernetes-on-aws/tree/dev/cluster/manifests/kube-metrics-adapter), with some config parameters updated, I am able to run kube-metrics-adapter, but the container logs constantly show the warnings below:
time="2019-01-22T10:32:24Z" level=info msg="Looking for HPAs" provider=hpa
time="2019-01-22T10:32:24Z" level=info msg="Found 1 new/updated HPA(s)" provider=hpa
time="2019-01-22T10:32:24Z" level=info msg="Event(v1.ObjectReference{Kind:\"HorizontalPodAutoscaler\", Namespace:\"default\", Name:\"myservice-k8\", UID:\"ba31da13-1e10-11e9-a841-06f3d10b141c\", APIVersion:\"autoscaling/v2beta1\", ResourceVersion:\"36437\", FieldPath:\"\"}): type: 'Warning' reason: 'CreateNewMetricsCollector' Failed to create new metrics collector: format '' not supported"

I cannot find any documentation or examples explaining what is expected here.

Using the example in (https://github.com/zalando-incubator/kube-ingress-aws-controller/tree/master/deploy), I am able to run kube-ingress-aws-controller successfully.

And my Prometheus stack detects the skipper metrics; I can see values in my Prometheus graphs such as:

skipper_response_duration_seconds_sum{application="skipper",code="200",component="ingress",controller_revision_hash="1183544327",instance="172.20.4.24:9911",job="kubernetes-pods",kubernetes_namespace="kube-system",kubernetes_pod_name="skipper-ingress-mpm6g",method="GET",pod_template_generation="3",route="kube_default__myservice_k8__myservice_k8_stag_mydomain_com____myservice_k8_0__lb_group"}

Now I am expecting/assuming that when I run kube-metrics-adapter with the configuration mentioned above, I should be able to see some metrics under /apis/external.metrics.k8s.io/v1beta1 or /apis/custom.metrics.k8s.io/v1beta1, but both are empty.

Remember myservice is NOT running any metrics exporter.

Specifications

  • Version: 1.11
  • Platform: EKS
  • Subsystem:

ParseHPAMetrics in package collector may modify MatchLabels for external metric

Expected Behavior

The HPAProvider caches HPA resources and recreates associated metrics collectors, should the HPA resource change. I expect it to not recreate collectors if the HPA didn't change.

Actual Behavior

HPAs with external metric collectors (e.g. Prometheus) are modified on each step (each call of updateHPAs), thus bypassing the caching logic and causing the metrics collectors to be recreated.

This happens because of the way the metric config is created for external metrics in ParseHPAMetrics.
The Config field is set to the address of the MatchLabels map in the HPA resource object: https://github.com/zalando-incubator/kube-metrics-adapter/blob/master/pkg/collector/collector.go#L216
Later (https://github.com/zalando-incubator/kube-metrics-adapter/blob/master/pkg/collector/collector.go#L227) this map is modified, thus modifying the HPA resource object.

The fix, in my opinion, would be to copy the MatchLabels map into the Config field instead of assigning the map reference.

Steps to Reproduce the Problem

  1. Create a HPA with a Prometheus external metric.
  2. Observe the kube-metrics-adapter logs. They show the metrics collector being recreated at each step, e.g.
time="2019-11-07T05:53:46Z" level=info msg="Looking for HPAs" provider=hpa
time="2019-11-07T05:53:46Z" level=info msg="Removing previously scheduled metrics collector: {xxx yyy}" provider=hpa
time="2019-11-07T05:53:46Z" level=info msg="Adding new metrics collector: *collector.PrometheusCollector" provider=hpa
time="2019-11-07T05:53:46Z" level=info msg="Found 1 new/updated HPA(s)" provider=hpa
time="2019-11-07T05:53:46Z" level=info msg="stopping collector runner..."
time="2019-11-07T05:53:46Z" level=info msg="Collected 1 new metric(s)" provider=hpa
time="2019-11-07T05:53:46Z" level=info msg="Collected new external metric 'prometheus-query' (99) [test=scalar(vector(99.0)),query-name=test]" provider=hpa

Specifications

  • Version: v0.0.4
  • Platform: linux amd64
  • Subsystem: package collector

Feature request: add Prometheus URL as annotation in HorizontalPodAutoscaler definition

In our cluster we have several Prometheus instances. I was wondering if it is possible to add an (optional) Prometheus URL to the HorizontalPodAutoscaler definition, for example:

apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa
  annotations:
    metric-config.external.prometheus-query.prometheus/url: prometheus.namespace2.svc:9090 
    
# metric-config.<metricType>.<metricName>.<collectorName>/<configKey>
    # <configKey> == query-name
    metric-config.external.prometheus-query.prometheus/processed-events-per-second: |
      scalar(sum(rate(event-service_events_count{application="event-service",processed="true"}[1m])))
spec:

If no such annotation is defined, use the one passed via command-line arguments.
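For reference, the "v2beta2 HPA w/prometheus query" issue earlier in this list already sets a per-HPA Prometheus server via an annotation; whether that is the officially supported key is not confirmed here, but the form that appears there is:

metric-config.external.prometheus-query.prometheus/prometheus-server: http://prometheus.namespace2.svc:9090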

unable to get metric requests-per-second: no metrics returned from custom metrics API

Expected Behavior

The HPA should show the current number of http_requests.

Actual Behavior

The HPA shows Unknown as status.
Metrics: ( current / target )
"requests-per-second" on pods: <unknown> / 10
Warning FailedGetPodsMetric 5s horizontal-pod-autoscaler unable to get metric requests-per-second: no metrics returned from custom metrics API
Warning FailedComputeMetricsReplicas 5s horizontal-pod-autoscaler failed to get pods metric value: unable to get metric requests-per-second: no metrics returned from custom metrics API

Steps to Reproduce the Problem

  1. Followed the installation steps using the manifests in docs/.
  2. Created an HPA as in the README, replacing the deployment name with a deployment in the namespace where the HPA is present.

[Metric-Config] JsonPath support string indexers for json prop keys with dots

Expected Behavior

When setting up metric configs using JSONPath, regular JSONPath expressions are expected to be supported.

Actual Behavior

In practice, when using string indexers to access JSON properties with dots in their keys, the JSONPath library is not able to extract the value.

https://github.com/zalando-incubator/kube-metrics-adapter/blob/master/pkg/collector/json_path_collector.go#L13

Steps to Reproduce the Problem

  1. Set up a json-path/json-key value with something like: $.histograms['service.pool.Usage']mean
  2. On k8s the autoscaler will report: Failed to create new metrics collector: format '' not supported

Specifications

  • Version: latest

Templates from Doc folder gives some 403 errors

I was able to bring up the pod with the template; the only thing I changed was the Prometheus URL, to --prometheus-server=http://prometheus.monitoring.svc.cluster.local:9090.

I see a bunch of errors:

time="2018-12-07T19:26:19Z" level=error msg="horizontalpodautoscalers.autoscaling is forbidden: User \"system:serviceaccount:kube-system:custom-metrics-apiserver\" cannot list horizontalpodautoscalers.autoscaling at the cluster scope" provider=hpa

E1207 19:26:21.268780 1 writers.go:149] apiserver was unable to write a JSON response: expected pointer, but got nil
E1207 19:26:21.268808 1 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"expected pointer, but got nil"}
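The first error above is an RBAC failure: the custom-metrics-apiserver service account is not allowed to list HPAs at cluster scope. A minimal sketch of the missing permission follows; the ClusterRole/ClusterRoleBinding names are illustrative and may duplicate RBAC objects already shipped with the docs templates:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kube-metrics-adapter-hpa-reader   # illustrative name
rules:
- apiGroups: ["autoscaling"]
  resources: ["horizontalpodautoscalers"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kube-metrics-adapter-hpa-reader   # illustrative name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kube-metrics-adapter-hpa-reader
subjects:
- kind: ServiceAccount
  name: custom-metrics-apiserver          # service account named in the error above
  namespace: kube-system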

Custom and external metrics are empty:

(venv) [ec2-user@ip-10-230-198-112 qaas]$ kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1" --kubeconfig=config-demo | jq
{ "kind": "APIResourceList", "apiVersion": "v1", "groupVersion": "custom.metrics.k8s.io/v1beta1", "resources": [] }

(venv) [ec2-user@ip-10-230-198-112 qaas]$ kubectl get --raw "/apis/external.metrics.k8s.io/v1beta1" --kubeconfig=config-demo | jq
{ "kind": "APIResourceList", "apiVersion": "v1", "groupVersion": "external.metrics.k8s.io/v1beta1", "resources": [] }

I have a Prometheus server running, and it has metrics if I do:
curl http://prometheus.monitoring.svc.cluster.local:9090/metrics

Specifications

  • Version: K8s 1.11.0
  • Platform: KOPS k8s cluster

kube-metrics-adapter (with pod collector enabled) returns stale data

Hello,

I have been testing kube-metrics-adapter (with pod collector enabled) and noticed that it returns stale data during specific conditions, which results in HPA misbehaving. The conditions occur when HPA scales up a deployment due to high load, then load subsides and it scales the deployment back to min-replica. However, if I increase the load again that should result in a scale-up, the HPA would not scale up because the data it is acting upon is stale. When it queries kube-metrics-adapter for custom metrics, the adapter is returning old data that contains non-existent pods and old values. This results in incorrect averages for the HPA.

I tracked it down to the in-memory cache that kube-metrics-adapter uses to track pods and metrics. The metrics have a TTL of 15 minutes and the GC runs every 10 minutes. The issue mentioned above disappears once the old metrics are cleaned up by GC. To fix the issue, I modified the TTL to 30s and the GC to run every minute; the HPA is then able to act on new conditions much better. This should cover other edge cases too: for example, if the HPA is in the middle of a scale-down and there is new load, it should be able to act on it faster.

I think this is a simple solution that solves the issue for now, though there is a better way of dealing with it: the cache could be updated on each run, so the HPA can have an up-to-date view of the cluster without a delay of up to 1 minute.

Expected Behavior

kube-metrics-adapter doesn't return stale data to HPA that contains information about non-existent pods and their values.

Actual Behavior

kube-metrics-adapter returns stale data to HPA due to 15min TTL and 10 mins garbage collection. This causes HPA to calculate wrong averages based on custom metrics.

Steps to Reproduce the Problem

  1. Let HPA scale up a deployment based on a custom metric from pod (pod collector)
  2. Let HPA scale down the deployment due to load subsiding
  3. As soon as the deployment hits minReplica, increase the load. Notice that averages in HPA are not correct. Specifically, the average is a function of the current value of the custom metric divided by number of pods that HPA scaled up to previously.

Specifications

  • Version: v0.0.5

I have a fix for this, but would like to discuss whether it is appropriate. I propose to make the TTL and GC interval configurable and set the default values to 30s and 1m respectively. A better long-term solution could be keeping the cache up-to-date on each run.

Metrics from an outside service?

So this works great with Prometheus, but we autoscale so much that 100 nodes might go up and down per hour and thousands of pods scale up and down, and Prometheus is having memory issues and OOMing.

I am wondering if this can hit an outside service/hostname for JSON metrics. For example, I want to autoscale deployment sidekiq with metrics from service sidekiq-collector.

If this isn't currently possible, are you open to PRs adding this feature? I like the ability to remove Prometheus altogether for some HPAs if possible.
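A later issue in this list ("Support for multiple external http metrics.") shows annotations for an external HTTP/JSON collector, which looks close to this use case. A minimal sketch along those lines, where the sidekiq-collector URL and JSON key are assumptions for illustration:

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: sidekiq
  annotations:
    # metric-config.<metricType>.<metricName>.<collectorName>/<configKey>
    metric-config.external.http-0.json/json-key: "$.queue_length"                                                   # assumed JSON key
    metric-config.external.http-0.json/endpoint: "http://sidekiq-collector.default.svc.cluster.local:8080/stats"    # assumed endpoint
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: sidekiq
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: External
    external:
      metric:
        name: http-0
        selector:
          matchLabels:
            identifier: sidekiq_queue_length
      target:
        averageValue: 1
        type: AverageValue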

bug: hpa provider only provides values for the last sqs metric in list

Expected Behavior

With two external metrics defined, the hpa provider should provide metrics for both, but only returns values for the second one.

Here's the hpa manifest:

apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: foo
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: foo
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: External
    external:
      metricName: sqs-queue-length
      metricSelector:
        matchLabels:
          queue-name: bar
          region: us-west-2
      targetAverageValue: 600
  - type: External
    external:
      metricName: sqs-queue-length
      metricSelector:
        matchLabels:
          queue-name: baz
          region: us-west-2
      targetAverageValue: 600

Actual Behavior

The collector finds bar and baz on the first run, but then only provides values for baz after the first round of metrics is collected.

kube-metrics-adapter log:

I0226 03:42:58.340810       1 serving.go:273] Generated self-signed cert (apiserver.local.config/certificates/apiserver.crt, apiserver.local.config/certificates/apiserver.key)
time="2019-02-26T03:43:04Z" level=info msg="Looking for HPAs" provider=hpa
I0226 03:43:04.343794       1 serve.go:96] Serving securely on [::]:443
time="2019-02-26T03:43:05Z" level=info msg="Found 0 new/updated HPA(s)" provider=hpa
time="2019-02-26T03:43:35Z" level=info msg="Looking for HPAs" provider=hpa
.
.
.
time="2019-02-26T03:46:05Z" level=info msg="Found 0 new/updated HPA(s)" provider=hpa
time="2019-02-26T03:46:35Z" level=info msg="Looking for HPAs" provider=hpa
time="2019-02-26T03:46:36Z" level=info msg="Adding new metrics collector: *collector.AWSSQSCollector" provider=hpa
time="2019-02-26T03:46:36Z" level=info msg="Adding new metrics collector: *collector.AWSSQSCollector" provider=hpa
time="2019-02-26T03:46:36Z" level=info msg="Found 1 new/updated HPA(s)" provider=hpa
time="2019-02-26T03:46:36Z" level=info msg="stopping collector runner..."
time="2019-02-26T03:46:36Z" level=info msg="Collected 1 new metric(s)" provider=hpa
time="2019-02-26T03:46:36Z" level=info msg="Collected new external metric 'sqs-queue-length' (3476) [queue-name=bar,region=us-west-2]" provider=hpa
time="2019-02-26T03:46:36Z" level=info msg="Collected 1 new metric(s)" provider=hpa
time="2019-02-26T03:46:36Z" level=info msg="Collected new external metric 'sqs-queue-length' (8271) [queue-name=baz,region=us-west-2]" provider=hpa
time="2019-02-26T03:47:06Z" level=info msg="Looking for HPAs" provider=hpa
time="2019-02-26T03:47:06Z" level=info msg="Found 0 new/updated HPA(s)" provider=hpa
time="2019-02-26T03:47:36Z" level=info msg="Looking for HPAs" provider=hpa
time="2019-02-26T03:47:36Z" level=info msg="Found 0 new/updated HPA(s)" provider=hpa
time="2019-02-26T03:47:36Z" level=info msg="Collected 1 new metric(s)" provider=hpa
time="2019-02-26T03:47:36Z" level=info msg="Collected new external metric 'sqs-queue-length' (8268) [queue-name=baz,region=us-west-2]" provider=hpa
time="2019-02-26T03:48:06Z" level=info msg="Looking for HPAs" provider=hpa
time="2019-02-26T03:48:06Z" level=info msg="Found 0 new/updated HPA(s)" provider=hpa
time="2019-02-26T03:48:36Z" level=info msg="Looking for HPAs" provider=hpa
time="2019-02-26T03:48:36Z" level=info msg="Found 0 new/updated HPA(s)" provider=hpa
time="2019-02-26T03:48:37Z" level=info msg="Collected 1 new metric(s)" provider=hpa
time="2019-02-26T03:48:37Z" level=info msg="Collected new external metric 'sqs-queue-length' (8268) [queue-name=baz,region=us-west-2]" provider=hpa

Steps to Reproduce the Problem

  1. create 2 SQS queues, bar and baz, in us-west-2 (or change the yaml)
  2. create a deployment, foo
  3. apply the above hpa manifest
  4. wait a few minutes
  5. check logs for kube-metrics-adapter and kubectl get hpa foo -o yaml to see that no values are being collected for the bar queue

Specifications

  • Version: latest
  • Platform: gke
  • Subsystem: aws collector

Create a fullstack example - aka 1 minute guide

As seen in #30, the docs should have an example that deploys a full stack and shows the capability of this tool.
It could include an example HPA using requests-per-second and also deploy the components required to support it.

Support for multiple external http metrics.

Hi,

Can we somehow get metrics from multiple external HTTP endpoints?
Something like the below:

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: echo-server
  annotations:
    # metric-config.<metricType>.<metricName>.<collectorName>/<configKey>
    metric-config.external.http-0.json/json-key: "$stats"
    metric-config.external.http-0.json/endpoint: "http://www.mocky.io/v2/5eb3fd280e0000670008180c"
    metric-config.external.http-1.json/json-key: "$stats"
    metric-config.external.http-1.json/endpoint: "http://www.mocky.io/v2/5eb3fd280e0000670008180c"
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: echo-server
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: External
    external:
      metric:
        name: http-0
        selector:
          matchLabels:
            identifier: kafka_queue_length
      target:
        averageValue: 1
        type: AverageValue
  - type: External
    external:
      metric:
        name: http-1
        selector:
          matchLabels:
            identifier: kafka_ingest_rate
      target:
        averageValue: 2
        type: AverageValue

[Metric-Config] Support for passing query params to pod metrics endpoint

Expected Behavior

Ability to add URL query parameters (foo=bar&baz=bop) to the metrics config json-path to be passed on to the pod's metric endpoint.

Actual Behavior

There is no explicit option to pass query parameters to the endpoint defined when collecting pod metrics. Appending the query to the path config option results in Go correctly percent encoding the ? because it is a reserved character:

metric-config.pods.concurrency-average.json-path/path: /metrics?foo=bar

will generate the following URL:

http://<podIP>:<port>/metrics%3Ffoo=bar

This results in the application not using the ? as a delimiter and usually returning a 400 (or ignoring the query params).

Steps to Reproduce the Problem

  1. Set up a custom API that expects query params
  2. Set up json-path to add those query params: ...json-path/path: /metrics?foo=bar
  3. Observe that the k8s autoscaler reports: unable to get metric <metric>: unable to fetch metrics from custom metrics API: the server is currently unable to handle the request (get pods.custom.metrics.k8s.io *)

Specifications

  • Version: 0.0.5

Errors in logs when attempting to query a label value which doesn't exist

When using an external metric with a query such as the following:

http_requests_total{status="5xx"}

If no 5xx events have been registered yet, we get the following in the logs:

level=error msg="Failed to collect metrics: query 'http_requests_total{status=\"5xx\"}\n' did not result a valid response" provider=hpa

Perhaps this should be a warning instead of an error, considering that this label value can show up at any time.
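Until the log level is changed, one hedged workaround (the same PromQL idiom appears in a later issue in this list) is to give the query a fallback so it always returns a sample; the annotation key below is illustrative:

metric-config.external.prometheus-query.prometheus/http-requests-5xx: |
  sum(http_requests_total{status="5xx"}) or vector(0)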

Support older autoscaling versions

Currently, the adapter only tries to discover autoscaling/v2beta2 HPAs. This means it can only be used on newer kubernetes versions that provide autoscaling/v2beta2. It would be nice to have support for the old autoscaling/v2beta1 API as well.

Alternatively, the README could have a support matrix showing which versions of the adapter support which versions of kubernetes.

Prometheus External Metric not working

Expected Behavior

kube-metrics-adapter should be able to create a metrics collector based on the example in the readme.

Actual Behavior

kube-metrics-adapter logs Failed to create new metrics collector: no plugin found for {External prometheus-query} for my HPA.

Steps to Reproduce the Problem

  1. I'm running kube-metrics-adapter from the banzaicloud chart at https://github.com/banzaicloud/banzai-charts/tree/master/kube-metrics-adapter
  2. I have the following HPA:
---
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: selenium-node-chrome-hpa
  namespace: selenium-grid-demo
  annotations:
    "metric-config.external.prometheus-query.prometheus/selenium-grid-node-chrome-ready-count": |-
      sum(selenium_grid_node_ready{app="selenium-node-chrome",namespace="selenium-grid-demo"})
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: selenium-node-chrome
  minReplicas: 2
  maxReplicas: 20
  metrics:
  - type: External
    external:
      metricName: prometheus-query
      metricSelector:
        matchLabels:
          query-name: selenium-grid-node-chrome-ready-count
      targetAverageValue: 5
  3. Verified that I'm not getting any metrics:
$ kubectl get --raw "/apis/external.metrics.k8s.io/v1beta1" | jq .
{
  "kind": "APIResourceList",
  "apiVersion": "v1",
  "groupVersion": "external.metrics.k8s.io/v1beta1",
  "resources": []
}

unable to get metric error : using prometheus-collector

Expected Behavior

I have a Prometheus query that can be used for scaling; the expected behavior is to scale up using this metric.

Actual Behavior

Error: Warning FailedGetObjectMetric 3s (x2 over 18s) horizontal-pod-autoscaler unable to get metric jvm-memory-bytes-used: Service on dev event-service/unable to fetch metrics from custom metrics API: the server could not find the metric jvm-memory-bytes-used for services event-service

Steps to Reproduce the Problem

HPA yaml file:

apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: aggregation-hazelcast
  namespace: dev
  annotations:
    # metric-config.<metricType>.<metricName>.<collectorName>/<configKey>
    metric-config.object.jvm-memory-bytes-used.prometheus/query: |
      scalar((sum(jvm_memory_bytes_used{area="heap"}) ) / (sum(jvm_memory_bytes_max{area="heap"}))
    metric-config.object.jvm-memory-bytes-used.prometheus/per-replica: "true"
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: StatefulSet
    name: hazelcast
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Object
    object:
      metricName: jvm-memory-bytes-used
      target:
        apiVersion: v1
        kind: Service
        name: event-service
      targetValue: 80Mi # this will be treated as targetAverageValue

Specifications

  • Version: kops version 1.12.5
  • Platform: aws
    Am I missing anything? I provided the exact query, which works in Prometheus.
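One thing worth checking independently of the adapter: the query in the annotation above appears to be missing a closing parenthesis, so Prometheus would reject it if sent verbatim. A balanced version of the same annotation, for reference:

    metric-config.object.jvm-memory-bytes-used.prometheus/query: |
      scalar((sum(jvm_memory_bytes_used{area="heap"})) / (sum(jvm_memory_bytes_max{area="heap"})))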

OpenAPI spec does not exist

Hi guys! More of an annoyance than anything else really, but it would be nice if we could stop the API server from spewing logs about "OpenAPI spec does not exist" for the v1beta1.external.metrics.k8s.io and v1beta1.custom.metrics.k8s.io objects.

Expected Behavior

API server logs not full of messages like:

1 controller.go:114] loading OpenAPI spec for "v1beta1.external.metrics.k8s.io" failed with: OpenAPI spec does not exist
1 controller.go:114] loading OpenAPI spec for "v1beta1.custom.metrics.k8s.io" failed with: OpenAPI spec does not exist

Actual Behavior

API server logs full of messages like that.

I guess the best course of action would be to expose OpenAPI specs for those 2 objects; I found this commit elsewhere which seemed to address a similar annoyance.

Anyway, thanks for kube-metrics-adapter!

Doesn't play well with another collector

Expected Behavior

kube-metrics-adapter should behave well in an environment with a different external or custom metrics provider.

Actual Behavior

kube-metrics-adapter attempts to handle HPAs that have external or custom metrics even when it is not configured as the metrics provider for that metric type.

Logs:

time="2019-11-26T10:00:23Z" level=info msg="Event(v1.ObjectReference{Kind:\"HorizontalPodAutoscaler\", Namespace:\"development\", Name:\"task-worker\", UID:\"8e7d8d12-0c84-11ea-8744-0a26919e3b5a\", APIVersion:\"autoscaling/v2beta2\", ResourceVersion:\"92543736\", FieldPath:\"\"}): type: 'Warning' reason: 'CreateNewMetricsCollector' Failed to create new metrics collector: no plugin found for {External {aws.sqs.sqs_messages_visible &LabelSelector{MatchLabels:map[string]string{aws_account_name: development,queue_name: tasks,region: eu-west-1,},MatchExpressions:[]LabelSelectorRequirement{},}}}"

Event on respective HPA:

  Type     Reason                     Age                   From                  Message
  ----     ------                     ----                  ----                  -------
  Warning  CreateNewMetricsCollector  94s (x1947 over 16h)  kube-metrics-adapter  Failed to create new metrics collector: no plugin found for {External {aws.sqs.sqs_messages_visible &LabelSelector{MatchLabels:map[string]string{aws_account_name: development,queue_name: tasks,region: eu-west-1,},MatchExpressions:[]LabelSelectorRequirement{},}}}

Steps to Reproduce the Problem

  1. Install this project only as a Custom Metrics Provider
  2. Install datadog-cluster-agent as an External Metrics Provider, for instance.
  3. Configure an HPA targeting External Metrics

Specifications

  • Version: v0.0.5
  • Platform: Kubernetes
  • Subsystem: HPA provider

too many /metrics requests

We flood skipper instances with quite a large number of calls per second:

3.121.220.107 - - [27/Feb/2019:18:46:47 +0000] "GET /metrics HTTP/1.1" 200 18 "-" "Go-http-client/2.0" 2 sample-custom-metrics-autoscaling-e2e.teapot.zalan.do - -
35.156.0.203 - - [27/Feb/2019:18:46:47 +0000] "GET /metrics HTTP/1.1" 200 18 "-" "Go-http-client/2.0" 2 sample-custom-metrics-autoscaling-e2e.teapot.zalan.do - -
3.122.97.48 - - [27/Feb/2019:18:46:47 +0000] "GET /metrics HTTP/1.1" 200 18 "-" "Go-http-client/2.0" 2 sample-custom-metrics-autoscaling-e2e.teapot.zalan.do - -
3.121.220.107 - - [27/Feb/2019:18:46:47 +0000] "GET /metrics HTTP/1.1" 200 18 "-" "Go-http-client/2.0" 2 sample-custom-metrics-autoscaling-e2e.teapot.zalan.do - -
35.156.0.203 - - [27/Feb/2019:18:46:47 +0000] "GET /metrics HTTP/1.1" 200 18 "-" "Go-http-client/2.0" 3 sample-custom-metrics-autoscaling-e2e.teapot.zalan.do - -
3.122.97.48 - - [27/Feb/2019:18:46:47 +0000] "GET /metrics HTTP/1.1" 200 18 "-" "Go-http-client/2.0" 2 sample-custom-metrics-autoscaling-e2e.teapot.zalan.do - -
3.121.220.107 - - [27/Feb/2019:18:46:47 +0000] "GET /metrics HTTP/1.1" 200 18 "-" "Go-http-client/2.0" 2 sample-custom-metrics-autoscaling-e2e.teapot.zalan.do - -
35.156.0.203 - - [27/Feb/2019:18:46:47 +0000] "GET /metrics HTTP/1.1" 200 18 "-" "Go-http-client/2.0" 3 sample-custom-metrics-autoscaling-e2e.teapot.zalan.do - -
3.122.97.48 - - [27/Feb/2019:18:46:47 +0000] "GET /metrics HTTP/1.1" 200 18 "-" "Go-http-client/2.0" 2 sample-custom-metrics-autoscaling-e2e.teapot.zalan.do - -

I guess we have to decouple metrics gathering from the hpa instances in some way.

Possible to write HPA using nested prometheus metric ?

Expected Behavior

  • Prometheus is set up and recording metrics from collectd (via a collectd-to-Prometheus exporter) and can be queried like this, which gives:

curl prometheus.monitoring.svc:9090/api/v1/query?query="collectd_statsd_derive_total"

{
  "status": "success",
  "data": {
    "resultType": "vector",
    "result": [{
      "metric": {
        "__name__": "collectd_statsd_derive_total",
        "exported_instance": "collectd-statsd-6f5475dd46-92fx4",
        "instance": "collectd-statsd.monitoring.svc:9103",
        "job": "collect-statsd",
        "statsd": "prometheus_target_kbasync_lenh_sec"
      },
      "value": [1544343252.741, "6"]
    }, {
      "metric": {
        "__name__": "collectd_statsd_derive_total",
        "exported_instance": "collectd-statsd-6f5475dd46-92fx4",
        "instance": "collectd-statsd.monitoring.svc:9103",
        "job": "collect-statsd",
        "statsd": "targetkb_kbasync_lenh_sec"
      },
      "value": [1544343252.741, "3"]
    }, {
      "metric": {
        "__name__": "collectd_statsd_derive_total",
        "exported_instance": "collectd-statsd-6f5475dd46-92fx4",
        "instance": "collectd-statsd.monitoring.svc:9103",
        "job": "collect-statsd",
        "statsd": "targetkb_kbasync_lenh_sec_g"
      },
      "value": [1544343252.741, "240"]
    }, {
      "metric": {
        "__name__": "collectd_statsd_derive_total",
        "exported_instance": "collectd-statsd-6f5475dd46-92fx4",
        "instance": "collectd-statsd.monitoring.svc:9103",
        "job": "collect-statsd",
        "statsd": "targetkb_kbasync_lenh_sec_gg"
      },
      "value": [1544343252.741, "3"]
    }, {
      "metric": {
        "__name__": "collectd_statsd_derive_total",
        "exported_instance": "collectd-statsd-6f5475dd46-92fx4",
        "instance": "collectd-statsd.monitoring.svc:9103",
        "job": "collect-statsd",
        "statsd": "targetkb_kbasync_lenh_sec_ggg"
      },
      "value": [1544343252.741, "3"]
    }]
  }
}

My metric name is stored in the statsd key of the above response returned by Prometheus,
e.g. "statsd": "targetkb_kbasync_lenh_sec_ggg"

I want to write an HPA using the metric name from the statsd key.
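This should be expressible with the prometheus-query external collector used in other issues in this list, by selecting the series via its statsd label directly in PromQL; a minimal sketch follows, where the HPA name, target deployment, and target value are placeholders:

apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: kbasync-hpa                 # placeholder
  annotations:
    # metric-config.<metricType>.<metricName>.<collectorName>/<configKey>
    metric-config.external.prometheus-query.prometheus/kbasync-len: |
      scalar(sum(collectd_statsd_derive_total{statsd="targetkb_kbasync_lenh_sec_ggg"}))
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-deployment             # placeholder
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: External
    external:
      metricName: prometheus-query
      metricSelector:
        matchLabels:
          query-name: kbasync-len
      targetAverageValue: 5         # placeholder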

Hard to distinguish which metrics query is in play ... maybe this is a feature rather than a defect

Expected Behavior

I have multiple external metrics in play similar to below,

    metric-config.object.istio-requests-error-rate.prometheus/query: |
      sum(rate(istio_requests_total{destination_workload="go-demo-7-primary",
               destination_workload_namespace="go-demo-7", reporter="destination",response_code=~"5.*"}[1m])) 
      / 
      sum(rate(istio_requests_total{destination_workload="go-demo-7-primary", 
               destination_workload_namespace="go-demo-7",reporter="destination"}[1m]) > 0)* 100
      or
      sum(rate(istio_requests_total{destination_workload="go-demo-7-primary", 
               destination_workload_namespace="go-demo-7",reporter="destination"}[1m])) > bool 0 * 100

    metric-config.external.prometheus-query.prometheus/istio-requests-per-replica: |
      sum(rate(istio_requests_total{destination_service_name="go-demo-7",destination_workload_namespace="go-demo-7",
                reporter="destination"}[1m])) 
      /
      count(count(container_memory_usage_bytes{namespace="go-demo-7",pod_name=~"go-demo-7-primary.*"}) by (pod_name))
    metric-config.external.prometheus-query.prometheus/istio-requests-average-resp-time: |
      sum(rate(istio_request_duration_seconds_sum{destination_workload="go-demo-7-primary", reporter="destination"}[1m])) 
      / 
      sum(rate(istio_request_duration_seconds_count{destination_workload="go-demo-7-primary", reporter="destination"}[1m]) > 0)
      or
      sum(rate(istio_request_duration_seconds_count{destination_workload="go-demo-7-primary", reporter="destination"}[1m])) 
      > bool 0

spec:
  maxReplicas: 10
  minReplicas: 1
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: go-demo-7-primary
  metrics:
  - type: External
    external:
      metric:
        name: prometheus-query
        selector:
          matchLabels:
            query-name: istio-requests-per-replica
      target:
        type: AverageValue
        value: 5
  - type: External
    external:
      metric:
        name: prometheus-query
        selector:
          matchLabels:
            query-name: istio-requests-average-resp-time
      target:
        type: Value
        value: 100m
  - type: Object
    object:
      metric:
        name: istio-requests-error-rate
      describedObject:
        apiVersion: v1 #make sure you check the api version on the targeted resource using get command.
        kind: Pod # note Pod can be used as resource kind for kube-metrics-adapter.
        name: go-demo-7-primary
      target:
        type: Value
        value: 5

Then, when I describe the HPA, I would expect to be able to distinguish each of the metrics in play, as below:

  "istio-requests-per-replica" (target value):                                    0 / 5
  "istio-requests-average-resp-time" (target value):                                    0 / 100m
  "istio-requests-error-rate" on Pod/go-demo-7-primary (target value):  0 / 5

Actual Behavior

I see the below in the HPA, where it is very hard to distinguish which external metric is in play:

Metrics:                                                                ( current / target )
  "prometheus-query" (target value):                                    0 / 5
  "prometheus-query" (target value):                                    0 / 100m
  "istio-requests-error-rate" on Pod/go-demo-7-primary (target value):  0 / 5

Steps to Reproduce the Problem

1. Install kube-metrics-adapter v0.1.1
2. Create an HPA with the above metrics
3. Describe the HPA

Specifications

  • Version:
    --set image.repository=registry.opensource.zalan.do/teapot/kube-metrics-adapter \
    --set image.tag=v0.1.0

  • Platform:
    AWS KOPS
  • Subsystem:

Scale pods with metrics from other pods

I have implemented HPA based on the metrics server (memory and CPU); however, it is not suitable for my use case, and I'd like to scale the pods (Logstash) based on Kafka consumer lag. I have the metrics in Prometheus, which is running inside the k8s cluster under a different namespace.

Prometheus query for getting the metrics:
sum(kafka_consumergroup_lag{instance="xxxx:9308",consumergroup=~"solr-large-consumer"}) by (consumergroup, topic)

Currently, I am able to get these metrics from some other applications in JSON format. Can I scale the Logstash pods with metrics from these different pods?
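Other issues in this list configure exactly this pattern with the prometheus-query external collector; a minimal sketch using the query above (HPA name, namespace, target deployment, and target value are placeholders, and the by (consumergroup, topic) grouping is dropped so the query returns a single value, as in the other prometheus-query examples here):

apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: logstash-hpa                # placeholder
  annotations:
    # metric-config.<metricType>.<metricName>.<collectorName>/<configKey>
    metric-config.external.prometheus-query.prometheus/kafka-consumer-lag: |
      scalar(sum(kafka_consumergroup_lag{instance="xxxx:9308",consumergroup=~"solr-large-consumer"}))
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: logstash                  # placeholder
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: External
    external:
      metricName: prometheus-query
      metricSelector:
        matchLabels:
          query-name: kafka-consumer-lag
      targetAverageValue: 100       # placeholder threshold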

Prometheus collector should create External target.type

Expected Behavior

I think it makes more sense for the Prometheus collector to create an External metric instead of an Object. The syntax fits better. Also, although it can provide metrics about an Object, I doubt it will be used much in that way.

Actual Behavior

It currently provides metrics of type Object, so a dummy target has to be added, and for Kubernetes <1.12 targetAverageValue requires a workaround.

There is no custom metrics api after kube-metrics-adapter installation

Expected Behavior

Custom metrics api can be accessed after installing kube-metrics-adapter

Actual Behavior

$ kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1" | jq .
Error from server (ServiceUnavailable): the server is currently unable to handle the request

Steps to Reproduce the Problem

  1. Deploy the adapter using the yaml file in the docs directory
  2. kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1" | jq .

Specifications

  • Version: kubernetes 1.14
  • Platform: centos 7.3
  • Subsystem:

Helm Chart

We should have a published helm chart for this project.

Metrics server seems to fail to pull metrics on eks

Expected Behavior

Metrics should be pulled .

Actual Behavior

  "istio-requests-error-rate" on Pod/go-demo-7-app (target value):        <unknown>/ 100m
  "istio-requests-max-resp-time" on Pod/go-demo-7-app (target value):      <unknown> / 500m
  "istio-requests-average-resp-time" on Pod/go-demo-7-app (target value):  <unknown> / 250m
  "istio-requests-per-replica" on Pod/go-demo-7-app (target value):        <unknown> / 5

Steps to Reproduce the Problem

annotations:
   metric-config.object.istio-requests-error-rate.prometheus/query: |
     (sum(rate(istio_requests_total{destination_workload=~"go-demo-7-app.*",
              destination_workload_namespace="go-demo-7", reporter="destination",response_code=~"5.*"}[5m])) 
     / 
     sum(rate(istio_requests_total{destination_workload=~"go-demo-7-app.*", 
              destination_workload_namespace="go-demo-7",reporter="destination"}[5m]))) > 0 or on() vector(0)
   metric-config.object.istio-requests-per-replica.prometheus/query: |
     sum(rate(istio_requests_total{destination_workload=~"go-demo-7-app.*",destination_workload_namespace="go-demo-7",
               reporter="destination"}[5m])) 
     /
     count(count(container_memory_usage_bytes{namespace="go-demo-7",pod=~"go-demo-7-app.*"}) by (pod))
   metric-config.object.istio-requests-average-resp-time.prometheus/query: | 
     (sum(rate(istio_request_duration_milliseconds_sum{destination_workload=~"go-demo-7-app.*", reporter="destination"}[5m])) 
     / 
     sum(rate(istio_request_duration_milliseconds_count{destination_workload=~"go-demo-7-app.*", reporter="destination"}[5m])))/1000 > 0 or on() vector(0)
   metric-config.object.istio-requests-max-resp-time.prometheus/query: |
     histogram_quantile(0.95, 
                 sum(irate(istio_request_duration_milliseconds_bucket{destination_workload=~"go-demo-7-app.*"}[1m])) by (le))/1000 > 0  or on() vector(0)

Specifications

  • Version:
kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.6", GitCommit:"dff82dc0de47299ab66c83c626e08b245ab19037", GitTreeState:"clean", BuildDate:"2020-07-16T00:04:31Z", GoVersion:"go1.14.4", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.8-eks-fd1ea7", GitCommit:"fd1ea7c64d0e3ccbf04b124431c659f65330562a", GitTreeState:"clean", BuildDate:"2020-05-28T19:06:00Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
  • Platform:
    EKS
  • Subsystem:
    --set image.repository=registry.opensource.zalan.do/teapot/kube-metrics-adapter \
    --set image.tag=v0.1.5

The logs show:

1 reflector.go:307] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:105: Failed to watch *v1.ConfigMap: unknown (get configmaps)
kube-metrics-adapter-7b79498f9-7b8rt kube-metrics-adapter E0717 03:49:16.970700       1 reflector.go:307] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:105: Failed to watch *v1.ConfigMap: unknown (get configmaps)
kube-metrics-adapter-7b79498f9-7b8rt kube-metrics-adapter E0717 03:49:17.972675       1 reflector.go:307] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:105: Failed to watch *v1.ConfigMap: unknown (get configmaps)
kube-metrics-adapter-7b79498f9-7b8rt kube-metrics-adapter E0717 03:49:17.973213

It works fine with:

    --set image.repository=registry.opensource.zalan.do/teapot/kube-metrics-adapter \
    --set image.tag=v0.1.0
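The errors above are about watching the extension-apiserver-authentication ConfigMap in kube-system, which aggregated API servers typically read via the built-in extension-apiserver-authentication-reader Role. A hedged sketch of a RoleBinding granting that access (the binding name is illustrative, and the service account name is an assumption based on the docs manifests referenced elsewhere in this list; the Helm chart may use a different one):

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kube-metrics-adapter-auth-reader     # illustrative name
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: custom-metrics-apiserver             # assumed service account, as in the docs manifests
  namespace: kube-system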

not sure how to install the adapter

Expected Behavior

Actual Behavior

I'm not sure how to install the adapter. make works fine, but when I start the kube-metrics-adapter binary I get an error. Is there more documentation somewhere that I missed?

$ make

$ KUBERNETES_SERVICE_HOST=100.63.0.10 KUBERNETES_SERVICE_PORT=443 ./kube-metrics-adapter

panic: failed to get delegated authentication kubeconfig: failed to get delegated authentication kubeconfig: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory

goroutine 1 [running]:
main.main()
	/Users/geri/Work/cmv-tfserving/provisioning/zalando-kube-metrics-adapter/kube-metrics-adapter/main.go:40 +0x10e

Is there an official Helm chart to install the adapter which works with AWS kops?
What I found is this:
https://hub.helm.sh/charts/banzaicloud-stable/kube-metrics-adapter
Can I use this Helm chart to install the Zalando adapter?

I currently have this metrics-server installed; will I need to uninstall it to use the Zalando adapter?

$ helm ls
NAME          	REVISION	UPDATED                 	STATUS  	CHART               	APP VERSION	NAMESPACE   
istio         	2       	Fri Dec 27 18:33:51 2019	DEPLOYED	istio-1.4.0         	1.4.0      	istio-system
kube2iam      	1       	Mon Dec 16 16:36:50 2019	DEPLOYED	kube2iam-2.1.0      	0.10.7     	kube-system 
metrics-server	1       	Mon Dec 16 14:55:56 2019	DEPLOYED	metrics-server-2.8.8	0.3.5      	kube-system 

Steps to Reproduce the Problem

Specifications

  • Version:
git clone https://github.com/zalando-incubator/kube-metrics-adapter
commit 4412e3dca486658a04bc2585e1843c170da85e21 (HEAD -> master, origin/master, origin/HEAD)
  • Platform:
    I locally have Mac osx

  • Subsystem:
    aws kops with k8s

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.6", GitCommit:"96fac5cd13a5dc064f7d9f4f23030a6aeface6cc", GitTreeState:"clean", BuildDate:"2019-08-19T11:13:49Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.6", GitCommit:"7015f71e75f670eb9e7ebd4b5749639d42e20079", GitTreeState:"clean", BuildDate:"2019-11-13T11:11:50Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}
$ helm version
Client: &version.Version{SemVer:"v2.16.1", GitCommit:"bbdfe5e7803a12bbdf97e94cd847859890cf4050", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.16.1", GitCommit:"bbdfe5e7803a12bbdf97e94cd847859890cf4050", GitTreeState:"clean"}

Support for AWS collector from external cluster

I need to scale based on SQS queue length, but my cluster will be running on GKE (or locally), and I'm not 100% sure if this use case is supported yet. Can I just add the AWS credentials to the environment and get it working externally?
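Not something confirmed by the maintainers in this thread, but since the AWS collector goes through the AWS SDK, one hedged sketch is to inject the standard SDK credential environment variables into the adapter's Deployment from a Secret. The Secret name and keys below are assumptions; --aws-external-metrics is the flag mentioned in another issue in this list:

# Fragment of the kube-metrics-adapter Deployment (container section only)
containers:
- name: kube-metrics-adapter
  image: registry.opensource.zalan.do/teapot/kube-metrics-adapter:latest
  args:
  - --aws-external-metrics
  env:
  - name: AWS_REGION
    value: us-west-2                  # region of the SQS queues
  - name: AWS_ACCESS_KEY_ID
    valueFrom:
      secretKeyRef:
        name: aws-credentials         # assumed Secret
        key: access-key-id
  - name: AWS_SECRET_ACCESS_KEY
    valueFrom:
      secretKeyRef:
        name: aws-credentials
        key: secret-access-key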

Cant retrieve Custom Metrics

Expected Behavior

Custom metrics exposed by the Spring Boot Actuator metrics endpoint are read.

Actual Behavior

Unable to get metrics

Steps to Reproduce the Problem

  1. Deploy spring boot application in k8s using the dockerhub image https://cloud.docker.com/repository/docker/vinaybalamuru/spring-boot-hpa
kubectl run springboot-webapp --image=vinaybalamuru/spring-boot-hpa  --requests=cpu=200m --limits=cpu=500m --expose --port=7070
  2. Install kube-metrics-adapter (custom and external metrics, etc.). The adapter currently works for external Prometheus queries:
sh-4.2$ kubectl get apiservices |grep metrics
v1beta1.custom.metrics.k8s.io          kube-system/kube-metrics-adapter   True        18h
v1beta1.external.metrics.k8s.io        kube-system/kube-metrics-adapter   True        5d23h
v1beta1.metrics.k8s.io                 kube-system/metrics-server         True        9d
sh-4.2$ kubectl get po -n kube-system |grep metrics
kube-metrics-adapter-55cbb64dc9-g22pb   1/1     Running   0          3d
metrics-server-9cb648b76-4j6tf          1/1     Running   2          9d

  3. Create an HPA associated with springboot-webapp, intended to scale on the load-per-minute metric:
cat <<EOF | kubectl apply -f -
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: springboot-custom-hpa
  namespace: default
  labels:
    application: custom-metrics-consumer
  annotations:
    # metric-config.<metricType>.<metricName>.<collectorName>/<configKey>
    metric-config.pods.load-per-min.json-path/json-key: "$.measurements[:1].value"
    metric-config.pods.load-per-min.json-path/path: /actuator/metrics/system.load.average.1m
    metric-config.pods.load-per-min.json-path/port: "7070"
    metric-config.pods.load-per-min.json-path/scheme: "http"
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: springboot-webapp
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metricName: load-per-min
      targetAverageValue: 1
EOF
sh-4.2$ kubectl get hpa                  
NAME                                     REFERENCE                      TARGETS             MINPODS   MAXPODS   REPLICAS   AGE
springboot-custom-hpa                    Deployment/springboot-webapp   <unknown>/1         1         10        2          18h
sh-4.2$ 

  4. Describing the HPA, it looks like the HPA couldn't read the custom metrics:
$ kubectl describe hpa springboot-custom-hpa 
Name:                      springboot-custom-hpa
Namespace:                 default
Labels:                    application=custom-metrics-consumer
Annotations:               kubectl.kubernetes.io/last-applied-configuration:
                             {"apiVersion":"autoscaling/v2beta1","kind":"HorizontalPodAutoscaler","metadata":{"annotations":{"metric-config.pods.load-per-min.json-path...
                           metric-config.pods.load-per-min.json-path/json-key: $.measurements[:1].value
                           metric-config.pods.load-per-min.json-path/path: /actuator/metrics/system.load.average.1m
                           metric-config.pods.load-per-min.json-path/port: 7070
                           metric-config.pods.load-per-min.json-path/scheme: http
CreationTimestamp:         Wed, 21 Aug 2019 17:03:16 -0500
Reference:                 Deployment/springboot-webapp
Metrics:                   ( current / target )
  "load-per-min" on pods:  <unknown> / 1
Min replicas:              1
Max replicas:              10
Deployment pods:           2 current / 2 desired
Conditions:
  Type            Status  Reason               Message
  ----            ------  ------               -------
  AbleToScale     True    SucceededGetScale    the HPA controller was able to get the target's current scale
  ScalingActive   False   FailedGetPodsMetric  the HPA was unable to compute the replica count: unable to get metric load-per-min: no metrics returned from custom metrics API
  ScalingLimited  True    TooFewReplicas       the desired replica count is increasing faster than the maximum scale rate
Events:
  Type     Reason                        Age                  From                       Message
  ----     ------                        ----                 ----                       -------
  Warning  FailedGetPodsMetric           53m (x9 over 55m)    horizontal-pod-autoscaler  unable to get metric queue-length: no metrics returned from custom metrics API
  Warning  FailedComputeMetricsReplicas  53m (x9 over 55m)    horizontal-pod-autoscaler  Invalid metrics (1 invalid out of 1), last error was: failed to get object metric value: unable to get metric queue-length: no metrics returned from custom metrics API
  Warning  FailedComputeMetricsReplicas  53m (x3 over 53m)    horizontal-pod-autoscaler  Invalid metrics (1 invalid out of 1), last error was: failed to get object metric value: unable to get metric load-per-min: no metrics returned from custom metrics API
  Warning  FailedGetPodsMetric           54s (x209 over 53m)  horizontal-pod-autoscaler  unable to get metric load-per-min: no metrics returned from custom metrics API

I know that I can get into a pod and retrieve metrics from the service endpoint, so I'm not sure where things are going wrong, e.g.:

/ # wget -q -O- http://springboot-webapp.default.svc.cluster.local:7070/actuator/metrics/system.load.average.1m
{
  "name" : "system.load.average.1m",
  "description" : "The sum of the number of runnable entities queued to available processors and the number of runnable entities running on the available processors averaged over a period of time",
  "baseUnit" : null,
  "measurements" : [ {
    "statistic" : "VALUE",
    "value" : 0.0439453125
  } ],
  "availableTags" : [ ]
}/ # 

Specifications

  • Version:
  kube-metrics-adapter:
    Container ID:  docker://5b32f88953acbb290a22279899d10f853dabc1d7d9a0dd582c9652e32ee61295
    Image:         registry.opensource.zalan.do/teapot/kube-metrics-adapter:latest
    Image ID:      docker-pullable://registry.opensource.zalan.do/teapot/kube-metrics-adapter@sha256:12bd1e57c8448ed935a876959b143827b1f8f070d7
  • Platform:
    K8s 1.15
  • Subsystem:

Docs on using HTTPS with Pod collector

Expected Behavior

Add as an annotation:
metric-config.pods.requests-per-second.json-path/protocol: "https"
to have the pod collector reach out to an https pod endpoint.
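
For illustration, a minimal sketch of what the requested annotation could look like on an HPA. The protocol key is the hypothetical key being asked for here, not an existing option; the other json-path keys follow the ones already shown in the issue above (the metric name, path and port are made up), and note that those annotations already include a scheme key, which may be related:

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp
  annotations:
    # hypothetical key requested in this issue
    metric-config.pods.requests-per-second.json-path/protocol: "https"
    # existing json-path keys; values are illustrative
    metric-config.pods.requests-per-second.json-path/json-key: "$.http_server.rps"
    metric-config.pods.requests-per-second.json-path/path: /metrics
    metric-config.pods.requests-per-second.json-path/port: "9090"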

Actual Behavior

I couldn't find a way to achieve this. If there is any way to have the collector use HTTPS, please let me know. Thank you.

Crash in Kube Controller Manager

Actual Behavior

kube-metrics-adapter causes a panic in the kube-controller-manager.

Steps to Reproduce the Problem

  1. Configure a Secret with serving.crt and serving.key whose common_name is kube-metrics-adapter and whose alt_names are kube-metrics-adapter.kube-system, kube-metrics-adapter.kube-system.svc and kube-metrics-adapter.kube-system.svc.cluster.local
  2. Create all the files as described in the docs, but remove --skipper-ingress-metrics and --aws-external-metrics, and instead add --tls-cert-file=/var/run/serving-cert/serving.crt and --tls-private-key-file=/var/run/serving-cert/serving.key (a sketch of the resulting container args is included at the end of this issue)
  3. Now you can see the following logs for kube-metrics-deployment:
time="2019-03-24T15:24:44Z" level=info msg="Looking for HPAs" provider=hpa
I0324 15:24:44.156768       1 serve.go:96] Serving securely on [::]:443
time="2019-03-24T15:24:44Z" level=info msg="Found 6 new/updated HPA(s)" provider=hpa
time="2019-03-24T15:24:44Z" level=info msg="Event(v1.ObjectReference{Kind:\"HorizontalPodAutoscaler\", Namespace:\"microservices\", Name:\"ms-1\", UID:\"f861d0ed-4ce2-11e9-b661-025217a46e36\", APIVersion:\"autoscaling/v2beta1\", ResourceVersion:\"2296595\", FieldPath:\"\"}): type: 'Warning' reason: 'CreateNewMetricsCollector' Failed to create new metrics collector: format '' not supported"
E0324 15:24:51.243240       1 writers.go:149] apiserver was unable to write a JSON response: expected pointer, but got nil
E0324 15:24:51.243267       1 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"expected pointer, but got nil"}
  4. And for the kube-controller-manager:
I0324 15:24:37.134887       6 replica_set.go:477] Too few replicas for ReplicaSet kube-system/kube-metrics-adapter-f6cb64c84, need 1, creating 1
I0324 15:24:37.138649       6 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"kube-metrics-adapter", UID:"ef50f882-4e48-11e9-b661-025217a46e36", APIVersion:"apps/v1", ResourceVersion:"2300163", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set kube-metrics-adapter-f6cb64c84 to 1
I0324 15:24:37.150869       6 deployment_controller.go:484] Error syncing deployment kube-system/kube-metrics-adapter: Operation cannot be fulfilled on deployments.apps "kube-metrics-adapter": the object has been modified; please apply your changes to the latest version and try again
I0324 15:24:37.161632       6 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"kube-metrics-adapter-f6cb64c84", UID:"ef530b22-4e48-11e9-b661-025217a46e36", APIVersion:"apps/v1", ResourceVersion:"2300164", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-metrics-adapter-f6cb64c84-jt5sf
W0324 15:24:37.731700       6 garbagecollector.go:647] failed to discover some groups: map[custom.metrics.k8s.io/v1beta1:the server is currently unable to handle the request external.metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
I0324 15:24:51.233692       6 horizontal.go:777] Successfully updated status for istio-telemetry-autoscaler
E0324 15:24:51.245935       6 runtime.go:69] Observed a panic: &runtime.TypeAssertionError{_interface:(*runtime._type)(0x334f2e0), concrete:(*runtime._type)(0x39236e0), asserted:(*runtime._type)(0x390c980), missingMethod:""} (interface conversion: runtime.Object is *v1.Status, not *v1beta2.MetricValueList)
/workspace/anago-v1.13.4-beta.0.55+c27b913fddd1a6/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:76
/workspace/anago-v1.13.4-beta.0.55+c27b913fddd1a6/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:65
/workspace/anago-v1.13.4-beta.0.55+c27b913fddd1a6/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:51
/usr/local/go/src/runtime/asm_amd64.s:522
/usr/local/go/src/runtime/panic.go:513
/usr/local/go/src/runtime/iface.go:248
/usr/local/go/src/runtime/iface.go:258
/workspace/anago-v1.13.4-beta.0.55+c27b913fddd1a6/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/metrics/pkg/client/custom_metrics/versioned_client.go:269
/workspace/anago-v1.13.4-beta.0.55+c27b913fddd1a6/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/metrics/pkg/client/custom_metrics/multi_client.go:136
/workspace/anago-v1.13.4-beta.0.55+c27b913fddd1a6/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/controller/podautoscaler/metrics/rest_metrics_client.go:113
/workspace/anago-v1.13.4-beta.0.55+c27b913fddd1a6/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/controller/podautoscaler/replica_calculator.go:158
/workspace/anago-v1.13.4-beta.0.55+c27b913fddd1a6/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/controller/podautoscaler/horizontal.go:347
/workspace/anago-v1.13.4-beta.0.55+c27b913fddd1a6/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/controller/podautoscaler/horizontal.go:274
/workspace/anago-v1.13.4-beta.0.55+c27b913fddd1a6/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/controller/podautoscaler/horizontal.go:550
/workspace/anago-v1.13.4-beta.0.55+c27b913fddd1a6/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/controller/podautoscaler/horizontal.go:318
/workspace/anago-v1.13.4-beta.0.55+c27b913fddd1a6/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/controller/podautoscaler/horizontal.go:210
/workspace/anago-v1.13.4-beta.0.55+c27b913fddd1a6/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/controller/podautoscaler/horizontal.go:198
/workspace/anago-v1.13.4-beta.0.55+c27b913fddd1a6/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/controller/podautoscaler/horizontal.go:164
/workspace/anago-v1.13.4-beta.0.55+c27b913fddd1a6/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133
/workspace/anago-v1.13.4-beta.0.55+c27b913fddd1a6/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134
/workspace/anago-v1.13.4-beta.0.55+c27b913fddd1a6/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88
/usr/local/go/src/runtime/asm_amd64.s:1333
panic: interface conversion: runtime.Object is *v1.Status, not *v1beta2.MetricValueList [recovered]
	panic: interface conversion: runtime.Object is *v1.Status, not *v1beta2.MetricValueList

goroutine 3060 [running]:
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	/workspace/anago-v1.13.4-beta.0.55+c27b913fddd1a6/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:58 +0x108
panic(0x3315300, 0xc006b86030)
	/usr/local/go/src/runtime/panic.go:513 +0x1b9
k8s.io/kubernetes/vendor/k8s.io/metrics/pkg/client/custom_metrics.(*namespacedMetrics).GetForObjects(0xc006916660, 0x0, 0x0, 0x3a216bc, 0x3, 0x3efb4a0, 0xc005cb7e00, 0xc005614e20, 0x19, 0x3efb500, ...)
	/workspace/anago-v1.13.4-beta.0.55+c27b913fddd1a6/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/metrics/pkg/client/custom_metrics/versioned_client.go:269 +0x4c6
k8s.io/kubernetes/vendor/k8s.io/metrics/pkg/client/custom_metrics.(*multiClientInterface).GetForObjects(0xc004c448b0, 0x0, 0x0, 0x3a216bc, 0x3, 0x3efb4a0, 0xc005cb7e00, 0xc005614e20, 0x19, 0x3efb500, ...)
	/workspace/anago-v1.13.4-beta.0.55+c27b913fddd1a6/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/metrics/pkg/client/custom_metrics/multi_client.go:136 +0x118
k8s.io/kubernetes/pkg/controller/podautoscaler/metrics.(*customMetricsClient).GetRawMetric(0xc0003f44d0, 0xc005614e20, 0x19, 0xc0047b20a0, 0xd, 0x3efb4a0, 0xc005cb7e00, 0x3efb500, 0x6747b10, 0x0, ...)
	/workspace/anago-v1.13.4-beta.0.55+c27b913fddd1a6/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/controller/podautoscaler/metrics/rest_metrics_client.go:113 +0x102
k8s.io/kubernetes/pkg/controller/podautoscaler.(*ReplicaCalculator).GetMetricReplicas(0xc00095af40, 0x1, 0x2710, 0xc005614e20, 0x19, 0xc0047b20a0, 0xd, 0x3efb4a0, 0xc005cb7e00, 0x3efb500, ...)
	/workspace/anago-v1.13.4-beta.0.55+c27b913fddd1a6/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/controller/podautoscaler/replica_calculator.go:158 +0xb0
k8s.io/kubernetes/pkg/controller/podautoscaler.(*HorizontalController).computeStatusForPodsMetric(0xc0005f2780, 0x1, 0xc0064a8918, 0x4, 0x0, 0xc004d34cc0, 0x0, 0x0, 0xc0056ed340, 0x3efb4a0, ...)
	/workspace/anago-v1.13.4-beta.0.55+c27b913fddd1a6/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/controller/podautoscaler/horizontal.go:347 +0xd8
k8s.io/kubernetes/pkg/controller/podautoscaler.(*HorizontalController).computeReplicasForMetrics(0xc0005f2780, 0xc0056ed340, 0xc00690e500, 0xc004802cf0, 0x1, 0x1, 0x11, 0x3af8df0, 0x3d, 0x0, ...)
	/workspace/anago-v1.13.4-beta.0.55+c27b913fddd1a6/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/controller/podautoscaler/horizontal.go:274 +0xb10
k8s.io/kubernetes/pkg/controller/podautoscaler.(*HorizontalController).reconcileAutoscaler(0xc0005f2780, 0xc0002dab30, 0xc0046c96a0, 0x12, 0x0, 0x0)
	/workspace/anago-v1.13.4-beta.0.55+c27b913fddd1a6/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/controller/podautoscaler/horizontal.go:550 +0x1678
k8s.io/kubernetes/pkg/controller/podautoscaler.(*HorizontalController).reconcileKey(0xc0005f2780, 0xc0046c96a0, 0x12, 0x30439c0, 0xc004798750, 0x0)
	/workspace/anago-v1.13.4-beta.0.55+c27b913fddd1a6/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/controller/podautoscaler/horizontal.go:318 +0x278
k8s.io/kubernetes/pkg/controller/podautoscaler.(*HorizontalController).processNextWorkItem(0xc0005f2780, 0x3eb7700)
	/workspace/anago-v1.13.4-beta.0.55+c27b913fddd1a6/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/controller/podautoscaler/horizontal.go:210 +0xdf
k8s.io/kubernetes/pkg/controller/podautoscaler.(*HorizontalController).worker(0xc0005f2780)
	/workspace/anago-v1.13.4-beta.0.55+c27b913fddd1a6/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/controller/podautoscaler/horizontal.go:198 +0x2b
k8s.io/kubernetes/pkg/controller/podautoscaler.(*HorizontalController).worker-fm()
	/workspace/anago-v1.13.4-beta.0.55+c27b913fddd1a6/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/controller/podautoscaler/horizontal.go:164 +0x2a
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc004819940)
	/workspace/anago-v1.13.4-beta.0.55+c27b913fddd1a6/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x54
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc004819940, 0x3b9aca00, 0x0, 0x1, 0xc000394900)
	/workspace/anago-v1.13.4-beta.0.55+c27b913fddd1a6/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134 +0xbe
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0xc004819940, 0x3b9aca00, 0xc000394900)
	/workspace/anago-v1.13.4-beta.0.55+c27b913fddd1a6/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88 +0x4d
created by k8s.io/kubernetes/pkg/controller/podautoscaler.(*HorizontalController).Run
	/workspace/anago-v1.13.4-beta.0.55+c27b913fddd1a6/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/controller/podautoscaler/horizontal.go:164 +0x1c6

Specifications

  • Version: registry.opensource.zalan.do/teapot/kube-metrics-adapter:latest
  • Platform: Kubernetes 1.13.4
  • Subsystem: Ubuntu
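
For reference, a minimal sketch of the container args and Secret mount described in step 2 above (resource names are illustrative):

# excerpt from the kube-metrics-adapter Deployment
containers:
- name: kube-metrics-adapter
  image: registry.opensource.zalan.do/teapot/kube-metrics-adapter:latest
  args:
  - --tls-cert-file=/var/run/serving-cert/serving.crt
  - --tls-private-key-file=/var/run/serving-cert/serving.key
  volumeMounts:
  - name: serving-cert
    mountPath: /var/run/serving-cert
    readOnly: true
volumes:
- name: serving-cert
  secret:
    secretName: kube-metrics-adapter-serving-cert  # the Secret from step 1 (name assumed)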

Bug: crash when removing a previously scheduled metrics collector while the Prometheus external metric is null

Expected Behavior

Don't crash.

apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: "hpa-foo"
  namespace: default
  annotations:
    metric-config.external.prometheus-query.prometheus/ingress-foo-requests-per-second: |
      scalar(max(rate(nginx_ingress_controller_requests{ingress="ingress-foo", status="200"}[30s])))
spec:
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: "foo"
  minReplicas: 1
  maxReplicas: 2
  metrics:
  - type: External
    external:
      metricName: prometheus-query
      metricSelector:
        matchLabels:
          query-name: "ingress-foo-requests-per-second"
      targetAverageValue: 10k
  - type: Resource
    resource:
      name: cpu
      targetAverageUtilization: 80
  - type: Resource
    resource:
      name: memory
      targetAverageUtilization: 90

Actual Behavior

Crash.
The bug is that it tries to remove a collector that does not exist; it should probably check whether the collector exists before removing it.
This is only reproduced when the Prometheus metric is nil.

kube-metrics-adapter log:

time="2019-07-26T09:24:29Z" level=info msg="Removing previously scheduled metrics collector: {hpa-2679 default}" provider=hpa
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x16ae6ea]

goroutine 128 [running]:
github.com/zalando-incubator/kube-metrics-adapter/pkg/collector.ParseHPAMetrics(0xc000228a80, 0xc0008aef50, 0x8, 0xc0008aef58, 0x7, 0x1)
        /workspace/pkg/collector/collector.go:212 +0x53a
github.com/zalando-incubator/kube-metrics-adapter/pkg/provider.(*HPAProvider).updateHPAs(0xc0000fef60, 0x1c148e0, 0xc0000fef60)
        /workspace/pkg/provider/hpa.go:145 +0x88c
github.com/zalando-incubator/kube-metrics-adapter/pkg/provider.(*HPAProvider).Run(0xc0000fef60, 0x1efd9a0, 0xc000581cc0)
        /workspace/pkg/provider/hpa.go:97 +0xfb
created by github.com/zalando-incubator/kube-metrics-adapter/pkg/server.AdapterServerOptions.RunCustomMetricsAdapterServer
        /workspace/pkg/server/start.go:219 +0x8d7

Steps to Reproduce the Problem

  1. Create an HPA containing a Prometheus metric
  2. Make sure the Prometheus metric is nil (i.e. the query returns no data; see the check below)
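
To confirm the metric really is nil, the query from the annotation can be run directly against the Prometheus HTTP API (the Prometheus URL is illustrative); an empty "result" list for the inner vector means the adapter has no value to work with:

$ curl -s 'http://prometheus.default.svc.cluster.local:9090/api/v1/query' \
    --data-urlencode 'query=rate(nginx_ingress_controller_requests{ingress="ingress-foo", status="200"}[30s])'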

Specifications

  • Version: latest
  • Platform:k8s 1.13.7

Failed to collect metrics: client_error: client error: 401

Hi all,
I have set up kube-metrics-adapter, but when it tries to collect the metrics it reports:
Failed to collect metrics: client_error: client error: 401
What is the problem?

myapp and its HPA are in namespace A,
kube-metrics-adapter is in namespace kube-system,
and the Prometheus adapter is in namespace cattle-prometheus.

v0.1.3 requires additional rbac permissions

We just upgraded from 0.1.0 to 0.1.3 and started seeing errors in our logs like:

E0413 20:07:05.845509 1 reflector.go:153] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:105: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:kube-system:custom-metrics-apiserver" cannot list resource "configmaps" in API group "" in the namespace "kube-system"

I'm not sure what changed, but adding this apiGroups section to the rules for custom-metrics-resource-collector fixed it for us:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: custom-metrics-resource-collector
rules:
...
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - list
  - watch

Expected Behavior

There should be no errors/warnings in the logs.

Actual Behavior

See above logs. This caused the HorizontalPodAutoscalers to fail.

Steps to Reproduce the Problem

  1. Create installation following the configuration in the docs/ folder.
  2. Use either AWS SQS queue or HTTP collector (following docs in the README)
  3. Look at the logs on the kube-metrics-adapter pod.

Specifications

  • Version: Kubernetes v.15.17

kube-metrics-adapter using multiple prometheus in different namespaces

Expected Behavior

When you have 2 HorizontalPodAutoscalers using annotations to specify the prometheus server (e.g. metric-config.external.prometheus-query.prometheus/prometheus-server: http://my-prometheus), metrics should be read from the prometheus server in the annotation.
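
For example, a minimal sketch of two HPAs, each annotated to read from its own Prometheus (only the relevant metadata is shown; names, namespaces and queries are illustrative):

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: app-a
  namespace: team-a
  annotations:
    metric-config.external.prometheus-query.prometheus/prometheus-server: http://prometheus.team-a.svc.cluster.local:9090
    metric-config.external.prometheus-query.prometheus/requests-per-second: |
      sum(rate(http_requests_total{app="app-a"}[1m]))
---
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: app-b
  namespace: team-b
  annotations:
    metric-config.external.prometheus-query.prometheus/prometheus-server: http://prometheus.team-b.svc.cluster.local:9090
    metric-config.external.prometheus-query.prometheus/requests-per-second: |
      sum(rate(http_requests_total{app="app-b"}[1m]))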

Actual Behavior

The prometheus-server defined in the kube-metrics-server flags is used instead of using the value in the metric-config.external.prometheus-query.prometheus/prometheus-server annotation.

Steps to Reproduce the Problem

  1. Deploy two prometheus servers in two different namespaces.
  2. Create two HPAs using the metric-config.external.prometheus-query.prometheus/prometheus-server annotation. Each HPA should use one of the prometheus servers.
  3. Verify that the annotation is not honored.

Specifications

  • Version: v0.0.4
  • Platform: kubernetes 1.14.6
  • Subsystem: -

Comparison of Pod Collector to PodMonitor

While the difference between the Prometheus Collector and the Prometheus Adapter is documented, I am interested in a comparison of the Pod Collector and the experimental PodMonitor.

Recently, the prometheus operator got the concept of a PodMonitor: prometheus-operator/prometheus-operator#2566

The primary use case of a PodMonitor is scraping Pods directly without needing an explicit association to any specific service, as in sidecars shared by several services. For example, I want to generically scrape all Istio sidecars.

Nonsensical target value compared to the value in Prometheus

Hi all,
I have set up the external metrics.
In Prometheus the query (e.g. X) shows the value 34,
and I have set the AverageValue to 15 in the HPA:

target:
        type: AverageValue
        averageValue: 15

It works correctly, and for the value 34 the HPA creates 3 replicas,
but in Kubernetes the HPA shows 10334m/15.
How is 34 being converted to 10334m?
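
For what it's worth, with type: AverageValue the HPA status shows the metric total divided by the current replica count, rendered as a Kubernetes quantity where the m suffix means milli (1000m = 1). A rough worked example, assuming 3 replicas at the moment the status was read:

34 / 3 ≈ 11.333, which would display as roughly 11334m
10334m = 10.334, which corresponds to a metric total of about 31 across 3 replicas

So the number shown is not a conversion of 34 itself but the per-pod average at the time the HPA last sampled the metric.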

this is my full HPA :

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  namespace: test
  name: XX
  annotations:
    metric-config.external.prometheus-query.prometheus/prometheus-server: http://prometheus.default.svc.cluster.local:9090
    metric-config.external.prometheus-query.prometheus/X: |
      X
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: test
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: External
    external:
      metric:
        name: prometheus-query
        selector:
          matchLabels:
            query-name: X
      target:
        type: AverageValue
        averageValue: 15

SQS hpa throwing region error

Expected Behavior

The HPA should get the given SQS queue metric.

Actual Behavior

time="2020-05-19T07:28:31Z" level=info msg="Event(v1.ObjectReference{Kind:\"HorizontalPodAutoscaler\", Namespace:\"test-app\", Name:\"test-hpa\", UID:\"916f0d14-999d-11ea-9292-0aead91950b0\", APIVersion:\"autoscaling/v2beta2\", ResourceVersion:\"88872595\", FieldPath:\"\"}): type: 'Warning' reason: 'CreateNewMetricsCollector' Failed to create new metrics collector: the metric region: us-west-2 is not configured"

Here is my hpa config:

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: test-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: test-service
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: External
    external:
      metric:
        name: sqs-queue-length
        selector:
          matchLabels:
            queue-name: TEST_QUEUE
            region: us-west-2
      target:
        averageValue: "30"
        type: AverageValue
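
For what it's worth, the error suggests the adapter has no AWS session for us-west-2: it only builds sessions for regions passed to it at startup, so the region referenced in the metric selector also has to be given to the adapter itself. In recent versions this is a repeatable --aws-region flag (the flag name is an assumption here; verify against the adapter's --help). A sketch of the relevant container args:

# excerpt from the adapter Deployment args (flag name assumed; verify with --help)
args:
- --aws-external-metrics
- --aws-region=us-west-2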

Specifications

  • Version: 1.14
  • Platform: centos
  • Subsystem:

Feature request - configure interval between scraping

Today

Today it seems that metrics are collected every 60 seconds (based on what I see in the logs from kube-metrics-adapter).
This means that there is as much as 60 seconds delay before HPA gets a new metric to act on.
For services that need to scale up fast, that is adding a lot of delay.

Request

It would help to be able to configure this, so that HPA works faster.
In our case we collect CPU metrics every 15 seconds, so querying Prometheus every 15 seconds or even more often would be useful.
You of course need to be careful not to run heavy queries too often, but in our case the query is very lightweight and we only have one HPA configured.

Specifications

  • Version: :latest
  • Platform: Kubernetes
  • Subsystem: Prometheus
