kube-capacity

This is a simple CLI that provides an overview of the resource requests, limits, and utilization in a Kubernetes cluster. It attempts to combine the best parts of the output from kubectl top and kubectl describe into an easy-to-use CLI focused on cluster resources.

Installation

Go binaries are automatically built with each release by GoReleaser. These can be accessed on the GitHub releases page for this project.
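For example, to download and run a release binary manually (the URL pattern is taken from the v0.6.0 release referenced later in this document; adjust the version and platform for your system, and note this assumes the archive unpacks the kube-capacity binary into the current directory):

curl -L https://github.com/robscott/kube-capacity/releases/download/v0.6.0/kube-capacity_0.6.0_Linux_x86_64.tar.gz | tar xz
./kube-capacity --help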

Homebrew

This project can be installed with Homebrew:

brew tap robscott/tap
brew install robscott/tap/kube-capacity

Krew

This project can be installed with Krew:

kubectl krew install resource-capacity
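Once installed via Krew, the tool is invoked as a kubectl plugin rather than as a standalone binary, as the issue reports later in this document show:

kubectl resource-capacity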

Usage

By default, kube-capacity will output a list of nodes with the total CPU and memory resource requests and limits for all the pods running on them. For clusters with more than one node, the first line will also include cluster-wide totals. That output will look something like this:

kube-capacity

NODE              CPU REQUESTS    CPU LIMITS    MEMORY REQUESTS    MEMORY LIMITS
*                 560m (28%)      130m (7%)     572Mi (9%)         770Mi (13%)
example-node-1    220m (22%)      10m (1%)      192Mi (6%)         360Mi (12%)
example-node-2    340m (34%)      120m (12%)    380Mi (13%)        410Mi (14%)

Including Pods

For more detailed output, kube-capacity can include pods in the output. When -p or --pods is passed to kube-capacity, it will include pod-specific output that looks like this:

kube-capacity --pods

NODE              NAMESPACE     POD                   CPU REQUESTS    CPU LIMITS    MEMORY REQUESTS    MEMORY LIMITS
*                 *             *                     560m (28%)      780m (38%)    572Mi (9%)         770Mi (13%)

example-node-1    *             *                     220m (22%)      320m (32%)    192Mi (6%)         360Mi (12%)
example-node-1    kube-system   metrics-server-lwc6z  100m (10%)      200m (20%)    100Mi (3%)         200Mi (7%)
example-node-1    kube-system   coredns-7b5bcb98f8    120m (12%)      120m (12%)    92Mi (3%)          160Mi (5%)

example-node-2    *             *                     340m (34%)      460m (46%)    380Mi (13%)        410Mi (14%)
example-node-2    kube-system   kube-proxy-3ki7       200m (20%)      280m (28%)    210Mi (7%)         210Mi (7%)
example-node-2    tiller        tiller-deploy         140m (14%)      180m (18%)    170Mi (5%)         200Mi (7%)

Including Utilization

To help understand how resource utilization compares to configured requests and limits, kube-capacity can include utilization metrics in the output. It's important to note that this output relies on metrics-server functioning correctly in your cluster. When -u or --util is passed to kube-capacity, it will include resource utilization information that looks like this:

kube-capacity --util

NODE              CPU REQUESTS    CPU LIMITS    CPU UTIL    MEMORY REQUESTS    MEMORY LIMITS   MEMORY UTIL
*                 560m (28%)      130m (7%)     40m (2%)    572Mi (9%)         770Mi (13%)     470Mi (8%)
example-node-1    220m (22%)      10m (1%)      10m (1%)    192Mi (6%)         360Mi (12%)     210Mi (7%)
example-node-2    340m (34%)      120m (12%)    30m (3%)    380Mi (13%)        410Mi (14%)     260Mi (9%)

Displaying Available Resources

To see the total available resources on each node more clearly, pass the --available option to kube-capacity, which gives output in the following format:

kube-capacity --available

NODE              CPU REQUESTS    CPU LIMITS    MEMORY REQUESTS    MEMORY LIMITS
*                 560/2000m       130/2000m     572/5923Mi         770/5923Mi 
example-node-1    220/1000m       10/1000m      192/3200Mi         360/3200Mi 
example-node-2    340/1000m       120/1000m     380/2923Mi         410/2923Mi

Including Pods and Utilization

For more detailed output, kube-capacity can include both pods and resource utilization in the output. When --util and --pods are passed to kube-capacity, it will result in a wide output that looks like this:

kube-capacity --pods --util

NODE              NAMESPACE     POD                   CPU REQUESTS    CPU LIMITS   CPU UTIL     MEMORY REQUESTS    MEMORY LIMITS   MEMORY UTIL
*                 *             *                     560m (28%)      780m (38%)   340m (17%)   572Mi (9%)         770Mi (13%)     470Mi (8%)

example-node-1    *             *                     220m (22%)      320m (32%)   160m (16%)   192Mi (6%)         360Mi (12%)     210Mi (7%)
example-node-1    kube-system   metrics-server-lwc6z  100m (10%)      200m (20%)   70m (7%)     100Mi (3%)         200Mi (7%)      120Mi (4%)
example-node-1    kube-system   coredns-7b5bcb98f8    120m (12%)      120m (12%)   90m (9%)     92Mi (3%)          160Mi (5%)      90Mi (3%)

example-node-2    *             *                     340m (34%)      460m (46%)   180m (18%)   380Mi (13%)        410Mi (14%)     260Mi (9%)
example-node-2    kube-system   kube-proxy-3ki7       200m (20%)      280m (28%)   110m (11%)   210Mi (7%)         210Mi (7%)      120Mi (4%)
example-node-2    tiller        tiller-deploy         140m (14%)      180m (18%)   70m (7%)     170Mi (6%)         200Mi (7%)      140Mi (5%)

It's worth noting that utilization numbers from pods will likely not add up to the total node utilization numbers. Unlike request and limit numbers, where node- and cluster-level numbers represent a sum of pod values, node metrics come directly from metrics-server and will likely include other forms of resource utilization.

Sorting

To highlight the nodes, pods, and containers with the highest metrics, you can sort by a variety of columns:

kube-capacity --util --sort cpu.util

NODE              CPU REQUESTS    CPU LIMITS    CPU UTIL    MEMORY REQUESTS    MEMORY LIMITS   MEMORY UTIL
*                 560m (28%)      130m (7%)     40m (2%)    572Mi (9%)         770Mi (13%)     470Mi (8%)
example-node-2    340m (34%)      120m (12%)    30m (3%)    380Mi (13%)        410Mi (14%)     260Mi (9%)
example-node-1    220m (22%)      10m (1%)      10m (1%)    192Mi (6%)         360Mi (12%)     210Mi (7%)

Note: Starting in v0.7.4, you can append .percentage to sort by percentage. For example: kube-capacity --util --sort cpu.util.percentage.

Displaying Pod Count

To display the pod count for each node and for the whole cluster, you can pass the --pod-count argument:

$ kube-capacity --pod-count

NODE           CPU REQUESTS   CPU LIMITS   MEMORY REQUESTS   MEMORY LIMITS   POD COUNT
*              950m (2%)      200m (0%)    284Mi (0%)        284Mi (0%)      10/220
minikube       850m (5%)      100m (0%)    231Mi (1%)        231Mi (1%)      8/110
minikube-m02   100m (0%)      100m (0%)    53Mi (0%)         53Mi (0%)       2/110

Filtering By Labels

For more advanced usage, kube-capacity also supports filtering by pod, namespace, and/or node labels. The following examples show how to use these filters:

kube-capacity --pod-labels app=nginx
kube-capacity --namespace default
kube-capacity --namespace-labels team=api
kube-capacity --node-labels kubernetes.io/role=node

Filtering By Node Taints

Kube-capacity supports advanced filtering by taints. Users can filter in and filter out taints within the same expression. The following examples show how to use node taint filters:

kube-capacity --node-taints special=true:NoSchedule 
kube-capacity --node-taints special:NoSchedule 

These will return only nodes with the special taint.

kube-capacity --node-taints special=true:NoSchedule-
kube-capacity --node-taints special:NoSchedule-

These will filter out special nodes and return only unspecial nodes.

kube-capacity --node-taints special=true:NoSchedule,old-hardware:NoSchedule-

This will return special nodes that are not tainted with old-hardware:NoSchedule. In other words, display the special nodes but don't display the ones that are running on old hardware.

kube-capacity --no-taint

This will filter out all nodes with taints.

JSON and YAML Output

By default, kube-capacity will provide output in a table format. To view this data in JSON or YAML format, the output flag can be used. Here are some sample commands:

kube-capacity --pods --output json
kube-capacity --pods --containers --util --output yaml
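Because the JSON output is structured (see the nodes and clusterTotals keys in the sample later in this document), it can be post-processed with tools like jq. A small example, assuming jq is installed:

kube-capacity --output json | jq -r '.nodes[].name'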

CSV and TSV Output

If you would like the data in a comma- or tab-separated file to make importing it into a spreadsheet easier, the output flag has options for those as well. Here are some sample commands:

kube-capacity --pods --output csv
kube-capacity --pods --containers --util --output tsv

Note: the --available flag is ignored with these two choices, as the values can be derived within a spreadsheet.

Flags Supported

      --as string                 user to impersonate command with
      --as-group string           group to impersonate command with
  -c, --containers                includes containers in output
      --context string            context to use for Kubernetes config
  -h, --help                      help for kube-capacity
      --kubeconfig string         kubeconfig file to use for Kubernetes config
  -n, --namespace string          only include pods from this namespace
      --namespace-labels string   labels to filter namespaces with
      --no-taint                  exclude nodes with taints
      --node-labels string        labels to filter nodes with
  -o, --output string             output format for information
                                    (supports: [table json yaml csv tsv])
                                    (default "table")
  -a, --available                 includes quantity available instead of percentage used (ignored with csv or tsv output types)
  -t, --node-taints               taints to filter nodes with
  -l, --pod-labels string         labels to filter pods with
  -p, --pods                      includes pods in output
      --sort string               attribute to sort results by (supports:
                                    [cpu.util cpu.request cpu.limit mem.util mem.request mem.limit cpu.util.percentage
                                    cpu.request.percentage cpu.limit.percentage mem.util.percentage mem.request.percentage
                                    mem.limit.percentage name])
                                    (default "name")
  -u, --util                      includes resource utilization in output
      --pod-count                 includes pod counts for each of the nodes and the whole cluster

Prerequisites

Any commands requesting cluster utilization are dependent on metrics-server running on your cluster. If it's not already installed, you can install it with the official helm chart.
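For example, assuming the kubernetes-sigs chart repository, installation might look like this:

helm repo add metrics-server https://kubernetes-sigs.github.io/metrics-server/
helm upgrade --install metrics-server metrics-server/metrics-server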

Similar Projects

There are already some great projects out there that have similar goals.

  • kube-resource-report: generates HTML/CSS report for resource requests and limits across multiple clusters.
  • kubetop: a CLI similar to top for Kubernetes, focused on resource utilization (not requests and limits).

Contributors

Although this project was originally developed by robscott, there have been some great contributions from others: barrykp, bilalcaliskan, clive-jevons, cloud-66, dependabot[bot], forget-c, isaacnboyd, ivanfetch, nickatsegment, padarn, pigeon-999, piotrwielgolaski-tomtom, rajatjindal, robscott, suzuki-shunsuke, volatus, w21froster, yardenshoham.

License

Apache License 2.0

Issues

brew repository contains the old version

> brew info robscott/tap/kube-capacity
robscott/tap/kube-capacity: stable 0.5.0
kube-capacity provides an overview of the resource requests, limits, and utilization in a Kubernetes cluster

/opt/homebrew/Cellar/kube-capacity/0.5.0 (5 files, 31.0MB) *
  Built from source on 2021-04-27 at 13:56:04
From: https://github.com/robscott/homebrew-tap/blob/HEAD/Formula/kube-capacity.rb

Crash with node label and -u

$ kube-capacity  --node-labels 'kubernetes.io/hostname=h1349' -u
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x10 pc=0x10cf0c5]

goroutine 1 [running]:
github.com/robscott/kube-capacity/pkg/capacity.buildClusterMetric(0xc00028a9a0, 0xc0013a0700, 0xc00028a380, 0xc001464150, 0x0, 0x1c, 0x0)
        /usr/local/google/home/robertjscott/go/src/github.com/robscott/kube-capacity/pkg/capacity/resources.go:97 +0x4c5
github.com/robscott/kube-capacity/pkg/capacity.FetchAndPrint(0x1010000, 0x0, 0x0, 0x7fff578efb99, 0x1c, 0x0, 0x0, 0x0, 0x0, 0x136a805, ...)
        /usr/local/google/home/robertjscott/go/src/github.com/robscott/kube-capacity/pkg/capacity/capacity.go:53 +0x286
github.com/robscott/kube-capacity/pkg/cmd.glob..func1(0x20ca900, 0xc00043c300, 0x0, 0x3)
        /usr/local/google/home/robertjscott/go/src/github.com/robscott/kube-capacity/pkg/cmd/root.go:49 +0x21d
github.com/robscott/kube-capacity/vendor/github.com/spf13/cobra.(*Command).execute(0x20ca900, 0xc0000be010, 0x3, 0x3, 0x20ca900, 0xc0000be010)
        /usr/local/google/home/robertjscott/go/src/github.com/robscott/kube-capacity/vendor/github.com/spf13/cobra/command.go:766 +0x2ae
github.com/robscott/kube-capacity/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0x20ca900, 0xc00043bf68, 0x10e5cae, 0x20ca900)
        /usr/local/google/home/robertjscott/go/src/github.com/robscott/kube-capacity/vendor/github.com/spf13/cobra/command.go:852 +0x2ec
github.com/robscott/kube-capacity/vendor/github.com/spf13/cobra.(*Command).Execute(...)
        /usr/local/google/home/robertjscott/go/src/github.com/robscott/kube-capacity/vendor/github.com/spf13/cobra/command.go:800
github.com/robscott/kube-capacity/pkg/cmd.Execute()
        /usr/local/google/home/robertjscott/go/src/github.com/robscott/kube-capacity/pkg/cmd/root.go:79 +0x32
main.main()
        /usr/local/google/home/robertjscott/go/src/github.com/robscott/kube-capacity/main.go:22 +0x20

$ kube-capacity  version
kube-capacity version 0.3.2

Option `--available` is not supported in combination with YAML or JSON output

Context

I am trying to capture the available request/limit resources in my cluster with a script, to determine whether I have enough resources to deploy a new pod. I want to capture this information from YAML or JSON output so I can easily manipulate it with a tool like jq or yq.

Actual results

The -a option is ignored when the output format is not table:

$ kube-capacity
NODE                              CPU REQUESTS   CPU LIMITS      MEMORY REQUESTS   MEMORY LIMITS
*                                 2939m (30%)    12744m (134%)   28368Mi (50%)     52476Mi (94%)
aks-default-xxxxxxx-vmss000000   1024m (53%)    9344m (491%)    2320Mi (43%)      10260Mi (191%)
aks-spot-yyyyyyyy-vmss000000      510m (26%)     850m (44%)      12224Mi (97%)     16266Mi (129%)
aks-spot-yyyyyyyy-vmss000018      485m (25%)     850m (44%)      5128Mi (40%)      9170Mi (72%)
aks-spot-yyyyyyyy-vmss00001g      460m (24%)     850m (44%)      4336Mi (34%)      8378Mi (66%)
aks-spot-yyyyyyyy-vmss00001t      460m (24%)     850m (44%)      4360Mi (34%)      8402Mi (66%)

$ kube-capacity -a
NODE                              CPU REQUESTS   CPU LIMITS     MEMORY REQUESTS   MEMORY LIMITS
*                                 6561m/9500m    -3244m/9500m   27387Mi/55755Mi   3279Mi/55755Mi
aks-default-xxxxxxx-vmss000000   876m/1900m     -7444m/1900m   3045Mi/5365Mi     -4895Mi/5365Mi
aks-spot-yyyyyyyy-vmss000000      1390m/1900m    1050m/1900m    374Mi/12598Mi     -3668Mi/12598Mi
aks-spot-yyyyyyyy-vmss000018      1415m/1900m    1050m/1900m    7470Mi/12598Mi    3428Mi/12598Mi
aks-spot-yyyyyyyy-vmss00001g      1440m/1900m    1050m/1900m    8262Mi/12598Mi    4220Mi/12598Mi
aks-spot-yyyyyyyy-vmss00001t      1440m/1900m    1050m/1900m    8238Mi/12598Mi    4196Mi/12598Mi

$ kube-capacity -a -o json 
{
  "nodes": [
    {
      "name": "aks-default-xxxxxxx-vmss000000",
      "cpu": {
        "requests": "1024m",
        "requestsPercent": "53%",
        "limits": "9344m",
        "limitsPercent": "491%"
      },
      "memory": {
        "requests": "2320Mi",
        "requestsPercent": "43%",
        "limits": "10260Mi",
        "limitsPercent": "191%"
      }
    },
    {
      "name": "aks-spot-yyyyyyyy-vmss000000",
      "cpu": {
        "requests": "510m",
        "requestsPercent": "26%",
        "limits": "850m",
        "limitsPercent": "44%"
      },
      "memory": {
        "requests": "12224Mi",
        "requestsPercent": "97%",
        "limits": "16266Mi",
        "limitsPercent": "129%"
      }
    },
    {
      "name": "aks-spot-yyyyyyyy-vmss000018",
      "cpu": {
        "requests": "485m",
        "requestsPercent": "25%",
        "limits": "850m",
        "limitsPercent": "44%"
      },
      "memory": {
        "requests": "5128Mi",
        "requestsPercent": "40%",
        "limits": "9170Mi",
        "limitsPercent": "72%"
      }
    },
    {
      "name": "aks-spot-yyyyyyyy-vmss00001g",
      "cpu": {
        "requests": "460m",
        "requestsPercent": "24%",
        "limits": "850m",
        "limitsPercent": "44%"
      },
      "memory": {
        "requests": "4336Mi",
        "requestsPercent": "34%",
        "limits": "8378Mi",
        "limitsPercent": "66%"
      }
    },
    {
      "name": "aks-spot-yyyyyyyy-vmss00001t",
      "cpu": {
        "requests": "460m",
        "requestsPercent": "24%",
        "limits": "850m",
        "limitsPercent": "44%"
      },
      "memory": {
        "requests": "4360Mi",
        "requestsPercent": "34%",
        "limits": "8402Mi",
        "limitsPercent": "66%"
      }
    }
  ],
  "clusterTotals": {
    "cpu": {
      "requests": "2939m",
      "requestsPercent": "30%",
      "limits": "12744m",
      "limitsPercent": "134%"
    },
    "memory": {
      "requests": "28368Mi",
      "requestsPercent": "50%",
      "limits": "52476Mi",
      "limitsPercent": "94%"
    }
  }
}

Expected results

When -a is specified with JSON or YAML output, replace requests and limits (or add fields) with requestsAvailable and limitsAvailable.
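For example, the per-node JSON might then look something like this (a sketch of the proposal; the field names are illustrative, not an existing output format, and the values are taken from the -a table above):

{
  "name": "aks-default-xxxxxxx-vmss000000",
  "cpu": {
    "requestsAvailable": "876m",
    "limitsAvailable": "-7444m"
  },
  "memory": {
    "requestsAvailable": "3045Mi",
    "limitsAvailable": "-4895Mi"
  }
}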

Version

$ kube-capacity version 
kube-capacity version v0.7.4

Upgrade k8s.io deps to v0.23.x

Since kube-capacity is using Go 1.17 and client-go v0.23.4 is compatible with Go 1.17, I think the client-go dependency should be bumped to take advantage of the latest release. @robscott If you assign it to me, I can handle the upgrade process. I can also upgrade all direct deps if you want. Regards!

Pods with status Complete still shown

I have some pods that stay in Completed status after a CronJob has run, but they are still shown in the list.

Any plans to add a feature for namespace resource quotas?
Thank you for a great tool!

Kubeconfig file modified when using Azure OIDC

We have started to use Azure OIDC to authenticate to our clusters. Whenever I execute a kube-capacity command against a cluster, it modifies my kubeconfig file and removes a setting (environment: AzurePublicCloud), which makes kubectl no longer work with the authentication.

Original content of kubeconfig:

...
users:
- name: oidc_user
  user:
    auth-provider:
      config:
        access-token: <TOKEN>
        apiserver-id: <APISERVER ID>
        client-id: <CLIENT ID>
        environment: AzurePublicCloud
        expires-in: "3599"
        expires-on: "1579869933"
        refresh-token: <REFRESH TOKEN>
        tenant-id: <TENANT ID>
      name: azure
...

kubeconfig contents after running kube-capacity:

...
users:
- name: oidc_user
  user:
    auth-provider:
      config:
        access-token: <TOKEN>
        apiserver-id: <APISERVER ID>
        client-id: <CLIENT ID>
        expires-in: "3599"
        expires-on: "1579869933"
        refresh-token: <REFRESH TOKEN>
        tenant-id: <TENANT ID>
      name: azure
...

Same bug seems to be in Stern: https://github.com/wercker/stern/issues/119

A Stern fork seems to have fixed the issue by upgrading to a newer Kubernetes API.

Error getting metrics

Taking kube-capacity for a spin, and hit an error:

$ kube-capacity
Error getting metrics
panic: the server could not find the requested resource (get nodes.metrics.k8s.io)

goroutine 1 [running]:
github.com/robscott/kube-capacity/pkg/capacity.getMetrics(0xc000116700, 0xc000371b90)
	/Users/rob/go/src/github.com/robscott/kube-capacity/pkg/capacity/list.go:66 +0x30a
github.com/robscott/kube-capacity/pkg/capacity.List(0x28b4dd0, 0x0, 0x0, 0x0)
	/Users/rob/go/src/github.com/robscott/kube-capacity/pkg/capacity/list.go:29 +0x3e
github.com/robscott/kube-capacity/pkg/cmd.glob..func1(0x288d420, 0x28b4dd0, 0x0, 0x0)
	/Users/rob/go/src/github.com/robscott/kube-capacity/pkg/cmd/root.go:38 +0xe1
github.com/robscott/kube-capacity/vendor/github.com/spf13/cobra.(*Command).execute(0x288d420, 0xc0000381b0, 0x0, 0x0, 0x288d420, 0xc0000381b0)
	/Users/rob/go/src/github.com/robscott/kube-capacity/vendor/github.com/spf13/cobra/command.go:766 +0x2cc
github.com/robscott/kube-capacity/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0x288d420, 0x288d680, 0xc000133f50, 0x1b6061e)
	/Users/rob/go/src/github.com/robscott/kube-capacity/vendor/github.com/spf13/cobra/command.go:852 +0x2fd
github.com/robscott/kube-capacity/vendor/github.com/spf13/cobra.(*Command).Execute(0x288d420, 0x10053b0, 0xc00009c058)
	/Users/rob/go/src/github.com/robscott/kube-capacity/vendor/github.com/spf13/cobra/command.go:800 +0x2b
github.com/robscott/kube-capacity/pkg/cmd.Execute()
	/Users/rob/go/src/github.com/robscott/kube-capacity/pkg/cmd/root.go:49 +0x2d
main.main()
	/Users/rob/go/src/github.com/robscott/kube-capacity/main.go:22 +0x20

kubectl get nodes does work, however. Am I missing a dependency?
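If you hit this, it is most likely the metrics-server prerequisite described earlier in this document. One way to check (plain kubectl, nothing specific to kube-capacity) is:

kubectl get apiservice v1beta1.metrics.k8s.io
kubectl top nodes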

Cannot use `--pods` and `-n` namespace filtering together

$ kubectl resource-capacity --pods shows too many pods, so I want to filter them down by namespace (or namespace labels). If you try $ kubectl resource-capacity --pods -n kube-system, it shows everything as empty for some reason. What I expect is that it should gather the exact same values as the previous run and list only the given namespace. But it prints * in the NAMESPACE column, not kube-system.


Is it possible to use the tool without listing nodes?

Hi community,

With my limited permissions, I have this error

kubectl resource-capacity --sort cpu.limit --util --pods                     
Error listing Nodes: nodes is forbidden: User "320144150" cannot list resource "nodes" in API group "" at the cluster scope

Is it possible to use the tool with my limited RBAC permissions?

Best regards,
Jizu

Cannot use -u and --node-labels together

Trying to use these together results in the following error message:

 % kubectl resource-capacity --node-labels 'kubernetes.io/role=node' -u                                                           
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x10 pc=0x10cf0c5]

goroutine 1 [running]:
github.com/robscott/kube-capacity/pkg/capacity.buildClusterMetric(0xc0001324d0, 0xc000269ab0, 0xc000492d20, 0xc0001344d0, 0x0, 0x17, 0x0)
	/usr/local/google/home/robertjscott/go/src/github.com/robscott/kube-capacity/pkg/capacity/resources.go:97 +0x4c5
github.com/robscott/kube-capacity/pkg/capacity.FetchAndPrint(0x1010000, 0x0, 0x0, 0x7ffd49634e63, 0x17, 0x0, 0x0, 0x0, 0x0, 0x136a805, ...)
	/usr/local/google/home/robertjscott/go/src/github.com/robscott/kube-capacity/pkg/capacity/capacity.go:53 +0x286
github.com/robscott/kube-capacity/pkg/cmd.glob..func1(0x20ca900, 0xc00038c450, 0x0, 0x3)
	/usr/local/google/home/robertjscott/go/src/github.com/robscott/kube-capacity/pkg/cmd/root.go:49 +0x21d
github.com/robscott/kube-capacity/vendor/github.com/spf13/cobra.(*Command).execute(0x20ca900, 0xc00003a090, 0x3, 0x3, 0x20ca900, 0xc00003a090)
	/usr/local/google/home/robertjscott/go/src/github.com/robscott/kube-capacity/vendor/github.com/spf13/cobra/command.go:766 +0x2ae
github.com/robscott/kube-capacity/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0x20ca900, 0xc00041bf68, 0x10e5cae, 0x20ca900)
	/usr/local/google/home/robertjscott/go/src/github.com/robscott/kube-capacity/vendor/github.com/spf13/cobra/command.go:852 +0x2ec
github.com/robscott/kube-capacity/vendor/github.com/spf13/cobra.(*Command).Execute(...)
	/usr/local/google/home/robertjscott/go/src/github.com/robscott/kube-capacity/vendor/github.com/spf13/cobra/command.go:800
github.com/robscott/kube-capacity/pkg/cmd.Execute()
	/usr/local/google/home/robertjscott/go/src/github.com/robscott/kube-capacity/pkg/cmd/root.go:79 +0x32
main.main()
	/usr/local/google/home/robertjscott/go/src/github.com/robscott/kube-capacity/main.go:22 +0x20

Using kops-deployed Kubernetes on AWS; installed this plugin via Krew.

Add support to exclude node labels

Currently it is possible to filter node labels with --node-labels.
In reality some nodes have two labels like node-role.kubernetes.io/worker and node-role.kubernetes.io/infra.
To exclude a node with a second label, I would like to propose a feature to exclude labels with --exclude-node-labels.

Option to calculate percentages according to requests/limits

Hi, first of all thanks for this super useful plugin; it's exactly what I was missing for some time!

In order to reduce the memory footprint of apps on my cluster, I'd like to tune memory requests, and I'd like to propose an option that calculates the memory metrics not according to the total node memory, but according to the requests/limits (the same should be possible with CPU metrics, but my focus is on memory).
Right now kube-capacity shows 2% memory usage of the total node memory:

โฏ kubectl resource-capacity --pods --util --sort mem.util | grep -E '(NODE|kustom)'
NODE        NAMESPACE      POD                                        CPU REQUESTS   CPU LIMITS   CPU UTIL    MEMORY REQUESTS   MEMORY LIMITS   MEMORY UTIL
bitrigger   flux-system    kustomize-controller-7dd58878b8-7jmnb      100m (2%)      0Mi (0%)     3m (0%)     64Mi (3%)         1024Mi (51%)    49Mi (2%)

What I'd be interested in is the percentage relative to, e.g., the memory requests (which is what k9s shows in the default pod list), which is actually 77%:

│ NAMESPACE↑     NAME                                     PF   READY     RESTARTS STATUS       CPU   MEM   %CPU/R    %CPU/L    %MEM/R    %MEM/L IP             NODE          AGE       │
│ flux-system    kustomize-controller-7dd58878b8-7jmnb    ●    1/1              0 Running        3    50        3       n/a        78         4 10.42.0.38     bitrigger     3h4m

So a flag could be --percentage=[node|req|limit] which could apply to both CPU and mem metrics.

Json output shows CPU utilization with unknown unit

Hello folks,

I get different outputs in JSON and non-JSON outputs for the following command:

kube-capacity --util --sort cpu.util --output json

JSON output:

{ "nodes": [ { "name": "gke-snapblocs-dpstudi-default-f5be526-2c7e96ef-t6t4", "cpu": { "requests": "2062m", "requestsPercent": "52%", "limits": "5544m", "limitsPercent": "141%", "utilization": "1013446562n", "utilizationPercent": "25%" }, "memory": { "requests": "5821Mi", "requestsPercent": "43%", "limits": "6151Mi", "limitsPercent": "46%", "utilization": "3707128Ki", "utilizationPercent": "27%" } }, { "name": "gke-snapblocs-dpstudi-default-f5be526-e4e0068f-6zts", "cpu": { "requests": "1803m", "requestsPercent": "45%", "limits": "3100m", "limitsPercent": "79%", "utilization": "787564923n", "utilizationPercent": "20%" }, "memory": { "requests": "2768Mi", "requestsPercent": "20%", "limits": "3438Mi", "limitsPercent": "25%", "utilization": "2532912Ki", "utilizationPercent": "18%" } }, { "name": "gke-snapblocs-dpstudi-default-f5be526-e4e0068f-8xjc", "cpu": { "requests": "2083m", "requestsPercent": "53%", "limits": "3410m", "limitsPercent": "86%", "utilization": "626234143n", "utilizationPercent": "15%" }, "memory": { "requests": "3802Mi", "requestsPercent": "28%", "limits": "3458Mi", "limitsPercent": "26%", "utilization": "2229032Ki", "utilizationPercent": "16%" } }, { "name": "gke-snapblocs-dpstudi-default-f5be526-2c7e96ef-8w0p", "cpu": { "requests": "2113m", "requestsPercent": "53%", "limits": "4600m", "limitsPercent": "117%", "utilization": "597228442n", "utilizationPercent": "15%" }, "memory": { "requests": "4172Mi", "requestsPercent": "31%", "limits": "4252Mi", "limitsPercent": "31%", "utilization": "2956028Ki", "utilizationPercent": "21%" } }, { "name": "gke-snapblocs-dpstudi-default-f5be526-2c7e96ef-292x", "cpu": { "requests": "1813m", "requestsPercent": "46%", "limits": "3400m", "limitsPercent": "86%", "utilization": "538324970n", "utilizationPercent": "13%" }, "memory": { "requests": "4696Mi", "requestsPercent": "35%", "limits": "3228Mi", "limitsPercent": "24%", "utilization": "2081224Ki", "utilizationPercent": "15%" } }, { "name": "gke-snapblocs-dpstudi-default-f5be526-e4e0068f-p2m0", "cpu": { "requests": "1713m", "requestsPercent": "43%", "limits": "3200m", "limitsPercent": "81%", "utilization": "528323068n", "utilizationPercent": "13%" }, "memory": { "requests": "3762Mi", "requestsPercent": "28%", "limits": "3228Mi", "limitsPercent": "24%", "utilization": "2317608Ki", "utilizationPercent": "17%" } } ], "clusterTotals": { "cpu": { "requests": "11587m", "requestsPercent": "49%", "limits": "23254m", "limitsPercent": "98%", "utilization": "4091122108n", "utilizationPercent": "17%" }, "memory": { "requests": "25021Mi", "requestsPercent": "31%", "limits": "23755Mi", "limitsPercent": "29%", "utilization": "15823932Ki", "utilizationPercent": "19%" } } }

Non-JSON output: a screenshot (not reproduced here) showing the same nodes with CPU in m and memory in Mi.

As you can see for the node `gke-snapblocs-dpstudi-default-f5be526-2c7e96ef-t6t4`, the CPU and memory utilization values in the JSON output carry `n` and `Ki` units, which do not match the non-JSON output's `m` and `Mi`.

Version:
kubernetes: 1.20
kube-capacity: 0.6.1
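For anyone post-processing this JSON, the units can be normalized with the Kubernetes apimachinery resource package. A minimal downstream sketch using values from the output above (a workaround for consumers, not a description of how kube-capacity itself formats values):

package main

import (
    "fmt"

    "k8s.io/apimachinery/pkg/api/resource"
)

func main() {
    // metrics-server reports CPU in nanocores ("n") and memory in kibibytes ("Ki").
    cpu := resource.MustParse("1013446562n")
    mem := resource.MustParse("3707128Ki")

    // MilliValue rounds up, so ~1.013 cores prints as 1014m.
    fmt.Printf("%dm\n", cpu.MilliValue())

    // Integer-divide bytes by 1024*1024 to express the value in Mi (~3620Mi here).
    fmt.Printf("%dMi\n", mem.Value()/(1024*1024))
}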

Calling bottle :unneeded is deprecated! There is no replacement.

Warning: Calling bottle :unneeded is deprecated! There is no replacement.
Please report this issue to the robscott/tap tap (not Homebrew/brew or Homebrew/core):
  /usr/local/Homebrew/Library/Taps/robscott/homebrew-tap/Formula/kube-capacity.rb:10

I see this any time I do a brew update or install now.

Cpu request limit is wrong

kube-capacity -n ocsl-dev -p

configuration-server-92-42zwb 250m (1%) 1000m (6%) 750Mi (0%) 1024Mi (0%)

But the actual request is 200m. There are init containers, but none of them request 50m, so I don't understand where it's getting the 250m from.

resources:
  limits:
    cpu: "1"
    memory: 1Gi
  requests:
    cpu: 200m
    memory: 750Mi
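For what it's worth, Kubernetes computes a pod's effective request as the maximum of the largest init-container request and the sum of the app-container requests, so an init container requesting 250m would explain this: max(250m, 200m) = 250m. A sketch of this rule appears near the end of this document.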

kube-capacity: command not found

Hi, I set up this plugin following the instructions. When I issue

kubectl resource-capacity

I get an output as follows

NODE               CPU REQUESTS   CPU LIMITS   MEMORY REQUESTS   MEMORY LIMITS
*                  1700m (34%)    0m (0%)      330Mi (1%)        340Mi (1%)
kubernetes         250m (12%)     0m (0%)      0Mi (0%)          0Mi (0%)
kubernetes-node2   350m (35%)     0m (0%)      90Mi (1%)         0Mi (0%)
kubernetes2        1100m (55%)    0m (0%)      240Mi (4%)        340Mi (5%)

However, whatever commands I use, I get errors as follows:

kubectl resource-capacity ––sort cpu.limit
Error: unknown command "––sort" for "kube-capacity"

kubectl resource-capacity ––sort cpu.util --util
Error: unknown command "––sort" for "kube-capacity"

kube-capacity
kube-capacity: command not found

Is there any config to add or update following the setup? Thanks.

kube-capacity doesn't honor $KUBECONFIG with multiple config files

When $KUBECONFIG holds multiple paths to multiple config files, colon-separated, kube-capacity doesn't seem to expect this:

➞  echo $KUBECONFIG
/home/serge/.kube/config.d/civo-civo-k3s-t0-kubeconfig:/home/serge/.kube/config.d/kube_config_rk0.yml

➞  kubectx
civo-k3s-t0
rk0

➞  kube-capacity
/home/serge/.kube/config.d/civo-civo-k3s-t0-kubeconfig:/home/serge/.kube/config.d/kube_config_rk0.yml does not exist - please make sure you have a kubeconfig configured.
panic: stat /home/serge/.kube/config.d/civo-civo-k3s-t0-kubeconfig:/home/serge/.kube/config.d/kube_config_rk0.yml: no such file or directory
                                                                                                                                                                                 
goroutine 1 [running]:                                                                                                                                                           
github.com/robscott/kube-capacity/pkg/kube.getKubeConfig(0x203000, 0x2, 0x0)                                                                                                     
  /Users/rob/go/src/github.com/robscott/kube-capacity/pkg/kube/clientset.go:65 +0x2ea                                                                                            
github.com/robscott/kube-capacity/pkg/kube.NewClientSet(0xc000423ad8, 0x40b79f, 0xc0002a2e20)                                                                                    
  /Users/rob/go/src/github.com/robscott/kube-capacity/pkg/kube/clientset.go:33 +0x22                                                                                             
github.com/robscott/kube-capacity/pkg/capacity.getPodsAndNodes(0x0, 0x12d538a)                                                                                                   
  /Users/rob/go/src/github.com/robscott/kube-capacity/pkg/capacity/list.go:40 +0x34                                                                                              
github.com/robscott/kube-capacity/pkg/capacity.List(0x1f3f558, 0x0, 0x0, 0x0)                                                                                                    
  /Users/rob/go/src/github.com/robscott/kube-capacity/pkg/capacity/list.go:29 +0x37                                                                                              
github.com/robscott/kube-capacity/pkg/cmd.glob..func1(0x1f13680, 0x1f3f558, 0x0, 0x0)                                                                                            
  /Users/rob/go/src/github.com/robscott/kube-capacity/pkg/cmd/root.go:38 +0xe1                                                                                                   
github.com/robscott/kube-capacity/vendor/github.com/spf13/cobra.(*Command).execute(0x1f13680, 0xc0000381b0, 0x0, 0x0, 0x1f13680, 0xc0000381b0)                                   
  /Users/rob/go/src/github.com/robscott/kube-capacity/vendor/github.com/spf13/cobra/command.go:766 +0x2cc                                                                        
github.com/robscott/kube-capacity/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0x1f13680, 0x1f138e0, 0xc000423f50, 0x105aeae)                                               
  /Users/rob/go/src/github.com/robscott/kube-capacity/vendor/github.com/spf13/cobra/command.go:852 +0x2fd                                                                        
github.com/robscott/kube-capacity/vendor/github.com/spf13/cobra.(*Command).Execute(0x1f13680, 0x4056b0, 0xc00009c058)                                                            
  /Users/rob/go/src/github.com/robscott/kube-capacity/vendor/github.com/spf13/cobra/command.go:800 +0x2b                                                                         
github.com/robscott/kube-capacity/pkg/cmd.Execute()                                                                                                                              
  /Users/rob/go/src/github.com/robscott/kube-capacity/pkg/cmd/root.go:49 +0x2d                                                                                                   
main.main()                                                                                                                                                                      
  /Users/rob/go/src/github.com/robscott/kube-capacity/main.go:22 +0x20                                                                                                           
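A possible workaround while this is unsupported (this relies on standard kubectl behavior, not on anything kube-capacity provides) is to flatten the merged config into a single file and point kube-capacity at it:

kubectl config view --flatten > /tmp/merged-kubeconfig
kube-capacity --kubeconfig /tmp/merged-kubeconfig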

1Mi/1Mi in the cpu column

I get this result in some cases, with 1Mi in the CPU column:

x.x.x.x    rio                     segment-recorder-2-579c6b8cdf-5mp8m                               55980m/56000m       53000m/56000m        289375Mi/289575Mi                   284455Mi/289575Mi                   
x.x.x.x    rio                     segment-recorder-6979fc899c-869mm                                 55980m/56000m       53000m/56000m        289375Mi/289575Mi                   284455Mi/289575Mi                   
x.x.x.x    rio                     stream-coordinator-9                                              55980m/56000m       52000m/56000m        289375Mi/289575Mi                   281383Mi/289575Mi                   
x.x.x.x    kube-system             sumatra-daemonset-855gg                                           1Mi/1Mi             1Mi/1Mi              289575Mi/289575Mi                   289575Mi/289575Mi          

Problems when using Active Directory Authentication

Hi, great tool, but I have problems using it with Active Directory authentication.

I get:
$ kube-capacity --pods
Error connecting to Kubernetes: No Auth Provider found for name "service"

I use this config to connect:

user:
  auth-provider:
    config:
      access-token:
      apiserver-id:
      client-id:

--namespace and --namespace-labels do not filter --util

I was expecting to see cpu/memory util only for pods in the given namespace, but it seems to include all pods, e.g.:

$ kube-capacity -u -n my-namespace
NODE      CPU REQUESTS   CPU LIMITS    CPU UTIL      MEMORY REQUESTS   MEMORY LIMITS   MEMORY UTIL
*         8250m (1%)     20000m (4%)   26768m (5%)   36508Mi (1%)      36508Mi (1%)    434069Mi (17%)
node-1    0Mi (0%)       0Mi (0%)      1124m (32%)   0Mi (0%)          0Mi (0%)        6105Mi (40%)
node-2    0Mi (0%)       0Mi (0%)      1957m (55%)   0Mi (0%)          0Mi (0%)        9312Mi (61%)
node-3    0Mi (0%)       0Mi (0%)      843m (24%)    0Mi (0%)          0Mi (0%)        4347Mi (28%)
node-4    0Mi (0%)       0Mi (0%)      910m (3%)     0Mi (0%)          0Mi (0%)        21097Mi (15%)
node-5    0Mi (0%)       0Mi (0%)      3766m (50%)   0Mi (0%)          0Mi (0%)        8770Mi (27%)

Similar output when filtering with --namespace-labels.

WDYT?

CPU util reporting garbage

Hello

I've just upgraded from version 0.4.0 to 0.6.0 and I'm noticing garbage when requesting the CPU utilization:

$ kubectl resource-capacity -u
NODE                                          CPU REQUESTS    CPU LIMITS       CPU UTIL              MEMORY REQUESTS   MEMORY LIMITS    MEMORY UTIL
*                                             147620m (80%)   474660m (260%)   77370554329n (42%)    233900Mi (33%)    650466Mi (93%)   135871836Ki (19%)

EDIT: this issue appeared in v0.5.0

Add option to select only some columns

Hello @robscott

It would be nice to be able to select only a particular column, like MEM UTIL, as a filtering option.

Showing all the columns sometimes makes the output harder to read, as the screen cannot fit them all without wrapping onto new lines, which hurts readability.

Cheers.

[Bug] 0% usage will cause wrong output

Here kube-capacity sets the resource units:

cm.nodeMetrics[node.Name] = &nodeMetric{
    name: node.Name,
    cpu: &resourceMetric{
        resourceType: "cpu",
        allocatable:  node.Status.Allocatable["cpu"],
    },
    memory: &resourceMetric{
        resourceType: "memory",
        allocatable:  node.Status.Allocatable["memory"],
    },

And if the cpu or memory value is a blank string (0% usage, for example), this code will make the wrong judgement (every judgement based on Format is affected; here is just one example):

if actual.Format == resource.DecimalSI {
    actualStr = fmt.Sprintf("%dm", allocatable.MilliValue()-actual.MilliValue())
    allocatableStr = fmt.Sprintf("%dm", allocatable.MilliValue())
} else {
    actualStr = fmt.Sprintf("%dMi", formatToMegiBytes(allocatable)-formatToMegiBytes(actual))
    allocatableStr = fmt.Sprintf("%dMi", formatToMegiBytes(allocatable))
}

Because the value of cpu or memory is a blank string, its format won't be resource.DecimalSI, so it will always take the else code block. And of course its calculation will be wrong too, since the else code block calls formatToMegiBytes and the unit is not correct:

func formatToMegiBytes(actual resource.Quantity) int64 {
    value := actual.Value() / Mebibyte
    if actual.Value()%Mebibyte != 0 {
        value++
    }
    return value
}

The wrong output shows up as the red values in the issue's screenshot (not reproduced here).

I will try to fix it and provide a PR; my skills are not great, but I will do my best.

Can't install plugin on new macs with Apple Silicon / M1 processors

kubectl krew install resource-capacity
Updated the local copy of plugin index.
Updated the local copy of plugin index "kvaps".
Installing plugin: resource-capacity
W0829 14:00:23.068357   12540 install.go:164] failed to install plugin "resource-capacity": plugin "resource-capacity" does not offer installation for this platform
F0829 14:00:23.068404   12540 root.go:79] failed to install some plugins: [resource-capacity]: plugin "resource-capacity" does not offer installation for this platform

Is it possible to add darwin/arm64 (I think that's right for go?) to the builds to make it work?

Add new Columns to show Network Related Information

It seems we currently don't show network-related information (like total incoming/outgoing bandwidth) in the CLI. We could add these as separate columns. Please let me know what you think.

Apple M1 support

Trying to install with

kubectl krew install resource-capacity

I get this error:

Updated the local copy of plugin index.
Installing plugin: resource-capacity
W0611 20:42:27.316608   79007 install.go:164] failed to install plugin "resource-capacity": plugin "resource-capacity" does not offer installation for this platform
F0611 20:42:27.316669   79007 root.go:79] failed to install some plugins: [resource-capacity]: plugin "resource-capacity" does not offer installation for this platform

Newer versions of Go support the Apple M1 architecture; any hope of getting this compiled for that too?

incorrect POD COUNT

#49 was fixed by #60, but it shows the wrong number of pods, larger than the limit, because it counts all kinds of pods: Completed, Error, etc.
It should show the same number as the "Non-terminated Pods" field of kubectl describe node; this field shows whether it is possible to schedule a pod on the node or not.


Windows host support

What's the limiting factor for resource-capacity to be supported in Krew for Windows?

Percentages

This is probably a dumb question, but I didn't see it in the help or README. What do the percentages mean for req/limit/usage?

Difference between kubectl top and kube-capacity Plug-In --util values

When using the following command:

~$ kubectl top nodes

NAME                                          CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
ip-10-1-0-100.eu-central-1.compute.internal   589m         7%     12429Mi         81%     

there is a difference in the utilization:

~$ kubectl resource-capacity --util --sort cpu.util | sed -r '/^\s*$/d'
NODE                                          CPU REQUESTS    CPU LIMITS       CPU UTIL       MEMORY REQUESTS   MEMORY LIMITS     MEMORY UTIL
ip-10-1-0-100.eu-central-1.compute.internal   6245m (78%)     9400m (117%)     346m (4%)      13177Mi (86%)     18224Mi (119%)    10593Mi (69%)

Why is that the case? Is this a bug, or do we need to take anything else into consideration?
Thanks.

Add the ability to print some label as a column

Hi!
I'd like to have the possibility to print out some label value (like beta.kubernetes.io/instance-type) along the rest of the node data, like this:

$ kubectl resource-capacity --show-label "beta.kubernetes.io/instance-type"
NODE                                            CPU REQUESTS   CPU LIMITS       MEMORY REQUESTS   MEMORY LIMITS    beta.kubernetes.io/instance-type
*                                               29695m (62%)   116800m (245%)   126647Mi (40%)    379666Mi (121%)  *
ip-10-210-1-115.us-east-1.compute.internal   5075m (31%)    18300m (115%)    25870Mi (41%)     51144Mi (81%) t3.medium
ip-10-210-1-82.us-east-1.compute.internal    6035m (76%)    26900m (340%)    26672Mi (42%)     93640Mi (150%) t3.medium
ip-10-210-3-174.us-east-1.compute.internal   5435m (68%)    19000m (240%)    20781Mi (33%)     71265Mi (114%) t3.small
ip-10-210-3-89.us-east-1.compute.internal    5975m (75%)    26300m (332%)    26432Mi (42%)     92104Mi (147%) t3.large
ip-10-210-8-230.us-east-1.compute.internal   7175m (90%)    26300m (332%)    26894Mi (43%)     71513Mi (114%) g4dn.xlarge

What do you think about this (sorry about the formatting)? Is this feasible?

Thank you!

Add support for node taints

It would be nice if we could add support for node taints.
e.g., if a node has been cordoned and is set to NoSchedule/NoExecute, it should be possible to exclude it.

e.g.

kubectl get nodes
NAME              STATUS                     ROLES    AGE    VERSION
example-node-1    Ready,SchedulingDisabled   <none>   425d   v1.24.6
example-node-2    Ready                      <none>   227d   v1.24.6

kube-capacity

NODE              CPU REQUESTS    CPU LIMITS    MEMORY REQUESTS    MEMORY LIMITS
*                 560m (28%)      130m (7%)     572Mi (9%)         770Mi (13%)
example-node-1    220m (22%)      10m (1%)      192Mi (6%)         360Mi (12%)
example-node-2    340m (34%)      120m (12%)    380Mi (13%)        410Mi (14%)

Now if we exclude cordoned nodes:

kube-capacity --exclude-noschedule-nodes

NODE              CPU REQUESTS    CPU LIMITS    MEMORY REQUESTS    MEMORY LIMITS
*                 340m (34%)      132m (12%)    380Mi (13%)        410Mi (13%)
example-node-2    340m (34%)      120m (12%)    380Mi (13%)        10Mi (14%)

We can see we have less capacity available than we thought.

Allow Username and Group impersonating.

Most tools allow username and group impersonation, like kubectl does.

      --as string                      Username to impersonate for the operation
      --as-group stringArray           Group to impersonate for the operation

Would be cool.
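For what it's worth, the flags list earlier in this document now includes --as and --as-group, so usage along these lines should be possible (jane and admins are placeholder values):

kube-capacity --as jane --as-group admins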

Ability to display non-allocated resources

I would like to be able to display non-allocated (non-requested) resources (allocatable - allocated).
This is useful to better understand why the scheduler is not able to schedule a pod due to a lack of allocatable resources.
Great tool, thank you!

number of pod per node

Great plugin, very useful. It's just missing one thing: is it possible to add the number of pods on each node and show the node's max pod limit? For example, a column:
For example column

 pods
99/110

Segmentation fault when using -u

$ kube-capacity -pu
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x10 pc=0x10cd9f2]

goroutine 1 [running]:
github.com/robscott/kube-capacity/pkg/capacity.(*clusterMetric).addPodMetric(0xc000290a38, 0xc0002914f0, 0x0, 0x0, 0x0, 0x0, 0xc0006bda80, 0x1b, 0x0, 0x0, ...)
        /Users/rob/go/src/github.com/robscott/kube-capacity/pkg/capacity/resources.go:175 +0x932
github.com/robscott/kube-capacity/pkg/capacity.buildClusterMetric(0xc000136930, 0xc000f748c0, 0xc000139d50, 0x0, 0x0, 0x0)
        /Users/rob/go/src/github.com/robscott/kube-capacity/pkg/capacity/resources.go:105 +0x82e
github.com/robscott/kube-capacity/pkg/capacity.FetchAndPrint(0x1010100, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x13670c5, ...)
        /Users/rob/go/src/github.com/robscott/kube-capacity/pkg/capacity/capacity.go:50 +0x216
github.com/robscott/kube-capacity/pkg/cmd.glob..func1(0x20c18a0, 0xc000370840, 0x0, 0x1)
        /Users/rob/go/src/github.com/robscott/kube-capacity/pkg/cmd/root.go:49 +0x21d
github.com/robscott/kube-capacity/vendor/github.com/spf13/cobra.(*Command).execute(0x20c18a0, 0xc0000b2030, 0x1, 0x1, 0x20c18a0, 0xc0000b2030)
        /Users/rob/go/src/github.com/robscott/kube-capacity/vendor/github.com/spf13/cobra/command.go:766 +0x2ae
github.com/robscott/kube-capacity/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0x20c18a0, 0xc000427f68, 0x10e368e, 0x20c18a0)
        /Users/rob/go/src/github.com/robscott/kube-capacity/vendor/github.com/spf13/cobra/command.go:852 +0x2ec
github.com/robscott/kube-capacity/vendor/github.com/spf13/cobra.(*Command).Execute(...)
        /Users/rob/go/src/github.com/robscott/kube-capacity/vendor/github.com/spf13/cobra/command.go:800
github.com/robscott/kube-capacity/pkg/cmd.Execute()
        /Users/rob/go/src/github.com/robscott/kube-capacity/pkg/cmd/root.go:79 +0x32
main.main()
        /Users/rob/go/src/github.com/robscott/kube-capacity/main.go:22 +0x20

$ kube-capacity -u
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x10 pc=0x10cd9f2]

goroutine 1 [running]:
github.com/robscott/kube-capacity/pkg/capacity.(*clusterMetric).addPodMetric(0xc0004f0a38, 0xc0004f14f0, 0x0, 0x0, 0x0, 0x0, 0xc00061b300, 0x1b, 0x0, 0x0, ...)
        /Users/rob/go/src/github.com/robscott/kube-capacity/pkg/capacity/resources.go:175 +0x932
github.com/robscott/kube-capacity/pkg/capacity.buildClusterMetric(0xc000102930, 0xc000933c70, 0xc00011ecb0, 0x0, 0x0, 0x0)
        /Users/rob/go/src/github.com/robscott/kube-capacity/pkg/capacity/resources.go:105 +0x82e
github.com/robscott/kube-capacity/pkg/capacity.FetchAndPrint(0x1010000, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x13670c5, ...)
        /Users/rob/go/src/github.com/robscott/kube-capacity/pkg/capacity/capacity.go:50 +0x216
github.com/robscott/kube-capacity/pkg/cmd.glob..func1(0x20c18a0, 0xc000344860, 0x0, 0x1)
        /Users/rob/go/src/github.com/robscott/kube-capacity/pkg/cmd/root.go:49 +0x21d
github.com/robscott/kube-capacity/vendor/github.com/spf13/cobra.(*Command).execute(0x20c18a0, 0xc00000c0b0, 0x1, 0x1, 0x20c18a0, 0xc00000c0b0)
        /Users/rob/go/src/github.com/robscott/kube-capacity/vendor/github.com/spf13/cobra/command.go:766 +0x2ae
github.com/robscott/kube-capacity/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0x20c18a0, 0xc0003fbf68, 0x10e368e, 0x20c18a0)
        /Users/rob/go/src/github.com/robscott/kube-capacity/vendor/github.com/spf13/cobra/command.go:852 +0x2ec
github.com/robscott/kube-capacity/vendor/github.com/spf13/cobra.(*Command).Execute(...)
        /Users/rob/go/src/github.com/robscott/kube-capacity/vendor/github.com/spf13/cobra/command.go:800
github.com/robscott/kube-capacity/pkg/cmd.Execute()
        /Users/rob/go/src/github.com/robscott/kube-capacity/pkg/cmd/root.go:79 +0x32
main.main()
        /Users/rob/go/src/github.com/robscott/kube-capacity/main.go:22 +0x20
$ kube-capacity version
kube-capacity version 0.3.1

Metrics-server is working for me with kubectl:

$ kubectl top node 
NAME    CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
h1355   1693m        21%    5304Mi          67%       
h1879   2835m        35%    5200Mi          66%       
h230    3412m        42%    4765Mi          60%       
h303    1792m        22%    5162Mi          65%       
h504    2599m        32%    4739Mi          60%       
h5345   79m          0%     11161Mi         46%       
h71     1598m        19%    11731Mi         73%       
h783    1161m        14%    4008Mi          50%       
h834    1263m        15%    5533Mi          70%       
h911    1763m        22%    5406Mi          68%       
s234    839m         10%    3980Mi          50%       
s237    975m         12%    5451Mi          69%       
s238    399m         4%     2985Mi          37%       
s239    526m         6%     3192Mi          40%       

GET https://api.example.com:8443/apis/metrics.k8s.io/v1beta1/nodes 200 OK in 98 milliseconds

$ kubectl version --short
Client Version: v1.15.0-alpha.3
Server Version: v1.14.1

GoReleaser did not run for 0.6.0?

I see that the Homebrew tap has not been updated for 0.6.0; this should happen automatically, right? The following worked for me:

  if OS.mac?
    url "https://github.com/robscott/kube-capacity/releases/download/v0.6.0/kube-capacity_0.6.0_Darwin_x86_64.tar.gz"
    sha256 "db9161dc99fd217e2f2d4b9c7423d28150a9f47ddce0f8ce8ba8d0c36de06ec3"
  end
  if OS.linux? && Hardware::CPU.intel?
    url "https://github.com/robscott/kube-capacity/releases/download/v0.6.0/kube-capacity_0.6.0_Linux_x86_64.tar.gz"
    sha256 "250ae3b2e179c569cdb10b875ed49863d678297d873bfd3d3520c2f8a3f3ebcc"
  end

[feature request] Ability to sort based on percent

Great tool!

Being able to sort by a field is great, but when we have mixed node sizes, the sort ends up a bit weird.

So if we could sort by the percentage of the metric, that would be really nice.

This is helpful because I want to see which nodes are close to being full, but because it sorts by absolute value, this becomes harder.

See below what I mean; the requests are "out of order" based on percentage!

➜ k resource-capacity --util --sort cpu.request
NODE                          CPU REQUESTS   CPU LIMITS      CPU UTIL       MEMORY REQUESTS   MEMORY LIMITS    MEMORY UTIL
*                             93424m (38%)   177600m (72%)   25953m (10%)   129164Mi (25%)    290377Mi (58%)   172597Mi (34%)
xxxxxxxxxxxxxxxxxxxxxxxxxxxx   4555m (57%)    7000m (88%)     171m (2%)      8668Mi (59%)      11392Mi (77%)    2579Mi (17%)
xxxxxxxxxxxxxxxxxxxxxxxxxxxx   4555m (57%)    7100m (89%)     242m (3%)      9052Mi (61%)      12160Mi (83%)    2811Mi (19%)
xxxxxxxxxxxxxxxxxxxxxxxxxxxx   3905m (49%)    4800m (60%)     1012m (12%)    4700Mi (31%)      11264Mi (76%)    6233Mi (42%)
xxxxxxxxxxxxxxxxxxxxxxxxxxxx   3795m (96%)    9800m (250%)    874m (22%)     6436Mi (44%)      13764Mi (94%)    8882Mi (60%)
xxxxxxxxxxxxxxxxxxxxxxxxxxxx   3545m (90%)    7900m (201%)    795m (20%)     5726Mi (39%)      15922Mi (108%)   7536Mi (51%)
xxxxxxxxxxxxxxxxxxxxxxxxxxxx   3513m (89%)    8050m (205%)    3386m (86%)    7492Mi (51%)      11180Mi (76%)    10849Mi (74%)
xxxxxxxxxxxxxxxxxxxxxxxxxxxx   3165m (40%)    3800m (48%)     197m (2%)      2688Mi (18%)      6944Mi (46%)     3320Mi (22%)
xxxxxxxxxxxxxxxxxxxxxxxxxxxx   3165m (40%)    3800m (48%)     372m (4%)      2688Mi (18%)      6944Mi (46%)     4207Mi (28%)
xxxxxxxxxxxxxxxxxxxxxxxxxxxx   3075m (38%)    3700m (46%)     351m (4%)      2752Mi (18%)      6944Mi (46%)     4489Mi (30%)
xxxxxxxxxxxxxxxxxxxxxxxxxxxx   3033m (77%)    6450m (164%)    3081m (78%)    5317Mi (36%)      7785Mi (53%)     10692Mi (73%)
....

When I have some time, I might look into implementing it myself, but I am not sure I will have enough time to understand everything!
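A minimal sketch of what percent-based sorting could look like, assuming a simplified row type; the struct, field names, and numbers are illustrative (taken loosely from the table above), not kube-capacity's actual internals:

package main

import (
	"fmt"
	"sort"
)

// nodeRow is a hypothetical stand-in for a node's metrics; only the
// fields needed for percent-based sorting are shown.
type nodeRow struct {
	name        string
	cpuRequest  int64 // millicores requested on the node
	allocatable int64 // millicores allocatable on the node
}

// requestPercent is the ratio the table prints in parentheses.
func (n nodeRow) requestPercent() float64 {
	if n.allocatable == 0 {
		return 0
	}
	return float64(n.cpuRequest) / float64(n.allocatable)
}

func main() {
	rows := []nodeRow{
		{"node-a", 4555, 7910}, // 58%
		{"node-b", 3795, 3940}, // 96%
		{"node-c", 3165, 7910}, // 40%
	}
	// Sort descending by percent rather than by the absolute request,
	// so nearly-full nodes bubble to the top regardless of node size.
	sort.Slice(rows, func(i, j int) bool {
		return rows[i].requestPercent() > rows[j].requestPercent()
	})
	for _, r := range rows {
		fmt.Printf("%s\t%dm (%.0f%%)\n", r.name, r.cpuRequest, 100*r.requestPercent())
	}
}

Sorting on the ratio rather than the raw quantity is what lets a small node at 96% outrank a larger node with a bigger absolute request.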

Feature Request: Sorting

One ask here (which I'm happy to help with) is a sort feature. Right now I'm using the sort command in Linux and that works OK, but it would be nice to sort things by default or with a flag. Bubbling a signal of "oversubscription" to the top would be nice. In terms of the output with --util, the parentheses cause sort-by-column some issues, since sort wants to include the parentheses in its evaluation. As a minimal change, it would be helpful to not wrap the percents in parentheses.

pull vpa recommendations (?)

Just an idea, but vpa-recommender could be used to generate some extra info for pods (if the user is running it in their cluster)... the output might look something like this:

NODE                                          NAMESPACE       POD                                                      CPU REQUESTS    CPU LIMITS     CPU UTIL     CPU RECOMMENDATION      MEMORY REQUESTS   MEMORY LIMITS    MEMORY UTIL    MEMORY RECOMMENDATION
...
ip-12-34-567-89.eu-west-2.compute.internal   my-namespace     my-pod-7d95ccc554-2ltsq                                  600m (3%)       0m (0%)        4m (0%)      123m                    1024Mi (1%)       1024Mi (1%)      431Mi (0%)     789Mi
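If this were implemented, the recommendations could presumably be read from the VerticalPodAutoscaler objects' status using the VPA clientset from k8s.io/autoscaler. A rough sketch, assuming the VPA CRDs are installed in the cluster (error handling simplified):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"

	vpaclient "k8s.io/autoscaler/vertical-pod-autoscaler/pkg/client/clientset/versioned"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := vpaclient.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// List VPA objects in all namespaces; each may carry a
	// recommendation produced by vpa-recommender.
	vpas, err := client.AutoscalingV1().VerticalPodAutoscalers("").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, vpa := range vpas.Items {
		if vpa.Status.Recommendation == nil {
			continue // the recommender has not produced anything yet
		}
		for _, rec := range vpa.Status.Recommendation.ContainerRecommendations {
			cpu := rec.Target[corev1.ResourceCPU]
			mem := rec.Target[corev1.ResourceMemory]
			fmt.Printf("%s/%s\t%s\tcpu=%s\tmem=%s\n",
				vpa.Namespace, vpa.Name, rec.ContainerName, cpu.String(), mem.String())
		}
	}
}

Note that a VPA targets a workload via spec.targetRef (e.g. a Deployment) rather than individual pods, so mapping recommendations onto kube-capacity's per-pod rows would need an ownership lookup step.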

CPU limit and request totals not always matching the sum of container requests/limits

First of all - great tool - simple to use and powerful.
We noticed that in some cases the totals that the tool rolls up at the pod level do not match the total cpu limits/requests of the actual containers in the pods. This seems to happen when the pod has init containers that specify cpu requests/limits. For example:

        {
          "name": "zen-core-api-6bb6b6d64c-p624c",
          "namespace": "cp4ba",
          "cpu": {
            "requests": "100m",
            "requestsPercent": "0%",
            "limits": "2",
            "limitsPercent": "12%"
          },
          "memory": {
            "requests": "256Mi",
            "requestsPercent": "0%",
            "limits": "2Gi",
            "limitsPercent": "3%"
          },
          "containers": [
            {
              "name": "zen-core-api-container",
              "cpu": {
                "requests": "100m",
                "requestsPercent": "0%",
                "limits": "400m",
                "limitsPercent": "2%"
              },
              "memory": {
                "requests": "256Mi",
                "requestsPercent": "0%",
                "limits": "1Gi",
                "limitsPercent": "1%"
              }
            }
          ]
        }

In this case, there's only one active container in the pod and its cpu.limits are 400m, but the total reported at the pod level says cpu.limits is 2. We looked at the pod definition on the actual cluster and saw that it has an init container whose cpu.limits are in fact 2.
At this point we are left wondering whether this is expected behavior, and if it is, whether the tool picks the greater of the two values or just the first one for the pod.
Thanks.
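For context, Kubernetes computes a pod's effective request/limit as the maximum of (a) the sum across regular containers and (b) the largest value among init containers, since init containers run one at a time and finish before the app containers start. Reporting cpu.limits of 2 (the init container's limit, which exceeds the 400m sum) is consistent with that rule. A minimal sketch of the computation; the helper is illustrative, not kube-capacity's actual code:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// effectiveLimit computes a pod's effective limit for one resource:
// max(sum over app containers, max over init containers).
func effectiveLimit(pod *corev1.Pod, name corev1.ResourceName) resource.Quantity {
	total := resource.Quantity{}
	for _, c := range pod.Spec.Containers {
		if q, ok := c.Resources.Limits[name]; ok {
			total.Add(q)
		}
	}
	for _, c := range pod.Spec.InitContainers {
		if q, ok := c.Resources.Limits[name]; ok && q.Cmp(total) > 0 {
			total = q
		}
	}
	return total
}

func main() {
	// Mirrors the zen-core-api pod above: one app container at 400m,
	// one init container at 2 cores.
	pod := &corev1.Pod{Spec: corev1.PodSpec{
		Containers: []corev1.Container{{
			Resources: corev1.ResourceRequirements{Limits: corev1.ResourceList{
				corev1.ResourceCPU: resource.MustParse("400m"),
			}},
		}},
		InitContainers: []corev1.Container{{
			Resources: corev1.ResourceRequirements{Limits: corev1.ResourceList{
				corev1.ResourceCPU: resource.MustParse("2"),
			}},
		}},
	}}
	limit := effectiveLimit(pod, corev1.ResourceCPU)
	fmt.Println(limit.String()) // prints "2", matching the pod-level total
}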

incorrect data

In some cases, memory values for a node will not include the 'Mi' suffix:

10.145.197.168   42125m (75%)     148700m (265%)     221838Mi (82%)          416923Mi (154%)
10.145.197.169   45325m (80%)     121200m (216%)     62346Mi (23%)           180263Mi (66%)
10.145.197.170   14425m (25%)     37700m (67%)       45346Mi (16%)           100345Mi (37%)
162.150.14.214   13790m (24%)     45700m (81%)       39411368960000m (29%)   106336625408000m (78%)
162.150.14.215   13790m (24%)     39700m (70%)       38874498048000m (28%)   90767368960000m (67%)
162.150.14.216   16790m (29%)     42700m (76%)       46390690816000m (34%)   98283561728000m (72%)
162.150.14.217   12490m (22%)     39200m (70%)       38606062592000m (28%)   91841110784000m (68%)

In these cases, the report is wrong. The logic needs to change here:
https://github.com/robscott/kube-capacity/blob/master/pkg/capacity/resources.go#L356

For example, use more specific request and limit string helpers so the code does not fall back to the wrong unit. One option: add requestStringM() and limitStringM() variants that only convert memory units, to avoid the problem:

func (tp *tablePrinter) printClusterLine() {
	tp.printLine(&tableLine{
		node:           "*",
		namespace:      "*",
		pod:            "*",
		container:      "*",
		cpuRequests:    tp.cm.cpu.requestString(tp.availableFormat),
		cpuLimits:      tp.cm.cpu.limitString(tp.availableFormat),
		cpuUtil:        tp.cm.cpu.utilString(tp.availableFormat),
		memoryRequests: tp.cm.memory.requestStringM(tp.availableFormat),
		memoryLimits:   tp.cm.memory.limitStringM(tp.availableFormat),
		memoryUtil:     tp.cm.memory.utilString(tp.availableFormat),
		podCount:       tp.cm.podCount.podCountString(),
	})
}

func (tp *tablePrinter) printNodeLine(nodeName string, nm *nodeMetric) {
	tp.printLine(&tableLine{
		node:           nodeName,
		namespace:      "*",
		pod:            "*",
		container:      "*",
		cpuRequests:    nm.cpu.requestString(tp.availableFormat),
		cpuLimits:      nm.cpu.limitString(tp.availableFormat),
		cpuUtil:        nm.cpu.utilString(tp.availableFormat),
		memoryRequests: nm.memory.requestStringM(tp.availableFormat),
		memoryLimits:   nm.memory.limitStringM(tp.availableFormat),
		memoryUtil:     nm.memory.utilString(tp.availableFormat),
		podCount:       nm.podCount.podCountString(),
	})
}
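A minimal sketch of what such a memory-only formatter might do, using resource.Quantity from k8s.io/apimachinery; the function names follow the proposal above, but the implementation here is only an assumption:

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

// formatMemory renders a quantity in Mi regardless of how the
// underlying resource.Quantity happens to be scaled internally, so
// milli-unit values like "39411368960000m" can no longer leak out.
func formatMemory(q resource.Quantity) string {
	return fmt.Sprintf("%dMi", q.Value()/(1024*1024))
}

// memoryRequestString is a stand-in for the proposed requestStringM:
// it keeps the current "value (percent)" shape, but the value always
// goes through the memory-specific formatter.
func memoryRequestString(request, allocatable resource.Quantity) string {
	var pct int64
	if allocatable.Value() > 0 {
		pct = 100 * request.Value() / allocatable.Value()
	}
	return fmt.Sprintf("%s (%d%%)", formatMemory(request), pct)
}

func main() {
	// One of the malformed values from the report above.
	req := resource.MustParse("39411368960000m")
	alloc := resource.MustParse("128Gi") // allocatable is assumed here
	fmt.Println(memoryRequestString(req, alloc))
	// Output: 37585Mi (28%)
}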
