kubectl-blame's Introduction

kubectl-blame: git-like blame for kubectl

Annotate each line in the given resource's YAML with information from the managedFields to show who last modified the field.

As long as the field .metadata.managedFields of the resource is populated properly, this command can display the manager of each field.
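The managedFields entries record field ownership in a compact path notation. A minimal annotated sketch, with illustrative field names drawn from the examples later in this document:

```yaml
managedFields:
- apiVersion: v1
  fieldsType: FieldsV1
  fieldsV1:
    f:metadata:        # "f:" prefixes a named field
      f:labels:
        .: {}          # "." refers to the parent field itself
        f:app: {}
    f:spec:
      f:ports:
        # "k:" keys a list item by the fields shown in the JSON object
        k:{"port":6443,"protocol":"TCP"}:
          f:port: {}
  manager: kubectl-client-side-apply
  operation: Update
  time: "2021-10-21T16:13:08Z"
```

kubectl-blame walks these paths to attribute each line of the rendered YAML to the manager that last set it.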

Installing

Distribution                          Command / Link
Krew                                  kubectl krew install blame
Pre-built binaries for macOS, Linux   GitHub releases

Usage

# Blame pod 'foo' in default namespace
kubectl blame pods foo

# Blame deployment 'foo' and 'bar' in 'ns1' namespace
kubectl blame -n ns1 deploy foo bar

# Blame deployment 'bar' in 'ns1' namespace and hide the update time
kubectl blame -n ns1 --time none deploy bar

# Blame resources in file 'pod.yaml' (will access remote server)
kubectl blame -f pod.yaml

# Blame deployment saved in local file 'deployment.yaml' (will NOT access remote server)
kubectl blame -i deployment.yaml
# Or
cat deployment.yaml | kubectl blame -i -

Flags

flag             default    description
--time           relative   Time format. One of: full, relative, none.
--filename, -f              Filename identifying the resource to get from a server.
--input, -i                 Read the object from the given file. When the file is -, read standard input.

kubectl-blame's People

Contributors

alvaroaleman, dependabot[bot], knight42, sylr


kubectl-blame's Issues

Cryptic error trying to blame a service

Hey I am trying to use kubectl-blame for a service and I am getting the following error:

$ k blame service kube-apiserver
Error: unknown element: fieldpath.PathElement{FieldName:(*string)(nil), Key:(*value.FieldList)(nil), Value:(*value.Value)(0xc0000f09a0), Index:(*int)(nil)}

Any idea what the issue could be?

The service looks as follows:

$ k get service kube-apiserver -ojson --show-managed-fields=true
{
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {
        "creationTimestamp": "2021-10-21T16:13:06Z",
        "finalizers": [
            "service.kubernetes.io/load-balancer-cleanup"
        ],
        "managedFields": [
            {
                "apiVersion": "v1",
                "fieldsType": "FieldsV1",
                "fieldsV1": {
                    "f:metadata": {
                        "f:finalizers": {
                            ".": {},
                            "v:\"service.kubernetes.io/load-balancer-cleanup\"": {}
                        }
                    },
                    "f:status": {
                        "f:loadBalancer": {
                            "f:ingress": {}
                        }
                    }
                },
                "manager": "kube-controller-manager",
                "operation": "Update",
                "time": "2021-10-21T16:13:08Z"
            },
            {
                "apiVersion": "v1",
                "fieldsType": "FieldsV1",
                "fieldsV1": {
                    "f:metadata": {
                        "f:ownerReferences": {
                            ".": {},
                            "k:{\"uid\":\"03ff3876-b743-4302-8969-1cf517062f2b\"}": {
                                ".": {},
                                "f:apiVersion": {},
                                "f:blockOwnerDeletion": {},
                                "f:controller": {},
                                "f:kind": {},
                                "f:name": {},
                                "f:uid": {}
                            }
                        }
                    },
                    "f:spec": {
                        "f:externalTrafficPolicy": {},
                        "f:ports": {
                            ".": {},
                            "k:{\"port\":6443,\"protocol\":\"TCP\"}": {
                                ".": {},
                                "f:port": {},
                                "f:protocol": {},
                                "f:targetPort": {}
                            }
                        },
                        "f:selector": {
                            ".": {},
                            "f:app": {},
                            "f:hypershift.openshift.io/control-plane-component": {},
                            "f:hypershift.openshift.io/hosted-control-plane": {}
                        },
                        "f:sessionAffinity": {},
                        "f:type": {}
                    }
                },
                "manager": "hypershift-controlplane-manager",
                "operation": "Update",
                "time": "2021-10-21T16:51:04Z"
            }
        ],
        "name": "kube-apiserver",
        "namespace": "clusters-alvaro-test",
        "ownerReferences": [
            {
                "apiVersion": "hypershift.openshift.io/v1alpha1",
                "blockOwnerDeletion": true,
                "controller": true,
                "kind": "HostedControlPlane",
                "name": "alvaro-test",
                "uid": "03ff3876-b743-4302-8969-1cf517062f2b"
            }
        ],
        "resourceVersion": "124245",
        "uid": "48b6f513-8d78-4c3e-8382-7f52c0fadd81"
    },
    "spec": {
        "clusterIP": "172.30.2.255",
        "clusterIPs": [
            "172.30.2.255"
        ],
        "externalTrafficPolicy": "Cluster",
        "ipFamilies": [
            "IPv4"
        ],
        "ipFamilyPolicy": "SingleStack",
        "ports": [
            {
                "nodePort": 31556,
                "port": 6443,
                "protocol": "TCP",
                "targetPort": 6443
            }
        ],
        "selector": {
            "app": "kube-apiserver",
            "hypershift.openshift.io/control-plane-component": "kube-apiserver",
            "hypershift.openshift.io/hosted-control-plane": "clusters-alvaro-test"
        },
        "sessionAffinity": "None",
        "type": "LoadBalancer"
    },
    "status": {
        "loadBalancer": {
            "ingress": [
                {
                    "hostname": "<<REDACTED>.elb.amazonaws.com"
                }
            ]
        }
    }
}

Does not work correctly on associative array

I've reimplemented this plugin from scratch and noticed discrepancies between my implementation and yours.

Output from kubectl blame:

                                                   spec:
                                                     containers:
kubectl-client-side-apply (Update 2 weeks ago)       - env:
kubectl-client-side-apply (Update 2 weeks ago)         - name: barx
kubectl-client-side-apply (Update 2 weeks ago)           value: bar

ManagedFields input to kubectl blame:

  - apiVersion: apps/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:spec:
        f:template:
          f:spec:
            f:containers:
              k:{"name":"nginx"}:
                f:env:
                  .: {}
                  k:{"name":"barx"}:
                    .: {}
                    f:name: {}
                    f:value: {}
    manager: envpatcher
    operation: Update
    time: "2024-04-10T00:34:50Z"

It should be showing envpatcher as the owner.

full input
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "2"
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"labels":{"app":"nginx"},"name":"nginx-deployment","namespace":"default"},"spec":{"replicas":3,"selector":{"matchLabels":{"app":"nginx"}},"template":{"metadata":{"labels":{"app":"nginx"}},"spec":{"containers":[{"image":"nginx:1.14.2","name":"nginx","ports":[{"containerPort":80}]}]}}}}
  creationTimestamp: "2024-04-10T00:34:50Z"
  finalizers:
  - example.com/foo
  generation: 2
  labels:
    app: nginx
  managedFields:
  - apiVersion: apps/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:kubectl.kubernetes.io/last-applied-configuration: {}
        f:labels:
          .: {}
          f:app: {}
      f:spec:
        f:progressDeadlineSeconds: {}
        f:replicas: {}
        f:revisionHistoryLimit: {}
        f:selector: {}
        f:strategy:
          f:rollingUpdate:
            .: {}
            f:maxSurge: {}
            f:maxUnavailable: {}
          f:type: {}
        f:template:
          f:metadata:
            f:labels:
              .: {}
              f:app: {}
          f:spec:
            f:containers:
              k:{"name":"nginx"}:
                .: {}
                f:image: {}
                f:imagePullPolicy: {}
                f:name: {}
                f:ports:
                  .: {}
                  k:{"containerPort":80,"protocol":"TCP"}:
                    .: {}
                    f:containerPort: {}
                    f:protocol: {}
                f:resources: {}
                f:terminationMessagePath: {}
                f:terminationMessagePolicy: {}
            f:dnsPolicy: {}
            f:restartPolicy: {}
            f:schedulerName: {}
            f:securityContext: {}
            f:terminationGracePeriodSeconds: {}
    manager: kubectl-client-side-apply
    operation: Update
    time: "2024-04-10T00:44:50Z"
  - apiVersion: apps/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:spec:
        f:template:
          f:spec:
            f:containers:
              k:{"name":"nginx"}:
                f:env:
                  .: {}
                  k:{"name":"barx"}:
                    .: {}
                    f:name: {}
                    f:value: {}
    manager: envpatcher
    operation: Update
    time: "2024-04-10T00:34:50Z"
  - apiVersion: apps/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          f:deployment.kubernetes.io/revision: {}
      f:status:
        f:availableReplicas: {}
        f:conditions:
          .: {}
          k:{"type":"Available"}:
            .: {}
            f:lastTransitionTime: {}
            f:lastUpdateTime: {}
            f:message: {}
            f:reason: {}
            f:status: {}
            f:type: {}
          k:{"type":"Progressing"}:
            .: {}
            f:lastTransitionTime: {}
            f:lastUpdateTime: {}
            f:message: {}
            f:reason: {}
            f:status: {}
            f:type: {}
        f:observedGeneration: {}
        f:readyReplicas: {}
        f:replicas: {}
        f:updatedReplicas: {}
    manager: kube-controller-manager
    operation: Update
    subresource: status
    time: "2024-04-10T00:34:50Z"
  - apiVersion: apps/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:finalizers:
          .: {}
          v:"example.com/foo": {}
    manager: finalizerpatcher
    operation: Update
    time: "2024-04-10T00:35:29Z"
  name: nginx-deployment
  namespace: default
  resourceVersion: "7792385"
  uid: 2e77f9dd-e8da-47b0-be11-75b04f1b4460
spec:
  progressDeadlineSeconds: 600
  replicas: 3
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: nginx
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - env:
        - name: barx
          value: bar
        image: nginx:1.14.2
        imagePullPolicy: IfNotPresent
        name: nginx
        ports:
        - containerPort: 80
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 3
  conditions:
  - lastTransitionTime: "2024-04-10T00:34:50Z"
    lastUpdateTime: "2024-04-10T00:34:50Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: "2024-04-10T00:34:49Z"
    lastUpdateTime: "2024-04-10T00:35:14Z"
    message: ReplicaSet "nginx-deployment-779d59bcb" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 2
  readyReplicas: 3
  replicas: 3
  updatedReplicas: 3

Support for co-ownership

Hey,

first of all great plugin, really useful. Thank you!

I noticed that only the latest manager is shown. It would be really nice if there were a way to show all managers of a field (e.g. enabled via a flag).
