kubectl-debug's Introduction

Deprecation Notice

This repository is no longer maintained, please checkout https://github.com/JamesTGrant/kubectl-debug.

Kubectl-debug


简体中文 (Simplified Chinese)

Overview

kubectl-debug is an out-of-tree solution for troubleshooting running pods. It lets you run a new container inside a running pod for debugging purposes (see examples). The new container joins the pid, network, user, and ipc namespaces of the target container, so you can use arbitrary troubleshooting tools without pre-installing them in your production container image.
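Conceptually, this is similar to attaching a throwaway container to another container's namespaces with plain Docker; a rough sketch of the equivalent (the container name my-app is hypothetical, and the real plugin drives the runtime through the debug-agent instead):

# join the pid, network and ipc namespaces of the running container "my-app"
docker run -it --rm \
  --pid=container:my-app \
  --network=container:my-app \
  --ipc=container:my-app \
  nicolaka/netshoot:latest bash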

Screenshots

(animated demo GIF)

Quick Start

Install the kubectl debug plugin

Homebrew:

brew install aylei/tap/kubectl-debug

Download the binary:

export PLUGIN_VERSION=0.1.1
# linux x86_64
curl -Lo kubectl-debug.tar.gz https://github.com/aylei/kubectl-debug/releases/download/v${PLUGIN_VERSION}/kubectl-debug_${PLUGIN_VERSION}_linux_amd64.tar.gz
# macos
curl -Lo kubectl-debug.tar.gz https://github.com/aylei/kubectl-debug/releases/download/v${PLUGIN_VERSION}/kubectl-debug_${PLUGIN_VERSION}_darwin_amd64.tar.gz

tar -zxvf kubectl-debug.tar.gz kubectl-debug
sudo mv kubectl-debug /usr/local/bin/

For Windows users: download the latest archive from the release page, decompress it, and add the binary to your PATH.

(Optional) Install the debug agent DaemonSet

kubectl-debug requires an agent pod to communicate with the container runtime. In agentless mode (the default), the agent pod is created when a debug session starts and cleaned up when the session ends.

While convenient, creating the agent pod before each debug session can be time-consuming. You can instead install the debug-agent DaemonSet in advance and pass --agentless=false to skip the agent pod creation:

# if your kubernetes version is v1.16 or newer
kubectl apply -f https://raw.githubusercontent.com/aylei/kubectl-debug/master/scripts/agent_daemonset.yml
# for older kubernetes versions (<v1.16), change the apiVersion to extensions/v1beta1, as follows:
wget https://raw.githubusercontent.com/aylei/kubectl-debug/master/scripts/agent_daemonset.yml
sed -i '' '1s/apps\/v1/extensions\/v1beta1/g' agent_daemonset.yml
kubectl apply -f agent_daemonset.yml
# or install using helm
helm install kubectl-debug -n=debug-agent ./contrib/helm/kubectl-debug
# then debug using the daemonset agent (agentless mode off)
kubectl debug --agentless=false POD_NAME

Debug instructions

Try it out!

# kubectl 1.12.0 or higher
kubectl debug -h
# if you have installed the debug agent DaemonSet, pass --agentless=false to speed up startup.
# the commands below use the default agentless mode
kubectl debug POD_NAME

# in case your pod is stuck in `CrashLoopBackoff` state and cannot be connected to,
# you can fork a new pod and diagnose the problem in the forked pod
kubectl debug POD_NAME --fork

# in fork mode, use --fork-pod-retain-labels to keep selected labels of the original pod on the forked copy (comma separated, no spaces allowed)
# an example follows
# if unset, it defaults to empty: none of the original pod's labels are retained and the forked pod has no labels
kubectl debug POD_NAME --fork --fork-pod-retain-labels=<labelKeyA>,<labelKeyB>,<labelKeyC>

# port-forward mode is enabled by default, so nodes without a public IP or with direct access blocked (by a firewall, for example) remain reachable.
# if you don't need port-forward mode, turn it off with --port-forward=false
kubectl debug POD_NAME --port-forward=false --agentless=false --daemonset-ns=kube-system --daemonset-name=debug-agent

# old versions of kubectl cannot discover plugins, you may execute the binary directly
kubectl-debug POD_NAME

# to use a private docker registry, set a kubernetes secret for pulling the image
# the default registry-secret-name is kubectl-debug-registry-secret, in the default namespace
# the secret data must be of the form {Username: <username>, Password: <password>}
kubectl-debug POD_NAME --image calmkart/netshoot:latest --registry-secret-name <k8s_secret_name> --registry-secret-namespace <namespace>
# in the default agentless mode, you can set the agent pod's resource requests/limits, for example:
# by default they are not set
kubectl-debug POD_NAME --agent-pod-cpu-requests=250m --agent-pod-cpu-limits=500m --agent-pod-memory-requests=200Mi --agent-pod-memory-limits=500Mi
  • You can configure the default arguments to simplify usage; refer to Configuration
  • Refer to Examples for practical debugging examples

(Optional) Create a Secret for Use with Private Docker Registries

You can use a new or existing Kubernetes dockerconfigjson secret. For example:

# Be sure to run "docker login" beforehand.
kubectl create secret generic kubectl-debug-registry-secret \
    --from-file=.dockerconfigjson=<path/to/.docker/config.json> \
    --type=kubernetes.io/dockerconfigjson

Alternatively, you can create a secret with the key authStr and a JSON payload containing a Username and Password. For example:

echo -n '{"Username": "calmkart", "Password": "calmkart"}' > ./authStr
kubectl create secret generic kubectl-debug-registry-secret --from-file=./authStr

Refer to the official Kubernetes documentation on Secrets for more ways to create them.

Build from source

Clone this repo and:

# make will build plugin binary and debug-agent image
make
# install plugin
mv kubectl-debug /usr/local/bin

# build plugin only
make plugin
# build agent only
make agent-docker

Port-forward mode and agentless mode (enabled by default)

  • port-forward mode: By default, kubectl-debug connects directly to the target host. When kubectl-debug cannot connect to targetHost:agentPort, you can enable port-forward mode. In port-forward mode, the local machine listens on localhost:agentPort and forwards data to/from targetPod:agentPort (see the sketch after this list).

  • agentless mode: Without agentless mode, the debug-agent must be pre-deployed on every node of the cluster, consuming cluster resources all the time even though debugging pods is a low-frequency operation. To avoid this waste, agentless mode was added in #31. In agentless mode, kubectl-debug first starts a debug-agent pod on the host where the target pod is located, and the debug-agent then starts the debug container. After the user exits, kubectl-debug deletes the debug container and finally deletes the debug-agent pod.
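The forwarding step behaves like kubectl's built-in port-forward; a rough manual equivalent (the agent pod name is illustrative):

# listen on localhost:10027 and forward to port 10027 of the agent pod
kubectl -n default port-forward pod/debug-agent-pod-<id> 10027:10027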

Configuration

kubectl-debug uses nicolaka/netshoot as the default image for the debug container, and bash as the default entrypoint.

You can override the default image and entrypoint with CLI flags, or better, with the config file ~/.kube/debug-config:

# debug agent listening port (outside the container)
# default to 10027
agentPort: 10027

# whether using agentless mode
# default to true
agentless: true
# namespace of debug-agent pod, used in agentless mode
# default to 'default'
agentPodNamespace: default
# prefix of debug-agent pod, used in agentless mode
# default to  'debug-agent-pod'
agentPodNamePrefix: debug-agent-pod
# image of debug-agent pod, used in agentless mode
# default to 'aylei/debug-agent:latest'
agentImage: aylei/debug-agent:latest

# daemonset name of the debug-agent, used in port-forward
# default to 'debug-agent'
debugAgentDaemonset: debug-agent
# daemonset namespace of the debug-agent, used in port-forward
# default to 'default'
debugAgentNamespace: kube-system
# whether using port-forward when connecting debug-agent
# default true
portForward: true
# image of the debug container
# default as shown
image: nicolaka/netshoot:latest
# start command of the debug container
# default ['bash']
command:
- '/bin/bash'
- '-l'
# private docker registry auth kubernetes secret
# default registrySecretName is kubectl-debug-registry-secret
# default registrySecretNamespace is default
registrySecretName: my-debug-secret
registrySecretNamespace: debug
# in agentless mode, you can set the agent pod's resource limits/requests:
# default is not set
agentCpuRequests: ""
agentCpuLimits: ""
agentMemoryRequests: ""
agentMemoryLimits: ""
# in fork mode, list the labels of the original pod that the forked pod should retain
# format is []string
# if unset, defaults to empty: none of the original pod's labels are retained and the forked pod has no labels
forkPodRetainLabels: []
# You can disable SSL certificate check when communicating with image registry by 
# setting registrySkipTLSVerify to true.
registrySkipTLSVerify: false
# You can set the log level with the verbosity setting
verbosity: 0

If the debug-agent is not accessible via the host port, it is recommended to set portForward: true to use port-forward mode.

PS: kubectl-debug will always override the entrypoint of the container. This is by design, to avoid users accidentally running an unwanted service (of course, you can always do so explicitly).

Authorization

Currently, kubectl-debug reuses the privilege of the pod/exec subresource for authorization, which means it has the same privilege requirements as the kubectl exec command.
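In practice this means a user needs RBAC permission to create pods/exec, just as for kubectl exec. A minimal sketch of such a Role (not shipped with this project; names are illustrative):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: debug-exec
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list"]
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create"]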

Auditing / Security

Some teams may want to limit which debug images users are allowed to use and to keep an audit record of each command run in the debug container.

You can use the environment variable KCTLDBG_RESTRICT_IMAGE_TO to restrict the agent to a specific container image. For example, putting the following in the container spec section of your daemonset yaml will force the agent to always use the image docker.io/nicolaka/netshoot:latest, regardless of what the user specifies on the kubectl-debug command line:

          env:
            - name: KCTLDBG_RESTRICT_IMAGE_TO
              value: docker.io/nicolaka/netshoot:latest

If KCTLDBG_RESTRICT_IMAGE_TO is set and, as a result, the agent uses an image different from what the user requested, the agent will log a message to standard out announcing what is happening. The message includes the URIs of both images.

Auditing can be enabled by placing audit: true in the agent's config file.

There are three settings related to auditing:

audit
Boolean value that indicates whether auditing should be enabled or not. Default value is false
audit_fifo
Template of the path to a FIFO that will be used to exchange audit information from the debug container to the agent. The default value is /var/data/kubectl-debug-audit-fifo/KCTLDBG-CONTAINER-ID. If auditing is enabled, the agent will:
  1. Prior to creating the debug container, create a fifo based on the value of audit_fifo. The agent will replace KCTLDBG-CONTAINER-ID with the id of the debug container it is creating.
  2. Create a thread that reads lines of text from the FIFO and writes log messages to standard out, similar to the example below:
    2020/05/22 17:59:58 runtime.go:717: audit - user: USERNAME/885cbd0506868985a6fc491bb59a2d3c debugee: 48107cbdacf4b478cbf1e2e34dbea6ebb48a2942c5f3d1effbacf0a216eac94f exec: 265 execve("/bin/tar", ["tar", "--help"], 0x55a8d0dfa6c0 /* 7 vars */) = 0
    where USERNAME is the kubernetes user as determined by the client that launched the debug container, and debugee is the container id of the container being debugged.
  3. Bind mount the fifo it creates into the debug container.
audit_shim
String array that will be prepended to the command that will be run in the debug container. The default value is {"/usr/bin/strace", "-o", "KCTLDBG-FIFO", "-f", "-e", "trace=/exec"}. The agent will replace KCTLDBG-FIFO with the fifo path (see above). If auditing is enabled, the agent will run the concatenation of the audit_shim array and the original command array.
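Putting the three settings together, an agent config that enables auditing, with the documented defaults spelled out explicitly, would look like this:

audit: true
audit_fifo: /var/data/kubectl-debug-audit-fifo/KCTLDBG-CONTAINER-ID
audit_shim:
- /usr/bin/strace
- -o
- KCTLDBG-FIFO
- -f
- -e
- trace=/exec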

The easiest way to enable auditing is to define a config map in the yaml you use to deploy the daemonset. You can do this by placing

apiVersion: v1
kind: ConfigMap
metadata:
  name: kubectl-debug-agent-config
data:
  agent-config.yml: |
    audit: true
---

at the top of the file, adding a configmap volume like so

        - name: config
          configMap:
            name: kubectl-debug-agent-config

and a volume mount like so

            - name: config
              mountPath: "/etc/kubectl-debug/agent-config.yml"
              subPath: agent-config.yml

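Pieced together, the audit-related parts of the daemonset manifest would look roughly like this (a sketch; the container name and image follow the repository's agent_daemonset.yml):

apiVersion: v1
kind: ConfigMap
metadata:
  name: kubectl-debug-agent-config
data:
  agent-config.yml: |
    audit: true
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: debug-agent
spec:
  selector:
    matchLabels:
      app: debug-agent
  template:
    metadata:
      labels:
        app: debug-agent
    spec:
      containers:
      - name: debug-agent
        image: aylei/debug-agent:latest
        volumeMounts:
        - name: config
          mountPath: "/etc/kubectl-debug/agent-config.yml"
          subPath: agent-config.yml
      volumes:
      - name: config
        configMap:
          name: kubectl-debug-agent-config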

Roadmap

kubectl-debug is intended to be just a troubleshooting helper, and it is going to be replaced by the native kubectl debug command once this proposal is implemented and merged in a future kubernetes release. For now, though, there is still some work to do to improve kubectl-debug.

  • Security: currently, kubectl-debug does authorization on the client side, which should be moved to the server side (debug-agent)
  • More unit tests
  • More real world debugging examples
  • e2e tests

If you are interested in any of the above features, please file an issue to avoid potential duplication.

Contribute

Feel free to open issues and pull requests. Any feedback is highly appreciated!

Acknowledgement

This project would not be here without the effort of our contributors, thanks!

kubectl-debug's People

Contributors

abowloflrf, andrew-demb, atheriel, aylei, calmkart, caruccio, dee0, frots, gadiener, jseguillon, kchenzhi, kklin, markzhang0928, mikhail-sakhnov, runzhen, scraly, tkanng, walfie, whalecold, xieyanker


kubectl-debug's Issues

Cannot start the container when using the --fork option

Error:

Error: failed to start container "coredns": Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "exec: \"sh\": executable file not found in $PATH": unknown

can't resolve dependency issue with go dep

When I was trying to run the dep init command on the kubectl-debug project, I received the following error:

init failed: unable to solve the dependency graph: Solving failure: package github.com/docker/docker/api/types/image does not exist within project github.com/docker/docker

Can you please help how to solve it?

Possibility to connect to CrashLoopbackOff pod

Hi there!
This is a feature request. It would be great to be able to connect to and debug crashing pods (i.e. in CrashLoopBackOff). When we see a "bad" pod with no meaningful logs in the pod describe output, we could connect into it and do some useful things.
What do you think about such a feature? Could it be done?

Doesn't work if HostIP is not directly accessible

I tried to use kubectl-debug with Kubernetes on macOS Docker Desktop. Its kubelet runs inside a hyperkit VM and has an IP that is inaccessible from the host machine where kubectl is running. So kubectl-debug fails with: "error execute remote, error sending request: Post http://192.168.65.3:10027/api/v1/debug.... : dial tcp 192.168.65.3:10027: connect: operation timed out"

I suppose it also won't work in production environments where kubelet nodes don't have an external IP, or have everything except kubernetes-internal traffic blocked at the firewall.

One possible solution is to connect to the agent via kubectl port-forward if some option is given on the command line. Or allow specifying the agent endpoint URL either directly on the command line or in debug-config (where endpoint overrides could be specified as a map of nodename => endpoint-uri).

Doesn't work on GKE

I got this error when trying this against a GKE cluster:

2018/12/24 17:46:28 error loading file  open /Users/kevin/.kube/debug-config: no such file or directory
No Auth Provider found for name "gcp"
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0x1b459b8]

goroutine 1 [running]:
github.com/aylei/kubectl-debug/pkg/plugin.(*DebugOptions).Run(0xc4202621c0, 0x0, 0x0)
        /Users/alei/go/src/github.com/aylei/kubectl-debug/pkg/plugin/cmd.go:193 +0x48
github.com/aylei/kubectl-debug/pkg/plugin.NewDebugCmd.func1(0xc420295180, 0xc420322bd0, 0x1, 0x3)
        /Users/alei/go/src/github.com/aylei/kubectl-debug/pkg/plugin/cmd.go:94 +0x134
github.com/aylei/kubectl-debug/vendor/github.com/spf13/cobra.(*Command).execute(0xc420295180, 0xc4200d60d0, 0x3, 0x3, 0xc420295180, 0xc4200d60d0)
        /Users/alei/go/src/github.com/aylei/kubectl-debug/vendor/github.com/spf13/cobra/command.go:766 +0x2c1
github.com/aylei/kubectl-debug/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc420295180, 0x1e5edf0, 0x1bed6c0, 0x1e5af50)
        /Users/alei/go/src/github.com/aylei/kubectl-debug/vendor/github.com/spf13/cobra/command.go:852 +0x30a
github.com/aylei/kubectl-debug/vendor/github.com/spf13/cobra.(*Command).Execute(0xc420295180, 0xc4200dc000, 0x1e64980)
        /Users/alei/go/src/github.com/aylei/kubectl-debug/vendor/github.com/spf13/cobra/command.go:800 +0x2b
main.main()
        /Users/alei/go/src/github.com/aylei/kubectl-debug/cmd/plugin/main.go:16 +0x112

I've seen this in my own projects before. It'll probably be fixed by importing the auth plugins: kubernetes/client-go#242 (comment).
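The usual fix referenced there is a blank import of the client-go auth plugins in the plugin's main package; a sketch of what that change might look like (file path per the stack trace above):

// in cmd/plugin/main.go: register all client-go auth providers (gcp, oidc, azure, ...)
import (
	_ "k8s.io/client-go/plugin/pkg/client/auth"
)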

unable to run kubectl-debug

kubectl-debug traefik-ingress-controller-v2-9vsvk -n kube-system
error execute remote, error sending request: Post http://192.168.0.183:10027/api/v1/debug?command=%5B%22bash%22%5D&container=docker%3A%2F%2F506c3ea81af0d7887a8c497072439225421392a1730d4fd720a6bb640567967c&image=nicolaka%2Fnetshoot%3Alatest: dial tcp 192.168.0.183:10027: connect: connection refused
error: error sending request: Post http://192.168.0.183:10027/api/v1/debug?command=%5B%22bash%22%5D&container=docker%3A%2F%2F506c3ea81af0d7887a8c497072439225421392a1730d4fd720a6bb640567967c&image=nicolaka%2Fnetshoot%3Alatest: dial tcp 192.168.0.183:10027: connect: connection refused

Panic when the user does not have permission to delete pods

Start deleting agent pod debug-pod-585fcf9d59-njmwk
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0x10c9a57]

goroutine 1 [running]:
fmt.Fprintf(0x0, 0x0, 0x20b845f, 0x4e, 0xc0007157f0, 0x2, 0x2, 0x39, 0x0, 0x0)
/usr/local/Cellar/go/1.12.4/libexec/src/fmt/print.go:200 +0x77
github.com/aylei/kubectl-debug/pkg/plugin.(*DebugOptions).Run.func2.1()
/Users/alei/go/src/github.com/aylei/kubectl-debug/pkg/plugin/cmd.go:472 +0x2c9
k8s.io/kubernetes/pkg/util/interrupt.(*Handler).Close.func1()
/Users/alei/go/pkg/mod/k8s.io/[email protected]/pkg/util/interrupt/interrupt.go:65 +0x46
sync.(*Once).Do(0xc0003ac080, 0xc000715888)
/usr/local/Cellar/go/1.12.4/libexec/src/sync/once.go:44 +0xb3
k8s.io/kubernetes/pkg/util/interrupt.(*Handler).Close(0xc0003ac060)
/Users/alei/go/pkg/mod/k8s.io/[email protected]/pkg/util/interrupt/interrupt.go:63 +0x54
k8s.io/kubernetes/pkg/util/interrupt.(*Handler).Run(0xc0003ac060, 0xc000758c40, 0x0, 0x0)
/Users/alei/go/pkg/mod/k8s.io/[email protected]/pkg/util/interrupt/interrupt.go:103 +0x113
github.com/aylei/kubectl-debug/pkg/plugin.(*DebugOptions).Run.func2(0xc000000008, 0x20fac30)
/Users/alei/go/src/github.com/aylei/kubectl-debug/pkg/plugin/cmd.go:475 +0xf3
k8s.io/kubernetes/pkg/util/interrupt.(*Handler).Run(0xc00076e300, 0xc00076e2d0, 0x0, 0x0)
/Users/alei/go/pkg/mod/k8s.io/[email protected]/pkg/util/interrupt/interrupt.go:103 +0xff
github.com/aylei/kubectl-debug/pkg/util.TTY.Safe(0x2244600, 0xc00000e010, 0x2244620, 0xc0000bc000, 0x1, 0x0, 0xc00015a2d0, 0xc00076e2d0, 0x0, 0x0)
/Users/alei/go/src/github.com/aylei/kubectl-debug/pkg/util/term.go:110 +0x189
github.com/aylei/kubectl-debug/pkg/plugin.(*DebugOptions).Run(0xc000182c40, 0x0, 0x20fac28)
/Users/alei/go/src/github.com/aylei/kubectl-debug/pkg/plugin/cmd.go:478 +0x8ea
github.com/aylei/kubectl-debug/pkg/plugin.NewDebugCmd.func1(0xc00037a280, 0xc00015a190, 0x2, 0x5)
/Users/alei/go/src/github.com/aylei/kubectl-debug/pkg/plugin/cmd.go:150 +0xe7
github.com/spf13/cobra.(*Command).execute(0xc00037a280, 0xc0000be0d0, 0x5, 0x5, 0xc00037a280, 0xc0000be0d0)
/Users/alei/go/pkg/mod/github.com/spf13/[email protected]/command.go:766 +0x2ae
github.com/spf13/cobra.(*Command).ExecuteC(0xc00037a280, 0xc00000e010, 0x2244620, 0xc0000bc000)
/Users/alei/go/pkg/mod/github.com/spf13/[email protected]/command.go:852 +0x2ec
github.com/spf13/cobra.(*Command).Execute(...)
/Users/alei/go/pkg/mod/github.com/spf13/[email protected]/command.go:800
main.main()
/Users/alei/go/src/github.com/aylei/kubectl-debug/cmd/plugin/main.go:17 +0x110

How to run one-time run command and get result

By running "kubectl exec -ti", we can get the output of the command as shown below; using "kubectl-debug" I cannot get the desired result. Any idea?

# kubectl exec -ti  tkservice-5ffbb64854-6k2h6  -- ls -hl
total 30M
-rw-r--r-- 1 root root  263 Oct 31 09:59 Dockerfile
drwxr-xr-x 7 root root 4.0K Oct 31 09:59 config
-rw-r--r-- 1 root root  188 Oct 31 09:59 main.yml
# kubectl-debug tkservice-5ffbb64854-6k2h6 -- ls -hl
Agent Pod info: [Name:debug-agent-pod-146dc5f3-fc7d-11e9-a689-00163e03e155, Namespace:default, Image:debug-agent:latest, HostPort:10027, ContainerPort:10027]
Waiting for pod debug-agent-pod-146dc5f3-fc7d-11e9-a689-00163e03e155 to run...
pod tkservice-5ffbb64854-6k2h6 PodIP 10.81.133.178, agentPodIP 172.xxx.xxx.211
wait for forward port to debug agent ready...
Forwarding from 127.0.0.1:10027 -> 10027
Handling connection for 10027
                             pulling image netshoot...
latest: Pulling from paas-dev/netshoot
Digest: sha256:8b020dc72d8ef07663e44c449f1294fc47c81a10ef5303dc8c2d9635e8ca22b1
Status: Image is up to date for netshoot:latest
starting debug container...
container created, open tty...
Start deleting agent pod tkservice-5ffbb64854-6k2h6
end port-forward...

Error when debugging a normally running pod

Command: kubectl-debug --namespace test test-pod --port-forward --agentless
Output:
Agent Pod info: [Name:debug-agent-pod-36a6c112-ad3e-11e9-bd4c-acde48001122, Namespace:default, Image:aylei/debug-agent:latest, HostPort:10027, ContainerPort:10027]
Waiting for pod debug-agent-pod-36a6c112-ad3e-11e9-bd4c-acde48001122 to run...
Error occurred while waiting for pod to run: pod ran to completion
error: pod ran to completion

README error

Hello!

  1. You have a small typo here:

chmod +x ./kubectl-debug
mv kubectdl-debug /usr/local/bin/

  2. Should kubectl debug work from scratch as you described? Or should the user also add a bash alias for the kubectl-debug executable binary? I've tested your plugin on a 1.10 cluster and client, and it's completely okay, but only when invoking kubectl-debug directly.

add an extension apiserver as the gateway

kubectl-debug connects to the node agent directly, which is not secure. We should provide an extension apiserver to do centralized authorization & authentication. The extension apiserver should be an opt-in component; we can always use the simplest setup, just like now.

The extension apiserver will also proxy the debug connection like the kube apiserver does, which addresses the inaccessible hostIP issue in #2

Warning about missing config

Awesome project!

I've followed the install instructions but it always complains about the config not being there.

$ kubectl debug mypod
2019/03/13 08:45:32 error loading file  open /Users/jacob/.kube/debug-config: no such file or directory

Would be nice for this to be created automatically.

How to copy tcpdump file to local machine

When I try 'kubectl exec -it debug-pod /bin/sh', I get this message:

OCI runtime exec failed: exec failed: container_linux.go:344: starting container process caused "exec: "/bin/sh": stat /bin/sh: no such file or directory": unknown
command terminated with exit code 126
I can't get into the pod to copy the tcpdump file.

Using a custom debug image fails with a registry connection error

Running: kubectl debug pod_name -n ns-* --image harbor.*.com/base/bianque:v1.0.1
pulling image harbor.*.com/base/bianque:v1.0.1...
message: error execute remote, Internal error occurred: error attaching to container: Error response from daemon: pull access denied for harbor.*.com/base/bianque, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
The image pull secret has already been added in the agent DaemonSet file; the server where I'm trying to debug is also logged into this Harbor registry, and pulling the image there manually works fine.

--agentless: agent not removed in case of error

While investigating #52 I saw this:

$ kubectl -n default get pod
No resources found.
$ kubectl debug harbor-harbor-portal-68df4cdb58-9xx4l --agentless --port-forward
Agent Pod info: [Name:debug-agent-pod-853aee9e-db9b-11e9-b976-88e9fe6345ac, Namespace:default, Image:aylei/debug-agent:latest, HostPort:10027, ContainerPort:10027]
Waiting for pod debug-agent-pod-853aee9e-db9b-11e9-b976-88e9fe6345ac to run...
error: container [portal] not ready
$ kubectl -n default get pod
NAME                                                   READY   STATUS    RESTARTS   AGE
debug-agent-pod-853aee9e-db9b-11e9-b976-88e9fe6345ac   1/1     Running   0          30s
$

That should've been deleted by now, right?

Agentless pod does not boot on cluster with CPU and memory restrictions

I am having an issue where the agent pod cannot be created because it does not specify memory limits/requests.
For example:
kubectl debug <pod_name> --agentless --agent-pod-namespace=<namespace>
Output:
Agent Pod info: [Name:debug-agent-pod-<pod_name>, Namespace:<namespace>, Image:aylei/debug-agent:latest, HostPort:10027, ContainerPort:10027] Error from server (Forbidden): pods "debug-agent-pod-<pod_name>" is forbidden: [maximum memory usage per Pod is 1Ti. No limit is specified., memory max limit to request ratio per Pod is 2, but no request is specified or request is 0.]

Our kubernetes cluster enforces restrictions on CPU and memory requests/limits. Any pod without proper limits configured will not be allowed to boot.

It would be nice to allow the CPU and memory parameters to be configured in the config file so we could use the agentless setup.

kubectl debug fails on a pod in CrashLoopBackOff state

[root@sz-5-centos163 src]# kubectl debug finance-payment-kingdee-realname-cgi--test5-6f46ff966c-rm6dl -n finance --fork

error parsing configuration file: yaml: line 36: found unexpected end of streamWaiting for pod finance-payment-kingdee-realname-cgi--test5-6f46ff966c-rm6dl-067b70f9-9197-11e9-8494-00163e340364-debug to run...
Error occurred while waiting for pod to run: pod ran to completion
error: pod ran to completion

However, the debug-agent pod is already running.
What is going on here?

Container is not ready

Hello there,

We are trying to debug a pod which fails to start up correctly under some conditions, only in production. But when launching the debug command, it fails because the application's container is not ready.

kubectl debug app-XXX -n production --agentless
Agent Pod info: [Name:debug-agent-pod-f0fb8c3c-d083-11e9-844f-9cb6d0eeb5ef, Namespace:default, Image:aylei/debug-agent:latest, HostPort:10027, ContainerPort:10027]
Waiting for pod debug-agent-pod-f0fb8c3c-d083-11e9-844f-9cb6d0eeb5ef to run...
error: container [app-XXX] not ready

Is this condition required in order to debug a pod?

Best,

Matthieu

Bash completion

Bash completion would be really nice to have 😁
Maybe we just need some slight additions to the kubectl bash completion script.

Panic when exiting the debug container

Start deleting agent pod tidb-pd-2
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0x10c9a57]

goroutine 1 [running]:
fmt.Fprintf(0x0, 0x0, 0x20b845f, 0x4e, 0xc0008297f0, 0x2, 0x2, 0x25, 0x0, 0x0)
        /usr/local/Cellar/go/1.12.4/libexec/src/fmt/print.go:200 +0x77
github.com/aylei/kubectl-debug/pkg/plugin.(*DebugOptions).Run.func2.1()
        /Users/alei/go/src/github.com/aylei/kubectl-debug/pkg/plugin/cmd.go:472 +0x2c9
k8s.io/kubernetes/pkg/util/interrupt.(*Handler).Close.func1()
        /Users/alei/go/pkg/mod/k8s.io/[email protected]/pkg/util/interrupt/interrupt.go:65 +0x46
sync.(*Once).Do(0xc0000c74c0, 0xc000829888)
        /usr/local/Cellar/go/1.12.4/libexec/src/sync/once.go:44 +0xb3
k8s.io/kubernetes/pkg/util/interrupt.(*Handler).Close(0xc0000c74a0)
        /Users/alei/go/pkg/mod/k8s.io/[email protected]/pkg/util/interrupt/interrupt.go:63 +0x54
k8s.io/kubernetes/pkg/util/interrupt.(*Handler).Run(0xc0000c74a0, 0xc0000ac180, 0x0, 0x0)
        /Users/alei/go/pkg/mod/k8s.io/[email protected]/pkg/util/interrupt/interrupt.go:103 +0x113
github.com/aylei/kubectl-debug/pkg/plugin.(*DebugOptions).Run.func2(0xc000000008, 0x20fac30)
        /Users/alei/go/src/github.com/aylei/kubectl-debug/pkg/plugin/cmd.go:475 +0xf3
k8s.io/kubernetes/pkg/util/interrupt.(*Handler).Run(0xc0000c7470, 0xc0000c7440, 0x0, 0x0)
        /Users/alei/go/pkg/mod/k8s.io/[email protected]/pkg/util/interrupt/interrupt.go:103 +0xff
github.com/aylei/kubectl-debug/pkg/util.TTY.Safe(0x2244600, 0xc0000b4000, 0x2244620, 0xc0000b8000, 0x1, 0x0, 0xc00013e780, 0xc0000c7440, 0x0, 0x0)
        /Users/alei/go/src/github.com/aylei/kubectl-debug/pkg/util/term.go:110 +0x189
github.com/aylei/kubectl-debug/pkg/plugin.(*DebugOptions).Run(0xc000174700, 0x0, 0x20fac28)
        /Users/alei/go/src/github.com/aylei/kubectl-debug/pkg/plugin/cmd.go:478 +0x8ea
github.com/aylei/kubectl-debug/pkg/plugin.NewDebugCmd.func1(0xc000551180, 0xc0000bb2c0, 0x1, 0x6)
        /Users/alei/go/src/github.com/aylei/kubectl-debug/pkg/plugin/cmd.go:150 +0xe7
github.com/spf13/cobra.(*Command).execute(0xc000551180, 0xc00003a080, 0x6, 0x6, 0xc000551180, 0xc00003a080)
        /Users/alei/go/pkg/mod/github.com/spf13/[email protected]/command.go:766 +0x2ae
github.com/spf13/cobra.(*Command).ExecuteC(0xc000551180, 0xc0000b4000, 0x2244620, 0xc0000b8000)
        /Users/alei/go/pkg/mod/github.com/spf13/[email protected]/command.go:852 +0x2ec
github.com/spf13/cobra.(*Command).Execute(...)
        /Users/alei/go/pkg/mod/github.com/spf13/[email protected]/command.go:800
main.main()
        /Users/alei/go/src/github.com/aylei/kubectl-debug/cmd/plugin/main.go:17 +0x110

doesn't work

Network setup: the remote cluster is accessed through the apiserver interface

Configuration:

# port on the host that debug-agent is mapped to
# default 10027
agentPort: 10027

# whether to enable agentless mode
# default false
agentless: false
# namespace of the agent pod, used in agentless mode
# default default
agentPodNamespace: kube-system
# name prefix of the agent pod (the suffix is the target hostname), used in agentless mode
# default debug-agent-pod
agentPodNamePrefix: debug-agent-pod
# image of the agent pod, used in agentless mode
# default aylei/debug-agent:latest
agentImage: aylei/debug-agent:latest

# name of the debug-agent DaemonSet, used in port-forward mode
# default 'debug-agent'
debugAgentDaemonset: debug-agent
# namespace of the debug-agent DaemonSet, used in port-forward mode
# default 'default'
debugAgentNamespace: kube-system
# whether to enable port-forward mode
# default false
portForward: true
# image of the debug container
# default as showed
image: nicolaka/netshoot:latest
# start command of the debug container
# default ['bash']
command:
- '/bin/bash'
- '-l'

Result:

~ kubectl debug -n runtime  gateway-controller-7989c46dff-msdzh  bash
error parsing configuration file: yaml: unmarshal errors:
  line 7: field agentless not found in type plugin.Config
  line 10: field agentPodNamespace not found in type plugin.Config
  line 13: field agentPodNamePrefix not found in type plugin.Config
  line 16: field agentImage not found in type plugin.Config

Then it hangs.

The command from the README doesn't work either:

~ kubectl debug -n runtime gateway-controller-7989c46dff-msdzh --port-forward --daemonset-ns=kube-system --daemonset-name=debug-agent
Error: unknown flag: --port-forward

(shrug)

Fix misleading metrics for the targeted pod when using tools like `free` and `top` that rely on procfs isolation

hi aylei, this PR aims at fixing some misleading metrics for the targeted pod when using tools like free and top, which rely on procfs isolation. Those crucial data will be corrected by running a fuse filesystem (lxcfs) in our debug-agent.

Once we start the agent pod, no matter whether we choose agentless mode or daemonset mode, we run the lxcfs process on the targeted node accordingly. Then, when starting the debug container with lxcfs mode enabled (by setting 'isLxcfsEnabled: true' in ~/.kube/debug-config), we can instantly make the targeted container's procfs correct.
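Per the description above, enabling it amounts to a one-line addition to ~/.kube/debug-config:

# enable lxcfs-backed procfs correction for the target container
isLxcfsEnabled: true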

I've tested some cases below:

  • daemonset mode when port-forward mode is off:
kubectl-debug --agentless=false --port-forward=false debugtest
set container procfs correct true ..
pulling image nicolaka/netshoot:latest...
latest: Pulling from nicolaka/netshoot
Digest: sha256:8b020dc72d8ef07663e44c449f1294fc47c81a10ef5303dc8c2d9635e8ca22b1
Status: Image is up to date for nicolaka/netshoot:latest
starting debug container...
container created, open tty...
 [1] 🐳  →

In the targeted container:

root@iZuf6f9dx8ur1ouh5sb18hZ:~# docker exec -ti d51d185b37a4 free -h
              total        used        free      shared  buff/cache   available
Mem:           1.0G        256K        1.0G          0B          0B        1.0G
Swap:            0B          0B          0B

root@iZuf6f9dx8ur1ouh5sb18hZ:~# docker exec -ti d51d185b37a4 top
top - 15:27:55 up  4:01,  0 users,  load average: 0.00, 0.00, 0.00
Tasks:  14 total,   1 running,  13 sleeping,   0 stopped,   0 zombie
%Cpu0  :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
%Cpu1  :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
%Cpu2  :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
%Cpu3  :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
%Cpu4  :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
%Cpu5  :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
%Cpu6  :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
%Cpu7  :  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem :  1048576 total,  1048320 free,      256 used,        0 buff/cache
KiB Swap:        0 total,        0 free,        0 used.  1048320 avail Mem

   PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
...

root@iZuf6f9dx8ur1ouh5sb18hZ:~# docker exec -ti d51d185b37a4 uptime
 15:28:14 up  4:01,  0 users,  load average: 0.00, 0.00, 0.00

  • agentless mode when port-forward mode is off
[root@iZuf6f9dx8ur1ouh5sb18gZ ~]$ kubectl-debug --agentless=true --port-forward=false debugtest
Agent Pod info: [Name:debug-agent-pod-5bcf4426-0459-11ea-b12d-00163e06d596, Namespace:default, Image:registry.cn-hangzhou.aliyuncs.com/huya_zhangyi/debug-agent:latest, HostPort:10027, ContainerPort:10027]
Waiting for pod debug-agent-pod-5bcf4426-0459-11ea-b12d-00163e06d596 to run...
set container procfs correct true ..
pulling image nicolaka/netshoot:latest...
latest: Pulling from nicolaka/netshoot
Digest: sha256:8b020dc72d8ef07663e44c449f1294fc47c81a10ef5303dc8c2d9635e8ca22b1
Status: Image is up to date for nicolaka/netshoot:latest
starting debug container...
container created, open tty...


root@iZuf6f9dx8ur1ouh5sb18hZ:~#docker exec -ti d51d185b37a4 uptime
 16:02:18 up  4:35,  0 users,  load average: 0.20, 0.14, 0.10

root@iZuf6f9dx8ur1ouh5sb18hZ:~# docker exec -ti d51d185b37a4 top
top - 16:03:03 up  4:36,  0 users,  load average: 0.17, 0.14, 0.10
Tasks:  14 total,   1 running,  13 sleeping,   0 stopped,   0 zombie
%Cpu(s):  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem :  1048576 total,  1048320 free,      256 used,        0 buff/cache
KiB Swap:        0 total,        0 free,        0 used.  1048320 avail Mem

Besides, in order to improve the availability of this function and support remounting the lxcfs file or hot-updating the lxcfs process, we recommend that all pods mount a parent directory of lxcfs via a mount point.

apiVersion: v1
kind: Pod
metadata:
  name: targetcontainer
spec:
  restartPolicy: Always
  containers:
  - name: nginx
    image: nginx:1.12.2
    stdin: true
    tty: true
    resources:
      limits:
        cpu: "2"
        memory: "1Gi"
      requests:
        cpu: "2"
        memory: "1Gi"
    volumeMounts:
    - name: lxcfs
      mountPath: /var/lib/lxc
      mountPropagation: HostToContainer
  volumes:
  - name: lxcfs
    hostPath:
      path: /var/lib/lxc
      type: DirectoryOrCreate

Panic when target pod is not specified

➜  kubectl-debug git:(master) ✗ kubectl debug
error pod not specified
pod name must be specified
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x68 pc=0x1b9a7c2]

goroutine 1 [running]:
github.com/aylei/kubectl-debug/pkg/plugin.(*DebugOptions).Run(0xc0002481c0, 0x0, 0x0)
	/Users/alei/go/src/github.com/aylei/kubectl-debug/pkg/plugin/cmd.go:201 +0x62
github.com/aylei/kubectl-debug/pkg/plugin.NewDebugCmd.func1(0xc00034f400, 0x2bad698, 0x0, 0x0)
	/Users/alei/go/src/github.com/aylei/kubectl-debug/pkg/plugin/cmd.go:100 +0x134
github.com/aylei/kubectl-debug/vendor/github.com/spf13/cobra.(*Command).execute(0xc00034f400, 0xc0000b8190, 0x0, 0x0, 0xc00034f400, 0xc0000b8190)
	/Users/alei/go/src/github.com/aylei/kubectl-debug/vendor/github.com/spf13/cobra/command.go:766 +0x2cc
github.com/aylei/kubectl-debug/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc00034f400, 0x2080880, 0x1d54140, 0x207c238)
	/Users/alei/go/src/github.com/aylei/kubectl-debug/vendor/github.com/spf13/cobra/command.go:852 +0x2fd
github.com/aylei/kubectl-debug/vendor/github.com/spf13/cobra.(*Command).Execute(0xc00034f400, 0xc0000ce000, 0x2088300)
	/Users/alei/go/src/github.com/aylei/kubectl-debug/vendor/github.com/spf13/cobra/command.go:800 +0x2b
main.main()
	/Users/alei/go/src/github.com/aylei/kubectl-debug/cmd/plugin/main.go:17 +0x10e

Forked pod doesn't copy pod's labels

I have a scenario where I'm using Azure MSI and aad-pod-identity to assign identities to pods (in order to retrieve credentials for several azure services). This integration works by assigning a label to the pods.

When I use the fork feature, it doesn't copy any of the original labels, and that causes the pod to fail before the point I need to debug.

As I see it, fork should copy the labels of the original pod.

connection timed out

Hi, I get connection timed out when trying to use this tool. It's managed Kubernetes on DigitalOcean.

[~]$ k debug auth-service-c7c55cc59-jxsts
error execute remote, error sending request: Post http://10.136.228.161:10027/api/v1/debug?command=%5B%22bash%22%5D&container=docker%3A%2F%2Fad6c55366505d3816dc2d7c274f15738c7995483807d404755ea96188249e2fa&image=nicolaka%2Fnetshoot%3Alatest: dial tcp 10.136.228.161:10027: connect: connection timed out
error: error sending request: Post http://10.136.228.161:10027/api/v1/debug?command=%5B%22bash%22%5D&container=docker%3A%2F%2Fad6c55366505d3816dc2d7c274f15738c7995483807d404755ea96188249e2fa&image=nicolaka%2Fnetshoot%3Alatest: dial tcp 10.136.228.161:10027: connect: connection timed out

the agents are installed:
$ k get pod
NAME                           READY   STATUS    RESTARTS   AGE
auth-service-c7c55cc59-jxsts   1/1     Running   1          16h
debug-agent-jmh56              1/1     Running   0          5m58s
debug-agent-kscnq              1/1     Running   0          5m58s
debug-agent-zsn6g              1/1     Running   0          5m58s

Kubernetes version 1.13.5, kubectl 1.15.2

Connection refused right after installing kubectl-debug

After downloading the binary, an exception occurred when calling kubectl debug demo.
It was run on the master node, and the pod uses the nginx image.
Is there some service I haven't enabled?

The exception:
error execute remote, error sending request: Post http://10.211.55.112:10027/api/v1/debug?command=%5B%22bash%22%5D&container=docker%3A%2F%2F6a6200df293ff1270e809ca14f384a2de72b8a1b02eb262206b5916802a71609&image=nicolaka%2Fnetshoot%3Alatest: dial tcp 10.211.55.112:10027: connect: connection refused error: error sending request: Post http://10.211.55.112:10027/api/v1/debug?command=%5B%22bash%22%5D&container=docker%3A%2F%2F6a6200df293ff1270e809ca14f384a2de72b8a1b02eb262206b5916802a71609&image=nicolaka%2Fnetshoot%3Alatest: dial tcp 10.211.55.112:10027: connect: connection refused

support init containers

Supporting init containers would be great for troubleshooting.

I have some troubleshooting to do on mongodb-replicaset, which gets stuck in its init containers. So I tried this beautiful tool, but:

[test@test ~]$ kubectl debug --namespace test test-mongodb-replicaset-1  
container mongodb-replicaset id not ready
[test@test ~]$ kubectl debug --namespace test test-mongodb-replicaset-1 -c bootstrap
cannot find specified container bootstrap

kubectl can see this bootstrap container (still running, that is the problem):

[test@test ~]$ kubectl logs test-mongodb-replicaset-1 -n test -c bootstrap -f 
2019/02/21 11:05:13 Peer list updated
... waiting for new logs that will never come...

Extract from describe :

kubectl -n test describe pod test-mongodb-replicaset-1 
Name:               test-mongodb-replicaset-1
Namespace:          test
Priority:           0
PriorityClassName:  <none>
Node:               xxx
Start Time:         Thu, 21 Feb 2019 11:05:09 +0000
Labels:             app=mongodb-replicaset
                    controller-revision-hash=test-mongodb-replicaset-647db4c6c4
                    release=test
                    statefulset.kubernetes.io/pod-name=test-mongodb-replicaset-1
Annotations:        <none>
Status:             Pending
IP:                 10.233.88.14
Controlled By:      StatefulSet/test-mongodb-replicaset
Init Containers:
  copy-config:
    Container ID:  docker://f6685231928ba6a843dd348f6cdd602f178aa5e132194dc8aaa44b8058d02c21
   ...
    State:          Terminated
  install:
    Container ID:  docker://618fcfe64504376ece61dfc481641921f0bf1226a6b1394503f8622e0fed0cd9
    State:          Terminated   
   ...
  bootstrap:
    Container ID:  docker://34e529ecd86058e1c9e45a1a50f51d610ec65181c0bdfab17b46010c47083142
    Image:         mongo:3.6
    Image ID:      docker-pullable://mongo@sha256:89822fa6161c2ed77e73fb717f189e6ce5a95cb752fe2021508daeee366f9b69
    Port:          <none>
    Host Port:     <none>
    Command:
      /work-dir/peer-finder
    Args:
      -on-start=/init/on-start.sh
      -service=test-mongodb-replicaset
    State:          Running
      Started:      Thu, 21 Feb 2019 11:05:13 +0000
    Ready:          False
    Restart Count:  0
    Environment:
      POD_NAMESPACE:  test (v1:metadata.namespace)
      REPLICA_SET:    rs0
    Mounts:
      /data/configdb from configdir (rw)
      /data/db from datadir (rw)
      /init from init (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-f6p6g (ro)
      /work-dir from workdir (rw)
Containers:
  mongodb-replicaset:
    Container ID:  
    Image:         mongo:3.6
    Image ID:      
...
Conditions:
  Type              Status
  Initialized       False 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
...
Events:          <none>

Error response from daemon: No such container

Hi, when I was trying to run kubectl-debug against a running pod in agent mode, I received the following error:

kubectl-debug POD -n NAMESPACE
pulling image nicolaka/netshoot:latest...
latest: Pulling from nicolaka/netshoot
Digest: sha256:8b020dc72d8ef07663e44c449f1294fc47c81a10ef5303dc8c2d9635e8ca22b1
Status: Image is up to date for nicolaka/netshoot:latest
starting debug container...
error execute remote, Internal error occurred: error attaching to container: Error response from daemon: No such container: 89e0cbee76a885ac49a725d5c341980a7bf74268e5fe5c695200b41d524df73c
error: Internal error occurred: error attaching to container: Error response from daemon: No such container: 89e0cbee76a885ac49a725d5c341980a7bf74268e5fe5c695200b41d524df73c

But on the host I can see a running container with that same container ID "89e0cbee76a885ac49a725d5c341980a7bf74268e5fe5c695200b41d524df73c".

kubectl version: v1.15.1
k8s version:     v1.15.0
kubectl-debug --version
debug version v0.0.0-master+$Format:%h$

Can you please help how to solve it?

adapt to CRI

kubectl-debug only supports docker as the container runtime for now. We should use the CRI for container operations.
