aws-containers / kubectl-detector-for-docker-socket

A kubectl plugin that can detect if any of your workloads or manifest files are mounting the docker.sock volume.

License: Apache License 2.0

Languages: Go 92.42%, Makefile 7.58%

kubectl-detector-for-docker-socket's Introduction

Detector for Docker Socket (DDS)

A kubectl plugin to detect if active Kubernetes workloads are mounting the docker socket (docker.sock) volume.

a short video showing the plugin being used

Install

Install the plugin with

kubectl krew install dds

You can install the krew plugin manager by following its installation documentation.
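
To confirm the install worked (assuming krew is already set up as a kubectl plugin), you can list the krew-managed plugins and print the plugin's help:

# Check that krew installed the plugin
kubectl krew list | grep dds

# Verify kubectl can resolve and run it
kubectl dds --help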

How it works

dds looks at every pod in your Kubernetes cluster. If a pod is part of a workload (e.g. Deployment, StatefulSet), it inspects the workload type instead of the pod directly.

It then inspects all of the volumes in the containers and looks for any volume with a path matching *docker.sock (a rough approximation of this check is sketched after the list below).

Supported workload types:

  • Pods
  • ReplicaSets
  • Deployments
  • StatefulSets
  • DaemonSets
  • Jobs
  • CronJobs
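
The check itself is implemented in Go inside the plugin. As a rough approximation of what it looks for, the sketch below uses kubectl and jq (both assumed to be installed) to list hostPath volumes on plain pods whose path ends in docker.sock; unlike dds, it does not walk the owning workloads or scan manifest files.

# Rough approximation of the dds check: print namespace, pod, and hostPath
# for any pod volume whose hostPath ends in docker.sock (requires jq)
kubectl get pods --all-namespaces -o json \
  | jq -r '.items[] as $p
      | $p.spec.volumes[]?
      | select((.hostPath.path // "") | endswith("docker.sock"))
      | "\($p.metadata.namespace)\t\($p.metadata.name)\t\(.hostPath.path)"'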

Why do you need this?

If you're still not sure why you might need this plugin, click on the image below to see a short video explanation.

You can read the full FAQ about dockershim deprecation at https://k8s.io/dockershim

Run

You can run the plugin with no arguments and it will inspect all pods in all namespaces that the current Kubernetes user has access to.

kubectl dds

example output

NAMESPACE       TYPE            NAME                    STATUS
default         deployment      deploy-docker-volume    mounted
default         daemonset       ds-docker-volume        mounted
default         statefulset     ss-docker-volume        mounted
default         job             job-docker-volume       mounted
default         cron            cron-docker-volume      mounted
kube-system     pod             pod-docker-volume       mounted
test1           deployment      deploy-docker-volume    mounted

You can specify a namespace to limit the scope of what will be scanned.

kubectl dds --namespace kube-system

example output

NAMESPACE       TYPE    NAME                    STATUS
kube-system     pod     pod-docker-volume       mounted

You can run dds against a single manifest file or a folder of manifest files (scanned recursively). The repo includes a tests/manifests directory.

kubectl dds --filename tests

example output

FILE                                                    LINE    STATUS
tests/manifests/docker-volume.cronjob.yaml               22      mounted
tests/manifests/docker-volume.daemonset.yaml             24      mounted
tests/manifests/docker-volume.deploy.test1.yaml          32      mounted
tests/manifests/docker-volume.deploy.yaml                25      mounted
tests/manifests/docker-volume.job.yaml                   17      mounted
tests/manifests/docker-volume.pod.kube-system.yaml       14      mounted
tests/manifests/docker-volume.replicaset.yaml            25      mounted
tests/manifests/docker-volume.statefulset.yaml           26      mounted

Use the --verbose flag with a log level (1-10) to get more output

kubectl dds --verbose=4

example output

NAMESPACE       TYPE            NAME                    STATUS
default         deployment      deploy-docker-volume    mounted
default         daemonset       ds-docker-volume        mounted
default         statefulset     ss-docker-volume        mounted
default         job             job-docker-volume       mounted
default         cron            cron-docker-volume      mounted
kube-system     pod             pod-docker-volume       mounted
kube-system     daemonset       aws-node                not-mounted
kube-system     daemonset       ebs-csi-node            not-mounted
kube-system     daemonset       kube-proxy              not-mounted
test1           deployment      deploy-docker-volume    mounted

You can use dds as part of your CI pipeline to catch manifest files before they are deployed.

kubectl dds --exit-with-error -f YOUR_FILES

If the docker.sock volume is found in any of the files, the CLI exit code will be 1.
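
For example, a minimal CI step could gate on that exit code as sketched below (YOUR_FILES is the same placeholder as above; many CI systems will also fail the job on the non-zero exit code by itself):

# Block the pipeline if dds finds a docker.sock mount in the manifests
if ! kubectl dds --exit-with-error --filename YOUR_FILES; then
  echo "docker.sock mount detected, refusing to deploy"
  exit 1
fi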

Build

To build the binary, you can use go build -o kubectl-dds main.go, or run make dds to build with goreleaser.

Install the kubectl-dds binary somewhere in your PATH to use it with kubectl, or run it by itself without kubectl. Either way, the binary uses the same kubectl authentication (e.g. $HOME/.kube/config or the KUBECONFIG environment variable).
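
Putting those steps together, a minimal build-and-install sequence could look like the sketch below (/usr/local/bin is just one example of a directory on your PATH):

# Build the plugin binary from the repo root
go build -o kubectl-dds main.go

# Put the binary on your PATH; kubectl discovers executables named kubectl-<name> as plugins
sudo install -m 0755 kubectl-dds /usr/local/bin/

# Confirm kubectl can find and run the plugin
kubectl dds --help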

Testing

There are different test workloads in the /tests folder. You can deploy these workloads to verify the plugin is working as intended.

kubectl apply -f tests/
daemonset.apps/ds-docker-volume created
namespace/test1 created
deployment.apps/deploy-docker-volume created
deployment.apps/deploy-docker-volume created
job.batch/job-docker-volume created
pod/pod-docker-volume created
statefulset.apps/ss-docker-volume created
pod/empty-volume created
deployment.apps/no-volume created

and then run

kubectl dds
NAMESPACE       TYPE            NAME                    STATUS
default         deployment      deploy-docker-volume    mounted
default         daemonset       ds-docker-volume        mounted
default         statefulset     ss-docker-volume        mounted
default         job             job-docker-volume       mounted
default         cron            cron-docker-volume      mounted
kube-system     pod             pod-docker-volume       mounted
test1           deployment      deploy-docker-volume    mounted
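
When you are done testing, the same manifests can be used to clean up (this also removes the test1 namespace created above):

kubectl delete -f tests/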

kubectl-detector-for-docker-socket's People

Contributors

bourne-id-work, cwimmer, konoui, mrfishfinger, rajatjindal, rothgar, skymoore


kubectl-detector-for-docker-socket's Issues

Document alternatives to mounting docker.sock

A couple of our Java/Kotlin-based deployments and jobs execute containerized C++ or Python tools (to avoid dependency hell). Using docker-in-docker has allowed our Kubernetes deployments to behave the same way as our bare-metal setups (i.e. run from IntelliJ or docker-compose). In other words, it doesn't make sense for us to spin up yet another job with a shared volume when we can use dind.

I see that this will no longer be supported in 1.24, and dds identifies the expected deployment; however, I can't find documentation anywhere on what to do about this. Do I just do the same thing with containerd.sock and nerdctl? Any documented alternative would be appreciated.

error: the server could not find the requested resource

Running kubectl dds on my cluster returns the above error. Verbose mode doesn't tell me which resource is missing.

$ kubectl dds -v
error: the server could not find the requested resource

I have even deleted the pods from failing jobs, but I still get the error.

Running on kube v1.20. Version info below

$ kubectl version 
Client Version: version.Info{Major:"1", Minor:"20+", GitVersion:"v1.20.4-eks-6b7464", GitCommit:"6b746440c04cb81db4426842b4ae65c3f7035e53", GitTreeState:"clean", BuildDate:"2021-03-19T19:35:50Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"20+", GitVersion:"v1.20.15-eks-6d3986b", GitCommit:"d4be14f563712c4e1964fe8a4171ca353b6e7e1a", GitTreeState:"clean", BuildDate:"2022-07-20T22:04:24Z", GoVersion:"go1.15.15", Compiler:"gc", Platform:"linux/amd64"}

panic occurs when directly using ReplicaSet

I found that a panic occurs when using a ReplicaSet directly.
DDS does not support ReplicaSets, but I want to avoid the panic.

$ kubectl dds

NAMESPACE       TYPE    NAME    STATUS
panic: runtime error: index out of range [0] with length 0

goroutine 1 [running]:
main.printResources({{{0x0, 0x0}, {0x0, 0x0}}, {{0x140004756c0, 0x7}, {0x0, 0x0}, {0x0, 0x0}, ...}, ...}, ...)
        /Users/tanaka/dev/src/github.com/konoui/kubectl-detector-for-docker-socket/main.go:200 +0x1478
main.runCluster({0x101f4d9aa, 0x3}, 0x1400012c008?, 0x2)
        /Users/tanaka/dev/src/github.com/konoui/kubectl-detector-for-docker-socket/main.go:142 +0x4ac
main.main()
        /Users/tanaka/dev/src/github.com/konoui/kubectl-detector-for-docker-socket/main.go:62 +0x270
---
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: replicaset-docker-volume
  labels:
    app: rs
spec:
  replicas: 3
  selector:
    matchLabels:
      app: rs
  template:
    metadata:
      labels:
        app: rs
    spec:
      containers:
      - name: pause
        image: public.ecr.aws/eks-distro/kubernetes/pause:v1.21.5-eks-1-21-8
        ports:
        - containerPort: 80
        volumeMounts:
        - name: dockersock
          mountPath: "/var/run/docker.sock"
      volumes:
      - name: dockersock
        hostPath:
          path: /var/run/docker.sock

No output if ReplicaSet is not owned by a Deployment

It is possible that a ReplicaSet is not owned by a Deployment but instead by something else, for example Argo Rollouts.

In the event DDS scans a pod and follows it to its ReplicaSet, but the ReplicaSet is owned by a Rollout, it will log

> kubectl dds -n rbourne
error: deployments.apps "rollout-socket" not found%

However, the final table is not output. This means a large cluster may have been scanned and Docker mounts detected, yet nothing is printed, which can mislead the end user into concluding that no mounts exist in the cluster.

Unlisted docker mount if owner is not known

During a scan of a cluster, there may be custom resources which own a pod beyond the types stated in the readme.md file - for example runners from summerwind.

Whilst an error about these owners is presented, the pod itself is not scanned. As such, any pod mounting the Docker host socket with an unknown owner will not appear in the final table.

I propose that this tool scan the pod for the mount when the owner is unknown, instead of ignoring it.

Replication:

  • Install an addon which controls pods, for example SummerWind Action Runners
  • Mount the Docker host socket with a runner
  • Run the tool

Output:

could not find resource manager for type Runner for pod my-docker-9k99f-12345
NAMESPACE	TYPE	NAME	STATUS

Outdated Krew plugin

The default plugin version on krew is 0.1.0, which is missing the check for dockershim.sock.

Error scanning a namespace's workloads if there are batch jobs running in it

I am trying to scan a cluster that has different kinds of workloads (deployments, pods, statefulsets, batch jobs, etc.). However, when the scan finishes, I always get the same error: "jobs.batch not found. The following table may be incomplete due to errors detected during the run." The table only returns a single row, for the kube-system namespace, and not all the other workloads, which amount to more than 300. I believe this happens because some jobs are running when the scan starts but finish during the scan (as they are meant to do); the plugin interprets this as an issue and throws an error. Is there any workaround for this problem?

Input = kubectl dds

Output =
error: [jobs.batch "job1" not found, jobs.batch "job2" not found, jobs.batch "job3" not found, jobs.batch "job4" not found]
Warning: The following table may be incomplete due to errors detected during the run
NAMESPACE       TYPE            NAME            STATUS
kube-system     daemonset       aws-node        mounted

--exit-with-error flag does not work

Thank you for the great tool!
I found that the --exit-with-error flag does not work under some conditions.

Case 1

Flaky results occur when mounted and not-mounted resources of the same workload type exist.

$ kubectl delete -f test/manifests
$ kubectl apply -f test/manifests/docker-volume.deploy.yaml
$ kubectl apply -f test/manifests/no-volume.deploy.yaml

$ kubectl dds --exit-with-error -n default
NAMESPACE       TYPE            NAME                    STATUS
default         deployment      deploy-docker-volume    mounted

$ echo $?
0

$ kubectl dds --exit-with-error
(snip)

$ echo $?
1

$ kubectl dds --exit-with-error
(snip)

$ echo $?
1

$ kubectl dds --exit-with-error
(snip)

$ echo $?
0

Case 2

DDS exits with 0 when the only resource found mounting a docker volume is a daemonset.

$ kubectl delete -f test/manifests
$ kubectl dds --exit-with-error -n kube-system
NAMESPACE       TYPE            NAME            STATUS
kube-system     daemonset       aws-node        mounted

$ echo $?
0

Recommend Projects

  • React photo React

    A declarative, efficient, and flexible JavaScript library for building user interfaces.

  • Vue.js photo Vue.js

    ๐Ÿ–– Vue.js is a progressive, incrementally-adoptable JavaScript framework for building UI on the web.

  • Typescript photo Typescript

    TypeScript is a superset of JavaScript that compiles to clean JavaScript output.

  • TensorFlow photo TensorFlow

    An Open Source Machine Learning Framework for Everyone

  • Django photo Django

    The Web framework for perfectionists with deadlines.

  • D3 photo D3

    Bring data to life with SVG, Canvas and HTML. ๐Ÿ“Š๐Ÿ“ˆ๐ŸŽ‰

Recommend Topics

  • javascript

    JavaScript (JS) is a lightweight interpreted programming language with first-class functions.

  • web

    Some thing interesting about web. New door for the world.

  • server

    A server is a program made to process requests and deliver data to clients.

  • Machine learning

    Machine learning is a way of modeling and interpreting data that allows a piece of software to respond intelligently.

  • Game

    Some thing interesting about game, make everyone happy.

Recommend Org

  • Facebook photo Facebook

    We are working to build community through open source technology. NB: members must have two-factor auth.

  • Microsoft photo Microsoft

    Open source projects and samples from Microsoft.

  • Google photo Google

    Google โค๏ธ Open Source for everyone.

  • D3 photo D3

    Data-Driven Documents codes.