aquasecurity / kube-hunter

Hunt for security weaknesses in Kubernetes clusters

License: Apache License 2.0


kube-hunter's Introduction

Notice

kube-hunter is not under active development anymore. If you're interested in scanning Kubernetes clusters for known vulnerabilities, we recommend using Trivy, specifically its Kubernetes misconfiguration scanning and KBOM vulnerability scanning. Learn more in the Trivy Docs.


kube-hunter hunts for security weaknesses in Kubernetes clusters. The tool was developed to increase awareness and visibility for security issues in Kubernetes environments. You should NOT run kube-hunter on a Kubernetes cluster that you don't own!

Run kube-hunter: kube-hunter is available as a container (aquasec/kube-hunter), and we also offer a website at kube-hunter.aquasec.com where you can register online to receive a token that allows you to see and share results online. You can also run the Python code yourself as described below.

Explore vulnerabilities: The kube-hunter knowledge base includes articles about discoverable vulnerabilities and issues. When kube-hunter reports an issue, it will show its VID (Vulnerability ID) so you can look it up in the KB at https://aquasecurity.github.io/kube-hunter/
If you're interested in kube-hunter's integration with the Kubernetes ATT&CK Matrix, continue reading below.

kube-hunter demo video

Kubernetes ATT&CK Matrix

kube-hunter now supports the new format of the Kubernetes ATT&CK matrix. While kube-hunter's vulnerabilities are a collection of creative techniques designed to mimic an attacker inside (or outside) the cluster, MITRE's ATT&CK defines a more general, standardised set of technique categories.

You can think of kube-hunter vulnerabilities as the small steps an attacker takes while following the track of a more general technique they are aiming for. Most of kube-hunter's hunters and vulnerabilities fall closely under those techniques, which is why we moved to follow the Matrix standard.

Some kube-hunter vulnerabilities that we could not map to a MITRE technique are prefixed with the keyword General.

Hunting

Where should I run kube-hunter?

There are three different ways to run kube-hunter, each providing a different approach to detecting weaknesses in your cluster:

Run kube-hunter on any machine (including your laptop), select Remote scanning, and give the IP address or domain name of your Kubernetes cluster. This will give you an attacker's-eye view of your Kubernetes setup.

You can run kube-hunter directly on a machine in the cluster, and select the option to probe all the local network interfaces.

You can also run kube-hunter in a pod within the cluster, using the --pod flag. This indicates how exposed your cluster would be if one of your application pods were compromised (through a software vulnerability, for example).

Scanning options

First, check these prerequisites.

By default, kube-hunter will open an interactive session, in which you will be able to select one of the following scan options. You can also specify the scan option manually from the command line. These are your options:

  1. Remote scanning

To specify remote machines for hunting, select option 1 or use the --remote option. Example: kube-hunter --remote some.node.com

  2. Interface scanning

To specify interface scanning, you can use the --interface option (this will scan all of the machine's network interfaces). Example: kube-hunter --interface

  3. Network scanning

To specify a specific CIDR to scan, use the --cidr option. Example: kube-hunter --cidr 192.168.0.0/24

  4. Kubernetes node auto-discovery

Set --k8s-auto-discover-nodes flag to query Kubernetes for all nodes in the cluster, and then attempt to scan them all. By default, it will use in-cluster config to connect to the Kubernetes API. If you'd like to use an explicit kubeconfig file, set --kubeconfig /location/of/kubeconfig/file.
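
Example: kube-hunter --k8s-auto-discover-nodes --kubeconfig /location/of/kubeconfig/file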

Note that node auto-discovery is always performed when using --pod mode.

Authentication

To mimic an attacker in its early stages, kube-hunter requires no authentication for the hunt.

  • Impersonate - You can provide kube-hunter with a specific service account token to use when hunting by manually passing the JWT Bearer token of the service-account secret with the --service-account-token flag.

    Example:

    $ kube-hunter --active --service-account-token eyJhbGciOiJSUzI1Ni...
  • When running with the --pod flag, kube-hunter uses the service account token mounted inside the pod to authenticate to services it finds during the hunt.

    • If specified, the --service-account-token flag takes priority when running as a pod
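
For reference, Kubernetes mounts the pod's service account token at a well-known path. A minimal sketch of reading it yourself, e.g. to pass it via --service-account-token (illustrative only, not kube-hunter's internal code):

    # Illustrative: read the token Kubernetes mounts into a pod
    TOKEN_PATH = "/var/run/secrets/kubernetes.io/serviceaccount/token"

    def read_pod_token(path=TOKEN_PATH):
        try:
            with open(path) as f:
                return f.read().strip()
        except OSError:
            return None  # not running in a pod, or token mounting is disabled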

Active Hunting

Active hunting is an option in which kube-hunter will exploit vulnerabilities it finds in order to explore for further vulnerabilities. The main difference between normal and active hunting is that a normal hunt will never change the state of the cluster, while active hunting can potentially perform state-changing operations on the cluster, which could be harmful.

By default, kube-hunter does not do active hunting. To actively hunt a cluster, use the --active flag. Example: kube-hunter --remote some.domain.com --active

List of tests

You can see the list of tests with the --list option. Example: kube-hunter --list

To see active hunting tests as well as passive: kube-hunter --list --active

Nodes Mapping

To see only a mapping of your nodes' network, run with the --mapping option. Example: kube-hunter --cidr 192.168.0.0/24 --mapping. This will output all the Kubernetes nodes kube-hunter has found.

Output

To control logging, you can specify a log level using the --log option. Example: kube-hunter --active --log WARNING. Available log levels are:

  • DEBUG
  • INFO (default)
  • WARNING

Dispatching

By default, the report will be dispatched to stdout, but you can specify a different method using the --dispatch option. Example: kube-hunter --report json --dispatch http. Available dispatch methods are:

  • stdout (default)
  • http (to configure, set the following environment variables:)
    • KUBEHUNTER_HTTP_DISPATCH_URL (defaults to: https://localhost)
    • KUBEHUNTER_HTTP_DISPATCH_METHOD (defaults to: POST)
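
For example, to POST the JSON report to a collector endpoint (the URL below is illustrative):

    export KUBEHUNTER_HTTP_DISPATCH_URL=https://collector.example.com/reports
    export KUBEHUNTER_HTTP_DISPATCH_METHOD=POST
    kube-hunter --report json --dispatch http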

Advanced Usage

Azure Quick Scanning

When running as a pod in an Azure or AWS environment, kube-hunter will fetch subnets from the Instance Metadata Service. Naturally, this can make the discovery process take longer. To hard-limit subnet scanning to a /24 CIDR, use the --quick option.
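
Example: kube-hunter --pod --quick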

Custom Hunting

Custom hunting gives advanced users control over which hunters get registered at the start of a hunt. If you know what you are doing, this can help you adjust kube-hunter's hunting and discovery process to your needs.

Example:

kube-hunter --custom <HunterName1> <HunterName2>

Enabling custom hunting removes all hunters from the hunting process except the given whitelisted hunters.

The --custom flag reads a list of hunter class names. To view all of kube-hunter's class names, combine the --raw-hunter-names flag with the --list flag.

Example:

kube-hunter --active --list --raw-hunter-names
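
For orientation, hunters follow a subscribe-and-execute pattern; the class names listed by --raw-hunter-names refer to classes of roughly this shape. A rough sketch based on the pattern used in this repo (exact module paths may differ between versions):

    from kube_hunter.core.events import handler
    from kube_hunter.core.events.types import OpenPortEvent
    from kube_hunter.core.types import Hunter

    # Registers this hunter to run whenever a matching OpenPortEvent is published
    @handler.subscribe(OpenPortEvent, predicate=lambda event: event.port == 6443)
    class ExampleHunter(Hunter):
        """Example Hunter
        Short description that appears in --list output"""

        def __init__(self, event):
            self.event = event

        def execute(self):
            # hunting logic goes here; publish follow-up events or vulnerabilities
            pass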

Notice: Due to kube-hunter's architectural design, the following core hunters/classes will always register (even when using custom hunting):

  • HostDiscovery
    • Generates IP addresses for the hunt from the given configuration
    • Automatically discovers subnets using cloud metadata APIs
  • FromPodHostDiscovery
    • Auto-discovers attack-surface IP addresses for the hunt using pod-based environment techniques
    • Automatically discovers subnets using cloud metadata APIs
  • PortDiscovery
    • Scans the given IP addresses for known Kubernetes service ports
  • Collector
    • Collects discovered vulnerabilities and open services for the final report
  • StartedInfo
    • Prints the start message
  • SendFullReport
    • Dispatches the report based on the given configuration

Deployment

There are three methods for deploying kube-hunter:

On Machine

You can run kube-hunter directly on your machine.

Prerequisites

You will need the following installed:

  • Python 3.x
  • pip
Install with pip

Install:

pip install kube-hunter

Run:

kube-hunter
Run from source

Clone the repository:

git clone https://github.com/aquasecurity/kube-hunter.git

Install module dependencies. (You may prefer to do this within a virtual environment.)
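
For example, one common approach using the stdlib venv module:

    python3 -m venv venv
    . venv/bin/activate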

cd ./kube-hunter
pip install -r requirements.txt

Run:

python3 kube_hunter

If you want to use pyinstaller/py2exe, you need to run the install_imports.py script first.

Container

Aqua Security maintains a containerized version of kube-hunter at aquasec/kube-hunter:aqua. This container includes this source code, plus an additional (closed-source) reporting plugin for uploading results into a report that can be viewed at kube-hunter.aquasec.com. Please note that running the aquasec/kube-hunter container and uploading report data are subject to additional terms and conditions.

The Dockerfile in this repository allows you to build a containerized version without the reporting plugin.

If you run the kube-hunter container with the host network, it will be able to probe all the interfaces on the host:

docker run -it --rm --network host aquasec/kube-hunter

Note for Docker for Mac/Windows: Be aware that the "host" for Docker for Mac or Windows is the VM in which Docker runs containers. Therefore, specifying --network host gives kube-hunter access to the network interfaces of that VM, rather than those of your machine. By default, kube-hunter runs in interactive mode. You can also specify the scanning option with the parameters described above, e.g.

docker run --rm aquasec/kube-hunter --cidr 192.168.0.0/24

Pod

This option lets you discover what running a malicious container could do or discover on your cluster. This gives a perspective on what an attacker could do if they were able to compromise a pod, perhaps through a software vulnerability. This may reveal significantly more vulnerabilities.

The example job.yaml file defines a Job that will run kube-hunter in a pod, using default Kubernetes pod access settings. (You may wish to modify this definition, for example to run as a non-root user, or to run in a different namespace.)

  • Run the job with kubectl create -f ./job.yaml
  • Find the pod name with kubectl describe job kube-hunter
  • View the test results with kubectl logs <pod name>
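
A minimal sketch of what such a Job manifest might contain (the job.yaml in this repository is authoritative; the image and names below follow this README):

    apiVersion: batch/v1
    kind: Job
    metadata:
      name: kube-hunter
    spec:
      template:
        spec:
          containers:
          - name: kube-hunter
            image: aquasec/kube-hunter
            args: ["--pod"]
          restartPolicy: Never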

Contribution

To read the contribution guidelines, click here.

License

This repository is available under the Apache License 2.0.


kube-hunter's Issues

More logs to show test progress

At the moment it's hard to tell which hunters were run. It would be good if there were a log for each hunter that's triggered to indicate what it's attempting, and whether it succeeded or failed.

ignore hook exceptions in handler

Hi, I'm a kube-hunter newbie, and thank you for this awesome project.

When I execute kube-hunter.py, some hunting is skipped without an error message.

    # In handler.py (line 82)
    def worker(self):
        while self.running:
            queue_lock.acquire()
            hook = self.get()
            queue_lock.release()

            try:
                hook.execute()
            except Exception as ex:
                logging.debug(ex)
            self.task_done()
        logging.debug("closing thread...")

I think a skipped hunter is an important event. Can you give more detailed info when a hook throws an exception? In my case, get_api_server_version_end_point (in cvehunter.py) threw an exception and cvehunter.execute was skipped.
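
One possible tweak (a sketch against the loop quoted above, not an official fix) is to log the full traceback at a more visible level so a skipped hunter is no longer silent:

    try:
        hook.execute()
    except Exception:
        # logging.exception logs at ERROR level and appends the traceback
        logging.exception("hunter %s failed", hook.__class__.__name__)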

Evidence field truncated in the Vulnerability table

Hi Team,

The report generated in the logs has a vulnerability table. The Evidence field in the table is truncated, and so we are not able to find the root cause of that vulnerability.

Below is the image and report file of the issue:
kube-hunter-uklab-18-report.log


Vulnerabilities
+--------------------+----------------------+----------------------+----------------------+----------------------+
| LOCATION | CATEGORY | VULNERABILITY | DESCRIPTION | EVIDENCE |
+--------------------+----------------------+----------------------+----------------------+----------------------+
| 10.233.64.1:6443 | Remote Code | Access to server API | Accessing the | {"kind":"APIVersions |
| | Execution | | server API within a | ","versions":["v1"], |
| | | | compromised pod | ... |
| | | | would help an | |
| | | | attacker gain full | |
| | | | control over the | |
| | | | cluster | |
+--------------------+----------------------+----------------------+----------------------+----------------------+
| 10.233.64.1:6443 | Information | Listing pods list | Accessing the pods | [{'namespace': |
| | Disclosure | under default | list under default | 'default', 'name': |
| | | namespace | namespace might give | 'glust... |
| | | | an attacker valuable | |
| | | | information to | |
| | | | harm the cluster | |
+--------------------+----------------------+----------------------+----------------------+----------------------+
| 10.233.64.1:6443 | Information | Listing all roles | Accessing all of | ['prometheus-k8s', |
| | Disclosure | | the roles might give | 'kubeadm:bootstrap- |
| | | | an attacker valuable | si... |
| | | | information | |
+--------------------+----------------------+----------------------+----------------------+----------------------+
| 10.233.64.1:6443 | Information | Listing all | Accessing all of | ['a', 'abc', 'alpha- |
| | Disclosure | namespaces | the namespaces might | customer', 'b', |
| | | | give an attacker | 'c',... |
| | | | valuable information | |
+--------------------+----------------------+----------------------+----------------------+----------------------+


Kindly help us with this issue.
Thanks in advance

Final hook is hanging

Hi,

Thanks for the tool!

When running aquasec/kube-hunter with --pod and --log=info inside of a GKE cluster I get this output:

...
~ Started
~ Discovering Open Kubernetes Services...
Event <class 'src.modules.discovery.hosts.HostScanEvent'> got published with <src.modules.discovery.hosts.HostScanEvent object at 0x7fc91436ced0>
Starting new HTTP connection (1): 169.254.169.254:80
http://169.254.169.254:80 "GET /metadata/instance?api-version=2017-08-01 HTTP/1.1" 403 56
Starting new HTTP connection (1): canhazip.com:80
http://canhazip.com:80 "GET / HTTP/1.1" 200 14
Starting new HTTP connection (1): www.azurespeed.com:80
http://www.azurespeed.com:80 "GET /api/region?ipOrUrl=x.x.x.x%0A HTTP/1.1" 200 None
Cannot read wireshark manuf database

It eventually stops with Kube Hunter couldn't find any clusters (\o/ for us), but I am wondering what would happen if the manuf db were available for scapy. Maybe something to look into.

Question about access_api_server test

Is [0] a vulnerability? If you have anonymous access enabled on the API you can see [2], as well as version and healthz. Only when you try to access namespaced objects as anonymous and have access does it make sense to mark it as vulnerable.

What others think?

This design document tries to have stricter defaults [1].

[0] https://github.com/aquasecurity/kube-hunter/blob/master/src/modules/hunting/apiserver.py#L228
[1] kubernetes/enhancements#720
[2]

{
  "kind": "APIVersions",
  "versions": [
    "v1"
  ],
  "serverAddressByClientCIDRs": [
    {
      "clientCIDR": "0.0.0.0/0",
      "serverAddress": "IP:6443"
    }
  ]
}

Provide metrics endpoint

Provide an endpoint which Prometheus can scrape to allow real-time alerting of potential vulnerabilities.
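
A rough sketch of what such an endpoint could look like using the prometheus_client library (the metric name, labels, and wiring into kube-hunter's collector are hypothetical):

    from prometheus_client import Gauge, start_http_server

    # Hypothetical metric: count of findings, labelled by severity
    VULNS = Gauge("kube_hunter_vulnerabilities",
                  "Vulnerabilities found by kube-hunter", ["severity"])

    start_http_server(9090)  # serves /metrics for Prometheus to scrape
    VULNS.labels(severity="high").set(3)  # placeholder value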

kube-hunter endless loop

When running against an OpenShift 3.11 install, it hangs.
git commit id: 8d367d1
Python: 2.7.15/MacOS

./kube-hunter.py --remote masters.sanitized.tld infra.sanitized.tld node1.sanitized.tld --log debug
<class 'src.modules.report.collector.Collector'> subscribed to <class 'src.core.events.types.common.Vulnerability'>
<class 'src.modules.report.collector.Collector'> subscribed to <class 'src.core.events.types.common.Service'>
<class 'src.modules.report.collector.SendFullReport'> subscribed to <class 'src.core.events.types.common.HuntFinished'>
<class 'src.modules.report.collector.StartedInfo'> subscribed to <class 'src.core.events.types.common.HuntStarted'>
<class 'src.modules.discovery.apiserver.ApiServerDiscovery'> subscribed to <class 'src.core.events.types.common.OpenPortEvent'>
<class 'src.modules.discovery.kubelet.KubeletDiscovery'> subscribed to <class 'src.core.events.types.common.OpenPortEvent'>
<class 'src.modules.discovery.proxy.KubeProxy'> subscribed to <class 'src.core.events.types.common.OpenPortEvent'>
<class 'src.modules.discovery.ports.PortDiscovery'> subscribed to <class 'src.core.events.types.common.NewHostEvent'>
<class 'src.modules.discovery.hosts.FromPodHostDiscovery'> subscribed to <class 'src.modules.discovery.hosts.RunningAsPodEvent'>
<class 'src.modules.discovery.hosts.HostDiscovery'> subscribed to <class 'src.modules.discovery.hosts.HostScanEvent'>
<class 'src.modules.discovery.dashboard.KubeDashboard'> subscribed to <class 'src.core.events.types.common.OpenPortEvent'>
<class 'src.modules.discovery.etcd.EtcdRemoteAccess'> subscribed to <class 'src.core.events.types.common.OpenPortEvent'>
<class 'src.modules.hunting.apiserver.AccessApiServerViaServiceAccountToken'> subscribed to <class 'src.core.events.types.common.OpenPortEvent'>
<class 'src.modules.hunting.kubelet.ReadOnlyKubeletPortHunter'> subscribed to <class 'src.modules.discovery.kubelet.ReadOnlyKubeletEvent'>
<class 'src.modules.hunting.kubelet.SecureKubeletPortHunter'> subscribed to <class 'src.modules.discovery.kubelet.SecureKubeletEvent'>
<class 'src.modules.hunting.proxy.KubeProxy'> subscribed to <class 'src.modules.discovery.proxy.KubeProxyEvent'>
<class 'src.modules.hunting.certificates.CertificateDiscovery'> subscribed to <class 'src.core.events.types.common.Service'>
<class 'src.modules.hunting.CVE_2018_1002105.IsVulnerableToCVEAttack'> subscribed to <class 'src.core.events.types.common.OpenPortEvent'>
<class 'src.modules.hunting.dashboard.KubeDashboard'> subscribed to <class 'src.modules.discovery.dashboard.KubeDashboardEvent'>
<class 'src.modules.hunting.etcd.EtcdRemoteAccess'> subscribed to <class 'src.core.events.types.common.OpenPortEvent'>
<class 'src.modules.hunting.secrets.AccessSecrets'> subscribed to <class 'src.modules.discovery.hosts.RunningAsPodEvent'>
<class 'src.modules.hunting.aks.AzureSpnHunter'> subscribed to <class 'src.modules.hunting.kubelet.ExposedRunHandler'>
Event <class 'src.core.events.types.common.HuntStarted'> got published with <src.core.events.types.common.HuntStarted object at 0x107c1c350>
Event <class 'src.modules.discovery.hosts.HostScanEvent'> got published with <src.modules.discovery.hosts.HostScanEvent object at 0x107c68490>
~ Started
~ Discovering Open Kubernetes Services...
Checking whether the cluster is deployed on azure's cloud
Starting new HTTP connection (1): 127.0.0.1
http://127.0.0.1:3128 "GET http://www.azurespeed.com/api/region?ipOrUrl=masters.sanitized.tld HTTP/1.1" 404 0
Event <class 'src.core.events.types.common.NewHostEvent'> got published with masters.sanitized.tld
Checking whether the cluster is deployed on azure's cloud
Starting new HTTP connection (1): 127.0.0.1
host masters.sanitized.tld try ports: [8001, 10250, 10255, 30000, 443, 6443, 2379]
Reachable port found: 10250
Event <class 'src.core.events.types.common.OpenPortEvent'> got published with 10250
Attempting to get kubelet secure access
Attempting to get pod info from kubelet
Starting new HTTPS connection (1): masters.sanitized.tld
Reachable port found: 443
Event <class 'src.core.events.types.common.OpenPortEvent'> got published with 443
Attempting to discover an Api server
masters.sanitized.tld
Passive Hunter is attempting to access pod's service account token
masters.sanitized.tld
Passive Hunter is attempting to access pod's service account token
masters.sanitized.tld
Passive Hunter is attempting to access the API server using the pod's service account token
masters.sanitized.tld
Passive Hunter is attempting to access the API server /version end point using the pod's service account token
Starting new HTTPS connection (1): masters.sanitized.tld
Starting new HTTPS connection (1): masters.sanitized.tld
Starting new HTTPS connection (1): masters.sanitized.tld
https://masters.sanitized.tld:10250 "GET /pods HTTP/1.1" 403 78
Event <class 'src.modules.discovery.kubelet.SecureKubeletEvent'> got published with <src.modules.discovery.kubelet.SecureKubeletEvent object at 0x107c7aad0>
Event <class 'src.modules.hunting.kubelet.AnonymousAuthEnabled'> got published with <src.modules.hunting.kubelet.AnonymousAuthEnabled object at 0x107c7aa90>
|
| Kubelet API:
|   type: open service
|   service: Kubelet API
|_  host: masters.sanitized.tld:10250
Passive hunter is attempting to get server certificate
|
| Anonymous Authentication:
|   type: vulnerability
|   host: masters.sanitized.tld:10250
|   description:
|     The kubelet is misconfigured, potentially
|     allowing secure access to all requests on the
|_    kubelet, without the need to authenticate
Starting new HTTPS connection (1): masters.sanitized.tld
Reachable port found: 2379
Event <class 'src.core.events.types.common.OpenPortEvent'> got published with 2379
Event <class 'src.modules.discovery.etcd.EtcdAccessEvent'> got published with <src.modules.discovery.etcd.EtcdAccessEvent object at 0x107c7a550>
masters.sanitized.tld Passive hunter is attempting to access etcd insecurely
|
| Etcd:
|   type: open service
|   service: Etcd
|_  host: masters.sanitized.tld:2379
Passive hunter is attempting to get server certificate
Starting new HTTP connection (1): 127.0.0.1
https://masters.sanitized.tld:443 "GET / HTTP/1.1" 200 None
Event <class 'src.modules.discovery.apiserver.ApiServer'> got published with <src.modules.discovery.apiserver.ApiServer object at 0x107cb9710>
|
| API Server:
|   type: open service
|   service: API Server
|_  host: masters.sanitized.tld:443
Passive hunter is attempting to get server certificate
https://masters.sanitized.tld:443 "GET /version HTTP/1.1" 200 239
name 'd4cacc0' is not defined
https://masters.sanitized.tld:10250 "GET /pods HTTP/1.1" 401 12
Starting new HTTPS connection (1): masters.sanitized.tld
http://127.0.0.1:3128 "GET http://www.azurespeed.com/api/region?ipOrUrl=infra.sanitized.tld HTTP/1.1" 404 0
Event <class 'src.core.events.types.common.NewHostEvent'> got published with infra.sanitized.tld
Checking whether the cluster is deployed on azure's cloud
Starting new HTTP connection (1): 127.0.0.1
host infra.sanitized.tld try ports: [8001, 10250, 10255, 30000, 443, 6443, 2379]
https://masters.sanitized.tld:443 "GET /api HTTP/1.1" 200 147
Event <class 'src.modules.hunting.apiserver.ServerApiAccess'> got published with <src.modules.hunting.apiserver.ServerApiAccess object at 0x107cb9310>
Reachable port found: 10250
Event <class 'src.core.events.types.common.OpenPortEvent'> got published with 10250
|
| Access to server API:
|   type: vulnerability
|   host: masters.sanitized.tld:443
|   description:
|     Accessing the server API within a
|     compromised pod would help an attacker gain full
|_    control over the cluster
Attempting to get kubelet secure access
Attempting to get pod info from kubelet
Starting new HTTPS connection (1): masters.sanitized.tld
Starting new HTTPS connection (1): infra.sanitized.tld
https://masters.sanitized.tld:10250 "GET /healthz HTTP/1.1" 403 78
Reachable port found: 443
Event <class 'src.core.events.types.common.OpenPortEvent'> got published with 443
infra.sanitized.tld
Passive Hunter is attempting to access pod's service account token
Attempting to discover an Api server
infra.sanitized.tld
infra.sanitized.tld
Passive Hunter is attempting to access pod's service account token
Starting new HTTPS connection (1): infra.sanitized.tld
Passive Hunter is attempting to access the API server using the pod's service account token
infra.sanitized.tld
Passive Hunter is attempting to access the API server /version end point using the pod's service account token
Starting new HTTPS connection (1): infra.sanitized.tld
Starting new HTTPS connection (1): infra.sanitized.tld
https://infra.sanitized.tld:10250 "GET /pods HTTP/1.1" 403 78
Event <class 'src.modules.discovery.kubelet.SecureKubeletEvent'> got published with <src.modules.discovery.kubelet.SecureKubeletEvent object at 0x107cb9f90>
Event <class 'src.modules.hunting.kubelet.AnonymousAuthEnabled'> got published with <src.modules.hunting.kubelet.AnonymousAuthEnabled object at 0x107cb9550>
Passive hunter is attempting to get server certificate
|
| Kubelet API:
|   type: open service
|   service: Kubelet API
|_  host: infra.sanitized.tld:10250
|
| Anonymous Authentication:
|   type: vulnerability
|   host: infra.sanitized.tld:10250
|   description:
|     The kubelet is misconfigured, potentially
|     allowing secure access to all requests on the
|_    kubelet, without the need to authenticate
Starting new HTTPS connection (1): infra.sanitized.tld
https://masters.sanitized.tld:443 "GET /api/v1/namespaces HTTP/1.1" 403 264
Starting new HTTPS connection (1): masters.sanitized.tld
https://infra.sanitized.tld:443 "GET / HTTP/1.1" 503 None
Event <class 'src.modules.discovery.apiserver.ApiServer'> got published with <src.modules.discovery.apiserver.ApiServer object at 0x107cc9e10>
|
| API Server:
|   type: open service
|   service: API Server
|_  host: infra.sanitized.tld:443
Passive hunter is attempting to get server certificate
https://infra.sanitized.tld:443 "GET /version HTTP/1.1" 503 None
http://127.0.0.1:3128 "GET http://www.azurespeed.com/api/region?ipOrUrl=node1.sanitized.tld HTTP/1.1" 404 0

Event <class 'src.core.events.types.common.NewHostEvent'> got published with node1.sanitized.tld
host node1.sanitized.tld try ports: [8001, 10250, 10255, 30000, 443, 6443, 2379]
https://infra.sanitized.tld:443 "GET /api HTTP/1.1" 503 None
https://infra.sanitized.tld:10250 "GET /pods HTTP/1.1" 401 12
Starting new HTTPS connection (1): infra.sanitized.tld
Starting new HTTPS connection (1): infra.sanitized.tld
Reachable port found: 10250
Event <class 'src.core.events.types.common.OpenPortEvent'> got published with 10250
Attempting to get kubelet secure access
Attempting to get pod info from kubelet
Starting new HTTPS connection (1): node1.sanitized.tld
Incorrect padding
https://masters.sanitized.tld:443 "GET /api/v1/None/pods HTTP/1.1" 403 268
Starting new HTTPS connection (1): masters.sanitized.tld
https://infra.sanitized.tld:10250 "GET /healthz HTTP/1.1" 403 78
https://infra.sanitized.tld:443 "GET /api/v1/namespaces HTTP/1.1" 503 None
No JSON object could be decoded
https://node1.sanitized.tld:10250 "GET /pods HTTP/1.1" 403 78
Event <class 'src.modules.discovery.kubelet.SecureKubeletEvent'> got published with <src.modules.discovery.kubelet.SecureKubeletEvent object at 0x107cc9e90>
Event <class 'src.modules.hunting.kubelet.AnonymousAuthEnabled'> got published with <src.modules.hunting.kubelet.AnonymousAuthEnabled object at 0x107c68b90>
Passive hunter is attempting to get server certificate
|
| Anonymous Authentication:
|   type: vulnerability
|   host: node1.sanitized.tld:10250
|   description:
|     The kubelet is misconfigured, potentially
|     allowing secure access to all requests on the
|_    kubelet, without the need to authenticate
|
| Kubelet API:
|   type: open service
|   service: Kubelet API
|_  host: node1.sanitized.tld:10250
Starting new HTTPS connection (1): node1.sanitized.tld
https://node1.sanitized.tld:10250 "GET /pods HTTP/1.1" 401 12
Starting new HTTPS connection (1): node1.sanitized.tld
https://masters.sanitized.tld:443 "GET /api/v1/namespaces/default/pods HTTP/1.1" 403 254
Starting new HTTPS connection (1): masters.sanitized.tld
https://node1.sanitized.tld:10250 "GET /healthz HTTP/1.1" 403 78
https://masters.sanitized.tld:443 "GET /apis/rbac.authorization.k8s.io/v1/roles HTTP/1.1" 403 337
Starting new HTTPS connection (1): masters.sanitized.tld
https://masters.sanitized.tld:443 "GET /apis/rbac.authorization.k8s.io/v1/namespaces/default/roles HTTP/1.1" 403 345
Starting new HTTPS connection (1): masters.sanitized.tld
https://masters.sanitized.tld:443 "GET /apis/rbac.authorization.k8s.io/v1/clusterroles HTTP/1.1" 403 358
Event <class 'src.modules.hunting.apiserver.ApiServerPassiveHunterFinished'> got published with
1 tasks left
1 tasks left
1 tasks left
etc etc etc

Link to EULA for Docker containers

Hi,

if you really want to use uploaded reports from the docker container version it may be a good idea to link to the EULA in the readme of this repository.

Link: https://kube-hunter.aquasec.com/eula.html

Customer hereby agrees that Aqua may upload samples of such results to its servers for its internal informational and research purposes

Kind regards,
Matthias

git clone from Readme does not work

The git clone in the readme does not work.
https://github.com/aquasecurity/kube-hunter/blame/master/README.md#L75

git clone git@github.com:aquasecurity/kube-hunter.git thiswillnotwork
Cloning into 'thiswillnotwork'...
Permission denied (publickey).
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.

git clone via https works fine:
git clone https://github.com/aquasecurity/kube-hunter.git Cloning into 'kube-hunter'... remote: Counting objects: 999, done. remote: Compressing objects: 100% (10/10), done. remote: Total 999 (delta 1), reused 3 (delta 1), pack-reused 988 Receiving objects: 100% (999/999), 177.52 KiB | 0 bytes/s, done. Resolving deltas: 100% (546/546), done.

No location shown for some Access Risk vulnerabilities

Example:

+-----------------+----------------------+----------------------+----------------------+----------------------+
| 10.32.0.1:10255 | Information          | Cluster Health       | By accessing the     | status: ok           |
|                 | Disclosure           | Disclosure           | open /healthz        |                      |
|                 |                      |                      | handler, an attacker |                      |
|                 |                      |                      | could get the        |                      |
|                 |                      |                      | cluster health state |                      |
|                 |                      |                      | without              |                      |
|                 |                      |                      | authenticating       |                      |
+-----------------+----------------------+----------------------+----------------------+----------------------+
| 10.32.0.1:10255 | Access Risk          | Privileged Container | A Privileged         | pod: kube-proxy-     |
|                 |                      |                      | container exist on a | fbm7v, container:    |
|                 |                      |                      | node. could expose   | kube-p...            |
|                 |                      |                      | the node/cluster to  |                      |
|                 |                      |                      | unwanted root        |                      |
|                 |                      |                      | operations           |                      |
+-----------------+----------------------+----------------------+----------------------+----------------------+
|                 | Access Risk          | Read access to pod's |  Accessing the pod   | eyJhbGciOiJSUzI1NiIs |
|                 |                      | service account      | service account      | InR5cCI6IkpXVCJ9.eyJ |
|                 |                      | token                | token gives an       | ...                  |
|                 |                      |                      | attacker the option  |                      |
|                 |                      |                      | to use the server    |                      |
|                 |                      |                      | API                  |                      |
+-----------------+----------------------+----------------------+----------------------+----------------------+
|                 | Access Risk          | Access to pod's      |  Accessing the pod's | ['/var/run/secrets/k |
|                 |                      | secrets              | secrets within a     | ubernetes.io/service |
|                 |                      |                      | compromised pod      | ...                  |
|                 |                      |                      | might disclose       |                      |
|                 |                      |                      | valuable data to a   |                      |
|                 |                      |                      | potential attacker   |                      |
+-----------------+----------------------+----------------------+----------------------+----------------------+

Kube hunter running as Job with args --report=json not working

Dear Team,

I was trying to run the kube hunter job in k8s with option
command: ["python", "kube-hunter.py"]
args: ["--internal","--report=json"]
However, the logs are not coming out in JSON format. I would like to send the logs to Elasticsearch and integrate further with a monitoring system in order to generate alerts.

I am not sure if I am doing something wrong while specifying the arguments or if it is a bug.
Could you please help me out here.
Thanks in advance.

--report and --log anomalies

Behaviour:

Depending on the options set either output appears or not.

Configuration:

Running as a K8s Job.

This DOES produce output ... note I have --log set to NONE and --report to JSON ...

spec:
  containers:
  - args:
    - --remote
    - foo.bar.baz.com
    - --log
    - none
    - --report
    - json
    command:
    - python
    - kube-hunter.py

This does NOT produce output ... same as above .. just changed --report to PLAIN

spec:
  containers:
  - args:
    - --remote
    - foo.bar.baz.com
    - --log
    - none
    - --report
    - plain
    command:
    - python
    - kube-hunter.py

This DOES produce output ... just changed --log to INFO

spec:
  containers:
  - args:
    - --remote
    - foo.bar.baz.com
    - --log
    - info
    - --report
    - plain
    command:
    - python
    - kube-hunter.py

I haven't tried all combinations (although a few more than I have shown here), but from these very limited tests it seems that if --log is set to a value other than NONE, output is produced, or if --log is NONE then --report only works with JSON ??

ARM support ?

Hi,
I would like to test kube-hunter on my ARM Kubernetes cluster using the Docker container.
I've got this error:

$ docker run -it --rm --network host aquasec/kube-hunter --token xxxxxxxxxxxxxxxxxxxxxxxx
Unable to find image 'aquasec/kube-hunter:latest' locally
latest: Pulling from aquasec/kube-hunter
be8881be8156: Pull complete 
44247e56d488: Pull complete 
9b1ccb116b10: Pull complete 
94c785725d8a: Pull complete 
ec04bd431296: Pull complete 
abdaeaf60dc8: Pull complete 
e6cf9354e1c2: Pull complete 
d80fd4a74001: Pull complete 
ae47b67b03da: Pull complete 
Digest: sha256:4d52303ee247ebabc18146de6728e30439a99f5be53f69b63a451acb7cdbab3d
Status: Downloaded newer image for aquasec/kube-hunter:latest
standard_init_linux.go:190: exec user process caused "exec format error"

I think it could come from:

This container includes this source code, plus an additional (closed source) reporting plugin for uploading results into a report that can be viewed at kube-hunter.aquasec.com

Do you plan to support multiarch?

etcd hunter

We should add a hunter looking for etcd access, and for checking whether etcd data is encrypted

etcd 3 support

Thanks for the great tool!

I tried it on a 3.7 cluster and it found etcd was open (v2 API), however it couldn't write. I tried the v3 API (v3beta) manually using curl with the gRPC gateway and found I could read and write! Could you add support for probing the v3 APIs too, please?
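
For reference, a v3 gateway probe looks something like this (a sketch; the host is a placeholder, keys must be base64-encoded for the gRPC gateway, and the URL prefix varies by etcd version):

    # "AA==" is base64 for a single zero byte, a minimal valid key
    curl -Lk https://<etcd-host>:2379/v3beta/kv/range -X POST -d '{"key": "AA=="}'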

Enhancing subscribe mechanism

The hunter subscribe mechanism currently supports subscribing to one or multiple events.
The problem arises when, for example, we want to run a hunter only if two specific events were both published.

One thing we could do is add a new 'queue' dictionary that stores the 'multiple subscriptions'; on each published event that resides in this queue dict, we add the event object to the queue.
When the list of event objects satisfies the subscription's requirement, the events will be concatenated and the hunter hook will execute. This logic would run each time in the publish_event() method.
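
A rough, runnable sketch of the idea (names are hypothetical; this is not the actual handler code):

    # Sketch: fire a hook only once all required event types have been published
    class EventA: pass
    class EventB: pass

    REQUIRED = (EventA, EventB)
    seen = {}

    def publish_event(event):
        if type(event) in REQUIRED:
            seen[type(event)] = event
            if len(seen) == len(REQUIRED):
                # all required events arrived: concatenate and execute the hunter hook
                print("executing hook with", tuple(seen.values()))
                seen.clear()

    publish_event(EventA())  # nothing happens yet, EventB is still missing
    publish_event(EventB())  # now the hook would execute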

issue with local usage of ./kube-hunter.py

Today I cloned kube-hunter to:

  1. an Ubuntu VM
  2. the local Windows Bash on Ubuntu (Linux Subsystem on Windows)
  3. on local Windows

On the two Ubuntu VMs (1 and 2), running ./kube-hunter.py --list returns the following error:

$ ./kube-hunter.py --list
Traceback (most recent call last):
  File "./kube-hunter.py", line 34, in <module>
    from src.modules.report.plain import PlainReporter
  File "/home/ubuntu/git/kube-hunter/src/__init__.py", line 2, in <module>
    import modules
  File "/home/ubuntu/git/kube-hunter/src/modules/__init__.py", line 1, in <module>
    import report
  File "/home/ubuntu/git/kube-hunter/src/modules/report/__init__.py", line 7, in <module>
    exec('from {} import *'.format(module_name))
  File "<string>", line 1, in <module>
  File "/home/ubuntu/git/kube-hunter/src/modules/report/yaml.py", line 3, in <module>
    from ruamel.yaml import YAML
ImportError: No module named ruamel.yaml

In Windows cmd (3) running ./kube-hunter.py --list returns the following error:

>kube-hunter.py --list
Traceback (most recent call last):
  File "C:\Users\myuser\git\kube-hunter\kube-hunter.py", line 34, in <module>
    from src.modules.report.plain import PlainReporter
  File "C:\Users\myuser\git\kube-hunter\src\__init__.py", line 1, in <module>
    import core
ImportError: No module named 'core'

Running kube-hunter in a pod on Kubernetes works fine. But currently I am not able to use the local version of kube-hunter.

Running kube-hunter with istio?

I am attempting to run this inside the default namespace with istio auto-injection enabled. It is unable to discover anything. Has anyone had any success running this alongside istio? I am experimenting with running in a different namespace with auto-injection turned off at the moment.

Hunter statistics look misleading

What do the Hunter event counts mean to a user? For example, a non-zero count against Certificate Email Hunting doesn't mean that any email addresses have been found in certificates, just that the hunter was triggered... Please can you explain the intention here @nshauli?

cc @jerbia

Kube Hunter couldn't find any clusters

Hi,
I'm trying to launch ./kube-hunter.py --active on k8s installed on-prem with kubeadm.
I got:
|
| API Server:
| type: open service
| service: API Server
|_ host: 198.18.240.90:443
Passive hunter is attempting to get server certificate
|
| API Server:
| type: open service
| service: API Server
|_ host: 198.18.240.86:443
Incorrect padding
Reachable port found: 443
Event <class 'src.core.events.types.common.OpenPortEvent'> got published with 443
Incorrect padding
Incorrect padding
Incorrect padding
Incorrect padding
Incorrect padding
Incorrect padding
Incorrect padding
Incorrect padding
Incorrect padding
Incorrect padding
Incorrect padding
Attempting to discover an Api server
Incorrect padding
Incorrect padding
Incorrect padding
Starting new HTTPS connection (1): 198.18.240.247:443
https://198.18.240.247:443 "GET / HTTP/1.1" 403 185
Event <class 'src.modules.discovery.apiserver.ApiServer'> got published with <src.modules.discovery.apiserver.ApiServer object at 0x7f9b606d2d90>
|
| API Server:
| type: open service
| service: API Server
|_ host: 198.18.240.247:443
Passive hunter is attempting to get server certificate
https://198.18.240.212:10250 "GET /pods HTTP/1.1" 401 12
Starting new HTTPS connection (1): 198.18.240.212:10250
https://198.18.240.212:10250 "GET /healthz HTTP/1.1" 401 12
https://198.18.240.215:10250 "GET /pods HTTP/1.1" 401 12
Starting new HTTPS connection (1): 198.18.240.215:10250
https://198.18.240.213:10250 "GET /pods HTTP/1.1" 401 12
Starting new HTTPS connection (1): 198.18.240.213:10250
https://198.18.240.214:10250 "GET /pods HTTP/1.1" 401 12
Starting new HTTPS connection (1): 198.18.240.214:10250
https://198.18.240.215:10250 "GET /healthz HTTP/1.1" 401 12
https://198.18.240.213:10250 "GET /healthz HTTP/1.1" 401 12
https://198.18.240.214:10250 "GET /healthz HTTP/1.1" 401 12
https://172.17.0.1:10250 "GET /pods HTTP/1.1" 401 12
Starting new HTTPS connection (1): 172.17.0.1:10250
https://172.17.0.1:10250 "GET /healthz HTTP/1.1" 401 12
Event <class 'src.core.events.types.common.HuntFinished'> got published with <src.core.events.types.common.HuntFinished object at 0x7f9b8406ec10>

Kube Hunter couldn't find any clusters


Event <class 'src.modules.report.collector.TablesPrinted'> got published with <src.modules.report.collector.TablesPrinted object at 0x7f9b8406e3d0>
Cleaned Queue

Mode selection can be shown in the wrong order

Possible to see this:

Choose one of the options below:
1. Remote scanning      (scans one or more specific IPs or DNS names)
2. IP range scanning    (scans a given IP range)
3. Subnet scanning      (scans subnets on all local network interfaces)
Your choice: 3
CIDR (example - 192.168.1.0/24): ^C

where the menu items appear in the "wrong" order (not the order that the subsequent code relies on). Dictionaries don't have a defined order, so we shouldn't rely on the order of their entries.
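
A sketch of one way to make the order deterministic, using an explicit list instead of a dict (names are illustrative, not the actual kube-hunter code):

    # A list preserves order, so the menu display and the handling always agree
    options = [
        ("Remote scanning", "remote"),
        ("IP range scanning", "cidr"),
        ("Subnet scanning", "interface"),
    ]
    for i, (label, _) in enumerate(options, start=1):
        print("{}. {}".format(i, label))
    choice = int(input("Your choice: "))
    mode = options[choice - 1][1]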

Potential False Positive - Kubelet Read/Write API

First of all, excellent app! Love seeing these kinds of tools. One thing stuck out on my first go-round. Did a scan of a stock GKE cluster (1.10.x), and the report indicated a "High" risk "Remote Code Execution" via the Kubelet Read/Write API (port 10250). The Kubelet's API is indeed available, but manual validation indicates the API properly disallows the request:

root@kube-hunter:/usr/src/kube-hunter# curl -sk https://10.128.0.4:10250/run/
Unauthorized

kube hunter error

Over k8s we have this error: Cannot read wireshark manuf database

Confusing license terms

On the website https://kube-hunter.aquasec.com/ it says

Customer shall not, and shall not permit or encourage any third party to, do any of the following: (a) copy the Software; (b) sell, assign, lease, lend, rent, sublicense, or make available the Software to any third party, or otherwise use the Software to operate in a time-sharing, outsourcing, or service bureau environment;

On GitHub it says

each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

๐Ÿคทโ€โ™€๏ธ if someone could explain how this works to me I would really appreciate it

Kube Hunter couldn't find any clusters on AKS

Hi,

I am trying to run kube-hunter on AKS as a Pod, but unfortunately it does not work:

kubectl logs kube-hunter-jbjxs
~ Started
~ Discovering Open Kubernetes Services...


Kube Hunter couldn't find any clusters

AKS is RBAC-enabled, could this cause an issue?

enum module not in requirements.txt

Hi,

please add enum to the requirements.txt file, it's needed but not in the file.

Ubuntu 16.04 with Python 2.7.12

./kube-hunter.py
Traceback (most recent call last):
File "./kube-hunter.py", line 37, in <module>
from src.modules.report.plain import PlainReporter
File "/home/uidv7979/Dokumente/kube-hunter/src/__init__.py", line 2, in <module>
from . import modules
File "/home/uidv7979/Dokumente/kube-hunter/src/modules/__init__.py", line 2, in <module>
from . import discovery
File "/home/uidv7979/Dokumente/kube-hunter/src/modules/discovery/__init__.py", line 8, in <module>
exec('from .{} import *'.format(module_name))
File "<string>", line 1, in <module>
File "/home/uidv7979/Dokumente/kube-hunter/src/modules/discovery/kubelet.py", line 3, in <module>
from enum import Enum
ImportError: No module named enum

pip install -r requirements.txt
Requirement already satisfied: netaddr in /usr/local/lib/python2.7/dist-packages (from -r requirements.txt (line 1)) (0.7.19)
Requirement already satisfied: netifaces in /usr/local/lib/python2.7/dist-packages (from -r requirements.txt (line 2)) (0.10.9)
Requirement already satisfied: scapy==2.4.3rc1 in /usr/local/lib/python2.7/dist-packages (from -r requirements.txt (line 3)) (2.4.3rc1)
Requirement already satisfied: requests in /usr/local/lib/python2.7/dist-packages (from -r requirements.txt (line 4)) (2.22.0)
Requirement already satisfied: PrettyTable in /usr/local/lib/python2.7/dist-packages (from -r requirements.txt (line 5)) (0.7.2)
Requirement already satisfied: urllib3<1.25,>=1.24.2 in /usr/local/lib/python2.7/dist-packages (from -r requirements.txt (line 6)) (1.24.3)
Requirement already satisfied: ruamel.yaml in /usr/local/lib/python2.7/dist-packages (from -r requirements.txt (line 7)) (0.15.96)
Requirement already satisfied: requests_mock in /usr/local/lib/python2.7/dist-packages (from -r requirements.txt (line 8)) (1.6.0)
Requirement already satisfied: future in /usr/local/lib/python2.7/dist-packages (from -r requirements.txt (line 9)) (0.17.1)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python2.7/dist-packages (from requests->-r requirements.txt (line 4)) (2019.3.9)
Requirement already satisfied: chardet<3.1.0,>=3.0.2 in /usr/local/lib/python2.7/dist-packages (from requests->-r requirements.txt (line 4)) (3.0.4)
Requirement already satisfied: idna<2.9,>=2.5 in /usr/local/lib/python2.7/dist-packages (from requests->-r requirements.txt (line 4)) (2.8)
Requirement already satisfied: ruamel.ordereddict; platform_python_implementation == "CPython" and python_version <= "2.7" in /usr/local/lib/python2.7/dist-packages (from ruamel.yaml->-r requirements.txt (line 7)) (0.4.13)
Requirement already satisfied: six in /usr/lib/python2.7/dist-packages (from requests_mock->-r requirements.txt (line 8)) (1.10.0)

After installing enum manually with pip, it works:
pip install enum
Collecting enum
Downloading https://files.pythonhosted.org/packages/02/a0/32e1d5a21b703f600183e205aafc6773577e16429af5ad3c3f9b956b07ca/enum-0.4.7.tar.gz
Requirement already satisfied: setuptools in /usr/lib/python2.7/dist-packages (from enum) (20.7.0)
Building wheels for collected packages: enum
Building wheel for enum (setup.py) ... done
Stored in directory: /root/.cache/pip/wheels/be/ba/eb/7c6273cf8a17300ccda1e504dbbd7e563670736e887f389459
Successfully built enum
Installing collected packages: enum
Successfully installed enum-0.4.7

Vulnerabilities not shown in dashboard


When I scan with kube-hunter, it is identifying some vulnerabilities. But the vulnerabilities are not shown in the dashboard.


Please let me know what could be possible issue.

Permission denied when cloning repo

projects $ git clone git@github.com:aquasecurity/kube-hunter.git
Cloning into 'kube-hunter'...
The authenticity of host 'github.com (192.30.253.113)' can't be established.
RSA key fingerprint is SHA256:nThbg6kXUpJWGl7E1IGOCspRomTxdCARLviKw6E5SY8.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'github.com,192.30.253.113' (RSA) to the list of known hosts.
git@github.com: Permission denied (publickey).
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.

Azure Cloud Checks - Make Optional

I note that there is a check for whether the cluster is deployed into Azure Cloud (see below).

I don't have a problem with that per se, except that in a corporate environment I don't want to request a proxy whitelist exception from my CISO for an endpoint that we have no use for (http://www.azurespeed.com). Moreover, whilst this test will eventually time out and the checks resume, it delays the process by whatever our default timeout is (120 secs, I think), which is not ideal.

Could you make this check optional using something like this ...

parser.add_argument('--azurechk', action="store_true", help="whether to check if the cluster is deployed on azure cloud - defaults to true")
...
class HostDiscoveryHelpers:
    @staticmethod
    def get_cloud(host):
        if config.azurechk:
            ...

Kind Regards

Fraser.

class HostDiscoveryHelpers:
    @staticmethod
    def get_cloud(host):
        if config.azurechk:
            try:
                logging.debug("Checking whether the cluster is deployed on azure's cloud")
                metadata = requests.get("http://www.azurespeed.com/api/region?ipOrUrl={ip}".format(ip=host)).text
            except requests.ConnectionError as e:
                logging.info("- unable to check cloud: {0}".format(e))
                return
            # keep the metadata check inside the guard so 'metadata' is always bound
            if "cloud" in metadata:
                return json.loads(metadata)["cloud"]

Insecure Azure Cloud IP detection

To detect whether the IP is in a known public cloud, kube-hunter performs a query to http://www.azurespeed.com using an insecure http connection. Given that kube-hunter is supposed to increase cluster security, leaking out internal IP data seems to counter that purpose.

Requests to such an external service should be secure and optional. #107 already suggests to make the check optional to reduce unnecessary timeouts.

Update docker image

As a follow-up to #41, the docker image should at some point be updated to use some version of python 3. Also, alpine3.9 has been released. So the following questions arise

  • Which version of python3 should be used in the docker image?
  • should alpine be updated from 3.8 to 3.9?

For my testing of the python3 changes, I used a docker image with python:3.7.2-alpine3.9, so I'm fairly confident that all works, but it may make sense to use something else for your official image. I see that python3.6 is used in Travis, so perhaps it would be best to keep python3.6 in the docker image as well.

It would also be good to update the README to indicate that python 3 is supported now.

CAP_NET_RAW new hunter

Change kube-hunter to not rely on having a raw socket capability when running in a pod, i.e. to not use scapy by default.

Also, adding a hunter which checks for this capability and throws a NET_RAW event would be a good thing.

Automated image build is not working

The job that builds an image and pushes it to Docker Hub is currently failing (due to a high severity vulnerability in one of the dependencies)

Suggested optimisations for Docker image

I believe it should be possible to reduce the final docker image layer for kube-hunter.

The build-base package is only needed for pip to build dependencies against musl, but then isn't actually needed at runtime. Also, tcpdump and wireguard were added in de1508d in order to appease scapy, but this seems to be unnecessary now.

edit: It does complain about a missing wireshark db when running with --pod, so perhaps it's best to leave both in for now and just remove build-base from the final layer. It looks from past discussions like it isn't actually needed, but the warnings cause alarm.

As an example, I am currently using this dockerfile, and I haven't encountered any issues yet.

Sample scan:

$ docker run --rm -it westonsteimel/kube-hunter --internal
~ Started
~ Discovering Open Kubernetes Services...
|
| API Server:
|   type: open service
|   service: API Server
|_  host: 172.17.0.1:6443

----------

Nodes
+-------------+------------+
| TYPE        | LOCATION   |
+-------------+------------+
| Node/Master | 172.17.0.1 |
+-------------+------------+

Detected Services
+------------+-----------------+----------------------+
| SERVICE    | LOCATION        | DESCRIPTION          |
+------------+-----------------+----------------------+
| API Server | 172.17.0.1:6443 | The API server is in |
|            |                 | charge of all        |
|            |                 | operations on the    |
|            |                 | cluster.             |
+------------+-----------------+----------------------+

Hunter Statistics
+----------------------+----------------------+--------+
| NAME                 | DESCRIPTION          | EVENTS |
+----------------------+----------------------+--------+
| Proxy Hunting        | Hunts for a          | 0      |
|                      | dashboard behind the |        |
|                      | proxy                |        |
+----------------------+----------------------+--------+
| Kubelet Secure Ports | Hunts specific       | 0      |
| Hunter               | endpoints on an open |        |
|                      | secured Kubelet      |        |
+----------------------+----------------------+--------+
| Kubelet Readonly     | Hunts specific       | 0      |
| Ports Hunter         | endpoints on open    |        |
|                      | ports in the         |        |
|                      | readonly Kubelet     |        |
|                      | server               |        |
+----------------------+----------------------+--------+
| Etcd Remote Access   | Checks for remote    | 0      |
|                      | availability of      |        |
|                      | etcd, its version,   |        |
|                      | and read access to   |        |
|                      | the DB               |        |
+----------------------+----------------------+--------+
| Dashboard Hunting    | Hunts open           | 0      |
|                      | Dashboards, gets the |        |
|                      | type of nodes in the |        |
|                      | cluster              |        |
+----------------------+----------------------+--------+
| Certificate Email    | Checks for email     | 1      |
| Hunting              | addresses in         |        |
|                      | kubernetes ssl       |        |
|                      | certificates         |        |
+----------------------+----------------------+--------+
| CVE-2018-1002105     | Checks if Node is    | 1      |
| hunter               | running a Kubernetes |        |
|                      | version vulnerable   |        |
|                      | to critical          |        |
|                      | CVE-2018-1002105     |        |
+----------------------+----------------------+--------+
| Access Secrets       | Accessing the        | 0      |
|                      | secrets accessible   |        |
|                      | to the pod           |        |
+----------------------+----------------------+--------+
| API Server Hunter    | Checks if API server | 1      |
|                      | is accessible        |        |
+----------------------+----------------------+--------+
| API Server Hunter    | Accessing the API    | 1      |
|                      | server using the     |        |
|                      | service account      |        |
|                      | token obtained from  |        |
|                      | a compromised pod    |        |
+----------------------+----------------------+--------+
| AKS Hunting          | Hunting Azure        | 0      |
|                      | cluster deployments  |        |
|                      | using specific known |        |
|                      | configurations       |        |
+----------------------+----------------------+--------+

No vulnerabilities were found

Use token we previously obtained

In hunting/cvehunter.py we get the service account token again, but we should be able to reuse the one we obtained in discovery/hosts.py.

Have a toggle to make logs machine readable

Have an environment variable or argument to allow logs to be printed to STDOUT in a machine-readable format. This would make ingesting job output into Elasticsearch or other log aggregators easier.
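
A sketch of what that could look like with the standard library (the formatter and field names are illustrative):

    import json
    import logging

    class JsonFormatter(logging.Formatter):
        def format(self, record):
            # One JSON object per log line, easy for log shippers to parse
            return json.dumps({
                "level": record.levelname,
                "name": record.name,
                "message": record.getMessage(),
            })

    handler = logging.StreamHandler()
    handler.setFormatter(JsonFormatter())
    logging.getLogger().addHandler(handler)
    logging.warning("machine readable log line")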

Update docker image

Hi, thanks for the tool!

Please update the docker image; the current version has outdated code that reports a false-positive Exposed Run Inside Container finding that was fixed in #33.

Also, having tagged images would be nice.

Support Python 3

I try kube-hunter using a virtualenv in Python 3 :

$ git clone git@github.com:aquasecurity/kube-hunter.git
$ cd kube-hunter
$ python3 -m venv venv
$ . venv/bin/activate
$ pip3 install -r requirements.txt
[...]
Successfully installed PrettyTable-0.7.2 certifi-2018.8.24 chardet-3.0.4 enum34-1.1.6 idna-2.7 netifaces-0.10.7 requests-2.19.1 ruamel.yaml-0.15.64 scapy-2.4.0 urllib3-1.23
$  python ./kube-hunter.py 
Traceback (most recent call last):
  File "./kube-hunter.py", line 34, in <module>
    from src.modules.report.plain import PlainReporter
  File "/home/rock64/Projects/kube-hunter/src/__init__.py", line 1, in <module>
    import core
ImportError: No module named 'core'

Unable to scan cluster

My cluster isn't accessible externally. I've tried both the cidr and remote options, but the cluster never gets detected. It seems to be because of the use of the azurespeed.com website as part of the scan:

Choose one of the options below:
1. Remote scanning      (scans one or more specific IPs or DNS names)
2. Internal scanning    (scans all network interfaces)
3. Network scanning     (scans a given IP range)
Your choice: 3
CIDR (example - 192.168.1.0/24): <ip>/32
Event <class 'src.core.events.types.common.HuntStarted'> got published with <src.core.events.types.common.HuntStarted object at 0x7f9efcab8810>
Event <class 'src.modules.discovery.hosts.HostScanEvent'> got published with <src.modules.discovery.hosts.HostScanEvent object at 0x7f9ed873a550>
~ Started
~ Discovering Open Kubernetes Services...
Starting new HTTP connection (1): www.azurespeed.com:80
http://www.azurespeed.com:80 "GET /api/region?ipOrUrl=<ip> HTTP/1.1" 200 None
Event <class 'src.core.events.types.common.NewHostEvent'> got published with 10.70.0.106
Event <class 'src.core.events.types.common.HuntFinished'> got published with <src.core.events.types.common.HuntFinished object at 0x7f9ed8709410>

Kube Hunter couldn't find any clusters

----------

Event <class 'src.modules.report.default.TablesPrinted'> got published with <src.modules.report.default.TablesPrinted object at 0x7f9f0167fcd0>
Cleaned Queue

Error when running kube-hunter.py

pez$ ./kube-hunter.py --cidr 192.168.70.143 --mapping
Traceback (most recent call last):
File "./kube-hunter.py", line 36, in
from src.modules.report.plain import PlainReporter
File "/Users/pez/src/github.com/kube-hunter/src/init.py", line 1, in
import core
File "/Users/pez/src/github.com/kube-hunter/src/core/init.py", line 1, in
import types
File "/Users/pez/src/github.com/kube-hunter/src/core/types.py", line 52, in
from events import handler # import is in the bottom to break import loops
File "/Users/pez/src/github.com/kube-hunter/src/core/events/init.py", line 1, in
from handler import *
File "/Users/pez/src/github.com/kube-hunter/src/core/events/handler.py", line 12, in
from ...core.events.types import HuntFinished
File "/Users/pez/src/github.com/kube-hunter/src/core/events/types/init.py", line 4, in
from common import *
File "/Users/pez/src/github.com/kube-hunter/src/core/events/types/common.py", line 2, in
import requests
ImportError: No module named requests
