
kubesphere / kubeeye

790 stars · 17 watchers · 130 forks · 208.59 MB

KubeEye aims to find various problems on Kubernetes, such as application misconfiguration, unhealthy cluster components and node problems.

Home Page: https://kubesphere.io

License: Apache License 2.0

Go 50.99% Makefile 4.23% Open Policy Agent 24.91% Dockerfile 1.37% Smarty 0.76% Shell 17.74%
kubernetes cluster-analysis k8s kubeeye observability

kubeeye's People

Contributors

allcontributors[bot], dependabot[bot], doudouzh, fingerliu, forest-l, inineku, ks-ci-bot, leonharetd, liangzai006, linuxsuren, panzhen6668, pixiake, realharshthakur, shaowenchen, zheng1, zryfish


kubeeye's Issues

Manage plugins by CRD

What would you like to be added:
Setting spec.plugins.npd.enabled to true will install NPD in the cluster.
Setting spec.plugins.kubebench.enabled to true will install kubebench in the cluster.

Setting spec.plugins.npd.enabled to false will check whether NPD is installed in the cluster and, if so, uninstall it.
Setting spec.plugins.kubebench.enabled to false will check whether kubebench is installed in the cluster and, if so, uninstall it.

Why is this needed:
The CRD would be modified as follows:

apiVersion: kubeeye.kubesphere.io/v1alpha1
kind: ClusterInsight
metadata:
  name: clusterinsight-sample
  namespace: kubeeye-system
spec:
  auditPeriod: 24h
  plugins:
    npd:
      enabled: true
    kubebench:
      enabled: true

This will make it easier for users to manage plugins.
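A minimal sketch of the corresponding reconcile logic (the types and install/uninstall helpers below are simplified stand-ins, not the actual kubeeye controller):

package main

import (
    "context"
    "fmt"
)

// Hypothetical, simplified types mirroring the proposed CRD spec.
type PluginSpec struct{ Enabled bool }
type Plugins struct{ NPD, KubeBench PluginSpec }
type ClusterInsightSpec struct {
    AuditPeriod string
    Plugins     Plugins
}
type ClusterInsight struct{ Spec ClusterInsightSpec }

// Stubbed helpers; a real controller would apply or delete the plugin's
// manifests against the cluster here.
func installNPD(ctx context.Context) error             { fmt.Println("installing NPD"); return nil }
func uninstallNPD(ctx context.Context) error           { fmt.Println("uninstalling NPD"); return nil }
func isNPDInstalled(ctx context.Context) (bool, error) { return true, nil }

// reconcilePlugins applies the enable/disable semantics described above:
// enabled=true installs the plugin, enabled=false uninstalls it only if
// it is currently installed.
func reconcilePlugins(ctx context.Context, ci *ClusterInsight) error {
    if ci.Spec.Plugins.NPD.Enabled {
        return installNPD(ctx)
    }
    installed, err := isNPDInstalled(ctx)
    if err != nil {
        return err
    }
    if installed {
        return uninstallNPD(ctx)
    }
    return nil
}

func main() {
    _ = reconcilePlugins(context.Background(), &ClusterInsight{})
}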

The vendor directory is too large

The vendor directory is too large; it is unnecessary for most users and slows down cloning the repo.
The vendor directory will be deleted.

Inspection checks listed in the README have no corresponding implementation

Hello, I have been researching kubeeye recently. The README describes the following inspection functions, but I could not find the corresponding code in v0.4.0:

DockerHealthStatus
ETCDHealthStatus
ControllerManagerHealthStatus
SchedulerHealthStatus
KubeletHealthStatus
NodeDisk
NodeOOM
.....

Could you explain how the status of the cluster, hosts, and Docker is detected?

Make it work with `go install` and `brew install`

Hi. The tool looks interesting, but to make installation simpler it would be nice to support these two options:

  1. go install (and go get) is failing now (see the go.mod note after this list):
$ go install github.com/kubesphere/kubeeye
go: finding module for package github.com/kubesphere/kubeeye
go: found github.com/kubesphere/kubeeye in github.com/kubesphere/kubeeye v0.1.0
go: github.com/kubesphere/kubeeye: github.com/kubesphere/kubeeye@v0.1.0: parsing go.mod:
	module declares its path as: kubeye
	        but was required as: github.com/kubesphere/kubeeye
  2. brew install is missing
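For what it's worth, the error pinpoints the cause: at v0.1.0 the repository's go.mod declared the module path as kubeye instead of the full repository path. The expected declaration in go.mod would be:

// go.mod at the repository root; the module path must match the path
// used with go install / go get.
module github.com/kubesphere/kubeeye

With that in place, go install github.com/kubesphere/kubeeye@latest should resolve.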

Complete the audit result levels

What would you like to be added:
Complete the audit result levels.

Why is this needed:
Cluster scores are necessary to evaluate a cluster. To support scoring the results later, each result is now assigned a level; the output in the CRD's status looks like the following:

apiVersion: v1
items:
- apiVersion: kubeeye.kubesphere.io/v1alpha1
  kind: ClusterInsight
  metadata:
    name: clusterinsight-sample
    namespace: default
  spec:
    auditPeriod: 24h
  status:
    auditResults:
      auditResults:
      - resourcesType: Node
        resultInfos:
        - namespace: ""
          resourceInfos:
          - items:
            - level: waring
              message: KubeletHasDiskPressure
              reason: kubelet has disk pressure
            - level: waring
              message: KubeletHasNoSufficientPID
              reason: kubelet has no sufficient PID available
            name: docker-desktop

Support Web Interface

What would you like to be added:

Provide a web interface that allows users to view results directly through a browser

Why is this needed:

Inspection results in a pure CLI view are hard to visualize. A web page makes the results easier to understand and can also present remediation suggestions for each finding.

Release asset of v1.0.0 doesn't include prebuilt binaries

https://github.com/kubesphere/kubeeye#install-and-use-kubeeye

Method 1: Download the pre-built executable file from Releases.

https://github.com/kubesphere/kubeeye/releases/tag/v1.0.0


The release asset of v1.0.0 doesn't include prebuilt binaries.

$ tar tvzf kubeeye-offline-v1.0.0.tar.gz 
drwxr-xr-x  0 runner docker      0 12  1 00:36 kubeeye-offline-v1.0.0/
drwxr-xr-x  0 runner docker      0 12  1 00:36 kubeeye-offline-v1.0.0/chart/
drwxr-xr-x  0 runner docker      0 12  1 00:36 kubeeye-offline-v1.0.0/chart/kubeeye/
-rw-r--r--  0 runner docker   1137 12  1 00:36 kubeeye-offline-v1.0.0/chart/kubeeye/Chart.yaml
-rw-r--r--  0 runner docker    349 12  1 00:36 kubeeye-offline-v1.0.0/chart/kubeeye/.helmignore
-rw-r--r--  0 runner docker   2222 12  1 00:36 kubeeye-offline-v1.0.0/chart/kubeeye/values.yaml
drwxr-xr-x  0 runner docker      0 12  1 00:36 kubeeye-offline-v1.0.0/chart/kubeeye/templates/
-rw-r--r--  0 runner docker   4855 12  1 00:36 kubeeye-offline-v1.0.0/chart/kubeeye/templates/deployment.yaml
-rw-r--r--  0 runner docker    263 12  1 00:36 kubeeye-offline-v1.0.0/chart/kubeeye/templates/serviceaccount.yaml
-rw-r--r--  0 runner docker    276 12  1 00:36 kubeeye-offline-v1.0.0/chart/kubeeye/templates/manager-config.yaml
-rw-r--r--  0 runner docker    831 12  1 00:36 kubeeye-offline-v1.0.0/chart/kubeeye/templates/proxy-rbac.yaml
-rw-r--r--  0 runner docker    240 12  1 00:36 kubeeye-offline-v1.0.0/chart/kubeeye/templates/metrics-reader-rbac.yaml
-rw-r--r--  0 runner docker    996 12  1 00:36 kubeeye-offline-v1.0.0/chart/kubeeye/templates/leader-election-rbac.yaml
-rw-r--r--  0 runner docker    185 12  1 00:36 kubeeye-offline-v1.0.0/chart/kubeeye/templates/config.yaml
-rw-r--r--  0 runner docker   1782 12  1 00:36 kubeeye-offline-v1.0.0/chart/kubeeye/templates/_helpers.tpl
-rw-r--r--  0 runner docker    450 12  1 00:36 kubeeye-offline-v1.0.0/chart/kubeeye/templates/metrics-service.yaml
-rw-r--r--  0 runner docker    350 12  1 00:36 kubeeye-offline-v1.0.0/chart/kubeeye/templates/apiserver.yaml
-rw-r--r--  0 runner docker   2759 12  1 00:36 kubeeye-offline-v1.0.0/chart/kubeeye/templates/manager-rbac.yaml
-rw-r--r--  0 runner docker    329 12  1 00:36 kubeeye-offline-v1.0.0/chart/kubeeye/templates/inspect-result.yaml
drwxr-xr-x  0 runner docker      0 12  1 00:36 kubeeye-offline-v1.0.0/chart/kubeeye/crds/
-rw-r--r--  0 runner docker   8696 12  1 00:36 kubeeye-offline-v1.0.0/chart/kubeeye/crds/inspectresult-crd.yaml
-rw-r--r--  0 runner docker   3538 12  1 00:36 kubeeye-offline-v1.0.0/chart/kubeeye/crds/inspectplan-crd.yaml
-rw-r--r--  0 runner docker   3478 12  1 00:36 kubeeye-offline-v1.0.0/chart/kubeeye/crds/inspecttask-crd.yaml
-rw-r--r--  0 runner docker   7480 12  1 00:36 kubeeye-offline-v1.0.0/chart/kubeeye/crds/inspectrule-crd.yaml
drwxr-xr-x  0 runner docker      0 12  1 00:36 kubeeye-offline-v1.0.0/rule/
-rw-r--r--  0 runner docker    579 12  1 00:36 kubeeye-offline-v1.0.0/rule/kubeeye_v1alpha2_systemd.yaml
-rw-r--r--  0 runner docker   4383 12  1 00:36 kubeeye-offline-v1.0.0/rule/kubeeye_v1alpha2_prometheusrule.yaml
-rw-r--r--  0 runner docker    262 12  1 00:36 kubeeye-offline-v1.0.0/rule/kubeeye_v1alpha2_filterrule.yaml
-rw-r--r--  0 runner docker    176 12  1 00:36 kubeeye-offline-v1.0.0/rule/kubeeye_v1alpha2_services_connect.yaml
-rw-r--r--  0 runner docker   1603 12  1 00:36 kubeeye-offline-v1.0.0/rule/kubeeye_v1alpha2_opa_node.yaml
-rw-r--r--  0 runner docker  57827 12  1 00:36 kubeeye-offline-v1.0.0/rule/kubeeye_v1alpha2_opa_deployment.yaml
-rw-r--r--  0 runner docker   2076 12  1 00:36 kubeeye-offline-v1.0.0/rule/kubeeye_v1alpha2_opa_namespace.yaml
-rw-r--r--  0 runner docker    688 12  1 00:36 kubeeye-offline-v1.0.0/rule/kubeeye_v1alpha2_nodeInfo.yaml
-rw-r--r--  0 runner docker   1304 12  1 00:36 kubeeye-offline-v1.0.0/rule/kubeeye_v1alpha2_opa_evnet.yaml
-rw-r--r--  0 runner docker    505 12  1 00:36 kubeeye-offline-v1.0.0/rule/kubeeye_v1alpha2_filechange.yaml
-rw-r--r--  0 runner docker   1512 12  1 00:36 kubeeye-offline-v1.0.0/rule/kubeeye_v1alpha2_opa_abnormalPodStatus.yaml
-rw-r--r--  0 runner docker   3414 12  1 00:36 kubeeye-offline-v1.0.0/rule/kubeeye_v1alpha2_sysctlrule.yaml
drwxr-xr-x  0 runner docker      0 12  1 00:36 kubeeye-offline-v1.0.0/images/
-rw-------  0 runner docker 73283584 12  1 00:36 kubeeye-offline-v1.0.0/images/kubeeye-apiserver.tar
-rw-------  0 runner docker 47816192 12  1 00:36 kubeeye-offline-v1.0.0/images/kube-rbac-proxy.tar
-rw-------  0 runner docker 73251840 12  1 00:36 kubeeye-offline-v1.0.0/images/kubeeye-controller.tar
-rw-------  0 runner docker 65912832 12  1 00:36 kubeeye-offline-v1.0.0/images/kubeeye-job.tar

kubeeye v0.3.0 running ok but main branch build and run with error

When I use v0.3.0 I can get the audit messages (though with a bug when using -o json). I then built an executable from the main branch; when I run it, it shows the error below:

1.649217524961418e+09 ERROR controller-runtime.source if kind is a CRD, it should be installed before calling Start {"kind": "ClusterInsight.kubeeye.kubesphere.io", "error": "no matches for kind "ClusterInsight" in version "kubeeye.kubesphere.io/v1alpha1""}
sigs.k8s.io/controller-runtime/pkg/source.(*Kind).Start.func1.1
/Users/abel/GolandProjects/kubeeye/vendor/sigs.k8s.io/controller-runtime/pkg/source/source.go:137
k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext
/Users/abel/GolandProjects/kubeeye/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:233
k8s.io/apimachinery/pkg/util/wait.poll
/Users/abel/GolandProjects/kubeeye/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:580
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext
/Users/abel/GolandProjects/kubeeye/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:545
sigs.k8s.io/controller-runtime/pkg/source.(*Kind).Start.func1
/Users/abel/GolandProjects/kubeeye/vendor/sigs.k8s.io/controller-runtime/pkg/source/source.go:131
1.6492175356150734e+09 ERROR controller-runtime.source if kind is a CRD, it should be installed before calling Start {"kind": "ClusterInsight.kubeeye.kubesphere.io", "error": "no matches for kind "ClusterInsight" in version "kubeeye.kubesphere.io/v1alpha1""}
sigs.k8s.io/controller-runtime/pkg/source.(*Kind).Start.func1.1
/Users/abel/GolandProjects/kubeeye/vendor/sigs.k8s.io/controller-runtime/pkg/source/source.go:137
k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext
/Users/abel/GolandProjects/kubeeye/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:233
k8s.io/apimachinery/pkg/util/wait.WaitForWithContext
/Users/abel/GolandProjects/kubeeye/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660
k8s.io/apimachinery/pkg/util/wait.poll
/Users/abel/GolandProjects/kubeeye/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:594
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext
/Users/abel/GolandProjects/kubeeye/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:545
sigs.k8s.io/controller-runtime/pkg/source.(*Kind).Start.func1
/Users/abel/GolandProjects/kubeeye/vendor/sigs.k8s.io/controller-runtime/pkg/source/source.go:131


cannot load (...)/kubeeye/packrd: malformed module path "(...)/kubeeye/packrd": missing dot in first path element

It can't build.
(I also tried the released package, but I got the error panic: stat /checks/hostIPCSet.yaml: no such file or directory.)

# The (...) is my $HOME path
$ make install
GO111MODULE=on GOPROXY=https://goproxy.io CGO_ENABLED=0 go get -u github.com/gobuffalo/packr/v2/packr2
go: finding golang.org/x/term latest
go: finding golang.org/x/sync latest
go: finding golang.org/x/sys latest
/home/felipe/go/bin/packr2 build -a -o "ke" *.go
build command-line-arguments: cannot load (...)/kubeeye/packrd: malformed module path "(...)/kubeeye/packrd": missing dot in first path element
Error: exit status 1

R&D plan

For a better user experience, we will add Kubeeye Console and Kubeeye Collector to Kubeeye.
Now that we have a prototype of the Kubeeye command-line tool, we will develop Kubeeye Console and Kubeeye Collector, and gradually improve the command-line tool's functionality.
Kubeeye Collector is a DaemonSet running in Kubernetes that collects node information.
Kubeeye Console is Kubeeye's web page, providing viewing and downloading of audit results.

  • kubeeye command-line tool:

Functional Requirements:

  • Standalone operation.

  • Check k8s cluster resource configuration and events.

  • Install the console.

  • Provide installation methods for more cluster inspection components, such as NPD, kube-bench, etc.

  • Collect node information through Kubeeye Collector and audit it.

  • Kubeeye Collector

Functional Requirements:

  • Run as a DaemonSet in the k8s cluster to collect node information.

  • Kubeeye console:

Functional Requirements:

  • Call the Kubeeye command-line tool to get audit results.

  • Manage Rego rules.

  • Manage the audit scope.

  • Display the audit results on the web page.

  • Store audit results and view historical audit results on the web page.

  • Show the changing trend of cluster audits on the web page.

  • Allow downloading audit results from the web page.

  • Provide fix suggestions for each audit result.

Inspection is stuck and no inspection report is generated

Following the documentation, I installed the kubeeye operator in the cluster. Running kubectl get clusterinsight -o yaml outputs the content below. No inspection result is generated; it stays in the Running state.

apiVersion: v1
items:
- apiVersion: kubeeye.kubesphere.io/v1alpha1
  kind: ClusterInsight
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"kubeeye.kubesphere.io/v1alpha1","kind":"ClusterInsight","metadata":{"annotations":{},"name":"clusterinsight-sample"},"spec":{"auditPeriod":"05 11 * * *"}}
    creationTimestamp: "2023-08-07T07:04:43Z"
    generation: 3
    name: clusterinsight-sample
    resourceVersion: "790533582"
    uid: e2e23f3d-4358-435a-bbef-78eb9e4be8a9
  spec:
    auditPeriod: 05 11 * * *
  status:
    auditPercent: 14
    clusterInfo: {}
    phase: Running
    scoreInfo: {}
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

The manager container keeps printing the logs below. There is no error, so I don't know where to start debugging. Could someone please help take a look?

I0808 07:13:09.249903 1 basecontroller.go:111] Successfully Synced key:clusterinsight-sample in KubeEye-controller
I0808 07:13:11.251010 1 kubeeyecronjob_controller.go:92] Next audit time: 2023-08-08 11:05:00.1 +0000 UTC m=+13927.558089397
I0808 07:13:11.251058 1 kubeeyecronjob_controller.go:104] wait starting audit
I0808 07:13:11.251083 1 basecontroller.go:111] Successfully Synced key:clusterinsight-sample in KubeEye-controller
I0808 07:13:13.251530 1 kubeeyecronjob_controller.go:92] Next audit time: 2023-08-08 11:05:00.1 +0000 UTC m=+13927.558089530
I0808 07:13:13.251594 1 kubeeyecronjob_controller.go:104] wait starting audit
I0808 07:13:13.251641 1 basecontroller.go:111] Successfully Synced key:clusterinsight-sample in KubeEye-controller

documentation: NodeCPU

Hey,

I saw that NodeCPU could be a check item, but I don't see it implemented or documented. Am I missing something?

make error

Please provide an in-depth description of the question you have:
When I clone the repo and execute make installke, some errors happen. Has anyone come across this?


What do you think about this question?:

Environment:

  • KubeEye version: main branch
  • Others:

Incorrect node conditions regorules

What happened:
The node conditions are all normal (see the rego input below for the full conditions).

Incorrect rego output:

{
    "deny": [
        {
            "Level": "warning",
            "Message": "KubeletHasDiskPressure",
            "Name": "kind-control-plane",
            "Reason": "kubelet has disk pressure",
            "Type": "Node"
        },
        {
            "Level": "warning",
            "Message": "KubeletHasNoSufficientMemory",
            "Name": "kind-control-plane",
            "Reason": "kubelet has no sufficient memory available",
            "Type": "Node"
        },
        {
            "Level": "warning",
            "Message": "KubeletHasNoSufficientPID",
            "Name": "kind-control-plane",
            "Reason": "kubelet has no sufficient PID available",
            "Type": "Node"
        }
    ]
}

rego input:

{
    "Object": {
        "kind": "Node",
        "apiVersion": "v1",
        "metadata": {
            "name": "kind-control-plane",
            "selfLink": "/api/v1/nodes/kind-control-plane",
            "uid": "d3d62b4e-e19e-48c0-803f-4245603bc4d8",
            "resourceVersion": "6495906",
            "creationTimestamp": "2022-05-16T02:18:51Z",
            "labels": {
                "beta.kubernetes.io/arch": "amd64",
                "beta.kubernetes.io/os": "linux",
                "kubernetes.io/arch": "amd64",
                "kubernetes.io/hostname": "kind-control-plane",
                "kubernetes.io/os": "linux",
                "node-role.kubernetes.io/master": ""
            },
            "annotations": {
                "kubeadm.alpha.kubernetes.io/cri-socket": "unix:///run/containerd/containerd.sock",
                "node.alpha.kubernetes.io/ttl": "0",
                "volumes.kubernetes.io/controller-managed-attach-detach": "true"
            }
        },
        "spec": {
            "podCIDR": "10.244.0.0/24",
            "podCIDRs": [
                "10.244.0.0/24"
            ],
            "providerID": "kind://docker/kind/kind-control-plane"
        },
        "status": {
            "conditions": [
                {
                    "type": "MemoryPressure",
                    "status": "False",
                    "lastHeartbeatTime": "2022-08-23T02:30:23Z",
                    "lastTransitionTime": "2022-08-20T01:23:07Z",
                    "reason": "KubeletHasSufficientMemory",
                    "message": "kubelet has sufficient memory available"
                },
                {
                    "type": "DiskPressure",
                    "status": "False",
                    "lastHeartbeatTime": "2022-08-23T02:30:23Z",
                    "lastTransitionTime": "2022-08-20T01:23:07Z",
                    "reason": "KubeletHasNoDiskPressure",
                    "message": "kubelet has no disk pressure"
                },
                {
                    "type": "PIDPressure",
                    "status": "False",
                    "lastHeartbeatTime": "2022-08-23T02:30:23Z",
                    "lastTransitionTime": "2022-08-20T01:23:07Z",
                    "reason": "KubeletHasSufficientPID",
                    "message": "kubelet has sufficient PID available"
                },
                {
                    "type": "Ready",
                    "status": "True",
                    "lastHeartbeatTime": "2022-08-23T02:30:23Z",
                    "lastTransitionTime": "2022-08-20T01:23:07Z",
                    "reason": "KubeletReady",
                    "message": "kubelet is posting ready status"
                }
            ]
        }
    }
}

What you expected to happen:
According to the Kubernetes node condition descriptions, the correct output should be as follows:

{
    "deny": []
}
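For reference, here is a minimal Go sketch (using k8s.io/api/core/v1 types) of the condition logic the fixed rule needs to encode: for Ready, status must be True to be healthy, while for the pressure-style conditions a True status is the unhealthy case, so False must not be denied.

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

// isAbnormal reports whether a node condition indicates a problem.
func isAbnormal(c corev1.NodeCondition) bool {
    if c.Type == corev1.NodeReady {
        // Ready is the one condition where status != True is bad.
        return c.Status != corev1.ConditionTrue
    }
    // MemoryPressure, DiskPressure, PIDPressure, ... are bad when True.
    return c.Status == corev1.ConditionTrue
}

func main() {
    c := corev1.NodeCondition{Type: corev1.NodeDiskPressure, Status: corev1.ConditionFalse}
    fmt.Println(isAbnormal(c)) // false: no disk pressure, nothing to deny
}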

How to reproduce it (as minimally and precisely as possible):

Reproduce it using FalseNodeConditionsRule.rego in the Rego Playground.

Anything else we need to know?:

Environment:

  • Kubeeye version: v0.5.0
  • Others:

npd-rule.yaml is missing

When executing ke install npd, the CLI prompts "Failed to get npd-rule.yaml: stat /root/kubeye/examples/npd-rule.yaml: no such file or directory", and npd-rule.yaml is not found in the repo's examples folder.

Cluster Inspection Scoring Policy

How can clusters be scored better?
Refer to https://en.wikipedia.org/wiki/Common_Vulnerability_Scoring_System

The following three scoring policies are possible:

Method 1: simple weighted calculation
1. First compute the total weight over all check items (ignored items excluded):
totalWeight = Success * 2 + Warning * 1 + Danger * 2

2. Then calculate the score from the ratio:
score = Success * 2 / totalWeight * 100

e.g.
{
  "scoreInfo": {
    "score": 79,
    "dangerous": 10,
    "passing": 50,
    "ignore": 5,
    "warning": 7,
    "total": 72
  }
}
score = 50 * 2 / (10 * 2 + 50 * 2 + 7 * 1) * 100 ≈ 79

Method 2: on top of Method 1, multiply by an availability factor (0.8-1.0), which can be valued dynamically according to the current vulnerability situation:
score = (Success * 2 / totalWeight * 100) * factor

Method 3: extend the weighting to namespaces
Give different weights to different namespaces: for example, kube-system is weighted 3, cluster-scoped resources (no namespace) 2, and ordinary namespaces 1.

totalWeight = kube-system weight + no-namespace weight + other-namespace weight
success score = kube-system.Success * 3 + no-namespace.Success * 2 + other-namespace.Success * 1
total score = success score / totalWeight * 100

More scoring methods are possible as well.
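For concreteness, a minimal sketch of Method 1 in Go (field names follow the scoreInfo example above; the rounding behavior is an assumption):

package main

import (
    "fmt"
    "math"
)

// score implements Method 1: passing and dangerous items weigh 2,
// warnings weigh 1, and ignored items are excluded entirely.
func score(passing, dangerous, warning int) int {
    totalWeight := passing*2 + dangerous*2 + warning*1
    if totalWeight == 0 {
        return 100 // nothing was checked; treat the cluster as healthy
    }
    return int(math.Round(float64(passing*2) / float64(totalWeight) * 100))
}

func main() {
    // Matches the example above: 50*2 / (10*2 + 50*2 + 7*1) * 100 ≈ 79
    fmt.Println(score(50, 10, 7))
}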

Unable to make from source

Every time I try to make, I get a different error during the process. I tried removing the GOPROXY and that did not help.
HypriotOS/armv7: in ~/kubeeye

make install

GO111MODULE=on CGO_ENABLED=0 go get -u github.com/gobuffalo/packr/v2/packr2
go: finding github.com/gobuffalo/packr/v2/packr2 latest
go: finding golang.org/x/sync latest
go: finding golang.org/x/sys latest
go: finding golang.org/x/net latest
go: finding github.com/coreos/pkg latest
go: finding github.com/kr/logfmt latest
go: github.com/coreos/bbolt@…: parsing go.mod: unexpected module path "go.etcd.io/bbolt"
go: finding google.golang.org/genproto latest
go: finding golang.org/x/time latest
go: finding gopkg.in/check.v1 latest
go: finding golang.org/x/crypto latest
go get: error loading module requirements
make: *** [Makefile:12: install-packr2] Error 1

Is this project still alive?

Please provide an in-depth description of the question you have:

What do you think about this question?:
Is this project still maintained? The latest code appears to be from 2022. Has maintenance been abandoned?

Manage plugins by CRD

What would you like to be added:
Create a new CRD plugins-manager for managing plugins.
When a CR is created and spec.enabled is set to true, the plugin will be automatically deployed.
When the creation is complete, status.ready will be set to true.

apiVersion: kubeeye.kubesphere.io/v1alpha1
kind: PluginSubscription
metadata:
  namespace: kubeeye-system
  name: kubebench
spec:
  enabled: true
status:
  ready: true

clusterinsights will watch this CRD; when a new CR is created, it will automatically try to fetch the plugin's audit results.

Why is this needed:
To manage plugin resources automatically.

Support cluster health score

What would you like to be added:
Provide a cluster health score to show how healthy the cluster is.

Why is this needed:
Cluster health scores are necessary to evaluate the cluster. Through the health score, the user can quickly judge the state of the cluster and proceed to the next step, improving the user experience.

"scoreInfo": {
  "dangerous": 15,
  "ignore": 3,
  "passing": 372,
  "score": 89,
  "total": 450,
  "warning": 60
}

Combined with OPA policy engine

The Open Policy Agent is an open source, general-purpose policy engine that unifies policy enforcement across the stack. By combining with OPA, KubeEye will become more flexible and improve scalability.

OPA policies are expressed in a high-level declarative language called Rego. Rego is purpose-built for expressing policies over complex hierarchical data structures. Rego is simpler and more convenient than the existing policy definition.
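To make the integration concrete, here is a minimal, self-contained sketch of evaluating a Rego policy from Go with OPA's rego package (the policy and input are toy examples, not kubeeye's actual rules; classic pre-1.0 Rego syntax):

package main

import (
    "context"
    "fmt"

    "github.com/open-policy-agent/opa/rego"
)

// A toy policy: deny pods that run privileged containers.
const module = `
package kubeeye

deny[msg] {
    c := input.spec.containers[_]
    c.securityContext.privileged == true
    msg := sprintf("container %v is privileged", [c.name])
}
`

func main() {
    query, err := rego.New(
        rego.Query("data.kubeeye.deny"),
        rego.Module("kubeeye.rego", module),
    ).PrepareForEval(context.Background())
    if err != nil {
        panic(err)
    }

    // Input would normally be a resource fetched from the cluster.
    pod := map[string]interface{}{
        "spec": map[string]interface{}{
            "containers": []interface{}{
                map[string]interface{}{
                    "name":            "app",
                    "securityContext": map[string]interface{}{"privileged": true},
                },
            },
        },
    }

    rs, err := query.Eval(context.Background(), rego.EvalInput(pod))
    if err != nil {
        panic(err)
    }
    fmt.Println(rs[0].Expressions[0].Value) // [container app is privileged]
}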

How can I view the audit results through the website?

Please provide an in-depth description of the question you have:
What the KubeEye Operator can do:
The KubeEye Operator provides a management website.
The operator records audit results as CRs, so cluster audit results can be viewed and compared on the website.
The operator provides more plugins.
The operator provides modification suggestions on the website.
What do you think about this question?:

Environment:

  • KubeEye version:
  • Others:

Support deploy in k8s

What would you like to be added:

Provide an easy way to install into users' own k8s environment

  • kubeeye docker image update
  • gcr.io needs to be replaced
  • documentation about how to access kubeeye's webpage
  • helm chart

Support to store inspection results in external database

What would you like to be added:
Support storing inspection results in an external database, such as Prometheus.

Why is this needed:
At present, kubeeye stores the inspection results in the status of custom resources, but as the number of integrated plugins grows there will be more and more inspection results, so it is necessary to store them in an external database.
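One possible shape of the Prometheus option, sketched with client_golang (the metric name and labels here are illustrative assumptions, not an agreed design):

package main

import (
    "net/http"

    "github.com/prometheus/client_golang/prometheus"
    "github.com/prometheus/client_golang/prometheus/promhttp"
)

// inspectResults counts inspection findings by namespace, resource type
// and level, so results live outside the CR status and can be graphed.
var inspectResults = prometheus.NewGaugeVec(
    prometheus.GaugeOpts{
        Name: "kubeeye_inspect_result_total",
        Help: "Number of inspection findings, labeled by origin and level.",
    },
    []string{"namespace", "resource_type", "level"},
)

func main() {
    prometheus.MustRegister(inspectResults)

    // After each audit run, the controller would set the gauges from the
    // audit results instead of (or in addition to) updating the CR status.
    inspectResults.WithLabelValues("kube-system", "Node", "warning").Set(3)

    http.Handle("/metrics", promhttp.Handler())
    _ = http.ListenAndServe(":8080", nil)
}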

ke fails with error "no Auth Provider found for name azure"

Hello,
I have an AKS cluster in Azure, and I am authenticating with Azure AD. My kubeconfig looks like this:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: xxxx
    server: https://myakscluster.hcp.westeurope.azmk8s.io:443
  name: myakscluster
contexts:
- context:
    cluster: myakscluster
    user: clusterUser_myakscluster-rg_myakscluster
  name: myakscluster
current-context: myakscluster
kind: Config
preferences: {}
users:
- name: clusterUser_myakscluster-rg_myakscluster
  user:
    auth-provider:
      config:
        apiserver-id: xxxxxxxxx
        client-id: xxxxxxxxxxxx
        config-mode: '1'
        environment: AzurePublicCloud
        tenant-id: xxxxxxxx
      name: azure

When I try to use kubeeye, I get the following error:

$ ke diag --kubeconfig ~/.kube/config
ERRO[0000] Error fetching api: no Auth Provider found for name "azure"
Failed to get cluster information: no Auth Provider found for name "azure"

Do you plan to support this provider in the future?
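For reference, this error usually means the client-go auth provider plugins were never linked into the binary. Assuming kubeeye builds its client from the kubeconfig via client-go, a minimal sketch of the usual fix:

package main

import (
    // Blank-importing the auth plugins registers them at init time, so
    // kubeconfigs with auth-provider entries (azure, gcp, oidc) work.
    _ "k8s.io/client-go/plugin/pkg/client/auth"

    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // With the blank import above, building a config from this kubeconfig
    // no longer fails with `no Auth Provider found for name "azure"`.
    if _, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile); err != nil {
        panic(err)
    }
}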

ke audit return failed

ke audit
failed to fetch clusterRoles: the server could not find the requested resource

kubectl version --short
Client Version: v1.22.3
Server Version: v1.22.3

Modify the structure of auditResults in CRD status

What would you like to be added:
Modify the structure of auditResults in the CRD status to make the results easier to receive and parse.

Why is this needed:
Modify the structure of auditResults in the CRD status to group results from the same namespace together, as follows:

"status": {
    "auditResults": [
        {
            "namespace": "",
            "resultInfos": [
                {
                    "resourceType": "",
                    "resourceInfos": {
                        "name": "",
                        "items": [
                            {
                                "level": "",
                                "message": "",
                                "reason": ""
                            }
                        ]
                    }
                },
                {
                    "resourceType": "",
                    "resourceInfos": {
                        "name": "",
                        "items": [
                            {
                                "level": "",
                                "message": "",
                                "reason": ""
                            }
                        ]
                    }
                }
            ]
        }
    ]
}
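For illustration, hypothetical Go types matching the proposed layout (names are illustrative, not the actual kubeeye API types):

package v1alpha1

// Hypothetical types mirroring the proposed auditResults layout above;
// the real kubeeye API types may differ.
type AuditResult struct {
    Namespace   string       `json:"namespace"`
    ResultInfos []ResultInfo `json:"resultInfos"`
}

type ResultInfo struct {
    ResourceType  string       `json:"resourceType"`
    ResourceInfos ResourceInfo `json:"resourceInfos"`
}

type ResourceInfo struct {
    Name  string       `json:"name"`
    Items []ResultItem `json:"items"`
}

type ResultItem struct {
    Level   string `json:"level"`
    Message string `json:"message"`
    Reason  string `json:"reason"`
}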

A bug occurs when my custom resource ownerReferences has a circular dependency

Two CRDs are set up like this:
Kind A has an ownerReference to Kind B.
Kind B has an ownerReference to Kind A.
When the program fetches resource Kind A or Kind B, it gets stuck in an infinite loop.

The code that causes this problem is in
/kubeeye/pkg/kube/workload.go, around line 73.

Although this scenario is not very common (or maybe it just shouldn't exist), that is what my company does. Does it need to be handled?
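A minimal sketch of guarding the owner walk with a visited set (lookup stands in for the client call that fetches an owner object; this is not the actual kubeeye code):

package main

import (
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/types"
)

// getTopOwner walks ownerReferences upward. Tracking visited UIDs breaks
// the infinite loop when two objects own each other (A -> B -> A).
func getTopOwner(obj metav1.Object, lookup func(metav1.OwnerReference) metav1.Object) metav1.Object {
    visited := map[types.UID]bool{obj.GetUID(): true}
    for {
        owners := obj.GetOwnerReferences()
        if len(owners) == 0 {
            return obj
        }
        owner := lookup(owners[0])
        if owner == nil || visited[owner.GetUID()] {
            // Cycle detected (or owner not found): stop walking.
            return obj
        }
        visited[owner.GetUID()] = true
        obj = owner
    }
}

func main() {
    fmt.Println("see getTopOwner above")
}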

detect wrong node name

What happened:
I tried to install kubesphere in all-in-one mode on my brand-new CVM (a Tencent Cloud virtual machine instance). When I ran ./kk create cluster --with-kubernetes v1.21.5 --with-kubesphere v3.2.1, I got the failure message below:

Error from server (NotFound): nodes "VM-4-12-centos" not found
20:35:24 CST message: [VM-4-12-centos]
remove master taint failed: Failed to exec command: sudo -E /bin/bash -c "/usr/local/bin/kubectl taint nodes VM-4-12-centos node-role.kubernetes.io/master=:NoSchedule-"
Error from server (NotFound): nodes "VM-4-12-centos" not found: Process exited with status 1
20:35:24 CST retry: [VM-4-12-centos]
20:35:29 CST stdout: [VM-4-12-centos]
Error from server (NotFound): nodes "VM-4-12-centos" not found
20:35:29 CST message: [VM-4-12-centos]
remove master taint failed: Failed to exec command: sudo -E /bin/bash -c "/usr/local/bin/kubectl taint nodes VM-4-12-centos node-role.kubernetes.io/master=:NoSchedule-"
Error from server (NotFound): nodes "VM-4-12-centos" not found: Process exited with status 1
20:35:29 CST retry: [VM-4-12-centos]
20:35:34 CST stdout: [VM-4-12-centos]
Error from server (NotFound): nodes "VM-4-12-centos" not found
20:35:34 CST message: [VM-4-12-centos]
remove master taint failed: Failed to exec command: sudo -E /bin/bash -c "/usr/local/bin/kubectl taint nodes VM-4-12-centos node-role.kubernetes.io/master=:NoSchedule-"
Error from server (NotFound): nodes "VM-4-12-centos" not found: Process exited with status 1
20:35:34 CST retry: [VM-4-12-centos]
20:35:39 CST stdout: [VM-4-12-centos]
Error from server (NotFound): nodes "VM-4-12-centos" not found
20:35:39 CST message: [VM-4-12-centos]
remove master taint failed: Failed to exec command: sudo -E /bin/bash -c "/usr/local/bin/kubectl taint nodes VM-4-12-centos node-role.kubernetes.io/master=:NoSchedule-"
Error from server (NotFound): nodes "VM-4-12-centos" not found: Process exited with status 1
20:35:39 CST retry: [VM-4-12-centos]
20:35:44 CST stdout: [VM-4-12-centos]
Error from server (NotFound): nodes "VM-4-12-centos" not found
20:35:44 CST message: [VM-4-12-centos]
remove master taint failed: Failed to exec command: sudo -E /bin/bash -c "/usr/local/bin/kubectl taint nodes VM-4-12-centos node-role.kubernetes.io/master=:NoSchedule-"
Error from server (NotFound): nodes "VM-4-12-centos" not found: Process exited with status 1
20:35:44 CST failed: [VM-4-12-centos]
error: Pipeline[CreateClusterPipeline] execute failed: Module[InitKubernetesModule] exec failed:
failed: [VM-4-12-centos] [RemoveMasterTaint] exec failed after 5 retires: remove master taint failed: Failed to exec command: sudo -E /bin/bash -c "/usr/local/bin/kubectl taint nodes VM-4-12-centos node-role.kubernetes.io/master=:NoSchedule-"
Error from server (NotFound): nodes "VM-4-12-centos" not found: Process exited with status 1

My node name is vm-4-12-centos, which was generated automatically by Tencent Cloud, but the error message shows "/usr/local/bin/kubectl taint nodes VM-4-12-centos node-role.kubernetes.io/master=:NoSchedule-", where VM is uppercase. I think the wrong node name may be detected.

What you expected to happen:
kubernetes and kubesphere installed successfully.

How to reproduce it (as minimally and precisely as possible):
Maybe change the node name to vm-4-12-centos and rerun the install command?

Anything else we need to know?:
None

Environment:

  • Kubeeye version: 2.0.0
  • CentOS 7.6

ke install/audit commands cannot be found after deployment

I tried kubeeye-1.0.0-beta.4.zip and kubeeye-1.0.0-beta.5.tar.gz, but I don't see the commands described in the Markdown docs. Why is that?

[admin@master01 kubeeye]$ ls
ke kubeeye-1.0.0-beta.4 kubeeye-1.0.0-beta.4.zip kubeeye-1.0.0-beta.5.tar.gz kubeeye-v1.0.0-beta.5-linux-amd64.tar.gz
[admin@master01 kubeeye]$ ./ke
inspect finds various problems on Kubernetes cluster.

Usage:
ke [command]

Available Commands:
completion Generate the autocompletion script for the specified shell
create create inspect job on Kubernetes cluster.
get Get Inspect the Config or Result for Cluster
help Help about any command

Flags:
-h, --help help for ke
--kube-config string kube config

Use "ke [command] --help" for more information about a command.
[admin@master01 kubeeye]$ kubeeye
inspect finds various problems on Kubernetes cluster.

Usage:
ke [command]

Available Commands:
completion Generate the autocompletion script for the specified shell
create create inspect job on Kubernetes cluster.
get Get Inspect the Config or Result for Cluster
help Help about any command

Flags:
-h, --help help for ke
--kube-config string kube config

Use "ke [command] --help" for more information about a command.
[admin@master01 kubeeye]$

add custom rules and refactoring

User Story

As users running kubeeye in a production environment, we not only use OPA rule validation; we also need to

  1. check node SSH connectivity
  2. check Kubernetes certificate expiry
  3. check component startup configuration consistency, such as the kubelet command-line parameter --root-dir

and so on.
While meeting the above, we also hope to extend kubeeye out of tree.

Detailed Description

Based on the above points, we extend kubeeye and refactor the code.

Feature Description

We have added the following features

custom command

Expand kubeeye's command line

Embed Rules

Embed rules: package the rules into kubeeye for easy use.

  • OPA rules
  • Function rules

Function rules provide more customized checks. For example, by using a shell or calling a third-party interface, you can wrap the logic in a function that returns output in the agreed format, which can then be displayed uniformly in the report.
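As a sketch, a function rule could be as small as the following interface (names are hypothetical; the refactor's actual interface may differ):

package funcrules

import "context"

// ResultItem is a finding in the agreed report format, so function-rule
// output can be rendered uniformly alongside OPA results.
type ResultItem struct {
    Level   string
    Message string
    Reason  string
}

// FuncRule is the hypothetical contract for a function check rule. A
// node-SSH-connectivity check, for example, would dial each node inside
// Check and return one ResultItem per failure.
type FuncRule interface {
    Name() string
    Check(ctx context.Context) ([]ResultItem, error)
}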

Why
custom command

On the one hand, kubeeye can be embedded as a subcommand of other command-line tools; on the other hand, other command-line tools can also be mounted as kubeeye commands.

Embed Rules

Checklists differ across environments and businesses, but they have things in common. If the checklist is maintained only through an external directory, it becomes redundant. Therefore, we can package it as a whole and later enable or disable business checks through a configuration file, such as:

apiVersion: v1
kind: ConfigMap
metadata:
  name: kubeeye-<xxxx>-rules
  namespace: kube-system
data:
  version: "v1"
  regorules: |
    enable: 
      - name: allowPrivilegeEscalationRule
      - name: canImpersonateUserRoleRule
    disable: 
      - name: "*"
  funcrules: |
    enable:
      - name: nodeSSHConnection
    disable:
      - name: xxxxStatus

The configuration file feature is still in progress.

How
custom command

A kubeeye command is defined using the builder pattern. You can assemble it with any command, Rego rule, and function rule; finally, a cobra command is returned.

Embed Rules
  • OPA rules use the Go 1.16 embed feature, which packages files into the compiled binary. Whether for default rules or additional rules, you must use a variable to embed the OPA rules (see the sketch below).
  • Function rules are much simpler: because they are Go code themselves, they are packaged and compiled directly through an import.
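In practice the embedded rule set is just a variable (the directory layout here is an assumption):

package regorules

import "embed"

// RegoRules holds the default OPA rules compiled into the binary via the
// Go 1.16 embed feature; an out-of-tree rule set declares its own
// variable the same way and registers it.
//
//go:embed rules/*.rego
var RegoRules embed.FS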

Refactor Description

In order to better add new features, we have adjusted the code structure.

  • Added directories funcrules, regorules, register

    funcrules: stores the default function rules

    regorules: stores the default Rego rules

    register: registers the rules

  • Use Go channels with fan-in to connect the pipeline stages in series; the main entry point is audit.Run.

  • Use fs.FS to abstract local-file and embedded-file operations.

  • Simplify the output functions.

  • Some channels have been merged

Anything else you would like to add:

https://github.com/leonharetd/kubeeye is the refactored kubeeye code.
https://github.com/leonharetd/kubeeye_sample is a kubeeye sample.
These are some of my practices; feedback is welcome. Thank you very much 🙏.

Support for third-party plugins

What would you like to be added:
Add third-party plugins to support more extended functions

Why is this needed:
A rich set of plugins can meet the needs of various application scenarios and different users.

1. kube-bench: checks your Kubernetes cluster against the CIS Kubernetes Benchmark, which is aimed at keeping clusters secure.

2. kube-hunter: hunts for security weaknesses in Kubernetes clusters. The tool was developed to increase awareness and visibility of security issues in Kubernetes environments.

others:
KubeLinter

Support Falco as a plugin for kubeeye

What is Falco

The Falco Project is an open source runtime security tool.

What does Falco do

Falco uses system calls to secure and monitor a system, by:

  • Parsing the Linux system calls from the kernel at runtime
  • Asserting the stream against a powerful rules engine
  • Alerting when a rule is violated

support operator

What would you like to be added:
Enable kubeeye to support the operator pattern, and insert the audit results into the CRD status.

Why is this needed:
Through the operator pattern, inserting the audit results into the CRD status enables other services to get results through the Kubernetes API server, expanding the scenarios in which kubeeye can be used.

With the operator, you can get the results in the CRD's status:

kubectl get clusterinsight -o yaml
apiVersion: v1
items:
- apiVersion: kubeeye.kubesphere.io/v1alpha1
  kind: ClusterInsight
  metadata:
    name: clusterinsight-sample
    namespace: default
  spec:
    auditPeriod: 24h
  status:
    auditResults:
      auditResults:
      - resourcesType: Node
        resultInfos:
        - namespace: ""
          resourceInfos:
          - items:
            - level: waring
              message: KubeletHasNoSufficientMemory
              reason: kubelet has no sufficient memory available
            - level: waring
              message: KubeletHasNoSufficientPID
              reason: kubelet has no sufficient PID available
            - level: waring
              message: KubeletHasDiskPressure
              reason: kubelet has disk pressure
            name: kubeeyeNode
