kubevious / kubevious

Kubevious - Kubernetes without disasters

Home Page: https://kubevious.io

License: Apache License 2.0

kubernetes kubernetes-monitoring kubernetes-dashboard cloud cloud-native microservices docker troubleshooting configuration assurance

kubevious's Introduction


Kubevious (pronounced [kju:bvi:əs]) is a suite of app-centric assurance, validation, and introspection products for Kubernetes. It helps run modern Kubernetes applications without disasters and costly outages by continuously validating application manifests, cluster state, and configuration. Kubevious projects detect and prevent errors (typos, misconfigurations, conflicts, inconsistencies) and violations of best practices. Our secret sauce is the ability to validate across multiple manifests and look at the configuration from the application vantage point.

Kubevious CLI

Kubevious CLI is a standalone tool that validates YAML manifests for syntax, semantics, conflicts, compliance, and security best-practice violations. It can easily be used during active development and integrated into GitOps processes and CI/CD pipelines to validate changes against live Kubernetes clusters. It is our newest project, built on the lessons learned from and the foundation of the Kubevious Dashboard.

Learn more about securing your Kubernetes apps and clusters here: https://github.com/kubevious/cli
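
A minimal sketch of the workflow, assuming installation via the npm package and the lint/guard subcommands documented in the CLI repository:

npm install -g kubevious         # install the CLI (requires Node.js)
kubevious lint ./manifests       # validate local YAML manifests for errors and best practices
kubevious guard ./manifests      # also validate against the live cluster state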

Kubevious CLI Video

Kubevious Dashboard

Kubevious Dashboard is a web app that delivers unique, intuitive, app-centric insights, introspects Kubernetes manifests, and provides troubleshooting tools for cloud-native applications. It works right out of the box, and it only takes a few minutes to get Kubevious up and running for existing production applications.

Learn more about introspecting Kubernetes apps and clusters here: https://github.com/kubevious/kubevious/blob/main/projects/DASHBOARD.md
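
For reference, the installation steps quoted in the repository instructions (and echoed in the issues below) boil down to the following; optionally pin a chart version with --version:

kubectl create namespace kubevious
helm repo add kubevious https://helm.kubevious.io
helm upgrade --atomic -i kubevious kubevious/kubevious -n kubevious
kubectl port-forward $(kubectl get pods -n kubevious -l "app.kubernetes.io/component=kubevious-ui" -o jsonpath="{.items[0].metadata.name}") 8080:80 -n kubevious
# then open http://localhost:8080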

Kubevious Intro

🧑🏻‍🤝‍🧑🏿 Community

💬 Slack

Join the Kubevious Slack workspace to chat with Kubevious developers and users. This is a good place to learn about Kubevious, ask questions, and share your experiences.

🏗️ Contributing

We invite your participation through issues and pull requests! You can peruse the contributing guidelines.

🏛️ Governance

The Kubevious project is created by AUTHORS. Governance policy is yet to be defined.

🚀 Roadmap

Kubevious maintains a public roadmap, which outlines priorities and the capabilities we plan to add to Kubevious.

📜 License

Kubevious is an open-source project licensed under the Apache License, Version 2.0.

📢 What people say about Kubevious

If you want your article describing the experience with Kubevious posted here, please submit a PR.

kubevious's People

Contributors

cfarrend, gitter-badger, kostis-codefresh, kubevious, rubenhak, saiyam1814, sempukh, tanmay-bhat, thelan, vfarcic


kubevious's Issues

mysql is not booting: No space left on device

Describe the bug

After deploying with the Helm chart, the mysql pod is stuck in a CrashLoopBackOff state. Here is the error log:

mysql 2020-12-02 20:43:15+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.22-1debian10 started.                                                         
mysql 2020-12-02 20:43:16+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'                                                                                  
mysql 2020-12-02 20:43:16+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.22-1debian10 started.                                                         
mysql 2020-12-02T20:43:16.346666Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.0.22) starting as process 1                                                  
mysql 2020-12-02T20:43:16.355244Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started.                                                                      
mysql 2020-12-02T20:43:17.534375Z 1 [Warning] [MY-012637] [InnoDB] 1048576 bytes should have been written. Only 720896 bytes written. Retrying for the remaining bytes.   
mysql 2020-12-02T20:43:17.534530Z 1 [Warning] [MY-012638] [InnoDB] Retry attempts for writing partial data failed.                                                        
mysql 2020-12-02T20:43:17.534645Z 1 [ERROR] [MY-012639] [InnoDB] Write to file ./ibtmp1 failed at offset 3145728, 1048576 bytes should have been written, only 720896 were
mysql 2020-12-02T20:43:17.534766Z 1 [ERROR] [MY-012640] [InnoDB] Error number 28 means 'No space left on device'                                                          
mysql 2020-12-02T20:43:17.534993Z 1 [ERROR] [MY-012267] [InnoDB] Could not set the file size of './ibtmp1'. Probably out of disk space                                    
mysql 2020-12-02T20:43:17.535110Z 1 [ERROR] [MY-012926] [InnoDB] Unable to create the shared innodb_temporary.                                                            
mysql 2020-12-02T20:43:17.535234Z 1 [ERROR] [MY-012930] [InnoDB] Plugin initialization aborted with error Generic error.                                                  
mysql 2020-12-02T20:43:17.922041Z 1 [ERROR] [MY-010334] [Server] Failed to initialize DD Storage Engine                                                                   
mysql 2020-12-02T20:43:17.922357Z 0 [ERROR] [MY-010020] [Server] Data Dictionary initialization failed.                                                                   
mysql 2020-12-02T20:43:17.922691Z 0 [ERROR] [MY-010119] [Server] Aborting                                                                                                 
mysql 2020-12-02T20:43:17.923157Z 0 [System] [MY-010910] [Server] /usr/sbin/mysqld: Shutdown complete (mysqld 8.0.22)  MySQL Community Server - GPL.                      

To Reproduce

Install using the Helm chart

Expected behavior

The mysql pod should start without crashing.

Environment Details:

  • aks v1.16.13
  • Version v0.7.26

MySqlDriver Error

Hello!
I've installed kubevious via helm template:
helm template --namespace kubevious . > kubevious.yaml
kubectl -n kubevious create -f kubevious.yaml

I get a 500 "error not connected" in the UI and the following error in the kubevious pod:
[2020-11-05 10:14:00.968 +0000] ERROR (MySqlDriver/1 on kubevious-859c868868-hvm8k): [executeInTransaction] TX Failed.
[2020-11-05 10:14:00.969 +0000] ERROR (MySqlDriver/1 on kubevious-859c868868-hvm8k): [executeInTransaction] Rolling Back.
[2020-11-05 10:14:00.970 +0000] ERROR (MySqlDriver/1 on kubevious-859c868868-hvm8k): [executeInTransaction] Rollback complete.
[2020-11-05 10:14:00.970 +0000] ERROR (MySqlDriver/1 on kubevious-859c868868-hvm8k): [_acceptConnection] failed: Error: Cannot find module './migrators/7'
Require stack:

  • /app/lib/db/index.js
  • /app/lib/context.js
  • /app/index.js
    at Function.Module._resolveFilename (internal/modules/cjs/loader.js:966:15)
    at Function.Module._load (internal/modules/cjs/loader.js:842:27)
    at Module.require (internal/modules/cjs/loader.js:1026:19)
    at require (internal/modules/cjs/helpers.js:72:18)
    at Database._processVersionMigration (/app/lib/db/index.js:114:24)
    at /app/lib/db/index.js:105:74
    at /app/node_modules/kubevious-kubik/node_modules/the-promise/lib/serial.js:9:36
    at tryCatcher (/app/node_modules/bluebird/js/release/util.js:16:23)
    at Object.gotValue (/app/node_modules/bluebird/js/release/reduce.js:166:18)
    at Object.gotAccum (/app/node_modules/bluebird/js/release/reduce.js:155:25)
    at Object.tryCatcher (/app/node_modules/bluebird/js/release/util.js:16:23)
    at Promise._settlePromiseFromHandler (/app/node_modules/bluebird/js/release/promise.js:547:31)
    at Promise._settlePromise (/app/node_modules/bluebird/js/release/promise.js:604:18)
    at Promise._settlePromiseCtx (/app/node_modules/bluebird/js/release/promise.js:641:10)
    at _drainQueueStep (/app/node_modules/bluebird/js/release/async.js:97:12)
    at _drainQueue (/app/node_modules/bluebird/js/release/async.js:86:9)
    [2020-11-05 10:14:00.970 +0000] INFO (MySqlDriver/1 on kubevious-859c868868-hvm8k): [_disconnect]

The mysql pod is up and the StatefulSet is ready.

Environment Details:

  • Kubernetes version
    Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.4", GitCommit:"8d8aa39598534325ad77120c120a22b3a990b5ea", GitTreeState:"clean", BuildDate:"2020-03-12T20:55:23Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
  • Browser: Firefox
  • Version [e.g. v0.3]

Feature Proposal: Support for Authentication

Is your feature request related to a problem? Please describe.

  • Authentication is necessary for dashboard-based applications since they are meant to be exposed.
  • Right now, kubevious doesn't have any auth feature, so anyone exposing it has to find an alternative: an API gateway, basic auth via Ingress, or an nginx sidecar container.
  • Enabling auth for kubevious would therefore be great for securing the endpoints.

History DB Auto-cleanup

The history DB grows without bound and would eventually cause an out-of-space error on MySQL. Implement an auto-cleanup capability, with cleanup based on time and/or disk space.

Empty universe

Describe the bug

The Universe tab, as well as the other tabs, is completely empty.

To Reproduce

Steps to reproduce the behavior:

  1. helm upgrade --atomic -i -n kubevious \
--version 0.7.13 \
--set ingress.enabled=true \
--set ingress.class=nginx \
--set ingress.domain=<my-domain> \
--set provider=eks \
kubevious kubevious/kubevious
  2. Open the UI

Expected behavior

A clear and concise description of what you expected to happen.

Screenshots

Screenshot-from-2020-08-30-23-10-04

Environment Details:

  • Kubernetes Distribution/Version: v1.16.8-eks
  • Browser: chrome
  • Version 85.0.4183.83 (Official Build) (64-bit)

Additional context

  • I don't see any error in the Kubevious pods
  • Noticed a websocket error "Error during WebSocket handshake: Unexpected response code: 400", but this error also appears on the Kubevious demo page and doesn't seem to have any impact
  • The Parser pod logs show that it is detecting changes in the cluster and acting on them.
  • But the polling request responses seem small and empty

Backend: Gracefully handling parsing errors

Gracefully handle parsing errors. Currently, a single error prevents anything from rendering on the screen.

We need something smarter than try/catch: accumulate exceptions, deduplicate them, and generate a report that includes which parsers failed and on which objects. Ideally, the report package would be downloadable from the UI.

Even better would be a button in the UI that opens a GitHub issue with the bug details prefilled (see the example URL below). Instructions on prefilling issues with query parameters: https://help.github.com/en/github/managing-your-work-on-github/about-automation-for-issues-and-pull-requests-with-query-parameters

File: src/lib/logic/processor.js
Function: _processHandler
Line: 232: handlerInfo.handler(handlerArgs);
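
For reference, GitHub's query-parameter prefilling amounts to opening a URL of this shape (the title and body values here are purely illustrative):

https://github.com/kubevious/kubevious/issues/new?title=Parser+failure+in+030_app-controllers&body=Handler+failed+on+object+XYZ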

Feature Proposal: 3rd Party API Gateway(Traefik Proxy) Validation Support

Scope

We want to make usage of 3rd party API Gateways easier and safer. Kubevious is already equipped with a Gateway View where Ingresses, Services, and Applications are correlated and presented in a Domain -> URL -> Ingress -> Service -> Service Port -> Container Port -> Application path. We want to extend the Gateway View and add support for popular API Gateways such as Traefik, Kong, Istio, Ambassador, Skipper, etc.

Kubevious already validates Service selectors in Ingresses, and Pod and Port selectors in Services. We also want to validate 3rd party API Gateways to detect errors early and aid with troubleshooting. That could require correlating 3rd party CRDs and other runtime sources to build a clearer understanding of what's going on in the cluster, the application, and the API Gateway.

Requirements

  • API Gateway to be supported: Traefik Proxy
  • Parse and integrate IngressRoute, TraefikService, Middleware, and TLSOptions
  • Objects to be parsed under the Gateway View

Validation Logic

  • Detect missing Service and TraefikService
  • Detect missing Ports
  • Detect missing Middleware
  • Detect missing TLSOptions
  • Detect unused TraefikService
  • Detect unused Middleware
  • Detect unused TLSOptions

Validator documentation: https://kubevious.io/docs/built-in-validators/traefik-proxy/
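
To illustrate the kind of cross-manifest reference these validators check, here is a sketch of an IngressRoute whose references must resolve (assuming the traefik.containo.us/v1alpha1 API group used by Traefik Proxy 2.x; names are illustrative):

apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: my-app
  namespace: prod
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`app.example.com`)
      kind: Rule
      services:
        - name: my-app          # must match an existing Service or TraefikService
          port: 8080            # must match a port exposed by that Service
      middlewares:
        - name: rate-limit      # must match an existing Middleware
  tls:
    options:
      name: modern-tls          # must match existing TLSOptions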

DRI

@rubenhak

Current State

Traefik Proxy native support is already available in version 1.0.7.

Progress

  • ✅ Draft idea description
  • ✅ Gather initial interest from the community.
  • ✅ Define high-level requirements
  • ✅ Form a working group and elect a DRI
  • ✅ Clarify implementation specifics
  • ✅ Fun part - coding
  • ✅ Beta
  • ✅ Released

Appendixes

Kubevious Gateway View

A glimpse of what Kubevious does for K8s Ingresses. We want to do the same (and more) for 3rd party API Gateways.
Gateway View

Legend

✅ - Complete
👉 - Current / active stage

Mysql unstable in minikube cluster

Describe the bug

When I install kubevious into an empty cluster (e.g. with minikube), not all the components become ready together.

To Reproduce

Steps to reproduce the behavior:

minikube start
# Copy-pasted from https://github.com/kubevious/kubevious#running-kubevious
kubectl create namespace kubevious
helm repo add kubevious https://helm.kubevious.io
helm upgrade --atomic -i kubevious kubevious/kubevious --version 1.0.7 -n kubevious
kubectl port-forward $(kubectl get pods -n kubevious -l "app.kubernetes.io/component=kubevious-ui" -o jsonpath="{.items[0].metadata.name}") 8080:80 -n kubevious 
# Visit localhost:8080

Multiple problems play together at this point:

  1. The UI is rendered in the browser, but it's basically empty (see Screenshots).
  2. On almost every request (e.g. reloading the UI), the mysql Pod briefly becomes un-Ready (red in k9s) because the readiness probe fails. I am not sure if this warning log message is related:
    34 [Warning] [MY-010055] [Server] IP address '172.17.0.1' could not be resolved: Name or service not known
    

Expected behavior

I expected all Pods to become Ready and for the UI to not be empty.

Screenshots

Empty UI

Environment Details:

  • Kubernetes Distribution/Version: k8s v1.23.3 on minikube v1.25.2
  • Browser: Brave/Chromium
  • Version: 1.39.111 (Chromium v102.0.5005.61)

Error: Can't add new command when connection is in closed state

Describe the bug

Error when re-establishing the connection to MySQL.

[2020-06-12 21:30:21.465 +0000] ERROR (MySqlDriver/1 on kubevious-6f59b8958f-4cz2p): [executeStatement] ERROR IN RULE_Q
    at Connection._addCommandClosedState (/app/node_modules/mysql2/lib/connection.js:137:17)
    at PreparedStatementInfo.execute (/app/node_modules/mysql2/lib/commands/prepare.js:27:29)
    at /app/node_modules/kubevious-helpers/lib/mysql-driver.js:116:31
    at Promise._execute (/app/node_modules/bluebird/js/release/debuggability.js:384:9)
    at Promise._resolveFromExecutor (/app/node_modules/bluebird/js/release/promise.js:518:18)
    at new Promise (/app/node_modules/bluebird/js/release/promise.js:103:10)
    at /app/node_modules/kubevious-helpers/lib/mysql-driver.js:113:24
    at tryCatcher (/app/node_modules/bluebird/js/release/util.js:16:23)
    at Promise._settlePromiseFromHandler (/app/node_modules/bluebird/js/release/promise.js:547:31)
    at Promise._settlePromise (/app/node_modules/bluebird/js/release/promise.js:604:18)
    at Promise._settlePromiseCtx (/app/node_modules/bluebird/js/release/promise.js:641:10)
    at _drainQueueStep (/app/node_modules/bluebird/js/release/async.js:97:12)
    at _drainQueue (/app/node_modules/bluebird/js/release/async.js:86:9)
    at Async._drainQueues (/app/node_modules/bluebird/js/release/async.js:102:5)
    at Immediate.Async.drainQueues [as _onImmediate] (/app/node_modules/bluebird/js/release/async.js:15:14)
    at processImmediate (internal/timers.js:456:21)
Unhandled rejection Error: Can't add new command when connection is in closed state
    at Connection._addCommandClosedState (/app/node_modules/mysql2/lib/connection.js:137:17)
    at PreparedStatementInfo.execute (/app/node_modules/mysql2/lib/commands/prepare.js:27:29)
    at /app/node_modules/kubevious-helpers/lib/mysql-driver.js:116:31
    at Promise._execute (/app/node_modules/bluebird/js/release/debuggability.js:384:9)
    at Promise._resolveFromExecutor (/app/node_modules/bluebird/js/release/promise.js:518:18)
    at new Promise (/app/node_modules/bluebird/js/release/promise.js:103:10)
    at /app/node_modules/kubevious-helpers/lib/mysql-driver.js:113:24
    at tryCatcher (/app/node_modules/bluebird/js/release/util.js:16:23)
    at Promise._settlePromiseFromHandler (/app/node_modules/bluebird/js/release/promise.js:547:31)
    at Promise._settlePromise (/app/node_modules/bluebird/js/release/promise.js:604:18)
    at Promise._settlePromiseCtx (/app/node_modules/bluebird/js/release/promise.js:641:10)
    at _drainQueueStep (/app/node_modules/bluebird/js/release/async.js:97:12)
    at _drainQueue (/app/node_modules/bluebird/js/release/async.js:86:9)
    at Async._drainQueues (/app/node_modules/bluebird/js/release/async.js:102:5)
    at Immediate.Async.drainQueues [as _onImmediate] (/app/node_modules/bluebird/js/release/async.js:15:14)
    at processImmediate (internal/timers.js:456:21)

Namespace and K8s labels as filter

Is your feature request related to a problem? Please describe.

Is it possible to have a filter on namespace and labels?
A user may be using a kube context with permission to access only one namespace; in that case this might break.
Sometimes we also need to view only certain resources based on labels.

Allow option to load MYSQL_PASS from file instead of ENV

Is your feature request related to a problem? Please describe.

No, but having the option to load the password from a file would allow easy integration with external centralized secret storage such as HashiCorp Vault. This is great for operators that do not want to store credentials in Kubernetes etcd as a Secret (which is unencrypted by default) and instead rely on an external vault service.

The Vault Agent Injector can automatically mount the secret as a file into the kubevious pod, and the file path can then be passed in the arguments.

Describe the solution you'd like

Allow passing an argument such as the following to load the password from a file:

--mysql-password-from-file=/vault/secrets/mysql-pass

If the file is not specified, fall back to the ENV variable or a string literal: --mysql-password-literal=myawesomepassword

Describe alternatives you've considered

Patch the Deployment with HashiCorp Vault Agent Injector annotations to mount the secret, then template it into an ENV variable.
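
For context, that alternative roughly amounts to adding the Vault Agent Injector annotations to the kubevious Deployment's pod template (a sketch; the Vault role and secret path are illustrative):

spec:
  template:
    metadata:
      annotations:
        vault.hashicorp.com/agent-inject: "true"
        vault.hashicorp.com/role: "kubevious"                                   # illustrative Vault role
        vault.hashicorp.com/agent-inject-secret-mysql-pass: "secret/data/kubevious/mysql"  # illustrative secret path
        # the injector renders the secret to /vault/secrets/mysql-pass inside the pod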

Additional context

None

guard validate.sh script fails with an error

Describe the bug

When running the Guard validate script, I get the error below.

cat deployment.yaml | sh <(curl -sfL https://run.kubevious.io/validate.sh)
/dev/fd/63: 2: Bad substitution
/dev/fd/63: 2: Bad substitution

**** KUBEVIOUS CHANGE VALIDATOR ****

/dev/fd/63: 121: Syntax error: "(" unexpected (expecting "}")

To Reproduce

Steps to reproduce the behavior:

  1. On a fresh EC2 Ubuntu 20.04 machine
  2. Copy the following deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kuard
  namespace: roshan
spec:
  selector:
    matchLabels:
      app: kuard
      env: prod
  replicas: 1
  revisionHistoryLimit: 10
  minReadySeconds: 60
  progressDeadlineSeconds: 600
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  template:
    metadata:
      labels:
        app: kuard
        env: prod
    spec:
      containers:
        - name: kuard
          image: gcr.io/kuar-demo/kuard-amd64:green
          resources:
            limits:
              memory: "128Mi"
              cpu: "250m"
          ports:
            - containerPort: 8080
              name: http
              protocol: TCP
          readinessProbe:
            httpGet:
              port: 8080
              path: /ready
            initialDelaySeconds: 2
            timeoutSeconds: 1
            periodSeconds: 10
            failureThreshold: 3
  3. Run the command: cat deployment.yaml | sh <(curl -sfL https://run.kubevious.io/validate.sh)

Expected behavior

The validate script should show issues with the YAML.

Screenshots

If applicable, add screenshots to help explain your problem.

Environment Details:

  • Kubernetes Distribution/Version [minikube]
  • Browser [chrome]
  • Version [1.0.10]

Additional context

I tried changing sh to bash, but then I ran into another issue, shown below.

cat deployment.yaml | bash <(curl -sfL https://run.kubevious.io/validate.sh)

**** KUBEVIOUS CHANGE VALIDATOR ****

👉 *** Reading Input...
✅     Reading Input. Done.
👉 *** Parsing Input...
Input Length: 1578
✅     Parsing Input. Done.
👉 *** Building Package...
✅     Building Package. Done.
👉 *** Apply Package...
error: error parsing STDIN: error converting YAML to JSON: yaml: line 10: could not find expected ':'
🔴🔴🔴
🔴🔴🔴 ERROR: Could not apply change request. Make sure you have write access to kubevious.io/v1 ChangePackage
🔴🔴🔴

kubevious UI is not loading.

I am using an EKS cluster with Kubernetes version 1.15+. The UI does not load any data. My cluster is huge, probably more than 500 pods. Is there any way I can debug further?
I see the error below in the kubevious pod:

[2021-02-18 03:34:37.152 +0000] ERROR (Worldvious/1 on kubevious-6454dbdbd-hcqjk): Failed https://api.kubevious.io/api/v1/oss/report/version. Request failed with status code 500
[2021-02-18 03:34:37.154 +0000] ERROR (Worldvious/1 on kubevious-6454dbdbd-hcqjk): version-check failed. reason: Request failed with status code 500


Define log level on deployment

Hello

It would be nice to have the ability to set the log level of each component (warning/error/info/debug, etc.).

I'm using helm chart version 0.7.18.

Error: release kubevious failed, and has been uninstalled due to atomic being set: timed out waiting for the condition

Describe the bug

I am trying to run the tool on minikube on my MacBook Pro M1, and the install fails with the error message below.
❯ helm upgrade --atomic -i kubevious kubevious/kubevious --version 0.8.15 -n kubevious

Release "kubevious" does not exist. Installing it now.
Error: release kubevious failed, and has been uninstalled due to atomic being set: timed out waiting for the condition

~ took 5m3s

To Reproduce

Steps to reproduce the behavior:
~ took 25s
❯ kubectl create namespace kubevious

namespace/kubevious created

~
❯ helm repo add kubevious https://helm.kubevious.io

"kubevious" has been added to your repositories

~ took 2s
❯ helm upgrade --atomic -i kubevious kubevious/kubevious --version 0.8.15 -n kubevious

Release "kubevious" does not exist. Installing it now.
Error: release kubevious failed, and has been uninstalled due to atomic being set: timed out waiting for the condition

Expected behavior

A Kubevious system up and running.

Screenshots

If applicable, add screenshots to help explain your problem.

Environment Details:

MacBook Pro M1
❯ minikube start
😄 minikube v1.23.2 on Darwin 12.0.1 (arm64)
🎉 minikube 1.24.0 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.24.0
💡 To disable this notice, run: 'minikube config set WantUpdateNotification false'

✨ Using the docker driver based on existing profile
👍 Starting control plane node minikube in cluster minikube
🚜 Pulling base image ...
🔄 Restarting existing docker container for "minikube" ...
🐳 Preparing Kubernetes v1.22.2 on Docker 20.10.8 ...
🔎 Verifying Kubernetes components...
▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
╭──────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ Registry addon with docker driver uses port 54143 please use that instead of default port 5000 │
│ │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────╯
📘 For more information see: https://minikube.sigs.k8s.io/docs/drivers/docker
▪ Using image registry:2.7.1
▪ Using image gcr.io/google_containers/kube-registry-proxy:0.4
🔎 Verifying registry addon...
🌟 Enabled addons: storage-provisioner, default-storageclass, registry
🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

Error while opening UI: 426 (Upgrade Required)

Describe the bug

When I proxy the UI through kubectl, the UI is loaded (although kind of empty) and I'm getting the following error:

index.js:16 GET http://localhost:3000/socket/?EIO=3&transport=polling&t=NKnDaco 426 (Upgrade Required)

All other pods are running without any errors.

Log from kubevious-ui:

127.0.0.1 - - [16/Oct/2020:11:49:12 +0000] "GET /socket/?EIO=3&transport=polling&t=NKnFX7v HTTP/1.1" 426 0 "http://localhost:3000/?tme=false&tmdt=RnJpIE9jdCAxNiAyMDIwIDEzOjQ5OjAyIEdNVCswMjAwIChDZW50cmFsIEV1cm9wZWFuIFN1bW1lciBUaW1lKQ==&tmd=MjQ=&tmdaf=VGh1IE9jdCAxNSAyMDIwIDEzOjQ5OjAyIEdNVCswMjAwIChDZW50cmFsIEV1cm9wZWFuIFN1bW1lciBUaW1lKQ==" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.75 Safari/537.36" "-"

To Reproduce

Steps to reproduce the behavior:

  1. Run kubectl port-forward $(kubectl get pod -l k8s-app=kubevious-ui -n kubevious -o jsonpath="{.items[0].metadata.name}") 3000:80 -n kubevious
  2. Open localhost:3000

Expected behavior

UI with all the information

Screenshots

If applicable, add screenshots to help explain your problem.

Environment Details:

  • AKS (v1.16.13)
  • Chrome 86.0.4240.75 (Official Build) (64-bit)
  • Version 0.6.36

Error: Request failed with status code 413

Describe the bug

I installed kubevious via the latest helm chart and noticed an empty dashboard.

Logs from the kubevious pod:

PayloadTooLargeError: request entity too large
    at readStream (/app/node_modules/raw-body/index.js:155:17)
    at getRawBody (/app/node_modules/raw-body/index.js:108:12)
    at read (/app/node_modules/body-parser/lib/read.js:77:3)
    at jsonParser (/app/node_modules/body-parser/lib/types/json.js:135:5)
    at Layer.handle [as handle_request] (/app/node_modules/express/lib/router/layer.js:95:5)
    at trim_prefix (/app/node_modules/express/lib/router/index.js:317:13)
    at /app/node_modules/express/lib/router/index.js:284:7
    at Function.process_params (/app/node_modules/express/lib/router/index.js:335:12)
    at next (/app/node_modules/express/lib/router/index.js:275:10)
    at expressInit (/app/node_modules/express/lib/middleware/init.js:40:5)
    at Layer.handle [as handle_request] (/app/node_modules/express/lib/router/layer.js:95:5)
    at trim_prefix (/app/node_modules/express/lib/router/index.js:317:13)
    at /app/node_modules/express/lib/router/index.js:284:7
    at Function.process_params (/app/node_modules/express/lib/router/index.js:335:12)
    at next (/app/node_modules/express/lib/router/index.js:275:10)
    at query (/app/node_modules/express/lib/middleware/query.js:45:5)

kubevious-parser log files:

[2020-04-05 07:56:25.981 +0000] INFO  (FacadeDampener/1 on kubevious-parser-5d9f749865-6rtz5): [_tryProcessJob] empty
[2020-04-05 07:56:28.246 +0000] ERROR (ReporterTarget/1 on kubevious-parser-5d9f749865-6rtz5): [request]  Error: Request failed with status code 413
    at createError (/app/node_modules/axios/lib/core/createError.js:16:15)
    at settle (/app/node_modules/axios/lib/core/settle.js:17:12)
    at IncomingMessage.handleStreamEnd (/app/node_modules/axios/lib/adapters/http.js:236:11)
    at IncomingMessage.emit (events.js:323:22)
    at endReadableNT (_stream_readable.js:1204:12)
    at processTicksAndRejections (internal/process/task_queues.js:84:21)
[2020-04-05 07:56:28.247 +0000] ERROR (ReporterDampener/1 on kubevious-parser-5d9f749865-6rtz5): [_processJob]  Error: Request failed with status code 413
    at createError (/app/node_modules/axios/lib/core/createError.js:16:15)
    at settle (/app/node_modules/axios/lib/core/settle.js:17:12)
    at IncomingMessage.handleStreamEnd (/app/node_modules/axios/lib/adapters/http.js:236:11)
    at IncomingMessage.emit (events.js:323:22)
    at endReadableNT (_stream_readable.js:1204:12)
    at processTicksAndRejections (internal/process/task_queues.js:84:21)
[2020-04-05 07:56:28.247 +0000] WARN  (ReporterDampener/1 on kubevious-parser-5d9f749865-6rtz5): [_tryProcessJob] last job failed

To Reproduce

Steps to reproduce the behavior:

  1. install helm chart
  2. Dashboard empty
  3. kubectl logs kubevious and kubevious-parser-....

Environment Details:

  • Kubernetes Current version: 1.16.6-do.0
  • Chrome

Feature Proposal: Kubevious Guard - best practices enforcer

Scope

Kubevious continuously validates Kubernetes configuration and state for misconfigurations, conflicts, and violations of best practices. It is equipped with built-in Kubernetes validations as well as an extensible rules engine that allows arbitrary conditions to be validated using JavaScript-like if-then-else logic. What makes the Kubevious validation engines unique is the ability to execute validation across multiple manifests and state objects, following the same app-centric views used in the UI.

Currently, Kubevious validations are only informative: Kubevious displays violations in the dashboards and preserves the history of violations. We want to extend Kubevious with the ability to prevent bad, conflicting, or invalid changes from entering the cluster in the first place. Kubevious detects missing resources such as ConfigMaps, Secrets, Services, labels, etc.

This proposal is to implement a component called Kubevious Guard, which should look at the changes as a whole instead of validating each YAML manifest independently. By doing so, Kubevious Guard will accept changes, such as an object rename, that would otherwise be considered invalid. For example, Kubevious Guard should allow renaming a ConfigMap used in a Deployment as environment variables if, and only if, the reference in the Deployment is also correctly updated.

There are three levels where validations can be performed:

  1. Kubevious CLI - validates changes as a whole: helm chart + overrides, YAML files (new capability)
  2. Kubevious Admission Webhook - validates individual YAML changes (frozen capability, see the reasoning in considerations)
  3. Kubevious - monitors all changes in the cluster and notifies if there are currently any violations, effectively acting as an audit. (existing capability)

Requirements

  • A new KubeviousCLI tool which takes a helm chart + overrides or YAML manifests and validates them against the rules and violations already enabled in Kubevious.
  • KubeviousCLI should communicate with the Kubevious backend that runs in the Cluster. Possible options for communication:
    • Kubevious gets exposed using Ingress, KubeviousCLI communicates using REST APIs.
    • KubeviousCLI uploads changes to a CRD, Kubevious monitors the CRD, executes validation logic, and updates the result in the status CRD.
  • Kubevious should consider the current state of the cluster and apply changes uploaded from KubeviousCLI.
  • Kubevious Guard should validate whether the change package introduces additional violations compared with those present before applying the package. Such changes should be blocked.
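
As released, this flow ended up in the Kubevious CLI; a minimal sketch of the intended usage, assuming the guard subcommand validates rendered manifests against the live cluster pointed to by the current kubectl context:

helm template my-app ./chart -f overrides.yaml > changes.yaml   # render the chart + overrides
kubevious guard changes.yaml                                    # validate the change as a whole against cluster state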

Current Stage

At this stage, we want to collect high-level requirements and find solutions to the open issues described above.

Considerations

Validating Admission Webhook

The Admission Controller triggers webhooks for each resource to be modified, one by one, in random order. Since Kubevious validates links across multiple resources, this makes it impossible for Kubevious to correctly validate changes using an Admission Webhook. For example, consider applying a change package containing a new Deployment and a ConfigMap it references. If the Deployment gets to the Admission Controller first, the validation will fail because the new ConfigMap has not yet been created.

DRI

@rubenhak

Progress

  • ✅ Draft idea description
  • ✅ Gather initial interest from the community. Thumbs up if you think you would use this feature in production.
  • ✅ Define high-level requirements, find solutions to open issues
  • ✅ Form a working group and elect a DRI
  • ✅ Clarify implementation specifics
  • ✅ Fun part - coding
  • ✅ Released

Appendixes

Kubevious Guard Proposal

Kubevious Guard Architecture

Kubevious Guard Diagram

Legend

✅ - Complete
👉 - Current / active stage

[Installation] Missing DB after helm deploy.

Describe the bug

  • The DB is missing after a successful Helm install.
  • I'm not sure if it's the root cause, but the UI throws 500 errors and shows SQL errors in the container logs.

To Reproduce

Steps to reproduce the behavior:

  1. helm upgrade --atomic -i kubevious kubevious/kubevious --version 0.4.24 -n kubevious


Expected behavior

A clear and concise description of what you expected to happen.

Logs

kubectl get pods -n kubevious                                                                                                                                                                                                                                   
NAME                                READY   STATUS    RESTARTS   AGE
kubevious-5696446b5d-kh4qt          1/1     Running   0          29m
kubevious-mysql-0                   1/1     Running   1          29m
kubevious-parser-5d9f749865-4925d   1/1     Running   0          29m
kubevious-ui-6b56bb7df9-n9krj       1/1     Running   0          29m

Logs from the UI

[2020-04-13 01:49:51.528 +0000] ERROR (MySqlDriver/1 on kubevious-ui-6b56bb7df9-n9krj): [_tryConnect] ON ERROR: ER_HOST_NOT_PRIVILEGED
[2020-04-13 01:49:52.531 +0000] ERROR (MySqlDriver/1 on kubevious-ui-6b56bb7df9-n9krj): [_tryConnect] ON ERROR: ER_HOST_NOT_PRIVILEGED
[2020-04-13 01:49:53.536 +0000] ERROR (MySqlDriver/1 on kubevious-ui-6b56bb7df9-n9krj): [_tryConnect] ON ERROR: ER_HOST_NOT_PRIVILEGED
[2020-04-13 01:49:54.541 +0000] ERROR (MySqlDriver/1 on kubevious-ui-6b56bb7df9-n9krj): [_tryConnect] ON ERROR: ER_HOST_NOT_PRIVILEGED
[2020-04-13 01:49:55.548 +0000] ERROR (MySqlDriver/1 on kubevious-ui-6b56bb7df9-n9krj): [_tryConnect] ON ERROR: ER_HOST_NOT_PRIVILEGED
[2020-04-13 01:49:56.552 +0000] ERROR (MySqlDriver/1 on kubevious-ui-6b56bb7df9-n9krj): [_tryConnect] ON ERROR: ER_HOST_NOT_PRIVILEGED
[2020-04-13 01:49:57.556 +0000] ERROR (MySqlDriver/1 on kubevious-ui-6b56bb7df9-n9krj): [_tryConnect] ON ERROR: ER_HOST_NOT_PRIVILEGED
[2020-04-13 01:49:58.559 +0000] ERROR (MySqlDriver/1 on kubevious-ui-6b56bb7df9-n9krj): [_tryConnect] ON ERROR: ER_HOST_NOT_PRIVILEGED
[2020-04-13 01:49:59.563 +0000] ERROR (MySqlDriver/1 on kubevious-ui-6b56bb7df9-n9krj): [_tryConnect] ON ERROR: ER_HOST_NOT_PRIVILEGED
[2020-04-13 01:50:00.566 +0000] ERROR (MySqlDriver/1 on kubevious-ui-6b56bb7df9-n9krj): [_tryConnect] ON ERROR: ER_HOST_NOT_PRIVILEGED
[2020-04-13 01:50:01.570 +0000] ERROR (MySqlDriver/1 on kubevious-ui-6b56bb7df9-n9krj): [_tryConnect] ON ERROR: ER_HOST_NOT_PRIVILEGED
[2020-04-13 01:50:02.574 +0000] ERROR (MySqlDriver/1 on kubevious-ui-6b56bb7df9-n9krj): [_tryConnect] ON ERROR: ER_HOST_NOT_PRIVILEGED
[2020-04-13 01:50:03.577 +0000] ERROR (MySqlDriver/1 on kubevious-ui-6b56bb7df9-n9krj): [_tryConnect] ON ERROR: ER_HOST_NOT_PRIVILEGED
[2020-04-13 01:50:04.580 +0000] ERROR (MySqlDriver/1 on kubevious-ui-6b56bb7df9-n9krj): [_tryConnect] ON ERROR: ER_HOST_NOT_PRIVILEGED
[2020-04-13 01:50:05.584 +0000] ERROR (MySqlDriver/1 on kubevious-ui-6b56bb7df9-n9krj): [_tryConnect] ON ERROR: ER_HOST_NOT_PRIVILEGED
[2020-04-13 01:50:06.588 +0000] ERROR (MySqlDriver/1 on kubevious-ui-6b56bb7df9-n9krj): [_tryConnect] ON ERROR: ER_HOST_NOT_PRIVILEGED
[2020-04-13 01:50:07.591 +0000] ERROR (MySqlDriver/1 on kubevious-ui-6b56bb7df9-n9krj): [_tryConnect] ON ERROR: ER_HOST_NOT_PRIVILEGED
[2020-04-13 01:50:08.595 +0000] ERROR (MySqlDriver/1 on kubevious-ui-6b56bb7df9-n9krj): [_tryConnect] ON ERROR: ER_HOST_NOT_PRIVILEGED
[2020-04-13 01:50:09.600 +0000] ERROR (MySqlDriver/1 on kubevious-ui-6b56bb7df9-n9krj): [_tryConnect] ON ERROR: ER_HOST_NOT_PRIVILEGED
[2020-04-13 01:50:10.602 +0000] ERROR (MySqlDriver/1 on kubevious-ui-6b56bb7df9-n9krj): [_tryConnect] ON ERROR: ER_HOST_NOT_PRIVILEGED
[2020-04-13 01:50:11.606 +0000] ERROR (MySqlDriver/1 on kubevious-ui-6b56bb7df9-n9krj): [_tryConnect] ON ERROR: ER_HOST_NOT_PRIVILEGED
[2020-04-13 01:50:12.610 +0000] ERROR (MySqlDriver/1 on kubevious-ui-6b56bb7df9-n9krj): [_tryConnect] ON ERROR: ER_HOST_NOT_PRIVILEGED
[2020-04-13 01:50:13.613 +0000] ERROR (MySqlDriver/1 on kubevious-ui-6b56bb7df9-n9krj): [_tryConnect] ON ERROR: ER_HOST_NOT_PRIVILEGED
[2020-04-13 01:50:14.617 +0000] ERROR (MySqlDriver/1 on kubevious-ui-6b56bb7df9-n9krj): [_tryConnect] ON ERROR: ER_HOST_NOT_PRIVILEGED
[2020-04-13 01:50:15.620 +0000] ERROR (MySqlDriver/1 on kubevious-ui-6b56bb7df9-n9krj): [_tryConnect] ON ERROR: ER_HOST_NOT_PRIVILEGED
[2020-04-13 01:50:16.624 +0000] ERROR (MySqlDriver/1 on kubevious-ui-6b56bb7df9-n9krj): [_tryConnect] ON ERROR: ER_HOST_NOT_PRIVILEGED
[2020-04-13 01:50:17.628 +0000] ERROR (MySqlDriver/1 on kubevious-ui-6b56bb7df9-n9krj): [_tryConnect] ON ERROR: ER_HOST_NOT_PRIVILEGED
[2020-04-13 01:50:18.630 +0000] ERROR (MySqlDriver/1 on kubevious-ui-6b56bb7df9-n9krj): [_tryConnect] ON ERROR: ER_HOST_NOT_PRIVILEGED
[2020-04-13 01:50:19.633 +0000] ERROR (MySqlDriver/1 on kubevious-ui-6b56bb7df9-n9krj): [_tryConnect] ON ERROR: ER_HOST_NOT_PRIVILEGED
[2020-04-13 01:50:20.636 +0000] ERROR (MySqlDriver/1 on kubevious-ui-6b56bb7df9-n9krj): [_tryConnect] ON ERROR: ER_HOST_NOT_PRIVILEGED
[2020-04-13 01:50:21.640 +0000] ERROR (MySqlDriver/1 on kubevious-ui-6b56bb7df9-n9krj): [_tryConnect] ON ERROR: ER_HOST_NOT_PRIVILEGED
[2020-04-13 01:50:22.644 +0000] ERROR (MySqlDriver/1 on kubevious-ui-6b56bb7df9-n9krj): [_tryConnect] ON ERROR: ER_HOST_NOT_PRIVILEGED
[2020-04-13 01:50:23.647 +0000] ERROR (MySqlDriver/1 on kubevious-ui-6b56bb7df9-n9krj): [_tryConnect] ON ERROR: ER_HOST_NOT_PRIVILEGED
[2020-04-13 01:50:24.650 +0000] ERROR (MySqlDriver/1 on kubevious-ui-6b56bb7df9-n9krj): [_tryConnect] ON ERROR: ER_HOST_NOT_PRIVILEGED
[2020-04-13 01:50:25.653 +0000] ERROR (MySqlDriver/1 on kubevious-ui-6b56bb7df9-n9krj): [_tryConnect] ON ERROR: ER_HOST_NOT_PRIVILEGED
[2020-04-13 01:50:26.656 +0000] ERROR (MySqlDriver/1 on kubevious-ui-6b56bb7df9-n9krj): [_tryConnect] ON ERROR: ER_HOST_NOT_PRIVILEGED
[2020-04-13 01:50:27.660 +0000] ERROR (MySqlDriver/1 on kubevious-ui-6b56bb7df9-n9krj): [_tryConnect] ON ERROR: ER_HOST_NOT_PRIVILEGED
[2020-04-13 01:50:28.664 +0000] ERROR (MySqlDriver/1 on kubevious-ui-6b56bb7df9-n9krj): [_tryConnect] ON ERROR: ER_HOST_NOT_PRIVILEGED
[2020-04-13 01:50:29.668 +0000] ERROR (MySqlDriver/1 on kubevious-ui-6b56bb7df9-n9krj): [_tryConnect] ON ERROR: ER_HOST_NOT_PRIVILEGED
[2020-04-13 01:50:30.671 +0000] ERROR (MySqlDriver/1 on kubevious-ui-6b56bb7df9-n9krj): [_tryConnect] ON ERROR: ER_HOST_NOT_PRIVILEGED
[2020-04-13 01:50:31.675 +0000] ERROR (MySqlDriver/1 on kubevious-ui-6b56bb7df9-n9krj): [_tryConnect] ON ERROR: ER_HOST_NOT_PRIVILEGED
[2020-04-13 01:50:32.677 +0000] ERROR (MySqlDriver/1 on kubevious-ui-6b56bb7df9-n9krj): [_tryConnect] ON ERROR: ER_HOST_NOT_PRIVILEGED

Logs from the parser

[2020-04-13 01:50:08.176 +0000] INFO  (Watch/1 on kubevious-parser-5d9f749865-4925d): [_handleChange] HorizontalPodAutoscaler. MODIFIED :: istio-telemetry...
[2020-04-13 01:50:08.199 +0000] INFO  (Watch/1 on kubevious-parser-5d9f749865-4925d): [_handleChange] HorizontalPodAutoscaler. MODIFIED :: gitlab-sidekiq-all-in-1-v1...
[2020-04-13 01:50:08.256 +0000] INFO  (Watch/1 on kubevious-parser-5d9f749865-4925d): [_handleChange] HorizontalPodAutoscaler. MODIFIED :: gitlab-unicorn...
[2020-04-13 01:50:08.368 +0000] ERROR (ReporterTarget/1 on kubevious-parser-5d9f749865-4925d): [request]  Error: Request failed with status code 413
    at createError (/app/node_modules/axios/lib/core/createError.js:16:15)
    at settle (/app/node_modules/axios/lib/core/settle.js:17:12)
    at IncomingMessage.handleStreamEnd (/app/node_modules/axios/lib/adapters/http.js:236:11)
    at IncomingMessage.emit (events.js:323:22)
    at endReadableNT (_stream_readable.js:1204:12)
    at processTicksAndRejections (internal/process/task_queues.js:84:21)
[2020-04-13 01:50:08.369 +0000] ERROR (ReporterDampener/1 on kubevious-parser-5d9f749865-4925d): [_processJob]  Error: Request failed with status code 413
    at createError (/app/node_modules/axios/lib/core/createError.js:16:15)
    at settle (/app/node_modules/axios/lib/core/settle.js:17:12)
    at IncomingMessage.handleStreamEnd (/app/node_modules/axios/lib/adapters/http.js:236:11)
    at IncomingMessage.emit (events.js:323:22)
    at endReadableNT (_stream_readable.js:1204:12)
    at processTicksAndRejections (internal/process/task_queues.js:84:21)
[2020-04-13 01:50:08.369 +0000] WARN  (ReporterDampener/1 on kubevious-parser-5d9f749865-4925d): [_tryProcessJob] last job failed
[2020-04-13 01:50:08.370 +0000] INFO  (ReporterDampener/1 on kubevious-parser-5d9f749865-4925d): [_rescheduleProcess]
[2020-04-13 01:50:08.508 +0000] INFO  (Watch/1 on kubevious-parser-5d9f749865-4925d): [_handleChange] ConfigMap. MODIFIED :: controller-leader-election-helper...
[2020-04-13 01:50:08.906 +0000] INFO  (Watch/1 on kubevious-parser-5d9f749865-4925d): [_handleChange] ConfigMap. MODIFIED :: cattle-controllers...
[2020-04-13 01:50:10.141 +0000] INFO  (Watch/1 on kubevious-parser-5d9f749865-4925d): [_handleChange] ConfigMap. MODIFIED :: ingress-controller-leader-nginx...
[2020-04-13 01:50:10.565 +0000] INFO  (Watch/1 on kubevious-parser-5d9f749865-4925d): [_handleChange] ConfigMap. MODIFIED :: controller-leader-election-helper...
[2020-04-13 01:50:10.970 +0000] INFO  (Watch/1 on kubevious-parser-5d9f749865-4925d): [_handleChange] ConfigMap. MODIFIED :: cattle-controllers...
[2020-04-13 01:50:11.115 +0000] INFO  (Watch/1 on kubevious-parser-5d9f749865-4925d): [_handleChange] ConfigMap. MODIFIED :: cert-manager-controller...
[2020-04-13 01:50:11.778 +0000] INFO  (FacadeRegistry/1 on kubevious-parser-5d9f749865-4925d): [_handleConcreteRegistryChange] BEGIN
[2020-04-13 01:50:11.778 +0000] INFO  (LogicProcessor/1 on kubevious-parser-5d9f749865-4925d): [process] BEGIN
[2020-04-13 01:50:11.858 +0000] INFO  (LogicProcessor/1 on kubevious-parser-5d9f749865-4925d): [process] READY
[2020-04-13 01:50:11.859 +0000] INFO  (FacadeRegistry/1 on kubevious-parser-5d9f749865-4925d): [acceptLogicItems] item count: 1334
[2020-04-13 01:50:11.859 +0000] INFO  (FacadeDampener/1 on kubevious-parser-5d9f749865-4925d): [acceptJob] job date: 2020-04-13T01:50:11.859Z. queue size: 1
[2020-04-13 01:50:11.859 +0000] INFO  (FacadeDampener/1 on kubevious-parser-5d9f749865-4925d): [_processJob] BEGIN. Date: 2020-04-13T01:50:11.859Z
[2020-04-13 01:50:11.859 +0000] INFO  (LogicProcessor/1 on kubevious-parser-5d9f749865-4925d): [process] END
[2020-04-13 01:50:11.860 +0000] INFO  (FacadeRegistry/1 on kubevious-parser-5d9f749865-4925d): [_handleConcreteRegistryChange] END
[2020-04-13 01:50:11.860 +0000] INFO  (FacadeRegistry/1 on kubevious-parser-5d9f749865-4925d): [_processItems] Date: 2020-04-13T01:50:11.859Z. item count: 1334
[2020-04-13 01:50:11.860 +0000] INFO  (Reporter/1 on kubevious-parser-5d9f749865-4925d): [acceptLogicItems] item count: 1334
[2020-04-13 01:50:12.293 +0000] INFO  (Reporter/1 on kubevious-parser-5d9f749865-4925d): [acceptLogicItems] obj count: 3298
[2020-04-13 01:50:12.294 +0000] INFO  (ReporterTarget/1 on kubevious-parser-5d9f749865-4925d): [report] date: 2020-04-13T01:50:11.859Z, item count: 3298
[2020-04-13 01:50:12.294 +0000] INFO  (ReporterDampener/1 on kubevious-parser-5d9f749865-4925d): [acceptJob] job date: 2020-04-13T01:50:11.859Z. queue size: 11
[2020-04-13 01:50:12.294 +0000] INFO  (ReporterDampener/1 on kubevious-parser-5d9f749865-4925d): [_processJob] BEGIN. Date: 2020-04-13T01:19:12.811Z
[2020-04-13 01:50:12.295 +0000] INFO  (ReporterTarget/1 on kubevious-parser-5d9f749865-4925d): [_processSnapshot] date: 2020-04-13T01:19:12.811Z, item count: 3299
[2020-04-13 01:50:12.295 +0000] INFO  (ReporterTarget/1 on kubevious-parser-5d9f749865-4925d): [_reportSnapshot] Begin
[2020-04-13 01:50:12.295 +0000] INFO  (SnapshotReporter/1 on kubevious-parser-5d9f749865-4925d): [run] 
[2020-04-13 01:50:12.295 +0000] INFO  (SnapshotReporter/1 on kubevious-parser-5d9f749865-4925d): [_execute]
[2020-04-13 01:50:12.296 +0000] INFO  (SnapshotReporter/1 on kubevious-parser-5d9f749865-4925d): [_reportAsSnapshot]
[2020-04-13 01:50:12.296 +0000] INFO  (SnapshotReporter/1 on kubevious-parser-5d9f749865-4925d): [_createSnapshot]
[2020-04-13 01:50:12.296 +0000] INFO  (FacadeDampener/1 on kubevious-parser-5d9f749865-4925d): [_processJob] END
[2020-04-13 01:50:12.296 +0000] INFO  (FacadeDampener/1 on kubevious-parser-5d9f749865-4925d): [_tryProcessJob] END
[2020-04-13 01:50:12.296 +0000] INFO  (FacadeDampener/1 on kubevious-parser-5d9f749865-4925d): [_tryProcessJob] empty
[2020-04-13 01:50:12.303 +0000] INFO  (SnapshotReporter/1 on kubevious-parser-5d9f749865-4925d): [_createSnapshot] id: 2f8f4ee2-e9ad-43a2-84c9-78146ae65ef4
[2020-04-13 01:50:12.303 +0000] INFO  (SnapshotReporter/1 on kubevious-parser-5d9f749865-4925d): [_publishSnapshotItems]
[2020-04-13 01:50:12.606 +0000] INFO  (Watch/1 on kubevious-parser-5d9f749865-4925d): [_handleChange] ConfigMap. MODIFIED :: controller-leader-election-helper...
[2020-04-13 01:50:13.019 +0000] INFO  (Watch/1 on kubevious-parser-5d9f749865-4925d): [_handleChange] ConfigMap. MODIFIED :: cattle-controllers...


Logs from mysql

kubectl logs kubevious-mysql-0  -n kubevious                                                                                                                                                                                                                    
2020-04-13 01:20:26+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.19-1debian10 started.
2020-04-13 01:20:27+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
2020-04-13 01:20:27+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.19-1debian10 started.
2020-04-13T01:20:28.002382Z 0 [Warning] [MY-011070] [Server] 'Disabling symbolic links using --skip-symbolic-links (or equivalent) is the default. Consider not using this option as it' is deprecated and will be removed in a future release.
2020-04-13T01:20:28.002456Z 0 [Warning] [MY-011068] [Server] The syntax 'expire-logs-days' is deprecated and will be removed in a future release. Please use binlog_expire_logs_seconds instead.
2020-04-13T01:20:28.002806Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.0.19) starting as process 1

InnoDB: Progress in percents: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 302020-04-13T01:20:33.443696Z 0 [System] [MY-010229] [Server] Starting XA crash recovery...
2020-04-13T01:20:33.466591Z 0 [System] [MY-010232] [Server] XA crash recovery finished.
 31 32 33 34 35 36 37 38 39 402020-04-13T01:20:34.059807Z 0 [Warning] [MY-010068] [Server] CA certificate ca.pem is self signed.
 41 422020-04-13T01:20:34.101508Z 0 [Warning] [MY-011810] [Server] Insecure configuration for --pid-file: Location '/var/run/mysqld' in the path is accessible to all OS users. Consider choosing a different directory.
 43 44 45 462020-04-13T01:20:34.228615Z 0 [System] [MY-010931] [Server] /usr/sbin/mysqld: ready for connections. Version: '8.0.19'  socket: '/var/run/mysqld/mysqld.sock'  port: 3306  MySQL Community Server - GPL.
 47 48 49 50 51 52 53 54 552020-04-13T01:20:34.445008Z 0 [System] [MY-011323] [Server] X Plugin ready for connections. Socket: '/var/run/mysqld/mysqlx.sock' bind-address: '::' port: 33060

Logs from the app

PayloadTooLargeError: request entity too large
    at readStream (/app/node_modules/raw-body/index.js:155:17)
    at getRawBody (/app/node_modules/raw-body/index.js:108:12)
    at read (/app/node_modules/body-parser/lib/read.js:77:3)
    at jsonParser (/app/node_modules/body-parser/lib/types/json.js:135:5)
    at Layer.handle [as handle_request] (/app/node_modules/express/lib/router/layer.js:95:5)
    at trim_prefix (/app/node_modules/express/lib/router/index.js:317:13)
    at /app/node_modules/express/lib/router/index.js:284:7
    at Function.process_params (/app/node_modules/express/lib/router/index.js:335:12)
    at next (/app/node_modules/express/lib/router/index.js:275:10)
    at expressInit (/app/node_modules/express/lib/middleware/init.js:40:5)
    at Layer.handle [as handle_request] (/app/node_modules/express/lib/router/layer.js:95:5)
    at trim_prefix (/app/node_modules/express/lib/router/index.js:317:13)
    at /app/node_modules/express/lib/router/index.js:284:7
    at Function.process_params (/app/node_modules/express/lib/router/index.js:335:12)
    at next (/app/node_modules/express/lib/router/index.js:275:10)
    at query (/app/node_modules/express/lib/middleware/query.js:45:5)
[2020-04-13 01:48:43.886 +0000] ERROR (MySqlDriver/1 on kubevious-5696446b5d-kh4qt): [_tryConnect] ON ERROR: ER_HOST_NOT_PRIVILEGED
[2020-04-13 01:48:44.890 +0000] ERROR (MySqlDriver/1 on kubevious-5696446b5d-kh4qt): [_tryConnect] ON ERROR: ER_HOST_NOT_PRIVILEGED
[2020-04-13 01:48:45.894 +0000] ERROR (MySqlDriver/1 on kubevious-5696446b5d-kh4qt): [_tryConnect] ON ERROR: ER_HOST_NOT_PRIVILEGED
[2020-04-13 01:48:46.898 +0000] ERROR (MySqlDriver/1 on kubevious-5696446b5d-kh4qt): [_tryConnect] ON ERROR: ER_HOST_NOT_PRIVILEGED
[2020-04-13 01:48:47.901 +0000] ERROR (MySqlDriver/1 on kubevious-5696446b5d-kh4qt): [_tryConnect] ON ERROR: ER_HOST_NOT_PRIVILEGED
[2020-04-13 01:48:48.904 +0000] ERROR (MySqlDriver/1 on kubevious-5696446b5d-kh4qt): [_tryConnect] ON ERROR: ER_HOST_NOT_PRIVILEGED
[2020-04-13 01:48:49.906 +0000] ERROR (MySqlDriver/1 on kubevious-5696446b5d-kh4qt): [_tryConnect] ON ERROR: ER_HOST_NOT_PRIVILEGED
[2020-04-13 01:48:50.910 +0000] ERROR (MySqlDriver/1 on kubevious-5696446b5d-kh4qt): [_tryConnect] ON ERROR: ER_HOST_NOT_PRIVILEGED
[2020-04-13 01:48:51.914 +0000] ERROR (MySqlDriver/1 on kubevious-5696446b5d-kh4qt): [_tryConnect] ON ERROR: ER_HOST_NOT_PRIVILEGED
[2020-04-13 01:48:52.918 +0000] ERROR (MySqlDriver/1 on kubevious-5696446b5d-kh4qt): [_tryConnect] ON ERROR: ER_HOST_NOT_PRIVILEGED
[2020-04-13 01:48:53.922 +0000] ERROR (MySqlDriver/1 on kubevious-5696446b5d-kh4qt): [_tryConnect] ON ERROR: ER_HOST_NOT_PRIVILEGED
[2020-04-13 01:48:54.925 +0000] ERROR (MySqlDriver/1 on kubevious-5696446b5d-kh4qt): [_tryConnect] ON ERROR: ER_HOST_NOT_PRIVILEGED
[2020-04-13 01:48:55.927 +0000] ERROR (MySqlDriver/1 on kubevious-5696446b5d-kh4qt): [_tryConnect] ON ERROR: ER_HOST_NOT_PRIVILEGED
[2020-04-13 01:48:56.931 +0000] ERROR (MySqlDriver/1 on kubevious-5696446b5d-kh4qt): [_tryConnect] ON ERROR: ER_HOST_NOT_PRIVILEGED
[2020-04-13 01:48:57.933 +0000] ERROR (MySqlDriver/1 on kubevious-5696446b5d-kh4qt): [_tryConnect] ON ERROR: ER_HOST_NOT_PRIVILEGED

Environment Details:

  • RKE 2.3.3 / Helm 3
  • Chrome 80.0.3987.149
  • Kubevious Helm release 0.4.24

UI Object Selection History

Description

Kubevious UI consists of multiple tree views with the ability to travel from one to another. When traversing the tree, it is sometimes hard to remember how you landed on a particular node. The proposal is to add a "Navigation History" tool window to the Kubevious UI that keeps track of node selection, with the ability to go back to the previously selected node.

Implementation

Changes should be in the following repos:

See contributing guidelines for reference.

Appendix

CleanShot 2022-05-05 at 16 07 35

Error 502 upon installation through helm

Describe the bug

I'm installing Kubevious as described on the GitHub homepage, using my in-house Kubernetes cluster. Once the installation is over, I start the kube proxy to access the application. The application starts loading, but at the top of the screen I get the error message: "Error occurred: <!DOCTYPE html> <html> <head> <title>Error</title> <style> body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; } </style> </head> <body> <h1>An error occurred.</h1> <p>Sorry, the page you are looking for is currently unavailable.<br/> Please try again later.</p> <p>If you are the system administrator of this resource then you should check the error log for details.</p> <p><em>Faithfully yours, nginx.</em></p> </body> </html>"

To Reproduce

Steps to reproduce the behavior:

  1. kubectl create namespace kubevious
  2. helm repo add kubevious https://helm.kubevious.io
  3. helm upgrade --atomic --set mysql.persistence.storageClass=managed-nfs-storage -i kubevious kubevious/kubevious --version 0.8.15 -n kubevious --reuse-values
  4. kubectl port-forward $(kubectl get pods -n kubevious -l "app.kubernetes.io/component=kubevious-ui" -o jsonpath="{.items[0].metadata.name}") 8080:80 -n kubevious
  5. Go to http://localhost:8080/ using the web browser to see the "502:..." error.

Expected behavior

Besides the 502 error, all the panels are empty. I have followed the instructions carefully with the exception of creating the NFS storage class. It should be working.

Environment Details:

  • Kubernetes version: {Major:"1", Minor:"21", GitVersion:"v1.21.3", GitCommit:"ca643a4d1f7bfe34773c74f79527be4afd95bf39", GitTreeState:"clean", BuildDate:"2021-07-15T20:59:07Z", GoVersion:"go1.16.6", Compiler:"gc", Platform:"linux/amd64"}

Installing on Openshift 3.11

I am trying to install on OpenShift 3.11, but the kubevious-ui pod fails to start with the exceptions below:

sed: can't create temp file '/etc/nginx/conf.d/default.confXXXXXX': Permission denied
--
  | sed: can't create temp file '/etc/nginx/conf.d/default.confXXXXXX': Permission denied
  | 2020/10/04 23:01:19 [warn] 11#11: the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /etc/nginx/nginx.conf:2
  | nginx: [warn] the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /etc/nginx/nginx.conf:2
  | 2020/10/04 23:01:19 [emerg] 11#11: host not found in resolver "$DNS_SERVERS" in /etc/nginx/conf.d/default.conf:12
  | nginx: [emerg] host not found in resolver "$DNS_SERVERS" in /etc/nginx/conf.d/default.conf:12

Is there any documentation on OpenShift 3.11 installation?

Error parsing containerConfig.envFrom

Describe the bug

After deploying to a running Kubernetes cluster, the container for the kubevious deployment throws the following error. This also causes the UI to be empty.

ERROR (ConcreteRegistry/1 on kubevious-84bb6b68d7-4ncct): [_triggerChanged] TypeError: Cannot read property 'value' of undefined                                                                               
    at processContainer (/app/lib/logic/parsers/030_app-controllers.js:183:32)                                                                                                                                                                 
    at Object.handler (/app/lib/logic/parsers/030_app-controllers.js:80:17)                                                                                                                                                                    
    at LogicProcessor._processHandler (/app/lib/logic/processor.js:232:21)                                                                                                                                                                     
    at LogicProcessor._processParser (/app/lib/logic/processor.js:178:18)                                                                                                                                                                      
    at LogicProcessor._processParsers (/app/lib/logic/processor.js:164:18)                                                                                                                                                                     
    at LogicProcessor._process (/app/lib/logic/processor.js:146:14)                                                                                                                                                                            
    at EventDampener._processCb (/app/lib/utils/event-dampener.js:42:19)                                                                                                                                                                       
    at /app/lib/utils/event-dampener.js:26:29                                                                                                                                                                                                  
    at tryCatcher (/app/node_modules/bluebird/js/release/util.js:16:23)                                                                                                                                                                        
    at MappingPromiseArray._promiseFulfilled (/app/node_modules/bluebird/js/release/map.js:68:38)                                                                                                                                              
    at MappingPromiseArray.PromiseArray._iterate (/app/node_modules/bluebird/js/release/promise_array.js:115:31)                                                                                                                               
    at MappingPromiseArray.init (/app/node_modules/bluebird/js/release/promise_array.js:79:10)                                                                                                                                                 
    at MappingPromiseArray._asyncInit (/app/node_modules/bluebird/js/release/map.js:37:10)                                                                                                                                                     
    at _drainQueueStep (/app/node_modules/bluebird/js/release/async.js:97:12)                                                                                                                                                                  
    at _drainQueue (/app/node_modules/bluebird/js/release/async.js:86:9)                                                                                                                                                                       
    at Async._drainQueues (/app/node_modules/bluebird/js/release/async.js:102:5)                                                                                                                                                               
    at Immediate.Async.drainQueues [as _onImmediate] (/app/node_modules/bluebird/js/release/async.js:15:14)                                                                                                                                    
    at processImmediate (internal/timers.js:439:21)                   

To Reproduce
Steps to reproduce the behavior:

  1. Deploy using Helm
  2. Observe logs from kubevious pod
  3. Run portforward and go to the webui
  4. See an empty ui and the error above in the kubevious pod logs

Desktop (please complete the following information):

  • OS: Fedora 30
  • Browser: Chrome
  • Version: 79.0.3945.130
  • Kubernetes version: v1.16.3

Other information
Although I'm not familiar with Node.js, I've added a quick workaround to skip this section of code, and everything else went fine.
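For illustration only, the kind of defensive guard such a workaround amounts to might look like the sketch below (hypothetical code, not the actual parser in 030_app-controllers.js):

// Hypothetical sketch: skip env entries without a literal 'value'
// (e.g. those using valueFrom/envFrom) instead of throwing.
function collectLiteralEnvValues(containerConfig) {
    const values = {};
    for (const envVar of containerConfig.env || []) {
        if (!envVar || envVar.value === undefined) {
            continue;
        }
        values[envVar.name] = envVar.value;
    }
    return values;
}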

kubevious-mysql-0 pending

I used the following command, based on the instructions, to install Kubevious:

helm upgrade --atomic -i kubevious kubevious/kubevious --version 0.8.15 -n kubevious

The pod kubevious-mysql-0 is always pending:

kubevious              kubevious-mysql-0                            0/1     Pending   0

Then I tried to get info about the pod with the following command:

kubectl --namespace kubevious describe pod kubevious-mysql-0

Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  4m1s  default-scheduler  0/2 nodes are available: 2 pod has unbound immediate PersistentVolumeClaims.
  Warning  FailedScheduling  4m    default-scheduler  0/2 nodes are available: 2 pod has unbound immediate PersistentVolumeClaims.

I have one StorageClass

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer

I have one PersistentVolume

apiVersion: v1
kind: PersistentVolume
metadata:
  name: kubevious-pv-0
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  hostPath:
    path: "/mnt/data

I have a PersistentVolumeClaim

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: kubevious-pvc-o
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 10Gi
  storageClassName: "manual" # Empty string must be explicitly set otherwise default StorageClass will be set
  volumeName: kubevious-pv-0

Any idea?
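For context, and hedged since it is based on the install command shown earlier on this page rather than on chart documentation: the kubevious-mysql-0 name suggests a StatefulSet that requests its own PersistentVolumeClaim, so a hand-made PVC is not picked up automatically. The storage class the chart should request can be passed at install time, for example:

helm upgrade --atomic -i kubevious kubevious/kubevious --version 0.8.15 -n kubevious --set mysql.persistence.storageClass=local-storage

Note also that the StorageClass above is named local-storage while the PersistentVolume uses storageClassName: manual, so as written the two would not match.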

Feature Proposal: Kubernetes Best Practices Enforcement Library

Scope

Kubevious continuously validates Kubernetes configuration and state for misconfigurations, conflicts, and violations of best practices. Kubevious is equipped with an extensible rules engine that allows arbitrary conditions to be validated using JavaScript-like if-then-else logic. What makes the Kubevious validation engine unique is the ability to execute validation across multiple manifests and state objects. The Kubevious Rules Engine also allows identifying manifests of special interest. Those manifests do not necessarily cause trouble; they could include public or stateful workloads, applications without network policies, MySQL deployments, etc. You can learn more in the Rules Engine docs or try it on the live demo.
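For concreteness, a rule script in the engine follows the same pattern as the no-resource-limits check quoted further down this page (item.config for the manifest, warning() to raise a violation); the latest-tag example below is purely illustrative:

for (var container of item.config.spec.containers)
{
    var image = container.image || '';
    // Flag images that are untagged or use the mutable 'latest' tag.
    if (image.indexOf(':') === -1 || image.endsWith(':latest'))
    {
        warning('Image is not pinned to an immutable tag: ' + image);
    }
}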

We tried to simplify writing new rules as much as possible, but there is still a learning curve involved. Also, we believe that many Kubernetes users go through similar challenges and could benefit from sharing the rules they have developed with each other. We created a Git repo in an attempt to build a community-driven rules library, but it doesn't seem to be easy to use either.

This issue is a proposal to develop a rules library integrated into Kubevious. Users should be able to build their own rules, submit them to the public library, and download rules from the library.

Requirements

  • Users should be able to upload rules from the Kubevious web app to the community library.
  • Users should be able to search for rules from the Kubevious web app.
  • Rules should be categorized in the library, perhaps even with sub-categories.
  • There should be a way to see the effectiveness of a rule. Users should be able to rate rules.
  • Kubevious should report usage of a rule to the library (if the user opts in to reporting anonymous analytics).
  • Relevant markers should also be uploaded with the rule package.

Open Questions

  • Where should the library be stored? GitHub, some internal database, or somewhere else?

Current Stage

At this stage, we want to gather initial interest from the community to decide whether to move forward with this new capability.
❓- What kinds of rules would you be interested in using in your cluster?
❓- Did you have an incident in K8s? Can you describe a condition and how it happened? Does it happen more than once?
❓- Are you comfortable implementing rules or would rather reuse rules developed by the community?
❤️ - Please answer below in the comments, and 👍 thumbs up if the answer is yes to any of the questions above.

Progress

  • ✅ Draft idea description
  • 👉 Gather initial interest from the community. Thumbs up if you think you would use this feature in production.
  • 🔳 Define high-level requirements
  • 🔳 Form a working group and elect a DRI
  • 🔳 Clarify implementation specifics
  • 🔳 Fun part - coding
  • 🔳 Released

Appendixes

Kubevious Rules Engine Editor

CleanShot 2022-05-05 at 15 48 32

CleanShot 2022-05-05 at 15 49 15

CleanShot 2022-05-05 at 15 48 45

CleanShot 2022-05-05 at 15 50 57

CleanShot 2022-05-05 at 15 49 58

Feature request - Have some rules already enabled in the default installation

I installed Kubevious in my cluster and the installation worked fine.

However, I found that no rules were defined.

I think it is best if some rules are already there, so that the user can see the value of the tool right away.

I suggest at least the following rules:

  • public application
  • no-resource-limit-pods
  • latest-tag-check
  • no-memory-request-set

missing ServiceAccount for redis

In the current version of the chart (1.0.3) there is no ServiceAccount defined for Redis, even though there is a value for it in values.yaml and a helper function to define the ServiceAccount name.

Feature Proposal: Support for Certificate Manager

Scope

We want to make usage of Certificate Manager easier and safer. This proposal is to implement native support for Certificate Manager to prevent and troubleshoot Certificate Manager issues right in Kubevious. That would involve tracking linked resources and guiding the user to areas that need attention. Certificate Manager uses the following path during certificate refresh: main ingress resource -> certificate -> certificaterequest -> order -> challenge -> challenge ingress. We want to implement a special view for Cert Manager, or integrate those resources into the Gateway View.

We also want to identify common conditions that lead to certificate refresh issues: which combinations of Ingress / Issuer parameters and annotations cause problems with fetching certificates? Such checks can be implemented in Kubevious.
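For reference, tracing that chain today has to be done by hand with kubectl (resource kinds as named above; the namespace and object names are placeholders):

kubectl get certificate,certificaterequest,order,challenge -n <namespace>
kubectl describe challenge <challenge-name> -n <namespace>

The proposal is essentially to surface this chain, and the common failure conditions along it, directly in Kubevious.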

Current Stage

At this stage, we want to gather initial interest from the community to decide whether to move forward with this new capability.
❓- Did you have any issues with Certificate Manager?
❓- What caused those issues?
❓- How often did they happen?

❤️ - Please answer below in the comments, and 👍 thumbs up if the answer is yes to any of the questions above.

Progress

  • ✅ Draft idea description
  • 👉 Gather initial interest from the community. Thumbs up if you think you would use this feature in production.
  • 🔳 Define high-level requirements
  • 🔳 Form a working group and elect a DRI
  • 🔳 Clarify implementation specifics
  • 🔳 Fun part - coding
  • 🔳 Released

Appendixes

Kubevious Gateway View

A glimpse of what Kubevious does for K8s Ingresses. We want to do the same (and more) for 3rd party API Gateways.
Gateway View

Disappearing Timeline

Describe the bug

Timeline won't reappear after clicking "Hide Timeline"

To Reproduce

Steps to reproduce the behavior:

  1. Go to demo.kubevious.io
  2. Click on 'Hide Timeline' under the settings drop-down menu on the top-right corner.
  3. Click on 'Show Timeline' under the settings drop-down menu
  4. See error

Screenshots

Example

Environment Details:

  • Chrome
  • Kubevious v0.5

Failure to prepare mysql statements

Describe the bug

Exception in the UI. Kubevious should repair the broken MySQL connection.

Screenshots

Error: Can't add new command when connection is in closed state
    at Connection._addCommandClosedState (/app/node_modules/mysql2/lib/connection.js:137:17)
    at PreparedStatementInfo.execute (/app/node_modules/mysql2/lib/commands/prepare.js:27:29)
    at /app/node_modules/kubevious-helpers/lib/mysql-driver.js:88:23
    at Promise._execute (/app/node_modules/bluebird/js/release/debuggability.js:384:9)
    at Promise._resolveFromExecutor (/app/node_modules/bluebird/js/release/promise.js:518:18)
    at new Promise (/app/node_modules/bluebird/js/release/promise.js:103:10)
    at MySqlDriver.executeStatement (/app/node_modules/kubevious-helpers/lib/mysql-driver.js:75:16)
    at HistorySnapshotReader._execute (/app/node_modules/kubevious-helpers/lib/history/snapshot-reader.js:405:29)
    at HistorySnapshotReader.queryTimeline (/app/node_modules/kubevious-helpers/lib/history/snapshot-reader.js:67:29)
    at /app/lib/routers/history.js:38:24
    at handleReturn (/app/node_modules/express-promise-router/lib/express-promise-router.js:24:27)
    at /app/node_modules/express-promise-router/lib/express-promise-router.js:56:9
    at handleReturn (/app/node_modules/express-promise-router/lib/express-promise-router.js:24:27)
    at /app/node_modules/express-promise-router/lib/express-promise-router.js:56:9
    at Layer.handle [as handle_request] (/app/node_modules/express/lib/router/layer.js:95:5)
    at next (/app/node_modules/express/lib/router/route.js:137:13)

Add the opportunity to see other object produce by or for CRD

Is your feature request related to a problem? Please describe.

Every K8s dashboard has the same gap: they don't show YAML produced for/by CRDs.
For example, I use Traefik v2. I have a lot of IngressRoute objects, but I cannot see them.
The same goes for almost all the K8s operators I have on my cluster. Their objects can only be listed with the kubectl CLI.

Describe the solution you'd like

It would be great if, via configuration, we could add a list of object types. Then you could show them via a kubectl get XXX or an API call. Another option could be to make a call for each CRD to get the list of objects per namespace.
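A hedged sketch of what this looks like manually today; the Traefik v2 API group below is an assumption used only for the example:

# Discover which CRDs exist, then list their objects across namespaces.
kubectl get crd
kubectl get ingressroutes.traefik.containo.us --all-namespaces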

Creating a marker after a rule does not reset the mark error

Describe the bug

I created a rule (fat-namespace) without actually having created a marker first. The rule complained that the marker is not present. I added the marker, but the rule was still complaining.

To Reproduce

Steps to reproduce the behavior:

  1. Create a new rule (fat-namespace)
  2. Save it
  3. See the error message that the marker is not present
  4. Create the marker and save it
  5. The rule editor still complains that the marker is not there.

Expected behavior

I expected the rule editor to "refresh" and detect the presence of the marker.


Environment Details:

  • Kubernetes Distribution/Version- Azure 1.15.10
  • Browser Firefox
  • Version 0.6.36

"logger" undefined if kubevious.log.level changed from default

Describe the bug

The logger is missing if I change the log level in the Helm chart.

To Reproduce

  1. kubectl create namespace kubevious
  2. echo 'kubevious: {log: {level: warning}}' >values.yaml (EDIT: fixed typo)
  3. helm upgrade --atomic --install kubevious --namespace kubevious kubevious/kubevious --version 0.8.15 --debug --values values.yaml

Expected behavior

Kubevious is deployed with less noise in the logs, or an error message about an inappropriate string in the config file, e.g. "no such log level 'warning'".

Unexpected behaviour

$ kubectl --namespace kubevious logs -c kubevious --tail 1000 --previous kubevious-744ffb8f78-zk84p
/app/node_modules/@kubevious/helper-backend/dist/backend.js:94
        this.logger.error('[_uncaughtException] Reason: ', reason);
                    ^

TypeError: Cannot read property 'error' of undefined
    at Backend._uncaughtException (/app/node_modules/@kubevious/helper-backend/dist/backend.js:94:21)
    at process.emit (events.js:314:20)
    at process._fatalException (internal/process/execution.js:165:25)
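The secondary crash happens because the uncaught-exception handler itself dereferences this.logger, which was presumably never initialized due to the unrecognized log level. An illustrative guard (not the actual helper-backend source) would be:

// Hypothetical sketch: fall back to console when the logger was never created,
// so the original configuration error is still reported instead of a TypeError.
_uncaughtException(reason) {
    if (this.logger) {
        this.logger.error('[_uncaughtException] Reason: ', reason);
    } else {
        console.error('[_uncaughtException] Reason: ', reason);
    }
}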

Environment Details:

  • Kubernetes: 1.18.20
  • Helm: 3.3.0
  • Browser: n/a
  • Version: 0.8.15

FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory

Hello.
I have an empty Universe in the UI, and the parser sometimes crashes with the error:

FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory

<--- Last few GCs --->

[1:0x561c150a5f00]   224796 ms: Mark-sweep 1761.5 (2068.0) -> 1746.9 (2068.5) MB, 2813.1 / 0.8 ms  (average mu = 0.090, current mu = 0.023) allocation failure scavenge might not succeed
[1:0x561c150a5f00]   227659 ms: Mark-sweep 1762.1 (2068.5) -> 1747.4 (2069.2) MB, 2814.7 / 0.6 ms  (average mu = 0.054, current mu = 0.017) allocation failure scavenge might not succeed


<--- JS stacktrace --->

==== JS stack trace =========================================

    0: ExitFrame [pc: 0x561c11d5d999]
Security context: 0x1d7b2f5808d1 <JSObject>
    1: sort [0x1d7b2f59a889](this=0x1fbd54989a29 <JSArray[12]>,0x3ed22bf404b1 <undefined>)
    2: stringify(aka stringify) [0x39fb9f396c41] [/app/node_modules/fast-json-stable-stringify/index.js:~19] [pc=0x2b17cb8d8df5](this=0x3ed22bf404b1 <undefined>,0x39fb9f3961e1 <Object map = 0x6e5bb02bf49>)
    3: stringify(aka stringify) [0x39fb9f396c41] [/app/node_mod...

Maybe it's because my cluster is pretty big (27 nodes). I've tried increasing the parser and kubevious replicas up to 7, but it didn't help.
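A common mitigation for Node.js heap exhaustion (hedged: the deployment name below is an assumption, and the chart may expose a cleaner knob for this) is to raise the V8 heap limit on the parser, for example:

kubectl set env deployment/kubevious-parser NODE_OPTIONS=--max-old-space-size=4096 -n kubevious

Adding replicas is unlikely to help, since each parser instance still has to build the full cluster model in memory.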

rule 'no-resource-limits-pods' has a typo

Describe the bug

The condition of the rule 'no-resource-limits-pods' is '!container.resources.limit',
but it should be '!container.resources.limits', as follows (the 's' is missing from 'limits'):

for (var container of item.config.spec.containers)
{
    if (!container.resources.limits)
    {
        warning('No resource limit set');
    }
}


Does not work on Talos

Describe the bug

Recently deployed Kubevious on a Talos K8s cluster.

To Reproduce

Steps to reproduce the behavior:

It works when I run it locally with Docker, yet when I run it on my K8s cluster and try to connect to it via the proxy, I get the output below.

08:42:08.607 [WebSocket] Subscribe: {"dn":"summary","kind":"props"} client.ts:76:16
08:42:08.963 [WebSocket] Subscribe: {"kind":"notifications-info"} client.ts:76:16
08:42:08.964 [WebSocket] Subscribe: {"kind":"notifications"} client.ts:76:16
08:42:09.185 [WebSocket] Subscribe: {"dn":"root","kind":"node"} client.ts:76:16
08:42:09.186 [WebSocket] Subscribe: {"dn":"root","kind":"children"} client.ts:76:16
08:42:09.668 [WebSocket] Subscribe: {"kind":"rules-statuses"} client.ts:76:16
08:42:09.669 [WebSocket] Subscribe: {"kind":"rule-result","name":null} client.ts:76:16
08:42:09.817 [WebSocket] Subscribe: {"kind":"markers-statuses"} client.ts:76:16
08:42:39.686 [WebSocket] Unsubscribe: {"dn":"summary","kind":"props"} client.ts:94:24
08:42:42.239 [WebSocket] Subscribe: {"dn":"summary","kind":"props"} client.ts:76:16
08:42:42.753 [WebSocket] Unsubscribe: {"dn":"summary","kind":"props"} client.ts:94:24
08:44:34.242
XHRGEThttp://127.0.0.1:8080/socket/?EIO=4&transport=polling&t=Ns4Pj-Q
[HTTP/1.1 502 Bad Gateway 37990ms]

08:44:40.338
[TRACKER::fail]  get  ::  /history/timeline  ::  Request failed with status code 502 remote-track.ts:38:16
08:44:41.417
[TRACKER::fail]  get  ::  /history/timeline  ::  Request failed with status code 502 remote-track.ts:38:16
08:44:43.228
XHRGEThttp://127.0.0.1:8080/api/v1/history/timeline
[HTTP/1.1 502 Bad Gateway 30890ms]

08:44:44.350
XHRGEThttp://127.0.0.1:8080/api/v1/history/props?dn=summary&date=2021-12-04T07:42:09.164Z
[HTTP/1.1 502 Bad Gateway 50799ms]

08:44:59.251
XHRGEThttp://127.0.0.1:8080/socket/?EIO=4&transport=polling&t=Ns4Pq5Q
[HTTP/1.1 502 Bad Gateway 41305ms]

08:45:08.649
XHRGEThttp://127.0.0.1:8080/api/v1/history/timeline
[HTTP/1.1 502 Bad Gateway 31907ms]

08:45:09.139
XHRGEThttp://127.0.0.1:8080/api/v1/history/props?dn=summary&date=2021-12-04T07:42:09.164Z
[HTTP/1.1 502 Bad Gateway 32369ms]

08:45:11.532
[TRACKER::fail]  get  ::  /history/snapshot  ::  Request failed with status code 502 remote-track.ts:38:16
08:45:14.815
XHRGEThttp://127.0.0.1:8080/api/v1/history/snapshot?date=2021-12-04T07:42:09.164Z
[HTTP/1.1 502 Bad Gateway 30044ms]

08:45:14.818
XHRGEThttp://127.0.0.1:8080/api/v1/history/timeline
[HTTP/1.1 502 Bad Gateway 30044ms]

08:45:16.275
XHRGEThttp://127.0.0.1:8080/api/v1/history/timeline
[HTTP/1.1 502 Bad Gateway 48924ms]

08:45:24.814
XHRGEThttp://127.0.0.1:8080/socket/?EIO=4&transport=polling&t=Ns4PwCF
[HTTP/1.1 502 Bad Gateway 45787ms]

08:45:35.161
[TRACKER::fail]  get  ::  /history/props  ::  Request failed with status code 502 remote-track.ts:38:16
08:45:41.520
[TRACKER::fail]  get  ::  /history/props  ::  Request failed with status code 502 remote-track.ts:38:16
08:45:41.855
XHRGEThttp://127.0.0.1:8080/api/v1/history/timeline
[HTTP/1.1 502 Bad Gateway 30064ms]

08:45:44.865
[TRACKER::fail]  get  ::  /history/snapshot  ::  Request failed with status code 502 remote-track.ts:38:16
08:45:44.871
[TRACKER::fail]  get  ::  /history/timeline  ::  Request failed with status code 502 remote-track.ts:38:16
08:45:49.362
XHRGEThttp://127.0.0.1:8080/socket/?EIO=4&transport=polling&t=Ns4Q0J0
[HTTP/1.1 502 Bad Gateway 30459ms]

08:46:08.642
XHRGEThttp://127.0.0.1:8080/api/v1/history/timeline
[HTTP/1.1 502 Bad Gateway 30204ms]

08:46:09.222
XHRGEThttp://127.0.0.1:8080/api/v1/history/timeline
[HTTP/1.1 502 Bad Gateway 30522ms]

08:46:13.953
XHRGEThttp://127.0.0.1:8080/api/v1/history/timeline
[HTTP/1.1 502 Bad Gateway 30398ms]

08:46:14.270
XHRGEThttp://127.0.0.1:8080/socket/?EIO=4&transport=polling&t=Ns4Q6Po
[HTTP/1.1 502 Bad Gateway 30118ms]

08:46:39.284
XHRGEThttp://127.0.0.1:8080/socket/?EIO=4&transport=polling&t=Ns4QCWa
[HTTP/1.1 502 Bad Gateway 30039ms]

08:46:39.756
[TRACKER::fail]  get  ::  /history/timeline  ::  Request failed with status code 502 remote-track.ts:38:16
08:46:39.918
XHRGEThttp://127.0.0.1:8080/api/v1/history/timeline
[HTTP/1.1 502 Bad Gateway 30045ms]

08:46:48.374
XHRGEThttp://127.0.0.1:8080/api/v1/history/timeline
[HTTP/1.1 502 Bad Gateway 30039ms]

08:47:04.290
XHRGEThttp://127.0.0.1:8080/socket/?EIO=4&transport=polling&t=Ns4QIdI
[HTTP/1.1 502 Bad Gateway 30046ms]

08:47:08.643
XHRGEThttp://127.0.0.1:8080/api/v1/history/timeline
[HTTP/1.1 502 Bad Gateway 30042ms]

08:47:12.004
XHRGEThttp://127.0.0.1:8080/api/v1/history/timeline
[HTTP/1.1 502 Bad Gateway 30281ms]

08:47:18.419
[TRACKER::fail]  get  ::  /history/timeline  ::  Request failed with status code 502 remote-track.ts:38:16
08:47:29.305
XHRGEThttp://127.0.0.1:8080/socket/?EIO=4&transport=polling&t=Ns4QOk2
[HTTP/1.1 502 Bad Gateway 30041ms]



Screenshots

image

Environment Details:

  • Kubernetes Distribution/Version: Talos v1.23.3
  • Browser: Firefox, Chrome, Safari
  • Version: latest


Support pan/scroll in overview pane

Is your feature request related to a problem? Please describe.
I instinctively try to use my track-pad to scroll around in the overview pane, instead of using the click-and-drag navigator.

Describe the solution you'd like
I'd like the navigation pane to pan and scroll when using my track-pad. When using a mouse, scrolling should scroll, and alt+scrolling or ctrl+scrolling should pan.

Describe alternatives you've considered
There is already a navigator implemented, but that is not a completely natural UX, especially when expanding nodes near the bottom of the screen.

no-resource-limits-pods rule uses limit instead of limits

Describe the bug

The no-resource-limits-pods rule uses container.resources.limit instead of container.resources.limits.

To Reproduce

Steps to reproduce the behavior:

  1. Go to 'Rule Editor'
  2. Click on 'no-resource-limits-pods' rule
  3. Go to 'Rule script' tab
  4. See error

Environment Details:

  • Kubernetes Distribution/Version: AKS v1.19.3
  • Browser: Chrome
  • Version: 0.8.15

Where's the open source code for Kubevious?

According to the Kubevious website, this is an open source project. However, the website links to this repository, which doesn't actually contain any application source code. Is Kubevious truly open source? 🤔

Error 500: Not Connected

Just followed the installation instructions.
Installed using Helm.
The installed version of Kubevious is the latest (0.7.2).
Went to the page... and got "Error 500: Not Connected".

Environment Details:

  • Kubernetes Vanilla 1.18
  • Browser chrome
  • Version 0.7.2

History DB Redesign

Describe the bug

The history database grows very fast and consumes too much storage. There are too many duplicates in the JSON data. We should use hashing to deduplicate the JSON data.

Current Implementation

db
The majority of storage is taken by the config columns of the snap_items and diff_items tables.

Suggested Proposal

db
The proposal is to deduplicate config column data by storing the JSON data in a config_hashes table and referring to hash keys instead. A low-collision hash function should be used, probably SHA256, CityHash, or similar. To make storage optimal, use a BINARY(64) column type.

All public APIs should stay the same.
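A minimal sketch of the proposed layout (table and column names are illustrative, not the actual Kubevious schema):

-- Deduplicated config storage: one row per distinct config JSON, keyed by its hash.
CREATE TABLE config_hashes (
    hash_key BINARY(64) NOT NULL PRIMARY KEY,  -- e.g. SHA256 of the canonicalized JSON, per the proposal above
    value    JSON       NOT NULL
);

-- snap_items / diff_items would then reference config_hashes.hash_key
-- instead of embedding the JSON config directly, e.g.:
-- ALTER TABLE snap_items ADD COLUMN config_hash BINARY(64) NOT NULL;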

Legend

legend

Open Issues

  • The design does not include a solution for snapshot cleanup. Propose a solution to clean up unused hashes when snapshots & diffs are removed from the database.
