skooner-k8s / skooner
Simple Kubernetes real-time dashboard and management.
Home Page: http://skooner.io/
License: Apache License 2.0
Hi,
I have used k8dash to create a dashboard. Everything seems to work fine, except that Node CPU Use and Node RAM Use under the Nodes tab are not working. Any ideas what could be missing?
Please see below relevant information. Thanks,
Raghu
ubuntu2:$ kubectl get pods -A -o wide | grep metrics-server
kube-system metrics-server-67db467b7b-89b6g 1/1 Running 0 5h12m 192.200.10.29 rack13-cluster-oam-2
ubuntu2:$ kubectl get pods -A -o wide | grep k8dash
kube-system k8dash-8684c6bfbd-d9zj8 1/1 Running 0 11m 192.200.3.54 rack13-cluster-oam-1
On running the install as per the readme I get prompted for a basic auth user & password.
This prevents me from entering the auth token.
edit: forgot to mention I was trying to access it via kubectl port-forward service/k8dash 8080:80
The same way it is done in the Kubernetes dashboard (see https://github.com/kubernetes/dashboard/blob/ac31fc56afac27a7ff2f1bc17f050f7403acd844/src/app/frontend/common/components/chips/template.html#L30), we could implement this in k8dash (but in a more efficient way).
This can come in handy when dealing with annotations that contain URLs.
@herbrandson What do you think ?
Hi, great job on the dashboard, it's much lighter and faster than the standard k8s dashboard :)
But I am wondering if it's possible to add support for viewing and managing the ReplicationController resource type. Currently, when I click on an Owned By
link that points to replicationcontroller/default/...
I get PAGE NOT FOUND.
Looking forward to an answer.
Keep up the great job :)
Hi there,
I really dig k8dash, thank you! I want to allow our developers to use it as they prefer a UI over command line. Our developer role has read-only everything to their namespace. I had to add a cluster role for them to list all namespaces so they can select the namespace they have access to in order to see anything. Once the namespace is selected it only shows pods. There's no WORKLOADS tab displayed. What clusterroles/roles are necessary to see the WORKLOADS tab?
Thanks!
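The exact resources the WORKLOADS tab queries aren't documented in this thread, so the following is only a guess at the extra read-only access a namespaced developer role might need; the role name, namespace, and resource list are all assumptions:

```yaml
# Hypothetical read-only role for the workloads view; the resources listed
# are an assumption about what the WORKLOADS tab queries, not confirmed.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: workloads-reader
  namespace: dev-team        # placeholder namespace
rules:
- apiGroups: ["apps"]
  resources: ["deployments", "daemonsets", "statefulsets", "replicasets"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["batch"]
  resources: ["jobs", "cronjobs"]
  verbs: ["get", "list", "watch"]
```

Binding this alongside the existing pod-read role and retrying the tab would at least narrow down which resource grant is the missing one.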
Right now the details page for individual jobs shows the start date, age, and end date. I'd love it if it could show the duration as well.
Currently we don't see taints when we access a node; it would be great to have them listed as well.
Thx.
Would it be possible to publish arm builds for k8dash? Right now the image is x86-only and I want to use it on my Raspberry Pi 4 cluster. I know how to do this manually and would be willing to investigate how to automate it. https://engineering.docker.com/2019/04/multi-arch-images/
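Until official multi-arch images exist, the manual route the linked post describes can be sketched with docker buildx on any Docker 19.03+ host that has QEMU binfmt emulation enabled; the builder name and image tag below are illustrative, not the project's actual release process:

```shell
# One-time: create and select a buildx builder (assumes binfmt/QEMU is set up)
docker buildx create --name multiarch --use

# Build and push one manifest covering amd64 plus the Pi's arm variants
docker buildx build \
  --platform linux/amd64,linux/arm64,linux/arm/v7 \
  -t example/k8dash:multiarch \
  --push .
```

Automating this in CI would mostly be a matter of running the same two commands in the release pipeline (requires a Docker daemon, so it is not runnable standalone).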
Plain helm install of initial k8dash is failing:
$ helm install k8dash-0.0.1.tgz
Error: validation failed: error validating "": error validating data: ValidationError(Service.spec.ports[0]): unknown field "targetport" in io.k8s.api.core.v1.ServicePort
Helm is latest/stable at this time (v2.14.0).
$ helm version
Client: &version.Version{SemVer:"v2.14.0", GitCommit:"05811b84a3f93603dd6c2fcfe57944dfa7ab7fd0", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.14.0", GitCommit:"05811b84a3f93603dd6c2fcfe57944dfa7ab7fd0", GitTreeState:"clean"}
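For what it's worth, Kubernetes field names are case-sensitive, and the validation error points at a lowercase targetport key in the chart's Service template; the port entry presumably needs the camelCase form (the port values below mirror the k8dash defaults elsewhere in this thread):

```yaml
# ServicePort with the camelCase key the API schema expects
ports:
- port: 80
  targetPort: 4654   # "targetport" (all lowercase) fails schema validation
```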
container 40c1d24be018ca1310f648f0b351e707418a43b3f501d57209f9dff00c9235f3 encountered an error during CreateProcess: failure in a Windows system call: The system cannot find the file specified.
Must use powershell to exec into windows containers.
https://github.com/herbrandson/k8dash/blob/57da03ca1e89a5befaca5853ac128eb13693f6f8/client/src/services/api.js#L141
Hello.
How to reproduce:
run kubectl proxy
go to 'http://localhost:8001/api/v1/namespaces/kube-system/services/http:k8dash:/proxy/' in a browser and try to log in
k8dash's logs:
[HPM] POST /apis/authorization.k8s.io/v1/selfsubjectrulesreviews -> https://10.96.0.1:443
POST /apis/authorization.k8s.io/v1/selfsubjectrulesreviews 403
apiserver's logs
Jun 19 00:14:14 kube-apiserver[6002]: I0619 00:14:14.253445 6002 handler.go:153] kube-aggregator: POST "/api/v1/namespaces/kube-system/services/http:k8dash:/proxy/apis/authorization.k8s.io/v1/selfsubjectrulesreviews" satisfied by nonGoRestful
Jun 19 00:14:14 kube-apiserver[6002]: I0619 00:14:14.254161 6002 pathrecorder.go:247] kube-aggregator: "/api/v1/namespaces/kube-system/services/http:k8dash:/proxy/apis/authorization.k8s.io/v1/selfsubjectrulesreviews" satisfied by prefix /api/
Jun 19 00:14:14 kube-apiserver[6002]: I0619 00:14:14.254495 6002 handler.go:143] kube-apiserver: POST "/api/v1/namespaces/kube-system/services/http:k8dash:/proxy/apis/authorization.k8s.io/v1/selfsubjectrulesreviews" satisfied by gorestful with webservice /api/v1
Jun 19 00:14:14 kube-apiserver[6002]: I0619 00:14:14.276633 6002 rbac.go:118] RBAC DENY: user "system:anonymous" groups ["system:unauthenticated"] cannot "create" resource "selfsubjectrulesreviews.authorization.k8s.io" cluster-wide
Jun 19 00:14:14 kube-apiserver[6002]: I0619 00:14:14.276984 6002 authorization.go:73] Forbidden: "/apis/authorization.k8s.io/v1/selfsubjectrulesreviews", Reason: ""
Jun 19 00:14:14 kube-apiserver[6002]: I0619 00:14:14.277770 6002 wrap.go:47] POST /apis/authorization.k8s.io/v1/selfsubjectrulesreviews: (1.328014ms) 403 [Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Ubuntu Chromium/74.0.3729.169 Chrome/74.0.3729.169 Safari/537.36 95.217.56.49:56004]
Sorry for the description...
As an admin, k8dash is great! But for users (once they can connect without the cluster-admin role, #19) it would be great to be able to hide some of the left pane icons.
For example: roles, nodes...
Maybe a sort of low-rights profile where we can define what can be seen.
Great application.
A feature request:
If the pod is run with a service account, the frontend should not prompt for an API token.
I tried starting it with the following spec:
kind: Deployment
apiVersion: apps/v1
metadata:
name: {{ .Release.Namespace }}-{{ .Chart.Name }}
spec:
replicas: 1
selector:
matchLabels:
app: {{ .Release.Namespace }}-{{ .Chart.Name }}
template:
metadata:
labels:
app: {{ .Release.Namespace }}-{{ .Chart.Name }}
spec:
serviceAccountName: {{ .Release.Namespace }}-{{ .Chart.Name }}
containers:
- name: k8dash
imagePullPolicy: Always
image: {{ .Values.image.repository}}:{{ .Values.image.tag }}
[...]
Here serviceAccountName refers to a service account with cluster-admin permissions, but the login still prompts for a token. I can't easily use OIDC on EKS and I don't want to expose the token to users. It would really be nice to have this.
I have not tried the actual Kubernetes dashboard, but it looks like it can deploy and create Pods from the dashboard. Is this feature missing here? If so, any plans to add it?
Thanks,
Raghu
Dashboard lets you create and deploy a containerized application as a Deployment and optional Service with a simple wizard. You can either manually specify application details, or upload a YAML or JSON file containing application configuration
First great job on the dashboard -- seems a lot more lightweight and less buggy than the standard dashboard. We have a couple of minor UI requests:
On the nodes page it would be nice if unready nodes showed up by default on top, and maybe with a red icon, instead of the text 'READY' column. Even without the icon change, it would be good to float unready nodes to the top by default (we run on bare metal, so unready nodes are a big deal for us). We have alerts, obviously, but it still seems like a logical change to the UI.
It would also be great if on that same page it was a bit more obvious which nodes were masters. You can look at the labels, but an icon change (or replacing the READY column with a MASTER column) would be pretty nice.
Thanks again for all your great work!
It seems that the OIDC scope information is hardcoded. We are using OIDC groups to authenticate against k8s. If the OIDC scopes could be manually defined, I think groups could work.
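For reference, a manifest later in this thread sets an OIDC_SCOPES environment variable on the k8dash container; assuming the build you run supports it (this is inferred from that manifest, not from documentation), the scopes could presumably be configured like so:

```yaml
# Extra scopes requested from the OIDC provider, space-separated
env:
- name: OIDC_SCOPES
  value: "openid email groups"
```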
Hi, when I try to use my OIDC provider (Keycloak) with k8dash, it doesn't work.
In the pod logs i have:
[HPM] POST /apis/authorization.k8s.io/v1/selfsubjectrulesreviews -> https://10.96.0.1:443
POST /apis/authorization.k8s.io/v1/selfsubjectrulesreviews 403
GET /favicon.ico 200
GET /static/js/2.db22b280.chunk.js.map 304
GET /static/js/main.34226f17.chunk.js.map 304
GET /static/css/main.0d6d7525.chunk.css.map 304
GET /static/css/2.b522e268.chunk.css.map 304
(node:8) UnhandledPromiseRejectionWarning: ReferenceError: next is not defined
    at getOidc (/usr/src/app/index.js:79:9)
    at processTicksAndRejections (internal/process/task_queues.js:89:5)
(node:8) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 5)
and in the browser network tab for the path:
/apis/authorization.k8s.io/v1/selfsubjectrulesreviews
i have the response:
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {
},
"status": "Failure",
"message": "selfsubjectrulesreviews.authorization.k8s.io is forbidden: User \"system:anonymous\" cannot create resource \"selfsubjectrulesreviews\" in API group \"authorization.k8s.io\" at the cluster scope",
"reason": "Forbidden",
"details": {
"group": "authorization.k8s.io",
"kind": "selfsubjectrulesreviews"
},
"code": 403
}
I don't understand why k8dash uses the system:anonymous account.
I use k8s version 1.15.4
Hi. I can't seem to get any logs on the pods. As a matter of fact, there are no logs for anything. I am running k8dash as described in the README and all went well with no issues returned. My only issue is that logs are missing, or the feature just seems to be dead for now. My developers rely heavily on logs to trace issues in a pod. Please help.
Kind Regards
Darrell
The k8dash pod logs look like:
[HPM] GET /api/v1/namespaces/production/pods/ipquery-v1-867579b494-5nrjs/log?container=ipquery-k8s&previous=false&tailLines=1000&follow=true -> https://10.96.0.1:443
GET /api/v1/namespaces/production/pods/ipquery-v1-867579b494-5nrjs/log?container=ipquery-k8s&previous=false&tailLines=1000&follow=true 403
Maybe it cannot connect to https://10.96.0.1:443?
As the token field is a password-type field, it should be fillable using the attribute autocomplete="password". This allows managers like 1Password or LastPass to keep hold of these keys, which is much more secure than passing them around the traditional way.
Hi
Total and used RAM show up the other way around.
"RAM used" is showing 123.6GB of 14.4GB.
The Openshift Console has a baller key-value secret editor UI.
It'd be nice if this feature could be adapted for k8dash as well.
To preview this feature in your Kubernetes cluster, deploy the Openshift Console:
apiVersion: apps/v1
kind: Deployment
metadata:
name: origin-console
namespace: kube-system
labels:
app: origin-console
spec:
replicas: 1
selector:
matchLabels:
app: origin-console
template:
metadata:
labels:
app: origin-console
spec:
containers:
- name: origin-console-container
image: quay.io/openshift/origin-console:4.6.0
env:
- name: BRIDGE_USER_AUTH
value: disabled
- name: BRIDGE_K8S_MODE
value: off-cluster
- name: BRIDGE_K8S_MODE_OFF_CLUSTER_ENDPOINT
value: https://kubernetes.default
- name: BRIDGE_K8S_MODE_OFF_CLUSTER_SKIP_VERIFY_TLS
value: "true"
- name: BRIDGE_K8S_AUTH
value: bearer-token
- name: BRIDGE_K8S_AUTH_BEARER_TOKEN
valueFrom:
secretKeyRef:
name: admin-sa-token-abc123 # change this to your cluster token secret
key: token
---
kind: Service
apiVersion: v1
metadata:
name: origin-console-svc
namespace: kube-system
spec:
selector:
app: origin-console
ports:
- name: http
port: 80
targetPort: 9000
We run all cluster admin tools behind an ingress controller (traefik) and mount tools under their own subpaths such as /grafana and /graylog. Currently k8dash cannot run under a subpath because it uses absolute references to /js etc.
Options to resolve:
Unfortunately my React mojo was not strong enough to put together a PR...
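As a stopgap, assuming Traefik 1.x (the version of that era), a path-stripping ingress can at least route requests to k8dash under a subpath, though the app's absolute /static references would still break without subpath support in the app itself. All names here are placeholders:

```yaml
# Traefik 1.x path-stripping ingress sketch; k8dash would still need to
# emit relative asset paths for this to fully work.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: k8dash
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.ingress.kubernetes.io/rule-type: PathPrefixStrip
spec:
  rules:
  - http:
      paths:
      - path: /k8dash
        backend:
          serviceName: k8dash
          servicePort: 80
```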
Environment:
AKS (K8s version 1.12.6)
With ingress (Nginx):
The login page loads (GET) but any POST fails because the endpoint returns 404.
Error message: "Error occured attempting to login."
Instead of contacting the API, the request is routed back to the web frontend.
Request URL: https://something.com/apis/authentication.k8s.io/v1/tokenreviews
Request Method: POST
Status Code: 404
Logs:
OIDC_URL: None
[HPM] Proxy created: / -> https://something.hcp.westeurope.azmk8s.io:443
Server started
GET /
GET /static/css/2.7b1d7de3.chunk.css
GET /static/js/2.ab8f1278.chunk.js
GET /static/css/main.a9446ed5.chunk.css
GET /static/js/main.c1206f38.chunk.js
GET /static/css/2.7b1d7de3.chunk.css.map
GET /static/css/main.a9446ed5.chunk.css.map
GET /static/js/2.ab8f1278.chunk.js.map
GET /oidc
GET /static/js/main.c1206f38.chunk.js.map
GET /favicon.ico
GET /manifest.json
GET /
POST /apis/authentication.k8s.io/v1/tokenreviews
GET /
The same thing happens when port-forwarded.
Request URL: http://localhost:4654/apis/authentication.k8s.io/v1/tokenreviews
Request Method: POST
Status Code: 404 Not Found
I've defined a service account and it has full access in its own namespace.
When I use its token to log in to K8Dash, I see nothing! I should be able to see pods, deployments, etc. in its namespace.
Would you please help me with this issue?
Thanks in advance.
Hi,
I get an invalid credentials error like the one below when authenticating with dex as an OIDC provider.
An error occured during the request { OpenIdConnectError: invalid_client (Invalid client credentials.)
at Client.requestErrorHandler (/usr/src/app/node_modules/openid-client/lib/helpers/error_handler.js:16:11)
at processTicksAndRejections (internal/process/next_tick.js:81:5)
error: 'invalid_client',
error_description: 'Invalid client credentials.' } POST /oidc
POST /oidc 500
If I turn off oidc auth, k8dash asks for token and it works if I enter a valid token.
Dex is authenticating with github.com and it works fine with kubectl.
Here are the kubectl settings:
user:
auth-provider:
config:
client-id: kubernetes
client-secret: ZXhhbXBsZS1hcHAtc2VjcmV0
extra-scopes: offline_access openid profile email groups
id-token: REDACTED
idp-certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMrakNDQWVLZ0F3SUJBZ0lKQU1lRXJhSHYzNXJWTUEwR0NTcUdTSWIzRFFFQkN3VUFNQkl4RURBT0JnTlYKQkFNTUIydDFZbVV0WTJFd0hoY05NVGt3TXpNeE1Ua3dPVEE0V2hjTk1Ua3dOREV3TVRrd09UQTRXakFTTVJBdwpEZ1lEVlFRRERBZHJkV0psTFdOaE1JSUJJakFOQmdrcWhraUc5dzBCQVFFRkFBT0NBUThBTUlJQkNnS0NBUUVBCjBkb2NjV3Zpb29xbDRVa05oejFCZ01KV25JU0w5TUExRm1ySEZ4U2hUYysrL1V0VURxMVVlU0xCRXpXTjNZZmcKQm5TQVNBQUNmS0lCRTBDRWJWdzhSTUtodXJReExGT0hQUDBodWtVRGkxNmVnaXBHSjI0WWdWcnJ4cUpVYWxsYQo2cUpaTkdsUHQ3SmxWdWtrSHRlY0hONjVneG0wQjBzMWtwV1VRNFh2L0E2ZldOaHVhV3VqYlRjRWx0SEFtQlJnCmtmMHpRYnV2ZCtMRnl3V0V2VDdBai9ua1FVZko1L21DOTQyUmlYVDNXdUtyc1g1a3F3ellrVU9xN2hOM1B1aVQKU1NYRm9JNUxqQWd5eDVqVEhubDdmb3JWSnhObDYvdEc2eFg4S3BxMmpST3FZSzlUWFdhSFlDVktQeTlMUTFuegpBNG9jTXQyRkFzREY4a2ZMUjBhK2l3SURBUUFCbzFNd1VUQWRCZ05WSFE0RUZnUVVMK1gzejRKWkhDZkg4Ry80Ckl0ZDhUdUZ5ZEV3d0h3WURWUjBqQkJnd0ZvQVVMK1gzejRKWkhDZkg4Ry80SXRkOFR1RnlkRXd3RHdZRFZSMFQKQVFIL0JBVXdBd0VCL3pBTkJna3Foa2lHOXcwQkFRc0ZBQU9DQVFFQU1rTFB0dkZoZlZxM0VibUJFU3dER09ZdwpVYjFYS0VKb1JEVGV5dlozamZSWGhTVDlmdmM0bC9GMWVOd1ZKZnhXb0piUjdCU0JmbURiNzR5anBOcGVYS2xZClZVWnE1Mmx1dnlwNDlFNHJOQ1JHTDNzL0NjUnFnV0tqVmxKZWZGakg2TU8zYTZnM0NFZElGNXJSZi8zRXFGSDYKZm9tUkZ0MEw5NzZodmpGRXFyMlVYR01yTk1LMUN6YXJreDhaUXNkekwySGFhMzV6ei9aUG1PdFA1a2dzYUlMegpoSC9CQ215N242Q2pDVmx3UXZFRmFUOXVRRDZWa216eVNmQ29oaGo4WFYwanBMa2doeG12cGJRdzFDWmwvcDJSCkRwSTh3aCtNVkhGczMvZzNKa0lqUkU0SVJtV2ROWE5hWTBwMVVZUEVIMys3bDlDOXZTQ2Q3OXgvSTZtOVB3PT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
idp-issuer-url: https://dex.example.com:32000
refresh-token: ChlibzZjeDJyNnMzNWMzZjVoeWpuZm5oem8zEhltaWt3YmRxc3Eyem1qeHAyNmk2ZWlqYnd0
name: oidc
And these are the k8s YAML manifests:
kind: Deployment
apiVersion: apps/v1
metadata:
name: k8dash
namespace: kube-system
spec:
replicas: 1
selector:
matchLabels:
k8s-app: k8dash
template:
metadata:
labels:
k8s-app: k8dash
spec:
hostAliases:
- hostnames:
- dex.example.com
ip: 10.0.2.100
containers:
- name: k8dash
image: herbrandson/k8dash:dev
command:
- sh
- -c
- |
npm config set cafile /ca/dex-ca.pem
/sbin/tini -- node .
ports:
- containerPort: 4654
livenessProbe:
httpGet:
scheme: HTTP
path: /
port: 4654
initialDelaySeconds: 30
timeoutSeconds: 30
env:
- name: OIDC_URL
valueFrom:
secretKeyRef:
name: k8dash
key: url
- name: OIDC_CLIENT_ID
valueFrom:
secretKeyRef:
name: k8dash
key: id
- name: OIDC_SECRET
valueFrom:
secretKeyRef:
name: k8dash
key: secret
- name: NODE_EXTRA_CA_CERTS
value: /ca/dex-ca.pem
- name: OIDC_SCOPES
value: "openid email groups"
volumeMounts:
- name: cafile
mountPath: /ca
volumes:
- name: cafile
configMap:
name: k8dash
---
kind: Service
apiVersion: v1
metadata:
name: k8dash
namespace: kube-system
spec:
ports:
- port: 80
targetPort: 4654
selector:
k8s-app: k8dash
---
apiVersion: v1
data:
dex-ca.pem: |
-----BEGIN CERTIFICATE-----
MIIC+jCCAeKgAwIBAgIJAMeEraHv35rVMA0GCSqGSIb3DQEBCwUAMBIxEDAOBgNV
BAMMB2t1YmUtY2EwHhcNMTkwMzMxMTkwOTA4WhcNMTkwNDEwMTkwOTA4WjASMRAw
DgYDVQQDDAdrdWJlLWNhMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA
0doccWviooql4UkNhz1BgMJWnISL9MA1FmrHFxShTc++/UtUDq1UeSLBEzWN3Yfg
BnSASAACfKIBE0CEbVw8RMKhurQxLFOHPP0hukUDi16egipGJ24YgVrrxqJUalla
6qJZNGlPt7JlVukkHtecHN65gxm0B0s1kpWUQ4Xv/A6fWNhuaWujbTcEltHAmBRg
kf0zQbuvd+LFywWEvT7Aj/nkQUfJ5/mC942RiXT3WuKrsX5kqwzYkUOq7hN3PuiT
SSXFoI5LjAgyx5jTHnl7forVJxNl6/tG6xX8Kpq2jROqYK9TXWaHYCVKPy9LQ1nz
A4ocMt2FAsDF8kfLR0a+iwIDAQABo1MwUTAdBgNVHQ4EFgQUL+X3z4JZHCfH8G/4
Itd8TuFydEwwHwYDVR0jBBgwFoAUL+X3z4JZHCfH8G/4Itd8TuFydEwwDwYDVR0T
AQH/BAUwAwEB/zANBgkqhkiG9w0BAQsFAAOCAQEAMkLPtvFhfVq3EbmBESwDGOYw
Ub1XKEJoRDTeyvZ3jfRXhST9fvc4l/F1eNwVJfxWoJbR7BSBfmDb74yjpNpeXKlY
VUZq52luvyp49E4rNCRGL3s/CcRqgWKjVlJefFjH6MO3a6g3CEdIF5rRf/3EqFH6
fomRFt0L976hvjFEqr2UXGMrNMK1Czarkx8ZQsdzL2Haa35zz/ZPmOtP5kgsaILz
hH/BCmy7n6CjCVlwQvEFaT9uQD6VkmzySfCohhj8XV0jpLkghxmvpbQw1CZl/p2R
DpI8wh+MVHFs3/g3JkIjRE4IRmWdNXNaY0p1UYPEH3+7l9C9vSCd79x/I6m9Pw==
-----END CERTIFICATE-----
kind: ConfigMap
metadata:
creationTimestamp: null
name: k8dash
namespace: kube-system
---
apiVersion: v1
data:
id: a3ViZXJuZXRlcw==
secret: ZXhhbXBsZS1hcHAtc2VjcmV0
url: aHR0cHM6Ly9kZXguZXhhbXBsZS5jb206MzIwMDA=
kind: Secret
metadata:
creationTimestamp: null
name: k8dash
namespace: kube-system
Do you have any idea?
Hi,
I'm trying to log in with a read-only account into k8dash, with no success.
Steps to reproduce:
1.- create ServiceAccount
apiVersion: v1
kind: ServiceAccount
metadata:
name: k8dash-cluster-reader
namespace: default
2.- create ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: cluster-reader
rules:
- apiGroups: ["*"]
resources: ["*"]
verbs: [
"get",
"list",
"proxy",
"redirect",
"watch"
]
- nonResourceURLs: ["*"]
verbs: ["get"]
3.- create ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: k8dash-cluster-reader
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-reader
subjects:
- kind: ServiceAccount
name: k8dash-cluster-reader
namespace: default
If I change the ClusterRole verbs to verbs: ["*"]
I'm able to log in, and if I then put back the original ClusterRole definition while I'm logged in, everything works as expected, so I think the problem could be the login check...
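Given the 403s on selfsubjectrulesreviews reported elsewhere in this thread, a plausible guess (an assumption, not a confirmed fix) is that the login check needs the create verb on the authorization self-review resources, which a get/list/watch-only role does not grant, while verbs: ["*"] does. An extra rule like this might unblock login without opening up everything:

```yaml
# Additional rule for the cluster-reader ClusterRole: allow the self-review
# calls the dashboard is assumed to make at login time.
- apiGroups: ["authorization.k8s.io"]
  resources: ["selfsubjectaccessreviews", "selfsubjectrulesreviews"]
  verbs: ["create"]
```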
Any suggestions?
Thanks in advance,
Joan
I noticed that when we open a pod's logs page, it sometimes shows the same logs multiple times. Do you know about this issue?
Hi,
I was just wondering if we can have an anonymous user log in to the dashboard without using a token.
At the moment we use Keycloak outside the cluster to authenticate the read-only user, but then after authentication we need to use the token again, which makes it ugly.
Is it possible to have an anonymous mode?
The YAML formatting seems to be broken in Firefox, it works for me in Chrome.
How to reproduce: go to some resource and click the EDIT button; everything is displayed on one line.
It seems to me that this is a browser issue (Firefox ignoring newlines).
Without logging in via the UI, if I pass a Bearer token with a request it redirects to the token login screen. It would be nice if it could recognize I already have a token and use that without needing to go through the browser login flow
Hi,
The dashboard looks great, but I'm just wondering if there is a plan to add CRDs to the dashboard. In particular I'm looking for Istio support; at the moment that's a big gap for this dashboard.
Here we are having some auth issues when the auth token is too large because of the groups list in the JWT. In our network it is common for a user to be a member of a lot of LDAP groups.
Among these requests, the last successful one was the oidc request that returned the token in the response body.
In the next ones (all selfsubjectrulesreviews requests), it sends the same token returned by the previous oidc request as an Authorization HTTP header, and then every request fails because of this big header.
Does this authorization token need to be sent in the header, or could it be sent another way?
The dashboard UI does not show any data after login. Sometimes it just hangs/gets stuck after pressing the login button with the secret token.
We have installed the NodePort version of k8dash (kubernetes-k8dash-nodeport.yaml).
Please check and assist at your earliest convenience.
root@ubuntu03: kubectl get svc k8dash -n kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
k8dash NodePort 10.111.223.113 4654:31330/TCP 12h
root@ubuntu03:
Browser URL:
http://192.168.0.17:31330/#!
192.168.0.17 is a worker node .
root@ubuntu03: kubectl get nodes --all-namespaces
NAME STATUS ROLES AGE VERSION
ubuntu01 Ready 5d2h v1.16.2
ubuntu02 Ready 12d v1.16.2
ubuntu03 Ready master 12d v1.16.2
root@ubuntu03:
root@ubuntu03:/new-softwares/metrics-server/deploy/1.8+# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-dc6cb64cb-x97cv 1/1 Running 4 12d
calico-node-9zzkq 0/1 Running 1 5d2h
calico-node-csx7b 1/1 Running 4 12d
calico-node-p2lvb 1/1 Running 1 12d
coredns-5644d7b6d9-7fpl8 1/1 Running 4 12d
coredns-5644d7b6d9-l69xb 1/1 Running 4 12d
etcd-ubuntu03 1/1 Running 4 12d
k8dash-58c77f9d45-dg84m 1/1 Running 0 113s
kube-apiserver-ubuntu03 1/1 Running 4 12d
kube-controller-manager-ubuntu03 1/1 Running 4 12d
kube-proxy-46986 1/1 Running 1 5d2h
kube-proxy-7cg6x 1/1 Running 1 12d
kube-proxy-gzg2d 1/1 Running 4 12d
kube-scheduler-ubuntu03 1/1 Running 4 12d
kubernetes-dashboard-7c9d8bcbbc-wrk99 1/1 Running 3 10d
metrics-server-8fc66cfdf-mkfhv 1/1 Running 0 21s
root@ubuntu03:/new-softwares/metrics-server/deploy/1.8+#
root@ubuntu03:/new-softwares/metrics-server/deploy/1.8+# kubectl logs k8dash-58c77f9d45-dg84m -n kube-system
OIDC_URL: None
API URL: https://10.96.0.1:443
[HPM] Proxy created: / -> https://10.96.0.1:443
[HPM] Subscribed to http-proxy events: [ 'error', 'close' ]
Server started
(node:6) [DEP0123] DeprecationWarning: Setting the TLS ServerName to an IP address is not permitted by RFC 6066. This will be ignored in a future version.
GET / 200
GET / 200
GET / 200
GET / 200
GET / 200
GET / 200
GET / 200
GET / 200
GET / 200
GET / 200
Error getting cluster info Error: connect ETIMEDOUT 10.96.0.1:443
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1054:14) {
errno: 'ETIMEDOUT',
code: 'ETIMEDOUT',
syscall: 'connect',
address: '10.96.0.1',
port: 443
}
GET / 200
GET / 200
GET / 200
GET / 200
GET / 200
GET / 200
root@ubuntu03:/new-softwares/metrics-server/deploy/1.8+#
K8 - VERSION:
root@ubuntu03:/new-softwares/metrics-server/deploy/1.8+# kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.2", GitCommit:"c97fe5036ef3df2967d086711e6c0c405941e14b", GitTreeState:"clean", BuildDate:"2019-10-15T19:18:23Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.2", GitCommit:"c97fe5036ef3df2967d086711e6c0c405941e14b", GitTreeState:"clean", BuildDate:"2019-10-15T19:09:08Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}
root@ubuntu03:/new-softwares/metrics-server/deploy/1.8+#
This is probably a question more than an issue.
I tried installing k8dash on an Azure AKS cluster at version 1.12.6 with RBAC enabled.
Following the manual, I am unable to get the auth token (option 1) to work.
In the logs I see:
OIDC_URL: None
[HPM] Proxy created: / -> https://noise-dns-eea9be65.hcp.eastus.azmk8s.io:443
[HPM] Subscribed to http-proxy events: [ 'error', 'close' ]
Server started
Version Info: {
"major": "1",
"minor": "12",
"gitVersion": "v1.12.6",
"gitCommit": "ab91afd7062d4240e95e51ac00a18bd58fddd365",
"gitTreeState": "clean",
"buildDate": "2019-02-26T12:49:28Z",
"goVersion": "go1.10.8",
"compiler": "gc",
"platform": "linux/amd64"
}
Available APIs: [
"admission.certmanager.k8s.io/v1beta1",
"admissionregistration.k8s.io/v1beta1",
"apiextensions.k8s.io/v1beta1",
"apiregistration.k8s.io/v1",
"apps/v1",
"authentication.k8s.io/v1",
"authorization.k8s.io/v1",
"autoscaling/v1",
"batch/v1",
"certificates.k8s.io/v1beta1",
"certmanager.k8s.io/v1alpha1",
"coordination.k8s.io/v1beta1",
"events.k8s.io/v1beta1",
"extensions/v1beta1",
"metrics.k8s.io/v1beta1",
"networking.k8s.io/v1",
"policy/v1beta1",
"rbac.authorization.k8s.io/v1",
"scheduling.k8s.io/v1beta1",
"storage.k8s.io/v1"
]
Auth Response: {
"kind": "SelfSubjectAccessReview",
"apiVersion": "authorization.k8s.io/v1",
"metadata": {
"creationTimestamp": null
},
"spec": {
"resourceAttributes": {}
},
"status": {
"allowed": false,
"reason": "no RBAC policy matched"
}
}
GET / 304
GET /static/css/main.74e8a81c.chunk.css 304
GET /static/css/2.7b1d7de3.chunk.css 304
GET /static/js/2.429c3e96.chunk.js 304
GET /static/js/main.41a4b71b.chunk.js 304
GET /oidc 304
GET /manifest.json 304
GET /favicon.ico 200
GET / 200
GET / 200
GET / 200
GET / 200
[HPM] POST /apis/authorization.k8s.io/v1/selfsubjectaccessreviews -> https://noise-dns-eea9be65.hcp.eastus.azmk8s.io:443
POST /apis/authorization.k8s.io/v1/selfsubjectaccessreviews 401
The "no RBAC policy matched" seems ominous. Is there any way you could help me get this awesome dashboard up and running?
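"no RBAC policy matched" generally just means the identity behind the token has no role bindings yet. One hedged way to verify is to mint a dedicated service account, bind it to the built-in view (or cluster-admin) ClusterRole, and log in with its token; all names below are placeholders:

```yaml
# Placeholder service account plus binding to an existing built-in role
apiVersion: v1
kind: ServiceAccount
metadata:
  name: k8dash-user          # placeholder name
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: k8dash-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view                 # or cluster-admin for full access
subjects:
- kind: ServiceAccount
  name: k8dash-user
  namespace: kube-system
```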
In a mixed-OS cluster (e.g. Windows and Linux nodes), we should ensure that k8dash is only scheduled on the Linux nodes, since the container image is Linux-only.
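A minimal way to express this in the deployment's pod spec, assuming Kubernetes 1.14+ where the kubernetes.io/os node label is standard (older clusters use beta.kubernetes.io/os):

```yaml
# Pin scheduling to Linux nodes via the standard OS node label
spec:
  template:
    spec:
      nodeSelector:
        kubernetes.io/os: linux
```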
Hi
Just a small nice-to-have feature request.
I often want to redeploy pods created by deployments when I update a configmap or secrets.
I do the following:
kubectl patch deployment/nginx --type='json' -p='[{"op": "replace", "path": "/spec/template/metadata/annotations", "value": {"serial": "'$(date '+%Y%m%d%H%M%S')'"} }]' --record
The command is just an example. If we can get a button to insert/update something similar to deployments, that would be nice.
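A small sketch of the same idea: building the JSON patch payload (with the timestamp serial) in a variable so it can be inspected before handing it to kubectl. Note that since kubectl 1.15, kubectl rollout restart does the equivalent natively:

```shell
# Build the JSON patch payload with a timestamp serial, same trick as above.
serial=$(date '+%Y%m%d%H%M%S')
patch='[{"op": "replace", "path": "/spec/template/metadata/annotations", "value": {"serial": "'"$serial"'"}}]'
echo "$patch"

# Apply it (requires a cluster):
#   kubectl patch deployment/nginx --type='json' -p="$patch" --record
# Or, on kubectl 1.15+:
#   kubectl rollout restart deployment/nginx
```

A "redeploy" button in the UI could issue either form against the selected deployment.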
Hi
I feel really bad because I'm always asking stuff, so I tried to find out where to fix it myself, but I realized I'm not qualified enough.
When I click one of the namespaces, the namespace doesn't get loaded.
I get the following in the k8dash logs.
[HPM] GET /api/v1/namespaces/[object%20Object] -> https://10.96.0.1:443
GET /api/v1/namespaces/[object%20Object] 404
[HPM] GET /api/v1/namespaces/[object%20Object]/events -> https://10.96.0.1:443
GET /api/v1/namespaces/[object%20Object]/events 200
[HPM] GET /api/v1/namespaces/[object%20Object]/events?watch=1&resourceVersion=597953 -> https://10.96.0.1:443
[HPM] Upgrading to WebSocket
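The [object%20Object] in those URLs suggests a whole object is being interpolated where a name string is expected. A minimal sketch of the failure mode (the variable names are hypothetical, not k8dash's actual code):

```javascript
// Interpolating an object into a template string calls its default
// toString(), which yields "[object Object]".
const namespace = { metadata: { name: 'kube-system' } };

const buggyUrl = `/api/v1/namespaces/${namespace}`;
const fixedUrl = `/api/v1/namespaces/${namespace.metadata.name}`;

console.log(buggyUrl); // /api/v1/namespaces/[object Object]
console.log(fixedUrl); // /api/v1/namespaces/kube-system
```

So the fix is presumably to pass the namespace's name (or encodeURIComponent of it) into the API path builder rather than the namespace object itself.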
Hi,
I'm using Keycloak as an OIDC provider; has anyone succeeded with k8dash?
I still get "invalid credentials" in k8dash, but Keycloak is working fine (I use it for Grafana and the legacy Kubernetes dashboard).
I just set up a basic OpenID Connect client.
Sorry for not getting more detailed, but if anyone has had this issue...
I also had a look at the secret's base64 encoding, but it doesn't seem to be that.
Thanks
Would it be possible to make the main color of the webinterface configurable somewhere in the deployment?
Use-case: when running multiple instances of k8dash in test and production environments, I would like to make the differences between the environments visually very clear
Hi.
Do you plan to create a helm chart for it? Will be easier to test.
Thx.
We are using the master tag, and the latest update from 4 days ago broke our k8dash instance; users were stuck in an infinite loop of login/logout.
We have opened PR #33, which we compiled ourselves, and we are using the local image for now. We would like to move back to the public images. Could fixed tag versions (e.g. v1.0.1) be published?
Refined issue from #59. On the node list, it would be good to have unready nodes (not ready or unknown) to float to the top of the display.
For most users, this is probably the most important thing to see in a node list. In particular, we run on bare metal, so dead nodes are a big deal. Even with alerting, this seems like a sensible default.
This is a feature request to make K8Dash significantly more powerful than Kubernetes official dashboard.
For example:
MANIFEST_GIT_URL=https://github.com/herbrandson/kubernetes-manifests.git
MANIFEST_GIT_USERNAME=herbrandson
MANIFEST_GIT_PASSWORD=asdf1234
K8Dash should have a UI and a log window with a simple "Sync" button.
K8Dash should pull all files in the Git repository using a library like https://github.com/isomorphic-git/isomorphic-git
K8Dash should apply all JSON and YAML files in that Git repository using the Kubernetes Server-Side Apply API (kubernetes/enhancements#555).
K8Dash should then write logs to the window in the UI.
Currently init containers do not have logs, and it would be nice to have them.