kubesphere / kubesphere

The container platform tailored for Kubernetes multi-cloud, datacenter, and edge management ⎈ 🖥 ☁️

Home Page: https://kubesphere.io

License: Apache License 2.0




What is KubeSphere

English | 中文

KubeSphere is a distributed operating system for cloud-native application management, using Kubernetes as its kernel. It provides a plug-and-play architecture that allows third-party applications to be integrated seamlessly into its ecosystem. KubeSphere is also a multi-tenant container platform with full-stack automated IT operations and streamlined DevOps workflows. It provides a developer-friendly, wizard-style web UI, helping enterprises build a more robust and feature-rich platform that covers the most common functionality needed for an enterprise Kubernetes strategy. See the Feature List for details.

The following screenshots offer a closer look at KubeSphere. Please check What is KubeSphere for further information.

Workbench Project Resources
CI/CD Pipeline App Store

Demo environment

🎮 KubeSphere Lite provides a free, stable, out-of-the-box managed cluster service. After registering and logging in, you can create a K8s cluster with KubeSphere installed in as little as 5 seconds and experience the full feature set of KubeSphere.

🖥 You can view the Demo Video to get started with KubeSphere.

Features

🕸 Provisioning Kubernetes Cluster Supports deploying Kubernetes on any infrastructure, with both online and air-gapped installation. Learn more.
🔗 Kubernetes Multi-cluster Management Provides a centralized control plane to manage multiple Kubernetes clusters, and supports propagating an app to multiple K8s clusters across different cloud providers.
🤖 Kubernetes DevOps Provides GitOps-based CD solutions backed by Argo CD, collecting CD status information in real time. With the mainstream CI engine Jenkins integrated, DevOps has never been easier. Learn more.
🔎 Cloud Native Observability Supports multi-dimensional monitoring, events, and auditing logs; multi-tenant log query and collection, alerting, and notification are built in. Learn more.
🧩 Service Mesh (Istio-based) Provides fine-grained traffic management, observability, and tracing for distributed microservice applications, along with visualization of the traffic topology. Learn more.
💻 App Store Provides an App Store for Helm-based applications and offers application lifecycle management on the Kubernetes platform. Learn more.
💡 Edge Computing Platform Integrates KubeEdge so users can deploy applications on edge devices and view their logs and monitoring metrics on the console. Learn more.
📊 Metering and Billing Tracks resource consumption at different levels on a unified dashboard, helping you make better-informed planning decisions and reduce costs. Learn more.
🗃 Support Multiple Storage and Networking Solutions
  • Supports GlusterFS, CephRBD, NFS, and LocalPV, and provides CSI plugins to consume storage from multiple cloud providers.
  • Provides OpenELB, a load balancer implementation for Kubernetes in bare-metal, edge, and virtualization environments.
  • Provides network policy and Pod IP pool management, and supports Calico, Flannel, and Kube-OVN.
  • ..
🏘 Multi-tenancy Provides unified authentication with fine-grained roles and a three-tier authorization system, and supports AD/LDAP authentication.
🧠 GPU Workloads Scheduling and Monitoring Create GPU workloads on the GUI, schedule GPU resources, and manage GPU resource quotas by tenant.

    Architecture

    KubeSphere uses a loosely-coupled architecture that separates the frontend from the backend. External systems can access the components of the backend through the REST APIs.
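Because the backend is exposed over REST, any HTTP client can talk to it. A minimal illustrative call, reusing the v1alpha1 resource path style that appears in the issue reports later in this document; `<node-ip>` and `<token>` are placeholders, and exact routes vary by KubeSphere release:

```shell
# Illustrative sketch only: query a backend resource API through the console port.
# <node-ip> and <token> are placeholders to fill in for your own cluster.
curl -s -H "Authorization: Bearer <token>" \
  "http://<node-ip>:30880/apis/kubesphere.io/v1alpha1/resources/deployments?conditions=namespace%3Ddefault"
```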

    Architecture


    Latest release

🎉 KubeSphere v3.4.0 has been released! It brings enhancements and a better user experience; see the Release Notes for 3.4.0 for details.

Component supported versions table

| Component      | Version                                                        | Supported K8s versions |
| -------------- | -------------------------------------------------------------- | ---------------------- |
| Alerting       | N/A                                                            | 1.21–1.26              |
| Auditing       | v0.2.0                                                         | 1.21–1.26              |
| Monitoring     | N/A                                                            | 1.21–1.26              |
| DevOps         | v3.4.0                                                         | 1.21–1.26              |
| EdgeRuntime    | v1.13.0                                                        | 1.21–1.23              |
| Events         | N/A                                                            | 1.21–1.26              |
| Logging        | opensearch v2.6.0, fluentbit-operator v0.14.0, fluent-bit v1.9.4 | 1.21–1.26            |
| Metrics Server | v0.4.2                                                         | 1.21–1.26              |
| Network        | N/A                                                            | 1.21–1.26              |
| Notification   | v2.3.0                                                         | 1.21–1.26              |
| AppStore       | N/A                                                            | 1.21–1.26              |
| Storage        | pvc-autoresizer v0.3.0, storageclass-accessor v0.2.2           | 1.21–1.26              |
| ServiceMesh    | Istio v1.14.6                                                  | 1.21–1.24              |
| Gateway        | Ingress NGINX Controller v1.3.1                                | 1.21–1.24              |

    Installation

KubeSphere runs anywhere, from on-premises datacenters to any cloud to the edge, and can be deployed on any version-compatible Kubernetes cluster. By default the installer performs a minimal installation; you can enable the other pluggable components before or after installation.

    Quick start

    Installing on K8s/K3s

Ensure that your cluster is running Kubernetes v1.21.x, v1.22.x, v1.23.x, v1.24.x*, v1.25.x*, or v1.26.x*. For the versions marked with an asterisk, some features may be unavailable due to incompatibility.
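As a quick sanity check before applying the manifests, you can print the control-plane version (output shown is only an example; newer kubectl releases may not support `--short`):

```shell
# Verify the server version falls in the supported range.
kubectl version --short
# e.g. Server Version: v1.24.14
```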

    Run the following commands to install KubeSphere on an existing Kubernetes cluster:

    kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.4.0/kubesphere-installer.yaml
    
    kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.4.0/cluster-configuration.yaml

    All-in-one

    👨‍💻 No Kubernetes? You can use KubeKey to install both KubeSphere and Kubernetes/K3s in single-node mode on your Linux machine. Let's take K3s as an example:

    # Download KubeKey
    curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.10 sh -
    # Make kk executable
    chmod +x kk
    # Create a cluster
    ./kk create cluster --with-kubernetes v1.24.14 --container-manager containerd --with-kubesphere v3.4.0

You can run the following command to view the installation logs. After KubeSphere is installed successfully, you can access the KubeSphere web console at http://IP:30880 and log in with the default administrator account (admin / P@88w0rd).

    kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f
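Besides tailing the installer logs, two quick checks can confirm the result (the Service name `ks-console` matches the console pods listed in the issue reports below; adjust if your deployment differs):

```shell
# Confirm the KubeSphere system pods are up.
kubectl get pods -n kubesphere-system
# Confirm the console NodePort (30880 by default).
kubectl get svc -n kubesphere-system ks-console
```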

    KubeSphere for hosted Kubernetes services

KubeSphere is hosted on the following cloud providers, and you can try KubeSphere via one-click installation on their hosted Kubernetes services.

    You can also install KubeSphere on other hosted Kubernetes services within minutes, see the step-by-step guides to get started.

👨‍💻 No internet access? Refer to the Air-gapped Installation on Kubernetes or Air-gapped Installation on Linux guides for instructions on using a private registry to install KubeSphere.

    Guidance, discussion, contribution, and support

We ❤️ your contributions. The community guide walks you through getting started with contributing to KubeSphere, and the development guide explains how to set up a development environment.

    🤗 Please submit any KubeSphere bugs, issues, and feature requests to KubeSphere GitHub Issue.

💟 The KubeSphere team also provides official ticket support with responses within hours. For more information, see KubeSphere Online Support.

Who is using KubeSphere

The user case studies page lists the project's users. You can leave a comment to let us know about your use case.

    Landscapes



        

KubeSphere is a member of CNCF and a Kubernetes Conformance Certified platform, which enriches the CNCF Cloud Native Landscape.

    kubesphere's People

    Contributors

123liubao, bettygogo2021, calvinyv, duanjiong, f10atin9, hongzhouzi, iawia002, johnniang, junotx, ks-ci-bot, linuxsuren, live77, lxm, lynxcat, min-zh, rayzhou2017, shaowenchen, swiftslee, tester-rep, wanjunlei, wansir, wenchajun, wnxn, xyz-li, yunkunrao, zackzhangkai, zheng1, zhou1203, zhu733756, zryfish


    kubesphere's Issues

System hangs when an invalid password is entered

When creating a user, entering an invalid password first shows a warning message, which is fine. But after changing the password and trying to create the user again, the system appears to hang and the "OK" button cannot be clicked. Please check the attached screenshot.

    Design of image repo management

Image repo management provides a console to help users manage access credentials for public or private Docker registries. It should support:

• get/add/update/delete/list APIs for Secrets in the K8s cluster
• using a customized annotation to mark a Secret token as the default
• calling the list API on the image repo list page, and also when creating Deployment/DaemonSet/StatefulSet workloads: a Secret dropdown should appear when specifying an image, with the default token pre-selected
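The credential data described above maps naturally onto Kubernetes docker-registry Secrets. A minimal sketch; the Secret name, namespace, and the "default" annotation key below are hypothetical illustrations, not KubeSphere's actual API:

```shell
# Store registry credentials as a kubernetes.io/dockerconfigjson Secret
# (names and the annotation key are illustrative only).
kubectl create secret docker-registry my-registry-cred \
  --docker-server=registry.example.com \
  --docker-username=demo \
  --docker-password='s3cret' \
  -n demo-project

# Mark it as the default credential via a customized annotation (hypothetical key).
kubectl annotate secret my-registry-cred -n demo-project \
  kubesphere.io/is-default-registry=true
```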

Example 5 - Configuring autoscaling: clicking Create at step 4 (Label Settings) reports Forbidden

Step 4: Label Settings
A label is one or more key-value pairs attached to a resource such as a Pod; labels are used to identify, organize, or find resource objects. Here the label is set to app: nginx-hpa.

The node selector can be left unset; kube-scheduler will schedule the Pod onto a host based on each node's load. Click Create to view the running status details of Nginx.

The result page shows the following:
image

However, the Deployment was in fact created successfully.
image

    Swagger-UI issue

The following screenshots show a couple of typos or spelling errors in the Swagger-UI. It also seems that some APIs, such as daemonsets, jobs, and deployments, are incomplete.

    1. See the highlighted parts.

    image
    2. See the highlighted part.

    image
    3. See below.

    image

Secret created successfully but not shown in the list

A user with the workspace-regular + project admin roles created a secret following Example 1 in the documentation. Creation succeeded, but the secret does not appear in the secret list; logging in as admin, it is visible.

Example 6, step 2: error when adding a repository

Adding the repository works the first time. On the second run of this lab, at the step

4. Copy the generated token, enter it in the KubeSphere Token field, and click Save.

an internal server error is reported:
    image

    kube-dns CrashLoopBackOff

I tested installing kubesphere-all-offline-express-1.0.0-alpha_amd64.tar.gz on an 8-core / 12 GB Ubuntu 16.04.4 server.

When it finished, I got:

    root@ks-allinone:/home/suser/kubesphere-all-offline-express-1.0.0-alpha# kubectl get po --all-namespaces
    NAMESPACE                    NAME                                                      READY     STATUS             RESTARTS   AGE
    kube-system                  calico-node-8q69z                                         1/1       Running            0          1h
    kube-system                  elasticsearch-logging-0                                   1/1       Running            0          1h
    kube-system                  fluent-bit-2dkwm                                          1/1       Running            0          1h
    kube-system                  heapster-77d984457-znt5d                                  4/4       Running            14         1h
    kube-system                  kibana-logging-54587d8d68-r2plc                           1/1       Running            0          1h
    kube-system                  kube-apiserver-ks-allinone                                1/1       Running            0          1h
    kube-system                  kube-controller-manager-ks-allinone                       1/1       Running            0          1h
    kube-system                  kube-dns-859879d98d-2xm5t                                 2/3       CrashLoopBackOff   21         1h
    kube-system                  kube-proxy-ks-allinone                                    1/1       Running            0          1h
    kube-system                  kube-scheduler-ks-allinone                                1/1       Running            0          1h
    kube-system                  kubedns-autoscaler-685f865b88-wwgk7                       1/1       Running            0          1h
    kube-system                  tiller-deploy-7f4859d95f-5cnb4                            1/1       Running            0          1h
    kubesphere-controls-system   default-http-backend-7b7d7f5d6c-96wr2                     1/1       Running            0          1h
    kubesphere-system            ks-account-797644fb8b-l7dkb                               0/1       Init:0/1           0          1h
    kubesphere-system            ks-apiserver-6c9f45b4c7-r26pk                             1/1       Running            0          1h
    kubesphere-system            ks-console-5d4788fcd-f456n                                2/2       Running            0          1h
    openpitrix-system            openpitrix-api-gateway-deployment-6c4c7578bf-bf9rt        0/1       Init:0/2           0          1h
    openpitrix-system            openpitrix-app-manager-deployment-9899bf57c-lqjfl         0/1       Init:0/2           0          1h
    openpitrix-system            openpitrix-category-manager-deployment-6bff8cb96d-xfqs6   0/1       Init:0/2           0          1h
    openpitrix-system            openpitrix-cluster-manager-deployment-5d69fb9cb4-xwr5t    0/1       Init:0/2           0          1h
    openpitrix-system            openpitrix-db-deployment-7b7fb6558d-944ln                 1/1       Running            0          1h
    openpitrix-system            openpitrix-etcd-deployment-5fd5c95b84-tgzjg               1/1       Running            0          1h
    openpitrix-system            openpitrix-job-manager-deployment-f54fd849-lmnhr          0/1       Init:0/2           0          1h
    openpitrix-system            openpitrix-repo-indexer-deployment-69979f8c9d-gkd28       0/1       Init:0/2           0          1h
    openpitrix-system            openpitrix-repo-manager-deployment-75b99f46fc-fxl8g       0/1       Init:0/2           0          1h
    openpitrix-system            openpitrix-runtime-manager-deployment-fc5fb97d5-b2jfg     0/1       Init:0/2           0          1h
    openpitrix-system            openpitrix-task-manager-deployment-67f8b597f6-gd9mk       0/1       Init:0/2           0          1h
    
root@ks-allinone:/home/suser/kubesphere-all-offline-express-1.0.0-alpha# kubectl logs kube-dns-859879d98d-2xm5t --namespace=kube-system
    Error from server (BadRequest): a container name must be specified for pod kube-dns-859879d98d-2xm5t, choose one of: [kubedns dnsmasq sidecar]
    root@ks-allinone:/home/suser/kubesphere-all-offline-express-1.0.0-alpha# kubectl logs kube-dns-859879d98d-2xm5t dnsmasq --namespace=kube-system
    I0823 09:23:46.134303       1 main.go:74] opts: {{/usr/sbin/dnsmasq [-k --cache-size=1000 --log-facility=- --server=/cluster.local/127.0.0.1#10053 --server=/in-addr.arpa/127.0.0.1#10053 --server=/ip6.arpa/127.0.0.1#10053] true} /etc/k8s/dns/dnsmasq-nanny 10000000000}
    I0823 09:23:46.134421       1 nanny.go:94] Starting dnsmasq [-k --cache-size=1000 --log-facility=- --server=/cluster.local/127.0.0.1#10053 --server=/in-addr.arpa/127.0.0.1#10053 --server=/ip6.arpa/127.0.0.1#10053]
    I0823 09:23:46.403816       1 nanny.go:119]
    W0823 09:23:46.403846       1 nanny.go:120] Got EOF from stdout
    I0823 09:23:46.403889       1 nanny.go:116] dnsmasq[17]: started, version 2.78 cachesize 1000
    I0823 09:23:46.403970       1 nanny.go:116] dnsmasq[17]: compile time options: IPv6 GNU-getopt no-DBus no-i18n no-IDN DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth no-DNSSEC loop-detect inotify
    I0823 09:23:46.404046       1 nanny.go:116] dnsmasq[17]: using nameserver 127.0.0.1#10053 for domain ip6.arpa
    I0823 09:23:46.404080       1 nanny.go:116] dnsmasq[17]: using nameserver 127.0.0.1#10053 for domain in-addr.arpa
    I0823 09:23:46.404106       1 nanny.go:116] dnsmasq[17]: using nameserver 127.0.0.1#10053 for domain cluster.local
    I0823 09:23:46.404166       1 nanny.go:116] dnsmasq[17]: reading /etc/resolv.conf
    I0823 09:23:46.404194       1 nanny.go:116] dnsmasq[17]: using nameserver 127.0.0.1#10053 for domain ip6.arpa
    I0823 09:23:46.404219       1 nanny.go:116] dnsmasq[17]: using nameserver 127.0.0.1#10053 for domain in-addr.arpa
    I0823 09:23:46.404244       1 nanny.go:116] dnsmasq[17]: using nameserver 127.0.0.1#10053 for domain cluster.local
    I0823 09:23:46.404293       1 nanny.go:116] dnsmasq[17]: using nameserver 10.233.0.3#53
    I0823 09:23:46.404369       1 nanny.go:116] dnsmasq[17]: read /etc/hosts - 7 addresses
    I0823 09:23:47.307989       1 nanny.go:116] dnsmasq[17]: Maximum number of concurrent DNS queries reached (max: 150)
    I0823 09:23:57.328852       1 nanny.go:116] dnsmasq[17]: Maximum number of concurrent DNS queries reached (max: 150)
    I0823 09:24:07.347854       1 nanny.go:116] dnsmasq[17]: Maximum number of concurrent DNS queries reached (max: 150)
    I0823 09:24:17.379447       1 nanny.go:116] dnsmasq[17]: Maximum number of concurrent DNS queries reached (max: 150)
    I0823 09:24:27.391023       1 nanny.go:116] dnsmasq[17]: Maximum number of concurrent DNS queries reached (max: 150)
    I0823 09:24:37.404137       1 nanny.go:116] dnsmasq[17]: Maximum number of concurrent DNS queries reached (max: 150)
    I0823 09:24:47.431729       1 nanny.go:116] dnsmasq[17]: Maximum number of concurrent DNS queries reached (max: 150)
    I0823 09:24:57.449864       1 nanny.go:116] dnsmasq[17]: Maximum number of concurrent DNS queries reached (max: 150)
    I0823 09:25:07.464136       1 nanny.go:116] dnsmasq[17]: Maximum number of concurrent DNS queries reached (max: 150)
    I0823 09:25:17.482151       1 nanny.go:116] dnsmasq[17]: Maximum number of concurrent DNS queries reached (max: 150)
    I0823 09:25:27.493706       1 nanny.go:116] dnsmasq[17]: Maximum number of concurrent DNS queries reached (max: 150)
    I0823 09:25:37.507370       1 nanny.go:116] dnsmasq[17]: Maximum number of concurrent DNS queries reached (max: 150)
I0823 09:25:47.522315       1 nanny.go:116] dnsmasq[17]: Maximum number of concurrent DNS queries reached (max: 150)
    

Does anyone have any idea?
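Not a confirmed fix for this issue, but one common mitigation for the "Maximum number of concurrent DNS queries reached" message: dnsmasq's `--dns-forward-max` option caps concurrent forwarded queries at 150 by default and can be raised; the value below is illustrative.

```shell
# Edit the kube-dns Deployment and raise dnsmasq's concurrent-query cap.
kubectl -n kube-system edit deployment kube-dns
# Then, in the dnsmasq container's args, add (value illustrative):
#   - --dns-forward-max=500
```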

After clicking Create Credential, the dialog still shows the data from the previously created credential

Example 6, creating a credential for the second time:

Step 2: Create a GitHub credential
As before, create a credential for GitHub: name the credential ID github-id, select Account Credentials as the type, enter your personal GitHub username and password, add a description, and click OK.

At this point, the dialog still contains the information entered the first time; it was not cleared:
    image

Install error with kubesphere-all-express-1.0.0-alpha

TASK [kubernetes/preinstall : Update package management cache (APT)] ***********
    fatal: [ks-allinone]: FAILED! => {"changed": false, "cmd": "apt-get update", "msg": "E: The repository 'http://ppa.launchpad.net/wine/wine-builds/ubuntu bionic Release' does not have a Release file.", "rc": 100, "stderr": "E: The repository 'http://ppa.launchpad.net/wine/wine-builds/ubuntu bionic Release' does not have a Release file.\n", "stderr_lines": ["E: The repository 'http://ppa.launchpad.net/wine/wine-builds/ubuntu bionic Release' does not have a Release file."], "stdout": "Ign:1 http://dl.google.com/linux/chrome/deb stable InRelease\nHit:2 http://dl.google.com/linux/chrome/deb stable Release\nHit:4 http://cn.archive.ubuntu.com/ubuntu bionic InRelease\nHit:5 http://security.ubuntu.com/ubuntu bionic-security InRelease\nHit:6 https://download.docker.com/linux/ubuntu bionic InRelease\nHit:7 https://dl.winehq.org/wine-builds/ubuntu bionic InRelease\nHit:8 http://cn.archive.ubuntu.com/ubuntu bionic-updates InRelease\nHit:9 http://cn.archive.ubuntu.com/ubuntu bionic-backports InRelease\nHit:10 http://ppa.launchpad.net/noobslab/macbuntu/ubuntu bionic InRelease\nHit:11 http://ppa.launchpad.net/openjdk-r/ppa/ubuntu bionic InRelease\nIgn:12 http://ppa.launchpad.net/wine/wine-builds/ubuntu bionic InRelease\nErr:13 http://ppa.launchpad.net/wine/wine-builds/ubuntu bionic Release\n 404 Not Found [IP: 91.189.95.83 80]\nHit:14 http://linux.teamviewer.com/deb stable InRelease\nHit:15 http://linux.teamviewer.com/deb preview InRelease\nHit:16 http://packages.microsoft.com/repos/vscode stable InRelease\nReading package lists...\n", "stdout_lines": ["Ign:1 http://dl.google.com/linux/chrome/deb stable InRelease", "Hit:2 http://dl.google.com/linux/chrome/deb stable Release", "Hit:4 http://cn.archive.ubuntu.com/ubuntu bionic InRelease", "Hit:5 http://security.ubuntu.com/ubuntu bionic-security InRelease", "Hit:6 https://download.docker.com/linux/ubuntu bionic InRelease", "Hit:7 https://dl.winehq.org/wine-builds/ubuntu bionic InRelease", "Hit:8 
http://cn.archive.ubuntu.com/ubuntu bionic-updates InRelease", "Hit:9 http://cn.archive.ubuntu.com/ubuntu bionic-backports InRelease", "Hit:10 http://ppa.launchpad.net/noobslab/macbuntu/ubuntu bionic InRelease", "Hit:11 http://ppa.launchpad.net/openjdk-r/ppa/ubuntu bionic InRelease", "Ign:12 http://ppa.launchpad.net/wine/wine-builds/ubuntu bionic InRelease", "Err:13 http://ppa.launchpad.net/wine/wine-builds/ubuntu bionic Release", " 404 Not Found [IP: 91.189.95.83 80]", "Hit:14 http://linux.teamviewer.com/deb stable InRelease", "Hit:15 http://linux.teamviewer.com/deb preview InRelease", "Hit:16 http://packages.microsoft.com/repos/vscode stable InRelease", "Reading package lists..."]}

Creating a DevOps project fails with an error

Creating a DevOps project fails when the user's role is "workspace-regular"; creating it with the workspace-manager role works fine.

Error message:

    <title>Jenkins [Jenkins]</title><script>var isRunAsTest=false; var rootURL=""; var resURL="/static/8229c2bd";</script><script src="/static/8229c2bd/scripts/prototype.js" type="text/javascript"></script><script src="/static/8229c2bd/scripts/behavior.js" type="text/javascript"></script><script src='/adjuncts/8229c2bd/org/kohsuke/stapler/bind.js' type='text/javascript'></script><script src="/static/8229c2bd/scripts/yui/yahoo/yahoo-min.js"></script><script src="/static/8229c2bd/scripts/yui/dom/dom-min.js"></script><script src="/static/8229c2bd/scripts/yui/event/event-min.js"></script><script src="/static/8229c2bd/scripts/yui/animation/animation-min.js"></script><script src="/static/8229c2bd/scripts/yui/dragdrop/dragdrop-min.js"></script><script src="/static/8229c2bd/scripts/yui/container/container-min.js"></script><script src="/static/8229c2bd/scripts/yui/connection/connection-min.js"></script><script

…
The full output could not be copied; is there a log file in the system that can be checked?

Error when opening DevOps project details

A user with the "workspace-manager" role created a DevOps project and the system reported success, but clicking into the project produces an error: see the attachment.

    _20181219002259

DevOps pipeline activity page error

The DevOps pipeline activity page fails; in some cases it renders as a blank page. Page URL:

    http://***.io/demo-workspace/devops/project-Om9z13jMA7A7/pipelines/web-deploy-test/pipeline

    error

A user with the project admin role gets an error when inviting members to a project

A user with the workspace-regular role, who is also a project admin, invited another workspace-regular user to become a viewer of the project. The first attempt showed neither success (no check mark appeared) nor an error. Trying again produced the error "Already Exist rolebindings.rbac.authorization.k8s.io \"viewer-smpv\" already exists". My guess is that the backend database was updated but the frontend did not display it correctly.
After refreshing the project member list, the change is visible.

    _20181215203304

    components status display

Currently, the components in the KubeSphere and K8s system are listed in the following table:

| Name                         | Kind       | Namespace   | Label                         |
| ---------------------------- | ---------- | ----------- | ----------------------------- |
| kubectl                      | Deployment | default     | k8s-app=kubectl               |
| openpitrix                   | Deployment | default     | app=openpitrix                |
| iam                          | Deployment | default     | run=iam                       |
| heapster                     | Deployment | kube-system | k8s-app=heapster              |
| kibana-logging               | Deployment | kube-system | k8s-app=kibana-logging        |
| kube-dns                     | Deployment | kube-system | k8s-app=kube-dns              |
| kube-state-metrics           | Deployment | kube-system | k8s-app=kube-state-metrics    |
| kubectl                      | Deployment | kube-system | k8s-app=kubectl               |
| kubectl2                     | Deployment | kube-system | k8s-app=kubectl               |
| kubernetes-dashboard         | Deployment | kube-system | k8s-app=kubernetes-dashboard  |
| kubespherebackend            | Deployment | kube-system | k8s-app=kubespherebackend     |
| kubesphereui                 | Deployment | kube-system | k8s-app=kubesphereui          |
| metrics-server               | Deployment | kube-system | k8s-app=metrics-server        |
| monitoring-grafana           | Deployment | kube-system | k8s-app=monitoring-grafana    |
| monitoring-influxdb          | Deployment | kube-system | k8s-app=monitoring-influxdb   |
| cloud-controller-manager     | Pod        | kube-system | tier=control-plane            |
| kube-apiserver               | Pod        | kube-system | tier=control-plane            |
| etcd                         | Pod        | kube-system | tier=control-plane            |
| kube-controller              | Pod        | kube-system | tier=control-plane            |
| qingcloud-volume-provisioner | Pod        | kube-system | tier=control-plane            |
| kube-proxy                   | Pod        | kube-system | k8s-app=kube-proxy            |
1. Is any of these components inappropriate?
2. The K8s API provided is GET /api/v1/componentstatuses, but it only returns

    Design of applications management

KubeSphere will not develop its own application management module; instead it will reuse the module from OpenPitrix (https://github.com/openpitrix/openpitrix).

During KubeSphere installation, users should be able to choose the app management service from OpenPitrix, which will be integrated with KubeSphere as a Service/Deployment in the K8s cluster.

Deployment list fails to load, stuck on loading

sp01

As shown, the loading screen is displayed indefinitely. In the browser console, this request never completes: `/apis/kubesphere.io/v1alpha1/resources/deployments?conditions=namespace%3Dwahaha-testing&paging=limit%3D10%2Cpage%3D1`

The ks-apigateway logs show that the request succeeded:
log

    "regular" user can't access "Application Management“ function.

    One user has "regular" role, but don't have authorization to access "application management", no ”application template" or “application depot". I think "application template” or "application depot" both are basic component to all users. So pls check what's the wrong.

    Create Ceph secret in every namespace

    Motivation

It is inconvenient that, in Kubernetes v1.10, users must create a secret in the Pod's namespace in order to mount an RBD volume into Pods. We could therefore develop a controller that creates the Ceph user secret in every namespace.

    Prerequisite

    • Kubernetes v1.10 cluster
    • Already created RBD StorageClass
    • Already created Ceph user secret in kube-system namespace

    Limitation

| User action                  | Controller action                                                                 |
| ---------------------------- | --------------------------------------------------------------------------------- |
| Create sc and a secret       | Create or update a copy of the Ceph secret in each namespace except kube-system    |
| Create ns                    | Create the Ceph secret in the namespace                                            |
| Create sc                    | -                                                                                  |
| Delete sc                    | -                                                                                  |
| Delete secret in kube-system | -                                                                                  |
| Update secret content        | -                                                                                  |
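The copy-to-every-namespace behavior can be sketched as a one-off script (the Secret name `ceph-user-secret` is a hypothetical example; a real controller would watch Namespace and Secret events instead of looping once):

```shell
# Copy the Ceph user secret from kube-system into every other namespace.
# Requires jq; strips server-managed metadata before re-applying.
for ns in $(kubectl get ns -o jsonpath='{.items[*].metadata.name}'); do
  [ "$ns" = "kube-system" ] && continue
  kubectl -n kube-system get secret ceph-user-secret -o json \
    | jq 'del(.metadata.uid, .metadata.resourceVersion, .metadata.creationTimestamp, .metadata.namespace)' \
    | kubectl apply -n "$ns" -f -
done
```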

    Diagram

    cephsecret-flowchart

After creating a Job, the list takes too long to refresh

After creating and configuring a new Job, the frontend does not show the new record; it only appears after roughly 7-10 seconds.

image

Also, if the Job fails, the displayed end time is incorrect.
image

    ks-account waiting for mysql

    I tested kubesphere-all-offline-express-1.0.0-alpha_amd64.tar.gz and got this message.

    root@ks-allinone:~# kubectl get po --all-namespaces
    NAMESPACE                    NAME                                                      READY     STATUS     RESTARTS   AGE
    kube-system                  calico-node-nw5z7                                         1/1       Running    1          1h
    kube-system                  elasticsearch-logging-0                                   1/1       Running    1          1h
    kube-system                  fluent-bit-5rcjz                                          1/1       Running    1          1h
    kube-system                  heapster-77d984457-p9l4h                                  4/4       Running    20         1h
    kube-system                  kibana-logging-54587d8d68-kq884                           1/1       Running    1          1h
    kube-system                  kube-apiserver-ks-allinone                                1/1       Running    6          1h
    kube-system                  kube-controller-manager-ks-allinone                       1/1       Running    2          1h
    kube-system                  kube-dns-859879d98d-w7wdn                                 3/3       Running    13         1h
    kube-system                  kube-proxy-ks-allinone                                    1/1       Running    4          1h
    kube-system                  kube-scheduler-ks-allinone                                1/1       Running    2          1h
    kube-system                  kubedns-autoscaler-685f865b88-842zw                       1/1       Running    1          1h
    kube-system                  tiller-deploy-7f4859d95f-kz25d                            1/1       Running    1          1h
    kubesphere-controls-system   default-http-backend-7b7d7f5d6c-xf6zp                     1/1       Running    1          1h
    kubesphere-system            ks-account-797644fb8b-v2c8m                               0/1       Init:0/1   0          13m
    kubesphere-system            ks-apiserver-f964d84d9-48nd5                              1/1       Running    5          1h
    kubesphere-system            ks-console-5d4788fcd-jmxbc                                2/2       Running    2          1h
    openpitrix-system            openpitrix-api-gateway-deployment-6c4c7578bf-4k56g        0/1       Init:0/2   1          1h
    openpitrix-system            openpitrix-app-manager-deployment-9899bf57c-9v5d4         0/1       Init:0/2   1          1h
    openpitrix-system            openpitrix-category-manager-deployment-6bff8cb96d-hckcp   0/1       Init:0/2   1          1h
    openpitrix-system            openpitrix-cluster-manager-deployment-5d69fb9cb4-4jq5h    0/1       Init:0/2   1          1h
    openpitrix-system            openpitrix-db-deployment-7b7fb6558d-vdwck                 1/1       Running    1          1h
    openpitrix-system            openpitrix-etcd-deployment-5fd5c95b84-v6zcx               1/1       Running    1          1h
    openpitrix-system            openpitrix-job-manager-deployment-f54fd849-qnh7g          0/1       Init:0/2   1          1h
    openpitrix-system            openpitrix-repo-indexer-deployment-69979f8c9d-9bjsb       0/1       Init:0/2   1          1h
    openpitrix-system            openpitrix-repo-manager-deployment-75b99f46fc-t8mvw       0/1       Init:0/2   1          1h
    openpitrix-system            openpitrix-runtime-manager-deployment-fc5fb97d5-4kls9     0/1       Init:0/2   1          1h
    openpitrix-system            openpitrix-task-manager-deployment-67f8b597f6-dpnkn       0/1       Init:0/2   1          1h
    

As you can see, many pods are stuck waiting to initialize. Here is the pod description:

    root@ks-allinone:~# kubectl describe po ks-account-797644fb8b-v2c8m --namespace=kubesphere-system
    Name:           ks-account-797644fb8b-v2c8m
    Namespace:      kubesphere-system
    Node:           ks-allinone/10.0.2.4
    Start Time:     Thu, 23 Aug 2018 07:45:52 -0400
    Labels:         app=kubesphere
                    component=ks-account
                    pod-template-hash=3532009646
                    tier=backend
    Annotations:    <none>
    Status:         Pending
    IP:             10.233.87.182
    Controlled By:  ReplicaSet/ks-account-797644fb8b
    Init Containers:
      wait-mysql:
        Container ID:  docker://b220ebe6009bce991b187118be75fe8427416387fb3d787b6eb314ca2e8fd12f
        Image:         127.0.0.1:5000/busybox:1.28.4
        Image ID:      docker-pullable://127.0.0.1:5000/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335
        Port:          <none>
        Host Port:     <none>
        Command:
          sh
          -c
          until nc -z openpitrix-db.openpitrix-system.svc 3306; do echo "waiting for mysql"; sleep 2; done;
        State:          Running
          Started:      Thu, 23 Aug 2018 07:45:53 -0400
        Ready:          False
        Restart Count:  0
        Environment:    <none>
        Mounts:
          /var/run/secrets/kubernetes.io/serviceaccount from default-token-t98cq (ro)
    Containers:
      ks-account:
        Container ID:
        Image:          127.0.0.1:5000/kubesphere/ks-account:express-1.0.0-alpha
        Image ID:
        Port:           <none>
        Host Port:      <none>
        State:          Waiting
          Reason:       PodInitializing
        Ready:          False
        Restart Count:  0
        Environment:
          KUBESPHERE_DB_HOST:         openpitrix-db.openpitrix-system.svc
          KUBESPHERE_DB_PORT:         3306
          KUBESPHERE_ADMIN_EMAIL:     [email protected]
          KUBESPHERE_ADMIN_PASSWORD:  passw0rd
          KUBESPHERE_DB_USERNAME:     <set to the key 'username' in secret 'db-user-pass'>  Optional: false
          KUBESPHERE_DB_PASSWORD:     <set to the key 'password' in secret 'db-user-pass'>  Optional: false
        Mounts:
          /var/run/secrets/kubernetes.io/serviceaccount from default-token-t98cq (ro)
    Conditions:
      Type           Status
      Initialized    False
      Ready          False
      PodScheduled   True
    Volumes:
      default-token-t98cq:
        Type:        Secret (a volume populated by a Secret)
        SecretName:  default-token-t98cq
        Optional:    false
    QoS Class:       BestEffort
    Node-Selectors:  <none>
    Tolerations:     <none>
    Events:
      Type    Reason                 Age   From                  Message
      ----    ------                 ----  ----                  -------
      Normal  Scheduled              16m   default-scheduler     Successfully assigned ks-account-797644fb8b-v2c8m to ks-allinone
      Normal  SuccessfulMountVolume  16m   kubelet, ks-allinone  MountVolume.SetUp succeeded for volume "default-token-t98cq"
      Normal  Pulled                 16m   kubelet, ks-allinone  Container image "127.0.0.1:5000/busybox:1.28.4" already present on machine
      Normal  Created                16m   kubelet, ks-allinone  Created container
      Normal  Started                16m   kubelet, ks-allinone  Started container
    

    Here is some error log output from the init container. As you can see, the container cannot resolve `openpitrix-db.openpitrix-system.svc`:

    root@ks-allinone:~# docker ps | grep ks-account
    b220ebe6009b        8c811b4aec35                                       "sh -c 'until nc -z ⋯"   14 minutes ago      Up 14 minutes                                k8s_wait-mysql_ks-account-797644fb8b-v2c8m_kubesphere-system_162e379e-a6ca-11e8-80dc-0800271e06ea_0
    7cc2ff6a59ee        127.0.0.1:5000/google_containers/pause-amd64:3.1   "/pause"                 14 minutes ago      Up 14 minutes                                k8s_POD_ks-account-797644fb8b-v2c8m_kubesphere-system_162e379e-a6ca-11e8-80dc-0800271e06ea_0
    root@ks-allinone:~# docker logs b220e
    waiting for mysql
    nc: bad address 'openpitrix-db.openpitrix-system.svc'
    nc: bad address 'openpitrix-db.openpitrix-system.svc'
    waiting for mysql
    waiting for mysql
    nc: bad address 'openpitrix-db.openpitrix-system.svc'
    nc: bad address 'openpitrix-db.openpitrix-system.svc'
    waiting for mysql
    waiting for mysql
    
    root@ks-allinone:~# docker inspect b22 | grep reso
            "ResolvConfPath": "/var/lib/docker/containers/7cc2ff6a59ee9998cc5b26463ba96b60e4bf2fad81f9a9087e10ee7efcabdb9f/resolv.conf",
    root@ks-allinone:~# cat /var/lib/docker/containers/7cc2ff6a59ee9998cc5b26463ba96b60e4bf2fad81f9a9087e10ee7efcabdb9f/resolv.conf
    nameserver 10.233.0.3
    search kubesphere-system.svc.cluster.local svc.cluster.local cluster.local
    options ndots:5
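With `options ndots:5`, a name containing fewer than five dots (the service name above has two) is first tried against every entry of the `search` list, so inside the pod resolution ultimately depends on the cluster DNS answering for `openpitrix-db.openpitrix-system.svc.cluster.local`. A minimal sketch of the candidate-name expansion a resolver performs (name generation only, not an actual DNS lookup):

```python
# Sketch of how a resolver honouring "options ndots:5" expands a short name
# against the search list from the pod's resolv.conf. This only generates
# the candidate names; it performs no real DNS queries.
def candidate_names(name, search, ndots=5):
    if name.endswith("."):
        # Fully qualified name: no search-list expansion.
        return [name]
    expanded = [name + "." + domain for domain in search]
    if name.count(".") >= ndots:
        # Enough dots: try the name as-is first, then the search list.
        return [name] + expanded
    # Fewer dots than ndots: search list first, bare name last.
    return expanded + [name]

search = ["kubesphere-system.svc.cluster.local", "svc.cluster.local", "cluster.local"]
for candidate in candidate_names("openpitrix-db.openpitrix-system.svc", search):
    print(candidate)
```

The third candidate is exactly the FQDN that the host-side `nslookup` resolves, so the record itself exists; the `bad address` errors suggest the queries from inside the pod never get an answer at all.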
    

    However, I can resolve the domain on the host:

    root@ks-allinone:~# nslookup openpitrix-db.openpitrix-system.svc.cluster.local 10.233.0.3
    Server:         10.233.0.3
    Address:        10.233.0.3#53
    
    Name:   openpitrix-db.openpitrix-system.svc.cluster.local
    Address: 10.233.31.246
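Since the host can query 10.233.0.3 directly while the container gets `nc: bad address`, the problem looks like pod-to-kube-dns connectivity rather than a missing DNS record. For reference, the init container's wait loop (`until nc -z host port; do echo "waiting for mysql"; sleep 2; done`) can be reproduced outside the pod with a small Python sketch; the host and port in the demo are placeholders, not values from the cluster:

```python
# Re-implementation of the init container's readiness loop
# ("until nc -z host port; do ...; sleep 2; done") as a reusable helper.
import socket
import threading
import time

def wait_for_tcp(host, port, timeout=10.0, interval=0.5):
    """Return True once a TCP connection to host:port succeeds, False on deadline.

    socket.gaierror (Python's analogue of nc's "bad address") is a subclass of
    OSError, so it is retried just like a refused connection.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:  # covers gaierror, connection refused, timeout
            time.sleep(interval)
    return False

if __name__ == "__main__":
    # Demo against a throwaway local listener instead of the in-cluster DB.
    server = socket.socket()
    server.bind(("127.0.0.1", 0))
    server.listen(1)
    threading.Thread(target=server.accept, daemon=True).start()
    print(wait_for_tcp("127.0.0.1", server.getsockname()[1], timeout=5))  # True
    print(wait_for_tcp("no-such-host.invalid", 3306, timeout=1))          # False
```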
    

    No "cluster-operator" in KubeSphere Advanced Edition

    When configuring accounts in KubeSphere Advanced Edition, the documentation says the default workspace roles are "cluster-admin, cluster-operator, workspace-manager", but I cannot find "cluster-operator" in the system, only "cluster-regular". I think the documentation may contain an error that needs to be fixed.

    Error when deleting a custom role

    A user with the workspace-admin role created a role in a project (the specific permissions do not matter). Deleting that role (with no users bound to it) succeeds, but an error message pops up at the same time, as shown below:
    _20181222103344

    My guess is that after the deletion the UI does not go back to the parent role list, but keeps trying to display the deleted role's details, which triggers the error; the page shows "undefined" information as in the screenshot below:
    _20181222103455

    Versions and rollback

    After a deployment is defined, it can be modified via "Edit Config Template", which creates a new revision and allows rolling back to an earlier one. Modifying it via "Edit Config File", however, does not seem to show revisions or support rollback. Is this working as designed, or could "Edit Config File" be enhanced to support revisions and rollback as well?

    Design of node management

    Through the node management module, we should be able to do the following.
    In Express, we assume that nodes to be joined to the cluster already have kubelet and the Docker runtime deployed.

    • Access credential management for different IaaS platforms: for example, add an access key for QingCloud and use it to create/delete instances and prepare the kubelet and Docker runtime environment (TBD in SE or EE). In Express, we just provide a way to add an SSH key or username/password used to access the machines.
    • Add/remove nodes to/from the cluster: we use node self-registration mode, i.e. a kubeconfig file is passed to the node and the node connects to the API server by itself. In Express, the node is not really erased; we just delete the node object from the K8s cluster.
    • Add/delete/update node labels
    • Maintenance operations: taint/drain/cordon/uncordon a node
    • Node general status: OutOfDisk, Ready, MemoryPressure, DiskPressure, NetworkUnavailable, ConfigOK
    • Node detail status: CPU, memory, running pods, etc.
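For the self-registration mode described above, each node needs a kubeconfig it can use to reach the API server. An illustrative minimal kubeconfig is sketched below; the server address, cluster/user names, and credential paths are placeholders, not taken from any KubeSphere release:

```yaml
# Illustrative kubeconfig a self-registering node might be given.
# All values here are placeholders.
apiVersion: v1
kind: Config
clusters:
- name: kubesphere
  cluster:
    server: https://10.0.2.4:6443
    certificate-authority: /etc/kubernetes/pki/ca.crt
contexts:
- name: kubelet@kubesphere
  context:
    cluster: kubesphere
    user: kubelet
current-context: kubelet@kubesphere
users:
- name: kubelet
  user:
    client-certificate: /var/lib/kubelet/pki/kubelet-client.crt
    client-key: /var/lib/kubelet/pki/kubelet-client.key
```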

    Does the visual pipeline editor not support Chinese input?

    When using the visual pipeline editor, Chinese input does not seem to be supported, even in a message field; entering Chinese text causes an error. Is this a Kubernetes limitation or a KubeSphere issue, and is there a plan to support it?
    _20181231225930
