
labring / sealos


Sealos is a production-ready Kubernetes distribution that provides a one-stop solution for both public and private cloud. https://sealos.io

Home Page: https://cloud.sealos.io

License: Apache License 2.0

Languages: Go 36.75%, Shell 1.49%, Makefile 2.24%, Dockerfile 0.80%, JavaScript 0.44%, TypeScript 56.20%, SCSS 0.69%, PowerShell 0.04%, HTML 0.04%, Smarty 1.03%, HCL 0.04%, CSS 0.25%
Topics: kubernetes, kubernetes-ha, ipvs, kubeadm, golang, docker, cloudos, container, install

sealos's Introduction

A Cloud Operating System designed for managing cloud-native applications


Docs | 简体中文 | Roadmap

Sealos['siːləs] is a cloud operating system distribution based on the Kubernetes kernel. It lets you use the cloud as easily as a personal computer, reducing cloud costs to roughly 1/10 of what they were.

🚀 Deploy your app on Sealos

Quick Start

🔍 Some screenshots of Sealos:

Templates · App Launchpad · Database · Serverless

Install

💡 Core features

  • 🚀 Application Management: Easy management and quick release of publicly accessible distributed applications in the templates marketplace.
  • 🗄️ Database Management: Create high-availability databases in seconds, offering support for MySQL, PostgreSQL, MongoDB, and Redis.
  • 🌥️ Cloud Universality: Equally effective in both public and private cloud, enabling a seamless transition of traditional applications to the cloud.

🌟 Advantages

  • 💰 Efficient & Economical: Pay solely for the containers you utilize; automatic scaling prevents resource squandering and substantially reduces costs.
  • 🌐 High Universality & Ease of Use: Concentrate on your core business activities without worrying about system complexities; negligible learning costs involved.
  • 🛡️ Agility & Security: The distinctive multi-tenancy sharing model ensures both effective resource segmentation and collaboration, all under a secure framework.

🏘️ Community & support

  • 🌐 Visit the Sealos website for full documentation and useful links.
  • 💬 Join our Discord server to chat with Sealos developers and other Sealos users. This is a good place to learn about Sealos and Kubernetes, ask questions, and share your experiences.
  • 🐦 Tweet at @sealosio on Twitter and follow us.
  • 🐞 Create GitHub Issues for bug reports and feature requests.

🚧 Roadmap

Sealos maintains a public roadmap. It gives a high-level view of the main priorities for the project, the maturity of different features and projects, and how to influence the project direction.

👩‍💻 Contributing & Development

Have a look through existing Issues and Pull Requests that you could help with. If you'd like to request a feature or report a bug, please create a GitHub Issue using one of the templates provided.

📖 See contribution guide →

🔧 See development guide →

Links

  • Laf is a function-as-a-service application that runs on Sealos.
  • Buildah: Buildah's functionality is used extensively in Sealos 4.0 to ensure that cluster images are compatible with the OCI standard.

sealos's People

Contributors

abingcbc, bxy4543, c121914yu, cuisongliu, fanux, fengxsong, ghostloda, gitccl, jinnzy, leezq, lingdie, lzihan, mond77, nowinkeyy, oldthreefeng, pathoo, sakcer, sealos-release-robot, signormercurio, whybeyoung, willzhang, xiao-jay, xiaohan1202, xudaotutou, yangchuansheng, yxxchange, yyf1986, zhangguanzhang, zjy365, zzjin

sealos's Issues

SCP package distribution

Add a flag:

--local-pkg bool  if true, scp the offline package to every node.
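
A minimal sketch of how such a flag could be wired in, assuming a cobra-based CLI; apart from the flag name proposed above, everything here is illustrative and not the actual sealos code:

// Hypothetical wiring for the proposed --local-pkg flag (illustrative only).
package main

import (
    "fmt"

    "github.com/spf13/cobra"
)

var localPkg bool

func main() {
    initCmd := &cobra.Command{
        Use:   "init",
        Short: "init a kubernetes cluster",
        Run: func(cmd *cobra.Command, args []string) {
            if localPkg {
                // scp the offline package to every node, then unpack it and run init.sh there
                fmt.Println("distributing offline package to all nodes")
            }
        },
    }
    initCmd.Flags().BoolVar(&localPkg, "local-pkg", false,
        "if true, scp the offline package to every node")

    if err := initCmd.Execute(); err != nil {
        fmt.Println(err)
    }
}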

Sending the package needs an MD5 check

2019-05-08 14:27:53 [DEBG] [github.com/fanux/sealos/install/utils.go:90] command result is: 
gzip: stdin: unexpected end of file
tar: Child returned status 1
tar: Error is not recoverable: exiting now
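
The truncated archive above suggests the copy did not complete. One way to catch this is to compare checksums of the local and remote copies before unpacking; a minimal Go sketch of the local side, not the actual sealos implementation:

// Compute the MD5 of a file so the local and remote copies can be compared (sketch).
package main

import (
    "crypto/md5"
    "encoding/hex"
    "fmt"
    "io"
    "os"
)

func fileMD5(path string) (string, error) {
    f, err := os.Open(path)
    if err != nil {
        return "", err
    }
    defer f.Close()

    h := md5.New()
    if _, err := io.Copy(h, f); err != nil {
        return "", err
    }
    return hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
    sum, err := fileMD5(os.Args[1])
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    // Compare this with `md5sum` run on the remote copy; retry the scp if they differ.
    fmt.Println(sum)
}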

Upgrade a single-master cluster installed with kubeadm to HA

As the title says: I installed the cluster with the official kubeadm, and now I want to upgrade it to a multi-master HA setup. However, the masters are on different subnets. I previously tried upgrading with keepalived + haproxy, but when the VIP fails over to a master on a different subnet it becomes unreachable, and keepalived unicast mode does not solve this well either.

Can sealos handle this? Or how can I use sealos to upgrade a single-master cluster installed with kubeadm, without reinstalling?

Non-default SSH port is not supported

sealos version: 2.0.4

sealos init --kubeadm-config kubeadm-config.yaml.tmpl \
--master 192.168.52.51:60022 \
--master 192.168.52.52:60022 \
--master 192.168.52.53:60022 \
--node 192.168.52.54:60022 \
--user root \
--passwd 1 \
--version v1.14.4 \
--pkg-url /root/kube1.14.4.tar.gz


Does the installation require a proxy to reach the internet?

Hello, does the installation process require a proxy to reach blocked sites? Also, must the installation use the root account and password, or can I install with another user's credentials?

Add scp distribution of the offline package

sealos init --pkg kube1.14.1.tar.gz

Distribute the package to every node, unpack it, then run init.sh:

scp kube1.14.1.tar.gz ~
tar zxvf kube1.14.1.tar.gz
cd ~/kube/shell && sh init.sh
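
A rough sketch of how the remote unpack-and-init step could be driven from Go, assuming password-based SSH and the golang.org/x/crypto/ssh package; the address and password below are placeholders, and this is illustrative rather than the sealos implementation:

// Run the unpack-and-init steps on one node over SSH (illustrative sketch).
package main

import (
    "log"

    "golang.org/x/crypto/ssh"
)

func runOnNode(addr, user, passwd, cmd string) error {
    cfg := &ssh.ClientConfig{
        User:            user,
        Auth:            []ssh.AuthMethod{ssh.Password(passwd)},
        HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a sketch, not for production
    }
    client, err := ssh.Dial("tcp", addr, cfg)
    if err != nil {
        return err
    }
    defer client.Close()

    session, err := client.NewSession()
    if err != nil {
        return err
    }
    defer session.Close()
    return session.Run(cmd)
}

func main() {
    // The package is assumed to have been copied to ~ already (e.g. via scp).
    err := runOnNode("192.168.0.5:22", "root", "your-server-password",
        "cd ~ && tar zxvf kube1.14.1.tar.gz && cd ~/kube/shell && sh init.sh")
    if err != nil {
        log.Fatal(err)
    }
}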

App installation support

prometheus.tar       # app package
    config           # configuration files
    images.tar       # image archive
    manifests/       # manifest files
        kustomization.yaml
        deploy.yaml

sealos install --pkg-url /root/prometheus.tar.gz

Users can run the install command from any machine. The machine running sealos needs a .sealos/config file, which can be sent back to it during install (when running init, the user can copy /etc/kubernetes/admin.conf from the master to the machine running sealos).

  1. Use client-go to read all the nodes in the cluster (see the sketch below).
  2. Copy the images to every node and run docker load.
  3. On master0, run kubectl apply -k prometheus/manifests directly.

Use sealos-pass uniformly for password management; the earlier clean, join, and other commands can be refactored accordingly.
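
Step 1 can be done with client-go; a minimal sketch that lists the cluster nodes using the kubeconfig mentioned in this issue (the ~/.sealos/config path comes from the issue text, the rest is illustrative):

// List every node in the cluster with client-go (sketch).
package main

import (
    "context"
    "fmt"
    "log"
    "os"
    "path/filepath"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    home, _ := os.UserHomeDir()
    kubeconfig := filepath.Join(home, ".sealos", "config") // path assumed from the issue text

    cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    if err != nil {
        log.Fatal(err)
    }
    clientset, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        log.Fatal(err)
    }

    nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
    if err != nil {
        log.Fatal(err)
    }
    for _, n := range nodes.Items {
        fmt.Println(n.Name) // copy images.tar here and run `docker load` on each node
    }
}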

Generate the kubeadm-config.yaml configuration file on master0

apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.14.0
controlPlaneEndpoint: "apiserver.cluster.local:6443" # apiserver DNS name
apiServer:
        certSANs:
        - 127.0.0.1
        - apiserver.cluster.local
        - 172.20.241.205
        - 172.20.241.206
        - 172.20.241.207
        - 172.20.241.208
        - 10.103.97.1          # virtual ip
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
        excludeCIDRs: 
        - "10.103.97.1/32"

Render all of the master addresses into the template and write the result to /root/kubeadm-config.yaml.
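
A simplified sketch of that rendering step using Go's text/template; the template mirrors the config above but is not the actual sealos template:

// Render the master addresses and VIP into a kubeadm config (simplified sketch).
package main

import (
    "log"
    "os"
    "text/template"
)

const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: {{.Version}}
controlPlaneEndpoint: "apiserver.cluster.local:6443"
apiServer:
  certSANs:
  - 127.0.0.1
  - apiserver.cluster.local
{{- range .Masters}}
  - {{.}}
{{- end}}
  - {{.VIP}}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  excludeCIDRs:
  - "{{.VIP}}/32"
`

type Cluster struct {
    Version string
    Masters []string
    VIP     string
}

func main() {
    c := Cluster{
        Version: "v1.14.0",
        Masters: []string{"172.20.241.205", "172.20.241.206", "172.20.241.207"},
        VIP:     "10.103.97.1",
    }
    t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
    // In the real flow this output would be written to /root/kubeadm-config.yaml on master0.
    if err := t.Execute(os.Stdout, c); err != nil {
        log.Fatal(err)
    }
}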

lvscare can't create ipvs rules when the server reboots

Could not get ipvs family information from the kernel. It is possible that ipvs is not enabled in your kernel. Native loadbalancing will not work until this is fixed.

kubelet already has a pre-start script:

/etc/systemd/system/kubelet.service
[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=http://kubernetes.io/docs/

[Service]
ExecStart=/usr/bin/kubelet
ExecStartPre=sh /usr/bin/kubelet-pre-start.sh
Restart=always
StartLimitInterval=0
RestartSec=10

[Install]
WantedBy=multi-user.target
[root@iz2ze2d8u10y3g9c1hxrc1z ~]# cat /usr/bin/kubelet-pre-start.sh
# Open ipvs
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4

cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
sysctl -w net.ipv4.ip_forward=1
systemctl stop firewalld && systemctl disable firewalld
swapoff -a
setenforce 0

Cluster initialization problem when specifying --kubeadm-config string

Because the cluster needs to reach the apiserver over the external network, I passed --kubeadm-config config.yaml when initializing the cluster. However, sealos still re-rendered the kubeadm config with its default parameters, so the custom floating-IP setting was overwritten.

Initialization command:

sealos init --kubeadm-config kubeadm-config.yaml --master 172.16.10.31 --master 172.16.10.32 --master 172.16.10.33 --node 172.16.10.34 --user root --passwd redhat --pkg-url kube1.14.4.tar.gz --version v1.14.4

Relevant logs:
2019-07-28 03:58:30 [DEBG] [github.com/fanux/sealos/install/send_package.go:36] please wait for tar zxvf exec
2019-07-28 03:58:30 [INFO] [github.com/fanux/sealos/install/utils.go:81] 172.16.10.32 ls -l /root | grep kube1.14.4.tar.gz | wc -l
2019-07-28 03:58:30 [DEBG] [github.com/fanux/sealos/install/send_package.go:56] please wait for tar zxvf exec
2019-07-28 03:58:30 [INFO] [github.com/fanux/sealos/install/utils.go:81] 172.16.10.34 ls -l /root | grep kube1.14.4.tar.gz | wc -l
2019-07-28 03:58:30 [DEBG] [github.com/fanux/sealos/install/send_package.go:36] please wait for tar zxvf exec
2019-07-28 03:58:30 [INFO] [github.com/fanux/sealos/install/utils.go:81] 172.16.10.33 ls -l /root | grep kube1.14.4.tar.gz | wc -l
2019-07-28 03:58:30 [DEBG] [github.com/fanux/sealos/install/send_package.go:36] please wait for tar zxvf exec
2019-07-28 03:58:30 [INFO] [github.com/fanux/sealos/install/utils.go:81] 172.16.10.31 ls -l /root | grep kube1.14.4.tar.gz | wc -l
2019-07-28 03:58:30 [DEBG] [github.com/fanux/sealos/install/utils.go:90] command result is: 1
...
...
2019-07-28 03:59:48 [INFO] [github.com/fanux/sealos/install/utils.go:81] 172.16.10.31 echo "apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.14.4
controlPlaneEndpoint: apiserver.cluster.local:6443
networking:
podSubnet: 100.64.0.0/10
apiServer:
certSANs:
- 127.0.0.1
- apiserver.cluster.local
- 172.16.10.31
- 172.16.10.32
- 172.16.10.33
- 10.103.97.2
- 172.16.10.100
- k8s-cluster.viken.local

apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
ipvs:
excludeCIDRs:
- 10.103.97.2/32
" > /root/kubeadm-config.yaml
2019-07-28 03:59:48 [DEBG] [github.com/fanux/sealos/install/utils.go:90] command result is:
2019-07-28 03:59:48 [INFO] [github.com/fanux/sealos/install/utils.go:81] 172.16.10.31 echo 172.16.10.31 apiserver.cluster.local >> /etc/hosts
2019-07-28 03:59:48 [DEBG] [github.com/fanux/sealos/install/utils.go:90] command result is:
2019-07-28 03:59:48 [INFO] [github.com/fanux/sealos/install/utils.go:81] 172.16.10.31 echo "apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.14.4
controlPlaneEndpoint: "apiserver.cluster.local:6443"
networking:
podSubnet: 100.64.0.0/10
apiServer:
certSANs:
- 127.0.0.1
- apiserver.cluster.local
- 172.16.10.31
- 172.16.10.32
- 172.16.10.33
- 10.103.97.2

apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
excludeCIDRs:
- "10.103.97.2/32"" > /root/kubeadm-config.yaml
2019-07-28 03:59:48 [DEBG] [github.com/fanux/sealos/install/utils.go:90] command result is:

Here sealos's second render overwrites the custom parameters in kubeadm-config.yaml.

Deployment error

3 masters
OS: CentOS Linux release 7.5.1804 (Core)
Kernel: Linux 4.19.1-1.el7.elrepo.x86_64


keepalived track_script not working

global_defs {
   router_id k8s
}

vrrp_script Checkhaproxy {
    script "/etc/keepalived/check_haproxy.sh"
    interval 3
    weight -30
}

vrrp_instance VI_1 {
    state MASTER

    interface eth0
    virtual_router_id  100
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass kuburnetes
    }
    virtual_ipaddress {
         10.1.86.209
    }
    track_script {
        Checkhaproxy
    }
}
if [ `curl https://10.1.86.203:6444 --insecure |grep kind |wc -l` -eq 0 ] ; then
   exit 1 # just exit, MASTER will reduce weight(-25), so vip will move on BACKUP node
   else
   exit 0
fi

The script exits 1, but the MASTER does not release the VIP.

kubernetes 1.14.1 known issues

There is a bug in k8s 1.14.1 that can make a sealos HA cluster unstable: kubernetes/kubernetes#76267 (comment). It is fixed in 1.14.2, so upgrading is recommended. Without the fix, kube-proxy cleans up ipvs rules it did not create, which breaks the master ipvs load balancing; lvscare re-creates the rules, but there can be a brief disruption. In 1.14.2 the excludeCIDRs parameter works correctly, and the sealos virtual IP is configured in that parameter, so it is no longer cleaned up by kube-proxy.

join support

sealos join \
    --master 192.168.0.2 \
    --master 192.168.0.3 \
    --master 192.168.0.4 \
    --vip 10.103.97.2 \
    --node 192.168.0.5 \
    --user root \
    --passwd your-server-password \
    --pkg-url /root/kube1.15.0.tar.gz

install error

2019-07-31 07:34:35 [DEBG] [github.com/fanux/sealos/install/utils.go:89] command result is: * Applying /usr/lib/sysctl.d/00-system.conf ...
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0

* Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
kernel.yama.ptrace_scope = 0
* Applying /usr/lib/sysctl.d/50-default.conf ...
kernel.sysrq = 16
kernel.core_uses_pid = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.promote_secondaries = 1
net.ipv4.conf.all.promote_secondaries = 1
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.d/k8s.conf ...
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
* Applying /etc/sysctl.conf ...
net.ipv4.ip_forward = 1
Error processing tar file(exit status 1): write /965e10d02b1e22d5c082297ae3f7d8b162cd6d5cefba3b884779ac990f006d12/layer.tar: no space left on device
cp: cannot create regular file '/usr/bin/sealos': Text file busy
mkdir: cannot create directory '/etc/systemd/system/kubelet.service.d': File exists
driver is cgroupfs
Failed to execute operation: Invalid argument

Passwordless SSH error

2019-09-02 14:34:35 [EROR] [github.com/fanux/sealos/install/utils.go:89] Error create ssh session failed ssh: handshake failed: ssh: unable to authenticate, attempted methods [none password], no supported methods remain

The SSH key pair was generated with the Ed25519 algorithm, which apparently is not supported.

1.13.2 HA: services accessed through the VIP are intermittently unreachable

Installed by following the steps at https://sealyun.com/post/sealos/:
1. Modified roles/etcd/templates/kubeletonce.service.j2 inside the container and added the startup flag --cgroup-driver=systemd.
2. Modified the hosts file:
[k8s-master]
10.8.8.21 name=node01 order=1 role=master lb=MASTER lbname=lbmaster priority=100
10.8.8.22 name=node02 order=2 role=master lb=BACKUP lbname=lbbackup priority=80
10.8.8.23 name=node03 order=3 role=master

[k8s-node]
#10.1.86.207 name=node04 role=node

[k8s-all:children]
k8s-master
k8s-node

[all:vars]
vip=10.8.8.19
k8s_version=1.13.2
ip_interface=enp.*
etcd_crts=["ca-key.pem","ca.pem","client-key.pem","client.pem","member1-key.pem","member1.pem","server-key.pem","server.pem","ca.csr","client.csr","member1.csr","server.csr"]
k8s_crts=["apiserver.crt","apiserver-kubelet-client.crt","ca.crt", "front-proxy-ca.key","front-proxy-client.key","sa.pub", "apiserver.key","apiserver-kubelet-client.key", "ca.key", "front-proxy-ca.crt", "front-proxy-client.crt" , "sa.key"]

3. Started the installation:
ansible-playbook roles/install-all.yaml

4. Accessing https://10.8.8.19:32000 in Firefox and refreshing repeatedly: sometimes the page loads quickly, sometimes it takes a very long time.
Accessing https://10.8.8.21:32000 and refreshing repeatedly works without problems.

5. kube-proxy logs:

I0116 03:04:23.878659 1 graceful_termination.go:171] Not deleting, RS 10.8.8.19:32000/TCP/100.80.37.65:8443: 2 ActiveConn, 2 InactiveConn
I0116 03:05:19.221613 1 graceful_termination.go:160] Trying to delete rs: 10.8.8.19:32000/TCP/100.80.37.65:8443
I0116 03:05:19.221667 1 graceful_termination.go:171] Not deleting, RS 10.8.8.19:32000/TCP/100.80.37.65:8443: 1 ActiveConn, 0 InactiveConn
I0116 03:05:19.221714 1 graceful_termination.go:160] Trying to delete rs: 10.8.8.19:32001/TCP/100.80.37.72:3000
I0116 03:05:19.221738 1 graceful_termination.go:174] Deleting rs: 10.8.8.19:32001/TCP/100.80.37.72:3000
I0116 03:05:19.221770 1 graceful_termination.go:160] Trying to delete rs: 10.8.8.19:30000/TCP/100.80.37.74:3000
I0116 03:05:19.221789 1 graceful_termination.go:174] Deleting rs: 10.8.8.19:30000/TCP/100.80.37.74:3000
I0116 03:05:23.878734 1 graceful_termination.go:160] Trying to delete rs: 10.8.8.19:32000/TCP/100.80.37.65:8443
I0116 03:05:23.878834 1 graceful_termination.go:171] Not deleting, RS 10.8.8.19:32000/TCP/100.80.37.65:8443: 1 ActiveConn, 0 InactiveConn
I0116 03:06:19.332984 1 graceful_termination.go:160] Trying to delete rs: 10.8.8.19:32000/TCP/100.80.37.65:8443
I0116 03:06:19.333041 1 graceful_termination.go:171] Not deleting, RS 10.8.8.19:32000/TCP/100.80.37.65:8443: 1 ActiveConn, 0 InactiveConn
I0116 03:06:19.333105 1 graceful_termination.go:160] Trying to delete rs: 10.8.8.19:32001/TCP/100.80.37.72:3000
I0116 03:06:19.333132 1 graceful_termination.go:174] Deleting rs: 10.8.8.19:32001/TCP/100.80.37.72:3000
I0116 03:06:19.333228 1 graceful_termination.go:160] Trying to delete rs: 10.8.8.19:30000/TCP/100.80.37.74:3000
I0116 03:06:19.333250 1 graceful_termination.go:174] Deleting rs: 10.8.8.19:30000/TCP/100.80.37.74:3000
I0116 03:06:23.879766 1 graceful_termination.go:160] Trying to delete rs: 10.8.8.19:32000/TCP/100.80.37.65:8443
I0116 03:06:23.880183 1 graceful_termination.go:171] Not deleting, RS 10.8.8.19:32000/TCP/100.80.37.65:8443: 1 ActiveConn, 0 InactiveConn
I0116 03:07:23.880408 1 graceful_termination.go:160] Trying to delete rs: 10.8.8.19:32000/TCP/100.80.37.65:8443
I0116 03:07:23.880681 1 graceful_termination.go:171] Not deleting, RS 10.8.8.19:32000/TCP/100.80.37.65:8443: 1 ActiveConn, 0 InactiveConn
I0116 03:08:49.684462 1 graceful_termination.go:160] Trying to delete rs: 10.8.8.19:32001/TCP/100.80.37.72:3000
I0116 03:08:49.684496 1 graceful_termination.go:174] Deleting rs: 10.8.8.19:32001/TCP/100.80.37.72:3000
I0116 03:08:49.684525 1 graceful_termination.go:160] Trying to delete rs: 10.8.8.19:32000/TCP/100.80.37.65:8443
I0116 03:08:49.684540 1 graceful_termination.go:171] Not deleting, RS 10.8.8.19:32000/TCP/100.80.37.65:8443: 3 ActiveConn, 0 InactiveConn
I0116 03:08:49.684576 1 graceful_termination.go:160] Trying to delete rs: 10.8.8.19:30000/TCP/100.80.37.74:3000
I0116 03:08:49.684592 1 graceful_termination.go:174] Deleting rs: 10.8.8.19:30000/TCP/100.80.37.74:3000
I0116 03:09:23.881010 1 graceful_termination.go:160] Trying to delete rs: 10.8.8.19:32000/TCP/100.80.37.65:8443
I0116 03:09:23.881259 1 graceful_termination.go:171] Not deleting, RS 10.8.8.19:32000/TCP/100.80.37.65:8443: 3 ActiveConn, 0 InactiveConn

I0116 03:10:49.984058 1 graceful_termination.go:160] Trying to delete rs: 10.8.8.19:30000/TCP/100.80.37.74:3000
I0116 03:10:49.984084 1 graceful_termination.go:174] Deleting rs: 10.8.8.19:30000/TCP/100.80.37.74:3000
I0116 03:10:49.984137 1 graceful_termination.go:160] Trying to delete rs: 10.8.8.19:32000/TCP/100.80.37.65:8443
I0116 03:10:49.984158 1 graceful_termination.go:171] Not deleting, RS 10.8.8.19:32000/TCP/100.80.37.65:8443: 2 ActiveConn, 0 InactiveConn
I0116 03:10:49.984201 1 graceful_termination.go:160] Trying to delete rs: 10.8.8.19:32001/TCP/100.80.37.72:3000
I0116 03:10:49.984222 1 graceful_termination.go:174] Deleting rs: 10.8.8.19:32001/TCP/100.80.37.72:3000
I0116 03:11:50.100714 1 graceful_termination.go:160] Trying to delete rs: 10.8.8.19:32001/TCP/100.80.37.72:3000
I0116 03:11:50.100741 1 graceful_termination.go:174] Deleting rs: 10.8.8.19:32001/TCP/100.80.37.72:3000
I0116 03:11:50.100787 1 graceful_termination.go:160] Trying to delete rs: 10.8.8.19:30000/TCP/100.80.37.74:3000
I0116 03:11:50.100804 1 graceful_termination.go:174] Deleting rs: 10.8.8.19:30000/TCP/100.80.37.74:3000
I0116 03:11:50.100833 1 graceful_termination.go:160] Trying to delete rs: 10.8.8.19:32000/TCP/100.80.37.65:8443
I0116 03:11:50.100854 1 graceful_termination.go:171] Not deleting, RS 10.8.8.19:32000/TCP/100.80.37.65:8443: 3 ActiveConn, 0 InactiveConn
I0116 03:12:23.881960 1 graceful_termination.go:160] Trying to delete rs: 10.8.8.19:32000/TCP/100.80.37.65:8443
I0116 03:12:23.882199 1 graceful_termination.go:171] Not deleting, RS 10.8.8.19:32000/TCP/100.80.37.65:8443: 3 ActiveConn, 0 InactiveConn

coredns CrashLoopBackOff

sealos init --master 100.69.224.124 \
--master 100.69.224.147 \
--master 100.69.224.172 \
--node 100.69.224.173 \
--user root \
--passwd xxxxxx \
--version v1.15.0 \
--pkg-url /root/kube1.15.0.tar.gz

get:
coredns-5c98db65d4-h6446 0/1 CrashLoopBackOff 13 50m
coredns-5c98db65d4-qmzn4 0/1 CrashLoopBackOff 13 50m

E0722 08:35:38.525354 1 reflector.go:134] github.com/coredns/coredns/plugin/kubernetes/controller.go:315: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E0722 08:35:38.525354 1 reflector.go:134] github.com/coredns/coredns/plugin/kubernetes/controller.go:315: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
log: exiting because of error: log: cannot create log: open /tmp/coredns.coredns-5c98db65d4-qmzn4.unknownuser.log.ERROR.20190722-083538.1: no such file or directory

keepalived track_script does not work

global_defs {
   router_id k8s
}

vrrp_script Checkhaproxy {
    script "/etc/keepalived/check_haproxy.sh"
    interval 3
    weight -25 # priority MASTER(100) - 25 < BACKUP(80)
}

vrrp_instance VI_1 {
    state BACKUP

    interface eth0
    virtual_router_id  100
    priority 80
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass kuburnetes
    }
    virtual_ipaddress {
         10.1.86.209
    }
    track_script {
        Checkhaproxy
    }
}
if [ `curl https://10.1.86.202:6444 --insecure |grep kind |wc -l` -eq 0 ] ; then
   exit 1 # just exit, MASTER will reduce weight(-25), so vip will move on BACKUP node
fi

When the script exits 1, the MASTER still does not release the VIP.

sealos app install design

sealos load --master xxx --master xxx --node xxx --node --pkg-url /root/prometheus.tar.gz

  1. Sync the package to every node.
  2. kubectl apply -f /root/prometheus/conf

keepalived does not work in a container

It is caused by the keepalived version; using 1.2.13 fixes it.
Dockerfile:

FROM centos:7.4.1708
RUN yum install -y keepalived && yum install -y net-tools
CMD /usr/sbin/keepalived -P -C -d -D -S 7 -f /etc/keepalived/keepalived.conf --dont-fork --log-console 

Build from source:

FROM centos:7.4.1708
RUN yum install -y  wget && wget http://www.keepalived.org/software/keepalived-1.2.13.tar.gz && tar zxvf keepalived-1.2.13.tar.gz && yum install -y gcc-c++ openssl-devel openssl && \
    cd keepalived-1.2.13 && ./configure && make && make install && yum install -y net-tools
CMD /usr/sbin/keepalived -P -C -d -D -S 7 -f /etc/keepalived/keepalived.conf --dont-fork --log-console 

deploy error

I0905 16:10:54.516684 1 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
E0905 16:10:54.517277 1 prometheus.go:138] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
E0905 16:10:54.517293 1 prometheus.go:150] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
E0905 16:10:54.517317 1 prometheus.go:162] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
E0905 16:10:54.517333 1 prometheus.go:174] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
E0905 16:10:54.517347 1 prometheus.go:189] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
E0905 16:10:54.517359 1 prometheus.go:202] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
I0905 16:10:54.517373 1 plugins.go:158] Loaded 9 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,MutatingAdmissionWebhook.
I0905 16:10:54.517377 1 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
I0905 16:10:54.518724 1 client.go:352] parsed scheme: ""
I0905 16:10:54.518735 1 client.go:352] scheme "" not registered, fallback to default scheme
I0905 16:10:54.518788 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 }]
I0905 16:10:54.518961 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 }]
I0905 16:10:55.516683 1 client.go:352] parsed scheme: ""
I0905 16:10:55.516695 1 client.go:352] scheme "" not registered, fallback to default scheme
I0905 16:10:55.516719 1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0 }]
I0905 16:10:55.516751 1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 }]
F0905 16:11:14.519004 1 storage_decorator.go:57] Unable to create storage backend: config (&{ /registry {[https://127.0.0.1:2379] /etc/kubernetes/pki/apiserver-etcd-client.key /etc/kubernetes/pki/apiserver-etcd-client.crt /etc/kubernetes/pki/etcd/ca.crt} false true 0xc000739710 apiextensions.k8s.io/v1beta1 5m0s 1m0s}), err (context deadline exceeded)

The apiserver failed to start.

Deprecated kubeadm flags on k8s 1.15

2019-07-24 15:48:30 [DEBG] [/berk.can/sealos/install/utils.go:90] command result is: Flag --experimental-upload-certs has been deprecated, use --upload-certs instead

kubeonce: add cgroup driver detection

The temporary kubelet may fail to start because docker's cgroup driver is not detected. The purpose of kubeletonce is to use kubelet to bring up the etcd static pod before k8s is installed; once the pod is up, this kubelet exits.
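
A minimal sketch of such a check, shelling out to `docker info` and inspecting the reported driver; illustrative only, not the sealos code:

// Detect docker's cgroup driver before starting the temporary kubelet (sketch).
package main

import (
    "fmt"
    "log"
    "os/exec"
    "strings"
)

func dockerCgroupDriver() (string, error) {
    out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
    if err != nil {
        return "", err
    }
    return strings.TrimSpace(string(out)), nil
}

func main() {
    driver, err := dockerCgroupDriver()
    if err != nil {
        log.Fatal(err)
    }
    fmt.Printf("docker cgroup driver is %q\n", driver)
    // If this is "systemd", the temporary kubelet must be started with
    // --cgroup-driver=systemd, otherwise it will fail to come up.
}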

[root@k8s-master01 ~]# kubectl get no Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

Hello:
On a cluster that was already set up, after completing your steps I get this error:

[root@k8s-master01 ~]# kubectl get no 

Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

I also tried the fixes described online, but the error persists.

Clean command

sealos clean --master xxx --master xx --master xx --node xx

Run:

kubeadm reset  -f && rm -rf /var/etcd && rm -rf /var/lib/etcd

Then remove the apiserver.cluster.local entry from /etc/hosts.
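
The /etc/hosts cleanup could look like this in Go; a small sketch that simply drops the apiserver.cluster.local line:

// Remove the apiserver.cluster.local entry from /etc/hosts (sketch).
package main

import (
    "log"
    "os"
    "strings"
)

func main() {
    const hostsFile = "/etc/hosts"

    data, err := os.ReadFile(hostsFile)
    if err != nil {
        log.Fatal(err)
    }

    var kept []string
    for _, line := range strings.Split(string(data), "\n") {
        if strings.Contains(line, "apiserver.cluster.local") {
            continue // drop the entry added during init
        }
        kept = append(kept, line)
    }

    if err := os.WriteFile(hostsFile, []byte(strings.Join(kept, "\n")), 0644); err != nil {
        log.Fatal(err)
    }
}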

kubernetes cert expired

Using this kubeadm (99-year certs)

More info...

install:

chmod +x kubeadm && cp kubeadm /usr/bin

update pki:

[root@dev-86-202 ~]# rm /etc/kubernetes/pki/ -rf
[root@dev-86-202 ~]# kubeadm alpha phase certs all --config  kube/conf/kubeadm.yaml

update kubeconfig

[root@dev-86-202 ~]# rm -rf /etc/kubernetes/*conf
[root@dev-86-202 ~]# kubeadm alpha phase kubeconfig all --config ~/kube/conf/kubeadm.yaml
[root@dev-86-202 ~]# cp /etc/kubernetes/admin.conf ~/.kube/config

test:

$ cd /etc/kubernetes/pki
$ openssl x509 -in apiserver-etcd-client.crt -text -noout
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 4701787282062078235 (0x41401a9f34c2711b)
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: CN=etcd-ca
        Validity
            Not Before: Nov 22 11:58:50 2018 GMT
            Not After : Oct 29 11:58:51 2117 GMT
kubectl get pod -n kube-system

Port the kubeadm customizations

Reason:
Merging the kubeadm code and resolving conflicts on every release is painful and blocks automation; porting the changes into sealos lets the packaging run automatically.

  1. Certificate extension: sealos needs to generate the certificates ahead of time.
  2. Create the ipvs rules before join (see the sketch below).
  3. After join, create a static pod to guard the ipvs rules.
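
Item 2 amounts to pointing a local virtual service at every master apiserver; a sketch that shells out to ipvsadm (the real mechanism is lvscare, so this is only illustrative):

// Create an ipvs virtual service for the VIP and add every master as a real server (sketch).
package main

import (
    "fmt"
    "log"
    "os/exec"
)

func run(args ...string) error {
    out, err := exec.Command("ipvsadm", args...).CombinedOutput()
    if err != nil {
        return fmt.Errorf("ipvsadm %v: %v: %s", args, err, out)
    }
    return nil
}

func main() {
    vip := "10.103.97.2:6443"
    masters := []string{"192.168.0.2:6443", "192.168.0.3:6443", "192.168.0.4:6443"}

    // Create the virtual service with round-robin scheduling.
    if err := run("-A", "-t", vip, "-s", "rr"); err != nil {
        log.Fatal(err)
    }
    // Add every master apiserver as a real server (masquerade mode).
    for _, m := range masters {
        if err := run("-a", "-t", vip, "-r", m, "-m"); err != nil {
            log.Fatal(err)
        }
    }
}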

k8s 1.14.1: rebooting a node prevents it from automatically rejoining the cluster

OS: CentOS 7.5
Kernel: 5.0
Docker: 18.06.1-ce
After building the k8s cluster with sealos init, I manually rebooted one of the nodes and it could not rejoin the cluster automatically.
systemctl status shows kubelet and docker as running.
But lsmod shows no ip_vs modules loaded; the output is empty.
Even after manually running modprobe ip_vs and restarting docker and kubelet, the node still cannot rejoin the original cluster.
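
One way to diagnose this at boot is to verify the ipvs modules before kubelet starts; a small Go sketch (illustrative only) that checks /proc/modules and loads anything missing:

// Check whether the ipvs kernel modules are loaded and load any that are missing (sketch).
package main

import (
    "fmt"
    "log"
    "os"
    "os/exec"
    "strings"
)

func moduleLoaded(name string) (bool, error) {
    data, err := os.ReadFile("/proc/modules")
    if err != nil {
        return false, err
    }
    for _, line := range strings.Split(string(data), "\n") {
        if strings.HasPrefix(line, name+" ") {
            return true, nil
        }
    }
    return false, nil
}

func main() {
    for _, m := range []string{"ip_vs", "ip_vs_rr", "ip_vs_wrr", "ip_vs_sh"} {
        ok, err := moduleLoaded(m)
        if err != nil {
            log.Fatal(err)
        }
        if !ok {
            fmt.Printf("module %s not loaded, running modprobe\n", m)
            if out, err := exec.Command("modprobe", m).CombinedOutput(); err != nil {
                log.Fatalf("modprobe %s: %v: %s", m, err, out)
            }
        }
    }
}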

Please don’t spam

Hi! I just got spammed in the email on my profile! That’s not nice, please don’t spam people (specially those who specifically opted-out from spam).

upgrade support

sealos upgrade --master 192.168.0.2 \
    --master 192.168.0.3 \
    --master 192.168.0.4 \              
    --node 192.168.0.5 \                 
    --user root \                        
    --passwd your-server-password \      
    --from-version v1.14.1 \
    --to-version v1.15.0 \
    --pkg-url /root/kube1.15.0.tar.gz   

[feature request]etcd role define decoupling

[k8s-master]
10.1.86.204 name=node01 order=1 role=master lb=MASTER lbname=lbmaster priority=100  etcd=true
10.1.86.205 name=node02 order=2 role=master lb=BACKUP lbname=lbbackup priority=80 etcd=true
10.1.86.206 name=node03 order=3 role=master  etcd=true

[k8s-node]
10.1.86.207 name=node04 role=node

[k8s-all:children]
k8s-master
k8s-node

[all:vars]
vip=10.1.86.209

So with etcd=true you can initialize the etcd cluster alongside a single Kubernetes master.

like this:

[k8s-master]
10.1.86.204 name=node01 order=1 role=master lb=MASTER lbname=lbmaster priority=100  etcd=true

[k8s-node]
10.1.86.205 name=node02 order=2 role=node lb=BACKUP lbname=lbbackup priority=80 etcd=true
10.1.86.206 name=node03 order=3 role=node  etcd=true
10.1.86.207 name=node04 role=node

[k8s-all:children]
k8s-master
k8s-node

[all:vars]
vip=10.1.86.209

Three nodes with one master.
