rootsongjc / kubernetes-vagrant-centos-cluster

Setting up a distributed Kubernetes cluster along with an Istio service mesh locally with Vagrant and VirtualBox, for PoC or demo use only.

Home Page: https://jimmysong.io

License: Apache License 2.0

Languages: Shell 92.12%, Dockerfile 7.88%
Topics: kubernetes, vagrant, vagrantfile, docker, kubernetes-cluster, virtualbox, traefik, helm, istio, cloud-native

kubernetes-vagrant-centos-cluster's People

Contributors

0312birdzhang, cnlyzy, coordinate35, custa, deenamanick, j4ckzh0u, jamalshahverdiev, libaoan, onecer, pengisgood, raducrisan1, realforce1024, rootsongjc, samacs, senthilrch, weizhe0422, wzhliang

kubernetes-vagrant-centos-cluster's Issues

After running for a while, port 8443 is no longer listening

[root@node1 log]# netstat -an|grep 8443
[root@node1 log]# netstat -an|grep 6443
tcp        0      0 172.17.8.101:6443       0.0.0.0:*               LISTEN 

Logs

Jan 23 16:32:10 node1 flanneld-start: E0123 16:32:10.276190    6014 network.go:102] failed to retrieve network config: 100: Key not found (/kube-centos) [3]
Jan 23 16:32:11 node1 flanneld-start: E0123 16:32:11.278138    6014 network.go:102] failed to retrieve network config: 100: Key not found (/kube-centos) [3]
Jan 23 16:32:12 node1 flanneld-start: E0123 16:32:12.280508    6014 network.go:102] failed to retrieve network config: 100: Key not found (/kube-centos) [3]
Jan 23 16:32:13 node1 flanneld-start: E0123 16:32:13.283003    6014 network.go:102] failed to retrieve network config: 100: Key not found (/kube-centos) [3]
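
flanneld is polling etcd and finding no network config under /kube-centos, which is also why networking (and with it the dashboard on 8443) can break. A minimal recovery sketch, assuming the /kube-centos etcd prefix and the 172.33.0.0/16 pod range that this repo's provisioning logs show elsewhere in these issues:

# Re-create the flannel network config key that flanneld is polling for,
# then restart flanneld (key path and range are assumptions taken from the
# provisioning logs; the unit name may be flanneld or flanneld-start).
etcdctl set /kube-centos/network/config '{"Network":"172.33.0.0/16"}'
sudo systemctl restart flanneld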

Restart Error

Environment

  • OS:
  • Kubernetes version: 1.9.1
  • VirtualBox version: 5.2.18
  • Vagrant version: 2.1.5

What I did?

After rebooting, the cluster does not start.
(screenshot)

Messages


Initial start kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubelet, docker, flannel missing?


These are the errors I get from Vagrant:

node1: Specific bridge 'en0: Wi-Fi (AirPort)' not found. You may be asked to specify
node2: Specific bridge 'en0: Wi-Fi (AirPort)' not found. You may be asked to specify
node3: Specific bridge 'en0: Wi-Fi (AirPort)' not found. You may be asked to specify

node1: net.ipv4.ip_forward = 1
node1: sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory
node1: sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory	

node2: net.ipv4.ip_forward = 1
node2: sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory
node2: sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory	

node3: net.ipv4.ip_forward = 1
node3: sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory
node3: sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory	
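
These sysctl warnings usually mean the br_netfilter kernel module is not loaded when the provisioning script runs. A sketch of a manual fix inside each node, assuming the stock CentOS 7 guest used here:

# Load the bridge netfilter module so the bridge-nf-call-* sysctls exist,
# make it persistent across reboots, then re-apply the settings.
sudo modprobe br_netfilter
echo br_netfilter | sudo tee /etc/modules-load.d/br_netfilter.conf
sudo sysctl net.bridge.bridge-nf-call-iptables=1 net.bridge.bridge-nf-call-ip6tables=1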

Node1 Result:

[root@node1 vagrant]# kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
node1     Ready     <none>    14h       v1.9.3
node2     Ready     <none>    14h       v1.9.3
node3     Ready     <none>    14h       v1.9.3
[root@node1 vagrant]# kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                                    READY     STATUS    RESTARTS   AGE       IP             NODE
kube-system   kubernetes-dashboard-65486f5fdf-g2l5n   1/1       Running   0          14h       172.17.8.101   node1
kube-system   traefik-ingress-controller-5v2zm        1/1       Running   0          14h       172.17.8.102   node2
[root@node1 vagrant]# kubectl get svc --all-namespaces
NAMESPACE     NAME                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)           AGE
default       kubernetes                ClusterIP   10.254.0.1       <none>        443/TCP           14h
kube-system   kubernetes-dashboard      ClusterIP   10.254.109.161   <none>        8443/TCP          14h
kube-system   traefik-ingress-service   ClusterIP   10.254.245.93    <none>        80/TCP,8080/TCP   14h

vagrant up, network config error

I got errors while running vagrant up; it seems to be a network problem.
I am using VirtualBox v5.0.20.

The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!

# Down the interface before munging the config file. This might
# fail if the interface is not actually set up yet so ignore
# errors.
/sbin/ifdown 'eth1'
# Move new config into place
mv -f '/tmp/vagrant-network-entry-eth1-1530307442-0' '/etc/sysconfig/network-scripts/ifcfg-eth1'
# attempt to force network manager to reload configurations
nmcli c reload || true

# Down the interface before munging the config file. This might
# fail if the interface is not actually set up yet so ignore
# errors.
/sbin/ifdown 'eth2'
# Move new config into place
mv -f '/tmp/vagrant-network-entry-eth2-1530307443-1' '/etc/sysconfig/network-scripts/ifcfg-eth2'
# attempt to force network manager to reload configurations
nmcli c reload || true

# Restart network
service network restart


Stdout from the command:

Restarting network (via systemctl):  [FAILED]


Stderr from the command:

usage: ifdown <configuration>
usage: ifdown <configuration>
Job for network.service failed because the control process exited with error code. See "systemctl status network.service" and "journalctl -xe" for details.

yum always timeout

When I run vagrant up, yum always times out. I suspect this is related to the yum configuration; could it be an IPv6 issue? How can I solve this?

    node1: http://mirrors.163.com/centos/7/os/x86_64/repodata/repomd.xml: [Errno 12] Timeout on http://mirrors.163.com/centos/7/os/x86_64/repodata/repomd.xml: (28, 'Connection timed out after 30001 milliseconds')
    node1: Trying other mirror.
    node1: http://mirrors.163.com/centos/7/os/x86_64/repodata/repomd.xml: [Errno 12] Timeout on http://mirrors.163.com/centos/7/os/x86_64/repodata/repomd.xml: (28, 'Connection timed out after 30001 milliseconds')
    node1: Trying other mirror.
    node1: http://mirrors.163.com/centos/7/os/x86_64/repodata/repomd.xml: [Errno 12] Timeout on http://mirrors.163.com/centos/7/os/x86_64/repodata/repomd.xml: (28, 'Connection timed out after 30001 milliseconds')
    node1: Trying other mirror.
    node1: http://mirrors.163.com/centos/7/os/x86_64/repodata/repomd.xml: [Errno 12] Timeout on http://mirrors.163.com/centos/7/os/x86_64/repodata/repomd.xml: (28, 'Connection timed out after 30002 milliseconds')
    node1: Trying other mirror.
    node1: http://mirrors.163.com/centos/7/os/x86_64/repodata/repomd.xml: [Errno 12] Timeout on http://mirrors.163.com/centos/7/os/x86_64/repodata/repomd.xml: (28, 'Connection timed out after 30001 milliseconds')
    node1: Trying other mirror.
    node1: http://mirrors.163.com/centos/7/os/x86_64/repodata/repomd.xml: [Errno 12] Timeout on http://mirrors.163.com/centos/7/os/x86_64/repodata/repomd.xml: (28, 'Connection timed out after 30002 milliseconds')
    node1: Trying other mirror.
    node1: http://mirrors.163.com/centos/7/os/x86_64/repodata/repomd.xml: [Errno 12] Timeout on http://mirrors.163.com/centos/7/os/x86_64/repodata/repomd.xml: (28, 'Connection timed out after 30001 milliseconds')
    node1: Trying other mirror.
    node1: http://mirrors.163.com/centos/7/os/x86_64/repodata/repomd.xml: [Errno 12] Timeout on http://mirrors.163.com/centos/7/os/x86_64/repodata/repomd.xml: (28, 'Connection timed out after 30001 milliseconds')
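
If mirrors.163.com is unreachable from your network, pointing the repo file at a different mirror may be enough. A sketch; the replacement mirror is an assumption, not something this repo ships:

# Swap the 163 mirror for another one inside each guest, then rebuild the cache.
sudo sed -i 's/mirrors\.163\.com/mirrors.aliyun.com/g' /etc/yum.repos.d/CentOS-Base.repo
sudo yum clean all && sudo yum makecache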

Kubelet server start failed - v1.10.0

In this PR, the --require-kubeconfig flag has been removed.

When I use these scripts to set up a cluster with kubelet version 1.10.0, it fails due to the unknown flag --require-kubeconfig.

After I removed this flag in the kubelet config file, it worked.
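
For anyone hitting the same thing, a minimal sketch of that fix, assuming the kubelet flags live in /etc/kubernetes/kubelet as in the usual CentOS layout (the exact file this repo writes may differ):

# Drop the flag that Kubernetes 1.10 removed, then restart the kubelet.
sudo sed -i 's/--require-kubeconfig//g' /etc/kubernetes/kubelet
sudo systemctl daemon-reload && sudo systemctl restart kubelet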

opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/plugins/hosts/bsd/cap/nfs.rb:11:in `nfs_export': wrong number of arguments (given 4, expected 5) (ArgumentError)

Environment

  • OS: MacOS Mojave 10.14.2(18C54)
  • Kubernetes version:
  • VirtualBox version: 6.0.2 r128162 (Qt5.6.3)
  • Vagrant version: Vagrant 2.2.3

What I did?

Following the README, I installed VirtualBox and Vagrant, then ran vagrant up and hit the problem below.

➜  kubernetes-vagrant-centos-cluster git:(master) vagrant up
sh: netsh: command not found
sh: cscript: command not found
It seems that you don't have the privileges to change the firewall rules. NFS will not work without that firewall
changes. Execute the following commands via cmd as administrator:
netsh advfirewall firewall add rule name="VagrantWinNFSd-1.4.0" dir="in" action=allow protocol=any program="\Users\zhaoxin\.vagrant.d\gems\2.4.4\gems\vagrant-winnfsd-1.4.0\bin\winnfsd.exe" profile=any
netsh advfirewall firewall add rule name="VagrantWinNFSd-1.4.0" dir="out" action=allow protocol=any program="\Users\zhaoxin\.vagrant.d\gems\2.4.4\gems\vagrant-winnfsd-1.4.0\bin\winnfsd.exe" profile=any
If you are an Windows XP user run the following command instead:
netsh firewall add allowedprogram "\Users\zhaoxin\.vagrant.d\gems\2.4.4\gems\vagrant-winnfsd-1.4.0\bin\winnfsd.exe" VagrantWinNFSd-1.4.0 ENABLE
Bringing machine 'node1' up with 'virtualbox' provider...
Bringing machine 'node2' up with 'virtualbox' provider...
Bringing machine 'node3' up with 'virtualbox' provider...
==> node1: Importing base box 'centos/7'...
==> node1: Matching MAC address for NAT networking...
==> node1: Setting the name of the VM: node1
==> node1: Clearing any previously set network interfaces...
==> node1: Preparing network interfaces based on configuration...
    node1: Adapter 1: nat
    node1: Adapter 2: hostonly
==> node1: Forwarding ports...
    node1: 22 (guest) => 2222 (host) (adapter 1)
==> node1: Running 'pre-boot' VM customizations...
==> node1: Booting VM...
==> node1: Waiting for machine to boot. This may take a few minutes...
    node1: SSH address: 127.0.0.1:2222
    node1: SSH username: vagrant
    node1: SSH auth method: private key
    node1:
    node1: Vagrant insecure key detected. Vagrant will automatically replace
    node1: this with a newly generated keypair for better security.
    node1:
    node1: Inserting generated public key within guest...
    node1: Removing insecure key from the guest if it's present...
    node1: Key inserted! Disconnecting and reconnecting using new SSH key...
==> node1: Machine booted and ready!
==> node1: Checking for guest additions in VM...
    node1: No guest additions were detected on the base box for this VM! Guest
    node1: additions are required for forwarded ports, shared folders, host only
    node1: networking, and more. If SSH fails on this machine, please install
    node1: the guest additions and repackage the box to continue.
    node1:
    node1: This is not an error message; everything may continue to work properly,
    node1: in which case you may ignore this message.
==> node1: Setting hostname...
==> node1: Configuring and enabling network interfaces...
==> node1: Exporting NFS shared folders...
==> node1: Forcing shutdown of VM...
==> node1: Destroying VM and associated drives...
/opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/plugins/hosts/bsd/cap/nfs.rb:11:in `nfs_export': wrong number of arguments (given 4, expected 5) (ArgumentError)
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/capability_host.rb:111:in `call'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/capability_host.rb:111:in `capability'
	from /Users/zhaoxin/.vagrant.d/gems/2.4.4/gems/vagrant-winnfsd-1.4.0/lib/vagrant-winnfsd/synced_folder.rb:43:in `block (2 levels) in enable'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/environment.rb:614:in `lock'
	from /Users/zhaoxin/.vagrant.d/gems/2.4.4/gems/vagrant-winnfsd-1.4.0/lib/vagrant-winnfsd/synced_folder.rb:41:in `block in enable'
	from /Users/zhaoxin/.vagrant.d/gems/2.4.4/gems/vagrant-winnfsd-1.4.0/lib/vagrant-winnfsd/synced_folder.rb:39:in `synchronize'
	from /Users/zhaoxin/.vagrant.d/gems/2.4.4/gems/vagrant-winnfsd-1.4.0/lib/vagrant-winnfsd/synced_folder.rb:39:in `enable'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/action/builtin/synced_folders.rb:93:in `block in call'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/action/builtin/synced_folders.rb:90:in `each'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/action/builtin/synced_folders.rb:90:in `call'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/action/warden.rb:34:in `call'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/action/builtin/synced_folder_cleanup.rb:28:in `call'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/action/warden.rb:34:in `call'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/plugins/synced_folders/nfs/action_cleanup.rb:25:in `call'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/action/warden.rb:34:in `call'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/plugins/providers/virtualbox/action/prepare_nfs_valid_ids.rb:12:in `call'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/action/warden.rb:34:in `call'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/action/builtin/handle_forwarded_port_collisions.rb:49:in `call'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/action/warden.rb:34:in `call'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/plugins/providers/virtualbox/action/prepare_forwarded_port_collision_params.rb:30:in `call'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/action/warden.rb:34:in `call'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/action/builtin/env_set.rb:19:in `call'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/action/warden.rb:34:in `call'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/action/builtin/provision.rb:80:in `call'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/action/warden.rb:34:in `call'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/plugins/providers/virtualbox/action/clear_forwarded_ports.rb:15:in `call'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/action/warden.rb:34:in `call'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/plugins/providers/virtualbox/action/set_name.rb:50:in `call'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/action/warden.rb:34:in `call'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/plugins/providers/virtualbox/action/clean_machine_folder.rb:17:in `call'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/action/warden.rb:34:in `call'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/plugins/providers/virtualbox/action/check_accessible.rb:18:in `call'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/action/warden.rb:34:in `call'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/action/warden.rb:95:in `block in finalize_action'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/action/warden.rb:34:in `call'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/action/builder.rb:116:in `call'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/action/runner.rb:66:in `block in run'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/util/busy.rb:19:in `busy'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/action/runner.rb:66:in `run'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/action/builtin/call.rb:53:in `call'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/action/warden.rb:34:in `call'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/action/warden.rb:95:in `block in finalize_action'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/action/warden.rb:34:in `call'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/action/builder.rb:116:in `call'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/action/runner.rb:66:in `block in run'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/util/busy.rb:19:in `busy'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/action/runner.rb:66:in `run'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/action/builtin/call.rb:53:in `call'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/action/warden.rb:34:in `call'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/action/warden.rb:95:in `block in finalize_action'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/action/warden.rb:34:in `call'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/action/builder.rb:116:in `call'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/action/runner.rb:66:in `block in run'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/util/busy.rb:19:in `busy'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/action/runner.rb:66:in `run'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/action/builtin/call.rb:53:in `call'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/action/warden.rb:34:in `call'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/action/builtin/box_check_outdated.rb:23:in `call'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/action/warden.rb:34:in `call'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/action/builtin/config_validate.rb:25:in `call'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/action/warden.rb:34:in `call'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/plugins/providers/virtualbox/action/check_virtualbox.rb:26:in `call'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/action/warden.rb:34:in `call'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/action/warden.rb:95:in `block in finalize_action'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/action/warden.rb:34:in `call'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/plugins/providers/virtualbox/action/match_mac_address.rb:22:in `call'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/action/warden.rb:34:in `call'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/plugins/providers/virtualbox/action/discard_state.rb:15:in `call'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/action/warden.rb:34:in `call'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/plugins/providers/virtualbox/action/import.rb:74:in `import'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/plugins/providers/virtualbox/action/import.rb:13:in `call'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/action/warden.rb:34:in `call'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/plugins/providers/virtualbox/action/prepare_clone_snapshot.rb:17:in `call'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/action/warden.rb:34:in `call'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/action/builtin/prepare_clone.rb:15:in `call'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/action/warden.rb:34:in `call'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/plugins/providers/virtualbox/action/customize.rb:40:in `call'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/action/warden.rb:34:in `call'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/plugins/providers/virtualbox/action/check_accessible.rb:18:in `call'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/action/warden.rb:34:in `call'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/action/warden.rb:95:in `block in finalize_action'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/action/warden.rb:34:in `call'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/action/builder.rb:116:in `call'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/action/runner.rb:66:in `block in run'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/util/busy.rb:19:in `busy'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/action/runner.rb:66:in `run'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/action/builtin/call.rb:53:in `call'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/action/warden.rb:34:in `call'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/action/builtin/config_validate.rb:25:in `call'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/action/warden.rb:34:in `call'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/action/warden.rb:95:in `block in finalize_action'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/action/warden.rb:34:in `call'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/action/builtin/handle_box.rb:56:in `call'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/action/warden.rb:34:in `call'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/action/warden.rb:95:in `block in finalize_action'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/action/warden.rb:34:in `call'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/action/builder.rb:116:in `call'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/action/runner.rb:66:in `block in run'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/util/busy.rb:19:in `busy'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/action/runner.rb:66:in `run'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/action/builtin/call.rb:53:in `call'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/action/warden.rb:34:in `call'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/plugins/providers/virtualbox/action/check_virtualbox.rb:26:in `call'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/action/warden.rb:34:in `call'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/action/builder.rb:116:in `call'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/action/runner.rb:66:in `block in run'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/util/busy.rb:19:in `busy'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/action/runner.rb:66:in `run'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/machine.rb:239:in `action_raw'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/machine.rb:208:in `block in action'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/environment.rb:614:in `lock'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/machine.rb:194:in `call'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/machine.rb:194:in `action'
	from /opt/vagrant/embedded/gems/2.2.3/gems/vagrant-2.2.3/lib/vagrant/batch_action.rb:82:in `block (2 levels) in run'

Messages

I suspect this is caused by a version mismatch.
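
The backtrace runs through the vagrant-winnfsd plugin, which targets Windows hosts, while the failing nfs_export capability belongs to Vagrant's own BSD host support. A possible workaround, assuming the plugin is not needed on macOS:

# Remove the Windows-only NFS plugin and retry.
vagrant plugin uninstall vagrant-winnfsd
vagrant up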

Could not resolve host: mirrors.163.com; Unknown error

node1: http://mirrors.163.com/centos/7/extras/x86_64/Packages/etcd-3.2.15-1.el7.x86_64.rpm: [Errno 14] curl#6 - "Could not resolve host: mirrors.163.com; Unknown error"
    node1: Trying other mirror.
    node1: Error downloading packages:
    node1:   etcd-3.2.15-1.el7.x86_64: [Errno 256] No more mirrors to try.
    node1: /tmp/vagrant-shell: line 63: /etc/etcd/etcd.conf: No such file or directory
    node1: cat: /etc/etcd/etcd.conf: No such file or directory
    node1: create network config in etcd
    node1: /tmp/vagrant-shell: line 79: /etc/etcd/etcd-init.sh: No such file or directory
    node1: chmod: cannot access ‘/etc/etcd/etcd-init.sh’: No such file or directory
    node1: start etcd...
    node1: Failed to execute operation: No such file or directory
    node1: Failed to start etcd.service: Unit not found.
    node1: create kubernetes ip range for flannel on 172.33.0.0/16
    node1: /tmp/vagrant-shell: line 91: /etc/etcd/etcd-init.sh: No such file or directory
    node1: /tmp/vagrant-shell: line 92: etcdctl: command not found
    node1: /tmp/vagrant-shell: line 93: etcdctl: command not found
    node1: install flannel...
    node1: Loaded plugins: fastestmirror

Node3 error

Environment

  • OS: Ubuntu 18.04
  • Vagrant: 2.14
  • VirtualBox: 5.2

Output

Choose the same network interface for the nodes.

3
4

Timed out while waiting for the machine to boot

[root@HFY20180626 kubernetes-vagrant-centos-cluster]# vagrant up
Bringing machine 'node1' up with 'virtualbox' provider...
Bringing machine 'node2' up with 'virtualbox' provider...
Bringing machine 'node3' up with 'virtualbox' provider...
==> node1: Importing base box 'centos/7'...
==> node1: Matching MAC address for NAT networking...
==> node1: Setting the name of the VM: node1
==> node1: Clearing any previously set network interfaces...
==> node1: Specific bridge 'en0: Wi-Fi (AirPort)' not found. You may be asked to specify
==> node1: which network to bridge to.
==> node1: Available bridged network interfaces:
1) eth0
2) virbr0
==> node1: When choosing an interface, it is usually the one that is
==> node1: being used to connect to the internet.
    node1: Which interface should the network bridge to? 2
==> node1: Preparing network interfaces based on configuration...
    node1: Adapter 1: nat
    node1: Adapter 2: hostonly
    node1: Adapter 3: bridged
==> node1: Forwarding ports...
    node1: 22 (guest) => 2222 (host) (adapter 1)
==> node1: Running 'pre-boot' VM customizations...
==> node1: Booting VM...
==> node1: Waiting for machine to boot. This may take a few minutes...
    node1: SSH address: 127.0.0.1:2222
    node1: SSH username: vagrant
    node1: SSH auth method: private key

Timed out while waiting for the machine to boot. This means that
Vagrant was unable to communicate with the guest machine within
the configured ("config.vm.boot_timeout" value) time period.

If you look above, you should be able to see the error(s) that
Vagrant had when attempting to connect to the machine. These errors
are usually good hints as to what may be wrong.

If you're using a custom box, make sure that networking is properly
working and you're able to connect to the machine. It is a common
problem that networking isn't setup properly in these boxes.
Verify that authentication configurations are also setup properly,
as well.

If the box appears to be booting properly, you may want to increase
the timeout ("config.vm.boot_timeout") value.

port number and node differ

Hi,

Opening a new issue: I can see running pods and services for Grafana, but I still get "Service Unavailable" when accessing its dashboard at http://grafana.jimmysong.io.

Please find the screenshots below and let me know how to access Grafana.

(I noticed that the dashboard is accessible at 8443 while the k8s master is running at 6443, and that Grafana is running on node1, not node2 as stated in the repo.)

(screenshot, 2018-07-29 12:17 PM)

(screenshot, 2018-07-29 1:17 PM)
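
"Service Unavailable" from Traefik usually means the ingress rule or its backing Service is not matching anything. A diagnostic sketch, assuming grafana.jimmysong.io resolves to the Traefik node via your hosts file as the README sets up:

# Check that an ingress rule for grafana exists and that its Service has endpoints.
kubectl get ingress --all-namespaces
kubectl get svc --all-namespaces | grep -i grafana
kubectl get endpoints --all-namespaces | grep -i grafana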

After vagrant halt, Istio is missing

Environment:

  • OS: Mac
  • Vagrant: 2.1.2
  • VirtualBox: 5.2.16
  • Kubernetes: 1.11.0

I ran into several problems:

  1. After the host suspends automatically, the VMs' clocks drift and are not re-synced, so Prometheus stops showing monitoring data. Syncing the time should fix it (see the sketch after this list), but instead of doing that I ran vagrant halt, which led to another problem.
  2. After running vagrant halt and following the "restart" steps, everything Istio-related that I had configured was gone, including the istio-system namespace.
  3. When I redeploy Istio per the docs, a few containers almost always fail to deploy (servicegraph, egressgateway, or others), and the only way out is vagrant destroy and starting over.

I don't know whether this is my mistake or an environment problem, but it reproduces reliably.
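
For problem 1, a sketch of a manual clock re-sync, assuming ntpd/ntpdate are present on the nodes (the provisioning logs in other issues show ntpd being enabled):

# Stop ntpd, step the clock, then start ntpd again on every node.
for n in node1 node2 node3; do
  vagrant ssh "$n" -c 'sudo systemctl stop ntpd; sudo ntpdate -u pool.ntp.org; sudo systemctl start ntpd'
done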

error: unable to forward port because pod is not running. Current status=Pending

➜  kubernetes-vagrant-centos-cluster git:(master) ✗ kubectl -n default port-forward $(kubectl -n default get pod -l app=vistio-api -o jsonpath='{.items[0].metadata.name}') 9091:9091 &
[1] 13333
➜  kubernetes-vagrant-centos-cluster git:(master) ✗ error: unable to forward port because pod is not running. Current status=Pending

[1]  + 13333 exit 1     kubectl -n default port-forward  9091:9091
➜  kubernetes-vagrant-centos-cluster git:(master) ✗ kubectl -n default port-forward $(kubectl -n default get pod -l app=vistio-web -o jsonpath='{.items[0].metadata.name}') 8080:8080 &
[1] 13375
➜  kubernetes-vagrant-centos-cluster git:(master) ✗ error: unable to forward port because pod is not running. Current status=Pending

[1]  + 13375 exit 1     kubectl -n default port-forward  8080:8080
➜  kubernetes-vagrant-centos-cluster git:(master) ✗ kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
node1     Ready     <none>    2d        v1.9.1
node2     Ready     <none>    2d        v1.9.1
node3     Ready     <none>    2d        v1.9.1
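
The port-forward fails because the pods are still Pending. Looking at why they are Pending, and waiting for readiness before forwarding, may help; note that kubectl wait needs a newer client than the v1.9.1 cluster shown above, so this is only a sketch under that assumption:

# See why the pod is Pending (events are at the bottom of the output).
kubectl -n default describe pod -l app=vistio-api
# Once it can be scheduled, wait for readiness, then port-forward.
kubectl -n default wait --for=condition=ready pod -l app=vistio-api --timeout=120s
kubectl -n default port-forward $(kubectl -n default get pod -l app=vistio-api -o jsonpath='{.items[0].metadata.name}') 9091:9091 &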

About replacing the yum repo

Although the 163 yum repo is added, it does not actually take effect during installation. I suggest changing the provisioning to:

mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.bak
cp /vagrant/yum/*.* /etc/yum.repos.d/
mv /etc/yum.repos.d/CentOS7-Base-163.repo /etc/yum.repos.d/CentOS-Base.repo

etcd can't start

vagrant status
Current machine states:

node1                     running (virtualbox)
node2                     running (virtualbox)
node3                     running (virtualbox)

This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.


[root@node1 ~]# kubectl get cs
NAME                 STATUS      MESSAGE                                                                                           ERROR
etcd-2               Unhealthy   Get http://172.17.8.103:2379/health: dial tcp 172.17.8.103:2379: getsockopt: connection refused
etcd-1               Unhealthy   Get http://172.17.8.102:2379/health: dial tcp 172.17.8.102:2379: getsockopt: connection refused
scheduler            Healthy     ok
controller-manager   Healthy     ok
etcd-0               Healthy     {"health": "true"}



[root@node2 ~]# systemctl status etcd
● etcd.service - Etcd Server
   Loaded: loaded (/usr/lib/systemd/system/etcd.service; disabled; vendor preset: disabled)
   Active: failed (Result: start-limit) since Fri 2018-04-13 00:39:31 CST; 46min ago
  Process: 5622 ExecStart=/usr/bin/etcd --name $ETCD_NAME --data-dir=$ETCD_DATA_DIR --listen-client-urls $ETCD_LISTEN_CLIENT_URLS --advertise-client-urls $ETCD_ADVERTISE_CLIENT_URLS (code=exited, status=217/USER)
 Main PID: 5622 (code=exited, status=217/USER)

Apr 13 00:39:31 node2 systemd[1]: etcd.service: main process exited, code=exited, status=217/USER
Apr 13 00:39:31 node2 systemd[1]: Failed to start Etcd Server.
Apr 13 00:39:31 node2 systemd[1]: Unit etcd.service entered failed state.
Apr 13 00:39:31 node2 systemd[1]: etcd.service failed.
Apr 13 00:39:31 node2 systemd[1]: etcd.service holdoff time over, scheduling restart.
Apr 13 00:39:31 node2 systemd[1]: start request repeated too quickly for etcd.service
Apr 13 00:39:31 node2 systemd[1]: Failed to start Etcd Server.
Apr 13 00:39:31 node2 systemd[1]: Unit etcd.service entered failed state.
Apr 13 00:39:31 node2 systemd[1]: etcd.service failed.
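
status=217/USER means systemd could not switch to the User= configured in the unit, typically because that account does not exist on the node. A sketch of a check and fix, assuming the unit runs as an etcd user:

# See which user the unit expects, create it if missing, then retry.
grep '^User=' /usr/lib/systemd/system/etcd.service
id etcd || sudo useradd -r -s /sbin/nologin etcd
sudo systemctl daemon-reload && sudo systemctl restart etcd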

Failed to start node3 in 'vagrant up' phase

Test environment:
macOS 10.13.4

The error was thrown in the 'vagrant up' phase.
The first error message:

node3: deploy coredns
    node3: unknown shorthand flag: 'f' in -f
    node3:
    node3: Usage:
    node3:   kubectl [flags]
    node3:   kubectl [command]
    node3:
    node3: Available Commands:
    node3:   version                                             Print version of client and server
    node3:   proxy                                               Run a proxy to the Kubernetes API server
    node3:   get [(-o|--output=)json|yaml|...] <resource> [<id>] Display one or many resources
    node3:   describe <resource> <id>                            Show details of a specific resource
    node3:   create -f filename                                  Create a resource by filename or stdin
    node3:   createall [-d directory] [-f filename]              Create all resources specified in a directory, filename or stdin
    node3:   update -f filename                                  Update a resource by filename or stdin
    node3:   delete ([-f filename] | (<resource> <id>))          Delete a resource by filename, stdin or resource and id
    node3:   namespace [<namespace>]                             Set and view the current Kubernetes namespace
    node3:   log <pod> <container>                               Print the logs for a container in a pod
    node3:   help [command]                                      Help about any command
    node3:
    node3:  Available Flags:
    node3:       --api-version="v1beta1": The version of the API to use against the server
    node3:   -a, --auth-path="/root/.kubernetes_auth": Path to the auth info file. If missing, prompt the user. Only used if using https.
    node3:       --certificate-authority="": Path to a certificate file for the certificate authority
    node3:       --client-certificate="": Path to a client certificate for TLS.
    node3:       --client-key="": Path to a client key file for TLS.
    node3:   -h, --help=false: help for kubectl
    node3:       --insecure-skip-tls-verify=false: If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure.
    node3:       --match-server-version=false: Require server version to match client version
    node3:   -n, --namespace="": If present, the namespace scope for this CLI request.
    node3:       --ns-path="/root/.kubernetes_ns": Path to the namespace info file that holds the namespace context to use for CLI requests.
    node3:   -s, --server="": Kubernetes apiserver to connect to
    node3:
    node3: Use "kubectl help [command]" for more information about that command.

More similar errors follow the first one.
The last error message:

The SSH command responded with a non-zero exit status. Vagrant
assumes that this means the command failed. The output for this command
should be in the log above. Please read the output to determine what
went wrong.
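
The help text (v1beta1 API, createall, --auth-path) comes from an extremely old kubectl, so a stale binary has probably ended up on the node's PATH. A quick check, assuming a shell on node3:

# Find out which kubectl binaries are on the PATH and which one runs.
which -a kubectl
kubectl version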

Could not resolve host: mirrors.163.com; Unknown error

Environment

  • OS: MacOS10.13
  • Kubernetes version: 1.11.0
  • VirtualBox version: 5.2.18
  • Vagrant version: 2.2.0

What I did?

vagrant up


Messages

A prompt error occurs when executing vagrant up.
It worked the day before yesterday, but not today.
The following attempts have been made, but it still does not work:
1. Changed the DNS to 8.8.8.8 / 114.114.114.114
2. Cleaned the environment: vagrant destroy / rm -rf, re-downloaded, git cloned into a new directory
3. Set the local /etc/hosts: 59.111.0.251 mirrors.163.com
4. Changed 'mirrors.163.com' to 'mirrors.sohu.com' in CentOS7-Base-163.repo
5. Downloading the files directly from a browser succeeds, for both 163 and sohu

Logs or error messages.

 node2: http://mirrors.163.com/centos/7/os/x86_64/Packages/PyYAML-3.10-11.el7.x86_64.rpm: [Errno 14] curl#6 - "Could not resolve host: mirrors.163.com; Unknown error"

----
 node2: Downloading packages:
    node2: http://mirrors.sohu.com/centos/7/os/x86_64/Packages/PyYAML-3.10-11.el7.x86_64.rpm: [Errno 14] curl#6 - "Could not resolve host: mirrors.sohu.com; Unknown error"
    node2: Trying other mirror.
    node2: http://mirrors.sohu.com/centos/7/extras/x86_64/Packages/atomic-registries-1.22.1-25.git5a342e3.el7.centos.x86_64.rpm: [Errno 14] curl#6 - "Could not resolve host: mirrors.sohu.com; Unknown error"
    node2: Trying other mirror.
    node2: http://mirrors.sohu.com/centos/7/os/x86_64/Packages/checkpolicy-2.5-6.el7.x86_64.rpm: [Errno 14] curl#6 - "Could not resolve host: mirrors.sohu.com; Unknown error"
    node2: Trying other mirror.
    node2: http://mirrors.sohu.com/centos/7/os/x86_64/Packages/device-mapper-event-libs-1.02.146-4.el7.x86_64.rpm: [Errno 14] curl#6 - "Could not resolve host: mirrors.sohu.com; Unknown error"
    node2: Trying other mirror.
    node2: http://mirrors.sohu.com/centos/7/os/x86_64/Packages/device-mapper-persistent-data-0.7.3-3.el7.x86_64.rpm: [Errno 14] curl#6 - "Could not resolve host: mirrors.sohu.com; Unknown error"
    node2: Trying other mirror.
    node2: http://mirrors.sohu.com/centos/7/extras/x86_64/Packages/oci-register-machine-0-6.git2b44233.el7.x86_64.rpm: [Errno 14] curl#6 - "Could not resolve host: mirrors.sohu.com; Unknown error"
    node2: Trying other mirror. 
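
Since the same URLs download fine from a browser, the failure is likely DNS inside the guest. A sketch for narrowing it down; the VirtualBox NAT DNS option is an assumption worth trying, not something this repo configures:

# Check resolution from inside the guest first.
vagrant ssh node2 -c 'cat /etc/resolv.conf; ping -c1 mirrors.163.com'
# If only the guest fails, let VirtualBox's NAT engine use the host resolver
# (run on the host while the VM is powered off).
VBoxManage modifyvm "node2" --natdnshostresolver1 on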

vagrant up hits the config.vm.boot_timeout problem

Linux environment

Bringing machine 'node1' up with 'virtualbox' provider...
Bringing machine 'node2' up with 'virtualbox' provider...
Bringing machine 'node3' up with 'virtualbox' provider...
==> node1: Importing base box 'centos/7'...
==> node1: Matching MAC address for NAT networking...
==> node1: Setting the name of the VM: node1
==> node1: Clearing any previously set network interfaces...
==> node1: Specific bridge 'en0: Wi-Fi (AirPort)' not found. You may be asked to specify
==> node1: which network to bridge to.
==> node1: Preparing network interfaces based on configuration...
    node1: Adapter 1: nat
    node1: Adapter 2: hostonly
    node1: Adapter 3: bridged
==> node1: Forwarding ports...
    node1: 22 (guest) => 2222 (host) (adapter 1)
==> node1: Running 'pre-boot' VM customizations...
==> node1: Booting VM...
==> node1: Waiting for machine to boot. This may take a few minutes...
    node1: SSH address: 127.0.0.1:2222
    node1: SSH username: vagrant
    node1: SSH auth method: private key
Timed out while waiting for the machine to boot. This means that
Vagrant was unable to communicate with the guest machine within
the configured ("config.vm.boot_timeout" value) time period.

If you look above, you should be able to see the error(s) that
Vagrant had when attempting to connect to the machine. These errors
are usually good hints as to what may be wrong.

If you're using a custom box, make sure that networking is properly
working and you're able to connect to the machine. It is a common
problem that networking isn't setup properly in these boxes.
Verify that authentication configurations are also setup properly,
as well.

If the box appears to be booting properly, you may want to increase
the timeout ("config.vm.boot_timeout") value.
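
Two things worth checking on a Linux host before anything else: whether hardware virtualization is available, and whether the box is simply slow. A sketch; the Vagrantfile line is shown as a comment because this repo's Vagrantfile is the file to edit:

# VT-x/AMD-V disabled in the BIOS is a common cause of SSH boot timeouts.
egrep -c '(vmx|svm)' /proc/cpuinfo   # 0 means no hardware virtualization
# If the box is just slow, raise the timeout in the Vagrantfile:
#   config.vm.boot_timeout = 600
vagrant destroy -f node1 && vagrant up node1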

Windows 10, install error

On Windows, the step asking me to enter 1 and press Enter never appeared, nor did the password prompt when mounting NFS, and I hit the node3 error. I tried the suggested solution and found '-bash: kubectl: command not found'. Could the installation have failed because of missing permissions?

node3: Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
    node3: Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
    node3: deploy coredns
    node3: /tmp/vagrant-shell: line 198: kubectl: command not found
    node3: /tmp/vagrant-shell: ./dns-deploy.sh: /bin/bash^M: bad interpreter: No such file or directory
    node3: /home/vagrant
    node3: deploy kubernetes dashboard
    node3: /tmp/vagrant-shell: line 202: kubectl: command not found
    node3: create admin role token
    node3: /tmp/vagrant-shell: line 204: kubectl: command not found
    node3: the admin role token is:
    node3: /tmp/vagrant-shell: line 206: kubectl: command not found
    node3: /tmp/vagrant-shell: line 206: kubectl: command not found
    node3: login to dashboard with the above token
    node3: /tmp/vagrant-shell: line 208: kubectl: command not found
    node3: https://172.17.8.101:
    node3: install traefik ingress controller
    node3: /tmp/vagrant-shell: line 210: kubectl: command not found
    node3: Configure Kubectl to autocomplete
    node3: /tmp/vagrant-shell: line 214: kubectl: command not found
PS E:\github_doc\kubernetes-vagrant-centos-cluster> Sxunder
Sxunder : The term 'Sxunder' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
At line:1 char:1
+ Sxunder
+ ~~~~~~~
    + CategoryInfo          : ObjectNotFound: (Sxunder:String) [], CommandNotFoundException
    + FullyQualifiedErrorId : CommandNotFoundException

PS E:\github_doc\kubernetes-vagrant-centos-cluster>
PS E:\github_doc\kubernetes-vagrant-centos-cluster>
PS E:\github_doc\kubernetes-vagrant-centos-cluster> vagrant ssh node3
[vagrant@node3 ~]$ sudo -i
-bash: kubectl: command not found
[root@node3 ~]# cd /vagrant/addon/dns
[root@node3 dns]# ls
coredns.yaml.sed  dns-deploy.sh
[root@node3 dns]# yum -y install dos2unix
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
Package dos2unix-6.0.3-7.el7.x86_64 already installed and latest version
Nothing to do
[root@node3 dns]# dos2unix dns-deploy.sh
dos2unix: converting file dns-deploy.sh to Unix format ...
[root@node3 dns]# ./dns-deploy.sh -r 10.254.0.0/16 -i 10.254.0.2 |kubectl apply -f -
-bash: kubectl: command not found
[root@node3 dns]# find / -name kubectl
/vagrant/kubernetes1.13/cluster/juju/layers/kubernetes-master/debug-scripts/kubectl
/vagrant/kubernetes1.13/cluster/juju/layers/kubernetes-worker/debug-scripts/kubectl
/vagrant/kubernetes1.13/cmd/kubectl
/vagrant/kubernetes1.13/pkg/kubectl
/vagrant/kubernetes1.13/test/e2e/kubectl
/vagrant/kubernetes1.13/test/e2e/testing-manifests/kubectl
/vagrant/kubernetes1.13/test/fixtures/pkg/kubectl
/vagrant/kubernetes1.13/translations/kubectl
[root@node3 dns]# vim /etc/profile
[root@node3 dns]# vim /etc/profile
[root@node3 dns]#
[root@node3 dns]# ./dns-deploy.sh -r 10.254.0.0/16 -i 10.254.0.2 |kubectl apply -f -
-bash: kubectl: command not found
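
The find results are Kubernetes source directories, not a kubectl binary, which suggests the provisioning step that installs the binaries never completed; on Windows the usual culprit is CRLF line endings in the shell scripts (note the earlier "/bin/bash^M: bad interpreter"). A possible recovery, assuming a fresh clone is acceptable:

# Re-clone with LF endings preserved, then rebuild the cluster from scratch.
git config --global core.autocrlf false
git clone https://github.com/rootsongjc/kubernetes-vagrant-centos-cluster.git
cd kubernetes-vagrant-centos-cluster
vagrant up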

All pods disappeared

Environment

  • OS: win10 64
  • Kubernetes version: 1.13.0
  • VirtualBox version: 6.0.4
  • Vagrant version: 2.2.3

What I did?

After vagrant halt and then vagrant up again, all pods were lost, especially the ones under kube-system.
How do I bring these pods back?

(screenshot)

(screenshot)
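
A possible first step after vagrant up, assuming the pods are gone only because the cluster services did not come back after the halt (the service names are the ones this repo installs, per the logs in other issues):

# Restart the control-plane and node services on node1, then re-check the pods.
vagrant ssh node1 -c 'sudo systemctl restart etcd kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy'
vagrant ssh node1 -c 'kubectl get pods --all-namespaces'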

Not able to connect to the server

[root@node1 ~]# kubectl get nodes
The connection to the server 172.17.8.101:6443 was refused - did you specify the right host or port?
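
The refusal on 172.17.8.101:6443 points at the apiserver itself being down. A sketch for digging into why, run on node1:

# Inspect the apiserver unit and its most recent logs.
sudo systemctl status kube-apiserver
sudo journalctl -u kube-apiserver -n 50 --no-pager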

Namespace "istio-system" not found while deploying Istio on Kubernetes

Error from server (NotFound): error when creating "addon/istio/ingress.yaml": namespaces "istio-system" not found
The error is as follows:

➜  kubernetes-vagrant-centos-cluster git:(master) ✗   kubectl apply -f addon/istio/
namespace "istio-system" created
configmap "istio-galley-configuration" created
configmap "istio-grafana-custom-resources" created
configmap "istio-statsd-prom-bridge" created
configmap "prometheus" created
configmap "istio-security-custom-resources" created
configmap "istio" created
configmap "istio-sidecar-injector" created
serviceaccount "istio-galley-service-account" created
serviceaccount "istio-egressgateway-service-account" created
serviceaccount "istio-ingressgateway-service-account" created
serviceaccount "istio-grafana-post-install-account" created
clusterrole.rbac.authorization.k8s.io "istio-grafana-post-install-istio-system" created
clusterrolebinding.rbac.authorization.k8s.io "istio-grafana-post-install-role-binding-istio-system" created
job.batch "istio-grafana-post-install" created
serviceaccount "istio-mixer-service-account" created
serviceaccount "istio-pilot-service-account" created
serviceaccount "prometheus" created
serviceaccount "istio-cleanup-secrets-service-account" created
clusterrole.rbac.authorization.k8s.io "istio-cleanup-secrets-istio-system" created
clusterrolebinding.rbac.authorization.k8s.io "istio-cleanup-secrets-istio-system" created
job.batch "istio-cleanup-secrets" created
serviceaccount "istio-citadel-service-account" created
serviceaccount "istio-sidecar-injector-service-account" created
customresourcedefinition.apiextensions.k8s.io "virtualservices.networking.istio.io" created
customresourcedefinition.apiextensions.k8s.io "destinationrules.networking.istio.io" created
customresourcedefinition.apiextensions.k8s.io "serviceentries.networking.istio.io" created
customresourcedefinition.apiextensions.k8s.io "gateways.networking.istio.io" created
customresourcedefinition.apiextensions.k8s.io "envoyfilters.networking.istio.io" created
customresourcedefinition.apiextensions.k8s.io "httpapispecbindings.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "httpapispecs.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "quotaspecbindings.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "quotaspecs.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "rules.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "attributemanifests.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "bypasses.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "circonuses.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "deniers.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "fluentds.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "kubernetesenvs.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "listcheckers.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "memquotas.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "noops.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "opas.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "prometheuses.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "rbacs.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "redisquotas.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "servicecontrols.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "signalfxs.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "solarwindses.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "stackdrivers.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "statsds.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "stdios.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "apikeys.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "authorizations.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "checknothings.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "kuberneteses.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "listentries.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "logentries.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "edges.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "metrics.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "quotas.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "reportnothings.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "servicecontrolreports.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "tracespans.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "rbacconfigs.rbac.istio.io" created
customresourcedefinition.apiextensions.k8s.io "serviceroles.rbac.istio.io" created
customresourcedefinition.apiextensions.k8s.io "servicerolebindings.rbac.istio.io" created
customresourcedefinition.apiextensions.k8s.io "adapters.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "instances.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "templates.config.istio.io" created
customresourcedefinition.apiextensions.k8s.io "handlers.config.istio.io" created
clusterrole.rbac.authorization.k8s.io "istio-galley-istio-system" created
clusterrole.rbac.authorization.k8s.io "istio-egressgateway-istio-system" created
clusterrole.rbac.authorization.k8s.io "istio-ingressgateway-istio-system" created
clusterrole.rbac.authorization.k8s.io "istio-mixer-istio-system" created
clusterrole.rbac.authorization.k8s.io "istio-pilot-istio-system" created
clusterrole.rbac.authorization.k8s.io "prometheus-istio-system" created
clusterrole.rbac.authorization.k8s.io "istio-citadel-istio-system" created
clusterrole.rbac.authorization.k8s.io "istio-sidecar-injector-istio-system" created
clusterrolebinding.rbac.authorization.k8s.io "istio-galley-admin-role-binding-istio-system" created
clusterrolebinding.rbac.authorization.k8s.io "istio-egressgateway-istio-system" created
clusterrolebinding.rbac.authorization.k8s.io "istio-ingressgateway-istio-system" created
clusterrolebinding.rbac.authorization.k8s.io "istio-mixer-admin-role-binding-istio-system" created
clusterrolebinding.rbac.authorization.k8s.io "istio-pilot-istio-system" created
clusterrolebinding.rbac.authorization.k8s.io "prometheus-istio-system" created
clusterrolebinding.rbac.authorization.k8s.io "istio-citadel-istio-system" created
clusterrolebinding.rbac.authorization.k8s.io "istio-sidecar-injector-admin-role-binding-istio-system" created
service "istio-galley" created
service "istio-egressgateway" created
service "istio-ingressgateway" created
service "grafana" created
service "istio-policy" created
service "istio-telemetry" created
service "istio-statsd-prom-bridge" created
deployment.extensions "istio-statsd-prom-bridge" created
service "istio-pilot" created
service "prometheus" created
service "istio-citadel" created
service "servicegraph" created
service "istio-sidecar-injector" created
deployment.extensions "istio-galley" created
deployment.extensions "istio-egressgateway" created
deployment.extensions "istio-ingressgateway" created
deployment.extensions "grafana" created
deployment.extensions "istio-policy" created
deployment.extensions "istio-telemetry" created
deployment.extensions "istio-pilot" created
deployment.extensions "prometheus" created
deployment.extensions "istio-citadel" created
deployment.extensions "servicegraph" created
deployment.extensions "istio-sidecar-injector" created
deployment.extensions "istio-tracing" created
gateway.networking.istio.io "istio-autogenerated-k8s-ingress" created
horizontalpodautoscaler.autoscaling "istio-egressgateway" created
horizontalpodautoscaler.autoscaling "istio-ingressgateway" created
horizontalpodautoscaler.autoscaling "istio-policy" created
horizontalpodautoscaler.autoscaling "istio-telemetry" created
horizontalpodautoscaler.autoscaling "istio-pilot" created
service "jaeger-query" created
service "jaeger-collector" created
service "jaeger-agent" created
service "zipkin" created
service "tracing" created
mutatingwebhookconfiguration.admissionregistration.k8s.io "istio-sidecar-injector" created
attributemanifest.config.istio.io "istioproxy" created
attributemanifest.config.istio.io "kubernetes" created
stdio.config.istio.io "handler" created
logentry.config.istio.io "accesslog" created
logentry.config.istio.io "tcpaccesslog" created
rule.config.istio.io "stdio" created
rule.config.istio.io "stdiotcp" created
metric.config.istio.io "requestcount" created
metric.config.istio.io "requestduration" created
metric.config.istio.io "requestsize" created
metric.config.istio.io "responsesize" created
metric.config.istio.io "tcpbytesent" created
metric.config.istio.io "tcpbytereceived" created
prometheus.config.istio.io "handler" created
rule.config.istio.io "promhttp" created
rule.config.istio.io "promtcp" created
kubernetesenv.config.istio.io "handler" created
rule.config.istio.io "kubeattrgenrulerule" created
rule.config.istio.io "tcpkubeattrgenrulerule" created
kubernetes.config.istio.io "attributes" created
destinationrule.networking.istio.io "istio-policy" created
destinationrule.networking.istio.io "istio-telemetry" created
Error from server (NotFound): error when creating "addon/istio/ingress.yaml": namespaces "istio-system" not found
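
The namespace is created at the top of the apply, but ingress.yaml can be processed before the namespace actually exists in the API. Re-applying once the namespace is there usually resolves it; a minimal sketch:

# Confirm the namespace now exists, then re-apply only the file that failed.
kubectl get namespace istio-system
kubectl apply -f addon/istio/ingress.yaml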

Unable to open https://172.17.8.101:8443/

Your connection is not private.
Attackers might be trying to steal your information from 172.17.8.101 (for example: passwords, messages, or credit card details).
NET::ERR_CERT_INVALID

OS: Windows 10
Browser: Chrome 69.0.3497.100
Kubernetes version: 1.11.3

I converted ca.pem to ca.crt and imported it, but I still get the same warning. What is the right way to do this?
P.S. Although the docs say Windows is not supported, the installation actually completed without problems.
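
NET::ERR_CERT_INVALID is expected with a self-signed dashboard certificate unless the signing CA is imported as a trusted root. A sketch for inspecting what the dashboard actually presents, assuming openssl on the host:

# Show the subject and issuer of the certificate served on 8443.
openssl s_client -connect 172.17.8.101:8443 -showcerts </dev/null | openssl x509 -noout -subject -issuer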

vagrant up failed while configuring node3 with Win10

Environment

  • OS: Win10
  • Kubernetes: 1.11.0
  • Vagrant: 2.1.5
  • VirtualBox: 5.2.18

What I did?

After running "vagrant up" I got these messages:

 node3: configure node3
 node3: Failed to start kubelet.service: Unit is not loaded properly: Bad message.
 node3: See system logs and 'systemctl status kubelet.service' for details.
 node3: Failed to execute operation: Bad message
 node3: Failed to start kube-proxy.service: Unit is not loaded properly: Bad message.
 node3: See system logs and 'systemctl status kube-proxy.service' for details.
 node3: Failed to execute operation: Bad message
 node3: deploy coredns
 node3: /tmp/vagrant-shell: ./dns-deploy.sh: /bin/bash^M: bad interpreter: No such file or directory

Applying the Win10 solution then produced these messages:

unable to recognize "STDIN": Get https://172.17.8.101:6443/api?timeout=32s: dial tcp 172.17.8.101:6443: connect: connection refused
unable to recognize "STDIN": Get https://172.17.8.101:6443/api?timeout=32s: dial tcp 172.17.8.101:6443: connect: connection refused
unable to recognize "STDIN": Get https://172.17.8.101:6443/api?timeout=32s: dial tcp 172.17.8.101:6443: connect: connection refused
unable to recognize "STDIN": Get https://172.17.8.101:6443/api?timeout=32s: dial tcp 172.17.8.101:6443: connect: connection refused
unable to recognize "STDIN": Get https://172.17.8.101:6443/api?timeout=32s: dial tcp 172.17.8.101:6443: connect: connection refused
unable to recognize "STDIN": Get https://172.17.8.101:6443/api?timeout=32s: dial tcp 172.17.8.101:6443: connect: connection refused
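
An in-place alternative to re-cloning with LF endings is converting the unit files inside the guest; a sketch assuming dos2unix is installable via yum:

# Strip the CRLF endings from the kubelet/kube-proxy units, then reload.
sudo yum -y install dos2unix
sudo dos2unix /usr/lib/systemd/system/kubelet.service /usr/lib/systemd/system/kube-proxy.service
sudo systemctl daemon-reload
sudo systemctl start kubelet kube-proxy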

/proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory

    node2: Complete!
    node2: sync time
    node2: Created symlink from /etc/systemd/system/multi-user.target.wants/ntpd.service to /usr/lib/systemd/system/ntpd.service.
    node2: disable selinux
    node2: enable iptable kernel parameter
    node2: net.ipv4.ip_forward = 1
    node2: sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory
    node2: sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
    node2: set host name resolution

Problem feedback

  1. I have tested this: vistio can only be installed on node3, because port 8080 is already taken on node1 and node2.
  2. After installing vistio per the steps, the access address given is http://localhost:8080. Where is this supposed to be opened? My understanding is on node3, but node3 is a VM inside VirtualBox; how do I reach node3 directly through the VirtualBox client? Where was your screenshot of http://localhost:8080/istio-mesh taken? (See the sketch after this list.)
  3. Could access be switched to NodePort or Ingress instead?
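
For question 2, http://localhost:8080 is reachable on whichever machine runs the port-forward, i.e. the host where kubectl is configured, not node3. The command quoted in the earlier vistio issue:

# Forward the vistio-web pod's port 8080 to the host, then open
# http://localhost:8080/istio-mesh in the host browser.
kubectl -n default port-forward $(kubectl -n default get pod -l app=vistio-web -o jsonpath='{.items[0].metadata.name}') 8080:8080 &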

Pod IP cannot be pinged from the master node

Environment

  • OS: windows 10 64
  • Kubernetes version: 1.13.0
  • VirtualBox version: 6.0.4
  • Vagrant version: 2.2.3

What I did?

  1. kubectl get pod -o wide

     NAME                  READY   STATUS    RESTARTS   AGE   IP
     nginx-deployment-**   1/1     Running   0          9s    172.18.0.2

  2. ping 172.18.0.2

     failed
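
One thing worth checking: 172.18.0.2 looks like a Docker default-bridge address rather than this repo's flannel range (172.33.0.0/16 in the provisioning logs), which would mean Docker on that node is not using the flannel subnet. A diagnostic sketch under that assumption; the subnet file path is flannel's usual default:

# Compare the flannel-assigned subnet with what docker0 is actually using.
vagrant ssh node1 -c 'cat /run/flannel/subnet.env; ip addr show docker0'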

Heapster not accessible

The Service is deployed, but Grafana is not accessible at 172.17.8.102 (neither Grafana nor the service graph works on the IP mentioned).
(screenshot)
Is there something amiss?

Segmentation fault: 11

[screenshot attached]

Following Jimmy Song's steps:

wget https://storage.googleapis.com/kubernetes-release/release/v1.11.0/kubernetes-client-darwin-amd64.tar.gz

I installed kubectl, and running it reported: Segmentation fault: 11.
It may be a problem with installing only the client tarball; I downloaded a full one instead and it worked:

 curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/darwin/amd64/kubectl

@rootsongjc I'm not sure whether this is a problem with my environment or whether this step could indeed be improved.

CRLF line separators in systemd unit files when using Vagrant on Windows

Environment

  • OS: Windows 8.1
  • Kubernetes version: v1.11
  • VirtualBox version: v5.0
  • Vagrant version: v2.2.0

What I did?

I cloned the repo to my Windows 8.1 laptop. Followed the instructions in README.md to install a 3-node kubernetes cluster using Vagrant/Virtualbox. None of the nodes bootstrapped successfully.

Messages

Upon investigating the issue I found the reason. The systemd unit files for the master and worker services have CRLF line separators, whereas CentOS is a Linux OS and expects LF as the line separator. I used the dos2unix utility to replace CRLF with LF in all the unit files, then started the services manually. After that the nodes bootstrapped successfully and the Kubernetes cluster was up and running.
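A minimal sketch of that workaround, assuming the repo was cloned on Windows with Git's autocrlf conversion enabled; the unit-file path is an assumption based on a standard CentOS 7 layout:

    # on the Windows host, keep LF endings on future checkouts
    git config core.autocrlf input
    # inside each VM, convert the already-installed unit files and reload systemd
    sudo yum install -y dos2unix
    sudo find /usr/lib/systemd/system -name 'kube*.service' -exec dos2unix {} +
    sudo systemctl daemon-reload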

Update to Istio 1.0

The image in grafana.yaml is still docker.io/istio/grafana:0.8.0.

The images in istio-demo.yaml are all gcr.io images, e.g. gcr.io/istio-release/proxyv2:1.0.0 and so on.

node1~3 configuration errors, and confusion about Docker version 1.13.1

When I start Vagrant with the command "vagrant up", I get the error below. Please help me.
I also checked the Docker version; it reports Docker version 1.13.1, build 8633870/1.13.1.
However, current Docker releases look like Docker version 17.10.0-ce, build f4ffd25, or Docker version 18.03-ce. I'm confused: can 1.13.1 work correctly with Kubernetes 1.11?

Sincere thanks.

node3: configure node3
    node3: Failed to execute operation: Bad message
    node3: Failed to start kubelet.service: Unit is not loaded properly: Bad message.
    node3: See system logs and 'systemctl status kubelet.service' for details.
    node3: Failed to execute operation: Bad message
    node3: Failed to start kube-proxy.service: Unit is not loaded properly: Bad message.
    node3: See system logs and 'systemctl status kube-proxy.service' for details.
    node3: deploy coredns
    node3: /tmp/vagrant-shell: ./dns-deploy.sh: /bin/bash^M: bad interpreter: No such file or directory
    node3: error: no objects passed to apply
    node3: /home/vagrant
    node3: deploy kubernetes dashboard
    node3: unable to recognize "/vagrant/addon/dashboard/kubernetes-dashboard.yaml": Get https://172.17.8.101:6443/api?timeout=32s: dial tcp 172.17.8.101:6443: connect: connection refused
    node3: unable to recognize "/vagrant/addon/dashboard/kubernetes-dashboard.yaml": Get https://172.17.8.101:6443/api?timeout=32s: dial tcp 172.17.8.101:6443: connect: connection refused
    node3: unable to recognize "/vagrant/addon/dashboard/kubernetes-dashboard.yaml": Get https://172.17.8.101:6443/api?timeout=32s: dial tcp 172.17.8.101:6443: connect: connection refused
    node3: unable to recognize "/vagrant/addon/dashboard/kubernetes-dashboard.yaml": Get https://172.17.8.101:6443/api?timeout=32s: dial tcp 172.17.8.101:6443: connect: connection refused
    node3: unable to recognize "/vagrant/addon/dashboard/kubernetes-dashboard.yaml": Get https://172.17.8.101:6443/api?timeout=32s: dial tcp 172.17.8.101:6443: connect: connection refused
    node3: unable to recognize "/vagrant/addon/dashboard/kubernetes-dashboard.yaml": Get https://172.17.8.101:6443/api?timeout=32s: dial tcp 172.17.8.101:6443: connect: connection refused
    node3: create admin role token
    node3: unable to recognize "/vagrant/yaml/admin-role.yaml": Get https://172.17.8.101:6443/api?timeout=32s: dial tcp 172.17.8.101:6443: connect: connection refused
    node3: unable to recognize "/vagrant/yaml/admin-role.yaml": Get https://172.17.8.101:6443/api?timeout=32s: dial tcp 172.17.8.101:6443: connect: connection refused
    node3: the admin role token is:
    node3: The connection to the server 172.17.8.101:6443 was refused - did you specify the right host or port?
    node3: The connection to the server 172.17.8.101:6443 was refused - did you specify the right host or port?
    node3: login to dashboard with the above token
    node3: The connection to the server 172.17.8.101:6443 was refused - did you specify the right host or port?
    node3: https://172.17.8.101:
    node3: install traefik ingress controller
    node3: unable to recognize "/vagrant/addon/traefik-ingress/ingress.yaml": Get https://172.17.8.101:6443/api?timeout=32s: dial tcp 172.17.8.101:6443: connect: connection refused
    node3: unable to recognize "/vagrant/addon/traefik-ingress/traefik-rbac.yaml": Get https://172.17.8.101:6443/api?timeout=32s: dial tcp 172.17.8.101:6443: connect: connection refused
    node3: unable to recognize "/vagrant/addon/traefik-ingress/traefik.yaml": Get https://172.17.8.101:6443/api?timeout=32s: dial tcp 172.17.8.101:6443: connect: connection refused
    node3: unable to recognize "/vagrant/addon/traefik-ingress/traefik.yaml": Get https://172.17.8.101:6443/api?timeout=32s: dial tcp 172.17.8.101:6443: connect: connection refused
    node3: unable to recognize "/vagrant/addon/traefik-ingress/traefik.yaml": Get https://172.17.8.101:6443/api?timeout=32s: dial tcp 172.17.8.101:6443: connect: connection refused
The SSH command responded with a non-zero exit status. Vagrant
assumes that this means the command failed. The output for this command
should be in the log above. Please read the output to determine what
went wrong.

node1 machine bug

Windows 10 with Hyper-V; the system is a freshly installed Ubuntu 18.04 with all software updated. Everything else was installed by following the tutorial.

==> node1: An error occurred. The error will be shown after all tasks complete.
==> node3: Removing domain...
==> node3: An error occurred. The error will be shown after all tasks complete.
==> node2: Removing domain...
==> node2: An error occurred. The error will be shown after all tasks complete.
An error occurred while executing multiple actions in parallel.
Any errors that occurred are shown below.

An unexpected error occurred when executing the action on the
'node1' machine. Please report this as a bug:

The specified wait_for timeout (2 seconds) was exceeded

/usr/lib/ruby/vendor_ruby/fog/core/wait_for.rb:9:in `block in wait_for'
/usr/lib/ruby/vendor_ruby/fog/core/wait_for.rb:6:in `loop'
/usr/lib/ruby/vendor_ruby/fog/core/wait_for.rb:6:in `wait_for'
/usr/lib/ruby/vendor_ruby/fog/core/model.rb:74:in `wait_for'
/usr/share/rubygems-integration/all/gems/vagrant-libvirt-0.0.43/lib/vagrant-libvirt/action/wait_till_up.rb:42:in `block (2 levels) in call'
/usr/share/rubygems-integration/all/gems/vagrant-2.0.2/lib/vagrant/util/retryable.rb:17:in `retryable'
/usr/share/rubygems-integration/all/gems/vagrant-libvirt-0.0.43/lib/vagrant-libvirt/action/wait_till_up.rb:37:in `block in call'
/usr/share/rubygems-integration/all/gems/vagrant-libvirt-0.0.43/lib/vagrant-libvirt/util/timer.rb:9:in `time'
/usr/share/rubygems-integration/all/gems/vagrant-libvirt-0.0.43/lib/vagrant-libvirt/action/wait_till_up.rb:34:in `call'
/usr/share/rubygems-integration/all/gems/vagrant-2.0.2/lib/vagrant/action/warden.rb:34:in `call'
/usr/share/rubygems-integration/all/gems/vagrant-libvirt-0.0.43/lib/vagrant-libvirt/action/start_domain.rb:302:in `call'
/usr/share/rubygems-integration/all/gems/vagrant-2.0.2/lib/vagrant/action/warden.rb:34:in `call'
/usr/share/rubygems-integration/all/gems/vagrant-libvirt-0.0.43/lib/vagrant-libvirt/action/set_boot_order.rb:78:in `call'
/usr/share/rubygems-integration/all/gems/vagrant-2.0.2/lib/vagrant/action/warden.rb:34:in `call'
/usr/share/rubygems-integration/all/gems/vagrant-libvirt-0.0.43/lib/vagrant-libvirt/action/create_network_interfaces.rb:182:in `call'
/usr/share/rubygems-integration/all/gems/vagrant-2.0.2/lib/vagrant/action/warden.rb:34:in `call'
/usr/share/rubygems-integration/all/gems/vagrant-libvirt-0.0.43/lib/vagrant-libvirt/action/create_networks.rb:84:in `call'
/usr/share/rubygems-integration/all/gems/vagrant-2.0.2/lib/vagrant/action/warden.rb:34:in `call'
/usr/share/rubygems-integration/all/gems/vagrant-libvirt-0.0.43/lib/vagrant-libvirt/action/share_folders.rb:20:in `call'
/usr/share/rubygems-integration/all/gems/vagrant-2.0.2/lib/vagrant/action/warden.rb:34:in `call'
/usr/share/rubygems-integration/all/gems/vagrant-libvirt-0.0.43/lib/vagrant-libvirt/action/prepare_nfs_settings.rb:18:in `call'
/usr/share/rubygems-integration/all/gems/vagrant-2.0.2/lib/vagrant/action/warden.rb:34:in `call'
/usr/share/rubygems-integration/all/gems/vagrant-2.0.2/lib/vagrant/action/builtin/synced_folders.rb:87:in `call'
/usr/share/rubygems-integration/all/gems/vagrant-2.0.2/lib/vagrant/action/warden.rb:34:in `call'
/usr/share/rubygems-integration/all/gems/vagrant-2.0.2/lib/vagrant/action/builtin/synced_folder_cleanup.rb:28:in `call'
/usr/share/rubygems-integration/all/gems/vagrant-2.0.2/lib/vagrant/action/warden.rb:34:in `call'
/usr/share/rubygems-integration/all/gems/vagrant-2.0.2/plugins/synced_folders/nfs/action_cleanup.rb:25:in `call'
/usr/share/rubygems-integration/all/gems/vagrant-2.0.2/lib/vagrant/action/warden.rb:34:in `call'
/usr/share/rubygems-integration/all/gems/vagrant-libvirt-0.0.43/lib/vagrant-libvirt/action/prepare_nfs_valid_ids.rb:12:in `call'
/usr/share/rubygems-integration/all/gems/vagrant-2.0.2/lib/vagrant/action/warden.rb:34:in `call'
/usr/share/rubygems-integration/all/gems/vagrant-2.0.2/lib/vagrant/action/builtin/provision.rb:80:in `call'
/usr/share/rubygems-integration/all/gems/vagrant-2.0.2/lib/vagrant/action/warden.rb:34:in `call'
/usr/share/rubygems-integration/all/gems/vagrant-libvirt-0.0.43/lib/vagrant-libvirt/action/create_domain.rb:317:in `call'
/usr/share/rubygems-integration/all/gems/vagrant-2.0.2/lib/vagrant/action/warden.rb:34:in `call'
/usr/share/rubygems-integration/all/gems/vagrant-libvirt-0.0.43/lib/vagrant-libvirt/action/create_domain_volume.rb:82:in `call'
/usr/share/rubygems-integration/all/gems/vagrant-2.0.2/lib/vagrant/action/warden.rb:34:in `call'
/usr/share/rubygems-integration/all/gems/vagrant-libvirt-0.0.43/lib/vagrant-libvirt/action/handle_box_image.rb:113:in `call'
/usr/share/rubygems-integration/all/gems/vagrant-2.0.2/lib/vagrant/action/warden.rb:34:in `call'
/usr/share/rubygems-integration/all/gems/vagrant-2.0.2/lib/vagrant/action/builtin/handle_box.rb:56:in `call'
/usr/share/rubygems-integration/all/gems/vagrant-2.0.2/lib/vagrant/action/warden.rb:34:in `call'
/usr/share/rubygems-integration/all/gems/vagrant-libvirt-0.0.43/lib/vagrant-libvirt/action/handle_storage_pool.rb:52:in `call'
/usr/share/rubygems-integration/all/gems/vagrant-2.0.2/lib/vagrant/action/warden.rb:34:in `call'
/usr/share/rubygems-integration/all/gems/vagrant-libvirt-0.0.43/lib/vagrant-libvirt/action/set_name_of_domain.rb:35:in `call'
/usr/share/rubygems-integration/all/gems/vagrant-2.0.2/lib/vagrant/action/warden.rb:34:in `call'
/usr/share/rubygems-integration/all/gems/vagrant-2.0.2/lib/vagrant/action/warden.rb:95:in `block in finalize_action'
/usr/share/rubygems-integration/all/gems/vagrant-2.0.2/lib/vagrant/action/warden.rb:34:in `call'
/usr/share/rubygems-integration/all/gems/vagrant-2.0.2/lib/vagrant/action/builder.rb:116:in `call'
/usr/share/rubygems-integration/all/gems/vagrant-2.0.2/lib/vagrant/action/runner.rb:66:in `block in run'
/usr/share/rubygems-integration/all/gems/vagrant-2.0.2/lib/vagrant/util/busy.rb:19:in `busy'
/usr/share/rubygems-integration/all/gems/vagrant-2.0.2/lib/vagrant/action/runner.rb:66:in `run'
/usr/share/rubygems-integration/all/gems/vagrant-2.0.2/lib/vagrant/action/builtin/call.rb:53:in `call'
/usr/share/rubygems-integration/all/gems/vagrant-2.0.2/lib/vagrant/action/warden.rb:34:in `call'
/usr/share/rubygems-integration/all/gems/vagrant-2.0.2/lib/vagrant/action/builtin/box_check_outdated.rb:23:in `call'
/usr/share/rubygems-integration/all/gems/vagrant-2.0.2/lib/vagrant/action/warden.rb:34:in `call'
/usr/share/rubygems-integration/all/gems/vagrant-2.0.2/lib/vagrant/action/builtin/config_validate.rb:25:in `call'
/usr/share/rubygems-integration/all/gems/vagrant-2.0.2/lib/vagrant/action/warden.rb:34:in `call'
/usr/share/rubygems-integration/all/gems/vagrant-2.0.2/lib/vagrant/action/builder.rb:116:in `call'
/usr/share/rubygems-integration/all/gems/vagrant-2.0.2/lib/vagrant/action/runner.rb:66:in `block in run'
/usr/share/rubygems-integration/all/gems/vagrant-2.0.2/lib/vagrant/util/busy.rb:19:in `busy'
/usr/share/rubygems-integration/all/gems/vagrant-2.0.2/lib/vagrant/action/runner.rb:66:in `run'
/usr/share/rubygems-integration/all/gems/vagrant-2.0.2/lib/vagrant/machine.rb:227:in `action_raw'
/usr/share/rubygems-integration/all/gems/vagrant-2.0.2/lib/vagrant/machine.rb:202:in `block in action'
/usr/share/rubygems-integration/all/gems/vagrant-2.0.2/lib/vagrant/environment.rb:592:in `lock'
/usr/share/rubygems-integration/all/gems/vagrant-2.0.2/lib/vagrant/machine.rb:188:in `call'
/usr/share/rubygems-integration/all/gems/vagrant-2.0.2/lib/vagrant/machine.rb:188:in `action'
/usr/share/rubygems-integration/all/gems/vagrant-2.0.2/lib/vagrant/batch_action.rb:82:in `block (2 levels) in run'

An unexpected error occurred when executing the action on the
'node2' machine. Please report this as a bug:

The specified wait_for timeout (2 seconds) was exceeded

(stack trace identical to the node1 trace above; omitted)

An unexpected error occurred when executing the action on the
'node3' machine. Please report this as a bug:

The specified wait_for timeout (2 seconds) was exceeded

(stack trace identical to the node1 trace above; omitted)
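Note that the traces come from the vagrant-libvirt provider, while this project targets VirtualBox. A minimal sketch of one thing to try, under that assumption:

    # force the VirtualBox provider instead of libvirt
    vagrant destroy -f
    vagrant up --provider=virtualbox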

Dashboard cannot start up

2018/06/27 15:03:48 Auto-generating certificates
2018/06/27 15:03:48 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2018/06/27 15:03:48 [ECDSAManager] Failed to open dashboard.crt for writing: open /certs/dashboard.crt: read-only file system

The Kubernetes version is 1.10.5.

Peer's Certificate has expired when accessing the dashboard UI

Hello,

When accessing the dashboard UI, the peer's certificate has expired.
Would you please help me? Thanks a lot in advance.
Your prompt reply is highly appreciated.

[root@node1 ~]# kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
node1     Ready     <none>    34m       v1.11.0
node2     Ready     <none>    30m       v1.11.0
node3     Ready     <none>    27m       v1.11.0

[root@node1 ~]# curl https://172.17.8.101:8443/
curl: (60) Peer's Certificate has expired.
More details here: http://curl.haxx.se/docs/sslcerts.html

curl performs SSL certificate verification by default, using a "bundle"
 of Certificate Authority (CA) public keys (CA certs). If the default
 bundle file isn't adequate, you can specify an alternate file
 using the --cacert option.
If this HTTPS server uses a certificate signed by a CA represented in
 the bundle, the certificate verification probably failed due to a
 problem with the certificate (it might be expired, or the name might
 not match the domain name in the URL).
If you'd like to turn off curl's verification of the certificate, use
 the -k (or --insecure) option.

[root@node1 ~]# curl https://172.17.8.101:8443/ -k
 <!doctype html> <html ng-app="kubernetesDashboard"> <head> <meta charset="utf-8"> <title ng-controller="kdTitle as $ctrl" ng-bind="$ctrl.title()"></title> <link rel="icon" type="image/png" href="assets/images/kubernetes-logo.png"> <meta name="viewport" content="width=device-width"> <link rel="stylesheet" href="static/vendor.93db0a0d.css"> <link rel="stylesheet" href="static/app.93e259f7.css"> </head> <body ng-controller="kdMain as $ctrl"> <!--[if lt IE 10]>
      <p class="browsehappy">You are using an <strong>outdated</strong> browser.
      Please <a href="http://browsehappy.com/">upgrade your browser</a> to improve your
      experience.</p>
    <![endif]--> <kd-login layout="column" layout-fill ng-if="$ctrl.isLoginState()"> </kd-login> <kd-chrome layout="column" layout-fill ng-if="!$ctrl.isLoginState()"> </kd-chrome> <script src="static/vendor.bd425c26.js"></script> <script src="api/appConfig.json"></script> <script src="static/app.b5ad51ac.js"></script> </body> </html>
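An "expired" certificate on a freshly provisioned cluster is often clock skew inside the VM rather than a genuinely expired certificate. A minimal diagnostic sketch, run inside node1:

    # compare the VM clock with the certificate's validity window
    date
    echo | openssl s_client -connect 172.17.8.101:8443 2>/dev/null | openssl x509 -noout -dates
    # if the clock is off, re-sync it (the provisioning scripts enable ntpd)
    sudo systemctl restart ntpd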

vagrant ssh node1 always permission denied

c:\vagrantplay\kubernetes-vagrant-centos-cluster>vagrant ssh node1
[email protected]: Permission denied (publickey,gssapi-keyex,gssapi-with-mic).

c:\vagrantplay\kubernetes-vagrant-centos-cluster>vagrant status
Current machine states:

node1                     running (virtualbox)
node2                     running (virtualbox)
node3                     not created (virtualbox)

This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.

c:\vagrantplay\kubernetes-vagrant-centos-cluster>
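A hedged sketch for debugging the publickey failure: ask Vagrant which key and port it expects, then connect manually. The path and port below are illustrative; use whatever vagrant ssh-config prints:

    vagrant ssh-config node1
    # connect manually with the reported key (path and port are illustrative)
    ssh -i .vagrant/machines/node1/virtualbox/private_key -p 2222 vagrant@127.0.0.1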

Can't open http://servicegraph.istio.jimmysong.io/graph; it returns Service Unavailable

kubernetes-vagrant-centos-cluster git:(master) ✗ kubectl apply -n default -f <(istioctl kube-inject -f yaml/istio-bookinfo/bookinfo.yaml)
istioctl create -f yaml/istio-bookinfo/bookinfo-gateway.yaml
service/details created
deployment.extensions/details-v1 created
service/ratings created
deployment.extensions/ratings-v1 created
service/reviews created
deployment.extensions/reviews-v1 created
deployment.extensions/reviews-v2 created
deployment.extensions/reviews-v3 created
service/productpage created
deployment.extensions/productpage-v1 created
Created config gateway/default/bookinfo-gateway at revision 9760
Created config virtual-service/default/bookinfo at revision 9767

The following hosts entries have been added:
172.17.8.102 grafana.istio.jimmysong.io
172.17.8.102 servicegraph.istio.jimmysong.io
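"Service Unavailable" from the Traefik ingress usually means the backend pod is not ready yet. A minimal diagnostic sketch:

    # check that the servicegraph backend and its ingress are actually up
    kubectl -n istio-system get pods
    kubectl -n istio-system get svc servicegraph
    kubectl -n istio-system get ingress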

During vagrant up, error: "rsync" could not be found on your PATH. Make sure that rsync is properly installed on your system and available on the PATH.

[root@walle1 kubernetes-vagrant-centos-cluster-master]# vagrant up
Bringing machine 'node1' up with 'virtualbox' provider...
Bringing machine 'node2' up with 'virtualbox' provider...
Bringing machine 'node3' up with 'virtualbox' provider...
==> node1: Importing base box 'centos/7'...
==> node1: Matching MAC address for NAT networking...
==> node1: Setting the name of the VM: node1
"rsync" could not be found on your PATH. Make sure that rsync
is properly installed on your system and available on the PATH.
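rsync is required on the host for Vagrant's rsync-based synced folders. A minimal sketch, assuming a CentOS/RHEL host (the prompt above shows root@walle1):

    yum install -y rsync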

Dashboard cannot be accessed

Somewhat similar to #6, but with a difference: the Dashboard pod does not come up at all.

[root@node1 hack]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                   READY     STATUS             RESTARTS   AGE
kube-system   coredns-6558b6549d-zs9dt               1/1       Running            0          32m
kube-system   kubernetes-dashboard-f95796d57-g6m9g   0/1       CrashLoopBackOff   11         32m
kube-system   traefik-ingress-controller-kzqvt       1/1       Running            0          32m

[root@node1 hack]# kubectl get svc --all-namespaces
NAMESPACE     NAME                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)           AGE
default       kubernetes                ClusterIP   10.254.0.1       <none>        443/TCP           4d
kube-system   kube-dns                  ClusterIP   10.254.0.2       <none>        53/UDP,53/TCP     34m
kube-system   kubernetes-dashboard      ClusterIP   10.254.113.186   <none>        8443/TCP          34m
kube-system   traefik-ingress-service   ClusterIP   10.254.167.251   <none>        80/TCP,8080/TCP   34m

[root@node1 hack]# docker images
REPOSITORY                                       TAG                 IMAGE ID            CREATED             SIZE
docker.io/jimmysong/kubernetes-dashboard-amd64   v1.8.2              c87ea0497294        3 months ago        102 MB
docker.io/jimmysong/pause-amd64                  3.0                 99e59f495ffa        23 months ago       747 kB

[root@node1 hack]# docker run -it docker.io/jimmysong/kubernetes-dashboard-amd64:v1.8.2 bash
2018/04/16 17:13:43 Starting overwatch
2018/04/16 17:13:43 Using in-cluster config to connect to apiserver
2018/04/16 17:13:43 Could not init in cluster config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
2018/04/16 17:13:43 Using random key for csrf signing
2018/04/16 17:13:43 No request provided. Skipping authorization
panic: Could not create client config. Check logs for more information

goroutine 1 [running]:
github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).initInsecureClient(0xc4201632c0)
	/home/travis/build/kubernetes/dashboard/.tmp/backend/src/github.com/kubernetes/dashboard/src/app/backend/client/manager.go:335 +0x9a
github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).init(0xc4201632c0)
	/home/travis/build/kubernetes/dashboard/.tmp/backend/src/github.com/kubernetes/dashboard/src/app/backend/client/manager.go:297 +0x47
github.com/kubernetes/dashboard/src/app/backend/client.NewClientManager(0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
	/home/travis/build/kubernetes/dashboard/.tmp/backend/src/github.com/kubernetes/dashboard/src/app/backend/client/manager.go:365 +0x84
main.main()
	/home/travis/build/kubernetes/dashboard/.tmp/backend/src/github.com/kubernetes/dashboard/src/app/backend/dashboard.go:91 +0x13b

By looking at the Docker containers I found that the dashboard container was not running. I tried to start the dashboard container manually and got the error above 👆.

[root@node1 hack]# systemctl status kube-apiserver -a
● kube-apiserver.service - Kubernetes API Service
   Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2018-04-16 23:06:31 CST; 2h 13min ago
     Docs: https://github.com/GoogleCloudPlatform/kubernetes
 Main PID: 1217 (kube-apiserver)
   Memory: 2.6M
   CGroup: /system.slice/kube-apiserver.service
           └─1217 /usr/bin/kube-apiserver --logtostderr=true --v=0 --etcd-servers=http://172.17.8.101:2379,http://172.17.8.102:2379,http://172.17.8.103:2379 --advertise-addres...

Apr 17 01:01:43 node1 kube-apiserver[1217]: I0417 01:01:43.675007    1217 logs.go:49] http: TLS handshake error from 172.17.8.1:62897: EOF
Apr 17 01:01:43 node1 kube-apiserver[1217]: I0417 01:01:43.807961    1217 logs.go:49] http2: server: error reading preface from client 172.17.8.1:62898: read tcp 172.17.8.101:6443->172.17.8.1:62898: read: connection reset by peer
Apr 17 01:02:22 node1 kube-apiserver[1217]: I0417 01:02:22.100809    1217 logs.go:49] http: TLS handshake error from 172.17.8.101:56790: remote error: tls: unknown certificate authority
Apr 17 01:05:23 node1 kube-apiserver[1217]: I0417 01:05:23.383208    1217 trace.go:76] Trace[932644373]: "List /api/v1/componentstatuses" (started: 2018-04-17 01:05:20.376024136 +0800 CST m=+7136.859394663) (total time: 3.007151421s):
Apr 17 01:05:23 node1 kube-apiserver[1217]: Trace[932644373]: [3.007009003s] [3.006957743s] Listing from storage done
Apr 17 01:08:11 node1 kube-apiserver[1217]: I0417 01:08:11.716342    1217 logs.go:49] http: TLS handshake error from 172.17.8.1:62985: EOF
Apr 17 01:08:51 node1 kube-apiserver[1217]: I0417 01:08:51.101732    1217 logs.go:49] http2: received GOAWAY [FrameHeader GOAWAY len=33], starting graceful shutdown
Apr 17 01:16:56 node1 kube-apiserver[1217]: I0417 01:16:56.548936    1217 logs.go:49] http: TLS handshake error from 172.17.8.1:63062: EOF
Apr 17 01:16:58 node1 kube-apiserver[1217]: I0417 01:16:58.937833    1217 logs.go:49] http: TLS handshake error from 172.17.8.1:63063: EOF
Apr 17 01:16:59 node1 kube-apiserver[1217]: I0417 01:16:59.109004    1217 logs.go:49] http2: server: error reading preface from client 172.17.8.1:63064: read tcp 172.17.8.101:6443->172.17.8.1:63064: read: connection reset by peer

git clone failed

Environment:

  • Ubuntu 16.04.4 LTS
  • vagrant 2.0.3
  • virtualBox 5.2.16
  • git version 2.7.4
  • Kubernetes v1.9.5
  • Istio 1.0.0
Problem: when running git clone on this project, I encountered the following:
root@node1:~# git clone https://github.com/rootsongjc/kubernetes-vagrant-centos-cluster.git
Cloning into 'kubernetes-vagrant-centos-cluster'...
remote: Counting objects: 539, done.
remote: Compressing objects: 100% (93/93), done.
remote: Total 539 (delta 68), reused 83 (delta 33), pack-reused 411
Receiving objects: 100% (539/539), 50.99 MiB | 1.36 MiB/s, done.
Resolving deltas: 100% (257/257), done.
error: RPC failed; curl 56 GnuTLS recv error (-110): The TLS connection was non-properly terminated.

I'm not sure where the problem is.
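The "curl 56 GnuTLS recv error (-110)" during clone is typically a flaky HTTPS connection rather than a repo problem. A hedged sketch of common workarounds:

    # enlarge git's HTTP buffer and retry, or do a shallow clone to reduce the transfer size
    git config --global http.postBuffer 524288000
    git clone --depth 1 https://github.com/rootsongjc/kubernetes-vagrant-centos-cluster.git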

unable to recognize "addon/istio/istio-demo.yaml": no matches for kind "Gateway"

After successfully installing k8s v1.12.0-rc.2 on Win10, I downloaded istio-1.0.0-win, unpacked it, and added it to PATH; applying the demo reports the "unable to recognize" errors below.

D:\code\istio>kubectl apply -f addon/istio/istio-demo.yaml
namespace/istio-system created
configmap/istio-galley-configuration created
configmap/istio-grafana-custom-resources created
configmap/istio-statsd-prom-bridge created
configmap/prometheus created
configmap/istio-security-custom-resources created
configmap/istio created
configmap/istio-sidecar-injector created
serviceaccount/istio-galley-service-account created
serviceaccount/istio-egressgateway-service-account created
serviceaccount/istio-ingressgateway-service-account created
serviceaccount/istio-grafana-post-install-account created
clusterrole.rbac.authorization.k8s.io/istio-grafana-post-install-istio-system created
clusterrolebinding.rbac.authorization.k8s.io/istio-grafana-post-install-role-binding-istio-system created
job.batch/istio-grafana-post-install created
serviceaccount/istio-mixer-service-account created
serviceaccount/istio-pilot-service-account created
serviceaccount/prometheus created
serviceaccount/istio-cleanup-secrets-service-account created
clusterrole.rbac.authorization.k8s.io/istio-cleanup-secrets-istio-system created
clusterrolebinding.rbac.authorization.k8s.io/istio-cleanup-secrets-istio-system created
job.batch/istio-cleanup-secrets created
serviceaccount/istio-citadel-service-account created
serviceaccount/istio-sidecar-injector-service-account created
customresourcedefinition.apiextensions.k8s.io/virtualservices.networking.istio.io created
customresourcedefinition.apiextensions.k8s.io/destinationrules.networking.istio.io created
customresourcedefinition.apiextensions.k8s.io/serviceentries.networking.istio.io created
customresourcedefinition.apiextensions.k8s.io/gateways.networking.istio.io created
customresourcedefinition.apiextensions.k8s.io/envoyfilters.networking.istio.io created
customresourcedefinition.apiextensions.k8s.io/httpapispecbindings.config.istio.io created
customresourcedefinition.apiextensions.k8s.io/httpapispecs.config.istio.io created
customresourcedefinition.apiextensions.k8s.io/quotaspecbindings.config.istio.io created
customresourcedefinition.apiextensions.k8s.io/quotaspecs.config.istio.io created
customresourcedefinition.apiextensions.k8s.io/rules.config.istio.io created
customresourcedefinition.apiextensions.k8s.io/attributemanifests.config.istio.io created
customresourcedefinition.apiextensions.k8s.io/bypasses.config.istio.io created
customresourcedefinition.apiextensions.k8s.io/circonuses.config.istio.io created
customresourcedefinition.apiextensions.k8s.io/deniers.config.istio.io created
customresourcedefinition.apiextensions.k8s.io/fluentds.config.istio.io created
customresourcedefinition.apiextensions.k8s.io/kubernetesenvs.config.istio.io created
customresourcedefinition.apiextensions.k8s.io/listcheckers.config.istio.io created
customresourcedefinition.apiextensions.k8s.io/memquotas.config.istio.io created
customresourcedefinition.apiextensions.k8s.io/noops.config.istio.io created
customresourcedefinition.apiextensions.k8s.io/opas.config.istio.io created
customresourcedefinition.apiextensions.k8s.io/prometheuses.config.istio.io created
customresourcedefinition.apiextensions.k8s.io/rbacs.config.istio.io created
customresourcedefinition.apiextensions.k8s.io/redisquotas.config.istio.io created
customresourcedefinition.apiextensions.k8s.io/servicecontrols.config.istio.io created
customresourcedefinition.apiextensions.k8s.io/signalfxs.config.istio.io created
customresourcedefinition.apiextensions.k8s.io/solarwindses.config.istio.io created
customresourcedefinition.apiextensions.k8s.io/stackdrivers.config.istio.io created
customresourcedefinition.apiextensions.k8s.io/statsds.config.istio.io created
customresourcedefinition.apiextensions.k8s.io/stdios.config.istio.io created
customresourcedefinition.apiextensions.k8s.io/apikeys.config.istio.io created
customresourcedefinition.apiextensions.k8s.io/authorizations.config.istio.io created
customresourcedefinition.apiextensions.k8s.io/checknothings.config.istio.io created
customresourcedefinition.apiextensions.k8s.io/kuberneteses.config.istio.io created
customresourcedefinition.apiextensions.k8s.io/listentries.config.istio.io created
customresourcedefinition.apiextensions.k8s.io/logentries.config.istio.io created
customresourcedefinition.apiextensions.k8s.io/edges.config.istio.io created
customresourcedefinition.apiextensions.k8s.io/metrics.config.istio.io created
customresourcedefinition.apiextensions.k8s.io/quotas.config.istio.io created
customresourcedefinition.apiextensions.k8s.io/reportnothings.config.istio.io created
customresourcedefinition.apiextensions.k8s.io/servicecontrolreports.config.istio.io created
customresourcedefinition.apiextensions.k8s.io/tracespans.config.istio.io created
customresourcedefinition.apiextensions.k8s.io/rbacconfigs.rbac.istio.io created
customresourcedefinition.apiextensions.k8s.io/serviceroles.rbac.istio.io created
customresourcedefinition.apiextensions.k8s.io/servicerolebindings.rbac.istio.io created
customresourcedefinition.apiextensions.k8s.io/adapters.config.istio.io created
customresourcedefinition.apiextensions.k8s.io/instances.config.istio.io created
customresourcedefinition.apiextensions.k8s.io/templates.config.istio.io created
customresourcedefinition.apiextensions.k8s.io/handlers.config.istio.io created
clusterrole.rbac.authorization.k8s.io/istio-galley-istio-system created
clusterrole.rbac.authorization.k8s.io/istio-egressgateway-istio-system created
clusterrole.rbac.authorization.k8s.io/istio-ingressgateway-istio-system created
clusterrole.rbac.authorization.k8s.io/istio-mixer-istio-system created
clusterrole.rbac.authorization.k8s.io/istio-pilot-istio-system created
clusterrole.rbac.authorization.k8s.io/prometheus-istio-system created
clusterrole.rbac.authorization.k8s.io/istio-citadel-istio-system created
clusterrole.rbac.authorization.k8s.io/istio-sidecar-injector-istio-system created
clusterrolebinding.rbac.authorization.k8s.io/istio-galley-admin-role-binding-istio-system created
clusterrolebinding.rbac.authorization.k8s.io/istio-egressgateway-istio-system created
clusterrolebinding.rbac.authorization.k8s.io/istio-ingressgateway-istio-system created
clusterrolebinding.rbac.authorization.k8s.io/istio-mixer-admin-role-binding-istio-system created
clusterrolebinding.rbac.authorization.k8s.io/istio-pilot-istio-system created
clusterrolebinding.rbac.authorization.k8s.io/prometheus-istio-system created
clusterrolebinding.rbac.authorization.k8s.io/istio-citadel-istio-system created
clusterrolebinding.rbac.authorization.k8s.io/istio-sidecar-injector-admin-role-binding-istio-system created
service/istio-galley created
service/istio-egressgateway created
service/istio-ingressgateway created
service/grafana created
service/istio-policy created
service/istio-telemetry created
service/istio-statsd-prom-bridge created
deployment.extensions/istio-statsd-prom-bridge created
service/istio-pilot created
service/prometheus created
service/istio-citadel created
service/servicegraph created
service/istio-sidecar-injector created
deployment.extensions/istio-galley created
deployment.extensions/istio-egressgateway created
deployment.extensions/istio-ingressgateway created
deployment.extensions/grafana created
deployment.extensions/istio-policy created
deployment.extensions/istio-telemetry created
deployment.extensions/istio-pilot created
deployment.extensions/prometheus created
deployment.extensions/istio-citadel created
deployment.extensions/servicegraph created
deployment.extensions/istio-sidecar-injector created
deployment.extensions/istio-tracing created
horizontalpodautoscaler.autoscaling/istio-egressgateway created
horizontalpodautoscaler.autoscaling/istio-ingressgateway created
horizontalpodautoscaler.autoscaling/istio-policy created
horizontalpodautoscaler.autoscaling/istio-telemetry created
horizontalpodautoscaler.autoscaling/istio-pilot created
service/jaeger-query created
service/jaeger-collector created
service/jaeger-agent created
service/zipkin created
service/tracing created
mutatingwebhookconfiguration.admissionregistration.k8s.io/istio-sidecar-injector created
unable to recognize "addon/istio/istio-demo.yaml": no matches for kind "Gateway" in version "networking.istio.io/v1alpha3"
unable to recognize "addon/istio/istio-demo.yaml": no matches for kind "attributemanifest" in version "config.istio.io/v1alpha2"
unable to recognize "addon/istio/istio-demo.yaml": no matches for kind "attributemanifest" in version "config.istio.io/v1alpha2"
unable to recognize "addon/istio/istio-demo.yaml": no matches for kind "stdio" in version "config.istio.io/v1alpha2"
unable to recognize "addon/istio/istio-demo.yaml": no matches for kind "logentry" in version "config.istio.io/v1alpha2"
unable to recognize "addon/istio/istio-demo.yaml": no matches for kind "logentry" in version "config.istio.io/v1alpha2"
unable to recognize "addon/istio/istio-demo.yaml": no matches for kind "rule" in version "config.istio.io/v1alpha2"
unable to recognize "addon/istio/istio-demo.yaml": no matches for kind "rule" in version "config.istio.io/v1alpha2"
unable to recognize "addon/istio/istio-demo.yaml": no matches for kind "metric" in version "config.istio.io/v1alpha2"
unable to recognize "addon/istio/istio-demo.yaml": no matches for kind "metric" in version "config.istio.io/v1alpha2"
unable to recognize "addon/istio/istio-demo.yaml": no matches for kind "metric" in version "config.istio.io/v1alpha2"
unable to recognize "addon/istio/istio-demo.yaml": no matches for kind "metric" in version "config.istio.io/v1alpha2"
unable to recognize "addon/istio/istio-demo.yaml": no matches for kind "metric" in version "config.istio.io/v1alpha2"
unable to recognize "addon/istio/istio-demo.yaml": no matches for kind "metric" in version "config.istio.io/v1alpha2"
unable to recognize "addon/istio/istio-demo.yaml": no matches for kind "prometheus" in version "config.istio.io/v1alpha2"
unable to recognize "addon/istio/istio-demo.yaml": no matches for kind "rule" in version "config.istio.io/v1alpha2"
unable to recognize "addon/istio/istio-demo.yaml": no matches for kind "rule" in version "config.istio.io/v1alpha2"
unable to recognize "addon/istio/istio-demo.yaml": no matches for kind "kubernetesenv" in version "config.istio.io/v1alpha2"
unable to recognize "addon/istio/istio-demo.yaml": no matches for kind "rule" in version "config.istio.io/v1alpha2"
unable to recognize "addon/istio/istio-demo.yaml": no matches for kind "rule" in version "config.istio.io/v1alpha2"
unable to recognize "addon/istio/istio-demo.yaml": no matches for kind "kubernetes" in version "config.istio.io/v1alpha2"
unable to recognize "addon/istio/istio-demo.yaml": no matches for kind "DestinationRule" in version "networking.istio.io/v1alpha3"
unable to recognize "addon/istio/istio-demo.yaml": no matches for kind "DestinationRule" in version "networking.istio.io/v1alpha3"
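istio-demo.yaml defines the CRDs and the custom resources in one file, so the first apply can race CRD registration and fail on exactly these kinds (Gateway, rule, metric, and so on). A minimal sketch of the usual fix, waiting for a CRD to be established and re-applying (kubectl wait requires kubectl >= 1.11):

    # wait until the Gateway CRD is established, then apply the manifest again
    kubectl wait --for=condition=established crd/gateways.networking.istio.io --timeout=60s
    kubectl apply -f addon/istio/istio-demo.yaml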

Dashboard cannot be accessed

After vagrant up, everything comes up normally.
Opening https://172.17.8.101:8443 in a browser fails.
I checked the pods and services; they are all fine.

[root@node1 ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                    READY     STATUS    RESTARTS   AGE
kube-system   coredns-5984fb8cbb-b5rth                1/1       Running   0          9m
kube-system   kubernetes-dashboard-65486f5fdf-rp648   1/1       Running   0          9m
kube-system   traefik-ingress-controller-d97wf        1/1       Running   0          9m
[root@node1 ~]# kubectl get svc --all-namespaces
NAMESPACE     NAME                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)           AGE
default       kubernetes                ClusterIP   10.254.0.1       <none>        443/TCP           20m
kube-system   kube-dns                  ClusterIP   10.254.0.2       <none>        53/UDP,53/TCP     10m
kube-system   kubernetes-dashboard      ClusterIP   10.254.29.80     <none>        8443/TCP          9m
kube-system   traefik-ingress-service   ClusterIP   10.254.150.197   <none>        80/TCP,8080/TCP   9m
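Note that the dashboard service above is a ClusterIP on 8443, so nothing necessarily listens on the node's 172.17.8.101:8443 unless something (e.g. an API server proxy or ingress) exposes it. A minimal diagnostic sketch from the host:

    # is anything listening on 8443 inside node1?
    vagrant ssh node1 -c 'sudo ss -tlnp | grep 8443'
    # is the dashboard reachable via its ClusterIP from the node? (IP taken from the svc list above)
    vagrant ssh node1 -c 'curl -k https://10.254.29.80:8443/'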
