
homecores's Introduction

homecores

Introduction

What is homecores

homecores is a project to run a Kubernetes cluster on virtual machines. Each virtual machine starts on a different computer, so you can easily work with your pods before moving them to production.
It is meant for development.

What is not homecores

It is not intended to replace a real high-availability cluster.
The goal is not to build a production-grade cluster with this product.

Context

  • the environments used for testing are:
    • Windows 7/10
    • VirtualBox 5.0
    • Cygwin or git bash

What this project needs

The project is a Ruby script that runs a VirtualBox image with the help of Vagrant.

So you will need (a quick sanity check is sketched after this list):

  • Virtualbox
  • Vagrant
  • an ssh client
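
If you want to quickly check that everything is reachable from your terminal, here is a minimal sketch (the binary names are the common defaults; on Windows, VBoxManage is only on the PATH if you added the VirtualBox install folder to it):

# quick sanity check for the prerequisites, run from Git Bash or Cygwin
for tool in VBoxManage vagrant ssh git; do
  command -v "$tool" >/dev/null 2>&1 && echo "found:   $tool" || echo "MISSING: $tool"
done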

A word on certificates:

  • To simplify the first run, demo certificates are given for the kubernetes cluster and the user. You will find a guide here on how to generate your own.

Prerequisite

Virtualbox

Download and install VirtualBox

Vagrant

Download and install Vagrant

ssh client

There are lots of ssh clients. For Windows, you can use the one that comes with the Git installer.
The Windows CMD is limited; you should look at babun instead.

Preparing the project

Configuration, Step by Step

  • Start a terminal and move to the folder where you want to clone the project
  • run git clone https://github.com/tdeheurles/homecores
  • go into the project with cd homecores
  • run ./start.sh. This will generate the file config.sh
  • Edit config.sh with a text editor of your choice (the whole sequence is summarized just below)
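
For reference, the same steps in one block (the repository URL is the one given above):

# clone the project and generate the initial configuration
git clone https://github.com/tdeheurles/homecores
cd homecores
./start.sh      # the first run generates config.sh
vi config.sh    # or any other text editor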

Opening config.sh, you should see something like this:

# =================== USER CONFIGURATION ====================
# ===========================================================

# server name :
#  - this just need to be different for every virtual machine
coreos_hostname="master1"

# The mask of the local network used
network_mask="192.168.1"

# the network interface to use for virtualbox
#   refer to the README.md for information on how to get it
public_network_to_use="Qualcomm Atheros AR8151 PCI-E Gigabit Ethernet Controller (NDIS 6.20)"



# ========================== OPTIONAL =======================
# ===========================================================
[...]
  • You can leave coreos_hostname=master1; it just needs to be unique within the cluster.
  • Enter the mask of your local network. It is used to grep the ip_address after CoreOS has started.
  • The third value is a bit more difficult to find. It is a Vagrant setting that can be obtained with a VirtualBox tool:
    • open a new CLI (Git Bash or another)
    • go to the VirtualBox installation folder
      • for Windows it's cd "C:\Program Files\Oracle\VirtualBox"
    • run vboxmanage list bridgedifs
    • the value you need is the one corresponding to Name:
    • For example:
cd "C:\Program Files\Oracle\VirtualBox"
➜  vboxmanage list bridgedifs
Name:            Qualcomm Atheros AR8151 PCI-E Gigabit Ethernet Controller (NDIS 6.20)
GUID:            f99dc65b-6c35-4790-bc6b-3d36d2638c8b
DHCP:            Enabled
IPAddress:       192.168.1.28
NetworkMask:     255.255.255.0
IPV6Address:     fe80:0000:0000:0000:052f:51af:4d49:9ccc
IPV6NetworkMaskPrefixLength: 64
HardwareAddress: 90:2b:34:58:5f:71
MediumType:      Ethernet
Status:          Up
VBoxNetworkName: HostInterfaceNetworking-Qualcomm Atheros AR8151 PCI-E Gigabit Ethernet Controller (NDIS 6.20)

Here, I will write public_network_to_use="Qualcomm Atheros AR8151 PCI-E Gigabit Ethernet Controller (NDIS 6.20)" in my config.sh file.

Running the project

Open a terminal and run ./start.sh.
Some downloads will occur.
Then ssh into the VM with vagrant ssh -- -i ./id_rsa
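
In short, the two commands are (the ./id_rsa path is relative to the project folder):

./start.sh                    # boots and provisions the CoreOS virtual machine
vagrant ssh -- -i ./id_rsa    # ssh into the virtual machine with the provided key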

Here is an example of the ./start.sh output:

➜  ./start.sh
==> master1: VM not created. Moving on...
Bringing machine 'master1' up with 'virtualbox' provider...
==> master1: Box 'coreos-alpha' could not be found. Attempting to find and install...
    master1: Box Provider: virtualbox
    master1: Box Version: >= 0
==> master1: Loading metadata for box 'http://alpha.release.core-os.net/amd64-usr/current/coreos_production_vagrant.json'
    master1: URL: http://alpha.release.core-os.net/amd64-usr/current/coreos_production_vagrant.json
==> master1: Adding box 'coreos-alpha' (v801.0.0) for provider: virtualbox
    master1: Downloading: http://alpha.release.core-os.net/amd64-usr/801.0.0/coreos_production_vagrant.box
    master1:
    master1: Calculating and comparing box checksum...
==> master1: Successfully added box 'coreos-alpha' (v801.0.0) for 'virtualbox'!
==> master1: Importing base box 'coreos-alpha'...
==> master1: Matching MAC address for NAT networking...
==> master1: Checking if box 'coreos-alpha' is up to date...
==> master1: Setting the name of the VM: homecores_master1_1442503740277_51132
==> master1: Clearing any previously set network interfaces...
==> master1: Preparing network interfaces based on configuration...
    master1: Adapter 1: nat
    master1: Adapter 2: bridged
    master1: Adapter 3: hostonly
==> master1: Forwarding ports...
    master1: 22 => 2222 (adapter 1)
==> master1: Running 'pre-boot' VM customizations...
==> master1: Booting VM...
==> master1: Waiting for machine to boot. This may take a few minutes...
    master1: SSH address: 127.0.0.1:2222
    master1: SSH username: core
    master1: SSH auth method: private key
    master1: Warning: Remote connection disconnect. Retrying...
==> master1: Machine booted and ready!
==> master1: Setting hostname...
==> master1: Configuring and enabling network interfaces...
==> master1: Running provisioner: shell...
    master1: Running: inline script
==> master1: Running provisioner: file...
==> master1: Running provisioner: file...
[... some provisioner lines ...]
==> master1: Running provisioner: file...
==> master1: Running provisioner: shell...
    master1: Running: inline script

How to test that everything is started correctly

If the script ran successfully, you have been ssh'd into CoreOS.
Downloads will now be running (~250MB).

Short Way:

  • run slj and wait for the jobs to finish
core@master1 ~ $ slj
No jobs running.
  • then run dps and wait for 8 containers to show up:
core@master1 ~ $ dps
CONTAINER ID        IMAGE                                       COMMAND                  CREATED             STATUS              PORTS               NAMES
259d16ba012e        gcr.io/google_containers/hyperkube:v1.0.6   "/hyperkube apiserver"   5 minutes ago       Up 5 minutes                            k8s_kube-apiserver.7bff4b40_kube-apiserver-192.168.1.83_default_7c4bf9aa9cfff4a366b0d917afef89de_95633f59
34e88c61ef41        gcr.io/google_containers/hyperkube:v1.0.6   "/hyperkube scheduler"   5 minutes ago       Up 5 minutes                            k8s_kube-scheduler.96058e0_kube-scheduler-192.168.1.83_default_1ad6d2fbf3f144bb17dc21ee398dd6e1_b4b085f8
ae1d4d3158d2        gcr.io/google_containers/hyperkube:v1.0.6   "/hyperkube proxy --m"   5 minutes ago       Up 5 minutes                            k8s_kube-proxy.db703083_kube-proxy-192.168.1.83_default_8770f171ca1f4f9d4aaf724284527622_badbb059
cd672b7d81fb        gcr.io/google_containers/hyperkube:v1.0.6   "/hyperkube controlle"   5 minutes ago       Up 5 minutes                            k8s_kube-controller-manager.b9acaee_kube-controller-manager-192.168.1.83_default_96779ee4ab5a79bb2f082a7e48fa30be_bd00fa84
5aea94e92c32        gcr.io/google_containers/pause:0.8.0        "/pause"                 7 minutes ago       Up 7 minutes                            k8s_POD.e4cc795_kube-proxy-192.168.1.83_default_8770f171ca1f4f9d4aaf724284527622_ecba1f9b
707eb50ffa21        gcr.io/google_containers/pause:0.8.0        "/pause"                 7 minutes ago       Up 7 minutes                            k8s_POD.e4cc795_kube-scheduler-192.168.1.83_default_1ad6d2fbf3f144bb17dc21ee398dd6e1_028b4c97
ea54b11143fa        gcr.io/google_containers/pause:0.8.0        "/pause"                 7 minutes ago       Up 7 minutes                            k8s_POD.e4cc795_kube-controller-manager-192.168.1.83_default_96779ee4ab5a79bb2f082a7e48fa30be_6f97c15e
b6e5a8db5d9b        gcr.io/google_containers/pause:0.8.0        "/pause"                 7 minutes ago       Up 7 minutes                            k8s_POD.e4cc795_kube-apiserver-192.168.1.83_default_7c4bf9aa9cfff4a366b0d917afef89de_ad4194e4
  • run kst and look for something like this to appear:
core@master1 ~ $ kst
SERVICES
NAME         LABELS                                    SELECTOR   IP(S)      PORT(S)
kubernetes   component=apiserver,provider=kubernetes   <none>     10.3.0.1   443/TCP

RC
CONTROLLER   CONTAINER(S)   IMAGE(S)   SELECTOR   REPLICAS

PODS
NAME                                   READY     STATUS    RESTARTS   AGE
kube-apiserver-192.168.1.83            1/1       Running   0          1m
kube-controller-manager-192.168.1.83   1/1       Running   0          1m
kube-proxy-192.168.1.83                1/1       Running   0          1m
kube-scheduler-192.168.1.83            1/1       Running   0          1m

ENDPOINTS
NAME         ENDPOINTS
kubernetes   192.168.1.83:443

NODES
NAME           LABELS                                STATUS
192.168.1.83   kubernetes.io/hostname=192.168.1.83   Ready

If you see the 4 kube containers running, it's cool!

Detailed way

Wait for systemd jobs

Once you have ssh'd in, you will have to wait for some downloads and processes to finish.
You can monitor them using the slj alias for systemctl list-jobs:

core@master1 ~ $ slj
 JOB  UNIT                   TYPE    STATE
2265 flanneld.service        start   running
2352 kubelet.service         start   waiting
1315 user-cloudinit@var...   start   running

flanneld and kubelet need to be downloaded.
The last job is the cloud-config that contains the flanneld and kubelet jobs.
We can also see that the kubelet state is waiting: it waits for flanneld to be started.

You can also use sst (alias for systemctl status).

 $ sst
● coreos1
    State: running
     Jobs: 5 queued     <==== wait for this to become 0
   Failed: 0 units      <==== must be 0
   [...]

So first, wait for these queued jobs to end (the command does not auto-update, re-launch it ;-)).
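
If you prefer something that blocks until the queue is empty, a small loop does the trick (just a sketch, not one of the project aliases):

core@master1 ~ $ while [ -n "$(systemctl list-jobs --no-legend)" ]; do sleep 5; done
core@master1 ~ $ slj
No jobs running.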

Control ETCD2 Key/Value store

etcd2 is our distributed key/value store. Everything rests on its shoulders.
The elsa alias for etcdctl ls --recursive should print the keys stored in the cluster. Something like this should appear:

username@hostname ~ $ elsa
/coreos.com
/coreos.com/updateengine
/coreos.com/updateengine/rebootlock
/coreos.com/updateengine/rebootlock/semaphore
/coreos.com/network
/coreos.com/network/subnets
/coreos.com/network/subnets/10.200.24.0-24
/coreos.com/network/config
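
To read the value behind one of those keys, a plain etcdctl get works too. For example, the flannel configuration stored under /coreos.com/network/config (the exact JSON depends on your config.sh):

core@master1 ~ $ etcdctl get /coreos.com/network/config
{"Network":"10.200.0.0/16"}
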
Control the FLANNEL network

flannel is the technology that creates a virtual network for our docker daemons on each host.

The flannel network is defined in the config.sh file.
For example: flannel_network="10.200.0.0/16".

First, look at the flannel environment with the fenv alias (flannel environment):

$ fenv
FLANNEL_NETWORK=10.200.0.0/16
FLANNEL_SUBNET=10.200.53.1/24
FLANNEL_MTU=1472
FLANNEL_IPMASQ=true
FLANNELD_IFACE=192.168.1.5

If the first 4 lines don't appear, the flannel container is probably still downloading.
If FLANNELD_IFACE=192.168.1.5 does not appear, there is probably a problem with the configuration and the flannel network will be local to that computer.

Then, run an ifconfig to see the networks.

core@coreos1 ~ $ ifconfig                                          
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500              
        inet 10.200.24.1  netmask 255.255.255.0  broadcast 0.0.0.0 
        [...]
                                                                   
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500         
        inet 10.0.2.15  netmask 255.255.255.0  broadcast 10.0.2.255
        [...]
                                                                   
eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500         
        inet 172.16.1.100  netmask 255.255.255.0  broadcast 172.16.
        [...]
                                                                   
eth2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500         
        inet 192.168.1.39  netmask 255.255.255.0  broadcast 192.168
        [...]
                                                                   
flannel0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST>  mtu 1
        inet 10.200.24.0  netmask 255.255.0.0  destination 10.200.2
        [...]
                                                                   
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536                       
        inet 127.0.0.1  netmask 255.0.0.0                          
        [...]

We have:

  • docker0: The inet must match the flannel subnet defined in the config. If docker0 does not appear, just run the dbox command (docker run -ti busybox sh). It will download a small container. Inside this container, run ifconfig: the eth0 address should be something like 10.200.24.4 (corresponding to the flannel CIDR and your flannel0 network). See the sketch after this list.
  • eth0: this is the vagrant NAT
  • eth1: this is the vagrant private_ip, it's used for NFS (folder sharing)
  • eth2: this one is important. It must correspond to the IP defined in the config.sh file (network_mask="192.168.1"). It's your public_ip
  • flannel0: This is the one we are looking for. It must be in the CIDR defined in config.sh (flannel_network="10.200.0.0/16"). It must correspond to:
    • docker0
    • eth0 inside containers.
  • lo: your machine loopback
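
As mentioned for docker0, the dbox alias is a quick way to confirm that containers really get an address from the flannel range. A sketch of that check (the addresses are the ones from the example above):

core@master1 ~ $ dbox               # alias for: docker run -ti busybox sh
/ # ifconfig eth0
eth0      Link encap:Ethernet  HWaddr ...
          inet addr:10.200.24.4  Bcast:0.0.0.0  Mask:255.255.255.0
/ # exit
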
The kubernetes controller: kubectl

kubectl is the CLI that can be used to communicate with Kubernetes. It is downloaded after CoreOS is up. Just run kubectl; if the help appears, then it's fine.
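
To go one step further than the help text, you can ask the apiserver for the node list (this assumes the cluster from the previous steps is up and the demo certificates are in place):

core@master1 ~ $ kubectl get nodes
NAME           LABELS                                STATUS
192.168.1.83   kubernetes.io/hostname=192.168.1.83   Ready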

kubelet and kubernetes

systemd is in charge of running the kubelet (the Kubernetes part that starts and stops containers). So to check that everything is fine, just look at your running containers:

  • kst (alias that prints some Kubernetes information):
core@master1 ~ $ kst
SERVICES
NAME         LABELS                                    SELECTOR   IP(S)      PORT(S)
kubernetes   component=apiserver,provider=kubernetes   <none>     10.3.0.1   443/TCP

RC
CONTROLLER   CONTAINER(S)   IMAGE(S)   SELECTOR   REPLICAS

PODS
NAME                                   READY     STATUS    RESTARTS   AGE
kube-apiserver-192.168.1.83            1/1       Running   0          1m
kube-controller-manager-192.168.1.83   1/1       Running   0          1m
kube-proxy-192.168.1.83                1/1       Running   0          1m
kube-scheduler-192.168.1.83            1/1       Running   0          1m

ENDPOINTS
NAME         ENDPOINTS
kubernetes   192.168.1.83:443

NODES
NAME           LABELS                                STATUS
192.168.1.83   kubernetes.io/hostname=192.168.1.83   Ready

If you have running pods, it's fine. The kubelet has read a config file and started them.

homecores's People

Contributors

cluelessjoe, tdeheurles


homecores's Issues

Move certificate generation to master runtime

For now, the generation of the certificates is done by this script. It is run by hand and the resulting certificates have been pushed to the demo_certificates folder.

At line 54 of that script, we generate a certificate configuration.
I have bypassed the master public_ip for now, supposing that it could work. But at runtime, on the node, the connection to the master kube-apiserver is rejected.

So that script needs to be run once we have that public_ip. The public_ip is assigned by DHCP when we start the VM. I get the public_ip by running a systemd unit at startup that uses that script.

So we need to:

  • move the generate_new_cluster_certificates.sh script inside the master (provisioned with the Vagrantfile)
  • run it via a systemd unit.
    • That unit needs to be delayed until after the write_public_ip service (see the sketch below)
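
A minimal sketch of what such a unit could look like (the unit and script names come from the repository, the ExecStart path is an assumption):

# hypothetical generate_cluster_certificates.service, ordered after write_public_ip
[Unit]
Description=Generate the cluster certificates once the public_ip is known
Requires=write_public_ip.service
After=write_public_ip.service

[Service]
Type=oneshot
ExecStart=/home/core/generate_new_cluster_certificates.sh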

Network behaviour

from @potgieterdl:
Question on the public_ip: my assumption was that if I make the public ip the same as my local network, I should be able to ping the server from my local PC as well as from a network PC?

network_mask="192.168.1"
core@master1 ~ $ ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 10.2.15.1  netmask 255.255.255.0  broadcast 0.0.0.0
        ether 02:42:bc:3c:e7:d7  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.2.15  netmask 255.255.255.0  broadcast 10.0.2.255
        inet6 fe80::a00:27ff:fea1:41b4  prefixlen 64  scopeid 0x20<link>
        ether 08:00:27:a1:41:b4  txqueuelen 1000  (Ethernet)
        RX packets 151993  bytes 120688481 (115.0 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 88331  bytes 4997410 (4.7 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.0.131  netmask 255.255.255.0  broadcast 10.0.0.255
        inet6 fe80::a00:27ff:feae:46ef  prefixlen 64  scopeid 0x20<link>
        ether 08:00:27:ae:46:ef  txqueuelen 1000  (Ethernet)
        RX packets 335  bytes 31145 (30.4 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 124  bytes 10989 (10.7 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.16.1.100  netmask 255.255.255.0  broadcast 172.16.1.255
        inet6 fe80::a00:27ff:fe14:71cf  prefixlen 64  scopeid 0x20<link>
        ether 08:00:27:14:71:cf  txqueuelen 1000  (Ethernet)
        RX packets 89  bytes 13945 (13.6 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 45  bytes 4861 (4.7 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

flannel0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST>  mtu 1472
        inet 10.2.15.0  netmask 255.255.0.0  destination 10.2.15.0
        unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  txqueuelen 500  (UNSPEC)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 50923  bytes 5730027 (5.4 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 50923  bytes 5730027 (5.4 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
{lamb} ping 10.0.0.131
Pinging 10.0.0.131 with 32 bytes of data:
Request timed out.

{lamb} ping 172.16.1.100
Pinging 172.16.1.100 with 32 bytes of data:
Reply from 172.16.1.100: bytes=32 time<1ms TTL=64

Issue creating master, failing on coreos version in Vagrantfile

First off, thanks for a great project!!

Seems the version isn't available on CoreOS anymore? Not sure, new to this.
If I change the version to 815.0, then it starts up but doesn't complete. Seems to have failures with etcd. What version should I use?

Windows 10, fresh git clone, vagrant 1.7.4

$ ./start.sh
Control ssh key
Will start as master
Prepare folders
Prepare config for different services
  Create files and folders
  Add units to cloud_config
    templates/units/unit.write_public_ip.service.yml
    templates/units/unit.etcd2_master.service.yml
    templates/units/unit.kubelet_master.service.yml
    templates/units/unit.kubectl_master.service.yml
    templates/units/unit.flanneld.service.yml
    templates/units/unit.docker.service.yml
  Add files to cloud_config
    templates/coreos_files/file.write_ip.yml
    templates/coreos_files/file.sed_kubernetes_public_ip.yml
    templates/coreos_files/file.write_flannel_public_ip.yml
  Add environment variables
  remove temporary files
  Prepare kubernetes manifests
    apiserver
    controller-manager
    scheduler
    proxy
Configuration is finished


Launch Vagrant
==> master1: VM not created. Moving on...
Bringing machine 'master1' up with 'virtualbox' provider...
==> master1: Box 'coreos-alpha' could not be found. Attempting to find and install...
    master1: Box Provider: virtualbox
    master1: Box Version: 801.0
==> master1: Box file was not detected as metadata. Adding it directly...
You specified a box version constraint with a direct box file
path. Box version constraints only work with boxes from Vagrant
Cloud or a custom box host. Please remove the version constraint
and try again.

Changing it to 815.0.0
vi Vagrantfile

$image_version =  "815.0.0" 

I also tried version 808.0.0.
Then it starts... but it seems the services don't load correctly.
journalctl output:

Oct 05 10:22:45 master1 wget[1591]: 18900K .......... .......... .......... ..                   100% 5.10M=7.1s
Oct 05 10:22:45 master1 wget[1591]: 2015-10-05 10:22:45 (2.60 MB/s) - '/home/core/programs/kubectl' saved [19387136/19387136]
Oct 05 10:22:45 master1 systemd[1]: Started Download kubectl.
Oct 05 10:22:45 master1 coreos-cloudinit[1372]: 2015/10/05 10:22:45 Result of "start" on "kubectl_dl.service": done
Oct 05 10:22:45 master1 coreos-cloudinit[1372]: 2015/10/05 10:22:45 Calling unit command "start" on "kubelet.service"'
Oct 05 10:22:45 master1 systemd[1]: Starting Systemd test...
Oct 05 10:22:45 master1 systemd[1]: Started Systemd test.
Oct 05 10:22:47 master1 systemd[1]: flanneld.service: Service hold-off time over, scheduling restart.
Oct 05 10:22:47 master1 systemd[1]: etcd2.service: Service hold-off time over, scheduling restart.
Oct 05 10:22:47 master1 systemd[1]: Stopped Network fabric for containers.
Oct 05 10:22:47 master1 systemd[1]: Stopped ETCD2 service.
Oct 05 10:22:47 master1 systemd[1]: Starting Systemd test...
Oct 05 10:22:47 master1 systemd[1]: Started Systemd test.
Oct 05 10:22:47 master1 systemd[1]: Started ETCD2 service.
Oct 05 10:22:47 master1 systemd[1]: Starting Network fabric for containers...
Oct 05 10:22:47 master1 etcd2[1664]: 2015/10/5 10:22:47 etcdmain: etcd Version: 2.1.2
Oct 05 10:22:47 master1 etcd2[1664]: 2015/10/5 10:22:47 etcdmain: Git SHA: ff8d1ec
Oct 05 10:22:47 master1 etcd2[1664]: 2015/10/5 10:22:47 etcdmain: Go Version: go1.4.2
Oct 05 10:22:47 master1 etcd2[1664]: 2015/10/5 10:22:47 etcdmain: Go OS/Arch: linux/amd64
Oct 05 10:22:47 master1 etcd2[1664]: 2015/10/5 10:22:47 etcdmain: setting maximum number of CPUs to 1, total number of available CPUs is 1
Oct 05 10:22:47 master1 etcd2[1664]: 2015/10/5 10:22:47 etcdmain: listening for peers on http://:2380
Oct 05 10:22:47 master1 etcd2[1664]: 2015/10/5 10:22:47 etcdmain: listening for client requests on http://127.0.0.1:2379
Oct 05 10:22:47 master1 etcd2[1664]: 2015/10/5 10:22:47 etcdmain: stopping listening for client requests on http://127.0.0.1:2379
Oct 05 10:22:47 master1 etcd2[1664]: 2015/10/5 10:22:47 etcdmain: stopping listening for peers on http://:2380
Oct 05 10:22:47 master1 etcd2[1664]: 2015/10/5 10:22:47 etcdmain: listen tcp :2379: bind: address already in use
Oct 05 10:22:47 master1 systemd[1]: etcd2.service: Main process exited, code=exited, status=1/FAILURE
Oct 05 10:22:47 master1 systemd[1]: etcd2.service: Unit entered failed state.
Oct 05 10:22:47 master1 systemd[1]: etcd2.service: Failed with result 'exit-code'.
Oct 05 10:22:47 master1 etcdctl[1687]: Error:  cannot sync with the cluster using endpoints http://127.0.0.1:4001, http://127.0.0.1:2379
Oct 05 10:22:47 master1 systemd[1]: flanneld.service: Control process exited, code=exited status=2
Oct 05 10:22:47 master1 systemd[1]: Failed to start Network fabric for containers.
Oct 05 10:22:47 master1 systemd[1]: Dependency failed for Kubernetes Kubelet for Master.
Oct 05 10:22:47 master1 systemd[1]: kubelet.service: Job kubelet.service/start failed with result 'dependency'.
Oct 05 10:22:47 master1 systemd[1]: flanneld.service: Unit entered failed state.
Oct 05 10:22:47 master1 systemd[1]: flanneld.service: Failed with result 'exit-code'.
Oct 05 10:22:47 master1 coreos-cloudinit[1372]: 2015/10/05 10:22:47 Result of "start" on "kubelet.service": dependency
Oct 05 10:22:47 master1 systemd[1]: Started Load cloud-config from /var/lib/coreos-vagrant/vagrantfile-user-data.
Oct 05 10:22:49 master1 locksmithd[671]: etcd2.service is active
Oct 05 10:22:50 master1 locksmithd[671]: Unlocking old locks failed: error setting up lock: Error initializing etcd client: 501: All the given peers are not reachable (Tried to connect to each peer twice and failed) [0]. Retrying in 40s.
Oct 05 10:22:52 master1 systemd[1]: flanneld.service: Service hold-off time over, scheduling restart.
Oct 05 10:22:52 master1 systemd[1]: etcd2.service: Service hold-off time over, scheduling restart.
Oct 05 10:22:52 master1 systemd[1]: Stopped Network fabric for containers.
Oct 05 10:22:52 master1 systemd[1]: Stopped ETCD2 service.
Oct 05 10:22:52 master1 systemd[1]: Starting Systemd test...
Oct 05 10:22:53 master1 systemd[1]: Started Systemd test.
Oct 05 10:22:53 master1 systemd[1]: Started ETCD2 service.
Oct 05 10:22:53 master1 systemd[1]: Starting Network fabric for containers...
Oct 05 10:22:53 master1 etcd2[1711]: 2015/10/5 10:22:53 etcdmain: etcd Version: 2.1.2
Oct 05 10:22:53 master1 etcd2[1711]: 2015/10/5 10:22:53 etcdmain: Git SHA: ff8d1ec
Oct 05 10:22:53 master1 etcd2[1711]: 2015/10/5 10:22:53 etcdmain: Go Version: go1.4.2
Oct 05 10:22:53 master1 etcd2[1711]: 2015/10/5 10:22:53 etcdmain: Go OS/Arch: linux/amd64
Oct 05 10:22:53 master1 etcd2[1711]: 2015/10/5 10:22:53 etcdmain: setting maximum number of CPUs to 1, total number of available CPUs is 1
Oct 05 10:22:53 master1 etcd2[1711]: 2015/10/5 10:22:53 etcdmain: listening for peers on http://:2380
Oct 05 10:22:53 master1 etcd2[1711]: 2015/10/5 10:22:53 etcdmain: listening for client requests on http://127.0.0.1:2379
Oct 05 10:22:53 master1 etcd2[1711]: 2015/10/5 10:22:53 etcdmain: stopping listening for client requests on http://127.0.0.1:2379
Oct 05 10:22:53 master1 etcd2[1711]: 2015/10/5 10:22:53 etcdmain: stopping listening for peers on http://:2380
Oct 05 10:22:53 master1 etcd2[1711]: 2015/10/5 10:22:53 etcdmain: listen tcp :2379: bind: address already in use
Oct 05 10:22:53 master1 systemd[1]: etcd2.service: Main process exited, code=exited, status=1/FAILURE
Oct 05 10:22:53 master1 systemd[1]: etcd2.service: Unit entered failed state.
Oct 05 10:22:53 master1 systemd[1]: etcd2.service: Failed with result 'exit-code'.
Oct 05 10:22:53 master1 etcdctl[1733]: Error:  cannot sync with the cluster using endpoints http://127.0.0.1:4001, http://127.0.0.1:2379
Oct 05 10:22:53 master1 systemd[1]: flanneld.service: Control process exited, code=exited status=2
Oct 05 10:22:53 master1 systemd[1]: Failed to start Network fabric for containers.
Oct 05 10:22:53 master1 systemd[1]: flanneld.service: Unit entered failed state.
Oct 05 10:22:53 master1 systemd[1]: flanneld.service: Failed with result 'exit-code'.
Oct 05 10:22:58 master1 systemd[1]: flanneld.service: Service hold-off time over, scheduling restart.
Oct 05 10:22:58 master1 systemd[1]: etcd2.service: Service hold-off time over, scheduling restart.
Oct 05 10:22:58 master1 systemd[1]: Stopped Network fabric for containers.
Oct 05 10:22:58 master1 systemd[1]: Stopped ETCD2 service.
Oct 05 10:22:58 master1 systemd[1]: Starting Systemd test...
Oct 05 10:22:58 master1 systemd[1]: Started Systemd test.
Oct 05 10:22:58 master1 systemd[1]: Started ETCD2 service.
Oct 05 10:22:58 master1 systemd[1]: Starting Network fabric for containers...
Oct 05 10:22:58 master1 etcd2[1755]: 2015/10/5 10:22:58 etcdmain: etcd Version: 2.1.2
Oct 05 10:22:58 master1 etcd2[1755]: 2015/10/5 10:22:58 etcdmain: Git SHA: ff8d1ec
Oct 05 10:22:58 master1 etcd2[1755]: 2015/10/5 10:22:58 etcdmain: Go Version: go1.4.2
Oct 05 10:22:58 master1 etcd2[1755]: 2015/10/5 10:22:58 etcdmain: Go OS/Arch: linux/amd64
Oct 05 10:22:58 master1 etcd2[1755]: 2015/10/5 10:22:58 etcdmain: setting maximum number of CPUs to 1, total number of available CPUs is 1
Oct 05 10:22:58 master1 etcd2[1755]: 2015/10/5 10:22:58 etcdmain: listening for peers on http://:2380
Oct 05 10:22:58 master1 etcd2[1755]: 2015/10/5 10:22:58 etcdmain: listening for client requests on http://127.0.0.1:2379
Oct 05 10:22:58 master1 etcd2[1755]: 2015/10/5 10:22:58 etcdmain: stopping listening for client requests on http://127.0.0.1:2379
Oct 05 10:22:58 master1 etcd2[1755]: 2015/10/5 10:22:58 etcdmain: stopping listening for peers on http://:2380
Oct 05 10:22:58 master1 etcd2[1755]: 2015/10/5 10:22:58 etcdmain: listen tcp :2379: bind: address already in use
Oct 05 10:22:58 master1 systemd[1]: etcd2.service: Main process exited, code=exited, status=1/FAILURE
Oct 05 10:22:58 master1 systemd[1]: etcd2.service: Unit entered failed state.
Oct 05 10:22:58 master1 systemd[1]: etcd2.service: Failed with result 'exit-code'.
Oct 05 10:22:58 master1 etcdctl[1778]: Error:  cannot sync with the cluster using endpoints http://127.0.0.1:4001, http://127.0.0.1:2379
Oct 05 10:22:58 master1 systemd[1]: flanneld.service: Control process exited, code=exited status=2
Oct 05 10:22:58 master1 systemd[1]: Failed to start Network fabric for containers.
Oct 05 10:22:58 master1 systemd[1]: flanneld.service: Unit entered failed state.
Oct 05 10:22:58 master1 systemd[1]: flanneld.service: Failed with result 'exit-code'.
Oct 05 10:23:00 master1 update_engine[622]: [1005/102300:INFO:update_attempter.cc(485)] Updating boot flags...
