This project builds up the infrastructure for our SW Factory. We are using three Atomic Pi single-board computers.
We'll be using as little manual setup and configuration as possible. Here are the manual steps performed on each Atomic Pi.
- Install CentOS 7 Minimal from a USB stick.
- During the installation process, enable ethernet (DHCP), allocate all disk space to the installation, set the language to English, set the keyboard layout to US, set the root password, and create a non-root user as an administrator (in the wheel group to allow sudo).
- Set the hostname by editing /etc/hostname and rebooting. I'm using api1, api2, and api3.
- Run yum update and reboot.
- Set up SSH keys from my laptop for my user and root by running ssh-copy-id user@hostname for each machine. Accept the host key prompts so each machine's SSH host key is recorded in my laptop's known_hosts file.
- Install ansible on my laptop.
The rest of the setup is automated using Ansible Infrastructure-as-Code. Most of the content was taken from this DigitalOcean post.
- Create an Ansible hosts (inventory) file with one master and two nodes. In this setup, I'm using api1 as the master with api2 and api3 as nodes. Because I'm using DHCP within my home network, I used the hostnames in the hosts file rather than IP addresses.
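The hosts file might look something like this; the masters/workers group names follow the DigitalOcean post, but the exact layout here is an assumption:

```
[masters]
api1 ansible_user=root

[workers]
api2 ansible_user=root
api3 ansible_user=root
```

Because the machines get their addresses via DHCP, each entry is a hostname that my laptop can already resolve, not an IP address.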
- Install the Kubernetes dependencies using the kube-dependencies.yml playbook taken from the DigitalOcean post. I then rebooted all the Atomic Pis.
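A sketch of what kube-dependencies.yml does, assuming the package set from the DigitalOcean post; the Kubernetes yum repo configuration and version pins are omitted here, and the swap-disabling tasks reflect the change described in the note below:

```yaml
# kube-dependencies.yml (sketch) - install container runtime and kube tooling
- hosts: all
  become: yes
  tasks:
    - name: disable swap (kubelet will not start with swap active)
      command: swapoff -a

    - name: comment out swap entries in /etc/fstab so the change survives reboots
      replace:
        path: /etc/fstab
        regexp: '^([^#].*\sswap\s.*)$'
        replace: '# \1'

    - name: install Docker, kubelet, and kubeadm
      yum:
        name:
          - docker
          - kubelet
          - kubeadm
        state: present

    - name: start and enable Docker and kubelet
      service:
        name: "{{ item }}"
        state: started
        enabled: yes
      loop:
        - docker
        - kubelet

- hosts: masters
  become: yes
  tasks:
    - name: install kubectl on the master only
      yum:
        name: kubectl
        state: present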
- Set up a centos user on each Atomic Pi using the centos-user.yml playbook.
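A sketch of centos-user.yml, assuming it creates a sudo-capable centos user and authorizes the same laptop SSH key used earlier; the key path is an assumption:

```yaml
# centos-user.yml (sketch) - non-root user for running kubectl on the cluster
- hosts: all
  become: yes
  tasks:
    - name: create the centos user
      user:
        name: centos
        state: present

    - name: allow passwordless sudo for centos
      lineinfile:
        path: /etc/sudoers.d/centos
        line: "centos ALL=(ALL) NOPASSWD: ALL"
        create: yes
        validate: "visudo -cf %s"

    - name: authorize my laptop's SSH key for the centos user
      authorized_key:
        user: centos
        key: "{{ lookup('file', '~/.ssh/id_rsa.pub') }}"
```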
- Set up the master node using the master.yml playbook taken from the DigitalOcean post.
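A sketch of what master.yml does, following the DigitalOcean post's outline: initialize the cluster, give the centos user a kubeconfig, and install a pod network. Flannel and its manifest URL are assumptions from that post, and the URL may have moved since:

```yaml
# master.yml (sketch) - initialize the control plane on api1
- hosts: masters
  become: yes
  tasks:
    - name: initialize the cluster (skipped if admin.conf already exists)
      command: kubeadm init --pod-network-cidr=10.244.0.0/16
      args:
        creates: /etc/kubernetes/admin.conf

    - name: create the .kube directory for the centos user
      file:
        path: /home/centos/.kube
        state: directory
        owner: centos

    - name: copy admin.conf to the centos user's kubeconfig
      copy:
        src: /etc/kubernetes/admin.conf
        dest: /home/centos/.kube/config
        remote_src: yes
        owner: centos

    - name: install the Flannel pod network
      become_user: centos
      command: kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
```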
Note: The master.yml playbook failed for me. I learned that Kubernetes does not work with an active swap file, so I updated the kube-dependencies.yml playbook to disable swap before installing Kubernetes. I continued to have issues running the master.yml playbook, so I SSHed into api1 as root, ran kubeadm reset, then ran kubeadm init manually, which worked. After that, the master.yml playbook ran successfully. The key seemed to be running kubeadm reset.
- Set up the worker nodes using the worker.yml playbook taken from the DigitalOcean post.
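A sketch of what worker.yml does: generate a fresh join command on the master, then run it on each worker. The task layout and variable names here are assumptions:

```yaml
# worker.yml (sketch) - join api2 and api3 to the cluster
- hosts: masters
  become: yes
  gather_facts: false
  tasks:
    - name: get a join command from the master
      command: kubeadm token create --print-join-command
      register: join_command_raw

    - name: save the join command as a fact
      set_fact:
        join_command: "{{ join_command_raw.stdout_lines[0] }}"

- hosts: workers
  become: yes
  tasks:
    - name: join the cluster (skipped if the node has already joined)
      command: "{{ hostvars['api1'].join_command }}"
      args:
        creates: /etc/kubernetes/kubelet.conf
```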
Now that we have installed Docker and Kubernetes, set up our nodes, initialized our cluster, and joined the workers, we need to verify the cluster status.
- SSH into api1 as root
- Switch to the centos user (su - centos)
- Run kubectl get nodes
We should now see a table showing the master and both workers with a status of Ready.
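For reference, the output should look roughly like this; the ages and version shown here are placeholders, not values from my cluster:

```
NAME   STATUS   ROLES    AGE   VERSION
api1   Ready    master   ...   v1.x.y
api2   Ready    <none>   ...   v1.x.y
api3   Ready    <none>   ...   v1.x.y
```

If a node shows NotReady, the usual suspects are the pod network not being installed yet or the kubelet service not running on that node.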