
sandbox

Introduction

The aim of this sandbox is to demonstrate two principles:

  • A fully automated deployment, starting from a bare Kubernetes cluster and ending with a fully featured one, including middleware and applications.
  • Full GitOps, where all actions on the cluster are performed by editing files in Git, without direct access to the cluster.

It mostly relies on Flux.

Here is a short summary of the steps to perform:

  • Create a cluster with Kind
  • Copy this repository
  • Perform some configuration.
  • Bind the cluster to the repo using the flux CLI command.
  • Wait for all components to be deployed

Prerequisite

  • Docker Desktop
  • Kind
  • For macOS: Docker Mac Net Connect
  • kubectl
  • flux, the FluxCD CLI client
  • Internet access
  • A GitHub account and a GitHub personal access token with repo permissions. See the GitHub documentation on creating a personal access token.

Optionally:

  • k9s: A terminal-based UI to interact with your Kubernetes clusters
  • dnsmasq: To ease resolution of local DNS names. Editing /etc/hosts is a viable alternative.

Deployment

Cluster creation

kind create cluster

This will create a fully operational Kubernetes cluster, with a single node acting as both control plane and worker.

It is of course possible to create more sophisticated clusters, with several workers and/or control-plane nodes. But take care if you intend to build a cluster with more than one control-plane node. See kind-fip.
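For instance, here is a minimal sketch of a three-node cluster (one control plane, two workers), using kind's documented v1alpha4 configuration format:

# Create a multi-node cluster by piping a config on stdin
cat <<EOF | kind create cluster --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
  - role: worker
EOF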

You can check that your cluster is up and running:

kubectl get --all-namespaces pods
# or
k9s

Set up our cluster repository

The next step is to create your own copy of this repository. To do so, click on the Use this template button in the upper right corner of this repo's main page.

This procedure assumes you have copied the repo into your personal GitHub account, under the name sandbox.

It is better to copy the repo this way than to fork it, as a fork is more restrictive.

This repo will drive the state of your cluster. In other words, the included tooling (based on FluxCD) will permanently reconcile the configuration of the cluster with the content of the repo.

Configuration

One key point of this sandbox is that we will access the provided services through VIPs (Virtual IPs) on the Docker network, using a load balancer. This allows us to refer to services by DNS name, instead of using cumbersome local port numbers. It is also more realistic, closer to a 'real' cluster in the cloud or on bare metal.

This is why, on macOS, we require Docker Mac Net Connect to be installed. It provides access from the host to the Docker network.

The load balancer used here is MetalLB. We need to provide a range of IPs from which MetalLB will allocate VIPs. This range must be in the IPv4 subnet used by kind, but must not conflict with existing containers.

The first step is to figure out what this IPv4 subnet is. For this, you can issue the following command:

docker network inspect kind -f '{{ .IPAM.Config }}' | grep -Eo '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}/[0-9]{1,2}'

The result is typically 172.18.0.0/16 or 172.19.0.0/16. For this README, let's assume it is 172.18.0.0/16. If yours differs, adjust accordingly.

As Docker allocates container IPs from the beginning of the range, we can assume that using IPs above 172.18.200.0 is safe.

So, for our cluster, we will allocate a small range from 172.18.200.1 to 172.18.200.4, and the first address will be bound to the ingress entry.

To map DNS names onto this range, add the following to your local /etc/hosts file:

172.18.200.1 first.pool.kind.local ingress.kind.local skas.ingress.kind.local podinfo.ingress.kind.local
172.18.200.4 last.pool.kind.local 

podinfo and skas are applications which will be accessible through the ingress controller. As the /etc/hosts file does not accept wildcards (such as *.ingress.kind.local), each ingress entry must be explicitly bound to the VIP.
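If you prefer the command line, the same entries can be appended in one go (a sketch; run it only once, as it appends on every invocation):

# Append the sandbox entries to /etc/hosts
sudo tee -a /etc/hosts <<EOF
172.18.200.1 first.pool.kind.local ingress.kind.local skas.ingress.kind.local podinfo.ingress.kind.local
172.18.200.4 last.pool.kind.local
EOF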

Alternatively, if dnsmasq is configured on your system, you can configure the following:

address=/first.pool.kind.local/172.18.200.1 
address=/.ingress.kind.local/172.18.200.1 
address=/last.pool.kind.local/172.18.200.4 

address=/.ingress.kind.local/172.18.200.1 is the 'dnsmasq way' to configure a wildcard name. So podinfo.ingress.kind.local and skas.ingress.kind.local will both resolve to 172.18.200.1.
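If you went the dnsmasq route, you can check that the wildcard resolves as expected (note that dig queries the DNS server directly, bypassing /etc/hosts):

dig +short podinfo.ingress.kind.local
# Expected output: 172.18.200.1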

These values must now be configured in the Git repository. Edit the file /clusters/kind/kind/context.yaml:

# Context specific to a kind cluster named 'kind'

context:

  cluster:
    name: kind

  apiServer:
    portOnLocalhost: 53220    # <== To configure

  metallb:
    ipRanges:
      - first: 172.18.200.1  # <== To configure
        last: 172.18.200.4   # <== To configure

  ingress:
    urlRoot: ingress.kind.local
    vip: 172.18.200.1    # <== To configure

to reflect your subnet, if different.

Also, you must configure apiServer.portOnLocalhost with the value of the port on localhost exposing the Kubernetes API server. It is the external port bound to the internal port 6443. To find it, just enter docker ps:

$ docker ps
CONTAINER ID   IMAGE                  COMMAND                  CREATED       STATUS       PORTS                       NAMES
7ef081115829   kindest/node:v1.29.2   "/usr/local/bin/entr…"   7 hours ago   Up 7 hours   127.0.0.1:53220->6443/tcp   kind-control-plane
Here, the port on localhost is 53220.
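You can also extract it programmatically (a sketch, assuming the default node container name kind-control-plane):

# Print the host port bound to the API server port 6443
docker inspect kind-control-plane \
    --format '{{ (index (index .NetworkSettings.Ports "6443/tcp") 0).HostPort }}'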

If you edit these files locally, after cloning your repo, don't forget to commit and push.
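For instance (the commit message is just an example):

git add clusters/kind/kind/context.yaml
git commit -m "Set MetalLB IP range and API server port"
git push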

Bootstrap FluxCD

Now, we can bootstrap our deployment process, using our GitHub token and the flux CLI command.

First, you need to setup some environment variables:

export GITHUB_USER=<Your username>
export GITHUB_TOKEN=<your token>
export GITHUB_REPO=sandbox

Some points to note here:

  • It is assumed the repo was copied into your personal GitHub account, under the name sandbox. If this is not the case, the bootstrap command below should be slightly modified: remove the --personal option and set --owner to your organization.
  • The repository will be updated by the flux command, so the provided token must allow such access.

Then, enter the following:

flux bootstrap github \
--owner=${GITHUB_USER} \
--repository=${GITHUB_REPO} \
--branch=main \
--interval=15s \
--personal \
--path=clusters/kind/kind/flux
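Once the bootstrap command returns, you can verify that the Flux prerequisites are met and its controllers are healthy:

flux check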

You can have a look at the deployment by using k9s. It will take several minutes. Be patient.
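The flux CLI can also track the reconciliation (assuming a default bootstrap):

flux get kustomizations --watch
# or
flux get helmreleases -A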

There are some stages in the deployment that involve a restart of the API server. This means the cluster will seem frozen for several minutes. Again, be patient.

All deployments are instances of the FluxCD HelmRelease resource. The deployment process ends when all HelmReleases are in the Ready state:

$ kubectl get -n flux-system helmreleases
NAME                    AGE     READY   STATUS
cert-manager-issuers    6m30s   True    Helm install succeeded for release cert-manager/cert-manager-issuers.v1 with chart <chart>@<version>+26b4afce3722
cert-manager-main       6m31s   True    Helm install succeeded for release cert-manager/cert-manager-main.v1 with chart <chart>@<version>
cert-manager-trust      6m31s   True    Helm install succeeded for release cert-manager/cert-manager-trust.v1 with chart <chart>@<version>
ingress-nginx-main      6m31s   True    Helm install succeeded for release ingress-nginx/ingress-nginx-main.v1 with chart <chart>@<version>
kad-controller          6m37s   True    Helm install succeeded for release flux-system/kad-controller.v1 with chart <chart>@<version>
metallb-main            6m30s   True    Helm install succeeded for release metallb/metallb-main.v1 with chart <chart>@<version>
metallb-pool            6m30s   True    Helm install succeeded for release metallb/metallb-pool.v1 with chart <chart>@<version>+26b4afce3722
podinfo-main            6m31s   True    Helm install succeeded for release podinfo/podinfo-main.v1 with chart <chart>@<version>
reloader-main           6m31s   True    Helm install succeeded for release kube-tools/reloader-main.v1 with chart <chart>@<version>
replicator-main         6m31s   True    Helm install succeeded for release kube-tools/replicator-main.v1 with chart <chart>@<version>
secret-generator-main   6m31s   True    Helm install succeeded for release kube-tools/secret-generator-main.v1 with chart <chart>@<version>
skas-main               6m31s   True    Helm upgrade succeeded for release skas-system/skas-main.v2 with chart <chart>@<version>

You should now be able to connect to the sample application podinfo by pointing your favorite browser at https://podinfo.ingress.kind.local/. Note that, as we currently use only a self-signed certificate, you will have to click through some security warnings. See below to fix this.
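You can also test from the command line (-k skips certificate verification, which is needed while the certificate is self-signed):

curl -k https://podinfo.ingress.kind.local/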

What is installed

Here is a list of installed components:

SKAS (Kubernetes authentication)

SKAS allows you to act on your Kubernetes cluster as an authenticated user, and to restrict users' rights using standard RBAC permissions.

To use SKAS, you will need to install a kubectl extension locally. Instructions here.

Then, you should follow the instructions in the user guide. But for the impatient, here is a quick walkthrough.

First, you must configure your local ~/.kube/config file:

$ kubectl sk init https://skas.ingress.kind.local --authInsecureSkipVerify=true

If you encounter an error such as The connection to the server 127.0.0.1:53220 was refused...., the problem could be an incorrect value for apiServer.portOnLocalhost in the /clusters/kind/kind/context.yaml file described above. Set the correct value, commit the change, and wait for the skas-main pod to restart automatically.
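You can follow this restart with the command below (the skas-system namespace comes from the HelmRelease list above):

kubectl get pods -n skas-system -w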

Now, kubectl access to the cluster requires authentication. At this stage, the only available account is admin, with password admin:

$ kubectl get nodes
Login:admin
Password:
Error from server (Forbidden): nodes is forbidden: User "admin" cannot list resource "nodes" in API group "" at the cluster scope

This error is fine: we are correctly identified as user admin, but this user does not have any rights on the cluster; it is only able to manage SKAS users. So, we can bind this user to an existing group with full rights on the cluster:

kubectl sk user bind admin "system:masters"

After logout/login, we can now act as system administrator:

$ kubectl sk logout
Bye!

$ kubectl get nodes
Login:admin
Password:
NAME                 STATUS   ROLES           AGE   VERSION
kind-control-plane   Ready    control-plane   96m   v1.29.2

At this stage, it is better to change the admin password:

$ kubectl sk password
Will change password for user 'admin'
Old password:
New password:
Confirm new password:
Password has been changed successfully.

As admin is a member of the group skas-admin, it is able to create users and grant them rights:

$ kubectl sk user create larry --commonName "Larry SIMMONS " --email "[email protected]" --password larry123
User 'larry' created in namespace 'skas-system'.

$ kubectl sk user bind larry "system:masters"
GroupBinding 'larry.system.masters' created in namespace 'skas-system'.

$ kubectl sk user bind larry "skas-admin"
GroupBinding 'larry.skas-admin' created in namespace 'skas-system'.

$ kubectl sk login larry
Password:
logged successfully..

$ kubectl get nodes
NAME                 STATUS   ROLES           AGE    VERSION
kind-control-plane   Ready    control-plane   104m   v1.29.2

Please refer to the SKAS documentation for more features.

Adding a Certificate Authority

TODO

More about configuration

TODO

Roadmap

  • Ubuntu VM as host
  • Windows as host (if possible?)
  • Vagrant/Kubespray-based cluster.

