- Read about the origins of K8s
- Understand declarative vs imperative programming
- Understand nodes, clusters and manifests in K8s
- Install `kubectl`, `minikube` and `k9s`
- Start minikube with `minikube start`
- Run `k9s` and use it to get familiar with navigating the minikube cluster
- Try to shell into a pod, switch namespaces, delete a pod, etc.
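- If you'd rather do the same operations with `kubectl` directly, they look roughly like this (pod and namespace names are placeholders):

```sh
# open a shell in a pod
kubectl exec -it <pod-name> -n <namespace> -- /bin/sh

# switch the namespace for the current context
kubectl config set-context --current --namespace=<namespace>

# delete a pod
kubectl delete pod <pod-name> -n <namespace>
```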
- A namespace in K8s is an organizational construct used to group and isolate resources
- The manifest of our namespace is in `namespace/namespace.yaml`
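- For reference, a namespace manifest is tiny - a minimal sketch of what `namespace/namespace.yaml` likely contains (the name `k8s-in-a-shell` is taken from the commands later in this walkthrough):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: k8s-in-a-shell
```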
- Apply the namespace by running

```sh
kubectl apply -f namespace/namespace.yaml
```
- Navigate to `deployment/main.go`
- This is the app we'll be deploying to K8s - we'll call it server
- Go through the pod manifest `deployment/pod.yaml`
- Understand that a pod is a group of one or more containers and is assumed to be stateless
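- A minimal sketch of what a pod manifest like `deployment/pod.yaml` might look like - the image and port here are assumptions, not the repo's actual values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: server
  namespace: k8s-in-a-shell
  labels:
    app: server
spec:
  containers:
    - name: server
      image: example/server:latest  # assumption - not the repo's actual image
      ports:
        - containerPort: 8080       # assumption
```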
- Apply the pod by running

```sh
kubectl apply -f deployment/pod.yaml
```
- Kill the pod
- Go through the replica set manifest `deployment/replicaset.yaml`
- Note and understand `replicas` and `template` in the manifest
- A replica set will always try to maintain the number of pods specified in `replicas`
- All pods in a replica set are considered interchangeable
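- A minimal sketch of a replica set manifest, reusing the hypothetical server image from above - note how `template` embeds a full pod spec and `selector` ties the set to its pods:

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: server-replicaset   # assumption
  namespace: k8s-in-a-shell
spec:
  replicas: 3               # the number of pods to maintain
  selector:
    matchLabels:
      app: server
  template:                 # pod template stamped out for each replica
    metadata:
      labels:
        app: server
    spec:
      containers:
        - name: server
          image: example/server:latest  # assumption
```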
- Apply the replica set by running

```sh
kubectl apply -f deployment/replicaset.yaml
```
- Delete a pod in the replica set and see K8s create another one
- Update the image in `deployment/replicaset.yaml` to a non-existent one and re-apply the replica set
- Notice that the existing pods keep running - a replica set does not roll out template changes to pods it already owns - only pods created after the change (try deleting one) will use the broken image and end up in a failing state
- Delete the replica set
- Go through the deployment manifest `deployment/deployment.yaml`
- Notice how similar it is to the replica set's manifest
- Apply the deployment by running

```sh
kubectl apply -f deployment/deployment.yaml
```
- Verify that applying a deployment created a replica set internally
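- One way to verify this (the deployment name `server-deployment` comes from the rollback command later in this walkthrough):

```sh
# the replica set's name is derived from the deployment's name
kubectl get replicasets -n k8s-in-a-shell
kubectl describe deployment server-deployment -n k8s-in-a-shell
```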
- Update the image in `deployment/deployment.yaml` to a non-existent one and re-apply the deployment
- Notice how the deployment does not terminate the older pods until the new one is healthy
- Roll back the faulty deployment with

```sh
kubectl rollout undo deployment/server-deployment -n k8s-in-a-shell
```
- Notice how the failing pod gets terminated
- Containers in a pod don't have access to persistent storage that exists beyond the pod's lifecycle by default
- Containers in a pod don't share storage by default
- Volumes solve both these problems
- We will deploy redis to understand volumes
- Go through the persistent volume claim manifest `volume/volume.yaml`
- Understand that the claim only requests storage with the specified configuration
- The storage itself is dynamically provisioned
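- A minimal sketch of what a persistent volume claim like `volume/volume.yaml` might request - the name and size are assumptions:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redis-pvc            # assumption - match whatever the repo's manifest uses
  namespace: k8s-in-a-shell
spec:
  accessModes:
    - ReadWriteOnce          # a single node may mount the volume read-write
  resources:
    requests:
      storage: 1Gi           # assumption
```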
- Go through redis's deployment manifest `volume/deployment.yaml`
- Notice and understand the relationship between `volumeMounts`, `volumes` and the volume manifest `volume/volume.yaml`
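- The relationship, sketched with the hypothetical claim name from above: `volumes` binds the claim to the pod, and `volumeMounts` mounts that volume into a container's filesystem:

```yaml
# excerpt of a pod spec - not the repo's actual deployment manifest
spec:
  containers:
    - name: redis
      image: redis:7            # assumption
      volumeMounts:
        - name: redis-storage   # must match a volume name below
          mountPath: /data      # where redis persists its data
  volumes:
    - name: redis-storage
      persistentVolumeClaim:
        claimName: redis-pvc    # binds to the PVC from volume/volume.yaml
```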
- Apply the volume and the deployment

```sh
kubectl apply -f volume/volume.yaml
kubectl apply -f volume/deployment.yaml
```
- Shell into the redis pod and set some data in redis

```sh
redis-cli
set mykey myvalue
```
- Delete this pod and wait for K8s to create a new pod
- Shell into the new pod and attempt to get the data
- Verify that the response is `"myvalue"`
- A service is a way to expose workloads within and outside a K8s cluster
- Go through the frontend application code `service/index.js`, specifically its `/ping` and `/` APIs
- Go through the frontend's service manifest `service/service.yaml`
- This is of type `LoadBalancer`, which is used to expose workloads outside the cluster
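- A minimal sketch of what a `LoadBalancer` service like `service/service.yaml` might look like - the name and selector are assumptions; port 3000 matches the URL used below:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend             # assumption
  namespace: k8s-in-a-shell
spec:
  type: LoadBalancer
  selector:
    app: frontend            # assumption - must match the frontend pods' labels
  ports:
    - port: 3000
      targetPort: 3000
```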
- Apply the service

```sh
kubectl apply -f service/service.yaml
```
- Expose the service's external IP directly to the host operating system (your machine)

```sh
# in a new terminal window
minikube tunnel
```
- Open `localhost:3000/ping` in your browser - you should see `pong`
- You can now access the frontend app outside the cluster!
- Notice how redis is addressed in `index.js`
- That address is an FQDN (fully qualified domain name) - read about it
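- In-cluster DNS names for services follow the pattern `<service>.<namespace>.svc.cluster.local`; assuming the redis service is named `redis`, the FQDN would be:

```sh
redis.k8s-in-a-shell.svc.cluster.local
```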
- Open `volume/service.yaml` and go through the service manifest
- This service is of type `ClusterIP`, which is used to expose workloads within the cluster
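- A sketch of what it might contain, assuming the service is named `redis` (which would produce the FQDN above) - redis's default port is 6379:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: redis                # assumption - determines the FQDN
  namespace: k8s-in-a-shell
spec:
  type: ClusterIP
  selector:
    app: redis               # assumption - must match the redis pods' labels
  ports:
    - port: 6379
      targetPort: 6379
```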
- Apply the service to expose redis

```sh
kubectl apply -f volume/service.yaml
```
- A cron job, as the name suggests, is used to run recurring workloads
- Go through the manifest at `cronjob/cronjob.yaml` - we call it worker
- Go through the cron job code `cronjob/main.py`
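- A minimal sketch of a cron job manifest - the schedule and image are assumptions, not the repo's actual values:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: worker
  namespace: k8s-in-a-shell
spec:
  schedule: "* * * * *"       # assumption - standard cron syntax, here every minute
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: worker
              image: example/worker:latest  # assumption
          restartPolicy: OnFailure
```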
## Stitching it all together
- Open `localhost:3000` in your browser
- Try to submit a wage - it should fail - can you guess why?
- If you guessed that our server is not exposed, you are right!
- Read, then apply the server's service

```sh
kubectl apply -f deployment/service.yaml
```
- Try submitting a wage again - it should succeed
- Open the worker logs and check that 30% of your submitted wage was paid as tax
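- One way to read the logs (the pod name is a placeholder - cron job pods are named after the job plus a suffix):

```sh
kubectl get pods -n k8s-in-a-shell          # find the latest worker pod
kubectl logs <worker-pod-name> -n k8s-in-a-shell
```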