An Express app deployed as a container in Kubernetes.
ESLint, Prettier & Airbnb Setup
$ npm i -D eslint prettier eslint-plugin-prettier eslint-config-prettier eslint-plugin-node eslint-config-node
$ npx install-peerdeps --dev eslint-config-airbnb
- Reference:
Express, Pug and Tachyons
$ npm i express pug
- Tachyons
MongoDB
The form for creating notes is defined in the index.pug template. It handles both the creation of notes and the uploading of pictures. You should use Multer, a middleware for multipart form data, to handle the uploaded data.
$ npm i multer
- Multer npm
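A minimal sketch of how Multer could be wired into the app (the route path and the "image" field name are assumptions for illustration, not taken from index.pug):

```javascript
const express = require('express');
const multer = require('multer');

const app = express();

// Multer writes each uploaded file into public/uploads and fills req.file
const upload = multer({ dest: 'public/uploads' });

// "image" is the assumed name of the file input in the form
app.post('/uploads', upload.single('image'), (req, res) => {
  // req.file.filename is the generated name of the file on disk
  res.json({ link: `/uploads/${req.file.filename}` });
});

app.listen(3000);
```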
The Markdown notes should be rendered to HTML so that they can be read properly formatted. Marked is an excellent engine for rendering Markdown to HTML.
$ npm i marked
- Marked npm
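Rendering is essentially a one-liner; a sketch (note that in newer versions of Marked the entry point is marked.parse() rather than calling marked() directly):

```javascript
const marked = require('marked');

// Render a note's Markdown body to HTML before handing it to the Pug template
const description = 'Hello **world**';
const html = marked(description); // or marked.parse(description) in newer versions
```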
When a user uploads a picture, the file should be saved on disk in the app directory (e.g. public/uploads), and a link should be inserted in the text box.
You could deploy your app in different ways:
- [The hard way] Provision your own VPS, install nvm, create the appropriate users, configure Node.js, as well as PM2 to restart the app when it crashes and Nginx to handle TLS and path-based routing;
- Use a Platform as a Service (PaaS) like Heroku and forget about the underlying infrastructure and dependencies;
- Package the application as a Linux container and deploy it to a specialised container platform. Typically, a container contains a single process and its dependencies.
Containers are different from virtual machines:
- The process in a container still executes on the kernel of the host machine.
- With virtual machines, you run an entire guest operating system on top of your host operating system, and the processes that you want to execute on top of that.
- Containers are much more lightweight than virtual machines.
The magic of containers comes from two features in the Linux kernel:
- Control groups (cgroups): limit the resources a process can use, such as memory and CPU.
- Namespaces: limit what a process can see.
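Both points can be observed directly with Docker (assuming Docker is installed; the alpine image is just a convenient example):

```shell
# The container's process still runs on the host kernel:
# both commands print the same kernel release
uname -r
docker run --rm alpine uname -r

# Namespaces limit what the process can see: inside the container,
# ps shows only the container's own processes, not the host's
docker run --rm alpine ps
```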
Docker containers are built from Dockerfiles that define what goes in a container (reference). Example:
FROM node:12.0-slim
COPY . .
RUN npm install
CMD [ "node", "index.js" ]
- FROM defines the base layer for the container, in this case a Debian-based image with Node.js installed
- COPY copies the files of your app into the container
- RUN executes npm install inside the container
- CMD defines the command that should be executed when the container starts
You can build a container/Docker image from your app with the following command:
$ docker build -t knote .
where:
- -t knote defines the name/tag of your image, e.g. knote
- . is the location of the Dockerfile and application code, e.g. the current directory
A Docker image is an archive containing all the files that go in a container. You can create many Docker containers from the same Docker image. You can list all the images on your system with the following command:
$ docker images
Docker Hub is a container registry: a place to distribute and share container images.
Previously, you installed MongoDB on your machine and ran it with the mongod command. You could do the same, but as a container. MongoDB is provided as a Docker image named mongo on Docker Hub. However, the knote and mongo containers should communicate with each other, so they must be on the same Docker network (reference).
- Create a new Docker network:
$ docker network create knote
- Run MongoDB:
$ docker run --name=mongo --rm --network=knote mongo
where:
- --name defines the name for the container. If you don't specify it, a name will be auto-generated;
- --rm automatically cleans up the container and removes the file system when the container exits;
- --network represents the Docker network in which the container should run. If it's omitted, the container runs in the default network;
- mongo is the name of the Docker image that you want to run.
- Run the Knote app:
$ docker run --name=knote --rm --network=knote -p 3000:3000 -e MONGO_URL=mongodb://mongo:27017/dev knote
where:
- -p 3000:3000 publishes port 3000 of the container to port 3000 of your local machine. That means, if you now access port 3000 on your computer, the request is forwarded to port 3000 of the Knote container. You can use the forwarding to access the app from your local machine;
- -e sets an environment variable inside the container. IMPORTANT: the hostname is mongo, which is precisely the name that you gave to the MongoDB container with the --name=mongo flag.
- Useful commands:
- Display all running containers:
$ docker ps
- Stop the containers:
$ docker stop mongo knote
- Remove the containers:
$ docker rm mongo knote
You can create your own images and upload them to Docker Hub. Once you have a Docker ID, you have to authorise Docker to connect to your Docker Hub account:
$ docker login
Images uploaded to Docker Hub must have a name of the form username/image:tag:
- username is your Docker ID;
- image is the name of the image;
- tag is an optional additional attribute, often used to indicate the version of the image.
Example:
- Rename:
$ docker tag knote <username>/knote-js:1.0.0
- Upload:
$ docker push <username>/knote-js:1.0.0
Your image is now publicly available as <username>/knote-js:1.0.0 on Docker Hub, and everybody can download and run it. To verify this, you can re-run your app, but this time using the new image name:
$ docker run --name=mongo --rm --network=knote mongo
$ docker run --name=knote --rm --network=knote -p 3000:3000 -e MONGO_URL=mongodb://mongo:27017/dev <username>/knote-js:1.0.0
Once you're done testing your app, you can stop and remove the containers with:
$ docker stop mongo knote
If you use Docker containers and wish to deploy your app into production, you have a few options:
- Run the container on the server manually with docker run;
- Use a tool such as docker-compose to run and manage several containers at the same time;
- Use a container orchestrator: a tool designed to manage and run containers at scale.

Container orchestrators are designed to run complex applications with large numbers of scalable components. They work by inspecting the underlying infrastructure and determining the best server to run each container. Kubernetes is an excellent choice to deploy your containerised application, mainly because it is:
- Open-source: you can download and use it without paying any fee;
- Battle-tested: there are plenty of examples of companies running it in production;
- Well looked after: big companies such as Red Hat and Google have heavily invested in the future of Kubernetes by creating managed services, contributing to upstream development and offering training and consulting.
Although there are several ways to create a Kubernetes cluster, here we will use Minikube, which runs a single-node Kubernetes cluster on your personal computer (Windows, macOS or Linux), so that you can try out Kubernetes or use it for daily development work.
- Install Minikube (Windows example):
  - First create a folder where you would like to place it, e.g. C:\Program Files\Minikube
  - Download it into that folder:
    curl -Lo minikube.exe https://github.com/kubernetes/minikube/releases/latest/download/minikube-windows-amd64.exe
  - Add the path (C:\Program Files\Minikube\) to the PATH environment variable.
- Start your cluster:
$ minikube start
(NOTE: it could take a few minutes)
- Minikube can download the appropriate version of kubectl with:
$ minikube kubectl -- get po -A
kubectl is a Kubernetes command-line tool that allows you to run commands against Kubernetes clusters. You can use kubectl to deploy applications, inspect and manage cluster resources, and view logs.
- To access your new cluster:
$ kubectl get po -A
or:
$ kubectl cluster-info
IMPORTANT: For additional insight into your cluster state, Minikube bundles the Kubernetes Dashboard, allowing you to get easily acclimated to your new environment:
$ minikube dashboard
Kubernetes has a declarative interface. In other words, you describe what you want the deployment of your application to look like, and Kubernetes figures out the necessary steps to reach that state. The language that you use to communicate with Kubernetes consists of so-called Kubernetes resources.
- There are many different Kubernetes resources; each is responsible for a specific aspect of your application (API reference).
- Kubernetes resources are defined in YAML files and submitted to the cluster through the Kubernetes HTTP API. In practice, you do all these interactions with kubectl - your primary client for the Kubernetes API.
It's a best practice to group all resource definitions for an application in the same folder, because this allows you to submit them to the cluster with a single command. So, you should create a folder named kube in the application directory. You can find the specification of the Deployment resource in the API reference, or you can use the command
$ kubectl explain deployment
which retrieves the same information as the web-based API reference. To drill down to a specific field, use:
$ kubectl explain deployment.spec.replicas
NOTES:
- You don't usually talk about containers in Kubernetes, but about Pods. A Pod is a wrapper around one or more containers.
- The container specification also defines an imagePullPolicy of Always: this instruction forces the Docker image to be downloaded, even if it was already downloaded.
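Put together, a minimal Deployment for the app could look like the following sketch (the image name and label are the ones used earlier in this guide; replicas: 1 is an assumption):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: knote
spec:
  replicas: 1
  selector:
    matchLabels:
      app: knote
  template:
    metadata:
      labels:
        app: knote
    spec:
      containers:
        - name: knote
          image: <username>/knote-js:1.0.0
          ports:
            - containerPort: 3000
          env:
            - name: MONGO_URL
              value: mongodb://mongo:27017/dev
          imagePullPolicy: Always
```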
A Deployment defines how to run an app in the cluster, but it doesn't make it available to other apps. To expose your app, you need a Service. In summary, a Service resource makes Pods accessible to other Pods or to users outside the cluster. It is best practice to save resource definitions that belong to the same application in the same YAML file. To do so, separate the Service and Deployment resources with three dashes (---).
NOTES:
- The label corresponds exactly to what you specified for the Pods in the Deployment resource: knote.
- In this case, the Service listens for requests on port 80 and forwards them to port 3000 of the target Pods. The type is LoadBalancer, which makes the exposed Pods accessible from outside the cluster.
- The default Service type is ClusterIP, which makes the exposed Pods accessible only from within the cluster.
- Beyond exposing your Pods, a Service also ensures continuous availability for your app. If one of the Pods crashes and is restarted, the Service makes sure not to route traffic to that container until it is ready again. Also, when the Pod is restarted and a new IP address is assigned, the Service automatically handles the update.
- Furthermore, if you decide to scale your Deployment to 2, 3, 4, or 100 replicas, the Service keeps track of all of these Pods.
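A sketch of the Service described in the notes above (the name and label match the Deployment used in this guide):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: knote
spec:
  type: LoadBalancer
  selector:
    app: knote
  ports:
    - port: 80
      targetPort: 3000
```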
If the MongoDB Pod is deleted or moved to another node, the storage must persist; a PersistentVolumeClaim requests such persistent storage from the cluster.
- The description of your database component should consist of three resource definitions:
  - PersistentVolumeClaim
  - Service
  - Deployment
NOTES:
- If a Service does not have a type field, Kubernetes assigns it the default type ClusterIP. This is fine, because the only entity that has to access the MongoDB Pod is your app.
- The volumes field defines a storage volume named storage, which references the PersistentVolumeClaim. The volumeMounts field mounts the referenced volume at the specified path in the container, in this case /data/db (where MongoDB saves its data).
- Pods within a cluster can talk to each other through the names of the Services exposing them.
- Kubernetes has an internal DNS system that keeps track of domain names and IP addresses. Similarly to how Docker provides DNS resolution for containers, Kubernetes provides DNS resolution for Services.
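A sketch of the PersistentVolumeClaim (the claim name and the 256Mi size are illustrative assumptions; pick a size that suits your data):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongo-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 256Mi
```

In the mongo Deployment, the volumes entry would then reference this claim via persistentVolumeClaim.claimName: mongo-pvc, and the container's volumeMounts entry would mount that volume at /data/db.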
IMPORTANT: Make sure that your Minikube cluster is running.
$ minikube status
Time to submit your resource definitions to Kubernetes:
$ kubectl apply -f kube
where the -f flag accepts either a single filename or a directory. In the latter case, all YAML files in the directory are submitted. You can watch your Pods coming alive with:
$ kubectl get pods --watch
You should see the two Pods transitioning from Pending to ContainerCreating to Running. As soon as both Pods are in the Running state, your application is ready. In Minikube, a Service can be accessed with the following command:
$ minikube service knote --url
IMPORTANT: When using the Docker driver on Windows, the terminal running this command needs to stay open, so you will also need another terminal window for mongo:
$ minikube service mongo --url
When you're done testing the app, you can remove it from the cluster with the following command:
$ kubectl delete -f kube