Reproducible infrastructure to showcase GitOps workflows. Derived from our consulting experience.
- Create Cluster
- Apply apps to cluster
- Applications
  - Test applications deployed via GitOps
    - PetClinic via Flux V1
    - 3rd Party app (NGINX) via Flux V1
    - PetClinic via Flux V2
    - PetClinic via ArgoCD
- Remove apps from cluster
- Options
Can be run on a local k3s cluster or on Google Kubernetes Engine (GKE).
To be able to set up the infrastructure you need a Linux machine (tested with Ubuntu 20.04) with Docker installed.
All other tools like kubectl, k3s and helm are set up by the ./scripts/init-cluster.sh script.
You can use your own k3s cluster, or use the script provided. Run this script from the repo root with:
./scripts/init-cluster.sh
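Afterwards, a quick sanity check (plain kubectl, nothing playground-specific) shows whether the node is ready and which container runtime it uses:

```shell
# Check that the k3s node is Ready and that the container runtime is docker (see the note on --docker below)
kubectl get nodes -o wide
```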
If you use your own cluster, note that Jenkins relies on k3s' --docker mode being enabled.
In a real-life scenario, it would make sense to run Jenkins agents outside the cluster for security and load reasons, but in order to simplify the setup for this playground we use a slightly dirty workaround:
Jenkins builds run in agent pods that spawn plain Docker containers on the Docker host that also runs the cluster's containers. That's why k3s' --docker mode is needed.
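If you set up k3s yourself instead of using the script, a minimal sketch for installing it with the Docker runtime could look like this (an assumption: it uses the official k3s install script and a k3s version that still offers the --docker flag):

```shell
# Install k3s and pass --docker so workloads run on the host's Docker daemon
curl -sfL https://get.k3s.io | sh -s - --docker
```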
Don't use a setup such as this in production! The diagrams below show an overview of the playground's architecture and a possible production scenario using our Cloudogu Ecosystem (more secure and better build performance thanks to ephemeral build agents spawned in the cloud).
| Playground on local machine | A possible production environment with Cloudogu Ecosystem |
|---|---|
You will need the OWNER role for GKE, because apply.sh applies ClusterRoles, which is only allowed for owners.
The following steps deploy a k8s cluster with a node pool to GKE in the europe-west3 region.
The required terraform files are located in the ./terraform/ folder.
You have to set PROJECT_ID to the correct ID of your Google Cloud project.
Log in to GCP from your local machine:
gcloud auth login
Select the project where you want to deploy the cluster:
PROJECT_ID=<your project ID goes here>
gcloud config set project ${PROJECT_ID}
Create a service account:
gcloud iam service-accounts create terraform-cluster \
--display-name terraform-cluster --project ${PROJECT_ID}
Authorize the service account:
gcloud projects add-iam-policy-binding ${PROJECT_ID} \
  --member serviceAccount:terraform-cluster@${PROJECT_ID}.iam.gserviceaccount.com --role=roles/editor
Create an account.json file, which contains the keys for the service account. You will need this file to apply the infrastructure:
gcloud iam service-accounts keys create \
--iam-account terraform-cluster@${PROJECT_ID}.iam.gserviceaccount.com \
terraform/account.json
You can either use a remote state (default, described below) or use a local state by changing the following in main.tf:
- backend "gcs" {}
+ backend "local" {}
If several people want to work on the project, use a remote state. The following describes how it works:
Create a bucket for the terraform state file:
BUCKET_NAME=terraform-cluster-state
gsutil mb -p ${PROJECT_ID} -l EUROPE-WEST3 gs://${BUCKET_NAME}
Grant the service account permissions for the bucket:
gsutil iam ch \
serviceAccount:terraform-cluster@${PROJECT_ID}.iam.gserviceaccount.com:roles/storage.admin \
gs://${BUCKET_NAME}
Before continuing with the terraform steps, you have to open the values.tfvars file and edit the gce_project value to match your project ID.
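If you prefer the command line over an editor, something like the following could set the value (a sketch; it assumes the file lives in the ./terraform/ folder and that the gce_project line has the usual key = "value" form):

```shell
# Replace the gce_project value in values.tfvars with your project ID
sed -i "s/^gce_project.*/gce_project = \"${PROJECT_ID}\"/" terraform/values.tfvars
```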
For a local state, a plain terraform init suffices. For the remote state, initialize with the backend configuration:
cd terraform
terraform init \
-backend-config "credentials=account.json" \
-backend-config "bucket=${BUCKET_NAME}\"
Apply the infrastructure:
terraform apply -var-file values.tfvars
terraform apply already adds an entry to your local kubeconfig and activates the context. That is, calling
kubectl get pod
should already connect to the cluster.
If not, you can add an entry to your local kubeconfig like so:
gcloud container clusters get-credentials ${cluster_name} --zone ${gce_location} --project ${gce_project}
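To double-check which context is active and that the GKE cluster responds:

```shell
# Show the currently active kubeconfig context and list the cluster's nodes
kubectl config current-context
kubectl get nodes
```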
Once you're done, you can destroy the cluster using:
terraform destroy -var-file values.tfvars
The gitops-playground can be deployed to the currently active context in your kubeconfig via scripts/apply.sh.
You can also just install one GitOps module like Flux V1 or ArgoCD via parameters.
Use ./scripts/apply.sh --help
for more information.
Important options:
- --remote - deploy to a remote cluster (not the local k3s cluster), e.g. in GKE
- --password - change admin passwords for SCM-Manager, Jenkins and ArgoCD. Should be set with --remote for security reasons.
- --argocd - deploy only the ArgoCD GitOps operator
- --fluxv1 - deploy only the Flux v1 GitOps operator
- --fluxv2 - deploy only the Flux v2 GitOps operator
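For example, a remote deployment that installs only ArgoCD and sets a non-default admin password might look roughly like this (whether the password is passed as --password=<value> or as a separate argument is an assumption here; check ./scripts/apply.sh --help):

```shell
# Deploy only the ArgoCD part of the playground to the cluster in the current kubeconfig context
./scripts/apply.sh --remote --argocd --password=mySecretPassword
```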
The script also prints a short intro on how to get started with a GitOps deployment.
Find Jenkins on http://localhost:9090
Admin user: Same as SCM-Manager - admin/admin
Note: You can enable browser notifications about build results via a button in the lower right corner of Jenkins Web UI.
Find SCM-Manager on http://localhost:9091
Login with admin/admin
Find the ArgoCD UI on http://localhost:9092 (redirects to https://localhost:9093)
Login with admin/admin
Each GitOps operator comes with a couple of demo applications that allow for experimenting with different GitOps features.
All applications implement a simple staging mechanism:
After a successful Jenkins build, the staging application will be deployed into the cluster. The production applications can be deployed by accepting Pull Requests.
Please note that it might take about one minute after the pull request has been accepted for the GitOps operator to start deploying.
The URLs of the applications depend on the environment the playground is deployed to. The following lists all applications and how to find their respective URLs for a GitOps playground deployed to a local or remote cluster.
For remote clusters you need the external IP; there is no need to specify the port (everything runs on port 80). Basically, you can get the IP address as follows:
kubectl -n "${namespace}" get svc "${serviceName}" --template="{{range .status.loadBalancer.ingress}}{{.ip}}{{end}}"
There is also a convenience script scripts/get-remote-url. The script waits if the external IP is not present yet.
You can open the application in the browser right away, like so for example:
xdg-open $(scripts/get-remote-url default jenkins)
PetClinic via Flux V1:
- Jenkinsfile for plain k8s deployment
  - Staging
    - local: localhost:30001
    - remote: scripts/get-remote-url spring-petclinic-plain fluxv1-staging
  - Production
    - local: localhost:30002
    - remote: scripts/get-remote-url spring-petclinic-plain fluxv1-production
  - QA (example for a 3rd stage)
    - local: localhost:30003
    - remote: scripts/get-remote-url spring-petclinic-plain fluxv1-qa
- Jenkinsfile for helm deployment
  - Staging
    - local: localhost:30004
    - remote: scripts/get-remote-url spring-petclinic-helm-springboot fluxv1-staging
  - Production
    - local: localhost:30005
    - remote: scripts/get-remote-url spring-petclinic-helm-springboot fluxv1-production
3rd Party app (NGINX) via Flux V1:
TODO not reachable via 30006!
- Jenkinsfile
  - Staging
    - local: localhost:30006
    - remote: scripts/get-remote-url nginx fluxv1-staging
  - Production
    - local: localhost:30007
    - remote: scripts/get-remote-url nginx fluxv1-production
PetClinic via Flux V2:
- Jenkinsfile
  - Staging
    - local: localhost:30010
    - remote: scripts/get-remote-url spring-petclinic-plain fluxv2-staging
  - Production
    - local: localhost:30011
    - remote: scripts/get-remote-url spring-petclinic-plain fluxv2-production
PetClinic via ArgoCD:
- Jenkinsfile
  - Staging
    - local: localhost:30020
    - remote: scripts/get-remote-url spring-petclinic-plain argocd-staging
  - Production
    - local: localhost:30021
    - remote: scripts/get-remote-url spring-petclinic-plain argocd-production
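If you lose track of the local ports above, the NodePort services can also be listed directly from the cluster:

```shell
# List all NodePort services across namespaces together with their ports
kubectl get svc --all-namespaces | grep NodePort
```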
You can add additional stages in this Jenkinsfile for the plain-k8s petclinic version with fluxv1.
Look for the gitopsConfig map and edit the following entry:
stages: [
staging: [ deployDirectly: true ],
production: [ deployDirectly: false ],
qa: [ ]
]
Just add another stage and define its deploy behaviour by setting deployDirectly to true or false.
The default is false, so you can leave it empty like qa: [ ].
If set to true, the changes will be deployed automatically when pushed to the GitOps repository.
If set to false, a pull request is created.
After adding a new stage you also need to create k8s files in the corresponding folder.
So for the stage qa there have to be k8s files in the following folder: applications/petclinic/fluxv1/plain-k8s/k8s/qa
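A minimal way to scaffold the new stage is to copy the existing manifests and adjust them (a sketch; it assumes the staging manifests live in a staging subfolder next to the new qa folder):

```shell
# Create the qa stage from a copy of the staging manifests, then adapt names/namespaces in the copied files
cd applications/petclinic/fluxv1/plain-k8s/k8s
cp -r staging qa
```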