
gitops-playground

Creates a complete GitOps-based operational stack on your Kubernetes clusters:

The gitops-playground is derived from our experience in consulting and operating the myCloudogu platform, and is used in our GitOps trainings for both Flux and Argo CD.

Playground features

TL;DR

You can try the GitOps Playground on a local Kubernetes cluster by running a single command:

bash <(curl -s \
  https://raw.githubusercontent.com/cloudogu/gitops-playground/main/scripts/init-cluster.sh) \
  && docker run --rm -t --pull=always -u $(id -u) \
    -v ~/.config/k3d/kubeconfig-gitops-playground.yaml:/home/.kube/config \
    --net=host \
    ghcr.io/cloudogu/gitops-playground --yes --argocd --ingress-nginx --base-url=http://localhost
# If you want to try all features, you might want to add these params: --mail --monitoring --vault=dev

Note that some Linux distros, like Debian, do not support subdomains of localhost. There you might have to use --base-url=http://local.gd (see local ingresses).

See the list of applications to get started.

We recommend running this command as an unprivileged user that is a member of the docker group.


What is the GitOps Playground?

The GitOps Playground provides a reproducible environment for setting up a GitOps stack. It provides an image for automatically setting up a Kubernetes cluster including a CI server (Jenkins), source code management (SCM-Manager), monitoring and alerting (Prometheus, Grafana, MailHog), secrets management (Hashicorp Vault and External Secrets Operator) and, of course, Argo CD as GitOps operator.

The playground also deploys a number of example applications.

The GitOps Playground lowers the barrier for operating your application on Kubernetes using GitOps. It creates a complete GitOps-based operational stack on your Kubernetes clusters. No need to read lots of books and operator docs, get familiar with CLIs, or ponder GitOps repository folder structures and promotion to different environments.
The GitOps Playground is a pre-configured environment to see GitOps in motion, including more advanced use cases like notifications, monitoring and secret management.

In addition to creating an operational stack in production, you can run the playground locally, for learning and developing new features.

We aim to be compatible with various environments, e.g. OpenShift and air-gapped networks. Support for these is work in progress.

Installation

There are several options for running the GitOps playground:

  • on a local k3d cluster. This works best on Linux, but is also possible on Windows and Mac.
  • on a remote k8s cluster
  • each with the option
    • to use an external Jenkins, SCM-Manager and registry (this can be run in production, e.g. with a Cloudogu Ecosystem) or
    • to run everything inside the cluster (for demo only)

The diagrams below show an overview of the playground's architecture and three scenarios for running the playground. For a simpler overview including all optional features such as monitoring and secrets management see intro at the very top.

Note that running Jenkins inside the cluster is meant for demo purposes only. The third graphic shows our production scenario with the Cloudogu EcoSystem (CES). Here, better security and build performance are achieved using ephemeral Jenkins build agents spawned in the cloud.

Overview

Diagrams: Playground on local machine | A possible production environment with the Cloudogu EcoSystem

Create Cluster

You can apply the GitOps playground to

  • a local k3d cluster (see docs or script for more details):
    bash <(curl -s \
      https://raw.githubusercontent.com/cloudogu/gitops-playground/main/scripts/init-cluster.sh)
  • a remote k8s cluster on Google Kubernetes Engine (e.g. via Terraform, see our docs),
  • or almost any k8s cluster.
    Note that if you want to deploy Jenkins inside the cluster, you either need Docker as the container runtime or need to set up Jenkins to run its builds on an agent that provides Docker.

Apply playground

You can apply the playground to your cluster using our container image ghcr.io/cloudogu/gitops-playground.
On success, the container prints a little intro on how to get started with the GitOps playground.

There are several options for running the container:

  • For a local k3d cluster, we recommend running the image as a local container via docker.
  • For remote clusters (e.g. on GKE) you can run the image inside a pod of the target cluster via kubectl.

All options offer the same parameters, see below.

Apply via Docker (local cluster)

When connecting to k3d it is easiest to apply the playground via a local container in the host network and pass k3d's kubeconfig.

CLUSTER_NAME=gitops-playground
docker pull ghcr.io/cloudogu/gitops-playground
docker run --rm -t -u $(id -u) \
  -v ~/.config/k3d/kubeconfig-${CLUSTER_NAME}.yaml:/home/.kube/config \
  --net=host \
  ghcr.io/cloudogu/gitops-playground # additional parameters go here

Note:

  • docker pull in advance makes sure you have the newest image, even if you ran this command before.
    Of course, you could also specify a specific version of the image.
  • Using the host network makes it possible to determine localhost and to use k3d's kubeconfig without altering it, as it accesses the API server via a port bound to localhost.
  • We run as the local user in order to avoid file permission issues with the kubeconfig-${CLUSTER_NAME}.yaml.
  • If you experience issues and want to access the full log files, use the following command while the container is running:
docker exec -it \
  $(docker ps -q  --filter ancestor=ghcr.io/cloudogu/gitops-playground) \
  bash -c -- 'tail -f  -n +1 /tmp/playground-log-*'

Apply via kubectl (remote cluster)

For remote clusters it is easiest to apply the playground via kubectl. You can find info on how to install kubectl here.

# Create a temporary ServiceAccount and authorize via RBAC.
# This is needed to install CRDs, etc.
kubectl create serviceaccount gitops-playground-job-executer -n default
kubectl create clusterrolebinding gitops-playground-job-executer \
  --clusterrole=cluster-admin \
  --serviceaccount=default:gitops-playground-job-executer

# Then apply the playground with the following command:
# To access services on remote clusters, add either --remote or --ingress-nginx --base-url=$yourdomain
kubectl run gitops-playground -i --tty --restart=Never \
  --overrides='{ "spec": { "serviceAccount": "gitops-playground-job-executer" } }' \
  --image ghcr.io/cloudogu/gitops-playground \
  -- --yes --argocd # additional parameters go here. 

# If everything succeeded, remove the objects
kubectl delete clusterrolebinding/gitops-playground-job-executer \
  sa/gitops-playground-job-executer pods/gitops-playground -n default  

In general, docker run should work here as well. But GKE, for example, uses gcloud and python in its kubeconfig. Running inside the cluster avoids these kinds of issues.

Additional parameters

The following describes more parameters and use cases.

You can get a full list of all options like so:

docker run -t --rm ghcr.io/cloudogu/gitops-playground --help
Configuration file

You can also use a configuration file to specify the parameters (--config-file or --config-map). That file must be a YAML file.

Note that the config file is not yet a complete replacement for CLI parameters.

You can use --output-config-file to output the current config as set by defaults and CLI parameters.

In addition, for easier validation and auto-completion, we provide a schema file.

For example in Jetbrains IntelliJ IDEA, you can use the schema for autocompletion and validation when you put the following at the beginning of your config file:

# $schema: https://raw.githubusercontent.com/cloudogu/gitops-playground/main/docs/configuration.schema.json

If you work with an older version, you can use a specific git commit ID instead of main in the schema URL.

Then use the context assistant to enable coding assistance or fill in all available properties. See here for the full manual.
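
For example, a config file prepared with this schema comment might look like this (a minimal sketch; it reuses the features.monitoring.active key shown in the kubectl example below):

cat > gitops-playground.yaml <<'EOF'
# $schema: https://raw.githubusercontent.com/cloudogu/gitops-playground/main/docs/configuration.schema.json
features:
  monitoring:
    active: true
EOF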

example of a config file inside Jetbrains IntelliJ IDEA

Apply via Docker
docker run --rm -t --pull=always -u $(id -u) \
    -v ~/.config/k3d/kubeconfig-gitops-playground.yaml:/home/.kube/config \
    -v $(pwd)/gitops-playground.yaml:/config/gitops-playground.yaml \
    --net=host \
    ghcr.io/cloudogu/gitops-playground --yes --argocd --config-file=/config/gitops-playground.yaml
Apply via kubectl

Create the serviceaccount and clusterrolebinding

$ cat config.yaml # for example
features: 
  monitoring:
    active: true

# Convention:
# The ConfigMap is looked up in the current namespace
# and the key "config.yaml" is read from it
kubectl create configmap gitops-config --from-file=config.yaml
kubectl create configmap gitops-config --from-file=config.yaml

kubectl run gitops-playground -i --tty --restart=Never \
  --overrides='{ "spec": { "serviceAccount": "gitops-playground-job-executer" } }' \
  --image ghcr.io/cloudogu/gitops-playground \
  -- --yes --argocd --config-map=gitops-config

Afterwards, you might want to clean up as described above. In addition, you might want to delete the ConfigMap as well:

kubectl delete cm gitops-config 
Deploy Ingress Controller

In the default installation the GitOps-Playground comes without an Ingress-Controller.

We use Nginx as the default ingress controller. It can be enabled via the config file or the --ingress-nginx parameter.

In order to make use of the ingress controller, it is recommended to use it in conjunction with --base-url, which will create Ingress objects for all components of the GitOps playground.

The ingress controller is based on the helm chart ingress-nginx.

Additional parameters from this chart's values.yaml file can be added to the installation through the gitops-playground configuration file.

Example:

features:
  ingressNginx:
    active: true
    helm:
      values:
        controller:
          replicaCount: 4

In this example we override the default controller.replicaCount (the GitOps Playground's default is 2).

This config file is merged with precedence over the defaults set by the playground.

Deploy Ingresses

It is possible to deploy Ingress objects for all components. You can either

  • Set a common base url (--base-url=https://example.com) or
  • individual URLs:
--argocd-url https://argocd.example.com 
--grafana-url https://grafana.example.com 
--vault-url https://vault.example.com 
--mailhog-url https://mailhog.example.com 
--petclinic-base-domain petclinic.example.com 
--nginx-base-domain nginx.example.com
  • or both, where the individual URLs take precedence (see the example below).
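
For example, combining both could look like this (example.com is a placeholder domain); everything is served below example.com, except Argo CD, which gets its individual URL:

--base-url=https://example.com \
--argocd-url=https://argocd.example.com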

Note:

  • jenkins-url and scmm-url are for external services and do not lead to ingresses, but you can set them via --base-url for now.
  • In order to make use of the Ingress you need an ingress controller. If your cluster does not provide one, the Playground can deploy one for you, via the --ingress-nginx parameter.
Subdomains vs hyphen-separated ingresses
  • By default, the ingresses are built as subdomains of --base-url.
  • You can change this behaviour using the parameter --url-separator-hyphen.
  • With this, hyphens are used instead of dots to separate the application name from the base URL.
  • Examples:
    • --base-url=https://xyz.example.org: argocd.xyz.example.org (default)
    • --base-url=https://xyz.example.org: argocd-xyz.example.org (--url-separator-hyphen)
  • This is useful when you have a wildcard certificate for the TLD, but use a subdomain as base URL.
    Here, browsers accept the validity only for the first level of subdomains.
Local ingresses

The ingresses can also be used when running the playground on your local machine:

  • Ingresses might be easier to remember than arbitrary port numbers and look better in demos
  • With ingresses, we can execute our local clusters in higher isolation or multiple playgrounds concurrently
  • Ingresses are required for running on Windows/Mac.

To use them locally,

  • init your cluster (init-cluster.sh).
  • apply your playground with the following parameters
    • --base-url=http://localhost
      • this is possible on Windows (tested on 11), Mac (tested on Ventura) or when using Linux with systemd-resolved (default in Ubuntu, not Debian)
        As an alternative, you could add all *.localhost entries to your hosts file.
        Use kubectl get ingress -A to get a full list
      • Then, you can reach argocd on http://argocd.localhost, for example
    • --base-url=http://local.gd (or 127.0.0.1.nip.io, 127.0.0.1.sslip.io, or others)
      • This should work for all other machines that have access to the internet without further config
      • Then, you can reach argocd on http://argocd.local.gd, for example
  • Note that when using port 80, the URLs are shorter, but you run into issues because port 80 is regarded as a privileged port. Java applications seem not to be able to reach localhost:80 or even 127.0.0.1:80 (NoRouteToHostException)
  • You can change the port using init-cluster.sh --bind-ingress-port=8080.
    When you do, make sure to append the same port when applying the playground: --base-url=http://localhost:8080 (see the combined example after this list)
  • If your setup requires you to bind to a specific interface, you can just pass it with e.g. --bind-ingress-port=127.0.0.1:80
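
Putting this together, a sketch of the local ingress setup on a non-privileged port (combining the TL;DR command from above with --bind-ingress-port):

bash <(curl -s \
  https://raw.githubusercontent.com/cloudogu/gitops-playground/main/scripts/init-cluster.sh) \
  --bind-ingress-port=8080 \
  && docker run --rm -t --pull=always -u $(id -u) \
    -v ~/.config/k3d/kubeconfig-gitops-playground.yaml:/home/.kube/config \
    --net=host \
    ghcr.io/cloudogu/gitops-playground --yes --argocd --ingress-nginx --base-url=http://localhost:8080
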
Deploy GitOps operators
  • --argocd - deploy Argo CD GitOps operator

โš ๏ธ Note that switching between operators is not supported.
That is, expect errors (for example with cluster-resources) if you apply the playground once with Argo CD and the next time without it. We recommend resetting the cluster with init-cluster.sh beforehand.
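
Such a reset might look like this (a sketch: delete the existing cluster, then re-run the init script):

k3d cluster delete gitops-playground
bash <(curl -s \
  https://raw.githubusercontent.com/cloudogu/gitops-playground/main/scripts/init-cluster.sh)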

Deploy with local Cloudogu Ecosystem

See our Quickstart Guide on how to set up the instance.
Then set the following parameters.

# Note: 
# * In this case --password only sets the Argo CD admin password (Jenkins and 
#    SCMM are external)
# * Insecure is needed, because the local instance will not have a valid cert
--jenkins-url=https://192.168.56.2/jenkins \ 
--scmm-url=https://192.168.56.2/scm \
--jenkins-username=admin \
--jenkins-password=yourpassword \
--scmm-username=admin \
--scmm-password=yourpassword \
--password=yourpassword \
--insecure
Deploy with productive Cloudogu Ecosystem and GCR

Using Google Container Registry (GCR) fits well with our cluster creation example via Terraform on Google Kubernetes Engine (GKE), see our docs.

Note that you can get a free CES demo instance set up with a Kubernetes Cluster as GitOps Playground here.

# Note: In this case --password only sets the Argo CD admin password (Jenkins 
# and SCMM are external) 
--jenkins-url=https://your-ecosystem.cloudogu.net/jenkins \ 
--scmm-url=https://your-ecosystem.cloudogu.net/scm \
--jenkins-username=admin \
--jenkins-password=yourpassword \
--scmm-username=admin \
--scmm-password=yourpassword \
--password=yourpassword \
--registry-url=eu.gcr.io \
--registry-path=yourproject \
--registry-username=_json_key \ 
--registry-password="$( cat account.json | sed 's/"/\\"/g' )" 
Override default images
gitops-build-lib

Images used by the gitops-build-lib are set in the gitopsConfig in each Jenkinsfile of an application, like this:

def gitopsConfig = [
    ...
    buildImages          : [
            helm: 'ghcr.io/cloudogu/helm:3.10.3-1',
            kubectl: 'bitnami/kubectl:1.29',
            kubeval: 'ghcr.io/cloudogu/helm:3.10.3-1',
            helmKubeval: 'ghcr.io/cloudogu/helm:3.10.3-1',
            yamllint: 'cytopia/yamllint:1.25-0.7'
    ],...

To override each of these images in all applications, you can use the following parameters (see the example after this list):

  • --kubectl-image someRegistry/someImage:1.0.0
  • --helm-image someRegistry/someImage:1.0.0
  • --kubeval-image someRegistry/someImage:1.0.0
  • --helmkubeval-image someRegistry/someImage:1.0.0
  • --yamllint-image someRegistry/someImage:1.0.0
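
For example, one of these overrides passed to the Docker command from above (someRegistry/someImage is a placeholder):

docker run --rm -t -u $(id -u) \
  -v ~/.config/k3d/kubeconfig-gitops-playground.yaml:/home/.kube/config \
  --net=host \
  ghcr.io/cloudogu/gitops-playground --yes --argocd \
  --kubectl-image someRegistry/someImage:1.0.0
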
Tools and Exercises

Images used by various tools and exercises can be configured using the following parameters:

  • --grafana-image someRegistry/someImage:1.0.0
  • --external-secrets-image someRegistry/someImage:1.0.0
  • --external-secrets-certcontroller-image someRegistry/someImage:1.0.0
  • --external-secrets-webhook-image someRegistry/someImage:1.0.0
  • --vault-image someRegistry/someImage:1.0.0
  • --nginx-image someRegistry/someImage:1.0.0

Note that specifying a tag is mandatory.

Argo CD-Notifications

If you are using a remote cluster, you can set the --argocd-url parameter so that argocd-notification messages have a link to the corresponding application.

You can specify email addresses for notifications (note that by default, MailHog will not actually send emails)

Monitoring

Set the parameter --monitoring to enable deployment of monitoring and alerting tools like Prometheus, Grafana and MailHog.

See Monitoring tools for details.

You can specify email addresses for notifications (note that by default, MailHog will not actually send emails)

Mail server

The gitops-playground uses MailHog to showcase notifications.
Alternatively, you can configure an external mailserver.

Note that you can't use both at the same time.
If you set either the --mailhog or --mail parameter, MailHog will be installed.
If you set --smtp-* parameters, an external mail server will be used and MailHog will not be deployed.

MailHog

Set the parameter --mailhog to enable MailHog.

This will deploy MailHog and configure Argo CD and Grafana to send mails to MailHog.
Sender and recipient email addresses can be set via parameters in some applications, e.g. --grafana-email-from or --argocd-email-to-user.

Parameters:

  • --mailhog: Activate MailHog as internal Mailserver
  • --mailhog-url: Specify domain name (ingress) under which MailHog will be served
External Mailserver

If you want to use an external mail server, you can set it up with these parameters:

  • --smtp-address: External mail server SMTP address or IP
  • --smtp-port: External mail server SMTP port
  • --smtp-user: External mail server login username
  • --smtp-password: External mail server login password. Make sure to put your password in single quotes.

This will configure Argo CD and Grafana to send mails using your external mail server.
In addition, you should set matching sender and recipient email addresses, e.g. --grafana-email-from or --argocd-email-to-user.
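
Put together, an external mail server configuration might look like this (all values are placeholders):

--smtp-address=mail.example.com \
--smtp-port=465 \
--smtp-user=mailuser \
--smtp-password='yourpassword' \
--grafana-email-from=grafana@example.com \
--argocd-email-to-user=user@example.com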

Secrets Management

Set the parameter --vault=[dev|prod] to enable deployment of the secrets management tools Hashicorp Vault and External Secrets Operator. See Secrets management tools for details.

Remove playground

For k3d, you can just run k3d cluster delete gitops-playground. This will delete the whole cluster. If you want to remove k3d itself, use rm .local/bin/k3d.

To remove the playground without deleting the cluster, use the option --destroy. You need to pass the same parameters as when deploying the playground, to ensure that the destroy script can authenticate with all tools. Note that this option has limitations: it does not remove CRDs, namespaces, the locally deployed SCM-Manager, Jenkins and registry, or the plugins for SCM-Manager and Jenkins.
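
For example, if the playground was applied with the TL;DR command from above, a destroy run might look like this (a sketch repeating the same parameters):

docker run --rm -t -u $(id -u) \
  -v ~/.config/k3d/kubeconfig-gitops-playground.yaml:/home/.kube/config \
  --net=host \
  ghcr.io/cloudogu/gitops-playground --yes --argocd --ingress-nginx --base-url=http://localhost --destroy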

Running on Windows or Mac

  • In general: We cannot use the host network, so it's easiest to access via ingress controller and ingresses.
  • --base-url=http://localhost --ingress-nginx should work on both Windows and Mac.
  • In case of problems resolving e.g. jenkins.localhost, you could try using --base-url=http://local.gd or similar, as described in local ingresses.

Mac and Windows WSL

On macOS and when using the Windows Subsystem Linux on Windows (WSL), you can just run our TL;DR command after installing Docker.

For Windows, we recommend using Windows Subsystem for Linux version 2 (WSL2) with a native installation of Docker Engine, because it's easier to set up and less prone to errors.

For macOS, please increase the Memory limit in Docker Desktop (for your DockerVM) to be > 10 GB. Recommendation: 16GB.

bash <(curl -s \
  https://raw.githubusercontent.com/cloudogu/gitops-playground/main/scripts/init-cluster.sh) \
  && docker run --rm -t --pull=always -u $(id -u) \
    -v ~/.config/k3d/kubeconfig-gitops-playground.yaml:/home/.kube/config \
    --net=host \
    ghcr.io/cloudogu/gitops-playground --yes --argocd --ingress-nginx --base-url=http://localhost
# If you want to try all features, you might want to add these params: --mail --monitoring --vault=dev

When you encounter errors with port 80 you might want to use e.g.

  • init-cluster.sh --bind-ingress-port=8080 and
  • --base-url=http://localhost:8080 instead.

Windows Docker Desktop

  • As mentioned in the previous section, we recommend using WSL2 with a native Docker Engine.
  • If you must, you can also run using Docker Desktop from a native Windows console (see below)
  • However, there seems to be a problem when the Jenkins Jobs running the playground access docker, e.g.
$ docker run -t -d -u 0:133 -v ... -e ******** bitnami/kubectl:1.25.4 cat
docker top e69b92070acf3c1d242f4341eb1fa225cc40b98733b0335f7237a01b4425aff3 -eo pid,comm
process apparently never started in /tmp/gitops-playground-jenkins-agent/workspace/xample-apps_petclinic-plain_main/.configRepoTempDir@tmp/durable-7f109066
(running Jenkins temporarily with -Dorg.jenkinsci.plugins.durabletask.BourneShellScript.LAUNCH_DIAGNOSTICS=true might make the problem clearer)
Cannot contact default-1bg7f: java.nio.file.NoSuchFileException: /tmp/gitops-playground-jenkins-agent/workspace/xample-apps_petclinic-plain_main/.configRepoTempDir@tmp/durable-7f109066/output.txt
  • In Docker Desktop, it's recommended to use WSL2 as backend.
  • Using the Hyper-V backend should also work, but we experienced random CrashLoopBackoffs of running pods due to liveness probe timeouts.
    Same as for macOS, increasing the Memory limit in Docker Desktop (for your DockerVM) to be > 10 GB might help.
    Recommendation: 16GB.

Here is how you can start the playground from a Windows-native PowerShell console:

winget install k3d --version x.y.z
  • Create k3d cluster. See K3S_VERSION in init-cluster.sh for $image, then execute
$ingress_port = "80"
$registry_port = "30000"
$image = "rancher/k3s:v1.25.5-k3s2"
# Note that you can query the image version used by the playground like so:
# (Invoke-WebRequest -Uri 'https://raw.githubusercontent.com/cloudogu/gitops-playground/main/scripts/init-cluster.sh').Content -split "`r?`n" | Select-String -Pattern 'K8S_VERSION=|K3S_VERSION='

k3d cluster create gitops-playground `
    --k3s-arg=--kube-apiserver-arg=service-node-port-range=8010-65535@server:0 `
    -p ${ingress_port}:80@server:0:direct `
    -v /var/run/docker.sock:/var/run/docker.sock@server:0 `
    --image=${image} `
    -p ${registry_port}:30000@server:0:direct

# Write $HOME/.config/k3d/kubeconfig-gitops-playground.yaml
k3d kubeconfig write gitops-playground
  • Note that
    • You can ignore the warning about docker.sock
    • We're mounting the docker socket, so it can be used by the Jenkins Agents for the docker-plugin.
    • Windows seems not to provide a group id for the docker socket. So the Jenkins Agents run as root user.
    • If you prefer running with an unprivileged user, consider running on WSL2, Mac or Linux
    • You could also add -v gitops-playground-build-cache:/tmp@server:0 to persist the cache of the Jenkins agent between restarts of k3d containers.
  • Apply playground:
    Note that when using a $registry_port other than 30000, append --internal-registry-port=$registry_port to the command below
docker run --rm -t --pull=always `
    -v $HOME/.config/k3d/kubeconfig-gitops-playground.yaml:/home/.kube/config `
    --net=host `
    ghcr.io/cloudogu/gitops-playground --yes --argocd --ingress-nginx --base-url=http://localhost:$ingress_port # more params go here

Stack

As described above, the GitOps playground comes with a number of applications. Some of them can be accessed via web.

  • Jenkins
  • SCM-Manager
  • Argo CD
  • Prometheus/Grafana
  • Vault
  • Example applications for each GitOps operator, some with staging and production environments.

The URLs of the applications depend on the environment the playground is deployed to. The following lists all applications and how to find out their respective URLs for a GitOps playground deployed to a local or remote cluster.

For remote clusters you need the external IP; there is no need to specify the port (everything runs on port 80). Basically, you can get the IP address as follows:

kubectl -n "${namespace}" get svc "${serviceName}" \
  --template="{{range .status.loadBalancer.ingress}}{{.ip}}{{end}}"

There is also a convenience script scripts/get-remote-url. The script waits if the external IP is not yet present. You could use it conveniently like so:

bash <(curl -s \
  https://raw.githubusercontent.com/cloudogu/gitops-playground/main/scripts/get-remote-url) \
  jenkins default

You can open the application in the browser right away, like so for example:

xdg-open $(bash <(curl -s \
  https://raw.githubusercontent.com/cloudogu/gitops-playground/main/scripts/get-remote-url) \
   jenkins default)

Credentials

If deployed within the cluster, all applications can be accessed via: admin/admin

Note that you can change the password with the --password argument (and you should, for a remote cluster!). There is also a --username parameter, which is ignored for Argo CD. That is, for now Argo CD's username is always admin.

Argo CD

Argo CD's web UI is available at

  • http://localhost:9092 (k3d)
  • scripts/get-remote-url argocd-server argocd (remote k8s)
  • --argocd-url to specify domain name

Argo CD is installed in a production-ready way, that allows for operating Argo CD with Argo CD, using GitOps and providing a repo per team pattern.

When installing the GitOps playground, the following steps are performed to bootstrap Argo CD:

  • The following repos are created and initialized:
    • argocd (management and config of Argo CD itself),
    • example-apps (example for a developer/application team's GitOps repo) and
    • cluster-resources (example for a cluster admin or infra/platform team's repo; see below for details)
  • Argo CD is installed imperatively via a helm chart.
  • Two resources are applied imperatively to the cluster: an AppProject called argocd and an Application called bootstrap. These are also contained within the argocd repository.

From there everything is managed via GitOps. This diagram shows how it works.

  1. The bootstrap application manages the folder applications, which also contains bootstrap itself.
    With this, changes to the bootstrap application can be done via GitOps. The bootstrap application also deploys other apps (App Of Apps pattern)
  2. The argocd application manages the folder argocd which contains Argo CD's resources as an umbrella helm chart.
    The umbrella chart pattern allows describing the actual values in values.yaml and deploying additional resources (such as secrets and ingresses) via the templates folder. The actual ArgoCD chart is declared in the Chart.yaml
  3. The Chart.yaml contains the Argo CD helm chart as dependency. It points to a deterministic version of the Chart (pinned via Chart.lock) that is pulled from the Chart repository on the internet.
    This mechanism can be used to upgrade Argo CD via GitOps. See the Readme of the argocd repository for details.
  4. The projects application manages the projects folder, that contains the following AppProjects:
    • the argocd project, used for bootstrapping
    • the built-in default project (which is restricted to eliminate threats to security)
    • one project per team (to implement least privilege and also notifications per team):
      • cluster-resources (for platform admin, needs more access to cluster) and
      • example-apps (for developers, needs less access to cluster)
  5. The cluster-resources application points to the cluster-resources git repository (argocd folder), which has the typical folder structure of a GitOps repository (explained in the next step). This way, the platform admins use GitOps in the same way as their "customers" (the developers) and can provide better support.
  6. The example-apps application points to the example-apps git repository (argocd folder again). Like the cluster-resources, it also has the typical folder structure of a GitOps repository:
    • apps - contains the kubernetes resources of all applications (the actual YAML)
    • argocd - contains Argo CD Applications that point to subfolders of apps (App Of Apps pattern, again)
    • misc - contains kubernetes resources, that do not belong to specific applications (namespaces, RBAC, resources used by multiple apps, etc.)
  7. The misc application points to the misc folder
  8. The my-app-staging application points to the apps/my-app/staging folder within the same repo. This provides a folder structure for release promotion. The my-app-* applications implement the Environment per App Pattern. This pattern allows each application to have its own environments, e.g. production and staging or none at all. Note that the actual YAML here could either be pushed manually or using the CI server. The applications contain examples that push config changes from the app repo to the GitOps repo using the CI server. This implementation mixes the Repo per Team and Repo per App patterns
  9. The corresponding production environment is realized using the my-app-production application, which points to the apps/my-app/production folder within the same repo.
    Note that it is recommended to protect the production folders from manual access, if supported by the SCM of your choice.
    Alternatively, instead of different YAML files as used in the diagram, these applications could be realized as
    • Two applications in the same YAML (implemented in the playground, see e.g. petclinic-plain.yaml)
    • Two applications with the same name in different namespaces, when Argo CD is enabled to search for applications within different namespaces (implemented in the playground, see Argo CD's values.yaml - application.namespaces setting)
    • One ApplicationSet, using the git generator for directories (not used in the GitOps playground yet; see the sketch after this list)
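
As an illustration of the last option, an ApplicationSet using the git generator for directories might look roughly like this. This is not part of the playground; the repo URL and names are assumptions, only the project and target namespaces follow the conventions described above:

cat > applicationset-sketch.yaml <<'EOF'
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: my-app
  namespace: argocd
spec:
  generators:
    - git:
        repoURL: https://scmm.example.com/repo/argocd/example-apps # placeholder
        revision: main
        directories:
          - path: apps/my-app/*
  template:
    metadata:
      name: 'my-app-{{path.basename}}' # my-app-staging, my-app-production
    spec:
      project: example-apps
      source:
        repoURL: https://scmm.example.com/repo/argocd/example-apps # placeholder
        targetRevision: main
        path: '{{path}}'
      destination:
        server: https://kubernetes.default.svc
        namespace: 'argocd-{{path.basename}}' # argocd-staging, argocd-production
EOF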

To keep things simpler, the GitOps playground only uses one Kubernetes cluster, effectively implementing the Standalone pattern. However, the repo structure could also be used to serve multiple clusters, in a Hub and Spoke pattern: additional clusters could either be defined in the values.yaml or as secrets via the templates folder.

We're also working on an optional implementation of the namespaced pattern, using the Argo CD operator.

Why not use argocd-autopilot?

An advanced question: Why does the GitOps playground not use the argocd-autopilot?

The short answer is: as of 2023-05, version 0.4.15, it looks far from ready for production.

Here is a diagram that shows what the repo structure created by autopilot looks like:

Here are some thoughts why we deem it not a good fit for production:

  • The version of ArgoCD is not pinned.
    • Instead, the kustomization.yaml (3️⃣ in the diagram) points to a base within the autopilot repo, which in turn points to the stable branch of the Argo CD repo.
    • While it might be possible to pin the version using Kustomize, this is not the default and looks complicated.
    • A non-deterministic version calls for trouble. Upgrades of Argo CD might happen unnoticed.
    • What about breaking changes? What about disaster recovery?
  • The repository structure autopilot creates is more complicated (i.e. difficult to understand and maintain) than the one used in the playground
    • Why is the autopilot-bootstrap application (1️⃣ in the diagram) not within the GitOps repo, but lives only in the cluster?
    • The approach of an ApplicationSet within the AppProject's yaml pointing to a config.json (more difficult to write than YAML) is difficult to grasp (4️⃣ and 6️⃣ in the diagram)
    • The cluster-resources ApplicationSet is a good approach to multi-cluster but again requires writing JSON (4️⃣ in the diagram).
  • Projects are used to realize environments (6️⃣ and 7️⃣ in the diagram).
    How would we separate teams in this monorepo structure?
    One idea would be to use multiple Argo CD instances, realising a Standalone pattern. This would mean that every team would have to manage its own ArgoCD instance.
    How could this task be delegated to a dedicated platform team? These are the questions that lead to the structure realized in the GitOps playground.

cluster-resources

The playground installs cluster-resources (like prometheus, grafana, vault, external secrets operator, etc.) via the repo
argocd/cluster-resources. See ADR for more details.

When installing without Argo CD, we fall back to installing these tools imperatively via Helm, as a kind of neutral ground.

Jenkins

Jenkins is available at

You can enable browser notifications about build results via a button in the lower right corner of Jenkins Web UI.

Note that this only works when using localhost or https://.

Enable Jenkins Notifications

Example of a Jenkins browser notification

External Jenkins

You can set an external jenkins server via the following parameters when applying the playground. See parameters for examples.

  • --jenkins-url,
  • --jenkins-username,
  • --jenkins-password

Note that the example applications' pipelines will only run on a Jenkins that uses agents providing a docker host. That is, Jenkins must be able to successfully run e.g. docker ps on the agent.

The user has to have the following privileges:

  • install plugins
  • set credentials
  • create jobs
  • restart Jenkins

SCM-Manager

SCM-Manager is available at

External SCM-Manager

You can set an external SCM-Manager via the following parameters when applying the playground. See Parameters for examples.

  • --scmm-url,
  • --scmm-username,
  • --scmm-password

The user on the SCM has to have privileges to:

  • add / edit users
  • add / edit permissions
  • add / edit repositories
  • add / edit proxy
  • install plugins

Monitoring tools

Set the parameter --monitoring to deploy the kube-prometheus-stack via its Helm chart, including Argo CD dashboards.

This leads to the following tools being exposed:

  • Mailhog
    • http://localhost:9094 (k3d)
    • scripts/get-remote-url mailhog monitoring (remote k8s)
    • --mailhog-url to specify domain name
  • Grafana
    • http://localhost:9095 (k3d)
    • scripts/get-remote-url kube-prometheus-stack-grafana monitoring (remote k8s)
    • --grafana-url to specify domain name

Grafana can be used to query and visualize metrics via prometheus. Prometheus is not exposed by default.
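
If you need to access Prometheus directly anyway, a port-forward is one option (the service name is an assumption based on the kube-prometheus-stack chart's default naming):

kubectl -n monitoring port-forward svc/kube-prometheus-stack-prometheus 9090:9090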

In addition, argocd-notifications is set up. Applications deployed with Argo CD will now alert via email to MailHog when the sync status fails, for example.

Note that this only works with Argo CD so far

Secrets Management Tools

Via the vault parameter, you can deploy Hashicorp Vault and the External Secrets Operator into your GitOps playground.

With this, the whole flow from a secret value in Vault to a Kubernetes Secret via the External Secrets Operator can be seen in action:

External Secret Operator <-> Vault - flow

For this to work, the GitOps playground configures the whole chain in Kubernetes and vault (when dev mode is used):

External Secret Operator Custom Resources

  • In k8s namespaces argocd-staging and argocd-production:
    • Creates SecretStore and ServiceAccount (used to authenticate with vault)
    • Creates ExternalSecrets (see the sketch after this list)
  • In Vault:
    • Creates secrets for staging and prod
    • Creates a human user for changing the secrets
    • Authorizes the service accounts on those secrets
  • Creates an example app that uses the secrets
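
As an illustration (not the playground's actual manifests), an ExternalSecret wired to the vault-backend SecretStore mentioned below might look roughly like this; the secret name and key are assumptions:

kubectl apply -n argocd-staging -f - <<'EOF'
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: nginx-secret          # illustrative name
spec:
  refreshInterval: 1m
  secretStoreRef:
    name: vault-backend       # SecretStore that connects to Vault
    kind: SecretStore
  target:
    name: nginx-secret        # resulting Kubernetes Secret
  data:
    - secretKey: some-key     # key in the Kubernetes Secret
      remoteRef:
        key: secret/staging/nginx-secret
        property: some-key    # key within the Vault secret
EOF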

dev mode

For testing you can set the parameter --vault=dev to deploy vault in development mode. This will lead to

  • Vault being transient, i.e. all changes during runtime are not persisted; a restart will reset it to defaults.
  • Vault is initialized with some fixed secrets that are used in the example app, see below.
  • Vault authorization is initialized with service accounts used in example SecretStores for external secrets operator
  • Vault is initialized with the usual admin/admin account (can be overridden with --username and --password)

The secrets are then picked up by the vault-backend SecretStores (which connect the External Secrets Operator with Vault) in the argocd-staging and argocd-production namespaces.

You can reach the vault UI on

  • http://localhost:8200 (k3d)
  • scripts/get-remote-url vault-ui secrets (remote k8s)
  • --vault-url to specify domain name
  • You can log in via the user account mentioned above.
    If necessary, the root token can be found on the log:
    kubectl logs -n secrets vault-0 | grep 'Root Token'

prod mode

When using --vault=prod you'll have to initialize Vault manually, but in return it will persist changes.

If you want the example app to work, you'll have to manually

  • set up vault, unseal it and
  • authorize the vault service accounts in the argocd-production and argocd-staging namespaces. See SecretStores and dev-post-start.sh for an example.

Example app

With vault in dev mode and ArgoCD enabled, the example app applications/nginx/argocd/helm-jenkins will be deployed in a way that exposes the vault secrets secret/<environment>/nginx-secret via HTTP on the URL http://<host>/secret, for example http://localhost:30024/secret.

While exposing secrets on the web is a very bad practice, it's very good for demoing auto reload of a secret changed in vault.

To demo this, you could

  • change the staging secret (see the sketch below)
  • Wait for the change to show on the web, e.g. like so
while true; do echo -n "$(date '+%Y-%m-%d %H:%M:%S'): " ; \
  curl http://localhost:30024/secret/ ; echo; sleep 1; done
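
For the first step, changing the staging secret, you could for example use Vault's CLI inside the pod (a sketch; the key and value are illustrative and assume the dev-mode root token is available in the pod, alternatively use the Vault UI with the admin account mentioned above):

kubectl exec -n secrets vault-0 -- \
  vault kv put secret/staging/nginx-secret some-key=some-new-value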

This usually takes between a couple of seconds and 1-2 minutes.
This time consists of

  • the ExternalSecret's refreshInterval,
  • the kubelet sync period (defaults to 1 minute) and
  • cache propagation delay

The following video shows this demo in time-lapse:

secrets-demo-video.mp4

Example Applications

The playground comes with example applications that allow for experimenting with different GitOps features.

All applications are deployed via separate application and GitOps repos:

  • Separation of app repo (e.g. petclinic-plain) and GitOps repo (e.g. argocd/example-app)
  • Config is maintained in app repo,
  • CI Server writes to GitOps repo and creates PullRequests.

The applications implement a simple staging mechanism:

  • After a successful Jenkins build, the staging application will be deployed into the cluster by the GitOps operator.
  • Deployment of production applications can be triggered by accepting pull requests.
  • For some applications working without CI Server and committing directly to the GitOps repo is pragmatic
    (e.g. a 3rd-party application like NGINX, see argocd/nginx-helm-umbrella)

app-repo-vs-gitops-repo

Note that the GitOps-related logic is implemented in the gitops-build-lib for Jenkins. See the README there for more options like

  • staging,
  • resource creation,
  • validation (fail early / shift left).

Please note that it might take about a minute after the pull request has been accepted for the GitOps operator to start deploying. Alternatively, you can trigger the deployment via ArgoCD's UI or CLI.

PetClinic with plain k8s resources

Jenkinsfile for plain deployment

  • Staging
    • local localhost:30020
    • remote: scripts/get-remote-url spring-petclinic-plain argocd-staging
    • --petclinic-base-domain to specify base domain. Then use staging.petclinic-plain.$base-domain
  • Production
    • local localhost:30021
    • remote: scripts/get-remote-url spring-petclinic-plain argocd-production
    • --petclinic-base-domain to specify base domain. Then use production.petclinic-plain.$base-domain

PetClinic with helm

Jenkinsfile for helm deployment

  • Staging
    • local localhost:30022
    • remote: scripts/get-remote-url spring-petclinic-helm argocd-staging
    • --petclinic-base-domain to specify base domain. Then use staging.petclinic-helm.$base-domain
  • Production
    • local localhost:30023
    • remote: scripts/get-remote-url spring-petclinic-helm argocd-production
    • --petclinic-base-domain to specify base domain. Then use production.petclinic-helm.$base-domain

3rd Party app (NGINX) with helm, templated in Jenkins

Jenkinsfile

  • Staging
    • local: localhost:30024
    • remote: scripts/get-remote-url nginx argocd-staging
    • --nginx-base-domain to specify base domain. Then use staging.nginx.$base-domain
  • Production
    • local: localhost:30025
    • remote: scripts/get-remote-url nginx argocd-production
    • --nginx-base-domain to specify base domain. Then use production.nginx.$base-domain

3rd Party app (NGINX) with helm, using Helm dependency mechanism

  • Application name: nginx-helm-umbrella
  • local: localhost:30026
  • remote: scripts/get-remote-url nginx-helm-umbrella argocd-production
  • --nginx-base-domain to specify base domain. Then use production.nginx-helm-umbrella.$base-domain

Development

See docs/developers.md


gitops-playground's Issues

NodePort in nginx-helm deployment via ArgoCD not working

Deploying the nginx helm chart with ArgoCD leads to a release with no configured NodePort even though it is declared in the values.yaml. But when the nginx is deployed imperatively with helm install the NodePort will be configured.

ArgoCD is using helm dependency build and helm template to deploy a helm chart. It seems the helm template and helm install lead to different results.

Dependencies set by Jenkins are not pinned

When the jenkins plugin configuration-as-code is updated, all dependencies using this plugin will fail on install since they need the latest version of it.

Plugin git:4.5.2 (via credentials:2.3.18) depends on configuration-as-code:1.50, but there is an older version defined on the top level - configuration-as-code:1.47

The default version of configuration-as-code in the latest helm-chart version 3.3.10 (as of writing this issue) is set to 1.4.7

Installing the helm chart on its own yields no errors regarding the plugin installation process.
Somehow using the helm chart within our playground yields the above mentioned failure.

Enable alerting for fluxv2

  • Add to prometheus-stack-helm-values.yaml
alertmanager:
  # We use alertmanager as a workaround to have Flux send alerts to mailhog, as Flux does not support SMTP
  enabled: true
  config:
    global:
      smtp_from: [email protected]
      smtp_smarthost: mailhog.monitoring.svc.cluster.local:1025
      smtp_require_tls: false
    route:
      routes:
        - receiver: 'default'
        - receiver: 'empty'
          match:
            alertname: Watchdog # Ignore watchdog here.  Comment out this route to test alerts easily.
    receivers:
    - name: 'default'
      email_configs:
      - to: [email protected]
        send_resolved: true
    - name: 'empty'

# Needed for #102
prometheus:
  prometheusSpec:
    # Allow for flux podmonitors to be discovered: https://github.com/prometheus-operator/prometheus-operator/issues/3636#issuecomment-796688902
    podMonitorSelectorNilUsesHelmValues: false
    # Use the same for serviceMonitors to be consistent
    serviceMonitorSelectorNilUsesHelmValues: false
  • Add a file to folder fluxv2/clusters/gitops-playground/flux-system and add the file to kustomization.yaml
apiVersion: notification.toolkit.fluxcd.io/v1beta1
kind: Provider
metadata:
  name: alertmanager
  namespace: flux-system
spec:
  type: alertmanager
  address: http://kube-prometheus-stack-alertmanager.monitoring.svc.cluster.local:9093/api/v2/alerts/
  • Implement a test alert such as
apiVersion: notification.toolkit.fluxcd.io/v1beta1
kind: Alert
metadata:
  name: flux-reconciliation-errors
  namespace: flux-system
spec:
  providerRef:
    name: alertmanager
  eventSeverity: error
  eventSources:
    - kind: Kustomization
      name: '*'
      namespace: '*'
    - kind: HelmRelease
      name: '*'
      namespace: '*'

An alert can then be triggered by breaking an existing k8s resource in the fluxv2/gitops repo. E.g. add a syntax error to kustomization.yaml

Upgrade to ArgoCD 2.3

  • โ˜‘๏ธ Upgrade helm chart
  • Move to integrated argocd-notifications:
    • enable in the argocd-helm chart
    • Move argocd-notifications-cm.yaml (example) to control-app
    • Remove custom argocd-notifications application
    • Optional: Configure argocd-notifications to write deployment annotations to grafana -> We will address this issue later
  • Optional: Use application set. -> We will address this issue later
    • Use ApplicationSet for each folder of the gitops repo instead of creating the applications manually
    • Add more examples?

Jenkins: enable notifications has no effect

Clicking the "Enable Notifications" button as described in Jenkins has no effect when running with http:// and an address other than localhost.

Browser's console shows: The Notification permission may only be requested in a secure context.

Use documented repo structure for apps in Flux2

For Fluxv2 change to recommended repo structure, described here.

That is, apps should be deployed to apps/<stage> instead of clusters/<stage>

Our repo currently looks like this

fluxv2
└── clusters
    └── gitops-playground
        ├── flux-system
        │   ├── gotk-components.yaml
        │   ├── gotk-sync.yaml
        │   └── kustomization.yaml
        ├── fluxv2-production
        │   └── spring-petclinic-plain
        │       ├── cm.yaml
        │       ├── deployment.yaml
        │       └── service.yaml
        └── fluxv2-staging
            └── spring-petclinic-plain
                ├── cm.yaml
                ├── deployment.yaml
                └── service.yaml

whereas this is recommended:

├── apps
│   ├── production
│   └── staging
└── clusters
    ├── production
    └── staging

missing user for jenkins ui since upgrade to newest jenkins helm chart version

Behaviour:
When opening the jenkins ui for the first time after a fresh deployment, you will be prompted to create an admin user.

Wanted Behaviour:
The user should have been created via config from code

Steps to reproduce:
Deploy a fresh Playground.
Wait until Jenkins is up and running.
Open Jenkins UI
You will be prompted to create an admin user

Don't use argo's insecure flags by default

1f5020e introduces insecure flags to argo by default.
This is not secure by default and should never be done in production!
So I suggest we set the insecure flags only when apply.sh is called with --insecure.

Exception when using config file

Tested on 8ae976a, images built locally

Config file

features: 
  argocd: 
    emailFrom: [email protected]
    emailToAdmin: [email protected]
    emailToUser: [email protected]
  monitoring: 
    grafanaEmailFrom: [email protected]
    grafanaEmailTo: [email protected]

Dev-image works:

docker build -t gitops-playground:dev --build-arg ENV=dev  --progress=plain .    

 docker run --rm  -u $(id -u) \
    -v ~/.config/k3d/kubeconfig-gitops-playground.yaml:/home/.kube/config \
  -v $PWD/config.yaml:/config/config.yaml \
    --net=host \
   gitops-playground:dev --yes --argocd --base-url=http://localhost --mail --metrics -x --config-file=/config/config.yaml

Build graal native image fails:

docker build -t gitops-playground  --progress=plain .                                                 

docker run --rm --pull=always -u $(id -u) \
    -v ~/.config/k3d/kubeconfig-gitops-playground.yaml:/home/.kube/config \
  -v $PWD/config.yaml:/config/config.yaml \
    --net=host \
   gitops-playground --yes --argocd --base-url=http://localhost --mail   --metrics -x --config-file=/config/config.yaml


com.networknt.schema.JsonMetaSchema - Could not load validator type
com.networknt.schema.JsonSchemaException: java.lang.NoSuchMethodException: com.networknt.schema.TypeValidator.<init>(java.lang.String, com.fasterxml.jackson.databind.JsonNode, com.networknt.schema.JsonSchema, com.networknt.schema.ValidationContext)
        at com.networknt.schema.JsonMetaSchema.newValidator(JsonMetaSchema.java:290)
        at com.networknt.schema.ValidationContext.newValidator(ValidationContext.java:63)
        at com.networknt.schema.JsonSchema.read(JsonSchema.java:295)
        at com.networknt.schema.JsonSchema.getValidators(JsonSchema.java:615)
        at com.networknt.schema.JsonSchema.validate(JsonSchema.java:388)
        at com.networknt.schema.BaseJsonValidator.validate(BaseJsonValidator.java:115)
        at com.cloudogu.gitops.config.schema.JsonSchemaValidator.validate(JsonSchemaValidator.groovy:20)
        at com.cloudogu.gitops.config.ApplicationConfigurator.setConfig(ApplicationConfigurator.groovy:196)
        at com.cloudogu.gitops.cli.GitopsPlaygroundCli.getConfig(GitopsPlaygroundCli.groovy:210)
        at com.cloudogu.gitops.cli.GitopsPlaygroundCli.run(GitopsPlaygroundCli.groovy:174)
        at picocli.CommandLine.executeUserObject(CommandLine.java:2026)
        at picocli.CommandLine.access$1500(CommandLine.java:148)
        at picocli.CommandLine$RunLast.executeUserObjectOfLastSubcommandWithSameParent(CommandLine.java:2461)
        at picocli.CommandLine$RunLast.handle(CommandLine.java:2453)
        at picocli.CommandLine$RunLast.handle(CommandLine.java:2415)
        at picocli.CommandLine$AbstractParseResultHandler.execute(CommandLine.java:2273)
        at picocli.CommandLine$RunLast.execute(CommandLine.java:2417)
        at com.cloudogu.gitops.cli.GitopsPlaygroundCliMain.executionStrategy(GitopsPlaygroundCliMain.groovy:23)
        at picocli.CommandLine.execute(CommandLine.java:2170)
        at com.cloudogu.gitops.cli.GitopsPlaygroundCliMain.exec(GitopsPlaygroundCliMain.groovy:35)
        at com.cloudogu.gitops.cli.GitopsPlaygroundCliMain.main(GitopsPlaygroundCliMain.groovy:13)
Caused by: java.lang.NoSuchMethodException: com.networknt.schema.TypeValidator.<init>(java.lang.String, com.fasterxml.jackson.databind.JsonNode, com.networknt.schema.JsonSchema, com.networknt.schema.ValidationContext)
        at [email protected]/java.lang.Class.getConstructor0(DynamicHub.java:3585)
        at [email protected]/java.lang.Class.getConstructor(DynamicHub.java:2271)
        at com.networknt.schema.ValidatorTypeCode.newValidator(ValidatorTypeCode.java:160)
        at com.networknt.schema.JsonMetaSchema.newValidator(JsonMetaSchema.java:278)

Grafana Dashboards for Flux v2

Add the output of those to fluxv2/clusters/gitops-playground/flux-system and add to kustomization.yaml.

flux create source git flux-monitoring \
--interval=30m \
--url=https://github.com/fluxcd/flux2 \
--namespace=monitoring \
--branch=main 

 flux create kustomization monitoring-config \
--interval=1h \
--prune=true \
--source=flux-monitoring \
--path="./manifests/monitoring/monitoring-config" \
--health-check-timeout=1m \
--namespace=monitoring \
--wait 
# Based on: https://fluxcd.io/flux/guides/monitoring/
# But adds
# --namespace: PromStack seems to pick up its own namespace by default only
# And removes
# --depends-on=kube-prometheus-stack  -> Use GitOps Playground grafana

Also add the following to the prometheus-stack values.yaml in order for PodMonitors to be picked up by prometheus.
This might have been solved in #101

prometheus:
  prometheusSpec:
    # Allow for flux podmonitors to be discovered: https://github.com/prometheus-operator/prometheus-operator/issues/3636#issuecomment-796688902
    podMonitorSelectorNilUsesHelmValues: false
    # Use the same for serviceMonitors to be consistent
    serviceMonitorSelectorNilUsesHelmValues: false

Choosing gitops-operator does not exclude every resource of other operators

Behaviour:

When applying the apps to the local cluster with the restriction to only use "fluxv1", it installs the fluxv1 operator within the local cluster, creates and clones contents to different fluxv1 repositories and creates default build pipelines within Jenkins.
But it also creates empty repositories for argocd / fluxv2 within SCM-Manager and creates pipelines in Jenkins.

Expected Behaviour:

Applying only fluxv1 to the local cluster should only create resources for fluxv1: no empty repositories for argocd / fluxv2 in SCM-Manager and no build pipelines within Jenkins.

Steps to reproduce:

Install local cluster with k3s and apply only fluxv1

./scripts/init-cluster.sh
./scripts/apply.sh --fluxv1

Git author undefined


We should either provide config options for this or use a sane default. Or both.

Syncing of kube-prometheus-stack fails

When starting with --metrics.

Resource apiextensions.k8s.io:CustomResourceDefinition is not permitted in project monitoring.
Resource apiextensions.k8s.io:CustomResourceDefinition is not permitted in project monitoring.
Resource rbac.authorization.k8s.io:ClusterRole is not permitted in project monitoring.
Resource policy:PodSecurityPolicy is not permitted in project monitoring.
Resource admissionregistration.k8s.io:ValidatingWebhookConfiguration is not permitted in project monitoring.


Create README for all example apps

From a user's perspective it would be really helpful if each repo contained a brief description.
For example the argocd gitops repo only shows an empty page when you click on it in SCM-Manager.


Installations fails with Jenkins 2.332.x

With Jenkins version 2.332 the plugin install mechanism starts to return HTTP 500.

In the Jenkins log we can see an IndexOutOfBoundsException. Curiously, posting the same plugin via web UI (http://jenkins/updateCenter/) works.
When creating a curl request from this successful request in the browser using developer tools and just replacing the
Content-Type: multipart/form-data and --data-binary arguments with a -F one (see below), the request fails with 500 again 🤔

The exception occurs in hudson.PluginManager.doUploadPlugin(PluginManager.java:1809)

2022-04-19 12:43:28.773+0000 [id=18]        WARNING o.e.j.s.h.ContextHandler$Context#log: Error while serving http://localhost:9090/pluginManager/uploadPlugin
java.lang.IndexOutOfBoundsException: Index 1 out of bounds for length 1
    at java.base/jdk.internal.util.Preconditions.outOfBounds(Preconditions.java:64)
    at java.base/jdk.internal.util.Preconditions.outOfBoundsCheckIndex(Preconditions.java:70)
    at java.base/jdk.internal.util.Preconditions.checkIndex(Preconditions.java:248)
    at java.base/java.util.Objects.checkIndex(Objects.java:372)
    at java.base/java.util.ArrayList.get(ArrayList.java:459)
    at hudson.PluginManager.doUploadPlugin(PluginManager.java:1809)
    at java.base/java.lang.invoke.MethodHandle.invokeWithArguments(MethodHandle.java:710)
    at org.kohsuke.stapler.Function$MethodFunction.invoke(Function.java:398)
Caused: java.lang.reflect.InvocationTargetException
    at org.kohsuke.stapler.Function$MethodFunction.invoke(Function.java:402)
    at org.kohsuke.stapler.Function$InstanceFunction.invoke(Function.java:410)
    at org.kohsuke.stapler.interceptor.RequirePOST$Processor.invoke(RequirePOST.java:78)
    at org.kohsuke.stapler.PreInvokeInterceptedFunction.invoke(PreInvokeInterceptedFunction.java:26)
    at org.kohsuke.stapler.Function.bindAndInvoke(Function.java:208)
    at org.kohsuke.stapler.Function.bindAndInvokeAndServeResponse(Function.java:141)
    at org.kohsuke.stapler.MetaClass$11.doDispatch(MetaClass.java:558)
    at org.kohsuke.stapler.NameBasedDispatcher.dispatch(NameBasedDispatcher.java:59)
    at org.kohsuke.stapler.Stapler.tryInvoke(Stapler.java:766)
    at org.kohsuke.stapler.Stapler.invoke(Stapler.java:898)
    at org.kohsuke.stapler.MetaClass$1.doDispatch(MetaClass.java:172)
    at org.kohsuke.stapler.NameBasedDispatcher.dispatch(NameBasedDispatcher.java:59)
    at org.kohsuke.stapler.Stapler.tryInvoke(Stapler.java:766)
    at org.kohsuke.stapler.Stapler.invoke(Stapler.java:898)
    at org.kohsuke.stapler.Stapler.invoke(Stapler.java:694)
    at org.kohsuke.stapler.Stapler.service(Stapler.java:240)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
    at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:799)
    at org.eclipse.jetty.servlet.ServletHandler$ChainEnd.doFilter(ServletHandler.java:1626)
    at hudson.util.PluginServletFilter$1.doFilter(PluginServletFilter.java:157)
    at hudson.security.HudsonPrivateSecurityRealm$2.doFilter(HudsonPrivateSecurityRealm.java:998)
    at hudson.util.PluginServletFilter$1.doFilter(PluginServletFilter.java:154)
    at jenkins.telemetry.impl.UserLanguages$AcceptLanguageFilter.doFilter(UserLanguages.java:129)
    at hudson.util.PluginServletFilter$1.doFilter(PluginServletFilter.java:154)
    at jenkins.security.ResourceDomainFilter.doFilter(ResourceDomainFilter.java:81)
    at hudson.util.PluginServletFilter$1.doFilter(PluginServletFilter.java:154)
    at jenkins.metrics.impl.MetricsFilter.doFilter(MetricsFilter.java:125)
    at hudson.util.PluginServletFilter$1.doFilter(PluginServletFilter.java:154)
    at hudson.util.PluginServletFilter.doFilter(PluginServletFilter.java:160)
    at org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:193)
    at org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1601)
    at hudson.security.csrf.CrumbFilter.doFilter(CrumbFilter.java:154)
    at org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:193)
    at org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1601)
    at hudson.security.ChainedServletFilter$1.doFilter(ChainedServletFilter.java:94)
    at jenkins.security.AcegiSecurityExceptionFilter.doFilter(AcegiSecurityExceptionFilter.java:52)
    at hudson.security.ChainedServletFilter$1.doFilter(ChainedServletFilter.java:99)
    at hudson.security.UnwrapSecurityExceptionFilter.doFilter(UnwrapSecurityExceptionFilter.java:54)
    at hudson.security.ChainedServletFilter$1.doFilter(ChainedServletFilter.java:99)
    at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:122)
    at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:116)
    at hudson.security.ChainedServletFilter$1.doFilter(ChainedServletFilter.java:99)
    at org.springframework.security.web.authentication.AnonymousAuthenticationFilter.doFilter(AnonymousAuthenticationFilter.java:109)
    at hudson.security.ChainedServletFilter$1.doFilter(ChainedServletFilter.java:99)
    at org.springframework.security.web.authentication.rememberme.RememberMeAuthenticationFilter.doFilter(RememberMeAuthenticationFilter.java:102)
    at org.springframework.security.web.authentication.rememberme.RememberMeAuthenticationFilter.doFilter(RememberMeAuthenticationFilter.java:93)
    at hudson.security.ChainedServletFilter$1.doFilter(ChainedServletFilter.java:99)
    at org.springframework.security.web.authentication.AbstractAuthenticationProcessingFilter.doFilter(AbstractAuthenticationProcessingFilter.java:219)
    at org.springframework.security.web.authentication.AbstractAuthenticationProcessingFilter.doFilter(AbstractAuthenticationProcessingFilter.java:213)
    at hudson.security.ChainedServletFilter$1.doFilter(ChainedServletFilter.java:99)
    at jenkins.security.BasicHeaderProcessor.success(BasicHeaderProcessor.java:139)
    at jenkins.security.BasicHeaderProcessor.doFilter(BasicHeaderProcessor.java:86)
    at hudson.security.ChainedServletFilter$1.doFilter(ChainedServletFilter.java:99)
    at org.springframework.security.web.context.SecurityContextPersistenceFilter.doFilter(SecurityContextPersistenceFilter.java:110)
    at org.springframework.security.web.context.SecurityContextPersistenceFilter.doFilter(SecurityContextPersistenceFilter.java:80)
    at hudson.security.HttpSessionContextIntegrationFilter2.doFilter(HttpSessionContextIntegrationFilter2.java:63)
    at hudson.security.ChainedServletFilter$1.doFilter(ChainedServletFilter.java:99)
    at hudson.security.ChainedServletFilter.doFilter(ChainedServletFilter.java:111)
    at hudson.security.HudsonFilter.doFilter(HudsonFilter.java:172)
    at org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:193)
    at org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1601)
    at org.kohsuke.stapler.compression.CompressionFilter.doFilter(CompressionFilter.java:53)
    at org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:193)
    at org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1601)
    at hudson.util.CharacterEncodingFilter.doFilter(CharacterEncodingFilter.java:86)
    at org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:193)
    at org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1601)
    at org.kohsuke.stapler.DiagnosticThreadNameFilter.doFilter(DiagnosticThreadNameFilter.java:30)
    at org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:193)
    at org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1601)
    at jenkins.security.SuspiciousRequestFilter.doFilter(SuspiciousRequestFilter.java:38)
    at org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:193)
    at org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1601)
    at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:548)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
    at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:578)
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
    at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:235)
    at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1624)
    at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233)
    at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1434)
    at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:188)
    at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:501)
    at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1594)
    at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:186)
    at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1349)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
    at org.eclipse.jetty.server.Server.handle(Server.java:516)
    at org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:388)
    at org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:633)
    at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:380)
    at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:277)
    at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311)
    at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:105)
    at org.eclipse.jetty.io.ChannelEndPoint$1.run(ChannelEndPoint.java:104)
    at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:338)
    at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:315)
    at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173)
    at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:131)
    at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:386)
    at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883)
    at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034)
    at java.base/java.lang.Thread.run(Thread.java:829)

Can be reproduced by checking out 742e86e:

scripts/init-cluster.sh
docker build -t gitops-playground .
docker run --rm -it -u $(id -u) -v ~/.k3d/kubeconfig-gitops-playground.yaml:/home/.kube/config --net=host gitops-playground --yes --argocd --debug --trace  

Which will likely fail.

The error itself can then be reproduced like so:

  • Download a plugin .jpi or copy one from the gitops-playground docker image, e.g. /gop/jenkins-plugins/plugins/ace-editor.jpi
  • Run
curl -s \
  -H Jenkins-Crumb:$(curl -s --cookie-jar /tmp/cookies --retry 3 --retry-delay 1 -u admin:admin --write-out '%{json}' http://localhost:9090/crumbIssuer/api/json |  jq -rsc '(.[0] | .crumb)') \
  --cookie /tmp/cookies -u admin:admin --fail -L -o /dev/null --write-out '%{http_code}' '-F [email protected]' \
http://localhost:9090/pluginManager/uploadPlugin

The error also occurs when using httpie.

http --form -a admin:admin localhost:9090/pluginManager/uploadPlugin\?Jenkins-Crumb\=$CRUMB name=name [email protected] Jenkins-Crumb:$CRUMB Cookie:JSESSIONID.502b70ef=node0s5ssv7ufstw31rmutq140gec01507.node0
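
As a possible workaround sketch (not verified against Jenkins 2.332), the HTTP upload can be bypassed by placing the .jpi directly into the plugin directory and restarting; for the playground's in-cluster Jenkins this would have to happen on the pod's JENKINS_HOME volume:

# Workaround sketch, assuming direct access to JENKINS_HOME (e.g. inside the Jenkins pod)
cp ace-editor.jpi "$JENKINS_HOME/plugins/"
# Safe restart so Jenkins picks up the plugin; may additionally require a Jenkins crumb header
curl -X POST -u admin:admin http://localhost:9090/safeRestart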

Jenkins CASC: Pin plugin dependencies

Current

The plugins that need to be installed are pinned in plugins.txt. Those plugins may have dependencies on plugins we do not pin. When downloading and installing our pinned plugins, these resolved dependencies get installed in their latest version.

Issue

Updates to those transitive dependencies may lead to a broken Jenkins, as an updated plugin may e.g. require a higher Jenkins version. In this case, a commit that previously ran and built successfully won't work anymore. This does in fact break our playground without anyone touching it.

Possible solution

We could determine all the plugins we use plus their transitive dependencies and pin them all in plugins.txt. This ensures build stability and truly reproducible builds of the infrastructure.
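
A hedged sketch of how the full set could be determined, assuming plugins.txt uses the common <plugin-id>:<version> format and the Jenkins plugin installation manager tool (jenkins-plugin-cli) is available; the plugin names and versions below are illustrative only:

# List the effective plugin set (pinned plugins plus resolved dependencies) ...
jenkins-plugin-cli --plugin-file plugins.txt --list
# ... then pin every resulting plugin:version explicitly in plugins.txt, e.g.:
# git:4.11.0
# scm-api:2.6.5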

Jenkins jobs do not build on commits in scmm

Behaviour:

Jenkins jobs within a multibranch pipeline do not have any build triggers in place. A commit to the corresponding repository in SCMM does not trigger a build.

Wanted Behaviour:

Jenkins jobs poll SCMM and start building upon commits to its repositories.

Steps to reproduce:

  • make changes to e.g. fluxv1/petclinic-plain
  • commit changes to scmm
  • watch jenkins (does not build, have to start it manually)
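
Until polling or a webhook is in place, one hedged alternative to starting builds manually is the Git plugin's notifyCommit endpoint; the repository URL below is a placeholder and the Jenkins address is the local default used elsewhere in this document:

# Placeholder: REPO_URL must match the clone URL configured in the Jenkins job
REPO_URL='http://scmm/scm/repo/fluxv1/petclinic-plain'
# Newer git plugin versions may additionally require an access token (?token=...)
curl -s "http://localhost:9090/git/notifyCommit?url=${REPO_URL}"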

Spinner prints "ok" on failure

The new apply.sh looks really awesome!

One downside: when one of the initializations fails, it still prints ok and continues the initialization.

I'd prefer it to print failed and stop the execution.

You can reproduce this by making an init function return 1 for example.
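
A minimal bash sketch of the desired behaviour; run_with_spinner and some_init_step are illustrative names, not the actual functions in apply.sh:

# Sketch only: fail fast when a wrapped init step returns non-zero
run_with_spinner() {
  "$@" &
  local pid=$!
  # (spinner animation omitted)
  if wait "$pid"; then
    echo "ok"
  else
    echo "failed"
    exit 1
  fi
}

run_with_spinner some_init_step   # some_init_step stands in for a real init function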

Fluxv2 namespace not deleted on destroy

destroy.sh hangs when calling kubectl delete -f fluxv2/clusters/k8s-gitops-playground/fluxv2/gotk-gitrepository.yaml.

Ignoring this results in errors on the next apply.sh:

Error from server (Forbidden): error when creating "STDIN": secrets "gitops-scmm" is forbidden: unable to create new content in namespace fluxv2 because it is being terminated

The error leads to applyBasicK8sResources being stopped silently, which results in the local registry not being installed.

As a workaround, the namespace can be force terminated like so:

kubectl proxy &
kubectl get ns fluxv2 -o json | \
  jq '.spec.finalizers=[]' | \
  curl -X PUT http://localhost:8001/api/v1/namespaces/fluxv2/finalize \
    -H "Content-Type: application/json" --data @-
kill $!
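
Alternatively, a hedged sketch that clears the finalizers on the stuck Flux resource itself instead of the whole namespace; the namespace and the resource name flux-system are assumptions, check gotk-gitrepository.yaml for the actual values:

# Sketch: clear finalizers on the stuck GitRepository so the namespace can terminate normally
kubectl -n fluxv2 patch gitrepository flux-system \
  --type merge -p '{"metadata":{"finalizers":null}}'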

Failure when `--argocd` is not passed

bash <(curl -s \
  https://raw.githubusercontent.com/cloudogu/gitops-playground/main/scripts/init-cluster.sh) \
  && sleep 2 && docker run -it -u $(id -u) \
    -v ~/.k3d/kubeconfig-gitops-playground.yaml:/home/.kube/config \
    --net=host \
    ghcr.io/cloudogu/gitops-playground:d3c8c75 --yes -x
13:00:17.817 [main] INFO  com.cloudogu.gitops.Feature - Installing Feature ArgoCD
13:00:17.817 [main] INFO  c.c.gitops.features.argocd.ArgoCD - Cloning Repositories
13:00:17.817 [main] DEBUG c.c.gitops.features.argocd.ArgoCD - Cloning petclinic base repo, revision 32c8653, from /gitops/repos/spring-petclinic.git
13:00:21.278 [main] DEBUG c.c.gitops.features.argocd.ArgoCD - Finished cloning petclinic base repo
13:00:21.278 [main] DEBUG com.cloudogu.gitops.scmm.ScmmRepo - Cloning argocd/cluster-resources repo
org.eclipse.jgit.api.errors.InvalidRemoteException: Invalid remote: origin
        at org.eclipse.jgit.api.FetchCommand.call(FetchCommand.java:246)
        at org.eclipse.jgit.api.CloneCommand.fetch(CloneCommand.java:325)
        at org.eclipse.jgit.api.CloneCommand.call(CloneCommand.java:191)
        at com.cloudogu.gitops.scmm.ScmmRepo.gitClone(ScmmRepo.groovy:128)
        at com.cloudogu.gitops.scmm.ScmmRepo.cloneRepo(ScmmRepo.groovy:56)
        at com.cloudogu.gitops.features.argocd.ArgoCD$RepoInitializationAction.initLocalRepo(ArgoCD.groovy:289)
        at com.cloudogu.gitops.features.argocd.ArgoCD$_enable_lambda1.doCall(ArgoCD.groovy:106)
        at [email protected]/java.util.ArrayList.forEach(ArrayList.java:1511)
        at com.cloudogu.gitops.features.argocd.ArgoCD.enable(ArgoCD.groovy:105)
        at com.cloudogu.gitops.Feature.install(Feature.groovy:11)
        at com.cloudogu.gitops.Application$_start_lambda1.doCall(Application.groovy:23)
        at [email protected]/java.util.ArrayList.forEach(ArrayList.java:1511)
        at com.cloudogu.gitops.Application.start(Application.groovy:22)
        at com.cloudogu.gitops.cli.GitopsPlaygroundCli.run(GitopsPlaygroundCli.groovy:173)
        at picocli.CommandLine.executeUserObject(CommandLine.java:2026)
        at picocli.CommandLine.access$1500(CommandLine.java:148)
        at picocli.CommandLine$RunLast.executeUserObjectOfLastSubcommandWithSameParent(CommandLine.java:2461)
        at picocli.CommandLine$RunLast.handle(CommandLine.java:2453)
        at picocli.CommandLine$RunLast.handle(CommandLine.java:2415)
        at picocli.CommandLine$AbstractParseResultHandler.execute(CommandLine.java:2273)
        at picocli.CommandLine$RunLast.execute(CommandLine.java:2417)
        at com.cloudogu.gitops.cli.GitopsPlaygroundCliMain.executionStrategy(GitopsPlaygroundCliMain.groovy:23)
        at picocli.CommandLine.execute(CommandLine.java:2170)
        at com.cloudogu.gitops.cli.GitopsPlaygroundCliMain.exec(GitopsPlaygroundCliMain.groovy:35)
        at com.cloudogu.gitops.cli.GitopsPlaygroundCliMain.main(GitopsPlaygroundCliMain.groovy:13)
Caused by: org.eclipse.jgit.errors.NoRemoteRepositoryException: http://localhost:9091/scm/repo/argocd/cluster-resources: http://localhost:9091/scm/repo/argocd/cluster-resources/info/refs?service=git-upload-pack not found: Not Found
        at org.eclipse.jgit.transport.TransportHttp.createNotFoundException(TransportHttp.java:600)
        at org.eclipse.jgit.transport.TransportHttp.connect(TransportHttp.java:668)
        at org.eclipse.jgit.transport.TransportHttp.openFetch(TransportHttp.java:465)
        at org.eclipse.jgit.transport.FetchProcess.executeImp(FetchProcess.java:153)
        at org.eclipse.jgit.transport.FetchProcess.execute(FetchProcess.java:105)
        at org.eclipse.jgit.transport.Transport.fetch(Transport.java:1462)
        at org.eclipse.jgit.api.FetchCommand.call(FetchCommand.java:238)
        ... 24 more

Builds in k3d fail on docker push with bind-localhost=false

Error: Get https://172.31.0.2:31826/v2/: http: server gave HTTP response to HTTPS client.

This can even be reproduced without the gitops playground:

k3d cluster create reg-test
helm upgrade -i docker-registry --set service.type=LoadBalancer --version 1.9.4 stable/docker-registry
# wait for external ip
sleep 10
EXT_IP=$(kubectl get svc docker-registry -o go-template --template='{{range .status.loadBalancer.ingress}}{{.ip}}{{end}}')
NODE_PORT=$(kubectl get svc docker-registry -o jsonpath="{.spec.ports[0].nodePort}")

docker pull hello-world
docker tag hello-world $EXT_IP:$NODE_PORT/image
docker push $EXT_IP:$NODE_PORT/image
# e.g. Get https://172.31.0.2:31357/v2/: http: server gave HTTP response to HTTPS client
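
The message means the Docker client insists on TLS while the registry only speaks plain HTTP. A possible workaround sketch (not the playground's built-in fix) is to declare the registry as insecure for the local Docker daemon, using the EXT_IP and NODE_PORT from above:

# Sketch: mark the plain-HTTP registry as insecure for the local Docker daemon.
# Careful: this overwrites an existing /etc/docker/daemon.json - merge manually if one exists.
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "insecure-registries": ["$EXT_IP:$NODE_PORT"]
}
EOF
sudo systemctl restart docker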

Add Webhook receiver to Flux for Push-like experience

  • Add a file webhook-receiver-scm.yaml to fluxv2/gitops/code/sources/main/clusters/gitops-playground/flux-system/:
  # Using webhook receivers make pull-based pipelines as responsive as push-based pipelines.
  # https://fluxcd.io/flux/guides/webhook-receivers/
  ---
  apiVersion: v1
  kind: Service
  metadata:
    name: receiver
    namespace: flux-system
  spec:
    type: LoadBalancer
    selector:
      app: notification-controller
    ports:
      - name: http
        port: 80
        protocol: TCP
        targetPort: 9292
  ---
  apiVersion: notification.toolkit.fluxcd.io/v1beta1
  kind: Receiver
  metadata:
    name: scmm-receiver-flux-system
    namespace: flux-system
  spec:
    type: generic
    secretRef:
      name: webhook-token
    resources:
      - apiVersion: source.toolkit.fluxcd.io/v1beta2
        kind: GitRepository
        name: flux-system
        namespace: flux-system
  • Add webhook-receiver-scm.yaml to the resources list in kustomization.yaml (see the sketch after this list)
  • Find out generated HOOK_URL:
kubectl get receivers scmm-receiver-flux-system -o go-template --template="{{.status.url}}"
  • On a remote SCMM, wait for the LoadBalancer to get an external IP
  • Add webhook to SCMM
curl "$SCM_HOST/scm/api/v2/plugins/webhook/fluxv2/gitops" -X PUT -H 'Content-Type: application/json' \
   --data-raw '{"webhooks":[{"name":"SimpleWebHook","configuration":{"urlPattern":"http://$EXTERNAL_IP_OR_INTERNAL_SERVICE_NAME/$HOOK_URL","executeOnEveryCommit":false,"sendCommitData":false,"method":"AUTO"},"valid":true}]}'
  • Create a token. If we use type: generic, the token is not validated, but we need to create it to avoid errors. So the $TOKEN value is ignored for generic.
    If the SCMM Webhook Plugin supports setting headers by then, we could use type: gitlab and set the X-Gitlab-Token header:
kubectl -n flux-system create secret generic webhook-token \
  --from-literal=token=$TOKEN
  • Remove short intervals from Flux deployments
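
For the kustomization step above, a minimal sketch, assuming the kustomize CLI is available and run inside the flux-system directory named in the first bullet:

# Sketch: register the new manifest in the kustomization's resources list
cd fluxv2/gitops/code/sources/main/clusters/gitops-playground/flux-system
kustomize edit add resource webhook-receiver-scm.yaml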
