
kubernetes-starter-kit-developers's Introduction

Day-2 Operations-ready DigitalOcean Kubernetes (DOKS) for Developers

Webinar video from 9/28/2021

Automating GitOps and Continuous Delivery With DigitalOcean Kubernetes

In this tutorial, we give developers a hands-on introduction to getting started with an operations-ready Kubernetes cluster on DigitalOcean Kubernetes (DOKS). Kubernetes is easy to set up, and developers can use identical tooling and configurations across any cloud. Making Kubernetes operationally ready requires a few more tools, which are described in this tutorial.

Resources used by the Starter Kit include the following:

  • DigitalOcean Droplets (for DOKS cluster).

  • DigitalOcean Load Balancer.

  • DigitalOcean Block Storage for persistent storage.

  • DigitalOcean Spaces for object storage.

  • Kubernetes Helm Charts:

    ingress-nginx, ingress-ambassador, prometheus-stack, loki-stack, velero, triliovault, sealed-secrets

Notes:

  • The main branch should generally work, but note that it is updated frequently. To be safe, pick a specific tag corresponding to a DOKS release (e.g. v1.21.3, v1.21.5).
  • Tags mark specific points in a repository’s history where an important change was applied.

Remember to verify and delete the resources at the end of the tutorial if you no longer need them.

Operations-ready Setup Overview

Below is a diagram that gives a high-level overview of the Starter Kit setup, as well as the main steps:

Setup Overview

Table of Contents

  1. Scope
  2. Set up DO Kubernetes
  3. Set up DO Container Registry
  4. Set up Ingress Controller
  5. Set up Observability
  6. Set up Backup and Restore
  7. Kubernetes Secrets
  8. Scaling Application Workloads
  9. Continuous Delivery using GitOps
  10. Estimate Resource Usage of Starter Kit

Scope

This tutorial demonstrates the basic setup you need to be operations-ready.

All the steps are done manually using the command line interface (CLI). If you need end-to-end automation, refer to the last section.

None of the installed tools are exposed using Ingress or Load Balancer. To access the console for individual tools, we use kubectl port-forward.
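
For example, to open the Grafana console installed later in the tutorial (the service and namespace names assume the Starter Kit defaults and may differ in your setup):

kubectl port-forward svc/kube-prom-stack-grafana 3000:80 -n monitoring

Grafana is then available at http://localhost:3000.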

We will use brew (on macOS) to install the required command-line utilities on our local machine, and then use those utilities to work with the DOKS cluster.

For every service that gets deployed, we will enable metrics and logs. At the end, we will review the overhead from all these additional tools and services. That gives an idea of what it takes to be operations-ready after your first cluster install.
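
For example, assuming metrics-server is available in the cluster, a quick way to gauge the extra resource usage is:

kubectl top nodes

kubectl top pods --all-namespaces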

This tutorial uses manifest files from this repository, so we recommend cloning it to your local environment. The commands below clone the repository and, optionally, check out a tested tag:

git clone https://github.com/digitalocean/Kubernetes-Starter-Kit-Developers.git

cd Kubernetes-Starter-Kit-Developers

git checkout <TAG>   # optional: pick a tested tag corresponding to a DOKS release, e.g. v1.21.3

Notes:

  • For this Starter Kit, we recommend starting with a node pool of higher-capacity nodes (say, 4 CPU / 8 GB RAM) and at least 2 nodes. Otherwise, review and increase node capacity if you run into pods stuck in the Pending state (see the example commands after these notes).
  • We customize the values files for the Helm installs of individual components. To get the original values file, use helm show values. For example: helm show values prometheus-community/kube-prometheus-stack --version 30.0.1.
  • There are multiple places where you will change a manifest file to include a secret token for your cluster. Please handle these secrets carefully, and do not commit them to public Git repositories. A safer method is Sealed Secrets or the External Secrets Operator, explained in Kubernetes Sealed Secrets. The sample manifests provided in the Section 14 - Continuous Delivery using GitOps section show you how to use Sealed Secrets in combination with Flux CD, and how to reference sensitive data in each manifest that requires secrets.
  • To keep the components up to date, Helm lets you upgrade them to the latest (or any desired) version. For example: helm upgrade kube-prom-stack prometheus-community/kube-prometheus-stack --version 30.0.0 --namespace monitoring -f "04-setup-prometheus-stack/assets/manifests/prom-stack-values-v30.0.1.yaml".
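
A quick way to check for pending pods and per-node resource allocation (a minimal sketch, assuming kubectl is already configured for the cluster):

kubectl get pods --all-namespaces --field-selector=status.phase=Pending

kubectl describe nodes | grep -A 5 "Allocated resources"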

If you want to automate installation for all the components, refer to Section 14 - Continuous Delivery using GitOps.

Go to Section 1 - Set up DigitalOcean Kubernetes.

kubernetes-starter-kit-developers's People

Contributors

bhagirathhapse, bikram20, chandansagar, dchebakov, facklambda, kumaripurnima, leonvisscher, lgarbo, saadismail, sharmita3, suruaku, takotab, v-bpastiu, v-ctiutiu, vladciobancai, vomba, yusufkaratoprak


kubernetes-starter-kit-developers's Issues

Nginx ingress chapter host rules point to the wrong service port

Description

As per discussion #149, it seems a mistake slipped in: the Nginx rules used in the ingress chapter point to the container port (8080). Nginx Ingress Controller rules should point to the service port, not the container port.

The echo deployment spec looks like below:

...
spec:
    containers:
      - name: echo
        image: jmalloc/echo-server
        ports:
          - name: http
            containerPort: 8080
...

The echo service spec looks like below:

...
spec:
  ports:
    - name: http
      port: 80
      targetPort: 8080
...

The ingress host rule spec for the echo service should look like:

...
spec:
  rules:
    - host: echo.starter-kit.online
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: echo
                port:
                  number: 80
...

Kubeseal CLI not able to export the Sealed Secrets controller public certificate

Bug Report

The kubeseal CLI is not able to export the Sealed Secrets controller public certificate. Running the command below fails with the error: cannot fetch certificate: no endpoints available for service "http:sealed-secrets-controller:":

kubeseal --fetch-cert --controller-namespace=sealed-secrets > pub-sealed-secrets.pem

Describe the bug

More info can be found here and here.

Upgrading to the latest version (0.17.3) of the kubeseal CLI seems to fix the issue.
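
For example, assuming kubeseal was installed via Homebrew as recommended earlier in the Starter Kit, the upgrade and a re-test look like this:

brew upgrade kubeseal

kubeseal --version

kubeseal --fetch-cert --controller-namespace=sealed-secrets > pub-sealed-secrets.pem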

Affected Components

Kubernetes Sealed Secrets chapter.

Autoscaling application workloads chapter

Description

It would be a very nice (and practical) addition to have a dedicated chapter in the Starter Kit for Horizontal Pod Autoscaling (HPA) and, in the future, Vertical Pod Autoscaling (VPA). For now, a generic name for the chapter is suggested, such as 09-autoscaling-application-workloads, with HPAs discussed alongside metrics-server.

Then, we can also bring Prometheus into the mix as an example, to obtain the required metrics for HPA (via prometheus-adapter). We already have it in place, and most users probably do as well.

In addition, HPA examples (and tooling) are required to simulate load, so we can observe how the system behaves and how HPAs respond to external load, scaling application workloads up or down. A minimal example follows.
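
As a minimal illustration of the proposed content, an HPA could be created imperatively against the echo deployment from the ingress chapter (the backend namespace is an assumption here, and metrics-server plus CPU requests on the deployment are required):

kubectl autoscale deployment echo --cpu-percent=50 --min=2 --max=5 --namespace backend

kubectl get hpa --namespace backend --watch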

Restic integration for velero

Is your feature request related to a problem? Please describe.

Restic integration is missing for velero.

Describe the solution you'd like

Enable Restic integration, and describe how to use it and any additional configuration that is needed (see the sketch below).
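
As a sketch, Restic support is enabled at install time (for example via the --use-restic flag of velero install, or the equivalent Helm chart value), and individual pod volumes are then opted in with an annotation (the names below are placeholders):

kubectl annotate pod <POD_NAME> backup.velero.io/backup-volumes=<VOLUME_NAME> -n <APP_NAMESPACE>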

Describe alternatives you've considered

none

Additional context

none

Process for resizing the cluster

We bootstrapped the cluster following these docs and created a fully functioning deployment of the app. Eventually, it became clear that we need more nodes.

If I understand correctly, all I need to do is update main.tf with more nodes and follow the same process of applying the changes. However, when I tried to run terraform plan -out priz_prod_cluster.out, I got an error:

╷
│ Error: Error retrieving Kubernetes cluster: GET https://api.digitalocean.com/v2/kubernetes/clusters/f9883560-f07a-4e54-9520-97f3210cb47b: 401 Unable to authenticate you
│
│   with module.doks_flux_cd.digitalocean_kubernetes_cluster.primary,
│   on .terraform/modules/doks_flux_cd/create-doks-with-terraform-flux/main.tf line 39, in resource "digitalocean_kubernetes_cluster" "primary":
│   39: resource "digitalocean_kubernetes_cluster" "primary" {

Is that expected? Am I doing anything wrong?
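
For reference, a 401 from the DigitalOcean API usually points at the token used by the Terraform provider rather than at the plan itself. A quick sanity check (a sketch, assuming the provider reads the DIGITALOCEAN_TOKEN environment variable or the token variable set in main.tf):

export DIGITALOCEAN_TOKEN="<YOUR_DO_API_TOKEN>"

doctl auth init --access-token "$DIGITALOCEAN_TOKEN"   # verify the token itself is valid

terraform plan -out priz_prod_cluster.out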

Default Grafana login credentials

I am following this kit step by step and got to the Grafana section.
There is no information about the default Grafana login credentials, so I took the ones from the official docs. However, admin as both username and password does not work.

How can I resolve that?
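
One way to retrieve the generated credentials is to read them from the Grafana secret created by the Helm release (a sketch, assuming the kube-prom-stack release in the monitoring namespace used elsewhere in the Starter Kit):

kubectl get secret kube-prom-stack-grafana -n monitoring -o jsonpath='{.data.admin-user}' | base64 -d; echo

kubectl get secret kube-prom-stack-grafana -n monitoring -o jsonpath='{.data.admin-password}' | base64 -d; echo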

Loki/promtail configuration for including/excluding namespaces from sending logs

Loki, by default, captures ALL logs in the Starter Kit's default installation. That is roughly 100K bytes/sec of inbound/outbound traffic, which is far too much for users who do not need those logs.

  • Document how users can exclude certain namespaces from sending logs.
  • Exclude the kube-system namespace from Loki logs by default, and document how users can re-enable it.

[Proxy Protocol] Fix proxy protocol issue from the Ingress Controller chapter

Problem Description

By default, the protocol used by DigitalOcean Load Balancers is tcp. When the proxy protocol is enabled, requests fail to reach the backend Droplets unless the do-loadbalancer-tls-passthrough flag is also enabled.
PR #90 fixes the problem partially.

In the Ingress Controller chapter, a link to the Service Annotations page should be added, so users can configure other service annotations when the proxy protocol is enabled.

Impacted Areas

Setup Ingress Controller chapter.

Prerequisites

N/A.

Steps to Reproduce

- Nginx

  1. Create / set up the Ingress Controller following steps 01-05 of Ingress Controller Nginx
  2. Validate the Ingress Controller:
Request served by echo-5d8d65c665-fpbwx

HTTP/1.1 GET /

Host: ....
X-Forwarded-Scheme: https
X-Scheme: https
User-Agent: curl/7.77.0
X-Request-Id: 6b066deeed56ee989b49269d5dce24b5
X-Real-Ip: 10.110.0.3
X-Forwarded-Host: echo.vlad.bond0.site
X-Forwarded-Port: 443
X-Forwarded-Proto: https
X-Forwarded-For: 10.110.0.3
  3. Run the steps from Proxy Protocol to enable it
  4. The validation will fail with the following errors:
<html>
<head><title>400 Bad Request</title></head>
<body>
<center><h1>400 Bad Request</h1></center>
<hr><center>nginx</center>
</body>
</html>
  5. At this stage, the Ingress Controller is broken, and even a helm rollback to a previous revision will not fix it.

- Ambassador

  1. Create / set up the Ingress Controller following steps 01-05 of Ingress Controller Ambassador

  2. Validate the Ingress Controller

HTTP/1.1 200 OK
content-type: text/plain
date: Wed, 22 Dec 2021 08:53:28 GMT
content-length: 356
x-envoy-upstream-service-time: 0
server: envoy

Request served by echo-5d8d65c665-8spcr

HTTP/1.1 GET /

Host: ....
X-Forwarded-For: 79.119.116.72
X-Forwarded-Proto: https
X-Envoy-Original-Path: /echo/
User-Agent: curl/7.77.0
Accept: */*
X-Envoy-External-Address: 79.119.116.72
X-Request-Id: a3a148ea-3dee-4596-878e-d924b54be45f
X-Envoy-Expected-Rq-Timeout-Ms: 3000
Content-Length: 0
  3. Run the steps from Proxy Protocol to enable it

  4. The validation will fail with the following errors:

curl: (52) Empty reply from server
curl: (35) LibreSSL SSL_connect: SSL_ERROR_SYSCALL in connection to <host>:443

Expected Results

The Ingress Controller should balance the traffic in both scenarios

Proposal

- Nginx

The annotation service.beta.kubernetes.io/do-loadbalancer-tls-passthrough: "true" should be added to the Helm values file 03-setup-ingress-controller/assets/manifests/nginx-values-v4.0.6.yaml:

  service:
    type: LoadBalancer
    annotations:
      # Enable proxy protocol
      service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: "true"
      service.beta.kubernetes.io/do-loadbalancer-tls-passthrough: "true"
  config:
    use-proxy-protocol: "true"

This enables the proxy protocol in the Nginx Ingress Controller config on the pod(s).

- Ambassador

The following annotations should be added/enabled in 03-setup-ingress-controller/assets/manifests/ambassador-values-v6.7.13.yaml:

service:
  type: LoadBalancer
  annotations:
#     # You can keep your existing LB when migrating to a new DOKS cluster, or when reinstalling AES
#     kubernetes.digitalocean.com/load-balancer-id: "<YOUR_DO_LB_ID_HERE>"
#     service.kubernetes.io/do-loadbalancer-disown: false
      service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: "true"
      service.beta.kubernetes.io/do-loadbalancer-tls-passthrough: "true"

Update the configuration by running helm upgrade:

HELM_CHART_VERSION="6.7.13"                                                                                                                                                                                      

helm upgrade ambassador datawire/ambassador --version "$HELM_CHART_VERSION" \
 --namespace ambassador  -f "03-setup-ingress-controller/assets/manifests/ambassador-values-v${HELM_CHART_VERSION}.yaml"

Enable the proxy protocol for Ambassador using the steps described at https://github.com/digitalocean/Kubernetes-Starter-Kit-Developers/blob/main/03-setup-ingress-controller/ambassador.md#step-6---enabling-proxy-protocol.

rbac.authorization.k8s.io/v1beta1 ClusterRole is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRole

Thanks for the Starter Kit, it is exactly what I needed.
I am following the instructions, and when trying to install Ambassador,

I get a bunch of deprecation warnings:

$ helm install ambassador datawire/ambassador --version "$HELM_CHART_VERSION" --namespace ambassador --create-namespace -f deployment/ambassador-values-v${HELM_CHART_VERSION}.yaml
W0120 16:56:39.071520   31941 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W0120 16:56:39.161737   31941 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W0120 16:56:39.252721   31941 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W0120 16:56:39.353221   31941 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W0120 16:56:39.461588   31941 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W0120 16:56:39.654740   31941 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W0120 16:56:39.797118   31941 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W0120 16:56:39.904089   31941 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W0120 16:56:40.006360   31941 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W0120 16:56:40.140132   31941 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W0120 16:56:40.337424   31941 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W0120 16:56:40.505638   31941 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W0120 16:56:41.189104   31941 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W0120 16:56:41.316384   31941 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W0120 16:56:41.457677   31941 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W0120 16:56:41.619867   31941 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W0120 16:56:41.742662   31941 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W0120 16:56:41.897418   31941 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W0120 16:56:42.086630   31941 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W0120 16:56:42.312962   31941 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W0120 16:56:42.379048   31941 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W0120 16:56:42.463170   31941 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W0120 16:56:42.563609   31941 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W0120 16:56:42.655184   31941 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W0120 16:56:42.730621   31941 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W0120 16:56:42.827163   31941 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W0120 16:56:42.913017   31941 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W0120 16:56:43.066234   31941 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W0120 16:56:43.205880   31941 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W0120 16:56:43.242221   31941 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W0120 16:56:43.272706   31941 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W0120 16:56:43.344973   31941 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W0120 16:56:43.373348   31941 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W0120 16:56:43.400690   31941 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W0120 16:56:43.427598   31941 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W0120 16:56:43.453051   31941 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W0120 16:56:43.479440   31941 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W0120 16:56:43.505618   31941 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
manifest_sorter.go:192: info: skipping unknown hook: "crd-install"
manifest_sorter.go:192: info: skipping unknown hook: "crd-install"
manifest_sorter.go:192: info: skipping unknown hook: "crd-install"
manifest_sorter.go:192: info: skipping unknown hook: "crd-install"
manifest_sorter.go:192: info: skipping unknown hook: "crd-install"
manifest_sorter.go:192: info: skipping unknown hook: "crd-install"
manifest_sorter.go:192: info: skipping unknown hook: "crd-install"
manifest_sorter.go:192: info: skipping unknown hook: "crd-install"
manifest_sorter.go:192: info: skipping unknown hook: "crd-install"
manifest_sorter.go:192: info: skipping unknown hook: "crd-install"
manifest_sorter.go:192: info: skipping unknown hook: "crd-install"
manifest_sorter.go:192: info: skipping unknown hook: "crd-install"
manifest_sorter.go:192: info: skipping unknown hook: "crd-install"
manifest_sorter.go:192: info: skipping unknown hook: "crd-install"
manifest_sorter.go:192: info: skipping unknown hook: "crd-install"
manifest_sorter.go:192: info: skipping unknown hook: "crd-install"
manifest_sorter.go:192: info: skipping unknown hook: "crd-install"
manifest_sorter.go:192: info: skipping unknown hook: "crd-install"
manifest_sorter.go:192: info: skipping unknown hook: "crd-install"
W0120 16:56:45.588166   31941 warnings.go:70] rbac.authorization.k8s.io/v1beta1 ClusterRole is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRole
W0120 16:56:45.740206   31941 warnings.go:70] rbac.authorization.k8s.io/v1beta1 ClusterRole is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRole
W0120 16:56:45.875855   31941 warnings.go:70] rbac.authorization.k8s.io/v1beta1 ClusterRole is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRole
W0120 16:56:46.078671   31941 warnings.go:70] rbac.authorization.k8s.io/v1beta1 ClusterRole is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRole
W0120 16:56:46.322084   31941 warnings.go:70] rbac.authorization.k8s.io/v1beta1 ClusterRole is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRole
W0120 16:56:46.449950   31941 warnings.go:70] rbac.authorization.k8s.io/v1beta1 ClusterRole is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRole
W0120 16:56:46.607837   31941 warnings.go:70] rbac.authorization.k8s.io/v1beta1 ClusterRole is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRole
W0120 16:56:46.769699   31941 warnings.go:70] rbac.authorization.k8s.io/v1beta1 ClusterRole is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRole
W0120 16:56:46.847268   31941 warnings.go:70] rbac.authorization.k8s.io/v1beta1 ClusterRole is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRole
W0120 16:56:46.926276   31941 warnings.go:70] rbac.authorization.k8s.io/v1beta1 ClusterRole is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRole
W0120 16:56:47.006129   31941 warnings.go:70] rbac.authorization.k8s.io/v1beta1 ClusterRoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRoleBinding
W0120 16:56:47.086689   31941 warnings.go:70] rbac.authorization.k8s.io/v1beta1 ClusterRoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRoleBinding
W0120 16:56:47.219186   31941 warnings.go:70] rbac.authorization.k8s.io/v1beta1 Role is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 Role
W0120 16:56:47.296657   31941 warnings.go:70] rbac.authorization.k8s.io/v1beta1 RoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 RoleBinding
W0120 16:56:50.283837   31941 warnings.go:70] rbac.authorization.k8s.io/v1beta1 ClusterRole is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRole
W0120 16:56:50.283873   31941 warnings.go:70] rbac.authorization.k8s.io/v1beta1 ClusterRole is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRole
W0120 16:56:50.283837   31941 warnings.go:70] rbac.authorization.k8s.io/v1beta1 ClusterRole is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRole
W0120 16:56:50.284629   31941 warnings.go:70] rbac.authorization.k8s.io/v1beta1 ClusterRole is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRole
W0120 16:56:50.284629   31941 warnings.go:70] rbac.authorization.k8s.io/v1beta1 ClusterRole is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRole
W0120 16:56:50.284707   31941 warnings.go:70] rbac.authorization.k8s.io/v1beta1 ClusterRole is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRole
W0120 16:56:50.284629   31941 warnings.go:70] rbac.authorization.k8s.io/v1beta1 ClusterRole is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRole
W0120 16:56:50.284638   31941 warnings.go:70] rbac.authorization.k8s.io/v1beta1 ClusterRole is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRole
W0120 16:56:50.284629   31941 warnings.go:70] rbac.authorization.k8s.io/v1beta1 ClusterRole is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRole
W0120 16:56:50.284629   31941 warnings.go:70] rbac.authorization.k8s.io/v1beta1 ClusterRole is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRole
W0120 16:56:50.323620   31941 warnings.go:70] rbac.authorization.k8s.io/v1beta1 ClusterRoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRoleBinding
W0120 16:56:50.326871   31941 warnings.go:70] rbac.authorization.k8s.io/v1beta1 ClusterRoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRoleBinding
W0120 16:56:50.382133   31941 warnings.go:70] rbac.authorization.k8s.io/v1beta1 Role is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 Role
W0120 16:56:50.413209   31941 warnings.go:70] rbac.authorization.k8s.io/v1beta1 RoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 RoleBinding
NAME: ambassador
LAST DEPLOYED: Thu Jan 20 16:56:44 2022
NAMESPACE: ambassador
STATUS: deployed
REVISION: 1
NOTES:
-------------------------------------------------------------------------------
Congratulations! You have successfully installed The Ambassador Edge Stack!
-------------------------------------------------------------------------------
NOTE: You are currently running The Ambassador Edge Stack in EVALUATION MODE.

Request a free community license key at https://SERVICE_IP/edge_stack_admin/#dashboard
to unlock all the features of The Ambassador Edge Stack and update the value of
licenseKey.value in your values.yaml file.
-------------------------------------------------------------------------------
WARNING:

With your installation of the Ambassador Edge Stack, you have created a:

- AuthService named ambassador-auth

- RateLimitService named ambassador-ratelimit

in the ambassador namespace.

Please ensure there is not another of these resources configured in your cluster.
If there is, please either remove the old resource or run

helm upgrade ambassador -n ambassador --set authService.create=false --set RateLimit.create=false

For help, visit our Slack at http://a8r.io/Slack or view the documentation online at https://www.getambassador.io.

Is it expected? Can it be fixed?

The last message worries me more. I did not have any other instances of ambassador before the installation, and yet, there is a warning about an existing one.

Update step 7 alerting and notification

The alerting and notification chapter needs to be changed.
We should use the values manifest file for the Prometheus stack in this chapter, since the components are installed as a bundle (see the example below).
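
For example, Alertmanager rules and notification settings could be added to the same values file used for the Prometheus stack install, and then applied with a Helm upgrade (the chart version shown is illustrative):

helm upgrade kube-prom-stack prometheus-community/kube-prometheus-stack \
  --version 30.0.1 \
  --namespace monitoring \
  -f "04-setup-prometheus-stack/assets/manifests/prom-stack-values-v30.0.1.yaml"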

Update chapter 4 with sample app

Currently, we use Ambassador as the example when illustrating the Prometheus setup and configuration.
We should add a sample app, so we do not assume that the user installed Ambassador or Nginx in a previous step of the tutorial.

tls termination not working for 03-ingress-controller tutorial

Hi,
I followed each step in the tutorial for ingress-controller using DO. Everything seems to be working except the second service, the quote service. The echo service returns what is expected, but TLS termination does not seem to be working for quote. I get this when I try to curl:

HTTP/1.1 308 Permanent Redirect
Date: Thu, 14 Apr 2022 04:52:21 GMT
Content-Type: text/html
Content-Length: 164
Connection: keep-alive
Location: https://quote.mydomain.com

curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: https://curl.haxx.se/docs/sslcerts.html

curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.

I just did kubectl apply -f quote_host.yaml with the annotations and tls sections uncommented, plus some small changes. Did I miss a step? The tutorial doesn't explicitly lay out the steps for the quote service, so I just repeated the steps used for the echo service.

Would appreciate some assistance. Thanks.

Separate LB from DOKS

When the LB changes, so does the external IP address. That requires reconfiguration of DNS A records and, if there is a CDN, reconfiguration of the CDN as well.

Ideally, we want customers to simply reuse their existing LB and never have to delete it. We should make this the default practice (see the example below).
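
For example, the ID of an existing load balancer can be looked up with doctl and then pinned via the kubernetes.digitalocean.com/load-balancer-id annotation that is already present (commented out) in the Ambassador values file:

# List existing load balancers and note the ID to keep (doctl must be authenticated)
doctl compute load-balancer list --format ID,Name,IP

# Then set, in the ingress controller values file:
#   kubernetes.digitalocean.com/load-balancer-id: "<YOUR_DO_LB_ID_HERE>"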

Kubernetes event driven autoscaling

Streaming customers need autoscaling due to bandwidth constraints. Typically, most users autoscale based on CPU/memory, but video streaming hits the bandwidth constraint (egress is limited to 2 Gbps per Droplet) sooner than CPU/memory. KEDA (Kubernetes Event-Driven Autoscaling) can help with such use cases.
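
A minimal install sketch, assuming the upstream KEDA Helm chart:

helm repo add kedacore https://kedacore.github.io/charts

helm repo update

helm install keda kedacore/keda --namespace keda --create-namespace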

Question on costs

Is there a ballpark figure for the cost to run this? If I left it running for a month, what is the baseline cost involved with no workload?

Update starter kit tool's versions

Go through the Starter Kit, upgrade the tools used to the latest stable versions, and test.
Update the corresponding Marketplace 1-Clicks wherever appropriate.

Regenerate SSL certificates when updating hostname

I have one subdomain configured for my host and mapping (for testing). Everything worked great there.
So, I decided to switch to the real production URL: apii.priz.guru -> api.priz.guru

I am using the automated process, so once the changes were pushed, I switched the DNS to point to Ambassador. The switch worked, but now the certificate is invalid. How do I force the certificate to be regenerated?

error GitRepository/flux-system.flux-system

I have been following the automation tutorial. https://github.com/digitalocean/Kubernetes-Starter-Kit-Developers/tree/main/15-automate-with-terraform-flux

I've re-run it a few times (recreating clusters), but it seems to get stuck on creating all the necessary flux-system components.

When I run flux get all, then I get:

NAME                            READY   MESSAGE                                                         REVISION        SUSPENDED 
gitrepository/flux-system       False   auth error: knownhosts: illegal base64 data at input byte 5                     False 

And flux logs gives:

2021-12-09T16:07:35.606Z error GitRepository/flux-system.flux-system - Reconciler error auth secret error: Secret "flux-system" not found
2021-12-09T16:07:35.713Z error GitRepository/flux-system.flux-system - Reconciler error auth secret error: Secret "flux-system" not found
2021-12-09T16:07:35.897Z error GitRepository/flux-system.flux-system - Reconciler error auth secret error: Secret "flux-system" not found
2021-12-09T16:07:36.243Z error GitRepository/flux-system.flux-system - Reconciler error auth secret error: Secret "flux-system" not found
2021-12-09T16:07:36.913Z error GitRepository/flux-system.flux-system - Reconciler error auth secret error: Secret "flux-system" not found
2021-12-09T16:07:38.216Z error GitRepository/flux-system.flux-system - Reconciler error auth secret error: Secret "flux-system" not found
2021-12-09T16:07:40.812Z error GitRepository/flux-system.flux-system - Reconciler error auth error: knownhosts: illegal base64 data at input byte 5
2021-12-09T16:07:45.950Z error GitRepository/flux-system.flux-system - Reconciler error auth error: knownhosts: illegal base64 data at input byte 5
2021-12-09T16:07:56.233Z error GitRepository/flux-system.flux-system - Reconciler error auth error: knownhosts: illegal base64 data at input byte 5
2021-12-09T16:08:16.751Z error GitRepository/flux-system.flux-system - Reconciler error auth error: knownhosts: illegal base64 data at input byte 5
2021-12-09T16:08:57.749Z error GitRepository/flux-system.flux-system - Reconciler error auth error: knownhosts: illegal base64 data at input byte 5
2021-12-09T16:10:19.710Z error GitRepository/flux-system.flux-system - Reconciler error auth error: knownhosts: illegal base64 data at input byte 5
2021-12-09T16:13:03.599Z error GitRepository/flux-system.flux-system - Reconciler error auth error: knownhosts: illegal base64 data at input byte 5

It seems the various git credentials added in the main.tf file are right since files got added to the git_repository_sync_path that I supplied. However, these logs above suggest a related problem, where it can't access the GitRepository for other purposes.

In the GitHub PAT, I granted these permission scopes. Maybe that's not sufficient?

(screenshot of the granted GitHub PAT scopes)

If I look in .terraform/modules/create-doks-with-terraform-flux/provider.tf I see:

provider "github" {
  owner = var.github_user
  token = var.github_token
}

There is no base64 encoding/decoding suggested here.

Googling suggests that if the GitHub user is a person rather than an org, the --personal flag should be passed. I'm not sure if that's relevant here, or whether it is handled in this Starter Kit. It also suggests checking the content of the flux-system secret on the cluster, which should equate to the encoded GitHub PAT supplied in main.tf. It's not clear to me how best to do that.
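
For reference, the flux-system secret can be inspected like this (a sketch; the key names depend on whether Flux was bootstrapped over SSH or HTTPS):

kubectl -n flux-system get secret flux-system -o yaml

# Decode a single key, e.g. the known_hosts entry mentioned in the error (SSH bootstrap only):
kubectl -n flux-system get secret flux-system -o jsonpath='{.data.known_hosts}' | base64 -d; echo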

Any thoughts on how I might get over this stumbling block? Thanks.

[TVK] Restoring Prometheus from a full backup renders the kube-prome-operator unusable

Problem Description

When trying to restore a full backup that includes Prometheus as one of the backed up components, the kube-prome-operator component fails to start.

Impacted Areas

TrilioVault for Kubernetes Namespaced or Multi-Namespaced restore operations.

Prerequisites

Prometheus must be deployed in your DOKS cluster as per Starter Kit guide.

Steps to Reproduce

  1. First, please follow the main guide for Installing the Prometheus Stack, to have a Prometheus instance running in your DOKS cluster.
  2. Then, have TrilioVault for Kubernetes installed and configured, as described in Installing TrilioVault for Kubernetes chapter.
  3. Activate a Clustered license type here. You can fetch the kube-system UID via: kubectl get ns kube-system -o jsonpath='{.metadata.uid}'.
  4. Next, make sure to configure and create a TVK Target for backups storage.
  5. Then, create a TVK Namespaced backup for Prometheus (default namespace is monitoring as per Starter Kit).
  6. Wait for the backup to complete successfully, then delete the Prometheus Helm release: helm delete kube-prom-stack -n monitoring
  7. Initiate a restore directly from the S3 Target using the TVK web management console.

Expected Results

The backup and restore process for the monitoring namespace applications (including Prometheus) should go smoothly, without any issues. All Prometheus stack components should be up and running (Pods, Services, etc.).

Actual Results

The restore process completes successfully, but the Prometheus Operator (or kube-prome-operator) is refusing to start. Running kubectl get pods -n monitoring yields:

NAME                                                   READY   STATUS              RESTARTS   AGE
kube-prom-szubu-grafana-5754d5b7b7-v97v2               2/2     Running             0          16m
kube-prom-szubu-kube-prome-operator-8649bb7b47-9qs8j   0/1     ContainerCreating   0          16m
kube-prom-szubu-kube-state-metrics-7f6f67d67f-8zfkh    1/1     Running             0          16m
kube-prom-szubu-prometheus-node-exporter-dlb44         1/1     Running             0          16m
kube-prom-szubu-prometheus-node-exporter-wktv7         1/1     Running             0          16m

Going further, and issuing kubectl describe pod/kube-prom-szubu-kube-prome-operator-8649bb7b47-9qs8j -n monitoring yields:

Events:
  Type     Reason       Age                   From               Message
  ----     ------       ----                  ----               -------
  Normal   Scheduled    6m6s                  default-scheduler  Successfully assigned monitoring/kube-prom-szubu-kube-prome-operator-8649bb7b47-9qs8j to flux-test-mt-pool-ug7di
  Warning  FailedMount  116s (x10 over 6m6s)  kubelet            MountVolume.SetUp failed for volume "tls-secret" : secret "kube-prom-szubu-kube-prome-admission" not found
  Warning  FailedMount  106s (x2 over 4m3s)   kubelet            Unable to attach or mount volumes: unmounted volumes=[tls-secret], unattached volumes=[tls-secret kube-api-access-bngnb]: timed out waiting for the condition

It seems that kube-prome-operator fails to find the secret named kube-prom-szubu-kube-prome-admission. Listing all the secrets from the monitoring namespace via kubectl get secrets -n monitoring yields the following (notice that there is a secret named kube-prom-stack-kube-prome-admission, which seems to be the right one):

NAME                                                   TYPE                                  DATA   AGE
alertmanager-kube-prom-szubu-kube-prome-alertmanager   Opaque                                1      19m
default-token-tsjk5                                    kubernetes.io/service-account-token   3      98m
kube-prom-stack-kube-prome-admission                   Opaque                                3      97m
kube-prom-szubu-grafana                                Opaque                                3      19m
...

Looking at the Prometheus Operator deployment via kubectl get deployment kube-prom-szubu-kube-prome-operator -o yaml, you can notice that the secret name was changed to kube-prom-szubu-kube-prome-admission (TVK replaced stack with szubu):

...
volumes:
      - name: tls-secret
        secret:
          defaultMode: 420
          secretName: kube-prom-szubu-kube-prome-admission
...

Next, after editing the deployment via kubectl edit deployment kube-prom-szubu-kube-prome-operator -n monitoring and replacing the secret name with the proper one (kube-prom-stack-kube-prome-admission), the Prometheus Operator starts successfully:

kubectl get pods -n monitoring

The output looks like below:

NAME                                                     READY   STATUS    RESTARTS   AGE
alertmanager-kube-prom-szubu-kube-prome-alertmanager-0   2/2     Running   0          3m42s
kube-prom-szubu-grafana-5754d5b7b7-v97v2                 2/2     Running   0          33m
kube-prom-szubu-kube-prome-operator-bdb6bc8d-4rn9m       1/1     Running   0          3m44s
kube-prom-szubu-kube-state-metrics-7f6f67d67f-8zfkh      1/1     Running   0          33m
kube-prom-szubu-prometheus-node-exporter-dlb44           1/1     Running   0          33m
kube-prom-szubu-prometheus-node-exporter-wktv7           1/1     Running   0          33m
prometheus-kube-prom-szubu-kube-prome-prometheus-0       2/2     Running   0          3m42s

Everything seems back to normal now, as seen above.
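
For reference, the same fix could be applied non-interactively with kubectl patch (a sketch; the volume index 0 is an assumption and should be verified against the deployment spec first):

kubectl -n monitoring patch deployment kube-prom-szubu-kube-prome-operator --type=json \
  -p='[{"op": "replace", "path": "/spec/template/spec/volumes/0/secret/secretName", "value": "kube-prom-stack-kube-prome-admission"}]'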

After analysing everything that happened so far, it seems that TVK renames Kubernetes resources during the backup/restore process using some internal logic or naming convention, but when restoring, there are consistency problems between the renamed references and the actual resources.

[TF Flux CD Automation] Specify a version for the flux CLI that is compatible with the current version of the Flux CD server

Problem Description

See summary from issue #87.

Impacted Areas

TF Flux CD automation chapter.

Prerequisites

Flux CD server deployed on DOKS via Starter Kit custom TF module.

Steps to Reproduce

See summary from issue #87.

Expected Results

Listing Flux server resources (like Git Repository Source and Kustomizations) should work as usual.

Actual Results

See summary from issue #87.

Proposal

A flux CLI version that is compatible with the current Starter Kit guide should be mentioned, such as 0.17.0:

curl -s https://fluxcd.io/install.sh | sudo FLUX_VERSION=0.17.0 bash

08-kubernetes-sealed-secrets feedback

The Sealed Secrets tutorial looks great overall, thanks for putting this together 👏. I had some feedback, mainly around expanding upon some of the security aspects to make them really clear for readers:

What Sealed Secrets allows you to do, is to store any Kubernetes secret in Git, without fearing that sensitive data is going to be exposed

It is important to call out that if one of the sealing keys used to encrypt Git data is ever leaked, the plain-text content in Git would be compromised. Users would not only need to rotate their sealing key, but also the underlying secrets used in their systems, because the plain-text values would have to be considered exposed.

Sealed secrets decryption happens server side only, so as long as the DOKS cluster is secured (etcd database), everything should be safe.

I think it would be worth calling out that you need to ensure you have the correct RBAC resources on your cluster to prevent unintended access to Secrets. A common misconception for those starting out is that Secrets are encrypted in some way, but as pointed out later in the article, they are only base64 encoded, so anyone who can access the resource has access to its contents.

In terms of security, meaning restricting other users to decrypt your sealed secrets inside the cluster, there are three scopes that you can use (kubeseal CLI --scope flag):

This is a great callout, but we should expand upon how this works in conjunction with Kubernetes RBAC to secure sealed secrets on the cluster.
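
For reference, a minimal kubeseal invocation using a non-default scope could look like this (the input and output file names are placeholders; the certificate is the one exported earlier in the chapter):

kubeseal --scope cluster-wide --format yaml \
  --cert pub-sealed-secrets.pem \
  < your-secret.yaml > your-sealed-secret.yaml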

Compared to other solutions, like Vault or KMS providers, Sealed Secrets is neither of those. It's just a way to safely encrypt your Kubernetes Secrets, so that the same GitOps principles can be applied as well when you need to manage sensitive data.

You call out the simplicity / narrow focus of Sealed Secrets, but it would be nice to expand upon this comparison a little bit more. You can also support GitOps-style approaches with Vault, for example, but on top of that it provides a lot more functionality in the space of secret management, identity and access control, certificate management, etc.

GitOps continuous delivery chapter nice to haves

Description

Some really neat and nice to have additions for the GitOps continuous delivery chapter:

  1. System observability:
  • Alerting/notification support (Slack, Discord, etc).
  • Monitoring via Prometheus.
  • Loki integration for logging.
  2. Progressive delivery.
  3. Modelling environments via Kustomize overlays.

HTTP to HTTPS Redirect not working after upgrading to Ambassador v7.2.2

Bug Report

Describe the bug

After following the steps in part 03, HTTP to HTTPS redirection is not working.

Affected Components

Ambassador Ingress Controller

Expected Behavior

HTTP requests should 301 redirect to HTTPS.

Actual Behavior

A 200 HTTP response is served instead.

Steps to Reproduce

Follow the steps outlined in ambassador.md from chapter 03.

Additional context

Reverting to v6.9.3 resolves the issue.

Chapter for continuous delivery using GitOps

Description

Currently, we have the 15-automate-with-terraform-flux chapter in the Starter Kit. That chapter is about GitOps principles, and Flux CD was picked as the practical implementation (the simplest choice to start with at that time).

The main proposal is to have a more generic chapter instead, named 15-continuous-delivery-using-gitops, with Flux CD in its own subchapter and Argo CD added to the mix. Argo CD seems to be a more popular choice for doing GitOps than Flux CD, and appears to have a bigger community behind it.

On the other hand, we should remove Terraform and focus on GitOps and continuous delivery for Kubernetes only. Flux CD can be deployed very easily on an existing DOKS cluster using the official flux CLI, which removes the extra complexity (Terraform and the associated modules). The same applies to Argo CD.
