
A simple way to manage Helm charts with Custom Resource Definitions in Kubernetes.

Home Page: https://kubedex.com

License: Apache License 2.0


kubedex helm-controller


A simple controller built with the Operator SDK that watches for chart CRDs within a namespace and manages installation, upgrades and deletion using Kubernetes jobs.

The helm-controller creates a Kubernetes job pod per CRD that runs helm upgrade --install with various options to make it idempotent. Within each job pod Helm is run in 'tillerless' mode.
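The command each job runs can be sketched roughly as follows. This is an illustration assembled from the CRD fields described below, not the controller's actual code; the exact flags the controller passes are an assumption.

```shell
# Illustrative sketch only: roughly the idempotent command a job pod runs.
# Variable values mirror the example HelmChart CRD later in this README.
NAME="kubernetes-dashboard"           # from metadata.name
CHART="stable/kubernetes-dashboard"   # from spec.chart
VERSION="1.8.0"                       # from spec.version
TARGET_NS="kube-system"               # from spec.targetNamespace
HELM_CMD="helm upgrade --install ${NAME} ${CHART} --version ${VERSION} --namespace ${TARGET_NS}"
echo "${HELM_CMD}"
```

Because `helm upgrade --install` installs the release if it does not exist and upgrades it otherwise, re-running the same job is safe.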

To upgrade a chart you can use kubectl to modify the version or values in the chart CRD. To debug what happened you can use kubectl logs on the kubernetes job.

To completely remove the chart and do the equivalent of helm delete --purge simply delete the chart CRD.

The image used in the kubernetes job can be customised so you can easily add additional logic or helm plugins. You can set this by changing the JOB_IMAGE environment variable in the deployment manifest.
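For example, the relevant fragment of the deployment manifest might look like the following; only the JOB_IMAGE env entry matters here, and the image value is a placeholder.

```yaml
# Fragment of the helm-controller Deployment spec (sketch).
# The image value below is a placeholder, not a published image.
spec:
  template:
    spec:
      containers:
        - name: helm-controller
          env:
            - name: JOB_IMAGE
              value: registry.example.com/custom-helm-job:v1
```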

Because the job image is swappable, this also means that Helm 3.0 can be supported on the day it goes GA.

Installation

The default manifests create a service account, role, rolebinding and deployment that runs the operator. It is recommended to run the controller in its own namespace alongside the CRDs that it watches.

Some example manifests are available in the deploy/ directory.

kubectl apply -f deploy/

This will install the helm-controller into the default namespace.

Or, install using Helm:

helm repo add kubedex https://kubedex.github.io/charts
helm repo update
helm install kubedex/helm-controller

Then to install a chart you can apply the following manifest.

cat <<EOF | kubectl apply -f -
apiVersion: helm.kubedex.com/v1
kind: HelmChart
metadata:
  name: kubernetes-dashboard
  namespace: default
spec:
  chart: stable/kubernetes-dashboard
  version: 1.8.0
  targetNamespace: kube-system
  valuesContent: |-
    rbac.clusterAdminRole: true
    enableInsecureLogin: true
    enableSkipLogin: true
EOF

In this example we're installing the kubernetes-dashboard chart into the kube-system namespace and setting some truly dangerous values under valuesContent.

Private Chart Repos

In the example above we're using the pre-configured stable repo available in the default job image.

You can also specify your own private chart repo as follows.

apiVersion: helm.kubedex.com/v1
kind: HelmChart
metadata:
  name: myprivatechartname
  namespace: default
spec:
  chart: myprivatechartname
  repo: https://user:password@charts.example.com
  version: 1.0.0
  targetNamespace: default

The default job image works with https registries and basic auth. You can support other registry types like S3 by modifying the job image and pre-installing plugins.
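As a sketch, a custom job image that pre-installs the third-party helm-s3 plugin might look like the following. The FROM line is a placeholder, not the project's published image; swap in the job image you already use, then point JOB_IMAGE at the result.

```dockerfile
# Hypothetical custom job image (sketch): extend an existing job image
# and pre-install the helm-s3 plugin so s3:// repo URLs can resolve.
FROM your-registry.example.com/helm-job-image:latest
RUN helm plugin install https://github.com/hypnoglow/helm-s3.git
```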

Ignoring Charts

You may want to run certain charts only on certain clusters. You can do this by templating the CRD and setting the ignore: field to true, as shown below.

apiVersion: helm.kubedex.com/v1
kind: HelmChart
metadata:
  name: myprivatechartname
  namespace: default
spec:
  chart: myprivatechartname
  repo: https://user:password@charts.example.com
  version: 1.0.0
  targetNamespace: default
  ignore: true

The helm-controller will now ignore this chart CRD. Be aware that ignoring is not uninstalling: the controller will not remove the chart or its resources.

To have the controller manage the deletion, delete the CRD instead.
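One minimal way to template the ignore: field per cluster is sketched below. The cluster name check, variable names and file path are all illustrative assumptions, not part of the project.

```shell
# Sketch: render a HelmChart manifest with spec.ignore driven by a
# CLUSTER_NAME variable. All names and paths here are illustrative.
CLUSTER_NAME="staging"
IGNORE="false"
if [ "$CLUSTER_NAME" = "staging" ]; then IGNORE="true"; fi
cat > /tmp/helmchart-rendered.yaml <<EOF
apiVersion: helm.kubedex.com/v1
kind: HelmChart
metadata:
  name: myprivatechartname
  namespace: default
spec:
  chart: myprivatechartname
  version: 1.0.0
  targetNamespace: default
  ignore: ${IGNORE}
EOF
grep "ignore:" /tmp/helmchart-rendered.yaml
```

Apply the rendered file with kubectl apply -f on each cluster; the clusters where ignore renders to true will leave the chart untouched.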

Installation Lifecycle

The helm-controller manifests should be deployed to the Kubernetes cluster using kubectl.

  • A single CRD per Helm Chart
  • The helm-controller triggers a Kubernetes job when a chart CRD is changed
  • Each Kubernetes job executes the upgrade logic for the Helm Chart
  • When a chart CRD is deleted the helm-controller will remove all resources associated with it

Helm Chart CRDs

Chart CRDs define which Helm Charts a cluster should be running. You can view all chart CRDs by executing the following command.

kubectl get helmcharts.helm.kubedex.com --all-namespaces

Or, to look at the contents of a single CRD you can use this command:

kubectl get helmchart.helm.kubedex.com kubernetes-dashboard -o yaml

For testing and experimentation you can edit the chart CRDs directly to bump the chart version or change values.

On change, the helm-controller will immediately execute a Kubernetes job to apply the Helm Chart upgrade.
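For example, bumping spec.version in the CRD is enough to trigger an upgrade job. The 1.8.1 below is a hypothetical newer chart version, not a value from this README.

```yaml
apiVersion: helm.kubedex.com/v1
kind: HelmChart
metadata:
  name: kubernetes-dashboard
  namespace: default
spec:
  chart: stable/kubernetes-dashboard
  version: 1.8.1   # bumped from 1.8.0 (hypothetical newer version)
  targetNamespace: kube-system
```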

Troubleshooting

Use standard kubectl commands to validate each stage has completed successfully.

  • Check that the helm-controller is running and the logs are clean
  • Check the contents of the chart CRD on the cluster
  • Check the Kubernetes job logs for the chart you are troubleshooting

To fully reset a chart, delete the CRD, wait for all of its resources to be removed, then apply the CRD again.

To remove all charts from a cluster you can run:

kubectl delete helmcharts.helm.kubedex.com --all-namespaces --all

Credits

Heavily inspired by the Rancher Helm Controller.


helm-controller's Issues

configuring helmchart with s3 repo

Hello, I would like to use the helm-controller with my Helm repo on S3, but I don't see any configuration for this in the docs. Do I need the s3 helm plugin installed and a HelmChart like this?

apiVersion: helm.kubedex.com/v1
kind: HelmChart
metadata:
  name: test-s3
  namespace: default
spec:
  chart: chart-name
  repo: s3://url-of-the-bucket
  version: 1.0.0
  valuesContent: |-

Thanks

go build error - unable to compile the project

Hello,

Thanks for this tool. I would like to test it, so I cloned this project on my laptop, but when I run go build I get the following error: can't load package: package github.com/Kubedex/helm-controller: build constraints exclude all Go files in /Users/xxxxxxxx/xxxxxx/xxxxxxx/helm-controller. I see that you are using go.mod, and I don't find any build constraints that could cause this issue. Can you help me understand why I get this error?

For your information, since you are using Go modules I cloned this project into a local directory outside of my GOPATH.

Thank you.

setup helm controller to use a rolebinding

Currently we use a clusterrolebinding and clusterrole.

We tried using a normal role and it fails with:

E0930 13:42:08.780111       1 reflector.go:134] pkg/mod/k8s.io/client-go@<version>/tools/cache/reflector.go:95: Failed to list *v1.ClusterRoleBinding: clusterrolebindings.rbac.authorization.k8s.io is forbidden: User "system:serviceaccount:test:helm-controller-hazzi" cannot list resource "clusterrolebindings" in API group "rbac.authorization.k8s.io" at the cluster scope

spec.ignore doesn't display on some clusters

It would be good to be able to query all CRDs that are not ignored. This works on K3s:

charts=$(kubectl get helmcharts.helm.kubedex -n helm-controller -o=jsonpath='{.items[?(@.spec.ignore==false)].metadata.name}' | tr " " "\n")

On K3s this works, and ignore: false is displayed when looking at the YAML output with:

kubectl get helmcharts.helm.kubedex.com -n helm-controller kube2iam -o yaml

However, on a proper Kubernetes cluster, for some reason it doesn't show ignore: false under spec, though it does appear under annotations.

apiVersion: helm.kubedex.com/v1
kind: HelmChart
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"helm.kubedex.com/v1","kind":"HelmChart","metadata":{"annotations":{},"name":"kube2iam","namespace":"helm-controller"},"spec":{"chart":"kube2iam","ignore":false,"repo":"https://repo.com","targetNamespace":"kube-system","valuesContent":"node_selector: minion","version":"1.0.14"}}

job status

Currently the CRD yaml shows a blank status: {}. It would be good if this could be set based on the success or failure of the job image.

This would be useful for kubectl wait commands that can check the status of the job directly.
