appuio / charts

Helm charts for running applications on APPUiO

Home Page: https://charts.appuio.ch/index.yaml

License: BSD 3-Clause "New" or "Revised" License

Shell 10.29% Go 30.69% Smarty 4.53% Makefile 4.02% Mustache 48.82% Lua 1.36% Dockerfile 0.28%
appuio helm-charts helm kubernetes openshift

charts's People

Contributors

a-tell, akosma, anothertobi, bastjan, ccremer, cimnine, corvus-ch, futurematt, glrf, hairmare, inisitijitty, isantospardo, kidswiss, knudsentaunus, ludovicm67, luk43, megian, mhutter, pree, psy-q, renovate-bot, renovate[bot], sandhose, simu, soufianebenali, splattner, srueg, thebiglee, tobru, zugao


charts's Issues

[redis] Failure to elect new leader

Describe the bug

When the leader pod is restarted, the remaining nodes may fail to elect a new leader.

Additional context

This should be fixed by bitnami/charts#7278 and further improved by bitnami/charts#7333.

In production this seems to be mostly mitigated by setting downAfterMilliseconds and failoverTimeout, but the failure can still occur.
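The mitigation mentioned above can be configured through the chart's values. A minimal sketch for the Bitnami Redis chart (the key names and the concrete timeouts below are assumptions; verify them against the chart version in use):

```yaml
# values.yaml sketch for the bitnami/redis chart (keys/values to be verified)
sentinel:
  enabled: true
  # Mark the master as subjectively down sooner than the default,
  # so failover is attempted before pods churn again
  downAfterMilliseconds: 10000
  # Shorter failover timeout so a stuck failover is retried earlier
  failoverTimeout: 10000
```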

To Reproduce

Steps to reproduce the behavior:

  1. Use the defaults for downAfterMilliseconds and failoverTimeout
  2. Restart the leader Pod, or trigger a rolling update
  3. Be (un-)lucky

Logs

Nodes after restarting the master:

NAME                READY   STATUS             RESTARTS   AGE    IP            NODE                      NOMINATED NODE   READINESS GATES
test-redis-node-2   2/2     Running            0          119s   10.42.0.166   k3d-projectsyn-server-0   <none>           <none>
test-redis-node-1   2/2     Running            0          69s    10.42.0.167   k3d-projectsyn-server-0   <none>           <none>
test-redis-node-0   0/2     CrashLoopBackOff   2          29s    10.42.0.168   k3d-projectsyn-server-0   <none>           <none>

Log of former leader test-redis-node-0

 12:40:20.33 INFO  ==> test-redis-headless.redis-test.svc.cluster.local has my IP: 10.42.0.168
 12:40:20.34 INFO  ==> Cleaning sentinels in sentinel node: 10.42.0.167
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
1
 12:40:25.34 INFO  ==> Cleaning sentinels in sentinel node: 10.42.0.166
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
1
 12:40:30.35 INFO  ==> Sentinels clean up done
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
Could not connect to Redis at test-redis.redis-test.svc.cluster.local:26379: Connection refused
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
Could not connect to Redis at -p:6379: Name or service not known

Log of test-redis-node-1

1:X 09 Sep 2021 12:38:47.557 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
1:X 09 Sep 2021 12:38:47.557 # Redis version=6.2.1, bits=64, commit=00000000, modified=0, pid=1, just started
1:X 09 Sep 2021 12:38:47.557 # Configuration loaded
1:X 09 Sep 2021 12:38:47.558 * monotonic clock: POSIX clock_gettime
1:X 09 Sep 2021 12:38:47.558 * Running mode=sentinel, port=26379.
1:X 09 Sep 2021 12:38:47.559 # Sentinel ID is 93d594182506a64e9c0fb3e893ec67dbd7d3255d
1:X 09 Sep 2021 12:38:47.559 # +monitor master mymaster 10.42.0.163 6379 quorum 2
1:X 09 Sep 2021 12:39:22.360 # +reset-master master mymaster 10.42.0.163 6379
1:X 09 Sep 2021 12:39:24.228 * +sentinel sentinel 362c939b89efbabc09ba1d11a50146bccd5614d9 10.42.0.166 26379 @ mymaster 10.42.0.163 6379
1:X 09 Sep 2021 12:39:30.849 # +reset-master master mymaster 10.42.0.163 6379
1:X 09 Sep 2021 12:39:32.460 * +sentinel sentinel 362c939b89efbabc09ba1d11a50146bccd5614d9 10.42.0.166 26379 @ mymaster 10.42.0.163 6379
1:X 09 Sep 2021 12:39:44.147 # +reset-master master mymaster 10.42.0.163 6379
1:X 09 Sep 2021 12:39:44.743 * +sentinel sentinel 362c939b89efbabc09ba1d11a50146bccd5614d9 10.42.0.166 26379 @ mymaster 10.42.0.163 6379
1:X 09 Sep 2021 12:40:04.228 # +sdown master mymaster 10.42.0.163 6379
1:X 09 Sep 2021 12:40:09.289 # +new-epoch 1
1:X 09 Sep 2021 12:40:09.291 # +vote-for-leader 362c939b89efbabc09ba1d11a50146bccd5614d9 1
1:X 09 Sep 2021 12:40:09.520 # +odown master mymaster 10.42.0.163 6379 #quorum 2/2
1:X 09 Sep 2021 12:40:09.520 # Next failover delay: I will not start a failover before Thu Sep  9 12:40:46 2021
1:X 09 Sep 2021 12:40:20.346 # +reset-master master mymaster 10.42.0.163 6379
1:X 09 Sep 2021 12:40:21.207 * +sentinel sentinel 362c939b89efbabc09ba1d11a50146bccd5614d9 10.42.0.166 26379 @ mymaster 10.42.0.163 6379
1:X 09 Sep 2021 12:40:40.375 # +sdown master mymaster 10.42.0.163 6379
1:X 09 Sep 2021 12:40:45.507 # +new-epoch 2
1:X 09 Sep 2021 12:40:45.511 # +vote-for-leader 362c939b89efbabc09ba1d11a50146bccd5614d9 2
1:X 09 Sep 2021 12:40:45.652 # +odown master mymaster 10.42.0.163 6379 #quorum 2/2
1:X 09 Sep 2021 12:40:45.652 # Next failover delay: I will not start a failover before Thu Sep  9 12:41:22 2021
1:X 09 Sep 2021 12:41:15.777 # -odown master mymaster 10.42.0.163 6379
1:X 09 Sep 2021 12:41:19.754 # +reset-master master mymaster 10.42.0.163 6379
1:X 09 Sep 2021 12:41:20.002 * +sentinel sentinel 362c939b89efbabc09ba1d11a50146bccd5614d9 10.42.0.166 26379 @ mymaster 10.42.0.163 6379
1:X 09 Sep 2021 12:41:39.795 # +sdown master mymaster 10.42.0.163 6379
1:X 09 Sep 2021 12:41:39.857 # +odown master mymaster 10.42.0.163 6379 #quorum 2/2
1:X 09 Sep 2021 12:41:39.857 # +new-epoch 3
1:X 09 Sep 2021 12:41:39.857 # +try-failover master mymaster 10.42.0.163 6379
1:X 09 Sep 2021 12:41:39.872 # +vote-for-leader 93d594182506a64e9c0fb3e893ec67dbd7d3255d 3
1:X 09 Sep 2021 12:41:39.879 # 362c939b89efbabc09ba1d11a50146bccd5614d9 voted for 93d594182506a64e9c0fb3e893ec67dbd7d3255d 3
1:X 09 Sep 2021 12:41:39.943 # +elected-leader master mymaster 10.42.0.163 6379
1:X 09 Sep 2021 12:41:39.943 # +failover-state-select-slave master mymaster 10.42.0.163 6379
1:X 09 Sep 2021 12:41:40.005 # -failover-abort-no-good-slave master mymaster 10.42.0.163 6379
1:X 09 Sep 2021 12:41:40.076 # Next failover delay: I will not start a failover before Thu Sep  9 12:42:16 2021
1:X 09 Sep 2021 12:42:16.041 # +new-epoch 4
1:X 09 Sep 2021 12:42:16.046 # +vote-for-leader 362c939b89efbabc09ba1d11a50146bccd5614d9 4
1:X 09 Sep 2021 12:42:16.089 # Next failover delay: I will not start a failover before Thu Sep  9 12:42:52 2021

The remaining nodes are unable to elect a new leader and keep trying to connect to the nonexistent former leader.

Expected behavior

Environment (please complete the following information):

  • Chart: latest
  • Helm: v3
  • Kubernetes API: v1.21
  • Distribution (Openshift, Rancher, etc.): k3s

Adding docs/best practices?

Hi 👋!

I was wondering what your plans are for maintaining the charts in this repository. I'd like to deploy (for example) Keycloak on OpenShift (and then APPUiO) via Helm.

For example, just helm install stable/keycloak isn't enough to get it running (it needs some securityContext tweaks), which is straightforward once you know it. I assume other charts will be similar, so should we add small README snippets in this repo along the lines of "these are some stumbling blocks and how to avoid them when using this chart on APPUiO"?

[k8up] Attach CRDs to GitHub release

Summary

As a k8up user
I want to install the chart and the corresponding CRDs automatically
So that I don't have to manually find the right CRD version for the app.

Context

Currently the chart releases for k8up do not contain the necessary CRDs. They are instead published alongside the binary releases in https://github.com/k8up-io/k8up. To install the correct CRD file, one needs to look into the Chart.yaml, extract the appVersion, find the relevant release in the binary repo, and fetch the CRDs from that release.
Because of the different versioning schemes it is also not possible (to my knowledge, and after several tries) to use Renovate to automate those steps.

I created a workaround that automates those steps and republishes the CRDs as a Helm chart versioned with the appVersion.
This works for me, but I think the better solution would be to publish the CRDs directly with the chart in this repo.

Thank you very much.

Further links

Acceptance criteria

  • When a Helm chart is released, the necessary CRDs are bundled in the same release.

[signalilo] cannot set SIGNALILO_ICINGA_INSECURE_TLS via Helm

Describe the bug

Using a block like

  set {
    name  = "extraEnvVars[0].name"
    value = "SIGNALILO_ICINGA_INSECURE_TLS"
  }
  set {
    name  = "extraEnvVars[0].value"
    value = true
  }

I want to set the given value.
Unfortunately this does not work and results in the error:
...ReadString: expects " or n, but found t, ...

Additional context

Seems to be related to helm/helm#4262

Expected behavior

Provide a way to set the SIGNALILO_ICINGA_INSECURE_TLS value via Helm.
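Per helm/helm#4262, the error stems from the value arriving as a boolean where the chart expects a string. Forcing the value to a string should work around it; a sketch via a values file (the extraEnvVars key is taken from the snippet above):

```yaml
# values.yaml sketch: quote the value so it is parsed as a string, not a boolean
extraEnvVars:
  - name: SIGNALILO_ICINGA_INSECURE_TLS
    value: "true"
```

With the Terraform Helm provider, adding type = "string" to the set block should achieve the same effect, though this is untested here.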

Environment (please complete the following information):

  • Chart: latest
  • Helm: v3.11.2
  • Kubernetes API: v1.22
  • Distribution: K3s

Dependency Dashboard

[metrics-server] Chart (and others) disappeared from repo

Describe the bug

Good day. Since yesterday, metrics-server and every other chart in your repo (except k8up) has stopped showing up in helm search repo appuio results.

It seems that the index.yaml contains only k8up and nothing else.

To Reproduce

Steps to reproduce the behavior:

  1. helm repo add appuio https://charts.appuio.ch
  2. helm repo update
  3. helm pull appuio/metrics-server

Error: chart "metrics-server" matching not found in appuio index. (try 'helm repo update'): no chart name found

k8up helm chart no longer provides the "jobType" label for k8up_jobs_failed_counter

In our testing cluster, running version 1.0.4 of the k8up Helm chart, I can see that an alert for a failing backup job is entirely missing the jobType label, causing our alerts to look a bit weird.

(screenshot from 2021-04-13: alert firing without the jobType label)

These are the metrics being exported by our k8up operator to Prometheus:

# HELP k8up_jobs_failed_counter The total number of backups that failed
# TYPE k8up_jobs_failed_counter counter
k8up_jobs_failed_counter{namespace="drupal-example-test1-master"} 3

It seems like this label simply isn't being exported correctly by the k8up operator, maybe?
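For comparison, this is roughly the shape the alert expects the exporter to emit; the jobType value here is only illustrative:

```
# HELP k8up_jobs_failed_counter The total number of backups that failed
# TYPE k8up_jobs_failed_counter counter
k8up_jobs_failed_counter{jobType="backup",namespace="drupal-example-test1-master"} 3
```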

[maxscale] Add virtual IP, Keepalived, multi-instance load balancer

Summary

As a software developer
I want a highly available MaxScale
So that I can deploy proxy clusters that do not introduce a single point of failure.

Context

The current MaxScale Helm chart appears to deploy a single-node instance, which makes MaxScale itself a single point of failure for the proxied connection. There exists a MaxScale topology that deploys a virtual IP alongside two MaxScale instances for automatic failover.

Out of Scope

  • N/A

Further links

(image: topology with a virtual IP via Keepalived and two MaxScale instances, one active and one standby)

Acceptance criteria

  • Uses a virtual IP
  • Keepalived configuration
  • Uses at least two MaxScale instances, where one is the master and the other(s) are backups
  • The maxscale.conf of all instances must be identical so that any of them can serve as the backup
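The failover described above is typically implemented with a Keepalived VRRP pair. A minimal sketch; the interface, router ID, priority, and VIP are placeholders to be adapted:

```
# /etc/keepalived/keepalived.conf on the active MaxScale node (sketch)
vrrp_instance maxscale_vip {
    state MASTER            # set to BACKUP on the standby node
    interface eth0          # placeholder network interface
    virtual_router_id 51    # placeholder; must match on both nodes
    priority 150            # use a lower priority (e.g. 100) on the standby
    advert_int 1
    virtual_ipaddress {
        192.0.2.10          # placeholder virtual IP
    }
}
```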

Rework repository with helm-chart-releaser and helm-docs

A long time ago we set up a Travis CI pipeline that ran the PR checks and also released Helm charts into the gh-pages branch. Travis CI has since shut down, and the time for a modernization is right.

Currently, we can't release any Helm charts.

My proposition for a new CI/CD pipeline looks like the following:

  • Use GitHub Actions as CI/CD pipeline technology
  • Make use of https://github.com/helm/chart-releaser and https://github.com/helm/chart-releaser-action
  • Keep the currently released helm releases in the gh-pages branch as *.tar.gz files, but new releases should go into GitHub releases page. (Hopefully chart-releaser doesn't remove existing entries from the index.yaml file, to be tested)
  • Automatically generate README documentation from values.yaml instead of handcrafting it. https://github.com/norwoodj/helm-docs solves this, but it requires a rework of all values.yaml in our charts. This is probably the biggest effort in this proposal.
  • Better unit test support based on Go. At the moment, each unit-test-enabled chart has its own go.mod file, with lots of Renovate PRs/commits pending. We could move the go.mod file to the top-level directory to unify the dependencies for all charts. Experience has shown that there was never a dependency issue with K8s artifacts, so maintaining them once instead of per chart should not be a problem.

An example of the whole stack can be found in my private chart repo: https://github.com/ccremer/charts
Most of the plumbing can be copied. As mentioned, the biggest effort would be to rework the chart documentation, except for K8up, where it's already built with helm-docs; see https://github.com/appuio/charts/blob/master/k8up/values.yaml
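A minimal workflow using chart-releaser-action could look roughly like this; the branch name and action versions are assumptions to be adapted:

```yaml
# .github/workflows/release.yaml (sketch)
name: Release Charts
on:
  push:
    branches: [master]
jobs:
  release:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
        with:
          fetch-depth: 0   # chart-releaser needs the full git history
      - name: Configure Git
        run: |
          git config user.name "$GITHUB_ACTOR"
          git config user.email "$GITHUB_ACTOR@users.noreply.github.com"
      - name: Run chart-releaser
        uses: helm/chart-releaser-action@v1
        env:
          CR_TOKEN: "${{ secrets.GITHUB_TOKEN }}"
```

By default the action packages changed charts into GitHub releases and updates the index.yaml on gh-pages, which matches the proposal above; whether it preserves existing index entries still needs to be tested.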

In Scope

  • Integrate the charts that already use helm-docs or unit testing into this workflow.
  • Unify all unit tests under the same go.mod file.

Out of scope

  • Migrate all charts to use the helm-docs template. Instead, the generated template should be overridden with the content of the already existing READMEs.

Migrate chart documentation to helm-docs

https://github.com/norwoodj/helm-docs addresses an issue long present in helm charts:

Documentation and comments often live in values.yaml, while further documentation lives in README.md.
There isn't a clear source of truth, and a manually maintained README can get out of date pretty quickly.

helm-docs follows the approach where the README is generated entirely from the chart metadata and values.yaml. However, it requires that comments in the YAML file follow a certain style in order to be parsed properly.
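For illustration, helm-docs picks up "# --" comments placed directly above each value; a sketch of a compatible values.yaml:

```yaml
# -- Number of replicas to deploy
replicaCount: 1

image:
  # -- Container image repository
  repository: nginx
  # -- Overrides the image tag
  # @default -- the chart appVersion
  tag: ""
```

Plain "#" comments without the "--" marker are ignored by the generator, which is why existing values files need reworking.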

This issue aims to migrate the charts to make them "helm-docs-compatible", i.e. to generate nicer READMEs.
