deliverybot / helm

GitHub action for deploying Helm charts.

Home Page: https://deliverybot.dev

License: MIT License

Topics: kubernetes, helm, action, deployment, deployment-automation

helm's Introduction

Helm Action

Deploys a helm chart using GitHub Actions. Supports canary deployments and provides a built-in helm chart for apps that listen over HTTP, to get you ramped up quickly.

View an example repository using this action at github.com/deliverybot/example-helm.

Parameters

Inputs

If the action was triggered by a deployment event, the inputs below are additionally loaded from that event's payload.

  • release: Helm release name. Will be combined with track if set. (required)
  • namespace: Kubernetes namespace name. (required)
  • chart: Helm chart path. If set to "app" this will use the built in helm chart found in this repository. (required)
  • chart_version: The version of the helm chart you want to deploy (distinct from the app version).
  • values: Helm chart values, expected to be a YAML or JSON string.
  • track: Track for the deployment. If the track is not "stable" it activates the canary workflow described below.
  • task: Task name. If the task is "remove" it will remove the configured helm release.
  • dry-run: Helm dry-run option.
  • token: GitHub repository token. If included and the event is a deployment, the deployment_status event will be fired.
  • value-files: Additional value files to apply to the helm chart. Expects a JSON encoded array or a string.
  • secrets: Secret variables to include in value file interpolation. Expects a JSON encoded map.
  • helm: Helm binary to execute, one of: [helm, helm3].
  • version: Version of the app; usually the commit SHA works here.
  • timeout: Specify a timeout for the helm deployment.
  • repository: Specify the URL of the helm repository the chart comes from.
  • atomic: If true, the upgrade process rolls back all changes if the upgrade fails. Defaults to true.

Additional parameters: If the action is triggered by a deployment event and the task parameter in the event payload is set to "remove", this action will execute helm delete $service.

Versions

  • helm: v2.16.1
  • helm3: v3.0.0

Environment

  • KUBECONFIG_FILE: Kubeconfig file for Kubernetes cluster access.

Value file interpolation

The following syntax allows variables to be used in value files:

  • ${{ secrets.KEY }}: References secret variables passed in the secrets input.
  • ${{ deployment }}: References the deployment event that triggered this action.
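
For illustration, a value file checked into the repository could reference a secret passed through the secrets input. A minimal sketch, assuming DATABASE_PASSWORD is a key in the JSON map passed via secrets (the file name and the surrounding chart values are hypothetical):

# values.production.yaml (hypothetical)
name: myapp
database:
  password: ${{ secrets.DATABASE_PASSWORD }}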

Example

# .github/workflows/deploy.yml
name: Deploy
on: ['deployment']

jobs:
  deployment:
    runs-on: 'ubuntu-latest'
    steps:
    - uses: actions/checkout@v1

    - name: 'Deploy'
      uses: 'deliverybot/helm@v1'
      with:
        release: 'nginx'
        namespace: 'default'
        chart: 'app'
        token: '${{ github.token }}'
        values: |
          name: foobar
        value-files: >-
          [
            "values.yaml",
            "values.production.yaml"
          ]
      env:
        KUBECONFIG_FILE: '${{ secrets.KUBECONFIG }}'

Example canary

If the track is set to canary, this updates the helm deployment in a few ways:

  1. The release name is changed to {release}-{track} (e.g. myapp-canary).
  2. The service is disabled on the helm chart (service.enabled=false).
  3. The ingress is disabled on the helm chart (ingress.enabled=false).

Not enabling the service or ingress allows the stable ingress and service resources to pick up the canary pods and route traffic to them.

# .github/workflows/deploy.yml
name: Deploy
on: ['deployment']

jobs:
  deployment:
    runs-on: 'ubuntu-latest'
    steps:
    - uses: actions/checkout@v1

    - name: 'Deploy'
      uses: 'deliverybot/helm@v1'
      with:
        release: 'nginx'
        track: canary
        namespace: 'default'
        chart: 'app'
        token: '${{ github.token }}'
        values: |
          name: foobar
      env:
        KUBECONFIG_FILE: '${{ secrets.KUBECONFIG }}'

Example PR cleanup

If you create an environment per pull request with Helm, you may find that pull request environments like pr123 linger in your cluster. Using GitHub Actions we can clean these up by listening for pull request close events.

# .github/workflows/pr-cleanup.yml
name: PRCleanup
on:
  pull_request:
    types: [closed]

jobs:
  deployment:
    runs-on: 'ubuntu-latest'
    steps:
    - name: 'Deploy'
      uses: 'deliverybot/helm@v1'
      with:
        # Task remove means to remove the helm release.
        task: 'remove'
        release: 'review-myapp-${{ github.event.pull_request.number }}'
        version: '${{ github.sha }}'
        track: 'stable'
        chart: 'app'
        namespace: 'example-helm'
        token: '${{ github.token }}'
      env:
        KUBECONFIG_FILE: '${{ secrets.KUBECONFIG }}'

helm's People

Contributors

colinjfw, dependabot[bot], julienbreux, marudor, mpraski, mrahul17, riker09, rvichery, scarby


helm's Issues

No module 'awscli' when deploying to EKS

Getting the error below even on repos where configuration hasn't changed.

Traceback (most recent call last):
  File "/usr/bin/aws", line 19, in <module>
    import awscli.clidriver
ModuleNotFoundError: No module named 'awscli'

How can I use multiple KUBECONFIG_FILE? - Works with auto_deploy_on, fails with manual deploy

I have 2 k8s clusters - one for production, one for review & staging environment.

Functionality I'm trying to achieve:

  1. Deploy to test cluster on merge to test branch
  2. Manually deploy master to production using the deliverybot web-app ( https://app.deliverybot.dev/ )

This is what my config looks like.

.github/workflows/cd.yml

name: 'Deploy'
on: [deployment]

jobs:
  deployment:
    runs-on: 'ubuntu-latest'
    steps:
      - name: 'Checkout'
        uses: 'actions/checkout@v1'

      - name: Set Test Kubeconfig
        if: github.ref == 'refs/heads/test'
        run: |
          echo "KUBECONFIG_FILE<<EOF" >> $GITHUB_ENV
          echo "${{ secrets.TEST_KUBECONFIG }}" >> $GITHUB_ENV
          echo "EOF" >> $GITHUB_ENV

      - name: Set Production Kubeconfig
        if: github.ref == 'refs/heads/master'
        run: |
          echo "KUBECONFIG_FILE<<EOF" >> $GITHUB_ENV
          echo "${{ secrets.PRODUCTION_KUBECONFIG }}" >> $GITHUB_ENV
          echo "EOF" >> $GITHUB_ENV

      - name: 'Deploy'
        # Parameters are pulled directly from the GitHub deployment event so the
        # configuration for the job here is very minimal.
        uses: 'deliverybot/helm@master'
        with:
          token: '${{ github.token }}'
          secrets: '${{ toJSON(secrets) }}'
          version: '${{ github.sha }}'
          chart: './charts/my-app'

.github/deploy.yml


production:
  production_environment: true
  required_contexts: []
  environment: production
  description: "Production Deployment"
  payload:
    value_files: ["./values/my-app/values.common.yaml", "./values/my-app/values.production.yaml"]
    # Remove the canary deployment if it exists when doing a full prod deploy.
    remove_canary: true
    release: production-my-app
    namespace: production
    track: stable
    helm: helm3


test:
  auto_deploy_on: refs/heads/test
  required_contexts: []
  environment: test
  description: "Test deployment"
  payload:
    value_files: ["./values/my-app/values.common.yaml", "./values/my-app/values.test.yaml"]
    remove_canary: true
    release: test-my-app
    namespace: test
    track: stable
    helm: helm3

This works when I push/merge to the test branch.

But when I trigger a deployment from the deliverybot web-app targeting test/master, it skips the steps that set the kubeconfig.

Add --create-namespace flag

Since you already have the --install flag present, I was wondering if there is anything blocking support for --create-namespace? This would create the provided namespace when installing a new helm chart in an ephemeral environment. Should I provide a PR for that?
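
For reference, with Helm 3 the flag would simply be appended to the generated upgrade command, roughly like this (release, chart, and namespace here are hypothetical):

helm3 upgrade my-release ./charts/my-app --install --wait --atomic --namespace=pr-123 --create-namespace --values=./values.yml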

Allow Kubernetes token (Service account) deployments

Hello 👋,

First of all, thank you for developing this great Github Action. It will really help in building our deployment flows.

I was wondering if it would be possible to support deploying Helm charts by providing a Kubernetes token, which normally comes from a service account with limited deployment permissions.
In Drone CI, for instance, I normally deploy my code like this:

- name: deploy
  image: quay.io/ipedrazas/drone-helm
  settings:
    chart: ./charts/my-srv
    client_only: true
    debug: true
    release: my-srv
    skip_tls_verify: true
    values: "tag=${DRONE_TAG}"
    values_files:
    - charts/my-srv/values.yaml
    wait: true
  environment:
    API_SERVER:
      from_secret: kube_server
    KUBERNETES_TOKEN:
      from_secret: kubernetes_token

This is quite a powerful way to deploy into Kubernetes, without needing to provide a certificate and your kubeconfig file.
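
Until token-based access is supported directly, one workaround is to assemble a kubeconfig from the token and pass it through the action's KUBECONFIG_FILE environment variable. A minimal sketch, assuming API_SERVER and KUBERNETES_TOKEN hold the values from the secrets above (cluster, user, and context names are arbitrary):

# Build a kubeconfig from a service-account token (hypothetical names).
kubectl config set-cluster deploy-cluster --server="$API_SERVER" --insecure-skip-tls-verify=true
kubectl config set-credentials deploy-bot --token="$KUBERNETES_TOKEN"
kubectl config set-context deploy --cluster=deploy-cluster --user=deploy-bot
kubectl config use-context deploy
# The resulting ~/.kube/config contents can then be stored as a secret
# and passed to the action via KUBECONFIG_FILE.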

Please let me know if this makes sense and if it's possible to implement this. I would be glad to help if I can.

Thank you!!

Howto reuse values?

I want to establish the following workflow: in a microservice-architected application I have 20+ different APIs, each built as a Docker image via a GitHub Action and pushed to the GitHub Docker Registry. I want to utilize the registry_package event and update my helm chart when a package (read: Docker image) is updated.

During development I was using this command:

helm upgrade release-name path/to/chart/ --reuse-values --set service-name.image.tag=1234abc

Before this step I built and published a Docker image with the tag :1234abc, of course. The tag name is the first seven chars of the GitHub SHA that triggered the build in the first place, in case you are wondering.

How can this be done with the deliverybot/helm GitHub Action? I'm not sure if this is the way to go. I am thankful for any hints or other solutions.
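
The closest equivalent that comes to mind is passing the tag through the values input; a hedged sketch (release name, chart path, and value keys are hypothetical, and note that --reuse-values itself does not appear among the documented inputs):

    - name: Deploy
      uses: deliverybot/helm@master
      with:
        release: release-name
        namespace: default
        chart: path/to/chart/
        token: ${{ github.token }}
        values: |
          service-name:
            image:
              tag: 1234abc
      env:
        KUBECONFIG_FILE: ${{ secrets.KUBECONFIG }}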

Error when pulling the action

Hello,

I'm trying to work around error #6 by moving our staging deployment to the master pipeline instead of listening on the deployment event. Unfortunately, I was not able to pull the action. It returns an error when setting up the job. Maybe I'm configuring it wrong?

Here is the error:

Current runner version: '2.158.0'
Prepare workflow directory
Prepare all required actions
Download action repository 'actions/checkout@v1'
Download action repository 'deliverybot/helm@v1'
##[warning]Failed to download action 'https://api.github.com/repos/deliverybot/helm/tarball/v1'. Error Response status code does not indicate success: 404 (Not Found).
##[warning]Back off 24.595 seconds before retry.
##[warning]Failed to download action 'https://api.github.com/repos/deliverybot/helm/tarball/v1'. Error Response status code does not indicate success: 404 (Not Found).
##[warning]Back off 24.547 seconds before retry.
##[error]Response status code does not indicate success: 404 (Not Found).

Here is my configuration:

    - name: Deploy
      uses: deliverybot/helm@v1
      with:
        chart: ./charts/my-chart
        token: ${{ secrets.GITHUB_TOKEN }}
        secrets: ${{ toJSON(secrets) }}
        value_files: ${{ toJSON('["./charts/my-chart/values.yaml", "./charts/my-chart/values-staging.yaml"]') }} 
        release: my-chart
        namespace: default
        values: |
          tag: ${{ github.sha }}
      env:
        KUBECONFIG_FILE: ${{ secrets.STAGING_KUBECONFIG }}

Can you please help me out? Of course, I'm always happy to help if I can, just let me know.

Thank you!

Canary Deployments

A Helm controller that uses a canary deployment strategy. May require a special helm chart to enable this.

Pulling from GCR is Failing

It seems like previously the action would always rebuild the container, so it didn't pull the cached version from GCR. Now that Actions pulls the cached version, it seems to be failing.

Image Build Fails

Getting this error message when attempting to build an image:

=> ERROR [2/3] RUN apk add --no-cache ca-certificates     --repository http://dl-3.alpinelinux.org/alpine/edge/community/     jq curl bash nodejs aws-cli &&     curl -L https://get.helm.sh/helm-v2.17.0-linux-amd64.tar.gz |tar xvz &&     mv linux-amd64/he  1.2s
------
 > [2/3] RUN apk add --no-cache ca-certificates     --repository http://dl-3.alpinelinux.org/alpine/edge/community/     jq curl bash nodejs aws-cli &&     curl -L https://get.helm.sh/helm-v2.17.0-linux-amd64.tar.gz |tar xvz &&     mv linux-amd64/helm /usr/bin/helm &&     chmod +x /usr/bin/helm &&     rm -rf linux-amd64 &&     curl -L https://get.helm.sh/helm-v3.4.2-linux-amd64.tar.gz |tar xvz &&     mv linux-amd64/helm /usr/bin/helm3 &&     chmod +x /usr/bin/helm3 &&     rm -rf linux-amd64 &&     helm init --client-only:
#6 0.257 fetch http://dl-3.alpinelinux.org/alpine/edge/community/x86_64/APKINDEX.tar.gz
#6 0.834 fetch http://dl-cdn.alpinelinux.org/alpine/v3.10/main/x86_64/APKINDEX.tar.gz
#6 0.834 WARNING: Ignoring http://dl-3.alpinelinux.org/alpine/edge/community/x86_64/APKINDEX.tar.gz: UNTRUSTED signature
#6 1.019 fetch http://dl-cdn.alpinelinux.org/alpine/v3.10/community/x86_64/APKINDEX.tar.gz
#6 1.147 ERROR: unsatisfiable constraints:
#6 1.161   aws-cli (missing):
#6 1.161     required by: world[aws-cli]
------
executor failed running [/bin/sh -c apk add --no-cache ca-certificates     --repository http://dl-3.alpinelinux.org/alpine/edge/community/     jq curl bash nodejs aws-cli &&     curl -L ${BASE_URL}/${HELM_2_FILE} |tar xvz &&     mv linux-amd64/helm /usr/bin/helm &&     chmod +x /usr/bin/helm &&     rm -rf linux-amd64 &&     curl -L ${BASE_URL}/${HELM_3_FILE} |tar xvz &&     mv linux-amd64/helm /usr/bin/helm3 &&     chmod +x /usr/bin/helm3 &&     rm -rf linux-amd64 &&     helm init --client-only]: exit code: 1

Add helmfile

Helmfile is a very convenient tool for managing helm releases in clusters, and I believe it would be great if we could add it to this action.

Unexpected input(s) 'timeout'

Similar to issue #28, support for more arguments is missing and doesn't comply with the documentation:

Warning: Unexpected input(s) 'timeout', valid inputs are ['entryPoint', 'args', 'release', 'namespace', 'chart', 'values', 'dry-run', 'helm', 'token', 'value-files', 'secrets', 'version']

Is adding the input to action.yml required?

deliverybot/[email protected] breaks CI

Hi, I guess the latest version you just pushed breaks things.

I'm getting the following error now:

Download action repository 'deliverybot/helm@v1'
##[error]/home/runner/work/_actions/deliverybot/helm/v1/action.yml:
##[error]/home/runner/work/_actions/deliverybot/helm/v1/action.yml: (Line: 20, Col: 48, Idx: 673) - (Line: 20, Col: 48, Idx: 673): Mapping values are not allowed in this context.
##[error]System.ArgumentException: Unexpected type '' encountered while reading 'action manifest root'. The type 'MappingToken' was expected.
   at GitHub.DistributedTask.ObjectTemplating.Tokens.TemplateTokenExtensions.AssertMapping(TemplateToken value, String objectDescription)
   at GitHub.Runner.Worker.ActionManifestManager.Load(IExecutionContext executionContext, String manifestFile)
##[error]Fail to load /home/runner/work/_actions/deliverybot/helm/v1/action.yml

I'm using deliverybot/helm@v1 and it worked 5 hours ago.

Deploying to EKS, need credentials passed

Hi there,
I am attempting to deploy a helm chart to my EKS cluster.
In order to do that, I believe I have configured aws-iam-authenticator properly, since I get the following log:

helm upgrade seed-java helm/seed-java --install --wait --atomic --namespace=feat-SYS-1027-CD-core-dev --set=app.name=seed-java --values=./values.yml
could not get token: NoCredentialProviders: no valid providers in chain. Deprecated.
	For verbose messaging see aws.Config.CredentialsChainVerboseErrors

Which is created by:

         uses: 'deliverybot/helm@master'
         with:
           namespace: ${{ steps.helmvars.outputs.namespace}}
           chart: ${{steps.helmvars.outputs.chart}}
           token: ${{ github.token }}
           value_files: "[ ${{steps.helmvars.outputs.values_static}}, ${{steps.helmvars.outputs.values_ns}}]"
           helm: 'helm'
           release: ${{ steps.helmvars.outputs.release }}
         env:
           KUBECONFIG_FILE: '${{ secrets.KUBECONFIG }}'

Does this mean that AWS_ACCESS_KEY_ID etc. are not being passed properly into the container that is being executed? I am at a bit of a loss.

I know that the credentials work, and are available to prior steps in the workflow, since I push the containers I seek to deploy to ECR.

Any ideas?

Thanks!

Warning for missing input parameters

Not all input parameters that are available inside the action have been defined in the action.yml file. This results in a warning in the workflow log when running this action.

PR #34 fixes that.

How can I use nested values?

image:
  registry: 
  repository: 
  tag: latest
  pullSecret: regcred

It doesn't work:


      - name: Deploy to DigitalOcean Kubernetes
        uses: 'deliverybot/[email protected]'
        with:
          release: ''
          version: '${{ github.sha }}'
          namespace: 'default'
          chart: './helm'
          helm: helm3
          values: 
            image: 
              tag: "${{ github.sha }}"
        env:
          KUBECONFIG_FILE: $GITHUB_WORKSPACE/.kubeconfig
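
Since the values input is documented as a YAML or JSON string, nested values presumably need a block scalar (values: |) rather than a nested mapping directly under the key; a minimal sketch of the same fragment:

          values: |
            image:
              tag: "${{ github.sha }}"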

Migrations

Guide for running migrations with a helm deployment for an application.

Unexpected input 'helm'

I want to use Helm v3, so naturally I've added helm: helm3 to my step's with: mapping. However:

##[warning]Unexpected input 'helm', valid inputs are ['entryPoint', 'args', 'release', 'namespace', 'chart', 'values', 'dry-run', 'token', 'value-files', 'secrets', 'version']

Looking at action.yml, there is no helm input, so the error message is correct.

Secrets

Allow getting secrets into application deployments.

Image build fails because of Helm having removed the stable repo

See https://helm.sh/blog/new-location-stable-incubator-charts/; https://kubernetes-charts.storage.googleapis.com/ now returns 403.

A "quick" fix is probably to update Helm to a newer version and/or remove Helm 2.

Log from failed GitHub Actions run:

[...]
Adding stable repo with URL: kubernetes-charts.storage.googleapis.com 
  Error: error initializing: Looks like "kubernetes-charts.storage.googleapis.com" is not a valid chart repository or cannot be reached: Failed to fetch kubernetes-charts.storage.googleapis.com/index.yaml : 403 Forbidden
[...]
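
For what it's worth, Helm 2's init accepts a --stable-repo-url flag, so a possible Dockerfile-level fix (an untested sketch) is pointing it at the new archive location:

helm init --client-only --stable-repo-url https://charts.helm.sh/stable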

helm chart base dir

hello,

How can I set the helm chart folder?

If I check out the repo, the helm chart is in the helm/ folder.

I tried chart: helm, but then helm tries to load templates from the root folder, not the helm folder, and I got:
Error: YAML parse error on app/templates/deploy.yml: error converting YAML to JSON: yaml: line 22: mapping values are not allowed in this context (the helm chart is in app/helm).

Also, uses is not working together with working-directory.
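
For reference, the chart input in this repository's own examples takes a path relative to the checked-out workspace, so a sketch like the following may be all that is needed (assuming the checkout step runs first):

        chart: './helm'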

thanks

Problem with the order of value files

Hello,

We've recently encountered a problem with the order of the generated values.yml file and custom value files. Take a look at this example:

    - name: Deploy
      uses: deliverybot/helm@master
      with:
        chart: ./charts/my-srv
        track: stable
        version: master-${{ github.sha }}
        token: ${{ github.token }}
        secrets: ${{ toJSON(secrets) }}
        value-files: '["./charts/my-srv/values.yaml", "./charts/my-srv/values-staging.yaml"]'
        release: my-srv
        namespace: default
        values: |
          tag: master-${{ github.sha }}
      env:
        KUBECONFIG_FILE: ${{ secrets.STAGING_KUBECONFIG }}

This is the command the action generates when we use the configuration above:

$ helm upgrade dna-consumer-srv ./charts/my-srv --install --wait --atomic --namespace=default --values=./values.yml --set=app.name=my-srv --set=app.version=master-4721fea6d4db083349659db7cfe61ee8e9cb498f --values=./charts/my-srv/values.yaml --values=./charts/my-srv/values-staging.yaml

As you can see, ./values.yml comes before the other two files. In helm the order apparently matters, which means that when we deploy, the tag param will still use the default value set in our ./charts/my-srv/values.yaml and not the one we set in the action.
In this case, our deployment never updates to the correct version.

By changing the order of the generated command, we are able to deploy the correct version:

$ helm upgrade dna-consumer-srv ./charts/my-srv --install --wait --atomic --namespace=default --set=app.name=my-srv --set=app.version=master-4721fea6d4db083349629db5cfe61ee8e9cb498f --values=./charts/my-srv/values.yaml --values=./charts/my-srv/values-staging.yaml --values=./values.yml

We will open a PR to fix this issue, if that's OK with you, @colinjfw?

Thank you!

Issue while downloading the chart by passing the full URL.

@colinjfw, still the same issue with Helm v2: the GitHub action is not able to download the chart. The error is below:
Error: failed to download "xxxx" (hint: running helm repo update may help)
##[error]Error: The process 'helm' failed with exit code 1
##[error]The process 'helm' failed with exit code 1
##[error]Docker run failed with exit code 1
Related - #11

Blue + Green Deployments

Blue + green deployments are a safer way to rollout certain changes. Enable this automation with the helm deployments.

Error when listening on deployment event

Hello Again,

Now I've bumped into an interesting error that seems to happen only when listening on the deployment event. I believe this is an error in GitHub Actions, but I wanted to leave an issue here in case someone has found a way around it.

### ERRORED 19:14:52Z

- There was an unexpected error when executing this Action. For help debugging what went wrong, please contact [email protected]. The unique ID for this error is E3D5:63A1:1D1C2:450F0:5D83D3AA

Thank you!

[Error] Error: unknown flag: --home

Hi all,

My GitHub Action was:

deployment:
    name: Deployment
    needs: build-and-push
    runs-on: 'ubuntu-latest'
    steps:
    - name: 'Checkout'  # Checkout the repository code.
      uses: 'actions/checkout@v1'

    - name: 'Deploy'
      uses: 'deliverybot/helm@master'
      with:
        helm: helm3
        release: my_service
        namespace: default
        chart: 'charts/app'
        version: '${{ github.sha }}'
        values: |
          env:
            ...
      env:
        KUBECONFIG_FILE: '${{ secrets.KUBECONFIG }}'

Everything was working fine until today, when CI started failing with the error below (without any change to the workflow):

Deploy1s
##[error]The process 'helm3' failed with exit code 1
Run deliverybot/helm@master
/usr/bin/docker run --name e87b5235ad321ecd91481da0869798ec4f9210_734b6e --label e87b52 --workdir /github/workspace --rm -e KUBECONFIG_FILE -e INPUT_HELM -e INPUT_RELEASE -e INPUT_NAMESPACE -e INPUT_CHART -e INPUT_VERSION -e INPUT_VALUES -e INPUT_DRY-RUN -e INPUT_TOKEN -e INPUT_VALUE-FILES -e INPUT_SECRETS -e HOME -e GITHUB_REF -e GITHUB_SHA -e GITHUB_REPOSITORY -e GITHUB_RUN_ID -e GITHUB_RUN_NUMBER -e GITHUB_ACTOR -e GITHUB_WORKFLOW -e GITHUB_HEAD_REF -e GITHUB_BASE_REF -e GITHUB_EVENT_NAME -e GITHUB_WORKSPACE -e GITHUB_ACTION -e GITHUB_EVENT_PATH -e RUNNER_OS -e RUNNER_TOOL_CACHE -e RUNNER_TEMP -e RUNNER_WORKSPACE -e ACTIONS_RUNTIME_URL -e ACTIONS_RUNTIME_TOKEN -e ACTIONS_CACHE_URL -e GITHUB_ACTIONS=true -v "/var/run/docker.sock":"/var/run/docker.sock" -v "/home/runner/work/_temp/_github_home":"/github/home" -v "/home/runner/work/_temp/_github_workflow":"/github/workflow" -v "/home/runner/work/my_service/my_service":"/github/workspace" e87b52:35ad321ecd91481da0869798ec4f9210
helm3 upgrade my_service charts/app --install --wait --atomic --namespace=default --home=/root/.helm/ --set=app.name=my_service --set=app.version=4ca54d0979e80bc99254313c0af495830519de8e --values=./values.yml
Error: unknown flag: --home
##[error]Error: The process 'helm3' failed with exit code 1
##[error]The process 'helm3' failed with exit code 1

I have tried to run the CI pipeline again, but I still get the same error.

Any help? Thank you so much.

Best regards,
VietNC

Image build fails

  Step 5/8 : RUN apk add --no-cache ca-certificates     --repository http://dl-3.alpinelinux.org/alpine/edge/community/     jq curl bash nodejs aws-cli &&     curl -L ${BASE_URL}/${HELM_2_FILE} |tar xvz &&     mv linux-amd64/helm /usr/bin/helm &&     chmod +x /usr/bin/helm &&     rm -rf linux-amd64 &&     curl -L ${BASE_URL}/${HELM_3_FILE} |tar xvz &&     mv linux-amd64/helm /usr/bin/helm3 &&     chmod +x /usr/bin/helm3 &&     rm -rf linux-amd64 &&     helm init --client-only
   ---> Running in faf0e342fb70
  fetch http://dl-3.alpinelinux.org/alpine/edge/community/x86_64/APKINDEX.tar.gz
  fetch http://dl-cdn.alpinelinux.org/alpine/v3.10/main/x86_64/APKINDEX.tar.gz
  fetch http://dl-cdn.alpinelinux.org/alpine/v3.10/community/x86_64/APKINDEX.tar.gz
  ERROR: unsatisfiable constraints:
    so:libimagequant.so.0 (missing):
      required by: py3-pillow-9.0.0-r0[so:libimagequant.so.0]
  The command '/bin/sh -c apk add --no-cache ca-certificates     --repository http://dl-3.alpinelinux.org/alpine/edge/community/     jq curl bash nodejs aws-cli &&     curl -L ${BASE_URL}/${HELM_2_FILE} |tar xvz &&     mv linux-amd64/helm /usr/bin/helm &&     chmod +x /usr/bin/helm &&     rm -rf linux-amd64 &&     curl -L ${BASE_URL}/${HELM_3_FILE} |tar xvz &&     mv linux-amd64/helm /usr/bin/helm3 &&     chmod +x /usr/bin/helm3 &&     rm -rf linux-amd64 &&     helm init --client-only' returned a non-zero code: 6
Error: Docker build failed with exit code 6

update repos?

I'm trying to use GitHub Actions to deploy the stable/jenkins helm chart, but I'm getting this error:

/usr/bin/docker run --name e87b52395a539f58f148df9585ae32bad466a8_5fdaab --label e87b52 --workdir /github/workspace --rm -e KUBECONFIG_FILE -e INPUT_RELEASE -e INPUT_NAMESPACE -e INPUT_CHART -e INPUT_HELM -e INPUT_VALUE-FILES -e INPUT_SECRETS -e INPUT_VERSION -e INPUT_VALUES -e INPUT_DRY-RUN -e INPUT_TOKEN -e HOME -e GITHUB_REF -e GITHUB_SHA -e GITHUB_REPOSITORY -e GITHUB_RUN_ID -e GITHUB_RUN_NUMBER -e GITHUB_ACTOR -e GITHUB_WORKFLOW -e GITHUB_HEAD_REF -e GITHUB_BASE_REF -e GITHUB_EVENT_NAME -e GITHUB_WORKSPACE -e GITHUB_ACTION -e GITHUB_EVENT_PATH -e RUNNER_OS -e RUNNER_TOOL_CACHE -e RUNNER_TEMP -e RUNNER_WORKSPACE -e ACTIONS_RUNTIME_URL -e ACTIONS_RUNTIME_TOKEN -e GITHUB_ACTIONS=true -v "/var/run/docker.sock":"/var/run/docker.sock" -v "/home/runner/work/_temp/_github_home":"/github/home" -v "/home/runner/work/_temp/_github_workflow":"/github/workflow" -v "/home/runner/work/jenkins-deploy/jenkins-deploy":"/github/workspace" e87b52:395a539f58f148df9585ae32bad466a8
helm3 upgrade testjenkins2 stable/jenkins --install --wait --atomic --namespace=gh-actions --set=app.name=testjenkins2 --values=./config/sandbox8-demo.yml --values=./values.yml
Error: failed to download "stable/jenkins" (hint: running `helm repo update` may help)
##[error]Error: The process 'helm3' failed with exit code 1
##[error]The process 'helm3' failed with exit code 1

I got the same thing using helm2. Does the action need to call helm repo update at the beginning of the flow?

workflow:

name: 'Deploy'
on:
  repository_dispatch:
    types: build-jenkins


jobs:
  deployment:
    runs-on: 'ubuntu-latest'
    steps:
    - name: 'Checkout'
      uses: 'actions/checkout@v1'

    - name: 'Deploy'
      uses: 'deliverybot/helm@v1'
      with:
        # Helm release name. Will be combined with track if set. (required)
        release: testjenkins2
        # Kubernetes namespace name. (required)
        namespace: 'gh-actions'
        # Helm chart path. If set to "app" this will use the built in helm chart found in this repository. (required)
        chart: stable/jenkins
        helm: 'helm3'
        # Helm chart values, expected to be a YAML or JSON string.
        # rewrite to take this as a value
        value-files: './config/sandbox8-demo.yml'
        # Secret variables to include in value file interpolation. Expects JSON encoded map.
        secrets: # optional
        # Version of the app, usually commit sha works here.
        version: # optional
      env:
        KUBECONFIG_FILE: '${{ secrets.KUBECONFIG }}'

Thanks very much - it's a neat tool.

Regarding release name

If we change the release name from "A" to "A-canary", wouldn't that be less beneficial, since our new release name becomes "A-canary" instead of "A"?
Please clarify how to upgrade and, during the upgrade, change the release name from "A" to "B".

Issue with running example helm chart

Hi,

I am new to K8s and am following the example steps to deploy the deliverybot example helm chart on an AWS EKS cluster.
I followed all the steps and added my kubeconfig file details, but when I run the action I get the error below:
Error: Get https://A1D2B1ED99BC87833C122F269E72115B.gr7.us-east-1.eks.amazonaws.com/api/v1/namespaces/kube-system/pods?labelSelector=app%3Dhelm%2Cname%3Dtiller: getting credentials: *** *** "aws": executable file not found in $PATH

I am sure it must be something trivial that I am missing. Can you please guide me on how to troubleshoot this issue?

Regards,
Rishabh

value-files with multiple files: JSON string of array?

I'm trying to specify multiple value files. I've tried a JSON array as a string, as suggested:

          value-files: '[
            "api/config/env-base/values.yaml",
            "api/config/env-dev/values.yaml",
            ]'

However, the process fails with an error opening a file with the name of the entire JSON string.

Would it be possible to provide an example demonstrating how this should work?
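
One thing that stands out: the array above ends with a trailing comma, which makes it invalid JSON. Since value-files expects a JSON encoded array or a string, an unparseable array would plausibly be treated as one long string, which matches the error. A sketch without the trailing comma:

          value-files: '["api/config/env-base/values.yaml", "api/config/env-dev/values.yaml"]'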

Run 'helm repo update'

From my Github Action log:

Run deliverybot/helm@v1
  with:
    release: xx
    namespace: default
    chart: bitnami/wordpress
    token: xx
    value-files: ["./deploy/values-production.yaml"]
  env:
    KUBECONFIG_FILE: xx
/usr/bin/docker run --name xx --label xx --workdir /github/workspace --rm -e KUBECONFIG_FILE -e INPUT_RELEASE -e INPUT_NAMESPACE -e INPUT_CHART -e INPUT_TOKEN -e INPUT_VALUE-FILES -e INPUT_VALUES -e INPUT_DRY-RUN -e INPUT_SECRETS -e INPUT_VERSION -e HOME -e GITHUB_REF -e GITHUB_SHA -e GITHUB_REPOSITORY -e GITHUB_ACTOR -e GITHUB_WORKFLOW -e GITHUB_HEAD_REF -e GITHUB_BASE_REF -e GITHUB_EVENT_NAME -e GITHUB_WORKSPACE -e GITHUB_ACTION -e GITHUB_EVENT_PATH -e RUNNER_OS -e RUNNER_TOOL_CACHE -e RUNNER_TEMP -e RUNNER_WORKSPACE -v "/var/run/docker.sock":"/var/run/docker.sock" -v "/home/runner/work/_temp/_github_home":"/github/home" -v "/home/runner/work/_temp/_github_workflow":"/github/workflow" -v "/home/runner/work/xx.no/xx.no":"/github/workspace" xx:xx
helm upgrade xx bitnami/wordpress --install --wait --atomic --namespace=default --values=./values.yml --set=app.name=xx --values=./deploy/values-production.yaml
Error: failed to download "bitnami/wordpress" (hint: running `helm repo update` may help)
##[error]Error: The process 'helm' failed with exit code 1
##[error]The process 'helm' failed with exit code 1
##[error]Docker run failed with exit code 1

'helm repo update' should be run before executing the helm deploy command.
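
Alternatively, the documented repository input exists for charts that live in a remote repo; a hedged sketch (the repo URL is assumed, and how the action wires this input through to helm is not shown here):

    - name: 'Deploy'
      uses: 'deliverybot/helm@v1'
      with:
        release: xx
        namespace: 'default'
        chart: bitnami/wordpress
        repository: https://charts.bitnami.com/bitnami
        token: '${{ github.token }}'
        value-files: '["./deploy/values-production.yaml"]'
      env:
        KUBECONFIG_FILE: '${{ secrets.KUBECONFIG }}'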
