travisghansen / argo-cd-helmfile
Integration between argo-cd and helmfile
License: MIT License
Hello, I would like to know if it is possible to configure the Argo CD application's helm-specific settings while using the helmfile plugin.
E.g.: argocd app with the values file as an inline block:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: sample-app
  namespace: argocd
  labels:
    project: sample-app
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  destination:
    namespace: sample-app
    server: https://kubernetes.default.svc
  project: sample-app
  source:
    path: helm
    repoURL: https://gitlab.hub.seguros.vitta.com.br/devops/sample-app.git
    targetRevision: develop
    # helm specific config
    helm:
      parameters:
      # Release name override (defaults to application name)
      releaseName: sample-app
      # Values file as block file
      values: |
        pods:
          image:
            name: ghcr.io/benc-uk/nodejs-demoapp
            tag: latest
        ingress:
          enabled: true
          ingressClassName: "nginx"
          hosts:
            - sample-app.devops.com
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
E.g.: argocd app with valueFiles:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: sample-app
  namespace: argocd
  labels:
    project: sample-app
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  destination:
    namespace: sample-app
    server: https://kubernetes.default.svc
  project: sample-app
  source:
    path: helm
    repoURL: https://gitlab.hub.seguros.vitta.com.br/devops/sample-app.git
    targetRevision: develop
    # helm specific config
    helm:
      valueFiles:
        - values-prod.yaml
        - values-dev.yaml
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
When I pass plugin: {} in the spec together with the helm: spec, it doesn't work. Is this possible? Can you help me?
Thanks
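For comparison, with this plugin the release name and values normally live in the helmfile itself rather than in the Application's helm: block, which the plugin does not consume. The sketch below is only an illustration; the local chart path is a made-up placeholder:

```yaml
# helmfile.yaml equivalent of the helm: block above (chart path assumed).
releases:
  - name: sample-app
    namespace: sample-app
    chart: ./chart
    values:
      - pods:
          image:
            name: ghcr.io/benc-uk/nodejs-demoapp
            tag: latest
        ingress:
          enabled: true
          ingressClassName: "nginx"
          hosts:
            - sample-app.devops.com
```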
Hello.
First, thanks for the great work.
Is there a plan to support plugin parameters at the Application scope, as described here in the documentation?
For the example case from the ArgoCD documentation:
apiVersion: argoproj.io/v1alpha1
kind: Application
spec:
  source:
    plugin:
      parameters:
        - name: FOO
          value: bar
        - name: REV
          value: test-$ARGOCD_APP_REVISION
The plugin would add --state-values-set FOO=bar --state-values-set REV=test-$ARGOCD_APP_REVISION to the helmfile command.
It could also use --state-values-file if the parameter value appears to be a file. That could fix issue #46, if I understand it well.
Does it make sense?
I might submit a PR if you're interested.
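The proposed translation could be sketched as a small shell loop. This is a hypothetical illustration, not actual plugin code; as far as I understand the sidecar CMP contract, Argo CD exposes string parameters as PARAM_* environment variables, and the PARAM_FOO demo variable below stands in for one of those:

```shell
# Hypothetical sketch: translate CMP string parameters (PARAM_* env vars)
# into helmfile flags, preferring --state-values-file when the value
# points at an existing file.
PARAM_FOO=bar; export PARAM_FOO                  # demo parameter
flags=""
for var in $(env | sed -n 's/^\(PARAM_[A-Z0-9_]*\)=.*/\1/p'); do
  name="${var#PARAM_}"
  value="$(printenv "$var")"
  if [ -f "$value" ]; then
    flags="$flags --state-values-file $value"
  else
    flags="$flags --state-values-set ${name}=${value}"
  fi
done
echo "helmfile${flags} template"
```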
We have multiple helmfile.d based deployments that deploy different charts in different namespaces.
E.g. the content of 10-postgres.yml:
...
  - name: postgresql
    namespace: namespace-1
    chart: bitnami/postgresql
    version: v11.1.4
  - name: postgresql
    namespace: namespace-2
    chart: bitnami/postgresql
    version: v11.1.4
When no destination namespace is defined, argocd creates configmaps, serviceaccounts and other objects without namespace metadata, and they therefore land in the wrong namespace. Running helmfile -f ./helmfile.d sync directly does respect the namespace stanza and deploys the k8s objects in the correct namespaces.
Hello, I ran into an issue using your plugin when dealing with private OCI Helm registries.
I want to add an ArgoCD application that pulls a repo containing a helmfile.yaml.
Within this helmfile.yaml I have some private OCI repositories I need to authenticate against before being able to pull.
I tried a few things that are not working:
- a helmfile-plugin sidecar using helm registry login
- HARBOR_USERNAME and HARBOR_PASSWORD environment variables in the ArgoCD application directly (as per the helmfile documentation)
I got a 401 Unauthorized response when fetching a private helm repository.
It tries to connect as an anonymous user in order to pull the chart (per the Harbor logs).
Making the OCI registry public solves the issue in the meantime; the app can then be deployed without any issue.
This is not a long-term solution for me, because sometimes we host Docker repositories alongside Helm ones, and those cannot be public.
# helmfile.yaml (redacted)
repositories:
  - name: harbor
    url: my.harbor.com/repo2/helm
    oci: true
releases:
  - name: "chart2"
    chart: "harbor/chart2"
    version: "1.0.2-9aef758b"
time="2024-01-09T08:27:46Z" level=error msg="`argo-cd-helmfile.sh generate` failed exit status 1: helm version v3.13.3+gc8b9489\nhelmfile version 0.159.0\nstarting generate\nDecrypting secret /tmp/_cmp_server/23d488cc-ed46-4a4d-a435-634cd856a856/int/secrets.yaml\nPulling my.harbor.com/repo1/helm/chart1:v0.2.5\nPulling my.harbor.com/repo2/helm/chart2:1.0.2-9aef758b\nin ./helmfile.yaml: [release \"chart2\": command \"/usr/local/bin/helm\" exited with non-zero status:\n\nPATH:\n /usr/local/bin/helm\n\nARGS:\n 0: /usr/local/bin/helm (19 bytes)\n 1: pull (4 bytes)\n 2: oci://my.harbor.com/repo2/helm/chart2 (46 bytes)\n 3: --version (9 bytes)\n 4: 1.0.2-9aef758b (14 bytes)\n 5: --destination (13 bytes)\n 6: /tmp/helmfile4217612109/chart2-int/repo2/chart2/1.0.2-9aef758b (57 bytes)\n 7: --untar (7 bytes)\n\nERROR:\n exit status 1\n\nEXIT STATUS\n 1\n\nSTDERR:\n Error: unexpected status from HEAD request to https://my.harbor.com/v2/repo2/helm/repo2/manifests/1.0.2-9aef758b: 401 Unauthorized\n\nCOMBINED OUTPUT:\n Error: unexpected status from HEAD request to https://my.harbor.com/v2/repo2/helm/repo2/manifests/1.0.2-9aef758b: 401 Unauthorized]" execID=fe3c4
time="2024-01-09T08:27:46Z" level=error msg="finished streaming call with code Unknown" error="error generating manifests: `argo-cd-helmfile.sh generate` failed exit status 1: helm version v3.13.3+gc8b9489\nhelmfile version 0.159.0\nstarting generate\nDecrypting secret /tmp/_cmp_server/23d488cc-ed46-4a4d-a435-634cd856a856/int/secrets.yaml\nPulling my.harbor.com/repo1/helm/chart1:v0.2.5\nPulling my.harbor.com/repo2/helm/chart2:1.0.2-9aef758b\nin ./helmfile.yaml: [release \"chart2\": command \"/usr/local/bin/helm\" exited with non-zero status:\n\nPATH:\n /usr/local/bin/helm\n\nARGS:\n 0: /usr/local/bin/helm (19 bytes)\n 1: pull (4 bytes)\n 2: oci://my.harbor.com/repo2/helm/chart2 (46 bytes)\n 3: --version (9 bytes)\n 4: 1.0.2-9aef758b (14 bytes)\n 5: --destination (13 bytes)\n 6: /tmp/helmfile4217612109/chart2-int/repo2/chart2/1.0.2-9aef758b (57 bytes)\n 7: --untar (7 bytes)\n\nERROR:\n exit status 1\n\nEXIT STATUS\n 1\n\nSTDERR:\n Error: unexpected status from HEAD request to https://my.harbor.com/v2/repo2/helm/repo2/manifests/1.0.2-9aef758b: 401 Unauthorized\n\nCOMBINED OUTPUT:\n Error: unexpected status from HEAD request to https://my.harbor.com/v2/repo2/helm/repo2/manifests/1.0.2-9aef758b: 401 Unauthorized]" grpc.code=Unknown grpc.method=GenerateManifest grpc.service=plugin.ConfigManagementPluginService grpc.start_time="2024-01-09T08:27:40Z" grpc.time_ms=5794.475 span.kind=server system=grpc
Thanks for your awesome work on this plugin!
PS: If we cannot use private helm repos with this plugin, IMO it should be stated in the README.md.
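One possible workaround, purely a sketch and not a verified feature of this plugin: mount the registry credentials into the plugin sidecar and point Helm's standard HELM_REGISTRY_CONFIG environment variable at them, so helm pull oci://... can authenticate. The secret name and mount path below are assumptions:

```yaml
# Fragment of the repo-server pod spec (sidecar CMP setup). Assumptions:
# a dockerconfigjson-style secret named harbor-helm-creds exists, and the
# sidecar honors HELM_REGISTRY_CONFIG (a standard Helm env var).
containers:
  - name: helmfile-plugin
    env:
      - name: HELM_REGISTRY_CONFIG
        value: /helm-registry/.dockerconfigjson
    volumeMounts:
      - name: harbor-helm-creds
        mountPath: /helm-registry
        readOnly: true
volumes:
  - name: harbor-helm-creds
    secret:
      secretName: harbor-helm-creds
```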
Hello,
I have set up helm-secrets in ArgoCD following this guide:
https://github.com/jkroepke/helm-secrets/wiki/ArgoCD-Integration
and also installed the helmfile plugin in ArgoCD,
but I couldn't get helm-secrets to work.
Example helmfile:
repositories:
  - name: k8s-at-home
    url: https://k8s-at-home.com/charts/
releases:
  - name: homer
    version: 7.2.2
    chart: k8s-at-home/homer
    namespace: homer
    values:
      - values.yaml
    secrets:
      - values-secrets.yaml
      # - secrets+gpg-import:///helm-secrets-private-keys/key.asc?values-secrets.yaml
Argo app:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  finalizers:
    - resources-finalizer.argocd.argoproj.io
  name: homer-helmfiles
  namespace: argocd
spec:
  destination:
    namespace: homer
    server: https://kubernetes.default.svc
  project: apps
  source:
    path: nucs/dev/us-east/apps/namespaces/homer/helmfile
    plugin:
      name: helmfile
    repoURL: https://<mygit>.git
    targetRevision: HEAD
  syncPolicy:
    automated:
      selfHeal: true
In nucs/dev/us-east/apps/namespaces/homer/helmfile:
- .sops.yaml
- helmfile.yaml (the file above)
- values.yaml
- values-secrets.yaml (encrypted with helm-secrets)
Everything works as intended without using the secrets.
The Argocd helmfile plugin should detect changes on refresh or resync from the repository when we change the templates.
Currently it is not detecting the change, and even if we redeploy the application, the old manifests are cached somewhere.
Hello
When a Helm repository requires a password and helmfile gets a "context deadline exceeded" error while fetching the chart(s), the full helm command is shown in the ArgoCD UI without the repository password being masked.
For example:
rpc error: code = Unknown desc = `argo-cd-helmfile.sh init` failed exit status 1: v3.8.1+g5cb9af4 helmfile version 0.148.1 starting init Adding repo elastic https://harbor.example.com/chartrepo/elastic in ./helmfile.yaml: command "/usr/local/bin/helm" exited with non-zero status: PATH: /usr/local/bin/helm ARGS: 0: /usr/local/bin/helm (19 bytes) 1: repo (4 bytes) 2: add (3 bytes) 3: elastic (7 bytes) 4: https://harbor.example.com/chartrepo/elastic (43 bytes) 5: --force-update (14 bytes) 6: --username (10 bytes) 7: admin (5 bytes) 8: --password (10 bytes) 9: thispasswordisnotmasked (16 bytes) ERROR: exit status 1 EXIT STATUS 1 STDERR: Error: context deadline exceeded COMBINED OUTPUT: Error: context deadline exceeded
To reproduce, set thispasswordisnotmasked as the password.
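A possible direction for a fix, purely a sketch (the function name and sed pattern are my own, not part of the plugin): scrub credential flags from the captured command string before it is surfaced to the UI.

```shell
# Hypothetical credential scrubber: replaces the value following a
# --password flag with asterisks before the string is logged.
mask_secrets() {
  sed -E 's/(--password[= ])[^ ]+/\1********/g'
}
echo "helm repo add elastic https://example.com --username admin --password hunter2" | mask_secrets
```

Note that helmfile's verbose error format also prints each argument on its own numbered line, so a real fix would need to handle that layout too.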
How can I use the helmfile apply option instead of helmfile sync? Is that possible, and how does the plugin actually invoke helmfile under argocd?
I am not able to get your solution working. Can you please share your example deployment?
Here is my sample deployment.
I added this to the argocd-cm ConfigMap:
configManagementPlugins: |
  - name: helmfile
    init:
      command: ["argo-cd-helmfile.sh"]
      args: ["init"]
Sample app deployment
---
apiVersion: "argoproj.io/v1alpha1"
kind: "Application"
metadata:
  name: "hello-file"
spec:
  project: default
  source:
    repoURL: '[email protected]:xxo.git'
    path: "argocd-helm/papertrail/test/"
  destination:
    server: ''
    namespace: default
volumes:
  - name: custom-tools
    emptyDir: {}
initContainers:
  - name: download-tools
    image: alpine:3.8
    command: [sh, -c]
    args:
      - wget -qO /custom-tools/argo-cd-helmfile.sh https://raw.githubusercontent.com/travisghansen/argo-cd-helmfile/master/src/argo-cd-helmfile.sh &&
        chmod +x /custom-tools/argo-cd-helmfile.sh &&
        wget -qO /custom-tools/helmfile https://github.com/roboll/helmfile/releases/download/v0.98.2/helmfile_linux_amd64 &&
        chmod +x /custom-tools/helmfile
    volumeMounts:
      - mountPath: /custom-tools
        name: custom-tools
volumeMounts:
  - mountPath: /usr/local/bin/argo-cd-helmfile.sh
    name: custom-tools
    subPath: argo-cd-helmfile.sh
  - mountPath: /usr/local/bin/helmfile
    name: custom-tools
    subPath: helmfile
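One likely cause, based on my reading of the plugin's README (treat this as an assumption): the registration above only defines an init command, while Argo CD also needs a generate command to actually produce manifests. A fuller registration might look like:

```yaml
configManagementPlugins: |
  - name: helmfile
    init:
      command: ["argo-cd-helmfile.sh"]
      args: ["init"]
    generate:
      command: ["argo-cd-helmfile.sh"]
      args: ["generate"]
```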
Hello
There are some charts, for example ingress-nginx, that cannot evaluate the Go template .Capabilities.APIVersions.Has.
For example, I'm trying to install the ingress-nginx chart with the Prometheus rule enabled, and it does not recognize the capability.
Example Go template:
{{- if and ( .Values.controller.metrics.enabled ) ( .Values.controller.metrics.prometheusRule.enabled ) ( .Capabilities.APIVersions.Has "monitoring.coreos.com/v1" ) -}}
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: {{ include "ingress-nginx.controller.fullname" . }}
{{- if .Values.controller.metrics.prometheusRule.namespace }}
  namespace: {{ .Values.controller.metrics.prometheusRule.namespace | quote }}
{{- end }}
  labels:
    {{- include "ingress-nginx.labels" . | nindent 4 }}
    app.kubernetes.io/component: controller
  {{- if .Values.controller.metrics.prometheusRule.additionalLabels }}
    {{- toYaml .Values.controller.metrics.prometheusRule.additionalLabels | nindent 4 }}
  {{- end }}
spec:
{{- if .Values.controller.metrics.prometheusRule.rules }}
  groups:
    - name: {{ template "ingress-nginx.name" . }}
      rules: {{- toYaml .Values.controller.metrics.prometheusRule.rules | nindent 4 }}
{{- end }}
{{- end }}
What should I do?
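For context: the plugin renders manifests with helm template, which does not talk to the cluster, so .Capabilities.APIVersions only contains Helm's built-in versions and the Has check fails. One way around this, assuming a reasonably recent helmfile that supports the apiVersions release field, is to declare the API versions explicitly:

```yaml
# Sketch only; the chart reference repeats the ingress-nginx example above,
# and the apiVersions field is an assumption about your helmfile version.
releases:
  - name: ingress-nginx
    chart: ingress-nginx/ingress-nginx
    apiVersions:
      - monitoring.coreos.com/v1
```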
Unable to save changes: application spec for testapp is invalid: InvalidSpecError: Unable to generate manifests in new/: rpc error: code = Unknown desc = argo-cd-helmfile.sh init failed exit status 1:
The official repo of helmfile, https://github.com/roboll/helmfile, has moved to https://github.com/helmfile/helmfile.
Hello, I would like to ask if it's possible to use your scripts for argocd+helmfile together with argocd-image-updater.
As of right now, I was not able to make this work, because argocd-image-updater does not recognize plugins.
Could you perhaps think of any way of "fooling" image-updater into believing it's a plain old Helm app? After all, it only needs to update image.tag.
Thank you
When there is an error in a Helm chart, Argo saves the error message in the following path:
That error message is a huge string, because the Helm chart and other content are rendered into the script output before execution. It was around 1 MB on a cluster I manage. Because Argo continuously updates the application definition, and etcd also keeps previous revisions of each key, this quickly adds up and fills the available space in the etcd database. I cannot paste the whole thing here, but the error message starts with
rpc error: code = Unknown desc = `argo-cd-helmfile.sh generate` failed exit status 1: + [[ -z generate ]]\n++ basename /usr/local/bin/argo-cd-helmfile.sh\n+ SCRIPT_NAME=argo-cd-helmfile.sh\n+ [[ -n '' ]]\n+ [[ -n '' ]]\n+ [[ -n '' ]]\n+ [[ -n '' ]]\n+ [[ -n /helm-working-dir ]]\n++ variable_expansion
and ends with
STDERR:\n Error: failed to parse /tmp/helmfile1743267544/cenkalti-app2-values-191fc62d6a: error converting YAML to JSON: yaml: line 54: could not find expected ':'\n\nCOMBINED OUTPUT:\n Error: failed to parse /tmp/helmfile2840360558/cenkalti-app2-values-191fc62d6a: error converting YAML to JSON: yaml: line 42: could not find expected ':'",
My suggestion is to remove line 42 of argo-cd-helmfile/src/argo-cd-helmfile.sh (at commit 30b77b7), or to comment it out and add a comment linking to this issue.
I'm trying to install the helm chart for redis operator with this helmfile:
repositories:
  - name: ot-container-kit
    url: https://ot-container-kit.github.io/helm-charts/
releases:
  - name: redis-operator
    namespace: redis
    chart: ot-container-kit/redis-operator
    version: 0.15.9
    disableValidation: false
    disableValidationOnInstall: false
    disableOpenAPIValidation: false
with this Argocd app:
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: redis-operator
  namespace: argocd
spec:
  destination:
    namespace: redis
    server: https://kubernetes.default.svc
  source:
    path: helm-charts/redis-operator
    repoURL: <GITHUB_REPO_URL>
    targetRevision: master
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
      allowEmpty: true
    syncOptions:
      - CreateNamespace=true
      - PruneLast=true
And I get this error in the deployed application
{"level":"error","ts":1708348036.8567078,"logger":"controller.redisreplication","msg":"Could not wait for Cache to sync","reconciler group":"redis.redis.opstreelabs.in","reconciler kind":"RedisReplication", "error":"failed to wait for redisreplication caches to sync: timed out waiting for cache to be synced","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:208\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:234\nsigs.k8s.io/controller-runtime/pkg/manager.(*runnableGroup).reconcile.func1\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/manager/runnable_group.go:218"}
If I execute kubectl get crds | grep redis.opstreelabs, I get an empty list.
If I install the Helm chart directly with helmfile, using helmfile sync, then it installs the required CRDs.
➜ kubectl get crds | grep redis.opstreelabs
redis.redis.redis.opstreelabs.in 2024-02-19T15:05:06Z
redisclusters.redis.redis.opstreelabs.in 2024-02-19T15:05:01Z
redisreplications.redis.redis.opstreelabs.in 2024-02-19T15:05:04Z
redissentinels.redis.redis.opstreelabs.in 2024-02-19T15:05:05Z
This issue seems to be related to roboll/helmfile#1353.
Why does the argo-cd-helmfile plugin not install the CRDs?
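A plausible explanation (an assumption, not verified against this chart): the plugin generates manifests via helm template, and helm template skips a chart's crds/ directory unless --include-crds is passed, so the CRDs never reach Argo CD. If the plugin honors a HELMFILE_TEMPLATE_OPTIONS environment variable, as its README suggests, the Application could request CRD rendering like this:

```yaml
# Fragment of the Application spec; the env var name is taken from the
# plugin README and should be double-checked against your plugin version.
spec:
  source:
    plugin:
      env:
        - name: HELMFILE_TEMPLATE_OPTIONS
          value: "--include-crds"
```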
Hi,
First of all, great job on the ArgoCD / Helmfile integration. It has been working very smoothly for us.
Up until now we were using local directories for the Helm chart sources. This works like a charm but we need to move the locations over to our internal Helm registry proxy.
I've added a repository definition as described in the docs:
repositories:
  - name: internal-registry
    url: https://aaa.bbb.ccc
Next, in the release specification I use this repository for the Helm chart location:
releases:
  - name: example
    namespace: ns
    chart: internal-registry/example
    version: ~1.24.1
When syncing this definition in ArgoCD, it produces an error saying that Helm can't find the 'internal-registry' repository.
This repository needs to be added before the template command can run successfully. In the source I noticed the --skip-deps argument: https://github.com/travisghansen/argo-cd-helmfile/blob/master/src/argo-cd-helmfile.sh#L408
Is this preventing the repository from being added? If so, how do we deal with this? If not, any suggestions as to why the repository might not be added before the template command runs?
Thanks!
Hello,
I'm experiencing strange behavior when using a helmfile.d tree structure.
Some args are passed to the helm pull command that shouldn't be (cf. output below).
It seems that passing args from helmfile down to helm is not good practice (see the docs).
Do you think this could be improved or made optional?
For now, I have removed the --args parameter from the script, since I'm not using it.
Hi, I'm trying to follow the tutorial and installed the helmfile plugin, but I got this error:
Unable to create application: application spec for app is invalid: InvalidSpecError: Unable to generate manifests in .: rpc error: code = Unknown desc = Manifest generation error (cached): fork/exec /usr/local/bin/argo-cd-helmfile.sh: permission denied
Any suggestions?
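For what it's worth, this exact fork/exec error usually just means the script file lost its executable bit, for example when it is mounted from a ConfigMap or downloaded without a following chmod. A minimal reproduction and fix, using a stand-in file rather than the real plugin script:

```shell
# Create a stand-in for argo-cd-helmfile.sh without the executable bit,
# reproduce the "permission denied" failure, then fix it with chmod.
printf '#!/bin/sh\necho ok\n' > ./argo-cd-helmfile.sh
chmod 644 ./argo-cd-helmfile.sh       # simulates the broken install
./argo-cd-helmfile.sh 2>/dev/null || echo "permission denied"
chmod +x ./argo-cd-helmfile.sh        # the fix
./argo-cd-helmfile.sh                 # now prints: ok
```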
Hi,
we are evaluating ArgoCD in our company, where we already use a lot of helmfile, hence this plugin is our saviour. :-)
We use a lot of hooks where we fetch data from the cluster with kubectl. This works like a charm when deploying to the cluster ArgoCD runs on, but does not when working with multiple clusters. It seems that there is no actual context in the kubeconfig.
I ran this small bash script and added its output as an annotation in my deployment:
#!/bin/bash
_CONFIG=$(kubectl config view | base64)
echo "${_CONFIG}"
It shows this:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
What am I missing?
ArgoCD version: v2.10.0
argo-cd-helmfile: v0.3.9
Thank you very much!
Hello,
I have this repo structure:
git-repo
|___ scripts/
| |___ vault-pull
|___ environments/
| |___ prod.yaml
| |___ dev.yaml
|
|___app-1/
| |___ helmfile.yaml
| |___ values.yaml.gotmpl
|
|___app-2/
| |___ helmfile.yaml
| |___ values.yaml.gotmpl
The content of every helmfile starts by importing the environment files:
# app-x/helmfile.yaml content
# ...
environments:
  dev:
    values:
      - ../environments/dev.yaml
  prod:
    values:
      - ../environments/prod.yaml
releases:
#......
The first challenge is that the environments/ directory is not in the same directory as the helmfile; however, it is accessible via a relative path within the same git repo.
The second challenge is that environments/ is in .gitignore, and I have to run a script (scripts/vault-pull) in the "init" phase in order to pull the environment files.
To move forward, I customized this plugin:
"init")
  # ...
  if [ ! -z "${HELMFILE_INIT_SCRIPT_FILE}" ]; then
    bash "${HELMFILE_INIT_SCRIPT_FILE}"
  fi
In the Argo app, I configure it like this:
kind: Application
metadata:
  name: app-2
  # ....
spec:
  source:
    path: app-1
    repoURL: 'https://bitbucket.org/example/git-repo.git'
    targetRevision: HEAD
    plugin:
      name: helmfile
      env:
        - name: HELMFILE_INIT_SCRIPT_FILE
          value: ../scripts/vault-pull
However, the app is not deployed, and I can see an error saying that the file ../scripts/vault-pull is not found / does not exist.
What's the issue? Does this plugin not process files outside the application path within the git repo?
Hello
This is only a guess, but I think argocd with selfHeal turned on is interrupting a manual rollout restart.
When I try to roll out the deployment, the newly created pods get terminated after 20-40 seconds.
When I do the same thing on a deployment that is not managed by argo, I don't get the interruption.
I don't know if it's the plugin, argo itself, or something else entirely.
Hi,
would it be possible to make the --skip-deps flag optional when calling the template command?
My workaround currently is to point HELMFILE_INIT_SCRIPT_FILE to a simple shell script:
#!/bin/sh
HELM_BINARY="helm-v3"
helm="$(which ${HELM_BINARY})"
${helm} repo update
${helm} dependency build
Thank you
Hello,
I'm testing an OCI registry on AWS ECR, and when I use helmfile through the argocd script, I get a 401 error.
I found that it's related to the HOME var override. Registry creds are resolved relative to the HOME var, I guess, and overriding it breaks the authentication.
Do you think it's OK to remove this override?
Thanks
Environment:
Private Repository in Helmfile.yaml
repositories:
  - name: C3SP-Helm-Charts
    url: {{ fetchSecretValue (.StateValues | get "C3SP_HELM_REPO_URL" "secretref+k8s://v1/Secret/argo/argo-server-sso/helm-repo-url") }}
    username: {{ fetchSecretValue (.StateValues | get "C3SP_HELM_REPO_USER" "secretref+k8s://v1/Secret/argo/argo-server-sso/helm-repo-user") }}
    password: {{ fetchSecretValue (.StateValues | get "C3SP_HELM_REPO_PWD" "secretref+k8s://v1/Secret/argo/argo-server-sso/helm-repo-pwd") }}
ArgoCD Setup
# Source: argo-cd/templates/argocd-repo-server/role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: argo-cd-argocd-repo-server
  namespace: "argo"
  labels:
    helm.sh/chart: argo-cd-6.7.1
    app.kubernetes.io/name: argocd-repo-server
    app.kubernetes.io/instance: argo-cd
    app.kubernetes.io/component: repo-server
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/part-of: argocd
    app.kubernetes.io/version: "v2.10.2"
rules:
  - apiGroups:
      - ""
    resources:
      - secrets
    verbs:
      - get
      - list
      - watch
---
# Source: argo-cd/templates/argocd-repo-server/rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: argo-cd-argocd-repo-server
  namespace: "argo"
  labels:
    helm.sh/chart: argo-cd-6.7.1
    app.kubernetes.io/name: argocd-repo-server
    app.kubernetes.io/instance: argo-cd
    app.kubernetes.io/component: repo-server
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/part-of: argocd
    app.kubernetes.io/version: "v2.10.2"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: argo-cd-argocd-repo-server
subjects:
  - kind: ServiceAccount
    name: argo-cd-argocd-repo-server
    namespace: argo
Confirmed that argo-cd-argocd-repo-server is able to access the argo-server-sso Kubernetes Secret:
argocd@argo-cd-argocd-repo-server-6644b58d8f-rqf69:~$ kubectl get Secret argo-server-sso
NAME TYPE DATA AGE
argo-server-sso Opaque 5 103d
Issue
When I try to create an ArgoCD app with the provided helmfile repository, it throws the following error:
Unable to create application: application spec for delete is invalid: InvalidSpecError:
Unable to generate manifests in sample-app: rpc error: code = Unknown desc = plugin sidecar failed.
error generating manifests in cmp: rpc error: code = Unknown desc = error generating manifests: `argo-cd-helmfile.sh init` failed exit status 1: helm version v3.14.2+gc309b6f helmfile version 0.162.0
starting init vals-k8s: Unable to get a valid kubeConfig path: No path was found in any of the following: kubeContext URI param, KUBECONFIG environment variable, or default path /tmp/__argo-cd-helmfile.sh__/apps/delete/.kube/config does not exist.
vals-k8s: Unable to get a valid kubeConfig path: No path was found in any of the following: kubeContext URI param, KUBECONFIG environment variable, or default path /tmp/__argo-cd-helmfile.sh__/apps/delete/.kube/config does not exist. in ./helmfile.yaml:
error during helmfile.yaml.part.0 parsing: template: stringTemplate:3:10: executing "stringTemplate" at <fetchSecretValue (.StateValues | get "C3SP_HELM_REPO_URL" "secretref+k8s://v1/Secret/argo/argo-server-sso/helm-repo-url")>: error calling fetchSecretValue: expand k8s://v1/Secret/argo/argo-server-sso/helm-repo-url:
No path was found in any of the following: kubeContext URI param, KUBECONFIG environment variable, or default path /tmp/__argo-cd-helmfile.sh__/apps/delete/.kube/config does not exist.
Reference: Vals Kubernetes
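A possible workaround sketch for the vals k8s backend (not an official feature of the plugin): the repo-server sidecar only has in-cluster service-account credentials and no kubeconfig file, so you could mount a synthetic kubeconfig built on the standard in-cluster paths and point KUBECONFIG at it. Everything below is an assumption to illustrate the idea:

```yaml
# Synthetic in-cluster kubeconfig; the service-account paths are the
# standard Kubernetes mount locations inside the pod.
apiVersion: v1
kind: Config
clusters:
  - name: in-cluster
    cluster:
      server: https://kubernetes.default.svc
      certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
users:
  - name: sa
    user:
      tokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
contexts:
  - name: in-cluster
    context:
      cluster: in-cluster
      user: sa
current-context: in-cluster
```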
Hello - firstly, thanks for an awesome plugin. It solves big problems for us. Please can you help me solve this one? I am totally flummoxed.
I'm trying to add labels, following this: https://github.com/roboll/helmfile/blob/master/docs/advanced-features.md#transformers
helmfile template | k apply -f - works when run locally on my mac (of course, it always does!)
argo-cd = v2.7.6+00c914a.dirty
plugin = travisghansen/argo-cd-helmfile:v0.3.6
This works:
$ cat helmfile.yaml
---
repositories:
  - name: prometheus-community
    url: https://prometheus-community.github.io/helm-charts
releases:
  - name: "prometheus-blackbox-exporter"
    chart: "prometheus-community/prometheus-blackbox-exporter"
    namespace: "kube-system-monitoring"
    values:
      - ./values/values-common.yaml
    set:
      - name: ingress.hosts[0].host
        value: prometheus-blackbox.private.{{ requiredEnv "CLUSTER" }}.{{ requiredEnv "TOP_LEVEL_DOMAIN" }}
This adds the label, but it undoes the set and removes the PSP, Role and RoleBinding. ❓ 🤷 🧠
$ cat helmfile.yaml
---
repositories:
  - name: prometheus-community
    url: https://prometheus-community.github.io/helm-charts
releases:
  - name: "prometheus-blackbox-exporter"
    chart: "prometheus-community/prometheus-blackbox-exporter"
    namespace: "kube-system-monitoring"
    values:
      - ./values/values-common.yaml
    set:
      - name: ingress.hosts[0].host
        value: prometheus-blackbox.private.{{ requiredEnv "CLUSTER" }}.{{ requiredEnv "TOP_LEVEL_DOMAIN" }}
    transformers:
      - apiVersion: builtin
        kind: LabelTransformer
        metadata:
          name: notImportantHere
        labels:
          foo: bar
        fieldSpecs:
          - kind: Deployment
            path: spec/template/metadata/labels
            create: true
$ argocd app diff argocd/prometheus-blackbox-exporter
===== apps/Deployment kube-system-monitoring/prometheus-blackbox-exporter ======
193a194
> foo: bar
===== networking.k8s.io/Ingress kube-system-monitoring/prometheus-blackbox-exporter ======
54c54
< - host: prometheus-blackbox.private.<redacted>
---
> - host: CHANGE_ME
===== policy/PodSecurityPolicy /prometheus-blackbox-exporter-psp ======
1,67d0
< apiVersion: policy/v1beta1
< kind: PodSecurityPolicy
< metadata:
< annotations:
< kubectl.kubernetes.io/last-applied-configuration: |
< {"apiVersion":"policy/v1beta1","kind":"PodSecurityPolicy","metadata":{"annotations":{},"labels":{"app.kubernetes.io/instance":"prometheus-blackbox-exporter","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/name":"prometheus-blackbox-exporter","app.kubernetes.io/version":"v0.24.0","argocd.argoproj.io/instance":"prometheus-blackbox-exporter","helm.sh/chart":"prometheus-blackbox-exporter-7.10.0"},"name":"prometheus-blackbox-exporter-psp"},"spec":{"allowPrivilegeEscalation":false,"fsGroup":{"ran
===== rbac.authorization.k8s.io/Role kube-system-monitoring/prometheus-blackbox-exporter ======
1,53d0
< apiVersion: rbac.authorization.k8s.io/v1
< kind: Role
< metadata:
< annotations:
< kubectl.kubernetes.io/last-applied-configuration: |
< {"apiVersion":"rbac.authorization.k8s.io/v1","kind":"Role","metadata":{"annotations":{},"labels":{"app.kubernetes.io/instance":"prometheus-blackbox-exporter","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/name":"prometheus-blackbox-exporter","app.kubernetes.io/version":"v0.24.0","argocd.argoproj.io/instance":"prometheus-blackbox-exporter","helm.sh/chart":"prometheus-blackbox-exporter-7.10.0"},"name":"prometheus-blackbox-exporter","namespace":"kube-system-monitoring"},"rules":[{"apiGroups":
===== rbac.authorization.k8s.io/RoleBinding kube-system-monitoring/prometheus-blackbox-exporter ======
1,52d0
< apiVersion: rbac.authorization.k8s.io/v1
< kind: RoleBinding
< metadata:
< annotations:
< kubectl.kubernetes.io/last-applied-configuration: |
< {"apiVersion":"rbac.authorization.k8s.io/v1","kind":"RoleBinding","metadata":{"annotations":{},"labels":{"app.kubernetes.io/instance":"prometheus-blackbox-exporter","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/name":"prometheus-blackbox-exporter","app.kubernetes.io/version":"v0.24.0","argocd.argoproj.io/instance":"prometheus-blackbox-exporter","helm.sh/chart":"prometheus-blackbox-exporter-7.10.0"},"name":"prometheus-blackbox-exporter","namespace":"kube-system-monitoring"},"roleRef":{"api%
My values file
$ cat values/values-common.yaml
---
ingress:
  enabled: true
  className: nginx-private
  hosts:
    - host: CHANGE_ME
      paths:
        - path: /
          pathType: Prefix
local run (x86 mac)
helmfile = v0.153.1
helm = v3.11.3
kustomize = v5.1.0
Steps to reproduce: deploy the application using argo-cd without the transformer (the first helmfile above), then add the transformer (the second helmfile).
$ helmfile template | k apply -f -
Adding repo prometheus-community https://prometheus-community.github.io/helm-charts
"prometheus-community" has been added to your repositories
Templating release=prometheus-blackbox-exporter, chart=/var/folders/9h/5kdwd1zd3bjchvwqpcg1t8fw0000gq/T/chartify3204551211/kube-system-monitoring/prometheus-blackbox-exporter/prometheus-blackbox-exporter
serviceaccount/prometheus-blackbox-exporter configured
configmap/prometheus-blackbox-exporter configured
service/prometheus-blackbox-exporter configured
deployment.apps/prometheus-blackbox-exporter configured
ingress.networking.k8s.io/prometheus-blackbox-exporter configured
$ argocd app diff argocd/prometheus-blackbox-exporter
===== /ConfigMap kube-system-monitoring/prometheus-blackbox-exporter ======
23a24
> argocd.argoproj.io/instance: prometheus-blackbox-exporter
===== /Service kube-system-monitoring/prometheus-blackbox-exporter ======
11a12
> argocd.argoproj.io/instance: prometheus-blackbox-exporter
===== /ServiceAccount kube-system-monitoring/prometheus-blackbox-exporter ======
11a12
> argocd.argoproj.io/instance: prometheus-blackbox-exporter
===== apps/Deployment kube-system-monitoring/prometheus-blackbox-exporter ======
15a16
> argocd.argoproj.io/instance: prometheus-blackbox-exporter
205d205
< foo: bar
===== networking.k8s.io/Ingress kube-system-monitoring/prometheus-blackbox-exporter ======
12a13
> argocd.argoproj.io/instance: prometheus-blackbox-exporter
Hello, first of all I would like to thank you for this project, I find it very useful!
The problem I'm having is that upon the migration to running the CMP as a sidecar container, a few applications on our cluster broke, for mainly the 2 reasons described below.
Some apps broke because of a missing kustomize binary:
exec: "kustomize": executable file not found in $PATH COMMAND: kustomize build ...
even though the binary is available in the main container of the pod.
I confirmed this by running something like the following in all replicas of the repo-server pod (-c helmfile-plugin):
wget https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize%2Fv5.0.1/kustomize_v5.0.1_linux_amd64.tar.gz
tar xfz kustomize_v5.0.1_linux_amd64.tar.gz
mv kustomize /home/argocd/krew/bin/ # (the only writeable directory in PATH)
After that, a "hard refresh" on most broken applications fixed all the errors.
Of course I can add a shared-volume init-container to download kustomize, but this raises a few concerns about maintainability: argo-cd routinely updates its bundled tools using this config, so it has all of the tools out of the box, with versions guaranteed to work well with argo-cd. The image tagged travisghansen/argo-cd-helmfile:v0.3.1 has helm v3.11.1, which is older than what's in argo-cd master but newer than what's in argo-cd latest (2.6.4), so this is a bit inconsistent and may cause difficult-to-debug issues in the future.
Maybe there is a way to align with the argo-cd tools versions, e.g. by referring to the tool-versions.sh file at the stable git tag? Or by working with upstream to extend the copyutil init-container of the repo-server to also copy those tools to /run along with the cmp-server. Either way, your comment on this would be very valuable!
hacks/.../download.sh
The other issue is that some apps, for example kubevirt, require helmfile to execute a download script to fetch the manifests; after the move to the sidecar those charts stopped working, and I get the following error in argo-cd:
panic: unexpected error: fork/exec ../../hacks/kubevirt/download.sh: permission denied
goroutine 98 [running]:
github.com/helmfile/helmfile/pkg/helmexec.Output(0xc0009ee000, {0xc000ae7740, 0x1, 0x2?})
	/home/runner/work/helmfile/helmfile/pkg/helmexec/runner.go:107 +0xced
github.com/helmfile/helmfile/pkg/helmexec.ShellRunner.Execute({{0x2cc9508?, 0x40dfe7?}, 0xc000aa46d0?}, {0xc00011caa0?, 0xc000ae7701?}, {0xc0009a3840?, 0x40eb25?, 0x21eed20?}, 0xc000ca30b0?, 0x0)
	/home/runner/work/helmfile/helmfile/pkg/helmexec/runner.go:40 +0x12e
github.com/helmfile/helmfile/pkg/event.(*Bus).Trigger(0xc000ae7bd0, {0x2588551, 0x7}, {0x0, 0x0}, 0x0?)
	/home/runner/work/helmfile/helmfile/pkg/event/bus.go:121 +0xa0b
github.com/helmfile/helmfile/pkg/state.(*HelmState).triggerReleaseEvent(0x4ca43c?, {0x2588551, 0x7}, {0x0?, 0x0}, 0xc00096d818, {0x258a656, 0x8})
	/home/runner/work/helmfile/helmfile/pkg/state/state.go:2312 +0x331
github.com/helmfile/helmfile/pkg/state.(*HelmState).triggerPrepareEvent(...)
	/home/runner/work/helmfile/helmfile/pkg/state/state.go:2275
github.com/helmfile/helmfile/pkg/state.(*HelmState).PrepareCharts.func2(0x441fc5?)
	/home/runner/work/helmfile/helmfile/pkg/state/state.go:1136 +0x17e
github.com/helmfile/helmfile/pkg/state.(*HelmState).scatterGather.func1(0x0?)
	/home/runner/work/helmfile/helmfile/pkg/state/state_run.go:33 +0x26
created by github.com/helmfile/helmfile/pkg/state.(*HelmState).scatterGather
	/home/runner/work/helmfile/helmfile/pkg/state/state_run.go:32 +0x74
I did not dive very deep into this, but it looks like some kind of noexec mount issue. Do you have any pointers on where to look to understand this better?
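A quick check could distinguish the two likely causes. This is a hypothetical diagnostic (the script path is copied from the error above) that could be run inside the helmfile-plugin sidecar:

```shell
#!/bin/sh
# Hypothetical diagnostic: is the exec bit missing on the script, or is
# the filesystem backing the repo checkout mounted noexec?
script=../../hacks/kubevirt/download.sh
if [ -x "$script" ]; then
  echo "exec bit present"
else
  echo "exec bit missing or file not found"
fi
# findmnt shows the mount options of the filesystem backing the cwd;
# look for "noexec" in the OPTIONS column
findmnt -T . -o TARGET,OPTIONS || true
```

If the exec bit is present but the mount shows noexec, the fix would be on the volume/mount side rather than in git.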
Both of these are non-issues with the configmap approach. Here is the relevant version info:
travisghansen/argo-cd-helmfile:v0.3.1
ArgoCD: 2.6.5 (chart 5.26.2)
Thank you, and looking forward to your reply!
I'm having trouble passing Helm values from one repository to another.
I have the following helmfile.yaml in some-repo-a:
bases:
  - ../../../bases/helm-defaults.yaml

missingFileHandler: Error

releases:
  - name: "namespaces"
    namespace: "default"
    createNamespace: true
    chart: "../../../charts/namespaces"
    values:
      - {{ env "ARGOCD_ENV_NAMESPACES_VALUES_FILE" | quote }}
I have the following ArgoCD Application config in some-repo-b:
spec:
  project: default
  sources:
    - repoURL: "https://github.com/some-repo-a"
      path: "src"
      targetRevision: "main"
      plugin:
        name: "helmfile"
        env:
          - name: "NAMESPACES_VALUES_FILE"
            value: $values/path/to/values.yaml
    - repoURL: "https://github.com/some-repo-b"
      targetRevision: main
      ref: values
  destination:
    server: "in-cluster"
    namespace: "default"
I configured the ArgoCD repo-server following this guide.
When I try to sync the application I get: /path/to/values.yaml file or directory does not exists.
Is there a way to pass the values YAML from a different repository with a setup like the above?
ArgoCD version: v2.11.2
argo-cd-helmfile: v0.3.11
helmfile: v0.165.0
Thank you
Hello,
I don't know if this relates to helmfile or the integration, but I couldn't run helm-diff against the installed Helm charts.
When I do:
helmfile diff -f <file path> --validate
I'm getting this output:
PATH:
/opt/homebrew/bin/helm
ARGS:
0: helm (4 bytes)
1: diff (4 bytes)
2: upgrade (7 bytes)
3: --reset-values (14 bytes)
4: --allow-unreleased (18 bytes)
5: datadog (7 bytes)
6: datadog/datadog (15 bytes)
7: --version (9 bytes)
8: 2.37.5 (6 bytes)
9: --namespace (11 bytes)
10: datadog (7 bytes)
11: --values (8 bytes)
12: /var/folders/9c/v5hnv4b107xdxr53sj9mt2s00000gn/T/helmfile3020135561/datadog-datadog-values-6676658555 (101 bytes)
13: --values (8 bytes)
14: /var/folders/9c/v5hnv4b107xdxr53sj9mt2s00000gn/T/helmfile2144237898/datadog-datadog-values-654b4745f4 (101 bytes)
15: --values (8 bytes)
16: /var/folders/9c/v5hnv4b107xdxr53sj9mt2s00000gn/T/helmfile2436640924/datadog-datadog-values-fbbdcb49d (100 bytes)
17: --color (7 bytes)
ERROR:
exit status 1
EXIT STATUS
1
STDERR:
Error: Failed to render chart: exit status 1: Error: rendered manifests contain a resource that already exists. Unable to continue with install: PodDisruptionBudget "datadog-cluster-agent" in namespace "datadog" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "datadog"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "datadog"
Error: plugin "diff" exited with error
COMBINED OUTPUT:
********************
Release was not present in Helm. Diff will show entire contents as new.
********************
Error: Failed to render chart: exit status 1: Error: rendered manifests contain a resource that already exists. Unable to continue with install: PodDisruptionBudget "datadog-cluster-agent" in namespace "datadog" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "datadog"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "datadog"
Error: plugin "diff" exited with error
Hello,
somehow the helmfile plugin gets an authentication error while pulling charts from private OCI registries such as Docker Hub.
As a workaround, I put this init script in place (HELMFILE_INIT_SCRIPT_FILE):
echo "echo \$HELM_REPOSITORY_PASSWORD | helm registry login registry-1.docker.io -u \$HELM_REPOSITORY_USERNAME --password-stdin" > init_auth.sh
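Expanded, the generated init script would look roughly like this. This is a sketch: the credential env var names are the ones used in the one-liner above, and the guards are added so the snippet degrades gracefully where helm or the credentials are absent.

```shell
#!/bin/sh
# Sketch of the init script body: authenticate helm against Docker Hub's
# OCI registry before any oci:// charts are pulled.
if command -v helm >/dev/null 2>&1; then
  printf '%s' "$HELM_REPOSITORY_PASSWORD" |
    helm registry login registry-1.docker.io \
      -u "$HELM_REPOSITORY_USERNAME" --password-stdin ||
    echo "registry login failed (no credentials in this environment?)"
else
  echo "helm binary not found; skipping registry login"
fi
```

Piping the password via --password-stdin keeps it out of the process argument list.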
Hi,
thank you for developing such a great plugin for helmfile. I was wondering what I did wrong when trying to achieve something like helmfile --environment development apply.
Here is my application manifest:
spec:
  destination:
    namespace: default
    server: {{ .Values.spec.destination.server }}
  project: default
  source:
    path: {{ .Values.spec.source.path }}
    repoURL: {{ .Values.spec.source.repoURL }}
    targetRevision: {{ .Values.spec.source.targetRevision }}
    plugin:
      name: helmfile
      env:
        - name: HELMFILE_GLOBAL_OPTIONS
          value: '"--environment development"'
    helm:
      # Release name override (defaults to application name)
      releaseName: guestbook-{{ .Values.environment }}
      # Helm values files for overriding values in the helm chart
      # The path is relative to the spec.source.path directory defined above
      valueFiles:
        - values-{{ .Values.environment }}.yaml
Here is the structure of the example project:
So basically, what I want to achieve is to specify --environment in order to tell helmfile which values.yaml to use when rendering the manifests. Thank you!
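For reference, the --environment flag maps to an environments: block in helmfile.yaml; a minimal sketch (file, chart, and release names are assumptions):

```yaml
environments:
  development:
    values:
      - values-development.yaml
  production:
    values:
      - values-production.yaml
---
releases:
  - name: guestbook
    chart: ./charts/guestbook
    values:
      # picks the values file matching the active environment
      - values-{{ .Environment.Name }}.yaml
```

With this layout, `helmfile --environment development template` renders with values-development.yaml.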
It looks like ExternalSecrets is more robust and flexible and supports many secrets managers, not just Vault, as an alternative to SOPS.
Would love to see this project support ExternalSecrets as well.
We have a fairly complex helmfile structure with dependencies, files, templates, and multiple charts; the structure looks like the below.
When I'm trying to add one of the charts I see a Redis error, but I feel this is the effect, not the cause. Any help will be appreciated, thanks.
Error:
Unable to save changes: application spec for app1 is invalid: InvalidSpecError: Unable to generate manifests in helmfile.d: rpc error: code = Unknown desc = dial tcp: lookup argocd-redis on 172.18.0.10:53: no such host
ArgoCD manifest
project: default
source:
  repoURL: 'https://my repo'
  path: helmfile.d/
  targetRevision: my branch
  plugin:
    name: helmfile
    env:
      - name: HELMFILE_GLOBAL_OPTIONS
        value: '--environment region1 --file helmfile.d/helmfile.yaml --selector name=app1'
destination:
  server: 'https://kubernetes.default.svc'
  namespace: be
syncPolicy: {}
Hi,
can you provide an example app deployment using this helmfile plugin in ArgoCD? What does the app manifest look like?
Do you expect this to work with the sidecar option too? As of v2.5.1 the configmap approach is deprecated. Any pointers to make it work with the sidecar option would help.
Hello, and thank you for this initiative: argocd for helmfile.
I followed the documentation but got confused about where to put the env vars.
Using the ArgoCD UI, I put them under "external vars".
When I checked the alternative YAML, I found them under jsonnet:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: mysql-admin-ui
spec:
  destination:
    name: ''
    namespace: ''
    server: 'https://kubernetes.default.svc'
  source:
    path: phpmyadmin
    repoURL: 'git@blah/blah/balh.git'
    targetRevision: HEAD
    directory:
      jsonnet:
        extVars:
          - name: HELMFILE_GLOBAL_OPTIONS
            value: '"-e prod -l name=mysql-admin-ui"'
        code: true
      tlas: []
  project: default
  syncPolicy:
    automated: null
And if I put them under "TOP-LEVEL ARGUMENTS", I find them in the YAML under spec.source.directory.tlas.
So what is the right place to put this plugin's env vars?
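For comparison, other manifests in this thread put the plugin env vars under spec.source.plugin.env rather than under spec.source.directory; a sketch reusing the values from the YAML above:

```yaml
spec:
  source:
    path: phpmyadmin
    repoURL: 'git@blah/blah/balh.git'
    targetRevision: HEAD
    plugin:
      name: helmfile
      env:
        - name: HELMFILE_GLOBAL_OPTIONS
          value: '-e prod -l name=mysql-admin-ui'
```

The jsonnet extVars and tlas fields shown above only apply to jsonnet directory sources, which may explain the confusion in the UI.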
Hello
we're getting these errors from the repo-servers & helmfile-plugin containers:
[argocd-repo-server-7c87b45c78-klqsl helmfile-plugin] 2023-12-02T23:31:00.576124817Z time="2023-12-02T23:31:00Z" level=error msg="`argo-cd-helmfile.sh discover` failed exit status 1: helm version v3.13.2+g2a2fb3b\nhelmfile version 0.159.0\nstarting discover" execID=9f414
[argocd-repo-server-7c87b45c78-klqsl repo-server] 2023-12-02T23:28:01.967570989Z {"CWE":775,"level":"error","msg":"repository /tmp/_argocd-repo/6ce5552d-0857-429f-976c-3270dcb35528 is not the match because error receiving stream response: rpc error: code = Unknown desc = match repository error: error running find command: `argo-cd-helmfile.sh discover` failed exit status 1: helm version v3.13.2+g2a2fb3b\nhelmfile version 0.159.0\nstarting discover","security":2,"time":"2023-12-02T23:28:01Z"}
We have 2 replicas of repo-server, but we don't know the cause.
argocd version: v2.9.2+c5ea5c4
helmfile-plugin version: travisghansen/argo-cd-helmfile:v0.3.7
What could be the problem?
Hello,
We're just starting to use your plugin, and we're facing an issue. Maybe we're missing something.
We have a Helmfile configuration with many Helm releases to be installed in different namespaces. The namespaces are defined in the Helmfile configuration for each release.
But when we want to synchronize the Argo Application related to this Helmfile (with no namespace defined in the Application), we have errors like this for each resource:
After some investigation, these errors only happen for Helm charts that don't set the metadata.namespace field in their templates.
There is no error when that field is set in the templates; in that case the namespace defined in the Helmfile configuration is correctly applied. But most Helm charts do not set this value in their templates by default...
It seems the namespace we define in the Helmfile configuration is not retrieved or applied from the context when the namespace field is not already present in the chart.
Do you have any idea about what could be the issue?
Hi, first of all thanks for the work.
I migrated a month ago to the new sidecar-based setup and everything works except for one case: apps-of-apps deployed with Argo.
The structure of the directory is this:
root_manifest:
The discovery command succeeds; in fact the result of the find is this:
find . -type f -name "helmfile.yaml"
./application/redis/helmfile.yaml
./application/mysql/helmfile.yaml
./application/helmfile.yaml
./application/app/helmfile.yaml
./system-utils/external-secrets/helmfile.yaml
./system-utils/helmfile.yaml
Is there a way to make the discovery command fail for some of these paths?
Hello,
when I put this in helmfile.yaml:
helmDefaults:
  args:
    - "--skip-crds"
I'm getting this error:
ComparisonError: rpc error: code = Unknown desc = Manifest generation error (cached): `argo-cd-helmfile.sh init` failed exit status 1:
v3.10.3+g835b733
helmfile version 0.150.0
starting init
Adding repo bitnami https://charts.bitnami.com/bitnami in ./helmfile.yaml: command "/usr/local/bin/helm" exited with non-zero status:
PATH:
  /usr/local/bin/helm
ARGS:
  0: /usr/local/bin/helm (19 bytes)
  1: repo (4 bytes)
  2: add (3 bytes)
  3: bitnami (7 bytes)
  4: https://charts.bitnami.com/bitnami (34 bytes)
  5: --force-update (14 bytes)
  6: --skip-crds (11 bytes)
ERROR:
  exit status 1
EXIT STATUS
  1
STDERR:
  Error: unknown flag: --skip-crds
COMBINED OUTPUT:
  Error: unknown flag: --skip-crds
(retried 4 times).
When running helmfile sync locally it works.
I'm using an ArgoCD ApplicationSet, meaning Argo apps are generated automatically for many helmfiles, but I want to skip CRDs only for one specific helmfile.
How can I do that?
Thanks!
Currently I am facing issues when trying to use custom values.
Here is my helmfile:
repositories:
  - name: stable
    url: https://kubernetes-charts.storage.googleapis.com/

releases:
  - name: test-helmfile1
    namespace: default
    chart: stable/docker-registry
    values:
      - ../pizza-docker/values.yaml
Application manifest (my Docker image already has helmfile and argo-cd-helmfile.sh in /usr/local/bin):
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: test
spec:
  destination:
    namespace: default
    server: 'https://kubernetes.default.svc'
  source:
    path: argocd-helm/papertrail/test
    repoURL: '[email protected]:xx/xx-demo.git'
    targetRevision: HEAD
    plugin:
      name: helmfile
      env:
        - name: KUBE_VERSION
          value: '1.16'
        - name: HELMFILE_HELMFILE
          value: helmfile.yaml
  project: default
It doesn't work for me. Do you have any clue?
Here are my logs:
ComparisonError
rpc error: code = Unknown desc = `argo-cd-helmfile.sh init` failed exit status 2: + [[ -z init ]] ++ basename /usr/local/bin/argo-cd-helmfile.sh + SCRIPT_NAME=argo-cd-helmfile.sh + [[ -n '' ]] + [[ -n '' ]] + [[ -n '' ]] + phase=init + export HELM_HOME=/tmp/__argo-cd-helmfile.sh__/apps/test + HELM_HOME=/tmp/__argo-cd-helmfile.sh__/apps/test + export HELMFILE_HELMFILE_HELMFILED=/tmp/[email protected]_xx-demo/argocd-helm/papertrail/test/.__argo-cd-helmfile.sh__helmfile.d + HELMFILE_HELMFILE_HELMFILED=/tmp/[email protected]_xxx-demo/argocd-helm/papertrail/test/.__argo-cd-helmfile.sh__helmfile.d + [[ ! -d /tmp/__argo-cd-helmfile.sh__/bin ]] + [[ -n '' ]] ++ which helm + helm=/usr/local/bin/helm + [[ -n '' ]] ++ which helmfile + [[ -n /usr/local/bin/helmfile ]] ++ which helmfile + helmfile=/usr/local/bin/helmfile + helmfile='/usr/local/bin/helmfile --helm-binary /usr/local/bin/helm --no-color --allow-no-matching-release' + [[ -n default ]] + helmfile='/usr/local/bin/helmfile --helm-binary /usr/local/bin/helm --no-color --allow-no-matching-release --namespace default' + [[ -n '' ]] + [[ -v HELMFILE_HELMFILE ]] + helmfile='/usr/local/bin/helmfile --helm-binary /usr/local/bin/helm --no-color --allow-no-matching-release --namespace default -f /tmp/[email protected]_xx_xx-demo/argocd-helm/papertrail/test/.__argo-cd-helmfile.sh__helmfile.d' + HELMFILE_HELMFILE_STRATEGY=REPLACE ++ /usr/local/bin/helm version --short --client ++ cut -d ' ' -f2 + helm_full_version=v3.2.0+ge11b7ce ++ echo v3.2.0+ge11b7ce ++ cut -d . 
-f1 ++ sed 's/[^0-9]//g' + helm_major_version=3 + export HOME=/tmp/__argo-cd-helmfile.sh__/apps/test + HOME=/tmp/__argo-cd-helmfile.sh__/apps/test ++ /usr/local/bin/helm version --short --client + echoerr v3.2.0+ge11b7ce + printf '%s\n' v3.2.0+ge11b7ce v3.2.0+ge11b7ce ++ /usr/local/bin/helmfile --helm-binary /usr/local/bin/helm --no-color --allow-no-matching-release --namespace default -f /tmp/[email protected]_xxx_xx-demo/argocd-helm/papertrail/test/.__argo-cd-helmfile.sh__helmfile.d --version /usr/local/bin/helmfile: line 1: syntax error near unexpected token `<' /usr/local/bin/helmfile: line 1: `<html><body>You are being <a href="https://github-production-release-asset-2e65be.s3.amazonaws.com/74499101/19b3258b9457abdad?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=&X-Amz-Expires=300&X-Amz-Signature=ced665cfe6a1abee27e24f8ff3a3ge091d&X-Amz-SignedHeaders=host&actor_id=0&repo_id=74499101&response-content-disposition=attachment%3B%20filename%3Dhelmfile_linux_amd64&response-content-type=application%2Foctet-stream">redirected</a>.</body></html>' + echoerr '' + printf '%s\n' '' + case $phase in + echoerr 'starting init' + printf '%s\n' 'starting init' starting init + [[ ! -d /tmp/__argo-cd-helmfile.sh__/apps/test ]] + [[ -v HELMFILE_HELMFILE ]] + rm -rf /tmp/[email protected]_xx-demo/argocd-helm/papertrail/test/.__argo-cd-helmfile.sh__helmfile.d + mkdir -p /tmp/[email protected]_xxxs-demo/argocd-helm/papertrail/test/.__argo-cd-helmfile.sh__helmfile.d + case "${HELMFILE_HELMFILE_STRATEGY}" in + echo helmfile.yaml + [[ ! 
-d .__argo-cd-helmfile.sh__helmfile.d ]] + [[ 3 -eq 2 ]] + [[ 3 -eq 3 ]] + export HELMFILE_HELM3=1 + HELMFILE_HELM3=1 + /usr/local/bin/helmfile --helm-binary /usr/local/bin/helm --no-color --allow-no-matching-release --namespace default -f /tmp/[email protected]_xxx-demo/argocd-helm/papertrail/test/.__argo-cd-helmfile.sh__helmfile.d repos /usr/local/bin/helmfile: line 1: syntax error near unexpected token `<' /usr/local/bin/helmfile: line 1: `<html><body>You are being <a href="https://github-production-release-asset-2e6de5be.s3.amazonaws.com/7401/19b32580-317f-11eae-9dr4-79b94575bdad?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=equest&X-Amz-Date=20200805T120846Z&X-Amz-Expires=300&X-Amz-Signature=ced665f845bd4adddeec6a1abee27e24f8ff3a3091d&X-Amz-SignedHeaders=host&actor_id=0&repo_id=74499101&response-content-disposition=attachment%3B%20filename%3Dhelmfile_linux_amd64&response-content-type=application%2Foctet-stream">redirected</a>.</body></html>'
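One detail stands out in the trace: /usr/local/bin/helmfile fails with "syntax error near unexpected token `<'" and its contents are a GitHub "You are being redirected" HTML page, which suggests the binary was downloaded without following the release-asset redirect. A hypothetical check for this condition:

```shell
#!/bin/sh
# Hypothetical diagnosis: if the "binary" starts with '<', it is the
# GitHub HTML redirect page, not helmfile. Re-download with a redirect-
# following client (e.g. curl -fsSL, or wget, which follows by default).
f=/usr/local/bin/helmfile
if [ -f "$f" ] && head -c1 "$f" | grep -q '<'; then
  echo "helmfile is an HTML page; re-download it with curl -fsSL -o $f <release-url>"
fi
```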
Actually, values.yaml is located in a different path.
I also tried a couple of other directory structures; neither worked.
The HELMFILE_HELMFILE variable is not working either.
Hello,
is it possible to enable the https://github.com/helmfile/vals templating pass during manifest rendering so that external values can be used as well?
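For context, vals resolves ref+ URIs embedded in values; a minimal sketch of what this would enable in a helmfile release (chart name and secret path are assumptions):

```yaml
releases:
  - name: app
    chart: ./charts/app
    values:
      - db:
          # resolved by vals at render time from an external backend;
          # the Vault path here is purely illustrative
          password: ref+vault://secret/data/app#/password
```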
I am using the helmfile plugin; additionally I have installed the helm-secrets plugin and imported the GPG key, and all of that works fine.
I am unable to create the app in ArgoCD, getting the error below:
Unable to create application: application spec is invalid: InvalidSpecError: Unable to generate manifests in wnccloud/releases: rpc error: code = Unknown desc = Manifest generation error (cached): failed to unmarshal manifest: error converting YAML to JSON: yaml: line 6: mapping values are not allowed in this context
However, when I run helmfile sync directly on my k8s cluster, I don't get any such issue.
I am pointing the repository path at the releases folder, e.g. myrepo/releases, and using the helmfile plugin with the parameter RELEASE mydemorelease.
Hello,
I'm using this plugin to deploy a helmfile sourced from a remote git repository.
Is there a way to create separate ArgoCD Applications for each release listed in the helmfile? It seems this is recommended by the helmfile documentation:
Do create ArgoCD Application custom resource per Helm/Helmfile release, each point to respective sub-directory generated by helmfile template --output-dir-template
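A sketch of that pattern (the output path is an assumption, and the invocation is guarded since helmfile may not be on PATH in every environment):

```shell
#!/bin/sh
# Render each release into its own sub-directory, so that one ArgoCD
# Application per release can point at the respective directory.
if command -v helmfile >/dev/null 2>&1; then
  helmfile template \
    --output-dir-template '/tmp/rendered/{{ .Release.Name }}'
else
  echo "helmfile not installed in this environment"
fi
```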
Hi there!
I've got an as-of-yet private repository which is based on https://github.com/chatwork/dockerfiles/tree/master/argocd-helmfile but also includes a simplified version of your argo-cd-helmfile.sh script. The repository by Chatwork is MIT licensed. Could you please clarify the license of your repository? Licensing it under MIT would be most straightforward in my case, but any MIT-compatible license would help. Thanks!
Hello,
sometimes in ArgoCD I see this message in apps:
/usr/local/bin/helmfile --helm-binary /usr/local/bin/helm --no-color --allow-no-matching-release --namespace chaos-testing repos
Adding repo chaos-mesh https://charts.chaos-mesh.org in ./helmfile.yaml: command "/usr/local/bin/helm" exited with non-zero status:
PATH:
  /usr/local/bin/helm
ARGS:
  0: /usr/local/bin/helm (19 bytes)
  1: repo (4 bytes)
  2: add (3 bytes)
  3: chaos-mesh (10 bytes)
  4: https://charts.chaos-mesh.org (29 bytes)
  5: --force-update (14 bytes)
ERROR:
  exit status 1
EXIT STATUS
  1
STDERR:
  Error: context deadline exceeded
COMBINED OUTPUT:
  Error: context deadline exceeded
They are in Unknown sync status, and after a few minutes (1-3) they are synced and back to normal.
It doesn't matter which app; it could be any of them.
What causes this?