terraform-provider-helmfile

Deploy Helmfile releases from within Terraform.

Benefits:

  • Entry-point to your infrastructure and app deployments as a whole
  • Input terraform variables and outputs into Helmfile
  • Blue-green deployment of your whole stack with tf's create_before_destroy
  • AWS authentication and AssumeRole support

Prerequisites

  • Helmfile
    • v0.126.0 or greater is highly recommended due to #28

Installation

For Terraform 0.12:

Install the terraform-provider-helmfile binary under .terraform/plugins/${OS}_${ARCH}, so that the binary is at e.g. ${WORKSPACE}/.terraform/plugins/darwin_amd64/terraform-provider-helmfile.

For Terraform 0.13 and later:

The provider is available on the Terraform Registry, so you can just add the following to your tf file for installation:

terraform {
  required_providers {
    helmfile = {
      source = "mumoshu/helmfile"
      version = "VERSION"
    }
  }
}

Please replace VERSION with the version number of the provider without the v prefix, like 0.3.14.
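
For example, to pin the provider at 0.3.14:

terraform {
  required_providers {
    helmfile = {
      source  = "mumoshu/helmfile"
      version = "0.3.14"
    }
  }
}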

Examples

There is nothing to configure for the provider itself, so you first declare it like:

provider "helmfile" {}

You can define a release in one of the following ways:

Inline helmfile_release

helmfile_release would be a natural choice for users who are familiar with Terraform. It simply maps each Terraform helmfile_release resource to a single Helm release, one-to-one:

resource "helmfile_release" "myapp" {
	# `name` is the optional release name. When omitted, it's set to the ID of the resource, "myapp".
	# name = "myapp-${var.somevar}"
	namespace = "default"
	chart = "sp/podinfo"
	helm_binary = "helm3"

	working_directory = path.module
	values = [
		<<EOF
{ "image": {"tag": "3.14" } }
EOF
	]
}

External helmfile_release_set

External helmfile_release_set is the easiest way for existing Helmfile users.

Existing helmfile.yaml, mapped 1:1 -

resource "helmfile_release_set" "mystack" {
    content = file("./helmfile.yaml")
}

Existing helmfile.d folder -

resource "helmfile_release_set" "mystack" {
	working_directory = "<directory_where_helmfile.d_exists>"
	kubeconfig        = pathexpand("<kube_config>")
	environment       = "prod"
	values = [
		<<EOF
{ "image": {"tag": "3.14" } }
EOF
	]
}

Inline helmfile_release_set

The inline variant of the release set lets you write what would otherwise be helmfile.yaml in Terraform syntax instead of Go templates:

resource "helmfile_release_set" "mystack" {
    # Install and choose from one of installed versions of helm
    # By changing this, you can upgrade helm per release_set
    # Default: helm
    helm_binary = "helm-3.0.0"

    # Install and choose from one of installed versions of helmfile
    # By changing this, you can upgrade helmfile per release_set
    # Default: helmfile
    binary = "helmfile-v0.93.0"

    working_directory = path.module

    # Maximum number of concurrent helm processes to run; 0 means unlimited (default: 0)
    concurrency = 0

    # Helmfile environment name to deploy
    # Default: default
    environment = "prod"

    # Environment variables available to helmfile's requireEnv and commands being run by helmfile
    environment_variables = {
        FOO = "foo"
        KUBECONFIG = "path/to/your/kubeconfig"
    }
    
    # State values to be passed to Helmfile. Note that only one values
    # attribute is allowed per resource; the two forms below are alternatives.
    #
    # Inline state values, corresponding to --state-values-set name=myapp:
    # values = {
    #   name = "myapp"
    # }
    #
    # State values file contents:
    values = [
      file("overrides.yaml"),
      file("another.yaml"),
    ]
    
    # Label key-value pairs to filter releases 
    selector = {
      # Corresponds to -l labelkey1=value1
      labelkey1 = "value1"
    }
}

output "mystack_diff" {
  value = helmfile_release_set.mystack.diff_output
}

output "mystack_apply" {
  value = helmfile_release_set.mystack.apply_output
}

The example above changes the working directory and sets environment variables that are made available to every helmfile run in the release set.

Stdout and stderr from Helmfile runs are available in the debug log files.

Running terraform plan runs helmfile diff.

It shows no changes when helmfile diff detects none:

helmfile_release_set.mystack: Refreshing state... [id=bnd30hkllhcvvgsrplo0]

------------------------------------------------------------------------

No changes. Infrastructure is up-to-date.

This means that Terraform did not detect any differences between your
configuration and real physical resources that exist. As a result, no
actions need to be performed.

If helmfile diff does detect changes, terraform plan surfaces them in the diff_output field:

helmfile_release_set.mystack: Refreshing state... [id=bnd30hkllhcvvgsrplo0]

------------------------------------------------------------------------

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # helmfile_release_set.mystack will be updated in-place
  ~ resource "helmfile_release_set" "mystack" {
        binary                = "helmfile"
      - diff_output           = "Comparing release=myapp-foo, chart=sp/podinfo\n\x1b[33mdefault, myapp-foo-podinfo, Deployment (apps) has changed:\x1b[0m\n  # Source: podinfo/templates/deployment.yaml\n  apiVersion: apps/v1\n  kind: Deployment\n  metadata:\n    name: myapp-foo-podinfo\n    labels:\n      app: podinfo\n      chart: podinfo-3.1.4\n      release: myapp-foo\n      heritage: Helm\n  spec:\n    replicas: 1\n    strategy:\n      type: RollingUpdate\n      rollingUpdate:\n        maxUnavailable: 1\n    selector:\n      matchLabels:\n        app: podinfo\n        release: myapp-foo\n    template:\n      metadata:\n        labels:\n          app: podinfo\n          release: myapp-foo\n        annotations:\n          prometheus.io/scrape: \"true\"\n          prometheus.io/port: \"9898\"\n      spec:\n        terminationGracePeriodSeconds: 30\n        containers:\n          - name: podinfo\n\x1b[31m-           image: \"stefanprodan/podinfo:foobar2aa\"\x1b[0m\n\x1b[32m+           image: \"stefanprodan/podinfo:foobar2a\"\x1b[0m\n            imagePullPolicy: IfNotPresent\n            command:\n              - ./podinfo\n              - --port=9898\n              - --port-metrics=9797\n              - --grpc-port=9999\n              - --grpc-service-name=podinfo\n              - --level=info\n              - --random-delay=false\n              - --random-error=false\n            env:\n            - name: PODINFO_UI_COLOR\n              value: cyan\n            ports:\n              - name: http\n                containerPort: 9898\n                protocol: TCP\n              - name: http-metrics\n                containerPort: 9797\n                protocol: TCP\n              - name: grpc\n                containerPort: 9999\n                protocol: TCP\n            livenessProbe:\n              exec:\n                command:\n                - podcli\n                - check\n                - http\n                - localhost:9898/healthz\n              initialDelaySeconds: 1\n              timeoutSeconds: 5\n            readinessProbe:\n              exec:\n                command:\n                - podcli\n                - check\n                - http\n                - localhost:9898/readyz\n              initialDelaySeconds: 1\n              timeoutSeconds: 5\n            volumeMounts:\n            - name: data\n              mountPath: /data\n            resources:\n              limits: null\n              requests:\n                cpu: 1m\n                memory: 16Mi\n        volumes:\n        - name: data\n          emptyDir: {}\n\nin ./helmfile.yaml: failed processing release myapp-foo: helm3 exited with status 2:\n  Error: identified at least one change, exiting with non-zero exit code (detailed-exitcode parameter enabled)\n  Error: plugin \"diff\" exited with error\n" -> null
      ~ dirty                 = true -> false
        environment           = "default"
        environment_variables = {
            "FOO" = "foo"
        }
        helm_binary           = "helm3"
        id                    = "bnd30hkllhcvvgsrplo0"
        path                  = "./helmfile.yaml"
        selector              = {
            "labelkey1" = "value1"
        }
        values                = {
            "name" = "myapp"
        }
        working_directory     = "."
    }

Plan: 0 to add, 1 to change, 0 to destroy.

Running terraform apply runs helmfile apply to deploy your releases.

The computed field apply_output is used to surface the output from Helmfile. You can use it in string interpolation to produce a useful Terraform output.

In the example below, the output mystack_apply is generated from apply_output so that you can review what has actually changed on helmfile apply:

helmfile_release_set.mystack: Refreshing state... [id=bnd30hkllhcvvgsrplo0]

Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

Outputs:

mystack_apply = Comparing release=myapp-foo, chart=sp/podinfo
********************

	Release was not present in Helm.  Diff will show entire contents as new.

********************
...

mystack_diff = 

When helmfile detects no changes, terraform apply succeeds without any effect:

helmfile_release_set.mystack: Refreshing state... [id=bnd30hkllhcvvgsrplo0]

Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

Outputs:

mystack_apply =
mystack_diff =

Advanced Features

Declarative binary version management

terraform-provider-helmfile has a built-in package manager called shoal. With that, you can specify the following helmfile_release_set attributes to let the provider install the executable binaries on demand:

  • version for installing helmfile
  • helm_version for installing helm
  • helm_diff_version for installing helm-diff

version and helm_version use the Go runtime and go-git, so they work without any external dependency.

helm_diff_version requires helm plugin install to be runnable. The plugin installation process can vary depending on the plugin and its plugin.yaml.

With the example below, the provider installs helmfile v0.128.0, helm 3.2.1, and helm-diff 3.1.3, so you don't need to install them beforehand. This is handy when you're using this provider on Terraform Cloud, whose runtime environment cannot be customized by the user.

resource "helmfile_release_set" "mystack" {
  version = "0.128.0"
  helm_version = "3.2.1"
  helm_diff_version = "v3.1.3"

  // snip

AWS authentication and AssumeRole support

Given any combination of aws_region, aws_profile, and aws_assume_role, the provider derives the following environment variables and provides them to every helmfile command it runs:

  • aws_region attribute: AWS_DEFAULT_REGION
  • aws_profile attribute: AWS_PROFILE
  • aws_assume_role block: AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_SESSION_TOKEN obtained by calling sts:AssumeRole

resource "helmfile_release_set" "mystack" {
  aws_region = var.region
  aws_profile = var.profile
  aws_assume_role {
    role_arn = "arn:aws:iam::${var.account_id}:role/${var.role_name}"
  }
  // snip

Those environment variables flow from the provider to helmfile, helm, client-go, and finally to the aws exec credentials provider, which reads them to run aws eks get-token; that in turn calls sts:GetCallerIdentity (e.g. aws sts get-caller-identity) for authentication.

See https://docs.aws.amazon.com/eks/latest/userguide/create-kubeconfig.html for more information on how authentication works on EKS.

Develop

If you wish to build this yourself, follow these instructions:

cd terraform-provider-helmfile
go build

Acknowledgement

The implementation of this provider is heavily inspired by terraform-provider-shell. Many thanks to the author!

terraform-provider-helmfile's Issues

Error code when kubectl is not found

We run terraform from a CI/CD pipeline.
Trying this provider, I forgot to install kubectl into the image containing terraform and related tooling, so the following error occurred:

panic: unexpected error: exec: "kubectl": executable file not found in $PATH

goroutine 425 [running]:
github.com/roboll/helmfile/pkg/helmexec.combinedOutput(0xc000311ce0, 0x0, 0x91, 0xc00034df80, 0xc000486000, 0x91, 0x100)
	/home/circleci/workspace/helmfile/pkg/helmexec/runner.go:69 +0x45d
github.com/roboll/helmfile/pkg/helmexec.ShellRunner.Execute(0x1340b14, 0x1, 0x0, 0xc000450060, 0x7, 0xc00034c660, 0x3, 0x3, 0xc00034df80, 0xc00034df80, ...)
	/home/circleci/workspace/helmfile/pkg/helmexec/runner.go:36 +0xd9
github.com/roboll/helmfile/pkg/event.(*Bus).Trigger(0xc00084f480, 0x1345816, 0x7, 0x0, 0x0, 0xc00084f450, 0x10e35c0, 0xc00084f498, 0x519c97)
	/home/circleci/workspace/helmfile/pkg/event/bus.go:96 +0x906
github.com/roboll/helmfile/pkg/state.(*HelmState).triggerReleaseEvent(0xc00024dc00, 0x1345816, 0x7, 0x0, 0x0, 0xc0006ca4d0, 0x1341d68, 0x4, 0x0, 0x0, ...)
	/home/circleci/workspace/helmfile/pkg/state/state.go:1364 +0x2d0
github.com/roboll/helmfile/pkg/state.(*HelmState).triggerPresyncEvent(...)
	/home/circleci/workspace/helmfile/pkg/state/state.go:1343
github.com/roboll/helmfile/pkg/state.(*HelmState).SyncReleases.func2(0x12)
	/home/circleci/workspace/helmfile/pkg/state/state.go:572 +0x1e7
github.com/roboll/helmfile/pkg/state.(*HelmState).scatterGather.func1(0xc00024dc00, 0xc00022c040, 0xc00004c0c0, 0xc00022c050, 0x12)
	/home/circleci/workspace/helmfile/pkg/state/state_run.go:42 +0x124
created by github.com/roboll/helmfile/pkg/state.(*HelmState).scatterGather
	/home/circleci/workspace/helmfile/pkg/state/state_run.go:40 +0x1bc

Processing stopped, but the return code suggested everything went well, so the pipeline stayed green.

helmfile diff is run twice against the old tf state and the new one

The main thing is that terraform-provider-helmfile executes helmfile diff twice: once for the old tf state and once for the new one. Errors from the first run can be really confusing and cryptic, and it is not obvious what's going on and why these errors appear.

This seems to be a conceptual problem. Although it's possible to use this provider (and we are pretty happy with it), sometimes strange things might hit you.

The most obvious case is when you want to rename your helmfile or reorganise directory structure.

Say, you'd like to move from

resource "helmfile_release_set" "mystack" {
    path = "./helmfile.yaml"
...
}

to

resource "helmfile_release_set" "mystack" {
    path = "./new_path/helmfile.yaml"

It fails during terraform plan with

specified state file helmfile.yaml is not found

Since the provider first runs helmfile diff using values stored in the terraform state, it won't find the old file (./helmfile.yaml). Therefore, if you need to rename or move a file, you have to keep two copies of it for a while.
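
As a hedged sketch of one possible mitigation: the provider's debug output quoted later in this document mentions a SkipDiffOnMissingFiles field, which suggests a skip_diff_on_missing_files attribute meant for exactly this situation (treat the attribute and its semantics as an assumption rather than confirmed documentation):

resource "helmfile_release_set" "mystack" {
    path = "./new_path/helmfile.yaml"

    # Assumed attribute (it appears as SkipDiffOnMissingFiles in the
    # provider's debug logs): skip the stale diff when the file recorded
    # in state no longer exists
    skip_diff_on_missing_files = [
      "./helmfile.yaml",
    ]
}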

Another case is when one wants to use environment variables or values. There can be strange errors caused by helmfile.yaml being rendered using old environment variables or values taken from the state, rather than those defined in your main.tf file.

The same applies to values files, their names, and so on.

Environment Variables or Values based on resources yet to be created will cause an error

This may be a case of me using the tool incorrectly, but I'm receiving this error when I include something like a reference to an AWS ARN that has yet to be created in the values or environment variables for helmfile_release_set:

Error: diffing release set: running helmfile diff: running command: /bin/helmfile: exit status 1
in helmfile.d/30.cluster-autoscaler.yaml: error during 30.cluster-autoscaler.yaml.part.0 parsing: template: stringTemplate:20:24: executing "stringTemplate" at <requiredEnv "CLUSTER_NAME">: error calling requiredEnv: required env var `CLUSTER_NAME` is not set

My TF:

resource "helmfile_release_set" "eks_required" {
  aws_region  = var.region
  aws_profile = var.profile

  kubeconfig        = data.external.tempfile.result["path"]
  working_directory = "config"

  environment_variables = {
    CLUSTER_NAME                = var.cluster_name,
    AWS_DEFAULT_REGION          = var.region,
    CLUSTER_AUTOSCALER_IAM_ROLE = aws_iam_role.cluster_autoscaler.arn,
    NODE_TERMINATION_IAM_ROLE   = aws_iam_role.node_termination_handler.arn,
  }
}

The new, not yet created element here is aws_iam_role.node_termination_handler.arn

In the debug output I see the following:

2021-03-01T19:09:49.877Z [DEBUG] plugin.terraform-provider-helmfile_v0.13.3: 2021/03/01 19:09:49
[DEBUG] helmfile-provider(pid=10659,ppid=10443): Running helmfile --file  --no-color --helm-binary helm build on {Bin:helmfile Values:[] ValuesFiles:[] HelmBin:helm Path: Content: DiffOutput: ApplyOutput: Environment: Selector:map[] Selectors:[]
EnvironmentVariables:map[AWS_DEFAULT_REGION:us-west-2 CLUSTER_AUTOSCALER_IAM_ROLE:arn:aws:iam::[account-removed]:role/cluster-autoscaler-operations-0 CLUSTER_NAME:operations-0] WorkingDirectory:config ReleasesValues:map[] Kubeconfig:/tmp/tmp.MNHFoP Concurrency:0 Version: HelmVersion: HelmDiffVersion: SkipDiffOnMissingFiles:[]}

I've also tried an explicit depends_on block in the helmfile_release_set.

If I remove the reference to NODE_TERMINATION_IAM_ROLE, which does not exist yet, I can run a plan. If the IAM role exists, I can also run a plan.

Is this a bug or is there another method for making the helmfile dependent on the results from other parts of Terraform? Thanks!

Unable to create K8s cluster and deploy Helmfile release in one go

I have a Terraform module that creates an Azure AKS cluster and outputs kube config to allow connecting to it (see below, available as module.kubernetes_cluster.kube_config).

To the script instantiating this module I added the following:

resource "helmfile_release_set" "app" {
  content = yamlencode({
    releases = [{
      name      = "app"
      namespace = "default"
      chart     = "../app"
      values = [
        "values.yaml",
        {
          secrets = {
            a = local.secret_a
            b = local.secret_b
          }
        },
      ]
    }]
  })

  environment = "default"

  environment_variables = {
    KUBECONFIG = local_file.kube_config.filename
  }
}

resource "local_file" "kube_config" {
  filename          = "${path.root}/.kube/kube_config"
  sensitive_content = module.kubernetes_cluster.kube_config
}

I'm not able to terraform apply it as is. Both terraform plan and terraform apply fail with an error message suggesting that the kube config is missing. That's fair: before the apply there's indeed no config on disk; it only gets generated during the apply. But helmfile_release_set expects it to be there to do the diff.

Running terraform apply -target=local_file.kube_config first resolves the issue but ideally Terraform scripts shouldn't require splitting them into "stages". I came up with this setup after reading the discussion in #20. Maybe I'm missing something and it isn't supposed to fail if implemented properly? If I understand correctly, the problem here is that Helmfile is executed during terraform plan or the first part of terraform apply to calculate the diff. It shouldn't be necessary to calculate the diff during the resource creation, so if there was a way to suppress it, it would help.

Also, in #20 and #51 the idea of adding configuration options to the provider was brought up. This does seem to be the standard way of doing it in Terraform. For example, we can find this snippet in the official docs:
https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/kubernetes_cluster

provider "kubernetes" {
  host                   = azurerm_kubernetes_cluster.main.kube_config.0.host
  username               = azurerm_kubernetes_cluster.main.kube_config.0.username
  password               = azurerm_kubernetes_cluster.main.kube_config.0.password
  client_certificate     = base64decode(azurerm_kubernetes_cluster.main.kube_config.0.client_certificate)
  client_key             = base64decode(azurerm_kubernetes_cluster.main.kube_config.0.client_key)
  cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.main.kube_config.0.cluster_ca_certificate)
}

It seems like providers can be configured with outputs of resources created in the same script. This probably means that the initialization of the provider and all resources associated with it is postponed until those outputs become available. If that's the case, then it should be possible for this provider to generate a temp file for the kube config based on those outputs and feed it to Helmfile during terraform plan or terraform apply.

Any advice on this is highly appreciated. And please let me know if you need to see how other pieces are implemented for a fuller picture.

Error: Provider produced inconsistent final plan

With the for_each in place, the second instance can't be applied successfully even though the first one succeeded (helm ls works and shows the proper results).

Versions

➜ terraform version
Terraform v0.12.28
+ provider.helmfile v0.3.16

tfvars

environments = {
  test = {
    consumer_service = "v2"
    goods_service = "v2"
    cart_service = "v2"
  }

  dev1 = {
    consumer_service = "v2"
    goods_service = "v2"
    cart_service = "v2"
  }
}

terraform code

variable "environments" {}
variable "namespaces" {
  default = {
    "test" = "tech-test"
  }
}


resource "helmfile_release_set" "helmfile_release" { 
  for_each = var.environments
  working_directory = ".terraform/modules/deploy/"
  content = file(".terraform/modules/deploy/helmfile.yaml")
  environment = "default"
  concurrency = 1
  environment_variables = merge(each.value, {
    name = each.key
    namespace = lookup(var.namespaces, each.key, "tech-dev") 
  })
  depends_on = [
    module.deploy
  ]
}

Output

➜ terraform apply --parallelism=1 -auto-approve
helmfile_release_set.helmfile_release["test"]: Creating...
helmfile_release_set.helmfile_release["test"]: Still creating... [10s elapsed]
helmfile_release_set.helmfile_release["test"]: Still creating... [20s elapsed]
helmfile_release_set.helmfile_release["test"]: Still creating... [30s elapsed]
helmfile_release_set.helmfile_release["test"]: Creation complete after 35s [id=bssvfole8ppekekulrcg]

Error: Provider produced inconsistent final plan

When expanding the plan for helmfile_release_set.helmfile_release["dev1"] to
include new values learned so far during apply, provider
"registry.terraform.io/-/helmfile" produced an invalid new value for
.diff_output: was cty.StringVal("Adding repo ...

...


This is a bug in the provider, which should be reported in the provider's own
issue tracker.

A piece of Trace

2020-08-17T11:39:40.234+0800 [DEBUG] plugin.terraform-provider-helmfile_v0.3.16: 2020/08/17 11:39:40 [DEBUG] Unlocking ".terraform/modules/china-deploy/"
2020-08-17T11:39:40.234+0800 [DEBUG] plugin.terraform-provider-helmfile_v0.3.16: 2020/08/17 11:39:40 [DEBUG] Unlocked ".terraform/modules/china-deploy/"
2020-08-17T11:39:40.234+0800 [DEBUG] plugin.terraform-provider-helmfile_v0.3.16: 2020/08/17 11:39:40 Writing diff file to .terraform/helmfile/diff-0d61dda974b408c0124baa9d71fcaf530c3b0f5661c2d38ab89aa80e36569406
2020/08/17 11:39:40 [WARN] Provider "registry.terraform.io/-/helmfile" produced an invalid plan for helmfile_release_set.helmfile_release["dev1"], but we are tolerating it because it is using the legacy plugin SDK.
    The following problems may be the cause of any confusing errors from downstream operations:
      - .binary: planned value cty.StringVal("helmfile") does not match config value cty.NullVal(cty.String)
      - .dirty: planned value cty.False does not match config value cty.NullVal(cty.Bool)
      - .helm_binary: planned value cty.StringVal("helm") does not match config value cty.NullVal(cty.String)
2020/08/17 11:39:40 [TRACE] <root>: eval: *terraform.EvalCheckPlannedChange
2020/08/17 11:39:40 [TRACE] EvalCheckPlannedChange: Verifying that actual change (action Create) matches planned change (action Create)
2020/08/17 11:39:40 [ERROR] <root>: eval: *terraform.EvalCheckPlannedChange, err: Provider produced inconsistent final plan: When expanding the plan for helmfile_release_set.helmfile_release["dev1"] to include new values learned so far during apply, provider "registry.terraform.io/-/helmfile" produced an invalid new value for .diff_output: was cty.StringVal("


...


2020/08/17 11:39:40 [ERROR] <root>: eval: *terraform.EvalSequence, err: Provider produced inconsistent final plan: When expanding the plan for helmfile_release_set.helmfile_release["dev1"] to include new values learned so far during apply, provider "registry.terraform.io/-/helmfile" produced an invalid new value for .diff_output: was cty.StringVal("

Feature Request: Binary Downloads to Support Terraform Cloud

what

  • Add flag to download kubectl and helmfile from GitHub pinned to a specific release

why

  • Running provider in terraform cloud requires binaries be installed by some other means
  • Using local-exec with a null_resource won't work due to order of operations issues

@aknysh might help implement this

related

See #2
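
The "Declarative binary version management" section earlier in this README covers the helmfile and helm parts of this request; a minimal sketch:

resource "helmfile_release_set" "mystack" {
  # Both binaries are downloaded on demand by the provider's built-in
  # package manager (shoal), pinned to specific releases
  version      = "0.128.0"
  helm_version = "3.2.1"

  // snip
}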

Best-practices for secrets management

Hi! I'm trying to fit this into our existing ecosystem. I currently have Terraform built/managed infrastructure and manually put values into a "values" and "secret" yaml (using helm-secrets). I'd like to utilize this plugin so I don't require manual intervention at all and perhaps could remove helm-secrets outright.

Is there a way to pass Terraform derived secrets into a Release Set resource? Is there a better way? Perhaps intermediate files written by Terraform and subsequently used in Helmfile?

Maybe I'm missing something. I would have expected to do something like:

# Won't run as-is but to illustrate my point ... 

provider mysql {}
provider aws {}
provider helmfile {}

resource "mysql_database" "default" {
  name = "my_awesome_app"
}

resource "aws_s3_bucket" "default" {
  bucket = "my-tf-test-bucket"
}

resource "helmfile_release_set" "default" {
  # helmfile.yaml already uses environmental defaults for access keys, common configs, etc.
  path = "../helmfiles/helmfile.yaml"
  environment = "production"
  selector = {
    name = "some-app"
  }
  values = [
    s3_bucket = aws_s3_bucket.bucket
  ]
  secrets = [
    mysql_password = mysql_database.password
  ]
}

I'm just trying to figure out how best to bridge Terraform created values/secrets with Helmfile and in particular this plugin.
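
As a minimal sketch of one possible bridge, assuming Terraform-derived values can be passed as Helmfile state values through the values attribute shown earlier in this README (the sensitive variable here is hypothetical):

resource "helmfile_release_set" "default" {
  path        = "../helmfiles/helmfile.yaml"
  environment = "production"

  # State values: helmfile.yaml can read these as {{ .Values.s3_bucket }}
  # and {{ .Values.mysql_password }}
  values = [
    yamlencode({
      s3_bucket      = aws_s3_bucket.default.bucket
      mysql_password = var.mysql_password # hypothetical sensitive variable
    })
  ]
}

Note that anything passed this way ends up in the Terraform state, so the state itself has to be treated as sensitive.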

Diff does not show in terraform plan

Diff in tf plan is not the same as diff via the helmfile cli:

Terraform v0.14.11
Configuring remote state backend...
Initializing Terraform configuration...
local_file.kubeconfig: Refreshing state... [id=61cbd7e426fc6807211cc7557f8ab62504c8f440]
helmfile_release_set.nginx: Refreshing state... [id=c31p18vgb56ccqb6m9og]
helm_release.example: Refreshing state... [id=ingress-nginx]

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create
  ~ update in-place

Terraform will perform the following actions:

  # helmfile_release_set.nginx will be updated in-place
  ~ resource "helmfile_release_set" "nginx" {
      ~ content           = <<-EOT
            
            repositories:
            - name: ingress-nginx
              url: https://kubernetes.github.io/ingress-nginx
            
            releases:
            - name: helmfile-ingress-nginx
              namespace: default
              chart: ingress-nginx/ingress-nginx
          -   version: "3.19.0"
          +   version: "3.33.0"
            
        EOT
        id                = "c31p18vgb56ccqb6m9og"
        # (9 unchanged attributes hidden)
    }

  # local_file.kubeconfig will be created
  + resource "local_file" "kubeconfig" {
      + content              = (sensitive)
      + directory_permission = "0777"
      + file_permission      = "0777"
      + filename             = "/terraform/kubeconfig"
      + id                   = (known after apply)
    }

Plan: 1 to add, 1 to change, 0 to destroy.

Terraform crashes

Versions

Terraform v0.12.28

  • provider.helmfile v0.7.4

helmfile.yaml

repositories:
  - name: stable
    url: https://kubernetes-charts.storage.googleapis.com

helmDefaults:
  wait: true

releases:
  - name: heapster
    namespace: kube-system
    chart: stable/heapster
    version: 0.3.2
    values:
    - "./config/heapster/values.yaml"
    - "./config/heapster/{{ .Environment.Name }}.yaml"
    secrets:
    - "./config/heapster/secrets.yaml"
    - "./config/heapster/{{ .Environment.Name }}-secrets.yaml"

main.tf

provider "helmfile" {}
resource "helmfile_release_set" "mystack" {
  content = file("./helmfile.yaml")

  environment_variables = {
    KUBECONFIG = "${path.cwd}/kubeconfig-admin-PersonalClusters-mysql-test"
  }
}

(I can't share the kubeconfig file but I can assure you that it is valid)

crash.log

macOS M1 support

are there any plans to support M1 for this provider?
helmfile itself is able to run using Rosetta 2

│ Error: Incompatible provider version
│
│ Provider registry.terraform.io/mumoshu/helmfile v0.14.1 does not have a
│ package available for your current platform, darwin_arm64.
│
│ Provider releases are separate from Terraform CLI releases, so not all
│ providers are available for all platforms. Other versions of this provider
│ may have different platforms supported.

Failed release results in inability to manage resources via terraform

Terraform apply ended with one failed release which broke the execution immediately. Some releases were installed successfully.

The subsequent run of terraform plan shows that the resource should be created as there is nothing installed:

# helmfile_release_set.mystack will be created
  + resource "helmfile_release_set" "mystack" {
      + apply_output      = (known after apply)
      + binary            = "helmfile"
      + dirty             = false
      + environment       = "staging"
      + helm_binary       = "helm"
      + id                = (known after apply)
      + path              = "git::https://******/helmfile.yaml"
      + working_directory = "."
    }

Now there are a bunch of releases left outside Terraform's control. If I run terraform apply after that, helmfile will deal with the situation, but I'm not sure that's the intended behaviour.

Also, I noticed that the logs produced by terraform apply (and helmfile apply) are not complete and seem trimmed: many more releases were installed successfully than are shown in the output.

Slow performance

Is there any advice around tuning the Helmfile/Terraform configs for speed? A regular Terraform job takes significantly less time compared to running just Helmfile inside a Terraform job on Jenkins.

I have maybe 10 charts being executed via the Helmfile integration, and it takes more than 2 hours to complete (when we don't hit failures, which happens fairly often).

We are using GCP as the cloud provider, which could be an issue, but I know that by running these charts manually I could easily release them in less than a quarter of this time.

Is there any advice?

Diff is partially hidden for larger outputs

terraform-provider-helmfile version:
3df004fb52ba7c30f65d80eb0c3b537e0c4df7eb
helmfile version:
helmfile-0.125.0.high_sierra.bottle.tar.gz

Overview

We are using helmfile for managing Jenkins helm deployment.

It uses a very long config map with hundreds of lines of yaml config.

When we change anything that ends up at the top of the config map, the diff output in terraform plan gets trimmed and we can't see the changes.

I can see that @mumoshu tried to address that issue here; however, I believe these flags get passed to helmfile (cli ref), not helm-diff.

Possible causes

  • Some go buffer running out of space
  • Terraform's character limit on a string property
  • Terraform's display strategy for long string properties

Example output

[...]
------------------------------------------------------------------------

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # module.blue-jenkins.helmfile_release_set.helmfile_release will be updated in-place
  ~ resource "helmfile_release_set" "helmfile_release" {
      ~ apply_output          = <<~EOT
            ep:1.3.6
                  config-file-provider:3.6.3
                  configuration-as-code:1.42
                  configurationslicing:1.51
                  confluence-publisher:2.0.5
[...]

Notice the ep:1.3.6 is clearly a part of a plugin name.

Possible solutions

  • Change helmfile to accept diff flags, or at least the context flag, to shorten the output.
  • Display a full diff output

Side note

the diff was easier to read with color

apply_output and diff_output of a previous run are shown when there are no changes in the current one

When there are no changes in the manifests, the apply_output and diff_output shown to the user contain information from the previous successful run. This could be misleading.

Terraform apply says "No changes. Infrastructure is up-to-date.", but then diff_output shows something like:

...
core, core-instance-frontend, Deployment (apps) has changed:
 ...
         labels:
           app.kubernetes.io/name: frontend
           app.kubernetes.io/instance: core-instance-frontend
 -         app.kubernetes.io/version: 17213-57d9c33d
 +         app.kubernetes.io/version: 17114-8524e206
       spec:
         containers:
           - name: frontend
 -           image: "eu.gcr.io/my-project/frontend:17213-57d9c33d"
 +           image: "eu.gcr.io/my-project/frontend:17114-8524e206"
             imagePullPolicy: Always
             envFrom:
             - configMapRef:
 ...
 Comparing release=core-instance-saml, chart=chartmuseum/saml
 Comparing release=monitoring-addon-core-instance, chart=chartmuseum/monitoring-addon
 Listing releases matching ^core-instance-stash-backup$
 Affected releases are:
   **core-instance-frontend (chartmuseum/frontend) UPDATED**
 **Identified at least one change**

Same for apply_output.

helm_release_set not working when environment_variables are set

I have the following helm_release_set resource:

resource "helmfile_release_set" "external-dns" {
  working_directory = "${path.module}/helmfiles/external-dns"
  kubeconfig        = pathexpand("~/.kube/config")
  environment_variables = local.external_dns_environment_variables
  depends_on = [
    aws_iam_openid_connect_provider.cluster,
    module.eks_iam_role
  ]
}

locals {
  external_dns_environment_variables = { 
    "KUBECTX"               = "var.context"
    "EKS_ROLE_ARN"          = "${module.eks_iam_role.service_account_role_arn}"
  }
}

and this is my helmfile.yaml:

---
repositories:
  - name: external-dns
    url: https://kubernetes-sigs.github.io/external-dns/

helmDefaults:
  wait: true
  timeout: 120
  atomic: false
  createNamespace: true
  kubeContext: {{ requiredEnv "KUBECTX" }}

releases:
  - name: external-dns
    namespace: kube-system
    chart: external-dns/external-dns
    values:

      - serviceAccount:
          create: true
          annotations: {
             eks.amazonaws.com/role-arn: {{ requiredEnv "EKS_ROLE_ARN" }},
            }
          name: "external-dns"

        rbac:
          create: true
        podAnnotations: {
          eks.amazonaws.com/role-arn: {{ requiredEnv "EKS_ROLE_ARN" }},
        }

        service:
          port: 7979

        logLevel: info
        logFormat: text

        interval: 1m
        triggerLoopOnEvent: false

        sources:
          - service
          - ingress

        policy: upsert-only

        registry: txt
        txtOwnerId: "external-dns"
        txtPrefix: "external-dns"

        provider: aws

        extraArgs: [
          --aws-zone-type=public
          ]

I've been using this provider for a long time, but now it's not working: the resource seems to fail when the diff can't evaluate the requiredEnv, even though var.context is already set.
If the previous resources are already created, the diff goes well; the strange thing is that var.context is not working now.

this is the error that I'm getting:

│ Error: diffing release set: running helmfile diff: running command: /usr/local/bin/helmfile: exit status 1
│ in ./helmfile.yaml: error during helmfile.yaml.part.0 parsing: template: stringTemplate:11:18: executing "stringTemplate" at <requiredEnv "KUBECTX">: error calling requiredEnv: required env var `KUBECTX` is not set
│ 
│ 
│   on external-dns.tf line 1, in resource "helmfile_release_set" "external-dns":
│    1: resource "helmfile_release_set" "external-dns" {

I don't understand what could be wrong.
I really use this provider a lot, any thoughts here?

Thanks mumoshu for this excellent provider.

I have read that I can set kubeconfig to "", but that's not how it was working before.

Hope someone can help me, really lost here.
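
One detail that stands out in the configuration above: "var.context" is written in quotes, so KUBECTX is set to the literal string var.context rather than the variable's value. A sketch of the presumably intended form:

locals {
  external_dns_environment_variables = {
    # Unquoted expression, so the actual value of var.context is used
    KUBECTX      = var.context
    EKS_ROLE_ARN = module.eks_iam_role.service_account_role_arn
  }
}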

Provider produced an invalid new value for .diff_output

terraform-provider-helmfile version:
v0.3.14
helmfile version:
helmfile-0.125.5.high_sierra.bottle.tar.gz

Overview

Quite often we can't create larger helm charts due to inconsistent diff_output being generated by plan and apply.

When comparing the generated diffs, it usually comes down to the order of manifests being presented. When deploying a chart to a fresh environment with more than 15 manifests, it becomes impossible to deploy, as it fails every time.

Additionally, the issue gets worse when the helmfile config contains multiple releases. The order of these releases tends to vary more often in the diff output.

Example output

When expanding the plan for module.jenkins-v2.helmfile_release_set.helmfile_release to include 
new values learned so far during apply, provider "registry.terraform.io/mumoshu/helmfile" produced 
an invalid new value for .diff_output: was cty.StringVal(...) but now cty.StringVal(...).

Already discussed here

Possible solutions

  • Maybe it is possible to ignore the diff generated during apply, and just trick terraform into thinking the two stages produce the same output?
  • It seems like you have an issue with terraform apply running the diff part twice. How about caching the first run in the module's global variable, and then later returning the same result?

Spoiled state problem

In #9 we decided that taking the file approach, like

resource "helmfile_release_set" "mystack" {
    content = file("./helmfile.yaml")
...
}

would fix all the troubles with the stored state. But we've just faced another one.

Somehow terraform apply failed (the chart wasn't uploaded to the chart museum) and it left a broken state. We then realised that the chart had an error in its values which was not detected by the linter. Further fixes to the chart and values didn't help, since Terraform tries to run diff against its state first. And now we are just locked down: nothing can be done without manipulating the state.

The question is whether it's OK that the state was updated after terraform apply failed.

Problems with locks

Probably it's not related to this provider, but since yesterday we've been experiencing a really strange issue and cannot figure out its cause. I posted the same to Slack, but it's probably good to have it here too.

After some changes to *.tf file we started getting the following issues with locks:

2020/08/18 22:12:14 [TRACE] backend/local: requesting state manager for workspace "my-workspace"
2020/08/18 22:12:15 [TRACE] backend/local: requesting state lock for workspace "my-workspace"
o:Acquiring state lock. This may take a few moments...
e:
Error: Error locking state: Error acquiring the state lock: writing "gs://my-bucket/terraform/my-workspace.tflock" failed: googleapi: Error 412: Precondition Failed, conditionNotMet
Lock Info:
  ID:        1597788628081747
  Path:      gs://my-bucket/terraform/my-workspace.tflock
  Operation: OperationTypePlan
  Who:       runner@runner-urx-q8js-project-102-concurrent-0nbfb2
  Version:   0.12.24
  Created:   2020-08-18 22:10:27.940237341 +0000 UTC
  Info:      
Terraform acquires a state lock to protect the state from being written
by multiple users at the same time. Please resolve the issue above and try
again. For most commands, you can disable locking with the "-lock=false"
flag, but this is not recommended.

Note the TRACE output; there are other TRACE, DEBUG, and INFO lines prior to this. I must say that there couldn't have been any other pipeline or person triggering the same terraform apply.

If we do terraform plan -lock=false it shows tons of debug output like this:

...
2020-08-18T17:35:05.431Z [DEBUG] plugin.terraform-provider-helmfile:   labels:
2020-08-18T17:35:05.431Z [DEBUG] plugin.terraform-provider-helmfile:     app: prometheus-operator-prometheus
2020-08-18T17:35:05.431Z [DEBUG] plugin.terraform-provider-helmfile:     
2020-08-18T17:35:05.431Z [DEBUG] plugin.terraform-provider-helmfile:     chart: prometheus-operator-8.7.0
2020-08-18T17:35:05.431Z [DEBUG] plugin.terraform-provider-helmfile:     release: "prom"
2020-08-18T17:35:05.431Z [DEBUG] plugin.terraform-provider-helmfile:     heritage: "Helm"
2020-08-18T17:35:05.432Z [DEBUG] plugin.terraform-provider-helmfile: roleRef:
...

It takes ages to proceed and usually fails afterwards without a noticeable error. It's also strange that I didn't specify any log level here, yet it shows DEBUG output.

But sometimes, quite rarely now, it works OK and shows the normal output. We can't figure out the reason. We used the most recent provider, the most recent helmfile, helm 3.2.4, and terraform 0.12.24.

Then we started getting the same thing in another environment with different tool versions, where there had been no modifications to tf resources and no tool updates; it was just a helm chart that was updated in helmfile.yaml.

Support helm-diff binary version(?)

Ran into this issue:

PATH:
  /usr/local/bin/helm-3.3.0

ARGS:
  0: helm-3.3.0 (10 bytes)
  1: --kube-context (14 bytes)
  2: [REDACTED] (42 bytes)
  3: diff (4 bytes)
  4: upgrade (7 bytes)
  5: --reset-values (14 bytes)
  6: --allow-unreleased (18 bytes)
  7: prometheus (10 bytes)
  8: stable/prometheus-operator (26 bytes)
  9: --version (9 bytes)
  10: ~9.3.0 (6 bytes)
  11: --disable-validation (20 bytes)
  12: --namespace (11 bytes)
  13: monitoring (10 bytes)
  14: --values (8 bytes)
  15: /var/folders/qr/2m8w9ym94yq4z00qxk_1wxfw0000gp/T/values036368530 (64 bytes)
  16: --detailed-exitcode (19 bytes)
  17: --suppress-secrets (18 bytes)
  18: --no-color (10 bytes)
  19: --context (9 bytes)
  20: 3 (1 bytes)

ERROR:
  exit status 1

EXIT STATUS
  1

STDERR:
  Error: unknown flag: --disable-validation

This is due to me using helm-diff plugin 3.1.1; the 3.1.2 changelog adds the --disable-validation parameter.

I'm playing around with using this provider to enforce binary versions across our team using strictly Terraform, and I'm wondering if a terraform provider can handle specifying helm plugin versions.
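
The helm_diff_version attribute from the "Declarative binary version management" section earlier in this README is aimed at exactly this; a minimal sketch pinning a helm-diff release that understands --disable-validation:

resource "helmfile_release_set" "mystack" {
  helm_binary       = "helm-3.3.0"
  helm_diff_version = "v3.1.2" # per the changelog above, the first release with --disable-validation

  // snip
}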

terraform plan stuck in Refreshing State for more than 2 hours

Steps to Reproduce :

  • Set export TF_LOG=TRACE which is the most verbose logging.
  • Run terraform plan ....
  • In the log, I found the root cause of the issue, which was:
dag/walk: vertex "module.kubernetes_apps.provider.helmfile (close)" is waiting for "module.kubernetes_apps.helmfile_release_set.metrics_server"

Workaround Fix :

  • Based on these logs, I identified the state entry causing the issue: module.kubernetes_apps.helmfile_release_set.metrics_server.

  • I deleted its state:

terraform state rm module.kubernetes_apps.helmfile_release_set.metrics_server

  • Now running terraform plan ... works, but only after removing the state.

Expected behaviour

terraform plan should work without having to delete the state.

Invalid plan does not trigger terraform errors

Hi @mumoshu ; thanks for developing this provider <3

I just started learning to use it, and I ran into this issue during the terraform plan phase:

--- SNIP ---
2021/01/26 10:48:16 [WARN] Provider "registry.terraform.io/mumoshu/helmfile" produced an invalid plan for helmfile_release_set.helmfile, but we are tolerating it because it is using the legacy plugin SDK.
    The following problems may be the cause of any confusing errors from downstream operations:
      - .binary: planned value cty.StringVal("helmfile") does not match config value cty.NullVal(cty.String)
      - .helm_binary: planned value cty.StringVal("helm") does not match config value cty.NullVal(cty.String)
      - .dirty: planned value cty.False does not match config value cty.NullVal(cty.Bool)
      - .concurrency: planned value cty.NumberIntVal(0) does not match config value cty.NullVal(cty.Number)
2021-01-26T10:48:16.284+0100 [DEBUG] plugin: plugin process exited: path=.terraform/providers/registry.terraform.io/mumoshu/helmfile/0.13.2/linux_amd64/terraform-provider-helmfile_v0.13.2 pid=44672
2021-01-26T10:48:16.284+0100 [DEBUG] plugin: plugin exited
--- SNIP ---

I would expect that any errors encountered during the plan phase would result in an invalid plan.

This is the setup:

$ terraform version
Terraform v0.14.4
+ provider registry.terraform.io/mumoshu/helmfile v0.13.2

Not 100% sure what the real error is; from what I can gather so far it's related to KUBECONFIG parsing: it looks like it's using the base64-encoded CA as the server name. If I can confirm this, I will open a separate issue.
Thanks again.

(feat): kubeconfig_raw as alternative to local file and setting KUBECONFIG env variable

Would be nice to support kubeconfig_raw as input in helmfile provider.
For example like this provider https://github.com/vmware-tanzu/terraform-provider-carvel/blob/develop/pkg/provider/kubeconfig.go#L34
Use case:

data "azurerm_kubernetes_cluster" "example" {
  name                = "myakscluster"
  resource_group_name = "my-example-resource-group"
}

resource "helmfile_release_set" "mystack" {
    content = file("./helmfile.yaml")
    kubeconfig_raw = data.azurerm_kubernetes_cluster.example.kube_admin_config_raw
}

It should be a sensitive input (hide sensitive stuff in plan/apply steps).

Thanks

The official Kubernetes provider can also use raw input: https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs#host
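
Until such an attribute exists, a minimal workaround sketch (following the local_file pattern used elsewhere in this document) is to write the raw kubeconfig to a file marked sensitive and point the release set's kubeconfig attribute at it:

resource "local_file" "kubeconfig" {
  filename          = "${path.module}/.kube/config"
  file_permission   = "0600"
  sensitive_content = data.azurerm_kubernetes_cluster.example.kube_admin_config_raw
}

resource "helmfile_release_set" "mystack" {
  content    = file("./helmfile.yaml")
  kubeconfig = local_file.kubeconfig.filename
}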

EKS cluster authentication KO on TerraformCloud

Hi !

We are trying to deploy some applications on a given EKS cluster using your helmfile provider through Terraform Cloud 👍

So far, we have not succeeded ...


Here is the tf code:

resource "local_file" "kubeconfig" {
  filename          = "${path.module}/.kube/config"
  file_permission   = "600"
  # kubeconfig that comes from another tf workspace
  sensitive_content = data.terraform_remote_state.back_infra.outputs.eks_cluster_kubeconfig
}

# deploy all backend apps
resource "helmfile_release_set" "example_app" {
  version           = "0.142.0"
  helm_version      = "3.7.1"
  helm_diff_version = "v3.1.3"

  # load helmfile where helm releases are defined
  content                   = file("helmfile/helmfile.yaml")
  working_directory  = "${path.module}/helmfile"
  kubeconfig             = local_file.kubeconfig.filename
  # ask helmfile to deploy the app
  selector              = {
    appName = "exampleApp" # corresponds to -l appName=exampleApp
  }
}

Here is the kubeconfig we are passing:

apiVersion: v1
preferences: {}
kind: Config

clusters:
- cluster:
    server: <hidden>
    certificate-authority-data: <hidden>
  name: <hidden>

contexts:
- context:
    cluster: eks_example
    user: eks_example
  name: eks_example

current-context: eks_example

users:
- name: eks_example
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
      - "token"
      - "-i"
      - "example"

Terraform Cloud returns an error saying that the aws-iam-authenticator binary is missing.

We tried to add aws-iam-authenticator using a null_resource like this, without any success:

resource "null_resource" "install_aws_iam_authenticator" {
  # always recreate the config on the remote machine
  triggers = {
    always_run = timestamp()
  }

  provisioner "local-exec" {
    command = <<-INSTALL_AWS_IAM_AUTH
      curl -o aws-iam-authenticator https://amazon-eks.s3.us-west-2.amazonaws.com/1.21.2/2021-07-05/bin/linux/amd64/aws-iam-authenticator
      chmod +x ./aws-iam-authenticator
      export PATH=$PATH:${path.module} # that one does not actually add the module path to the PATH ...
      echo $PATH
      mv aws-iam-authenticator /usr/local/bin # that one fails ...
      aws-iam-authenticator help
    INSTALL_AWS_IAM_AUTH
  }
}

We also tried to generate a kubeconfig using aws eks update-kubeconfig so that the kubeconfig uses the aws CLI to perform authentication ... but the helmfile_release_set resource keeps returning an error saying that the aws profile (xxxx) is not present in the config file ... doing a cat on it shows that the profile is present 😬 🤦


Do you have any idea how to perform AWS EKS authentication through Terraform Cloud?

To me, the whole issue resides in the fact that the helmfile provider does not ask for any kubernetes conf the way the helm provider does 🤷‍♂️

provider "helm" {
  kubernetes {

  }
}

Thanks a lot for your help !
Let me know if I can help on anything 👍
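
One avenue worth sketching, based on the "AWS authentication and AssumeRole support" section earlier in this README: let the provider supply AWS credentials as environment variables, and generate the kubeconfig with aws eks update-kubeconfig so its exec section calls the aws CLI instead of aws-iam-authenticator. This still assumes the aws CLI exists in the Terraform Cloud runtime, and the variables below are hypothetical:

resource "helmfile_release_set" "example_app" {
  version           = "0.142.0"
  helm_version      = "3.7.1"
  helm_diff_version = "v3.1.3"

  content           = file("helmfile/helmfile.yaml")
  working_directory = "${path.module}/helmfile"
  kubeconfig        = local_file.kubeconfig.filename

  # The provider exports AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and
  # AWS_SESSION_TOKEN (via sts:AssumeRole) to every helmfile run, so the
  # kubeconfig's exec-based authentication needs no pre-configured profile
  aws_region = var.region
  aws_assume_role {
    role_arn = var.deploy_role_arn # hypothetical
  }
}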

Version `0.14.1` is not backward compatible

What

@mumoshu this #62 PR breaks our terraform module.

resource "helmfile_release_set" "default" {
  kubeconfig = "${path.module}/kubeconfig"

  helm_binary = "helm3"
  binary      = "helmfile"
  path        = "../../helmfiles/${var.workdir}/helmfile.yaml"
  concurrency = 0
  environment = var.helmfile_environment

  # Environment variables available to helmfile's requireEnv and commands being run by helmfile
  environment_variables = merge(var.environment_variables, { for key, value in module.chamber : key => value.value })

  depends_on = [
    data.shell_script.kubeconfig
  ]
}

This code works on version 0.14.0 and is broken on 0.14.1

The error message that we see is:

Initializing modules...

Initializing the backend...

Initializing provider plugins...
- Reusing previous version of hashicorp/random from the dependency lock file
- Reusing previous version of hashicorp/aws from the dependency lock file
- Reusing previous version of mumoshu/helmfile from the dependency lock file
- Reusing previous version of scottwinkler/shell from the dependency lock file
- Reusing previous version of eddycharly/kops from the dependency lock file
- Reusing previous version of hashicorp/template from the dependency lock file
- Reusing previous version of hashicorp/null from the dependency lock file
- Using previously-installed hashicorp/template v2.2.0
- Using previously-installed hashicorp/null v3.1.0
- Using previously-installed hashicorp/random v3.1.0
- Using previously-installed hashicorp/aws v3.60.0
- Using previously-installed mumoshu/helmfile v0.14.1
- Using previously-installed scottwinkler/shell v1.7.7
- Using previously-installed eddycharly/kops v1.19.0-alpha.6

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
Acquiring state lock. This may take a few moments...
helmfile_release_set.default: Refreshing state... [id=c4f4qke9fsurke49e1d0]
╷
│ Warning: Value for undeclared variable
│
│ The root module does not declare a variable named "cluster_name" but a value was found in file "ue1-staging-helm-dashboard.terraform.tfvars.json". If you meant to use this value, add a "variable" block to the configuration.
│
│ To silence these warnings, use TF_VAR_... environment variables to provide certain "global" settings to all configurations in your organization. To reduce the verbosity of these warnings, use the -compact-warnings option.
╵
╷
│ Error: diffing release set: running helmfile diff: running command: /usr/bin/helmfile: exit status 3
│ err: no releases found that matches specified selector() and environment(default), in any helmfile
│
│
│   with helmfile_release_set.default,
│   on main.tf line 97, in resource "helmfile_release_set" "default":
│   97: resource "helmfile_release_set" "default" {
│
╵
Releasing state lock. This may take a few moments...
Error: 1 error occurred:
	* step "plan cmd": job "terraform subcommand": command "/usr/bin/terraform-0.15 plan -out ue1-staging-helm-dashboard.planfile -var-file ue1-staging-helm-dashboard.terraform.tfvars.json" in "./components/terraform/helmfiles": exit status 1

Does the helmfile affect the whole cluster or the resources in the terraform state only?

I want to know what happens when using something like this in an old cluster:

terraform {
  required_providers {
    helmfile = {
      source = "mumoshu/helmfile"
      version = "VERSION"
    }
  }
}

Does it affect the entire cluster, moving it to the desired state declared in the helmfile, or only the desired state of the resources made by Terraform and found in the Terraform state?

helmfile both errors and hangs while attempting to plan and apply

hey @mumoshu big fan of helmfile, so just wanted to start off saying thank you for your hard work on it.

so not sure what's going on here, but for whatever reason, when attempting to run our existing helmfile project with the terraform module it's running into issues. we'd previously been using the helmfile binary directly and have had no issues running at all. just to be certain, i did a complete tear down and rebuild of the stack using the helmfile binary and had no issues with the project as-is.

with that said, here is the terraform configuration:

terraform {
  required_providers {
    helmfile = {
      source = "mumoshu/helmfile"
      version = "0.8.0"
    }
  }
}

provider "helmfile" {}

locals {
  helmfile_directory = abspath("${path.module}/../../../../helm/helmfiles")
}

resource "helmfile_release_set" "k8s_stack" {
  content = file("${local.helmfile_directory}/helmfile.yml")
  environment = var.helmfile_environment
  environment_variables = {
    KUBECONFIG = var.kubernetes_config_file_path
  }
  helm_version = "3.3.1"
  helm_diff_version = "v3.1.3"
  working_directory = local.helmfile_directory
  version = "0.128.0"
}

when first running this it seemed like everything was working fine, just taking a long time to set up, which is to be expected with the significant number of releases we have. however, after leaving it running overnight i found that it was still running the following morning, which seemed absurd. after killing that off and then rerunning with debug enabled, i confirmed that the following error occurs, but without triggering a crash of terraform:
runtime error: index out of range [1] with length 1
i've included an excerpt from the terraform log below.

for additional context, here is the contents of the targeted helmfile:

---
{{ readFile "./.imports/master-helmfile-config.yml" }}
---
helmfiles:
- ./monitoring/prometheus/helmfile.yml
- ./monitoring/prometheus-adapter/helmfile.yml
- ./tooling/metallb/helmfile.yml
#- ./tooling/kube2iam/helmfile.yml
- ./cert-manager/helmfile.yml
- ./persistence/postgresql/helmfile.yml
- ./persistence/elasticsearch/helmfile.yml
- ./persistence/redis/helmfile.yml
- ./monitoring/jaeger-tracing/helmfile.yml
- ./istio/helmfile.yml
- ./middleware/helmfile.yml
#- ./monitoring/logstash/helmfile.yml
#- ./monitoring/metricbeat/helmfile.yml
#- ./monitoring/filebeat/helmfile.yml
#- ./monitoring/prometheus-cloudwatch/helmfile.yml
#- ./tooling/kibana/helmfile.yml
- ./tooling/grafana/helmfile.yml
- ./persistence/vault/helmfile.yml
#- ./tooling/kubernetes-dashboard/helmfile.yml

As mentioned, we have several releases, so we've adopted some of the best practices suggested in helmfile's README to make things more maintainable (e.g. layering). Could that perhaps be indirectly causing the error?

In any case, please let us know if there's any additional information we can provide to help diagnose this further.

Here's the excerpt from the terraform log:
edit: redacted for brevity

Error: Failed to instantiate provider "helmfile" to obtain schema: fork/exec terraform.d/plugins/darwin_amd64: exec format error

Environment

  • terraform v0.12.28
  • helmfile: v0.119.0
  • kubectl

client 1.17 (d224476cd0730baca2b6e357d144171ed74192d6)
server 1.16 (e163110a04dcb2f39c3325af96d019b4925419eb)

  • other TF providers :

provider.aws v2.68.0
provider.helmfile v0.1.0
provider.kubernetes v1.11.3
provider.local v1.4.0
provider.null v2.1.2
provider.random v2.2.1
provider.template v2.1.2

Issue

Not able to use it in my terraform plan:

I used these commands to download the plugin and install the binary in the appropriate directory:

os=darwin
arch=amd64
thirdparty_plugins=terraform.d/plugins/${os}_${arch}
mkdir -p ${thirdparty_plugins}

if [ ! -f "${thirdparty_plugins}/terraform-provider-helmfile_v0.1.0" ]; then
  curl -o ${thirdparty_plugins}/terraform-provider-helmfile_v0.1.0 https://github.com/mumoshu/terraform-provider-helmfile/releases/download/v0.1.0/terraform-provider-helmfile_${os}_${arch}
  chmod a+x  ${thirdparty_plugins}/terraform-provider-helmfile_v0.1.0
fi

Then I declared it in my plan:

provider "helmfile" {
}

Result from terminal stdout when running terraform validate:

Error: Failed to instantiate provider "helmfile" to obtain schema: fork/exec /Users/abdennoor/git/repo-infra/terraform.d/plugins/darwin_amd64/terraform-provider-helmfile_v0.1.0: exec format error

Even though terraform version is able to list the plugin:

# terraform version

Terraform v0.12.28
+ provider.aws v2.68.0
+ provider.helmfile v0.1.0
+ provider.kubernetes v1.11.3
+ provider.local v1.4.0
+ provider.null v2.1.2
+ provider.random v2.2.1
+ provider.template v2.1.2

Do you have clear documentation on how to install it successfully?

diff not working as expected

Hi,
@mumoshu, I came across a couple of issues:
When using helmfile_release, no diff is being shown.
When using helmfile_release_set with multiple releases, the diff being shown is only for one of the releases.

Double diff in terraform plan

terraform-provider-helmfile version:
v0.3.8
helmfile version:
helmfile-0.125.0.high_sierra.bottle.tar.gz

Overview

When running this version for the first time, diff_output behaves as expected, displaying a no-color diff between the previous and current helm deployment.

However, this diff persists in the state file, and subsequent terraform plans run the same diff and diff it against the previous diff.

Example output

Full output and an extreme example were attached as screenshots ("Screen Shot 2020-07-30 at 17 18 22" and "Screen Shot 2020-07-30 at 17 23 58").

Possible causes

  • Terraform compares the helm diff with the output of the previous helm diff stored in the state file.

Possible solutions:

  • Suppress the diff with a DiffSuppressFunc schema function and display it in another way
  • Use a StateFunc schema function to always set the diff value to an empty string or nil in the state file. I am not sure whether this would still print the current version of the diff, though.
  • Alternatively, the terraform SDK provides a customdiff package that gives full control over this behaviour. See Resources - Customizing Differences.

kubeconfig is required but definition not found error

Trying to use Terraform to drive an external helmfile for deployment of velero to an existing K8s cluster in GKE.
The following constantly gives me an error:

resource "helmfile_release_set" "velero" {
  content = file("./helmfiles/vl-helmfile.yaml")
  environment_variables = {
    KUBECONFIG = "${HOME}/.kube/config"
  }
}

I tried specifying a hard-coded path; it still would not work.

I got the following error when trying to validate:

Error: Missing required argument

  on main.tf line 97, in resource "helmfile_release_set" "velero":
  97: resource "helmfile_release_set" "velero" {

The argument "kubeconfig" is required, but no definition was found.

Terraform 0.13.5
Helmfile provider version 0.11
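
Given the error above, a minimal sketch that satisfies the schema on provider versions where kubeconfig is a required argument; the path below is an assumed placeholder, not taken from the report:

resource "helmfile_release_set" "velero" {
  content = file("./helmfiles/vl-helmfile.yaml")

  # Set the resource-level kubeconfig attribute directly instead of (or in
  # addition to) the KUBECONFIG environment variable. The path here is an
  # assumed placeholder.
  kubeconfig = pathexpand("~/.kube/config")
}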

Cleanup of temporary helmfiles

Hi,

Great tool! very happy to have this available, and we're going to start using it a lot.

One thing I've noticed is that there are temporary files left in the working_dir after the plan is applied. Should I expect these to be cleaned up automatically, or should I add them to .gitignore or something?

example:

❯❯❯❯ ll
total 328
drwxr-xr-x  35 andrew  staff   1.1K Apr  1 15:46 ./
drwxr-xr-x   5 andrew  staff   160B Mar 30 07:26 ../
drwxr-xr-x   6 andrew  staff   192B Mar 26 14:30 .terraform/
-rw-r--r--   1 andrew  staff   5.3K Mar 29 16:11 .terraform.lock.hcl
-rw-r--r--   1 andrew  staff    17B Mar 26 15:03 .tool-versions
-rw-r--r--   1 andrew  staff   1.2K Mar 23 12:24 2048_full.yaml
-rw-r--r--   1 andrew  staff   222B Mar 26 14:13 backend.tf
-rw-r--r--   1 andrew  staff   782B Mar 24 11:39 cluster.tf
-rw-r--r--   1 andrew  staff   1.9K Mar 23 13:48 codepipeline.tf
-rw-r--r--   1 andrew  staff   1.6K Apr  1 06:31 data-resources.tf
-rw-r--r--   1 andrew  staff   1.5K Mar 29 09:09 eks-fargate-profile.tf
-rw-r--r--   1 andrew  staff    41K Apr  1 09:23 errored.tfstate
drwxr-xr-x   3 andrew  staff    96B Mar 23 18:19 files/
-rw-r--r--   1 andrew  staff   556B Mar 25 14:41 helm-alb.tf
-rw-r--r--   1 andrew  staff   1.1K Apr  1 14:07 helm-hello-world.tf
-rw-r--r--   1 andrew  staff   1.3K Apr  1 13:52 helm-ingress-nginx.tf
-rw-r--r--   1 andrew  staff   2.1K Mar 30 08:14 helm-rbac-manager.tf
-rwxr-xr-x   1 andrew  staff   358B Apr  1 13:57 helmfile-601c16fc8dec00a95b32d832dcf9a628f9fc92e39cc06148f02417884b4adb00.yaml*
-rwxr-xr-x   1 andrew  staff   357B Apr  1 13:57 helmfile-7ea24fb586ca43355e166e8b17927e7c571ce14fb7e30901f7ce0faa89336325.yaml*
-rwxr-xr-x   1 andrew  staff   603B Apr  1 14:16 helmfile-8bfc9a1633dc2affbd39cf2601b0d2c04ef11f8fa7178b7eb6a542dae252f7c6.yaml*
-rwxr-xr-x   1 andrew  staff   418B Apr  1 14:16 helmfile-a46944bfce1d865911b70bd8a6d0e95fdd83f3b7695de04e1b6d5c5b2d490377.yaml*
-rwxr-xr-x   1 andrew  staff   420B Apr  1 14:16 helmfile-bcdaf754eea3d447ebd765fe8436885957a04b942920d2d0e14a765f86ea3080.yaml*
-rwxr-xr-x   1 andrew  staff   242B Apr  1 14:16 helmfile-c359764e1d8ab998de6ed447b29f328ba1aab2df71f32c7dfb7be37d5c5c235f.yaml*
-rwxr-xr-x   1 andrew  staff   417B Apr  1 14:17 helmfile-c4fde063873396efb34d44ff636e9f9752ff9299f2a013d70435119f692d00d3.yaml*
-rwxr-xr-x   1 andrew  staff   1.5K Apr  1 14:16 helmfile-cd4891323a42f055a1ff2162f98a7dcf23589d47be26785715c2a333a5525e6a.yaml*
-rwxr-xr-x   1 andrew  staff   419B Apr  1 14:17 helmfile-cd79c8f04c3a0b4633980087ff8015989efc992e10382e14c1d657b67104dc24.yaml*
-rwxr-xr-x   1 andrew  staff   714B Apr  1 14:16 helmfile-de5e90dc18fa599b32333394ca490f915d3b860d712fdff263ab7163f95bbf09.yaml*
-rw-r--r--   1 andrew  staff   394B Mar 30 08:54 k8s.tf
-rw-r--r--   1 andrew  staff   414B Mar 23 18:34 logging.tf
-rw-r--r--   1 andrew  staff   1.1K Apr  1 12:25 providers.tf
-rw-r--r--   1 andrew  staff   384B Mar 29 18:51 rbac.tf
-rw-r--r--   1 andrew  staff   422B Mar 25 14:11 rfc.tf
drwxr-xr-x   9 andrew  staff   288B Apr  1 14:16 templates/
-rw-r--r--   1 andrew  staff   392B Mar 26 16:03 values.yaml
-rw-r--r--   1 andrew  staff   3.8K Mar 29 10:14 variables.tf

Terraform cloud - Shoal: repository does not exist

Hi, I have some issues with a basic setup using terraform cloud. I have copied the configuration from the README, but there seem to be some problems with shoal-sync and I really don't understand the logs. They say "repository does not exist", but which repository doesn't exist?

I use this helmfile terraform configuration:

resource "helmfile_release_set" "mystack" {
  version           = "0.128.0"
  helm_version      = "3.2.1"
  helm_diff_version = "v3.1.3"
  kubeconfig        = var.kubeconfig
  path              = "helmfile.yaml"

  values = [
    <<EOF
setting: "somevalue"
EOF
  ]
}

These are the helmfile terraform logs:

Terraform v0.14.4
Initializing plugins and modules...
helmfile_release_set.mystack: Creating...

Error: creating release set: getting diff file: getting helmfile version: creating command: running shoal-sync: syncing shoal foods: repository does not exist
shoal.go:146: Listing versions
shoal.go:167: Reading workspace cache dir at /terraform/terraform/.shoal/workspaces/github.com-fishworks-fish-food-afb200ee2af039a9aa3429d8524a027dcfa07f7b
shoal.go:185: reading rig ID file at /terraform/terraform/.shoal/workspaces/github.com-fishworks-fish-food-afb200ee2af039a9aa3429d8524a027dcfa07f7b/0/RIG
shoal.go:204: locking workspace dir at /terraform/terraform/.shoal/workspaces/github.com-fishworks-fish-food-afb200ee2af039a9aa3429d8524a027dcfa07f7b/0
shoal.go:212: getting origin head branch in /terraform/terraform/.shoal/workspaces/github.com-fishworks-fish-food-afb200ee2af039a9aa3429d8524a027dcfa07f7b/0
shoal.go:207: unlocking workspace dir at /terraform/terraform/.shoal/workspaces/github.com-fishworks-fish-food-afb200ee2af039a9aa3429d8524a027dcfa07f7b/0

shoal.go:146: Listing versions
shoal.go:167: Reading workspace cache dir at /terraform/terraform/.shoal/workspaces/github.com-fishworks-fish-food-afb200ee2af039a9aa3429d8524a027dcfa07f7b
shoal.go:185: reading rig ID file at /terraform/terraform/.shoal/workspaces/github.com-fishworks-fish-food-afb200ee2af039a9aa3429d8524a027dcfa07f7b/0/RIG
shoal.go:204: locking workspace dir at /terraform/terraform/.shoal/workspaces/github.com-fishworks-fish-food-afb200ee2af039a9aa3429d8524a027dcfa07f7b/0
shoal.go:212: getting origin head branch in /terraform/terraform/.shoal/workspaces/github.com-fishworks-fish-food-afb200ee2af039a9aa3429d8524a027dcfa07f7b/0
shoal.go:207: unlocking workspace dir at /terraform/terraform/.shoal/workspaces/github.com-fishworks-fish-food-afb200ee2af039a9aa3429d8524a027dcfa07f7b/0
: syncing shoal foods: repository does not exist
shoal.go:146: Listing versions
shoal.go:167: Reading workspace cache dir at /terraform/terraform/.shoal/workspaces/github.com-fishworks-fish-food-afb200ee2af039a9aa3429d8524a027dcfa07f7b
shoal.go:185: reading rig ID file at /terraform/terraform/.shoal/workspaces/github.com-fishworks-fish-food-afb200ee2af039a9aa3429d8524a027dcfa07f7b/0/RIG
shoal.go:204: locking workspace dir at /terraform/terraform/.shoal/workspaces/github.com-fishworks-fish-food-afb200ee2af039a9aa3429d8524a027dcfa07f7b/0
shoal.go:212: getting origin head branch in /terraform/terraform/.shoal/workspaces/github.com-fishworks-fish-food-afb200ee2af039a9aa3429d8524a027dcfa07f7b/0
shoal.go:207: unlocking workspace dir at /terraform/terraform/.shoal/workspaces/github.com-fishworks-fish-food-afb200ee2af039a9aa3429d8524a027dcfa07f7b/0

  on helmfile.tf line 1, in resource "helmfile_release_set" "mystack":
   1: resource "helmfile_release_set" "mystack" {
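
If the immediate goal is just to get unblocked, a hedged workaround sketch is to drop the version pins (which appear to be what triggers the shoal-based download) and point the release set at a preinstalled helmfile via binary; whether the Terraform Cloud worker actually has such a binary on PATH is an assumption here:

resource "helmfile_release_set" "mystack" {
  # Assumption: a helmfile binary is already installed in the execution
  # environment, so the shoal-sync download path is never exercised.
  binary     = "helmfile"
  kubeconfig = var.kubeconfig
  path       = "helmfile.yaml"

  values = [
    <<EOF
setting: "somevalue"
EOF
  ]
}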

Error: rpc error: code = Unavailable desc = transport is closing

I'm upgrading from the previous version to the most recent one, and the latest fails for all our environments with

Error: rpc error: code = Unavailable desc = transport is closing

during the terraform plan stage. There are no changes in any helmfile state except for one, and for that specific environment the error is the same. Retrying didn't help.

helmfile version: v0.109.0

Turning debug on, I got the following:

2020-07-22T20:37:42.488Z [DEBUG] plugin.terraform-provider-helmfile: 
 2020-07-22T20:37:42.488Z [DEBUG] plugin.terraform-provider-helmfile: goroutine 16 [running]:
 2020-07-22T20:37:42.488Z [DEBUG] plugin.terraform-provider-helmfile: github.com/mumoshu/terraform-provider-helmfile/pkg/tfhelmfile.MustRead(0x12307a0, 0xc0001d47e0, 0x7f0dc5dea008)
 2020-07-22T20:37:42.488Z [DEBUG] plugin.terraform-provider-helmfile: 	/go/src/terraform-provider-helmfile/pkg/tfhelmfile/resource_release_set.go:192 +0x91b
 2020-07-22T20:37:42.488Z [DEBUG] plugin.terraform-provider-helmfile: github.com/mumoshu/terraform-provider-helmfile/pkg/tfhelmfile.read(0xc0001d47e0, 0xe1e2e0, 0x195ea60, 0xc00023d888, 0x1, 0x1, 0xc0001d47e0, 0x0)
 2020-07-22T20:37:42.488Z [DEBUG] plugin.terraform-provider-helmfile: 	/go/src/terraform-provider-helmfile/pkg/tfhelmfile/resource_release_set.go:311 +0x3b
 2020-07-22T20:37:42.488Z [DEBUG] plugin.terraform-provider-helmfile: github.com/mumoshu/terraform-provider-helmfile/pkg/tfhelmfile.resourceReleaseSetRead(0xc0001d47e0, 0xe1e2e0, 0x195ea60, 0xc0001d47e0, 0x0)
 2020-07-22T20:37:42.488Z [DEBUG] plugin.terraform-provider-helmfile: 	/go/src/terraform-provider-helmfile/pkg/tfhelmfile/resource_release_set.go:137 +0x77
 2020-07-22T20:37:42.488Z [DEBUG] plugin.terraform-provider-helmfile: github.com/hashicorp/terraform-plugin-sdk/helper/schema.(*Resource).RefreshWithoutUpgrade(0xc00017a900, 0xc000589130, 0xe1e2e0, 0x195ea60, 0xc00015e6f8, 0x0, 0x0)
 2020-07-22T20:37:42.488Z [DEBUG] plugin.terraform-provider-helmfile: 	/go/pkg/mod/github.com/hashicorp/[email protected]/helper/schema/resource.go:455 +0x119
 2020-07-22T20:37:42.488Z [DEBUG] plugin.terraform-provider-helmfile: github.com/hashicorp/terraform-plugin-sdk/internal/helper/plugin.(*GRPCProviderServer).ReadResource(0xc00015e4a0, 0x123ef00, 0xc00049e930, 0xc000588f50, 0xc00015e4a0, 0xc00049e930, 0xc00059ca80)
 2020-07-22T20:37:42.488Z [DEBUG] plugin.terraform-provider-helmfile: 	/go/pkg/mod/github.com/hashicorp/[email protected]/internal/helper/plugin/grpc_provider.go:525 +0x3d8
 2020-07-22T20:37:42.488Z [DEBUG] plugin.terraform-provider-helmfile: github.com/hashicorp/terraform-plugin-sdk/internal/tfplugin5._Provider_ReadResource_Handler(0xfd0aa0, 0xc00015e4a0, 0x123ef00, 0xc00049e930, 0xc0005b7560, 0x0, 0x123ef00, 0xc00049e930, 0xc000157900, 0x10d0)
 2020-07-22T20:37:42.488Z [DEBUG] plugin.terraform-provider-helmfile: 	/go/pkg/mod/github.com/hashicorp/[email protected]/internal/tfplugin5/tfplugin5.pb.go:3153 +0x217
 2020-07-22T20:37:42.488Z [DEBUG] plugin.terraform-provider-helmfile: google.golang.org/grpc.(*Server).processUnaryRPC(0xc0000ea000, 0x124ac40, 0xc0002d2a80, 0xc0004f2600, 0xc000142540, 0x1934510, 0x0, 0x0, 0x0)
 2020-07-22T20:37:42.488Z [DEBUG] plugin.terraform-provider-helmfile: 	/go/pkg/mod/google.golang.org/[email protected]/server.go:995 +0x460
 2020-07-22T20:37:42.488Z [DEBUG] plugin.terraform-provider-helmfile: google.golang.org/grpc.(*Server).handleStream(0xc0000ea000, 0x124ac40, 0xc0002d2a80, 0xc0004f2600, 0x0)
 2020-07-22T20:37:42.488Z [DEBUG] plugin.terraform-provider-helmfile: 	/go/pkg/mod/google.golang.org/[email protected]/server.go:1275 +0xd97
 2020-07-22T20:37:42.488Z [DEBUG] plugin.terraform-provider-helmfile: google.golang.org/grpc.(*Server).serveStreams.func1.1(0xc000152000, 0xc0000ea000, 0x124ac40, 0xc0002d2a80, 0xc0004f2600)
 2020-07-22T20:37:42.488Z [DEBUG] plugin.terraform-provider-helmfile: 	/go/pkg/mod/google.golang.org/[email protected]/server.go:710 +0xbb
 2020-07-22T20:37:42.488Z [DEBUG] plugin.terraform-provider-helmfile: created by google.golang.org/grpc.(*Server).serveStreams.func1
 2020-07-22T20:37:42.488Z [DEBUG] plugin.terraform-provider-helmfile: 	/go/pkg/mod/google.golang.org/[email protected]/server.go:708 +0xa1
 2020/07/22 20:37:42 [ERROR] <root>: eval: *terraform.EvalRefresh, err: rpc error: code = Unavailable desc = transport is closing
 2020/07/22 20:37:42 [ERROR] <root>: eval: *terraform.EvalSequence, err: rpc error: code = Unavailable desc = transport is closing
 2020-07-22T20:37:42.491Z [DEBUG] plugin: plugin process exited: path=/root/.terraform.d/plugins/linux_amd64/terraform-provider-helmfile pid=341 error="exit status 2"
 2020-07-22T20:37:42.797Z [DEBUG] plugin: plugin exited

We have concurrency set to 1.

Error finding temp value files

Since upgrading to the latest version of the provider, 0.12.0, I've been blocked by the following errors:

15:57:15  in ./helmfile-3dce096a66df5e8515ae85f40bd83a2bc10620b819f7b5c0019505065acd204d.yaml: in .helmfiles[0]: in helmfile/releases/00-frontend-helmfile.yaml: 4 errors:
15:57:15  err 0: failed processing release certificate-manager-qa: open .terraform/helmfile/temp-6b9c6545fb/cert-manager-certificate-manager-qa-values-85674b986f: no such file or directory
15:57:15  err 1: failed processing release ingress-external-qa: open .terraform/helmfile/temp-6b9c6545fb/ingress-ingress-external-qa-values-678fb9d8b6: no such file or directory
15:57:15  err 2: failed processing release ingress-internal-qa: open .terraform/helmfile/temp-6b9c6545fb/ingress-ingress-internal-qa-values-5d6d7bfdfb: no such file or directory
15:57:15  err 3: failed processing release external-dns-qa: open .terraform/helmfile/temp-6b9c6545fb/qa-external-dns-qa-values-5bbbb6548b: no such file or directory
15:57:15  

If I revert back to version 0.11 of the provider, I can run the helmfile perfectly fine.

Versions:

  • Terraform 0.14.3
  • Helmfile provider: 0.12.0
  • Helmfile: 0.136.0

I was hoping to upgrade to take advantage of the performance improvement from --skip-diff-on-install, as raised in my other ticket regarding slow performance.

It seems that no temp values files are being generated, hence why they cannot be found, rather than this being a permissions issue or similar.
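
The revert the reporter describes, expressed as a version pin in required_providers. This is a sketch only, and 0.11.0 as the exact prior tag is an assumption:

terraform {
  required_providers {
    helmfile = {
      source  = "mumoshu/helmfile"
      # Assumption: 0.11.0 is the exact tag of the "0.11" the reporter
      # reverted to; pinning below 0.12.0 sidesteps the temp-values error.
      version = "0.11.0"
    }
  }
}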

Depends_on ignored

When using Terraform with the Helmfile provider, it completely ignores any depends_on requirements.

In my instance I require this, as some of my hooks set up credentials that are only available once the cluster is created. Since the Helmfile resource always tries to generate a diff output regardless of the depends_on requirement, it fails every time on a fresh install unless I apply the Terraform and Helmfile stages separately.
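
For concreteness, a minimal sketch of the ordering being expressed (module.eks is a hypothetical module that creates the cluster, not something taken from this report):

resource "helmfile_release_set" "mystack" {
  content = file("./helmfile.yaml")

  # Standard Terraform ordering. The report above is that `helmfile diff`
  # still runs at plan time, before this dependency has been created.
  depends_on = [module.eks] # hypothetical cluster module
}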

Is there any way I can work around this?

EDIT: the main cause of this issue is that terraform plan also executes helmfile diff without taking into consideration whether Tiller is even present, let alone whether the cluster is.

Error: Provider produced inconsistent final plan

I've tried the latest version of this provider (taken from the master branch) and found that some terraform apply runs failed without any apparent reason. I also noticed that the output had changed; it became a bit messy. And there are no errors related to my helmfile manifests or anything else (there shouldn't be, as my helmfile manifests remained the same). The error looks like:

chart=chartmuseum/system-logs\nComparing release=oval-****,
 chart=chartmuseum/backend\nComparing release=oval-frontend,
 chart=chartmuseum/frontend\nComparing release=oval-saml,
 chart=chartmuseum/saml\nComparing release=monitoring-addon-oval,
 chart=chartmuseum/monitoring-addon\nListing releases matching
 ^oval-stash-backup$\nNo affected releases\nComparing release=security-service,
 chart=chartmuseum/security-service\noval, security-service, Deployment (apps)
 has changed:\n...\n          app.kubernetes.io/instance: security-service\n
 spec:\n        serviceAccountName: security-service\n        containers:\n
 - name: security-service\n-           image:
 \"eu.gcr.io/****/security-service:master-c924f869\"\n+           image:
 \"eu.gcr.io/****/security-service:master-f7e4e237\"\n
 imagePullPolicy: Always\n            envFrom:\n            - configMapRef:\n
 name: security-service\n            ports:\n...\n\nComparing
 release=marketplace, chart=chartmuseum/marketplace\nAffected releases are:\n
 security-service (chartmuseum/security-service) UPDATED\n\nIdentified at least
 one change\n").
 **This is a bug in the provider, which should be reported in the provider's own
 issue tracker.**

KUBECONFIG environment variable doesn't get initialized correctly

First time using this provider, so I just wanted to thank you for starting on this.

First I got an error: Error: diffing release set: %!w(<nil>)
After setting the debug flag, the issue seems to be with helm not recognizing the KUBECONFIG env var.
2020-10-15T23:57:42.851Z [DEBUG] plugin.terraform-provider-helmfile_v0.9.0: Error: Failed to get release *** in namespace kube-system: exit status 1: Error: Kubernetes cluster unreachable: Get "http://localhost:8080/version?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused

I dug deeper and found out that there is an extra newline between "KUBECONFIG=" and the file name I specify in the env var list, and as a result helmfile and helm are not getting the right env var. The issue seems to be here: https://github.com/mumoshu/terraform-provider-helmfile/blob/master/pkg/helmfile/release_set.go#L186

Fix tests

Right now the tests here are inherited from the shell provider. They always fail. It would be great to adapt them to the current provider or write them from scratch.

Advantage over helm provider?

Hi,
I've been using helmfile standalone without terraform and I think it's great!
However, when using terraform to trigger helm releases, what advantage does helmfile give us besides the diff output?
In addition, it appears that when deploying a release for the first time I don't get the diff output; only once the deployed release is changed do I get the diff output.

Thanks

Environment `helm` used by default

I've noticed that if I don't specify an environment in my release_set, the environment helm is used by default:

Generated command: wd = ., args = helmfile --environment helm --file helmfile-d3edf97a6b5b1af47bc97f073ab79a75bd9ba097852cd1c92386ad418473bd85.yaml --helm-binary helm --no-color diff --concurrency 0 --detailed-exitcode --suppress-secrets --context 3

Is this expected behaviour? What is the reason for it?

I see some mentions here:

// environment defaults to "helm" for helmfile_release_set but it's always nil for helmfile_release.
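
Assuming the fallback noted in that comment is what's kicking in, a hedged workaround sketch is to set the environment explicitly so the implicit "helm" default never applies:

resource "helmfile_release_set" "mystack" {
  content = file("./helmfile.yaml")

  # Explicitly pin the Helmfile environment instead of relying on the
  # provider's implicit "helm" fallback described above.
  environment = "default"
}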

Provider produced inconsistent final plan when using oci repo

terraform-provider-helmfile version: v0.13.3
helmfile version: v0.138.4

Overview:
Charts from an OCI repository, supported in helmfile v0.138.4, are exported to random directories each time during plan and apply, resulting in a difference in diff_output, which produces the following error:

Error: Provider produced inconsistent final plan

When expanding the plan for helmfile_release_set.kubernetes to include new
values learned so far during apply, provider
"registry.terraform.io/mumoshu/helmfile" produced an invalid new value for
.diff_output: was cty.StringVal(...) but now cty.StringVal(...).

Feature Request: In memory helm diff

Firstly, great plugin; thanks for making this 🙏

Is there any reason the helm diff needs to output a file? Would there be any reason this couldn't be done in memory?

i.e. something along the lines of:

resource "helmfile_release_set" "release" {
  working_directory = "NONE"
  content = file("./helmfile.yaml")
}

[Missing Doc] How does helmfile provider authenticate to kubectl : with external kubeconfig or with kubernetes provider

I am using kubernetes terraform provider

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  token                  = data.aws_eks_cluster_auth.cluster.token
  load_config_file       = false
  version                = "~> 1.9"
}

Does helmfile leverage this configuration, or does it just target the cluster with the host kubeconfig?

In other words:

  • Does helmfile ignore the kubernetes provider?
  • If so, does it rely on the host configuration of KUBECONFIG, AWS_PROFILE, and so on?
  • Is it expected to provide an environment variable KUBECONFIG like this:
resource "helmfile_release_set" "nginx_ingress" {
  content               = data.template_file.helmfile.rendered
  environment           = var.environment
  environment_variables = {
    KUBECONFIG = module.eks.kubeconfig_filename # <-- 🔴 👈🏼
  }
  selector = {
    # Corresponds to -l labelkey1=value1
    name = "ingress"
  }
}
