terraform-provider-archive: Comments (45)

bmonty commented on July 24, 2024 35

> Based on your like on #3 I assume this is for the case where plan is executed and outputting a plan which is then applied from a clean environment.
>
> We've also experienced this in a CI environment where plan and apply are separate stages and I can also simulate the issue with this code:
>
> data "archive_file" "this" {
>   type        = "zip"
>   output_path = "test.zip"
>   source_file = "a.txt"
> }
>
> resource "aws_s3_bucket_object" "this" {
>   bucket = "YOURBUCKETHERE"
>   key    = "test.zip"
>   source = "test.zip"
> }
>
> and then running something along the lines of
>
> terraform plan -out=tfplan
> rm test.zip
> terraform apply "tfplan"

This comment helped me solve my issue. I'm using terraform in a Gitlab CI pipeline with separate plan and apply stages. My apply stage would fail because the archive file was not found.

What's happening (and the comment above helped me understand) is the plan step is where the archive file is actually created. To make this work in my CI pipeline, I added config to cache the files created by the plan stage and make them available to the apply stage.

I'd recommend changing the archive provider to produce the zip file during apply instead of plan. This would match with how I think about Terraform working. At a minimum, the docs for the archive provider should be updated to make it clear when Terraform creates the archive file.

hugbubby commented on July 24, 2024 35

how the hell did they manage to mess up a goddamn zip command

amine250 commented on July 24, 2024 23

Having the exact same issue in our Gitlab CI pipeline.
We couldn't use artifacts since we have many zips and it might just upload sensitive data to Gitlab.
As a workaround, we are obliged to rerun terraform plan in the apply step just to create the zip file.

EDIT: According to this bit of documentation, you can defer the creation of the archive file until some resource is applied (i.e., in the terraform apply step). One can imagine something like this, which also works as a workaround:

data "archive_file" "zip" {
  type        = "zip"
  source_file = "${path.module}/textfile.txt"
  output_path = "${path.module}/myfile.zip"
  depends_on = [
    random_string.r
  ]
}

resource "random_string" "r" {
  length  = 16
  special = false
}

or something like this, which has an equivalent dependency graph:

data "archive_file" "zip" {
  type        = "zip"
  source_file = "${path.module}/textfile.txt"
  output_path = "${path.module}/myfile-${random_string.r.result}.zip"
}

resource "random_string" "r" {
  length  = 16
  special = false
}

jharley commented on July 24, 2024 17

I ran into this on Terraform Cloud, also. It would be ideal if we could persist a single directory between the plan and apply phases (or, if archive_file was smart enough to regenerate the archive during "apply" if it was missing)

krishansrimal commented on July 24, 2024 11

Got hit by the same problem. I wonder why there is still no proper solution from the archive provider :(

monti-python commented on July 24, 2024 11

An even better solution is to use timestamp() as part of the output_path:

data "archive_file" "zip" {
  type        = "zip"
  source_file = "${path.module}/textfile.txt"
  output_path = "${path.module}/myfile-${timestamp()}.zip"
}

This will force Terraform to create the zip during the apply phase, and it doesn't need any extra providers.
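
(Caveat: because timestamp() changes on every run, the archive and every resource that depends on it will show a diff on each plan, even when the underlying files haven't changed; see the comments below.)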

ianwremmel commented on July 24, 2024 10

yea, a while after I posted my last comment, I came up with something like

locals {
  source_dir = "${path.module}/cookbook-archive"
}

resource "random_uuid" "this" {
  keepers = {
    for filename in fileset(local.source_dir, "**/*"):
    filename => filemd5("${local.source_dir}/${filename}")
  }
}

data "archive_file" "cookbook" {
  # threw the `/temp/` in there to gitignore it easier, but in hindsight it  
  # could be just as easy to gitignore `cookbook*.zip`
  output_path = "${path.module}/temp/cookbook-${random_uuid.this.result}.zip"
  source_dir  = local.source_dir
  type        = "zip"
}

resource "aws_s3_bucket_object" "cookbook" {
  bucket = module.cookbook.bucket_name
  key    = "cookbook.zip"
  source = data.archive_file.cookbook.output_path

  tags = {
    ManagedBy = "Terraform"
  }
}

(did this from memory, so it might not quite work as-is, but it should be close)

JonnyDaenen commented on July 24, 2024 10

I managed to tweak @amine250's solution to get it working.
The random string does not work, as its value seems to be determined during the plan phase already. Hence, I used a null resource that is triggered by a timestamp, as mentioned here.

The downside of this approach is that even when the underlying files haven't changed, it will trigger an update. In my case this works out nicely, as I'm using this to deploy a Cloud Function (GCP), which will not redeploy when there are no changes (the zipfile I upload to Cloud Storage has a hash in its name; see the sketch after the snippet below).

Note that using the null_resource directly as a dependency of the archive data source, and triggering it with a hash of the two files' contents, does not work.

# Dummy resource to ensure archive is created at apply stage
resource "null_resource" "dummy_trigger" {
  triggers = {
    timestamp = timestamp()
  }
}

data "local_file" "py_main" {
  filename = "${path.root}/../../../../cloud_function/main.py"

  # Make sure the archive is created in the apply stage
  depends_on = [
    null_resource.dummy_trigger
  ]
}

data "local_file" "py_req" {
  filename = "${path.root}/../../../../cloud_function/requirements.txt"

  # Make sure the archive is created in the apply stage
  depends_on = [
    null_resource.dummy_trigger
  ]
}

data "archive_file" "cf_zip" {
  type        = "zip"
  output_path = "${path.root}/../../../../tmp/cf.zip"

  source {
    content  = data.local_file.py_main.content
    filename = "main.py"
  }

  source {
    content  = data.local_file.py_req.content
    filename = "requirements.txt"
  }
}
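
To illustrate the hash-in-the-name upload mentioned above, here is a minimal sketch (my addition, not part of the original comment; the bucket name is a placeholder, and output_md5 is an attribute exported by the archive_file data source):

resource "google_storage_bucket_object" "cf_source" {
  # Embedding the archive hash in the object name means the uploaded source,
  # and hence the Cloud Function, only changes when the zip contents change.
  name   = "sources/cf-${data.archive_file.cf_zip.output_md5}.zip"
  bucket = "my-artifacts-bucket" # placeholder
  source = data.archive_file.cf_zip.output_path
}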

Paulmolin commented on July 24, 2024 9

The tricks do work, but then each time a new apply is made, the archive and all resources that depend on it (e.g. a Lambda function) will be modified, even if the content of the Lambda did not change.
The Terraform code is then no longer idempotent.

bendbennett commented on July 24, 2024 7

The fundamental issue here is that the archive data source has side effects (i.e., creates a .zip).

Data sources are an abstraction that allow Terraform to reference external data. Unlike managed resources, Terraform does not manage the lifecycle of the resource or data. Data sources are intended to have no side-effects.

When terraform plan -out=tfplan is executed, the Read function in the data source is called, creating the archive and updating the state. The generated tfplan file contains no changes. Consequently, executing terraform apply tfplan does nothing.

This is expected behaviour for Terraform; again, the issue is that the archive data source has side effects. Currently, the workarounds described above, which add implicit or explicit dependencies on a managed resource, are the only way to try to force execution during terraform apply rather than terraform plan.

queglay commented on July 24, 2024 4

I have the same problem, but this shouldn't be marked resolved: a timestamp forces zips and Lambda layers to get versioned up all the time, which is wasteful and slows down CI. The hash of the zip (or of its intended contents) should determine whether dependencies are retriggered, and currently it doesn't.
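
For the Lambda case, a rough sketch of hash-driven retriggering (a sketch only, with placeholder names and an IAM role assumed to exist elsewhere; it still requires the zip to exist at apply time):

data "archive_file" "lambda" {
  type        = "zip"
  source_file = "${path.module}/main.py"
  output_path = "${path.module}/lambda.zip"
}

resource "aws_lambda_function" "example" {
  function_name = "example"               # placeholder
  role          = aws_iam_role.lambda.arn # assumed to exist elsewhere
  handler       = "main.handler"
  runtime       = "python3.12"
  filename      = data.archive_file.lambda.output_path

  # Redeploys only when the zip contents actually change.
  source_code_hash = data.archive_file.lambda.output_base64sha256
}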

akirax-git commented on July 24, 2024 3

I also ran into the same issue in GitLab; resource.random_string did not work, but resource.null_resource did. Thanks!

dstuck commented on July 24, 2024 3

Wanted to leave a warning for anyone considering the suggestion:

> If I create the initial zip file manually myself then the archive_file behavior on subsequent apply runs works fine for me -- using terraform version 0.12.28

I tested this out and it does not work. It simply unbreaks the apply by putting an old version of the zip file there.

test.tf:

data "archive_file" "api" {
  type        = "zip"
  source_dir  = "${path.module}/test_files/"
  output_path = "${path.module}/test.zip"
  excludes    = ["__pycache__"]
}

resource "local_file" "zip_sha" {
  content  = data.archive_file.api.output_sha
  filename = "${path.module}/test_sha.txt"
}

Taking an old copy of the zip file and running the following shows that we end up with the old version of the zip file present during apply:

cp old_test.zip test.zip
terraform plan -out=tfplan
cp old_test.zip test.zip
terraform apply "tfplan"
cat test_sha.txt
> 33585fa47331712f37d9206c3587b6a1380db53b
shasum test.zip
> 0dd4eb3e0f51b5f659c991d1ff93ef5d2c1cc2a0  test.zip

ocervell commented on July 24, 2024 2

I'm adding an additional workaround below.

If you don't know which files will change, I suggest something along the following lines:

data "external" "hash" {
  program = ["bash", "${path.module}/scripts/shasum.sh", "${path.module}/configs", "${timestamp()}"]
}

data "archive_file" "main" {
  type        = "zip"
  output_path = pathexpand("archive-${data.external.hash.result.shasum}.zip")
  source_dir  = pathexpand("${path.module}/configs")
}

output "archive_file_path" {
  value = data.archive_file.main.output_path
}

where ${path.module}/configs is the folder to archive. We pass timestamp() to the first data resource so that the hash is recomputed on every run.

The content of the shasum.sh script is as follows (note that this will work only on UNIX-based systems, so it won't work on Windows):

#!/bin/bash

# Hash every file in the folder, then hash the concatenated list of hashes.
# The second argument (a timestamp) is deliberately unused; it only forces
# Terraform to re-run this program on every plan.
FOLDER_PATH=${1%/}
SHASUM=$(shasum "$FOLDER_PATH"/* | shasum | awk '{print $1}')
echo -n "{\"shasum\":\"${SHASUM}\"}"

iVariable commented on July 24, 2024 2

This does the trick for me in combination with locals (to reuse the path of the archive down the line). It creates a new archive only if the underlying source file has changed. Notice the filemd5 in the lambda_api_archive_path.

locals {
  lambda_api_function_name = "api"
  lambda_api_binary_path   = "${path.cwd}/../build/${local.lambda_api_function_name}"
  lambda_api_archive_path  = "${path.module}/tf_generated/${local.lambda_api_function_name}-${filemd5(local.lambda_api_binary_path)}.zip"
}

data "archive_file" "lambda_api_zip" {
  type        = "zip"
  source_file = local.lambda_api_binary_path
  output_path = local.lambda_api_archive_path
}

andrewedstrom commented on July 24, 2024 2

@bendbennett forgive me, but I find your response unsatisfactory.

data.archive_file comes from an official HashiCorp provider that lives in this repo. What good does it do to tell us that the code in this repo, which y'all maintain, does something non-idiomatic?

If there's a more idiomatic way to do this, please tell us. What is HashiCorp's recommended approach to creating a zip from a file in source code?

ocervell commented on July 24, 2024 1

This is much better, thanks! Maybe update your code so that it's valid (it needs a ',' on line 3, and ${filename}" on line 4).

ianwremmel commented on July 24, 2024 1

good catch, thanks! also dried it up a bit :)

mikiisz commented on July 24, 2024 1

Is there a follow-up on this? I was here one year ago, and this behaviour still occurs.

christhomas commented on July 24, 2024 1

Knowing this helped solve my pipeline problem, where I would also plan and then apply in separate GitLab pipeline stages: the apply would attempt to upload the Lambda zip files, which had been generated in the plan stage, and it would fail. Adding the zip folder to the artifacts of the plan stage fixed it, and the apply stage worked.

I don't know why the plan stage is being used to generate zip files; planning should just be about making the plan file, and applying should be about creating things and doing actions. It seems wrong to do it in the plan stage, as other people have commented.

CodyPaul commented on July 24, 2024 1

#39 (comment)

also did the trick for me

christophemorio commented on July 24, 2024 1

Same issue on Terraform Cloud.

Workaround: keep a consistent output_path as long as var.inputfile does not change, and force the data source to refresh constantly:

data "archive_file" "scenario_zip" {
  type = "zip"

  output_path = "/tmp/${filesha1(var.inputfile)}.zip"

  source {
    content  = file(var.inputfile)
    filename = "myinputfile"
  }

  source {
    # Forces a datasource refresh
    content  = timestamp()
    filename = ".timestamp"
  }
}

ccayg-sainsburys commented on July 24, 2024

Based on your like on #3 I assume this is for the case where plan is executed and outputting a plan which is then applied from a clean environment.

We've also experienced this in a CI environment where plan and apply are separate stages and I can also simulate the issue with this code:

data "archive_file" "this" {
  type        = "zip"
  output_path = "test.zip"
  source_file = "a.txt"
}

resource "aws_s3_bucket_object" "this" {
  bucket = "YOURBUCKETHERE"
  key    = "test.zip"
  source = "test.zip"
}

and then running something along the lines of

terraform plan -out=tfplan
rm test.zip
terraform apply "tfplan"

dawidmalina commented on July 24, 2024

Same issue in my case

leelakrishnachava commented on July 24, 2024

still same issue.

ocervell commented on July 24, 2024

Same issue here.

ianwremmel commented on July 24, 2024

Seeing the same thing on 0.12.17: when I change a file in the directory referenced below, terraform plan doesn't pick up the change unless I taint aws_s3_bucket_object.cookbook:

data "archive_file" "cookbook" {
  output_path = "${path.module}/temp/cookbook.zip"
  source_dir  = "${path.module}/cookbook-archive"
  type        = "zip"
}

resource "aws_s3_bucket_object" "cookbook" {
  bucket = module.cookbook.bucket_name
  key    = "cookbook.zip"
  source = data.archive_file.cookbook.output_path

  tags = {
    ManagedBy = "Terraform"
  }
}

I'm running plan via app.terraform.io, so I assume it would generate the archive on every run and not cache it from a previous run.
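
One pattern that might help here (an assumption on my part, not something suggested in this thread): derive the object's etag from the archive so that content drift shows up in the plan. Note that etag comparison does not work with KMS-encrypted objects.

resource "aws_s3_bucket_object" "cookbook" {
  bucket = module.cookbook.bucket_name
  key    = "cookbook.zip"
  source = data.archive_file.cookbook.output_path

  # Tie the object to the archive contents so a changed source file
  # produces a diff without manually tainting the resource.
  etag = data.archive_file.cookbook.output_md5

  tags = {
    ManagedBy = "Terraform"
  }
}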

ocervell commented on July 24, 2024

Oops, just ran into a weird thing with this code (seems like a provider error):

Error: Provider produced inconsistent final plan

When expanding the plan for module.slo-pipeline-cf-errors.random_uuid.hash to
include new values learned so far during apply, provider "random" produced an
invalid new value for .keepers["slo_config.json"]: was
cty.StringVal("d8073f7f8a404661c31a3cdf66ae6f8d"), but now
cty.StringVal("b42b077fe6dd6e3a57af845c5b0c6c0d").


This is a bug in the provider, which should be reported in the provider's own
issue tracker.

ianwremmel commented on July 24, 2024

weird. I haven't run into that, but I've also only made one change, so maybe it'll bite me next time. Maybe try one of the other file hash methods? could be something weird about md5 on one of the systems involved?

ocervell commented on July 24, 2024

Ah, it's because I'm dynamically adding a file (generated by TF) to my source directory, using the local_file resource. Even with a depends_on = [local_file.main] in the random_uuid.this resource, it seems like the fileset is executed before the file is dropped in the folder, thus confusing Terraform.

ianwremmel commented on July 24, 2024

what if you added it explicitly somehow? something like:

resource "random_uuid" "this" {
  keepers = merge(
    # Static entry for the dynamically generated file.
    { localfile = md5(local_file.main.content) },
    # One entry per file already on disk.
    {
      for filename in fileset(local.source_dir, "**/*") :
      filename => filemd5("${local.source_dir}/${filename}")
    }
  )
}

(untested, but merge() should let the static key coexist with the for expression :)

warrenstephens commented on July 24, 2024

If I create the initial zip file manually myself then the archive_file behavior on subsequent apply runs works fine for me -- using terraform version 0.12.28

shambhu9803 commented on July 24, 2024

This solution worked for me: adding a source code hash. hashicorp/terraform#8344 (comment)

josjaf commented on July 24, 2024

I just ran into this issue in Gitlab as well

edomaur commented on July 24, 2024

Got hit by that problem, and I also solved it using #39 (comment)

Works well (but it would be nice if the Terraform docs contained more borderline examples like this...)

micchickenburger commented on July 24, 2024

The archive_file artifacts are produced during the plan stage. You just need to pass the artifacts across the stages.

For instance, for Gitlab CI:

image:
  name: hashicorp/terraform:1.1.9
  entrypoint:
    - '/usr/bin/env'
    - 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'

variables:
  PLAN: "plan.tfplan"
  TF_IN_AUTOMATION: "true"

.terraform_before_script:
  - terraform --version
  # Ensure directory for lambda function zip files exists
  - install -d lambda_output
  - terraform init -input=false

stages:
  - plan
  - deploy

plan:
  stage: plan
  before_script: !reference [.terraform_before_script]
  script:
    - terraform plan -out=$PLAN -input=false
  artifacts:
    name: plan
    paths:
      - $PLAN
      - lambda_output

deploy:
  stage: deploy
  before_script: !reference [.terraform_before_script]
  script:
    - terraform apply -input=false $PLAN
  dependencies:
    - plan

Then, in your Terraform file:

data "archive_file" "function" {
  type        = "zip"
  source_dir  = "${path.root}/lambda/function"
  output_path = "${path.root}/lambda_output/function.zip"
}

amine250 commented on July 24, 2024

> The archive_file artifacts are produced during the plan stage. You just need to pass the artifacts across the stages.

FYI, it's not recommended to store plan files as artifacts, because they might contain sensitive data and are not encrypted.

WalterClementsJr commented on July 24, 2024

Currently on Terraform v1.6.6, and it has happened twice this week. It's driving me insane.

lowkasen commented on July 24, 2024

facing the same issue

mikemiller35 commented on July 24, 2024

Same issue here

antoinefaure commented on July 24, 2024

This works fine for me. Just followed this SO thread:
https://stackoverflow.com/questions/53477485/terraform-does-not-detect-changes-to-lambda-source-files

overfl0wd commented on July 24, 2024

Issue still present. My plan and apply stages run separately in GitLab CI/CD pipelines, so for me the fix was caching *.zip in my pipeline config so the files were passed from one stage to another.

JacobDiChiacchio commented on July 24, 2024

Also facing this issue. Why is this closed?

Bruno1298 commented on July 24, 2024

https://stackoverflow.com/questions/53477485/terraform-does-not-detect-changes-to-lambda-source-files

Only if you use AWS :/

bwhaley commented on July 24, 2024

One point of confusion that I have is the difference between the archive_file resource and the data source. The docs say that the resource is deprecated, but #218 says otherwise. The resource generates the zip file during apply.
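
For anyone wanting to experiment with that distinction, the resource form takes the same arguments as the data source; a minimal sketch (file names are placeholders):

resource "archive_file" "example" {
  type        = "zip"
  source_file = "${path.module}/textfile.txt"
  output_path = "${path.module}/myfile.zip"
}

Because it is a managed resource, the zip is generated during apply rather than during plan, which sidesteps the split plan/apply problem discussed throughout this thread.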
