hashicorp / terraform-provider-archive

Utility provider that provides a data source that can create zip archives for individual files or collections of files.

Home Page: https://registry.terraform.io/providers/hashicorp/archive/latest/docs

License: Mozilla Public License 2.0

Makefile 0.27% Go 99.17% HCL 0.56%

terraform-provider-archive's Introduction

Terraform Provider: Archive

The Archive provider interacts with files. It provides a data source that can create zip archives for individual files or collections of files.

Documentation, questions and discussions

Official documentation on how to use this provider can be found on the Terraform Registry. In case of specific questions or discussions, please use the HashiCorp Terraform Providers Discuss forums, in accordance with HashiCorp Community Guidelines.

We also provide:

  • Support page for help when using the provider
  • Contributing guidelines in case you want to help this project
  • Design documentation to understand the scope and maintenance decisions

The remainder of this document will focus on the development aspects of the provider.

Compatibility

Compatibility table between this provider, the Terraform Plugin Protocol version it implements, and Terraform:

Archive Provider   | Terraform Plugin Protocol | Terraform
>= 2.x             | 5                         | >= 0.12
>= 1.2.x, <= 1.3.x | 4, 5                      | >= 0.11
<= 1.1.x           | 4                         | <= 0.11

Requirements

Development

Building

  1. git clone this repository and cd into its directory
  2. make will trigger the Golang build

The provided GNUmakefile defines additional commands that are generally useful during development, such as running tests, generating documentation, and formatting and linting code. Taking a look at its contents is recommended.

Testing

In order to test the provider, you can run

  • make test to run provider tests
  • make testacc to run provider acceptance tests

It's important to note that acceptance tests (testacc) will actually spawn terraform and the provider. Read more about how they work on the official page.

Generating documentation

This provider uses terraform-plugin-docs to generate documentation and store it in the docs/ directory. Once a release is cut, the Terraform Registry will download the documentation from docs/ and associate it with the release version. Read more about how this works on the official page.

Use make generate to ensure the documentation is regenerated with any changes.

Using a development build

If running tests and acceptance tests isn't enough, it's possible to set up a local terraform configuration to use a development build of the provider. This can be achieved by leveraging the Terraform CLI configuration file development overrides.

First, use make install to place a fresh development build of the provider in your ${GOBIN} (defaults to ${GOPATH}/bin or ${HOME}/go/bin if ${GOPATH} is not set). Repeat this every time you make changes to the provider locally.

Then, set up your environment following these instructions to make your local terraform use your local build.
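For reference, a minimal CLI configuration using development overrides might look like the sketch below. The provider binary path is environment-specific and shown here only as an example; it should match your ${GOBIN}.

```hcl
# ~/.terraformrc (on Windows: %APPDATA%/terraform.rc)
provider_installation {
  dev_overrides {
    # Example path: point at the directory containing the freshly
    # built provider binary from `make install`.
    "hashicorp/archive" = "/home/developer/go/bin"
  }

  # Install all other providers as usual.
  direct {}
}
```

With this in place, terraform will use the local build for hashicorp/archive and print a warning reminding you that overrides are active.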

Testing GitHub Actions

This project uses GitHub Actions for its CI.

Sometimes it might be helpful to locally reproduce the behaviour of those actions, and for this we use act. Once installed, you can simulate the actions executed when opening a PR with:

# List of workflows for the 'pull_request' action
$ act -l pull_request

# Execute the workflows associated with the `pull_request` action
$ act pull_request

Releasing

The release process is automated via GitHub Actions, and is defined in the release.yml workflow.

Each release is cut by pushing a semantically versioned tag to the default branch.

License

Mozilla Public License v2.0


terraform-provider-archive's Issues

Error installing provider "archive": openpgp: signature made by unknown entity.

Terraform Version

0.11.14

Affected Resource

  • archive_file

Terraform Configuration Files

data "archive_file" "lambda_audit_zip" {
  type        = "zip"
  source_dir  = "${path.module}/source"
  output_path = "${path.module}/audit.zip"
}

Expected Behavior

What should have happened?
Terraform init/plan runs with no issues.

Actual Behavior

What actually happened?
Terraform init command is failing with:

Error installing provider "archive": openpgp: signature made by unknown entity.

Terraform analyses the configuration and state and automatically downloads plugins for the providers used. However, when attempting to download this plugin an unexpected error occurred.
This may be caused if for some reason Terraform is unable to reach the plugin repository. The repository may be unreachable if access is blocked by a firewall.
If automatic installation is not possible or desirable in your environment, you may alternatively manually install plugins by downloading a suitable distribution package and placing the plugin's executable file in the following directory:
      terraform.d/plugins/linux_amd64

Steps to Reproduce

  1. terraform init

It looks like this provider was updated a few hours ago, and since then we are seeing this issue which is causing our builds to fail:
https://registry.terraform.io/providers/hashicorp/archive/latest

Store source and output file hashes instead of actual file data in tfstate

Terraform Version

$ terraform -v
Terraform v0.11.7
+ provider.archive v1.0.3
+ provider.aws (unversioned)

Affected Resource(s)

  • archive_file

Terraform Configuration Files

provider "aws" { }

data "archive_file" "myarchive" {
  type        = "zip"
  output_path = "myarchive.zip"

  source {
    content  = "${file("bigfile")}"
    filename = "bigfile.1"
  }
  source {
    content  = "${file("bigfile")}"
    filename = "bigfile.2"
  }
}

Debug Output

Panic Output

N/A

Expected Behavior

It would be preferable to only store a hash of the source and output file contents in the Terraform state file, instead of the entire file contents.

Actual Behavior

The entire source file contents are stored in the Terraform state file, which needs to be JSON encoded. Thus creating a ZIP file with two 100MB sparse files inside will result in a Terraform state file of 1.2GB in size! As you can imagine, this is very slow, especially if you store your state in an S3 bucket.
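As an illustration of why storing raw file contents in state is costly (a sketch, not the provider's actual code): JSON-encoding arbitrary binary data inflates it considerably, since non-printable and non-ASCII bytes become multi-character escape sequences.

```python
import json
import os

# Simulate storing raw (binary) file contents in a JSON state file.
raw = os.urandom(100_000).decode("latin-1")  # arbitrary bytes as a string
encoded = json.dumps(raw)                    # what a JSON document must hold

# The escaped form is larger than the original contents
# (typically several times larger for binary data).
print(len(encoded) > len(raw))  # True
```

Storing only a fixed-size hash, as the issue proposes, sidesteps this growth entirely.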

Steps to Reproduce

Create an archive containing one or more large files using archive_file.

Important Factoids

An example of this in action:

References

No other references that I could find.

For my own reference, my internal bug tracking number is GS-5487.

Export file contents of archive as attribute

I am using terraform and the archive provider to create small containers of config files (just like in the example with the dotfiles!). I would be interested in subsequently uploading them to an aws_s3_bucket_object to distribute them, and setting the content from an output on the archive provider rather than having to send it to a temporary file. Is this of any interest? Seems like @BernhardBln came to the same conclusion here. I dug into the archive provider a little bit, and could submit a PR if there is interest.

Using multiple source files for Lambda with archive_file

Terraform Version

Terraform v0.12.15
+ provider.archive v1.3.0
+ provider.aws v2.37.0

Affected Resource(s)

archive_file

Terraform Configuration Files

data "archive_file" "lambda" {
  type        = "zip"
  source_dir  = "../src/"
  output_path = "../dist/lambda_src.zip"
}

Expected Behavior

archive should contain only the files.

Actual Behavior

archive has a directory that contains the files

Steps to Reproduce

Use the example with more than one source file.

References

Are there any other GitHub issues (open or closed) or Pull Requests that should be linked here? For example:

  • Using multiple source files for Lambda with archive_file

TF example with just one file: https://github.com/terraform-providers/terraform-provider-aws/blob/master/examples/lambda/main.tf#L8-L12 In my case, it's more than one source file.

difference between resource archive_file and data archive_file

Terraform Version

Terraform v0.12.24

  • provider.archive v1.3.0
  • provider.azurerm v2.7.0
  • provider.local v1.4.0
  • provider.null v2.1.2
  • provider.random v2.2.1

Affected Resource(s)

Please list the resources as a list, for example:

  • archive_file

Terraform Configuration Files

data "archive_file" "packs" {
  depends_on  = [
    local_file.function]
  for_each    = var.configuratin_collection
  output_path = "${path.module}/.pack_${each.key}.zip"
  type        = "zip"
  source_dir  = "${path.module}/.pack_${each.key}"
}

resource "null_resource" "pack_upload" {
  for_each   = var.configuratin_collection
  triggers   = {
    archive_file   = data.archive_file.packs[each.key].output_path
    zip_hash       = data.archive_file.packs[each.key].output_md5
  }

  provisioner "local-exec" {
    command = <<COMMAND
....
      COMMAND
  }
}

Debug Output

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
-/+ destroy and then create replacement
 <= read (data resources)

Terraform will perform the following actions:

  # module.azure_function.module.internal.data.archive_file.packs["test_function"] will be read during apply
  # (config refers to values not yet known)
 <= data "archive_file" "packs"  {
      + id                  = (known after apply)
      + output_base64sha256 = (known after apply)
      + output_md5          = (known after apply)
      + output_path         = "../ForEach/.pack_test_function.zip"
      + output_sha          = (known after apply)
      + output_size         = (known after apply)
      + source_dir          = "../ForEach/.pack_test_function"
      + type                = "zip"
    }

  # module.azure_function.module.internal.null_resource.pack_upload["test_function"] must be replaced
-/+ resource "null_resource" "pack_upload" {
      ~ id       = "7158927369914622277" -> (known after apply)
      ~ triggers = {
          - "archive_file"   = "../ForEach/.pack_test_function.zip"
          - "function_name"  = "testfunction-de3i6si7h3"
          - "resource_group" = "rg-azurefunctiontest"
          - "subscription"   = "1e1a42bf-00fd-4289-aba1-8a8898c8c12b"
          - "zip_hash"       = "67b5def34234d9b3c8059d6ce5a01b92"
        } -> (known after apply) # forces replacement
    }

Plan: 1 to add, 0 to change, 1 to destroy.

Expected Behavior

I was using the archive_file resource until recently, and tried to fix the warning by replacing it with the data source. But the data source causes issues when using the outputs of the archive as triggers in a custom resource.

When using the archive_file resource, the null_resource is only redeployed when the content of the archive has changed. When using the archive_file data source, the null_resource is recreated every time.

Actual Behavior

The null_resource should not get recreated if the content is not changed.

Steps to Reproduce

Please list the steps required to reproduce the issue, for example:

  1. terraform plan

Feature Request: multiple files and directories handled properly

Use Case: I need to build a zip archive to upload to AWS Lambda. Because I am using the NodeJS platform, my zip will typically contain one source file and the node_modules directory. I cannot build this via the file_archive provider. I would like to be able to use this HCL:

data "archive_file" "lambdaCode" {
   type = "zip"
   source_file = "lambda_process_firewall_updates.js"
   source_dir = "node_modules"
   output_path = "${var.lambda_zip}"
}

Of course, you can only have one of source_file and source_dir. I have other files in the source file directory and so I want to exclude them all (not one-by-one).

I looked at the provider code, and it seems like the archive platform in Go needs to be used in a way where it keeps the archive open as you loop over the source* arguments. Not terrible and I thought that might be something I could do, but it is enough work for me to pick up Go again to where it will have to be saved for a rainy day.

The archive_file resource now zips directories during terraform plan and ignores depends_on

This issue was originally opened by @rluckey-tableau as hashicorp/terraform#26064. It was migrated here as a result of the provider split. The original body of the issue is below.


Terraform Version

Terraform v0.13.1

Terraform Configuration Files

resource "null_resource" "config_and_jar" {
  triggers = {
    on_every_apply = uuid()
  }

  provisioner "local-exec" {
    command = join(" && ", [
      "rm -rf ${var.zip_input_dir}",
      "mkdir -p ${var.zip_input_dir}/lib",
      "wget ${var.jar_url}",
      "mv './${var.jar_filename}' '${var.zip_input_dir}/lib'",
      "cp -r '${var.config_directory}'/* '${var.zip_input_dir}'",
    ])
  }
}

data "archive_file" "lambda" {
  depends_on  = [null_resource.config_and_jar]

  output_path = var.zip_output_filepath
  source_dir  = var.zip_input_dir
  type        = "zip"
}

Debug Output

Crash Output

Expected Behavior

   # module.samplelambda.data.archive_file.lambda will be read during apply
   # (config refers to values not yet known)
  <= data "archive_file" "lambda"  {
       + id                  = (known after apply)
       + output_base64sha256 = (known after apply)
       + output_md5          = (known after apply)
       + output_path         = "lambda/temp/samplelambda/src.zip"
       + output_sha          = (known after apply)
       + output_size         = (known after apply)
       + source_dir          = "lambda/samplelambda/zip-temp"
       + type                = "zip"
     }

    ...

Job succeeded

Actual Behavior

 Error: error archiving directory: could not archive missing directory: lambda/samplelambda/zip-temp
   on lambda/main.tf line 52, in data "archive_file" "lambda":
   52: data "archive_file" "lambda" {
ERROR: Job failed: exit code 1

Steps to Reproduce

  1. terraform init
  2. terraform plan

Additional Context

This is running on GitLab CI/CD runners. It has been working fine for at least the past year on various versions of Terraform 0.12 up through Terraform 0.12.21.

In case it is useful, here are the provider versions being pulled when using Terraform 0.13.1:

 - Installed -/aws v3.4.0 (signed by HashiCorp)
 - Installed hashicorp/null v2.1.2 (signed by HashiCorp)
 - Installed hashicorp/local v1.4.0 (signed by HashiCorp)
 - Installed -/null v2.1.2 (signed by HashiCorp)
 - Installed hashicorp/aws v3.4.0 (signed by HashiCorp)
 - Installed -/archive v1.3.0 (signed by HashiCorp)
 - Installed -/local v1.4.0 (signed by HashiCorp)
 - Installed hashicorp/archive v1.3.0 (signed by HashiCorp)

Edit: Tested the same TF code with a 0.12.29 image and terraform plan passed with no issues, even though it is pulling the same provider versions.

 - Downloading plugin for provider "aws" (hashicorp/aws) 3.4.0...
 - Downloading plugin for provider "local" (hashicorp/local) 1.4.0...
 - Downloading plugin for provider "null" (hashicorp/null) 2.1.2...
 - Downloading plugin for provider "archive" (hashicorp/archive) 1.3.0...

References

Create provider design doc

Maintenance of the standard library providers prioritises stability and correctness relative to the provider's intended feature set. Create a design document describing this feature set, and any other design considerations which influence the architecture of the provider and what can and cannot be added to it.

Run archive_file on each apply

Hi there,

Is there a way to ensure that the archive_file runs on each apply? I.e every time I change the code in my lambda python I want the archive_file to re-zip again so the lambda can update to the new code.
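A common pattern for this use case (sketched below; the AWS provider usage and all names are illustrative, not part of this issue) relies on the fact that the data source is re-evaluated on every plan, and wires its hash output into the Lambda's source_code_hash so the function is updated whenever the zipped code changes:

```hcl
data "archive_file" "lambda" {
  type        = "zip"
  source_file = "${path.module}/lambda.py"   # illustrative path
  output_path = "${path.module}/lambda.zip"
}

resource "aws_lambda_function" "fn" {
  # ... other required arguments ...
  filename         = data.archive_file.lambda.output_path
  source_code_hash = data.archive_file.lambda.output_base64sha256
}
```

With this wiring, a re-zip happens on each plan, but the Lambda is only redeployed when the hash actually differs.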

Enable go-changelog Automation

The "standard library" Terraform Providers should implement nascent provider development tooling to encourage consistency, foster automation where possible, and discover bugs/enhancements to that tooling. To that end, this provider's CHANGELOG handling should be switched to go-changelog, including:

  • Adding .changelog directory and template files
  • Enabling automation for regenerating the CHANGELOG (e.g. scripts or GitHub Actions)
  • (If enhancements are made available upstream in time) Enabling automation for checking CHANGELOG entries and formatting

archive_file uses backslashes in zipped files' paths on Windows

This issue was originally opened by @dtreskunov as hashicorp/terraform#17485. It was migrated here as a result of the provider split. The original body of the issue is below.


Terraform Version

Terraform v0.11.3
+ provider.archive v1.0.0

OS: Windows

Terraform Configuration Files

provider "archive" {
  version = "~> 1.0"
}

data "archive_file" "test" {
  type        = "zip"
  source_dir  = "test"
  output_path = "test.zip"
}

Expected Behavior

The generated test.zip file should contain sub/file.txt. This way, when the file is unzipped on a Linux system, directories are handled properly. Cygwin's zip as well as Windows built-in zip utility both behave correctly.
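The expected behaviour matches the ZIP specification, which mandates forward slashes in entry names regardless of the host OS. A small Python sketch (independent of the provider, for illustration only) shows a spec-conformant archiver producing sub/file.txt:

```python
import io
import zipfile

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    # Per the ZIP APPNOTE, entry names always use "/" as the separator,
    # even when the archive is created on Windows.
    zf.writestr("sub/file.txt", "data")

with zipfile.ZipFile(buf) as zf:
    print(zf.namelist())  # ['sub/file.txt']
```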

Actual Behavior

The generated test.zip file contains sub\file.txt.

Archive:  test.zip
  Length      Date    Time    Name
---------  ---------- -----   ----
        4  00-00-1980 00:00   sub\file.txt
---------                     -------
        4                     1 file

Steps to Reproduce

Please list the full steps required to reproduce the issue, for example:

  1. In an empty directory, paste the above into main.tf
  2. mkdir -p test/sub
  3. touch test/sub/file.txt
  4. terraform init
  5. terraform plan
  6. unzip -l test.zip

data.archive_file does not support resource tainting

It seems like the archive_file data source does not support tainting at the moment. Is this a bug, or is it not supposed to support tainting?

Terraform Version

$ terraform -v
Terraform v1.1.2
on linux_amd64
+ provider registry.terraform.io/hashicorp/archive v2.2.0

Affected Resource(s)

  • data.archive_file

If this issue appears to affect multiple resources, it may be an issue with Terraform's core, so please mention this.

Terraform Configuration Files

data "archive_file" "test" {
  type = "zip"
  source_file = "${path.module}/test.json"
  output_path = "${path.module}/out.zip"
}

output "path" {
  value = data.archive_file.test.output_path
}

output "md5" {
  value = data.archive_file.test.output_md5
}

Debug Output

https://gist.github.com/tbobm/653cb4b2b3b057da8fcbc025e02a454b

Expected Behavior

Upon tainting the archive_file, I expect it to be registered as tainted in order to manually force recreation during the next terraform apply

Actual Behavior

The resource does not seem to support tainting.

Steps to Reproduce

Please list the steps required to reproduce the issue, for example:

  1. terraform taint data.archive_file.test

source_dir zips folder contents rather than folder

Terraform Version

v0.14.8

Affected Data

  • archive_file

Terraform Configuration Files

// Zips the folder directory into folder.zip
data "archive_file" "lib" {
  type        = "zip"
  source_dir  = "${path.module}/folder"
  output_path = "${path.module}/folder.zip"
}

Expected Behavior

folder.zip deflates to folder

$ unzip folder.zip
$ tree
.
└──  folder

Actual Behavior

folder.zip deflates to the contents of folder not the folder itself

$ unzip folder.zip
$ tree
.
├── file1
└──  file2

Steps to Reproduce

  1. mkdir folder
  2. touch folder/file{1,2}
  3. terraform apply
  4. unzip folder

archive_file changes file permissions on Windows

This is a re-opening of issue #10 .

This issue is still occurring on Windows. Within the ZIP file created by archive_file when running Terraform v0.12.18 with AWS provider v2.43 on Windows 10, the contained file has 666 permissions (no execute). Running the exact same Terraform plan with the same version on Linux results in the contained file having 777 permissions.

Feature request - add feature to add file to pre-existing archive in Archive provider

This issue was originally opened by @ponyfleisch as hashicorp/terraform#9904. It was migrated here as part of the provider split. The original body of the issue is below.


I'm using AWS Lambda with node, so I package all the dependencies in a zip file. I need to use the template rendering on the main file to set some configurable parameters. If I want to use the Archive provider to create a zip file with the main file and the node_modules directory, I need to keep the node_modules directory unzipped in my module repository.

It would be much better to keep the dependencies around as zip and then use the Archive provider to add the source file to it (creating a new output zip file in the process).

Support glob paths in archive_file data source excludes

It would be really handy if data.archive_file supported glob matching for excludes. This would enable configurations such as:

data "archive_file" "zip" {
  type        = "zip"
  output_path = "./myzip.zip"
  source_dir  = "."
  excludes    = ["**/.git"]
}

in order to exclude all .git directories from the archive.

The glob matching should ideally support the well-known ** syntax, meaning zero or more directories.

Affected Resource(s)

  • data.archive_file

Issue archiving base64 encoded content w/ source block

Terraform CLI and Provider Versions

Terraform v1.2.3
on darwin_amd64

  • provider registry.terraform.io/hashicorp/archive v2.2.0

Terraform Configuration

data "archive_file" "input_archive" {
  type        = "zip"
  source_file = "${path.module}/input.file"
  output_path = "${path.module}/files/input.zip"
}

data "archive_file" "output_archive" {
  type        = "zip"
  output_path = "${path.module}/files/output.zip"

  source {
    content = filebase64(data.archive_file.input_archive.output_path)
    filename = "${data.archive_file.input_archive.output_path}"
  }
}

Expected Behavior

Just for the sake of producing a working reproduction, the Terraform configuration does the following:

  1. First "archive_file" creates a dummy archive 'input.zip' based on 'input.file'
  2. Second "archive_file" creates an 'output.zip' w/ base64 contents of 'input.zip' archive (since it's a binary file using filebase64 function)

Expected behaviour is that the input.zip archive that is archived within output.zip archive is valid and extractable.

Actual Behavior

When the 'output.zip' archive is extracted, 'input.zip' is present but not extractable. Upon closer inspection it seems that it is still base64-encoded.

Steps to Reproduce

  1. terraform apply

How much impact is this issue causing?

Medium

Logs

No response

Additional Information

No response

Code of Conduct

  • I agree to follow this project's Code of Conduct

Ability to set explicit permissions on files included in archives

Terraform Version

v0.12.19

Affected Resource(s)

archive_file

Terraform Configuration Files

data "http" "datadog" {
  url = "https://raw.githubusercontent.com/DataDog/datadog-serverless-functions/master/aws/logs_monitoring/lambda_function.py"
}

data "archive_file" "datadog" {
  type        = "zip"
  output_path = "${path.module}/files/lambda_function.py.zip"
  source {
    content  = data.http.datadog.body
    filename = "lambda_function.py"
  }
}

resource "aws_lambda_function" "datadog" {
  function_name                   = "datadog-logs"
  filename                        =  data.archive_file.datadog.output_path
  source_code_hash                =  filebase64sha256(data.archive_file.datadog.output_path)
  ...
}

Actual Behavior

The file is written to disk using the umask of the host. The file is included in the zip with those same permissions. If the umask is more restrictive than the 755 required by Lambda, the zip is unreadable by Lambda and Lambda fails with a "permission denied".

Expected Behavior

The above is "expected" but is unpredictable - what works on a dev laptop doesn't match what happens on a CI/CD server because it is vulnerable to the host's umask. Instead, the archive_file resource should support a file_permission attribute on sources just like the local_file resource does:

data "archive_file" "datadog" {
  type        = "zip"
  output_path = "${path.module}/files/lambda_function.py.zip"

  source {
    content  = "${data.http.datadog.body}"
    filename = "lambda_function.py"
    file_permission = "0755"
  }
}

Feature Request: Maintain Parent Directory of source_dir

Trying to create a lambda layer consisting of just the ./node_modules fails, because the zip file created strips away the node_modules parent directory (example). Ideally, source_dir should maintain the parent directory structure and allow you to strip away the parent directory using a value such as ./path/to/dir/**, but this would not be backwards compatible. Ways this could be solved without breaking backwards compatibility: add an ignore attribute that takes a regex, a target_dir attribute that sets the name of the parent directory inside the zip file, or a boolean attribute save_parent_dir to copy the parent directory structure as well.

Terraform Version

Terraform v0.12.24
+ provider.archive v1.3.0
+ provider.aws v2.54.0

Affected Resource(s)

  • data.archive_file

Terraform Configuration Files

data "archive_file" "node_modules" {
  type        = "zip"
  output_path = "./.cache/${var.name}_node_module.zip"
  source_dir  = "${var.source_dir}/node_modules"
}

output_base64sha256 often returns the hash of an empty file instead of the hash of the contents

In short, Terraform keeps wanting to re-deploy my lambda functions at random (not even consistent between plan-cancel-plan). It turns out that in each case, the computed hash for the lambda zip file in the plan is equal to the hash of the empty file:

touch /tmp/emptyfile
openssl dgst -binary -sha256 /tmp/emptyfile | openssl base64

Output (same as the hash that's being returned by output_base64sha256 at random):

47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU=

My guess is that there's a race condition whereby the hash is sometimes performed on an empty file.
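The reported value is indeed the base64-encoded SHA-256 of zero bytes, which can be confirmed independently (a Python equivalent of the openssl commands above):

```python
import base64
import hashlib

# SHA-256 of empty input, base64-encoded -- the value seen in the plans.
digest = hashlib.sha256(b"").digest()
print(base64.b64encode(digest).decode())
# 47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU=
```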

Terraform Version

Terraform v0.11.8

  • provider.archive v1.1.0
  • provider.aws v1.40.0
  • provider.local v1.1.0
  • provider.null v1.0.0
  • provider.template v1.0.0

Affected Resource(s)

  • data.archive_file

Terraform Configuration Files

These are extracts from two submodules which declare the affected lambda functions. The problem seems to apply to all lambda functions in my entire stack though.

data "archive_file" "notify_model_ready_lambda_code_zip" {
  type = "zip"
  source_file = "${path.module}/lambda/notify_model_ready.py"
  output_path = "${path.module}/lambda/notify_model_ready.zip"
}

resource "aws_lambda_function" "notify_model_ready" {
  function_name = "${var.service_name_short}-${var.environment_name}-notify-model-ready"
  description = "Notifies that the model for ${var.service_name_long} is ready by publishing to an SNS topic."
  role = "${module.notify_model_ready_lambda_role.role_arn}"
  runtime = "python2.7"
  handler = "notify_model_ready.lambda_handler"
  filename = "${data.archive_file.notify_model_ready_lambda_code_zip.output_path}"
  source_code_hash = "${data.archive_file.notify_model_ready_lambda_code_zip.output_base64sha256}"
  publish = true
  tags = "${local.all_tags}"
  environment {
    variables = {
      TOPIC_ARN = "${aws_sns_topic.model_ready.arn}"
    }
  }
  lifecycle {
    ignore_changes = ["filename"]
  }
}

...

data "archive_file" "model_refresher_lambda_code_zip" {
  type = "zip"
  source_file = "${path.module}/lambda/refresh_model.py"
  output_path = "${path.module}/lambda/refresh_model.zip"
}

resource "aws_lambda_function" "refresh_service_model" {
  function_name = "${var.service_name_short}-${var.environment_name}-refresh-model"
  description = "Takes a training output model and uses it to refresh the model for the service ${var.service_name_long}."
  role = "${module.model_refresher_iam_role.role_arn}"
  runtime = "python2.7"
  handler = "refresh_model.lambda_handler"
  filename = "${data.archive_file.model_refresher_lambda_code_zip.output_path}"
  source_code_hash = "${data.archive_file.model_refresher_lambda_code_zip.output_base64sha256}"
  publish = true
  tags = "${local.all_tags}"
  timeout = 300
  environment {
    variables = {
      ENABLED = "${var.enabled ? "true" : "false"}"
      TRAINING_OUTPUT_S3_BUCKET_NAME = "${var.training_output_s3_bucket_name}"
      SERVICE_MODEL_S3_BUCKET = "${var.service_model_s3_bucket_name}"
      ECS_CLUSTER_NAME = "${var.ecs_cluster_name}"
      ECS_SERVICE_NAME = "${var.service_name_long}-service"
      SERVICE_DOCKER_REPO_URI = "${var.service_docker_repository_url}"
    }
  }
  lifecycle {
    ignore_changes = ["filename"]
  }
}

Expected Behavior

Once the lambda functions are deployed, Terraform will not try to re-deploy them unless the source code has changed.

Actual Behavior

Terraform often (but not always) wants to re-deploy some of the lambda functions because it thinks that their source code hash has changed. The lambda functions which it wants to deploy each time are seemingly random, and change even between consecutive plan operations without applying. The source code hash in every case where it wants to re-deploy is always the same hash code, even though the different lambda functions have different source code. Note that the supposedly new hash for two different lambda functions here is "47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU="

Plan output

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  ~ update in-place
 <= read (data resources)

Terraform will perform the following actions:

 <= module.compute_shared.data.null_data_source.aws_batch_service_role_arn
      id:                   <computed>
      has_computed_default: <computed>
      inputs.%:             "1"
      inputs.arn:           "arn:aws:iam::695716229028:role/tm-ds/shared/aws-batch-service-role-prod-20180531031834797400000005"
      outputs.%:            <computed>
      random:               <computed>

  ~ module.data_science_services.module.sh_recommendations_trainer.aws_lambda_function.notify_model_ready
      last_modified:        "2018-10-08T00:26:01.608+0000" => <computed>
      qualified_arn:        "arn:aws:lambda:us-west-2:695716229028:function:sh-recs-prod-notify-model-ready:7" => <computed>
      source_code_hash:     "j3BpqsSkUvNFP2F3kRpoqCav+IlnEO6iFVwkGifFsqE=" => "47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU="
      version:              "7" => <computed>

  ~ module.data_science_services.module.sh_recommendations_trainer.aws_lambda_function.submit_training_job
      last_modified:        "2018-08-30T04:48:15.435+0000" => <computed>
      qualified_arn:        "arn:aws:lambda:us-west-2:695716229028:function:sh-recs-prod-submit-training-job:8" => <computed>
      source_code_hash:     "25xwKLUPygqJcUnFK2Iv85GhPkvP2bU7KkO6Yc9J0+s=" => "47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU="
      version:              "8" => <computed>


Plan: 0 to add, 2 to change, 0 to destroy.

Hash of lambda function code file changes if run in a different shell/OS

This issue was originally opened by @wadhekarpankaj as hashicorp/terraform#22397. It was migrated here as a result of the provider split. The original body of the issue is below.


Hello,
I am using the lambda module to create a lambda function in AWS. However, the value of source_code_hash changes if I run terraform plan/apply in a different shell or OS.
The file contents are the same every time I run terraform init. This code is used by multiple users on different OSs, and we need a solution to avoid this.
Hope the issue is clear.
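
A plausible mechanism (an assumption on my part, not confirmed from these reports): output_base64sha256 hashes the archive bytes, and a zip embeds per-entry metadata such as timestamps and permission bits that can differ between machines, so byte-identical source files may still produce different hashes. A minimal Python sketch of the effect, using made-up metadata values:

```python
import base64
import hashlib
import io
import zipfile

def zip_hash(date_time):
    """Zip identical content with the given entry timestamp and hash the bytes."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        info = zipfile.ZipInfo("lambda-function.py", date_time=date_time)
        zf.writestr(info, "def lambda_handler(event, context):\n    return 'ok'\n")
    return base64.b64encode(hashlib.sha256(buf.getvalue()).digest()).decode("ascii")

hash_a = zip_hash((1980, 1, 1, 0, 0, 0))   # metadata as one machine might record it
hash_b = zip_hash((2019, 6, 1, 12, 0, 0))  # same content, different metadata
print(hash_a == hash_b)  # False: identical source, different archive hash
```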

Terraform version

Terraform v0.11.11

Terraform code

data "archive_file" "lambda_code" {
  type        = "zip"
  source_file = "${path.module}/functions/lambda-function.py"
  output_path = "${path.module}/functions/lambda-function.zip"
}

resource "aws_lambda_function" "lambda_function" {
  filename         = "${replace(substr(data.archive_file.lambda_code.output_path, length(path.cwd) + 1, -1), "\\", "/")}"
  function_name    = "my-test-function"
  role             = "${aws_iam_role.iam_for_lambda.arn}"
  handler          = "lambda-function.lambda_handler"
  source_code_hash = "${data.archive_file.lambda_code.output_base64sha256}"
  runtime          = "python2.7"
  timeout          = "60"

  lifecycle {
    ignore_changes = [
      "filename",
      "last_modified",
    ]
  }
}

Actual Behavior

In Windows-
No changes. Infrastructure is up-to-date.
In Ubuntu-

~   aws_lambda_function.lambda_function
      source_code_hash: "7/j4FEt6mgWVm+t991ffkck72xH9LGJvesyNqeC8ETc=" => "/S9mgjpI5UBGSRpMVQUv8HJkj3jeKGnWvsSPW4QiMzY="

and vice versa

Expected Behavior

In Windows-
No changes. Infrastructure is up-to-date.
In Ubuntu-
No changes. Infrastructure is up-to-date.

gzip support for archive_file

This issue was originally opened by @akranga as hashicorp/terraform#12884. It was migrated here as part of the provider split. The original body of the issue is below.


Feature request to implement gzip and base64 support for archive_file. This relates to hashicorp/terraform#3407. Terraform now provides data sources, which seem to be the right abstraction for such actions. Here is an example below.

data "archive_file" "foo" {
    type        = "gzip"
    source_file = "${path.module}/cloud-init.yaml"
    output_path = "${path.module}/files/cloud-init.gzip"
}

data "archive_file" "bar" {
    type        = "gzip|base64"
    source_file = "${path.module}/cloud-init.yaml"
    output_path = "${path.module}/files/cloud-init.base64"
}

We would also need an output_content attribute that behaves similarly to source_content.

If this feature request is accepted, we can start working on an implementation.

Archive Strips File Attributes

Terraform Version

Terraform v0.10.7

Affected Resource(s)

  • data.archive_file

Terraform Configuration Files

resource "local_file" "env" {
  content  = "{\"APEX_FUNCTION_NAME\":\"${var.function_name}\",\"LAMBDA_FUNCTION_NAME\":\"${var.project_name}_${var.function_name}\"}"
  filename = "${path.module}/dist/.env.json"
}

resource "null_resource" "build_lambda" {
  provisioner "local-exec" {
    command = <<EOF
mkdir -p ${path.module}/dist
CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -ldflags -f -a -o ${path.module}/dist/main ${path.module}/src/main.go
chmod +x ${path.module}/dist/main
cp ${path.module}/src/*.js ${path.module}/dist
EOF
  }

  triggers {
   sha = "${base64sha256(file("${path.module}/src/main.go"))}"
  }

  depends_on = ["local_file.env"]
}

data "archive_file" "lambda_zip" {
  type        = "zip"
  source_dir  = "${path.module}/dist"
  output_path = "${var.build_path}"
  depends_on  = ["null_resource.build_lambda"]
}

Debug Output

N/A

Panic Output

N/A

Expected Behavior

The archive should include the golang binary with the executable flag set.

Actual Behavior

All of the file attributes were stripped from the binary.

Steps to Reproduce

  1. Create a file with +x attr
  2. Use the data.archive_file resource to archive the file from step 1.
  3. Run terraform apply
  4. Inspect the archive file. File is stripped of the +x attribute.

Important Factoids

The zip archiver is using the https://golang.org/pkg/archive/zip/#Writer.Create method which does not preserve the file header info. The https://golang.org/pkg/archive/zip/#Writer.CreateHeader method should be used instead to preserve the attributes of the files within the archive.

The executable attribute is critical when using NodeJS as a shim to execute golang binaries in AWS Lambda. Without the +x flag set, the Lambda is not able to invoke the binary correctly from the uploaded archive.
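
The Create-vs-CreateHeader distinction has a direct analogue in Python's zipfile, which makes the behavior easy to demonstrate without rebuilding the provider: writing an entry by name alone loses the original mode, while writing through an explicit header (the counterpart of zip.Writer.CreateHeader) carries the Unix mode in the upper 16 bits of external_attr. A sketch with made-up filenames and payload:

```python
import io
import zipfile

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    # Counterpart of zip.Writer.Create: only a name is given, so the
    # original file mode is lost.
    zf.writestr("main-stripped", b"\x7fELF fake binary")

    # Counterpart of zip.Writer.CreateHeader: a full header that stores
    # the Unix mode (rwxr-xr-x here) in external_attr.
    info = zipfile.ZipInfo("main-exec")
    info.external_attr = 0o755 << 16
    zf.writestr(info, b"\x7fELF fake binary")

with zipfile.ZipFile(io.BytesIO(buf.getvalue())) as zf:
    modes = {i.filename: (i.external_attr >> 16) & 0o777 for i in zf.infolist()}
print({name: oct(mode) for name, mode in modes.items()})
```

The executable bit survives only on the entry written through an explicit header.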

References

  • None

version 1.0.2 alters how zip files are made, which makes them appear to have changed when their contents have not changed.

Terraform Version

+ provider.archive v1.0.0
+ provider.aws v1.11.0
+ provider.template v1.0.0

Currently running the 1.0.0 provider, before the change introduced in 1.0.2.

Affected Resource(s)

  • lambda functions' archive-provider-generated zip files
  • any archive-provider-generated zip files whose changes are tracked by hashes

Terraform Configuration Files

We have many; this affects all our account-regions because we have lambda scripts in our base module. Anyone with the latest archive provider will generate a plan that wants to re-upload the lambda zip files, despite no changes in the source files for those zips. This causes a great deal of confusion for other teams. We have temporarily pinned the archive provider to version 1.0.0 to avoid the confusion for now.

Debug Output

Panic Output

Expected Behavior

Since the source files didn't change, the zip files shouldn't have changed either.

Actual Behavior

The hashes used to track whether the zip files have changed do change when using provider 1.0.2 vs 1.0.0.

Steps to Reproduce

  1. Using a setup that employs the archive provider to generate a zip for a lambda function that has already been successfully deployed under archive provider 1.0.0, run a plan; there should be no changes. Now delete the generated zips, upgrade the provider to 1.0.2, and run a plan: Terraform will want to update all the lambda functions. If you let it, it's not actually an issue; the zips are still valid, and uploading them again changes nothing, since the source files have not changed. However, if one user applies the changes, re-uploads the files, and stores the new hashes in the state file, and another user with the older provider then runs a plan, Terraform will want to re-upload the zips again and change all the hashes back.

Our temporary solution is to pin the archive provider to 1.0.0. I can also imagine forcing it to 1.0.2 and then running a mass plan/apply (on all 70 or so of our accounts) to update them all to the "new" hashes. We want to know what changed about the zip-creating process that alters the files, though; we don't want to do this often!

Important Factoids

We are running terraform for around 70 different accounts under one repo. This situation happens due to lambdas we have defined in our base module, so it affects all accounts.

References

Cannot archive directories with symbolic links (symlinks)

Hi there,

archive_file throws an error when a source directory contains symlinked folders. One may hit this error when trying to package an application folder with dependencies added to it via a symbolic link. Please check out the details below.

Terraform Version

v0.9.8

Terraform Configuration Files

data "archive_file" "dir" {
  type = "zip"
  source_dir = "./dir_with_symlink"
  output_path = "./dir.zip"
}

Debug Output

trace.log

Main error:

error archiving directory: error reading file for archival: read dir_with_symlink/symlinked: is a directory

Expected Behavior

Symlinked folder is resolved and archived along with regular folders and files

Actual Behavior

Terraform exits with an error

Steps to Reproduce

  1. Create two directories symlinked and dir_with_symlink
  2. Run ln -s $(pwd)/symlinked $(pwd)/dir_with_symlink/
  3. Run terraform plan

Files in the archive do not have the correct modification time metadata

Expected Behavior

This data source:

data "archive_file" "salt_configuration" {
  type = "zip"
  source_dir = "foo"
  output_path = "foo.zip"
}

should construct a zipfile containing the contents of foo. Each file in the foo directory should be present in the archive with a modification time equal to that of the file in the filesystem.

Actual Behavior

All files in the archive are given the modification time of midnight Jan 1 2049.

The explanation for this seems to be straightforward. No attempt is made to preserve the file's mtime in the archive: https://github.com/terraform-providers/terraform-provider-archive/blob/bcb385318ea3c5ddc24c65231a7ed54a46fe2cb5/archive/zip_archiver.go#L133

This is undesirable because some systems use the mtime to indicate freshness. Timestamps from the archive in the distant future look newer than any real files, so changed real files never appear newer than the archived "old" ones and are ignored.
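
For anyone who wants to see this concretely, Python's zipfile exposes the same choice the Go archiver faces: a bare ZipInfo gets the fixed zip-epoch default, while ZipInfo.from_file copies the file's mtime into the entry header, which is roughly what a fix at the line linked above would need to do. A sketch using a throwaway temp file with an arbitrary pinned mtime:

```python
import os
import tempfile
import time
import zipfile

# Create a file and pin its mtime to a known local time.
fd, path = tempfile.mkstemp()
os.write(fd, b"print('hello')\n")
os.close(fd)
mtime = time.mktime((2021, 6, 1, 12, 0, 0, 0, 0, -1))
os.utime(path, (mtime, mtime))

# from_file copies the filesystem mtime into the entry header...
preserved = zipfile.ZipInfo.from_file(path, arcname="script.py")
# ...whereas a bare ZipInfo defaults to the zip epoch, 1980-01-01.
defaulted = zipfile.ZipInfo("script.py")

print(preserved.date_time, defaulted.date_time)
os.unlink(path)
```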

[feature request][archive_file] Allow source_file to be list of files

This issue was originally opened by @mtougeron as hashicorp/terraform#8565. It was migrated here as part of the provider split. The original body of the issue is below.


Hi, it would be nice if the archive_file resource allowed source_file and source_dir to be lists. It would also be nice if they were not mutually exclusive.

For example, I'd like to use the following config to support my aws lambda functions.

resource "archive_file" "sesbounce" {
  type = "zip"
  source_file = [
    "${path.module}/sesbounce.py",
    "${path.module}/shared/common.py"
  ]
  source_dir = [
    "${path.module}/pip_modules/"
  ]

  output_path = "${path.module}/_files/sesbounce.zip"
}

resource "aws_lambda_function" "sesbounce" {
  filename = "${path.module}/_files/sesbounce.zip"
  function_name = "sesbounce"
  role = "${aws_iam_role.iam_for_lambda.arn}"
  handler = "sesbounce.handler_ses_bounce"
  source_code_hash = "${base64sha256(file("_files/sesbounce.zip"))}"
}

source_dir property of archive_file data source doesn't work with ${path.module} and "../"

This issue was originally opened by @philipl as hashicorp/terraform#12929. It was migrated here as part of the provider split. The original body of the issue is below.


Terraform Version

terraform 0.9.1

Affected Resource(s)

  • archive_file

Terraform Configuration Files

data "archive_file" "function" {
    type = "zip"
    source_dir = "${path.module}/../src"
    output_path = "${path.root}/somefile.zip"
}

Debug Output

2017/03/21 09:25:52 [DEBUG] root.cloudfront: eval: *terraform.EvalReadDataApply
o:data.archive_file.function: Refreshing state...
2017/03/21 09:25:52 [ERROR] root.cloudfront: eval: *terraform.EvalReadDataApply, err: data.archive_file.function: unexpected EOF
2017/03/21 09:25:52 [ERROR] root.cloudfront: eval: *terraform.EvalSequence, err: data.archive_file.function: unexpected EOF
2017/03/21 09:25:52 [TRACE] [walkRefresh] Exiting eval tree: module.cloudfront.data.archive_file.function
2017/03/21 09:25:52 [DEBUG] plugin: terraform: panic: runtime error: invalid memory address or nil pointer dereference
2017/03/21 09:25:52 [DEBUG] plugin: terraform: [signal SIGSEGV: segmentation violation code=0x1 addr=0x20 pc=0xb02682]
2017/03/21 09:25:52 [DEBUG] plugin: terraform: 
2017/03/21 09:25:52 [DEBUG] plugin: terraform: goroutine 43 [running]:
2017/03/21 09:25:52 [DEBUG] plugin: terraform: github.com/hashicorp/terraform/builtin/providers/archive.(*ZipArchiver).ArchiveDir.func1(0xc4204f7580, 0x7c, 0x0, 0x0, 0x73e88a0, 0xc420509ad0, 0x0, 0x0)
2017/03/21 09:25:52 [DEBUG] plugin: terraform:  /opt/gopath/src/github.com/hashicorp/terraform/builtin/providers/archive/zip_archiver.go:65 +0x52
2017/03/21 09:25:52 [DEBUG] plugin: terraform: path/filepath.walk(0xc420582090, 0x8a, 0x74067a0, 0xc4203188f0, 0xc42015a400, 0x0, 0x20)
2017/03/21 09:25:52 [DEBUG] plugin: terraform:  /opt/go/src/path/filepath/path.go:372 +0x2fe
2017/03/21 09:25:52 [DEBUG] plugin: terraform: path/filepath.Walk(0xc420582090, 0x8a, 0xc42015a400, 0xc420318820, 0x0)
2017/03/21 09:25:52 [DEBUG] plugin: terraform:  /opt/go/src/path/filepath/path.go:398 +0x14c
2017/03/21 09:25:52 [DEBUG] plugin: terraform: github.com/hashicorp/terraform/builtin/providers/archive.(*ZipArchiver).ArchiveDir(0xc42015a360, 0xc420582090, 0x8a, 0x0, 0x0)
2017/03/21 09:25:52 [DEBUG] plugin: terraform:  /opt/gopath/src/github.com/hashicorp/terraform/builtin/providers/archive/zip_archiver.go:85 +0x117
2017/03/21 09:25:52 [DEBUG] plugin: terraform: github.com/hashicorp/terraform/builtin/providers/archive.archive(0xc4202da310, 0x5f, 0x74067a0)
2017/03/21 09:25:52 [DEBUG] plugin: terraform:  /opt/gopath/src/github.com/hashicorp/terraform/builtin/providers/archive/data_source_archive_file.go:158 +0x18d
2017/03/21 09:25:52 [DEBUG] plugin: terraform: github.com/hashicorp/terraform/builtin/providers/archive.dataSourceFileRead(0xc4202da310, 0x0, 0x0, 0xc420238801, 0x0)
2017/03/21 09:25:52 [DEBUG] plugin: terraform:  /opt/gopath/src/github.com/hashicorp/terraform/builtin/providers/archive/data_source_archive_file.go:123 +0xbf
2017/03/21 09:25:52 [DEBUG] plugin: terraform: github.com/hashicorp/terraform/helper/schema.(*Resource).ReadDataApply(0xc4204fac00, 0xc42015a180, 0x0, 0x0, 0xc42036e978, 0xc420509901, 0x0)
2017/03/21 09:25:52 [DEBUG] plugin: terraform:  /opt/gopath/src/github.com/hashicorp/terraform/helper/schema/resource.go:252 +0xbb
2017/03/21 09:25:52 [DEBUG] plugin: terraform: github.com/hashicorp/terraform/helper/schema.(*Provider).ReadDataApply(0xc4202d2930, 0xc4202494f0, 0xc42015a180, 0x7fcbfda7d960, 0x0, 0x0)
2017/03/21 09:25:52 [DEBUG] plugin: terraform:  /opt/gopath/src/github.com/hashicorp/terraform/helper/schema/provider.go:381 +0x91
2017/03/21 09:25:52 [DEBUG] plugin: terraform: github.com/hashicorp/terraform/plugin.(*ResourceProviderServer).ReadDataApply(0xc4202b59e0, 0xc42035a440, 0xc42035a820, 0x0, 0x0)
2017/03/21 09:25:52 [DEBUG] plugin: terraform:  /opt/gopath/src/github.com/hashicorp/terraform/plugin/resource_provider.go:565 +0x4e
2017/03/21 09:25:52 [DEBUG] plugin: terraform: reflect.Value.call(0xc4202fa2a0, 0xc420300180, 0x13, 0x4c5783b, 0x4, 0xc42062df20, 0x3, 0x3, 0x4de3300, 0xc42002aee8, ...)
2017/03/21 09:25:52 [DEBUG] plugin: terraform:  /opt/go/src/reflect/value.go:434 +0x91f
2017/03/21 09:25:52 [DEBUG] plugin: terraform: reflect.Value.Call(0xc4202fa2a0, 0xc420300180, 0x13, 0xc42002af20, 0x3, 0x3, 0xc42002af0c, 0x180001, 0x300000000)
2017/03/21 09:25:52 [DEBUG] plugin: terraform:  /opt/go/src/reflect/value.go:302 +0xa4
2017/03/21 09:25:52 [DEBUG] plugin: terraform: net/rpc.(*service).call(0xc420429a80, 0xc420429a00, 0xc42037fab8, 0xc4204f6300, 0xc420411360, 0x3ee7d00, 0xc42035a440, 0x16, 0x3ee7d40, 0xc42035a820, ...)
2017/03/21 09:25:52 [DEBUG] plugin: terraform:  /opt/go/src/net/rpc/server.go:387 +0x144
2017/03/21 09:25:52 [DEBUG] plugin: terraform: created by net/rpc.(*Server).ServeCodec
2017/03/21 09:25:52 [DEBUG] plugin: terraform:  /opt/go/src/net/rpc/server.go:481 +0x404
2017/03/21 09:25:52 [DEBUG] plugin: terraform: 
2017/03/21 09:25:52 [DEBUG] plugin: /home/philipl/.local/bin/terraform: plugin process exited

Panic Log

https://gist.github.com/philipl/e5108167c11c17f2b96564e81e34e0fc

Expected Behavior

zip file is created with contents of directory

Actual Behavior

empty zip file is created and crash is reported in debug output

Steps to Reproduce

  1. terraform plan

Important Factoids

This seems a very specific failure. The following scenarios work fine:

  1. source_dir = "${path.module}/src"
  2. source_dir = "/some/path/../src"
  3. source_file = "${path.module}/../src/filename.js"

The specific failure is when ${path.module} is combined with "../"

archive_file changes file permissions

This issue was originally opened by @cbarensfeld as hashicorp/terraform#16598. It was migrated here as a result of the provider split. The original body of the issue is below.


When using archive_file, file permissions are set to 644 regardless of the original permissions. Is this by design? I would really like to use Terraform to zip up my Lambda function, but since I am using a binary, which needs to be executable, I cannot.

https://www.terraform.io/docs/providers/archive/d/archive_file.html

Migrate Documentation to terraform-plugin-docs

The "standard library" Terraform Providers should implement nascent provider development tooling to encourage consistency, foster automation where possible, and discover bugs/enhancements to that tooling. To that end, this provider's documentation should be switched to terraform-plugin-docs, including:

  • Migrating directory structures and files as necessary
  • Enabling automation for documentation generation (e.g. make gen or go generate ./...)
  • Enabling automated checking that documentation has been re-generated during pull request testing (e.g. no differences)

Run

Hi there,

  • We have a Lambda function with 3 Python files and 1 requirements file. We use GitOps and store all our files in Git. We need to upload only the 3 Python files (not all source files and dependencies), so we need the build command to run every time terraform apply is run; currently it runs only at Lambda creation.

Terraform Version

v0.12.19

Affected Resource(s)

Please list the resources as a list, for example:

  • lambda function

Switch to GitHub Actions and goreleaser Release Process

The "standard library" Terraform Providers should implement nascent provider development tooling to encourage consistency, foster automation where possible, and discover bugs/enhancements to that tooling. To that end, this provider's release process should be switched to goreleaser to match the documented Terraform Registry publishing recommendations. This includes:

  • Creating necessary .goreleaser.yml and .github/workflows/release.yml configurations for tag-based releases (see also: TF-279 RFC)
  • Ensuring necessary GitHub or Vault tokens are in place to fetch release secrets
  • Ensuring provider and internal release process documentation is updated

Terraform provider data archive

Hi there,

I am trying to dynamically build Go code from source, zip it, and create a lambda out of it. The problem starts because we do not want to keep binaries alongside IaC code, so the build and archive have to happen dynamically in Terraform, but the data resource causes the issues described below.

Terraform Version

Terraform v0.12.24

  • provider.archive v1.3.0
  • provider.aws v2.67.0
  • provider.helm v1.2.3
  • provider.kubernetes v1.11.3
  • provider.local v1.4.0
  • provider.null v2.1.2
  • provider.random v2.2.1
  • provider.template v2.1.2

Affected Resource(s)

Please list the resources as a list, for example:

  • null_resource.builder
  • null_data_resource.wait_for_builder
  • data.archive_file.zip_file

If this issue appears to affect multiple resources, it may be an issue with Terraform's core, so please mention this.

Terraform Configuration Files

resource "null_resource" "builder" {
  count = var.enabled == true ? 1 : 0

  triggers = {
    generated_hash = filemd5("${path.module}/lambda/eks-monitor/main.go")
  }

  provisioner "local-exec" {
    working_dir = "${path.module}/lambda/eks-monitor"
    command     = "go build main.go"
  }

}

data "null_data_source" "wait_for_builder" {
  count = var.enabled == true ? 1 : 0

  inputs = {
    # This ensures that this data resource will not be evaluated until
    # after the null_resource has been created.
    builder_id = null_resource.builder.0.id

    # This value gives us something to implicitly depend on
    # in the archive_file below.
    source_file = "${path.module}/lambda/eks-monitor/main"
  }
}


data "archive_file" "zip_file" {
  count = var.enabled == true ? 1 : 0

  type        = "zip"
  source_file = data.null_data_source.wait_for_builder.0.outputs["source_file"]
  output_path = "${path.module}/lambda/eks-monitor/main.zip"
}

Expected Behavior

Ideally, Terraform should know this is an archive_file data source and not try to run it during refresh.

Actual Behavior

During refresh it runs again and causes an issue: null_resource built the binary once, and during the second run the binary doesn't exist (we use Jenkins to deploy, the worker nodes underneath keep changing, and the binary file is not there during the second run).

So I get this error

Error: error archiving file: could not archive missing file: ../modules/aws-eks-add-on/aws-eks-monitor/lambda/eks-monitor/main

on ../modules/aws-eks-add-on/aws-eks-monitor/lambda.tf line 58, in data "archive_file" "zip_file":
58: data "archive_file" "zip_file" {

Steps to Reproduce

Please list the steps required to reproduce the issue, for example:

  1. terraform apply
  2. Make sure main and main.zip are deleted, because that's where the issue starts (build and deploy environments are dynamic containers and do not store dynamically built artifacts)
  3. terraform apply again

archive_file not creating archive

This issue was originally opened by @blalor as hashicorp/terraform#12471. It was migrated here as part of the provider split. The original body of the issue is below.


Terraform Version

v0.8.8

Affected Resource(s)

  • archive_file

Terraform Configuration Files

(partial; if the log's not sufficient, I'll encrypt my config and upload.)

data "null_data_source" "lambda_vars" {
    inputs = {
        output_path = "/tmp/route53-vpn-updater.${data.template_file.var_vpc_id.rendered}.zip"
    }
}

## zip up the Lambda source file
# https://github.com/hashicorp/terraform/issues/8344#issuecomment-265548941
data "archive_file" "route53_vpn_updater" {
    type = "zip"
    source_file = "${path.module}/files/lambda_functions/vpn_ddns.py"
    output_path = "${data.null_data_source.lambda_vars.outputs.output_path}"
}

## the Lambda function itself
resource "aws_lambda_function" "route53_vpn_updater" {
    ## must match pattern:
    ## (arn:aws:lambda:)?([a-z]{2}-[a-z]+-\d{1}:)?(\d{12}:)?(function:)?([a-zA-Z0-9-_]+)(:(\$LATEST|[a-zA-Z0-9-_]+))?
    function_name = "route53-vpn-updater_${data.template_file.var_vpc_label.rendered}"

    filename = "${data.null_data_source.lambda_vars.outputs.output_path}"
    source_code_hash = "${data.archive_file.route53_vpn_updater.output_base64sha256}"
    handler = "vpn_ddns.lambda_handler"

    role = "${var.ddns_lambda_role_arn}"
    
    runtime = "python2.7"
    timeout = 300
    
    kms_key_arn = "${aws_kms_key.ddns-lambda.arn}"
}

Debug Output

Encrypted with [email protected]'s PGP key; this is from a real run of one of my environments.
terraform.log.gpg.zip

Expected Behavior

Lambda function should have been created from the archive file.

Actual Behavior

* aws_lambda_function.route53_vpn_updater: Unable to load "/tmp/route53-vpn-updater.prod_aws_gen_eu-west-1.zip": open /tmp/route53-vpn-updater.prod_aws_gen_eu-west-1.zip: no such file or directory

Steps to Reproduce

The apply worked fine on macOS 10.12.3, but failed in a CentOS 7 rkt container. The apply failed consistently enough in my CI environment that I was able to add the TF_LOG stuff to capture the output.

Migrate to terraform-plugin-framework

Migrate from https://github.com/hashicorp/terraform-plugin-sdk to https://github.com/hashicorp/terraform-plugin-framework.

In addition to the migration, "uplift" of the Terraform Utility Providers involving a number of tasks (see below) is being undertaken.

  • GitHub action to test all minor Terraform versions >= 0.12
  • Acceptance tests to use TestCheckFunc (see docs and example)
  • Adoption of tflog (see docs)
  • Removal of deprecated fields, types and functions
  • Update Makefile
  • Switch linting to golangci-lint
  • Use terraform-plugin-docs
  • Add DESIGN.md
  • Update README.md
  • Update CONTRIBUTING.md
  • Update SUPPORT.md

AWS Lambda function "module initialization error: [Errno 13] Permission denied: '/var/task/helloworld.py'" when provisioned by archive_file type zip

Hi,

Terraform Version

$ terraform -v
Terraform v0.11.3

  • provider.archive v1.0.3
  • provider.aws v1.12.0

Affected Resource(s)

Please list the resources as a list, for example:

  • archive_file
  • aws_lambda_function

After upgrading archive from 1.0.0 to 1.0.3, the provisioned Lambda function received the error below:
module initialization error: [Errno 13] Permission denied: '/var/task/helloworld.py'

Terraform Configuration Files

data "archive_file" "helloworld_lambda_zip" {
  source_dir  = "${path.module}/lambda"
  output_path = "${path.module}/lambda.zip"
  type        = "zip"
}

resource "aws_lambda_function" "helloworld" {
  function_name    = "helloworld"
  handler          = "helloworld.lambda_handler"
  role             = "${aws_iam_role.my_lambda_role.arn}"
  runtime          = "python3.6"
  timeout          = "120"
  filename         = "${data.archive_file.helloworld_lambda_zip.output_path}"
  source_code_hash = "${data.archive_file.helloworld_lambda_zip.output_base64sha256}"
}

Debug Output

No error occurred while provisioning the Lambda, so no debug output is provided

Panic Output

N/a

Expected Behavior

Terraform is run on a Linux host.

I am aware there was a change in how files are zipped. Here is the zipinfo output for the zip file created by version 1.0.0:

$ zipinfo -l modules/my_module/lambda.zip
Archive: modules/my_module/lambda.zip
Zip file size: 2711 bytes, number of entries: 1
-rw-rw-r-- 2.0 fat 11340 bl 2559 defN 80-Jan-01 00:00 helloworld.py

File extracted has permission 664

Actual Behavior

And here is the zipinfo output for the zip file created by version 1.0.3:

$ zipinfo -l modules/my_module/lambda.zip
Archive: modules/my_module/lambda.zip
Zip file size: 2711 bytes, number of entries: 1
-rw------- 2.0 unx 11340 bl 2559 defN 49-Jan-01 00:00 helloworld.py

File extracted has permission of 600

AWS confirmed that files without world-read permission can receive the module initialization error: [Errno 13] Permission denied: '/var/task/xxx.py' error.
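
A quick way to audit an archive for this without zipinfo is to read the Unix mode from each entry's external_attr and flag anything lacking the world-read bit AWS pointed at. A small Python sketch; the in-memory archive is a stand-in, with one entry written the way 1.0.3 apparently did (0600) and one the way 1.0.0 did (0644):

```python
import io
import zipfile

def world_unreadable(zf):
    """Return entries whose Unix mode lacks the world-read bit (o+r)."""
    bad = []
    for info in zf.infolist():
        mode = (info.external_attr >> 16) & 0o777
        if mode and not mode & 0o004:
            bad.append((info.filename, oct(mode)))
    return bad

# Stand-in archive with one 0600 entry and one 0644 entry.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    for name, mode in [("helloworld.py", 0o600), ("ok.py", 0o644)]:
        info = zipfile.ZipInfo(name)
        info.external_attr = mode << 16
        zf.writestr(info, "def lambda_handler(event, context): pass\n")

with zipfile.ZipFile(io.BytesIO(buf.getvalue())) as zf:
    bad = world_unreadable(zf)
print(bad)  # only helloworld.py is flagged
```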

Steps to Reproduce

Please list the steps required to reproduce the issue, for example:

  1. terraform apply

Important Factoids

N/a

References

N/a

data.archive_file does not generate archive file during apply

Hi there,

it looks like data.archive_file does not generate the archive file during apply.

Terraform Version

Terraform version: 0.11.11

  • provider.archive v1.1.0

Affected Resource(s)

  • archive_file

Terraform Configuration Files

data "archive_file" "deployment_package" {
  type = "zip"
  source_dir = "../../example/"
  output_path = ".${replace(path.module, path.root, "")}/tmp/example.zip"
}

Expected Behavior

Archive file is generated during terraform apply.

Actual Behavior

Archive file is not generated. However, if I run terraform plan before apply, the output is generated.

Steps to Reproduce

Please list the steps required to reproduce the issue, for example:

  1. terraform apply

archiving issue in v1.0.3: on Linux, the directory separators are replaced by backslashes

Hi there,

Got an issue with archiving in the version mentioned below. When the archived files were placed into EMR by Terraform and I logged into the EMR cluster to check them, the folders and files appeared with backslashes, the opposite of the directory structure in Linux.

I reverted to v1.0.0, re-ran terraform apply, and everything worked as expected, so the bug may be in the latest version.

Terraform Version

provider.archive v1.3.0

Affected Resource(s)

Linux folder structure

Expected Behavior

Expected the folder structure (/tmp/colors/dark/red) from the archived folder

Actual Behavior

Found backslashes (\tmp\colors\dark\red) instead of the expected folders and files

Steps to Reproduce

  1. terraform apply

Important Factoids

Running in EMR Cluster

Terraform always shows archive_file in plan

Hi there,

Terraform Version

Terraform v0.11.3

  • provider.archive v1.0.0
  • provider.null v1.0.0

Affected Resource(s)

Please list the resources as a list, for example:

  • data.archive_file

Terraform Configuration Files

resource "null_resource" "lambda_exporter" {
  triggers {
    index = "${base64sha256(file("${path.module}/lambda-files/index.js"))}"
  }
}

data "archive_file" "lambda_exporter" {
  output_path = "${path.module}/lambda-files.zip"
  source_dir  = "${path.module}/lambda-files/"
  type        = "zip"

  depends_on = ["null_resource.lambda_exporter"]
}

Debug Output

Apply once, and then re-plan:

------------------------------------------------------------------------

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
 <= read (data resources)

Terraform will perform the following actions:

 <= data.archive_file.lambda_exporter
      id:                  <computed>
      output_base64sha256: <computed>
      output_md5:          <computed>
      output_path:         "/Users/amaru/work/tfbug/lambda-files.zip"
      output_sha:          <computed>
      output_size:         <computed>
      source.#:            <computed>
      source_dir:          "/Users/amaru/work/tfbug/lambda-files/"
      type:                "zip"


Plan: 0 to add, 0 to change, 0 to destroy.

------------------------------------------------------------------------

Expected Behavior

Terraform shouldn't show any changes on terraform plan, since the hash of the file has not changed.

Actual Behavior

Terraform shows a change with read (data resources) for data.archive_file.lambda_exporter

Steps to Reproduce

Please list the steps required to reproduce the issue, for example:

  1. terraform apply
  2. terraform plan

Feature Request: sensitive content

Provide a way to pass sensitive content to archive_file. At the moment there is no way to mask a variable or flag an input as sensitive other than at the provider level (hashicorp/terraform#16643 (comment)).

Example use case is a rendered template_file with secrets injected into it that we would like to zip.

Three different ways this could be implemented:

data "archive_file" "foobar" {
  type        = "zip"
  output_path = "${path.module}/foobar.zip"

  source {
    content  = "${local.secret}"
    filename = "bar"
    sensitive = true
  }
}
data "archive_file" "foobar" {
  type        = "zip"
  output_path = "${path.module}/foobar.zip"

  source {
    sensitive_content = "${local.secret}"
    filename          = "bar"
  }
}
data "archive_file" "foobar" {
  type        = "zip"
  output_path = "${path.module}/foobar.zip"

  sensitive_source {
    content  = "${local.secret}"
    filename = "bar"
  }
}

Bump Development/Build Minimum Go Version to 1.17

Terraform CLI and Provider Versions

N/A (main branch development)

Use Cases or Problem Statement

Following the Go support policy, and given the ecosystem availability and stability of the latest Go minor version, it's time to upgrade. This will ensure that this project can use recent improvements to the Go runtime and standard library, and continue to receive security updates.

Proposal

  • Run the following commands to upgrade the Go module files and remove deprecated build-constraint syntax such as // +build:
go mod edit -go=1.17
go mod tidy
go fix ./...
  • Ensure any GitHub Actions workflows (.github/workflows/*.yml) use 1.18 in place of any 1.17 and 1.17 in place of any 1.16 or earlier
  • Ensure the README or any Contributing documentation notes the Go 1.17 expected minimum
  • (Not applicable to all projects) Ensure the .go-version is at least 1.17 or later
  • Enable the tenv linter in .golangci.yml and remediate any issues.

How much impact is this issue causing?

Medium

Additional Information

Code of Conduct

  • I agree to follow this project's Code of Conduct

testing

Testing HashiBot

panic: something went horribly wrong!

archive_file produces different results on different OSs

Affected Resource(s)

  • datasource archive_file
  • resource aws_lambda_function

Terraform Configuration Files

data "archive_file" "zip" {
    type        = "zip"
    source_file = "${var.source_code_path}"
    output_path = "zip/${basename(var.source_code_path)}.zip"
}

resource "aws_lambda_function" "fn" {
    filename = "${data.archive_file.zip.output_path}"
    source_code_hash = "${data.archive_file.zip.output_base64sha256}"
    function_name = "${var.function_name}"
    role = "${aws_iam_role.role.arn}"
    handler = "${var.handler_name == "" ? "${replace("${basename(var.source_code_path)}",".py","")}.lambda_handler" : var.handler_name}"
    runtime = "python3.6"
    timeout = "${var.timeout}"

    environment {
        variables = "${var.env_vars}"
    }
}

After applying on Windows, planning on Mac indicates that source_code_hash has changed, even though the content of the code has not changed. This is because archive_file produces equivalent but not identical results on Mac and Windows.

Expected Behavior

The Lambda function updates if and only if the source code changes.

Actual Behavior

The Lambda function updates when the source code changes, or when you run Terraform on a different OS.

Steps to Reproduce

  1. terraform apply on Windows
  2. terraform plan on Mac
  3. observe that it thinks it needs to update the lambda
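
As a workaround under these circumstances, source_code_hash can be derived from the source file itself rather than from the generated zip, so that platform-dependent zip metadata no longer triggers a diff. A sketch (assuming Terraform 0.12+ for filebase64sha256; on 0.11 something like base64sha256(file(...)) would be needed for text files):

```hcl
# Sketch of a workaround: hash the source file, not the archive, so
# OS-specific zip metadata does not change the tracked hash.
resource "aws_lambda_function" "fn" {
  filename         = data.archive_file.zip.output_path
  source_code_hash = filebase64sha256(var.source_code_path)
  # ... remaining arguments unchanged ...
}
```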

Force file mode inside archive

We've discovered that the file mode of source files is now preserved by archive_file. While we recognize everybody has different requirements, we think it would be useful to be able to specify a mode for files, either for the entire archive or for a single source, especially given how often this issue seems to rear its head. Would you welcome a PR for this?

Terraform Version

Terraform v0.12.12

  • provider.archive v1.3.0
  • provider.aws v2.34.0
  • provider.external v1.2.0
  • provider.random v2.2.1
  • provider.template v2.1.2

Affected Resource(s)

  • archive_file

Terraform Configuration Files

data "archive_file" "snapshot_replica" {
  type        = "zip"
  source_file = "${path.module}/snapshot-replica-lambda/index.js"
  output_path = "${path.module}/snapshot-replica-lambda/snapshot-replica-lambda.zip"
}

Debug Output

(can provide)

Expected Behavior

The sha256sum of the generated zipfile is consistent across machines. This seems to be the expectation implied by #10.

Actual Behavior

The sha256sum of the generated zipfile differs depending on the mode of the source file in the archive.

Steps to Reproduce

  1. On Computer A, index.js has the mode 0644 (it has a umask of 0022)
  2. On Computer B, index.js has the mode 0664 (it has a umask of 0002)
  3. tf plan on both machines
  4. The generated archive differs as follows:
$ zipinfo kronos-snapshot-replica-lambda.zip
Archive:  kronos-snapshot-replica-lambda.zip
Zip file size: 815 bytes, number of entries: 1
-rw-r--r--  2.0 unx     1951 bl defN 49-Jan-01 00:00 index.js
1 file, 1951 bytes uncompressed, 685 bytes compressed:  64.9%

$ zipinfo challenger-snapshot-replica-lambda.zip
Archive:  challenger-snapshot-replica-lambda.zip
Zip file size: 815 bytes, number of entries: 1
-rw-rw-r--  2.0 unx     1951 bl defN 49-Jan-01 00:00 index.js
1 file, 1951 bytes uncompressed, 685 bytes compressed:  64.9%
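
One way to make the hash machine-independent is to force a uniform mode for every file in the archive. A sketch, assuming a provider version that supports the output_file_mode argument (an octal string) introduced for exactly this purpose:

```hcl
# Sketch: force a fixed file mode inside the archive so the zip's hash
# does not depend on the local umask. Assumes a provider version that
# supports `output_file_mode`.
data "archive_file" "snapshot_replica" {
  type             = "zip"
  source_file      = "${path.module}/snapshot-replica-lambda/index.js"
  output_path      = "${path.module}/snapshot-replica-lambda/snapshot-replica-lambda.zip"
  output_file_mode = "0644"
}
```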

Please add an "ignore_files" ("excludes") attribute

Feature request: could we add an "ignore_files" parameter so that we can avoid zipping everything inside a folder? I know the simple workaround is to zip from a different folder, but that just adds another folder unnecessarily and feels a little dirty.
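
For illustration, a sketch of how such an attribute might look; this assumes a provider version that supports the excludes argument (valid only together with source_dir, with paths relative to it):

```hcl
# Sketch: exclude files when archiving a directory. Assumes a provider
# version supporting `excludes`; paths are relative to source_dir.
data "archive_file" "lambda" {
  type        = "zip"
  source_dir  = "${path.module}/function"
  output_path = "${path.module}/function.zip"

  excludes = [
    "node_modules",
    ".terraform",
  ]
}
```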

Terraform Archive Error

Hi

I am getting the error below when trying to archive and upload to Google Cloud Storage.

Failed to read source file "data.archive_file.main.output_path". Cannot compute md5 hash for it.: timestamp=2021-07-07T15:28:12.440Z
2021-07-07T15:28:12.442Z [WARN] Provider "registry.terraform.io/hashicorp/google" produced an invalid plan for google_storage_bucket_object.archive, but we are tolerating it because it is using the legacy plugin SDK.
The following problems may be the cause of any confusing errors from downstream operations:
- .detect_md5hash: planned value cty.StringVal("different hash") for a non-computed attribute

data "archive_file" "main" {
  type        = "zip"
  source_dir  = pathexpand("../function/ce-analytics-npd-trainer")
  output_path = pathexpand("/tmp/trainer-0.1.zip")
}

resource "google_storage_bucket_object" "archive" {
  name                = "${data.archive_file.main.output_md5}-${basename(data.archive_file.main.output_path)}"
  bucket              = google_storage_bucket.cloud_functions.name
  source              = "data.archive_file.main.output_path"
  content_disposition = "attachment"
  content_encoding    = "gzip"
  content_type        = "application/zip"
}
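
The likely cause here is not the archive provider: source is set to the quoted literal string "data.archive_file.main.output_path", so the google provider tries to read a file literally named that, which matches the "Failed to read source file" error. A sketch of the fix, referencing the attribute instead of quoting it:

```hcl
# Sketch of the likely fix: use an attribute reference, not a string
# literal, so `source` resolves to the actual zip path.
resource "google_storage_bucket_object" "archive" {
  name   = "${data.archive_file.main.output_md5}-${basename(data.archive_file.main.output_path)}"
  bucket = google_storage_bucket.cloud_functions.name
  source = data.archive_file.main.output_path
}
```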
