Comments (19)

aaronsteers commented on July 4, 2024

@antondemidov - RE:

btw, I've found a workaround: just use a null_resource to zip the file. Not very Terraform-like, but it's working :)

I was actually heading in this direction myself when you posted this. In terms of the right 'Terraform way', I think the approach has merit.

New platform-agnostic solution for using native zip on any OS.

I followed the same general approach as @antondemidov - except that I needed it to work on Windows as well as Linux-based systems. (Caveat: Even though I've now tested this on both Ubuntu and Windows 10, I don't know that I have the icacls statements exactly correct yet.)

  1. Calculate local hashes of source files, a hash of the hashes, and an abbrev of the hash of hashes:
locals {
  source_files_hash = join(",", [
    for filepath in local.source_files :
    filebase64sha256("${var.local_metadata_path}/${filepath}")
  ])
  unique_hash   = md5(local.source_files_hash)
  unique_suffix = substr(local.unique_hash, 0, 4)
}
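(Note: local.source_files isn't defined in the snippets here. A minimal sketch of how it might be declared - the "**" glob, matching every file under the path, is an assumption about what the module tracks:)
locals {
  # Hypothetical: enumerate all files (relative paths) under the source root.
  source_files = fileset(var.local_metadata_path, "**")
}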
  2. Declare some needed paths and detect the OS:
locals {
  # On Windows, pathexpand("~") returns something like C:\Users\me, which does not start with "/".
  is_windows              = substr(pathexpand("~"), 0, 1) == "/" ? false : true
  temp_artifacts_root     = "${path.root}/.terraform/tmp"
  temp_build_folder       = "${local.temp_artifacts_root}/${var.name_prefix}lambda-zip-${local.unique_suffix}"
  local_requirements_file = fileexists("${var.lambda_source_folder}/requirements.txt") ? "${var.lambda_source_folder}/requirements.txt" : null
}
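(Also not shown: the module's input variables. A minimal sketch of declarations consistent with the snippets - descriptions are illustrative:)
variable "name_prefix" {
  type        = string
  description = "Prefix applied to names of created resources."
}

variable "lambda_source_folder" {
  type        = string
  description = "Folder containing the Lambda source files."
}

variable "local_metadata_path" {
  type        = string
  description = "Root path used when hashing the source files."
}

variable "dest_bucket" {
  type        = string
  description = "S3 bucket that receives the dependency zip."
}

variable "dest_folder" {
  type        = string
  description = "Key prefix within the destination bucket."
}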
  3. All in one null_resource step:
    1. Copy source data to temp folder (using a new tmp subfolder under the local .terraform directory).
    2. Optionally pip-install dependencies to the folder if requirements.txt exists.
    3. Make sure execute permissions are set on the files before zipping.
    4. Compress the contents.
resource "null_resource" "create_dependency_zip" {
  triggers = {
    version_increment = 1.1 # can be bumped to manually force a refresh
    source_files_hash = local.source_files_hash
  }

  provisioner "local-exec" {
    interpreter = local.is_windows ? ["Powershell", "-Command"] : ["/bin/bash", "-c"]
    command = join(local.is_windows ? "; " : " && ", flatten(
      # local.local_requirements_file == null ? [] :
      local.is_windows ?
      [
        [
          "echo \"Creating target directory '${abspath(local.temp_build_folder)}'...\"",
          "New-Item -ItemType Directory -Force -Path ${abspath(local.temp_build_folder)}",
          "echo \"Copying directory contents from '${abspath(var.lambda_source_folder)}/' to '${abspath(local.temp_build_folder)}/'...\"",
          "Copy-Item -Force -Recurse -Path \"${abspath(var.lambda_source_folder)}/*\" -Destination \"${abspath(local.temp_build_folder)}/\"",
          "echo \"Granting execute permissions on temp folder '${local.temp_build_folder}'\"",
          "icacls ${local.temp_build_folder} /grant Everyone:F",
          "icacls ${local.temp_build_folder}/* /grant Everyone:F",
        ],
        local.local_requirements_file == null ? [] : !fileexists(local.local_requirements_file) ? [] :
        [
          "echo \"Running pip install from requirements '${abspath(local.local_requirements_file)}'...\"",
          "${local.pip_path} install --upgrade -r ${abspath(local.local_requirements_file)} --target ${local.temp_build_folder}",
        ],
        [
          "sleep 3",
          "echo \"Changing working directory to temp folder '${abspath(local.temp_build_folder)}'...\"",
          "cd ${abspath(local.temp_build_folder)}",
          "echo \"Zipping contents of ${abspath(local.temp_build_folder)} to '${abspath(local.local_dependencies_zip_path)}'...\"",
          "ls",
          "tar -acf ${abspath(local.local_dependencies_zip_path)} *",
        ]
      ] :
      [
        [
          "echo \"Creating target directory '${abspath(local.temp_build_folder)}'...\"",
          "set -e",
          "mkdir -p ${local.temp_build_folder}",
          "echo \"Copying directory contents from '${abspath(var.lambda_source_folder)}/' to '${abspath(local.temp_build_folder)}/'...\"",
          "cp ${var.lambda_source_folder}/* ${local.temp_build_folder}/",
        ],
        local.local_requirements_file == null ? [] : !fileexists(local.local_requirements_file) ? [] :
        [
          "echo \"Running pip install from requirements '${abspath(local.local_requirements_file)}'...\"",
          "${local.pip_path} install --upgrade -r ${local.temp_build_folder}/requirements.txt --target ${local.temp_build_folder}",
        ],
        [
          "sleep 3",
          "echo \"Granting execute permissions on temp folder '${local.temp_build_folder}'\"",
          "chmod -R 755 ${local.temp_build_folder}",
          "cd ${abspath(local.temp_build_folder)}",
          "echo \"Zipping contents of '${abspath(local.temp_build_folder)}' to '${abspath(local.local_dependencies_zip_path)}'...\"",
          "zip -r ${abspath(local.local_dependencies_zip_path)} *",
        ]
      ]
    ))
  }
}
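(The block above also references local.pip_path and local.local_dependencies_zip_path, which aren't shown. A hedged sketch of how they might be declared - the pip executable name and the zip location are assumptions:)
locals {
  # Hypothetical: the pip executable name differs per platform/installation.
  pip_path = local.is_windows ? "pip" : "pip3"
  # Hypothetical: place the zip under the temp artifacts root, reusing the hash suffix.
  local_dependencies_zip_path = "${local.temp_artifacts_root}/${var.name_prefix}lambda-dependencies-${local.unique_suffix}.zip"
}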
  4. Force a new S3 upload by embedding the unique_suffix (derived from the hash of all source files, excluding pip artifacts) as part of the S3 key name.
    • Note:
      • The normal way this resource tries to detect changes is the etag property - which we cannot use for our purposes because the etag (that is, the zip's file hash) cannot be known at plan time.
      • The S3 key change also triggers an update to the Lambda function, which doesn't seem to happen if the S3 file is updated in place.
resource "aws_s3_bucket_object" "s3_source_uploads" {
  bucket = var.dest_bucket
  key = "${var.dest_folder}/lambda-dependencies-${local.unique_suffix}.zip"
    ]
  )
  source   = "${local.local_dependencies_zip_path}"
}
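(Not shown above: the Lambda function that consumes the upload. A minimal sketch, with assumed names for the role, handler, and runtime, of how the renamed key propagates:)
resource "aws_lambda_function" "this" {
  function_name = "${var.name_prefix}lambda"
  role          = var.lambda_role_arn   # hypothetical variable
  handler       = "main.lambda_handler" # hypothetical handler
  runtime       = "python3.8"
  s3_bucket     = var.dest_bucket
  # Because the key embeds local.unique_suffix, any source change renames the
  # key, which in turn forces an update of the function code.
  s3_key        = aws_s3_bucket_object.s3_source_uploads.key
}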

Quick note regarding native zip capabilities on Windows and Linux-based systems.

  • Apparently Windows has been shipping tar.exe natively in the OS for a while, and it's also able to create zips (as demonstrated in the snippet above).
  • Ironically, in this case, it's Linux that needs an additional prerequisite in this scenario: installing zip (e.g. apt-get install zip, which runs in <4 seconds in my CI/CD environment).
  • Windows also has the PowerShell Compress-Archive cmdlet - but it cannot be used in our scenario because it mangles the file attributes (i.e. file permissions) needed by Linux-based systems. Without proper execute permissions on the zipped-up Python files, they cannot be run in AWS Lambda. (Lambda will throw an error.)

Problems with using data.archive_file in combination with null_resource and/or null_data_source:

In retrospect, I think the archive_file data source needs additional enhancement before it can support these use cases natively. For instance, if the provider accepted the equivalent of a pre-run local provisioner, we might be close to accomplishing the use case without extensive hackery. But in the absence of native support for pre-zip steps, I think the above solution is the best option: it has a short dependency tree, and all side effects live in the solitary null_resource - paired with a rename at the S3-key level to force a simultaneous update of both if either is modified.

In Summary:

I don't know if others have gotten the older (pre-tf-0.13) solution working again, but I wanted to post here that the above method is working for me. After a successful apply, it also produces a clean plan in CI/CD environments where no local files are cached between executions.

antondemidov commented on July 4, 2024

Thanks, @aaronsteers. I'll take a look

btw, I've found a workaround: just use a null_resource to zip the file. Not very Terraform-like, but it's working :) Posting here for reference:

resource "null_resource" "libs_layer" {
  triggers = {
    policy_sha1 = sha1(file("${path.module}/../../../../../authentication-service/authservice/lambda/verifyoauth_dev/requirements.txt"))
  }

  provisioner "local-exec" {
    command = <<CMD
    virtualenv -p python3.7 files/venv-libs/python;
    files/venv-libs/python/bin/pip install -r ${path.module}/../../../../../authentication-service/authservice/lambda/verifyoauth_dev/requirements.txt;
    cd files/venv-libs;
    zip -r ../verify_oauth_lambda_layer.zip python/lib/python3.7/site-packages;
CMD

    interpreter = ["bash", "-c"]
  }
}
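(For context, a hedged sketch of how the resulting zip might be consumed as a layer - the layer name is illustrative:)
resource "aws_lambda_layer_version" "libs" {
  layer_name          = "verify-oauth-libs" # hypothetical name
  filename            = "files/verify_oauth_lambda_layer.zip"
  compatible_runtimes = ["python3.7"]
  # Ensure the zip has been built before the layer is created or updated.
  depends_on          = [null_resource.libs_layer]
}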

dparker2 commented on July 4, 2024

This seems to be happening when Terraform is refreshing the state. So it works (for me at least, with v0.13.5) on the initial apply, but if the source_dir is deleted and terraform refresh is run, it complains with this error.

aaronsteers commented on July 4, 2024

Update: I found the thread on Discuss - it looks like the ETA for the 0.14.0 GA is next week.

https://discuss.hashicorp.com/t/terraform-v0-14-0-rc1-released/15752/14

I'll start testing with RC1, with the expectation/hope that we'll make a quick transition once it's released.

aaronsteers commented on July 4, 2024

@antondemidov - I can't speak to any remnants of the 0.13.x behavior, because my project is basically working again...

However, I do know there was a workaround described in another ticket (long ago, I'm afraid), which recommended piping dependencies through a null_data_source. Based on my own research, that workaround may also be required. I'm still using it in my own projects - and if I remove it, I do lose the wait on dependencies, as you are describing.


UPDATE: this Stack Overflow answer seems to describe the workaround concisely: https://stackoverflow.com/a/58585612/4298208
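For reference, a minimal sketch of the pattern from that answer (names are illustrative): the null_resource's id is routed through a null_data_source so that the archive_file data source can't be read until the provisioner has run:
resource "null_resource" "lambda_exporter" {
  # Runs the packaging/build step before archiving.
  provisioner "local-exec" {
    command = "${path.module}/export-lambda.sh" # hypothetical build script
  }
}

data "null_data_source" "wait_for_lambda_exporter" {
  inputs = {
    # Referencing the id makes this data source wait on the null_resource.
    lambda_exporter_id = null_resource.lambda_exporter.id
    source_dir         = "${path.module}/lambda-files/"
  }
}

data "archive_file" "lambda_exporter" {
  type        = "zip"
  # Reading source_dir through the data source carries the dependency along.
  source_dir  = data.null_data_source.wait_for_lambda_exporter.outputs["source_dir"]
  output_path = "${path.module}/lambda-files.zip"
}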

And the related issues pointed to in the Stack Overflow answer are:

nweatomv commented on July 4, 2024

@jbardin So, other than the workaround mentioned by @antondemidov, how is one supposed to migrate from 0.12 to 0.14? The 0.14 migration docs specifically mention that you need to do at least one apply with 0.13 before going to 0.14, so this seems like a blocker for anyone migrating from 0.12 who cannot migrate to 0.13. I tried the null_data_source workaround mentioned by @aaronsteers and it did not help.

rluckey-tableau commented on July 4, 2024

This does not seem likely to be an issue with the hashicorp/archive v1.3.0 provider, since the problem occurs with that version of the provider on Terraform v0.13.1 but not on Terraform v0.12.29.

Mrg77 commented on July 4, 2024

I have the same problem in my configuration when I try to create a Lambda:

Error: error archiving directory: could not archive missing directory: ./.terraform/temp/GTUKucTyx-0/lambda-response

on AWS-App_FE.tf line 115, in data "archive_file" "lambda_edge_function_files":
115: data "archive_file" "lambda_edge_function_files" {

This happens with Terraform 0.13.2.
Everything works with Terraform 0.12.29.

Robin-Walter commented on July 4, 2024

Same for me. Everything works with Terraform 0.12.29 but not with 0.13.x.

rluckey-tableau commented on July 4, 2024

Any traction on this at all? Still blocking upgrade to v0.13.

nicolas-lopez commented on July 4, 2024

Hello, any news?

aaronsteers commented on July 4, 2024

Anyone find a workaround or solution to this? I'm blocked on the same.

aaronsteers commented on July 4, 2024

@dparker2 - confirmed, I'm seeing the same.

My only workaround at this point is to run terraform state rm ____ to forcibly delete the upstream object, which otherwise does not get properly waited on.

aaronsteers commented on July 4, 2024

@apparentlymart - It looks like this might be the bug described in the 0.14.0 release notes.

Can you confirm if this looks like it may be related to that issue?

...
core: Errors with data sources reading old data during refresh, failing to refresh, and not appearing to wait on resource dependencies are fixed by updates to the data source lifecycle and the merging of refresh and plan (hashicorp/terraform#26270)
...

... as listed in the 0.14.0-rc1 release notes, regarding the fix described in hashicorp/terraform#26270

Also:

jbardin commented on July 4, 2024

Thanks @aaronsteers, this type of issue will be resolved with 0.14.
There isn't anything to fix in the archive_file data source; if the external directory is removed during refresh, the data source is expected to fail. Eliminating the separate refresh phase avoids this possibility entirely.

aaronsteers commented on July 4, 2024

Thanks, @jbardin! By chance, is there an expected release date for 0.14, or a ticket I can follow that covers that timeline?

For my case, we've already upgraded to 0.13.4, including our state files - so rolling back to 0.12.x would be a very difficult option for us.

antondemidov commented on July 4, 2024

Still have the problem with terraform 0.14

joeltio commented on July 4, 2024

Still have the problem with terraform 0.14

I don't seem to have the problem in 0.14. I'm using v0.14.9 with this smaller test:

# main.tf
resource "null_resource" "zip" {
  triggers = {
    # Not the best to use here but works well enough
    create_file = fileexists("${path.module}/folder_to_zip/new_file")
  }
  provisioner "local-exec" {
    working_dir = "folder_to_zip"
    command = "touch new_file"
  }
}

data "archive_file" "zip" {
  depends_on = [null_resource.zip]
  type        = "zip"
  source_dir  = "folder_to_zip"
  output_path = "output.zip"
}

Commands I ran:

mkdir folder_to_zip
terraform init
terraform apply

The output.zip contained the new_file. Aborting the terraform apply also produced no zip file, and the command was not run.

Unless maybe I misunderstood the problem here.

johnbarney commented on July 4, 2024

(Quoting @joeltio's comment above, including the test, in full.)

I think this simply highlights a race condition. Touching a file is incredibly fast; running pip install isn't. Not to mention your directory already exists before you start, whereas some of us are attempting to move the source files to a new folder, build dependencies, and then zip. The local-exec and archive_file steps run in parallel on different threads, and thus you did not hit the problem. You can test this by replacing command = "touch new_file" with command = "sleep 20; touch new_file", as in the sketch below.
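For example, a minimal variation of @joeltio's test with that one change applied (assuming a POSIX shell):
resource "null_resource" "zip" {
  triggers = {
    create_file = fileexists("${path.module}/folder_to_zip/new_file")
  }
  provisioner "local-exec" {
    working_dir = "folder_to_zip"
    # The delay stands in for a slow build step such as pip install.
    command     = "sleep 20; touch new_file"
  }
}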

Sadly, it seems the only way to solve this issue is to do the zipping inside local-exec, which makes cross-platform support quite painful. Overall, it seems to me to be a problem with how archive_file is implemented. A resource is an object you want Terraform to create and manage; a data source is an existing object that you want Terraform to refer to. In my mind, it's obvious that creating an archive file should be a resource in Terraform's terms. I'm sure there are good deeper-level reasons why the team decided to change this provider in the way they did. These folks are smart. But it seems to violate the design principles of Terraform itself.
