Terraform Concourse Resource

A Concourse resource that allows jobs to modify IaaS resources via Terraform. Useful for creating a pool of reproducible environments. No more snowflakes!

See DEVELOPMENT if you're interested in submitting a PR 👍

Source Configuration

Important!: The source.storage field has been replaced by source.backend_type and source.backend_config to leverage the built-in Terraform backends. If you currently use source.storage in your pipeline, follow the instructions in the Backend Migration section to ensure your state files are not lost.

  • backend_type: Required. The name of the Terraform backend the resource will use to store statefiles, e.g. s3 or consul.

    Note: The 'local' backend type is not supported; Concourse requires that state be persisted outside the container.

  • backend_config: Required. A map of key-value configuration options specific to your chosen backend, e.g. S3 options.

  • env_name: Optional. Name of the environment to manage, e.g. staging. A Terraform workspace will be created with this name. See Single vs Pool section below for more options.

  • delete_on_failure: Optional. Default false. If true, the resource will run terraform destroy if terraform apply returns an error.

  • vars: Optional. A collection of Terraform input variables. These are typically used to specify credentials or override default module values. See Terraform Input Variables for more details.

  • env: Optional. Similar to vars, this collection of key-value pairs can be used to pass environment variables to Terraform, e.g. "AWS_ACCESS_KEY_ID".

  • private_key: Optional. An SSH key used to fetch modules, e.g. private GitHub repos.

Source Example

resource_types:
- name: terraform
  type: docker-image
  source:
    repository: ljfranklin/terraform-resource
    tag: latest

resources:
  - name: terraform
    type: terraform
    source:
      env_name: staging
      backend_type: s3
      backend_config:
        bucket: mybucket
        key: mydir/terraform.tfstate
        region: us-east-1
        access_key: {{storage_access_key}}
        secret_key: {{storage_secret_key}}
      vars:
        tag_name: concourse
      env:
        AWS_ACCESS_KEY_ID: {{environment_access_key}}
        AWS_SECRET_ACCESS_KEY: {{environment_secret_key}}

The above example uses AWS S3 to store Terraform state files. All backend_config options documented here are forwarded straight to Terraform.

Terraform also supports many other state file backends, for example Google Cloud Storage (GCS):

resources:
  - name: terraform
    type: terraform
    source:
      backend_type: gcs
      backend_config:
        bucket: mybucket
        prefix: mydir
        credentials: {{gcp_credentials_json}}
      ...

Image Variants

Note: all images support AMD64 and ARM64 architectures, although only AMD64 is fully tested prior to release.

  • Latest stable release of resource: ljfranklin/terraform-resource:latest.
  • Specific versions of Terraform, e.g. ljfranklin/terraform-resource:0.7.7.
  • RC builds from Terraform pre-releases: ljfranklin/terraform-resource:rc.
  • Nightly builds from Terraform master branch: ljfranklin/terraform-resource:nightly.

See Dockerhub for a list of all available tags. If you'd like to build your own image from a specific Terraform branch, configure a pipeline with build-image-pipeline.yml.
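
For example, a sketch of pinning the resource to a specific tag rather than latest (the tag below is illustrative; check Dockerhub for the current list):

resource_types:
- name: terraform
  type: docker-image
  source:
    repository: ljfranklin/terraform-resource
    tag: 0.7.7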

Behavior

This resource should usually be used with the put action rather than a get. This ensures the output always reflects the current state of the IaaS and allows management of multiple environments as shown below. A get step outputs the same metadata file format shown below for put.

Get Parameters

Note: In Concourse, a put is always followed by an implicit get. To pass get params via put, use put.get_params.

  • output_statefile: Optional. Default false. If true, the resource writes the Terraform statefile to a file named terraform.tfstate. Warning: Ensure any changes to this statefile are persisted back to the resource's storage bucket. Another warning: Some statefiles contain unencrypted secrets; be careful not to expose these in your build logs.

  • output_planfile: Optional. Default false. If true, the resource writes a file named plan.json containing the JSON representation of the Terraform plan file.

  • output_module: Optional. Write only the outputs from the given module name to the metadata file.
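
A sketch of requesting the statefile through a put via put.get_params, as described in the note above (resource and job names are illustrative):

jobs:
- name: apply-and-save-state
  plan:
  - get: project-git-repo
  - put: terraform
    params:
      env_name: staging
      terraform_source: project-git-repo/terraform
    get_params:
      # the implicit get will also write a terraform.tfstate file
      output_statefile: true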

Put Parameters

  • terraform_source: Required. The relative path of the directory containing your Terraform configuration files. For example: if your .tf files are stored in a git repo called prod-config under a directory terraform-configs, you could do a get: prod-config in your pipeline with terraform_source: prod-config/terraform-configs/ as the source.

  • env_name: Optional, see Note. The name of the environment to create or modify. A Terraform workspace will be created with this name. Multiple environments can be managed with a single resource.

  • generate_random_name: Optional, see Note. Default false. Generates a random env_name (e.g. "coffee-bee"). See Single vs Pool section below.

  • env_name_file: Optional, see Note. Reads the env_name from a specified file path. Useful for destroying environments from a lock file.

    Note: You must specify one of the following options: source.env_name, put.params.env_name, put.params.generate_random_name, or put.params.env_name_file.

  • delete_on_failure: Optional. Default false. See description under source.delete_on_failure.

  • vars: Optional. A collection of Terraform input variables. See description under source.vars.

  • var_files: Optional. A list of files containing Terraform input variables. These files can be in YAML or JSON format, or HCL if the filename ends in .tfvars.

    Terraform variables will be merged from the following locations in increasing order of precedence: source.vars, put.params.vars, and put.params.var_files. Finally, env_name is automatically passed as an input var. A combined sketch using var_files, import_files, and module_override_files appears after this list.

  • env: Optional. A key-value collection of environment variables to pass to Terraform. See description under source.env.

  • private_key: Optional. An SSH key used to fetch modules, e.g. private GitHub repos.

  • plan_only: Optional. Default false. If true, Terraform creates a plan file and stores it in the configured backend. Useful for manually reviewing a plan prior to applying. See Plan and Apply Example. Warning: Plan files contain unencrypted credentials like AWS Secret Keys; only store these files in a private bucket.

  • plan_run: Optional. Default false. If true, Terraform applies the plan file stored in the configured backend, then deletes it.

  • import_files: Optional. A list of files containing existing resources to import into the state file. The files can be in YAML or JSON format, containing key-value pairs like aws_instance.bar: i-abcd1234.

  • override_files: Optional. A list of files to copy into the terraform_source directory. Override files must follow conventions outlined here such as file names ending in _override.tf.

  • module_override_files: Optional. A list of maps to copy override files to specific destination directories. Override files must follow conventions outlined here such as file names ending in _override.tf. The source file is specified with src and the destination directory with dst.

  • action: Optional. When set to destroy, the resource will run terraform destroy against the given statefile.

    Note: You must also set put.get_params.action to destroy to ensure the task succeeds. This is a temporary workaround until Concourse adds support for delete as a first-class operation. See this issue for more details.

  • plugin_dir: Optional. The path (relative to your terraform_source) of the directory containing plugin binaries. This overrides the default plugin directory and Terraform will not automatically fetch built-in plugins if this option is used. To preserve the automatic fetching of plugins, omit plugin_dir and place third-party plugins in ${terraform_source}/terraform.d/plugins. See https://www.terraform.io/docs/configuration/providers.html#third-party-plugins for more information.

  • parallelism: Optional. Default 10. Limits the number of concurrent operations Terraform will perform. See the Terraform docs for more information.

  • lock_timeout: Optional. Default 0s. The duration to retry acquiring a state lock. See the Terraform docs for more information.
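
A sketch combining var_files, import_files, and module_override_files in a single put; all file paths and names below are hypothetical:

jobs:
- name: apply-with-extras
  plan:
  - get: project-git-repo
  - put: terraform
    params:
      env_name: staging
      terraform_source: project-git-repo/terraform
      vars:
        tag_name: concourse
      var_files:
        # YAML/JSON input vars, merged over source.vars and put.params.vars
        - project-git-repo/vars/staging.yml
      import_files:
        # key-value pairs of existing resources, e.g. aws_instance.bar: i-abcd1234
        - project-git-repo/imports/existing.yml
      module_override_files:
        # copies the src override file into the dst directory
        - src: project-git-repo/overrides/backend_override.tf
          dst: project-git-repo/terraform/modules/network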

Put Example

Every put action creates name and metadata files as output, containing the env_name and the Terraform outputs in JSON format.

jobs:
- name: update-infrastructure
  plan:
  - get: project-git-repo
  - put: terraform
    params:
      env_name: e2e
      terraform_source: project-git-repo/terraform
  - task: show-outputs
    config:
      platform: linux
      inputs:
        - name: terraform
      run:
        path: /bin/sh
        args:
          - -c
          - |
              echo "name: $(cat terraform/name)"
              echo "metadata: $(cat terraform/metadata)"

The preceding job would print output similar to the following:

name: e2e
metadata: { "vpc_id": "vpc-123456", "vpc_tag_name": "concourse" }

Plan and Apply Example

jobs:
- name: terraform-plan
  plan:
  - get: project-git-repo
  - put: terraform
    params:
      env_name: staging
      terraform_source: project-git-repo/terraform
      plan_only: true
      vars:
        subnet_cidr: 10.0.1.0/24

- name: terraform-apply
  plan:
  - get: project-git-repo
    trigger: false
    passed: [terraform-plan]
  - get: terraform
    trigger: false
    passed: [terraform-plan]
  - put: terraform
    params:
      env_name: staging
      terraform_source: project-git-repo/terraform
      plan_run: true

Managing a single environment vs a pool of environments

This resource can be used to manage a single environment or a pool of many environments.

Single Environment

To use this resource to manage a single environment, set source.env_name or put.params.env_name to a fixed name like staging or production as shown in the previous put example. Each put will update the IaaS resources and state file for that environment.

Pool of Environments

To manage a pool of many environments, you can use this resource in combination with the pool-resource. This allows you to create a pool of identical environments that can be claimed and released by CI jobs and humans. Setting put.params.generate_random_name: true will create a random, unique env_name like "coffee-bee" for each environment, and the pool-resource will persist the name and metadata for these environments in a private git repo.

jobs:
- name: create-env-and-lock
  plan:
    # apply the terraform template with a random env_name
    - get: project-git-repo
    - put: terraform
      params:
        terraform_source: project-git-repo/terraform
        generate_random_name: true
        delete_on_failure: true
        vars:
          subnet_cidr: 10.0.1.0/24
    # create a new pool-resource lock containing the terraform output
    - put: locks
      params:
        add: terraform/

- name: claim-env-and-test
  plan:
    # claim a random env lock
    - put: locks
      params:
        acquire: true
    # the locks dir will contain `name` and `metadata` files described above
    - task: run-tests-against-env
      file: test.yml
      input_mapping:
        env: locks/

- name: destroy-env-and-lock
  plan:
    - get: project-git-repo
    # acquire a lock
    - put: locks
      params:
        acquire: true
    # destroy the IaaS resources
    - put: terraform
      params:
        terraform_source: project-git-repo/terraform
        env_name_file: locks/name
        action: destroy
      get_params:
        action: destroy
    # destroy the lock
    - put: locks
      params:
        remove: locks/

Backend Migration

Previous versions of this resource required statefiles to be stored in an S3-compatible blobstore using the source.storage field. The latest version of this resource instead uses the built-in Terraform Backends to support many other statefile storage options in addition to S3. If you have an existing pipeline that uses source.storage, your statefiles will need to be migrated into the new backend directory structure using the following steps:

  1. Rename source.storage to source.migrated_from_storage in your pipeline config. All fields within source.storage should remain unchanged; only the top-level key should be renamed.
  2. Add source.backend_type and source.backend_config fields as described under Source Configuration.
  3. Update your pipeline: fly set-pipeline.
  4. The next time your pipeline performs a put to the Terraform resource:
  • The resource will copy the statefile for the modified environment into the new directory structure.
  • The resource will rename the old statefile in S3 to $ENV_NAME.migrated.
  5. Once all statefiles have been migrated and everything is working as expected, you may:
  • Remove the old .migrated statefiles.
  • Remove source.migrated_from_storage from your pipeline config.

Breaking Change: The backend mode drops support for feeding Terraform outputs back in as input vars on subsequent puts. This "feature" caused surprising errors if inputs and outputs had the same name but different types, and the implementation would have been significantly more complicated with the new migrated_from_storage flow.

Legacy storage configuration

  • migrated_from_storage.bucket: Required. The S3 bucket used to store the state files.

  • migrated_from_storage.bucket_path: Required. The S3 path used to store state files, e.g. mydir/.

  • migrated_from_storage.access_key_id: Required. The AWS access key used to access the bucket.

  • migrated_from_storage.secret_access_key: Required. The AWS secret key used to access the bucket.

  • migrated_from_storage.region_name: Optional. The AWS region where the bucket is located.

  • migrated_from_storage.server_side_encryption: Optional. An encryption algorithm to use when storing objects in S3, e.g. "AES256".

  • migrated_from_storage.sse_kms_key_id: Optional. The ID of the AWS KMS master encryption key used for the object.

  • migrated_from_storage.endpoint: Optional. The endpoint for an s3-compatible blobstore (e.g. Ceph).

    Note: By default, the resource will use S3 signing version v2 if an endpoint is specified as many non-S3 blobstores do not support v4. Opt into v4 signing by setting migrated_from_storage.use_signing_v4: true.

Migration Example

resources:
  - name: terraform
    type: terraform
    source:
      backend_type: s3
      backend_config:
        bucket: mybucket
        key: mydir/terraform.tfstate
        region: us-east-1
        access_key: {{storage_access_key}}
        secret_key: {{storage_secret_key}}
      migrated_from_storage:
        bucket: mybucket
        bucket_path: mydir/
        region: us-east-1
        access_key_id: {{storage_access_key}}
        secret_access_key: {{storage_secret_key}}
      vars:
        tag_name: concourse
      env:
        AWS_ACCESS_KEY_ID: {{environment_access_key}}
        AWS_SECRET_ACCESS_KEY: {{environment_secret_key}}

terraform-resource's Issues

Providers linked against glibc won't run

We're trying to use terraform-resource with the provider http://github.com/mevansam/terraform-provider-cf. This provider is currently dynamically linked against glibc (due to Golang's inability to cross-compile programs using certain parts of the standard library, e.g. golang/go#14625). As such, it won't run inside the Alpine Linux-based container for this resource.

We'd like to propose a tiny change, viz. #49, which adds the Alpine libc compatibility shim https://pkgs.alpinelinux.org/package/edge/main/x86/libc6-compat. We've confirmed it fixes at least our use case with the image at https://hub.docker.com/r/adamatkalo/terraform-resource-libc/, and it adds only 8 KB to the image size.

Requiring (potentially many) future providers to be recompiled as statically linked binaries could prevent a lot of them from running in this resource.

CC @jpluscplusm

var_files cannot include maps

If I pass a var_file:

my_map:
  a: b
  c: d

the resulting args will be: -var 'my_map={}'

This is because the formatVar function in terraform/client.go, in the reflect.Map case, casts value as value.(map[string]interface{}), which always yields an empty map. If this is switched to a map from interface to interface, the resulting args will be: -var 'my_map={a="b",c="d"}'. However, making this change breaks maps passed as vars in exactly the same way. In other words, value.(map[string]interface{}) works for vars, and value.(map[interface{}]interface{}) works for var_files.

Because of this, I'm not quite sure what the correct fix is. If you have some guidance here, I can submit a PR. Thanks!

Add support for custom CA certificates

Hi @ljfranklin

We are using the terraform-resource and we are very happy with it. We would like to use the terraform-resource for infrastructures which are set up with custom certificates. Terraform supports custom CA certificates via the cacert_file argument. Currently, it's not possible to use this Terraform feature with the terraform-resource: it's possible to pass the cacert_file argument to Terraform through the terraform-resource, but there is no way to provide the custom certificate file itself. There is a workaround (specifying the insecure argument), but using the cacert_file option is more secure. It would be great to have this supported by the terraform-resource.

Support `terraform import` command as a means of enforcing security

Use Case

Terraform makes it easy to declare a list of firewall rules that should be created in a given environment. However, if someone manually creates a firewall rule to do some debugging, they may forget to delete the rule when finished. As the Terraform state file does not know about this rule, it will not be destroyed on the next apply. Adding support for terraform import could allow for generating a list of all firewall rules in a task, passing that list to a put via an input_file field; Terraform would then destroy all "unknown" rules automatically. This job could be run on a timer to enforce that only Terraform-defined rules exist.
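
The import_files put param described under Put Parameters above supports this pattern; a sketch under the assumption that a custom task (hypothetical here) generates the list of existing rules:

jobs:
- name: enforce-known-rules
  plan:
  - get: project-git-repo
  # hypothetical task that lists all existing firewall rules as
  # key-value pairs, e.g. aws_security_group_rule.debug: sgr-abcd1234
  - task: list-existing-rules
    file: project-git-repo/tasks/list-rules.yml
  - put: terraform
    params:
      env_name: staging
      terraform_source: project-git-repo/terraform
      import_files:
        - existing-rules/imports.yml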

Terraform 0.10.0 has an incompatible change for `terraform init`

As described here, terraform 0.10.0 introduces with this commit a backwards-incompatible change to terraform init. This seems to break the Concourse resource, because we see errors like the following in our pipelines:

Terraform initialized in an empty directory!

The directory has no Terraform configuration files. You may begin working
with Terraform immediately by creating Terraform configuration files.
No configuration files found!

As I understood from the code, it happens here.

Intermittent "invalid character '/' after top-level value" error

We hit "invalid character '/' after top-level value" sometimes. Its a flakiness and happens approximately once in 20 runs. A retrigger without any changes makes the problem go away. Have you encountered this or do you have any suggestion for us how we can debug this further ?

A successful `put` should "untaint" a state file

A failed put will rename that state file to "your-env.tfstate.tainted" to avoid a subsequent get grabbing an environment that's in a bad state. Unfortunately, once you fix the terraform template, the next put throws a confusing error:

Failed to check for existing state file from 'your-env.tfstate': HeadObject request failed.
Error: SerializationError: failed to decode S3 XML error response

To get past this, the user has to manually rename the state file to remove the .tainted suffix. The resource should allow a successful put to "untaint" the state file automatically.

Get so I can read state

I want to be able to specify a get to read the metadata from my terraform template output as dependable data for further scripts, for example, configuring a parent DNS server. Right now I must use the AWS CLI inside a task to obtain accurate data from the tfstate file, as the s3-resource won't work: it assumes its current version is accurate when it may not be.

Happy to walk you through this with a pairing session.

TF_LOG: TRACE bug

Hi there.

We've been using your Terraform concourse resource - great tool, thanks! - and have stumbled upon a bug. In some circumstances, put-ing a Terraform resource will lead to the following console output when TF_LOG is set to TRACE in the resource definition:

16:47:13
Error: invalid character '/' after top-level value
16:47:13
Output: 2017/11/28 16:47:13 [INFO] Terraform version: 0.11.0  ec9d4f1d0f90e8ec5148f94b6d634eb542a4f0ce+CHANGES
16:47:13
2017/11/28 16:47:13 [INFO] Go runtime version: go1.9
16:47:13
2017/11/28 16:47:13 [INFO] CLI args: []string{"/usr/local/bin/terraform", "output", "-json", "-state=/tmp/terraform-resource-out665328595/terraform.tfstate"}
16:47:13
2017/11/28 16:47:13 [DEBUG] Attempting to open CLI config file: ~/.terraformrc
16:47:13
2017/11/28 16:47:13 [DEBUG] File doesn't exist, but doesn't need to. Ignoring.
16:47:13
2017/11/28 16:47:13 [INFO] CLI command args: []string{"output", "-json", "-state=/tmp/terraform-resource-out665328595/terraform.tfstate"}

A colleague of ours, who encountered the same issue a while ago, eventually rescued us, and said he believed the problem was caused by the logs being sent to STDOUT while another part of the resource was already watching STDIN for something else.

Just thought I'd flag it for you.

Add better handling of invalid variable formats

We got the following panic when the vars section contained multiline values without the appropriate YAML multiline syntax.

▼ ▼ ▼ ▼ ▼ ▼ ▼ ▼ ▼ ▼ Terraform Destroy ▼ ▼ ▼ ▼ ▼ ▼ ▼ ▼ ▼ ▼
▲ ▲ ▲ ▲ ▲ ▲ ▲ ▲ ▲ ▲ Terraform Destroy ▲ ▲ ▲ ▲ ▲ ▲ ▲ ▲ ▲ ▲
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0xa0 pc=0x523fed]

goroutine 1 [running]:
panic(0x7ccc20, 0xc420012080)
	/usr/local/go/src/runtime/panic.go:500 +0x1a1
terraform-resource/terraform.formatVar(0x0, 0x0, 0xc4204d2f50, 0x2)
	/tmp/build/9d0884a1/resource-src/src/terraform-resource/terraform/client.go:151 +0x7d
terraform-resource/terraform.Client.varFlags(0xc42016ce00, 0x10, 0xc42043a690, 0xc4201900c0, 0x2e, 0x0, 0xc420072d80, 0x36, 0xc420184980, 0x12, ...)
	/tmp/build/9d0884a1/resource-src/src/terraform-resource/terraform/client.go:144 +0x10a
terraform-resource/terraform.Client.Destroy(0xc42016ce00, 0x10, 0xc42043a690, 0xc4201900c0, 0x2e, 0x0, 0xc420072d80, 0x36, 0xc420184980, 0x12, ...)
	/tmp/build/9d0884a1/resource-src/src/terraform-resource/terraform/client.go:99 +0x614
terraform-resource/terraform.Action.attemptDestroy(0xc42016ce00, 0x10, 0xc42043a690, 0xc4201900c0, 0x2e, 0x0, 0xc420072d80, 0x36, 0xc420184980, 0x12, ...)
	/tmp/build/9d0884a1/resource-src/src/terraform-resource/terraform/action.go:120 +0xdf
terraform-resource/terraform.Action.Destroy(0xc42016ce00, 0x10, 0xc42043a690, 0xc4201900c0, 0x2e, 0x0, 0xc420072d80, 0x36, 0xc420184980, 0x12, ...)
	/tmp/build/9d0884a1/resource-src/src/terraform-resource/terraform/action.go:99 +0xbc
terraform-resource/out.Runner.Run(0x7fff00022ecf, 0xe, 0xa16c80, 0xa5d298, 0xa16840, 0xc420094010, 0xc42016ca18, 0x2, 0xc420184080, 0x1c, ...)
	/tmp/build/9d0884a1/resource-src/src/terraform-resource/out/out.go:89 +0xdd8
main.main()
	/tmp/build/9d0884a1/resource-src/src/terraform-resource/cmd/out/main.go:32 +0x273

Metadata not available

When running

resource_types:
- name: terraform
  type: docker-image
  source:
    repository: ljfranklin/terraform-resource

resources:
- name: img-sdk
  [bla]

- name: terraform
  type: terraform
  source:
    env:
      [bla]
    storage:
      access_key_id: []
      bucket: []
      bucket_path: []
      region_name: []
      secret_access_key: []
    vars:
      [bla]

- name: src-env
  type: git
  source:
    [some source]

jobs:
  - put: terraform-plan
    resource: terraform
    params:
      terraform_source: src-env/app
      env_name: test-1
      plan_only: true

  - task: show-outputs
    image: img-sdk
    config:
      platform: linux
      inputs:
      - name: terraform-plan
      run:
        path: /bin/sh
        args:
        - -c
        - |
          echo "name: $(cat terraform-plan/name)"
          echo "metadata: $(cat terraform-plan/metadata)"

the second task fails with

cat: terraform-plan/name: No such file or directory
name: 
cat: terraform-plan/metadata: No such file or directory
metadata: 

Intercepting the container I can find the terraform-plan directory, but it's empty. The put step is successful.

Resource version: latest
Concourse version: 3.10.0

I'm sure this was working before; no idea what changed in the meantime!

Support S3 Server Side Encryption

Would be awesome to add support for remote backend config values, such as S3 server-side encryption.

An example directly using the Terraform CLI:

terraform remote config -backend=s3 \
  -backend-config="bucket=some-bucket" \
  -backend-config="key=some-key" \
  -backend-config="region=some-region" \
  -backend-config="encrypt=true" \
  -backend-config="kms_key_id=some_kms_key_id"

Possible terraform-resource solution:

resources:
  - name: terraform
    type: terraform
    source:
      storage:
        bucket: mybucket
        bucket_path: terraform-ci/
        access_key_id: {{storage_access_key}}
        secret_access_key: {{storage_secret_key}}
        additional_config:
          - encrypt=true
          - kms_key_id={{kms_key_id}}

Support terraform init

In 0.9, remote state backends have changed.

Backend reinitialization required. Please run "terraform init".
Reason: Initial configuration of the requested backend "s3"

The "backend" is the interface that Terraform uses to store state,
perform operations, etc. If this message is showing up, it means that the
Terraform configuration you're using is using a custom configuration for
the Terraform backend.

Changes to backend configurations require reinitialization. This allows
Terraform to setup the new configuration, copy existing state, etc. This is
only done during "terraform init". Please run that command now then try again.

If the change reason above is incorrect, please verify your configuration
hasn't changed and try again. At this point, no changes to your existing
configuration or state have been made.

Support for Google Cloud Platform

We have been using this to spin up instances on AWS and it works great. We now want the ability to do the same on GCP. Wanted to run this by you to understand your thoughts. Are there any plans to support GCP?

Do you see the changes required to support GCP as trivial?

Feature Request: Load vars from Terraform output JSON

Just came across this need on a project, and raising it here so we don't forget in case we get time to do a PR.

It'd be good if we could use a file containing terraform output -json instead of the vars element in a put. We've got dependent Terraform steps in different jobs: we could put the output in one job, then get it as part of a later plan, and pick up all of those vars just by specifying the path to where that get was mounted in the container.

Why not embed the Terraform binary?

Hi, first of all thank you for the great work on this resource :)

After looking at this resource and other Concourse resources, I'm curious why you couldn't (or decided not to) embed the terraform binary and write bash scripts for in/out/check, as is done for e.g. the git resource?

Put should support optionally providing native terraform state file

Currently, the .tfstate file resulting from apply goes into the S3-compatible backing store, but the outputs of the resource only contain this resource's custom JSON format. This means that I have to write separate scripts for my manual workflow with things like:

cat <<EOF > manifest.yml
---
foo: $(terraform --output=PATH_TO_STATE_FILE var_foo)
...
EOF

and scripts for my CI workflow like this:

cat <<EOF > manifest.yml
---
foo: $(jq .var_foo PATH_TO_JSON_FILE)
...
EOF

It would be nice if the tfstate file were also in the output (perhaps as an opt-in feature activated in the put parameters) so I could have more unified tooling.

Resource fails to put when there are no outputs

What if I don't want any outputs? Well, I'll tell you what:

output "make_it_work" {
  value    = "https://github.com/ljfranklin/terraform-resource/issues/26"
}
The state of your infrastructure has been saved to the path
below. This state is required to modify and destroy your
infrastructure, so keep it safe. To inspect the complete state
use the `terraform show` command.

State path: /tmp/terraform-resource-out824693898/terraform.tfstate
▲ ▲ ▲ ▲ ▲ ▲ ▲ ▲ ▲ ▲ Terraform Apply ▲ ▲ ▲ ▲ ▲ ▲ ▲ ▲ ▲ ▲
Failed To Run Terraform Apply!
Cleaning Up Partially Created Resources...
▼ ▼ ▼ ▼ ▼ ▼ ▼ ▼ ▼ ▼ Terraform Destroy ▼ ▼ ▼ ▼ ▼ ▼ ▼ ▼ ▼ ▼
mysql_database.ccdb: Refreshing state... (ID: cloud_controller)
mysql_database.uaa: Refreshing state... (ID: uaa)
mysql_database.app_usage_service: Refreshing state... (ID: app_usage_service)
mysql_database.diego: Refreshing state... (ID: diego)
mysql_database.app_usage_service: Destroying...
mysql_database.ccdb: Destroying...
mysql_database.diego: Destroying...
mysql_database.uaa: Destroying...
mysql_database.diego: Destruction complete
mysql_database.app_usage_service: Destruction complete
mysql_database.ccdb: Destruction complete
mysql_database.uaa: Destruction complete

Destroy complete! Resources: 4 destroyed.
▲ ▲ ▲ ▲ ▲ ▲ ▲ ▲ ▲ ▲ Terraform Destroy ▲ ▲ ▲ ▲ ▲ ▲ ▲ ▲ ▲ ▲
2017/02/03 20:12:03 Apply Error: Failed to retrieve output.
Error: exit status 1
Output: The state file has no outputs defined. Define an output
in your configuration with the `output` directive and re-run
`terraform apply` for it to become available.

Bump Terraform to 0.8.0

Tried to bump, had a test fail with:

Error creating plan: 1 error(s) occurred:
  
  * 1:3: unknown variable accessed: var.access_key in:
  
  ${var.access_key}

Opened an issue here: hashicorp/terraform#10711. Going to hold off on bumping until I hear back on issue.

get and put of terraform resource should not show credentials

Right now, a get on a terraform resource will dump out all the outputs, which may contain credentials (e.g. if I create an RDS resource, or an IAM user, I may want to output the creds so I can thread them through to a downstream job). It would be great if these credentials were obfuscated, or perhaps the whole output could be obfuscated (might be hard to decide exactly where to draw the line on what to obfuscate, e.g. maybe some IPs should be obfuscated too?). Ideally, this would happen by default, but at minimum there should be some way to opt into this obfuscation.

/cc @fushewokunze-pivotal

Pushing variables in environment

Hello @ljfranklin,

I have a problem with a remote state provider on S3. To access it, I have to provide my access/secret keys, which will then be stored in the remote state.
The AWS SDK uses, by default, variables stored in the environment.

Do you think it's possible to add an env parameter to push keys/values into the resource environment?
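
This is what the source.env option described under Source Configuration now provides; a minimal sketch:

resources:
  - name: terraform
    type: terraform
    source:
      backend_type: s3
      backend_config:
        bucket: mybucket
        key: mydir/terraform.tfstate
      env:
        AWS_ACCESS_KEY_ID: {{environment_access_key}}
        AWS_SECRET_ACCESS_KEY: {{environment_secret_key}}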

`get` fails if environment is tainted

If a put fails, subsequent gets will fail with:

2018/01/08 18:25:31 State file does not exist with key 'gossamer-wolf.tfstate'.
If you intended to run the `destroy` action, add `put.get_params.action: destroy`.
This is a temporary requirement until Concourse supports a `delete` step.

Should get be able to fetch a tainted statefile? Or should get only be able to fetch "healthy" envs?

map keys cannot include special characters

If I specify:

vars:
  domains:
    "*.example.com": my-favorite-domain

I generate the args: -var 'domains={*.example.com="my-favorite-domain"}' and Terraform throws the error:

invalid value "domains={*.example.com=\"my-favorite-domain\"}" for flag -var: Cannot parse value for variable ("{*.example.com=\"my-favorite-domain\"}") as valid HCL: At 1:6: illegal char

To resolve this, I must escape the key myself:

vars:
  domains:
    "\"*.example.com\"": my-favorite-domain

to generate the args: -var 'domains={"*.example.com"="my-favorite-domain"}'

Can the terraform outputs be printed as JSON?

When we do a put to the resource in order to apply, the outputs we get are printed in the UI as HCL. It would be cool to have an option to print them as JSON, since we obviously consume them as JSON in CI. We could then just copy them from the concourse UI when we are trying to manually test our newly terraformed resources (rather than using subsequent CI jobs to do so).

Double file extension making destroy action blow up

When testing the destroy action, it works but is reported as a failure because of an error message like this:

2017/03/30 20:10:58 State file does not exist with key 'tf-sandboxing.tfstate.tfstate'.
If you intended to run the `destroy` action, add `put.get_params.action: destroy`.
This is a temporary requirement until Concourse supports a `delete` step.

We think we have a fix and there may be a PR incoming. That double .tfstate looks like a bug.

Support tfstate manipulations

There are some TF usages that require TF state manipulation, such as renaming terraform modules (https://ryaneschinger.com/blog/terraform-state-move/ provides more background and an example of this use-case).

The state command enables CI/CD to perform automated manipulation of the state file (e.g. resource renaming): https://www.terraform.io/docs/commands/state/index.html

As an analogy, this is similar to database refactoring automation such as https://flywaydb.org/documentation/migration/ that enables continuous deployment of applications requiring database schema manipulations.

It would be great to be able to specify some state management commands that the terraform-resource could apply (tracking their invocations and generally their lifecycle).

Support for getting build information as terraform variables

We have had a need to tag the instances that are spun up using terraform with the build information, in order to trace and tie the instances back to the concourse build. The specific variables we need are already available in the put container as environment variables. What do you think about having the following list available as terraform variables, similar to env_name, which is the only one currently provided?

$BUILD_NAME
$BUILD_JOB_NAME
$BUILD_PIPELINE_NAME
$BUILD_ID
$BUILD_TEAM_NAME
$ATC_EXTERNAL_URL

The reason we can't use the concourse container environment variables directly is that, in order for terraform to read environment variables, they must be prefixed with TF_VAR_:
https://www.terraform.io/intro/getting-started/variables.html#from-environment-variables
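
A possible workaround with existing features, assuming Concourse exposes these variables inside task containers: write them to a var file in a task, then pass that file via put.params.var_files (the task layout below is hypothetical):

jobs:
- name: apply-with-build-info
  plan:
  - get: project-git-repo
  - task: write-build-vars
    config:
      platform: linux
      outputs:
        - name: build-info
      run:
        path: /bin/sh
        args:
          - -c
          - |
              cat > build-info/vars.yml <<EOF
              build_id: ${BUILD_ID}
              build_pipeline_name: ${BUILD_PIPELINE_NAME}
              EOF
  - put: terraform
    params:
      env_name: staging
      terraform_source: project-git-repo/terraform
      var_files:
        - build-info/vars.yml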

Integration tests occasionally fail with 404 errors

Seems to be some eventual consistency issue with S3:

Expected S3 file 'terraform-test-unit/out-test-72657703.tfstate.tainted' to exist in bucket 'terraform-resource-ci', but it does not
  Expected error:
      <*awserr.requestError | 0xc42046b050>: {
          awsError: {code: "NotFound", message: "Not Found", errs: nil},
          statusCode: 404,
          requestID: "E4F9EC73626A515C",
      }
      NotFound: Not Found
      	status code: 404, request id: E4F9EC73626A515C
  not to have occurred

One failed build is here

Support for modules in private repos

As the title says, we'd like to use modules sitting in private repos.

However, when we try to do this (ignoring the fact that we'd need to figure out a way to get the key in the docker container - advice would be welcome!), we get this:

▼ ▼ ▼ ▼ ▼ ▼ ▼ ▼ ▼ ▼ Terraform Plan ▼ ▼ ▼ ▼ ▼ ▼ ▼ ▼ ▼ ▼
▲ ▲ ▲ ▲ ▲ ▲ ▲ ▲ ▲ ▲ Terraform Plan ▲ ▲ ▲ ▲ ▲ ▲ ▲ ▲ ▲ ▲
Failed To Run Terraform Plan!
2018/04/13 12:53:44 Plan Error: terraform init command failed.
Error: exit status 1
Output: Initializing modules...
- module.router53_alias_notification
  Getting source "git::ssh://[something]"
Error downloading modules: Error loading modules: error downloading 'ssh://[something]: /usr/bin/git exited with 128: Cloning into '.terraform/modules/b8f1cee68bd102da2be28b22e182b77f'...
fatal: cannot run ssh: No such file or directory
fatal: unable to fork

so no ssh on the image! :-(

It would also be good to have something like a private_key configuration key, like we have in the git resource.
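
The source.private_key option described under Source Configuration now covers this; a minimal sketch:

resources:
  - name: terraform
    type: terraform
    source:
      backend_type: s3
      backend_config:
        bucket: mybucket
        key: mydir/terraform.tfstate
      private_key: {{github_private_key}}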

Fails to upload state when using shared credentials file

I'm using the tf backends branch with S3 backend along with the shared credentials file option.
My credentials file includes a session token for role with full administration rights.
When applying, it can read the state and apply changes (modify and create resources), but it fails when trying to upload the state with the following error:

Failed to save state: failed to upload state: AccessDenied: Access Denied status code: 403

I can confirm the session token is not expired and the apply process takes only a few seconds; after I get the failure, I can manually push the state using the same token and credentials from inside a hijacked container without issues.

Let me know if there's something I'm missing or if you need additional information for debugging.

resource doesn't load custom plugins from source directory default location

If I have a custom plugin in my source, it isn't discovered when it should be.

So if I have the following plugin in mycode:
terraform.d/plugins/linux_amd64/terraform-provider-acme_v0.4.0

And the following config:

jobs:
  - name: terraform-plan
    plan:
      - get: mycode
      - put: terraform
        params:
          terraform_source: mycode

Then the plan/apply will fail saying it can't find the plugin even though it exists in terraform.d/plugins/linux_amd64/

https://www.terraform.io/docs/commands/init.html#plugin-installation

I'm aware you can override the plugin load directory with plugin_dir, however that disables the automatic fetching of other plugins which isn't always desired behaviour.

Support for `terraform plan`

Thanks so much for the work here! Has any consideration been given to terraform plan functionality? For example, my team would like to be able to assess terraform plan output before executing terraform apply.

I'm somewhat naive -- so apologies if I'm off base -- but I'm imagining something akin to the following:

  • support for action: plan functionality that makes Terraform's tfplan available as Concourse resource output
  • support to allow terraform apply to accept the tfplan resulting from an action: plan as Concourse resource input

retry after a time

Much like cloudformation before it, I usually find terraform needs to take a few 'stabs' at completing an update. It seems like it's more of a problem for us in AWS us-east region than for people using us-west.

Are you amenable to a retry and a time-between-retry parameter?

Support more storage driver types

Hi @ljfranklin,

I intend to use the terraform resource type in my concourse pipeline to create IaaS resources,
but found it only supports S3 as a storage driver right now.

Unfortunately, I need to upload the file to OSS, which is the Cloud Object Storage Service of Alibaba. Do you have plans to support it?

OSS API overviews

Best Regards
YueWang

Provide example Concourse pipeline to build Terraform + Resource from source

As new features are added to Terraform so quickly, some users would like to use new Terraform features with this resource before the feature has been officially released. A set of re-usable Concourse tasks and pipeline config would allow teams to build terraform from source from a given git ref and push a "one-off" docker image for the resource containing that dev build. The task should optionally take the Terraform git ref as well as the Resource git ref, otherwise default to building from master.

Resource should support Terraform state backends

Backends: https://www.terraform.io/docs/backends/index.html

Unfortunately this resource can't use the built-in remote state backends at all right now. Concourse resources are required to implement a check function that can look for new versions of a resource. To have tighter control over the state files, the resource currently supports only S3. However, now that backends are listed directly in the Terraform config files it becomes even more desirable to support all the backends. I'll have to do some more thinking on this.

Proxy from envs is ignored by client

The terraform cli wrapper in client currently builds the Envs from scratch, picking up only the PATH env var. Should this take the current environment (os.Environ) as the starting point before adding more environment variables?

We're specifying the proxy details using the http_proxy, https_proxy, and no_proxy env vars, without which terraform won't be able to speak to hashicorp or the IaaS.
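
As a workaround, the proxy settings can be forwarded explicitly with the source.env option described under Source Configuration; a sketch with placeholder proxy values:

resources:
  - name: terraform
    type: terraform
    source:
      backend_type: s3
      backend_config:
        bucket: mybucket
        key: mydir/terraform.tfstate
      env:
        http_proxy: http://proxy.example.com:8080
        https_proxy: http://proxy.example.com:8080
        no_proxy: localhost,127.0.0.1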

Get so I can depend on a destroy

In my pipeline, I want to ensure the stack is removed before redeploying it. I'd like something like:

  - name: purge-previous-deployment
    plan:
      - get: environment
        trigger: true
        passed: [claim-environment]
      - get: appdog-ci
      - put: opsman-terraform
        params:
          action: destroy
          env_name: {{environment-name}}
        get_params:
          action: destroy

  - name: create-deployment
    plan:
      - get: opsman-terraform
        trigger: true
        passed: [purge-previous-deployment]
      - get: appdog-ci
      - put: opsman-terraform
        params:
          env_name: {{environment-name}}
