boltops-tools / terraspace

Terraspace: The Terraform Framework

Home Page: https://terraspace.cloud

License: Apache License 2.0

Languages: Ruby 97.24%, HCL 0.36%, Shell 2.41%
Topics: terraspace, terraform, boltops, aws, azure, google-cloud

terraspace's Introduction

Terraspace


The Terraform Framework.

Official Docs Site: terraspace.cloud

Support the Project

  • Please watch/star this repo to help grow and support the project.
  • Consider subscribing to BoltOps Learn. You'll get access to many training videos. Subscribing helps support the project.

Quick Start

Watch the video

Here are commands to get started:

terraspace new project infra --plugin aws --examples
cd infra
terraspace up demo
terraspace down demo
  • The new command generates a starter project.
  • The up command creates an S3 bucket.
  • The down command cleans up and deletes the bucket.

The default plugin is aws. Major cloud providers are supported: aws, azurerm, google.

Usage

Create infrastructure:

$ terraspace up demo
Building .terraspace-cache/us-west-2/dev/stacks/demo
Current directory: .terraspace-cache/us-west-2/dev/stacks/demo
=> terraform init -get >> /tmp/terraspace/log/init/demo.log
=> terraform apply
Apply complete! Resources: 2 added, 0 changed, 0 destroyed.
$

Destroy infrastructure:

$ terraspace down demo
Building .terraspace-cache/us-west-2/dev/stacks/demo
Current directory: .terraspace-cache/us-west-2/dev/stacks/demo
=> terraform destroy
Destroy complete! Resources: 2 destroyed.
$

Deploy Multiple Stacks

To deploy all the infrastructure stacks:

$ terraspace all up
Will run:
    terraspace up vpc      # batch 1
    terraspace up mysql    # batch 2
    terraspace up redis    # batch 2
    terraspace up instance # batch 3
Are you sure? (y/N)

To deploy specific stacks:

$ terraspace all up mysql redis
Will run:
    terraspace up vpc   # batch 1
    terraspace up mysql # batch 2
    terraspace up redis # batch 2
Are you sure? (y/N)

When you use the all command, the dependency graph is calculated and the stacks are deployed in the right order.
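Dependencies are declared implicitly: when one stack's tfvars references another stack's output via the `output` helper, Terraspace picks up that edge in the graph. A minimal sketch (stack and variable names are illustrative):

```hcl
# app/stacks/mysql/tfvars/dev.tfvars
# Referencing the vpc stack's output makes mysql depend on vpc,
# so `terraspace all up` deploys vpc in an earlier batch.
vpc_id = <%= output('vpc.vpc_id') %>
```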

Terrafile

Terraspace makes it easy to use Terraform modules sourced from your own git repositories, other git repositories, or the Terraform Registry. Use any module you want:

Terrafile:

# GitHub repo
mod "s3", source: "boltops-tools/terraform-aws-s3", tag: "v0.1.0"
# Terraform registry
mod "sg", source: "terraform-aws-modules/security-group/aws", version: "3.10.0"

To install modules:

terraspace bundle
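Assuming Terraspace's documented conventions, `terraspace bundle` installs the modules listed in the Terrafile under `vendor/modules`, and stacks reference them with a relative source that Terraspace resolves at build time. A sketch (module name illustrative):

```hcl
# app/stacks/demo/main.tf
module "s3" {
  source = "../../modules/s3"   # resolved against app/modules and vendor/modules in the build cache
}
```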

Features

  • DRY: You can keep your code DRY. Terraspace builds your Terraform project with common app and config/terraform structure that gets built each deploy. You can override the settings if needed, like for using existing backends. See: Existing Backends.
  • Generators: Built-in generators to quickly create the starter module. Focus on code instead of boilerplate structure.
  • Multiple Environments: Tfvars & Layering allow you to use the same code with different tfvars to create multiple environments. Terraspace conventionally loads tfvars from the tfvars folder. Rich layering support allows you to build different environments like dev and prod with the same code. Examples are in Full Layering.
  • Deploy Multiple Stacks: The ability to deploy multiple stacks with a single command. Terraspace calculates the dependency graph and deploys stacks in the right order. You can also target specific stacks and deploy subgraphs.
  • Secrets Support: Terraspace has built-in secrets support for AWS Secrets Manager, AWS SSM Parameter Store, Azure Key Vault, Google Secrets Manager. Easily set variables from Cloud secrets providers.
  • Terrafile: Terraspace makes it easy to use Terraform modules sourced from your own git repositories, other git repositories, or the Terraform Registry. The git repos can be private or public. This is an incredibly powerful feature of Terraspace because it opens up a world of modules for you to use. Use any module you want.
  • Configurable CLI: Configurable CLI Hooks and CLI Args allow you to adjust the underlying terraform command.
  • Testing: A testing framework that allows you to create test harnesses, deploy real-resources, and have higher confidence that your code works.
  • Terraform Cloud and Terraform Enterprise Support: TFC and TFE are both supported. Terraspace adds additional conveniences to make working with Terraform Cloud Workspaces easier.
  • Terraspace Cloud Support: Terraspace Cloud adds additional features and conveniences like a Dashboard, History, Team Management, Permissions, Real-time Logging, and Cost Estimates. It's specifically designed for Terraspace. You might also be interested in this blog post: Terraspace Cloud Intro
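The Multiple Environments layering described above amounts to a per-env tfvars layout; a sketch (filenames follow the documented convention, values are selected by `TS_ENV`):

```text
app/stacks/demo/tfvars/
├── base.tfvars   # shared across all envs
├── dev.tfvars    # layered on top when TS_ENV=dev (the default)
└── prod.tfvars   # layered on top when TS_ENV=prod
```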

Comparison

Here are some useful comparisons of Terraspace with other tools in the ecosystem:

More info: terraspace.cloud

terraspace's People

Contributors

ahaymond, alexjfisher, blucas, byron70, ccebecbc, cumpsd, fearoffish, jbbarth, johnlister, jrdemasi, nevdullcode, pvshewale, smasset-orange, syphernl, tongueroo, vilmosnagy


terraspace's Issues

terraspace bundle installs example stacks

Checklist

  • Upgrade Terraspace: Are you using the latest version of Terraspace? This allows Terraspace to fix issues fast. There's an Upgrading Guide: https://terraspace.cloud/docs/misc/upgrading/
  • Reproducibility: Are you reporting a bug that others will be able to reproduce, rather than asking a question? If you're unsure or want to ask a question, do so on https://community.boltops.com
  • Code sample: Have you put together a code sample to reproduce the issue and make it available? Code samples help speed up fixes dramatically. If it's an easily reproducible issue, then code samples are not needed. If you're unsure, please include a code sample.

My Environment

Software Version
Operating System macOS Big Sur version 11.2
Terraform v0.14.6
Terraspace 0.5.10
Ruby 3.0.0p0

Expected Behaviour

Installs modules in the vendor directory

Current Behavior

Installs modules in the vendor directory, but also adds all the example stacks

Step-by-step reproduction instructions

Specify modules in the Terrafile and run terraspace bundle

Code Sample

#Terrafile
stack_options(purge: true)
mod "eks", source: "terraform-aws-modules/eks/aws", stacks: :all, version: "14.0.0"
mod "iam", source: "terraform-aws-modules/iam/aws", stacks: :all, version: "3.8.0"
mod "vpc", source: "terraform-aws-modules/vpc/aws", stacks: :all, version: "2.70.0"

Solution Suggestion

Don't add sample stacks

terraspace fmt doesn't work on modules

Checklist

  • Upgrade Terraspace: Are you using the latest version of Terraspace? This allows Terraspace to fix issues fast. There's an Upgrading Guide: https://terraspace.cloud/docs/misc/upgrading/
  • Reproducibility: Are you reporting a bug that others will be able to reproduce, rather than asking a question? If you're unsure or want to ask a question, do so on https://community.boltops.com
  • Code sample: Have you put together a code sample to reproduce the issue and make it available? Code samples help speed up fixes dramatically. If it's an easily reproducible issue, then code samples are not needed. If you're unsure, please include a code sample.

My Environment

Software Version
Operating System linux
Terraform 0.14.6
Terraspace 0.6.9
Ruby ruby 2.7.2p137 (2020-10-01 revision 5445e04352) [x86_64-linux]

Expected Behaviour

Code in modules is formatted

Current Behavior

Code in modules is not formatted

Step-by-step reproduction instructions

  1. Download the sample code
  2. Execute terraspace fmt
  3. The command formats stacks, but does not format modules
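As a possible interim workaround (assuming Terraform 0.12+, where `fmt` gained the flag), the modules directory can be formatted directly:

```
terraform fmt -recursive app/modules
```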

Code Sample

https://github.com/ByJacob/terraspace-bug-issues-116

Solution Suggestion

unlock not supported in terraspace

I found that Terraspace doesn't support the terraform subcommand unlock.
This is what I executed:
min@MacBook-Pro-2 ~/r/demo> terraspace unlock
Could not find command "unlock".

My Environment

Software Version
Operating System macos big sur
Terraform v0.12.29
Terraspace 0.5.1
Ruby ruby 2.7.2p137 (2020-10-01 revision 5445e04352) [x86_64-darwin20]

Expected Behaviour

It would be nice to have the unlock command supported, instead of having to cd to the build dir and run the terraform unlock command.

Terraspace does not work for non-cloud providers

I tested Terraspace with the http backend, and it does not seem to support any backend apart from aws/azure/gcp. I also tried the local backend and end up with the same error.

Checklist

  • Upgrade Terraspace: Are you using the latest version of Terraspace? This allows Terraspace to fix issues fast. There's an Upgrading Guide: https://terraspace.cloud/docs/misc/upgrading/
  • Reproducibility: Are you reporting a bug that others will be able to reproduce, rather than asking a question? If you're unsure or want to ask a question, do so on https://community.boltops.com
  • Code sample: Have you put together a code sample to reproduce the issue and make it available? Code samples help speed up fixes dramatically. If it's an easily reproducible issue, then code samples are not needed. If you're unsure, please include a code sample.

My Environment

Software Version
Operating System macOS (Intel) 11.5 Big Sur
Terraform v1.0.2 on darwin_amd64
Terraspace 0.6.12
Ruby 3.0.2p107 (2021-07-07 revision 0db68f0233) [x86_64-darwin20]

Expected Behaviour

terraspace should work

Current Behavior

gives out the following error

[deployments] terraspace up test -y
<dir>/gems/terraspace-0.6.12/lib/terraspace/compiler/expander.rb:26:in `autodetect': undefined method `[]' for nil:NilClass (NoMethodError)
	from <dir>gems/memoist-0.16.2/lib/memoist.rb:213:in `autodetect'
	from <dir>/gems/terraspace-0.6.12/lib/terraspace/mod.rb:130:in `cache_dir'
...

Step-by-step reproduction instructions

- terraspace new project deployments
- cd deployments
- terraspace new stack test
# update provider config in /deployments/config/terraform/backend.tf
# add local backend and remove aws plugin
- terraspace up test -y

Code Sample

attaching the following project sample

deployments.zip

Package Terraspace as a Linux AppImage (see https://appimage.org/)

Summary

Terraform and Terragrunt users are accustomed to the simple, zero-dependency installation of Go binaries. For these users, installing a Ruby gem is problematic. Also, many of these users do not have root privileges on their hosts, which makes package-manager-based installation problematic. These non-root users cannot install Docker either. So, to target these users, consider packaging Terraspace as an AppImage (see https://appimage.org/).

Motivation

Simplify installation of Terraspace for non-root users who cannot install Terraspace using a package manager and cannot install Docker to use the Terraspace docker image.

terraform not installed.

I am using Terraspace as described but getting a "terraform not installed" error despite Terraform being installed. Any idea why it's happening? I am using VS

PS C:\Users\Ext03545\infra> terraspace check_setup
Detected Terrspace version: 0.5.11
Terraform not installed. Unable to detect a terraform command. Please double check that terraform is installed.
See: https://terraspace.cloud/docs/install/terraform/
PS C:\Users\Ext03545\infra> terraform --version
Terraform v0.14.7

This is the error I am facing despite Terraform being installed.

Any help will be appreciated.
Thank you.

Parallel Terraspace Execution

Summary

Support running terraspace [all] init/plan/up in parallel within the same workspace.

Motivation

I'd like to be able to, at the very least, execute parallel terraspace all plan runs on a given CI build. Imagine your Terraspace project has multiple layers, separated per region. When raising a pull request to that project, your CI process should ideally execute a plan on all the regions associated with that project to show what effect the change has on each of those layers, giving the engineer the most feedback to validate that the change has the desired effect on the given infrastructure.

I don't think parallel execution is currently possible, as terraspace writes its log files to a flat folder structure rather than a layered one. Given a terraspace project which is using layering, terraspace build will create a per-layer directory structure with the resulting terraform root module, such as .terraspace-cache/<region>/<env>/[modules,stacks]. However, when you run an [all] plan or [all] up, the logs will be stored in a flattened structure such as logs/plan/plan.log
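The path change the proposal asks for could be sketched as a pure naming helper (a hypothetical function for illustration, not Terraspace's actual internals):

```ruby
# Hypothetical sketch: build a per-layer log path instead of the current
# flat logs/plan/plan.log, so parallel runs don't clobber each other's logs.
# All names here are illustrative, not Terraspace's real API.
def layered_log_path(region, env, action)
  File.join("log", region, env, action, "#{action}.log")
end

puts layered_log_path("us-east-1", "dev", "plan")
```

With a layout like this, two concurrent `all plan` runs for different regions would write to disjoint directories.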

Guide-level explanation

I don't think there is anything to add here.

Reference-level explanation

  1. Identify the various layers which need to have a plan run against them
  2. Trigger a plan per-layer
  3. Each plan should write to its own /log/<layer>/<env>/plan/plan.log
  4. Any other non-layered disk access would also need to follow the same pattern as above, or in .terraspace-cache/<layer>

Drawbacks

  1. Possible complexity issues?

Unresolved Questions

Not sure.

[Module Testing] tfvars file is not being used by terraspace test

Checklist

  • Upgrade Terraspace
  • Reproducibility
  • Code sample

My Environment

Software Version
Operating System Ubuntu 20.04 (WSL2)
Terraform 1.0.2
Terraspace 0.6.11
Ruby ruby 2.7.4p191 (2021-07-07 revision a21a3b7d23) [x86_64-linux]

Expected Behaviour

  • Terraspace uses spec/fixtures/tfvars/demo.tfvars for terraspace test at the module level

Current Behavior

  • terraspace test runs init, plan, and apply without using demo.tfvars. The module has a required variable:
variable "myvar" {
  type        = string
  description = "Required variable"
}

  • terraspace test returns success, but the logs show init, plan, and apply fail because of validation: the tfvars file is not included as input for terraspace test:
terraspace test                                     
=> cd test && bundle exec rspec

main
Terraspace.logger has been reconfigured to /tmp/terraspace/log/test.log
Building test harness at: /tmp/terraspace/test-harnesses/mybug-harness
Test harness built.
=> terraspace up mybug -y
=> terraspace down mybug -y

Finished in 9.3 seconds (files took 0.42923 seconds to load)
0 examples, 0 failures

[2021-07-15T11:17:07 #17971 ]: Building .terraspace-cache/us-west-2/test/modules/mybug
[2021-07-15T11:17:10 #17971 ]: Built in .terraspace-cache/us-west-2/test/modules/mybug
[2021-07-15T11:17:10 #17971 ]: Current directory: .terraspace-cache/us-west-2/test/modules/mybug
[2021-07-15T11:17:10 #17971 ]: => terraform plan -input=false -out /tmp/terraspace/plans/mybug-3f31c6546719d94c6bba70979b8e06d2.plan
[2021-07-15T11:17:10 #17971 ]: Error: No value for required variable
[2021-07-15T11:17:10 #17971 ]:   on variables.tf line 3:
[2021-07-15T11:17:10 #17971 ]:    3: variable "myvar" {
[2021-07-15T11:17:10 #17971 ]: The root module input variable "myvar" is not set, and has no default
[2021-07-15T11:17:10 #17971 ]: value. Use a -var or -var-file command line argument to provide a value for
[2021-07-15T11:17:10 #17971 ]: this variable.
[2021-07-15T11:17:10 #17971 ]: Error running command: terraform plan -input=false -out /tmp/terraspace/plans/mybug-3f31c6546719d94c6bba70979b8e06d2.plan
[2021-07-15T11:17:10 #17971 ]: Building .terraspace-cache/us-west-2/test/modules/mybug
[2021-07-15T11:17:11 #17971 ]: Built in .terraspace-cache/us-west-2/test/modules/mybug
[2021-07-15T11:17:11 #17971 ]: => terraform plan -input=false -destroy
[2021-07-15T11:17:12 #17971 ]: Error: No value for required variable
[2021-07-15T11:17:12 #17971 ]:   on variables.tf line 3:
[2021-07-15T11:17:12 #17971 ]:    3: variable "myvar" {
[2021-07-15T11:17:12 #17971 ]: The root module input variable "myvar" is not set, and has no default
[2021-07-15T11:17:12 #17971 ]: value. Use a -var or -var-file command line argument to provide a value for
[2021-07-15T11:17:12 #17971 ]: this variable.
[2021-07-15T11:17:12 #17971 ]: Error running command: terraform plan -input=false -destroy

Step-by-step reproduction instructions

  • Create module
  • Create a variable that is required to be set
  • Create spec/fixtures/tfvars/demo.tfvars with input for this variable.
  • Run terraspace test

Code Sample

Solution Suggestion

NA

'terraspace new plugin' throws 'NoMethodError'

My Environment

Software Version
Operating System Fedora 31
Terraform 0.14.4
Terraspace 0.5.10
Ruby 2.7.2p137

Expected Behaviour

Skeleton for new plugin created.

Current Behavior

'terraspace new plugin' throws 'NoMethodError'

Step-by-step reproduction instructions

terraspace new plugin terraspace_plugin_hyperv
=> Creating new plugin: terraspace_plugin_hyperv
Traceback (most recent call last):
	23: from /opt/terraspace/embedded/bin/terraspace:23:in `<main>'
	22: from /opt/terraspace/embedded/bin/terraspace:23:in `load'
	21: from /opt/terraspace/embedded/lib/ruby/gems/2.7.0/gems/terraspace-0.5.10/exe/terraspace:14:in `<top (required)>'
	20: from /opt/terraspace/embedded/lib/ruby/gems/2.7.0/gems/thor-1.0.1/lib/thor/base.rb:485:in `start'
	19: from /opt/terraspace/embedded/lib/ruby/gems/2.7.0/gems/terraspace-0.5.10/lib/terraspace/command.rb:59:in `dispatch'
	18: from /opt/terraspace/embedded/lib/ruby/gems/2.7.0/gems/thor-1.0.1/lib/thor.rb:392:in `dispatch'
	17: from /opt/terraspace/embedded/lib/ruby/gems/2.7.0/gems/thor-1.0.1/lib/thor/invocation.rb:127:in `invoke_command'
	16: from /opt/terraspace/embedded/lib/ruby/gems/2.7.0/gems/thor-1.0.1/lib/thor/command.rb:27:in `run'
	15: from /opt/terraspace/embedded/lib/ruby/gems/2.7.0/gems/thor-1.0.1/lib/thor.rb:243:in `block in subcommand'
	14: from /opt/terraspace/embedded/lib/ruby/gems/2.7.0/gems/thor-1.0.1/lib/thor/invocation.rb:116:in `invoke'
	13: from /opt/terraspace/embedded/lib/ruby/gems/2.7.0/gems/terraspace-0.5.10/lib/terraspace/command.rb:59:in `dispatch'
	12: from /opt/terraspace/embedded/lib/ruby/gems/2.7.0/gems/thor-1.0.1/lib/thor.rb:392:in `dispatch'
	11: from /opt/terraspace/embedded/lib/ruby/gems/2.7.0/gems/thor-1.0.1/lib/thor/invocation.rb:127:in `invoke_command'
	10: from /opt/terraspace/embedded/lib/ruby/gems/2.7.0/gems/thor-1.0.1/lib/thor/command.rb:27:in `run'
	 9: from /opt/terraspace/embedded/lib/ruby/gems/2.7.0/gems/thor-1.0.1/lib/thor.rb:40:in `block in register'
	 8: from /opt/terraspace/embedded/lib/ruby/gems/2.7.0/gems/thor-1.0.1/lib/thor/invocation.rb:116:in `invoke'
	 7: from /opt/terraspace/embedded/lib/ruby/gems/2.7.0/gems/thor-1.0.1/lib/thor/group.rb:232:in `dispatch'
	 6: from /opt/terraspace/embedded/lib/ruby/gems/2.7.0/gems/thor-1.0.1/lib/thor/invocation.rb:134:in `invoke_all'
	 5: from /opt/terraspace/embedded/lib/ruby/gems/2.7.0/gems/thor-1.0.1/lib/thor/invocation.rb:134:in `map'
	 4: from /opt/terraspace/embedded/lib/ruby/gems/2.7.0/gems/thor-1.0.1/lib/thor/invocation.rb:134:in `each'
	 3: from /opt/terraspace/embedded/lib/ruby/gems/2.7.0/gems/thor-1.0.1/lib/thor/invocation.rb:134:in `block in invoke_all'
	 2: from /opt/terraspace/embedded/lib/ruby/gems/2.7.0/gems/thor-1.0.1/lib/thor/invocation.rb:127:in `invoke_command'
	 1: from /opt/terraspace/embedded/lib/ruby/gems/2.7.0/gems/thor-1.0.1/lib/thor/command.rb:27:in `run'
/opt/terraspace/embedded/lib/ruby/gems/2.7.0/gems/terraspace-0.5.10/lib/terraspace/cli/new/plugin.rb:16:in `create_plugin': undefined method `core_template_source' for #<Terraspace::CLI::New::Plugin:0x0000000002d25440> (NoMethodError)

terraspace taint

Summary

It would be convenient to have a terraspace taint command, forwarding to terraform taint.

Motivation

When tainting a resource, I always have to go to the directory where the stack resides and invoke terraform taint.

Guide-level explanation

See Reference-level explanation

Reference-level explanation

When invoking terraspace taint demo null_resource.foo
Then terraspace should invoke terraform taint null_resource.foo for the stack demo.

Selection of targets (stacks vs. modules) shall be according to existing behaviour.

(Optional) Maybe the stack could be omitted, if the resource is unique.
(Optional) Maybe it would be possible to specify multiple resources to taint.
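The proposed forwarding amounts to argument translation plus a working-directory change. A hypothetical sketch (not Terraspace's real internals; the cache-dir layout follows the `.terraspace-cache/<region>/<env>/stacks/<stack>` convention shown elsewhere on this page):

```ruby
# Hypothetical sketch of how `terraspace taint STACK ADDRESS` might map to
# the underlying terraform invocation. Names and defaults are illustrative.
def taint_command(stack, address, region: "us-west-2", env: "dev")
  dir = File.join(".terraspace-cache", region, env, "stacks", stack)
  { chdir: dir, argv: ["terraform", "taint", address] }
end

cmd = taint_command("demo", "null_resource.foo")
puts "cd #{cmd[:chdir]} && #{cmd[:argv].join(' ')}"
```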

Drawbacks

None

Unresolved Questions

Running Terraspace in Alpine Ruby Docker Container

Hi,

I'm trying to test out Terraspace inside an Alpine 3.12 Docker container with the Ruby runtime installed. I'm able to install Terraspace and validate the installation, and everything seems to be fine. When I try to execute terraspace build demo or terraspace up demo I get the following error. I'm not familiar with Ruby, so I'm not sure if this is even possible to run in an Alpine container. I've also tried a Debian container and I get the same error. Containers were downloaded from https://hub.docker.com/_/ruby

It would be good to include some documentation on how to install and run Terraspace inside a Docker container. I believe this will help a lot of people who want to try out this awesome tool without needing to install the Ruby runtime components in their current workspace.

Error

root@ebf18855f463:/infra# terraspace up demo
cannot load such file -- /usr/local/bundle/gems/terraspace_plugin_google-0.2.2/lib/terraspace_plugin_google/interfaces
WARNING: Unable to require "bundler/setup"
There may be something funny with your ruby and bundler setup.
You can try upgrading bundler and rubygems:

    gem update --system
    gem install bundler

Here are some links that may be helpful:

* https://bundler.io/blog/2019/01/03/announcing-bundler-2.html

Also, running bundle exec in front of your command may remove this message.

Traceback (most recent call last):
	11: from /usr/local/bundle/bin/terraspace:23:in `<main>'
	10: from /usr/local/bundle/bin/terraspace:23:in `load'
	 9: from /usr/local/bundle/gems/terraspace-0.4.4/exe/terraspace:12:in `<top (required)>'
	 8: from /usr/local/bundle/gems/terraspace-0.4.4/exe/terraspace:12:in `require'
	 7: from /usr/local/bundle/gems/terraspace-0.4.4/lib/terraspace/cli.rb:1:in `<top (required)>'
	 6: from /usr/local/bundle/gems/terraspace-0.4.4/lib/terraspace/cli.rb:2:in `<module:Terraspace>'
	 5: from /usr/local/bundle/gems/terraspace-0.4.4/lib/terraspace/cli.rb:41:in `<class:CLI>'
	 4: from /usr/local/bundle/gems/terraspace-0.4.4/lib/terraspace/cli.rb:41:in `require'
	 3: from /usr/local/bundle/gems/terraspace-0.4.4/lib/terraspace/cli/cloud.rb:1:in `<top (required)>'
	 2: from /usr/local/bundle/gems/terraspace-0.4.4/lib/terraspace/cli/cloud.rb:2:in `<class:CLI>'
	 1: from /usr/local/bundle/gems/terraspace-0.4.4/lib/terraspace/cli/cloud.rb:3:in `<class:Cloud>'
/usr/local/bundle/gems/terraspace-0.4.4/lib/terraspace/cli/cloud.rb:3:in `require': cannot load such file -- /usr/local/bundle/gems/terraspace-0.4.4/lib/terraspace/terraform (LoadError)

Environment

root@ebf18855f463:/usr/local/bundle/gems/terraspace-0.4.4/lib/terraspace# bundle -v
Bundler version 2.1.4
root@ebf18855f463:/usr/local/bundle/gems/terraspace-0.4.4/lib/terraspace# ruby -v
ruby 2.7.2p137 (2020-10-01 revision 5445e04352) [x86_64-linux]
root@ebf18855f463:/usr/local/bundle/gems/terraspace-0.4.4/lib/terraspace# gem env
RubyGems Environment:
  - RUBYGEMS VERSION: 3.1.4
  - RUBY VERSION: 2.7.2 (2020-10-01 patchlevel 137) [x86_64-linux]
  - INSTALLATION DIRECTORY: /usr/local/bundle
  - USER INSTALLATION DIRECTORY: /root/.gem/ruby/2.7.0
  - RUBY EXECUTABLE: /usr/local/bin/ruby
  - GIT EXECUTABLE: /usr/bin/git
  - EXECUTABLE DIRECTORY: /usr/local/bundle/bin
  - SPEC CACHE DIRECTORY: /root/.gem/specs
  - SYSTEM CONFIGURATION DIRECTORY: /usr/local/etc
  - RUBYGEMS PLATFORMS:
    - ruby
    - x86_64-linux
  - GEM PATHS:
     - /usr/local/bundle
     - /root/.gem/ruby/2.7.0
     - /usr/local/lib/ruby/gems/2.7.0
  - GEM CONFIGURATION:
     - :update_sources => true
     - :verbose => true
     - :backtrace => false
     - :bulk_threshold => 1000
     - "install" => "--no-document"
     - "update" => "--no-document"
  - REMOTE SOURCES:
     - https://rubygems.org/
  - SHELL PATH:
     - /usr/local/bundle/bin
     - /usr/local/sbin
     - /usr/local/bin
     - /usr/sbin
     - /usr/bin
     - /sbin
     - /bin

Failed to apply / plan project if stack mentioned in ignore_stacks has dependencies

Software Version
Operating System MacOS 11.2.1
Terraform v0.14.6
Terraspace 0.5.11
Ruby 2.7.2

Expected Behaviour

stack demo-ssm should be skipped, stack demo should be deployed

Current Behavior

Exception: undefined method `children' for nil:NilClass (NoMethodError)

Step-by-step reproduction instructions

  1. terraspace new project infra --plugin aws --examples
  2. terraspace new stack demo-ssm
  3. add following code into demo-ssm/main.tf
    output "debug" { value = var.bucket_name }
  4. add following code into demo-ssm/variables.tf
    variable "bucket_name" { type = string }
  5. add following code into demo-ssm/tfvars/dev.tfvars
    bucket_name = <%= output('demo.bucket_name') %>
  6. look at the graph dependencies (All works as expected)
terraspace all graph --format text
Building graph...
Downloading tfstate files for dependencies defined in tfvars...
demo-ssm
└── demo
  7. add the following to config/app.rb
Terraspace.configure do |config|
  config.logger.level = :info
  config.test_framework = "rspec"
  config.all.ignore_stacks = ["demo-ssm"]  # <<= ignore stack demo-ssm
end

  8. try to run the graph again

Code Sample

ignore_stacks_example.zip

Solution Suggestion

--test-structure seems buggy

My Environment

Software Version
Operating System ubuntu 20.10
Terraform 0.14.7
Terraspace 0.6.3
Ruby 2.7.2p137

Expected Behaviour

no bugs ;)

Current Behavior

Running the following (or any invocation with --test-structure)

terraspace new project --examples --test-structure tools
will output
=> Creating new project called tools.
      create  tools
      create  tools/.gitignore
      create  tools/Gemfile
      create  tools/README.md
      create  tools/Terrafile
      create  tools/config/app.rb
       exist  tools
      create  tools/config/terraform/backend.tf
      create  tools/config/terraform/provider.tf
=> Creating test for new module: example
      create  tools/app/modules/example
      create  tools/app/modules/example/main.tf
      create  tools/app/modules/example/outputs.tf
      create  tools/app/modules/example/variables.tf
=> Creating new stack called demo.
      create  tools/app/stacks/demo
      create  tools/app/stacks/demo/main.tf
      create  tools/app/stacks/demo/outputs.tf
      create  tools/app/stacks/demo/variables.tf
Traceback (most recent call last):
	23: from /opt/terraspace/embedded/bin/terraspace:23:in `<main>'
	22: from /opt/terraspace/embedded/bin/terraspace:23:in `load'
	21: from /opt/terraspace/embedded/lib/ruby/gems/2.7.0/gems/terraspace-0.6.3/exe/terraspace:14:in `<top (required)>'
	20: from /opt/terraspace/embedded/lib/ruby/gems/2.7.0/gems/thor-1.1.0/lib/thor/base.rb:485:in `start'
	19: from /opt/terraspace/embedded/lib/ruby/gems/2.7.0/gems/terraspace-0.6.3/lib/terraspace/command.rb:59:in `dispatch'
	18: from /opt/terraspace/embedded/lib/ruby/gems/2.7.0/gems/thor-1.1.0/lib/thor.rb:392:in `dispatch'
	17: from /opt/terraspace/embedded/lib/ruby/gems/2.7.0/gems/thor-1.1.0/lib/thor/invocation.rb:127:in `invoke_command'
	16: from /opt/terraspace/embedded/lib/ruby/gems/2.7.0/gems/thor-1.1.0/lib/thor/command.rb:27:in `run'
	15: from /opt/terraspace/embedded/lib/ruby/gems/2.7.0/gems/thor-1.1.0/lib/thor.rb:243:in `block in subcommand'
	14: from /opt/terraspace/embedded/lib/ruby/gems/2.7.0/gems/thor-1.1.0/lib/thor/invocation.rb:116:in `invoke'
	13: from /opt/terraspace/embedded/lib/ruby/gems/2.7.0/gems/terraspace-0.6.3/lib/terraspace/command.rb:59:in `dispatch'
	12: from /opt/terraspace/embedded/lib/ruby/gems/2.7.0/gems/thor-1.1.0/lib/thor.rb:392:in `dispatch'
	11: from /opt/terraspace/embedded/lib/ruby/gems/2.7.0/gems/thor-1.1.0/lib/thor/invocation.rb:127:in `invoke_command'
	10: from /opt/terraspace/embedded/lib/ruby/gems/2.7.0/gems/thor-1.1.0/lib/thor/command.rb:27:in `run'
	 9: from /opt/terraspace/embedded/lib/ruby/gems/2.7.0/gems/thor-1.1.0/lib/thor.rb:40:in `block in register'
	 8: from /opt/terraspace/embedded/lib/ruby/gems/2.7.0/gems/thor-1.1.0/lib/thor/invocation.rb:116:in `invoke'
	 7: from /opt/terraspace/embedded/lib/ruby/gems/2.7.0/gems/thor-1.1.0/lib/thor/group.rb:232:in `dispatch'
	 6: from /opt/terraspace/embedded/lib/ruby/gems/2.7.0/gems/thor-1.1.0/lib/thor/invocation.rb:134:in `invoke_all'
	 5: from /opt/terraspace/embedded/lib/ruby/gems/2.7.0/gems/thor-1.1.0/lib/thor/invocation.rb:134:in `map'
	 4: from /opt/terraspace/embedded/lib/ruby/gems/2.7.0/gems/thor-1.1.0/lib/thor/invocation.rb:134:in `each'
	 3: from /opt/terraspace/embedded/lib/ruby/gems/2.7.0/gems/thor-1.1.0/lib/thor/invocation.rb:134:in `block in invoke_all'
	 2: from /opt/terraspace/embedded/lib/ruby/gems/2.7.0/gems/thor-1.1.0/lib/thor/invocation.rb:127:in `invoke_command'
	 1: from /opt/terraspace/embedded/lib/ruby/gems/2.7.0/gems/thor-1.1.0/lib/thor/command.rb:27:in `run'
/opt/terraspace/embedded/lib/ruby/gems/2.7.0/gems/terraspace-0.6.3/lib/terraspace/cli/new/project.rb:65:in `create_test': uninitialized constant Terraspace::CLI::New::Test::Bootstrap (NameError)
Did you mean?  Terraspace::Booter

Update terraspace to work with terraform 1.0.0

Checklist

  • Upgrade Terraspace: Are you using the latest version of Terraspace? This allows Terraspace to fix issues fast. There's an Upgrading Guide: https://terraspace.cloud/docs/misc/upgrading/
  • Reproducibility: Are you reporting a bug that others will be able to reproduce, rather than asking a question? If you're unsure or want to ask a question, do so on https://community.boltops.com
  • Code sample: Have you put together a code sample to reproduce the issue and make it available? Code samples help speed up fixes dramatically. If it's an easily reproducible issue, then code samples are not needed. If you're unsure, please include a code sample.

My Environment

Detected Terrspace version: 0.6.10
Detected Terraform bin: /root/.tfenv/bin/terraform
Detected Terraform v1.0.0
Terraspace requires Terraform v0.12.x and above
The installed version of terraform may not work with terraspace.
Recommend using at least terraform v0.12.x
If you would like to bypass this check. Use TS_VERSION_CHECK=0

Expected Behaviour

Version check accepts terraform 1.0.0 as well as 0.12.x

Support tfenv's .terraform-version file

Summary

tfenv supports the use of a .terraform-version file to define the version of Terraform that should be used by a Terraform project. Currently, Terraspace does not include a .terraform-version file in the build output even when one is present in the stack directory.

Motivation

This will enable the support of stack or project level terraform version management to be recorded within a repository rather than require management via CI/CD tooling.

Guide-level explanation

If a .terraform-version file is present in the stack directory, it is included in the build output. This enables tfenv to select the appropriate terraform version to use for the stack.
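A minimal sketch of what such a build step could do (hypothetical, not current Terraspace behavior): copy the stack's .terraform-version file into the build cache directory when present, so tfenv resolves the right version there.

```ruby
require "fileutils"

# Hypothetical build hook: if the stack directory has a .terraform-version
# file, copy it into the build cache so tfenv picks the right version
# when terraform runs from the cache dir.
def copy_terraform_version(stack_dir, cache_dir)
  src = File.join(stack_dir, ".terraform-version")
  return unless File.exist?(src)

  FileUtils.mkdir_p(cache_dir)
  FileUtils.cp(src, File.join(cache_dir, ".terraform-version"))
end
```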

Remove from and add to state

Summary

I would like to manually remove an item from the state. I can do this via terraform state rm ...., but I don't see such a command with terraspace. Also I haven't seen any functionality to add existing resources to the state. This would also be valuable.

Motivation

Sometimes automating the removal of items isn't feasible, or changes are made manually in the UI. In those cases Terraspace breaks because it references a resource that no longer exists. Also, if you adopt Terraspace later in a project, it would be nice to add existing resources to the state file so Terraform can manage them.
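Until a built-in command exists, one workaround is to run `terraform state` inside the stack's build cache directory. A rough sketch of what such a passthrough could build (the cache path and function name are illustrative):

```ruby
require "shellwords"

# Hypothetical passthrough: construct a `terraform state ...` command
# that runs inside the stack's build cache directory.
def state_command(stack, *args, cache_root: ".terraspace-cache/us-west-2/dev/stacks")
  dir = File.join(cache_root, stack)
  escaped = args.map { |a| Shellwords.escape(a) }.join(" ")
  "cd #{Shellwords.escape(dir)} && terraform state #{escaped}"
end

puts state_command("demo", "rm", "aws_s3_bucket.this")
# cd .terraspace-cache/us-west-2/dev/stacks/demo && terraform state rm aws_s3_bucket.this
```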

Guide-level explanation

Reference-level explanation

Drawbacks

Unresolved Questions

Terraspace azure China

When I use Terraspace with an Azure China service principal and run terraspace up, it prompts a login failure error.
Setting ARM_ENVIRONMENT does not solve the issue; testing with azure_check.rb also fails, and Terraspace seems to ignore the ARM_ENVIRONMENT value.

Publish latest alpine/terraspace container image to hub.docker.com

Summary

Publish latest alpine/terraspace container image to hub.docker.com

Motivation

In some environments, it is problematic to install and configure a Ruby runtime with a gem such as Terraspace. As an alternative, support running Terraspace from a container. Terragrunt provides such a container here: https://github.com/alpine-docker/terragrunt. The Terragrunt container on Docker Hub has been downloaded over 1 million times. Terraspace should consider following suit and defining a git repo https://github.com/alpine/terraspace for pushing the latest Terraspace container image to Docker Hub.

A Terraspace container image would also support native k8s CI pipelines that want to use Terraspace to manage infrastructure via a gitops model.

Finally, the Terraform community is accustomed to working with a dependency-free terraform Go binary and seamlessly-downloaded plugins. Any Ruby-specific dependencies and complexity should be hidden as much as possible in order to increase adoption within the Terraform community.

See current attempt to define a working alpine/terraspace container here.

Guide-level explanation

Reference-level explanation

Drawbacks

Unresolved Questions

[feature-request] terraspace fmt specific files

Summary

In #94, terraspace fmt was added (🥳 ). Can this be further expanded upon to pass a list of files to run fmt on, instead of the entire project? Something like this:

terraspace fmt stacks/project/main.tf stacks/gke/main.tf

Motivation

We are using a .pre-commit-config.yaml to run certain things as a pre-commit hook on the project. We've added this into the mix to ensure our files are formatted. However, if we change one file, it runs on the entire project, resulting in long run times on our larger Terraspace projects.

If we could pass individual files, pre-commit-config allows us to pass the staged files to the script being called. This would drastically improve pre-commit performance as it would only need to run on the files that were actually modified.

Note: I would submit a PR, but I am not familiar enough with Ruby and am hoping someone else has some bandwidth to cook this up. 🍻
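One possible shape for this (a sketch under the assumption that pre-commit passes staged paths as arguments): filter the arguments down to Terraform files and run `terraform fmt` on each, instead of the whole project.

```ruby
# Sketch: pre-commit passes the staged file paths as ARGV; only
# *.tf / *.tfvars files are formatted, one `terraform fmt` per file.
def fmt_targets(paths)
  paths.select { |p| p.end_with?(".tf", ".tfvars") }
end

fmt_targets(ARGV).each do |path|
  system("terraform", "fmt", path) # shells out to terraform per file
end
```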

Terraspace hangs when TF_LOG=TRACE environment variable exists

My Environment

export TF_LOG=TRACE

Operating System: ubuntu 18.04
Terraform: 0.14.8
Terraspace: 0.6.5
Ruby: 2.5.1

Expected Behaviour

The terraform logs to be present in the terraspace logs.

Current Behavior

Step-by-step reproduction instructions

  1. Create a stack set that accesses AWS resources (using a gitlab HTTP backend, but also fails with an S3 backend)
  2. export TF_LOG=TRACE
  3. terraspace all xxx (or any terraspace command really)

Code Sample

Not needed

Solution Suggestion

Ensure that both standard out and standard error when executing the terraform application are captured and streamed to the log file in an appropriate manner and flushed to disk/screen frequently to ensure no IO blocks occur.
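A common fix for this class of hang is to drain stdout and stderr on separate threads so neither pipe buffer can fill up and block the child. A minimal Ruby sketch (not Terraspace's actual runner):

```ruby
require "open3"

# Drain stdout and stderr concurrently so a chatty TF_LOG=TRACE run
# cannot fill one pipe buffer and deadlock the child process.
def stream_command(cmd, sink)
  Open3.popen3(*cmd) do |stdin, stdout, stderr, wait_thr|
    stdin.close
    readers = [stdout, stderr].map do |io|
      Thread.new { io.each_line { |line| sink << line } }
    end
    readers.each(&:join)
    wait_thr.value # Process::Status of the child
  end
end

lines = []
status = stream_command(["ruby", "-e", "puts 'out'; warn 'err'"], lines)
# lines now holds output from both streams, interleaved as it arrived
```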

Use tfenv's .terraform-version file

I would like to have a per-project .terraform-version file.
So when I run terraspace new project infra, it should create a new .terraform-version file at the root
that will be used for all stacks.
Then I only need to update one file when I bump the Terraform version.

Great project!

How do you reference modules managed by Terrafile

Terrafile is being used to manage Terraform modules from a private registry. It took a while to figure out how to do this (I do think that using the same format as Terraform's module source parameter would be useful). How do I reference such modules from a Terraspace module and stack?

Terraspace doesn't recognize dependencies when using output in a tf file

My Environment

Operating System: linux
Terraform: 0.14.6
Terraspace: 0.6.9
Ruby: ruby 2.7.2p137 (2020-10-01 revision 5445e04352) [x86_64-linux]

Expected Behaviour

terraspace all graph correctly recognizes dependencies
https://github.com/ByJacob/terraspace-bug-issues-115/blob/main/expected_image.png

Current Behavior

terraspace all graph doesn't correctly recognize dependencies
https://github.com/ByJacob/terraspace-bug-issues-115/blob/main/dependencies-20210531134254.png

Step-by-step reproduction instructions

  1. Get repository with sample code
  2. Execute terraspace all graph in the sample folder
  3. Open dependencies png in .terraspace-cache/graph

Code Sample

https://github.com/ByJacob/terraspace-bug-issues-115

Solution Suggestion

Adding a relationship search in the tf file
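One direction this could take (a sketch, not Terraspace's implementation): in addition to parsing tfvars, scan the stack's *.tf source for cross-stack references such as `terraform_remote_state` data sources and record them as graph edges. The regex is illustrative and would need hardening:

```ruby
# Illustrative scan: collect stack names referenced via
# data.terraform_remote_state.<name> in raw Terraform source.
def remote_state_deps(tf_source)
  tf_source.scan(/terraform_remote_state\.(\w+)/).flatten.uniq
end

src = <<~HCL
  resource "aws_instance" "app" {
    subnet_id = data.terraform_remote_state.vpc.outputs.subnet_id
    key_name  = data.terraform_remote_state.vpc.outputs.key_name
  }
HCL

puts remote_state_deps(src).inspect # ["vpc"]
```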

Support for JSON formatted plan output

Summary

Please provide support to output the result of a terraspace plan and terraspace all plan run as JSON.

Motivation

Terraform has supported JSON output since version 0.12.
The JSON format can be used to pass the results of a plan into static code analysis, e.g. conftest, before applying the changes.

Guide-level explanation

The existing CLI could be extended to support --json as an additional parameter.
Resulting in commands like:

  • terraspace plan <stack> --out <stack>.tfplan.json --json
  • terraspace all plan --json
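Under the hood this could lean on Terraform's own JSON support: `terraform show -json` renders a saved binary plan as JSON (available since 0.12). A sketch of the commands a `--json` flag might run (function name and file naming are illustrative):

```ruby
# Sketch: save a binary plan, then convert it to JSON with
# `terraform show -json`.
def plan_json_commands(stack, planfile: "#{stack}.tfplan")
  [
    "terraform plan -out #{planfile}",
    "terraform show -json #{planfile} > #{planfile}.json",
  ]
end

puts plan_json_commands("demo")
```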

Drawbacks

The feature is available in terraform >=0.12.
Support for older versions might be cumbersome since additional tooling would be needed.

Unresolved Questions

  • Naming convention for file names for the terraspace all plan --json call.

version `GLIBC_2.25' not found

My Environment

Operating System: CentOS 7
Terraform: 0.13.6
Terraspace: 0.5.10-20210107100209
Ruby:

Expected Behaviour

Following the install instructions at https://terraspace.cloud/docs/install/standalone/centos/:

terraspace version
0.5.10

Current Behavior

Installation seems to work, but running terraspace I get:

terraspace version

/opt/terraspace/embedded/bin/ruby: /lib64/libc.so.6: version `GLIBC_2.25' not found (required by /opt/terraspace/embedded/lib/libruby.so.2.7)

Step-by-step reproduction instructions

steps in https://terraspace.cloud/docs/install/standalone/centos

Code Sample

Solution Suggestion

Version the docker images with semver

Summary

Tag the Docker images published to Docker Hub with the semantic version of Terraspace so that workloads can pin the image to the relevant version in case we encounter a breaking bug (such as #132).

Motivation

This allows people to follow CICD best practices by explicitly specifying the version of the image they want to use and allow for testing of upgrades to the images to prevent breakage of existing workloads.

Guide-level explanation

Currently we have daily builds of images available at https://hub.docker.com/r/boltops/terraspace/tags?page=1&ordering=last_updated,
which gives us images with the tags "alpine", "ubuntu", "debian", etc.
Change the build process so that images are tagged with the semantic version as well as the distro, which would give us tags like:

  • 0.6.11-ubuntu
  • 0.6.11-alpine
  • 0.6.11-centos
  • 0.6.12-ubuntu
  • 0.6.12-alpine
  • 0.6.13-ubuntu, latest
  • etc.

This will allow our CICD pipelines using these images to specify a specific version, ensuring our builds stay functional until we've had the opportunity to test a new release with our pipelines to see if any issues occur.

Reference-level explanation

Not sure what CICD automation is being used here to create the images.

Drawbacks

You'll end up with more images hosted on docker hub

Unresolved Questions

Nil

terraspace init complains about backends (http)

My Environment

Operating System: Ubuntu 20.04 LTS
Terraform: 1.0.0
Terraspace: 0.6.13
Ruby:

Expected Behaviour

terraspace all init works correctly without reporting issues

Current Behavior

running

terraspace clean all -y
terraspace all init -y

when using the http backend to store state (GitLab cloud),
Terraspace complains that backend initialization is required. This seems to happen generally after a new version of Terraspace has come out.

Step-by-step reproduction instructions

$ terraspace all init -y
Building one stack to build all stacks
Building .terraspace-cache/ap-southeast-2/dev/stacks/kubernetes_vpc
Downloading tfstate files for dependencies defined in tfvars...
╷
│ Error: Backend initialization required, please run "terraform init"
│ 
│ Reason: Initial configuration of the requested backend "http"
│ 
│ The "backend" is the interface that Terraform uses to store state,
│ perform operations, etc. If this message is showing up, it means that the
│ Terraform configuration you're using is using a custom configuration for
│ the Terraform backend.
│ 
│ Changes to backend configurations require reinitialization. This allows
│ Terraform to set up the new configuration, copy existing state, etc. Please run
│ "terraform init" with either the "-reconfigure" or "-migrate-state" flags to
│ use the current configuration.
│ 
│ If the change reason above is incorrect, please verify your configuration
│ hasn't changed and try again. At this point, no changes to your existing
│ configuration or state have been made.

<.. snip for brevity .. happens to all stacks involved repeatedly ..>

Code Sample

None available that I can share

Solution Suggestion

not sure where to start

Output helper backend issue

My Environment

Operating System: CentOS 8
Terraform: v0.13.6
Terraspace: 0.6.6
Ruby: ruby 2.7.2p137 (2020-10-01 revision 5445e04352) [x86_64-linux]

Expected Behaviour

In stack-2-1, define some variables in variables.tf and set them in base.tfvars using the output helper pointing at stack-1 outputs. Use the variables in the stack-2-1 configuration as var.my_var1, etc.

Running terraspace up stack-1 and then terraspace up stack-2-1 should allow cross-stack variable passing.

Current Behavior

When the output helper is present in base.tfvars for stack-2-1, we get strange backend configuration behavior on terraspace up stack-1, completely blocking use of the output helper. It seems that the first steps of terraspace up stack-1 (dependency management for stack-2-1) inconsistently change the stack-1 backend, preventing those steps from connecting to the stack-1 backend to pull the tfstate.

terraspace init stack-1
terraspace up stack-1

returns S3 backend unreachable errors. Removing the output helper configuration from the stack-2-1 base.tfvars allows Terraspace to run up and init without errors. Running terraform directly in the cache dir also works without errors.

It is possible to get the right backend config with terraspace init stack-1, but on terraspace up stack-1 the dependency steps remove the previously created .terraspace_cache/.../stack-1/.terraform/terraform.tfstate, leaving the process without a backend config and returning a message to init... this happens in a loop!

Step-by-step reproduction instructions

note: no global config/terraform/backend.tf config set (use only stacks specific backend.tf config)

  1. app/stack/stack-1/config/terraform/backend.tf (custom stack-1 backend config)

  2. app/stack/stack-2-1/config/terraform/backend.tf (stack-2-1 backend config different from stack-1)
    same issue using config/terraform/backend.tf instead of con in step (2)

  3. app/stack/stack-1/output.tf some output variable consumed by stack-2-1

  4. Define stack-1 variables in app/stack/stack-2-1/variables.tf and use output helper configuration in app/stack/stack-2-1/tfvars/base.tfvars

  5. Put some code in app/stack/stack-2-1/main.tf using stack-1 output vars

  6. run terraspace up stack-1 and get errors about a non-existing S3 backend (check .terraspace_cache/.../stack-1/.terraform/terraform.tfstate and note the configuration is different from .terraspace_cache/.../stack-1/backend.tf)

  7. Remove any output helper configuration from app/stack/stack-2-1/tfvars/base.tfvars

  8. Run terraspace up stack-1 (note no more errors are present)

Code Sample

  1. app/stack/stack-1/main.tf -> define a resource
  2. app/stack/stack-1/config/terraform/backend.tf (specific S3 backend configuration)
  3. app/stack/stack-1/output.tf (output value for resource on step 1)
  4. app/stack/stack-2-1/config/terraform/backend.tf (different by stack-1 backend config in step2)
  5. app/stack/stack-2-1/variables.tf (define a variables to activate stack-2-1 stack-1 dependency)
  6. app/stack/stack-2-1/tfvars/base.tfvars (e.g. my_var1= <%= output('stack-1.my_var1') %>)
  7. app/stack/stack-2-1/main.tf (define a resource using as variables these defined on step 5)
  8. run terraspace up stack-1 or terraspace init stack-1 (verify errors)
  9. Remove output helper config from app/stack/stack-2-1/tfvars/base.tfvars
  10. run terraspace up stack-1 or terraspace init stack-1 (verify no error)

Solution Suggestion

No solution found as of now. There is a manual workaround (running the terraspace pull command in the cache dir works), but on the next Terraspace execution I again get errors blocking the output helper solution.

More info at the community link.

completion_script fails to execute

Checklist

  • Upgrade Terraspace: Are you using the latest version of Terraspace? This allows Terraspace to fix issues fast.
  • Reproducibility: Are you reporting a bug others will be able to reproduce and not asking a question. If you're unsure or want to ask a question, do so on StackOverflow.
  • Code sample: Have you put together a code sample to reproduce the issue and make it available? Code samples help speed up fixes dramatically. If it's an easily reproducible issue, then code samples are not needed. If you're unsure, please include a code sample.

My Environment

Operating System: MacOS Mojave 10.14.5
Terraform: v0.14.0-rc1
Terraspace: 0.5.6
Ruby: 2.7.1

Expected Behaviour

terraspace completion_script should execute successfully generating a script that can be used for shell completion

Current Behavior

➜ terraspace completion_script
Traceback (most recent call last):
9: from ~/.rbenv/versions/2.7.1/bin/terraspace:23:in `<main>'
8: from ~/.rbenv/versions/2.7.1/bin/terraspace:23:in `load'
7: from ~/.rbenv/versions/2.7.1/lib/ruby/gems/2.7.0/gems/terraspace-0.5.6/exe/terraspace:14:in `<top (required)>'
6: from ~/.rbenv/versions/2.7.1/lib/ruby/gems/2.7.0/gems/thor-1.0.1/lib/thor/base.rb:485:in `start'
5: from ~/.rbenv/versions/2.7.1/lib/ruby/gems/2.7.0/gems/terraspace-0.5.6/lib/terraspace/command.rb:59:in `dispatch'
4: from ~/.rbenv/versions/2.7.1/lib/ruby/gems/2.7.0/gems/thor-1.0.1/lib/thor.rb:392:in `dispatch'
3: from ~/.rbenv/versions/2.7.1/lib/ruby/gems/2.7.0/gems/thor-1.0.1/lib/thor/invocation.rb:127:in `invoke_command'
2: from ~/.rbenv/versions/2.7.1/lib/ruby/gems/2.7.0/gems/thor-1.0.1/lib/thor/command.rb:27:in `run'
1: from ~/.rbenv/versions/2.7.1/lib/ruby/gems/2.7.0/gems/terraspace-0.5.6/lib/terraspace/cli.rb:217:in `completion_script'
~/.rbenv/versions/2.7.1/lib/ruby/gems/2.7.0/gems/terraspace-0.5.6/lib/terraspace/completer/script.rb:4:in `generate': undefined local variable or method `logger' for Terraspace::Completer::Script:Class (NameError)

Step-by-step reproduction instructions

  1. rbenv install 2.7.1
  2. rbenv global 2.7.1
  3. gem install terraspace
  4. terraspace completion_script

Code Sample

Solution Suggestion

Suggestion for project structure

I apologize in advance for the length of this suggestion! I haven't fully looked into your tool yet, but your project structure looked similar to one I use, so I thought I'd share it here.

Summary

This is a suggestion about project structure. Based on my experience, it seems like a good balance of how to structure virtually any infrastructure-as-code in a VCS.

This model allows me to manage anything from one specific set of infrastructure, to a large mono-repo of many products, regions and cloud providers. I'm sure it's not perfect though, so I'm curious to hear where it can be improved!

Motivation

I spent a lot of time working and re-working on project structure over many different IaC projects, and so far this makes the most sense for everything I have worked on. It's not perfect, but I think the principles behind it make it intuitive and help it resist unnecessary complexity.

Some of the problems I faced while developing this structure:

  • When I need to "just get something done", I do not want to be forced to traverse 50 directories and read all the code just to figure out how to change a single variable's output somewhere.
  • My expectations should match reality. If I want to deploy only one thing, it should not break unexpectedly because somebody changed something else.
  • How do I minimize the sprawl of components across a VCS that can happen in large projects?
  • How can I keep my code DRY without it becoming annoying?
  • Can I separate different environments' configuration logically?
  • Can I dictate how the structure should be used to prevent these problems?
  • Will anyone other than me understand it?

Guide-level explanation

There are three major directory hierarchies, each with its own philosophy of how they are to be used.

$ tree
.
├── env
├── apps
└── libs

3 directories, 0 files

env/

When you run any program, it's running in an environment. It may be your IBM Laptop running Windows 10 on your home network in Northern Virginia, or a Debian virtual server running on an ARM chipset on AWS in China. The same program may run in both places, but how it runs will change based on its environment.

The env/ directory is the "environment" directory, or configuration directory. This is where environment-specific configuration lives - which is to say, any configuration that ever changes between any environment.

In this directory we name a new directory after our environment. This name can be incredibly generic, such as "aws". Or it can include more information, like the account name, and the product family name. But it shouldn't get too specific, because of the next bit of this hierarchy. So, we'll give this first directory a name with where the infrastructure is hosted, the product family name, and an indicator about which general environment this is:

.
├── env
│   └── aws-acmecorp-nonprod
├── apps
└── libs

4 directories, 0 files

Now, this might be enough for a lot of people. But if you think you might end up getting more specific with your configuration, you can deepen the hierarchy. The longer your infrastructure sticks around (and grows), the more likely this will be.

Let's say you want to deploy the same basic infrastructure to two regions: us-east-1 and eu-west-1. Sounds simple enough, let's just make two new directories! But at a certain point you realize that not all AWS resources are region-specific. Some are global (like IAM, or Route53). If I put that infrastructure configuration in one region's directory, I always have to deploy that region just to get the global changes I wanted. So to make it easier to only deploy the global changes, we make a third directory.

$ tree
.
├── env
│   └── aws-acmecorp-nonprod
│       ├── global
│       ├── eu-west-1
│       └── us-east-1
├── apps
└── libs

5 directories, 0 files

Now let's say that over time, your infrastructure is growing. You have a lot of Route53 resources and it takes a while to terraform apply. You'd like to be able to deploy just the Route53 changes and nothing else. But it's annoying to have to use the taint or -target options to Terraform. So we create a few more directories, and each of these will end up being its own Terraform root module & remote state. (The modules won't live in these directories, though; more on that later.)

$ tree
.
├── env
│   └── aws-acmecorp-nonprod
│       ├── eu-west-1
│       ├── global
│       │   ├── iam
│       │   └── route53
│       └── us-east-1
├── apps
└── libs

9 directories, 0 files

That's our basic configuration hierarchy! Now, what's the philosophy of the env/ tree?

The main rule is: no code, only configuration. If you have some code used to generate, parse, or load configuration, it should not live in the env/ directory. This keeps the scope of what's in this directory tighter. Any code will go in the other two directory hierarchies.

The second rule is, you should not have inter-dependencies on configuration outside of a single path. If you have configuration in env/aws-acmecorp-nonprod/global/iam/, it should not refer to configuration in, say, env/aws-acmecorp-nonprod/eu-west-1/. The reasoning there is pretty obvious: if you change something in one region, you don't want it to accidentally impact something in a different region.

If you really need to refer to configuration somewhere else, remember that you are essentially referring to external state, and that you can't necessarily expect how it will behave. Within the context of Terraform, you would typically use a terraform_remote_state data source to pull outside configuration at run-time. We assume each environment will have its own Terraform state for the basic principle of reliability: if one environment goes down, and the other environment depends on it, we're in trouble!

But does that mean you can't re-use configuration? Not at all; you can inherit configuration from parent directories. For example, maybe you'll always want some specific tags to be applied to all your infrastructure, no matter where it lives. You can put that configuration in a parent directory and refer to it when you run terraform.

$ tree
.
├── apps
├── env
│   └── aws-acmecorp-nonprod
│       ├── eu-west-1
│       ├── global
│       │   ├── iam
│       │   └── route53
│       ├── terraform.tfvars.json
│       └── us-east-1
│           ├── override.tf.json
│           └── terraform.tfvars.json -> ../terraform.tfvars.json
└── libs

9 directories, 2 files

You can actually do this in a couple ways. You can use symbolic links and have Terraform automatically pick up the inherited configuration:

$ pwd
/home/vagrant/foo/env/aws-acmecorp-nonprod/us-east-1
$ terraform plan

Or you can explicitly reference each configuration file:

$ pwd
/home/vagrant/foo/env/aws-acmecorp-nonprod/us-east-1
$ terraform plan -var-file=../terraform.tfvars.json -var-file=override.tf.json

Or you can use a small script or Makefile to generate inherited configuration on the fly. I do not recommend this behavior, though. A bug in your script could cause your configuration to be generated improperly and cause unexpected results. It's much better to use static configuration that can be peer-reviewed & tested and won't accidentally change later.

$ pwd
/home/vagrant/foo/env/aws-acmecorp-nonprod/us-east-1
$ tmp=`mktemp`
$ jq -s '.[0] * .[1]' ../terraform.tfvars.json override.auto.tfvars.json > $tmp
$ terraform plan -var-file=$tmp

Now that we have an understanding of the env/ hierarchy, let's move on.

apps/

We all understand the basic principle of an application: it's a collection of code that you execute somewhere. Often, you feed it configuration, and input, and you get output. But the core code of the application doesn't need to change based on where or how you run it. It's a complete, usable tool, that probably still needs to be told what specifically to do with it.

That's what each directory in the apps/ hierarchy basically is. Each subdirectory is an "application": a complete unit of working, executable code.

The philosophy is similar to before:

  1. This directory should be 95% code, and 5% default configuration.
  2. An apps/ directory should never depend on another apps/ directory.

The reasoning here, like before, is to compartmentalize the purpose of this directory to just be "an application". A pre-built, tested, working, individual application with as few external dependencies as possible. This makes it easier to reason about how it works, which makes it easier to maintain, and makes it more reliable.

If you're wondering: "Wait, few external dependencies?", don't worry. You can still load reusable modules from anywhere you want - particularly, from the next directory hierarchy. The main thing is, don't load or call anything directly (e.g. using relative paths) from a different apps/ directory.

In the context of Terraform, an apps/ directory is a root module. It includes your backend, your providers, your variable definitions, and loads modules. You run Terraform in an apps/ root module, passing in configuration at run-time.

$ pwd
/home/vagrant/foo/apps/aws-infra-region
$ terraform plan -var-file=../../env/aws-acmecorp-nonprod/terraform.tfvars.json -var-file=../../env/aws-acmecorp-nonprod/us-east-1/override.tf.json

But isn't that a bit complicated or error-prone to run? Well, it might be, except that we're not going to run it this way in practice. Instead, we create a Makefile in our environment directory.

$ tree
.
├── apps
│   └── aws-infra-region
│       ├── backend.tf
│       ├── modules.tf
│       ├── override.tf.json
│       ├── providers.tf
│       └── variables.tf
├── env
│   └── aws-acmecorp-nonprod
│       ├── eu-west-1
│       ├── global
│       │   ├── iam
│       │   └── route53
│       ├── terraform.tfvars.json
│       └── us-east-1
│           ├── Makefile
│           ├── override.tf.json
│           └── terraform.tfvars.json -> ../terraform.tfvars.json
└── libs
$ cd env/aws-acmecorp-nonprod/us-east-1/
$ pwd
/home/vagrant/foo/env/aws-acmecorp-nonprod/us-east-1
$ make plan
ENV=`pwd` && \
cd ../../../apps/aws-infra-region/ && \
terraform plan \
        -var-file=override.tf.json \
        -var-file=$ENV/../terraform.tfvars.json \
        -var-file=$ENV/override.tf.json
terraform plan -var-file=override.tf.json -var-file=/home/vagrant/foo/env/aws-acmecorp-nonprod/us-east-1/../terraform.tfvars.json -var-file=/home/vagrant/foo/env/aws-acmecorp-nonprod/us-east-1/override.tf.json

As you can see, we can now keep our configuration separate from our reusable module, and run a deployment on a specific environment, without needing to remember anything other than the directory and make plan.

You can even add more reliability & simplicity here. Just have your apps/ root module load the AWS account ID and region from variables, and keep the values in your configuration files: aws_account_id in your terraform.tfvars.json file, and aws_region in your override.tf.json file. If for some reason you accidentally call Terraform with AWS credentials for the wrong region, Terraform will die complaining about not being able to access the right region.

And of course, each apps/ sub-directory can have its own testing/ directory with some sample configs and a Makefile.

The next and final hierarchy is simple, but helps us manage the reusable code a bit more.

libs/

We're familiar with how applications work: they are compiled with certain instructions specific to them. But if you need to maintain a larger set of functions which aren't specific to this one application, that's where libraries come in. They are reusable sets of code which can be included in applications, but they aren't applications themselves.

The sub-directories in libs/ are the same: reusable, independent sets of code. These should have virtually no default configuration at all and be limited in scope. The application should just include these and then use its own default configuration with them. And note that the libraries can't be executed themselves. Sounds a lot like a Terraform sub-module!

Like in the other hierarchies, you should limit inter-dependencies here as much as possible. An application can load as many libs/ sub-modules as it wants, but if libs/ sub-directories start depending on other libs/, it starts to become more difficult to reason about how things work.

You can also deepen the hierarchy here. If you end up with 20 libs/ Terraform sub-modules, you can put a bunch of them into one sub-directory. It won't make any difference to your apps/ root module. You can also include a testing/ sub-directory for each, to validate your sub-module with some default configuration and another Makefile.


Once all this is implemented, you will have some fairly DRY code that is easy to reason about, easy to maintain, and easy to use.

Everything described above can be extended to whatever kind of Infrastructure-as-Code you need to maintain. For example, you may end up with a Packer configuration and Makefile as another apps/ directory. Or maybe you keep some Ansible roles in libs/ and call them from an ansible apps/ directory.

Are there any other useful directories we can include in our project?

bin/

Ah, the old stand-by. Here you can keep the various scripts or wrappers you might need to work within the above structure. Sure, you could make a new directory in apps/ for each one, but let's not get carried away :-) Maybe some scripts to help you run your code in a CI/CD pipeline? Or maybe you'll end up with some sort of generic tool that helps you call commands in your project structure in a certain way...


Drawbacks

There is one big down-side to the libs/ directory. Terraform currently has bugs which prevent it from properly using trees of modules which use relative links to refer to one another. Referring to sub-modules in libs/ using relative paths will work from apps/ root modules, but if a libs/ module tries to refer to yet another module via relative paths, Terraform won't be able to resolve the path. This is due to how Terraform downloads and runs modules in its own .terraform/modules/ directory structure, and does not properly deduce the path based on the original relative path.


Unresolved Questions

As is obvious from Terragrunt and Terraspace, you still need to use the project structure with some kind of "wrapper", or the long paths and command-lines become annoying.

I show how you can use Makefiles with the project structure to deploy changes simply and reliably. But Make eventually becomes complicated and clunky to use this way, so I ended up writing wrappers for apps that I use with this project structure. The wrappers allow you to specify a directory to change to before execution, a series of configuration files to apply one after the other, the ability to load options from yet another json file, etc. I'm pretty sure we've all written one or two of these :-)
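As a rough Ruby sketch of such a wrapper (names and behavior hypothetical, not any of the tools mentioned above): it changes into a root-module directory and applies a series of var files in order, with a dry-run mode that just shows the command.

```ruby
# Hypothetical deploy wrapper: change into a root-module directory and
# build a terraform command that applies a series of var files in order.
def terraform_command(var_files)
  ["terraform", "apply"] + var_files.map { |f| "-var-file=#{f}" }
end

def deploy(dir, var_files, dry_run: false)
  cmd = terraform_command(var_files)
  Dir.chdir(dir) do
    return cmd.join(" ") if dry_run  # just show what would run
    system(*cmd)                     # otherwise execute it
  end
end
```

For example, `deploy("apps/frontend/prod", ["base.tfvars.json", "prod.tfvars.json"], dry_run: true)` returns the command it would have run, which is handy in CI logs.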

Layering for locals

Summary

Environment specific locals: devel, prod, ..

Motivation

Sometimes locals contain complex data structures defining infrastructure configuration. Layering locals per environment can be very useful to define different structures (number of elements, sizes, etc.).
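As an illustration of the idea (plain Ruby, not Terraspace's actual layering implementation), a base structure can be deep-merged with an environment-specific overlay so each environment only declares what differs:

```ruby
# Illustrative deep merge of a base config with an env-specific overlay.
# Terraspace's real layering works on tfvars files; this just shows the idea.
def deep_merge(base, overlay)
  base.merge(overlay) do |_key, old_val, new_val|
    old_val.is_a?(Hash) && new_val.is_a?(Hash) ? deep_merge(old_val, new_val) : new_val
  end
end

base = { cluster: { nodes: 2, size: "small" }, backups: true }
prod = { cluster: { nodes: 10, size: "large" } }
merged = deep_merge(base, prod)
# merged => { cluster: { nodes: 10, size: "large" }, backups: true }
```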

How can I reference the output of a stack from a different TS_ENV

How can I use the output of a stack from a different TS_ENV?

For example, I'm not using a multi-account setup, and I want to have some common IAM resources.

Or I have only one account for user management, and in other accounts I want to reference its users in an assume-role policy.

Is it possible to use test-kitchen as the test framework

One of the blockers here at the consultancy I work for is that the Terraform test framework we use is test-kitchen, or more specifically kitchen-terraform. Is it possible to use test-kitchen instead of rspec-terraspace, or to integrate the two but drive it from test-kitchen?

Using the libvirt provider, which is not in the Terraform registry, throws `Failed to query available provider packages`

Checklist

  • Upgrade Terraspace: Are you using the latest version of Terraspace? This allows Terraspace to fix issues fast.
  • Reproducibility: Are you reporting a bug others will be able to reproduce and not asking a question. If you're unsure or want to ask a question, do so on StackOverflow.
  • Code sample: Have you put together a code sample to reproduce the issue and make it available? Code samples help speed up fixes dramatically. If it's an easily reproducible issue, then code samples are not needed. If you're unsure, please include a code sample.

My Environment

Software Version
Operating System CentOS 8
Terraform v0.13.5
Terraspace v0.5.10
Ruby ruby 2.7.2p137 (2020-10-01 revision 5445e04352) [x86_64-linux]

Expected Behaviour

I'm expecting/hoping to be able to use this libvirt provider with Terraspace. For now it can only be used as a local provider, until the project uploads it to the HashiCorp Terraform registry.

Current Behavior

When executing terraspace up STACK_NAME, Terraspace throws the following error:

Error: Failed to query available provider packages

Could not retrieve the list of available versions for provider
hashicorp/libvirt: provider registry.terraform.io/hashicorp/libvirt was not
found in any of the search locations

- /root/terraform-providers/

Re-producing

  1. execute terraspace new project WHATEVER_NAME
  2. execute terraspace new stack test-k3s-cluster
  3. execute terraspace new module k3s-cluster

The files for the stack test-k3s-cluster should contain:

  • Gist main.tf, find it here
  • Gist variables.tf, find it here
  • outputs.tf, that one is empty so no Gist for that

The files for the module k3s-cluster should contain:

  • Gist main.tf, find it here
  • variables.tf, contains the same as the one in the stack
  • outputs.tf, that one is empty so no Gist for that

The Terraspace/Terraform config files should contain:

  • Gist backend.tf, find it here
  • Gist provider.tf, find it here
  • Gist terraform.tf, find it here

The project structure is:

.
├── app
│   ├── modules
│   │   └── k3s-cluster
│   │       ├── conf
│   │       │   ├── network-config.yaml
│   │       │   └── user-data.yaml
│   │       ├── main.tf
│   │       ├── outputs.tf
│   │       └── variables.tf
│   └── stacks
│       └── test-k3s-cluster
│           ├── main.tf
│           ├── outputs.tf
│           └── variables.tf
├── config
│   ├── app.rb
│   └── terraform
│       ├── backend.tf
│       ├── provider.tf
│       └── terraform.tf
├── Gemfile
├── Gemfile.lock
├── README.md
└── Terrafile
  4. execute terraspace up test-k3s-cluster and the error should be seen after some time

Logs

The below is the Terraspace run log

Initializing modules...
- k3s-cluster in ../../modules/k3s-cluster

Initializing the backend...

Successfully configured the backend "remote"! Terraform will automatically
use this backend unless the backend configuration changes.

Initializing provider plugins...
- Finding local/providers/libvirt versions matching "~> 0.6.3"...
- Finding latest version of hashicorp/libvirt...
- Installing local/providers/libvirt v0.6.3...
- Installed local/providers/libvirt v0.6.3 (unauthenticated)
Initializing modules...
- k3s-cluster in ../../modules/k3s-cluster

Initializing the backend...

Successfully configured the backend "remote"! Terraform will automatically
use this backend unless the backend configuration changes.

Initializing provider plugins...
- Finding local/providers/libvirt versions matching "~> 0.6.3"...
- Finding latest version of hashicorp/libvirt...
- Using local/providers/libvirt v0.6.3 from the shared cache directory

Other comments

  • I'm also befuddled as to why the cache is created at this path: .terraspace-cache/us-east-1/dev/stacks/test-k3s-cluster/... as I have no region specified anywhere.

  • Also, it's as if Terraspace thinks it should interact with AWS, as the following warnings are displayed.

Building .terraspace-cache/us-east-1/dev/stacks/test-k3s-cluster
INFO: You're missing AWS credentials. Only local services are currently available
INFO: You're missing AWS credentials. Only local services are currently available
Built in .terraspace-cache/us-east-1/dev/stacks/test-k3s-cluster
Syncing to Terraform Cloud: test-k3s-cluster => hospites-dev
** Also likely the culprit of the region mishap described above
  • It's also strange that Terraspace believes I'm trying to use Terraform Cloud, as the only Terraform Cloud feature I'm using is remote state.

Looking forward to all the help I can get. Any help is appreciated.

Thank you very much

Terraspace show --json adds newlines to json output

I use the command TS_ENV=test terraspace show aks --plan aks.tfplan --json to produce a JSON file, which I then parse with jq to show some statistics about the changes to be made in a merge request. However, terraspace show --json adds several newlines in (it seems) random places in this JSON file. When I do the same in the cache/stack dir with terraform show, it does not add any newlines.

Terraspace:

TS_ENV=test terraspace init aks && TS_ENV=test terraspace plan aks --out aks.tfplan && TS_ENV=test terraspace show aks --plan aks.tfplan --json | wc -l

3

Terraform (in the cache/stack directory):

terraform show -json aks.tfplan | wc -l

1 

I only expect one newline at the end of the file. Terraspace: 0.6.11, Terraform: 1.0.2
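As a side note, newlines between JSON tokens are legal whitespace, so a standard parser still accepts such a file; re-serializing normalizes it to one line. A plain Ruby sketch of that workaround (the sample input is invented):

```ruby
require "json"

# If the stray newlines fall between JSON tokens (not inside string values),
# the file is still valid JSON; parsing and re-emitting it compactly yields
# the single line downstream tools expect.
raw = "{\n \"changes\": [1,\n 2,\n 3]\n}\n\n"
compact = JSON.generate(JSON.parse(raw))
compact # => "{\"changes\":[1,2,3]}"
```

The same normalization is available from the shell via `jq -c . file`.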

Add instance option to `terraspace all`

Summary

Add an --instance=INSTANCE option to terraspace all; currently this useful feature is only available for individual stack deployments.

Motivation

The instance option is especially useful, e.g. for pull requests. My CI setup uses the current commit ID as the instance and names resources accordingly. Currently I need to bring up all stacks individually, because I can't use this feature with terraspace all.

cluster_name = "cluster-<%= expansion(':ENV') %><%= options[:instance] ? '-' + options[:instance] : nil %>"
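Until terraspace all supports the flag, one workaround (sketch only; stack names hypothetical) is to drive the individual deploys from a small script that passes --instance to each stack in dependency order:

```ruby
# Workaround sketch: build `terraspace up` commands per stack with the
# instance flag, until `terraspace all up --instance=...` exists.
def up_commands(stacks, instance)
  stacks.map { |s| "terraspace up #{s} --instance=#{instance} -y" }
end

# up_commands(%w[vpc mysql redis], ENV["CI_COMMIT_SHORT_SHA"]).each { |cmd| system(cmd) }
```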

Guide-level explanation

https://terraspace.cloud/docs/tfvars/instance-option/

Reference-level explanation

https://terraspace.cloud/docs/tfvars/instance-option/

Drawbacks

None.

Unresolved Questions

None.

Recurring issue with gem dependencies (faraday)

Checklist

  • Upgrade Terraspace: latest version
  • Reproducibility: happens with almost every terraspace update
  • Code sample: not specific to any code

My Environment

Software Version
Operating System GNU/Linux Ubuntu 21.04
Terraform 1.0.4
Terraspace 0.6.13-20210902100211
Ruby 2.7.2p137

Expected Behaviour

When there is a Terraspace update, I expect only some basic dependency updates to be needed.

Current Behavior

When there is a Terraspace update, the installation requires some updates (OK), but most of the time I need to clear dependencies to get a valid installation:

"You have already activated faraday 1.7.0, but your Gemfile requires faraday 0.17.4. Prepending bundle exec to your command may solve this."

The issue seems to be that faraday is required in two different versions (0.17.4 and 1.7.0).

Step-by-step reproduction instructions

  1. Create a project with terraspace, make sure everything runs smoothly
  2. Wait for a terraspace update
  3. Some dependency issues will appear; update them (individually or globally, whatever) with "bundle update X"
  4. Run terraspace check, see the error with faraday 0.17.4 & 1.7.0 in the dependencies
  5. Run bundle clean --force and bundle install to fix the issue

Solution Suggestion

Pin some dependencies, or at least check compatibility. In #135 it seems there is the same issue, but upgrading Ruby seems to solve it.

Note

I'm not a ruby developer...

config/terraform content should only be copied to stacks

Checklist

  • Upgrade Terraspace: Are you using the latest version of Terraspace? This allows Terraspace to fix issues fast.
  • Reproducibility: Are you reporting a bug others will be able to reproduce and not asking a question. If you're unsure or want to ask a question, do so on StackOverflow.
  • Code sample: Have you put together a code sample to reproduce the issue and make it available? Code samples help speed up fixes dramatically. If it's an easily reproducible issue, then code samples are not needed. If you're unsure, please include a code sample.

My Environment

Software Version
Operating System MacOS Mojave 10.14.5
Terraform v0.14.0
Terraspace 0.5.7
Ruby 2.7.1

Expected Behaviour

Content of config/terraform should be copied to built stacks only.

Current Behavior

Content of config/terraform is copied not only to built stacks, but also to Terrafile vendor-sourced modules. I believe this is a bug. For example, config/terraform/backend.tf is being copied into .terraspace-cache/us-east-2/dev/modules/vpc because it is a vendor-sourced module (meaning a module declared in the Terrafile), but not into .terraspace-cache/us-east-2/dev/modules/my-custom-module. backend.tf has no real effect on the project, but a versions.tf file which specifies required_providers causes errors in the validate, plan, etc. commands.

Step-by-step reproduction instructions

Run the following commands

terraspace new project copy-config-bug
cd copy-config-bug

echo 'mod "vpc", source: "terraform-aws-modules/vpc/aws", version: "2.64.0"' > Terrafile
echo 'terraform { \nrequired_providers { \naws = "~> 3.19" \n} \n}' > config/terraform/version.tf
terraform fmt config/terraform/version.tf
rm config/terraform/backend.tf

terraspace new module m1
terraspace new stack s1

terraspace bundle
terraspace build

When you list the contents of the modules you will find that version.tf is excluded from m1 but included in vpc. I believe it should be excluded from both:

tree .terraspace-cache/us-east-1/dev/modules
├── m1 <-- version.tf is correctly excluded from m1
│   ├── main.tf
│   ├── outputs.tf
│   └── variables.tf
└── vpc
    ├── CHANGELOG.md
    ├── LICENSE
    ├── Makefile
    ├── README.md
    ├── examples
    │   ├── ...
    ├── main.tf
    ├── ...
    ├── variables.tf
    ├── version.tf <-- should not be here
    ├── versions.tf

Code Sample

Solution Suggestion

I believe that the contents of config/terraform should only be copied across to stacks that are built.

terraspace test with steps

Hello, I'm writing here because I didn't receive an answer on the forums.
Mainly, I'd like to use the same approach I used with Terratest: splitting the test into stages by
passing a variable that defines the stage (create, test, destroy), as in the docs.
Every time I execute terraspace test, another plan is applied and another resource is created.
Can someone give me some tips on this? Sorry, I'm quite new to Ruby.

State download conflict with multiple terraspace executions running simultaneously

Checklist

  • Upgrade Terraspace: Are you using the latest version of Terraspace? This allows Terraspace to fix issues fast. There's an Upgrading Guide: https://terraspace.cloud/docs/misc/upgrading/
  • Reproducibility: Are you reporting a bug others will be able to reproduce and not asking a question. If you're unsure or want to ask a question, do so on https://community.boltops.com
  • Code sample: Have you put together a code sample to reproduce the issue and make it available? Code samples help speed up fixes dramatically. If it's an easily reproducible issue, then code samples are not needed. If you're unsure, please include a code sample.

Difficult to reproduce, but it comes down to the fact that the environment is not getting specified.

My Environment

Software Version
Operating System Ubuntu 20.04
Terraform 1.0.0
Terraspace 0.6.11
Ruby 2.7.0

Expected Behaviour

Multiple invocations on the same stack with different environments should be able to execute simultaneously on the same machine.

I've got a complex stack of 8 stages that is being deployed to multiple environments (dev, test, staging, prod). Running an apply on multiple environments on the same machine simultaneously causes the executions to get corrupted.

Current Behavior

Depending on your timing, the code may work, or you'll get ERB errors trying to resolve values from the state.

The issue is caused by how terraspace stores its material in /tmp for each invocation.

Errno::ENOENT: No such file or directory @ rb_sysopen - /tmp/terraspace/remote_state/stacks/stackname/state.json

The generated plan files are unique enough, since their filenames have a random hex string appended, as per:

lib/terraspace/cli/up.rb:      "#{Terraspace.tmp_root}/plans/#{@mod.name}-#{@@random}.plan"

Step-by-step reproduction instructions

Difficult to reproduce since it relies a bit on race conditions, but take a complex stack and run

terraspace all init --exit-on-fail -y && terraspace all plan -y --exit-on-fail

For multiple environments in different shells on the same machine.

Code Sample

N/A

Solution Suggestion

The downloaded state files, plans, and other material stored in /tmp/terraspace should be separated by environment as well. Files common to all Terraspace invocations can still be placed in the root (since you want to be able to run subsequent plan/refresh/validate/up after running an init, they need a predictable path for their material). Splitting by environment alone should be sufficient to allow multiple simultaneous invocations.
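A sketch of the suggested layout (the path scheme is hypothetical, not Terraspace's actual code): inserting the environment as a path segment makes each invocation's material distinct.

```ruby
# Hypothetical: scope per-invocation material under the environment so
# concurrent runs for dev/test/prod don't clobber each other's files.
def tmp_path(env, *parts)
  File.join("/tmp/terraspace", env, *parts)
end

dev_state  = tmp_path("dev",  "remote_state", "stacks", "stackname", "state.json")
prod_state = tmp_path("prod", "remote_state", "stacks", "stackname", "state.json")
```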

Add integration GitHub with slack

Summary

Add the ability to subscribe to the repository in slack. https://github.com/integrations/slack

Motivation

I would like to be able to receive updates about the latest versions, but currently it doesn't work :c

/github subscribe boltops-tools/terraspace
Either the app isn't installed on your repository or the repository does not exist. Install it to proceed.
Note: You need to be an organization owner to install the app (or ask one to install it for you).

This would let all users monitor repository activity from Slack channels.

Guide-level explanation

Unfortunately, I have not found documentation on how to set this up. The only message is the one given above.

Reference-level explanation

Drawbacks

Unresolved Questions

Add ERB template support to the command helpers

Summary

Add the ability to do ERB template resolution on the fields (in particular the args field) passed to the command definition in the config/args/terraform.rb file.

Motivation

I'm hoping to be able to generate suitable output Terraform plan files when using terraspace all plan with an individual plan name for each stack/instance to review the changes and to feed the suitable plans in to the apply/up commands.

Guide-level explanation

In particular, I'd be interested in doing the below so that I can add the plan files to my MRs/PRs via the CI/CD process (GitLab).

command("plan",
  args: ["-lock-timeout=22m", '-out=<%= expansion(":ENV-:ACCOUNT-:REGION-:MOD_NAME-:INSTANCE") %>.tfplan.json'],
)

Can't run the "demo" example on Mac

I followed the "Getting Started" guide for AWS and installed Terraspace (0.6.13) using brew.
I was able to execute

terraspace check_setup
Detected Terrspace version: 0.6.13
Detected Terraform bin: /usr/local/bin/terraform
Detected Terraform v1.0.4
Terraspace requires Terraform v0.12.x and above
You're all set!

And after that, I've executed

terraspace new project infra --plugin aws --examples
=> Creating new project called infra.
      create  infra
      create  infra/.gitignore
      create  infra/Gemfile
      create  infra/README.md
      create  infra/Terrafile
      create  infra/config/app.rb
       exist  infra
      create  infra/config/terraform/backend.tf
      create  infra/config/terraform/provider.tf
=> Creating test for new module: example
      create  infra/app/modules/example
      create  infra/app/modules/example/main.tf
      create  infra/app/modules/example/outputs.tf
      create  infra/app/modules/example/variables.tf
=> Creating new stack called demo.
      create  infra/app/stacks/demo
      create  infra/app/stacks/demo/main.tf
      create  infra/app/stacks/demo/outputs.tf
      create  infra/app/stacks/demo/variables.tf
=> Installing dependencies with: bundle install
Fetching gem metadata from https://rubygems.org/........
Resolving dependencies.....
Fetching rake 13.0.6
Installing rake 13.0.6
Using concurrent-ruby 1.1.9
Using zeitwerk 2.4.2
Using public_suffix 4.0.6
Using aws-eventstream 1.1.1
Using jmespath 1.4.0
Using memoist 0.16.2
Using multipart-post 2.1.1
Using digest-crc 0.6.4
Using unf_ext 0.0.7.7
Using eventmachine 1.2.7
Using google-protobuf 3.17.3 (universal-darwin)
Using jwt 2.2.3
Using multi_json 1.15.0
Using os 1.1.1
Using httpclient 2.8.3
Using mini_mime 1.1.0
Using rainbow 3.0.0
Using uber 0.1.0
Using retriable 3.1.2
Fetching minitest 5.14.4
Using bundler 2.2.25
Using text-table 1.2.4
Using google-cloud-errors 1.1.0
Using graph 2.10.0
Using tilt 2.0.10
Using rspec-support 3.10.2
Using rubyzip 2.3.2
Using thor 1.1.0
Using tty-tree 0.4.0
Using i18n 1.8.10
Using tzinfo 2.0.4
Using addressable 2.8.0
Using aws-sigv4 1.2.4
Using gcp_data 0.2.0
Using diff-lcs 1.4.4
Using unf 0.1.4
Using eventmachine-tail 0.6.5
Using trailblazer-option 0.1.1
Using dsl_evaluator 0.1.3
Using rspec-core 3.10.1
Using rspec-expectations 3.10.1
Fetching racc 1.5.2
Using declarative 0.0.20
Using domain_name 0.5.20190701
Using representable 3.1.1
Using http-cookie 1.0.4
Using googleapis-common-protos-types 1.1.0
Fetching aws-partitions 1.489.0
Fetching faraday 0.17.4
Using timeliness 0.3.10
Fetching rexml 3.2.5
Using rspec-mocks 3.10.2
Using deep_merge 1.2.1
Using azure_info 0.1.2
Fetching webrick 1.7.0
Using grpc 1.38.0 (universal-darwin)
Using rspec 3.10.0
Using googleapis-common-protos 1.3.11
Using rhcl 0.1.0
Using grpc-google-iam-v1 0.6.11
Using hcl_parser 0.1.0
Installing aws-partitions 1.489.0
Installing minitest 5.14.4
Fetching aws-sdk-core 3.119.1
Installing faraday 0.17.4
Installing webrick 1.7.0
Installing rexml 3.2.5
Fetching activesupport 6.1.4.1
Installing racc 1.5.2 with native extensions
Using faraday-cookie_jar 0.0.7
Using ms_rest 0.7.6
Using signet 0.15.0
Using google-cloud-env 1.5.0
Using googleauth 0.17.0
Using google-cloud-core 1.6.0
Fetching faraday_middleware 0.14.0
Using google-apis-core 0.4.1
Using ms_rest_azure 0.12.0
Using google-apis-iamcredentials_v1 0.6.0
Using azure_mgmt_resources 0.18.2
Using azure_mgmt_storage 0.23.0
Using google-apis-storage_v1 0.6.0
Fetching gapic-common 0.3.4
Using google-cloud-storage 1.34.1
Installing aws-sdk-core 3.119.1
Installing activesupport 6.1.4.1
Installing faraday_middleware 0.14.0
Installing gapic-common 0.3.4
Using aws-sdk-dynamodb 1.62.0
Using aws-sdk-kms 1.46.0
Using aws-sdk-secretsmanager 1.48.0
Using aws-sdk-ssm 1.115.0
Using aws_data 0.1.1
Using cli-format 0.2.0
Using render_me_pretty 0.8.3
Using rspec-terraspace 0.3.0
Fetching google-cloud-secret_manager-v1beta1 0.8.0
Fetching google-cloud-secret_manager-v1 0.8.0
Fetching aws-sdk-s3 1.99.0
Fetching nokogiri 1.12.3 (x86_64-darwin)
Installing google-cloud-secret_manager-v1beta1 0.8.0
Installing google-cloud-secret_manager-v1 0.8.0
Using google-cloud-secret_manager 1.1.2
Using terraspace_plugin_google 0.3.0
Installing aws-sdk-s3 1.99.0
Using s3-secure 0.5.1
Using terraspace_plugin_aws 0.3.0
Installing nokogiri 1.12.3 (x86_64-darwin)
Using terraspace-bundler 0.4.0
Fetching azure-core 0.1.15
Installing azure-core 0.1.15
Fetching azure-storage-common 1.1.0
Installing azure-storage-common 1.1.0
Fetching azure-storage-blob 1.1.0
Installing azure-storage-blob 1.1.0
Using terraspace_plugin_azurerm 0.3.1
Using terraspace 0.6.13
Bundle complete! 3 Gemfile dependencies, 101 gems now installed.
Use `bundle info [gemname]` to see where a bundled gem is installed.
================================================================
Congrats! You have successfully created a terraspace project.
Check out the created files. Adjust to the examples and then deploy with:

    cd infra
    terraspace up demo -y   # to deploy
    terraspace down demo -y # to destroy

More info: https://terraspace.cloud/

Note that as part of this execution it seems like some components were installed, but it finished successfully.

When I tried to launch the "demo" stack, I received the following error:

terraspace up demo
You have already activated faraday 1.7.0, but your Gemfile requires faraday 0.17.4. Prepending `bundle exec` to your command may solve this.
Traceback (most recent call last):
	9: from /opt/terraspace/embedded/bin/terraspace:23:in `<main>'
	8: from /opt/terraspace/embedded/bin/terraspace:23:in `load'
	7: from /opt/terraspace/embedded/lib/ruby/gems/2.7.0/gems/terraspace-0.6.13/exe/terraspace:11:in `<top (required)>'
	6: from /opt/terraspace/embedded/lib/ruby/2.7.0/rubygems/core_ext/kernel_require.rb:83:in `require'
	5: from /opt/terraspace/embedded/lib/ruby/2.7.0/rubygems/core_ext/kernel_require.rb:83:in `require'
	4: from /opt/terraspace/embedded/lib/ruby/gems/2.7.0/gems/terraspace-0.6.13/lib/terraspace.rb:5:in `<top (required)>'
	3: from /opt/terraspace/embedded/lib/ruby/2.7.0/rubygems/core_ext/kernel_require.rb:83:in `require'
	2: from /opt/terraspace/embedded/lib/ruby/2.7.0/rubygems/core_ext/kernel_require.rb:83:in `require'
	1: from /opt/terraspace/embedded/lib/ruby/gems/2.7.0/gems/terraspace-0.6.13/lib/terraspace/autoloader.rb:3:in `<top (required)>'
/opt/terraspace/embedded/lib/ruby/gems/2.7.0/gems/terraspace-0.6.13/lib/terraspace/autoloader.rb:3:in `require': cannot load such file -- zeitwerk (LoadError)

Note that now I receive the same error even for executing terraspace check_setup.

Why is terraspace deeply integrated to 'cloud provider' backend?

I really like the approach of Terraspace, but I am not quite sure if I am doing it wrong or if this is by design. I would like to use Terraspace without any cloud provider. I am using an HTTP backend, and currently it seems only AWS, Azure, GCP, TFC, or TFE are supported. Am I right in assuming this?

Change daily builds to only build new versions by Semver instead of daily

My Environment

Our terraspace stacks only work with version 0.6.11 (as per #132 and #133).
We were building Docker images using the Debian package, but the daily builds have rotated this version out of the available packages in the apt repo.
Daily builds are a great idea, but pointless unless there are daily changes. Just rely on changes to the code to trigger a new package build.

Expected Behaviour

apt install terraspace=0.6.11-20210726100211

works.

Current Behavior

Version doesn't exist anymore

Running terraspace using shell script

Hey!!
I wanted to know: is it possible to run Terraspace using shell scripts, taking the input variable from a shell script command and deploying the services? My main intention in using Terraspace is deploying multiple modules at once.

This is possible with Terraform, but with Terraspace I'm not getting it.

Output Helper with Instance deployment

Hi,

Is it possible to use the output helper with an instance deployment?

Because the output helper uses the following syntax:

output("DEPENDANT_STACK.OUTPUT_KEY", options={})

And the helper will look for the variable in .terraspace-cache/region/env/stacks/DEPENDANT_STACK, but in an instance deployment the folder created is .terraspace-cache/region/env/stacks/DEPENDANT_STACK.INSTANCE.

How can I use the output helper in this kind of deployment? If it's not possible, I would like to request this feature, and in the meanwhile I would highly appreciate any ideas for a workaround.

Thanks

check_setup.rb incorrect version check

Checklist

  • Upgrade Terraspace: Are you using the latest version of Terraspace? This allows Terraspace to fix issues fast. There's an Upgrading Guide: https://terraspace.cloud/docs/misc/upgrading/
  • Reproducibility: Are you reporting a bug others will be able to reproduce and not asking a question. If you're unsure or want to ask a question, do so on https://community.boltops.com
  • Code sample: Have you put together a code sample to reproduce the issue and make it available? Code samples help speed up fixes dramatically. If it's an easily reproducible issue, then code samples are not needed. If you're unsure, please include a code sample.

My Environment

Software Version
Operating System Mac Mojave
Terraform 1.0.0
Terraspace 0.6.10
Ruby 2.7.1

Expected Behaviour

When executing a terraspace project using terraform 1.0.0 check_setup succeeds.

Current Behavior

Detected Terrspace version: 0.6.10
Detected Terraform bin: /usr/local/bin/terraform
Detected Terraform v1.0.0
Terraspace requires Terraform v0.12.x and above
The installed version of terraform may not work with terraspace.
Recommend using at least terraform v0.12.x
If you would like to bypass this check. Use TS_VERSION_CHECK=0

Step-by-step reproduction instructions

Setup a new terraspace project which uses terraform 1.0.0, then run a terraspace command such as bundle exec terraspace all init

Code Sample

N/A

Solution Suggestion

The logic in check_setup.rb must be wrong somewhere. Unfortunately my Ruby skills are lacking, so I can't spot what is wrong 😞
