hashicorp / terraform-provider-vault

Terraform Vault provider

Home Page: https://www.terraform.io/docs/providers/vault/

License: Mozilla Public License 2.0


terraform-provider-vault's Introduction

Terraform Provider

Maintainers

This provider plugin is maintained by the Vault team at HashiCorp.

Best Practices

We recommend that you avoid placing secrets in your Terraform configuration or state file wherever possible, and that if you do place them there, you take steps to reduce and manage the risk. We have created a practical guide on how to do this with our open source versions in Best Practices for Using HashiCorp Terraform with HashiCorp Vault:

Best Practices for Using HashiCorp Terraform with HashiCorp Vault

This webinar walks you through how to protect secrets when using Terraform with Vault. Additional security measures are available in paid Terraform versions as well.

Requirements

  • Terraform 0.12.x or later; we recommend using the latest stable release whenever possible.
  • Go 1.20 (to build the provider plugin)

Building The Provider

Clone repository to: $GOPATH/src/github.com/hashicorp/terraform-provider-vault

$ mkdir -p $GOPATH/src/github.com/hashicorp; cd $GOPATH/src/github.com/hashicorp
$ git clone git@github.com:hashicorp/terraform-provider-vault

Enter the provider directory and build the provider:

$ cd $GOPATH/src/github.com/hashicorp/terraform-provider-vault
$ make build

Developing the Provider

If you wish to work on the provider, you'll first need Go installed on your machine (version 1.20+ is required). You'll also need to correctly set up a GOPATH, as well as add $GOPATH/bin to your $PATH.

To compile the provider, run make build. This will build the provider and put the provider binary in the $GOPATH/bin directory.

$ make build
...
$ $GOPATH/bin/terraform-provider-vault
...

To test the provider, simply run make test.

$ make test

In order to run the full suite of Acceptance tests, you will need the following:

Note: Acceptance tests create real resources, and often cost money to run.

  1. A running Vault instance to test against
  2. The following environment variables are set:
    • VAULT_ADDR - location of Vault
    • VAULT_TOKEN - token used to query Vault. These tests do not attempt to read ~/.vault-token.
  3. The following environment variables may need to be set depending on which acceptance tests you wish to run. There may be additional variables for specific tests. Consult the specific test(s) for more information.
    • AWS_ACCESS_KEY_ID
    • AWS_SECRET_ACCESS_KEY
    • GOOGLE_CREDENTIALS the contents of a GCP creds JSON, alternatively read from GOOGLE_CREDENTIALS_FILE
    • RMQ_CONNECTION_URI
    • RMQ_USERNAME
    • RMQ_PASSWORD
    • ARM_SUBSCRIPTION_ID
    • ARM_TENANT_ID
    • ARM_CLIENT_ID
    • ARM_CLIENT_SECRET
    • ARM_RESOURCE_GROUP
  4. Run make testacc
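Putting the basic pieces together, a minimal local run might look like the following sketch (the dev-server address and root token are placeholder values, and a separately started Vault dev server is assumed):

```shell
# In another terminal, start a throwaway Vault dev server first, e.g.:
#   vault server -dev -dev-root-token-id=root

# Point the acceptance tests at it (placeholder values):
export VAULT_ADDR='http://127.0.0.1:8200'
export VAULT_TOKEN='root'

# Run the full acceptance suite.
make testacc
```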

If you wish to run specific tests, use the TESTARGS environment variable:

TESTARGS="--run DataSourceAWSAccessCredentials" make testacc

Using a local development build

It's possible to use a local build of the Vault provider with Terraform directly. This is useful when testing the provider outside the acceptance test framework.

Configure Terraform to use the development build of the provider.

Warning: back up your ~/.terraformrc before running this command:

cat > ~/.terraformrc <<HERE
provider_installation {
  dev_overrides {
    "hashicorp/vault" = "$HOME/.terraform.d/plugins"
  }
  
  # For all other providers, install them directly from their origin provider
  # registries as normal. If you omit this, Terraform will _only_ use
  # the dev_overrides block, and so no other providers will be available.
  direct {}
}
HERE

Then execute the dev make target from the project root.

make dev

Now Terraform is set up to use the dev provider build instead of the provider from the HashiCorp registry.
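With the override in place, any configuration that references the hashicorp/vault provider resolves to the local build automatically; a minimal sketch of such a configuration (the address and the mount are placeholders):

```hcl
provider "vault" {
  # Points at a local dev Vault; placeholder address.
  address = "http://127.0.0.1:8200"
}

resource "vault_mount" "example" {
  path = "example-kv"
  type = "kv"
}
```

When the override is active, Terraform prints a warning that provider development overrides are in effect, which is a quick way to confirm the local build is being used.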

Debugging the Provider

The following is adapted from Debugging Providers.

Starting A Provider In Debug Mode

You can enable debugging with the make debug target:

make debug

This target builds a binary with compiler optimizations disabled and copies it to the ~/.terraform.d/plugins directory. Next, run Delve on the host machine:

dlv exec --accept-multiclient --continue --headless --listen=:2345 \
  ~/.terraform.d/plugins/terraform-provider-vault -- -debug

The above command starts the debugger, which runs the process for you; terraform-provider-vault is the name of the executable built by the make debug target. The command also outputs the TF_REATTACH_PROVIDERS information:

TF_REATTACH_PROVIDERS='{"hashicorp/vault":{"Protocol":"grpc","ProtocolVersion":5,"Pid":52780,"Test":true,"Addr":{"Network":"unix","String":"/var/folders/g1/9xn1l6mx0x1dry5wqm78fjpw0000gq/T/plugin2557833286"}}}'

Connect your debugger, such as your editor or the Delve CLI, to the debug server. The following command will connect with the Delve CLI:

dlv connect :2345

At this point you may set breakpoints in your code.

Running Terraform With A Provider In Debug Mode

Copy the line starting with TF_REATTACH_PROVIDERS from your provider's output. Either export it, or prefix every Terraform command with it.

Run Terraform as usual. Any breakpoints you have set will halt execution and show you the current variable values.
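For example (the JSON value is a placeholder; copy the exact line your provider printed):

```shell
# Option 1: export once for the whole shell session.
export TF_REATTACH_PROVIDERS='<paste the JSON printed by the provider>'
terraform plan

# Option 2: prefix a single command instead.
TF_REATTACH_PROVIDERS='<paste the JSON here>' terraform apply
```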

terraform-provider-vault's People

Contributors

apparentlymart, austingebauer, benashz, bflad, bhuisgen, catsby, cvbarros, fairclothjm, gpiper14, greut, grubernaut, hamishforbes, hashicorp-copywrite[bot], jasonodonnell, jtcressy, lawliet89, martinssipenko, mongey, p0pr0ck5, paddycarver, petems, phylu, radeksimko, raymonstah, riuvshyn, sergeytrasko, tvoran, tyrannosaurus-becks, vinay-gopalan, zlaticanin


terraform-provider-vault's Issues

problems connecting to multiple Vaults with aliased providers

Hi!

This may be more of a question than a bug; I wasn't sure, and I couldn't find anyone else trying something similar (possibly because it's a bad idea), so I figured I'd ask here. :)

I've got some Terraform code that manages AWS resources in multiple regions, and in each region I have a Vault cluster that I'd like to configure based on the outputs of some of my other code. I had thought to create aliases for the vault provider and then use the provider argument on the resources I needed to create, which is a pattern that's worked with the aws provider, but the vault provider seems to error out when I try to connect.

So I guess question 1 is: should this even work?

If so, is there something I'm missing? If not, is it a bug that it's not working as expected?

Terraform Version

09:54 $ terraform -v
Terraform v0.11.1
+ provider.aws v1.2.0
+ provider.template v1.0.0
+ provider.vault v1.0.0

Affected Resource(s)

Please list the resources as a list, for example:

  • the vault provider :)

Terraform Configuration Files

provider "vault" {
  address = "https://vault.us-east-1.example.com"
  alias   = "us-east-1"
  token   = "token"
  version = "1.0.0"
}

provider "vault" {
  address = "https://vault.us-west-1.example.com"
  alias   = "us-west-1"
  token   = "token"
  version = "1.0.0"
}

resource "vault_mount" "secret_us-west-1" {
  provider    = "vault.us-west-1"
  path        = "foo/secret"
  type        = "kv"
}

resource "vault_mount" "secret_us-east-1" {
  provider    = "vault.us-east-1"
  path        = "foo/secret"
  type        = "kv"
}

Expected Behavior

It was my expectation that the provider would connect to both Vault clusters and be able to configure and manage resources in both. I was expecting behavior similar to the aws provider when you create an alias and use the provider argument on a resource.

Actual Behavior

Running terraform plan returns permission denied errors for both Vault clusters:

Error: Error refreshing state: 2 error(s) occurred:

* provider.vault.us-east-1: failed to create limited child token: Error making API request.

URL: POST https://vault.us-east-1.example.com/v1/auth/token/create
Code: 403. Errors:

* permission denied
* provider.vault.us-west-1: failed to create limited child token: Error making API request.

URL: POST https://vault.us-west-1.example.com/v1/auth/token/create
Code: 403. Errors:

* permission denied

Crash when reading / refreshing Vault Generic Secret resources.

Terraform Version

Terraform v0.11.0

  • provider.vault v1.0.0

Vault Version

Vault v0.9.0 ('bdac1854478538052ba5b7ec9a9ec688d35a3335')

Affected Resource(s)

  • vault_generic_secret

Terraform Configuration Files

variable "redis_internal_cache_url" { default = "url" }
variable "environment" { default = "staging" }

resource "vault_generic_secret" "redis_cache" {
  path = "${var.environment}/secrets/redis/cache"

  data_json = <<JSON
{
  "url": "${var.redis_internal_cache_url}",
}
JSON

}

Panic Output

Relevant crash log output: https://gist.github.com/forstermatth/9ca3d0d057691d8a7233e2e628d2cbac

Expected Behavior

Should read and refresh the state.

Actual Behavior

Tries to read a value, gets a nil pointer and dies.

Steps to Reproduce

  1. terraform apply or terraform refresh or terraform plan

Important Factoids

  • It is trying to refresh ~100 vault_generic_secrets and a few vault_mounts
  • This results in state corruption, and anything to do with refreshing state fails afterwards
  • The only way I've figured out how to get around this is to completely wipe the state and start again
  • It seems to be inconsistent in which secret it fails on; the nil pointer is often returned on a different resource
  • The secrets exist already in vault (the plan has been applied previously)
  • I am using the Postgres storage back-end

Feature request: vault_hcl_policy_document data source (like aws_iam_policy_document)

It would be nice to write Vault policies directly as HCL in our Terraform files and have them subject to Terraform validation and editor integrations. Currently it seems the policy documents need to be written as strings (a bit of a pain to edit) or read from files (where you can't do substitutions).

An analogous function is served in the aws provider by aws_iam_policy_document.

The idea is that instead of:

resource "vault_policy" "example" {
  name = "dev-team"

  policy = <<EOT
path "secret/my_app" {
  policy = "write"
}
EOT
}

You could write:

data "vault_hcl_policy_document" "example" {
  path "secret/my_app" {
    capabilities = ["write"]
  }
}

resource "vault_policy" "example" {
  name = "dev-team"
  policy = "${data.vault_hcl_policy_document.example.json}"
}

References

https://www.terraform.io/docs/providers/aws/d/iam_policy_document.html

(Note that I'm new-ish to Terraform and to using it for Vault, so please let me know if I've mixed something up).

Unable to run `terraform destroy` when configuring `okta` auth backend

Not all Vault endpoints support delete, which means that if you try to set up a Vault instance using vault_generic_secret, as in the example below, you won't be able to successfully run terraform destroy.

resource "vault_auth_backend" "example" {
  type = "okta"
}

resource "vault_generic_secret" "example" {
  path = "auth/${vault_auth_backend.example.path}/config"
  data_json = <<EOT
{
}
EOT
}

The Vault team is reluctant to go through all of the endpoints to add delete (hashicorp/vault#2889), but the way the Vault provider currently works prevents these resources from being destroyed properly.

One solution I thought of would be to move away from the generic vault_auth_backend to resources specific to each auth backend or mount. The advantage of this approach is that it avoids having to look up API endpoints to know how to configure an auth backend, and it helps stop vault_generic_secret from being used as a hammer for every problem.

Opinions?

vault_auth_backend should not add final / to path

Terraform Version

v0.11.0

Affected Resource(s)

  • vault_auth_backend

Terraform Configuration Files

resource "vault_auth_backend" "k8s" {
  type = "kubernetes"
  path = "acs-dev"
}

output "vault-k8s-auth-backend" {
  value = "${vault_auth_backend.k8s.path}"
}

Expected Behavior

The Vault provider should set the path exactly to what I specified without adding final "/".

Actual Behavior

The Vault provider added a final "/" to the path, giving "acs-dev/". I don't want "/" in the path. If path were a computed attribute, I would not necessarily object, but since path is something I am setting, I feel that the Vault provider should not change what I set.

Steps to Reproduce

Please list the steps required to reproduce the issue, for example:

  1. terraform apply
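Until this is changed, one possible workaround is to trim the trailing slash in the interpolation; an untested sketch using Terraform 0.11's regex form of replace():

```hcl
output "vault-k8s-auth-backend" {
  # The /.../ form of replace() treats the pattern as a regular
  # expression; this strips a single trailing "/" if present.
  value = "${replace(vault_auth_backend.k8s.path, "/\\/$/", "")}"
}
```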

LDAP Auth Backend Resource(s)

Hi!

Would be great to have an auth backend resource for adding an LDAP connection similar to the Okta resources.

Terraform Version

Terraform v0.11.7
Terraform Vault Provider v1.1.0

Affected Resource(s)

New resource(s) for LDAP Auth Backend

Terraform Configuration Files

provider "vault" {
  version="~> 1.1"
}

Debug Output

Error: module.test_vault_config.vault_ldap_auth_backend.ldap: Provider doesn't support resource: vault_ldap_auth_backend

Expected Behavior

New resources would allow Vault to be configured to talk to an LDAP provider (such as Active Directory) and include the available options for LDAP auth.

References

LDAP configuration is supported in Vault, just not yet available in this provider: https://www.vaultproject.io/docs/auth/ldap.html

vault_generic_secret fails to read long secrets (unexpected EOF)

Terraform Version

terraform on windows v0.11.2

PS C:\temp\terratest> terraform -v
Terraform v0.11.2
+ provider.vault v1.0.0

Affected Resource(s)

Please list the resources as a list, for example:

  • data vault_generic_secret

Terraform Configuration Files

data "vault_generic_secret" "ssh_key" {
  path = "secret/path/ssh-key"
}

output "id_rsa_pub" {
   value = "${data.vault_generic_secret.ssh_key.data["id_rsa_pub"]}"
}

output "id_rsa" {
   value = "${data.vault_generic_secret.ssh_key.data["id_rsa"]}"
}

Vault Content

Generate a public/private keypair:

$ ssh-keygen -t rsa -b 4096 -C "[email protected]"

Upload it to Vault at secret/path/ssh-key:

{
  "id_rsa": "<<-redacted private key->>",
  "id_rsa_pub": "<<-redacted public key->>"
}

Debug Output

TF_LOG='TRACE'
https://gist.github.com/tdemeester/caed0744f8d65992d63df063f6aa9f17

Panic Output

Terraform does not produce a panic.

Expected Behavior

Both keys should be available from the generic secret, e.g.:

output "id_rsa_pub" {
   value = "${data.vault_generic_secret.ssh_key.data["id_rsa_pub"]}"
}

output "id_rsa" {
   value = "${data.vault_generic_secret.ssh_key.data["id_rsa"]}"
}

Actual Behavior

Returns an error that id_rsa_pub is not available:

data.vault_generic_secret.ssh_key: Refreshing state...

Error: Error refreshing state: 1 error(s) occurred:

* output.id_rsa_pub: key "id_rsa_pub" does not exist in map data.vault_generic_secret.ssh_key.data in:

${data.vault_generic_secret.ssh_key.data["id_rsa_pub"]}

The log shows:

2018-01-19T13:43:27.210+0100 [WARN ] plugin: error closing client during Kill: err="unexpected EOF"
2018-01-19T13:43:27.218+0100 [DEBUG] plugin: plugin process exited: path=C:\temp\terratest\.terraform\plugins\windows_amd64\terraform-provider-vault_v1.0.0_x4.exe

If I do not access id_rsa_pub, the provider still shows the above warning but does not produce a panic or error.

Steps to Reproduce

Please list the steps required to reproduce the issue, for example:

  1. Fill Vault as specified under Vault Content
  2. terraform apply with the Terraform code specified under Terraform Configuration Files
  3. terraform apply

Important Factoids

Reproduced the error on a clean Vault install; all code above and the gist are taken directly from a fresh environment.

The error is resolved when splitting the public and private keys into separate secrets.

The key data.vault_generic_secret.ssh_key.data_json does contain the full Vault entry.

References

Are there any other GitHub issues (open or closed) or Pull Requests that should be linked here? For example:

  • looks like it's related to #1

0 TTL in `vault_mount` makes terraform continuously want to apply changes

Setting default_lease_ttl_seconds or max_lease_ttl_seconds to 0 means that terraform plan will always want to apply changes. The reason for this is that zero gets translated as 'apply the default value' and so when reading for the next plan, Terraform will assume the TTL returned needs to be set back to zero.

Note that this just means resetting the mount back to the default TTL every time plan is run, so it shouldn't cause any real problems.
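Until the zero-value handling changes, a workaround sketch is to set the TTLs explicitly to your server's effective defaults instead of 0 (2764800 seconds, i.e. 768h, is Vault's usual default, but verify against your configuration):

```hcl
resource "vault_mount" "example" {
  path = "example"
  type = "kv"

  # Stating the defaults explicitly keeps the plan stable, since the
  # values read back from Vault match the values configured here.
  default_lease_ttl_seconds = 2764800
  max_lease_ttl_seconds     = 2764800
}
```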

allow vault_generic_secret to do a more in-depth comparison of the json_data to determine changes

Terraform Version

0.10.7

Affected Resource(s)

  • vault_generic_secret

Issue

Hi! Thanks for all the work you guys do! I had a quick issue.

The vault_generic_secret resource (when using the allow_read: true config) compares the data_json you pass in against what's already there, and makes changes as needed. This is generally fine. However, vault_generic_secret can be used to adjust anything in Vault with a path. When adjusting a non-secret path, that path will often have settings associated with it that are always returned, regardless of whether you set them or not. This causes Terraform to always update those paths, since there is technically a difference between what is set in data_json and what is returned. As a concrete example, when setting auth/aws/role/test-role I use the following config:

resource "vault_generic_secret" "iam-role-auth" {
  path       = "auth/aws/role/test-role"
  allow_read = true

  # 2592000 = 720h = 30 days
  data_json = <<EOT
{
 "auth_type":"iam",
 "policies":"default,test-policy",
 "max_ttl":"2592000",
 "bound_iam_principal_arn":"${aws_iam_role.api.arn}"
 }
EOT
}

However, when querying vault, vault returns the following:

{
  "allow_instance_migration":false,
  "auth_type":"iam",
  "bound_account_id":"",
  "bound_ami_id":"",
  "bound_iam_instance_profile_arn":"",
  "bound_iam_principal_arn":"xxxxx",
  "bound_iam_principal_id":"xxxxxx",
  "bound_iam_role_arn":"",
  "bound_region":"",
  "bound_subnet_id":"",
  "bound_vpc_id":"",
  "disallow_reauthentication":false,
  "inferred_aws_region":"",
  "inferred_entity_type":"",
  "max_ttl":2592000,
  "period":0,
  "policies":["default","test-policy"],
  "resolve_aws_unique_ids":true,
  "role_tag":"",
  "ttl":0
  }

So there are many values that differ from what I am setting (since I don't set them). My initial thought was just to pass those default values in as well, but I can't, since Vault throws a number of errors because many of the settings above may not be set in the specific case of auth_type: iam. So I'm stuck either having these policies update on every apply (bad), setting allow_read: false and just living with the drift (bad from an auditing perspective), or implementing a workaround based on env variables that sets allow_read: true every now and then just to make sure that there is no drift (messy).

To avoid all this, it would be great if there were an option to do a deeper comparison of the JSON that is returned. Perhaps an option that lets you decide what gets compared, something like:

compare_json: <value>
where <value> could be one of:

  • present - only compare the json elements that are explicitly set in the provider
  • all - compare all json elements regardless of whether they are explicitly set in the provider
  • ?? - some other options?

I think this would greatly increase the flexibility and usability of the vault_generic_secret resource.
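Under that proposal, the role example above might be written like this (compare_json is the hypothetical attribute being requested here; it does not exist in the provider today):

```hcl
resource "vault_generic_secret" "iam-role-auth" {
  path         = "auth/aws/role/test-role"
  allow_read   = true
  compare_json = "present" # hypothetical: diff only the keys set below

  data_json = <<EOT
{
  "auth_type": "iam",
  "policies": "default,test-policy",
  "max_ttl": "2592000"
}
EOT
}
```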

Thanks so much!

vault_policy needs import support

Hi there,
We have a bunch of existing policies that we've written into Vault manually. I'm trying to migrate these policies to Terraform so we can manage them a little more cleanly. I can't run an apply due to the risk of interfering with production policies / policy attachments.

Terraform Version

(mybranch) % terraform -v
Terraform v0.11.3
+ provider.vault v1.0.0

Affected Resource(s)

Please list the resources as a list, for example:

  • vault_policy

Terraform Configuration Files

provider "vault" {
  address = "http://vault.com.pany"
}

resource "vault_policy" "myapp-dev" {
  name   = "myapp-dev"
  policy = "${file("${path.module}/policy/applications/myapp-dev.hcl")}"
}

Expected Behavior

No new key should be created.

Actual Behavior

Terraform plan shows creating a new key at the target location

(mybranch) [1] % vault read sys/policy/myapp-dev
Key  	Value
---  	-----
name 	myapp-dev
rules	path "secret/directory/myapp/dev/*" {
  capabilities = ["create","read","update","delete","list"]
}

(mybranch) % vault policies myapp-dev
path "secret/directory/myapp/dev/*" {
  capabilities = ["create","read","update","delete","list"]
}


Terraform (plan) output
...
+ create 
...
+ vault_policy.myapp-dev
    id:     <computed>
    name:   "myapp-dev"
    policy: "path \"secret/directory/myapp/dev/*\" {\n  capabilities = [\"create\",\"read\",\"update\",\"delete\",\"list\"]\n}\n"

Steps to Reproduce

Please list the steps required to reproduce the issue, for example:

  1. Create a policy at sys/policy/foo manually
  2. Create an HCL / .tf file that mirrors that policy's implementation
  3. terraform plan
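If import support were added, adopting the manually created policy might look like this hypothetical invocation (this is the feature being requested, so the command does not work with provider v1.0.0):

```shell
terraform import vault_policy.myapp-dev myapp-dev
```

After a successful import, terraform plan should show no changes as long as the .hcl file matches the policy stored in Vault.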

FR: Add option to avoid storing some values in resource vault_generic_secret

I'm trying to use Terraform to configure and manage our Vault instances. As I understand it, I should be able to do most of this using vault_generic_secret resources.

I want to avoid persisting, tracking, or logging the values that are actually "secret". I want to provide these parameters at 'apply' time, release Terraform of any responsibility for state/drift tracking, and rest easy knowing there aren't secret copies lurking in Terraform state files.

The long-lived hashicorp/terraform#516 is the general case of this problem in Terraform.

hashicorp/terraform#15797 attempts a general Terraform solution to this problem, but is stalled on reasonable questions about the deep Terraform-wide implications of the change.

I propose something more modest: vault_generic_secret is specifically targeted at uploading configuration that often includes secrets. Can we solve the secret-persistence problem (even just temporarily) for just this resource?

Would it work to add a secret_data_json field or similar alongside data_json? secret_data_json would be marked Sensitive (data_json currently is not), and would not be persisted to state. It would be merged with data_json when sent to Vault.

I don't know how to prevent persisting the value - would setting StateFunc to a function returning "" work? Or is something more complicated required?

I'm happy to help with implementation.

References

hashicorp/terraform#516
hashicorp/terraform#15797

Crash caused by the Vault provider's interaction with the Kubernetes auth backend

Terraform Version

Run terraform -v to show the version. If you are not running the latest version of Terraform, please upgrade because your issue may have already been fixed.
0.10.8 on SaaS TFE

Affected Resource(s)

Please list the resources as a list, for example:

  • vault_generic_secret when used against /auth//config

Terraform Configuration Files

terraform {
  required_version = ">= 0.10.1"
}

provider "vault" {}

data "vault_generic_secret" "gcp_credentials" {
  path = "secret/gcp/credentials"
}

provider "google" {
  credentials = "${data.vault_generic_secret.gcp_credentials.data[var.gcp_project]}"
  project     = "${var.gcp_project}"
  region      = "${var.gcp_region}"
}

resource "google_container_cluster" "k8sexample" {
  name               = "${var.cluster_name}"
  description        = "example k8s cluster"
  zone               = "${var.gcp_zone}"
  initial_node_count = "${var.initial_node_count}"
  enable_kubernetes_alpha = "true"

  master_auth {
    username = "${var.master_username}"
    password = "${var.master_password}"
  }

  node_config {
    machine_type = "${var.node_machine_type}"
    disk_size_gb = "${var.node_disk_size}"
    oauth_scopes = [
      "https://www.googleapis.com/auth/compute",
      "https://www.googleapis.com/auth/devstorage.read_only",
      "https://www.googleapis.com/auth/logging.write",
      "https://www.googleapis.com/auth/monitoring"
    ]
  }
}

resource "vault_generic_secret" "config" {
  path = "auth/${var.gcp_project}/config"
  data_json = <<EOT
  {
    "token_reviewer_jwt": "reviewer_service_account_jwt",
    "kubernetes_host": "https://${google_container_cluster.k8sexample.endpoint}:443",
    "kubernetes_ca_cert": "${chomp(replace(base64decode(google_container_cluster.k8sexample.master_auth.0.cluster_ca_certificate), "\n", "\\n"))}"
  }
  EOT
}

resource "vault_generic_secret" "role" {
  path = "auth/${var.gcp_project}/role/demo"
  data_json = <<EOT
  {
    "bound_service_account_names": "cats-and-dogs",
    "bound_service_account_namespaces": "default",
    "policies": "admins",
    "ttl": "1h"
  }
  EOT
}

Debug Output

https://gist.github.com/rberlind/6944e691c53e3a65276c76e9cbe66201

Panic Output

If Terraform produced a panic, please provide a link to a GitHub Gist containing the output of the crash.log.

https://gist.github.com/rberlind/49c597dd8fb2a5e6c667e646e0c07391

Expected Behavior

I was trying to queue a Destroy plan in TFE. All my resources should have been destroyed.

Actual Behavior

What actually happened?

Terraform is having issues with the Vault provider's vault_generic_secret resource, which I used to write to the auth/roger-berlind-gke-dev/config path in order to configure the Kubernetes auth backend (with path roger-berlind-gke-dev, since I also have one configured at roger-berlind-gke-prod). The issue is that the Kubernetes auth backend does not support the delete operation; I reported this at hashicorp/vault-plugin-auth-kubernetes#13. As a way of trying to get around it, I ran vault auth-disable roger-berlind-gke-dev to see if disabling the auth backend would then allow the Terraform Vault provider to destroy the rest of my resources; that is what produced the circumstances that led to the crash. Unfortunately, the provider still tried to delete the /auth/roger-berlind-gke-dev/config path when it probably no longer existed. Note that even after I re-enabled the Kubernetes auth backend with the same path, I still got the crash.

Steps to Reproduce

Please list the steps required to reproduce the issue, for example:

  1. terraform plan -destroy -out=destroy.tfplan
  2. terraform apply destroy.tfplan
    (I was running in TFE and queued destroy plan, but this is what that does)

Important Factoids

After using the vault CLI to write data to auth/roger-berlind-gke-dev/config, I could run destroy plans without crashing. But the plan still fails because it cannot delete the vault_generic_secret.config resource.

Vault provider is unable to access data with a non-root token

This issue was originally opened by @mperriere as hashicorp/terraform#16457. It was migrated here as a result of the provider split. The original body of the issue is below.


Hi there,

Terraform Version

0.10.6

Terraform Configuration Files

cat main.tf
provider "vault" {
  address = "http://${var.vault_host}:${var.vault_port}"
  skip_tls_verify = true
  token = "dc985ea7-57eb-77b5-17c9-ae86fe019c82"
}

data "vault_generic_secret" "key" {
  path = "secret/dev/onenext/db_user01"
}

output pwd {
 value = "${data.vault_generic_secret.key.data["pwd"]}"
}

cat variable.tf
variable vault_host {
 #default="consuldev.pipeline.aws"
 default="10.196.14.160"
}
variable vault_port {
 default="8200"
}
variable env {
 default="dev"
}
variable project_name {
 default="onenext"
}

Debug Output

terraform-0.10.6 apply
data.external.read_token: Refreshing state...
data.external.fetch_token: Refreshing state...
Error refreshing state: 1 error(s) occurred:

  • provider.vault: failed to create limited child token: Error making API request.

URL: POST http://10.196.14.160:8200/v1/auth/token/create
Code: 403. Errors:

  • permission denied

Expected Behavior

I'm trying to create a policy for every project and environment: we need a token for every project in every environment (DEV, PREP, PROD). Then I attach a token to it, write a secret inside the path defined in the policy, and finally I try to read it from Terraform. The token should allow read access to the credentials, but it does not.

The rules are defined in an Ansible playbook (outside Terraform's scope):

path "secret/{{env}}/{{project_name}}/*" {
  capabilities = ["create", "read", "update", "delete", "list"]
}
path "secret/{{env}}/{{project_name}}" {
  capabilities = ["list"]
}
path "auth/token/lookup-self" {
  capabilities = ["read"]
}

Next step: I create a token attached to that policy (sorry, Ansible code again):

  • hashivault_token_create:
    display_name: "{{project_name}}-{{env}}"
    policies: ["{{project_name}}-{{env}}"]
    renewable: True
    token: "{{root_token}}"

This token is available, and can be queried on the Vault server:
VAULT_TOKEN=dc985ea7-57eb-77b5-17c9-ae86fe019c82 vault token-lookup --address=http://127.0.0.1:8200
Key Value


accessor ec4ccd86-c6b9-b0e5-b337-5504ced231cd
creation_time 1508927130
creation_ttl 2764800
display_name token-onenext-dev
explicit_max_ttl 0
id dc985ea7-57eb-77b5-17c9-ae86fe019c82
meta
num_uses 0
orphan false
path auth/token/create
policies [default onenext-dev]
renewable true
ttl 2761029

Next step: I write a secret to the policy path. No issue there.

Now I try to read it from the Vault CLI: this works, and I can read the secret. This proves that the secret, policy, and token are correctly defined:

VAULT_TOKEN=dc985ea7-57eb-77b5-17c9-ae86fe019c82 vault read --address=http://127.0.0.1:8200/ secret/dev/onenext/db_user01
Key Value


refresh_interval 768h0m0s
pwd tai.s0dohM5r

Now I try to do the same thing with Terraform and get the error shown in the debug output above.

PS: if I switch back to the root token in the vault provider definition, I can access the secret.

It seems that Terraform tries to create a temporary child token instead of simply using the newly created one to read the secret key.

Perhaps I did not understand the documentation? I did not find full usage examples on the internet, so I don't know what's wrong here.
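The debug output above shows the provider failing on POST auth/token/create. A sketch of a policy addition that may resolve this, on the assumption that the provider derives a limited child token from the one it is given (the path and capabilities below are my guess, not taken from the report):

```hcl
# Assumption: the provider creates a short-lived child token on startup,
# so the policy attached to the supplied token may also need:
path "auth/token/create" {
  capabilities = ["create", "update"]
}
```

With this grant added to the onenext-dev policy, the 403 on token creation may no longer occur.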

Steps to Reproduce

Please list the full steps required to reproduce the issue, for example:
2. terraform apply

EC2 constraints for `vault_aws_auth_backend_role` should be lists

Hi there,

Thank you for opening an issue. Please note that we try to keep the Terraform issue tracker reserved for bug reports and feature requests. For general usage questions, please see: https://www.terraform.io/community.html.

Terraform Version

0.11.4

Affected Resource(s)

Please list the resources as a list, for example:

  • vault_aws_auth_backend_role

If this issue appears to affect multiple resources, it may be an issue with Terraform's core, so please mention this.

Terraform Configuration Files

resource "vault_aws_auth_backend_role" "system" {
  role = "system"
  auth_type = "ec2"
  bound_ami_id = [ "${concat(data.aws_ami_ids.released_amis.ids, data.aws_ami_ids.testing_amis.ids)}" ]
  policies = "system"
}

Expected Behavior

The list of AMI ids should be applied as constraints to the new role.

Actual Behavior

Planning fails:

Acquiring state lock. This may take a few moments...
Releasing state lock. This may take a few moments...

Error: vault_aws_auth_backend_role.system: bound_ami_id must be a single value, not a list

Steps to Reproduce

Please list the steps required to reproduce the issue, for example:

  1. terraform plan

Important Factoids

All of the EC2 auth constraints for the AWS auth backend take lists, according to the API docs. But looking at the source (and the TF docs), they're all implemented as strings in Terraform.
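For illustration, a hypothetical sketch of how the role could be declared once the constraints accept lists; the plural attribute name is an assumption, not the provider's current schema:

```hcl
resource "vault_aws_auth_backend_role" "system" {
  role      = "system"
  auth_type = "ec2"

  # Hypothetical list-valued constraint; the provider currently only
  # accepts a single string here.
  bound_ami_ids = ["${concat(data.aws_ami_ids.released_amis.ids, data.aws_ami_ids.testing_amis.ids)}"]

  policies = ["system"]
}
```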

Unable to destroy secret_id after it is consumed

Terraform Version

Terraform v0.11.7

  • provider.vault v1.1.0

Vault Version

0.9.1

Affected Resource(s)

  • vault_approle_auth_backend_role_secret_id

Terraform Configuration Files

resource "vault_approle_auth_backend_role" "app_role" {
  backend   = "approle"
  role_name = "app_role_test"
  policies  = ["${vault_policy.policy_approle.name}"]

  secret_id_num_uses = "1"
  secret_id_ttl      = "${var.secret_ttl}"
}

resource "vault_approle_auth_backend_role_secret_id" "app_secret_id" {
  backend   = "${vault_approle_auth_backend_role.app_role.backend}"
  role_name = "${vault_approle_auth_backend_role.app_role.role_name}"
}

Expected Behavior

Terraform should not throw an error during the destroy process if the secret_id has been consumed in the vault.

Actual Behavior

Error: Error refreshing state: 1 error(s) occurred:

* module.create_vault_approle.vault_approle_auth_backend_role_secret_id.app_secret_id: 1 error(s) occurred:

* module.create_vault_approle.vault_approle_auth_backend_role_secret_id.app_secret_id: vault_approle_auth_backend_role_secret_id.app_secret_id: Error checking if AppRole auth backend role SecretID "backend=approle::role=app_role_test::accessor="abcd1234" exists: Error making API request.

URL: PUT http://127.0.0.1:8200/v1/auth/approle/role/app_role_test/secret-id-accessor/lookup
Code: 500. Errors:

* 1 error occurred:

* failed to find accessor entry for secret_id_accessor:

Steps to Reproduce

Please list the steps required to reproduce the issue, for example:

  1. terraform apply
  2. curl --request POST --data "{\"role_id\": \"$ROLE_ID\", \"secret_id\": \"$SECRET_ID\"}" http://127.0.0.1:8200/v1/auth/approle/login
  3. terraform destroy

Okta auth backend - Disable MFA option

Running Terraform v0.11.7, provider.vault v1.1.0.

The okta_auth_backend resource was recently implemented (Thank you to @paddycarver for doing that work). https://www.terraform.io/docs/providers/vault/r/okta_auth_backend.html

The only missing piece I have found so far is the ability to disable the "Okta Verify push factor" (MFA), which as far as I know requires their mobile app.

Here is the flag that can be used to disable mfa on the vault backend.
https://www.vaultproject.io/api/auth/okta/index.html#bypass_okta_mfa
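If the provider exposed that flag, the configuration might look like the sketch below; the bypass_okta_mfa attribute mirrors the Vault API parameter, and its presence on the Terraform resource is hypothetical:

```hcl
resource "vault_okta_auth_backend" "example" {
  organization = "example-org"
  token        = "okta-api-token"

  # Hypothetical attribute mirroring Vault's bypass_okta_mfa API parameter
  bypass_okta_mfa = true
}
```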

vault_generic_secret fails to read secrets (unexpected EOF)

This issue was originally opened by @IevgenKabanets as hashicorp/terraform#10999. It was migrated here as part of the provider split. The original body of the issue is below.


Terraform Version

Terraform v0.8.2

Affected Resource(s)

Please list the resources as a list, for example:

  • vault_generic_secret

Terraform Configuration Files

provider "vault" {
  address = "http://vault_server:8200"
  skip_tls_verify = true
}

data "vault_generic_secret" "docker" {
  path = "secret/docker"
}

output "secret" {
	value = "${data.vault_generic_secret.docker.data["docker_registry_pwd"]}"
}

Debug Output

https://gist.github.com/IevgenKabanets/c16d2e5ef4520921ba05e5a79ee11079

Panic Output

https://gist.github.com/IevgenKabanets/c16d2e5ef4520921ba05e5a79ee11079

Expected Behavior

The secret should be read, as it's present in Vault and accessible with curl.

Actual Behavior

Crashed with * data.vault_generic_secret.docker: unexpected EOF

Steps to Reproduce

  1. export VAULT_TOKEN=<root_token or any token>
  2. terraform plan or terraform apply

Important Factoids

This works fine

curl -X GET -H "X-Vault-Token:$VAULT_TOKEN" http://vault_server:8200/v1/secret/docker/docker_registry_pwd

Also, the error is gone once I use the full path to the entry (secret/docker/docker_registry_pwd):

data "vault_generic_secret" "docker" {
  path = "secret/docker/docker_registry_pwd"
}

which seems to be wrong, as vault_generic_secret should return a map with possible keys/values.

Support for managing Vault audit backends

It looks like this provider doesn't yet support enabling or configuring Vault's audit backends. Does anyone using this provider to automate a Vault deployment via Terraform have a reasonable workaround for automating the enabling/configuration of one or more audit backends?

Terraform should be able to generate wrapped vault secret-ids for servers

This issue was originally opened by @gtmtech as hashicorp/terraform#12687. It was migrated here as part of the provider split. The original body of the issue is below.


Terraform all versions. Hashicorp Vault 0.6+

The Terraform<->Vault integration is missing a fundamental feature that would allow Vault AppRole authentication to work.

AppRole authentication in Vault is provided so that a server, or apps running on a server, are able to authenticate themselves to Vault and procure secrets.

In order to work, the authentication login step requires two pieces of information: a role_id and a secret_id. Vault best practice describes how you generate a role_id and might commit it into configuration management (e.g. Packer+Ansible => AMI), whereas the secret_id should be supplied to the final app if possible.

It also suggests that, for maximum security, the secret_id should be response-wrapped using the Vault cubbyhole backend when requested, which basically means that only one entity is able to decrypt it. If the target/server decrypts it, then all is well. If a bad actor decrypts it first, then the target/server cannot decrypt it, and can raise an alert about it.

All this presents a lovely case for an integration step between Terraform and Vault's AppRole: prior to spinning up an aws_instance resource, Terraform requests a wrapped secret_id from Vault's AppRole API, passes it into user-data, and the resultant EC2 instance can then unwrap the secret and get whatever secrets it needs.

However this functionality is missing. Could we have it please?

All that would be needed is a new data resource called something like vault_approle_wrapped_secret:

data "vault_approle_secret_id" "jenkins" {
    role_id="1234-123456-1234-1234"
    wrapped="true"
    ttl="60s"
}

Then in a corresponding userdata cloud-init template (or equivalent), this can be referred to easily:

SECRET_ID_WRAPPED_TOKEN=${data.vault_approle_secret_id.jenkins}

et voilà: the instance spins up, gets the role_id from its AMI (Ansible), gets the wrapped token from user-data, and does the following (probably in a bash startup script):

1. authenticates to Vault with unwraponly:password to allow it to unwrap wrapped secrets.
2. unwraps the SECRET_ID_WRAPPED_TOKEN to yield a SECRET_ID.
3. re-authenticates to Vault using the role_id and the secret_id.
4. pulls whichever secrets it needs out of Vault.

Cannot associate multiple policies with a role

Terraform Version

Terraform v0.11.0
Terraform v0.10.4

Affected Resource(s)

vault_aws_auth_backend_role

Terraform Configuration Files

resource "vault_aws_auth_backend_role" "example" {
  role                                 = "test-role"
  auth_type                            = "iam"
  bound_iam_principal_arn              = "arn:aws:iam::11111111111:role/*"
  ttl                                  = 60
  max_ttl                              = 120
  policies                             = ["default", "admin"]
}

This config is a shortened version of the resource example: https://www.terraform.io/docs/providers/vault/r/aws_auth_backend_role.html

Debug Output

https://gist.github.com/dond00m/2979149b76a6b0c794b83258fe8de814

Expected Behavior

Create vault role that is associated with multiple policies

Actual Behavior

Terraform cannot convert the policy list, and the Vault role is not created.

Note that Terraform will not error out if the policy list is set to either of the following:

["default"]
[""]

However, it does not seem to actually change the policy association; i.e. when there are multiple policies, it will not remove those policies.

Steps to Reproduce

Specify more than one policy in the policy list, e.g.

policies                             = ["default", "admin", "new"]

then apply
terraform apply

Support for the Vault token_helper

Hi,

Please support the Vault token_helper as a way to provide a token to the vault provider.

Terraform Version

Terraform v0.11.7

  • provider.template v1.0.0
  • provider.vault v1.1.0

Affected Resource(s)

  • provider vault

Terraform Configuration Files

provider "vault" {}

Debug Output

Error: Error refreshing state: 1 error(s) occurred:

* provider.vault: No vault token found: open /Users/<User>/.vault-token: no such file or directory

Expected Behavior

To successfully provision resources in Vault by retrieving a token from the token_helper specified in my ~/.vault config file.

Steps to Reproduce

  1. Configure a vault token_helper in ~/.vault (vault-token-helper.sh)
  2. terraform apply
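For context, the CLI-side configuration being referenced is a ~/.vault file that names an external helper, roughly like this (the helper path is illustrative):

```hcl
# ~/.vault (Vault CLI configuration)
token_helper = "/Users/example/bin/vault-token-helper.sh"
```

The request is for the provider to consult this helper when no VAULT_TOKEN environment variable or ~/.vault-token file is present.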

Deploying Vault policies/configurations via Terraform fails on many endpoints.

Terraform Version

Terraform v0.11.1

Affected Resource(s)

All Terraform Vault actions

Terraform Configuration Files

resource "vault_generic_secret" "audit_file" {
  path = "sys/audit/file"

  data_json = <<EOT
{
  "type": "file",
  "options": {
      "path": "F:/HashiCorp/Vault/vault.audit/audit.log"
  }
}
EOT
}

Expected Behavior

It should create the audit configuration

Actual Behavior

It throws an error:

C:\> terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

vault_generic_secret.audit_file: Refreshing state... (ID: sys/audit/file)

Error: Error refreshing state: 1 error(s) occurred:

* vault_generic_secret.audit_file: 1 error(s) occurred:

* vault_generic_secret.audit_file: vault_generic_secret.audit_file: error reading from Vault: Error making API request.

URL: GET https://vault.domain.com:8200/v1/sys/audit/file
Code: 405. Errors:

* 1 error occurred:

* unsupported operation

Steps to Reproduce

Please list the steps required to reproduce the issue, for example:

  1. terraform apply or terraform plan

Important Factoids

Seems to be related to the fact that not all HTTP verbs are accepted anymore.
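A possible workaround sketch, assuming the 405 comes only from the GET the provider performs during refresh: vault_generic_secret has a disable_read flag that skips that read (whether this fully covers the sys/ endpoints is untested here):

```hcl
resource "vault_generic_secret" "audit_file" {
  path = "sys/audit/file"

  # Skip the GET during refresh, which sys/audit/file rejects with a 405
  disable_read = true

  data_json = <<EOT
{
  "type": "file",
  "options": {
      "path": "F:/HashiCorp/Vault/vault.audit/audit.log"
  }
}
EOT
}
```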

Terraform v0.11.3 crashes trying to plan changes to vault 0.9.3

This issue was originally opened by @burdandrei as hashicorp/terraform#17346. It was migrated here as a result of the provider split. The original body of the issue is below.


Terraform Version

Terraform v0.11.3
+ provider.vault v1.0.0

Terraform Configuration Files

resource "vault_generic_secret" "mysql-yotpodbnameprod-write" {
  path = "admin/mysql-dbnameprod/roles/write"

  data_json = <<EOT
{
  "creation_statements": "CREATE USER '{{name}}'@'%' IDENTIFIED BY '{{password}}';GRANT SELECT,INSERT,UPDATE,DELETE,EXECUTE,SHOW VIEW,CREATE,ALTER,REFERENCES,INDEX,CREATE VIEW,CREATE ROUTINE,ALTER ROUTINE,EVENT,DROP,TRIGGER,CREATE TEMPORARY TABLES,LOCK TABLES ON dbnameprod.* TO '{{name}}'@'%';",
  "db_name": "mysql",
  "default_ttl": 0,
  "max_ttl": 0,
  "renew_statements": "",
  "revocation_statements": "",
  "rollback_statements": ""
}
EOT

  depends_on = ["vault_mount.mysql-dbname"]
}

...

Debug Output

https://gist.github.com/burdandrei/03651abab342d44594c75bd1d63b024d#file-plan-trace

Crash Output

https://gist.github.com/burdandrei/03651abab342d44594c75bd1d63b024d#file-crash-log

Expected Behavior

Update policies in Vault

Actual Behavior

Crash

Additional Context

It happened after upgrading Vault from 0.8.3 to 0.9.3.

References

None

Missing strict ordering: vault_mount must happen before vault_generic_secret (at least for PKI backend)

Terraform Version

$ terraform -v
Terraform v0.11.3

  • provider.vault v1.0.0

Affected Resource(s)

Please list the resources as a list, for example:

  • vault_mount
  • vault_generic_secret

Terraform Configuration Files

provider "vault" { }

resource "vault_mount" "team5_k8s_apiserver" {

  path = "team5/k8s-apiserver"
  type = "pki"
}

resource "vault_generic_secret" "team5_k8s_apiserver_ca" {

  path = "team5/k8s-apiserver/root/generate/internal"

  data_json = <<EOT
{
  "common_name": "xxx.io"
}
EOT
}

Debug Output

https://gist.github.com/andrejvanderzee/5fdd73fcd507e7c7375b7fad833ffba1

Expected Behavior

Strict ordering: first create vault_mount, then vault_generic_secret. Note that for terraform destroy I would expect the same in reverse order (first destroy vault_generic_secret, then vault_mount).

Actual Behavior

Parallel creation causes vault_generic_secret to fail, because it requires vault_mount to exist first. Note that the same holds for terraform destroy, only in reverse order (vault_generic_secret fails because the PKI backend is already unmounted).

Steps to Reproduce

  1. terraform init
  2. terraform apply
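A workaround sketch: referencing the mount's path attribute (instead of repeating the literal path) gives Terraform an implicit dependency, which should order both create and destroy correctly:

```hcl
resource "vault_generic_secret" "team5_k8s_apiserver_ca" {
  # Interpolating the mount's path makes Terraform create the mount first
  # and destroy it last, instead of operating on both in parallel.
  path = "${vault_mount.team5_k8s_apiserver.path}/root/generate/internal"

  data_json = <<EOT
{
  "common_name": "xxx.io"
}
EOT
}
```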

'policy_arn' doesn't work for vault_aws_secret_backend_role

Hi - I posted this in the google group and was asked to submit an issue here by @paddycarver.

The issue:

I've defined a resource in a module that creates a Vault aws secret backend role for an IAM role that's created in the same module:

resource "aws_iam_role" "assumed_role" {
  name               = "${var.role_name}"
  path               = "/"
  assume_role_policy = "${data.aws_iam_policy_document.assumed_role_policy.json}"
}

resource "vault_aws_secret_backend_role" "role" {
  backend = "users/aws/master"
  name = "${var.role_name}"
  policy_arn = "${aws_iam_role.assumed_role.arn}"
}

Normally, outside of Terraform, in order to create the secret backend role in Vault, it'd look something like this:

> vault write users/aws/master/roles/<role_name> arn=arn:aws:iam::<aws_account>:role/<role_name>

Success! Data written to: users/aws/master/roles/<role_name>

So I was hoping I could do the same in Terraform. However, when calling the module and applying, I'm seeing the following:

module.my_iam_role.vault_aws_secret_backend_role.role: Creating...
  backend:    "" => "users/aws/master"
  name:       "" => "<role_name>"
  policy_arn: "" => "arn:aws:iam::<aws_account>:role/<role_name>"
Releasing state lock. This may take a few moments...


Error: Error applying plan:


1 error(s) occurred:


* module.iam_role_name.vault_aws_secret_backend_role.role: 1 error(s) occurred:


* vault_aws_secret_backend_role.role: Error creating role "<role_name>" for backend "users/aws/master": Error making API request.


URL: PUT https://<vault_url>:8200/v1/users/aws/master/roles/<role_name>
Code: 500. Errors:


* 1 error occurred:


* Either policy or arn must be provided

At first I thought maybe STS roles just weren't supported, but after looking up that error in the Vault code, I noticed that it expects either 'arn' or 'policy'; there's no mention of 'policy_arn'.

I then dug into the Vault provider code, and it seems to forward the payload to the Vault API as 'policy_arn' rather than 'arn' as expected. I tested by simply changing data["policy_arn"] = policyARN to data["arn"] = policyARN, and the role was then configured as expected, with no error.

Let me know if any further info would help! Thanks!

Terraform Version

0.11.1

Affected Resource(s)

  • vault_aws_secret_backend_role

Expected Behavior

Vault role should have been configured with an ARN by vault_aws_secret_backend_role resource.

Actual Behavior

Error from vault provider that "policy" or "arn" needed to be configured.

Steps to Reproduce

  1. Create a vault_aws_secret_backend_role resource that defines 'policy_arn' instead of 'policy' and attempt to apply

[Bug/regression] Read vault_approle_auth_backend_role crashes for Vault 0.10.0

Terraform Version

Terraform v0.11.7

Vault version

Vault v0.10.0 ('5dd7f25f5c4b541f2da62d70075b6f82771a650d')

Affected Resource(s)

  • vault_approle_auth_backend_role

Terraform Configuration Files

provider "vault" {
  # NOTE: For example sake, this is a Vault container.
  address = "http://192.168.99.100:30002"
  token = "test"
}

resource "vault_auth_backend" "approle" {
  type = "approle"
}

resource "vault_policy" "newrelic" {
  name = "newrelic"
  policy = <<EOT
path "secret/external/newrelic/*" {
  policy = "read"
}
EOT
}

resource "vault_policy" "transactions" {
  name = "transactions"
  policy = <<EOT
path "secret/transactions/*" {
  policy = "read"
}
EOT
}

resource "vault_approle_auth_backend_role" "transactions" {
  role_name = "transactions"
  policies = ["${vault_policy.transactions.name}", "${vault_policy.newrelic.name}"]

  depends_on = ["vault_policy.newrelic", "vault_policy.transactions"]
}

Debug Output

https://gist.github.com/syndbg/ace5e589beacc809d8efb2e6b34e6be4

Panic Output

https://gist.github.com/syndbg/d9b5e4fc287c043b93377bb609d0c42f

Expected Behavior

It must successfully read the approle data returned by Vault and proceed with provisioning.

Actual Behavior

Crashed during reading of approle data.

Steps to Reproduce

  1. terraform apply
  2. Write input yes
  3. Observe

Important Factoids

This issue is not reproducible for Vault versions 0.8.x and 0.9.x.

However, it's reproducible 100% of the time when running the acceptance test suite against a Vault server running version 0.10.0.

References

None at the moment. I'll submit a PR soon.

Feature Request: Vault Mount Data Source

Use Case

I'd like to be able to define Vault mounts in Terraform and to reference them in different files/modules/etc. when building up secret paths. If we had a vault_mount data source that exposes the path, we could define the "mount point" of a given backend centrally and allow other teams to reference it without hard-coding the full path.

Indeed, the vault_aws_access_credentials data source does provide this. Unfortunately, the more generic vault_mount resource would require an identifier other than the mount path so it could be looked up by the data source; after all, if you already know the path, you don't need the data source to give it to you. I could see how that would be a breaking change in that the structure of the ID could change, but I think it would be manageable if the default is to compute the ID from the path, as it is now.

Or is the expected route that we will eventually follow the vault_auth_backend pattern for each of Vault's stock mounts? To that end, something like vault_generic_secret_backend and vault_pki_secret_backend, exposing the mount's path that way?
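A hypothetical sketch of the requested data source; nothing here exists today, and the name attribute stands in for whatever path-independent identifier the mount would need:

```hcl
# Hypothetical: look up a centrally defined mount by a stable identifier
data "vault_mount" "pki" {
  name = "team-pki"
}

resource "vault_generic_secret" "example" {
  path      = "${data.vault_mount.pki.path}/roles/example"
  data_json = "{}"
}
```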

[Feature request] Add auth_kubernetes_* resources

Motivation

tl;dr Add auth_kubernetes_config and similar resources

Long:

I've previously set up Vault authentication with Kubernetes via the CLI, but as a means of automation, at work we're considering using Terraform for this instead of bash scripts left and right.

We already use Terraform heavily for server provisioning wherever applicable, so why not provision Vault too?

With the most recent release, v1.1.0, the addition of the approle_auth_backend_* resources seems like a step towards adding authentication-provider-specific resources wherever applicable.

Currently, setting up Vault authentication with Kubernetes using vault_generic_secret resources is not possible. From what I've read, there are issues such as hashicorp/vault-plugin-auth-kubernetes#13 and #44.

From my experience with Vault setup so far, I feel like I'm using the wrong tool when relying on vault_generic_secret for this job.
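A hypothetical sketch of what such resources might look like, modeled on the approle_auth_backend_* naming; every resource and attribute name below is an assumption:

```hcl
resource "vault_auth_backend" "kubernetes" {
  type = "kubernetes"
}

# Hypothetical resource mirroring the Kubernetes auth plugin's config endpoint
resource "vault_kubernetes_auth_backend_config" "example" {
  kubernetes_host    = "https://kubernetes.default.svc"
  kubernetes_ca_cert = "${file("ca.crt")}"
}
```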

Terraform Version

Terraform v0.11.7

Affected Resource(s)

No existing resources, yet.

Feature request | Support vault list

Feature request

Terraform Version

v0.11.1

Affected Resource(s)

Please list the resources as a list, for example:

  • data.vault_generic_secret

Terraform Configuration Files

data "vault_generic_secret" "get_roles" {
  path = "database/roles/?list=true"
}

The code at https://github.com/terraform-providers/terraform-provider-vault/blob/master/vault/data_source_generic_secret.go#L70 calls Logical.Read(). However, we could add a check before it: if the path contains list=true, call Logical.List() instead. This could be used to check whether a particular role exists, since a read on a non-existent role fails and so cannot be used to determine further steps.

Unable to generate certificates through vault pki

Terraform Version

Terraform v0.11.5
+ provider.vault v1.1.0

Affected Resource(s)

  • vault_generic_secret

Terraform Configuration Files

resource "vault_generic_secret" "cert" {
  path = "pki/issue/example"

  data_json = <<EOT
        { 
            "common_name": "foo.example.com",
            "ip_sans": "1.2.3.4"
        }
EOT
}

Debug Output

https://gist.github.com/jordansissel/995df0edaa7fe406b5cc49b37a387c87

Expected Behavior

What should have happened?

Create a certificate + key pair in Vault's PKI engine.

Actual Behavior

What actually happened?

* vault_generic_secret.cert: vault_generic_secret.cert: error reading from Vault: Error making API request.

URL: GET http://127.0.0.1:8200/v1/pki/issue/example

Steps to Reproduce

Minimal vault configuration:

  1. vault server -dev
  2. vault secrets enable pki
  3. vault write pki/roles/example allowed_domains=example.com allow_subdomains=true max_ttl=72h

Terraform:

  1. terraform init
  2. terraform apply

Important Factoids

My current objective is to have vault generate certificates for services on-demand, for example, to generate client and server certificates for services running in Kubernetes.

I can achieve this with Vault alone (vault write pki/issue/example ...), which gives me the right information. However, attempting this with Terraform fails because GET /pki/issue/:role is not implemented.

I am wondering if the terraform vault_generic_secret resource is simply incompatible with Vault's PKI engine and perhaps a new resource (vault_pki_generate ?) is required to achieve this objective.

database/config always gives a diff

Terraform Version

terraform -v
Terraform v0.11.1

  • provider.aws v1.6.0
  • provider.null v1.0.0
  • provider.random v1.1.0
  • provider.vault v1.0.0

Affected Resource(s)

Please list the resources as a list, for example:

  • vault_generic_secret

Terraform Configuration Files

Hi,

With the below module, database/config path always gives a diff after first apply:

resource "vault_generic_secret" "db_config" {
  count = "${local.is_mysql}"
  path  = "database/config/${var.db_identifier}"

  data_json = <<EOT
{
  "plugin_name":"mysql-database-plugin",
  "connection_details":
    {"connection_url":"${var.db_user}:${var.db_creds}@tcp(${var.rds_name}:3306)/"} ,
    "allowed_roles":["${var.db_role}_rw","${var.db_role}_ro"]
}
EOT
}

This happens because, on the Vault side, connection_url is actually an object inside connection_details. So changing the data_json to:


  data_json = <<EOT
{
  "plugin_name":"mysql-database-plugin",
  "connection_details":
    {"connection_url":"${var.db_user}:${var.db_creds}@tcp(${var.rds_name}:3306)/"} ,
    "allowed_roles":["${var.db_role}_rw","${var.db_role}_ro"]
}
EOT

fixes the diff after creating the config. However, with the above JSON, new configs cannot be created, as Vault throws an error saying:

  • error creating database object: connection_url cannot be empty

Below is the vault audit log from above failed creation:

{"time":"redacted","type":"response","auth":{"client_token":"redacted","accessor":"redacted","display_name":"token-terraform","policies":["redacted"],"metadata":null,"entity_id":"redacted"},"request":{"id":"redacted","operation":"update","client_token":"redacted","client_token_accessor":"redacted","path":"database/config/mydb","data":{"allowed_roles":["redacted"],"connection_details":{"connection_url":"user:pass@tcp(mydb:3306)/"},"plugin_name":"mysql-database-plugin"},"policy_override":false,"remote_address":"redacted","wrap_ttl":0,"headers":{}},"response":{"data":{"error":"error creating database object: connection_url cannot be empty"}},"error":""}

Expected Behavior

There should be no diff

Actual Behavior

There is always a diff

Steps to Reproduce

Please list the steps required to reproduce the issue, for example:

  1. Create a database/config object and then run terraform plan again. It will show a diff.
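A possible interim workaround, on the assumption that the diff is purely cosmetic: tell Terraform to ignore changes to data_json after creation (Terraform 0.11 lifecycle syntax):

```hcl
resource "vault_generic_secret" "db_config" {
  count = "${local.is_mysql}"
  path  = "database/config/${var.db_identifier}"

  data_json = "..." # as in the original config above

  lifecycle {
    # Suppress the perpetual diff caused by Vault nesting connection_url
    # inside connection_details on read
    ignore_changes = ["data_json"]
  }
}
```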

Terraform can't find data resource

terraform -v
Terraform v0.11.2

  • provider.null v1.0.0
  • provider.vault v1.0.0

Terraform Configuration Files

provider "vault" {
  address = "https://vault.int/"
  token   = "SomeTokenHere"
}

data "vault_generic_secret" "key" {
  path = "secret/domain/test.int/star.test.int/key"
}

data "vault_generic_secret" "certificate" {
  path = "secret/domain/test.int/star.test.int/crt"
}

resource "google_compute_ssl_certificate" "default" {
  name_prefix = "star-test-in"
  description = "default wildcard cert"
  private_key = "${data.vault_generic_secret.key.data_json["value"]}"
  certificate = "test"
}

Expected Behavior

Keys should be extracted from Vault and ready to use

Actual Behavior

Error: Error applying plan:

1 error(s) occurred:

* google_compute_ssl_certificate.default: Resource 'data.vault_generic_secret.key' not found for variable 'data.vault_generic_secret.key.data_json'

Steps to Reproduce

Please list the steps required to reproduce the issue, for example:

  1. terraform apply -var-file=test.tfvars
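One observation about the config above (not necessarily the cause of this error): data_json is a single JSON string, so it cannot be indexed like a map; the data attribute is the map. The key lookup would normally be written as:

```hcl
resource "google_compute_ssl_certificate" "default" {
  name_prefix = "star-test-in"
  description = "default wildcard cert"
  # data (a map) supports key lookup; data_json is a plain JSON string
  private_key = "${data.vault_generic_secret.key.data["value"]}"
  certificate = "test"
}
```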

AWS secret backend nested path support

Hi there,

Terraform Version

v0.11.3

Affected Resource(s)

vault_aws_secret_backend_role

Terraform Configuration Files

resource "vault_aws_secret_backend" "nested-aws-path" {
  access_key                = "${var.access_key}"
  secret_key                = "${var.secret_key}"
  path                      = "nested/aws/path"
 <snip>
}

resource "vault_aws_secret_backend_role" "admin" {
  backend = "${vault_aws_secret_backend.nested-aws-path.path}"
  name    = "admin"

  policy_arn = "arn:aws:iam::aws:policy/AdministratorAccess"
}

Expected Behavior

nested/aws/path is a valid mount point. I expect it to create nested/aws/path/roles/admin. This can be done via the Vault CLI.

Actual Behavior

TF fails providing the following error:
Invalid id nested/aws/path/roles/admin; must be {backend}/roles/{name}

Steps to Reproduce

  1. terraform apply

vault_aws_secret_backend_role fails creating role with policy_arn

Using vault to manage non-secret assets in vault. I can create a vault_aws_secret_backend_role using policy but it throws a 500 using policy_arn.

Terraform Version

Terraform v0.11.3
+ provider.vault v1.0.0

also vault 0.9.3

Affected Resource(s)

  • vault_aws_secret_backend_role

Terraform Configuration Files

provider "vault" {
  address = "https://vault.example.com"
}

terraform {
  backend "s3" {
    bucket = "example-terraform-statefiles"
    key    = "us-east-1/vault-policy.tfstate"
    region = "us-east-1"
    lock_table = "terraform_statelock"
  }
}

Debug Output

These logs are full of tokens and other material that I'm not going to redact, so I'm not posting them. For this resource:

resource "vault_aws_secret_backend_role" "vault_node_policy" {
  backend = "aws"
  name    = "vault-node-policy"
  policy_arn = "arn:aws:iam::1234567890000:policy/vault_node_policy"
}

I get this error:

* vault_aws_secret_backend_role.vault_node_policy: Error creating role "vault-node-policy" for backend "aws": Error making API request.

URL: PUT https://vault.ops.tropo.com/v1/aws/roles/vault-node-policy
Code: 500. Errors:

Expected Behavior

The role should have been created in the Vault backend

Actual Behavior

Got a 500 error from Vault

Steps to Reproduce

Please list the steps required to reproduce the issue, for example:

  1. terraform apply

Cannot refresh/plan using vault generated STS Federation Token creds for AWS

This issue was originally opened by @Yuxael as hashicorp/terraform#15728. It was migrated here as a result of the provider split. The original body of the issue is below.


Terraform Version

Terraform v0.10.0

Terraform Configuration Files

data "vault_generic_secret" "aws_creds" {
  path = "aws/sts/deploy"
}


provider "aws" {
    region = "${var.ec2_region}"
    access_key = "${data.vault_generic_secret.aws_creds.data["access_key"]}"
    secret_key = "${data.vault_generic_secret.aws_creds.data["secret_key"]}"
    token = "${data.vault_generic_secret.aws_creds.data["security_token"]}"
}

Debug Output

https://gist.github.com/Yuxael/9f3c0ff87f26053688a3bb6d8420877d

Expected Behavior

AWS credentials regenerated and terraform refresh/plan should run successfully

Actual Behavior

Just hangs for eternity. It keeps looping over dag/walk: vertex ... shown at the end of debug.

Steps to Reproduce

  1. Run terraform refresh or terraform apply (whatever generates/updates the tfstate; this should work the first time).
  2. Wait until the lease time for the STS token expires.
  3. Run terraform refresh/plan again (it will hang forever; cancel it).
  4. Manually remove the data.vault_generic_secret.aws_creds section from the tfstate.
  5. Run terraform refresh/plan once again (it will work this time).

Important Factoids

When run the first time, it successfully refreshes the tfstate or outputs a plan. Reruns also work fine until the lease time for the credentials expires; then it behaves as the debug output shows.

What also bothers me is the fact that it saves the credentials generated by Vault in the tfstate:

"data.vault_generic_secret.aws_creds": {
                    "type": "vault_generic_secret",
                    "depends_on": [],
                    "primary": {
                        "id": "0d3e2d54-5e3f-a933-3d2c-a2f8e4a29d34",
                        "attributes": {
                            "data.%": "3",
                            "data.access_key": "<redacted>",
                            "data.secret_key": "<redacted>",
                            "data.security_token": "<redacted>",
                            "data_json": "{\"access_key\":\"<redacted>\",\"secret_key\":\"<redacted>\",\"security_token\":\"<redacted>\"}",
                            "id": "0d3e2d54-5e3f-a933-3d2c-a2f8e4a29d34",
                            "lease_duration": "3599",
                            "lease_id": "aws/sts/deploy/c9808580-0640-b8f2-eaeb-0c6db0997dd5",
                            "lease_renewable": "false",
                            "lease_start_time": "RFC7779",
                            "path": "aws/sts/deploy"
                        },
                        "meta": {},
                        "tainted": false
}

This is talked about in issue hashicorp/terraform#516 and is directly connected with this problem because in order to make plan and refresh commands work again I have to manually delete those stale credentials from tfstate.

References

vault_aws_secret_backend can't work with IAM Instance profiles

Trying to set up a new backend fails unless you specify a specific user access/secret key:

Error: vault_aws_secret_backend.aws: "access_key": required field is not set
Error: vault_aws_secret_backend.aws: "secret_key": required field is not set

However, if you go against Vault directly, you can skip the keys and use IAM instance profiles:

vault secrets enable -path=aws/prod/myaccount aws
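For reference, a minimal sketch of the configuration the reporter would like the provider to accept (hypothetical; as of provider v1.0.0 this fails with the "required field is not set" errors above):

```hcl
# Hypothetical: a backend mount with no static credentials, relying on the
# IAM instance profile of the machine Vault runs on. Rejected by the
# provider's schema validation as of v1.0.0.
resource "vault_aws_secret_backend" "aws" {
  path = "aws/prod/myaccount"
}
```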

Next release date?

Just hoping to get an idea of when we can expect the next cut!

There are some showstoppers for me in 1.0.0 currently, but I'm hoping to deprecate my existing Vault configuration tooling ASAP!

Thanks

Unable to access CSR generated by vault_generic_secret

Terraform Version

  • Terraform v0.11.3
  • provider.vault v1.0.0
  • Vault v0.9.3

Affected Resource(s)

Just vault_generic_secret (link to docs).

Terraform Configuration Files

resource "vault_mount" "some_pki" {
    path = "some_pki"
    type = "pki"
    default_lease_ttl_seconds = "864000"
    max_lease_ttl_seconds     = "2592000" 
}

resource "vault_generic_secret" "intermediate_ca" {
    path = "${vault_mount.some_pki.path}/intermediate/generate/internal"
    disable_read = true
    data_json = <<EOF
{
    "common_name": "Some intermediate CA",
    "key_type": "ec",
    "key_bits": "384",
    "exclude_cn_from_sans": true
}
EOF
}

Debug Output

https://gist.github.com/vtorhonen/97d37eb5ff9e2aba3afaeccaeafec5c6

HTTP request:

PUT /v1/some_pki/intermediate/generate/internal HTTP/1.1
Host: vault.service.consul:8200
User-Agent: Go-http-client/1.1
Content-Length: 100
X-Vault-Token: 3b957cec-862c-f883-1908-126b96623f89
Accept-Encoding: gzip
Connection: close

{"common_name":"Some intermediate CA","exclude_cn_from_sans":true,"key_bits":"384","key_type":"ec"}

HTTP response:

HTTP/1.1 200 OK
Cache-Control: no-store
Content-Type: application/json
Date: Wed, 07 Feb 2018 19:49:45 GMT
Content-Length: 626
Connection: close

{"request_id":"f54212bd-41ff-0bed-f5bf-0553f10d0fec","lease_id":"","renewable":false,"lease_duration":0,"data":{"csr":"-----BEGIN CERTIFICATE REQUEST-----\nMIIBFjCBngIBADAfMR0wGwYDVQQDExRTb21lIGludGVybWVkaWF0ZSBDQTB2MBAG\nByqGSM49AgEGBSuBBAAiA2IABAQBa3t32/a0LTUn9ZxBKcDV3vAN8tUJuhTzG4n6\nGhCyE9qBe6OEWct1qISek5ngzPko8g1IxnrhVcuFAZZEWPWJFW3pvrUL1+4RlhPC\nPlTO/DPGIL1acFw6Rpf7WXU+gaAAMAoGCCqGSM49BAMCA2cAMGQCMH5fLhwaNnQt\nxbX0MnOKK1KQ4Y3DTrg10Cq0q8uigA4zlx7EXzITH4i2OvpZZBT5dQIwUneza0tC\ntsqgxj7M99ywdygrlYc6x0hjHzF1bYag+mWTKCaMIYCIg1NDlzuOq23O\n-----END CERTIFICATE REQUEST-----"},"wrap_info":null,"warnings":null,"auth":null}

Expected Behavior

Terraform should provide a way to access the certificate request generated by Vault.

Actual Behavior

The certificate request is generated by Vault and sent back to the client in the HTTP response. As seen in terraform-provider-vault/vault/resource_generic_secret.go, the response data is never parsed, as the implementation relies on the assumption that (at least some) resources can be read back after creation. However, the HTTP API for the Vault PKI secrets engine does not allow reading a previously generated CSR again after the initial call.

The only workaround I've come up with so far is to use an external data source. This is far from ideal and has significant caveats: the data source is refreshed each time Terraform is run, and each refresh also generates a new intermediate CA.

#!/bin/bash

VAULT_URL="${VAULT_ADDR}/v1/pki/intermediate/generate/internal"

curl -sX PUT \
-H "X-Vault-Token: ${VAULT_TOKEN}" \
-d '{"common_name":"Some intermediate CA","exclude_cn_from_sans":true,"key_bits":"384","key_type":"ec"}' \
${VAULT_URL} | jq -Mcr '.data'

With this Terraform config:

data "external" "hack_intermediate_ca" {
    program = [ "bash", "${path.module}/scripts/hack_intermediate_ca.sh"]
}

output "intermediate_ca_csr" {
    value = "${data.external.hack_intermediate_ca.result.csr}"
}

Steps to Reproduce

  1. Run Vault
  2. Setup Terraform config as shown above
  3. Run terraform apply

Important Factoids

Nope.

References

Not to my knowledge. I tried searching for similar issues in Vault's GitHub issues but couldn't find any.

panic: runtime error: invalid memory address or nil pointer dereference

$ ls .terraform/plugins/darwin_amd64/
lock.json                           terraform-provider-aws_v1.6.0_x4    terraform-provider-vault_v1.0.0_x4  
$ terraform version
Terraform v0.10.8
2018-01-11T18:18:41.485Z [DEBUG] plugin.terraform-provider-vault_v1.0.0_x4: 2018/01/11 18:18:41 [DEBUG] secret: (*api.Secret)(nil)
2018-01-11T18:18:41.487Z [DEBUG] plugin.terraform-provider-vault_v1.0.0_x4: panic: runtime error: invalid memory address or nil pointer dereference
2018-01-11T18:18:41.487Z [DEBUG] plugin.terraform-provider-vault_v1.0.0_x4: [signal SIGSEGV: segmentation violation code=0x1 addr=0x30 pc=0x167026e]
2018-01-11T18:18:41.487Z [DEBUG] plugin.terraform-provider-vault_v1.0.0_x4: 
2018-01-11T18:18:41.487Z [DEBUG] plugin.terraform-provider-vault_v1.0.0_x4: goroutine 82 [running]:
2018-01-11T18:18:41.487Z [DEBUG] plugin.terraform-provider-vault_v1.0.0_x4: github.com/terraform-providers/terraform-provider-vault/vault.genericSecretResourceRead(0xc4202c3180, 0x1803de0, 0xc420121860, 0x1, 0x1d3e840)
2018-01-11T18:18:41.487Z [DEBUG] plugin.terraform-provider-vault_v1.0.0_x4: 	/opt/teamcity-agent/work/222ea50a1b4f75f4/src/github.com/terraform-providers/terraform-provider-vault/vault/resource_generic_secret.go:153 +0x2ce
2018-01-11T18:18:41.487Z [DEBUG] plugin.terraform-provider-vault_v1.0.0_x4: github.com/terraform-providers/terraform-provider-vault/vendor/github.com/hashicorp/terraform/helper/schema.(*Resource).Refresh(0xc420053920, 0xc420128f00, 0x1803de0, 0xc420121860, 0xc42007beb8, 0xc4203ec401, 0x80000000018)
2018-01-11T18:18:41.487Z [DEBUG] plugin.terraform-provider-vault_v1.0.0_x4: 	/opt/teamcity-agent/work/222ea50a1b4f75f4/src/github.com/terraform-providers/terraform-provider-vault/vendor/github.com/hashicorp/terraform/helper/schema/resource.go:321 +0x199
2018-01-11T18:18:41.487Z [DEBUG] plugin.terraform-provider-vault_v1.0.0_x4: github.com/terraform-providers/terraform-provider-vault/vendor/github.com/hashicorp/terraform/helper/schema.(*Provider).Refresh(0xc42008ec40, 0xc420128eb0, 0xc420128f00, 0x1f1b6c8, 0x0, 0x18)
2018-01-11T18:18:41.487Z [DEBUG] plugin.terraform-provider-vault_v1.0.0_x4: 	/opt/teamcity-agent/work/222ea50a1b4f75f4/src/github.com/terraform-providers/terraform-provider-vault/vendor/github.com/hashicorp/terraform/helper/schema/provider.go:284 +0x9a
2018-01-11T18:18:41.487Z [DEBUG] plugin.terraform-provider-vault_v1.0.0_x4: github.com/terraform-providers/terraform-provider-vault/vendor/github.com/hashicorp/terraform/plugin.(*ResourceProviderServer).Refresh(0xc4200e59e0, 0xc420125ef0, 0xc420125ff0, 0x0, 0x0)
2018-01-11T18:18:41.487Z [DEBUG] plugin.terraform-provider-vault_v1.0.0_x4: 	/opt/teamcity-agent/work/222ea50a1b4f75f4/src/github.com/terraform-providers/terraform-provider-vault/vendor/github.com/hashicorp/terraform/plugin/resource_provider.go:510 +0x4e
2018-01-11T18:18:41.488Z [DEBUG] plugin.terraform-provider-vault_v1.0.0_x4: reflect.Value.call(0xc420052840, 0xc42012c0f8, 0x13, 0x18310b8, 0x4, 0xc4203acf20, 0x3, 0x3, 0x0, 0x0, ...)
2018-01-11T18:18:41.488Z [DEBUG] plugin.terraform-provider-vault_v1.0.0_x4: 	/usr/local/go/src/reflect/value.go:434 +0x906
2018-01-11T18:18:41.488Z [DEBUG] plugin.terraform-provider-vault_v1.0.0_x4: reflect.Value.Call(0xc420052840, 0xc42012c0f8, 0x13, 0xc420027720, 0x3, 0x3, 0x0, 0x0, 0x0)
2018-01-11T18:18:41.488Z [DEBUG] plugin.terraform-provider-vault_v1.0.0_x4: 	/usr/local/go/src/reflect/value.go:302 +0xa4
2018-01-11T18:18:41.488Z [DEBUG] plugin.terraform-provider-vault_v1.0.0_x4: net/rpc.(*service).call(0xc420126b00, 0xc420128230, 0xc420146210, 0xc420148700, 0xc42012a540, 0x16cfb60, 0xc420125ef0, 0x16, 0x16cfba0, 0xc420125ff0, ...)
2018-01-11T18:18:41.488Z [DEBUG] plugin.terraform-provider-vault_v1.0.0_x4: 	/usr/local/go/src/net/rpc/server.go:381 +0x142
2018-01-11T18:18:41.488Z [DEBUG] plugin.terraform-provider-vault_v1.0.0_x4: created by net/rpc.(*Server).ServeCodec
2018-01-11T18:18:41.488Z [DEBUG] plugin.terraform-provider-vault_v1.0.0_x4: 	/usr/local/go/src/net/rpc/server.go:475 +0x36b

Which resources are importable?

What resources are importable?

It seems from the import docs that each resource should have a blurb at the bottom if it supports importing, but I don't see that on any of the Vault resources: https://www.terraform.io/docs/import/importability.html

I also saw another issue in the changelog that said import support was added to a bunch of the older resources and see import stanzas in some of the newer resources.

https://github.com/terraform-providers/terraform-provider-vault/blob/master/CHANGELOG.md#100-november-16-2017

https://github.com/terraform-providers/terraform-provider-vault/blob/master/vault/resource_approle_auth_backend_role.go#L26

Is it just missing documentation?

Add Vault Provider with Resources To Manage Mounts, Backends and Policies (Not Secrets)

This issue was originally opened by @ekristen as hashicorp/terraform#12167. It was migrated here as part of the provider split. The original body of the issue is below.


Terraform seems like the perfect tool to use to manage state with respect to backends, mounts, and the overall configuration of Vault (minus secrets).

Using terraform to manage policies, mounts, roles, and other basic configuration.

This is an interesting idea https://www.hashicorp.com/blog/codifying-vault-policies-and-configuration.html but having it always POST the data to the endpoints seems like a way to cause problems, even though it is idempotent.

Using Terraform to manage and maintain state, so that it knows when a change is going to be made, makes more sense to me.
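As a sketch of the configuration-only management the issue proposes (resource names here are illustrative), mounts and policies can be expressed without putting any secret material in state:

```hcl
# Illustrative only: manage a mount and a policy, but no secrets.
resource "vault_mount" "kv" {
  path = "secret"
  type = "kv"
}

resource "vault_policy" "readonly" {
  name   = "readonly"
  policy = <<EOT
path "secret/*" {
  capabilities = ["read", "list"]
}
EOT
}
```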

"dummy interpolation" for data and resource dependencies don't work

Hi there,

I have an issue with data and resource dependencies: "dummy interpolation" doesn't work.

Terraform Version

Terraform v0.9.4

Affected Resource(s)

data "vault_generic_secret"
resource "vault_generic_secret"

Terraform Configuration Files

(Please note: this is an example .tf file; I don't want to include the role-id generation here.)

resource "vault_generic_secret" "roles" {
    count       = "${length(var.roles)}"

    path        = "auth/roles/role/${var.roles[count.index]}"
    data_json   = <<_EOT
{
  "foo":   "bar",
  "pizza": "cheese"
}
    _EOT
}

data "vault_generic_secret" "roles_id" {
    count       = "${length(var.roles)}"

    path = "${vault_generic_secret.roles.*.path[count.index]}/role-id"
}

resource "consul_keys" "roles_id" {
    count       = "${length(var.roles)}"

    key {
        path    = "vault-roles/${var.roles[count.index]}/role-id"
        value   = "${replace(data.vault_generic_secret.roles_id.*.data_json[count.index], "/.*:\"(.*)\".*/", "$1")}"
        delete  = "true"
    }   
}

variable "roles" {
    type        = "list"
    default     = [ 
        "default",
        "default1"
    ]
}

Expected Behavior

When I add a new role to the var.roles list (yes, I add new roles at the end of the list) and run terraform apply, the new role and its role_id should be added to Vault and copied to Consul.

Actual Behavior

I see the error:

Error refreshing state: 1 error(s) occurred:
* module.roles.data.vault_generic_secret.role_id: 1 error(s) occurred:
* module.roles.data.vault_generic_secret.roles_id[51]: index 51 out of range for list vault_generic_secret.roles_id.*.path (max 51) in:
${vault_generic_secret.roles.*.path[count.index]}/role-id

Steps to Reproduce

  1. terraform apply

References

I think this can be related to depends_on for data_source
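One hedged workaround sketch, assuming the role paths can be derived directly from var.roles: build the data source path from the variable instead of interpolating the resource's splat list, and declare the dependency explicitly (depends_on behavior on data sources varied across Terraform versions of that era, so this may still force a refresh on every run):

```hcl
data "vault_generic_secret" "roles_id" {
  count      = "${length(var.roles)}"

  # Derive the path from the variable rather than the resource splat,
  # so the index can never run past the resource list during refresh.
  path       = "auth/roles/role/${var.roles[count.index]}/role-id"
  depends_on = ["vault_generic_secret.roles"]
}
```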

"Connection is shut down" when using multiple aliased vaults

I'm using Terraform v0.11.3

I'm using two Vault clusters, with an alias for the second one.

provider "vault" {
  address         = "https://cluster-1-server"
  token           = "${var.VAULT_TOKEN_CLUSTER-1}"
}


provider "vault" {
  address         = "https://cluster-2-server"
  token           = "${var.VAULT_TOKEN_CLUSTER-2}"
  alias           = "cluster-2"
}

I'm primarily writing to cluster-1, and I want to connect to cluster-2 within a module.

When I add the write to cluster-2 (as below), I start getting connection issues with cluster-1.

resource "vault_generic_secret" "foo_bar" {
	  provider = "vault.cluster-2"
	
	  path = "path/key"
	  data_json = <<EOT
	{
	  "KEY": "VALUE"
	}
	EOT
}

After adding it, about half of the cluster-1 connections work as expected, then I get an error:

2018/02/26 20:48:38 [ERROR] root: eval: *terraform.EvalSequence, err: vault_generic_secret.cluster_one_thing: connection is shut down

If I comment out the provider line and write the secret to cluster-1 instead, it works as expected.
It seems like there is some sort of condition where hitting cluster-2 breaks the ability to talk to cluster-1.
I've tried adding parallelism=1 and received the same error.
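A sketch of passing the aliased provider into the module explicitly with the Terraform 0.11 providers map (module name and source here are hypothetical), in case implicit provider inheritance is part of the problem:

```hcl
module "cluster2_secrets" {
  source = "./modules/secrets"

  # Hand the aliased provider to the module explicitly (0.11 syntax).
  providers = {
    "vault" = "vault.cluster-2"
  }
}
```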

[provider/vault] Priority on env vars

This issue was originally opened by @zevran as hashicorp/terraform#12405. It was migrated here as part of the provider split. The original body of the issue is below.


Hi,

When configuring the Vault provider, I am not able to override the Vault configuration using environment variables like VAULT_CACERT or VAULT_CAPATH with this configuration:

provider "vault" {
  address = "https://vault.internal.lan:443"

  ca_cert_file = "/etc/ssl/certs/internal-ca.pem"
  ca_cert_dir  = "/etc/ssl/certs"
}

It prints out the following on a terraform plan when the environment variable VAULT_CACERT is set to /usr/local/etc/openssl/certs/internal-ca.pem:

Error refreshing state: 1 error(s) occurred:

* failed to configure TLS for Vault API: Error loading CA File: open /etc/ssl/certs/internal-ca.pem: no such file or directory

Environment variables should be able to override the variables defined in the Vault provider, or am I thinking about this wrong?

Thanks
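Until the precedence is changed, one workaround sketch is to leave the TLS attributes unset in the config so the provider falls back to the environment:

```hcl
# With ca_cert_file / ca_cert_dir omitted, VAULT_CACERT and VAULT_CAPATH
# from the environment are used instead of hard-coded paths.
provider "vault" {
  address = "https://vault.internal.lan:443"
}
```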
