terraform-provider-hcp's Introduction

HashiCorp Cloud Platform (HCP) Terraform Provider

Available in the Terraform Registry.

The HashiCorp Cloud Platform (HCP) Terraform Provider is a plugin for Terraform that allows for the full lifecycle management of HCP resources. This provider is maintained internally by the HashiCorp Cloud Services team.

Requirements

Using the Provider

See the HashiCorp Cloud Platform (HCP) Provider documentation to get started using the provider.

Contributing

See the contributing directory for more developer documentation.

Design

See the design directory for documents capturing certain key design decisions made for this provider as a platform.

Example

Below is a complex example that creates a HashiCorp Virtual Network (HVN), an HCP Consul cluster within that HVN, and peers the HVN to an AWS VPC.

// Configure the provider
provider "hcp" {}

provider "aws" {
  region = "us-west-2"
}

// Create a HashiCorp Virtual Network (HVN).
resource "hcp_hvn" "example" {
  hvn_id         = "hvn"
  cloud_provider = "aws"
  region         = "us-west-2"
  cidr_block     = "172.25.16.0/20"
}

// Create an HCP Consul cluster within the HVN.
resource "hcp_consul_cluster" "example" {
  hvn_id         = hcp_hvn.example.hvn_id
  cluster_id     = "consul-cluster"
  tier           = "development"
}

// If you have not already, create a VPC within your AWS account that will
// contain the workloads you want to connect to your HCP Consul cluster.
// Make sure the CIDR block of the peer VPC does not overlap with the CIDR
// of the HVN.
resource "aws_vpc" "peer" {
  cidr_block = "10.220.0.0/16"
}

// Create an HCP network peering to peer your HVN with your AWS VPC.
resource "hcp_aws_network_peering" "example" {
  peering_id          = "peer-id"
  hvn_id              = hcp_hvn.example.hvn_id
  peer_vpc_id         = aws_vpc.peer.id
  peer_account_id     = aws_vpc.peer.owner_id
  peer_vpc_region     = "us-west-2"
}

// Create an HVN route that targets your HCP network peering and matches your AWS VPC's CIDR block.
resource "hcp_hvn_route" "example" {
  hvn_link         = hcp_hvn.example.self_link
  hvn_route_id     = "peer-route-id"
  destination_cidr = aws_vpc.peer.cidr_block
  target_link      = hcp_aws_network_peering.example.self_link
}

// Accept the VPC peering within your AWS account.
resource "aws_vpc_peering_connection_accepter" "peer" {
  vpc_peering_connection_id = hcp_aws_network_peering.example.provider_peering_id
  auto_accept               = true
}

// Create a Vault cluster within the HVN.
resource "hcp_vault_cluster" "example" {
  cluster_id = "vault-cluster"
  hvn_id     = hcp_hvn.example.hvn_id
}

terraform-provider-hcp's People

Contributors

aidan-mundy, alexmunda, anpucel, averche, bcmdarroch, catsby, chapmanc, codergs, dadgar, delores-hashicorp, dependabot[bot], dhuckins, dmalch-hashicorp, hanshasselberg, hashicorp-copywrite[bot], hashicorp-tsccr[bot], itsjaspermilan, jaireddjawed, janet, jasonpilz, jolisabrownhashicorp, lackeyjb, leahrob, manish-hashicorp, nywilken, riddhi89, roaks3, squaresurf, sylviamoss, uruemu

terraform-provider-hcp's Issues

Error when using non-prod HCP environment

Terraform Version and Provider Version

Terraform version: v1.3.7
HCP provider version: v0.52.0

Affected Resource(s)

  • authentication

Terraform Configuration Files

terraform {
  required_providers {
    hcp = {
      source  = "hashicorp/hcp"
      version = "= 0.52.0"
    }
  }
}

// Configure the provider
provider "hcp" {}


// Create an HVN
resource "hcp_hvn" "example_hvn" {
  hvn_id         = "hcp-tf-example-hvn"
  cloud_provider = "aws"
  region         = "us-west-2"
  cidr_block     = "172.25.16.0/20"
}

Debug Output

https://gist.github.com/NickCellino/def8ecd610ef52df17f290d50733ce64

Panic Output

Steps to Reproduce

  1. terraform apply

Expected Behavior

  1. HVN is created

Actual Behavior

  1. I received the following error message:
Error: unable to get project from credentials: unable to fetch organization list: Could not complete request. Please ensure your HCP_API_HOST, HCP_CLIENT_ID, and HCP_CLIENT_SECRET are correct.

Important Factoids

This works correctly if I pin the hcp provider version to 0.44.0, but with 0.45.0 or anything after that, I get the error.
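
Until the regression is addressed, a hedged workaround based on the note above is to pin the provider back to the last version reported to work:

terraform {
  required_providers {
    hcp = {
      source  = "hashicorp/hcp"
      version = "= 0.44.0" # last version reported to work against the non-prod HCP environment
    }
  }
}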

References

  • #0000

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

[Enhancement] Manage user access to HashiCorp Cloud Platform

Description

We would like to be able to manage user access to our HCP organization using terraform.

New or Affected Resource(s)

  • hcp_iam_user

Potential Terraform Configuration

resource "hcp_iam_user" "drew" {
  email = "[email protected]"
  role = "admin"
}

References

  • #0000

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Add metrics and audit log streaming for Vault

Description

It would be great to be able to configure metric and audit log streaming via Terraform, similar to how we configure the rest of HCP. This would enable API keys, passwords, etc., to be sourced from places like Vault allowing for centralized secret rotation.

New or Affected Resource(s)

New resources could be:

  • hcp_vault_metrics_stream
  • hcp_vault_audit_log_stream

Potential Terraform Configuration

resource "hcp_vault_metrics_stream" "this" {
  metrics_provider = "datadog"
  datadog_api_key = "..."
  datadog_site_region = "us1"
}

resource "hcp_vault_audit_log_stream" "this" {
  log_provider = "datadog"
  datadog_api_key = "..."
  datadog_site_region = "us1"
}

References

  • #0000

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Add computed output "auth_method_id" to "hcp_boundary_cluster" resource

Description

After using the HCP Terraform provider to create an 'hcp_boundary_cluster' resource, there is no easy way to grab the auth_method_id of the initially generated password auth method that is created for use with the originally supplied username and password values.

Unfortunately, this leads to not being able to cleanly automate the cluster configuration using the Boundary Terraform Provider, as the boundary provider requires the auth_method_id for authentication:

#First, create HCP Boundary Cluster
resource "hcp_boundary_cluster" "boundary_cluster" {
  cluster_id = var.cluster_id
  username   = var.username
  password   = var.password
}

#Second, authenticate to the cluster using boundary provider
provider "boundary" {
  addr                            = "https://1232a069-b529-42ad-ba84-823cb5c3b4d7.boundary.hashicorp.cloud"
  auth_method_id                  = "ampw_1234567890" # NOT OUTPUT BY HCP PROVIDER ABOVE
  password_auth_method_login_name = var.username   
  password_auth_method_password   = var.password
}

New or Affected Resource(s)

  • hcp_boundary_cluster

Potential Terraform Configuration

output "boundary_password_auth_method_id" {
  value = hcp_boundary_cluster.boundary_cluster.auth_method_id
}

References

  • #0000

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Resources defined wrongly

Terraform Version and Provider Version

Terraform version: v1.2.2
HCP provider version: v0.43.0

Affected Resource(s)

  • hcp_hvn_route

Terraform Configuration Files

resource "hcp_hvn_route" "example" {
  hvn_link         = hcp_hvn.hvn.self_link
  hvn_route_id     = var.route_id
  destination_cidr = aws_vpc.peer.cidr_block
  target_link      = data.hcp_aws_network_peering.example.self_link
}

Debug Output

╷
│ Error: Reference to undeclared resource
│
│ on main.tf line 60, in resource "hcp_hvn_route" "example":
│ 60: hvn_link = hcp_hvn.hvn.self_link
│
│ A managed resource "hcp_hvn" "hvn" has not been declared in the root module.
╵

Panic Output

Steps to Reproduce

  1. Followed code example based on https://registry.terraform.io/providers/hashicorp/hcp/latest/docs/guides/peering
  2. terraform apply and error occurs

Expected Behavior

  1. terraform apply is successful

Actual Behavior

  1. terraform apply and error occurs

Important Factoids

  1. An hcp_hvn resource named hvn does not exist in the example code

Line 60 should be changed from

hvn_link         = hcp_hvn.hvn.self_link

to

hvn_link         = hcp_hvn.example.self_link

since example is the name configured for hcp_hvn in the example code, not hvn.

References

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

hcp_vault_cluster_admin_token doesn't refresh

Terraform Version and Provider Version

Terraform version: 0.15.5
HCP provider version: 0.7.0

Affected Resource(s)

hcp_vault_cluster_admin_token

Terraform Configuration Files

hashicorp_cloud

resource "hcp_vault_cluster" "vault" {
  cluster_id      = "vault"
  hvn_id          = hcp_hvn.main.hvn_id
  public_endpoint = var.vault_public_endpoint
}

resource "hcp_vault_cluster_admin_token" "token" {
  cluster_id = hcp_vault_cluster.vault.cluster_id
}

output "vault_root_token" {
  sensitive = true
  value = hcp_vault_cluster_admin_token.token.token
}

output "vault_url" {
  value = "https://${hcp_vault_cluster.vault.vault_public_endpoint_url}:8200"
}

vault_secrets

data "terraform_remote_state" "hashicorp_cloud" {
  backend = "remote"
  config = {
    organization = var.organization
    workspaces = {
      name = "hashicorp_cloud"
    }
  }
}

provider "vault" {
  address     = data.terraform_remote_state.hashicorp_cloud.outputs.vault_url
  token       = data.terraform_remote_state.hashicorp_cloud.outputs.vault_root_token
}

Actual Behavior

The code above is the only way to use HCP Vault with the Vault provider, yet it will fail after the timeout period of 6 hours with the following error message:

│ Error: Error making API request.
│ 
│ URL: GET https://vault.vault.6b287e47-e880-4af5-b1e0-db844115eec4.aws.hashicorp.cloud:8200/v1/auth/token/lookup-self
│ Code: 403. Errors:
│ 
│ * permission denied

Expected Behavior

There should either be an option to not time out the token, or the token should refresh properly.
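
A possible workaround sketch (not from the original report), assuming Terraform >= 1.2 and the hashicorp/time provider, is to force the token to be recreated before the 6-hour TTL elapses:

resource "time_rotating" "admin_token" {
  rotation_hours = 5 # rotate before the 6h TTL expires
}

resource "hcp_vault_cluster_admin_token" "token" {
  cluster_id = hcp_vault_cluster.vault.cluster_id

  lifecycle {
    # Recreate the token whenever the time_rotating resource rotates.
    replace_triggered_by = [time_rotating.admin_token]
  }
}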

HCP Consul Token is not usable with Consul provider

Terraform Version and Provider Version

0.15.3

Affected Resource(s)

hcp_consul_cluster

Terraform Configuration Files

resource "hcp_consul_cluster" "example" {
  cluster_id = "consul-cluster"
  hvn_id     = hcp_hvn.example.hvn_id
  tier       = "development"
}

provider "consul" {
  address    = "demo.consul.io:80"
  token      = ???
}

Expected Behavior

You should be able to pass the HCP Consul root token into the Consul Provider.

Actual Behavior

There are only the consul_root_token_accessor_id and consul_root_token_secret_id outputs from hcp_consul_cluster.

You can regenerate the token using hcp_consul_cluster_root_token, but only kubernetes_secret is output.
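
A hedged workaround sketch using the outputs that do exist on the cluster resource (whether the root token secret ID is accepted by the Consul provider as a token is an assumption):

provider "consul" {
  address = "demo.consul.io:80"
  # Assumption: the cluster's root token secret ID can be passed as the provider token.
  token   = hcp_consul_cluster.example.consul_root_token_secret_id
}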

Unclear how to refresh a user session

Terraform Version and Provider Version

Terraform version: v1.3.5
HCP provider version: v0.53.0

Affected Resource(s)

  • provider "hcp" {}

Terraform Configuration Files

provider "hcp" {}

Debug Output

╷
│ Error: unable to get project from credentials: unable to fetch organization list: could not complete request: please ensure your HCP_API_HOST, HCP_CLIENT_ID, and HCP_CLIENT_SECRET are correct
│
│   with provider["registry.terraform.io/hashicorp/hcp"],
│   on versions.tf line 32, in provider "hcp":
│   32: provider "hcp" {}
│
╵

Panic Output

Steps to Reproduce

  1. terraform apply and perform browser-based authentication for HCP provider
  2. Do something else for 20 minutes
  3. Run terraform apply again after your session has expired. The output gives no clear explanation of how to refresh your session token. For instance, for AWS you would run aws sso login --profile profile-name and then resume using your cached AWS auth.

Expected Behavior

Either the user should be prompted to repeat the web-based auth, or the error message and documentation should provide clear instructions on how to refresh your user session.

Actual Behavior

terraform apply never gets past the refresh phase.

Important Factoids

References

  • #0000

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

hcp_azure_peering_connection Data Source times out too soon

Terraform Version and Provider Version

Terraform version: v1.0.11
HCP provider version: v0.41.0

Affected Resource(s)

  • hcp_azure_peering_connection

Terraform Configuration Files

data "hcp_azure_peering_connection" "peering" {
  hvn_link              = var.hvn.self_link
  peering_id            = hcp_azure_peering_connection.peering.peering_id
  wait_for_active_state = true
}

Steps to Reproduce

  1. terraform apply
  2. Occasionally the HCP Azure peering connection takes a long time to reach the ACTIVE state

Expected Behavior

I expect the hcp_azure_peering_connection to keep retrying until it reaches the defined timeout, 35 minutes.

Actual Behavior

The hcp_azure_peering_connection always times out after 20 minutes, even if we set a lower bound timeout of 1m explicitly in the Terraform file.

References

I believe the issue might be in the Terraform SDK, and have opened an issue there: hashicorp/terraform-plugin-sdk#1038

I am writing this issue to track it explicitly for the HCP terraform provider.

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Capability to revoke an HCP Packer Iteration

Description

This would allow security teams to have a control plane for revoking iterations through IaC.

New or Affected Resource(s)

  • hcp_packer_iteration_revokation

Potential Terraform Configuration


resource "hcp_packer_iteration_revokation" "revokation" {
  iteration = 2
  comment   = "this iteration contains a 0 day vulnerability"
}

References

  • #0000

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Support gossip encryption key as separate output of hcp_consul_cluster resource

Description

It would be helpful for the gossip encryption key to be a separate output from the hcp_consul_cluster resource. Currently it must be parsed from the consul_config_file output.

New or Affected Resource(s)

  • hcp_consul_cluster

Potential Terraform Configuration

output `consul_gossip_encryption_key` {}
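
In the meantime, a hedged workaround sketch, assuming consul_config_file is a base64-encoded Consul configuration JSON containing an "encrypt" field:

locals {
  # Assumption: consul_config_file decodes to JSON with an "encrypt" field.
  consul_config         = jsondecode(base64decode(hcp_consul_cluster.example.consul_config_file))
  gossip_encryption_key = local.consul_config.encrypt
}

output "consul_gossip_encryption_key" {
  value     = local.gossip_encryption_key
  sensitive = true
}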

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Consul and Vault URLs aren't correct

Terraform Version and Provider Version

0.15.3

Affected Resource(s)

  • hcp_consul_cluster
  • hcp_vault_cluster

Terraform Configuration Files

output "consul_url" {
  description = ""
  value = hcp_consul_cluster.consul.consul_private_endpoint_url
}

output "vault_url" {
  description = ""
  value = hcp_vault_cluster.vault.vault_private_endpoint_url
}

Expected Behavior

URL outputs should be actual URLs, i.e. prefixed with https://.

Actual Behavior

URLs such as the following are generated:

consul.private.consul.xxxxxxxxxx.aws.hashicorp.cloud

These then cause errors when you pass them into the Consul and Vault providers.
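
A workaround sketch consistent with other reports in this tracker is to rewrite the outputs above to prepend the scheme (and, where needed, the port) manually:

output "consul_url" {
  value = "https://${hcp_consul_cluster.consul.consul_private_endpoint_url}"
}

output "vault_url" {
  # The Vault API port (8200) may also need to be appended, as shown in other issues here.
  value = "https://${hcp_vault_cluster.vault.vault_private_endpoint_url}:8200"
}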

Support for Vault in Provider

As of the 0.1.0 documentation, there seems to be no support for Vault yet (beta).
Please implement support for it in the provider.

Improved testing using HCP Consul/Vault with their Terraform providers

Description

There needs to be improved support for the following use case:

  1. Provision a Consul or Vault cluster in Hashicorp Cloud.
  2. Add entries (e.g. hosts/secrets) in those respective clusters using Terraform.

Right now this provider is unusable for this use case, and it would be good to have additional E2E testing to ensure it works.
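
A hedged sketch of the intended workflow (the endpoint and token outputs wired into the consul provider, and the consul_keys entry, are illustrative assumptions):

resource "hcp_consul_cluster" "example" {
  cluster_id      = "consul-cluster"
  hvn_id          = hcp_hvn.example.hvn_id
  tier            = "development"
  public_endpoint = true # assumed, so the public endpoint output is populated
}

provider "consul" {
  address = hcp_consul_cluster.example.consul_public_endpoint_url
  token   = hcp_consul_cluster.example.consul_root_token_secret_id
}

resource "consul_keys" "entry" {
  key {
    path  = "app/config/example"
    value = "hello"
  }
}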

New or Affected Resource(s)

  • hcp_consul_cluster
  • hcp_vault_cluster

References

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

HCP Consul root token is not regenerated when Consul cluster is regenerated

Terraform Version and Provider Version

Terraform v1.0.1
on darwin_amd64
+ provider registry.terraform.io/hashicorp/aws v2.64.0
+ provider registry.terraform.io/hashicorp/hcp v0.7.0
+ provider registry.terraform.io/hashicorp/random v3.1.0
+ provider registry.terraform.io/hashicorp/tls v3.1.0

Affected Resource(s)

  • hcp_consul_cluster_root_token
  • hcp_consul_cluster

Terraform Configuration Files

resource "hcp_consul_cluster" "main_consul_cluster" {
  cluster_id         = var.cluster_id
  hvn_id             = hcp_hvn.main.hvn_id
  tier               = "development"
  min_consul_version = "1.10.0"
  public_endpoint    = true
}

resource "hcp_consul_cluster_root_token" "token" {
  cluster_id = hcp_consul_cluster.main_consul_cluster.cluster_id
}

Steps to Reproduce

  1. terraform apply
  2. Set public_endpoint to false
  3. terraform apply

Expected Behavior

I expected the hcp_consul_cluster_root_token to be regenerated along with the Consul cluster

Actual Behavior

I expected the hcp_consul_cluster_root_token to be regenerated, but only the hcp_consul_cluster was. This caused my root token to not be accepted by the new Consul cluster.
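
A possible workaround sketch (not from the original report), assuming Terraform >= 1.2, is to tie the root token's lifecycle to the cluster so it is replaced whenever the cluster is:

resource "hcp_consul_cluster_root_token" "token" {
  cluster_id = hcp_consul_cluster.main_consul_cluster.cluster_id

  lifecycle {
    # Replace the token whenever the cluster is replaced (its id changes).
    replace_triggered_by = [hcp_consul_cluster.main_consul_cluster.id]
  }
}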

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Allow for more streamlined vault provider integration

Description

Hey, cannot tell you how excited I am for this provider! I have a small QoL feature request that I think people will like.

Right now, the hcp_vault_cluster.vault_public_endpoint_url is mismatched with what a standard Vault provider would expect. Users need to do something like this

provider "vault" {
  address = join("", ["https://", hcp_vault_cluster.bkbn.vault_public_endpoint_url, ":8200"])
  token = hcp_vault_cluster_admin_token.bkbn.token
}

in order to point a Vault provider at an HCP Vault cluster.

Ideally, this should be streamlined to something like

 provider "vault" {
  address = hcp_vault_cluster.bkbn.vault_public_full_url // Not a fan of this name but you get the point 😃
  token = hcp_vault_cluster_admin_token.bkbn.token
}

New or Affected Resource(s)

  • hcp_0.6.0

References

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Unexpected "unable to fetch organization list"

A little bit of context here: I've been running the HCP provider since... 0.6? Anyway, I'm currently on 0.8 and everything has been fine and dandy, and out of the blue today I got hit with this error. No infra code has changed since my last deploy, so this is super odd.

Terraform Version and Provider Version

Terraform version: 1.0
HCP provider version: 0.8.0 and 0.10.0

Affected Resource(s)

provider / initialization

Terraform Configuration Files

provider "hcp" {
  client_id     = var.hcp_client_id
  client_secret = var.hcp_client_secret
}

Debug Output

Omitting unless really necessary

Panic Output

Steps to Reproduce

  1. terraform plan

Expected Behavior

Happy deploy that has been happening for months before now

Actual Behavior

Angry deploy that complains

╷
│ Error: unable to fetch organization list: &{0 []  } (*models.GrpcGatewayRuntimeError) is not supported by the TextConsumer, can be resolved by supporting TextUnmarshaler interface
│
│   with module.hcp.provider["registry.terraform.io/hashicorp/hcp"],
│   on modules/hcp/phase_3.tf line 19, in provider "hcp":
│   19: provider "hcp" {
│
╵

Important Factoids

References

  • #0000

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Crash when https://status.hashicorp.com/ is inaccessible

Terraform Version and Provider Version

$ terraform version
Terraform v1.2.2
on darwin_amd64
+ provider registry.terraform.io/hashicorp/hcp v0.30.0

Affected Resource(s)

Any

Terraform Configuration Files

Any, but for example...

terraform {
  required_providers {
    hcp = {
      source  = "hashicorp/hcp"
      version = "0.30.0"
    }
  }
}

provider "hcp" {
}

resource "hcp_hvn" "network" {
  hvn_id         = "hcp-hvn-sandbox"
  cloud_provider = "aws"
  region         = "eu-west-2"
  cidr_block     = "10.0.0.0/20"
}

Debug Output

https://gist.github.com/lucymhdavies/8311ccac2e413a7147d26af02434ef53

Steps to Reproduce

If https://status.hashicorp.com is down... or, more realistically, if the agent running terraform has no network access to https://status.hashicorp.com

The error occurs when running terraform plan

A simple way to simulate this, in environments where actually blocking the network access is not possible, is by modifying the agent's local /etc/hosts file. e.g.

127.0.0.1 status.hashicorp.com

Expected Behavior

Error message should warn that status.hashicorp.com was unavailable.

Whether it's desirable for Terraform to quit with an error here (as would be the case if the provider detects an HCP outage), or whether it's preferable to continue regardless, I'm not sure.

Actual Behavior

Provider crashes

The problem here is with the error handling in the isHCPOperational function, which merely logs errors connecting to status.hashicorp.com and then continues regardless:
https://github.com/hashicorp/terraform-provider-hcp/blob/v0.30.0/internal/provider/provider.go#L192-L223

hcp_vault_cluster - updates for plus tier

Description

Now that Plus tier clusters are available, there are some updates needed.

Based on my brief test, the hcp_vault_cluster resource already supports creation of Plus tier clusters; however, the docs should be updated to reflect this. The hcp_vault_cluster data source should also be updated.

Support for provisioning a replica cluster should be added.

New or Affected Resource(s)

  • hcp_vault_cluster (resource)
  • hcp_vault_cluster (data source)

Potential Terraform Configuration

resource "hcp_vault_cluster" "example" {
  cluster_id = "vault-cluster"
  hvn_id     = hcp_hvn.example.hvn_id
  tier       = "plus_small"
}

resource "hcp_vault_cluster" "example-replica" {
  cluster_id         = "vault-cluster-replica"
  primary_cluster_id = hcp_vault_cluster.example.id
  hvn_id             = hcp_hvn.replica.hvn_id
  tier               = "plus_small"
}

References

  • #0000

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

`error updating Vault cluster` when trying to increase tier size

Terraform Version and Provider Version

Terraform version: 1.3.0
HCP provider version: 0.43.0

Affected Resource(s)

  • hcp_vault_cluster

Terraform Configuration Files

resource "hcp_hvn" "omit" {
  hvn_id         = "omit"
  cloud_provider = "aws"
  region         = "us-east-1"
  cidr_block     = "omit"
}

resource "hcp_vault_cluster" "omit" {
  cluster_id      = "omit"
  hvn_id          = hcp_hvn.omit.hvn_id
  public_endpoint = true
  tier = "starter_small" // Changed this from empty (dev) to starter_small
}

resource "hcp_vault_cluster_admin_token" "omit" {
  cluster_id = hcp_vault_cluster.omit.cluster_id
}

Debug Output

╷
│ Error: error updating Vault cluster (omit): [PATCH /vault/2020-11-25/organizations/{cluster.location.organization_id}/projects/{cluster.location.project_id}/clusters/{cluster.id}][403] Update default &{Code:7 Details:[] Error: Message:}
│
│ with module.hcp.hcp_vault_cluster.omit,
│ on modules/hcp/hcp.tf line 8, in resource "hcp_vault_cluster" "omit":
│ 8: resource "hcp_vault_cluster" "omit" {
│
╵

Panic Output

Steps to Reproduce

  1. terraform apply

Expected Behavior

Actual Behavior

Important Factoids

References

  • #0000

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Add support for multiple organizations

Description

Please add support for users that are part of more than one organization.
The current provider does not appear to support specifying an org_id.

One would get an error during a terraform plan that outputs:

╷
│ Error: unable to get project from credentials: unexpected number of organizations: expected 1, actual: 2
│ 
│   with provider["registry.terraform.io/hashicorp/hcp"],
│   on main.tf line 18, in provider "hcp":
│   18: provider "hcp" {}
│ 

This means I cannot use the HCP provider at all.

New or Affected Resource(s)

  • All

Potential Terraform Configuration

provider "hcp" {
  client_id     = "service-principal-key-client-id"
  client_secret = "service-principal-key-client-secret"
  org_id        = "organization-id"
}
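
Separately, a hedged observation: configurations elsewhere in this tracker (e.g. with provider 0.71.0+) pass a project_id to the provider; whether that also resolves the multi-organization case is an assumption:

provider "hcp" {
  client_id     = "service-principal-key-client-id"
  client_secret = "service-principal-key-client-secret"
  project_id    = "project-id" # assumption: scoping to a project may avoid the ambiguous organization lookup
}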

References

  • #0000

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

hcp_vault_cluster - support for resizing cluster

Terraform Version and Provider Version

Terraform v1.1.3
on darwin_amd64
+ provider registry.terraform.io/hashicorp/hcp v0.22.0

Affected Resource(s)

  • hcp_vault_cluster

Terraform Configuration Files

first:

resource "hcp_vault_cluster" "example" {
  cluster_id = "vault-cluster"
  hvn_id     = hcp_hvn.example.hvn_id
  tier       = "standard_small"
}

then:

resource "hcp_vault_cluster" "example" {
  cluster_id = "vault-cluster"
  hvn_id     = hcp_hvn.example.hvn_id
  tier       = "standard_medium"
}

Debug Output

https://gist.github.com/assareh/df008a15b5ef883ebd7e953f41f94926

Panic Output

Steps to Reproduce

  1. Provision a standard cluster.
  2. Change the tier to another size of standard cluster and terraform apply

Expected Behavior

The cluster should have been resized. This operation is allowed on standard tier clusters when performed in the HCP console.

Actual Behavior

An error occurred.

Important Factoids

References

  • #0000

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

docs: `hcp_consul_cluster_root_token` and `hcp_consul_cluster` management tokens

Terraform Version and Provider Version

Terraform version: 1.3.5
HCP provider version: 0.49.0

Affected Resource(s)

  • hcp_consul_cluster
  • hcp_consul_cluster_root_token

Expected Behavior

Per hcp_consul_cluster_root_token resource documentation:

"Note that creation of this resource will invalidate the default consul_root_token_accessor_id and consul_root_token_secret_id on the target hcp_consul_cluster resource."

I am expecting the root token created by the hcp_consul_cluster resource to be invalidated/deleted after hcp_consul_cluster_root_token is created.

Actual Behavior

Following hcp_consul_cluster and hcp_consul_cluster_root_token resource creation, consul acl token list returns two management tokens.

Important Factoids

I bootstrapped the HCP Consul cluster yesterday and used the hcp_consul_cluster.example.consul_root_token_secret_id to configure the consul provider then perform additional config/setup tasks.

~24 hours later I added hcp_consul_cluster_root_token and ran another terraform apply.

If there is a clean-up or reconciliation job that runs in the background, I'm unclear when/if that will be triggered and what the qualification criteria is.

My intention is to manage the lifecycle of a root token independently from the hcp_consul_cluster resource. Leveraging the hcp_consul_cluster_root_token resource to decouple the root/bootstrap token from the Consul cluster lifecycle appears to be a clean way to accomplish that, but I want to ensure that I'm not leaving any management tokens lying around.

consul acl token list

AccessorID:       c00576dc-603d-ad10-6a7c-dc18c159336b
SecretID:         <redacted>
Partition:        default
Namespace:        default
Description:      
Local:            false
Create Time:      2022-11-21 19:59:36.785458507 +0000 UTC
Legacy:           false
Policies:
   00000000-0000-0000-0000-000000000001 - global-management

AccessorID:       480937ce-0a8d-53c2-d88c-5dc08baee335
SecretID:         <redacted>
Partition:        default
Namespace:        default
Description:      
Local:            false
Create Time:      2022-11-22 22:49:26.228086185 +0000 UTC
Legacy:           false
Policies:
   00000000-0000-0000-0000-000000000001 - global-management

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Allow setting a TTL for hcp_vault_cluster_admin_token resource

Description

If we use hcp_vault_cluster_admin_token in terraform plan, it generates a token with a 6h TTL. This token is not tracked in state, meaning every plan will create a token, which counts against the user license. We would like to be able to set a TTL for this token through the Terraform config, say 5 minutes, so these tokens die off quickly and do not count against our license.

New or Affected Resource(s)

  • hcp_XXXXX

Potential Terraform Configuration

resource "hcp_vault_cluster_admin_token" "primary" {
  count = var.tf_plan_flag == "false" ? 1 : 0
  cluster_id = hcp_vault_cluster.primary.cluster_id
  ttl = "5m" # <------- new config option here


  depends_on = [
    hcp_vault_cluster.primary
  ]
}

References

  • #0000

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

`hcp_packer_image` data source causing plugin crash while running `terraform plan`

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Overview of the Issue

After running terraform plan to get a speculative plan, the error that the plugin displays is provided below. I have integrated HCP and Packer with Terraform to fetch AMIs that I use to spin up my application. I tried moving back to an older and a newer version of the HCP plugin, but the error still persists.

Reproduction Steps

  1. Integrate HCP plugin within Terraform
  2. Use hcp_packer_image data block to fetch the AMI used to provision the application server.
  3. Run remote terraform plan, and the error is displayed in the TF Cloud logs for that plan.

HCP plugin version

From v0.61.0_x5

Log Fragments and crash.log files

The following error below is displayed on stage Plan in TF Cloud when trying to run a remote speculative plan.

Stack trace from the terraform-provider-hcp_v0.61.0_x5 plugin:

panic: interface conversion: error is *url.Error, not *packer_service.PackerServiceGetChannelDefault

goroutine 74 [running]:
github.com/hashicorp/terraform-provider-hcp/internal/clients.GetPackerChannelBySlug({0x1124088?, 0x6?}, 0xc00061e0e0, 0xc000397390, {0xc00048ee10, 0x9}, {0xc00048ee40, 0x6})
github.com/hashicorp/terraform-provider-hcp/internal/clients/packer.go:28 +0x1dc
github.com/hashicorp/terraform-provider-hcp/internal/provider.dataSourcePackerImageRead({0x12775c8, 0xc000696ba0}, 0x0?, {0xf5d2e0?, 0xc00061e0e0})
github.com/hashicorp/terraform-provider-hcp/internal/provider/data_source_packer_image.go:153 +0x5b0
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).read(0xc000247dc0, {0x1277600, 0xc00034ed50}, 0xd?, {0xf5d2e0, 0xc00061e0e0})
github.com/hashicorp/terraform-plugin-sdk/[email protected]/helper/schema/resource.go:724 +0x12e
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).ReadDataApply(0xc000247dc0, {0x1277600, 0xc00034ed50}, 0xc00017b100, {0xf5d2e0, 0xc00061e0e0})
github.com/hashicorp/terraform-plugin-sdk/[email protected]/helper/schema/resource.go:943 +0x145
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*GRPCProviderServer).ReadDataSource(0xc000298db0, {0x1277600?, 0xc00034ec30?}, 0xc0000a6dc0)
github.com/hashicorp/terraform-plugin-sdk/[email protected]/helper/schema/grpc_provider.go:1195 +0x38f
github.com/hashicorp/terraform-plugin-go/tfprotov5/tf5server.(*server).ReadDataSource(0xc0006a63c0, {0x1277600?, 0xc0006a5b00?}, 0xc000528aa0)
github.com/hashicorp/[email protected]/tfprotov5/tf5server/server.go:658 +0x3ef
github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/tfplugin5._Provider_ReadDataSource_Handler({0x1098480?, 0xc0006a63c0}, {0x1277600, 0xc0006a5b00}, 0xc0006043f0, 0x0)
github.com/hashicorp/[email protected]/tfprotov5/internal/tfplugin5/tfplugin5_grpc.pb.go:421 +0x170
google.golang.org/grpc.(*Server).processUnaryRPC(0xc0000001e0, {0x127a5a0, 0xc000103380}, 0xc0001ff8c0, 0xc000117a70, 0x1a87770, 0x0)
google.golang.org/[email protected]/server.go:1336 +0xd13
google.golang.org/grpc.(*Server).handleStream(0xc0000001e0, {0x127a5a0, 0xc000103380}, 0xc0001ff8c0, 0x0)
google.golang.org/[email protected]/server.go:1704 +0xa1b
google.golang.org/grpc.(*Server).serveStreams.func1.2()
google.golang.org/[email protected]/server.go:965 +0x98
created by google.golang.org/grpc.(*Server).serveStreams.func1
google.golang.org/[email protected]/server.go:963 +0x28a

Error: The terraform-provider-hcp_v0.61.0_x5 plugin crashed!

This is always indicative of a bug within the plugin. It would be immensely
helpful if you could report the crash with the plugin's maintainers so that it
can be fixed. The output above should help diagnose the issue.

Also, under the diagnostics section, the error is displayed with the following text:

Error: Plugin did not respond
with data.hcp_packer_image.advanced_ubuntu_ami
on launch_templates.tf line 81, in data "hcp_packer_image" "advanced_ubuntu_ami":
data "hcp_packer_image" "advanced_ubuntu_ami" {
The plugin encountered an error, and failed to respond to the plugin.(*GRPCProvider).ReadDataSource call. The plugin logs may contain more details.

Pin Consul version like Terraform Version Constraint model

Description

Please implement the Terraform version constraint syntax to support:
a) specifying a specific version: version = "1.11.5"
b) specifying a desired major version: version = "~> 1.11.5" (to allow 1.11.5 or higher, but not 1.12.x).

From what I can see, I can only specify a min_consul_version, but I cannot prevent a major version upgrade when, for example, the HCP platform changes the recommended_version from 1.11.5 to 1.12.x.

[min_consul_version](https://registry.terraform.io/providers/hashicorp/hcp/latest/docs/resources/consul_cluster#min_consul_version) (String) The minimum Consul version of the cluster. If not specified, it is defaulted to the version that is currently recommended by HCP.

New or Affected Resource(s)

  • hcp_consul_cluster

Potential Terraform Configuration

resource "hcp_hvn" "example" {
  hvn_id         = "hvn"
  cloud_provider = "aws"
  region         = "us-west-2"
  cidr_block     = "172.25.16.0/20"
}

resource "hcp_consul_cluster" "example" {
  cluster_id = "consul-cluster"
  hvn_id     = hcp_hvn.example.hvn_id
  tier       = "development"
  version    = "~> 1.11.5"
}

References

We already do this in Terraform: https://www.terraform.io/language/expressions/version-constraints

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Feature gap for Vault support

It seems this provider, especially for Vault, is really unmaintained at the moment.

You are releasing lots of features for Vault (metrics streaming, etc.) but none of them are implemented here.

How can this be addressed? It's frustrating.

documentation says only `aws` is available for `cloud_provider` input on the `hcp_hvn` resource

Documentation for hcp_hvn says only aws is available for the cloud_provider input, whereas the hcp provider already supports Azure as a valid input (even though this feature is still in beta on the HCP side).

Terraform Version and Provider Version

Terraform version:
HCP provider version: 0.3.2

Affected Resource(s)

  • hcp_hvn

Expected Behavior

Documentation should say:

cloud_provider: (String) The provider where the HVN is located. Only 'aws' or 'azure' are available at this time.

Actual Behavior

Documentation says:

cloud_provider: (String) The provider where the HVN is located. Only 'aws' is available at this time.

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

API Authentication Issue In Release 0.71.0

Terraform Version and Provider Version

Terraform version: 1.4.5
HCP provider version: 0.71.0

Affected Resource(s)

Terraform HCP Provider 0.71.0

Terraform Configuration Files

Redacted Client ID, Client Secret, and Project ID
Provider credentials are just for the proof of concept; they will be removed shortly, so please don't shout at me! 😄

## Terraform Project For Hashicorp Vault Infrastructure ##

## PROVIDERS ##

provider "aws" {
    region        = var.region
}
provider "hcp" {
    # Required to authenticate with the current vault #
    client_id     = "#CLIENT ID#"
    client_secret = "#CLIENT SECRET#"
    project_id    = "#PROJECT ID#"
}

## MODULES ##

# Vault Infrastructure #

module "networking" {
    source        = "../..//modules/hashicorp-vault/networking"
    cidr_block    = var.cidr_block
    environment   = var.environment
    service_name  = var.service_name
    region        = var.region
}

module "vault_cluster" {
    source        = "../../modules/hashicorp-vault/vault_cluster"
    environment   = var.environment
    service_name  = var.service_name

    depends_on = [
        module.networking.account_id
    ]
}

Debug Output

Panic Output

Steps to Reproduce

  1. terraform apply

Expected Behavior

No changes. Your infrastructure matches the configuration.

Terraform has compared your real infrastructure against your configuration and found no differences, so no changes are needed.

Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

Actual Behavior

Project ID redacted to protect live resources

│ Error: unable to fetch project "#PROJECT ID#": Get "https://api.cloud.hashicorp.com:443/resource-manager/2019-12-10/projects/#PROJECT ID#": oauth2: "access_denied" "Unauthorized"
│
│   with provider["registry.terraform.io/hashicorp/hcp"],
│   on main.tf line 8, in provider "hcp":
│    8: provider "hcp" {
│

Important Factoids

After the issue appeared:

  1. Tested by creating new service principal and attempting a terraform apply

  2. Double checked no environment variables were present on machine

  3. Authenticated with HCP via CURL to check the newly created service principal credentials worked correctly

  4. Pulled list of projects and resources using the newly created service principal credentials

  5. Reverted back to 0.69.0 and tested a terraform apply - the issue isn't present in the previously used version

References

  • #0000

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Manage HCP organization settings

Description

It would be great to be able to manage most if not all of the HCP settings with Terraform. At least the IAM and SSO configurations are missing at the moment.

For SSO, maybe something similar to the aws_acm_certificate_validation resource could be useful for the validation. But even data sources to get the SAML variables would be an improvement.

This of course assumes that the underlying API and SDK have the needed endpoints.

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

hcp_vault_secrets_app description parameter is listed as optional but resource creation fails if not provided

Terraform version: 1.6.2
HCP provider version: v0.74.1

Affected Resource(s)

resource hcp_vault_secrets_app

Terraform Configuration Files

terraform {
  cloud {
    organization = "PRIVATE"

    workspaces {
      name = "PRIVATE"
    }
  }
  required_providers {
    hcp = {
      source  = "hashicorp/hcp"
      version = ">=0.74.1"
    }
  }
}

provider "hcp" {
  project_id = "PRIVATE"
  client_id     = var.hcp_client_id
  client_secret = var.hcp_client_secret
}
variable "hcp_client_id" {
    type = string
}

variable "hcp_client_secret" {
    type = string
    sensitive = true
}
resource "hcp_vault_secrets_app" "demo" {
  app_name = "hcp-bug-demo"
}

Debug Output

│ Error: Provider produced inconsistent result after apply
│
│ When applying changes to hcp_vault_secrets_app.demo, provider
│ "provider["registry.terraform.io/hashicorp/hcp"]" produced an unexpected
│ new value: .description: was null, but now cty.StringVal("").
│
│ This is a bug in the provider, which should be reported in the provider's
│ own issue tracker

Steps to Reproduce

  1. terraform apply

Expected Behavior

hcp_vault_secrets_app resource should be created with blank description.

Actual Behavior

hcp_vault_secrets_app resource failed to be created.
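
A hedged, untested workaround sketch: explicitly setting an empty description may sidestep the null-vs-empty-string inconsistency until the provider normalizes the value:

resource "hcp_vault_secrets_app" "demo" {
  app_name    = "hcp-bug-demo"
  description = "" # explicit empty string instead of leaving the optional argument unset
}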

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

When try to apply hcp_vault_cluster got rpc error

Terraform Version and Provider Version

Terraform version: 1.2.6
HCP provider version: 0.37.0

Affected Resource(s)

  • hcp_vault_cluster

Terraform Configuration Files

resource "hcp_vault_cluster" "vaut_cluster" {
  cluster_id        = "test-cluster"
  hvn_id            = hcp_hvn.vault_network.hvn_id
  tier              = "dev"
  public_endpoint   = false
  lifecycle {
    prevent_destroy = false
  }
}

Debug Output

2022-07-28T15:32:48.524+0300 [ERROR] plugin.(*GRPCProvider).ApplyResourceChange: error="rpc error: code = Unavailable desc = transport is closing"
2022-07-28T15:32:48.524+0300 [ERROR] plugin.(*GRPCProvider).ApplyResourceChange: error="rpc error: code = Unavailable desc = transport is closing"
2022-07-28T15:32:48.524+0300 [ERROR] plugin.(*GRPCProvider).ApplyResourceChange: error="rpc error: code = Unavailable desc = transport is closing"
2022-07-28T15:32:48.524+0300 [ERROR] vertex "module.vault.hcp_aws_network_peering.vpc_peering_eks" error: Plugin did not respond
2022-07-28T15:32:48.524+0300 [DEBUG] provider.stdio: received EOF, stopping recv loop: err="rpc error: code = Unavailable desc = transport is closing"
2022-07-28T15:32:48.524+0300 [ERROR] vertex "module.vault.hcp_vault_cluster.vaut_cluster" error: Plugin did not respond
2022-07-28T15:32:48.524+0300 [ERROR] vertex "module.vault.hcp_aws_network_peering.vpc_peering_to_clientvpn" error: Plugin did not respond

Panic Output

Stack trace from the terraform-provider-hcp_v0.37.0_x5 plugin:

panic: interface conversion: interface {} is nil, not map[string]interface {}

goroutine 129 [running]:
github.com/hashicorp/terraform-provider-hcp/internal/provider.getObservabilityConfig(0x104e20d, 0xe, 0xc000128e00, 0xea8980, 0xc000590f40, 0x203001, 0x203000)
github.com/hashicorp/terraform-provider-hcp/internal/provider/resource_vault_cluster.go:824 +0x225
github.com/hashicorp/terraform-provider-hcp/internal/provider.resourceVaultClusterCreate(0x1195b08, 0xc0003c3e00, 0xc000128e00, 0xeea660, 0xc000142540, 0xc000590df0, 0x0, 0x0)
github.com/hashicorp/terraform-provider-hcp/internal/provider/resource_vault_cluster.go:271 +0x17a
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).create(0xc0004e00e0, 0x1195b40, 0xc000414f00, 0xc000128e00, 0xeea660, 0xc000142540, 0x0, 0x0, 0x0)
github.com/hashicorp/terraform-plugin-sdk/[email protected]/helper/schema/resource.go:707 +0x17f
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).Apply(0xc0004e00e0, 0x1195b40, 0xc000414f00, 0xc0004111e0, 0xc000128c80, 0xeea660, 0xc000142540, 0x0, 0x0, 0x0, ...)
github.com/hashicorp/terraform-plugin-sdk/[email protected]/helper/schema/resource.go:837 +0x79d
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*GRPCProviderServer).ApplyResourceChange(0xc00000c240, 0x1195b40, 0xc000414d50, 0xc000012aa0, 0x10521b2, 0x13, 0x1047484)
github.com/hashicorp/terraform-plugin-sdk/[email protected]/helper/schema/grpc_provider.go:1021 +0xb4f
github.com/hashicorp/terraform-plugin-go/tfprotov5/tf5server.(*server).ApplyResourceChange(0xc0002360a0, 0x1195b40, 0xc000414d50, 0xc0002d71f0, 0x0, 0x0, 0x0)
github.com/hashicorp/[email protected]/tfprotov5/tf5server/server.go:813 +0x70c
github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/tfplugin5._Provider_ApplyResourceChange_Handler(0xfffb00, 0xc0002360a0, 0x1195b40, 0xc000414450, 0xc0002d7180, 0x0, 0x1195b40, 0xc000414450, 0xc000418600, 0x5a3)
github.com/hashicorp/[email protected]/tfprotov5/internal/tfplugin5/tfplugin5_grpc.pb.go:385 +0x214
google.golang.org/grpc.(*Server).processUnaryRPC(0xc00021a380, 0x119e8f8, 0xc0000a0680, 0xc0001f9440, 0xc0004a5830, 0x17d4580, 0x0, 0x0, 0x0)
google.golang.org/[email protected]/server.go:1295 +0x693
google.golang.org/grpc.(*Server).handleStream(0xc00021a380, 0x119e8f8, 0xc0000a0680, 0xc0001f9440, 0x0)
google.golang.org/[email protected]/server.go:1636 +0xd0c
google.golang.org/grpc.(*Server).serveStreams.func1.2(0xc000036690, 0xc00021a380, 0x119e8f8, 0xc0000a0680, 0xc0001f9440)
google.golang.org/[email protected]/server.go:932 +0xab
created by google.golang.org/grpc.(*Server).serveStreams.func1
google.golang.org/[email protected]/server.go:930 +0x1fd

Error: The terraform-provider-hcp_v0.37.0_x5 plugin crashed!

Steps to Reproduce

  1. terraform apply

Waypoint (beta) resources

Description

New or Affected Resource(s)

  • Support for HCP Waypoint resources, i.e. projects, runners

Potential Terraform Configuration


References

  • #0000

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Terraform Provider Plugin Crashed

Terraform Version and Provider Version

Terraform version: 1.4.4
HCP provider version: 0.56.0

Affected Resource(s)

  • hcp_XXXXX

Terraform Configuration Files

# create ec2 instance
data "hcp_packer_iteration" "ubuntu-apache" {
  bucket_name = "ubuntu-apache-aws"
  channel = "latest"
}
data "hcp_packer_image" "ubuntu-apache-aws" {
  bucket_name     = "ubuntu-apache-aws"
  channel         = "latest"
  cloud_provider  = "aws"
  region          = "eu-west-3"
}
resource "random_integer" "num" {
  min = 0
  max = local.subnet_count - 1
}

resource "aws_instance" "web_host" {
  ami                    = data.hcp_packer_image.ubuntu-apache-aws.cloud_image_id
  instance_type          = var.instance_type
  vpc_security_group_ids = [aws_security_group.web-node.id]
  subnet_id              = aws_subnet.web_subnet[local.az_num].id

  tags = {
    
      Name        = "${local.resource_prefix}-ec2"
      Environment = var.environment
      ttl = "24h"
      owner = var.owner
      region = var.region
      purpose = "test something fashion"
      atemi = var.atemi
    }
  
  monitoring = true
  ebs_optimized = true
  root_block_device {
    encrypted = true
  }

  lifecycle {
      postcondition {
        condition = self.instance_type == "t3.large"
        error_message = "The selected instance type is too big for this usage"
      }

      postcondition {
        condition = self.tags["purpose"] == "test something fashion"
        error_message = "The selected instance type is too big for this usage"
      }
      postcondition {
        condition = self.ami == data.hcp_packer_image.ubuntu-apache-aws.cloud_image_id
        error_message = "you don't use the latest available image data.hcp_packer_image.worker.cloud_image_id"
      }
    }
}

Debug Output

Error: Plugin did not respond
with data.hcp_packer_iteration.ubuntu-apache
on main.tf line 13, in data "hcp_packer_iteration" "ubuntu-apache":
data "hcp_packer_iteration" "ubuntu-apache" {
The plugin encountered an error, and failed to respond to the plugin.(*GRPCProvider).ReadDataSource call. The plugin logs may contain more details.

Panic Output

https://gist.github.com/jpapazian2000/0052fd96b46333181b1af9fa5e929073

Steps to Reproduce

  • create an ubuntu ami
  • create an hcp packer bucket
  • create an iteration
  • use the above terraform code to provision the ami
    I note that if I revoke the iteration, then I get the above crash in terraform, while I do not get the crash when the iteration is not present

Expected Behavior

With the iteration revoked, TFC should have returned a message saying it cannot provision the AMI, as it is revoked.

Actual Behavior

crash

Important Factoids

References

  • #0000

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Documentation for the browser OIDC login support

Description

I just accidentally discovered that the provider automatically spins up a browser-based OIDC login flow to authenticate against HCP if the client_id and client_secret are not specified (as I forgot to set the env vars in a new module). That's great, as it avoids the need for shared secrets in some cases!

The feature seems to have landed in SDK v0.22.0 with hashicorp/hcp-sdk-go#112, and in v0.45.0 of the provider.

Assuming this is a supported feature, it would be good to update the provider documentation.
(Writing docs is hard for me, so I won't volunteer 🙈)
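
A minimal sketch of the behavior described above, assuming the credentials are simply omitted and no HCP_CLIENT_ID/HCP_CLIENT_SECRET environment variables are set:

# With no client_id/client_secret configured, provider versions >= 0.45.0 are
# reported to fall back to a browser-based OIDC login flow.
provider "hcp" {}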

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

HCP Consul CA is not usable with Consul provider

Terraform Version and Provider Version

0.15.3

Affected Resource(s)

hcp_consul_cluster

Terraform Configuration Files

terraform {
  required_version = ">= 0.15.3"
  required_providers {
    consul = {
      source  = "hashicorp/consul"
      version = "~> 2.12.0"
    }

    hcp = {
      source  = "hashicorp/hcp"
      version = "~> 0.6.0"
    }
  }
}


provider "consul" {
  address    = local.hashicorp_cloud.consul_url
  token      = local.hashicorp_cloud.consul_root_token_id
  ca_pem     = hcp_consul_cluster.consul.consul_ca_file
}

Expected Behavior

You should be able to pass the HCP consul_ca_file output into the Consul provider input.

Actual Behavior

You get the following error message:

│ Error: failed to create http client: Error appending CA: Couldn't parse PEM
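
A hedged workaround sketch, assuming consul_ca_file is base64-encoded PEM (which the parse error above suggests) and must be decoded before use:

provider "consul" {
  address = local.hashicorp_cloud.consul_url
  token   = local.hashicorp_cloud.consul_root_token_id
  # Assumption: consul_ca_file is base64-encoded PEM, so decode it first.
  ca_pem  = base64decode(hcp_consul_cluster.consul.consul_ca_file)
}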

API Authentication -> oauth2: cannot fetch token: 403 Forbidden

Terraform Version and Provider Version

Terraform version: v1.6.0
HCP provider version: v0.72.1

Affected Resource(s)

  • hcp_vault_secrets_secret
  • hcp_vault_secrets_app

Any vault_secrets resource, to be honest; I'm not sure if I am hitting some rate limiting that returns 403 instead of 429.

Output

│ Error: unable to fetch project "afaa2972-c2c7-445c-be4d-9f8b75b3e634": Get "https://api.cloud.hashicorp.com:443/resource-manager/2019-12-10/projects/afaa2972-c2c7-445c-be4d-9f8b75b3e634": oauth2: cannot fetch token: 403 Forbidden
│ Response: <!DOCTYPE html>
│ <html>
│ 
│ <head>
│     <meta charset="utf-8">
│     <meta name="viewport" content="width=device-width, initial-scale=1.0">
│     <link rel="stylesheet" href="https://fonts.googleapis.com/icon?family=Material+Icons">
│     <title>Forbidden</title>
│     <style>
│         @font-face {
│             font-family: "Stabil Grotesk";
╵
INFO[0101] Encountered an error eligible for retrying. Sleeping 15s before retrying. 
ERRO[0116] 1 error occurred:
        * Exhausted retries (5) for command terraform plan
 
ERRO[0116] Unable to determine underlying exit code, so Terragrunt will exit with error code 1 

Panic on interface conversion of OrganizationServiceListDefault

Terraform Version and Provider Version

Terraform version:
HCP provider version:

Affected Resource(s)

  • not sure

Terraform Configuration Files

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.43"
    }
    hcp = {
      source  = "hashicorp/hcp"
      version = ">= 0.36.0"
    }
  }

}

provider "hcp" {
}

Debug Output

Panic Output

# josh at josh-P6KR9LXL6P in ~/Desktop/test-nomad-consul/terraform-aws-hcp-consul/hcp-ui-templates/ec2 on git:main ✖︎ [15:45:22]
→ tf apply
╷
│ Error: Plugin did not respond
│ 
│   with provider["registry.terraform.io/hashicorp/hcp"],
│   on main.tf line 22, in provider "hcp":
│   22: provider "hcp" {
│ 
│ The plugin encountered an error, and failed to respond to the plugin.(*GRPCProvider).ConfigureProvider call. The plugin logs may contain more
│ details.
╵

Stack trace from the terraform-provider-hcp_v0.40.0_x5 plugin:

panic: interface conversion: error is *url.Error, not *organization_service.OrganizationServiceListDefault

goroutine 21 [running]:
github.com/hashicorp/terraform-provider-hcp/internal/clients.RetryOrganizationServiceList(0x140002a8540, 0x1400047a0c0, 0x0, 0x0, 0x0)
        github.com/hashicorp/terraform-provider-hcp/internal/clients/retry_request.go:34 +0x21c
github.com/hashicorp/terraform-provider-hcp/internal/provider.getProjectFromCredentials(0x1015939c0, 0x14000424f60, 0x140002a8540, 0x40, 0x0, 0x0)
        github.com/hashicorp/terraform-provider-hcp/internal/provider/provider.go:132 +0x50
github.com/hashicorp/terraform-provider-hcp/internal/provider.configure.func1(0x1015939c0, 0x14000424f60, 0x140006fe880, 0x0, 0x14000424ea0, 0x0, 0x0, 0x0)
        github.com/hashicorp/terraform-provider-hcp/internal/provider/provider.go:114 +0x2f0
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Provider).Configure(0x140001327e0, 0x1015939c0, 0x14000424f60, 0x14000424ea0, 0x10120c3cb, 0x12, 0x0)
        github.com/hashicorp/terraform-plugin-sdk/[email protected]/helper/schema/provider.go:297 +0x1d8
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*GRPCProviderServer).ConfigureProvider(0x1400012c198, 0x1015939c0, 0x14000424ae0, 0x1400000c408, 0x101204620, 0x9, 0x1012025af)
        github.com/hashicorp/terraform-plugin-sdk/[email protected]/helper/schema/grpc_provider.go:557 +0x3bc
github.com/hashicorp/terraform-plugin-go/tfprotov5/tf5server.(*server).Configure(0x140001e4000, 0x1015939c0, 0x14000424ae0, 0x14000540380, 0x0, 0x0, 0x0)
        github.com/hashicorp/[email protected]/tfprotov5/tf5server/server.go:556 +0x254
github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/tfplugin5._Provider_Configure_Handler(0x101520a60, 0x140001e4000, 0x1015939c0, 0x140004241b0, 0x1400049e000, 0x0, 0x1015939c0, 0x140004241b0, 0x14000558000, 0x27)
        github.com/hashicorp/[email protected]/tfprotov5/internal/tfplugin5/tfplugin5_grpc.pb.go:331 +0x1c8
google.golang.org/grpc.(*Server).processUnaryRPC(0x1400023e380, 0x10159ccb8, 0x140001224e0, 0x1400077a000, 0x14000417bf0, 0x101bc1478, 0x0, 0x0, 0x0)
        google.golang.org/[email protected]/server.go:1295 +0x500
google.golang.org/grpc.(*Server).handleStream(0x1400023e380, 0x10159ccb8, 0x140001224e0, 0x1400077a000, 0x0)
        google.golang.org/[email protected]/server.go:1636 +0xa50
google.golang.org/grpc.(*Server).serveStreams.func1.2(0x14000034450, 0x1400023e380, 0x10159ccb8, 0x140001224e0, 0x1400077a000)
        google.golang.org/[email protected]/server.go:932 +0x94
created by google.golang.org/grpc.(*Server).serveStreams.func1
        google.golang.org/[email protected]/server.go:930 +0x1f8

Error: The terraform-provider-hcp_v0.40.0_x5 plugin crashed!

This is always indicative of a bug within the plugin. It would be immensely
helpful if you could report the crash with the plugin's maintainers so that it
can be fixed. The output above should help diagnose the issue.

Steps to Reproduce

  1. export env vars for HCP_CLIENT_ID/HCP_CLIENT_SECRET
  2. terraform apply

Expected Behavior

Actual Behavior

Important Factoids

References

  • #0000

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

bug: revoked iterations causes panic

Terraform Version and Provider Version

$ terraform -v
Terraform v1.5.0
on linux_amd64
+ provider registry.terraform.io/hashicorp/aws v5.2.0
+ provider registry.terraform.io/hashicorp/hcp v0.60.0

Affected Resource(s)

  • hcp_packer_image data source

Terraform Configuration Files

provider "hcp" {}

provider "aws" {
  region = "us-east-1"
}

data "hcp_packer_image" "learn-packer-amazonlinux2" {
  bucket_name    = "learn-packer-amazonlinux2-child"
  channel        = "latest"
  cloud_provider = "aws"
  region         = "us-east-1"
}

output "hcp_packer_image" {
  value = data.hcp_packer_image.learn-packer-amazonlinux2
}

Debug Output

gist: terraform-hcp-packer-panic.txt

Panic Output

data.hcp_packer_image.learn-packer-amazonlinux2: Reading...

Planning failed. Terraform encountered an error while generating this plan.

╷
│ Error: Plugin did not respond
│
│   with data.hcp_packer_image.learn-packer-amazonlinux2,
│   on main.tf line 7, in data "hcp_packer_image" "learn-packer-amazonlinux2":
│    7: data "hcp_packer_image" "learn-packer-amazonlinux2" {
│
│ The plugin encountered an error, and failed to respond to the plugin.(*GRPCProvider).ReadDataSource call. The plugin logs may contain more details.
╵

Stack trace from the terraform-provider-hcp_v0.60.0_x5 plugin:

panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x20 pc=0xdec396]

goroutine 99 [running]:
github.com/hashicorp/terraform-provider-hcp/internal/provider.dataSourcePackerImageRead({0x1207608, 0xc000136b40}, 0x0?, {0xf06480?, 0xc0005a21a0})
        github.com/hashicorp/terraform-provider-hcp/internal/provider/data_source_packer_image.go:171 +0x5d6
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).read(0xc000267c00, {0x1207640, 0xc0005171d0}, 0xd?, {0xf06480, 0xc0005a21a0})
        github.com/hashicorp/terraform-plugin-sdk/[email protected]/helper/schema/resource.go:724 +0x12e
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).ReadDataApply(0xc000267c00, {0x1207640, 0xc0005171d0}, 0xc0005e9d00, {0xf06480, 0xc0005a21a0})
        github.com/hashicorp/terraform-plugin-sdk/[email protected]/helper/schema/resource.go:943 +0x145
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*GRPCProviderServer).ReadDataSource(0xc000280db0, {0x1207640?, 0xc0005170b0?}, 0xc00028aa40)
        github.com/hashicorp/terraform-plugin-sdk/[email protected]/helper/schema/grpc_provider.go:1195 +0x38f
github.com/hashicorp/terraform-plugin-go/tfprotov5/tf5server.(*server).ReadDataSource(0xc000599f40, {0x1207640?, 0xc000516720?}, 0xc0005c9090)
        github.com/hashicorp/[email protected]/tfprotov5/tf5server/server.go:658 +0x3ef
github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/tfplugin5._Provider_ReadDataSource_Handler({0x1031540?, 0xc000599f40}, {0x1207640, 0xc000516720}, 0xc0001e68c0, 0x0)
        github.com/hashicorp/[email protected]/tfprotov5/internal/tfplugin5/tfplugin5_grpc.pb.go:421 +0x170
google.golang.org/grpc.(*Server).processUnaryRPC(0xc0000001e0, {0x120a5e0, 0xc0005829c0}, 0xc000026d80, 0xc0005956e0, 0x19b2610, 0x0)
        google.golang.org/[email protected]/server.go:1336 +0xd13
google.golang.org/grpc.(*Server).handleStream(0xc0000001e0, {0x120a5e0, 0xc0005829c0}, 0xc000026d80, 0x0)
        google.golang.org/[email protected]/server.go:1704 +0xa1b
google.golang.org/grpc.(*Server).serveStreams.func1.2()
        google.golang.org/[email protected]/server.go:965 +0x98
created by google.golang.org/grpc.(*Server).serveStreams.func1
        google.golang.org/[email protected]/server.go:963 +0x28a

Error: The terraform-provider-hcp_v0.60.0_x5 plugin crashed!

This is always indicative of a bug within the plugin. It would be immensely
helpful if you could report the crash with the plugin's maintainers so that it
can be fixed. The output above should help diagnose the issue.

Steps to Reproduce

  1. Create two versions of an image in HCP Packer. Revoke both of them.
  2. Reference the bucket name that contains both revoked iterations.
  3. terraform plan

Expected Behavior

Would expect the plan to succeed, as nothing actually prevents a revoked iteration from being used.

Actual Behavior

Panics.

Important Factoids

If I un-revoke one of the iterations, it works without issue, which makes me believe the revocation is directly related.


References

  • May be related: #496
  • May be related: #493

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Enable admin app role to prevent unnecessary increases in client count

Description

Currently (unless I am misunderstanding something, which is totally possible), when a user spins up a Vault cluster in HCP, they need to issue an hcp_vault_cluster_admin_token in order to connect to their Vault via the Terraform provider.

However, this admin token (not being associated with an entity) leads to an ever-increasing client count, since a new token is issued on each refresh (every 6 hours, I believe). Since Vault billing goes up with client count, this feels like a rather unfortunate situation to put users in.

It seems like there should be a pure IaC approach to providing users with some kind of "admin" identity out of the gate to prevent unnecessary client billing.

New or Affected Resource(s)

  • hcp_0.6.0

Potential Terraform Configuration

As a total spitball, something like this would allow Terraform to maintain an administrative role over the Vault cluster without increasing the client count on plan/apply/etc. when the admin token has expired.

resource "hcp_vault_cluster" "bkbn" {
  cluster_id = "bkbn"
  hvn_id = hcp_hvn.bkbn.hvn_id
  public_endpoint = true
  enable_admin_app_role = true // this line 
}

// At which point something like this could be used to authenticate to the vault provider
resource "hcp_vault_cluster_admin_role" "admin" {
  cluster_id = hcp_vault_cluster.example_vault_cluster.cluster_id
}
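
Continuing the spitball (everything below is hypothetical; neither the hcp_vault_cluster_admin_role resource nor its role_id/secret_id attributes exist today), the Vault provider could then authenticate with AppRole instead of a short-lived admin token:

provider "vault" {
  address   = hcp_vault_cluster.bkbn.vault_public_endpoint_url
  namespace = "admin"

  auth_login {
    path = "auth/approle/login"
    parameters = {
      # Hypothetical attributes of the proposed resource above.
      role_id   = hcp_vault_cluster_admin_role.admin.role_id
      secret_id = hcp_vault_cluster_admin_role.admin.secret_id
    }
  }
}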

References

N/A

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

hcp_vault_cluster_admin_token issues with app.terraform.io remote backend

Terraform Version and Provider Version

Terraform version: 0.14 or 0.15
HCP provider version: 0.6.0
Terraform Cloud

Affected Resource(s)

  • hcp_vault_cluster_admin_token

Terraform Configuration Files

Any build using the Terraform remote backend. (The HVN and Vault cluster are newly created, with no configuration other than being made public.)

data "hcp_vault_cluster" "main" {
  cluster_id = var.hvn.vault_id
}
resource "hcp_vault_cluster_admin_token" "main" {
  cluster_id = var.hvn.vault_id
}
provider "vault" {
  address   = "https://${data.hcp_vault_cluster.main.vault_public_endpoint_url}:8200"
  token     = hcp_vault_cluster_admin_token.main.token
  namespace = "admin"
}
resource "vault_mount" "db" {
  path = "database"
  type = "database"
}
 backend "remote" {
    hostname     = "app.terraform.io"
    organization = "dochub"

    workspaces {
      name = "dochub-testing"
    }
  }

Debug Output

Error: no vault token found

  on ../modules/hvn/providers.tf line 14, in provider "vault":
  14: provider "vault" {
Steps to Reproduce

  1. terraform apply

Expected Behavior

Any behaviour resulting in execution plan creation

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the
following symbols:
  + create
 <= read (data resources)

Terraform will perform the following actions:

Important Factoids

This is only an issue with the remote backend. Local state and an S3 backend both work as desired when HCP_CLIENT_ID and HCP_CLIENT_SECRET are present.
Also note that the hcp_vault_cluster data source does return the vault_public_endpoint_url.
Manually adding the environment variable VAULT_TOKEN on Terraform Cloud does fix the issue, but doesn't scale.
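
One way to automate that manual workaround (a sketch only, using the hashicorp/tfe provider; the workspace and organization names are placeholders taken from the backend block above) is to push the token into the workspace as a sensitive environment variable:

data "tfe_workspace" "this" {
  name         = "dochub-testing"
  organization = "dochub"
}

resource "tfe_variable" "vault_token" {
  key          = "VAULT_TOKEN"
  value        = hcp_vault_cluster_admin_token.main.token
  category     = "env"
  sensitive    = true
  workspace_id = data.tfe_workspace.this.id
}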

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Peering Data Sources prevent Destroys if in a FAILED State

Terraform Version and Provider Version

โ†’ terraform --version
Terraform v1.0.11

HCP provider version: latest

Affected Resource(s)

Terraform Configuration Files

# Copy-paste your Terraform configurations here - for large Terraform configs,
# please use a service like Dropbox and share a link to the ZIP file. For
# security, you can also encrypt the files using our GPG public key: https://keybase.io/hashicorp

Debug Output

TestE2E_LongLivedAzure 2022-09-13T17:06:24Z logger.go:66: Error: error waiting for peering connection (azure-e2e-c38853c6-peer) to become 'ACTIVE': unexpected state 'FAILED', wanted target 'ACTIVE'. last error: %!s(<nil>)
TestE2E_LongLivedAzure 2022-09-13T17:06:24Z logger.go:66: 
TestE2E_LongLivedAzure 2022-09-13T17:06:24Z logger.go:66:   with module.hcp_peering.data.hcp_azure_peering_connection.peering,
TestE2E_LongLivedAzure 2022-09-13T17:06:24Z logger.go:66:   on .terraform/modules/hcp_peering/main.tf line 117, in data "hcp_azure_peering_connection" "peering":
TestE2E_LongLivedAzure 2022-09-13T17:06:24Z logger.go:66:  117: data "hcp_azure_peering_connection" "peering" {
TestE2E_LongLivedAzure 2022-09-13T17:06:24Z logger.go:66: 
TestE2E_LongLivedAzure 2022-09-13T17:06:24Z retry.go:99: Returning due to fatal error: FatalError{Underlying: error while running command: exit status 1; 
Error: error waiting for peering connection (azure-e2e-c38853c6-peer) to become 'ACTIVE': unexpected state 'FAILED', wanted target 'ACTIVE'. last error: %!s(<nil>)

  with module.hcp_peering.data.hcp_azure_peering_connection.peering,
  on .terraform/modules/hcp_peering/main.tf line 117, in data "hcp_azure_peering_connection" "peering":
 117: data "hcp_azure_peering_connection" "peering" {
}
    longlived.go:119: Client destroy failed: FatalError{Underlying: error while running command: exit status 1; 
        Error: error waiting for peering connection (azure-e2e-c38853c6-peer) to become 'ACTIVE': unexpected state 'FAILED', wanted target 'ACTIVE'. last error: %!s(<nil>)
        

Panic Output

Steps to Reproduce

  1. have a Peering connection fail to Create
  2. run terraform delete

Expected Behavior

The delete proceeds without failure

Actual Behavior

The delete fails because the peering is in an unexpected state: FAILED. This happens here:

if waitForActive && peering.State != networkmodels.HashicorpCloudNetwork20200907PeeringStateACTIVE {

And also here:

if waitForActive && peering.State != networkmodels.HashicorpCloudNetwork20200907PeeringStateACTIVE {

Important Factoids

I believe the fix for this will be similar to the fix for other resources with FAILED states: #331. We can persist peering.State to the Terraform state, and then also add the FAILED peering state as an acceptable terminal state wherever we call WaitForPeeringToBeActive:

// WaitForPeeringToBeActive will poll the GET peering endpoint until the state is ACTIVE, ctx is canceled, or an error occurs.
var WaitForPeeringToBeActive = waitForPeeringToBe(peeringState{
	Target:  PeeringStateActive,
	Pending: []string{PeeringStateCreating, PeeringStatePendingAcceptance, PeeringStateAccepted},
})

For example, here is where we're going to error out every time:

peering, err = clients.WaitForPeeringToBeActive(ctx, client, peering.ID, hvnLink.ID, loc, peeringCreateTimeout)

References

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

data sources `hcp_packer_{image,iteration}` should have a flag to error if latest image is revoked

Description

There should be a virtual attribute to cause an error from the data source if an image has revoked != nil.

There could also be another virtual attribute to find the latest iteration that is not revoked, which is probably more useful.

New or Affected Resource(s)

  • hcp_packer_iteration
  • hcp_packer_image

Potential Terraform Configuration

data "hcp_packer_image" "ubuntu_us_east_2" {
  bucket_name    = "learn-packer-ubuntu"
  cloud_provider = "aws"
  iteration_id   = var.iteration_id
  region         = "us-east-2"

  reject_revoked_latest = true
}

If y'all are open to this idea, I'm happy to work on a PR.

Failed HVN resource is not deleted on terraform destroy

Terraform Version and Provider Version

Terraform version: 1.0.0
HCP provider version: 0.7.0

Affected Resource(s)

  • hcp_hvn
  • Also probably others...

Terraform Configuration Files

terraform {
  required_providers {
    hcp = {
      source  = "hashicorp/hcp"
      version = "~> 0.7.0"
    }
  }
}

resource "hcp_hvn" "default" {
  hvn_id         = var.hvn_id
  cloud_provider = var.cloud_provider
  region         = var.region
  cidr_block     = var.hvn_cidr_block
}

data "hcp_consul_versions" "versions" {}

resource "hcp_consul_cluster" "default" {
  cluster_id         = var.cluster_id
  min_consul_version = var.consul_version == "recommended" ? data.hcp_consul_versions.versions.recommended : var.consul_version
  hvn_id             = hcp_hvn.default.hvn_id
  tier               = var.tier
  public_endpoint    = var.public_endpoint
}

Steps to Reproduce

  1. terraform apply
  2. HVN creation fails, example error:
Error: unable to create HVN (cloud-e2e-bbd97b5-70959): create HVN operation (94869ad2-7594-4f49-8ee6-181a6ae4d0a5) failed [code=13, message=terraform apply failed: error applying Terraform: failed to apply Terraform config: error reading Route for Route Table (rtb-00a99602827a4ba82) with destination (0.0.0.0/0): couldn't find resource]
│ 
│   with module.hcp_e2e_consul_service.hcp_hvn.default,
│   on ../../../terraform/modules/hcp_public/hcp/hcp.tf line 10, in resource "hcp_hvn" "default":
│   10: resource "hcp_hvn" "default" {
│ 
╵

Exited with code exit status 1
  3. terraform destroy

Expected Behavior

During the terraform destroy, I expected to see my FAILED HVN resource deleted.

Actual Behavior

2021-11-08T21:54:33Z module.hcp_e2e_consul_service.hcp_hvn.default: Refreshing state... [id=/project/xxxxx/hashicorp.network.hvn/xxxxxx]
2021-11-08T21:54:35Z 
2021-11-08T21:54:35Z Note: Objects have changed outside of Terraform
2021-11-08T21:54:35Z 
2021-11-08T21:54:35Z Terraform detected the following changes made outside of Terraform since the
2021-11-08T21:54:35Z last "terraform apply":
2021-11-08T21:54:35Z 
2021-11-08T21:54:35Z   # module.hcp_e2e_consul_service.hcp_hvn.default has been deleted
2021-11-08T21:54:35Z   - resource "hcp_hvn" "default" {
2021-11-08T21:54:35Z       - cidr_block     = "10.221.0.0/16" -> null
2021-11-08T21:54:35Z       - cloud_provider = "aws" -> null
2021-11-08T21:54:35Z       - hvn_id         = "xxxxxx" -> null
2021-11-08T21:54:35Z       - id             = "/project/xxxxxxxx/hashicorp.network.hvn/xxxxxxx" -> null
2021-11-08T21:54:35Z       - region         = "*********" -> null
2021-11-08T21:54:35Z     }
2021-11-08T21:54:35Z 
2021-11-08T21:54:35Z Unless you have made equivalent changes to your configuration, or ignored the
2021-11-08T21:54:35Z relevant attributes using ignore_changes, the following plan may include
2021-11-08T21:54:35Z actions to undo or respond to these changes.
2021-11-08T21:54:35Z 
2021-11-08T21:54:35Z ───────────────────────────────────────────────────────────────────────────
2021-11-08T21:54:35Z 
2021-11-08T21:54:35Z Changes to Outputs:
2021-11-08T21:54:35Z   - hvn_id = "xxxxxxxxxxxxxx" -> null
2021-11-08T21:54:35Z 
2021-11-08T21:54:35Z You can apply this plan to save these new output values to the Terraform
2021-11-08T21:54:35Z state, without changing any real infrastructure.
2021-11-08T21:54:35Z 
2021-11-08T21:54:35Z Destroy complete! Resources: 0 destroyed.
2021-11-08T21:54:35Z 

This is a problem because it leaves FAILED HVN resources lying around within the HCP organization, eventually hitting a quota limit and requiring manual deletion.

My theory is that it is this bit of code which causes the issue:

// The HVN failed to provision properly so we want to let the user know and remove it from
// state
if hvn.State == networkmodels.HashicorpCloudNetwork20200907NetworkStateFAILED {
	log.Printf("[WARN] HVN (%s) failed to provision, removing from state", hvnID)
	d.SetId("")
	return nil
}

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Documentation - hcp_consul_cluster

Terraform Version and Provider Version

Terraform version: 1.1.9
HCP provider version: 0.24.1

Affected Resource(s)

  • hcp_consul_cluster

Problem Statement

When attempting to configure a Consul agent, I started digging through Terraform to see if we were able to get the same configuration bundle that we publish in the portal.

In the portal it is labeled as a Client Configuration File

If we look at the provider documentation, we have two entries, one for consul_config_file and one for consul_ca_file, which are objects under the resource/data source.

These files are what make up this Client Configuration File, but they are labeled as the cluster config file. This is confusing since, at first glance, it appears they are the configuration for the Consul servers and not something that can be used on clients.

Suggestion

Add additional detail to the provider documentation that aligns the naming more closely with the HCP portal, or add a note indicating that these files are to be used on clients within the Consul cluster (for example, with a snippet like the sketch below).
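
For example (a sketch only; resource names are illustrative, and it assumes both attributes are base64-encoded as the provider documentation states), the docs could show how to materialize these values for a client agent:

resource "local_file" "consul_client_config" {
  content  = base64decode(hcp_consul_cluster.example.consul_config_file)
  filename = "${path.module}/client_config.json"
}

resource "local_file" "consul_client_ca" {
  content  = base64decode(hcp_consul_cluster.example.consul_ca_file)
  filename = "${path.module}/ca.pem"
}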

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

HCP Consul Cluster failed to launch

Error: unable to create Consul cluster (): create Consul cluster operation () failed [code=13, message=failed to deploy consul cluster: failed to provision Consul cluster: rpc error: code = Internal desc = error applying Terraform: Provider produced inconsistent result after apply When applying changes to aws_iam_role.doormat_ssh, provider "registry.terraform.io/-/aws" produced an unexpected new value for was present, but now absent. This is a bug in the provider, which should be reported in the provider's own issue tracker.]


Unable to import the resource into the project:

$ terraform import module.tutorial_infrastructure.module.resources.hcp_consul_cluster.server 
module.tutorial_infrastructure.module.resources.hcp_consul_cluster.server: Importing from ID ""...
module.tutorial_infrastructure.module.resources.hcp_consul_cluster.server: Import prepared!
 Prepared hcp_consul_cluster for import
module.tutorial_infrastructure.module.resources.hcp_consul_cluster.server: Refreshing state... [id=//hashicorp.consul.cluster/]
╷
│ Error: Cannot import non-existent remote object
│ 
│ While attempting to import an existing object to "module.tutorial_infrastructure.module.resources.hcp_consul_cluster.server", the provider detected that no object exists with the given id. Only pre-existing objects can be imported; check that the id is correct and that it
│ is associated with the provider's configured region or endpoint, or use "terraform apply" to create a new remote object for this resource.
╵

Provide actionable error when Packer fails to connect to inactive HCP Packer registry

Description

Currently the error messaging for querying an image from an inactive HCP Packer registry displays the error directly from the service.

Instead of just pushing the API error to the user, who may not know that a registry has become inactive, the provider should provide a contextual error message explaining why the provider is failing.

Ideally the error message should call out that the registry is no longer in an active state, and provide information on what action they should take next to help resolve the issue.

The steps can be something like:

Please contact your HCP Packer registry owner or support.hashicorp.com for more information on why the registry <project-id>, for <org-id> has been deemed inactive.

For reference, the current output looks like this:

~>  terraform apply
╷
│ Error: [GET /packer/2021-04-30/organizations/{location.organization_id}/projects/{location.project_id}/images/{bucket_slug}/channels/{slug}][400] PackerService_GetChannel default  &{Code:9 Details:[] Error:Registry not activated. Please make sure the organization has valid billing information. The registry will be reactivated shortly after the billing account is valid again. For questions visit https://support.hashicorp.com/ Message:Registry not activated. Please make sure the organization has valid billing information. The registry will be reactivated shortly after the billing account is valid again. For questions visit https://support.hashicorp.com/}
│
│   with data.hcp_packer_iteration.example,
│   on main.tf line 14, in data "hcp_packer_iteration" "example":
│   14: data "hcp_packer_iteration" "example" {
│

New or Affected Resource(s)

  • hcp_packer_image
  • hcp_packer_iteration
  • hcp_packer_image_iteration

Potential Terraform Configuration

terraform {
  required_providers {
    hcp = {
      source = "hashicorp/hcp"
      version = "0.20.0"
    }
  }
}

provider "hcp" {
  # Configuration options
}

data "hcp_packer_iteration" "example" {
  bucket_name = "simple-artifact"
  channel     = "development"
}

output "results" {
 value = data.hcp_packer_iteration.example.id
}

References

Sample error messaging taken from Packer

~>  HCP_PACKER_BUILD_FINGERPRINT=688934r5t67676767b packer build source.pkr.hcl
Error: HCP Packer Registry initialization failure

The HCP Packer registry for the project "d0c23550-b27d-4aa1-a024-71fa03b9ebbf"
within the organization "17ffae46-0ccb-4ef2-ac91-9827d5d5729e" failed with the
following error: Registry not activated. Please make sure the organization has
valid billing information. The registry will be reactivated shortly after the
billing account is valid again. 
For questions visit https://support.hashicorp.com/

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

update hcp_hvn resource documentation

Description

The hcp_hvn resource page currently mentions the below:

  • If the CIDR block values for your HVN and VPCs overlap, then you will not be able to establish a connection. Ensure that any VPCs you plan to connect do not have overlapping values.

  • The default HVN CIDR block value does not overlap with the default CIDR block value for AWS VPCs (172.31.0.0/16). However, if you are planning to use this HVN in production, we recommend adding a custom value instead of using the default.

Should it be updated to include/mention Azure VNets, or to make the verbiage a bit more generic, like "your cloud provider network" or something of the sort, so it covers AWS, Azure, and future cloud providers too?
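
For example, a doc snippet along these lines (hypothetical values) would read the same for either cloud:

resource "hcp_hvn" "example" {
  hvn_id         = "example-hvn"
  cloud_provider = "azure" # or "aws"
  region         = "eastus"
  # Custom CIDR block chosen so it does not overlap the CIDR of the
  # VPC or VNet you plan to peer with.
  cidr_block     = "172.25.16.0/20"
}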

New or Affected Resource(s)

  • hcp_hvn

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment
