confluentinc / terraform-provider-confluentcloud

Confluent Cloud Terraform Provider is deprecated in favor of Confluent Terraform Provider

Home Page: https://registry.terraform.io/providers/confluentinc/confluentcloud/latest/docs

HCL 0.34% Makefile 1.75% Go 97.24% Shell 0.67%
terraform-provider terraform confluent-kafka confluent-cloud

terraform-provider-confluentcloud's Introduction

Documentation

Full documentation is available on the Terraform website.

License

Copyright 2021 Confluent Inc.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

terraform-provider-confluentcloud's People

Contributors

codyaray, confluentsemaphore, confluentspencer, linouk23, lyoung-confluent, taihengjin


terraform-provider-confluentcloud's Issues

Add Kafka API key as a resource

I am missing the ability to create an API key for a cluster in an environment. We are migrating from the unofficial provider: Mongey/terraform-provider-confluentcloud.

What I expect as input:

resource "confluentcloud_api_key" "api_key" {
  cluster_id     = confluentcloud_kafka_cluster.cluster.id
  environment_id = confluentcloud_environment.confluent_environment.id
  description    = "my API KEY for the ${confluentcloud_environment.confluent_environment.display_name} environment"
}

That would output the key and secret as input for topics:

resource "confluentcloud_kafka_topic" "orders" {
  kafka_cluster      = confluentcloud_kafka_cluster.basic-cluster.id
   ...
  credentials {
    key    = confluentcloud_api_key.api_key.key
    secret = confluentcloud_api_key.api_key.secret
  }
}

v0.4.0/v0.5.0 - Unable to create ACL after creating topic (401 error) - basic cluster

  1. Create the service accounts using the Cloud Keys. - WORKS AS EXPECTED
  2. Create a topic with the cluster key. - WORKS AS EXPECTED

However, when I try to add an ACL to the topic for the created service account, using the same cluster key, I get the error below:

401 Unauthorized
│ 
│   with confluentcloud_kafka_acl.mynamespace-myapp-sample-private-producer,
│   on topic-sample.tf line 23, in resource "confluentcloud_kafka_acl" "mynamespace-myapp-sample-private-producer":
│   23: resource "confluentcloud_kafka_acl" "mynamespace-myapp-sample-private-producer" {

Below is the setup

resource "confluentcloud_kafka_topic" "mynamespace-myapp-sample-private" {
  kafka_cluster    = var.azure_sandbox_cluster_id
  topic_name       = var.mynamespace-myapp-sample-private_topic
  partitions_count = 3
  http_endpoint    = var.azure_sandbox_http_endpoint
  config = {
    "cleanup.policy"      = var.topic_delete_cleanup_policy,
    "max.message.bytes"   = var.topic_max_message_size_bytes,
    "retention.ms"        = var.topic_retention_time_day_ms,
    "min.insync.replicas" = var.topic_min_insync_replicas
  }

  credentials {
    key    = var.cluster_api_key
    secret = var.cluster_api_secret
  }
}


## Producers
#  --------------------------------------------------------------
# ACL (WRITE) for Producer
resource "confluentcloud_kafka_acl" "mynamespace-myapp-sample-private-producer" {
  kafka_cluster = var.azure_sandbox_cluster_id
  resource_type = "TOPIC"
  resource_name = confluentcloud_kafka_topic.mynamespace-myapp-sample-private.topic_name
  pattern_type  = "LITERAL"
  principal     = "User:${var.mynamespace-myproducerapp-sa_id}"
  host          = "*"
  operation     = "WRITE"
  permission    = "ALLOW"
  http_endpoint = var.azure_sandbox_http_endpoint

  credentials {
    key    = var.cluster_api_key
    secret = var.cluster_api_secret
  }
}

What is causing the "401" error during ACL creation when the cluster is the same and the topic was provisioned using the same cluster keys?

Much appreciated

Ability to get environments by name instead of just id.

It would be extremely helpful to be able to look up the id of an environment or cluster given its name. I could see this being done either via the existing data sources or by adding plural forms of the existing data sources that return lists which can then be filtered on the caller side. An example of the latter in another provider is the aws_eks_cluster and aws_eks_clusters data sources in the aws provider.
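A sketch of what the plural form might look like; the confluentcloud_environments data source and its environments attribute are hypothetical here, mirroring the aws_eks_clusters pattern:

# Hypothetical plural data source returning all environments.
data "confluentcloud_environments" "all" {}

locals {
  # Filter on the caller side; one() fails if the name is missing or ambiguous.
  staging_env_id = one([
    for env in data.confluentcloud_environments.all.environments : env.id
    if env.display_name == "staging"
  ])
}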

API Key Resource: Automatically creating a cluster and topics

It would be super useful to be able to give a service account the rights to access a cluster before creating it.

I have created a service account and am able to create a cluster with its access key. But, as far as I understand, in order to access the newly created cluster and create topics in it, I have to log in to Confluent Cloud, create new access keys for that specific cluster, and use those to create topics.

Ideally this would be possible using the global access keys, so that you could automatically create a new cluster, create all kinds of topics and configurations there, run some extensive tests, and delete the cluster afterwards.

Following the sample docs causes: Error: Could not load plugin

➜ confluentcloud git:(main) ✗ terraform plan -parallelism=1 -out=tfplan_add_sa_env_and_cluster

│ Error: Could not load plugin
│
│
│ Plugin reinitialization required. Please run "terraform init".
│
│ Plugins are external binaries that Terraform uses to access and manipulate
│ resources. The configuration provided requires plugins which can't be located,
│ don't satisfy the version constraints, or are otherwise incompatible.
│
│ Terraform automatically discovers provider requirements from your
│ configuration, including providers used in child modules. To see the
│ requirements and constraints, run "terraform providers".
│
│ failed to instantiate provider "registry.terraform.io/confluentinc/confluentcloud" to obtain schema: unknown provider
│ "registry.terraform.io/confluentinc/confluentcloud"

[UX] - Adding details to errors

I know right now there is the 403 for having the wrong API (lack of access to the EA program for terraform), but I also had an issue with the following error:

Error: 402 Payment Required

I wasn't given the proper context for the issue until I ran my apply with debug and found the following:

2021-12-15T14:21:51.690-0800 [INFO]  Starting apply for module.confluentcloud_kafka_dev.confluentcloud_environment.default
2021-12-15T14:21:51.690-0800 [DEBUG] module.confluentcloud_kafka_dev.confluentcloud_environment.default: applying the planned Create change
2021-12-15T14:21:52.018-0800 [INFO]  provider.terraform-provider-confluentcloud_0.2.0: 2021/12/15 14:21:52 [ERROR] Environment create failed &{<nil> <nil> <nil> <nil> 0xc000174b30}, &{402 Payment Required 402 HTTP/1.1 1 1 map[Access-Control-Allow-Credentials:[true] Access-Control-Allow-Headers:[Authorization,Accept,Origin,DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Content-Range,Range] Access-Control-Allow-Methods:[GET,POST,OPTIONS,PUT,DELETE,PATCH] Connection:[keep-alive] Content-Length:[232] Content-Type:[application/json] Date:[Wed, 15 Dec 2021 22:21:51 GMT] Server:[nginx] Strict-Transport-Security:[max-age=31536000; includeSubDomains; preload] X-Content-Type-Options:[nosniff] X-Frame-Options:[deny] X-Request-Id:[******************************] X-Xss-Protection:[1; mode=block]] {{
  "errors": [
    {
      "id": "**********************",
      "status": "402",
      "code": "quota_exceeded",
      "detail": "Your organization is currently limited to 50 environments",
      "source": {}
    }
  ]
}} 232 [] false false map[] *************************}, 402 Payment Required: timestamp=2021-12-15T14:21:52.018-0800

If possible, I think it would be helpful to surface that detail line to users when there is an error, and possibly even to add details around that 403 for users who skip the docs and try to use the provider with the wrong API.

Hope this helps with the development of the provider 😄 Feel free to close this if it's not useful.

Id check fails for old environments

Because of this check, we cannot use the Terraform module with our existing environments. Old environment IDs do not start with the "env-" prefix. Is there any special reason for that check? Can it be relaxed to make the provider usable with older environments?

Attempting to create topics in a foreach within a resource

Hey, I'm using the code below to create topics in our Confluent Cloud environments from a variable list.

resource "confluentcloud_kafka_topic" "all_topics" {
  for_each         = toset(var.tf_var_confluent_cloud_kafka_topic_list)
  kafka_cluster    = confluentcloud_kafka_cluster.terraform-cluster.id
  topic_name       = format("%s%s", "tf-", each.key)
  partitions_count = 7
  http_endpoint    = confluentcloud_kafka_cluster.terraform-cluster.http_endpoint
  config = {
    "retention.ms" = "6789000"
  }
  credentials {
    key    = var.tf_var_confluent_cloud_kafka_cluster_api_key
    secret = var.tf_var_confluent_cloud_kafka_cluster_api_secret
  }
}

When applying the changes, the plugin stops responding:

╷
│ Error: Request cancelled
│ 
│   with confluentcloud_kafka_topic.all_topics["topic_name"],
│   on providers.tf line 25, in resource "confluentcloud_kafka_topic" "all_topics":
│   25: resource "confluentcloud_kafka_topic" "all_topics" {
│ 
│ The plugin.(*GRPCProvider).ValidateResourceConfig request was cancelled.
╵
╷
│ Error: Plugin did not respond
│ 
│   with confluentcloud_kafka_topic.all_topics["topic_name"],
│   on providers.tf line 25, in resource "confluentcloud_kafka_topic" "all_topics":
│   25: resource "confluentcloud_kafka_topic" "all_topics" {
│ 
│ The plugin encountered an error, and failed to respond to the plugin.(*GRPCProvider).ApplyResourceChange call. The plugin logs may contain more details.

I also get the following error:

Stack trace from the terraform-provider-confluentcloud_0.5.0 plugin:

panic: reflect: call of reflect.Value.FieldByName on zero Value

goroutine 120 [running]:
reflect.flag.mustBe(...)
	/usr/local/golang/1.16/go/src/reflect/value.go:221
reflect.Value.FieldByName(0x0, 0x0, 0x0, 0x19da363, 0x6, 0x0, 0x140, 0x12b)
	/usr/local/golang/1.16/go/src/reflect/value.go:903 +0x25a
github.com/confluentinc/terraform-provider-ccloud/internal/provider.createDiagnosticsWithDetails(0x1adb6a0, 0xc00082c280, 0xc0003fd5a0, 0x3, 0x3)
	src/github.com/confluentinc/terraform-provider-confluentcloud/internal/provider/utils.go:304 +0x2c5
github.com/confluentinc/terraform-provider-ccloud/internal/provider.kafkaTopicCreate(0x1aea3e8, 0xc0002a0060, 0xc0000f8000, 0x1926000, 0xc000201030, 0xc0007be500, 0x146a3aa, 0xc000402100)
	src/github.com/confluentinc/terraform-provider-confluentcloud/internal/provider/resource_kafka_topic.go:141 +0x525
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).create(0xc0003d8ee0, 0x1aea378, 0xc0003c64c0, 0xc0000f8000, 0x1926000, 0xc000201030, 0x0, 0x0, 0x0)
	pkg/mod/github.com/hashicorp/terraform-plugin-sdk/[email protected]/helper/schema/resource.go:341 +0x17f
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).Apply(0xc0003d8ee0, 0x1aea378, 0xc0003c64c0, 0xc000838820, 0xc000402100, 0x1926000, 0xc000201030, 0x0, 0x0, 0x0, ...)
	pkg/mod/github.com/hashicorp/terraform-plugin-sdk/[email protected]/helper/schema/resource.go:467 +0x67b
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*GRPCProviderServer).ApplyResourceChange(0xc0000b0108, 0x1aea378, 0xc0003c64c0, 0xc0005c0320, 0x19e3224, 0x12, 0x0)
	pkg/mod/github.com/hashicorp/terraform-plugin-sdk/[email protected]/helper/schema/grpc_provider.go:977 +0xacf
github.com/hashicorp/terraform-plugin-go/tfprotov5/tf5server.(*server).ApplyResourceChange(0xc00060e180, 0x1aea420, 0xc0003c64c0, 0xc0008c8000, 0x0, 0x0, 0x0)
	pkg/mod/github.com/hashicorp/[email protected]/tfprotov5/tf5server/server.go:603 +0x465
github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/tfplugin5._Provider_ApplyResourceChange_Handler(0x19a12e0, 0xc00060e180, 0x1aea420, 0xc00054a300, 0xc0008a8180, 0x0, 0x1aea420, 0xc00054a300, 0xc000102280, 0x27c)
	pkg/mod/github.com/hashicorp/[email protected]/tfprotov5/internal/tfplugin5/tfplugin5_grpc.pb.go:380 +0x214
google.golang.org/grpc.(*Server).processUnaryRPC(0xc0002c6540, 0x1af1b98, 0xc000603800, 0xc00027e200, 0xc00060c570, 0x1f9bb00, 0x0, 0x0, 0x0)
	pkg/mod/google.golang.org/[email protected]/server.go:1210 +0x52b
google.golang.org/grpc.(*Server).handleStream(0xc0002c6540, 0x1af1b98, 0xc000603800, 0xc00027e200, 0x0)
	pkg/mod/google.golang.org/[email protected]/server.go:1533 +0xd0c
google.golang.org/grpc.(*Server).serveStreams.func1.2(0xc000274340, 0xc0002c6540, 0x1af1b98, 0xc000603800, 0xc00027e200)
	pkg/mod/google.golang.org/[email protected]/server.go:871 +0xab
created by google.golang.org/grpc.(*Server).serveStreams.func1
	pkg/mod/google.golang.org/[email protected]/server.go:869 +0x1fd

Error: The terraform-provider-confluentcloud_0.5.0 plugin crashed!

This is always indicative of a bug within the plugin. It would be immensely
helpful if you could report the crash with the plugin's maintainers so that it
can be fixed. The output above should help diagnose the issue.

403 on terraform apply

I can't create my environment or basic cluster; when I run terraform apply, it returns a 403 Forbidden error.

main.tf:

terraform {
  required_providers {
    confluentcloud = {
      source  = "confluentinc/confluentcloud"
      version = "0.2.0"
    }
  }
}

provider "confluentcloud" {}

resource "confluentcloud_environment" "test-env" {
  display_name = "Development"
}

resource "confluentcloud_kafka_cluster" "basic-cluster" {
  display_name = "basic_kafka_cluster"
  availability = "SINGLE_ZONE"
  cloud        = "AWS"
  region       = "sa-east-1"
  basic {}

  environment {
    id = confluentcloud_environment.test-env.id
  }
}
confluentcloud_environment.ccloud-environment: Creating...

Error: 403 Forbidden

  on main.tf line 12, in resource "confluentcloud_environment" "test-env":
   12: resource "confluentcloud_environment" "test-env" {

Terraform version

terraform -v
Terraform v0.15.5
on linux_amd64

Support for dedicated Kafka cluster

Currently, I see support only for basic and standard Kafka cluster creation in the confluentcloud_kafka_cluster resource.

Is there a plan to support creation of dedicated clusters in future releases?

Thanks.
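For reference, a sketch of what this could look like, by analogy with the basic {} and standard {} blocks used elsewhere in these issues; the dedicated block and its cku sizing argument are assumptions here, not confirmed provider syntax:

resource "confluentcloud_kafka_cluster" "dedicated" {
  display_name = "dedicated_kafka_cluster"
  availability = "MULTI_ZONE"
  cloud        = "AWS"
  region       = "us-east-1"

  # Hypothetical block, sized in CKUs the way Confluent sizes dedicated clusters.
  dedicated {
    cku = 2
  }

  environment {
    id = var.environment_id
  }
}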

API Key Resource: How to create a cluster & topic?

I have the following Terraform file based on the examples:

variable "env" {
  default = "test"
}

provider "confluentcloud" {
  api_key    = var.confluentcloud_api_key
  api_secret = var.confluentcloud_api_secret
}

variable "confluentcloud_api_key" {}
variable "confluentcloud_api_secret" {}

resource "confluentcloud_environment" "environment" {
  display_name = var.env
}

resource "confluentcloud_kafka_cluster" "basic-cluster" {
  display_name = var.env
  availability = "SINGLE_ZONE"
  cloud        = "AZURE"
  region       = var.region

  basic {

  }

  environment {
    id = confluentcloud_environment.environment.id
  }
}

resource "confluentcloud_kafka_topic" "transit-alert-trip-patches" {
  kafka_cluster      = confluentcloud_kafka_cluster.basic-cluster.id
  topic_name         = "transit-alert.trip-patches"
  partitions_count   = 4
  http_endpoint      = confluentcloud_kafka_cluster.basic-cluster.http_endpoint
  config = {
#    "cleanup.policy"    = "compact"
#    "max.message.bytes" = "12345"
#    "retention.ms"      = "67890"
  }
  credentials {
    key    = "<Kafka API Key for confluentcloud_kafka_cluster.basic-cluster>"
    secret = "<Kafka API Secret for confluentcloud_kafka_cluster.basic-cluster>"
  }
}

What should I put in the confluentcloud_kafka_topic.transit-alert-trip-patches credentials block? With the values of var.confluentcloud_api_key & var.confluentcloud_api_secret, terraform apply fails. I see no related output from the confluentcloud_kafka_cluster.basic-cluster block.

Is there a way to:

  • create an environment
  • create the cluster
  • and create the topics in one terraform apply?

Regards,

Ability to get clusters by name instead of just id.

What

See #46 for more details.

It would be extremely helpful to be able to look up the id of a cluster given its name. I could see this being done either via the existing data sources or by adding plural forms of the existing data sources that return lists which can then be filtered on the caller side. An example of the latter in another provider is the aws_eks_cluster and aws_eks_clusters data sources in the aws provider.

data references

Hi there,

Are there plans for data references, as in:

data "confluentcloud_kafka_cluster" "cluster_from_another_stack" {
  display_name = "basic_kafka_cluster"
}

resource "confluentcloud_kafka_topic" "orders" {
  kafka_cluster      = data.confluentcloud_kafka_cluster.cluster_from_another_stack.id
  topic_name         = "orders"
  partitions_count   = 4
  http_endpoint      = data.confluentcloud_kafka_cluster.cluster_from_another_stack.http_endpoint
  config = {
    "cleanup.policy"    = "compact"
    "max.message.bytes" = "12345"
    "retention.ms"      = "67890"
  }
  credentials {
    key    = "<Kafka API Key for data.confluentcloud_kafka_cluster.cluster_from_another_stack>"
    secret = "<Kafka API Secret for data.confluentcloud_kafka_cluster.cluster_from_another_stack>"
  }
}

We would like to have a primary repo which controls all the core infrastructure, without having to statically store all the requisite ids and pass them into the producer and consumer services' stacks.

username/password not supported in provider

The examples specify that username/password can be used in the provider. However, when I try the same:

provider "confluentcloud" {
  username = var.api_key
  password = var.api_secret
}

I see the error below:

Error: Unsupported argument
│ 
│   on providers.tf line 2, in provider "confluentcloud":
│    2:   username = var.api_key
│ 
│ An argument named "username" is not expected here.
╵
╷
│ Error: Unsupported argument
│ 
│   on providers.tf line 3, in provider "confluentcloud":
│    3:   password = var.api_secret
│ 
│ An argument named "password" is not expected here.

Much appreciated.
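For comparison, provider configurations elsewhere in these issues authenticate with api_key / api_secret rather than username / password (see the "API Key Resource: How to create a cluster & topic?" issue above):

provider "confluentcloud" {
  api_key    = var.api_key
  api_secret = var.api_secret
}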

confluentcloud_kafka_acl resource not applying principal

Using provider version 0.3.0

resource "confluentcloud_kafka_acl" "describe-basic-cluster" {
  kafka_cluster = var.kafka_cluster_id
  resource_type = "CLUSTER"
  resource_name = "kafka-cluster"
  pattern_type  = "LITERAL"
  principal     = "User:sa-55555"
  host          = "*"
  operation     = "DESCRIBE"
  permission    = "ALLOW"
  http_endpoint = var.kafka_http_endpoint
  credentials {
    key    = var.cluster_api_key
    secret = var.cluster_api_secret
  }
}

After applying, listing the cluster ACLs via the CLI yields the following. Note that the principal field is empty:

  Principal | Permission | Operation | ResourceType | ResourceName  | PatternType
  ----------+------------+-----------+--------------+---------------+-------------
  User:     | ALLOW      | DESCRIBE  | CLUSTER      | kafka-cluster | LITERAL

Error: 429 Too Many Requests

I get the following errors:

│ Error: 429 Too Many Requests
│ 
│   with module.sentry_kafka_topics.confluentcloud_kafka_topic.kafka_topics["outcomes"],
│   on ../../../../../../modules/confluent-kafka-topics/main.tf line 1, in resource "confluentcloud_kafka_topic" "kafka_topics":
│    1: resource "confluentcloud_kafka_topic" "kafka_topics" {

with code that looks like this:

resource "confluentcloud_kafka_topic" "kafka_topics" {
  for_each = var.topics

  topic_name       = each.value.topic_name
  partitions_count = each.value.partitions_count
  config           = each.value.config

  kafka_cluster = var.cluster_info.cluster_id
  http_endpoint = var.cluster_info.http_endpoint
  credentials {
    key    = local.cluster_credentials.api
    secret = local.cluster_credentials.secret
  }
}

I temporarily resolved the issue by adding a one-second sleep between calls in the provider:

func kafkaTopicRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
        log.Printf("[INFO] Kafka topic read2 for %s", d.Id())
        throttle() // sleep 1s between calls to stay under the rate limit

        clusterId := extractClusterId(d)
        topicName := extractTopicName(d)

Is there a better solution?
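Short of patching the provider, one mitigation is to reduce Terraform's request concurrency, as one of the issues above already does when planning:

terraform plan -parallelism=1
terraform apply -parallelism=1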

Implement an Importer for topics

I wanted to move our Confluent Cloud infrastructure to being managed by Terraform. However, since there is no importer for topics, I cannot bring existing topics into Terraform plans.
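For reference, topic IDs in this provider take the form <cluster id>/<topic name> (see the plan output in the replace-instead-of-update issue below), so a topic importer would presumably accept something like the following (hypothetical until an importer exists):

terraform import confluentcloud_kafka_topic.orders "<CLUSTER_ID>/orders"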

Service account data source by display name

The current service account data source does not make it possible to provide a display name and get the id as output. It is frequently necessary to get a service account id to use, for example, in a topic ACL. Is it possible to add that feature?
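A sketch of the requested behavior; the display_name lookup argument is hypothetical in the provider version discussed here:

# Hypothetical: resolve a service account by display name instead of id.
data "confluentcloud_service_account" "app" {
  display_name = "my-app-sa"
}

# The resolved id could then feed an ACL principal directly:
output "app_sa_principal" {
  value = "User:${data.confluentcloud_service_account.app.id}"
}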

ACL does not appear

Terraform: v1.1.2
confluentcloud: 0.3.0

resource "confluentcloud_kafka_acl" "test" {
  provider      = confluentcloud
  kafka_cluster = module.kafka.cluster
  resource_type = "TOPIC"
  resource_name = "test"
  pattern_type  = "LITERAL"
  principal     = "User:${module.kafka.service_account}"
  operation     = "WRITE"
  permission    = "ALLOW"
  http_endpoint = module.kafka.http_endpoint
  credentials {
    key    = data.google_secret_manager_secret_version.key.secret_data
    secret = data.google_secret_manager_secret_version.secret.secret_data
  }
}

Terraform applies the current configuration but the ACL does not appear.
The topic and user exist.

Terraform Debug log

confluentcloud_kafka_acl.test: Creating...
2022-01-27T18:28:35.591+0300 [INFO]  Starting apply for confluentcloud_kafka_acl.test
2022-01-27T18:28:35.592+0300 [DEBUG] confluentcloud_kafka_acl.test: applying the planned Create change
2022-01-27T18:28:35.592+0300 [TRACE] GRPCProvider: ApplyResourceChange
2022-01-27T18:28:36.382+0300 [INFO]  provider.terraform-provider-confluentcloud_0.3.0: 2022/01/27 18:28:36 [DEBUG] Created kafka ACL lkc-eee34/TOPIC#test#LITERAL#User:sa-23ff3t#*#WRITE#ALLOW: timestamp=2022-01-27T18:28:36.382+0300
2022-01-27T18:28:36.383+0300 [TRACE] NodeAbstractResouceInstance.writeResourceInstanceState to workingState for confluentcloud_kafka_acl.test
2022-01-27T18:28:36.383+0300 [TRACE] NodeAbstractResouceInstance.writeResourceInstanceState: writing state object for confluentcloud_kafka_acl.test
2022-01-27T18:28:36.383+0300 [TRACE] NodeAbstractResouceInstance.writeResourceInstanceState to workingState for confluentcloud_kafka_acl.test
2022-01-27T18:28:36.383+0300 [TRACE] NodeAbstractResouceInstance.writeResourceInstanceState: writing state object for confluentcloud_kafka_acl.test
confluentcloud_kafka_acl.test: Creation complete after 0s [id=lkc-eee34/TOPIC#test#LITERAL#User:sa-23ff3t#*#WRITE#ALLOW]
2022-01-27T18:28:36.384+0300 [TRACE] statemgr.Filesystem: creating backup snapshot at terraform.tfstate.d/dev/terraform.tfstate.backup
2022-01-27T18:28:36.387+0300 [TRACE] statemgr.Filesystem: state has changed since last snapshot, so incrementing serial to 141
2022-01-27T18:28:36.387+0300 [TRACE] statemgr.Filesystem: writing snapshot at terraform.tfstate.d/dev/terraform.tfstate
2022-01-27T18:28:36.388+0300 [TRACE] vertex "confluentcloud_kafka_acl.test": visit complete
2022-01-27T18:28:36.388+0300 [TRACE] vertex "provider[\"registry.terraform.io/confluentinc/confluentcloud\"] (close)": starting visit (*terraform.graphNodeCloseProvider)
2022-01-27T18:28:36.389+0300 [TRACE] GRPCProvider: Close
2022-01-27T18:28:36.389+0300 [DEBUG] provider.stdio: received EOF, stopping recv loop: err="rpc error: code = Unavailable desc = transport is closing"
2022-01-27T18:28:36.391+0300 [DEBUG] provider: plugin process exited: path=.terraform/providers/registry.terraform.io/confluentinc/confluentcloud/0.3.0/linux_amd64/terraform-provider-confluentcloud_0.3.0 pid=128741
2022-01-27T18:28:36.391+0300 [DEBUG] provider: plugin exited

How do I create it the right way?

Enhancement: Need to specify Kafka cluster type as argument - for code reusability

Currently, we use the code below to spin up a Kafka cluster. (As an example, we are spinning up a basic cluster for a lower environment.)

resource "confluentcloud_kafka_cluster" "example" {
  display_name = var.example_kafka_cluster_name
  availability = "SINGLE_ZONE"
  cloud        = "AZURE"
  region       = var.azure_us_east_2
  basic {}

  environment {
    id = var.environment_id
  }
}

If the code above has to be reused in a higher environment where a standard or dedicated cluster is needed instead of a basic one, this setup cannot be used as-is, since the cluster type has to be changed in the code. This holds even with workspaces created for different environments.

The only way to do this right now is to have a separate directory and repeat the code, changing only the cluster type.

It would help a lot if the cluster type could be passed/read as an argument. Something along the lines of:

resource "confluentcloud_kafka_cluster" "example" {
  display_name = var.example_kafka_cluster_name
  availability = "SINGLE_ZONE"
  cloud        = "AZURE"
  region       = var.azure_us_east_2
  type         = var.cluster_type # basic/standard/dedicated

  environment {
    id = var.environment_id
  }
}

Thanks.
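Until something like that exists, one possible workaround is to emit the cluster-type block conditionally with Terraform's dynamic blocks (a sketch, assuming the basic and standard blocks accept empty bodies as in the examples above):

resource "confluentcloud_kafka_cluster" "example" {
  display_name = var.example_kafka_cluster_name
  availability = "SINGLE_ZONE"
  cloud        = "AZURE"
  region       = var.azure_us_east_2

  # Emit exactly one cluster-type block based on var.cluster_type.
  dynamic "basic" {
    for_each = var.cluster_type == "basic" ? [1] : []
    content {}
  }

  dynamic "standard" {
    for_each = var.cluster_type == "standard" ? [1] : []
    content {}
  }

  environment {
    id = var.environment_id
  }
}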

Unable to run terraform init for custom module.

module.tf

module "confluentcloud" {
  source     = "../../"
  topic_name = "dev-topic"
}

Then I ran terraform init:

Initializing modules...

Initializing the backend...

Initializing provider plugins...
- Finding latest version of hashicorp/confluentcloud...
- Finding confluentinc/confluentcloud versions matching "0.2.0"...
- Installing confluentinc/confluentcloud v0.2.0...
- Installed confluentinc/confluentcloud v0.2.0 (signed by a HashiCorp partner, key ID
 Error: Failed to query available provider packages
│ 
│ Could not retrieve the list of available versions for provider hashicorp/confluentcloud: provider registry registry.terraform.io does not have a provider named
│ registry.terraform.io/hashicorp/confluentcloud
│ 
│ Did you intend to use confluentinc/confluentcloud? If so, you must specify that source address in each module which requires that provider. To see which modules are currently depending on
│ hashicorp/confluentcloud, run the following command:
│     terraform providers

Any ideas?
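As the error message suggests, the fix is to declare the provider's source address inside the module as well, not only in the root configuration; a sketch of the terraform block the module itself would need (the file name is illustrative):

# ../../versions.tf (inside the module)
terraform {
  required_providers {
    confluentcloud = {
      source  = "confluentinc/confluentcloud"
      version = "0.2.0"
    }
  }
}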

API Key Resource: Generate/Output API Key & Secret for confluentcloud_service_account object

I have set up automatic creation of clusters, topics, service accounts, and ACLs.

However, there is no option to output the API key & secret for the service account in the confluentcloud_service_account resource.
This has to be done manually through the portal and copy/pasted into our variable sets for the applications using the accounts.

Please provide an option in the next version to generate & output these.

Thanks,
Jørgen

Unable to import cluster from confluent cloud

Terraform version

Terraform v0.14.8
+ provider registry.terraform.io/confluentinc/confluentcloud v0.5.0
+ provider registry.terraform.io/hashicorp/aws v3.75.0
+ provider registry.terraform.io/hashicorp/vault v3.2.1

Configuration

The API key was created following this documentation:
https://registry.terraform.io/providers/confluentinc/confluentcloud/latest/docs/guides/sample-project#get-a-confluent-cloud-api-key

resource "confluentcloud_kafka_cluster" "basic" {
  count = var.type == "basic" ? 1 : 0

  display_name = var.cluster_name
  availability = var.availability
  cloud        = var.cloud
  region       = var.region

  basic {}

  environment {
    id = var.environment_id
  }
}

Command

terraform import "confluentcloud_kafka_cluster.basic[0]" "<ENV_ID>/<CLUSTER_ID>"

The environment_id looks like env-1234 and the cluster_id looks like lkc-1234.

Output

confluentcloud_kafka_cluster.basic[0]: Importing from ID "<ENV_ID>/<CLUSTER_ID>"...
confluentcloud_kafka_cluster.basic[0]: Import prepared!

Import successful!

The resources that were imported are shown above. These resources are now in
your Terraform state and will henceforth be managed by Terraform.

Trace

2022-03-22T16:31:31.793-0300 [INFO]  plugin.terraform-provider-confluentcloud_0.5.0: 2022/03/22 16:31:31 [ERROR] Kafka cluster get failed for id <CLUSTER_ID>, &{403 Forbidden 403 HTTP/1.1 1 1 map[Access-Control-Allow-Credentials:[true] Access-Control-Allow-Headers:[Authorization,Accept,Origin,DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Content-Range,Range] Access-Control-Allow-Methods:[GET,POST,OPTIONS,PUT,DELETE,PATCH] Connection:[keep-alive] Content-Length:[193] Content-Type:[application/json] Date:[Tue, 22 Mar 2022 19:31:26 GMT] Server:[nginx] Strict-Transport-Security:[max-age=31536000; includeSubDomains; preload] X-Content-Type-Options:[nosniff] X-Frame-Options:[deny] X-Request-Id:[c1f310758e972c4b59130f19618f5ae0] X-Xss-Protection:[1; mode=block]] {{
  "errors": [
    {
      "id": "c1f310758e972c4b59130f19618f5ae0",
      "status": "403",
      "code": "forbidden_access",
      "detail": "Forbidden Access",
      "source": {}
    }
  ]
}} 193 [] false false map[] 0xc0002b0800 0xc000245550}, 403 Forbidden: timestamp=2022-03-22T16:31:31.793-0300
2022-03-22T16:31:31.793-0300 [INFO]  plugin.terraform-provider-confluentcloud_0.5.0: 2022/03/22 16:31:31 [WARN] Kafka cluster with id=<CLUSTER_ID> is not found: timestamp=2022-03-22T16:31:31.793-0300
confluentcloud_kafka_cluster.basic[0]: Import prepared!
2022-03-22T16:31:31.795-0300 [WARN]  plugin.stdio: received EOF, stopping recv loop: err="rpc error: code = Unavailable desc = transport is closing"
2022-03-22T16:31:31.799-0300 [DEBUG] plugin: plugin process exited: path=.terraform/providers/registry.terraform.io/confluentinc/confluentcloud/0.5.0/linux_amd64/terraform-provider-confluentcloud_0.5.0 pid=342318
2022-03-22T16:31:31.799-0300 [DEBUG] plugin: plugin exited
2022/03/22 16:31:31 [INFO] Writing state output to: 

Import successful!

The resources that were imported are shown above. These resources are now in
your Terraform state and will henceforth be managed by Terraform.

Provider path is wrong when used inside a module

Hi,

I'm trying to write a simple reusable module using this provider.

However, it looks like Terraform detects the wrong provider path only when the provider is used inside a module.

  • Structure
jmasson@tp13 ~/workspace/test $ find . -name *.tf
./run/main.tf
./module/main.tf
  • consuming the module
terraform {
  required_providers {
    confluentcloud = {
      source  = "confluentinc/confluentcloud"
      version = "0.2.0"
    }
  }
}

provider "confluentcloud" {}

module "test" {
  source = "../module"
}
  • module itself
resource "confluentcloud_service_account" "test-sa" {
  display_name = "test_sa"
  description = "description for test_sa"
}

  • Terraform init
jmasson@tp13 ~/workspace/test/run $ terraform init
Initializing modules...

Initializing the backend...

Initializing provider plugins...
- Finding confluentinc/confluentcloud versions matching "0.2.0"...
- Finding latest version of hashicorp/confluentcloud...
- Installing confluentinc/confluentcloud v0.2.0...
- Installed confluentinc/confluentcloud v0.2.0 (signed by a HashiCorp partner, key ID D4A2B1EDB0EC0C8E)

Partner and community providers are signed by their developers.
If you'd like to know more about provider signing, you can read about it here:
https://www.terraform.io/docs/cli/plugins/signing.html
╷
│ Error: Failed to query available provider packages
│ 
│ Could not retrieve the list of available versions for provider hashicorp/confluentcloud: provider registry registry.terraform.io does not have a provider named registry.terraform.io/hashicorp/confluentcloud
│ 
│ Did you intend to use confluentinc/confluentcloud? If so, you must specify that source address in each module which requires that provider. To see which modules are currently depending on
│ hashicorp/confluentcloud, run the following command:
│     terraform providers

Notice the reference to "hashicorp/confluentcloud"?

  • terraform providers
jmasson@tp13 ~/workspace/test/run $ terraform providers

Providers required by configuration:
.
├── provider[registry.terraform.io/confluentinc/confluentcloud] 0.2.0
└── module.test
    └── provider[registry.terraform.io/hashicorp/confluentcloud]

Notice the reference to the provider in the module is wrong?

I've tried a couple of different versions of terraform - no change.

I presume this is a provider bug?

thanks

James M

HTTP 403 when creating a service account

Hi,

I've created a global access API key at https://confluent.cloud/environments/XXX/clusters/XXX/api-keys. I was able to create a topic, which means that my Confluent Cloud API key / provider is set up properly.

However, I'm getting an HTTP 403 error when trying to create a service account:

confluentcloud_service_account.core-service-sa: Creating...
╷
│ Error: 403 Forbidden
│ 
│   with confluentcloud_service_account.core-service-sa,
│   on main.tf line 48, in resource "confluentcloud_service_account" "core-service-sa":
│   48: resource "confluentcloud_service_account" "core-service-sa" {
│ 
╵

Any thoughts?

Topic and ACL resources store Kafka API credentials in state file

After creating topics and ACLs with the provider, the Kafka API credentials used are stored in the state file. This is, I suppose, done to allow deletion of the resources (otherwise, there would be no way of knowing the credentials, since they are no longer present in the inputs once the resource is removed).

While sensitive data is allowed in the state file, in this case it is certainly unexpected. At the least, this ought to be documented. Alternatively, if managing topics and ACLs did not require the Kafka API key or if topics and ACLs were managed by a second type of provider (holding the credentials), then credentials might not be required in the state.

Make topic deletion a sync operation

What


Namely, when using the 0.4.0 TF Provider and running terraform destroy && terraform apply for a Kafka topic resource, one sees the following error:

│ Error: 400 Bad Request: Topic 'orders' is marked for deletion.
│ 
│   with confluentcloud_kafka_topic.orders,
│   on main.tf line 1136, in resource "confluentcloud_kafka_topic" "orders":
│ 1136: resource "confluentcloud_kafka_topic" "orders" {
│ 
╵

The reason is that the 0.4.0 TF Provider currently sends a delete request but returns immediately instead of waiting for it to complete.

Changing topic configuration generates a replace instead of an update

After having created a topic:

resource "confluentcloud_kafka_topic" "foobar" {
  kafka_cluster = var.kafka_cluster_id
  topic_name = "foobar"
  partitions_count = 1
  http_endpoint = var.kafka_http_endpoint
  config = {
    "retention.ms" = "600000"
  }
 credentials {
    key = var.kafka_api_key
    secret = var.kafka_api_secret
 }

When changing retention.ms, terraform plan shows it will delete/recreate the topic instead of modifying in place:

Terraform will perform the following actions:

  # confluentcloud_kafka_topic.foobar must be replaced
-/+ resource "confluentcloud_kafka_topic" "foobar" {
      ~ config           = { # forces replacement
          ~ "retention.ms" = "600000" -> "6000000"
        }
      ~ id               = "<cluster id>/foobar" -> (known after apply)
        # (4 unchanged attributes hidden)

        credentials {
          # At least one attribute in this block is (or was) sensitive,
          # so its contents will not be displayed.
        }
    }

Plan: 1 to add, 0 to change, 1 to destroy.

I did not verify, but I suspect it doesn't matter which configuration parameter you change. This is certainly undesirable in most, if not all, cases of changing topic configuration parameters. I also observed that changing partitions_count similarly triggers a replace instead of an update; I could see arguments either way for that one.

The alternative Terraform provider for managing Kafka topics and ACLs issues an in-place update in the same situations.
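Until in-place updates are supported, a defensive option is Terraform's prevent_destroy lifecycle flag, which turns the silent replace into a hard plan error (a sketch based on the topic above):

resource "confluentcloud_kafka_topic" "foobar" {
  kafka_cluster    = var.kafka_cluster_id
  topic_name       = "foobar"
  partitions_count = 1
  http_endpoint    = var.kafka_http_endpoint
  config = {
    "retention.ms" = "600000"
  }
  credentials {
    key    = var.kafka_api_key
    secret = var.kafka_api_secret
  }

  lifecycle {
    # Fail the plan instead of destroying and recreating the topic.
    prevent_destroy = true
  }
}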

Error message when resources are exceeded is confusing

The organization I was working with had 10 environments. When the provider went to create the 11th environment, the following error message was returned:

╷
│ Error: 402 Payment Required
│ 
│   with confluentcloud_environment.workshop,
│   on confluent.tf line 16, in resource "confluentcloud_environment" "workshop":
│   16: resource "confluentcloud_environment" "workshop" {
│ 
╵

Stack trace creating confluentcloud_kafka_acl resource

Hey Folks,

I'm getting the following stack trace trying to create a Kafka ACL using Confluent Terraform provider v0.5.0:

confluentcloud_kafka_acl.describe_cluster: Creating...
╷
│ Error: Request cancelled
│ 
│   with confluentcloud_kafka_acl.describe_cluster,
│   on stack_trace.tf line 22, in resource "confluentcloud_kafka_acl" "describe_cluster":
│   22: resource "confluentcloud_kafka_acl" "describe_cluster" {
│ 
│ The plugin.(*GRPCProvider).ApplyResourceChange request was cancelled.
╵

Stack trace from the terraform-provider-confluentcloud_0.5.0 plugin:

panic: reflect: call of reflect.Value.FieldByName on zero Value

goroutine 66 [running]:
reflect.flag.mustBe(...)
	/usr/local/golang/1.16/go/src/reflect/value.go:221
reflect.Value.FieldByName(0x0, 0x0, 0x0, 0xdd983e, 0x6, 0x0, 0x140, 0x12e)
	/usr/local/golang/1.16/go/src/reflect/value.go:903 +0x25a
github.com/confluentinc/terraform-provider-ccloud/internal/provider.createDiagnosticsWithDetails(0xeda2e0, 0xc000596140, 0xc000267470, 0x3, 0x3)
	src/github.com/confluentinc/terraform-provider-confluentcloud/internal/provider/utils.go:304 +0x2c5
github.com/confluentinc/terraform-provider-ccloud/internal/provider.kafkaAclCreate(0xee8fe8, 0xc00060cc60, 0xc0001cae80, 0xd25360, 0xc00022e930, 0xc0001fe3e0, 0x868caa, 0xc0001cad00)
	src/github.com/confluentinc/terraform-provider-confluentcloud/internal/provider/resource_kafka_acl.go:179 +0x547
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).create(0xc0003b10a0, 0xee8f78, 0xc00009e4c0, 0xc0001cae80, 0xd25360, 0xc00022e930, 0x0, 0x0, 0x0)
	pkg/mod/github.com/hashicorp/terraform-plugin-sdk/[email protected]/helper/schema/resource.go:341 +0x17f
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).Apply(0xc0003b10a0, 0xee8f78, 0xc00009e4c0, 0xc0001d2680, 0xc0001cad00, 0xd25360, 0xc00022e930, 0x0, 0x0, 0x0, ...)
	pkg/mod/github.com/hashicorp/terraform-plugin-sdk/[email protected]/helper/schema/resource.go:467 +0x67b
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*GRPCProviderServer).ApplyResourceChange(0xc000304120, 0xee8f78, 0xc00009e4c0, 0xc00014b090, 0xde2957, 0x12, 0x0)
	pkg/mod/github.com/hashicorp/terraform-plugin-sdk/[email protected]/helper/schema/grpc_provider.go:977 +0xacf
github.com/hashicorp/terraform-plugin-go/tfprotov5/tf5server.(*server).ApplyResourceChange(0xc000788080, 0xee9020, 0xc00009e4c0, 0xc000242000, 0x0, 0x0, 0x0)
	pkg/mod/github.com/hashicorp/[email protected]/tfprotov5/tf5server/server.go:603 +0x465
github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/tfplugin5._Provider_ApplyResourceChange_Handler(0xda0580, 0xc000788080, 0xee9020, 0xc0001181b0, 0xc00060c360, 0x0, 0xee9020, 0xc0001181b0, 0xc0001be600, 0x2f8)
	pkg/mod/github.com/hashicorp/[email protected]/tfprotov5/internal/tfplugin5/tfplugin5_grpc.pb.go:380 +0x214
google.golang.org/grpc.(*Server).processUnaryRPC(0xc0002f6540, 0xef0618, 0xc0004b4a80, 0xc0001ba000, 0xc0002a25a0, 0x1396a80, 0x0, 0x0, 0x0)
	pkg/mod/google.golang.org/[email protected]/server.go:1210 +0x52b
google.golang.org/grpc.(*Server).handleStream(0xc0002f6540, 0xef0618, 0xc0004b4a80, 0xc0001ba000, 0x0)
	pkg/mod/google.golang.org/[email protected]/server.go:1533 +0xd0c
google.golang.org/grpc.(*Server).serveStreams.func1.2(0xc0003101d0, 0xc0002f6540, 0xef0618, 0xc0004b4a80, 0xc0001ba000)
	pkg/mod/google.golang.org/[email protected]/server.go:871 +0xab
created by google.golang.org/grpc.(*Server).serveStreams.func1
	pkg/mod/google.golang.org/[email protected]/server.go:869 +0x1fd

Error: The terraform-provider-confluentcloud_0.5.0 plugin crashed!

This is always indicative of a bug within the plugin. It would be immensely
helpful if you could report the crash with the plugin's maintainers so that it
can be fixed. The output above should help diagnose the issue.

How to reproduce?

  1. Create a Terraform file with the following content:
terraform {
  required_providers {
    confluentcloud = {
      source  = "confluentinc/confluentcloud"
      version = "0.5.0"
    }
  }
}

provider "confluentcloud" {}

resource "confluentcloud_environment" "stack_trace" {
  display_name = "stack_trace"
}

resource "confluentcloud_service_account" "stack_trace" {
  display_name = "stack-trace"
  description  = "Service account for stack trace reproduction"
}

resource "confluentcloud_kafka_cluster" "stack_trace" {
  display_name = "default"
  availability = "SINGLE_ZONE"
  cloud        = "GCP"
  region       = "us-west4"
  basic {}

  environment {
    id = confluentcloud_environment.stack_trace.id
  }
}

output "environment_id" {
  value = confluentcloud_environment.stack_trace.id
}

output "cluster_id" {
  value = confluentcloud_kafka_cluster.stack_trace.id
}

output "service_account_id" {
  value = confluentcloud_service_account.stack_trace.id
}
  2. Run terraform apply:
$ terraform apply
confluentcloud_environment.default: Refreshing state... [id=env-3y5do]
confluentcloud_service_account.tessitura_integration: Refreshing state... [id=sa-22g2dq]
confluentcloud_kafka_cluster.default: Refreshing state... [id=lkc-w7769j]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # confluentcloud_environment.stack_trace will be created
  + resource "confluentcloud_environment" "stack_trace" {
      + display_name = "stack_trace"
      + id           = (known after apply)
    }

  # confluentcloud_kafka_cluster.stack_trace will be created
  + resource "confluentcloud_kafka_cluster" "stack_trace" {
      + api_version        = (known after apply)
      + availability       = "SINGLE_ZONE"
      + bootstrap_endpoint = (known after apply)
      + cloud              = "GCP"
      + display_name       = "default"
      + http_endpoint      = (known after apply)
      + id                 = (known after apply)
      + kind               = (known after apply)
      + rbac_crn           = (known after apply)
      + region             = "us-west4"

      + basic {}

      + environment {
          + id = (known after apply)
        }
    }

  # confluentcloud_service_account.stack_trace will be created
  + resource "confluentcloud_service_account" "stack_trace" {
      + api_version  = (known after apply)
      + description  = "Service account for stack trace reproduction"
      + display_name = "stack-trace"
      + id           = (known after apply)
      + kind         = (known after apply)
    }

Plan: 3 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + cluster_id         = (known after apply)
  + environment_id     = (known after apply)
  + service_account_id = (known after apply)

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

confluentcloud_service_account.stack_trace: Creating...
confluentcloud_environment.stack_trace: Creating...
confluentcloud_environment.stack_trace: Creation complete after 2s [id=env-zoy90]
confluentcloud_kafka_cluster.stack_trace: Creating...
confluentcloud_service_account.stack_trace: Creation complete after 2s [id=sa-rrm5k0]
confluentcloud_kafka_cluster.stack_trace: Creation complete after 8s [id=lkc-w77kkg]

Apply complete! Resources: 3 added, 0 changed, 0 destroyed.

Outputs:

cluster_id = "lkc-w77kkg"
environment_id = "env-zoy90"
service_account_id = "sa-rrm5k0"
  3. Create an API key:
confluent api-key create \
    --service-account $(terraform output -raw service_account_id) \
    --environment $(terraform output -raw environment_id) \
    --resource $(terraform output -raw cluster_id)

It may take a couple of minutes for the API key to be ready.
Save the API key and secret. The secret is not retrievable later.
+---------+------------------------------------------------------------------+
| API Key | XXX_api_key_here_XXX                                             |
| Secret  | XXX_secret_here_XXX                                              |
+---------+------------------------------------------------------------------+
  4. Using the API key and secret from the previous command's output, add the following code to the Terraform file created in step 1:
resource "confluentcloud_kafka_acl" "describe_cluster" {
  kafka_cluster = confluentcloud_kafka_cluster.stack_trace.id
  http_endpoint = confluentcloud_kafka_cluster.stack_trace.http_endpoint
  resource_type = "CLUSTER"
  resource_name = "kafka-cluster"
  pattern_type  = "LITERAL"
  principal     = "User:${confluentcloud_service_account.stack_trace.id}"
  host          = "*"
  operation     = "DESCRIBE"
  permission    = "ALLOW"

  credentials {
    key    = "XXX_api_key_here_XXX"
    secret = "XXX_secret_here_XXX"
  }
}
  5. Finally, run terraform apply again to get the stack trace:
$ terraform apply
confluentcloud_service_account.tessitura_integration: Refreshing state... [id=sa-22g2dq]
confluentcloud_environment.stack_trace: Refreshing state... [id=env-zoy90]
confluentcloud_environment.default: Refreshing state... [id=env-3y5do]
confluentcloud_service_account.stack_trace: Refreshing state... [id=sa-rrm5k0]
confluentcloud_kafka_cluster.stack_trace: Refreshing state... [id=lkc-w77kkg]
confluentcloud_kafka_cluster.default: Refreshing state... [id=lkc-w7769j]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # confluentcloud_kafka_acl.describe_cluster will be created
  + resource "confluentcloud_kafka_acl" "describe_cluster" {
      + host          = "*"
      + http_endpoint = "https://pkc-6ojv2.us-west4.gcp.confluent.cloud:443"
      + id            = (known after apply)
      + kafka_cluster = "lkc-w77kkg"
      + operation     = "DESCRIBE"
      + pattern_type  = "LITERAL"
      + permission    = "ALLOW"
      + principal     = "User:sa-rrm5k0"
      + resource_name = "kafka-cluster"
      + resource_type = "CLUSTER"

      + credentials {
          + key    = (sensitive value)
          + secret = (sensitive value)
        }
    }

Plan: 1 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

confluentcloud_kafka_acl.describe_cluster: Creating...
╷
│ Error: Request cancelled
│ 
│   with confluentcloud_kafka_acl.describe_cluster,
│   on stack_trace.tf line 22, in resource "confluentcloud_kafka_acl" "describe_cluster":
│   22: resource "confluentcloud_kafka_acl" "describe_cluster" {
│ 
│ The plugin.(*GRPCProvider).ApplyResourceChange request was cancelled.
╵

Stack trace from the terraform-provider-confluentcloud_0.5.0 plugin:

panic: reflect: call of reflect.Value.FieldByName on zero Value

goroutine 66 [running]:
reflect.flag.mustBe(...)
	/usr/local/golang/1.16/go/src/reflect/value.go:221
reflect.Value.FieldByName(0x0, 0x0, 0x0, 0xdd983e, 0x6, 0x0, 0x140, 0x12e)
	/usr/local/golang/1.16/go/src/reflect/value.go:903 +0x25a
github.com/confluentinc/terraform-provider-ccloud/internal/provider.createDiagnosticsWithDetails(0xeda2e0, 0xc000596140, 0xc000267470, 0x3, 0x3)
	src/github.com/confluentinc/terraform-provider-confluentcloud/internal/provider/utils.go:304 +0x2c5
github.com/confluentinc/terraform-provider-ccloud/internal/provider.kafkaAclCreate(0xee8fe8, 0xc00060cc60, 0xc0001cae80, 0xd25360, 0xc00022e930, 0xc0001fe3e0, 0x868caa, 0xc0001cad00)
	src/github.com/confluentinc/terraform-provider-confluentcloud/internal/provider/resource_kafka_acl.go:179 +0x547
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).create(0xc0003b10a0, 0xee8f78, 0xc00009e4c0, 0xc0001cae80, 0xd25360, 0xc00022e930, 0x0, 0x0, 0x0)
	pkg/mod/github.com/hashicorp/terraform-plugin-sdk/[email protected]/helper/schema/resource.go:341 +0x17f
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).Apply(0xc0003b10a0, 0xee8f78, 0xc00009e4c0, 0xc0001d2680, 0xc0001cad00, 0xd25360, 0xc00022e930, 0x0, 0x0, 0x0, ...)
	pkg/mod/github.com/hashicorp/terraform-plugin-sdk/[email protected]/helper/schema/resource.go:467 +0x67b
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*GRPCProviderServer).ApplyResourceChange(0xc000304120, 0xee8f78, 0xc00009e4c0, 0xc00014b090, 0xde2957, 0x12, 0x0)
	pkg/mod/github.com/hashicorp/terraform-plugin-sdk/[email protected]/helper/schema/grpc_provider.go:977 +0xacf
github.com/hashicorp/terraform-plugin-go/tfprotov5/tf5server.(*server).ApplyResourceChange(0xc000788080, 0xee9020, 0xc00009e4c0, 0xc000242000, 0x0, 0x0, 0x0)
	pkg/mod/github.com/hashicorp/[email protected]/tfprotov5/tf5server/server.go:603 +0x465
github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/tfplugin5._Provider_ApplyResourceChange_Handler(0xda0580, 0xc000788080, 0xee9020, 0xc0001181b0, 0xc00060c360, 0x0, 0xee9020, 0xc0001181b0, 0xc0001be600, 0x2f8)
	pkg/mod/github.com/hashicorp/[email protected]/tfprotov5/internal/tfplugin5/tfplugin5_grpc.pb.go:380 +0x214
google.golang.org/grpc.(*Server).processUnaryRPC(0xc0002f6540, 0xef0618, 0xc0004b4a80, 0xc0001ba000, 0xc0002a25a0, 0x1396a80, 0x0, 0x0, 0x0)
	pkg/mod/google.golang.org/[email protected]/server.go:1210 +0x52b
google.golang.org/grpc.(*Server).handleStream(0xc0002f6540, 0xef0618, 0xc0004b4a80, 0xc0001ba000, 0x0)
	pkg/mod/google.golang.org/[email protected]/server.go:1533 +0xd0c
google.golang.org/grpc.(*Server).serveStreams.func1.2(0xc0003101d0, 0xc0002f6540, 0xef0618, 0xc0004b4a80, 0xc0001ba000)
	pkg/mod/google.golang.org/[email protected]/server.go:871 +0xab
created by google.golang.org/grpc.(*Server).serveStreams.func1
	pkg/mod/google.golang.org/[email protected]/server.go:869 +0x1fd

Error: The terraform-provider-confluentcloud_0.5.0 plugin crashed!

This is always indicative of a bug within the plugin. It would be immensely
helpful if you could report the crash with the plugin's maintainers so that it
can be fixed. The output above should help diagnose the issue.

Additional information

$ cat /etc/os-release 
NAME="Ubuntu"
VERSION="20.04.3 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.3 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal

$ terraform -v
Terraform v1.1.3
on linux_amd64
+ provider registry.terraform.io/confluentinc/confluentcloud v0.5.0

Your version of Terraform is out of date! The latest version
is 1.1.7. You can update by downloading from https://www.terraform.io/downloads.html

Error: 429 Too Many Requests

I am receiving this error on terraform plan.

I do have more than 35 topics.

│ Error: 429 Too Many Requests

│ with confluentcloud_kafka_topic.topics["<topic_name>"],
│ on main.tf line 17, in resource "confluentcloud_kafka_topic" "topics":
│ 17: resource "confluentcloud_kafka_topic" "topics" {

Use service account id instead of integer id for principal argument

Hi,
When creating the confluentcloud_kafka_acl resource, I need to provide the principal argument, which seems to accept only an "integer id" instead of, for example, confluentcloud_service_account.id.
Is there any plan to support confluentcloud_service_account.id in the future, so that it can be referenced directly via principal = "User:${confluentcloud_service_account.test-sa.id}"? That way I wouldn't need to create the service account first and then manually retrieve the "integer id", which is also a hacky procedure IMHO, as this id is not returned/documented anywhere when using the REST API or the Confluent CLI.

Cheers,

ACL - support for service account in ACL (clarification)

Is there a way to assign a service account and a consumer group while creating an acl for a topic?

I am trying to replicate the feature below from here:

  • Allow a consumer to read from a topic using a consumer group by defining acl
ccloud kafka acl create --allow --service-account sa-55555 --operation READ --operation DESCRIBE --consumer-group java_example_group_1
ccloud kafka acl create --allow --service-account sa-55555 --operation READ --operation DESCRIBE --topic '*'

Below is what I have, based on the examples. From what I understand, I don't have an option to assign a service account to the ACLs, and the principal is not the same as a service account (e.g. sa-55555).

Is the statement below the equivalent of assigning the service account?

principal     = "User:${var.service_account_terraform_sa_id}"

acl for topic operation

resource "confluentcloud_kafka_acl" "read-test-topic" {
  kafka_cluster = var.cluster_id
  resource_type = "TOPIC"
  resource_name = confluentcloud_kafka_topic.test-topic.topic_name
  pattern_type  = "LITERAL"
  principal     = "User:${var.service_account_terraform_sa_id}"
  host          = "*"
  operation     = "READ"
  permission    = "ALLOW"
  http_endpoint = var.cluster_http_endpoint

  credentials {
    key    = var.kafka_api_key
    secret = var.kafka_api_secret
  }
}

What will be the name of the consumer group after applying the ACL for the consumer group, and can the name be customized?

resource "confluentcloud_kafka_acl" "consumer-group-test-topic" {
  kafka_cluster = var.cluster_id
  resource_type = "GROUP"
  resource_name = confluentcloud_kafka_topic.test-topic.topic_name
  pattern_type  = "LITERAL"
  principal     = "User:${var.service_account_terraform_sa_id}"
  host          = "*"
  operation     = "READ"
  permission    = "ALLOW"
  http_endpoint = var.cluster_http_endpoint

  credentials {
    key    = var.kafka_api_key
    secret = var.kafka_api_secret
  }
}

Much appreciated

Deploying "Apache Kafka on Confluent Cloud"

Hello,

As far as I understand it, this Terraform provider "only" connects to an existing Confluent Cloud instance that was created manually beforehand. Wouldn't it be better if the cloud integration could be created with this provider too?
https://azure.microsoft.com/en-us/blog/introducing-seamless-integration-between-microsoft-azure-and-confluent-cloud/

Maybe it would even be possible to forgo the need for the API key and secret by using a Service Principal or Managed Identity.

Am I missing something, or is this planned for the future?

Thanks for your help!

Terraform Scripts fails, error indicates plugin crashed

Getting this error while running the Terraform script below to provision a topic in a Confluent Kafka cluster, giving the cluster_id and secrets as inputs.

╷
│ Error: Plugin did not respond
│ 
│   with confluentcloud_kafka_topic.topics["confluent-test-topic"],
│   on topics.tf line 1, in resource "confluentcloud_kafka_topic" "topics":
│    1: resource "confluentcloud_kafka_topic" "topics" {
│ 
│ The plugin encountered an error, and failed to respond to the
│ plugin.(*GRPCProvider).ApplyResourceChange call. The plugin logs may
│ contain more details.
╵
Releasing state lock. This may take a few moments...

Stack trace from the terraform-provider-confluentcloud_0.5.0 plugin:

panic: reflect: call of reflect.Value.FieldByName on zero Value

goroutine 67 [running]:
reflect.flag.mustBe(...)
        /usr/local/golang/1.16/go/src/reflect/value.go:221
reflect.Value.FieldByName(0x0, 0x0, 0x0, 0x104ad1e0e, 0x6, 0x0, 0x1b6, 0x0)
        /usr/local/golang/1.16/go/src/reflect/value.go:903 +0x190
github.com/confluentinc/terraform-provider-ccloud/internal/provider.createDiagnosticsWithDetails(0x104d8bcb8, 0x14000332780, 0x1400008f588, 0x3, 0x3)
        src/github.com/confluentinc/terraform-provider-confluentcloud/internal/provider/utils.go:304 +0x240
github.com/confluentinc/terraform-provider-ccloud/internal/provider.kafkaTopicCreate(0x104d9b188, 0x1400009d020, 0x14000689480, 0x104cc3ea0, 0x14000182540, 0x140006cea80, 0x14000689300, 0x10482b700)
        src/github.com/confluentinc/terraform-provider-confluentcloud/internal/provider/resource_kafka_topic.go:141 +0x374
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).create(0x14000181500, 0x104d9b118, 0x14000416880, 0x14000689480, 0x104cc3ea0, 0x14000182540, 0x0, 0x0, 0x0)
        pkg/mod/github.com/hashicorp/terraform-plugin-sdk/[email protected]/helper/schema/resource.go:341 +0x118
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*Resource).Apply(0x14000181500, 0x104d9b118, 0x14000416880, 0x14000328680, 0x14000689300, 0x104cc3ea0, 0x14000182540, 0x0, 0x0, 0x0, ...)
        pkg/mod/github.com/hashicorp/terraform-plugin-sdk/[email protected]/helper/schema/resource.go:467 +0x4ec
github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema.(*GRPCProviderServer).ApplyResourceChange(0x1400000d470, 0x104d9b118, 0x14000416880, 0x14000392550, 0x104adad89, 0x12, 0x0)
        pkg/mod/github.com/hashicorp/terraform-plugin-sdk/[email protected]/helper/schema/grpc_provider.go:977 +0x870
github.com/hashicorp/terraform-plugin-go/tfprotov5/tf5server.(*server).ApplyResourceChange(0x14000688200, 0x104d9b1c0, 0x14000416880, 0x14000198000, 0x0, 0x0, 0x0)
        pkg/mod/github.com/hashicorp/[email protected]/tfprotov5/tf5server/server.go:603 +0x338
github.com/hashicorp/terraform-plugin-go/tfprotov5/internal/tfplugin5._Provider_ApplyResourceChange_Handler(0x104d3ef20, 0x14000688200, 0x104d9b1c0, 0x140005cc8a0, 0x1400009c6c0, 0x0, 0x104d9b1c0, 0x140005cc8a0, 0x1400066a600, 0x2e0)
        pkg/mod/github.com/hashicorp/[email protected]/tfprotov5/internal/tfplugin5/tfplugin5_grpc.pb.go:380 +0x1c8
google.golang.org/grpc.(*Server).processUnaryRPC(0x140002be8c0, 0x104da2c38, 0x14000092d80, 0x140006de100, 0x140006865d0, 0x105235900, 0x0, 0x0, 0x0)
        pkg/mod/google.golang.org/[email protected]/server.go:1210 +0x3e8
google.golang.org/grpc.(*Server).handleStream(0x140002be8c0, 0x104da2c38, 0x14000092d80, 0x140006de100, 0x0)
        pkg/mod/google.golang.org/[email protected]/server.go:1533 +0xa50
google.golang.org/grpc.(*Server).serveStreams.func1.2(0x140003021b0, 0x140002be8c0, 0x104da2c38, 0x14000092d80, 0x140006de100)
        pkg/mod/google.golang.org/[email protected]/server.go:871 +0x94
created by google.golang.org/grpc.(*Server).serveStreams.func1
        pkg/mod/google.golang.org/[email protected]/server.go:869 +0x1f8

Error: The terraform-provider-confluentcloud_0.5.0 plugin crashed!

This is always indicative of a bug within the plugin. It would be immensely
helpful if you could report the crash with the plugin's maintainers so that it
can be fixed. The output above should help diagnose the issue.

can't create topics/acls for dedicated clusters

Having issues when creating topics and ACLs on a dedicated cluster (PrivateLink). It works fine with basic clusters. What could be the cause of this issue? Could it be that dedicated clusters don't have the REST endpoint enabled?

confluentcloud_kafka_topic.test: Creating...
│ Error: 404 Not Found
│ with confluentcloud_kafka_topic.test,
│ on test.tf line 17, in resource "confluentcloud_kafka_topic" "test":
│ 17: resource "confluentcloud_kafka_topic" "test" {

v0.4.0 - Error: Provider produced inconsistent result after apply

We are trying to upgrade the provider from version 0.2.0 to version 0.4.0, because on 0.2.0 we are getting Error: 429 Too Many Requests.

On apply we get the Plan: 15 to add, 0 to change, 0 to destroy.

When confirming the apply, the configuration starts to drift: all the resources are created in Confluent Cloud, but some of them are not reflected in the state file.

Within the process we get the following errors:

│ Error: Provider produced inconsistent result after apply
│
│ When applying changes to module.kafka_topics.confluentcloud_kafka_topic.kafka_topics["ingest_metrics"], provider
│ "module.sentry_kafka_topics.provider[\"registry.terraform.io/confluentinc/confluentcloud\"]" produced an unexpected new value: Root resource was present, but now absent.
│ 
│ This is a bug in the provider, which should be reported in the provider's own issue tracker.

and

│ Error: 404 Not Found: 
│ 
│   with module.kafka_topics.confluentcloud_kafka_topic.kafka_topics["events_subscription_results"],
│   on ../../../../confluent-kafka-topics/main.tf line 1, in resource "confluentcloud_kafka_topic" "kafka_topics":
│    1: resource "confluentcloud_kafka_topic" "kafka_topics" {

What can we do to get around this issue?

BUG: Credentials configuration block isn't being updated

Steps to recreate:

  1. Create cluster specific API keys in Confluent Cloud Console.
  2. Create a topic resource using the keys from #1: https://registry.terraform.io/providers/confluentinc/confluentcloud/latest/docs/resources/confluentcloud_kafka_topic
  3. Remove API Keys (#1) in Confluent Cloud
  4. Create a new set of credentials (#1) and update the credentials block (#2), as illustrated below
  5. Run a plan/apply and get a 401 error
  6. Check the state file and you will see the old credentials that were removed
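
For illustration, the step-4 update looks like this in HCL (resource and variable names are hypothetical); after the plan/apply in step 5, the state still contains the credentials removed in step 3:

resource "confluentcloud_kafka_topic" "orders" {
  # ...topic arguments unchanged...
  credentials {
    key    = var.rotated_kafka_api_key    # new key from step 4
    secret = var.rotated_kafka_api_secret # new secret from step 4
  }
}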

data.confluentcloud_service_account.service_accounts_confluent with display name doesn't use cursor when listing...

Hi,

When using data "confluentcloud_service_account" "service_accounts_confluent" with display_name set to the name of an existing SA (verified with the Confluent CLI), the provider doesn't find the SA.
After having a look at the sources, it seems the provider calls an API that returns paginated results with at most 100 elements per page. The pagination cursor is not handled, so the provider searches only within the first 100 service accounts.
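
A minimal sketch of the usage that triggers this (the display name is illustrative):

data "confluentcloud_service_account" "service_accounts_confluent" {
  display_name = "my-existing-sa" # not found if it falls beyond the first page of 100 results
}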

set acl to topic

When I use the script in the "sample project", I get this error: The argument "host" is required, but no definition was found.

But I don't see this parameter in the docs:

resource "confluentcloud_kafka_acl" "describe-orders" {
  kafka_cluster = confluentcloud_kafka_cluster.test-basic-cluster.id
  resource_type = "TOPIC"
  resource_name = confluentcloud_kafka_topic.orders.topic_name
  pattern_type  = "LITERAL"
  principal     = "User:${var.service_account_int_id}"
  operation     = "DESCRIBE"
  permission    = "ALLOW"
  http_endpoint = confluentcloud_kafka_cluster.test-basic-cluster.http_endpoint

  credentials {
    key    = var.kafka_api_key
    secret = var.kafka_api_secret
  }
}

Is this a mistake?
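
For reference, the error goes away once a host argument is added (using "*", as the provider's other examples do; whether that is the intended default is an assumption on my part):

resource "confluentcloud_kafka_acl" "describe-orders" {
  # ...arguments as above...
  host = "*" # required by the provider even though the docs don't list it
}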

confluentcloud_service_account update dispay_name fails

Changing the display_name in a resource of type confluentcloud_service_account does not work.

The provider tries to update the field, but the Confluent API does not allow updating the display_name; see the API docs.

Proposed solution: change the schema to recreate the service account when the display name is changed, i.e. set ForceNew for the paramDisplayName key:

paramDisplayName: {
    Type:         schema.TypeString,
    Required:     true,
    ForceNew:     true, // recreate the service account when display_name changes
    Description:  "A human-readable name for the Service Account.",
    ValidateFunc: validation.StringIsNotEmpty,
},

403 errors when creating Service Accounts

Hello,
I'm getting 403 errors, but only when Terraform tries to create the SAs.

Error: 403 Forbidden
with module.topics.confluentcloud_service_account.this["test-sa-2"]
on ../modules/topics/main.tf line 47, in resource "confluentcloud_service_account" "this":
resource "confluentcloud_service_account" "this" {

All other resources work fine. I tried with a user account with global Cloud access, and also with SAs that have the OrganizationAdmin role.

Using the latest 0.2.0 release!

Do you know what could be the problem?

Thank you,
Andrei
