mongey / terraform-provider-confluentcloud

A Terraform provider for managing resources in confluent.cloud

License: MIT License


terraform-provider-confluentcloud's Introduction

terraform-plugin-confluentcloud

A Terraform plugin for managing Confluent Cloud Kafka Clusters.

Installation

Download and extract the latest release to your terraform plugin directory (typically ~/.terraform.d/plugins/) or define the plugin in the required_providers block.

terraform {
  required_providers {
    confluentcloud = {
      source = "Mongey/confluentcloud"
    }
  }
}
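For a manual install, a rough sketch (the release archive name below is a placeholder; it varies by version and platform — download the actual asset from the project's GitHub releases page):

```shell
# Extract the provider binary into Terraform's local plugin directory
mkdir -p ~/.terraform.d/plugins
unzip terraform-provider-confluentcloud_*.zip -d ~/.terraform.d/plugins
```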

Example

Configure the provider directly, or set the environment variables CONFLUENT_CLOUD_USERNAME and CONFLUENT_CLOUD_PASSWORD.

terraform {
  required_providers {
    confluentcloud = {
      source = "Mongey/confluentcloud"
    }
    kafka = {
      source  = "Mongey/kafka"
      version = "0.2.11"
    }
  }
}

provider "confluentcloud" {
  username = "[email protected]"
  password = "hunter2"
}

resource "confluentcloud_environment" "environment" {
  name = "production"
}

resource "confluentcloud_kafka_cluster" "test" {
  name             = "provider-test"
  service_provider = "aws"
  region           = "eu-west-1"
  availability     = "LOW"
  environment_id   = confluentcloud_environment.environment.id
  deployment = {
    sku = "BASIC"
  }
  network_egress  = 100
  network_ingress = 100
  storage         = 5000
}

resource "confluentcloud_schema_registry" "test" {
  environment_id   = confluentcloud_environment.environment.id
  service_provider = "aws"
  region           = "EU"

  # Requires at least one kafka cluster to enable the schema registry in the environment.
  depends_on = [confluentcloud_kafka_cluster.test]
}

resource "confluentcloud_api_key" "provider_test" {
  cluster_id     = confluentcloud_kafka_cluster.test.id
  environment_id = confluentcloud_environment.environment.id
}

resource "confluentcloud_service_account" "test" {
  name           = "test"
  description    = "service account test"
}

locals {
  bootstrap_servers = [replace(confluentcloud_kafka_cluster.test.bootstrap_servers, "SASL_SSL://", "")]
}

provider "kafka" {
  bootstrap_servers = local.bootstrap_servers

  tls_enabled    = true
  sasl_username  = confluentcloud_api_key.provider_test.key
  sasl_password  = confluentcloud_api_key.provider_test.secret
  sasl_mechanism = "plain"
  timeout        = 10
}

resource "kafka_topic" "syslog" {
  name               = "syslog"
  replication_factor = 3
  partitions         = 1
  config = {
    "cleanup.policy" = "delete"
  }
}

output "kafka_url" {
  value = local.bootstrap_servers
}

output "key" {
  value     = confluentcloud_api_key.provider_test.key
  sensitive = true
}

output "secret" {
  value     = confluentcloud_api_key.provider_test.secret
  sensitive = true
}

Importing existing resources

This provider supports importing existing Confluent Cloud resources via terraform import.

Most resource types use the import IDs returned by the ccloud CLI. confluentcloud_kafka_cluster and confluentcloud_schema_registry can be imported using <environment ID>/<cluster ID>.
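For example (the environment and cluster IDs below are placeholders; substitute your own):

```shell
# Import a Kafka cluster using <environment ID>/<cluster ID>
terraform import confluentcloud_kafka_cluster.test env-abc12/lkc-xyz34

# A schema registry is imported the same way
terraform import confluentcloud_schema_registry.test env-abc12/lsrc-abc56
```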

terraform-provider-confluentcloud's People

Contributors

albrechtflo-hg, askoriy, benweint, borisnaydis, brunodomenici, cgroschupp, dependabot[bot], egor-georgiev, jfgaretsymphony, karstensiemer, luggage66, mongey, nbob31, nickstamat, obrientimothya, patrickpichler, petern-sc, prabhakarank87, wuestkamp, yinzara


terraform-provider-confluentcloud's Issues

Granular Access - Terraform variable not Supported

The current terraform-provider-confluentcloud plugin does not support creating granular access for an API key along with ACL permissions for the corresponding service account.

Is there any workaround to accomplish this in Terraform?

Add data sources for environment and cluster

Our use case at the moment with Confluent is creating Kafka clusters that are accessible over AWS PrivateLink. This means we have to provision our clusters manually rather than using this plugin.
It would be useful for us to simply have data sources for the environment and cluster, and use the provider to manage service accounts.
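A hypothetical data-source interface for this (not currently implemented — the names and attributes below are purely illustrative) might look like:

```hcl
data "confluentcloud_environment" "prod" {
  name = "production"
}

data "confluentcloud_kafka_cluster" "main" {
  name           = "main-cluster"
  environment_id = data.confluentcloud_environment.prod.id
}

# Manage only the service accounts with Terraform,
# referencing the externally provisioned cluster.
resource "confluentcloud_service_account" "app" {
  name        = "app"
  description = "managed by terraform"
}
```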

service account: produced an unexpected new value for was present, but now absent

When creating the following service account I get an error

resource "confluentcloud_service_account" "kafka-1-rw" {
  name           = "kafka-1-rw"
  description    = "service account kafka-1-rw"
}

resource "confluentcloud_api_key" "kafka-1-rw" {
  cluster_id     = confluentcloud_kafka_cluster.kafka-1.id
  environment_id = confluentcloud_environment.environment.id
  user_id        = confluentcloud_service_account.kafka-1-rw.id
  description    = "kafka-1-rw"
}
2020-09-15T12:45:16.733Z [DEBUG] plugin.terraform-provider-confluentcloud_v0.0.5: 2020/09/15 12:45:16 [ERROR] Could not create Service Account: service_accounts: Service name is already in use.
2020/09/15 12:45:16 [DEBUG] module.ccloud.confluentcloud_service_account.kafka-1-rw: apply errored, but we're indicating that via the Error pointer rather than returning it: Provider produced inconsistent result after apply: When applying changes to module.ccloud.confluentcloud_service_account.kafka-1-rw, provider "registry.terraform.io/-/confluentcloud" produced an unexpected new value for was present, but now absent. This is a bug in the provider, which should be reported in the provider's own issue tracker.
2020/09/15 12:45:16 [ERROR] module.ccloud: eval: *terraform.EvalApplyPost, err: Provider produced inconsistent result after apply: When applying changes to module.ccloud.confluentcloud_service_account.kafka-1-rw, provider "registry.terraform.io/-/confluentcloud" produced an unexpected new value for was present, but now absent.This is a bug in the provider, which should be reported in the provider's own issue tracker.
2020/09/15 12:45:16 [ERROR] module.ccloud: eval: *terraform.EvalSequence, err: Provider produced inconsistent result after apply: When applying changes to module.ccloud.confluentcloud_service_account.kafka-1-rw, provider "registry.terraform.io/-/confluentcloud" produced an unexpected new value for was present, but now absent.

Once I run terraform import module.ccloud.confluentcloud_service_account.kafka-1-rw <id> it starts working.

Pre-built releases

The README links to the releases page, and it looks like there are some builds happening, but no binaries are available there.

Would it be possible to create a release with pre-built binaries? I don't have the Go toolchain set up myself, so having pre-built binaries would be great.

Make Provider and Region configurable for Acc Tests

In the acceptance test for clusters (and potentially also in the one for connectors, see #82), the cloud provider and region to use for the cluster are hard-coded in the Terraform config the test uses.

As we, for example, are tied to one specific cloud provider with our Confluent Cloud instance (hint: not AWS), I had to adjust the test files locally to be able to run the acceptance test, and roll back these changes after successful testing.

If these two parameters were also configurable for the acceptance tests, one could run them more easily — for example in one's own GitHub fork.

Error: kafka: client has run out of available brokers to talk to (Is your cluster reachable?)

Hi guys,

I was trying to create topics in an AWS MSK cluster, but it shows the error below:

Error: kafka: client has run out of available brokers to talk to (Is your cluster reachable?)

  on main.tf line 33, in resource "kafka_topic" "rfmkafkatopic":
  33: resource "kafka_topic" "rfmkafkatopic" {

Note: I was running terraform plan via GitHub Actions. Do I need to set up a security group, for example?

And I was using this module to create MSK cluster

module "kafka" {
  source  = "cloudposse/msk-apache-kafka-cluster/aws"
  version = "0.5.2"

  name                      = random_id.msk_cluster_id.hex
  vpc_id                    = data.aws_vpc.my_vpc.id
  security_groups           = [module.eks.worker_security_group_id, var.bastion_host_security_group_id, var.gh_runner_sg]
  subnet_ids                = data.aws_subnet_ids.my_private_subnets.ids
  kafka_version             = var.kafka_version
  number_of_broker_nodes    = var.number_of_broker_nodes
  broker_instance_type      = var.broker_instance_type
  client_broker             = "TLS_PLAINTEXT"
  client_tls_auth_enabled   = false
  client_sasl_scram_enabled = false
  cloudwatch_logs_log_group = aws_cloudwatch_log_group.msk.name
  cloudwatch_logs_enabled   = true
  jmx_exporter_enabled      = true
  node_exporter_enabled     = true

  context = module.this.context

}

My provider.tf file

provider "kafka" {
  bootstrap_servers = [data.aws_msk_cluster.cluster.bootstrap_brokers_tls]
  skip_tls_verify   = var.msk_skip_tls_verify
  tls_enabled       = var.msk_tls_enabled
}

My versions.tf file

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.36"
    }

    http = {
      source  = "terraform-aws-modules/http"
      version = "2.4.1"
    }

    kafka = {
      source  = "Mongey/kafka"
      version = "0.3.3"
    }

    local    = ">= 1.4"
    null     = ">= 2.1"
    template = ">= 2.1"
    random   = ">= 2.1"
  }
}

my data.tf file

data "aws_msk_cluster" "cluster" {
  cluster_name = local.msk_name
}

Provider Error Creating Cluster on Confluent Account

Hello!.

I'm using the Confluent Cloud provider from this repo (v0.0.11) to deploy several resources: a Kafka cluster, a schema registry, some ACLs, and a connector for a sink. terraform plan works and no error is shown; however, while trying to create the Kafka cluster with apply, I get this error:

Error: clusters: deployment creation failed: deployment creation failed: error getting schedulable pkc id:"deployment-xxxx" account_id:"env-xxxx" network_access:<public_internet:<enabled:true > > sku:STANDARD provider:<cloud:AZURE region:"West US 2" > durability:HIGH creation_constraint:<> : no network region available for deployment: id:"deployment-6gq28" account_id:"env-z9njy" network_access:<public_internet:<enabled:true > > sku:STANDARD provider:<cloud:AZURE region:"West US 2" > durability:HIGH creation_constraint:<> : resource not found: error getting schedulable pkc id:"deployment-xxxx" account_id:"env-xxxx" network_access:<public_internet:<enabled:true > > sku:STANDARD provider:<cloud:AZURE region:"West US 2" > durability:HIGH creation_constraint:<> : no network region available for deployment: id:"deployment-6gq28" account_id:"env-z9njy" network_access:<public_internet:<enabled:true > > sku:STANDARD provider:<cloud:AZURE region:"West US 2" > durability:HIGH creation_constraint:<> : resource not found
│
│   with confluentcloud_kafka_cluster.kafka-test,
│   on confluent-cloud.tf line 5, in resource "confluentcloud_kafka_cluster" "kafka-test":
│    5: resource "confluentcloud_kafka_cluster" "kafka-test" {
│

My test cluster is very generic:

resource "confluentcloud_environment" "environment-test" {
  name = "testing"
}

resource "confluentcloud_kafka_cluster" "kafka-test" {
  name             = "kafka-cluster-test"
  service_provider = "azure"
  region           = "West US 2"
  availability     = "HIGH"
  environment_id   = confluentcloud_environment.environment-test.id
  deployment = {
    sku = "STANDARD"
  }
  network_egress  = 100
  network_ingress = 100
  storage         = 5000
}

Is this an account configuration error? Is it my Azure tenant or Confluent's Azure tenant, or a Terraform configuration issue?

Thanks for the help!

Kafka topic is not created when provisioning the Kafka Cluster

I tried to run the example to provision a Kafka cluster in Confluent Cloud and create a topic.
The environment and cluster are created successfully, but the topic is not. It looks like the Kafka broker is not yet ready when the topic is created.

I'm using:

  • Terraform v0.13.4
  • provider registry.terraform.io/mongey/confluentcloud v0.0.5
  • provider registry.terraform.io/mongey/kafka v0.2.10

terraform apply output:

confluentcloud_environment.environment: Creating...
confluentcloud_service_account.test: Creating...
confluentcloud_environment.environment: Creation complete after 1s [id=env-9pk7v]
confluentcloud_kafka_cluster.test: Creating...
confluentcloud_kafka_cluster.test: Creation complete after 0s [id=lkc-9g3ky]
confluentcloud_api_key.provider_test: Creating...
confluentcloud_api_key.provider_test: Creation complete after 1s [id=145974]
kafka_topic.syslog: Creating...

Error: Provider produced inconsistent result after apply

When applying changes to confluentcloud_service_account.test, provider
"registry.terraform.io/mongey/confluentcloud" produced an unexpected new
value: Root resource was present, but now absent.

This is a bug in the provider, which should be reported in the provider's own
issue tracker.


Error: kafka: client has run out of available brokers to talk to (Is your cluster reachable?)

  on main.tf line 54, in resource "kafka_topic" "syslog":
  54: resource "kafka_topic" "syslog" {

When I run terraform apply a second time, the topic is created.
Is there a way to wait until the Kafka cluster is ready before creating a topic?
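One possible workaround, sketched here with the hashicorp/time provider's time_sleep resource (the 120-second delay is a guess, not a documented requirement), is to force an artificial pause between cluster creation and topic creation:

```hcl
# Requires hashicorp/time in required_providers.
resource "time_sleep" "wait_for_cluster" {
  depends_on      = [confluentcloud_kafka_cluster.test]
  create_duration = "120s"
}

resource "kafka_topic" "syslog" {
  name               = "syslog"
  replication_factor = 3
  partitions         = 1

  # Topic creation only starts after the delay has elapsed.
  depends_on = [time_sleep.wait_for_cluster]
}
```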

Error: get cluster - Terraform Import

@Mongey When importing an existing cluster using terraform import, I get the error below.

Command for Import
$terraform import confluentcloud_kafka_cluster.test lkc-x2yvk

Output:
confluentcloud_kafka_cluster.test: Importing from ID "lkc-x2yvk"...
confluentcloud_kafka_cluster.test: Import prepared!
Prepared confluentcloud_kafka_cluster for import
confluentcloud_kafka_cluster.test: Refreshing state... [id=lkc-x2yvk]

Error: get cluster: Oops, something went wrong

version 0.0.1 on Windows/Linux has a mix-up between "confluentcloud" and "confluent-cloud"

adam@DESKTOP-MNAR8U3:/mnt/c/Users/adam_a/terraform$ ./terraform init

Initializing the backend...

Initializing provider plugins...

The following providers do not have any version constraints in configuration,
so the latest version was installed.

To prevent automatic upgrades to new major versions that may contain breaking
changes, it is recommended to add version = "..." constraints to the
corresponding provider blocks in configuration, with the constraint strings
suggested below.

  • provider.confluent-cloud: version = "~> 0.0"
  • provider.google: version = "~> 3.8"

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
adam@DESKTOP-MNAR8U3:/mnt/c/Users/adam_a/terraform$ ./terraform plan

Error: Error in function call

on demo.tf line 2, in provider "google":
2: credentials = file("axonite-dev-11ff7f13be93.json")

Call to function "file" failed: no file exists at
axonite-dev-11ff7f13be93.json.

Error: Invalid resource type

on demo.tf line 9, in resource "confluent-cloud_environment" "test":
9: resource "confluent-cloud_environment" "test" {

The provider provider.confluent-cloud does not support resource type
"confluent-cloud_environment".

Error: Invalid resource type

on demo.tf line 13, in resource "confluent-cloud_kafka_cluster" "test":
13: resource "confluent-cloud_kafka_cluster" "test" {

The provider provider.confluent-cloud does not support resource type
"confluent-cloud_kafka_cluster".

What args go into provider?

The current terraform version is 0.12.13

Getting an error: An argument named "CONFLUENT_CLOUD_USERNAME" is not expected here.

Terraform keeps forcing new resource on unchanged cluster

Hi,

I have an issue on cluster in confluent cloud, it keeps forcing new resource on unchanged cluster.

-/+ confluentcloud_kafka_cluster.test (new resource required)
      id:                "lkc-nkqz6" => (forces new resource)
      availability:      "LOW" => "LOW"
      bootstrap_servers: "SASL_SSL://pkc-4nym6.us-east-1.aws.confluent.cloud:9092" =>
      deployment.%:      "1" => "0" (forces new resource)
      deployment.sku:    "BASIC" => "" (forces new resource)
      environment_id:    "env-7qnnp" => "env-7qnnp"
      name:              "provider-test" => "provider-test"
      network_egress:    "100" => "100"
      network_ingress:   "100" => "100"
      region:            "us-east-1" => "us-east-1"
      service_provider:  "aws" => "aws"
      storage:           "5000" => "5000"
What are the default values to set for deployment?

Thanks,
Niranjan

Support ACLs

Now that service accounts can be created by this provider, we need a way of setting ACLs so that the service accounts can be used.

The provider should support setting alter, alter-configs, cluster-action, create, delete, describe, describe-configs, idempotent-write, read, and write on topics, consumer groups, and clusters. Note that the underlying API requires a service account ID (rather than a name).
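Until this provider supports ACLs directly, one possible workaround is the Mongey/kafka provider's kafka_acl resource, pointed at the Confluent Cloud cluster (a sketch; the topic name and service account ID below are placeholders):

```hcl
resource "kafka_acl" "service_account_read" {
  resource_name       = "syslog"
  resource_type       = "Topic"
  acl_principal       = "User:12345" # numeric service account ID, not the account name
  acl_host            = "*"
  acl_operation       = "Read"
  acl_permission_type = "Allow"
}
```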

terraform import missing fields

Terraform import doesn't import all fields - particularly for service accounts, but it seems to affect other types too.

Here's an example for a service account.

I run terraform import confluentcloud_service_account.test_account 108949 which appears to import the resource successfully. But a subsequent terraform plan gives the following:

  # confluentcloud_service_account.test_account must be replaced
-/+ resource "confluentcloud_service_account" "test_account" {
      + description = "Alice's test account" # forces replacement
      ~ id          = "108949" -> (known after apply)
      + name        = "alice-test-account" # forces replacement
    }

If you look at the data in the tfstate file, both description and name are null. If I manually edit the tfstate file to correct these fields then all is well, but I shouldn't have to do that.

Additionally, whilst changing the name is correctly forcing a replacement, it should be possible to change the description without forcing a replacement. Though this is perhaps best raised as a separate issue.

Error when availability == "HIGH"

When durability is set to "HIGH" on a confluentcloud_kafka_cluster resource the API returns:

Error: clusters: deployment creation/validation failed: Creating multi zone cluster is not eligible for the sku.:
        durability: Creating multi zone cluster is not eligible for the sku.BASIC

The provider doesn't seem to expose a way to set the sku to STANDARD.
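For reference, the README example in this repo sets the SKU through a deployment block; assuming your provider version supports it, setting STANDARD there may avoid the error:

```hcl
resource "confluentcloud_kafka_cluster" "test" {
  name             = "provider-test"
  service_provider = "aws"
  region           = "eu-west-1"
  availability     = "HIGH"
  environment_id   = confluentcloud_environment.environment.id
  deployment = {
    sku = "STANDARD" # multi-zone (HIGH) clusters are not eligible for BASIC
  }
}
```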

Terraform import fails for schema registry clusters

Hello.

Similarly to the import issue that was detected for Kafka clusters in #64, there is an error when trying to import a schema registry cluster as well, due to a missing environment ID in the request sent to Confluent Cloud.

It ends with the following error:

│
│ Error: Get schema registry error: Bad Request
│

The same fix that was done for a kafka cluster can be replicated for a schema registry cluster.

Cheers.

Timeout when creating API keys tied to service accounts

When creating API keys tied to a service account, Terraform runs indefinitely and/or times out. The API keys are created properly and I can see them in Confluent Cloud, but this is not reported back to Terraform. When I create the key without supplying the user_id, it works fine.

terraform: 0.13.5
terraform-provider-confluentcloud: 0.0.7
terraform-provider-kafka: 0.2.11

resource "confluentcloud_service_account" "devops_team" {
  name        = "devops-team"
}

resource "confluentcloud_api_key" "devops_kafka_api_key" {
  cluster_id     = confluentcloud_kafka_cluster.confluentcloud_kafka_cluster.id
  environment_id = confluentcloud_environment.confluentcloud_environment.id
  user_id        = confluentcloud_service_account.devops_team.id
  depends_on     = [confluentcloud_kafka_cluster.confluentcloud_kafka_cluster]
}

resource "confluentcloud_api_key" "devops_schema_registry_api_key" {
  environment_id   = confluentcloud_environment.confluentcloud_environment.id
  logical_clusters = [confluentcloud_schema_registry.schema_registry.id]
  user_id          = confluentcloud_service_account.devops_team.id
  depends_on = [
    confluentcloud_kafka_cluster.confluentcloud_kafka_cluster,
    confluentcloud_schema_registry.schema_registry
  ]
}

Logs

module.confluent_cloud.confluentcloud_api_key.devops_schema_registry_api_key: Creating...
2021/01/25 11:31:49 [DEBUG] EvalApply: ProviderMeta config value set
2021/01/25 11:31:49 [DEBUG] module.confluent_cloud.confluentcloud_api_key.devops_schema_registry_api_key: applying the planned Create change
2021-01-25T11:31:49.780-0600 [DEBUG] plugin.terraform-provider-confluentcloud_v0.0.7: 2021/01/25 11:31:49 [DEBUG] Creating API key
2021-01-25T11:31:50.147-0600 [DEBUG] plugin.terraform-provider-confluentcloud_v0.0.7: 2021/01/25 11:31:50 [INFO] Created API Key, waiting for it become usable
2021-01-25T11:31:50.147-0600 [DEBUG] plugin.terraform-provider-confluentcloud_v0.0.7: 2021/01/25 11:31:50 [DEBUG] Waiting for state to become: [Ready]
module.confluent_cloud.confluentcloud_api_key.devops_schema_registry_api_key: Still creating... [10s elapsed]
2021-01-25T11:32:00.211-0600 [DEBUG] plugin.terraform-provider-confluentcloud_v0.0.7: 2021/01/25 11:32:00 [INFO] API Key Created successfully: %!s(<nil>)
 runtime error: invalid memory address or nil pointer dereference
2021-01-25T11:32:00.214-0600 [DEBUG] plugin.terraform-provider-confluentcloud_v0.0.7: [signal SIGSEGV: segmentation violation code=0x1 addr=0x58 pc=0x1ac1845]
2021-01-25T11:32:00.214-0600 [DEBUG] plugin.terraform-provider-confluentcloud_v0.0.7: 
2021-01-25T11:32:00.214-0600 [DEBUG] plugin.terraform-provider-confluentcloud_v0.0.7: goroutine 147 [running]:
2021-01-25T11:32:00.214-0600 [DEBUG] plugin.terraform-provider-confluentcloud_v0.0.7: github.com/Mongey/terraform-provider-confluentcloud/ccloud.clusterReady.func1(0xc00037cf50, 0xc00037ce78, 0x2, 0x1, 0xc0000aa001, 0x1654126)
2021-01-25T11:32:00.214-0600 [DEBUG] plugin.terraform-provider-confluentcloud_v0.0.7:   /home/runner/work/terraform-provider-confluentcloud/terraform-provider-confluentcloud/ccloud/resource_kafka_cluster.go:204 +0xa5
2021-01-25T11:32:00.214-0600 [DEBUG] plugin.terraform-provider-confluentcloud_v0.0.7: github.com/hashicorp/terraform/helper/resource.(*StateChangeConf).WaitForState.func1(0xc0000bfe60, 0xc0001a3110, 0xc00004ba40, 0xc00009b320, 0xc0002895a8, 0xc0002895a0)
2021-01-25T11:32:00.214-0600 [DEBUG] plugin.terraform-provider-confluentcloud_v0.0.7:   /home/runner/go/pkg/mod/github.com/hashicorp/[email protected]/helper/resource/state.go:103 +0x29d
2021-01-25T11:32:00.214-0600 [DEBUG] plugin.terraform-provider-confluentcloud_v0.0.7: created by github.com/hashicorp/terraform/helper/resource.(*StateChangeConf).WaitForState
2021-01-25T11:32:00.214-0600 [DEBUG] plugin.terraform-provider-confluentcloud_v0.0.7:   /home/runner/go/pkg/mod/github.com/hashicorp/[email protected]/helper/resource/state.go:80 +0x1bf
2021/01/25 11:32:00 [DEBUG] module.confluent_cloud.confluentcloud_api_key.devops_schema_registry_api_key: apply errored, but we're indicating that via the Error pointer rather than returning it: rpc error: code = Unavailable desc = transport is closing

Creation of any resources of the "confluentcloud" provider runs indefinitely

Hello,

I'm facing to the following issue:

The process is blocked:
"confluentcloud_environment.environment: Still creating... [1min30s elapsed]" ...and keeps incrementing indefinitely.

Initially, I wanted to create a "confluentcloud_service_account" using an environment and cluster already existing in Confluent Cloud; it did not work.

Then I tried to create new "confluentcloud_environment" and "confluentcloud_kafka_cluster" resources using Terraform; that does not work either.
The process gets stuck.

Any help or advice on this problem?

Originally posted by @Zico56 in #15 (comment)

is the default "deployment" on Cluster type "Basic"?

In the Go code, I am trying to figure out the default cluster type (https://github.com/Mongey/terraform-provider-confluentcloud/blob/master/ccloud/resource_kafka_cluster.go). I think that if we don't specify "deployment" in the following, then a Basic cluster type will be created. Would you be able to confirm that this is correct?

resource "confluentcloud_kafka_cluster" "test" {
  name             = "provider-test"
  service_provider = "aws"
  region           = "eu-west-1"
  availability     = "LOW"
  environment_id   = confluentcloud_environment.environment.id
}

Please ignore the question. I have found the answer.

Enable Confluent Cloud API authentication using Confluent Cloud API Keys

The Confluent Cloud API supports authentication via API keys. At Indeed we use Terraform to manage our Confluent Cloud infrastructure. Authentication using API keys is preferred over username and password authentication.

Confluent Cloud API documentation for using API keys to authenticate:
https://confluent.cloud/api/docs#section/Authentication

Note that Confluent Cloud supports two types of API keys, cluster and cloud.

Cloud API Keys - These grant access to the Confluent Cloud Control Plane APIs, such as for Provisioning and Metrics integrations.
Cluster API Keys - These grant access to a single Confluent cluster, such as a specific Kafka or Schema Registry cluster.
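Per the linked documentation, the Confluent Cloud API accepts a Cloud API key and secret via HTTP basic auth; a minimal sketch (the endpoint path and environment variable names are illustrative):

```shell
# Authenticate a Control Plane API call with a Cloud API key/secret pair
curl -u "$CONFLUENT_CLOUD_API_KEY:$CONFLUENT_CLOUD_API_SECRET" \
  https://api.confluent.cloud/org/v2/environments
```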

I created an issue against the go-client-confluent-cloud as it appears the client does not yet support API Key based authentication:
cgroschupp/go-client-confluent-cloud#13

Plugin crashes when running README example

Hi,

I was giving this plugin a try this evening, but unfortunately the example in the README seems to crash with new versions of the Kafka provider:

terraform {
  required_providers {
    confluentcloud = {
      source = "Mongey/confluentcloud"
    }

    kafka = {
      source  = "Mongey/kafka"
      version = "0.3.1"
    }
  }
}


resource "confluentcloud_environment" "environment" {
  name = var.confluent_cloud_environment_name
}

resource "confluentcloud_kafka_cluster" "kafka_cluster" {
  name             = "foo"
  service_provider = "aws"
  region           = "eu-west-2"
  availability     = "LOW"
  environment_id   = confluentcloud_environment.environment.id
  deployment = {
    sku = "BASIC"
  }
  network_egress  = 100
  network_ingress = 100
  storage         = 5000
}

resource "confluentcloud_api_key" "credentials" {
  cluster_id     = confluentcloud_kafka_cluster.kafka_cluster.id
  environment_id = confluentcloud_environment.environment.id
}

locals {
  bootstrap_servers = [replace(confluentcloud_kafka_cluster.kafka_cluster.bootstrap_servers, "SASL_SSL://", "")]
}

provider "kafka" {
  bootstrap_servers = local.bootstrap_servers

  tls_enabled    = true
  sasl_username  = confluentcloud_api_key.credentials.key
  sasl_password  = confluentcloud_api_key.credentials.secret
  sasl_mechanism = "plain"
  timeout        = 10
}

resource "kafka_topic" "bar" {
  name               = "bar"
  replication_factor = 3
  partitions         = 1
  config = {
    "cleanup.policy"  = "delete"
    "retention.ms"    = 900000
    "retention.bytes" = 10485760
  }
}

When I run this on Terraform Cloud:

Terraform v0.15.3
on linux_amd64
Configuring remote state backend...
Initializing Terraform configuration...
╷
│ Error: Request cancelled
│
│   with module.confluent_cloud.kafka_topic.bar,
│   on ../modules/platform/modules/confluent_cloud/main.tf line 57, in resource "kafka_topic" "bar":
│   57: resource "kafka_topic" "bar" {
│
│ The plugin.(*GRPCProvider).PlanResourceChange request was cancelled.
╵

Stack trace from the terraform-provider-kafka_v0.3.1 plugin:

panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0xef5a7c]

goroutine 27 [running]:
github.com/Mongey/terraform-provider-kafka/kafka.NewClient(0xc00040a090, 0xc000489e70, 0x0, 0x42dd4a)
	/home/runner/work/terraform-provider-kafka/terraform-provider-kafka/kafka/client.go:36 +0x1bc
github.com/Mongey/terraform-provider-kafka/kafka.(*LazyClient).init.func1()
	/home/runner/work/terraform-provider-kafka/terraform-provider-kafka/kafka/lazy_client.go:23 +0x40
sync.(*Once).doSlow(0xc000110e70, 0xc000189160)
	/opt/hostedtoolcache/go/1.13.15/x64/src/sync/once.go:66 +0xe3
sync.(*Once).Do(...)
	/opt/hostedtoolcache/go/1.13.15/x64/src/sync/once.go:57
github.com/Mongey/terraform-provider-kafka/kafka.(*LazyClient).init(0xc000110e70, 0xfc9060, 0x1487d60)
	/home/runner/work/terraform-provider-kafka/terraform-provider-kafka/kafka/lazy_client.go:22 +0x3f8
github.com/Mongey/terraform-provider-kafka/kafka.(*LazyClient).CanAlterReplicationFactor(0xc000110e70, 0x11d52b3, 0x12, 0x1)
	/home/runner/work/terraform-provider-kafka/terraform-provider-kafka/kafka/lazy_client.go:101 +0x2f
github.com/Mongey/terraform-provider-kafka/kafka.customDiff(0xc000114d40, 0x114e300, 0xc000110e70, 0xc000604060, 0xc000114d40)
	/home/runner/work/terraform-provider-kafka/terraform-provider-kafka/kafka/resource_kafka_topic.go:305 +0xfe
github.com/hashicorp/terraform-plugin-sdk/helper/schema.schemaMap.Diff(0xc00036b710, 0xc000429ae0, 0xc000111ce0, 0x1218320, 0x114e300, 0xc000110e70, 0x14ccc00, 0xc000424300, 0x0, 0x0)
	/home/runner/go/pkg/mod/github.com/hashicorp/[email protected]/helper/schema/schema.go:509 +0xac2
github.com/hashicorp/terraform-plugin-sdk/helper/schema.(*Resource).simpleDiff(0xc000106a00, 0xc000429ae0, 0xc000111ce0, 0x114e300, 0xc000110e70, 0xc000111c01, 0xc000015790, 0x40d34d)
	/home/runner/go/pkg/mod/github.com/hashicorp/[email protected]/helper/schema/resource.go:351 +0x85
github.com/hashicorp/terraform-plugin-sdk/helper/schema.(*Provider).SimpleDiff(0xc000106b80, 0xc000015978, 0xc000429ae0, 0xc000111ce0, 0xc000122c60, 0xc000111ce0, 0x0)
	/home/runner/go/pkg/mod/github.com/hashicorp/[email protected]/helper/schema/provider.go:316 +0x99
github.com/hashicorp/terraform-plugin-sdk/internal/helper/plugin.(*GRPCProviderServer).PlanResourceChange(0xc00000ea40, 0x14cbcc0, 0xc000111680, 0xc00010e720, 0xc00000ea40, 0xc000111680, 0xc00008aa80)
	/home/runner/go/pkg/mod/github.com/hashicorp/[email protected]/internal/helper/plugin/grpc_provider.go:633 +0x765
github.com/hashicorp/terraform-plugin-sdk/internal/tfplugin5._Provider_PlanResourceChange_Handler(0x1180120, 0xc00000ea40, 0x14cbcc0, 0xc000111680, 0xc00010e6c0, 0x0, 0x14cbcc0, 0xc000111680, 0xc0002cc360, 0x112)
	/home/runner/go/pkg/mod/github.com/hashicorp/[email protected]/internal/tfplugin5/tfplugin5.pb.go:3171 +0x217
google.golang.org/grpc.(*Server).processUnaryRPC(0xc0000e8000, 0x14da9c0, 0xc00040f500, 0xc000130800, 0xc00036b950, 0x1ca7a68, 0x0, 0x0, 0x0)
	/home/runner/go/pkg/mod/google.golang.org/[email protected]/server.go:995 +0x460
google.golang.org/grpc.(*Server).handleStream(0xc0000e8000, 0x14da9c0, 0xc00040f500, 0xc000130800, 0x0)
	/home/runner/go/pkg/mod/google.golang.org/[email protected]/server.go:1275 +0xd97
google.golang.org/grpc.(*Server).serveStreams.func1.1(0xc000488000, 0xc0000e8000, 0x14da9c0, 0xc00040f500, 0xc000130800)
	/home/runner/go/pkg/mod/google.golang.org/[email protected]/server.go:710 +0xbb
created by google.golang.org/grpc.(*Server).serveStreams.func1
	/home/runner/go/pkg/mod/google.golang.org/[email protected]/server.go:708 +0xa1

Error: The terraform-provider-kafka_v0.3.1 plugin crashed!

This is always indicative of a bug within the plugin. It would be immensely
helpful if you could report the crash with the plugin's maintainers so that it
can be fixed. The output above should help diagnose the issue.

I'm assuming this has something to do with the Confluent Cloud cluster not existing yet when the Kafka provider attempts to connect to it, but I'm unsure how to proceed. Do you have any pointers?

I'd like to be able to create a cluster and some topics in one Terraform plan, if possible.

Cheers.
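A sketch of one approach to the one-plan setup, assuming the resource names from the README example above (the SASL_SSL:// prefix usually needs stripping, and attribute names may differ between versions):

```hcl
# Sketch only: wire the kafka provider to the cluster created above so a
# topic can be created in the same plan. Names follow the README example.
provider "kafka" {
  bootstrap_servers = [replace(confluentcloud_kafka_cluster.test.bootstrap_servers, "SASL_SSL://", "")]
  tls_enabled       = true
  sasl_username     = confluentcloud_api_key.provider_test.key
  sasl_password     = confluentcloud_api_key.provider_test.secret
  sasl_mechanism    = "plain"
}

resource "kafka_topic" "example" {
  name               = "example"
  partitions         = 3
  replication_factor = 3
}
```

Because the provider configuration references the cluster and API key resources, Terraform should create those before the Kafka provider tries to connect.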

Create environment stuck, never finishes

confluentcloud_environment.test: Still creating... [1m10s elapsed]

This is the env:

resource "confluent-cloud_environment" "test" {
  name = "provider-test"
}

Is there a way to get debug information out?
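In the meantime, Terraform's standard logging environment variables do surface provider output; for example:

```sh
# Write verbose core + provider logs to a file for one run
TF_LOG=DEBUG TF_LOG_PATH=./terraform-debug.log terraform apply
```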

Terraform Import / Plan unable to detect existing API Key & Service accounts

Tried terraform import for an existing stress env. I was able to import entities like the env and cluster, but a subsequent terraform plan is unable to detect the existing API Key & Service account resources and lists them for destruction, as shown below.

confluentcloud_service_account.test: Refreshing state... [id=156958]
confluentcloud_environment.environment: Refreshing state... [id=env-kj5w2]
confluentcloud_kafka_cluster.test: Refreshing state... [id=lkc-j0xw8]
confluentcloud_api_key.provider_test: Refreshing state... [id=186917]

------------------------------------------------------------------------

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
-/+ destroy and then create replacement

Terraform will perform the following actions:

# confluentcloud_api_key.provider_test must be replaced
-/+ resource "confluentcloud_api_key" "provider_test" {
+ cluster_id = "lkc-j0xw8" # forces replacement
+ environment_id = "env-kj5w2" # forces replacement
~ id = "186917" -> (known after apply)
+ key = (known after apply)
+ secret = (sensitive value)
}

# confluentcloud_service_account.test must be replaced
-/+ resource "confluentcloud_service_account" "test" {
+ description = "Service account for hcs-kafka" # forces replacement
~ id = "156958" -> (known after apply)
+ name = "hcs-kafka-sa" # forces replacement
}

Plan: 2 to add, 0 to change, 2 to destroy.

API rate limiting on concurrent CI deployments

In larger, multi-account Terraform deployments, simultaneous execution of automated CI pipelines can cause the provider's Login() function to trigger Confluent API rate limiting.

When such rate limiting occurs, the 'terraform-provider-confluentcloud' module does not attempt to retry the authentication; instead it fails immediately, causing the Terraform deployment pipeline to fail as well.

Broken with Terraform 0.13

2020/09/28 11:09:58 [INFO] Terraform version: 0.13.0
2020/09/28 11:09:58 [INFO] Go runtime version: go1.14.2
...
2020/09/28 11:10:00 [INFO] CLI command args: []string{"apply"}
   2020/09/28 11:10:00 [TRACE] Meta.Backend: built configuration for "gcs" backend with hash value 2205733219
   2020/09/28 11:10:00 [TRACE] Preserving existing state lineage "8f853e0d-6d3d-37f8-32a7-fcd9b12d4912"
   2020/09/28 11:10:00 [TRACE] Preserving existing state lineage "8f853e0d-6d3d-37f8-32a7-fcd9b12d4912"
   2020/09/28 11:10:00 [TRACE] Meta.Backend: working directory was previously initialized for "gcs" backend
   2020/09/28 11:10:00 [TRACE] Meta.Backend: using already-initialized, unchanged "gcs" backend configuration
   2020/09/28 11:10:00 [TRACE] Meta.Backend: instantiated backend of type *gcs.Backend
   2020/09/28 11:10:00 [TRACE] providercache.fillMetaCache: scanning directory .terraform/plugins
   2020/09/28 11:10:00 [TRACE] getproviders.SearchLocalDirectory: found github.com/cgroschupp/confluent-cloud v0.0.6 for darwin_amd64 at .terraform/plugins/github.com/cgroschupp/confluent-cloud/0.0.6/darwin_amd64
   2020/09/28 11:10:00 [TRACE] providercache.fillMetaCache: including .terraform/plugins/github.com/cgroschupp/confluent-cloud/0.0.6/darwin_amd64 as a candidate package for github.com/cgroschupp/confluent-cloud 0.0.6
   2020/09/28 11:10:00 [DEBUG] checking for provisioner in "."
   2020/09/28 11:10:00 [DEBUG] checking for provisioner in "/Users/user/data/workspace/softs/bin"
   2020/09/28 11:10:00 [DEBUG] checking for provisioner in "/Users/user/.terraform.d/plugins"
   2020/09/28 11:10:00 [DEBUG] checking for provisioner in "/Users/user/.terraform.d/plugins/darwin_amd64"
   2020/09/28 11:10:00 [INFO] Failed to read plugin lock file .terraform/plugins/darwin_amd64/lock.json: open .terraform/plugins/darwin_amd64/lock.json: no such file or directory
   2020/09/28 11:10:00 [TRACE] Meta.Backend: backend *gcs.Backend does not support operations, so wrapping it in a local backend
   2020/09/28 11:10:00 [INFO] backend/local: starting Apply operation
   2020/09/28 11:10:00 [TRACE] backend/local: requesting state manager for workspace "default"
   2020/09/28 11:10:02 [TRACE] backend/local: requesting state lock for workspace "default"
   2020/09/28 11:10:02 [TRACE] backend/local: reading remote state for workspace "default"
   2020/09/28 11:10:03 [TRACE] backend/local: retrieving local state snapshot for workspace "default"
   2020/09/28 11:10:03 [TRACE] backend/local: building context for current working directory
   2020/09/28 11:10:03 [TRACE] terraform.NewContext: starting
   2020/09/28 11:10:03 [TRACE] terraform.NewContext: loading provider schemas
   2020/09/28 11:10:03 [TRACE] LoadSchemas: retrieving schema for provider type "registry.terraform.io/hashicorp/confluent-cloud"
   2020/09/28 11:10:03 [TRACE] LoadSchemas: retrieving schema for provider type "github.com/cgroschupp/confluent-cloud"
   2020-09-28T11:10:03.198+0300 [INFO]  plugin: configuring client automatic mTLS
   2020-09-28T11:10:03.234+0300 [DEBUG] plugin: starting plugin: path=.terraform/plugins/github.com/cgroschupp/confluent-cloud/0.0.6/darwin_amd64/terraform-provider-confluent-cloud_v0.0.6 args=[.terraform/plugins/github.com/cgroschupp/confluent-cloud/0.0.6/darwin_amd64/terraform-provider-confluent-cloud_v0.0.6]
   2020-09-28T11:10:03.239+0300 [DEBUG] plugin: plugin started: path=.terraform/plugins/github.com/cgroschupp/confluent-cloud/0.0.6/darwin_amd64/terraform-provider-confluent-cloud_v0.0.6 pid=5574
   2020-09-28T11:10:03.239+0300 [DEBUG] plugin: waiting for RPC address: path=.terraform/plugins/github.com/cgroschupp/confluent-cloud/0.0.6/darwin_amd64/terraform-provider-confluent-cloud_v0.0.6
   2020-09-28T11:10:03.250+0300 [DEBUG] plugin.terraform-provider-confluent-cloud_v0.0.6: 2020/09/28 11:10:03 [INFO] Creating Provider
   2020-09-28T11:10:03.251+0300 [INFO]  plugin.terraform-provider-confluent-cloud_v0.0.6: configuring server automatic mTLS: timestamp=2020-09-28T11:10:03.251+0300
   2020-09-28T11:10:03.279+0300 [DEBUG] plugin: using plugin: version=5
   2020-09-28T11:10:03.279+0300 [DEBUG] plugin.terraform-provider-confluent-cloud_v0.0.6: plugin address: address=/var/folders/8x/y2xr62qs19g1yzysh9nqxtqs3qrbq0/T/plugin214785797 network=unix timestamp=2020-09-28T11:10:03.279+0300
   2020-09-28T11:10:03.346+0300 [TRACE] plugin.stdio: waiting for stdio data
   2020/09/28 11:10:03 [TRACE] GRPCProvider: GetSchema
   2020-09-28T11:10:03.347+0300 [WARN]  plugin.stdio: received EOF, stopping recv loop: err="rpc error: code = Unimplemented desc = unknown service plugin.GRPCStdio"
   2020/09/28 11:10:03 [TRACE] No provider meta schema returned
   2020/09/28 11:10:03 [TRACE] GRPCProvider: Close
   2020-09-28T11:10:03.355+0300 [DEBUG] plugin: plugin process exited: path=.terraform/plugins/github.com/cgroschupp/confluent-cloud/0.0.6/darwin_amd64/terraform-provider-confluent-cloud_v0.0.6 pid=5574
   2020-09-28T11:10:03.355+0300 [DEBUG] plugin: plugin exited

   Error: Could not load plugin


   Plugin reinitialization required. Please run "terraform init".

   Plugins are external binaries that Terraform uses to access and manipulate
   resources. The configuration provided requires plugins which can't be located,
   don't satisfy the version constraints, or are otherwise incompatible.

   Terraform automatically discovers provider requirements from your
   configuration, including providers used in child modules. To see the
   requirements and constraints, run "terraform providers".

   Failed to instantiate provider
   "registry.terraform.io/hashicorp/confluent-cloud" to obtain schema: unknown
   provider "registry.terraform.io/hashicorp/confluent-cloud"

I am trying to migrate to Terraform 0.13 but the plugin is not working with it.
I don't have a proper provider development environment, but I tried updating the Terraform Go dependency and rebuilding; it still doesn't work.
Can you take a look and provide a new version that works with Terraform 0.13?

Thanks

Schema registry resource has wrong id, so Schema registry API key cannot be created

The ID of a created confluentcloud_schema_registry is expected to be the ID from the confluent.cloud API response, like lsrc-n5z6d, similar to the IDs of other resources created by Terraform:

confluentcloud_environment -> id: "env-qp956"
confluentcloud_kafka_cluster -> id: "lkc-51gd2"

Instead, it has the value account schema-registry env-qp956, so there is no way to create a Schema Registry API key with it.

If the ID were correct, the Schema Registry API key could be created with:

resource "confluentcloud_api_key" "schema_registry_api_key" {
  environment_id = confluentcloud_environment.environment.id
  logical_clusters = [
    confluentcloud_schema_registry.registry.id
  ]
}

But Terraform returns an error:
Error: api_keys: cluster, key, or account not found

I'm not sure whether the problem is in the Terraform provider or in the underlying go-client-confluent-cloud library.

Error: bootstrap_servers was not set

When trying to execute the example TF code from scratch, I'm getting:

Error: bootstrap_servers was not set

  on main.tf line 37, in provider "kafka":
  37: provider "kafka" {

This is because the Kafka provider needs bootstrap_servers to work, but they have not yet been created by the other resources.

Is there a way to "delay" the execution of the kafka provider?
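One workaround that reportedly works is to make the provider configuration reference the cluster resource, so it isn't evaluated until the cluster exists. A sketch, assuming the resource names from the README example (the SASL_SSL:// prefix may need stripping):

```hcl
provider "kafka" {
  # Referencing the cluster attribute defers provider configuration
  # until the cluster has been created.
  bootstrap_servers = [replace(confluentcloud_kafka_cluster.test.bootstrap_servers, "SASL_SSL://", "")]
  tls_enabled       = true
  sasl_username     = confluentcloud_api_key.provider_test.key
  sasl_password     = confluentcloud_api_key.provider_test.secret
  sasl_mechanism    = "plain"
}
```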

Issue when creating a service account if one with same name already exists

Attempting to create a service account with the same name as one that already exists fails with an ambiguous error message.

It appears that the previous service account was not deleted by a terraform destroy, which meant that when re-applying it was not obvious that the service account still existed.

Error: Provider produced inconsistent result after apply

When applying changes to confluentcloud_service_account.terraform, provider
"registry.terraform.io/-/confluentcloud" produced an unexpected new value for
was present, but now absent.

Internal validation of the provider failed

First of all, this is a very interesting plugin. I've been looking for this kind of thing for a while.

I'm new in confluent world and I'm trying to create a cluster with the code:

terraform {
  backend "gcs" {
  }
}

provider "confluentcloud" {
  version = "0.0.1"
}

resource "confluentcloud_environment" "environment" {
  name = "Default"
}

resource "confluentcloud_kafka_cluster" "kafka_cluster" {
  name             = "my-cluster"
  environment_id   = confluentcloud_environment.environment.id
  service_provider = "gcp"
  region           = "europe-west1"
  availability     = "single-zone"
}

resource "confluentcloud_api_key" "provider_test" {
  cluster_id     = confluentcloud_kafka_cluster.kafka_cluster.id
  environment_id = confluentcloud_environment.environment.id
}

I have built the plugin, placed it in the appropriate folder (it's found in the log), and I have set up the two environment variables. I'm using version 0.0.1.

The error message I have is:

Error: Internal validation of the provider failed! This is always a bug
with the provider itself, and not a user issue. Please report
this bug:

1 error occurred:

  • resource confluentcloud_environment: All fields are ForceNew or Computed w/out Optional, Update is superfluous

Looking at the Go code, resource_environment.go has only the parameter 'name', which I am providing ...

What am I missing ?

Error : InternalValidate - Release v0.0.8

@Mongey, Getting the below error when using the new release v0.0.8. Could you please take a look?


Error: InternalValidate

Internal validation of the provider failed! This is always a bug
with the provider itself, and not a user issue. Please report
this bug:

1 error occurred:
* resource confluentcloud_kafka_cluster: deployment: TypeMap with Elem *Resource not supported, use TypeList/TypeSet with Elem *Resource or TypeMap with Elem *Schema

Cluster creation failed

Hi,

Tried to provision a standard cluster using the template below, and it failed after some time as shown. Any input really helps. Thank you

resource "confluentcloud_kafka_cluster" "cluster" {
  name             = var.cluster_name
  service_provider = "azure"
  region           = "westus2"
  availability     = "HIGH"
  environment_id   = module.environment.environment_id
  deployment = {
    sku = "STANDARD"
  }
}

....
confluentcloud_kafka_cluster.cluster: Still creating... [18m30s elapsed]
confluentcloud_kafka_cluster.cluster: Still creating... [18m40s elapsed]
confluentcloud_kafka_cluster.cluster: Still creating... [18m50s elapsed]
confluentcloud_kafka_cluster.cluster: Still creating... [19m0s elapsed]
confluentcloud_kafka_cluster.cluster: Still creating... [19m10s elapsed]
confluentcloud_kafka_cluster.cluster: Still creating... [19m20s elapsed]
confluentcloud_kafka_cluster.cluster: Still creating... [19m30s elapsed]
confluentcloud_kafka_cluster.cluster: Still creating... [19m40s elapsed]
confluentcloud_kafka_cluster.cluster: Still creating... [19m50s elapsed]

Error: Error waiting for cluster (lkc-132dv) to be ready: context deadline exceeded

on main.tf line 40, in resource "confluentcloud_kafka_cluster" "cluster":
40: resource "confluentcloud_kafka_cluster" "cluster" {

Makefile:149: recipe for target 'apply_dev1_westus2_confluent_kafka_test' failed
make: *** [apply_dev1_westus2_confluent_kafka_test] Error 1

Terraform import does not import all configuration arguments

Hi! I have some existing Confluent resources that I want to now manage through Terraform.

I successfully terraform import'ed a cluster, but most of the parameters (configuration arguments?) were not imported. This results in Terraform wanting to replace the cluster when I run terraform plan.

 $>  terraform12 state show confluentcloud_kafka_cluster.pre-prod

# confluentcloud_kafka_cluster.pre-prod:
resource "confluentcloud_kafka_cluster" "pre-prod" {
    bootstrap_servers = "SASL_SSL://xxx:9092"
    id                = "lxx-xxx"
}

 $> terraform plan -target=confluentcloud_kafka_cluster.pre-prod

Refreshing Terraform state in-memory prior to plan...
[...]
Terraform will perform the following actions:

  # confluentcloud_kafka_cluster.pre-prod must be replaced
-/+ resource "confluentcloud_kafka_cluster" "pre-prod" {
      + availability      = "LOW" # forces replacement
      ~ bootstrap_servers = "SASL_SSL://xxx:9092" -> (known after apply)
      + environment_id    = "xxx" # forces replacement
      ~ id                = "xxx" -> (known after apply)
      + name              = "xxx" # forces replacement
      + region            = "xxx" # forces replacement
      + service_provider  = "aws" # forces replacement
    }

I'm new to Terraform plugins, but from looking at the code, it looks like this functionality is just not implemented. Is that the case?

Are there plans to implement it? Would it just be a matter of filling out the clusterRead function?

For now I am planning on just manually updating the state file as a workaround. If you have a better suggestion, please let me know!

Error: Login invalid username and password

I am providing a valid Confluent email and password, but it throws the following error:

Error: login: Invalid username or password.

  on main.tf line 13, in provider "confluentcloud":
  13: provider "confluentcloud" {

Release v0.0.2 has errors which are fixed in master

Has anyone else faced this, or am I doing something wrong? I got this when I tried the example given in the README. The environment is created successfully.

confluentcloud_service_account.test: Creation complete after 0s [id=90607]
confluentcloud_environment.environment: Creating...
confluentcloud_environment.environment: Creation complete after 1s [id=env-v13r0]
confluentcloud_kafka_cluster.test: Creating...

Error: rpc error: code = Unavailable desc = transport is closing


panic: interface conversion: interface {} is nil, not string
2020-07-20T17:17:09.130+1000 [DEBUG] plugin.terraform-provider-confluentcloud_v0.0.2:
2020-07-20T17:17:09.130+1000 [DEBUG] plugin.terraform-provider-confluentcloud_v0.0.2: goroutine 59 [running]:
2020-07-20T17:17:09.130+1000 [DEBUG] plugin.terraform-provider-confluentcloud_v0.0.2: github.com/Mongey/terraform-provider-confluent-cloud/ccloud.clusterCreate(0xc0000faf50, 0x1b87120, 0xc000564fa0, 0x2, 0x2538700)
2020-07-20T17:17:09.130+1000 [DEBUG] plugin.terraform-provider-confluentcloud_v0.0.2: 	/var/folders/2k/fy8w13m902j3twpw64bp_wbc0000gn/T/=tf-installer.XOE99qcq/tf-installer-clone-confluentcloud/ccloud/resource_kafka_cluster.go:115 +0x62d
2020-07-20T17:17:09.130+1000 [DEBUG] plugin.terraform-provider-confluentcloud_v0.0.2: github.com/hashicorp/terraform/helper/schema.(*Resource).Apply(0xc000044b00, 0xc0002a0fa0, 0xc0001246a0, 0x1b87120, 0xc000564fa0, 0x1a7b801, 0xc00078cae8, 0xc000796990)
2020-07-20T17:17:09.130+1000 [DEBUG] plugin.terraform-provider-confluentcloud_v0.0.2: 	/Users/sougata/go/pkg/mod/github.com/hashicorp/[email protected]/helper/schema/resource.go:286 +0x365
2020-07-20T17:17:09.130+1000 [DEBUG] plugin.terraform-provider-confluentcloud_v0.0.2: github.com/hashicorp/terraform/helper/schema.(*Provider).Apply(0xc000044d00, 0xc0007df8d0, 0xc0002a0fa0, 0xc0001246a0, 0xc00079a088, 0xc0001166e0, 0x1a7d700)
2020-07-20T17:17:09.130+1000 [DEBUG] plugin.terraform-provider-confluentcloud_v0.0.2: 	/Users/sougata/go/pkg/mod/github.com/hashicorp/[email protected]/helper/schema/provider.go:285 +0x99
2020-07-20T17:17:09.130+1000 [DEBUG] plugin.terraform-provider-confluentcloud_v0.0.2: github.com/hashicorp/terraform/helper/plugin.(*GRPCProviderServer).ApplyResourceChange(0xc000010a58, 0x1dffb40, 0xc000789d10, 0xc00010e420, 0xc000010a58, 0xc000789d10, 0xc0004bca48)
2020-07-20T17:17:09.130+1000 [DEBUG] plugin.terraform-provider-confluentcloud_v0.0.2: 	/Users/sougata/go/pkg/mod/github.com/hashicorp/[email protected]/helper/plugin/grpc_provider.go:842 +0x892
2020-07-20T17:17:09.130+1000 [DEBUG] plugin.terraform-provider-confluentcloud_v0.0.2: github.com/hashicorp/terraform/internal/tfplugin5._Provider_ApplyResourceChange_Handler(0x1b893e0, 0xc000010a58, 0x1dffb40, 0xc000789d10, 0xc0002a0dc0, 0x0, 0x1dffb40, 0xc000789d10, 0xc00021a5a0, 0x1c3)
2020-07-20T17:17:09.130+1000 [DEBUG] plugin.terraform-provider-confluentcloud_v0.0.2: 	/Users/sougata/go/pkg/mod/github.com/hashicorp/[email protected]/internal/tfplugin5/tfplugin5.pb.go:3019 +0x217
2020-07-20T17:17:09.130+1000 [DEBUG] plugin.terraform-provider-confluentcloud_v0.0.2: google.golang.org/grpc.(*Server).processUnaryRPC(0xc000498f00, 0x1e09f40, 0xc000642a80, 0xc000791000, 0xc0006563f0, 0x24fed00, 0x0, 0x0, 0x0)
2020-07-20T17:17:09.130+1000 [DEBUG] plugin.terraform-provider-confluentcloud_v0.0.2: 	/Users/sougata/go/pkg/mod/google.golang.org/[email protected]/server.go:966 +0x46a
2020-07-20T17:17:09.130+1000 [DEBUG] plugin.terraform-provider-confluentcloud_v0.0.2: google.golang.org/grpc.(*Server).handleStream(0xc000498f00, 0x1e09f40, 0xc000642a80, 0xc000791000, 0x0)
2020-07-20T17:17:09.130+1000 [DEBUG] plugin.terraform-provider-confluentcloud_v0.0.2: 	/Users/sougata/go/pkg/mod/google.golang.org/[email protected]/server.go:1245 +0xc9e
2020-07-20T17:17:09.130+1000 [DEBUG] plugin.terraform-provider-confluentcloud_v0.0.2: google.golang.org/grpc.(*Server).serveStreams.func1.1(0xc000038570, 0xc000498f00, 0x1e09f40, 0xc000642a80, 0xc000791000)
2020-07-20T17:17:09.130+1000 [DEBUG] plugin.terraform-provider-confluentcloud_v0.0.2: 	/Users/sougata/go/pkg/mod/google.golang.org/[email protected]/server.go:685 +0xa1
2020-07-20T17:17:09.130+1000 [DEBUG] plugin.terraform-provider-confluentcloud_v0.0.2: created by google.golang.org/grpc.(*Server).serveStreams.func1
2020-07-20T17:17:09.130+1000 [DEBUG] plugin.terraform-provider-confluentcloud_v0.0.2: 	/Users/sougata/go/pkg/mod/google.golang.org/[email protected]/server.go:683 +0xa1
2020/07/20 17:17:09 [DEBUG] confluentcloud_kafka_cluster.test: apply errored, but we're indicating that via the Error pointer rather than returning it: rpc error: code = Unavailable desc = transport is closing
2020/07/20 17:17:09 [TRACE] <root>: eval: *terraform.EvalMaybeTainted
2020/07/20 17:17:09 [TRACE] EvalMaybeTainted: confluentcloud_kafka_cluster.test encountered an error during creation, so it is now marked as tainted
2020/07/20 17:17:09 [TRACE] <root>: eval: *terraform.EvalWriteState
2020/07/20 17:17:09 [TRACE] EvalWriteState: removing state object for confluentcloud_kafka_cluster.test
2020/07/20 17:17:09 [TRACE] <root>: eval: *terraform.EvalApplyProvisioners
2020/07/20 17:17:09 [TRACE] EvalApplyProvisioners: confluentcloud_kafka_cluster.test has no state, so skipping provisioners
2020/07/20 17:17:09 [TRACE] <root>: eval: *terraform.EvalMaybeTainted
2020/07/20 17:17:09 [TRACE] EvalMaybeTainted: confluentcloud_kafka_cluster.test encountered an error during creation, so it is now marked as tainted
2020/07/20 17:17:09 [TRACE] <root>: eval: *terraform.EvalWriteState
2020/07/20 17:17:09 [TRACE] EvalWriteState: removing state object for confluentcloud_kafka_cluster.test
2020/07/20 17:17:09 [TRACE] <root>: eval: *terraform.EvalIf
2020/07/20 17:17:09 [TRACE] <root>: eval: *terraform.EvalIf
2020/07/20 17:17:09 [TRACE] <root>: eval: *terraform.EvalWriteDiff
2020-07-20T17:17:09.132+1000 [DEBUG] plugin: plugin process exited: path=/Users/sougata/.terraform.d/plugins/terraform-provider-confluentcloud_v0.0.2 pid=90374 error="exit status 2"
2020/07/20 17:17:09 [TRACE] <root>: eval: *terraform.EvalApplyPost
2020/07/20 17:17:09 [ERROR] <root>: eval: *terraform.EvalApplyPost, err: rpc error: code = Unavailable desc = transport is closing
2020/07/20 17:17:09 [ERROR] <root>: eval: *terraform.EvalSequence, err: rpc error: code = Unavailable desc = transport is closing
2020/07/20 17:17:09 [TRACE] [walkApply] Exiting eval tree: confluentcloud_kafka_cluster.test
2020/07/20 17:17:09 [TRACE] vertex "confluentcloud_kafka_cluster.test": visit complete
2020/07/20 17:17:09 [TRACE] dag/walk: upstream of "provider.confluentcloud (close)" errored, so skipping
2020/07/20 17:17:09 [TRACE] dag/walk: upstream of "meta.count-boundary (EachMode fixup)" errored, so skipping
2020/07/20 17:17:09 [TRACE] dag/walk: upstream of "root" errored, so skipping
2020/07/20 17:17:09 [TRACE] statemgr.Filesystem: have already backed up original terraform.tfstate to terraform.tfstate.backup on a previous write
2020/07/20 17:17:09 [TRACE] statemgr.Filesystem: state has changed since last snapshot, so incrementing serial to 13
2020/07/20 17:17:09 [TRACE] statemgr.Filesystem: writing snapshot at terraform.tfstate
2020/07/20 17:17:09 [TRACE] statemgr.Filesystem: removing lock metadata file .terraform.tfstate.lock.info
2020/07/20 17:17:09 [TRACE] statemgr.Filesystem: unlocking terraform.tfstate using fcntl flock
2020-07-20T17:17:09.143+1000 [DEBUG] plugin: plugin exited



!!!!!!!!!!!!!!!!!!!!!!!!!!! TERRAFORM CRASH !!!!!!!!!!!!!!!!!!!!!!!!!!!!

Terraform crashed! This is always indicative of a bug within Terraform.
A crash log has been placed at "crash.log" relative to your current
working directory. It would be immensely helpful if you could please
report the crash with Terraform[1] so that we can fix this.

When reporting bugs, please include your terraform version. That
information is available on the first line of crash.log. You can also
get it by running 'terraform --version' on the command line.

SECURITY WARNING: the "crash.log" file that was created may contain
sensitive information that must be redacted before it is safe to share
on the issue tracker.

[1]: https://github.com/hashicorp/terraform/issues

!!!!!!!!!!!!!!!!!!!!!!!!!!! TERRAFORM CRASH !!!!!!!!!!!!!!!!!!!!!!!!!!!!

Confluent Cloud Connectors

Hello

Do you have plans to add support for source/sink managed connectors in this provider?

Thanks

Add data: env and cluster

Issue: as long as #33 is in a pending state, the cluster can't be created in code. That prevents creating service accounts with API keys for this cluster, which are required by provider "kafka".

Proposed resolution: please add data objects that could resolve the issue by fetching existing resources (without importing them into tfstate).

ENV

data "confluentcloud_environment" "this" {
  name = "DEV"
}

which returns attributes:

{
  "id"   = "env-xxxxx"
  "name" = "DEV"
}


CLUSTER

data "confluentcloud_kafka_cluster" "this" {
  environment_id = data.confluentcloud_environment.this.id
  name           = "DEV-BASIC"
}

which returns attributes:

{
  "bootstrap_servers" = "SASL_SSL://pkc-xyxyx.ap-southeast-1.aws.confluent.cloud:9092"
  "id"                = "lkc-09034"
}

Thank you!

Confluent Cloud API Key for resourceType Cloud

There is no way to tell the cloud API key creation command which resourceType to use; for example, we are unable to request API key creation for resourceType='cloud'.

We can create a schema registry API key by specifying the schema-registry cluster id in the logicalClusters attribute.
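For reference, the working schema-registry case can be sketched as follows (resource names are illustrative):

```hcl
resource "confluentcloud_api_key" "schema_registry" {
  environment_id = confluentcloud_environment.environment.id
  logical_clusters = [
    confluentcloud_schema_registry.test.id
  ]
}
```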

Add Dedicated Cluster type with networking options

Please add deployment.sku = "DEDICATED" with networking options "Internet", "PrivateLink", and "VPCPeering".

resource "confluentcloud_kafka_cluster" "test" {
  name           = "provider-test"
  environment_id = confluentcloud_environment.env0.id
  #bootstrap_servers = string
  service_provider = "aws" # AWS/GCP
  region           = "ap-southeast-1"
  availability     = "LOW" # LOW(single-zone) or HIGH(multi-zone)
  storage          = 5000  # Storage limit(GB)
  network_ingress  = 100   #Network ingress limit(MBps)
  network_egress   = 100   #Network egress limit(MBps)
  deployment = {           #"Deployment settings.  Currently only `sku` is supported."
    sku = "DEDICATED"      #"BASIC"/"STANDARD"; "DEDICATED" - not supported yet

    # For sku="DEDICATED" only:
    networking       = "VPCPeering" # "Internet", "PrivateLink", "VPCPeering"
    cku = 1
    ## For sku="DEDICATED" && networking="VPCPeering"only:
    cidr_for_confluentcloud = "10.12.0.0/16"
  }
}


UPDATE 2020.10.08: it seems the go-client already has this; we just need to port these arguments into the tf-provider :)

New Release?

Hi there, I was wondering when you are going to release 0.11? I would really like to use the Confluent Cloud Connector feature that was merged into master 15 days ago. Release 0.10 came out about two months ago, so it doesn't include the Confluent Cloud Connector feature.

Unable to create kafka-acl at Cluster level for Confluent Cloud

Hi,

I tried to create a Cluster-level ACL using the following configuration:

terraform {
  required_version = "> 0.13"
  required_providers {
    azurerm = {
      source = "hashicorp/azurerm"
      version = "2.58.0"
    }
    confluentcloud = {
      source = "Mongey/confluentcloud"
    }
    kafka = {
      source  = "Mongey/kafka"
      version = "0.3.3"
    }
  }
}

################################-Confluent Cloud Provider##############################
provider "confluentcloud" {
  username = var.CCLOUD_USER
  password = var.CCLOUD_PASSWORD
}

#######################API Key  And service Account###############################################
resource "confluentcloud_api_key" "dest_api_admin_access" {
  cluster_id     = var.DEST_CLUSTER_ID
  environment_id = var.DEST_ENVIRONMENT_ID
}

resource "confluentcloud_service_account" "replicatorServiceAccount" {
  name           = var.SERVICE_ACCOUNT_NAME
  description    = "Replicator Test Service Account"
}

#####################################################################################
provider "kafka" {
  alias = "dest_cluster"
  bootstrap_servers = var.DEST_BOOTSTRAP_SERVERS

  tls_enabled    = true
  sasl_username  = confluentcloud_api_key.dest_api_admin_access.key
  sasl_password  = confluentcloud_api_key.dest_api_admin_access.secret
  sasl_mechanism = "plain"
  timeout        = 10
}
####################################################################################

resource "kafka_acl" "LicenTopicAcl-Cluster-Create" {
  provider            = kafka.dest_cluster
  resource_name       = var.DEST_CLUSTER_ID
  resource_type       = "Cluster"
  acl_principal       = format("User:%d", confluentcloud_service_account.replicatorServiceAccount.id)
  acl_host            = "*"
  acl_operation       = "Create"
  acl_permission_type = "Allow" 
}

For the resource name neither the Cluster Id nor the Cluster Name worked.
I get the following exception:

Error: kafka server: This most likely occurs because of a request being malformed by the client library or the message was sent to an incompatible broker. See the broker logs for more details.
│
│ with kafka_acl.LicenTopicAcl-Cluster-Create,
│ on license-topic-acls.tf line 48, in resource "kafka_acl" "LicenTopicAcl-Cluster-Create":
│ 48: resource "kafka_acl" "LicenTopicAcl-Cluster-Create" {
│

Can you please confirm what is wrong in the format of the message?
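One thing worth checking, assuming Confluent Cloud follows plain Kafka's convention here: cluster-level ACLs target the fixed resource name kafka-cluster rather than the cluster's id or display name. A sketch:

```hcl
resource "kafka_acl" "cluster_create" {
  provider            = kafka.dest_cluster
  resource_name       = "kafka-cluster" # Kafka's fixed name for the cluster resource
  resource_type       = "Cluster"
  acl_principal       = format("User:%s", confluentcloud_service_account.replicatorServiceAccount.id)
  acl_host            = "*"
  acl_operation       = "Create"
  acl_permission_type = "Allow"
}
```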
