Comments (10)
@Jaxwood, @Marcus-James-Adams, @ronald05arias, @czerasz-mineiros, @mparker-variant, @jorgenfries, @PlugaruT, @mikhailznak, @patrickschmelter, @scotartt, @vweckerle we're very excited to let you know we've just published a new version of the TF Provider that includes the `api_key` resource, among other improvements: it enables fully automated provisioning of our key Kafka workflows (see the demo) with no more manual intervention, making this our biggest and most impactful release yet.
The only gotcha: we've renamed the provider from `confluentinc/confluentcloud` to `confluentinc/confluent`, but we've published a migration guide, so the switch should be fairly straightforward. The existing `confluentinc/confluentcloud` provider will be deprecated soon, so we recommend switching as soon as possible.
The new `confluentinc/confluent` provider also includes a lot of sample configurations, so you won't need to write them from scratch. You can find them here; a full list of changes is here.
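Under the rename, the switch is mostly a matter of updating the `source` in your `required_providers` block; a minimal sketch (the version constraint is illustrative, check the registry and the migration guide for the current release):

```hcl
terraform {
  required_providers {
    # Old source (to be deprecated):
    # confluentcloud = {
    #   source = "confluentinc/confluentcloud"
    # }

    # New source after the rename:
    confluent = {
      source  = "confluentinc/confluent"
      version = "~> 0.7" # illustrative; pin to the release you validate
    }
  }
}
```

Existing state can typically be repointed with `terraform state replace-provider registry.terraform.io/confluentinc/confluentcloud registry.terraform.io/confluentinc/confluent`; the migration guide is the authoritative reference.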
from terraform-provider-confluentcloud.
👋 @vweckerle @bohdanverdyi we're targeting a new release of the TF Provider (with the `api_key` resource) on April 28th 🤞
👋 @scotartt, here's a sample TF configuration with the `api_key` resource that shows how it will work (TL;DR: create a Kafka cluster + Kafka API Key + Kafka Topic (and other resources) in a single `terraform apply` run):
```hcl
provider "confluentcloud" {
  api_key    = var.confluent_cloud_api_key
  api_secret = var.confluent_cloud_api_secret
}

resource "confluentcloud_environment" "staging" {
  display_name = "Staging"
}

# Update the config to use a cloud provider and region of your choice.
# https://registry.terraform.io/providers/confluentinc/confluentcloud/latest/docs/resources/confluentcloud_kafka_cluster
resource "confluentcloud_kafka_cluster" "basic" {
  display_name = "inventory"
  availability = "SINGLE_ZONE"
  cloud        = "AWS"
  region       = "us-east-2"
  basic {}

  environment {
    id = confluentcloud_environment.staging.id
  }
}

// The 'app-manager' service account is required in this configuration to create the 'orders' topic
// and grant ACLs to the 'app-producer' and 'app-consumer' service accounts.
resource "confluentcloud_service_account" "app-manager" {
  display_name = "app-manager"
  description  = "Service account to manage 'inventory' Kafka cluster"
}

resource "confluentcloud_role_binding" "app-manager-kafka-cluster-admin" {
  principal   = "User:${confluentcloud_service_account.app-manager.id}"
  role_name   = "CloudClusterAdmin"
  crn_pattern = confluentcloud_kafka_cluster.basic.rbac_crn
}

resource "confluentcloud_api_key" "app-manager-kafka-api-key" {
  display_name = "app-manager-kafka-api-key"
  description  = "Kafka API Key that is owned by 'app-manager' service account"

  owner {
    id          = confluentcloud_service_account.app-manager.id
    api_version = confluentcloud_service_account.app-manager.api_version
    kind        = confluentcloud_service_account.app-manager.kind
  }

  managed_resource {
    id          = confluentcloud_kafka_cluster.basic.id
    api_version = confluentcloud_kafka_cluster.basic.api_version
    kind        = confluentcloud_kafka_cluster.basic.kind

    environment {
      id = confluentcloud_environment.staging.id
    }
  }

  # The goal is to ensure that confluentcloud_role_binding.app-manager-kafka-cluster-admin is created before
  # confluentcloud_api_key.app-manager-kafka-api-key is used to create instances of the
  # confluentcloud_kafka_topic and confluentcloud_kafka_acl resources.
  # The 'depends_on' meta-argument is specified in confluentcloud_api_key.app-manager-kafka-api-key to avoid having
  # multiple copies of this definition in the configuration, which would happen if we specified it in the
  # confluentcloud_kafka_topic and confluentcloud_kafka_acl resources instead.
  depends_on = [
    confluentcloud_role_binding.app-manager-kafka-cluster-admin
  ]
}

resource "confluentcloud_kafka_topic" "orders" {
  kafka_cluster    = confluentcloud_kafka_cluster.basic.id
  topic_name       = "orders"
  partitions_count = 4

  config = {
    // Example of overriding the default value (2097164) of the 'max.message.bytes' topic setting
    // https://docs.confluent.io/cloud/current/clusters/broker-config.html
    "max.message.bytes" = "2097165"
  }

  http_endpoint = confluentcloud_kafka_cluster.basic.http_endpoint

  credentials {
    key    = confluentcloud_api_key.app-manager-kafka-api-key.id
    secret = confluentcloud_api_key.app-manager-kafka-api-key.secret
  }
}
```
...
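The elided remainder of the demo presumably grants ACLs using the same API key; a hedged sketch of what one such `confluentcloud_kafka_acl` might look like with the 0.x provider (the `app-producer` service account is assumed here, not copied from the demo):

```hcl
# Hypothetical sketch: allow an assumed 'app-producer' service account to write to 'orders'.
resource "confluentcloud_kafka_acl" "app-producer-write-on-orders" {
  kafka_cluster = confluentcloud_kafka_cluster.basic.id
  resource_type = "TOPIC"
  resource_name = confluentcloud_kafka_topic.orders.topic_name
  pattern_type  = "LITERAL"
  principal     = "User:${confluentcloud_service_account.app-producer.id}"
  host          = "*"
  operation     = "WRITE"
  permission    = "ALLOW"
  http_endpoint = confluentcloud_kafka_cluster.basic.http_endpoint

  credentials {
    key    = confluentcloud_api_key.app-manager-kafka-api-key.id
    secret = confluentcloud_api_key.app-manager-kafka-api-key.secret
  }
}
```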
We're testing the `api_key` resource at the moment and targeting the end of March.
Sure, you could create the environment, cluster, and app-manager service account using pipeline #1 (TF module #1), and then run pipeline #2 (TF module #2), which would use the Cloud API Key of a service account with the EnvironmentAdmin role instead:
```hcl
provider "confluentcloud" {
  # Cloud API Key of a service account with EnvironmentAdmin role
  api_key    = var.confluent_cloud_api_key
  api_secret = var.confluent_cloud_api_secret
}
```
to create just the Kafka API Key and Kafka Topic for the target cluster (plus other resources like ACLs, if necessary).
It's worth mentioning that module #1 should output the Kafka Cluster ID, the `app-manager` Service Account ID, and other metadata, and module #2 should accept these as inputs.
In an upcoming release, we're going to share these examples (or even TF modules) to make using the TF Provider for CC a bit easier.
Let me know if this helps.
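The module interface described here could be sketched as follows (file paths and variable names are assumptions for illustration, not from the provider docs):

```hcl
# modules/kafka-cluster/outputs.tf (module #1) -- names are illustrative
output "kafka_cluster_id" {
  value = confluentcloud_kafka_cluster.basic.id
}

output "app_manager_service_account_id" {
  value = confluentcloud_service_account.app-manager.id
}

# modules/kafka-topics/variables.tf (module #2) -- consumed by pipeline #2
variable "kafka_cluster_id" {
  type = string
}

variable "app_manager_service_account_id" {
  type = string
}
```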
Hi guys,
any updates on the planned release date for the new confluentcloud_api_key resource? This is quite crucial for us, as the Terraform provider is not really useful without a way to manage API keys.
Greetings,
Valentin
Thanks for raising this question! We're aware of this issue and we're going to address it in one of our future releases 👍
Hi,
what is the expected release date of this feature? It's blocking our work on automating topic configuration because of the manual step.
How will this feature work? For comparison, with the Confluent CLI I can type the following:

```shell
$ confluent kafka topic create help
Created topic "help".
```

and the topic is created!
Hi @linouk23,
What if we don't want to create the environment/cluster all in one go? Our TF approach is that one pipeline runs to create the target environments and clusters (the former especially needs OrgAdmin rights to create, and the latter needs at minimum EnvironmentAdmin over the environment), and then multiple other pipelines each contain only the small subset of topics that a given application requires. Each of these latter pipelines gets credentials injected into it that grant only Admin rights in the target cluster.
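The split described here would mean the topic pipelines receive pre-created, cluster-scoped credentials as inputs; a sketch of the receiving side (variable names are assumptions):

```hcl
# Pipeline #2 inputs: a cluster-scoped API key created by the environment/cluster pipeline
# and injected at run time (e.g. from a CI secret store). Names are illustrative.
variable "kafka_api_key" {
  type      = string
  sensitive = true
}

variable "kafka_api_secret" {
  type      = string
  sensitive = true
}
```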
Hi, after working with this for the past week, I have some additional feedback.
Assuming a topic configuration like the one you had above:
```hcl
resource "confluentcloud_kafka_topic" "orders" {
  kafka_cluster    = confluentcloud_kafka_cluster.basic.id
  topic_name       = "orders"
  partitions_count = 4

  config = {
    // Example of overriding the default value (2097164) of the 'max.message.bytes' topic setting
    // https://docs.confluent.io/cloud/current/clusters/broker-config.html
    "max.message.bytes" = "2097165"
  }

  http_endpoint = confluentcloud_kafka_cluster.basic.http_endpoint

  credentials {
    key    = confluentcloud_api_key.app-manager-kafka-api-key.id
    secret = confluentcloud_api_key.app-manager-kafka-api-key.secret
  }
}
```
For comparison, I'm basing my experience on what I need to do to create the same objects with the Confluent CLI v2. The problems mainly start and end with these parameters:
```hcl
http_endpoint = confluentcloud_kafka_cluster.basic.http_endpoint

credentials {
  key    = confluentcloud_api_key.app-manager-kafka-api-key.id
  secret = confluentcloud_api_key.app-manager-kafka-api-key.secret
}
```
The `http_endpoint` appears to be a complex object and is hard to pass into a module as an argument. The cluster ID and environment ID can be passed (as strings), but to get just the `http_endpoint` I have to have access to the entire cluster object in Terraform, which presents difficulties (i.e. I have to read the entire cluster state into a `data` source in the topic pipeline just to get this endpoint and nothing else).
In the Confluent CLI v2 this parameter is completely unnecessary: I'm not even sure there's a way to pass it as an argument.
The `credentials {}` block could accept those parameters as arguments read from an AWS SSM parameter or a similar secure configuration store. Bear in mind that I want to execute the topic pipeline with a Confluent API key that does not have permission to create that user (i.e. pipeline #1, or an intermediate pipeline between that one and this one, has created the service account and API keys and injected them into my secure configuration store). This is workable (unlike the current situation with `http_endpoint`), but very clunky.
Again, in the Confluent CLI v2 it is not necessary to have pre-created credentials in order to create a topic. When I create a topic with the v2 CLI, I don't need any of these parameters, so I know that none of them are, or should be, necessary to create a topic:

```shell
confluent kafka topic create <my_topic> --partitions 3 --environment <env_id> --cluster <cluster_id>
```
IMO the v2 CLI is pretty clean, and it would be nice if the TF provider followed its example.
Thanks for your help.
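For what it's worth, the workable-but-clunky injection this comment describes could look roughly like the following, reading a key that pipeline #1 is assumed to have written into SSM (parameter names and variables are hypothetical):

```hcl
# Hypothetical SSM parameter names; an earlier pipeline is assumed to have written these.
data "aws_ssm_parameter" "kafka_api_key" {
  name = "/confluent/inventory/app-manager/api-key"
}

data "aws_ssm_parameter" "kafka_api_secret" {
  name            = "/confluent/inventory/app-manager/api-secret"
  with_decryption = true
}

resource "confluentcloud_kafka_topic" "orders" {
  kafka_cluster    = var.kafka_cluster_id   # passed in as a plain string
  topic_name       = "orders"
  partitions_count = 4
  http_endpoint    = var.kafka_http_endpoint # still required today, hence the complaint

  credentials {
    key    = data.aws_ssm_parameter.kafka_api_key.value
    secret = data.aws_ssm_parameter.kafka_api_secret.value
  }
}
```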
Thanks for the message @scotartt!
Sounds like you'd prefer this resource definition then (similar to CLI design), is that accurate?
```hcl
resource "confluentcloud_kafka_topic" "orders" {
  kafka_cluster = confluentcloud_kafka_cluster.basic.id

  environment {
    id = "env-12345"
  }

  topic_name       = "orders"
  partitions_count = 4

  credentials {
    key    = confluentcloud_api_key.app-manager-kafka-api-key.id
    secret = confluentcloud_api_key.app-manager-kafka-api-key.secret
  }
}
```
to avoid the `http_endpoint` attribute?
As you mentioned, one quick workaround could be to use a Kafka cluster data source:
```hcl
data "confluentcloud_kafka_cluster" "basic" {
  id = "lkc-abc123"

  environment {
    id = "env-xyz456"
  }
}

resource "confluentcloud_kafka_topic" "orders" {
  kafka_cluster    = data.confluentcloud_kafka_cluster.basic.id
  topic_name       = "orders"
  partitions_count = 4

  credentials {
    key    = confluentcloud_api_key.app-manager-kafka-api-key.id
    secret = confluentcloud_api_key.app-manager-kafka-api-key.secret
  }

  http_endpoint = data.confluentcloud_kafka_cluster.basic.http_endpoint
}
```