provider-kafka's Introduction

provider-kafka

provider-kafka is a Crossplane Provider that is used to manage Kafka resources.

Usage

  1. Create a provider secret containing a JSON document like the following (see the expected schema here):

    {
      "brokers":[
        "kafka-dev-0.kafka-dev-headless:9092"
       ],
       "sasl":{
         "mechanism":"PLAIN",
         "username":"user",
         "password":"<your-password>"
       }
    }
    
  2. Create a k8s secret containing the above config:

    kubectl -n crossplane-system create secret generic kafka-creds --from-file=credentials=kc.json
    
  3. Create a ProviderConfig; see this as an example.

  4. Create a managed resource; see this for an example of creating a Kafka topic.
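Since the example links above may not render here, a minimal ProviderConfig referencing the secret from step 2 might look like the following. This is a sketch: the field values are illustrative, so check the provider's ProviderConfig CRD for the exact schema.

```yaml
apiVersion: kafka.crossplane.io/v1alpha1
kind: ProviderConfig
metadata:
  name: default
spec:
  credentials:
    # Read the broker/SASL JSON from the secret created in step 2.
    source: Secret
    secretRef:
      namespace: crossplane-system
      name: kafka-creds
      key: credentials
```

Managed resources then select this config via `providerConfigRef.name`.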

Development

Setting up a Development Kafka Cluster

The following instructions will set up a development environment with a locally running Kafka installation (SASL/PLAIN enabled). To change the configuration of your instance further, see the available helm parameters here.

  1. (Optional) Create a local kind cluster unless you want to develop against an existing k8s cluster.

  2. Install the Kafka helm chart:

    helm repo add bitnami https://charts.bitnami.com/bitnami
    kubectl create ns kafka-cluster
    helm upgrade --install kafka-dev -n kafka-cluster bitnami/kafka \
      --version 20.0.5 \
      --set auth.clientProtocol=sasl \
      --set deleteTopicEnable=true \
      --set authorizerClassName="kafka.security.authorizer.AclAuthorizer" \
      --wait
    

    The username is "user"; obtain the password using the following command:

    kubectl -n kafka-cluster exec kafka-dev-0 -- cat /opt/bitnami/kafka/config/kafka_jaas.conf
    

    Create the Kubernetes secret by adding a JSON file called kc.json with the following contents:

    {
       "brokers": [
          "kafka-dev-0.kafka-dev-headless:9092"
       ],
       "sasl": {
          "mechanism": "PLAIN",
          "username": "user",
          "password": "<password-you-obtained-in-step-2>"
       }
    }

    Once this file is created, apply it by running the following command

    kubectl -n kafka-cluster create secret generic kafka-creds --from-file=credentials=kc.json
  3. Install kubefwd.

  4. Run kubefwd for kafka-cluster namespace which will make internal k8s services locally accessible:

    sudo kubefwd svc -n kafka-cluster
    
  5. To run tests, export the KAFKA_PASSWORD environment variable using the password from step 2

    export KAFKA_PASSWORD="<password-you-obtained-in-step-2>"
    
  6. (Optional) Install the kafka cli.

  7. (Optional) Configure the kafka cli to talk to the local Kafka installation:

    1. Create a config file for the client with the following content at ~/.kcl/config.toml:

      seed_brokers = ["kafka-dev-0.kafka-dev-headless:9092"]
      timeout_ms = 10000
      
      [sasl]
      method = "plain"
      user = "user"
      pass = "<password-you-obtained-in-step-2>"
      
      2. Verify that the cli can talk to the Kafka cluster:

      export KCL_CONFIG_DIR=~/.kcl
      kcl metadata --all
      

Building and Running the provider locally

Run against a Kubernetes cluster:

make run

Build, push, and install:

make all

Build image:

make image

Push image:

make push

Build binary:

make build

provider-kafka's People

Contributors

adarmiento, hasheddan, jastang, jograca, jutley, kenatliberty, malodie, mariomalinditex, marshmallory, muvaf, nalbury, rtoma, sergey-kizimov, stevendborrelli, tilian, turkenh


provider-kafka's Issues

Provider unhealthy (UnhealthyPackageRevision)

What happened?

crossplane-provider-kafka-controller gets an Unhealthy status right after a clean Helm install.

How can we reproduce it?

Install Helm Chart crossplane-stable/crossplane (1.9.1) with provider-kafka in values:

provider:
  packages:
    - crossplane/provider-kafka-controller:stable

The Provider reaches Installed status but stays unhealthy. No pods except crossplane and
crossplane-rbac-manager are running.

From ProviderRevision:

status:
  conditions:
    - lastTransitionTime: '2022-09-26T13:42:33Z'
      reason: UnhealthyPackageRevision
      status: 'False'
      type: Healthy

k get Provider -o yaml:

apiVersion: v1
items:
- apiVersion: pkg.crossplane.io/v1
  kind: Provider
  metadata:
    creationTimestamp: "2022-09-26T13:42:09Z"
    generation: 1
    name: crossplane-provider-kafka-controller
    resourceVersion: "1099151477"
    uid: 1dcb7352-2ac7-453a-8dfa-83530b03650e
  spec:
    ignoreCrossplaneConstraints: false
    package: crossplane/provider-kafka-controller:stable
    packagePullPolicy: IfNotPresent
    revisionActivationPolicy: Automatic
    revisionHistoryLimit: 1
    skipDependencyResolution: false
  status:
    conditions:
    - lastTransitionTime: "2022-09-26T13:42:33Z"
      reason: UnhealthyPackageRevision
      status: "False"
      type: Healthy
    - lastTransitionTime: "2022-09-26T13:42:27Z"
      reason: ActivePackageRevision
      status: "True"
      type: Installed
    currentIdentifier: crossplane/provider-kafka-controller:stable
    currentRevision: crossplane-provider-kafka-controller-711b31489f8d
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

What environment did it happen in?

Crossplane version: 1.9.1
K8s version: 1.21.5 (on-premise)

Topic controller Observe function doesn't distinguish between non-existing topics and errors from the client in retrieving the topic

From controller/topic/topic.go Observe() function:

	tpc, err := topic.Get(ctx, c.kafkaClient, meta.GetExternalName(cr))
	if tpc == nil {
		return managed.ExternalObservation{ResourceExists: false}, nil
	}
	if err != nil {
		return managed.ExternalObservation{}, errors.Wrapf(err, "cannot get topic spec from topic client")
	}

topic.Get either returns a topic or an error. When an error occurs, the topic is always nil and the function returns at the first check. When the topic is non-nil, err is always nil, so the code in the second if block is unreachable.

This code was probably written because topic.Get() returns an error both when a topic does not exist and when something went wrong, and the error alone does not allow the caller to discern which of the two happened.

As a result, errors from the client performing a Get are interpreted as "topic does not exist", triggering an unnecessary topic creation.

Replication Factor Updates Not Throwing Proper Error

What happened?

Replication Factors in Kafka are not able to be updated. In order to handle this in provider-kafka, we would like to send the user an error message when attempting to update Replication Factors. Currently there is a bug in how this is being handled.

How can we reproduce it?

Apply a configuration with a Replication Factor. Re-apply another configuration with an updated Replication Factor. You will not see any errors when describing this configuration.

Multi-Threading

What problem are you facing?

Currently, we do not have any kind of multi-threading enabled in this provider. A disconnect fix was added by @jutley, but that fix is not necessarily thread-safe, so enabling multi-threading now would probably result in unwanted effects.

How could Crossplane help solve your problem?

Adding this issue to track whether there is significant interest in multi-threading this provider, as well as to gain insight from Crossplane about it. https://pkg.go.dev/sigs.k8s.io/controller-runtime/pkg/[email protected]#Options references the configuration option, which currently defaults to 1.

"AccessControlList" are silently Ignored by kustomize

What happened?

We are running kustomize in our environment, and "AccessControlList" resources are silently ignored during rendering.

kubernetes-sigs/kustomize#5042 (comment)

It turns out that this is because of the Kubernetes API conventions:

Lists are collections of resources of one (usually) or more (occasionally) kinds.
The name of a list kind must end with "List". Lists have a limited set of common metadata. All lists use the required "items" field to contain the array of objects they return. Any kind that has the "items" field must be a list kind.
Most objects defined in the system should have an endpoint that returns the full set of resources, as well as zero or more endpoints that return subsets of the full list. Some objects may be singletons (the current user, the system defaults) and may not have lists.

How can we reproduce it?

What environment did it happen in?

Crossplane version:

TLS support should allow a CA to be set for broker certificates

What problem are you facing?

When connecting to a Kafka cluster using TLS, there doesn't seem to be any way to set the CA used to verify the Kafka broker's certificates. This forces the user to use insecureSkipVerify: true, which is not ideal for obvious reasons.

How could Crossplane help solve your problem?

Provide a new field under TLS configuration allowing the CA certificate to be set. It can be similar to the clientCertificate and look something like this:

{
  "tls": {
    "serverCertificateAuthoritySecretRef": {
      "namespace": "crossplane",
      "name": "private-ca",
      "caField": "ca.crt"
    }
  }
}
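On the implementation side, the gist of the proposal is building a certificate pool from the referenced CA and using it to verify broker certificates. The sketch below uses only the Go standard library; the function name and shape are illustrative, not the provider's actual API.

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"errors"
	"fmt"
)

// newTLSConfig sketches what the provider could do with the bytes read from
// the proposed serverCertificateAuthoritySecretRef: build a cert pool from
// the CA PEM and verify broker certificates against it, instead of falling
// back to insecureSkipVerify.
func newTLSConfig(caPEM []byte) (*tls.Config, error) {
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(caPEM) {
		return nil, errors.New("no valid CA certificates found in provided PEM")
	}
	return &tls.Config{RootCAs: pool}, nil
}

func main() {
	// With garbage input the CA pool stays empty and we refuse to proceed,
	// rather than silently skipping verification.
	if _, err := newTLSConfig([]byte("not a certificate")); err != nil {
		fmt.Println("rejected invalid CA PEM:", err)
	}
}
```

The resulting *tls.Config would then be passed to the Kafka client's dialer.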

goroutine leak

What happened?

I've observed that the CPU utilization from the Kafka provider pod has been steadily growing over time. I've observed this in three different configurations:

  • provider -> mTLS proxy -> Kafka
  • provider w/ mTLS -> Kafka
  • provider w/o mTLS -> Kafka

I recently took a look at the metrics, and I can see that the number of goroutines is increasing over time, which I believe explains the CPU issues.


Interestingly, with the mTLS proxy in place, it also had growth in CPU usage over time. I believe this suggests that the goroutine leak involves the Kafka connection. I haven't done enough of a deep dive to identify the issue yet though.

How can we reproduce it?

As far as I can tell, you just have to use this provider-kafka component and take a look at the number of goroutines or CPU utilization. I don't think there are any special steps for repro.

What environment did it happen in?

Crossplane version: 1.11.0-rc.0.195.g3ce766b3
provider-kafka: xpkg.upbound.io/crossplane-contrib/provider-kafka:v0.2.0 (I also saw this behavior with 0.1.0)

mTLS support

What problem are you facing?

I'd like to use this provider to manage resources in an MSK cluster with mTLS auth enabled. Currently the provider does not support this, nor TLS. Before I start thinking about submitting this feature, I'd like to reach out to hear of ideas about implementation/configuration etc.

The used Kafka library https://github.com/twmb/franz-go/ supports a pluggable dialer, which is used by the Kafka CLI kcl https://github.com/twmb/kcl/ to provide mTLS. Based on this I expect adding mTLS support to this provider should be possible and straightforward.

I'd like to hear your ideas about how to add the needed configuration in https://github.com/crossplane-contrib/provider-kafka/blob/main/internal/clients/kafka/config.go

How could Crossplane help solve your problem?

Implement the feature, or provide the information needed for submitting a successful PR. Thanks!

Client TLS certificate/key from file

What problem are you facing?

I want to use TLS to connect this provider to our Kafka clusters, but it seems like the certificate must be configured in a Kubernetes secret. This is something I would like to avoid since Kubernetes secrets are considered less secure than alternatives. We are using a CSI driver to obtain ephemeral certificates which I would like to use.

How could Crossplane help solve your problem?

Support for specifying a path within the provider pod for the client certificate in https://github.com/crossplane-contrib/provider-kafka/blob/main/internal/clients/kafka/config.go.

missing --poll, --sync, --max-reconcile-rate container arguments

What problem are you facing?

Kafka provider container is missing --poll, --sync, --max-reconcile-rate argument flags:

usage: crossplane-kafka-provider []

Kafka support for Crossplane.

Flags:
      --help             Show context-sensitive help (also try --help-long and --help-man).
  -d, --debug            Run with debug logging.
  -s, --sync=1h          Controller manager sync period such as 300ms, 1.5h, or 2h45m
  -l, --leader-election  Use leader election for the controller manager.

Other providers like provider-kubernetes and provider-aws do support them.

During our performance tests, when we had 1000 Managed Resources deployed, provisioning another one took ~10min.

What environment did it happen in?

Crossplane version: 1.13.2
Kafka Provider version: xpkg.upbound.io/crossplane-contrib/provider-kafka:v0.4.3

Cloud provider: AWS
Kubernetes version: EKS 1.24.x

How could Crossplane help solve your problem?

by adding support for --poll, --sync, --max-reconcile-rate

ACLs cannot be deleted

What happened?

Over the last few weeks I've been working on engineering a Kafka GitOps feature for our MSK clusters. For this I use Crossplane and this provider. To make the provider work with MSK I've contributed TLS + SCRAM authentication support.

Now managing Topics works great. Creation of ACLs works too, but deletion is not possible. To make this work I changed:

resp, err := cl.DescribeACLs(ctx, ab)
if err != nil {
	return nil, errors.Wrap(err, "describe ACLs response is empty")
}
if resp != nil {
	exists := resp[0].Described
	if len(exists) == 0 {
		return nil, errors.New("no create response for acl")
	}
}

... into:

        resp, err := cl.DescribeACLs(ctx, ab)
        if err != nil {
                return nil, errors.Wrap(err, "describe ACLs response is empty")
        }
        if exists := resp[0].Described; len(exists) == 0 {
                return nil, nil // no matching ACLs found
        }

The original code throws an error if no ACLs exist for specific criteria. My code allows this.

Now my code works flawlessly (for MSK), but since it is a significant change to the logic and it implies 'delete ACL' never worked, I wonder if I am missing something. So, I'd like a discussion before I submit a patch.

Cheers.

How can we reproduce it?

In short:

  • create an ACL (this works)
  • attempt to delete it, which fails

In detail:

I create the ACL. Here is the resource and its good health:

$ kubectl get accesscontrollist.acl.kafka.crossplane.io
NAME                                               READY   SYNCED   EXTERNAL-NAME                                                                                                                                                                                                                                      AGE
acl-managed-by-crossplane-kafka-provider-acltest   True    True     {"ResourceName":"topic-managed-by-crossplane-kafka-provider-acltest","ResourceType":"Topic","ResourcePrincipal":"User:Foo","ResourceHost":"*","ResourceOperation":"Read","ResourcePermissionType":"Allow","ResourcePatternTypeFilter":"Literal"}   92s

With kcl I check the Kafka side of things:

$ kcl admin acl describe --type any --pattern match --op any --perm any --name topic-managed-by-crossplane-kafka-provider-acltest
TYPE   NAME                                                PATTERN  PRINCIPAL                                                           HOST  OPERATION  PERMISSION  ERROR  ERROR MESSAGE
TOPIC  topic-managed-by-crossplane-kafka-provider-acltest  LITERAL  User:Foo                                                            *     READ       ALLOW

So, indeed: the ACL exists.

Now let's delete the ACL:

$ kubectl delete accesscontrollist.acl.kafka.crossplane.io/acl-managed-by-crossplane-kafka-provider-acltest
accesscontrollist.acl.kafka.crossplane.io "acl-managed-by-crossplane-kafka-provider-acltest" deleted
<hangs>

The delete command hangs. Meanwhile in Kafka the ACL has been removed.

Checking the to-be-deleted ACL resource from another terminal shows it's now unREADY and unSYNCED. Both as expected:

$ k get accesscontrollist.acl.kafka.crossplane.io
NAME                                               READY   SYNCED   EXTERNAL-NAME                                                                                                                                                                                                                                      AGE
acl-managed-by-crossplane-kafka-provider-acltest   False   False    {"ResourceName":"topic-managed-by-crossplane-kafka-provider-acltest","ResourceType":"Topic","ResourcePrincipal":"User:Foo","ResourceHost":"*","ResourceOperation":"Read","ResourcePermissionType":"Allow","ResourcePatternTypeFilter":"Literal"}   16m

Checking the kafka provider (running in debug mode) logs I see this:

2022-07-13T11:43:26.954+0200	DEBUG	provider-kafka	Cannot observe external resource	{"controller": "managed/accesscontrollist.acl.kafka.crossplane.io", "request": "/acl-managed-by-crossplane-kafka-provider-acltest", "uid": "90943fdb-d5d6-44ab-83b6-77f9dc66a15a", "version": "179916", "external-name": "{\"ResourceName\":\"topic-managed-by-crossplane-kafka-provider-acltest\",\"ResourceType\":\"Topic\",\"ResourcePrincipal\":\"User:Foo\",\"ResourceHost\":\"*\",\"ResourceOperation\":\"Read\",\"ResourcePermissionType\":\"Allow\",\"ResourcePatternTypeFilter\":\"Literal\"}", "error": "cannot List ACLs: no create response for acl", "errorVerbose": "no create response for acl\ngithub.com/crossplane-contrib/provider-kafka/internal/clients/kafka/acl.List\n\t/Users/rtoma/redacted/projects/provider-kafka-fork/internal/clients/kafka/acl/acl.go:66\ngithub.com/crossplane-contrib/provider-kafka/internal/controller/acl.(*external).Observe\n\t/Users/rtoma/redacted/projects/provider-kafka-fork/internal/controller/acl/acl.go:158\ngithub.com/crossplane/crossplane-runtime/pkg/reconciler/managed.(*Reconciler).Reconcile\n\t/Users/rtoma/go/pkg/mod/github.com/crossplane/[email protected]/pkg/reconciler/managed/reconciler.go:620\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/Users/rtoma/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:298\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/Users/rtoma/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:253\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/Users/rtoma/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:214\nruntime.goexit\n\t/opt/homebrew/Cellar/go/1.18.3/libexec/src/runtime/asm_arm64.s:1263\ncannot List 
ACLs\ngithub.com/crossplane-contrib/provider-kafka/internal/controller/acl.(*external).Observe\n\t/Users/rtoma/redacted/projects/provider-kafka-fork/internal/controller/acl/acl.go:161\ngithub.com/crossplane/crossplane-runtime/pkg/reconciler/managed.(*Reconciler).Reconcile\n\t/Users/rtoma/go/pkg/mod/github.com/crossplane/[email protected]/pkg/reconciler/managed/reconciler.go:620\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/Users/rtoma/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:298\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/Users/rtoma/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:253\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/Users/rtoma/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:214\nruntime.goexit\n\t/opt/homebrew/Cellar/go/1.18.3/libexec/src/runtime/asm_arm64.s:1263"}
2022-07-13T11:43:26.955+0200	DEBUG	controller-runtime.manager.events	Warning	{"object": {"kind":"AccessControlList","name":"acl-managed-by-crossplane-kafka-provider-acltest","uid":"90943fdb-d5d6-44ab-83b6-77f9dc66a15a","apiVersion":"acl.kafka.crossplane.io/v1alpha1","resourceVersion":"179916"}, "reason": "CannotObserveExternalResource", "message": "cannot List ACLs: no create response for acl"}

From above debug blob I'd like to highlight:

"errorVerbose": "no create response for acl
  github.com/crossplane-contrib/provider-kafka/internal/clients/kafka/acl.List
  /Users/rtoma/redacted/projects/provider-kafka-fork/internal/clients/kafka/acl/acl.go:66

This is why I believe 'delete ACL' is flawed. The acl.List method throws an error when no ACLs exist. Now to me finding no matching ACLs seems like the expected result of a delete ACL action. But maybe I'm missing something?

What environment did it happen in?

Crossplane version: 1.8.1
Kafka provider: 0.1.0 with TLS/SCRAM support
Kubernetes: 1.22.8 (OpenShift on AWS)

Support ACL configuration via Zookeeper

What problem are you facing?

A fully feature complete provider-kafka needs to be able to use zookeeper to store and configure ACLs on the server per Kafka’s ACL structure specification and resource patterns.

How could Crossplane help solve your problem?

Users of the provider would have the ability to have the control plane to store ACL configurations on the server in accordance with Kafka's ACL structure specification.

CannotCreateExternalResource EOF error

What happened?

After following the documentation I expected to be able to deploy a topic resource. Instead, I keep getting the errors below in the kafka-provider's logs:

[ERROR] unable to request api versions; broker: seed 0, err: EOF
[provider-kafka] 2023-01-30T14:40:29.762Z DEBUG   controller-runtime.manager.events       Warning {"object": {"kind":"Topic","name":"sample-topic","uid":"xxxx","apiVersion":"topic.kafka.crossplane.io/v1alpha1","resourceVersion":"1200071675"}, "reason": "CannotCreateExternalResource", "message": "EOF"}

topic.yaml

apiVersion: topic.kafka.crossplane.io/v1alpha1
kind: Topic
metadata:
  name: sample-topic
spec:
  forProvider:
    replicationFactor: 1
    partitions: 1
  providerConfigRef:
    name: kafka-stage-provider

How can we reproduce it?

I followed the README steps, deployed the topic resource and looked for the provider logs

What environment did it happen in?

Crossplane version: helm.sh/chart: crossplane-1.7.0
Kubernetes distribution: GKE 1.21
Kafka: Confluent Cluster

Support Topic Level Configuration Updates

What problem are you facing?

A fully feature complete provider-kafka needs the ability to update the following Topic level configuration values:

cleanup.policy
compression.type
delete.retention.ms
file.delete.delay.ms
flush.messages
flush.ms
follower.replication.throttled.replicas
index.interval.bytes
leader.replication.throttled.replicas
local.retention.bytes
local.retention.ms
max.compaction.lag.ms
max.message.bytes
message.format.version
message.timestamp.difference.max.ms
message.timestamp.type
min.cleanable.dirty.ratio
min.compaction.lag.ms
min.insync.replicas
preallocate
remote.storage.enable
retention.bytes
retention.ms
segment.bytes
segment.index.bytes
segment.jitter.ms
segment.ms
unclean.leader.election.enable
message.downconversion.enable

How could Crossplane help solve your problem?

Providing optional Topic level configuration values via Crossplane will allow users of the provider to achieve similar results to using the Kafka CLI.
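Assuming the Topic spec exposes a map for these per-topic overrides (the `config` field below is a sketch; consult the Topic CRD for the exact field name and value types), usage could look like:

```yaml
apiVersion: topic.kafka.crossplane.io/v1alpha1
kind: Topic
metadata:
  name: sample-topic
spec:
  forProvider:
    partitions: 3
    replicationFactor: 2
    # hypothetical: topic-level overrides drawn from the list above
    config:
      cleanup.policy: "compact"
      retention.ms: "604800000"
  providerConfigRef:
    name: kafka-stage-provider
```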

Refactor to use Confluent client

We have decided to move forward with refactoring the kafka provider to use Confluent's Kafka Go client. Opening this issue for discussion, questions, concerns, etc.

Support to Add, Remove and List ACLs

What problem are you facing?

A fully feature complete provider-kafka needs to enable users to add, remove, and list ACLs just like a user would via the authorizer CLI.

How could Crossplane help solve your problem?

An ACL controller in provider-kafka providing CRUD operations for ACLs could replicate the functionality of the authorizer CLI.

Support for Updating Topic Partitions

What problem are you facing?

A fully feature complete provider-kafka Topic controller should be able to update partitions: creating new ones when the user requests additional partitions, or throwing an error when the user attempts to remove partitions, which Kafka does not support.

Runtime error `log.SetLogger(...) was never called` in the crossplane-provider-kafka-94381af0b190 pod

What happened?

We rolled out crossplane-provider-kafka v0.5.0 two weeks ago and then upgraded the Kubernetes version from v1.26 to v1.27; the provider continued working after the upgrade. After a few days, however, we started getting errors from the crossplane-provider-kafka pod.

What environment did it happen in?

Crossplane version: v1.15.1
Kubernetes version: v1.27.12-eks-adc7111
kafka provider : xpkg.upbound.io/crossplane-contrib/provider-kafka:v0.5.0

Provider Logs

kubectl logs crossplane-provider-kafka-94381af0b190-c5c7fbc94-zhs5m

[controller-runtime] log.SetLogger(...) was never called; logs will not be displayed. Detected at:
goroutine 161 [running]:
runtime/debug.Stack()
	runtime/debug/stack.go:24 +0x5e
sigs.k8s.io/controller-runtime/pkg/log.eventuallyFulfillRoot()
	sigs.k8s.io/[email protected]/pkg/log/log.go:60 +0xcd
sigs.k8s.io/controller-runtime/pkg/log.(*delegatingLogSink).WithValues(0xc00005f300, {0xc004677740, 0x2, 0x2})
	sigs.k8s.io/[email protected]/pkg/log/deleg.go:168 +0x49
github.com/go-logr/logr.Logger.WithValues(...)
	github.com/go-logr/[email protected]/logr.go:323
sigs.k8s.io/controller-runtime/pkg/builder.(*Builder).doController.func1(0xc004677720)
	sigs.k8s.io/[email protected]/pkg/builder/controller.go:400 +0x173
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler(0xc000444f00, {0x20dbbc8, 0xc00043a410}, {0x1a4ad00?, 0xc000050e00?})
	sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:306 +0x16a
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem(0xc000444f00, {0x20dbbc8, 0xc00043a410})
	sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:266 +0x1c9
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2()
	sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:227 +0x79
created by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2 in goroutine 30
	sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:223 +0x565

ArgoCD-Crossplane: Topic finalizer prevents deletion

Hi team,

Is there a way to override/remove Topic default finalizer (finalizer.managedresource.crossplane.io) declaratively?

The use-case:
ArgoCD cannot remove the Topic object after the manifest is deleted in Git, and synchronization never ends.

If the finalizer is removed from the Topic's spec manually, everything proceeds as expected immediately.

Already tried:

metadata:
  finalizers:
    - resources-finalizer.argocd.argoproj.io

and

metadata:
  finalizers: []

but no luck, finalizer.managedresource.crossplane.io finalizer is still added automatically.

crossplane:v1.9.1
provider-kafka-controller:v0.1.0
argocd:v2.4.12

Support SASL mechanism AWS_MSK_IAM for use with MSK

What problem are you facing?

Would like to be able to use AWS IAM when authenticating against an MSK cluster.

How could Crossplane help solve your problem?

Users would be able to attach a role to the provider and therefore follow the same pattern that many other tools use when running in EKS.

I would be happy to make this contribution myself but can't find a contribution doc, so just let me know what approach you like.

When a connection between the Kafka Crossplane provider and MSK drops in AWS, the provider never recovers

What happened?

We have the Crossplane Kafka provider deployed on EKS. After a short period it loses the connection to MSK and never recovers. Until this happens I can see the topic being created and synced, and I can query it in MSK. After the connection drops, the topic CRD becomes desynchronised and never recovers either. Only a full uninstall of the provider will open the connection again.

How can we reproduce it?

  1. Install Crossplane and Kafka provider on EKS
  2. Drop the connection between the provider and MSK
  3. Wait a bit, the connection should drop and never recover.

Setting used for the MSK Kafka secret:

{
     "brokers":[
       "broker-dns:9098"
      ],
      "sasl":{
        "mechanism":"AWS-MSK-IAM"
      }
}

What environment did it happen in?

Crossplane version: 1.13.2
Crossplane Kafka version: 0.4.3
EKS version: 1.24

Kafka provider logs:

[ERROR] unable to initialize sasl; broker: seed 0, err: read tcp 10.2.38.31:35902->10.2.44.218:9098: i/o timeout
[ERROR] unable to initialize sasl; broker: seed 0, err: read tcp 10.2.38.31:51264->10.2.44.218:9098: i/o timeout
[ERROR] unable to initialize sasl; broker: seed 0, err: read tcp 10.2.38.31:41998->10.2.44.218:9098: i/o timeout
[WARN] unable to open connection to broker; addr: broker-dns:9098, broker: seed 0, err: context deadline exceeded
[WARN] unable to open connection to broker; addr: broker-dns:9098, broker: seed 0, err: context deadline exceeded
