confluentinc / cli

CLI for Confluent Cloud and Confluent Platform

Home Page: https://docs.confluent.io/confluent-cli/current/overview.html

License: Other


cli's Introduction

Confluent CLI


The Confluent CLI lets you manage your Confluent Cloud and Confluent Platform deployments, right from the terminal.

Documentation

The Confluent CLI Overview shows how to get started with the Confluent CLI.

The Confluent CLI Command Reference contains information on command arguments and flags, and is programmatically generated from this repository.

Contributing

All contributions are appreciated, no matter how small! When opening a PR, please make sure to follow our contribution guide.

Installation

The Confluent CLI is available to install for macOS, Linux, and Windows.

Homebrew

Install the latest version of confluent to /usr/local/bin (Intel) or /opt/homebrew/bin (Apple Silicon):

brew install confluentinc/tap/cli

APT (Ubuntu and Debian)

Install the latest version of confluent to /usr/bin (requires glibc 2.17 or above for amd64 and glibc 2.27 or above for arm64):

wget -qO - https://packages.confluent.io/confluent-cli/deb/archive.key | sudo apt-key add -
sudo apt install software-properties-common
sudo add-apt-repository "deb https://packages.confluent.io/confluent-cli/deb stable main"
sudo apt update && sudo apt install confluent-cli

YUM (RHEL and CentOS)

Install the latest version of confluent to /usr/bin (requires glibc 2.17 or above for amd64 and glibc 2.27 or above for arm64):

sudo rpm --import https://packages.confluent.io/confluent-cli/rpm/archive.key
sudo yum install yum-utils
sudo yum-config-manager --add-repo https://packages.confluent.io/confluent-cli/rpm/confluent-cli.repo
sudo yum clean all && sudo yum install confluent-cli

Windows

  1. Download the latest Windows ZIP file from https://github.com/confluentinc/cli/releases/latest
  2. Unzip confluent_X.X.X_windows_amd64.zip
  3. Run confluent.exe

Docker

Pull the latest version:

docker pull confluentinc/confluent-cli:latest

Pull confluent v3.6.0:

docker pull confluentinc/confluent-cli:3.6.0

Building from Source

make build
dist/confluent_$(go env GOOS)_$(go env GOARCH)/confluent -h

Cross Compile for Other Platforms

From darwin/amd64 or darwin/arm64, you can build the CLI for any other supported platform.

To build for darwin/amd64 from darwin/arm64, run the following:

GOARCH=amd64 make build

To build for darwin/arm64 from darwin/amd64, run the following:

GOARCH=arm64 make build

To build for linux/amd64 (glibc or musl), run the following:

brew install FiloSottile/musl-cross/musl-cross
GOOS=linux GOARCH=amd64 make cross-build

To build for linux/arm64 (glibc or musl), run the following:

brew install FiloSottile/musl-cross/musl-cross
GOOS=linux GOARCH=arm64 make cross-build

To build for windows/amd64, run the following:

brew install mingw-w64
GOOS=windows GOARCH=amd64 make cross-build

cli's People

Contributors

andymg3, anshul-goyal, arodoni, arvindth, brianstrauch, cjohnson-confluent, clarence97, codyaray, confluentjenkins, cryoshida, cyrusv, dabh, dependabot[bot], frankgreco, guttz, joel-hamill, kevin-wu24, luca-filipponi, lucy-fan, mtodzo, muweihe, norwood, pagrawal10, prathibha-m, sgagniere, stevenpyzhang, swist, tadsul, yannickpferr, zzbennett


cli's Issues

Concurrent Confluent CLI processes can read an inconsistent config.json file

I have this shell script which uses the Confluent CLI to delete all subjects in a concurrent fashion:

#!/usr/bin/env bash
delete_subject() {
    echo "Soft deleting subject $1"
    confluent schema-registry schema delete --force --version all --subject "$1"
    echo "Hard deleting subject $1"
    confluent schema-registry schema delete --force --permanent --version all --subject "$1"

}

subjects=$(confluent schema-registry schema list -o json | jq -r ".[].subject")

# Perform deletes in parallel
for subject in ${subjects}; do
    delete_subject "$subject" &
    sleep 0.2
done
wait

echo "All subjects deleted."

It works most of the time, but I sometimes get these error messages:

Successfully soft deleted all versions for subject "insurance_customer_activity-value".
  Version  
-----------
        1  
Successfully soft deleted all versions for subject "payment_transaction-value".
  Version  
-----------
        1  
Hard deleting subject shoestore_clickstream-value
Successfully hard deleted all versions for subject "gaming_player-value".
  Version  
-----------
        2  
Hard deleting subject shoestore_shoe-value
Hard deleting subject fleetmgmt_location-value
Error: unable to read configuration file "/Users/gphilippart/.confluent/config.json": invalid character ']' after top-level value
Hard deleting subject pizzastore_order_cancelled-value
Hard deleting subject insurance_customer_activity-value
Error: unable to read configuration file "/Users/gphilippart/.confluent/config.json": invalid character ']' after top-level value
Error: unable to read configuration file "/Users/gphilippart/.confluent/config.json": invalid character ']' after top-level value
Error: unable to read configuration file "/Users/gphilippart/.confluent/config.json": invalid character ']' after top-level value
Hard deleting subject payment_transaction-value

What is probably happening is that another process reads config.json in the middle of a config write operation.
It would be best to write the config.json changes to a temporary file and then rename it over the original, which is an atomic operation at the OS level.

topic produce requires a CA location

According to the confluent kafka topic produce command, --ca-location is required for on-premises interactions.

Other confluent kafka topic <subcommand> commands do not require this flag.

It does not make sense, to me, to require this flag for only one of these commands, especially since you can choose to establish the connection via the --protocol flag, which allows PLAINTEXT (fine for local Docker containers).

Support OAuth2 for Consuming and Producing messages to topics

As a security-aware developer,
I would like to get rid of API Keys and use the OAuth2 flow
to consume and produce messages from Confluent cloud.

This flow is already possible when using Confluent Cloud and Java (see link above), but not possible for SSO users when using the Confluent CLI. It would be a major security improvement.

This does not have to be a breaking change; it would simply be another way to authenticate.

Launching confluent local kafka start fails with Error: runtime error: index out of range [0] with length 0

When launching 'confluent local kafka start', I get this error:

confluent local -vvvv kafka start
The local commands are intended for a single-node development environment only, NOT for production usage. See more: https://docs.confluent.io/current/cli/index.html


Pulling from confluentinc/confluent-local
Digest: sha256:8e391de42cfcd3498e7317dcf159790f1f1cc3f3ffce900b30d7da23888687fd
Status: Image is up to date for confluentinc/confluent-local:latest
2023-09-19T10:04:18.214+0200 [TRACE] Successfully pulled Confluent Local image
Error: runtime error: index out of range [0] with length 0

Find below my CLI information

confluent - Confluent CLI

Version:     v3.33.0
Git Ref:     53da89b2
Build Date:  2023-09-14T22:43:04Z
Go Version:  go1.21.0 X:boringcrypto (darwin/amd64)
Development: false

Find below my docker environment information

Client:
 Version:    24.0.6
 Context:    desktop-linux
 Debug Mode: false

Server:
 Containers: 0
  Running: 0
  Paused: 0
  Stopped: 0
 Images: 59
 Server Version: 24.0.6
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Using metacopy: false
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Cgroup Version: 2
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 8165feabfdfe38c65b599c4993d227328c231fca
 runc version: v1.1.8-0-g82f18fe
 init version: de40ad0
 Security Options:
  seccomp
   Profile: unconfined
  cgroupns
 Kernel Version: 6.3.13-linuxkit
 Operating System: Docker Desktop
 OSType: linux
 Architecture: x86_64
 CPUs: 2
 Total Memory: 7.769GiB
 Name: docker-desktop
 ID: 746e542e-f404-4449-b366-ba2a54e391a8
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 HTTP Proxy: http.docker.internal:3128
 HTTPS Proxy: http.docker.internal:3128
 No Proxy: hubproxy.docker.internal
 Experimental: false
 Insecure Registries:
  hubproxy.docker.internal:5555
  127.0.0.0/8
 Live Restore Enabled: false

WARNING: daemon is not using the default seccomp profile

confluent kafka client-config is broken

I'm following the Kafka-Connect-101 (https://developer.confluent.io/courses/kafka-connect/connect-api-hands-on/) and I can't figure out what's wrong with the following:
Command 14 at tutorial:

confluent kafka client-config create java --sr-apikey <sr API key> --sr-apisecret <sr API secret> | tee $HOME/.confluent/java.config

The tutorial is outdated: the documentation has since changed --sr-apikey to its long form, and the two are no longer consistent (https://docs.confluent.io/confluent-cli/current/command-reference/kafka/client-config/create/confluent_kafka_client-config_create_java.html).
I'm basically getting "command not found" for client-config, and I couldn't find any information on whether the command has been moved, since the documentation still says it's the same.

VERSION: confluent-7.1.1

3456253:~# confluent kafka client-config
Manage Apache Kafka.

Usage:
  confluent kafka [command]

Available Commands:
  acl            Manage Kafka ACLs.
  cluster        Manage Kafka clusters.
  link           Manages inter-cluster links.
  mirror         Manages cluster linking mirror topics.
  region         Manage Confluent Cloud regions.
  topic          Manage Kafka topics.

Global Flags:
  -h, --help            Show help for this command.
  -v, --verbose count   Increase verbosity (-v for warn, -vv for info, -vvv for debug, -vvvv for trace).

Use "confluent kafka [command] --help" for more information about a command.

decryptbytes: procdecryptdata: The parameter is incorrect when consuming using api keys in the command

There's an error when consuming from a topic using API keys passed on the command line:

confluent kafka topic consume <topicName> --from-beginning --schema-registry-api-key <srApiKey> --schema-registry-api-secret <srApiSecret> --api-key <kafkaApiKey> --api-secret <kafkaApiSecret> --environment <env> --cluster <cluster>
Error: failed to create consumer: decryptbytes: procdecryptdata: The parameter is incorrect.
confluent - Confluent CLI

Version:     v3.40.0
Git Ref:     299e722ef
Build Date:  2023-11-02T23:32:56Z
Go Version:  go1.21.0 X:boringcrypto (windows/amd64)
Development: false

Errors with the Windows version

I think there may be a bug in the Confluent CLI for Windows: I installed the latest available version (confluent_3.25.1_windows_amd64.zip), and it works fine for some operations (getting the list of topics from the cluster, creating a topic), so connectivity and credentials are OK, but I get an error trying to read from an existing topic with confluent kafka topic consume -b my-topic:

%3|1691140847.189|FAIL|Confluent-CLI_v3.25.1#consumer-1| [thrd:sasl_ssl://pkc-1wvvj.westeurope.azure.confluent.cloud:9092/boot]: sasl_ssl://pkc-1wvvj.westeurope.azure.confluent.cloud:9092/bootstrap: SASL authentication error: Authentication failed (after 5283ms in state AUTH_REQ)

From Linux, I don't get this error.

Error: update client failure: error updating CLI binary

I'm seeing the following error when trying to update confluent locally on Apple M1 using macOS 13.3.1 (a) (Ventura):

➜ confluent update
Checking for updates...
New version of confluent is available
Current Version: v3.11.0
Latest Version:  v3.12.0
[5/4/2023] Confluent CLI v3.12.0 Release Notes
==============================================

New Features
------------
  - Add `--topics` flag to `confluent asyncapi export` to only export specified topics



Do you want to download and install this update? (y/n): y
Downloading confluent version 3.12.0...
Done. Downloaded 43.31 MB in 1 seconds. (32.41 MB/s)
Error: update client failure: error updating CLI binary: unable to copy /var/folders/0_/nkh8mcg92pq1fft8mrzcg98w0000gn/T/confluent3475877262/confluent-v3.12.0-darwin-arm64 to /usr/local/bin/confluent: remove /usr/local/bin/confluent: permission denied

Suggestions:
    Please submit a support ticket.
    In the meantime, see link for other ways to download the latest CLI version:
    https://docs.confluent.io/current/cli/installing.html

However, I'm not seeing the issue on Intel macOS 11.7.6 (Big Sur):

➜ confluent update
Checking for updates...
New version of confluent is available
Current Version: v3.11.0
Latest Version:  v3.12.0
[5/4/2023] Confluent CLI v3.12.0 Release Notes
==============================================

New Features
------------
  - Add `--topics` flag to `confluent asyncapi export` to only export specified topics



Do you want to download and install this update? (y/n): y
Downloading confluent version 3.12.0...
Done. Downloaded 45.67 MB in 3 seconds. (14.25 MB/s)

In short, one needs to run sudo confluent update instead of confluent update. Thus, I believe the confluent binary should invoke sudo in such situations.

Run command with cloud api key instead of user login

Currently I have a service account with a specific role and a cloud api key.
I would like to run a command (e.g. creating a role binding) by using this cloud api key.

confluent login is only meant to login a user (with email+password or SSO) and confluent api-key store/use is only meant for kafka clusters, if I'm not mistaken.

My current workaround would be to use the Confluent Cloud API I think, but it would be nice if we could use the confluent cli as well. Did I miss something or is this not possible?

confluent cli does not take BROWSER environment variable into account within wsl

When I run confluent login within WSL, it opens the browser within WSL.
The problem with this is that I cannot log in with SSO, because I get an Azure error that my device is unmanaged.
For other tools I solved this by setting BROWSER=wslview, but it looks like the Confluent CLI does not take this environment variable into account.

The workaround is to use the --no-browser flag, but it would still be nice if this could be fixed.
My second workaround is removing Firefox within WSL.

Snappy support. Not implemented: Decompression (codec 0x2) of message

There's an error when consuming from a topic that seems related to a lack of snappy support:

confluent kafka topic consume --from-beginning --value-format avro

% Error: Local: Not implemented: Decompression (codec 0x2) of message at 152711835 of 165 bytes failed: Local: Not implemented
% Error: Local: Not implemented: Decompression (codec 0x2) of message at 152711835 of 165 bytes failed: Local: Not implemented
% Error: Local: Not implemented: Decompression (codec 0x2) of message at 152711835 of 165 bytes failed: Local: Not implemented
% Error: Local: Not implemented: Decompression (codec 0x2) of message at 152711835 of 165 bytes failed: Local: Not implemented
% Error: Local: Not implemented: Decompression (codec 0x2) of message at 152711835 of 165 bytes failed: Local: Not implemented

Version:     v3.40.0
Git Ref:     299e722ef
Build Date:  2023-11-02T23:32:56Z
Go Version:  go1.21.0 X:boringcrypto (windows/amd64)
Development: false

confluent cli fails to create machine-id file

I'm trying to deploy the confluent-cli pod into an EKS cluster with OPA Gatekeeper. It has an OPA constraint that enforces readOnlyRootFilesystem. Therefore, we must set the readOnlyRootFilesystem: true to comply with the OPA constraint.

However, it prevents writing to the pod's /etc folder. When we run the confluent login --no-browser, it doesn't create the /etc/machine-id.
Hence, we get the following error after a successful login.

cipher: message authentication failed.

Is storing the machine-id in the confluent user home (/home/confluent) possible?

asyncapi cli command does not support on-prem Confluent

Hello

We are looking for a deployment automation tool to import topic and schema configuration via the AsyncAPI spec into our on-prem Confluent Platform. The asyncapi import/export function is powerful and could help address this need. However, after studying the official Confluent CLI documentation a bit, it seems the asyncapi command does not support on-prem yet.

May I know if there is any roadmap for this?

Orphan ACLs left behind by a deleted service account

It looks like it's possible to delete a service account in Confluent Cloud with ACLs still assigned to it. The CLI doesn't throw an error in that case, but it also doesn't delete the ACLs automatically (however, cluster API keys related to the service account ARE deleted automatically).

This leads to orphan ACLs being left behind that still count toward the per-cluster ACL limit.

It would be nice if those ACLs were also deleted automatically, just like API keys.

Schema API Keys Result in 401

I recently upgraded from 3.38 to 3.45.

Using the confluent kafka topic consume command, when specifying details for Schema Registry API Key/secret and endpoint, I get a 401 with an API key I have been using before without error.

Concretely I get the following error output

Starting Kafka Consumer. Use Ctrl-C to exit.
Error: failed to validate Schema Registry client: 401 Unauthorized

I imagine the same bug exists for produce as well.

confluent kafka topic produce/consume misbehaving for integers

Cannot produce integer keys with confluent cli version v3.30.1

Steps to reproduce:

  1. Init the producer with integer key and string value
    confluent kafka topic produce test.integers --key-format integer --value-format string --parse-key

  2. Send a record:
    1:Test

  3. Init the consumer and receive a wrong key:
    confluent kafka topic consume test.integers --key-format integer --value-format string --print-key --from-beginning

bootstrap option not being recognised

I am trying to use the Confluent CLI against an on-prem cluster, and the bootstrap option is not recognised.

Version: v3.1.1
Git Ref: 336dc3b
Build Date: 2023-02-10T00:08:09Z
Go Version: go1.20 X:boringcrypto (darwin/amd64)
Development: false

Screenshot 2023-02-17 at 16 56 09

General questions about `confluent local kafka start`

Hello,

I have a few general remarks and questions in regard to the docker version for what used to be confluent local services start.

From my understanding of previous use, the bootstrap servers were available by default at localhost:9092. From the logged KafkaRestConfig I can see that bootstrap.servers = broker:44535. Is this documented somewhere, or are users expected to check the KafkaRestConfig manually?

We have our local development environment running in Docker Containers, and needed to manually get the docker run command and modify it with the --net flag to facilitate communication. Is there any plan to support additional configuration to avoid this issue?

This enabled us to get a connection with the bootstrap servers at broker:44535, but we fail to get a response from the Schema Registry which we would expect to be available at http://localhost:8081. Any idea why?

Additionally, we do not get any feedback in the terminal when running confluent local kafka start. You are left with a blank terminal until the image is finished downloading, and I miss the printed output previously available as:
Starting Zookeeper
Zookeeper is [UP]
Starting Kafka
Kafka is [UP]
Starting Schema Registry
Schema Registry is [UP]
Starting Kafka REST
Kafka REST is [UP]
Starting Connect
Connect is [UP]
Starting KSQL Server
KSQL Server is [UP]
Starting Control Center
Control Center is [UP]

It also sometimes fails to create a Docker container when running confluent local kafka start, and multiple retries are needed, without any console output.

I do also believe that this documentation: https://docs.confluent.io/platform/6.2/quickstart/ce-quickstart.html is outdated after the 3.16 release.

Thanks in advance!

BR

invalid memory address or nil pointer dereference when consuming a topic using a context

There is an error when using contexts when consuming from a topic

confluent kafka topic consume <topicName> --context dev
Error: runtime error: invalid memory address or nil pointer dereference

CLI details:

confluent - Confluent CLI

Version:     v3.40.0
Git Ref:     299e722ef
Build Date:  2023-11-02T23:32:56Z
Go Version:  go1.21.0 X:boringcrypto (windows/amd64)
Development: false

Error: the Confluent CLI requires Java version 1.8 or 1.11.

Hello,
When I execute confluent local services start I am facing the error:

The local commands are intended for a single-node development environment only, NOT for production usage. See more: https://docs.confluent.io/current/cli/index.html As of Confluent Platform 8.0, Java 8 will no longer be supported.

Using CONFLUENT_CURRENT: /var/folders/65/yd73hltj4_lc72d20ppndnsm0000gn/T/confluent.957436
ZooKeeper is [UP]
Kafka is [UP]
Error: the Confluent CLI requires Java version 1.8 or 1.11.
See https://docs.confluent.io/current/installation/versions-interoperability.html .
If you have multiple versions of Java installed, you may need to set JAVA_HOME to the version you want Confluent to use.

I am using the following Java version:

openjdk version "17.0.9" 2023-10-17
OpenJDK Runtime Environment Homebrew (build 17.0.9+0)
OpenJDK 64-Bit Server VM Homebrew (build 17.0.9+0, mixed mode, sharing)

My operating system is macOS 13.6.4 on a MacBook Pro.
And lastly, I downloaded confluent-7.6.0.
Please, can you help me?

Kind regards

asyncapi import fails to import more complex schema

Hi,
first of all I would like to thank you for implementing confluent asyncapi import command. It's extremely useful!

Unfortunately, it looks like it doesn't handle $ref too well. Check out the following example.

confluent asyncapi import --file asyncapi.yaml --overwrite --unsafe-trace

The contents of asyncapi.yaml:
asyncapi: 2.6.0

info:
  title: Example
  version: 1.0.0

channels:
  entitlements-v1:
    x-messageCompatibility: NONE
    subscribe:
      operationId: EntitlementsV1Subscribe
      message:
        $ref: "#/components/messages/EntitlementsV1Message"

components:
  messages:
    EntitlementsV1Message:
      name: EntitlementsV1Message
      contentType: application/json
      payload:
        title: EntitlementsV1Message
        type: object
        required:
          - uuid
          - level
        properties:
          uuid:
            type: string
            format: uuid
          level:
            $ref: "#/components/schemas/EntityLevel"

  schemas:
    EntityLevel:
      type: string
      enum:
        - team
        - org

The result is as follows:

POST /subjects/entitlements-v1-value/versions HTTP/1.1
User-Agent: Confluent-CLI/v3.10.0 (https://confluent.io; [email protected])
Content-Length: 251
Accept: application/vnd.schemaregistry.v1+json,application/vnd.schemaregistry+json; qs=0.9,application/json; qs=0.5
Content-Type: application/json
Accept-Encoding: gzip

{"schemaType":"JSON","schema":"{\"properties\":{\"level\":{\"$ref\":\"#/components/schemas/EntityLevel\"},\"uuid\":{\"format\":\"uuid\",\"type\":\"string\"}},\"required\":[\"uuid\",\"level\"],\"title\":\"EntitlementsV1Message\",\"type\":\"object\"}"}

HTTP/2.0 422 Unprocessable Entity
Content-Length: 390
Content-Type: application/vnd.schemaregistry.v1+json
Date: Tue, 18 Apr 2023 11:34:24 GMT

{"error_code":42201,"message":"Invalid schema {subject=entitlements-v1-value,version=0,id=-1,schemaType=JSON,references=[],schema={\"properties\":{\"level\":{\"$ref\":\"#/components/schemas/EntityLevel\"},\"uuid\":{\"format\":\"uuid\",\"type\":\"string\"}},\"required\":[\"uuid\",\"level\"],\"title\":\"EntitlementsV1Message\",\"type\":\"object\"}}, details: #: key [components] not found"}

[WARN]  unable to register schema: 422 Unprocessable Entity

As far as I can see, the problem is that the body of the POST /subjects/entitlements-v1-value/versions request contains the message schema only, without the components/schemas part.

Produce to topic with avro key schema results in an error

I have a topic with a simple avro schemas for key and value
Key schema:

{
  "fields": [
    {
      "name": "CustomerId",
      "type": "int"
    }
  ],
  "name": "CustomerKey",
  "type": "record"
}

Value schema:

{
  "fields": [
    {
      "name": "FirstName",
      "type": "string"
    },
    {
      "name": "LastName",
      "type": "string"
    }
  ],
  "name": "CustomerValue",
  "type": "record"
}

When trying to produce to the topic, I get an error:

confluent kafka topic produce <topic> --key-format avro --value-format avro --schema <value-schema-id> --key-schema <key-schema-id> --parse-key
{"CustomerId": 1}:{ "FirstName": "Maria", "LastName": "Garcia"}

The error reads:
Error: cannot decode textual record ... : short buffer

How to define Kafka storage path on Confluent CLI

I have a simple local installation (docs.confluent.io/platform/current/installation/installing_cp/…) and it works perfectly, but important parameters cannot be changed.

I use the standard home path for Confluent Platform:

   export CONFLUENT_HOME=/home/kafka/confluent/confluent-7.4.0

Unfortunately, I cannot change the path used to store data. I found the Kafka config and set the needed storage location:

  cat /home/kafka/confluent/confluent-7.4.0/etc/kafka/server.properties | grep log.dir
  log.dirs=/storage/kafka-logs

Then I restarted Confluent Platform; however, the path is still wrong:

 ./confluent local services start
 The local commands are intended for a single-node development environment only, NOT for production usage. See more: https://docs.confluent.io/current/cli/index.html
 As of Confluent Platform 8.0, Java 8 is no longer supported.

 Using CONFLUENT_CURRENT: /tmp/confluent.430818
 Starting ZooKeeper
 ZooKeeper is [UP]
 Kafka is [UP]
 Starting Schema Registry
 Schema Registry is [UP]
 Starting Kafka REST
 Kafka REST is [UP]
 Starting Connect
 Connect is [UP]
 Starting ksqlDB Server
 ksqlDB Server is [UP]
 Starting Control Center
 Control Center is [UP]

I tried to use the Confluent CLI to change the data storage path, but could not find this parameter. I tried

./confluent context list

in order to update it, and received the answer "None found."
A second possibility is to manage the default settings:

./confluent kafka acl list

The answer is "Error: Kafka REST URL not found", but in reality the REST API is reachable.


If I try to use

  ./confluent kafka topic list 

gives me "No session token found, please enter user credentials. To avoid being prompted, run `confluent login`." Log in to what? Logging in to Confluent Cloud requires a credit card first, even for the free plan. To change the configuration of my local installation, I must register my credit card with Confluent? Or what does login mean for this command?

Support 32 bit architectures

This error pops up when running this command:
curl -sL --http1.1 https://cnfl.io/cli | sh -s -- latest

confluentinc/cli crit uname_arch_check 'i686' got converted to 'i686' which is not a GOARCH value.

Couldn't install

OS: Ubuntu 22, 64-bit.
I am getting the message below when trying to install the latest version:

❯ curl -sL --http1.1 https://cnfl.io/cli | sh -s -- latest
confluentinc/cli info checking S3 for tag 'latest'- latest
confluentinc/cli crit unable to find 'latest' - use 'latest' or see https://docs.confluent.io/confluent-cli/current/release-notes.html for available versions.

Even installing 3.6.0 gives me the below error

❯ curl -sL https://raw.githubusercontent.com/confluentinc/cli/main/install.sh | sh -s -- -b /usr/local/bin 3.6.0
confluentinc/cli info checking S3 for tag '3.6.0'
confluentinc/cli crit unable to find '3.6.0' - use 'latest' or see https://docs.confluent.io/confluent-cli/current/release-notes.html for available versions.

Support for Linux ARM64 bits

We use Debian Linux ARM64 servers on AWS, and it would be great to have support for the Confluent CLI on Linux ARM64.

Docker: Unable to run confluent login from console

Using the Docker image, I am unable to get confluent login to work. Building a custom image that echoes what gets passed into x-www-browser, and using the URL on another machine, results in a timeout.


Publish an updated go module with a well known OSS licence

With the new confluent plugin support, it would be very useful to be able to extend some of the features found in the official CLI by reusing some of your code. Unfortunately, the CLI hasn't been released as a Go module in the last 3 years.

go list -versions  -m github.com/confluentinc/cli  
github.com/confluentinc/cli v0.1.0 v0.2.0 v0.3.0 v0.4.0 v0.5.0 v0.6.0 v0.7.0 v0.8.0 v0.9.0 v0.10.0 v0.11.0 v0.12.0 v0.13.0 v0.14.0 v0.15.0 v0.16.0 v0.17.0 v0.18.0 v0.19.0 v0.20.0 v0.21.0 v0.22.0 v0.23.0 v0.24.0 v0.25.0 v0.25.1 v0.25.2 v0.25.3 v0.26.0 v0.26.1 v0.27.0 v0.28.0 v0.29.0 v0.30.0 v0.31.0 v0.32.0 v0.33.0 v0.34.0 v0.35.0 v0.36.0 v0.37.0 v0.38.0 v0.39.0 v0.40.0 v0.41.0 v0.42.0 v0.43.0 v0.44.0 v0.45.0 v0.46.0 v0.47.0 v0.48.0 v0.49.0 v0.50.0 v0.51.0 v0.52.0 v0.53.0 v0.54.0 v0.55.0 v0.56.0 v0.57.0 v0.58.0 v0.59.0 v0.60.0 v0.61.0 v0.62.0 v0.63.0 v0.64.0 v0.65.0 v0.66.0 v0.67.0 v0.68.0 v0.69.0 v0.70.0 v0.71.0 v0.72.0 v0.73.0 v0.74.0 v0.75.0 v0.76.0 v0.77.0 v0.78.0 v0.79.0 v0.80.0 v0.81.0 v0.82.0 v0.83.0 v0.84.0 v0.85.0 v0.86.0 v0.87.0 v0.88.0 v0.89.0 v0.90.0 v0.91.0 v0.92.0 v0.93.0 v0.94.0 v0.95.0 v0.95.1 v0.96.0 v0.97.0 v0.98.0 v0.99.0 v0.100.0 v0.101.0 v0.102.0 v0.103.0 v0.104.0 v0.105.0 v0.106.0 v0.107.0 v0.108.0 v0.109.0 v0.110.0 v0.111.0 v0.112.0 v0.113.0 v0.114.0 v0.115.0 v0.116.0 v0.117.0 v0.118.0 v0.119.0 v0.120.0 v0.121.0 v0.122.0 v0.123.0 v0.124.0 v0.125.0 v0.126.0 v0.127.0 v0.128.0 v0.129.0 v0.130.0 v0.131.0 v0.132.0 v0.133.0 v0.134.0 v0.135.0 v0.136.0 v0.137.0 v0.138.0 v0.139.0 v0.140.0 v0.141.0 v0.142.0 v0.143.0 v0.144.0 v0.145.0 v0.146.0 v0.147.0 v0.148.0 v0.149.0 v0.150.0 v0.151.0 v0.152.0 v0.153.0 v0.154.0 v0.155.0 v0.156.0 v0.157.0 v0.158.0 v0.159.0 v0.160.0 v0.161.0 v0.162.0 v0.163.0 v0.164.0 v0.165.0 v0.166.0 v0.167.0 v0.168.0 v0.169.0 v0.170.0 v0.171.0 v0.172.0 v0.173.0 v0.174.0 v0.175.0 v0.176.0 v0.177.0 v0.178.0 v0.179.0 v0.180.0 v0.181.0 v0.182.0 v0.183.0 v0.184.0 v0.185.0 v0.186.0 v0.187.0 v0.188.0 v0.188.1 v0.189.0 v0.190.0 v0.191.0 v0.192.0 v0.193.0 v0.194.0 v0.195.0 v0.196.0 v0.197.0 v0.198.0 v0.199.0 v0.200.0 v0.201.0 v0.202.0 v0.203.0 v0.204.0 v0.205.0 v0.206.0 v0.207.0 v0.208.0 v0.209.0 v0.210.0 v0.211.0 v0.212.0 v0.213.0 v0.214.0 v0.215.0 v0.216.0 v0.217.0 v0.218.0 v0.219.0 v0.220.0 v0.221.0 v0.222.0 v0.223.0 v0.224.0 v0.225.0 
v0.226.0 v0.227.0 v0.228.0 v0.229.0 v0.230.0 v0.231.0 v0.232.0 v0.233.0 v0.234.0 v0.235.0 v0.236.0 v0.237.0 v0.238.0 v0.239.0 v0.240.0 v0.241.0 v0.242.0 v0.243.0 v0.244.0 v0.245.0 v0.246.0 v0.247.0 v0.248.0 v0.249.0 v0.250.0 v0.251.0 v0.252.0 v0.253.0 v0.254.0 v0.255.0 v0.256.0 v0.257.0 v0.258.0 v0.259.0 v0.260.0 v0.261.0 v0.262.0 v0.263.0 v0.264.0 v0.265.0 v1.0.0 v1.1.0 v1.2.0 v1.3.0 v1.4.0 v1.5.0 v1.6.0 v1.7.0 v1.8.0 v1.9.0 v1.10.0 v1.11.0 v1.12.0 v1.12.1 v1.13.0 v1.13.1 v1.14.0 v1.14.1 v1.15.0 v1.16.0 v1.16.1 v1.16.2 v1.16.3 v1.16.4 v1.17.0 v1.18.0 v1.19.0 v1.19.1 v1.20.0 v1.20.1 v1.21.0 v1.21.1 v1.22.0 v1.22.1 v1.23.0 v1.24.0 v1.25.0 v1.26.0 v1.27.0 v1.28.0 v1.28.1 v1.29.0 v1.30.0 v1.31.0 v1.32.0 v1.33.0 v1.34.0 v1.35.0 v1.36.0 v1.37.0 v1.37.1 v1.38.0 v1.39.0 v1.39.1 v1.40.0 v1.40.1 v1.40.2 v1.41.0 v1.42.0 v1.43.0 v1.43.1 v1.43.2 v1.100.0

https://pkg.go.dev/github.com/confluentinc/cli

Published: Sep 15, 2020

Also, for anyone to be able to use this code, it would be good if it were published under a well-known OSS license. Otherwise, everyone needs to involve their legal department to scrutinize your special license.

Support building on FreeBSD

Hi,

I tried building confluentinc/cli on FreeBSD 13.1-RELEASE-p3 and found this:

The Makefile is GNU Make specific (not very unusual). The README.md says to use make deps and make build, and the Makefile also calls make. Replacing make with gmake in both the README.md and the Makefile fixes that. The gmake command is also available on Linux, so this change should improve portability without requiring anything FreeBSD-specific.

For the cgo dependency on librdkafka I needed to run pkg install librdkafka, which installed librdkafka 1.9.2.

Then to build I used gmake CGO_CFLAGS=`pkg-config --cflags rdkafka` CGO_LDFLAGS=`pkg-config --libs rdkafka` TAGS='dynamic' build. Afterwards I noticed that building with just gmake TAGS='dynamic' build also works, but I'm not sure whether that holds for fresh builds as well.

% go version -m ./dist/confluent_freebsd_amd64_v1/confluent | grep -vw dep
./dist/confluent_freebsd_amd64_v1/confluent: go1.19.5
	path	command-line-arguments
	build	-asmflags=all=-trimpath=/home/dveeden/git
	build	-compiler=gc
	build	-gcflags=all=-trimpath=/home/dveeden/git
	build	-ldflags="-s -w -X main.version=v3.0.0-dirty-dveeden -X main.commit=83072d47 -X main.date=2023-01-26T08:38:18Z"
	build	-tags=dynamic
	build	CGO_ENABLED=1
	build	CGO_CFLAGS=
	build	CGO_CPPFLAGS=
	build	CGO_CXXFLAGS=
	build	CGO_LDFLAGS=
	build	GOARCH=amd64
	build	GOOS=freebsd
	build	GOAMD64=v1
% file ./dist/confluent_freebsd_amd64_v1/confluent
./dist/confluent_freebsd_amd64_v1/confluent: ELF 64-bit LSB executable, x86-64, version 1 (FreeBSD), dynamically linked, interpreter /libexec/ld-elf.so.1, for FreeBSD 13.1, FreeBSD-style, Go BuildID=nbtGuL4rOvzdWcchfu4k/FWCY7oI7U3W9T22Jk4pU/SjSsSKJdis-nENQXiNjp/mXRzoVc2_k0MVOozlSpq, stripped
% ./dist/confluent_freebsd_amd64_v1/confluent version
confluent - Confluent CLI

Version:     v3.0.0-dirty-dveeden
Git Ref:     83072d47
Build Date:  2023-01-26T08:38:18Z
Go Version:  go1.19.5 (freebsd/amd64)
Development: true

Consuming While Producing Fails

Hi again.

I'm seeing an odd error when attempting to have both a consumer and producer using the CLI running at the same time.

I start a consumer using the confluent kafka topic consume command. I'm using an avro schema that is configured already on the topic via the Schema Registry in Confluent Cloud.

I then go to a new terminal and create a producer using the confluent kafka topic produce command. I'm sending the raw Avro schema (.avsc file). I successfully produce the message (and I see it arrive properly in Confluent Cloud).

However, the consumer now breaks. I get the following error (which comes from the Go standard library):

Error: open /<temp-directory>/ccloud-schema/<topic-name>-<registered-schema-id>.txt: no such file or directory

Interestingly, if I restart the consumer and use the --from-beginning option, I successfully get the message.

I'm wondering if there is some optimization that caches the registered schema on produce but doesn't play well with a consumer running at the same time.

Below are the exact commands I'm using (env vars hold the secrets):

# Consume
confluent kafka topic consume <topic> \
 --bootstrap $(KAFKA_BOOTSTRAP_SERVER) \
 --print-offset \
 --delimiter "|" \
 --value-format avro \
 --api-key $(KAFKA_USERNAME) \
 --api-secret $(KAFKA_PASSWORD) \
 --schema-registry-endpoint $(SCHEMA_REGISTRY_URL) \
 --schema-registry-api-key $(SCHEMA_REGISTRY_USER) \
 --schema-registry-api-secret $(SCHEMA_REGISTRY_PASSWORD)
# Produce
confluent kafka topic produce <topic> \
--bootstrap $(KAFKA_BOOTSTRAP_SERVER) \
--value-format avro \
--schema "schemas/avro/avsc/value.avsc" \
--key-format avro \
--parse-key \
--delimiter "|" \
--key-schema "schemas/avro/avsc/key.avsc" \
--api-key $(KAFKA_USERNAME) \
--api-secret $(KAFKA_PASSWORD) \
--schema-registry-endpoint $(SCHEMA_REGISTRY_URL) \
--schema-registry-api-key $(SCHEMA_REGISTRY_USER) \
--schema-registry-api-secret $(SCHEMA_REGISTRY_PASSWORD)

I've started digging into the code myself and think something is going on in StoreSchemasReferences (or related funcs) that stores a reference to the schema, but not the schema itself.

Support linux/arm64 in the Confluent Platform package

I have installed Confluent Platform 7.5.2 per the "Install Confluent Platform using Systemd on RHEL and CentOS" documentation page. I'm installing on an Amazon Linux 2023 AWS instance. I've completed all steps up to
“Test that you set the CONFLUENT_HOME variable correctly by running the confluent command: confluent --help”.
When I run confluent --help, I get the following error:

confluentinc/cli crit platform linux/arm64 is not supported. Make sure this script is up-to-date and file request at [link to the confluentinc/cli GitHub repository]

As I just extracted Confluent from the confluent-7.5.2.tar.gz tarball, I believe the script should be up to date. Can anyone provide some direction on how I might correct this situation?

Hangs in an air-gapped environment

Since v3 of the CLI was released, everything I try to execute hangs; even running -h does this. I have shown the full output below. I have also updated the config with:

  "version": "1.0.0",
  "disable_update_check": true,
  "disable_updates": true,
  "disable_plugins": true,

output

[root@apptuv72fd ~]# /opt/app/software/confluent-7.3.1/bin/confluent  login --url https://xxxxx.xxxx.test.group:8090  -vvvvv

2023-02-08T13:28:55.939Z [DEBUG] Did not find full credential set from environment variables
2023-02-08T13:28:55.940Z [DEBUG] Searching for netrc machine with filter: {IgnoreCert:true IsCloud:false IsSSO:false Name: URL:https://xxxx.xxxxx.cloud.test.group:8090}
2023-02-08T13:28:55.940Z [DEBUG] Get netrc machine error: open /root/.netrc: no such file or directory
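For reference, the keys quoted above sit at the top level of the CLI's JSON config file. A complete minimal fragment with all network-dependent features disabled would look like this (the file location, commonly ~/.confluent/config.json, is an assumption based on the report, not verified here):

```json
{
  "version": "1.0.0",
  "disable_update_check": true,
  "disable_updates": true,
  "disable_plugins": true
}
```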

AsyncAPI 2: Schema Registry stand-alone module

We find this new feature very useful, but it doesn't fit 100% of our use case. To be honest, it is a great idea, but having a stand-alone module that obtains the schema from an AsyncAPI definition might be enough for us.

Our current use case:

  • Users define their events in Avro format in their API repository. When they want to publish their schemas, they call a job that executes several actions, including:

    1) A custom library merges all files with the definitions, including both the common metadata shared by all company events and the project-specific payload, and generates the schema to be published to the schema registry.
    2) The schema is validated and compatibility (FORWARD_TRANSITIVE) is checked. If the compatibility condition is not met, the job fails.
    3) Once everything has been validated, the schema is published to the schema registry.
    

  • Our users can define their schemas for both on-prem and cloud environments (though even for the cloud environment we are not using the Confluent Cloud schema registry), and we use a custom subject naming strategy because we need to assign the same subject to multiple topics.
        

Request:    

Allowing users to define their events only in AsyncAPI form instead of Avro would make their job easier, so we would like to use the AsyncAPI CLI tool. Having just the tool that converts an AsyncAPI definition into a schema, as a stand-alone module, would also be quite helpful, as we could integrate that functionality into our process.

If that's not possible, it could be useful to:

  • Be able to configure some of the process steps, such as the schema registry endpoint or the subject naming strategy, since we need to call on-prem SRs and we use a custom naming strategy.
  • Be able to define which steps to execute: there are several steps we don't allow, like automatic topic creation, changing the compatibility mode, or flag creation.

Please don't hesitate to ask anything,
Thanks
 
