
omarsmak / kafka-consumer-lag-monitoring

Client tool that exports the consumer lag of Kafka consumer groups to Prometheus or your terminal

License: MIT License

Kotlin 100.00%
kafka kotlin java lag prometheus kafka-consumer prometheus-exporter grafana grafana-dashboard monitoring

kafka-consumer-lag-monitoring's Introduction

Kafka Consumer Lag Monitoring - Lightweight and Cloud Native Ready


A client tool that exports the consumer lag of a Kafka consumer group to different implementations such as Prometheus or your terminal. It utilizes Kafka's AdminClient and Kafka's Consumer client to fetch these metrics. Consumer lag is calculated as follows:

sum(topic_offset_per_partition-consumer_offset_per_partition)
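As an illustrative sketch (not the project's actual code), the formula above can be computed from per-partition end offsets and committed consumer offsets; the offset values below are hypothetical:

```java
import java.util.Map;

public class LagExample {

    // Total lag = sum over partitions of (end offset - committed consumer offset).
    static long totalLag(Map<Integer, Long> endOffsets, Map<Integer, Long> consumerOffsets) {
        long lag = 0;
        for (Map.Entry<Integer, Long> e : endOffsets.entrySet()) {
            lag += e.getValue() - consumerOffsets.getOrDefault(e.getKey(), 0L);
        }
        return lag;
    }

    public static void main(String[] args) {
        // Hypothetical per-partition offsets for a two-partition topic.
        Map<Integer, Long> end = Map.of(0, 100L, 1, 250L);
        Map<Integer, Long> committed = Map.of(0, 90L, 1, 240L);
        System.out.println(totalLag(end, committed)); // prints 20
    }
}
```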

What is Consumer Lag and why is it important?

Quoting this article:

What is Kafka Consumer Lag? Kafka Consumer Lag is the indicator of how much lag there is between Kafka producers and consumers....

Why is Consumer Lag Important? Many applications today are based on being able to process (near) real-time data. Think about performance monitoring system like Sematext Monitoring or log management service like Sematext Logs. They continuously process infinite streams of near real-time data. If they were to show you metrics or logs with too much delay – if the Consumer Lag were too big – they’d be nearly useless. This Consumer Lag tells us how far behind each Consumer (Group) is in each Partition. The smaller the lag the more real-time the data consumption.

In summary, consumer lag tells us two things:

  • The closer the lag is to 0, the more confident we can be that messages are being processed in near real time, which indicates that our consumers are healthy.
  • The further the lag is from 0, the less confident we can be that messages are being processed in near real time, which may indicate that our consumers are falling behind.

Supported Kafka Versions

Since this client uses the Kafka Admin Client and Kafka Consumer client version 2+, it supports Kafka brokers from version 0.10.2+.

Features

  • Rich metrics that show detailed consumer lag on both the consumer group level and the consumer member level for more granularity.
  • Metrics are available for both console and Prometheus.
  • Very fast due to native compilation by GraalVM Native Image.
  • Highly configurable through either properties configurations or environment variables.
  • Configurable logging through log4j, currently supporting JSON as well as standard logging.
  • Ready-to-use thin Docker images, either for the Jar or the native application, for cloud deployments such as Kubernetes.
  • The tool is also available as a Maven package in case you want to embed it into your application.

Changelog

0.1.1

  • Issue #29: Publish the artifacts in Maven Central instead of bintray
  • Update Kafka clients to version 2.8.0.

0.1.0

Major Release:

  • Issue #27: Refactor the client in order to minimize the usage of dependencies and remove any reflections.
  • Issue #24: Support native compilation via GraalVM Native Image.
  • Issue #15: Configurable log4j support for either JSON or standard logging.
  • Issue #14: Support of configurations through environment variables.
  • Update all the dependencies to the latest version.

0.0.8:

  • Issue #23: Extend Lag stats on consumer member level.
  • Issue #20: Support consumer group and topic deletion on the fly.
  • Issue #21: Change default port to 9739

0.0.7:

  • Issue #17: Now this client will show newly joined consumer groups as well without the need to restart the client. You should start it once and it will always refresh the consumer groups list according to the poll interval.
  • Kafka client updated to version 2.5.0.

0.0.6:

  • Issue #8: Support configuration file as parameter
  • Kafka client updated to version 2.4.1.

Installation and Usage

Native Application

You can download the latest release of the native application from here; currently it only supports Mac and Linux. An example using the Prometheus component:

./kafka-consumer-lag-monitoring-prometheus-0.1.0 config.properties

Note to Mac users: You will need to verify the application, to do this, run:

xattr -r -d com.apple.quarantine kafka-consumer-lag-monitoring-prometheus-0.1.0

Uber JAR

You can download the latest release of the Uber JAR from here. This client requires at least Java 8 to run. For example, you can run the Console component like this:

java -jar kafka-consumer-lag-monitoring-console-0.1.0-all.jar -b kafka1:9092,kafka2:9092,kafka3:9092 -c "my_awesome_consumer_group_01" -p 5000

Docker

There are two types of Docker images:

  1. Docker images based on the native application: built from the natively compiled application. The benefit is a faster and smaller image, which suits cloud native environments. However, since native compilation is fairly new to this client, it is still evolving work.
  2. Docker images based on the Uber Jar: built from the Uber Jar. Although slower and larger, it is more stable than the native images and is still optimized to run as efficiently as possible in container orchestration frameworks such as Kubernetes.

Example:

docker run omarsmak/kafka-consumer-lag-monitoring-prometheus-native -p 9739:9739  \
-e kafka_bootstrap_servers=localhost:9092 \
-e kafka_retry_backoff_ms=200 \
-e monitoring_lag_consumer_groups="test*" \
-e monitoring_lag_prometheus_http_port=9739 \
-e monitoring_lag_logging_rootLogger_appenderRef_stdout_ref=LogToConsole \
-e monitoring_lag_logging_rootLogger_level=info

Usage

Console Component:

This mode prints the consumer lag per partition and the total lag across all partitions, continuously refreshing the metrics per the value of the --poll.interval startup parameter. It accepts the following parameters:

./kafka-consumer-lag-monitoring-console-0.1.0 -h    
Usage: kafka-consumer-lag-monitoring-console [-hV] [-b=<kafkaBootstrapServers>]
      [-c=<kafkaConsumerGroups>] [-f=<kafkaPropertiesFile>] [-p=<pollInterval>]
Prints the kafka consumer lag to the console.
 -b, --bootstrap.servers=<kafkaBootstrapServers>
                 A list of host/port pairs to use for establishing the initial
                   connection to the Kafka cluster
 -c, --consumer.groups=<kafkaConsumerGroups>
                 A list of Kafka consumer groups or list ending with star (*)
                   to fetch all consumers with matching pattern, e.g: 'test_v*'
 -f, --properties.file=<kafkaPropertiesFile>
                 Optional. Properties file for Kafka AdminClient
                   configurations, this is the typical Kafka properties file
                   that can be used in the AdminClient. For more info, please
                   take a look at Kafka AdminClient configurations
                   documentation.
 -h, --help      Show this help message and exit.
 -p, --poll.interval=<pollInterval>
                 Interval delay in ms to that refreshes the client lag
                   metrics, default to 2000ms
 -V, --version   Print version information and exit.

An example output:

./kafka-consumer-lag-monitoring-console-0.1.0 -b kafka1:9092,kafka2:9092,kafka3:9092 -c "my_awesome_consumer_group_01" -p 5000
        Consumer group: my_awesome_consumer_group_01
        ==============================================================================
        
        Topic name: topic_example_1
        Total topic offsets: 211132248
        Total consumer offsets: 187689403
        Total lag: 23442845
        
        Topic name: topic_example_2
        Total topic offsets: 15763247
        Total consumer offsets: 15024564
        Total lag: 738683
        
        Topic name: topic_example_3
        Total topic offsets: 392
        Total consumer offsets: 392
        Total lag: 0
        
        Topic name: topic_example_4
        Total topic offsets: 24572
        Total consumer offsets: 24570
        Total lag: 2
        
        Topic name: topic_example_5
        Total topic offsets: 430
        Total consumer offsets: 430
        Total lag: 0
        
        Topic name: topic_example_6
        Total topic offsets: 6342
        Total consumer offsets: 6335    
        Total lag: 7
Example Usage Native Application:
./kafka-consumer-lag-monitoring-console-0.1.0 -c "test*" -b localhost:9092 -p 500
Example Usage Uber Jar Application:
java -jar kafka-consumer-lag-monitoring-console-0.1.0-all.jar -c "test*" -b localhost:9092 -p 500
Example Usage Docker Native Application:
docker run omarsmak/kafka-consumer-lag-monitoring-console-native -c "test*" -b localhost:9092 -p 500
Example Usage Docker Uber Jar Application:
docker run omarsmak/kafka-consumer-lag-monitoring-console -c "test*" -b localhost:9092 -p 500

Prometheus Component:

In this mode, the tool starts an HTTP server on the port set in the monitoring.lag.prometheus.http.port config and exposes an endpoint reachable via localhost:<http.port>/metrics or localhost:<http.port>/prometheus, so a Prometheus server can scrape these metrics and expose them, for example, to Grafana. You will need to pass the configuration as a properties file or via environment variables. An example config file:

kafka.bootstrap.servers=localhost:9092
kafka.retry.backoff.ms = 200
monitoring.lag.consumer.groups=test*
monitoring.lag.prometheus.http.port=9772
monitoring.lag.logging.rootLogger.appenderRef.stdout.ref=LogToConsole
monitoring.lag.logging.rootLogger.level=info

And then you can run it like the following:

Example Usage Native Application:
./kafka-consumer-lag-monitoring-prometheus-0.1.0 config.properties
Example Usage Uber Jar Application:
java -jar kafka-consumer-lag-monitoring-prometheus-0.1.0-all.jar config.properties
Example Usage Docker Native Application:

For Docker, we will use the environment variables instead:

docker run omarsmak/kafka-consumer-lag-monitoring-prometheus-native -p 9739:9739  \
-e kafka_bootstrap_servers=localhost:9092 \
-e kafka_retry_backoff_ms=200 \
-e monitoring_lag_consumer_groups="test*" \
-e monitoring_lag_prometheus_http_port=9739 \
-e monitoring_lag_logging_rootLogger_appenderRef_stdout_ref=LogToConsole \
-e monitoring_lag_logging_rootLogger_level=info 
Example Usage Docker Uber Jar Application:

For Docker, we will use the environment variables instead:

docker run omarsmak/kafka-consumer-lag-monitoring-prometheus -p 9739:9739  \
-e kafka_bootstrap_servers=localhost:9092 \
-e kafka_retry_backoff_ms=200 \
-e monitoring_lag_consumer_groups="test*" \
-e monitoring_lag_prometheus_http_port=9739 \
-e monitoring_lag_logging_rootLogger_appenderRef_stdout_ref=LogToConsole \
-e monitoring_lag_logging_rootLogger_level=info

Note: By default, port 9739 is exposed by the Docker image, hence you should avoid overriding the client's HTTP port through the client's startup arguments (--http.port) when you run the client in a Docker container; leave it at the default of 9739. However, you can still map the container port to any host port of your choice.

Exposed Metrics:
kafka_consumer_group_offset{group, topic, partition}

The latest committed offset of a consumer group in a given partition of a topic.

kafka_consumer_group_partition_lag{group, topic, partition}

The lag of a consumer group behind the head of a given partition of a topic. Calculated like this: current_topic_offset_per_partition - current_consumer_offset_per_partition.

kafka_topic_latest_offsets{group, topic, partition}

The latest committed offset of a topic in a given partition.

kafka_consumer_group_total_lag{group, topic}

The total lag of a consumer group behind the head of a topic. This sums the lag from all partitions of each topic; it provides good visibility but not a precise measurement, since it is not partition-aware.

kafka_consumer_group_member_lag{group, member, topic}

The total lag of a consumer group member behind the head of a topic. This gives the total lag per consumer member within a consumer group.

kafka_consumer_group_member_partition_lag{group, member, topic, partition}

The lag of a consumer member within consumer group behind the head of a given partition of a topic.
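To collect the metrics above, a minimal Prometheus scrape configuration sketch could look like the following; it assumes the exporter runs on localhost:9739 (the client's default port), so adjust the target to your deployment:

```yaml
# prometheus.yml fragment (sketch; target address is an assumption)
scrape_configs:
  - job_name: kafka-consumer-lag
    metrics_path: /metrics
    static_configs:
      - targets: ['localhost:9739']
```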

Configuration

Most of the components here, for example the Prometheus component, support two types of configuration:

  1. Application Properties File: You can pass a config properties file to the application as an argument, e.g. ./kafka-consumer-lag-monitoring-prometheus-0.1.0 config.properties. An example config:

     ```
     kafka.bootstrap.servers=localhost:9092
     kafka.retry.backoff.ms = 200
     monitoring.lag.consumer.groups=test*
     monitoring.lag.prometheus.http.port=9772
     monitoring.lag.logging.rootLogger.appenderRef.stdout.ref=LogToConsole
     monitoring.lag.logging.rootLogger.level=info
     ```
    

    Note that the application accepts configs with two prefixes:

    • kafka.: Use the kafka. prefix for any config related to the Kafka admin client; these are the same configs you will find here: https://kafka.apache.org/documentation/#adminclientconfigs.
    • monitoring.lag.: Use the monitoring.lag. prefix to pass any config specific to this client; the accepted configs are listed under Available Configurations below.
  2. Environment Variables: You can also pass the configs as environment variables, which is useful when running the application in an environment like Docker, for example:

     ```
     docker run --rm -p 9739:9739 \
     -e monitoring_lag_logging_rootLogger_appenderRef_stdout_ref=LogToConsole \
     -e monitoring_lag_consumer_groups="test-*" \
     -e kafka_bootstrap_servers=host.docker.internal:9092  \
     omarsmak/kafka-consumer-lag-monitoring-prometheus-native:latest 
     ```    
    

    Similar to the application properties file, it supports the kafka. and monitoring.lag. prefixes. However, you will need to replace every dot (.) with an underscore (_) in all configs; for example, the environment equivalent of kafka.bootstrap.servers is kafka_bootstrap_servers.
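The dot-to-underscore mapping can be sketched in the shell:

```shell
# Convert a property key to its environment-variable form by
# replacing every '.' with '_'.
echo "kafka.bootstrap.servers" | tr '.' '_'
# prints: kafka_bootstrap_servers
```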

Available Configurations

  • monitoring.lag.consumer.groups : A list of Kafka consumer groups, or a list ending with a star (*) to fetch all consumers matching the pattern, e.g. test_v*.
  • monitoring.lag.poll.interval : Interval delay in ms that refreshes the client lag metrics, defaults to 2000 ms.
  • monitoring.lag.prometheus.http.port : HTTP port used to expose the metrics, defaults to 9739.

Logging

The client ships with Log4j bindings and supports JSON and standard logging. The default log4j properties that it uses:

# Log to console
appender.console.type = Console
appender.console.name = LogToConsole
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = [%-5level] %d{yyyy-MM-dd HH:mm:ss.SSS} [%t] %c{1} - %msg%n

# Log to console as JSON
appender.json.type = Console
appender.json.name = LogInJSON
appender.json.layout.type = JsonLayout
appender.json.layout.complete = true
appender.json.layout.compact = false

rootLogger.level = info
rootLogger.appenderRef.stdout.ref = LogInJSON

By default, LogInJSON is enabled. However, you can customize all of this by providing these configurations prefixed with monitoring.lag.logging.. For example, to enable standard logging, add the config monitoring.lag.logging.rootLogger.appenderRef.stdout.ref=LogToConsole or, as an environment variable, monitoring_lag_logging_rootLogger_appenderRef_stdout_ref=LogToConsole.

Note: When configuring logging through environment variables, the configurations are case-sensitive.

Usage as Library

If you want to embed this client into your application, add a dependency to this tool in your pom.xml or build.gradle as explained below:

Maven

<dependency>
  <groupId>com.omarsmak.kafka</groupId>
  <artifactId>consumer-lag-monitoring</artifactId>
  <version>0.1.1</version>
</dependency>

Gradle

compile 'com.omarsmak.kafka:consumer-lag-monitoring:0.1.1'

Usage

Java

import com.omarsmak.kafka.consumer.lag.monitoring.client.KafkaConsumerLagClient;
import com.omarsmak.kafka.consumer.lag.monitoring.client.KafkaConsumerLagClientFactory;
import org.apache.kafka.clients.admin.AdminClientConfig;

import java.util.Properties;

public class ConsumerLagClientTest {
    
    public static void main(String[] args){
        // Create a Properties object to hold the Kafka bootstrap servers
        final Properties properties = new Properties();
        properties.setProperty(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka1:9092");
        
        // Create the client, we will use the Java client 
        final KafkaConsumerLagClient kafkaConsumerLagClient = KafkaConsumerLagClientFactory.create(properties);
        
        // Print the lag of a Kafka consumer
        System.out.println(kafkaConsumerLagClient.getConsumerLag("awesome-consumer"));
    }
}

Kotlin

import com.omarsmak.kafka.consumer.lag.monitoring.client.KafkaConsumerLagClientFactory
import org.apache.kafka.clients.admin.AdminClientConfig
import java.util.Properties

object ConsumerLagClientTest {

    @JvmStatic
    fun main(arg: Array<String>) {
        // Create a Properties object to hold the Kafka bootstrap servers
        val properties = Properties().apply {
            this[AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG] = "kafka1:9092"
        }

        // Create the client, we will use the Kafka AdminClient Java client
        val kafkaConsumerLagClient = KafkaConsumerLagClientFactory.create(properties)

        // Print the lag of a Kafka consumer
        println(kafkaConsumerLagClient.getConsumerLag("awesome-consumer"))
    }
}

Build The Project

Run ./gradlew clean build in the top-level project folder; it will run all tests and build the Uber JAR.

Project Sponsors


kafka-consumer-lag-monitoring's People

Contributors: chrbrnracn, colinleroy, dr460neye, maxarturo, omarsmak, sesamzoo, shini31

kafka-consumer-lag-monitoring's Issues

Env Var support for Kubernetes/orchestration

Hi there,

For orchestration framework support, it would be best to have direct support for environment variables.
The idea: when all relevant variables are set, the tool does not need start parameters.

At the moment I use the following workaround in my deployment.yaml:

args: ["-b", "$(BOOTSTRAP_SERVERS)","-m", "$(MODE)","-c", "$(CONSUMER_GROUPS)","-i", "$(POLL_INTERVAL)", "-p", "$(HTTP_PORT)"]

MonitoringEngine.kt leaks passwords

Describe the bug
The logging of Kafka Configs / Components configs leaks passwords.

To Reproduce
Steps to reproduce the behavior:
configure something like

kafka.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
  username="monitoring" \
  password="very-secret-password";
kafka.ssl.truststore.password=another-password

kafka-consumer-lag-monitoring logs Kafka Configs as

Kafka Configs: {ssl.truststore.password=another-password, security.protocol=SASL_SSL, ssl.endpoint.identification.algorithm=, ssl.truststore.location=/etc/ssl/certs/java/cacerts, bootstrap.servers=..., sasl.mechanism=PLAIN, sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="monitor" password="very-secret-password";, client.id=kafka-lag-exporter, ssl.truststore.type=PKCS12}

Expected behavior
kafka-consumer-lag-monitoring logs Kafka Configs as

Kafka Configs: {ssl.truststore.password=[REDACTED], security.protocol=SASL_SSL, ssl.endpoint.identification.algorithm=, ssl.truststore.location=/etc/ssl/certs/java/cacerts, bootstrap.servers=..., sasl.mechanism=PLAIN, sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="monitor" password="[REDACTED]";, client.id=kafka-lag-exporter, ssl.truststore.type=PKCS12}

Log4j vulnerable - upgrade to 2.17.1+

Describe the bug
The latest version of the consumer-lag-monitor is using log4j 2.13.3
Log4j had some serious security issues end of 2021. Refer to https://logging.apache.org/log4j/2.x/security.html
Only the current latest version 2.17.1 has no known security issues.

To Reproduce
Steps to reproduce the behavior:

  1. Go to /gradle.properties
  2. Check log4jSlf4jImplVersion (
    log4jSlf4jImplVersion = 2.13.3
    )

log4jSlf4jImplVersion is not vulnerable itself but pulls log4j-core as a dependency which is vulnerable.

Expected behavior
Log4j is updated to a non-vulnerable version (>=2.17.1)

Desktop (please complete the following information):
not relevant

Smartphone (please complete the following information):
not relevant

Refactor the logic of the output methods for the client

Currently the implementations of the Console and Prometheus outputs are very fixed, meaning that adding a new output method would lead to much redundant code. Hence it makes sense to rethink the logic and add some abstraction to allow easier extensibility for additional output methods.

Doc improvement: Kafka version

Caused by: org.apache.kafka.common.errors.UnsupportedVersionException: The broker only supports OffsetFetchRequest v1, but we need v2 or newer to request all topic partitions.
Just got this trying to run your tool.
I know we're running a rather old version of kafka. It would have been nice if the readme mentioned which version of kafka is supported :-)

Thanks!

Add Kafka Consumer Lag asynchronous client to improve the experience for the users

The current KafkaConsumerLagClient runs synchronously; as a result, the user as well as the component need to poll the data on a fixed interval. Since the Admin Client returns all of its data as Future<T>, it would make sense to utilize that, and even expose the API as Flux<T>/Mono<T> async APIs, which makes it easier for users to work with async APIs.
In order not to break any existing functionality, it would make sense to add a new client interface dedicated to async APIs, e.g. KafkaConsumerLagAsyncClient, created from the factory via KafkaConsumerLagClientFactory.createAsync(props).

Information about Kafka brokers connection

Hi,

I have a question about the method used to connect to the Kafka brokers.
For example, if I lose a Kafka broker in my cluster, will it try again to connect to it, or is the connection permanently lost?

Regards.

switch to github ci


Question : metrics equivalence with AWS MSK consumer lag metrics

Is your feature request related to a problem? Please describe.
Hello , thanks for your tooling , we are currently using aws MSK, that provide natively metrics for the consumer lags

https://docs.aws.amazon.com/msk/latest/developerguide/consumer-lag.html

EstimatedMaxTimeLag, EstimatedTimeLag, MaxOffsetLag, OffsetLag, and SumOffsetLag

time based lag metrics
and
"size" based lag metrics

Since we would like to move to our own kafka, we would like a tool to get theses metrics. And looking to your project 👍

Describe the solution you'd like
Could you detail if your metrics are equivalent to the metrics that we are currently using with aws msk

Describe alternatives you've considered
Before the release of that feature inside aws MSK, we were using burrow ( in 2020 ) and it was less precise and complete

Support newly joined consumers and new topics without the need to restart the exporter

Hi,

The exporter is not able to update the list of consumer groups even if the wildcard * is set for the -c parameter:
java -jar kafka-consumer-lag-monitoring-0.0.6-all.jar -f ssl-user-config.properties -c "*" -m "prometheus" -i 5000

If I deploy a new app with a new consumer group, it will be a great move to not restart the exporter to have its metrics

Regards.

SSL support

For supporting clusters that only expose SSL listeners, you'd need to make these properties configurable on the consumer:

    props.put(SECURITY_PROTOCOL_CONFIG, "SSL");
    props.put(SSL_TRUSTSTORE_LOCATION_CONFIG, "/the/location/of/truststore.jks");
    props.put(SSL_TRUSTSTORE_PASSWORD_CONFIG, "123");
    props.put(SSL_KEYSTORE_LOCATION_CONFIG, "/the/location/of/keystore.jks");
    props.put(SSL_KEYSTORE_PASSWORD_CONFIG, "123");
    props.put(SSL_KEY_PASSWORD_CONFIG, "abc");

If this weren't Kotlin I could have sent a PR :-)

docker build not working

docker run omarsmak/kafka-consumer-lag-monitoring-prometheus-native -p 9739:9739
-e kafka_bootstrap_servers=localhost:9092
-e kafka_retry_backoff.ms = 200
-e monitoring_lag_consumer_groups="test*"
-e monitoring_lag_prometheus_http_port=9739
-e monitoring_lag_logging_rootLogger_appenderRef_stdout_ref=LogToConsole
-e monitoring_lag_logging_rootLogger_level=info

After this Getting this error

./application: /usr/lib/libstdc++.so.6: no version information available (required by ./application)
./application: Relink `/usr/lib/libgcc_s.so.1' with `/usr/glibc-compat/lib/libc.so.6' for IFUNC symbol `memset'

Docker version 20.10.8, build 3967b7d
OS:- redhat 7.X

Log formatting support

Hi there,

Support for log formats would be really helpful in a container orchestration world.
Mostly two things are used: either JSON format or classic log format.
Right now it's some kind of mix of multiple formats due to the Kafka default logging.

What do you think about that?

Provide Dockerfile

I really like the idea of your little but important devops tool. I would really appreciate it if you could add the Dockerfile to the project, to enable a simple CI/CD pipeline here. This could really give your project the attention it should get :)

Looking at building a Kubernetes operator for the monitoring

To simplify deploying this utility as Kubernetes operator could be good idea, for example a custom CRD that supports all the Kafka Properties without the user need to add any kafka properties files. Just define all these in the CRD and run the operator.
Steps to implement this:

  1. Auto generate the CRD blueprint from the Kafka configurations (this is not really needed but for the schema validation). We can ignore this at first iteration.
  2. In the controller, parse the CRD properties and convert it to Map of properties.
  3. Run the monitoring client using the Map.

In order to ensure that works with Standalone mode and Docker mode. We will need to convert this project into multi module project that has one sub project for the client and other sub project for the K8s operator.

Extend LAG Stats on consumer level

We played a bit with your kafka consumer lag monitoring library and have been excited about it.

The only thing we are missing is data split down to the consumer level. Sometimes only one consumer in a consumer group is lagging (for different reasons).
With consolidated data per topic, all lag is visible, although you don't see which consumer is lagging. In a clustered environment, consumer groups are usually spread over different nodes and they can behave differently.

Very much appreciate if you consider this extension

thanks, Bruno

Change default port

Hi,

The goal is to use the following wiki Default port allocations and use an "official" port.

The first available port is 9739.

I also have a PR available to add it in this wiki.

Do you agree to these modifications?

Regards.
