yahoo / cmak

CMAK is a tool for managing Apache Kafka clusters

License: Apache License 2.0

Scala 84.26% CSS 0.52% HTML 12.72% Shell 1.95% JavaScript 0.42% Less 0.12%
big-data cluster-management kafka scala

cmak's Introduction

CMAK (Cluster Manager for Apache Kafka, previously known as Kafka Manager)

CMAK (previously known as Kafka Manager) is a tool for managing Apache Kafka clusters. See below for details about the name change.

CMAK supports the following:

  • Manage multiple clusters
  • Easy inspection of cluster state (topics, consumers, offsets, brokers, replica distribution, partition distribution)
  • Run preferred replica election
  • Generate partition assignments with option to select brokers to use
  • Run reassignment of partition (based on generated assignments)
  • Create a topic with optional topic configs (0.8.1.1 has different configs than 0.8.2+)
  • Delete topic (only supported on 0.8.2+; remember to set delete.topic.enable=true in the broker config)
  • Topic list now indicates topics marked for deletion (only supported on 0.8.2+)
  • Batch generate partition assignments for multiple topics with option to select brokers to use
  • Batch run reassignment of partition for multiple topics
  • Add partitions to existing topic
  • Update config for existing topic
  • Optionally enable JMX polling for broker level and topic level metrics.
  • Optionally filter out consumers that do not have ids/ owners/ & offsets/ directories in zookeeper.

Cluster Management

[screenshot: cluster]

Topic List

[screenshot: topic]

Topic View

[screenshot: topic]

Consumer List View

[screenshot: consumer]

Consumed Topic View

[screenshot: consumer]

Broker List

[screenshot: broker]

Broker View

[screenshot: broker]

Requirements

  1. Kafka 0.8.x, 0.9.x, 0.10.x, or 0.11.x
  2. Java 11+

Configuration

The minimum configuration is the zookeeper hosts to be used for CMAK (previously known as Kafka Manager) state. This can be set in the application.conf file in the conf directory. The same file is packaged in the distribution zip file; you may modify settings after unzipping it on the desired server.

cmak.zkhosts="my.zookeeper.host.com:2181"

You can specify multiple zookeeper hosts by comma delimiting them, like so:

cmak.zkhosts="my.zookeeper.host.com:2181,other.zookeeper.host.com:2181"

Alternatively, use the environment variable ZK_HOSTS if you don't want to hardcode any values.

ZK_HOSTS="my.zookeeper.host.com:2181"
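If you go the environment-variable route, the value can simply be exported before launching the packaged start script (a sketch; the host names are placeholders for your own ZooKeeper ensemble):

```shell
# Hypothetical hosts; substitute your own ZooKeeper ensemble before launching bin/cmak.
export ZK_HOSTS="my.zookeeper.host.com:2181,other.zookeeper.host.com:2181"
echo "$ZK_HOSTS"
```

Because the start script reads ZK_HOSTS from the environment, no edit to application.conf is needed.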

You can optionally enable/disable the following functionality by modifying the default list in application.conf :

application.features=["KMClusterManagerFeature","KMTopicManagerFeature","KMPreferredReplicaElectionFeature","KMReassignPartitionsFeature"]
  • KMClusterManagerFeature - allows adding, updating, deleting cluster from CMAK (pka Kafka Manager)
  • KMTopicManagerFeature - allows adding, updating, deleting topic from a Kafka cluster
  • KMPreferredReplicaElectionFeature - allows running of preferred replica election for a Kafka cluster
  • KMReassignPartitionsFeature - allows generating partition assignments and reassigning partitions
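For example, an instance meant mainly for monitoring could drop topic management and partition reassignment while keeping cluster administration (a sketch; the reduced list below is an assumption about your needs, not a recommended default):

```
application.features=["KMClusterManagerFeature","KMPreferredReplicaElectionFeature"]
```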

Consider setting these parameters for larger clusters with JMX enabled:

  • cmak.broker-view-thread-pool-size=< 3 * number_of_brokers>
  • cmak.broker-view-max-queue-size=< 3 * total # of partitions across all topics>
  • cmak.broker-view-update-seconds=< cmak.broker-view-max-queue-size / (10 * number_of_brokers) >

Here is an example for a Kafka cluster with 10 brokers and 100 topics, each topic having 10 partitions (1000 total partitions), with JMX enabled:

  • cmak.broker-view-thread-pool-size=30
  • cmak.broker-view-max-queue-size=3000
  • cmak.broker-view-update-seconds=30
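The sizing rules above can be checked with a quick sketch; the broker, topic, and partition counts are the hypothetical values from the example:

```shell
#!/bin/sh
# Hypothetical cluster from the example above.
brokers=10
topics=100
partitions_per_topic=10

total_partitions=$((topics * partitions_per_topic))

pool_size=$((3 * brokers))                       # broker-view-thread-pool-size
queue_size=$((3 * total_partitions))             # broker-view-max-queue-size
update_seconds=$((queue_size / (10 * brokers)))  # broker-view-update-seconds

echo "cmak.broker-view-thread-pool-size=$pool_size"
echo "cmak.broker-view-max-queue-size=$queue_size"
echo "cmak.broker-view-update-seconds=$update_seconds"
```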

The following settings control the consumer offset cache's thread pool and queue:

  • cmak.offset-cache-thread-pool-size=< default is # of processors>
  • cmak.offset-cache-max-queue-size=< default is 1000>
  • cmak.kafka-admin-client-thread-pool-size=< default is # of processors>
  • cmak.kafka-admin-client-max-queue-size=< default is 1000>

You should increase the above for a large number of consumers with consumer polling enabled, though this mainly affects ZK-based consumer polling.

Kafka-managed consumer offsets are now consumed by KafkaManagedOffsetCache from the "__consumer_offsets" topic. Note that this has not been tested with a large number of offsets being tracked. There is a single thread per cluster consuming this topic, so it may not be able to keep up with a large number of offsets being pushed to the topic.

Authenticating a User with LDAP

Warning: you need to have SSL configured with CMAK (previously Kafka Manager) to ensure your credentials aren't passed unencrypted. Authenticating a user with LDAP is possible by passing the user credentials in the Authorization header. LDAP authentication is done on the first visit; if successful, a cookie is set. On subsequent requests, the cookie value is compared with the credentials from the Authorization header. LDAP support is implemented through the basic authentication filter.

  1. Configure basic authentication
  • basicAuthentication.enabled=true
  • basicAuthentication.realm=< basic authentication realm>
  2. Encryption parameters (optional, otherwise randomly generated on startup):
  • basicAuthentication.salt="some-hex-string-representing-byte-array"
  • basicAuthentication.iv="some-hex-string-representing-byte-array"
  • basicAuthentication.secret="my-secret-string"
  3. Configure LDAP / LDAP + StartTLS / LDAPS authentication

Note: plain LDAP is unencrypted and insecure. LDAPS is a commonly implemented extension that adds an encryption layer in a manner similar to how HTTPS adds encryption to HTTP; however, LDAPS was never formally standardized. LDAP + StartTLS is the currently recommended way to establish an encrypted channel: it upgrades an existing LDAP connection to achieve encryption.

  • basicAuthentication.ldap.enabled=< Boolean flag to enable/disable ldap authentication >
  • basicAuthentication.ldap.server=< fqdn of LDAP server >
  • basicAuthentication.ldap.port=< port of LDAP server (typically 389 for LDAP and LDAP + StartTLS and typically 636 for LDAPS) >
  • basicAuthentication.ldap.username=< LDAP search username >
  • basicAuthentication.ldap.password=< LDAP search password >
  • basicAuthentication.ldap.search-base-dn=< LDAP search base >
  • basicAuthentication.ldap.search-filter=< LDAP search filter >
  • basicAuthentication.ldap.connection-pool-size=< maximum number of connection to LDAP server >
  • basicAuthentication.ldap.ssl=< Boolean flag to enable/disable LDAPS (usually incompatible with StartTLS) >
  • basicAuthentication.ldap.starttls=< Boolean flag to enable StartTLS (usually incompatible with SSL) >
  4. (Optional) Limit access to a specific LDAP group
  • basicAuthentication.ldap.group-filter=< LDAP group filter >
  • basicAuthentication.ldap.ssl-trust-all=< Boolean flag to allow non-expired invalid certificates >

Example (Online LDAP Test Server):

  • basicAuthentication.ldap.enabled=true
  • basicAuthentication.ldap.server="ldap.forumsys.com"
  • basicAuthentication.ldap.port=389
  • basicAuthentication.ldap.username="cn=read-only-admin,dc=example,dc=com"
  • basicAuthentication.ldap.password="password"
  • basicAuthentication.ldap.search-base-dn="dc=example,dc=com"
  • basicAuthentication.ldap.search-filter="(uid=$capturedLogin$)"
  • basicAuthentication.ldap.group-filter="cn=allowed-group,ou=groups,dc=example,dc=com"
  • basicAuthentication.ldap.connection-pool-size=10
  • basicAuthentication.ldap.ssl=false
  • basicAuthentication.ldap.ssl-trust-all=false
  • basicAuthentication.ldap.starttls=false
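With basic authentication enabled, a client must send the credentials base64-encoded in the Authorization header. A quick sketch of building that header value (the admin/password credentials and the localhost:9000 address are placeholders, not defaults):

```shell
# Build the value for the Authorization header from hypothetical credentials.
token=$(printf 'admin:password' | base64)
echo "Authorization: Basic $token"

# Against a running CMAK instance this could then be used as, e.g.:
#   curl -H "Authorization: Basic $token" http://localhost:9000/
```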

Deployment

The command below will create a zip file which can be used to deploy the application.

./sbt clean dist

Please refer to play framework documentation on production deployment/configuration.

If java is not in your path, or you need to build against a specific java version, please use the following (the example assumes zulu java11):

$ PATH=/usr/lib/jvm/zulu-11-amd64/bin:$PATH \
  JAVA_HOME=/usr/lib/jvm/zulu-11-amd64 \
  /path/to/sbt -java-home /usr/lib/jvm/zulu-11-amd64 clean dist

This ensures that the 'java' and 'javac' binaries in your path are first looked up in the correct location. Next, for all downstream tools that only read JAVA_HOME, it points them to the java11 location. Lastly, it tells sbt to use the java11 location as well.

Starting the service

After extracting the produced zipfile, and changing the working directory to it, you can run the service like this:

$ bin/cmak

By default, it will choose port 9000. This is overridable, as is the location of the configuration file. For example:

$ bin/cmak -Dconfig.file=/path/to/application.conf -Dhttp.port=8080

Again, if java is not in your path, or you need to run against a different version of java, add the -java-home option as follows:

$ bin/cmak -java-home /usr/lib/jvm/zulu-11-amd64

Starting the service with Security

To add JAAS configuration for SASL, add the config file location at start:

$ bin/cmak -Djava.security.auth.login.config=/path/to/my-jaas.conf
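A minimal sketch of such a JAAS file for SASL/PLAIN (the username and password are placeholders; adjust the login module to match the SASL mechanism your cluster actually uses):

```
KafkaClient {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="cmak-user"
  password="cmak-password";
};
```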

NOTE: Make sure the user running CMAK (previously Kafka Manager) has read permissions on the JAAS config file.

Packaging

If you'd like to create a Debian or RPM package instead, you can run one of:

sbt debian:packageBin

sbt rpm:packageBin

Credits

Most of the utils code has been adapted from Apache Kafka to work with Apache Curator.

Name and Management

CMAK was renamed from its previous name due to this issue. CMAK is designed to be used with Apache Kafka and is offered to support the needs of the Kafka community. This project is currently managed by employees at Verizon Media and the community that supports this project.

License

Licensed under the terms of the Apache License 2.0. See accompanying LICENSE file for terms.

Consumer/Producer Lag

The producer offset is polled. The consumer offset is read from the offset topic for Kafka-based consumers. This means the reported lag may be negative, since we are consuming offsets from the offset topic faster than we are polling the producer offset. This is normal and not a problem.

Migration from Kafka Manager to CMAK

  1. Copy config files from old version to new version (application.conf, consumer.properties)
  2. Change start script to use bin/cmak instead of bin/kafka-manager

cmak's People

Contributors

akki, bjoernhaeuser, dnelson, dpavlov, dreadpiraterobertson, dylanmei, fuji-151a, ggrossetie, gyehuda, iamgd67, icco, jackode, jisookim0513, juergen-walter, kelseylam, khhirani, mcjyang, paetling, parthkolekar, patelh, pranavbhole, shorani, simioa, simplesteph, sslavic, vpiserchia, vtomasr5, woshiduncan, xuwei-k, zheolong


cmak's Issues

Support for viewing existing clusters

Just got kafka manager up and running and realized I have no ability to specify a root node in zookeeper for the manager to look in. We have a couple of existing clusters under different nodes in zookeeper that I'd love to add into the manager, but I don't see a way to do this currently.

Can it work out of the box?

This is a good tool; however, I do not want to install it on the production server, while I still wish to monitor the topics, etc. Can I install it on a VM that remotely accesses the production server, collects metrics, and displays them?

Same question for web-console, kafkaOffsetMonitor, etc.

Thanks

Zookeeper connection error Ask timed out on [ActorSelection[Anchor(akka://kafka-manager-system/), Path(/user/kafka-manager)]]

2015-02-08 16:23:49,016 - [INFO] - from org.apache.zookeeper.ClientCnxn in kafka-manager-system-akka.actor.default-dispatcher-7-SendThread(10.65.196.37:2181)
Opening socket connection to server 10.65.196.37/10.65.196.37:2181. Will not attempt to authenticate using SASL (unknown error)

2015-02-08 16:23:49,312 - [WARN] - from org.webjars.RequireJS in play-akka.actor.default-dispatcher-2
Could not read WebJar RequireJS config for: dustjs-linkedin 2.4.0
Please file a bug at: http://github.com/webjars/dustjs-linkedin/issues/new

2015-02-08 16:23:49,354 - [WARN] - from org.webjars.RequireJS in play-akka.actor.default-dispatcher-2
Could not read WebJar RequireJS config for: json 20121008
Please file a bug at: http://github.com/webjars/json/issues/new

2015-02-08 16:23:49,378 - [WARN] - from org.webjars.RequireJS in play-akka.actor.default-dispatcher-2
Could not read WebJar RequireJS config for: requirejs 2.1.10
Please file a bug at: http://github.com/webjars/requirejs/issues/new

2015-02-08 16:23:58,829 - [ERROR] - from kafka.manager.ApiError in pool-3-thread-2
error : Ask timed out on [ActorSelection[Anchor(akka://kafka-manager-system/), Path(/user/kafka-manager)]] after [1000 ms]
akka.pattern.AskTimeoutException: Ask timed out on [ActorSelection[Anchor(akka://kafka-manager-system/), Path(/user/kafka-manager)]] after [1000 ms]
at akka.pattern.PromiseActorRef$$anonfun$1.apply$mcV$sp(AskSupport.scala:333) ~[akka-actor_2.11-2.3.7.jar:na]
at akka.actor.Scheduler$$anon$7.run(Scheduler.scala:117) ~[akka-actor_2.11-2.3.7.jar:na]
at scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:599) ~[scala-library-2.11.4.jar:na]
at scala.concurrent.BatchingExecutor$class.execute(BatchingExecutor.scala:109) ~[scala-library-2.11.4.jar:na]
at scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:597) ~[scala-library-2.11.4.jar:na]
at akka.actor.LightArrayRevolverScheduler$TaskHolder.executeTask(Scheduler.scala:467) ~[akka-actor_2.11-2.3.7.jar:na]
at akka.actor.LightArrayRevolverScheduler$$anon$8.executeBucket$1(Scheduler.scala:419) ~[akka-actor_2.11-2.3.7.jar:na]
at akka.actor.LightArrayRevolverScheduler$$anon$8.nextTick(Scheduler.scala:423) ~[akka-actor_2.11-2.3.7.jar:na]
at akka.actor.LightArrayRevolverScheduler$$anon$8.run(Scheduler.scala:375) ~[akka-actor_2.11-2.3.7.jar:na]
at java.lang.Thread.run(Thread.java:724) ~[na:1.7.0_40-MS]

2015-02-08 16:24:04,065 - [ERROR] - from org.apache.curator.ConnectionState in kafka-manager-system-akka.actor.default-dispatcher-7
Connection timed out for connection string (10.65.196.35:2181,10.65.196.36:2181,10.65.196.37:2181) and timeout (15000) / elapsed (15114)
org.apache.curator.CuratorConnectionLossException: KeeperErrorCode = ConnectionLoss
at org.apache.curator.ConnectionState.checkTimeouts(ConnectionState.java:197) [curator-client-2.7.0.jar:na]
at org.apache.curator.ConnectionState.getZooKeeper(ConnectionState.java:87) [curator-client-2.7.0.jar:na]
at org.apache.curator.CuratorZookeeperClient.getZooKeeper(CuratorZookeeperClient.java:115) ~[curator-client-2.7.0.jar:na]
at org.apache.curator.framework.imps.CuratorFrameworkImpl.getZooKeeper(CuratorFrameworkImpl.java:492) ~[curator-framework-2.7.0.jar:na]
at org.apache.curator.framework.imps.CreateBuilderImpl$11.call(CreateBuilderImpl.java:691) ~[curator-framework-2.7.0.jar:na]
at org.apache.curator.framework.imps.CreateBuilderImpl$11.call(CreateBuilderImpl.java:675) ~[curator-framework-2.7.0.jar:na]
at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:107) ~[curator-client-2.7.0.jar:na]
at org.apache.curator.framework.imps.CreateBuilderImpl.pathInForeground(CreateBuilderImpl.java:671) ~[curator-framework-2.7.0.jar:na]
at org.apache.curator.framework.imps.CreateBuilderImpl.protectedPathInForeground(CreateBuilderImpl.java:453) ~[curator-framework-2.7.0.jar:na]
at org.apache.curator.framework.imps.CreateBuilderImpl.forPath(CreateBuilderImpl.java:443) ~[curator-framework-2.7.0.jar:na]
at org.apache.curator.framework.imps.CreateBuilderImpl.forPath(CreateBuilderImpl.java:423) ~[curator-framework-2.7.0.jar:na]
at org.apache.curator.framework.imps.CreateBuilderImpl.forPath(CreateBuilderImpl.java:44) ~[curator-framework-2.7.0.jar:na]
at kafka.manager.KafkaManagerActor$$anonfun$6.apply(KafkaManagerActor.scala:184) ~[kafka-manager_2.11-1.0-SNAPSHOT.jar:1.0-SNAPSHOT]
at kafka.manager.KafkaManagerActor$$anonfun$6.apply(KafkaManagerActor.scala:184) ~[kafka-manager_2.11-1.0-SNAPSHOT.jar:1.0-SNAPSHOT]
at scala.util.Try$.apply(Try.scala:191) ~[scala-library-2.11.4.jar:na]
at kafka.manager.KafkaManagerActor.(KafkaManagerActor.scala:184) ~[kafka-manager_2.11-1.0-SNAPSHOT.jar:1.0-SNAPSHOT]
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[na:1.7.0_40-MS]
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) ~[na:1.7.0_40-MS]
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[na:1.7.0_40-MS]
at java.lang.reflect.Constructor.newInstance(Constructor.java:526) ~[na:1.7.0_40-MS]
at akka.util.Reflect$.instantiate(Reflect.scala:66) ~[akka-actor_2.11-2.3.7.jar:na]
at akka.actor.ArgsReflectConstructor.produce(Props.scala:352) ~[akka-actor_2.11-2.3.7.jar:na]
at akka.actor.Props.newActor(Props.scala:252) ~[akka-actor_2.11-2.3.7.jar:na]
at akka.actor.ActorCell.newActor(ActorCell.scala:552) ~[akka-actor_2.11-2.3.7.jar:na]
at akka.actor.ActorCell.create(ActorCell.scala:578) ~[akka-actor_2.11-2.3.7.jar:na]
at akka.actor.ActorCell.invokeAll$1(ActorCell.scala:456) ~[akka-actor_2.11-2.3.7.jar:na]
at akka.actor.ActorCell.systemInvoke(ActorCell.scala:478) ~[akka-actor_2.11-2.3.7.jar:na]
at akka.dispatch.Mailbox.processAllSystemMessages(Mailbox.scala:279) ~[akka-actor_2.11-2.3.7.jar:na]
at akka.dispatch.Mailbox.run(Mailbox.scala:220) ~[akka-actor_2.11-2.3.7.jar:na]
at akka.dispatch.Mailbox.exec(Mailbox.scala:231) ~[akka-actor_2.11-2.3.7.jar:na]
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260) ~[scala-library-2.11.4.jar:na]
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.pollAndExecAll(ForkJoinPool.java:1253) ~[scala-library-2.11.4.jar:na]
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1346) ~[scala-library-2.11.4.jar:na]
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979) ~[scala-library-2.11.4.jar:na]
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107) ~[scala-library-2.11.4.jar:na]

great project!

Hi - I just deployed Kafka-manager today and it is pretty cool. It abstracts a lot of the day-to-day tasks into a slick web UI. What type of effort would be involved to include near real-time stats like message rates, disk usage, cluster utilization? I was thinking it would be a nice enhancement. Thanks, Brendan

Priming kafka-manager with 'zkhosts' of a known set of kafka clusters

Hi,

We want to automate some of the kafka management by

  • deploying a few kafka-manager instances via automation (puppet etc)
  • Each instance of kafka-manager is responsible for only a known set of kafka clusters (a bare bones compartmentalization)
  • We also intend to disable the 'add cluster' feature so that our customers cannot modify the scope of a given kafka-manager (what clusters you can manage via that particular instance)

To achieve this, we also need to be able to prime (pre-populate) a kafka-manager instance with a set of kafka clusters' 'zkhost' lists. We intend to provide the list via the application conf (as a list of lists) and set that value in the conf during the puppet install of kafka-manager.

But I am struggling a bit to find a nice 'bootstrap' (startup) point in the lifecycle of the kafka-manager app to insert a populate_kafka_clusters() method, maybe because I am very new to Scala and Play. Can you please point me to such a location, or if there is none, can it be added so that other such initializations can be hooked in at that interception point? Also, can you point me to the code that takes the 'zkhosts' for a cluster and adds them to the backend store (kafka-manager's Zookeeper namespace /kafka-manager, I presume)?

Kafka cluster health checking

kafka-manager could assist with health-checking of kafka clusters in one of two ways:

  • Pull-based: Expose an http api with health-checks in it (probably a list of checks and their results). Some other system would poll the health-check periodically and alert if checks are failing.
  • Push-based: Actively reach out when a problem is detected, through some kind of pluggable interface. Two useful out of the box implementations might be emails and just simple error logging.

I'm partial to the pull-based checks since I think they're probably easier to integrate with existing monitoring stuff. Either way, I think good stuff to monitor could be taken from the list here: https://kafka.apache.org/082/ops.html

Some of the most interesting stuff is:

  • Broker availability (are they in ZK and can we connect over JMX?)
  • Under-replicated partitions
  • Unavailable partitions

Question - What does red mean?

For some reason, one of the topics shows up as red.

screenshot 2015-04-06 15 38 14

I think this might be a bug in kafka-manager since this topic isn't currently marked for deletion or anything.

What causes a topic to be marked red?

No option to generate partition assignments

Where do I generate the automatic partition assignments? I don't see the option, under Reassign Partitions, I just see the following message :

No data found for any recent reassign partitions command.

binary release would be awesome!

A binary release of this as an executable jar would be awesome! I'd love to try this but I'm not very familiar with the scala toolchain and I'm working through some difficulties with sbt (which I attribute to operator error). Thanks, this kafka-manager looks very cool.

Include sbt wrapper script?

Instead of requiring the user to have a system-wide installation of sbt, it is often preferable to include the sbt wrapper script from sbt-extras (e.g. most Twitter Scala projects include this sbt wrapper).

The benefit is that the version of sbt configured in build.properties will be downloaded automatically if needed. The only change for users is that they must run sbt-related commands via ./sbt instead of sbt.

RPM Build broken in 1.0-SNAPSHOT

The sbt rpm:packageBin process is broken, mostly because of illegal characters in the version string in build.sbt. Temporarily fixing build.sbt to include a sanitized version string allows the RPM to build. (CentOS 6.x / rpmbuild 4.8.0).

$ diff -c build.sbt.orig build.sbt
*** build.sbt.orig      2015-03-09 22:22:02.000000000 +0000
--- build.sbt   2015-03-11 22:45:50.506759803 +0000
***************
*** 5,11 ****
  name := """kafka-manager"""

  /* For packaging purposes, -SNAPSHOT MUST contain a digit */
! version := "1.0-SNAPSHOT1"

  scalaVersion := "2.11.5"

--- 5,11 ----
  name := """kafka-manager"""

  /* For packaging purposes, -SNAPSHOT MUST contain a digit */
! version := "1.0"

  scalaVersion := "2.11.5"

Other errors are still thrown and could be cleaned up / fixed in the RPM spec file. This does not seem to prevent the RPM build from completing, however.

$ ./sbt rpm:packageBin
[info] Loading project definition from /path/to/kafka-manager/project
[warn] There may be incompatibilities among your library dependencies.
[warn] Here are some of the libraries that were evicted:
[warn]  * com.typesafe.sbt:sbt-native-packager:0.7.4 -> 1.0.0-M4
[warn] Run 'evicted' to see detailed eviction warnings
[info] Set current project to kafka-manager (in build file:/path/to/kafka-manager/)
[info] Packaging /path/to/kafka-manager/target/scala-2.11/kafka-manager_2.11-1.0-sources.jar ...
[info] Updating {file:/path/to/kafka-manager/}root...
[info] Done packaging.
[info] Wrote /path/to/kafka-manager/target/scala-2.11/kafka-manager_2.11-1.0.pom
[info] Resolving jline#jline;2.12 ...
[info] Done updating.
[warn] There may be incompatibilities among your library dependencies.
[warn] Here are some of the libraries that were evicted:
[warn]  * org.webjars:jquery:1.11.1 -> 2.1.1
[warn] Run 'evicted' to see detailed eviction warnings
[info] Packaging /path/to/kafka-manager/target/scala-2.11/kafka-manager_2.11-1.0-javadoc.jar ...
[info] Done packaging.
[info] Packaging /path/to/kafka-manager/target/kafka-manager-1.0-assets.jar ...
[info] Packaging /path/to/kafka-manager/target/scala-2.11/kafka-manager_2.11-1.0.jar ...
[info] Done packaging.
[info] Done packaging.
[info] Building target platforms: noarch-yahoo-Linux
[info] Building for target noarch-yahoo-Linux
[info] Executing(%install): /bin/sh -e /tmp/sbt_107cbb3e/rpm-tmp.T0K00U
[error] + umask 022
[error] + cd /path/to/kafka-manager/target/rpm/BUILD
[error] + '[' -e /path/to/kafka-manager/target/rpm/buildroot ']'
[error] + mv /path/to/kafka-manager/target/rpm/tmp-buildroot/etc /path/to/kafka-manager/target/rpm/tmp-buildroot/usr /path/to/kafka-manager/target/rpm/tmp-buildroot/var /path/to/kafka-manager/target/rpm/buildroot
[error] + /usr/lib/rpm/brp-compress
[error] + /usr/lib/rpm/brp-strip /usr/bin/strip
[error] + /usr/lib/rpm/brp-strip-static-archive /usr/bin/strip
[error] + /usr/lib/rpm/brp-strip-comment-note /usr/bin/strip /usr/bin/objdump
[info] Processing files: kafka-manager-1.0-1.noarch
[info] Provides: config(kafka-manager) = 1.0-1
[info] Requires(interp): /bin/sh /bin/sh /bin/sh /bin/sh
[info] Requires(rpmlib): rpmlib(CompressedFileNames) <= 3.0.4-1 rpmlib(PayloadFilesHavePrefix) <= 4.0-1
[info] Requires(pre): /bin/sh
[info] Requires(post): /bin/sh
[info] Requires(preun): /bin/sh
[info] Requires(postun): /bin/sh
[info] Checking for unpackaged file(s): /usr/lib/rpm/check-files /path/to/kafka-manager/target/rpm/buildroot
[info] Wrote: /path/to/kafka-manager/target/rpm/RPMS/noarch/kafka-manager-1.0-1.noarch.rpm
[info] Executing(%clean): /bin/sh -e /tmp/sbt_107cbb3e/rpm-tmp.5tuBb7
[error] + umask 022
[error] + cd /path/to/kafka-manager/target/rpm/BUILD
[error] + /bin/rm -rf /path/to/kafka-manager/target/rpm/buildroot
[error] + exit 0
[success] Total time: 12 s, completed Mar 11, 2015 10:46:12 PM

rpmbuild for kafka-manager failing

I am getting the following error. Has anybody faced the same? If yes, any idea what needs to be fixed?

[info] Loading project definition from /Users/pukumar/dev/kafka-manager/kafka-manager-master/project
[warn] There may be incompatibilities among your library dependencies.
[warn] Here are some of the libraries that were evicted:
[warn] * com.typesafe.sbt:sbt-native-packager:0.7.4 -> 1.0.0-M4
[warn] Run 'evicted' to see detailed eviction warnings
[info] Set current project to kafka-manager (in build file:/Users/pukumar/dev/kafka-manager/kafka-manager-master/)
[warn] There may be incompatibilities among your library dependencies.
[warn] Here are some of the libraries that were evicted:
[warn] * org.webjars:jquery:(2.1.1, 1.11.1) -> 2.1.3
[warn] Run 'evicted' to see detailed eviction warnings
[info] Wrote /Users/pukumar/dev/kafka-manager/kafka-manager-master/target/scala-2.11/kafka-manager_2.11-1.2.0.pom
[info] Packaging /Users/pukumar/dev/kafka-manager/kafka-manager-master/target/kafka-manager-1.2.0-assets.jar ...
[info] Done packaging.
[info] Building target platforms: noarch-yahoo-Linux
[info] Executing(%install): /bin/sh -e /var/folders/zz/zyxvpxvq6csfxvn_n0000000000000/T/sbt_44c3e8c6/rpm-tmp.3898
[error] + umask 022
[error] + cd /Users/pukumar/dev/kafka-manager/kafka-manager-master/target/rpm/BUILD
[error] + /bin/rm -rf /Users/pukumar/dev/kafka-manager/kafka-manager-master/target/rpm/buildroot
[error] + /bin/mkdir -p /Users/pukumar/dev/kafka-manager/kafka-manager-master/target/rpm/buildroot
[error] + '[' -e /Users/pukumar/dev/kafka-manager/kafka-manager-master/target/rpm/buildroot ']'
[error] + mv /Users/pukumar/dev/kafka-manager/kafka-manager-master/target/rpm/tmp-buildroot/etc /Users/pukumar/dev/kafka-manager/kafka-manager-master/target/rpm/tmp-buildroot/usr /Users/pukumar/dev/kafka-manager/kafka-manager-master/target/rpm/tmp-buildroot/var /Users/pukumar/dev/kafka-manager/kafka-manager-master/target/rpm/buildroot
[error] + '%{_rpmconfigdir}/brp-compress'
[error] /var/folders/zz/zyxvpxvq6csfxvn_n0000000000000/T/sbt_44c3e8c6/rpm-tmp.3898: line 32: fg: no job control
[error] error: Bad exit status from /var/folders/zz/zyxvpxvq6csfxvn_n0000000000000/T/sbt_44c3e8c6/rpm-tmp.3898 (%install)
[info]
[info]
[info] RPM build errors:
[error] Bad exit status from /var/folders/zz/zyxvpxvq6csfxvn_n0000000000000/T/sbt_44c3e8c6/rpm-tmp.3898 (%install)
java.lang.RuntimeException: Unable to run rpmbuild, check output for details. Errorcode 1
at scala.sys.package$.error(package.scala:27)
at com.typesafe.sbt.packager.rpm.RpmHelper$$anonfun$buildPackage$1.apply(RpmHelper.scala:89)
at com.typesafe.sbt.packager.rpm.RpmHelper$$anonfun$buildPackage$1.apply(RpmHelper.scala:74)
at sbt.IO$.withTemporaryDirectory(IO.scala:291)
at com.typesafe.sbt.packager.rpm.RpmHelper$.buildPackage(RpmHelper.scala:74)
at com.typesafe.sbt.packager.rpm.RpmHelper$.buildRpm(RpmHelper.scala:20)
at com.typesafe.sbt.packager.rpm.RpmPlugin$$anonfun$projectSettings$31.apply(RpmPlugin.scala:100)
at com.typesafe.sbt.packager.rpm.RpmPlugin$$anonfun$projectSettings$31.apply(RpmPlugin.scala:98)
at scala.Function3$$anonfun$tupled$1.apply(Function3.scala:35)
at scala.Function3$$anonfun$tupled$1.apply(Function3.scala:34)
at scala.Function1$$anonfun$compose$1.apply(Function1.scala:47)
at sbt.$tilde$greater$$anonfun$$u2219$1.apply(TypeFunctions.scala:40)
at sbt.std.Transform$$anon$4.work(System.scala:63)
at sbt.Execute$$anonfun$submit$1$$anonfun$apply$1.apply(Execute.scala:226)
at sbt.Execute$$anonfun$submit$1$$anonfun$apply$1.apply(Execute.scala:226)
at sbt.ErrorHandling$.wideConvert(ErrorHandling.scala:17)
at sbt.Execute.work(Execute.scala:235)
at sbt.Execute$$anonfun$submit$1.apply(Execute.scala:226)
at sbt.Execute$$anonfun$submit$1.apply(Execute.scala:226)
at sbt.ConcurrentRestrictions$$anon$4$$anonfun$1.apply(ConcurrentRestrictions.scala:159)
at sbt.CompletionService$$anon$2.call(CompletionService.scala:28)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
error Unable to run rpmbuild, check output for details. Errorcode 1
[error] Total time: 2 s, completed Apr 29, 2015 2:51:28 PM

Bug when building

    5 artifacts copied, 0 already retrieved (24459kB/302ms)

[info] Loading project definition from /home/ubuntu/kafka-manager/project

[warn] There may be incompatibilities among your library dependencies.
[warn] Here are some of the libraries that were evicted:
[warn] * com.typesafe.sbt:sbt-native-packager:0.7.4 -> 1.0.0-M4
[warn] Run 'evicted' to see detailed eviction warnings
[info] Set current project to kafka-manager (in build file:/home/ubuntu/kafka-manager/)

Broker metrics not getting updated

Messages in /sec 0.00 0.00 0.00 0.00
Bytes in /sec 0.00 0.00 0.00 0.00
Bytes out /sec 0.00 0.00 0.00 0.00
Bytes rejected /sec 0.00 0.00 0.00 0.00
Failed fetch request /sec 0.00 0.00 0.00 0.00
Failed produce request /sec 0.00 0.00 0.00 0.00

Surface ZK errors from KafkaStateActor

The KafkaStateActor can be dead-on-arrival due to ZK failures in preStart. This ends up making it time out on all requests. It would be better if it remembered the exception and then re-reported it on requests.
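
The suggested behavior could be sketched like this (an illustrative Python sketch, not the actual Akka actor; `StateActor` and `connect` are hypothetical names): remember the startup failure and fail each request fast with the original cause, instead of timing out.

```python
class StateActor:
    """Illustrative sketch: remember a startup failure and surface it on requests."""

    def __init__(self, connect):
        self._startup_error = None
        self._client = None
        try:
            self._client = connect()  # may fail, e.g. ZK unreachable
        except Exception as exc:
            # Remember the failure instead of leaving the actor half-dead.
            self._startup_error = exc

    def handle_request(self, request):
        if self._startup_error is not None:
            # Fail fast with the original cause rather than timing out.
            raise RuntimeError(f"actor failed to start: {self._startup_error}")
        return self._client(request)
```

With this shape, a request against a dead actor gets an immediate, informative error instead of an ask timeout.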

kafka-manager debian package startup script

Hi,
I've just created a Debian package with sbt, and it creates:
/usr/share/kafka-manager/bin/kafka-manager
/etc/default/kafka-manager
/etc/kafka-manager/application.conf

A sane default would be to add this line to ..../bin/kafka-manager so that it reads /etc/default/kafka-manager:

#!/usr/bin/env bash

[ -r /etc/default/kafka-manager ] && . /etc/default/kafka-manager

Another sane default would be to add these lines to /etc/default/kafka-manager.

export PIDFILE="/var/run/kafka-manager/play.pid"
export JAVA_OPTS="-Dpidfile.path=$PIDFILE -Dconfig.file=/etc/kafka-manager/application.conf $JAVA_OPTS"
export APPLICATION_SECRET="somekey"
export ZK_HOSTS="zookeeper01:2181,zookeeper02:2181"

Drain a broker and reassign replicas to healthy brokers efficiently

When we need to reassign replicas from a broker to other brokers,
say, when the broker is dead, the only option we have right now is to
reassign all replicas randomly, where replicas on healthy
brokers might also be moved around. This can be very inefficient.

I added a new command called drain to a util my team is using
(lucaluo/kafkat@d0c9457),
which essentially takes all replicas (of one or more topics) on the
drained broker and puts them on a list of healthy brokers. Replicas
already on healthy brokers won't be moved around, which is highly efficient.

I would like to migrate this functionality to kafka-manager as well.
Let me know what you think.
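
The drain idea above could be sketched as follows (a minimal Python sketch, not the kafkat implementation; the input shape and the deterministic tie-breaking rule are assumptions): only partitions with a replica on the drained broker are touched, and the replacement must be a healthy broker that does not already hold a replica of that partition.

```python
def drain(assignments, drained, healthy):
    """Replace the drained broker in each replica list with a healthy broker,
    leaving replicas already on healthy brokers untouched.

    assignments: {partition: [broker ids]} for one or more topics.
    """
    result = {}
    for partition, replicas in assignments.items():
        if drained not in replicas:
            # Replica set untouched: nothing lives on the drained broker.
            result[partition] = list(replicas)
            continue
        # Pick a healthy broker not already holding a replica of this partition.
        candidates = [b for b in healthy if b not in replicas]
        if not candidates:
            raise ValueError(f"no healthy broker available for partition {partition}")
        replacement = min(candidates)  # simplest deterministic choice
        result[partition] = [replacement if b == drained else b for b in replicas]
    return result
```

A real implementation would also balance load across the candidate brokers rather than always picking the lowest id.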

Is any docker image for kafka-manager available?

We are going to use Docker to install kafka-manager; is a Docker image already available here that we can leverage in our installation? Another question: I found some images on the Docker site, but I'm not sure how the Docker image manages the network between internal Docker containers and external hosts. We have ZooKeeper set up in VMs using Puppet but want to install kafka-manager with Docker; will there be any issues with the network configuration?

Report data sizes by date

A bar chart of data size by hour or by day would be helpful for understanding how regular a data flow is and how much data is new vs soon-to-expire. It would require adding an additional JMX metric to Kafka.

Consumed Message

Does it support consumed message counts and offsets, by any chance? If yes, where can I see them?

Metrics not displaying

I'm trying to see the metrics for a new topic that I created, but all the fields are empty.

(screenshot attached: 2015-05-05, 2:26 PM)

Runaway logging fills up disk

The 16GB root device of the machine I'm running this on ran out of space due to /var/log/upstart/kafka-manager.log. These logs are rotated daily, so it must have logged a lot in a short time.

The main content appears to be blocks like this:

[INFO] [02/14/2015 13:30:53.858] [kafka-manager-system-akka.actor.default-dispatcher-161] [akka://kafka-manager-system/user/kafka-manager/<cluster>/broker-view] Stopped actor akka://kafka-manager-system/user/kafka-manager/<cluster>/broker-view
[INFO] [02/14/2015 13:30:53.858] [kafka-manager-system-akka.actor.default-dispatcher-161] [akka://kafka-manager-system/user/kafka-manager/<cluster>/broker-view] Cancelling updater...
[INFO] [02/14/2015 13:30:53.858] [kafka-manager-system-akka.actor.default-dispatcher-161] [akka://kafka-manager-system/user/kafka-manager/<cluster>/broker-view] Started actor akka://kafka-manager-system/user/kafka-manager/<cluster>/broker-view
[INFO] [02/14/2015 13:30:53.858] [kafka-manager-system-akka.actor.default-dispatcher-161] [akka://kafka-manager-system/user/kafka-manager/<cluster>/broker-view] Scheduling updater for 10 seconds
[INFO] [02/14/2015 13:30:53.859] [kafka-manager-system-akka.actor.default-dispatcher-161] [akka://kafka-manager-system/user/kafka-manager/<cluster>/broker-view] Updating broker view...
[ERROR] [02/14/2015 13:30:53.860] [kafka-manager-system-akka.actor.default-dispatcher-160] [akka://kafka-manager-system/user/kafka-manager/<cluster>/broker-view] null
java.lang.ArithmeticException

Blocks like this appear every few milliseconds in the log file.

Add metrics

I think it would be nice to have metrics/stats about the Kafka server (message rate, bytes in rate, bytes out rate, ...).

AFAIK metrics/stats are not stored in Zookeeper (Ref: https://cwiki.apache.org/confluence/display/KAFKA/Kafka+data+structures+in+Zookeeper). Would you consider a pull request that retrieves metrics from the Kafka server? I could use the Kafka REST Proxy, JMX, or ...?

A simpler solution might be to get the consumer offset from curator/zookeeper at /consumers/[groupId]/offsets/[topic]/[partitionId]?

Wdyt?
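
For reference, the offset znode layout mentioned above can be expressed as a small helper (illustrative Python; `parse_offset` assumes offsets are stored as an ASCII-encoded integer in the znode payload, which is the usual layout for ZK-based consumer offsets):

```python
def consumer_offset_path(group, topic, partition):
    """Build the znode path where old-style consumers commit offsets:
    /consumers/[groupId]/offsets/[topic]/[partitionId]."""
    return f"/consumers/{group}/offsets/{topic}/{partition}"

def parse_offset(znode_data):
    """Offsets are stored as an ASCII-encoded integer in the znode payload."""
    return int(znode_data.decode("ascii"))
```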

Report data sizes by broker/topic/partition

Kafka brokers report data sizes by topic-partition in the metric "Size" that is part of the class kafka.log.Log. It would be really useful to see breakdowns of this by broker, topic, and topic-partition. The use cases are endless!

  • Seeing whether data is distributed evenly across brokers
  • Seeing which topics are largest and smallest
  • Seeing whether data is distributed evenly across partitions of a topic
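
Given such per-partition "Size" readings, the broker and topic breakdowns are simple roll-ups (an illustrative Python sketch; the input shape is an assumption, not CMAK's data model):

```python
from collections import defaultdict

def rollup_sizes(sizes):
    """Aggregate per-(broker, topic, partition) log sizes into per-broker
    and per-topic totals.

    sizes: {(broker, topic, partition): bytes}
    """
    by_broker = defaultdict(int)
    by_topic = defaultdict(int)
    for (broker, topic, _partition), size in sizes.items():
        by_broker[broker] += size
        by_topic[topic] += size
    return dict(by_broker), dict(by_topic)
```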

Status icons in the topic list

Status icons in the topic list would be helpful to get a quick birds-eye-view of overall cluster state. One suggestion of colors for the icons is:

  • Green: topic is available and balanced (minimal skew in partition count across brokers)
  • Yellow: topic is available, but not balanced
  • Red: topic is unavailable
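
The color rule above could be computed roughly like this (a Python sketch; `max_skew` and the exact skew definition are assumptions, not CMAK behavior):

```python
def topic_status(partition_counts, unavailable=False, max_skew=1):
    """Map one topic's per-broker partition counts to a status color.

    Red if the topic is unavailable, yellow if the partition-count skew
    across brokers exceeds max_skew, green otherwise.
    """
    if unavailable or not partition_counts:
        return "red"
    skew = max(partition_counts) - min(partition_counts)
    return "green" if skew <= max_skew else "yellow"
```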

Converting zkHosts to lowercase causes problems when zkHosts contains an uppercase chroot path

KafkaManager will convert the cluster name and zkHosts to lowercase when constructing a ClusterConfig.
This works fine if zkHosts contains only host names and ports; however, it may also contain an additional chroot path at the end.
In that case, the chroot path is part of a ZNode path, which is case sensitive. Converting it to lowercase makes Kafka Manager refer to a non-existent path on Zookeeper when operating the cluster.

To me, the case conversion is not very meaningful; even the cluster name does not need to be lowercase. Can we just keep the casing of names and zkHosts as they are given?


How to reproduce

  1. Add a kafka cluster whose zookeeper hosts contain a chroot path with uppercase letters, like zkhost1:2181,zkhost2:2181,zkhost3:2181/KafkaClusterPath
  2. The cluster will be added successfully and listed in the cluster list.
  3. Clicking the name of the newly added cluster will show you the error.

I tried getting rid of the lowercase conversion in my forked repo
evanmeng@9070032
and things seem to work fine.

But I am not sure whether this will break other functions, since there are no regression tests yet. Maybe some of you can have a look at it?
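
One way to fix this, sketched in Python (illustrative only, not the actual Scala change), is to lowercase only the host:port list and leave everything after the first slash untouched, since the chroot is a case-sensitive ZNode path:

```python
def normalize_zk_hosts(zk_hosts):
    """Lowercase only the host:port list; keep the chroot path's case,
    since znode paths are case sensitive."""
    hosts, slash, chroot = zk_hosts.partition("/")
    return hosts.lower() + slash + chroot
```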

rpm build failing

sbt rpm:packageBin
[info] Loading project definition from /Users/iam/Downloads/kafka-manager-master/project
[warn] There may be incompatibilities among your library dependencies.
[warn] Here are some of the libraries that were evicted:
[warn] * com.typesafe.sbt:sbt-native-packager:0.7.4 -> 1.0.0-M4
[warn] Run 'evicted' to see detailed eviction warnings
[info] Set current project to kafka-manager (in build file:/Users/iam/Downloads/kafka-manager-master/)
[warn] There may be incompatibilities among your library dependencies.
[warn] Here are some of the libraries that were evicted:
[warn] * org.webjars:jquery:(2.1.1, 1.11.1) -> 2.1.3
[warn] Run 'evicted' to see detailed eviction warnings
[info] Wrote /Users/iam/Downloads/kafka-manager-master/target/scala-2.11/kafka-manager_2.11-1.2.0.pom
[info] Packaging /Users/iam/Downloads/kafka-manager-master/target/kafka-manager-1.2.0-assets.jar ...
[info] Done packaging.
java.io.IOException: Cannot run program "rpmbuild" (in directory "/Users/iam/Downloads/kafka-manager-master/target/rpm/SPECS"): error=2, No such file or directory
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1047)
at sbt.SimpleProcessBuilder.run(ProcessImpl.scala:349)
at sbt.AbstractProcessBuilder.run(ProcessImpl.scala:128)
at sbt.AbstractProcessBuilder$$anonfun$runBuffered$1.apply(ProcessImpl.scala:159)
at sbt.AbstractProcessBuilder$$anonfun$runBuffered$1.apply(ProcessImpl.scala:159)
at sbt.BufferedLogger.buffer(BufferedLogger.scala:25)
at sbt.AbstractProcessBuilder.runBuffered(ProcessImpl.scala:159)
at sbt.AbstractProcessBuilder.$bang(ProcessImpl.scala:156)
at com.typesafe.sbt.packager.rpm.RpmHelper$$anonfun$buildPackage$1.apply(RpmHelper.scala:87)
at com.typesafe.sbt.packager.rpm.RpmHelper$$anonfun$buildPackage$1.apply(RpmHelper.scala:74)
at sbt.IO$.withTemporaryDirectory(IO.scala:291)
at com.typesafe.sbt.packager.rpm.RpmHelper$.buildPackage(RpmHelper.scala:74)
at com.typesafe.sbt.packager.rpm.RpmHelper$.buildRpm(RpmHelper.scala:20)
at com.typesafe.sbt.packager.rpm.RpmPlugin$$anonfun$projectSettings$31.apply(RpmPlugin.scala:100)
at com.typesafe.sbt.packager.rpm.RpmPlugin$$anonfun$projectSettings$31.apply(RpmPlugin.scala:98)
at scala.Function3$$anonfun$tupled$1.apply(Function3.scala:35)
at scala.Function3$$anonfun$tupled$1.apply(Function3.scala:34)
at scala.Function1$$anonfun$compose$1.apply(Function1.scala:47)
at sbt.$tilde$greater$$anonfun$$u2219$1.apply(TypeFunctions.scala:40)
at sbt.std.Transform$$anon$4.work(System.scala:63)
at sbt.Execute$$anonfun$submit$1$$anonfun$apply$1.apply(Execute.scala:226)
at sbt.Execute$$anonfun$submit$1$$anonfun$apply$1.apply(Execute.scala:226)
at sbt.ErrorHandling$.wideConvert(ErrorHandling.scala:17)
at sbt.Execute.work(Execute.scala:235)
at sbt.Execute$$anonfun$submit$1.apply(Execute.scala:226)
at sbt.Execute$$anonfun$submit$1.apply(Execute.scala:226)
at sbt.ConcurrentRestrictions$$anon$4$$anonfun$1.apply(ConcurrentRestrictions.scala:159)
at sbt.CompletionService$$anon$2.call(CompletionService.scala:28)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: error=2, No such file or directory
at java.lang.UNIXProcess.forkAndExec(Native Method)
at java.lang.UNIXProcess.(UNIXProcess.java:184)
at java.lang.ProcessImpl.start(ProcessImpl.java:130)
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1028)
at sbt.SimpleProcessBuilder.run(ProcessImpl.scala:349)
at sbt.AbstractProcessBuilder.run(ProcessImpl.scala:128)
at sbt.AbstractProcessBuilder$$anonfun$runBuffered$1.apply(ProcessImpl.scala:159)
at sbt.AbstractProcessBuilder$$anonfun$runBuffered$1.apply(ProcessImpl.scala:159)
at sbt.BufferedLogger.buffer(BufferedLogger.scala:25)
at sbt.AbstractProcessBuilder.runBuffered(ProcessImpl.scala:159)
at sbt.AbstractProcessBuilder.$bang(ProcessImpl.scala:156)
at com.typesafe.sbt.packager.rpm.RpmHelper$$anonfun$buildPackage$1.apply(RpmHelper.scala:87)
at com.typesafe.sbt.packager.rpm.RpmHelper$$anonfun$buildPackage$1.apply(RpmHelper.scala:74)
at sbt.IO$.withTemporaryDirectory(IO.scala:291)
at com.typesafe.sbt.packager.rpm.RpmHelper$.buildPackage(RpmHelper.scala:74)
at com.typesafe.sbt.packager.rpm.RpmHelper$.buildRpm(RpmHelper.scala:20)
at com.typesafe.sbt.packager.rpm.RpmPlugin$$anonfun$projectSettings$31.apply(RpmPlugin.scala:100)
at com.typesafe.sbt.packager.rpm.RpmPlugin$$anonfun$projectSettings$31.apply(RpmPlugin.scala:98)
at scala.Function3$$anonfun$tupled$1.apply(Function3.scala:35)
at scala.Function3$$anonfun$tupled$1.apply(Function3.scala:34)
at scala.Function1$$anonfun$compose$1.apply(Function1.scala:47)
at sbt.$tilde$greater$$anonfun$$u2219$1.apply(TypeFunctions.scala:40)
at sbt.std.Transform$$anon$4.work(System.scala:63)
at sbt.Execute$$anonfun$submit$1$$anonfun$apply$1.apply(Execute.scala:226)
at sbt.Execute$$anonfun$submit$1$$anonfun$apply$1.apply(Execute.scala:226)
at sbt.ErrorHandling$.wideConvert(ErrorHandling.scala:17)
at sbt.Execute.work(Execute.scala:235)
at sbt.Execute$$anonfun$submit$1.apply(Execute.scala:226)
at sbt.Execute$$anonfun$submit$1.apply(Execute.scala:226)
at sbt.ConcurrentRestrictions$$anon$4$$anonfun$1.apply(ConcurrentRestrictions.scala:159)
at sbt.CompletionService$$anon$2.call(CompletionService.scala:28)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
error java.io.IOException: Cannot run program "rpmbuild" (in directory "/Users/iam/Downloads/kafka-manager-master/target/rpm/SPECS"): error=2, No such file or directory
[error] Total time: 2 s, completed Apr 21, 2015 11:46:47 AM

Support kafka 0.8.2.1

The dropdown box only allows selecting 0.8.1.1 or 0.8.2.0.

It would be nice if it could detect the version of the cluster, or at least allow selecting 0.8.2.1 from the dropdown box.

How to add a cluster

hi:
I started the server successfully.
(screenshot attached)

But when I add a kafka cluster config, it fails with the error below:
Yikes! Ask timed out on [ActorSelection[Anchor(akka://kafka-manager-system/), Path(/user/kafka-manager)]] after [1000 ms] Try again.
(screenshot attached)

java.lang.ArithmeticException: / by zero

I am getting this error when the zookeeper host has a path at the end, e.g. zk:2181/replication/kafka

[ERROR] [02/09/2015 21:15:59.637] [kafka-manager-system-akka.actor.default-dispatcher-14] [akka://kafka-manager-system/user/kafka-manager/sjc2/broker-view] / by zero
java.lang.ArithmeticException: / by zero
 at kafka.manager.TopicIdentity.<init>(TopicIdentity.scala:61)
 at kafka.manager.TopicIdentity$.from(TopicIdentity.scala:74)
 at kafka.manager.BrokerViewCacheActor$$anonfun$updateView$1$$anonfun$apply$2$$anonfun$2.apply(BrokerViewCacheActor.scala:79)
 at kafka.manager.BrokerViewCacheActor$$anonfun$updateView$1$$anonfun$apply$2$$anonfun$2.apply(BrokerViewCacheActor.scala:79)
 at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
 at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
 at scala.collection.Iterator$class.foreach(Iterator.scala:743)
 at scala.collection.AbstractIterator.foreach(Iterator.scala:1195)
 at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
 at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
 at scala.collection.TraversableLike$class.map(TraversableLike.scala:245)
 at scala.collection.AbstractTraversable.map(Traversable.scala:104)
 at kafka.manager.BrokerViewCacheActor$$anonfun$updateView$1$$anonfun$apply$2.apply(BrokerViewCacheActor.scala:79)
 at kafka.manager.BrokerViewCacheActor$$anonfun$updateView$1$$anonfun$apply$2.apply(BrokerViewCacheActor.scala:77)
 at scala.Option.foreach(Option.scala:256)
 at kafka.manager.BrokerViewCacheActor$$anonfun$updateView$1.apply(BrokerViewCacheActor.scala:77)
 at kafka.manager.BrokerViewCacheActor$$anonfun$updateView$1.apply(BrokerViewCacheActor.scala:76)
 at scala.Option.foreach(Option.scala:256)
 at kafka.manager.BrokerViewCacheActor.updateView(BrokerViewCacheActor.scala:76)
 at kafka.manager.BrokerViewCacheActor.processActorResponse(BrokerViewCacheActor.scala:68)
 at kafka.manager.BaseActor$$anonfun$receive$1.applyOrElse(BaseActor.scala:26)
 at akka.actor.Actor$class.aroundReceive(Actor.scala:465)
 at kafka.manager.BaseActor.aroundReceive(BaseActor.scala:14)
 at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
 at akka.actor.ActorCell.invoke(ActorCell.scala:487)
 at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:254)
 at akka.dispatch.Mailbox.run(Mailbox.scala:221)
 at akka.dispatch.Mailbox.exec(Mailbox.scala:231)
 at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
 at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
 at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
 at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)


Please tag releases

Please tag your releases to make it possible to check out a release version.

git tag 1.2.2
git push --tags

Thanks!

Adding or have Partition Lag?

I didn't see it mentioned or in the screenshots (maybe I missed it), but are you planning to add lag charts to this manager?

e.g. topic foo, partition 1 is 90,000 messages behind for consumer group "bar", and then chart that over time to understand consumer group performance and lag.
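
The per-partition lag described above is just the log end offset minus the group's committed offset (illustrative Python; the input maps are assumptions):

```python
def partition_lag(log_end_offsets, committed_offsets):
    """Lag per partition = log end offset minus the group's committed offset.

    log_end_offsets: {partition: latest offset on the broker}
    committed_offsets: {partition: offset committed by the consumer group}
    A partition with no committed offset is treated as fully behind.
    """
    return {p: log_end_offsets[p] - committed_offsets.get(p, 0)
            for p in log_end_offsets}
```

Sampling this periodically and charting the values over time would give the lag history the issue asks for.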

[akka://kafka-manager-system/user/kafka-manager/mycluster/broker-view] Failed to get topic metrics for broker BrokerIdentity(2,node.server.com,9092,9999) WARNING arguments left: 1

Please let me know how I can enable JMX, as it keeps reporting the error below and I can't see the metrics in the UI.

[error] k.m.KafkaJMX$ - Failed to connect to service:jmx:rmi:///jndi/rmi://NODE6.sever.com:9999/jmxrmi
java.rmi.ConnectException: Connection refused to host: 127.0.1.1; nested exception is:
java.net.ConnectException: Connection refused
at sun.rmi.transport.tcp.TCPEndpoint.newSocket(TCPEndpoint.java:619) ~[na:1.7.0_79]
at sun.rmi.transport.tcp.TCPChannel.createConnection(TCPChannel.java:216) ~[na:1.7.0_79]
at sun.rmi.transport.tcp.TCPChannel.newConnection(TCPChannel.java:202) ~[na:1.7.0_79]
at sun.rmi.server.UnicastRef.invoke(UnicastRef.java:129) ~[na:1.7.0_79]
at javax.management.remote.rmi.RMIServerImpl_Stub.newClient(Unknown Source) ~[na:1.7.0_79]
Caused by: java.net.ConnectException: Connection refused
at java.net.PlainSocketImpl.socketConnect(Native Method) ~[na:1.7.0_79]
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) ~[na:1.7.0_79]
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) ~[na:1.7.0_79]
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) ~[na:1.7.0_79]
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) ~[na:1.7.0_79]
[ERROR] [05/19/2015 13:56:45.492] [pool-14-thread-1] [akka://kafka-manager-system/user/kafka-manager/mycluster/broker-view] Failed to get topic metrics for broker BrokerIdentity(2,NODE6.server.com,9092,9999) WARNING arguments left: 1
