yahoo / cmak
CMAK is a tool for managing Apache Kafka clusters
License: Apache License 2.0
kafka-manager could assist with health-checking of Kafka clusters in one of two ways:
I'm partial to the pull-based checks, since they're probably easier to integrate with existing monitoring systems. Either way, a good list of things to monitor can be taken from here: https://kafka.apache.org/082/ops.html
Some of the most interesting metrics are:
Please publish the build artifact to a repository (jcenter / bintray?)
A start: https://github.com/yahoo/kafka-manager/pull/78/files
Granted, these should be Yahoo credentials.
The sbt rpm:packageBin process is broken, mostly because of illegal characters in the version string in build.sbt. Temporarily fixing build.sbt to use a sanitized version string allows the RPM to build (CentOS 6.x / rpmbuild 4.8.0).
$ diff -c build.sbt.orig build.sbt
*** build.sbt.orig 2015-03-09 22:22:02.000000000 +0000
--- build.sbt 2015-03-11 22:45:50.506759803 +0000
***************
*** 5,11 ****
name := """kafka-manager"""
/* For packaging purposes, -SNAPSHOT MUST contain a digit */
! version := "1.0-SNAPSHOT1"
scalaVersion := "2.11.5"
--- 5,11 ----
name := """kafka-manager"""
/* For packaging purposes, -SNAPSHOT MUST contain a digit */
! version := "1.0"
scalaVersion := "2.11.5"
Other errors are still thrown and could be cleaned up / fixed in the RPM spec file. They do not seem to prevent the RPM from being built, however.
$ ./sbt rpm:packageBin
[info] Loading project definition from /path/to/kafka-manager/project
[warn] There may be incompatibilities among your library dependencies.
[warn] Here are some of the libraries that were evicted:
[warn] * com.typesafe.sbt:sbt-native-packager:0.7.4 -> 1.0.0-M4
[warn] Run 'evicted' to see detailed eviction warnings
[info] Set current project to kafka-manager (in build file:/path/to/kafka-manager/)
[info] Packaging /path/to/kafka-manager/target/scala-2.11/kafka-manager_2.11-1.0-sources.jar ...
[info] Updating {file:/path/to/kafka-manager/}root...
[info] Done packaging.
[info] Wrote /path/to/kafka-manager/target/scala-2.11/kafka-manager_2.11-1.0.pom
[info] Resolving jline#jline;2.12 ...
[info] Done updating.
[warn] There may be incompatibilities among your library dependencies.
[warn] Here are some of the libraries that were evicted:
[warn] * org.webjars:jquery:1.11.1 -> 2.1.1
[warn] Run 'evicted' to see detailed eviction warnings
[info] Packaging /path/to/kafka-manager/target/scala-2.11/kafka-manager_2.11-1.0-javadoc.jar ...
[info] Done packaging.
[info] Packaging /path/to/kafka-manager/target/kafka-manager-1.0-assets.jar ...
[info] Packaging /path/to/kafka-manager/target/scala-2.11/kafka-manager_2.11-1.0.jar ...
[info] Done packaging.
[info] Done packaging.
[info] Building target platforms: noarch-yahoo-Linux
[info] Building for target noarch-yahoo-Linux
[info] Executing(%install): /bin/sh -e /tmp/sbt_107cbb3e/rpm-tmp.T0K00U
[error] + umask 022
[error] + cd /path/to/kafka-manager/target/rpm/BUILD
[error] + '[' -e /path/to/kafka-manager/target/rpm/buildroot ']'
[error] + mv /path/to/kafka-manager/target/rpm/tmp-buildroot/etc /path/to/kafka-manager/target/rpm/tmp-buildroot/usr /path/to/kafka-manager/target/rpm/tmp-buildroot/var /path/to/kafka-manager/target/rpm/buildroot
[error] + /usr/lib/rpm/brp-compress
[error] + /usr/lib/rpm/brp-strip /usr/bin/strip
[error] + /usr/lib/rpm/brp-strip-static-archive /usr/bin/strip
[error] + /usr/lib/rpm/brp-strip-comment-note /usr/bin/strip /usr/bin/objdump
[info] Processing files: kafka-manager-1.0-1.noarch
[info] Provides: config(kafka-manager) = 1.0-1
[info] Requires(interp): /bin/sh /bin/sh /bin/sh /bin/sh
[info] Requires(rpmlib): rpmlib(CompressedFileNames) <= 3.0.4-1 rpmlib(PayloadFilesHavePrefix) <= 4.0-1
[info] Requires(pre): /bin/sh
[info] Requires(post): /bin/sh
[info] Requires(preun): /bin/sh
[info] Requires(postun): /bin/sh
[info] Checking for unpackaged file(s): /usr/lib/rpm/check-files /path/to/kafka-manager/target/rpm/buildroot
[info] Wrote: /path/to/kafka-manager/target/rpm/RPMS/noarch/kafka-manager-1.0-1.noarch.rpm
[info] Executing(%clean): /bin/sh -e /tmp/sbt_107cbb3e/rpm-tmp.5tuBb7
[error] + umask 022
[error] + cd /path/to/kafka-manager/target/rpm/BUILD
[error] + /bin/rm -rf /path/to/kafka-manager/target/rpm/buildroot
[error] + exit 0
[success] Total time: 12 s, completed Mar 11, 2015 10:46:12 PM
Instead of requiring the user to have a system-wide installation of sbt, it is often preferable to include the sbt wrapper script from sbt-extras (e.g. most Twitter Scala projects include this wrapper). The benefit is that the version of sbt configured in build.properties is downloaded automatically if needed. The only change for users is that they must run sbt-related commands via ./sbt instead of sbt.
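A minimal sketch of vendoring the wrapper; the raw URL below is an assumption and may have moved, so check the sbt-extras repository for the current location:

```shell
# Fetch the sbt-extras wrapper into the project root and mark it executable.
# The raw URL is an assumption; the fetch itself is shown commented out.
SBT_WRAPPER_URL="https://raw.githubusercontent.com/paulp/sbt-extras/master/sbt"
# curl -fsSL -o sbt "$SBT_WRAPPER_URL"
# chmod +x sbt
# ./sbt compile   # picks up the sbt version from project/build.properties
```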
KafkaManager converts the cluster name and zkHosts to lowercase when constructing a ClusterConfig.
This works fine if zkHosts only contains host names and ports; however, it may also contain an additional chroot path at its end.
In that case, the chroot path is part of a ZNode path, which is case sensitive. Converting it to lowercase makes Kafka Manager refer to a nonexistent path in Zookeeper when operating on the cluster.
To me, the case conversion is not very meaningful; even the cluster name does not necessarily have to be lowercase. Can we just keep the casing of the name and zkHosts as they are given?
How to reproduce: configure a cluster whose zkHosts ends with a chroot path, e.g.
zkhost1:2181,zkhost2:2181,zkhost3:2181/KafkaClusterPath
I tried getting rid of the lowercase conversion in my forked repo (evanmeng@9070032) and things look fine.
But I am not sure whether this breaks other functionality, since there are no regression tests yet. Maybe some of you can have a look at it?
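To sketch the distinction: only the host:port list is safe to lowercase, while the chroot suffix must keep its casing. Plain shell string handling, assuming everything from the first '/' onward is the chroot:

```shell
# Split a zkHosts string into the connect list and the case-sensitive chroot.
zk="zkhost1:2181,zkhost2:2181,zkhost3:2181/KafkaClusterPath"
hosts="${zk%%/*}"                              # everything before the first '/'
chroot=""
case "$zk" in */*) chroot="/${zk#*/}" ;; esac  # keep original casing
# Lowercasing the whole string would turn /KafkaClusterPath into
# /kafkaclusterpath, a nonexistent znode; only $hosts is safe to lowercase.
hosts_lc=$(printf '%s' "$hosts" | tr 'A-Z' 'a-z')
```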
Please let me know how I can enable JMX; it keeps reporting the error below and I can't see the metrics in the UI.
[error] k.m.KafkaJMX$ - Failed to connect to service:jmx:rmi:///jndi/rmi://NODE6.sever.com:9999/jmxrmi
java.rmi.ConnectException: Connection refused to host: 127.0.1.1; nested exception is:
java.net.ConnectException: Connection refused
at sun.rmi.transport.tcp.TCPEndpoint.newSocket(TCPEndpoint.java:619) ~[na:1.7.0_79]
at sun.rmi.transport.tcp.TCPChannel.createConnection(TCPChannel.java:216) ~[na:1.7.0_79]
at sun.rmi.transport.tcp.TCPChannel.newConnection(TCPChannel.java:202) ~[na:1.7.0_79]
at sun.rmi.server.UnicastRef.invoke(UnicastRef.java:129) ~[na:1.7.0_79]
at javax.management.remote.rmi.RMIServerImpl_Stub.newClient(Unknown Source) ~[na:1.7.0_79]
Caused by: java.net.ConnectException: Connection refused
at java.net.PlainSocketImpl.socketConnect(Native Method) ~[na:1.7.0_79]
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) ~[na:1.7.0_79]
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) ~[na:1.7.0_79]
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) ~[na:1.7.0_79]
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) ~[na:1.7.0_79]
[ERROR] [05/19/2015 13:56:45.492] [pool-14-thread-1] [akka://kafka-manager-system/user/kafka-manager/mycluster/broker-view] Failed to get topic metrics for broker BrokerIdentity(2,NODE6.server.com,9092,9999) WARNING arguments left: 1
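In case it helps: "Connection refused to host: 127.0.1.1" usually means the broker's JMX endpoint is not enabled, or RMI is advertising a loopback address. A sketch of the broker-side environment, assuming the stock Kafka start scripts (which honor JMX_PORT and KAFKA_JMX_OPTS); the hostname value is illustrative and must be resolvable from the kafka-manager host:

```shell
# Broker-side JMX settings; set these before starting the broker so RMI
# advertises a reachable hostname instead of 127.0.1.1.
export JMX_PORT=9999
export KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote \
  -Dcom.sun.management.jmxremote.authenticate=false \
  -Dcom.sun.management.jmxremote.ssl=false \
  -Djava.rmi.server.hostname=NODE6.server.com"
# bin/kafka-server-start.sh config/server.properties   # then restart the broker
```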
Please tag your releases so that a release version can be checked out.
git tag 1.2.2
git push --tags
Thanks!
Hi, can we get the topic and consumer offsets on the dashboard? With those we could also calculate the lag for a consumer.
Just got Kafka Manager up and running and realized I have no way to specify a root node in Zookeeper for the manager to look in. We have a couple of existing clusters under different nodes in Zookeeper that I'd love to add to the manager, but I don't currently see a way to do this.
Great project.
The topic list view only has a single topic-name column. It would be great if we could also see the number of partitions and the replication factor on the same page.
sbt rpm:packageBin
[info] Loading project definition from /Users/iam/Downloads/kafka-manager-master/project
[warn] There may be incompatibilities among your library dependencies.
[warn] Here are some of the libraries that were evicted:
[warn] * com.typesafe.sbt:sbt-native-packager:0.7.4 -> 1.0.0-M4
[warn] Run 'evicted' to see detailed eviction warnings
[info] Set current project to kafka-manager (in build file:/Users/iam/Downloads/kafka-manager-master/)
[warn] There may be incompatibilities among your library dependencies.
[warn] Here are some of the libraries that were evicted:
[warn] * org.webjars:jquery:(2.1.1, 1.11.1) -> 2.1.3
[warn] Run 'evicted' to see detailed eviction warnings
[info] Wrote /Users/iam/Downloads/kafka-manager-master/target/scala-2.11/kafka-manager_2.11-1.2.0.pom
[info] Packaging /Users/iam/Downloads/kafka-manager-master/target/kafka-manager-1.2.0-assets.jar ...
[info] Done packaging.
java.io.IOException: Cannot run program "rpmbuild" (in directory "/Users/iam/Downloads/kafka-manager-master/target/rpm/SPECS"): error=2, No such file or directory
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1047)
at sbt.SimpleProcessBuilder.run(ProcessImpl.scala:349)
at sbt.AbstractProcessBuilder.run(ProcessImpl.scala:128)
at sbt.AbstractProcessBuilder$$anonfun$runBuffered$1.apply(ProcessImpl.scala:159)
at sbt.AbstractProcessBuilder$$anonfun$runBuffered$1.apply(ProcessImpl.scala:159)
at sbt.BufferedLogger.buffer(BufferedLogger.scala:25)
at sbt.AbstractProcessBuilder.runBuffered(ProcessImpl.scala:159)
at sbt.AbstractProcessBuilder.$bang(ProcessImpl.scala:156)
at com.typesafe.sbt.packager.rpm.RpmHelper$$anonfun$buildPackage$1.apply(RpmHelper.scala:87)
at com.typesafe.sbt.packager.rpm.RpmHelper$$anonfun$buildPackage$1.apply(RpmHelper.scala:74)
at sbt.IO$.withTemporaryDirectory(IO.scala:291)
at com.typesafe.sbt.packager.rpm.RpmHelper$.buildPackage(RpmHelper.scala:74)
at com.typesafe.sbt.packager.rpm.RpmHelper$.buildRpm(RpmHelper.scala:20)
at com.typesafe.sbt.packager.rpm.RpmPlugin$$anonfun$projectSettings$31.apply(RpmPlugin.scala:100)
at com.typesafe.sbt.packager.rpm.RpmPlugin$$anonfun$projectSettings$31.apply(RpmPlugin.scala:98)
at scala.Function3$$anonfun$tupled$1.apply(Function3.scala:35)
at scala.Function3$$anonfun$tupled$1.apply(Function3.scala:34)
at scala.Function1$$anonfun$compose$1.apply(Function1.scala:47)
at sbt.$tilde$greater$$anonfun$$u2219$1.apply(TypeFunctions.scala:40)
at sbt.std.Transform$$anon$4.work(System.scala:63)
at sbt.Execute$$anonfun$submit$1$$anonfun$apply$1.apply(Execute.scala:226)
at sbt.Execute$$anonfun$submit$1$$anonfun$apply$1.apply(Execute.scala:226)
at sbt.ErrorHandling$.wideConvert(ErrorHandling.scala:17)
at sbt.Execute.work(Execute.scala:235)
at sbt.Execute$$anonfun$submit$1.apply(Execute.scala:226)
at sbt.Execute$$anonfun$submit$1.apply(Execute.scala:226)
at sbt.ConcurrentRestrictions$$anon$4$$anonfun$1.apply(ConcurrentRestrictions.scala:159)
at sbt.CompletionService$$anon$2.call(CompletionService.scala:28)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: error=2, No such file or directory
at java.lang.UNIXProcess.forkAndExec(Native Method)
at java.lang.UNIXProcess.(UNIXProcess.java:184)
at java.lang.ProcessImpl.start(ProcessImpl.java:130)
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1028)
at sbt.SimpleProcessBuilder.run(ProcessImpl.scala:349)
at sbt.AbstractProcessBuilder.run(ProcessImpl.scala:128)
at sbt.AbstractProcessBuilder$$anonfun$runBuffered$1.apply(ProcessImpl.scala:159)
at sbt.AbstractProcessBuilder$$anonfun$runBuffered$1.apply(ProcessImpl.scala:159)
at sbt.BufferedLogger.buffer(BufferedLogger.scala:25)
at sbt.AbstractProcessBuilder.runBuffered(ProcessImpl.scala:159)
at sbt.AbstractProcessBuilder.$bang(ProcessImpl.scala:156)
at com.typesafe.sbt.packager.rpm.RpmHelper$$anonfun$buildPackage$1.apply(RpmHelper.scala:87)
at com.typesafe.sbt.packager.rpm.RpmHelper$$anonfun$buildPackage$1.apply(RpmHelper.scala:74)
at sbt.IO$.withTemporaryDirectory(IO.scala:291)
at com.typesafe.sbt.packager.rpm.RpmHelper$.buildPackage(RpmHelper.scala:74)
at com.typesafe.sbt.packager.rpm.RpmHelper$.buildRpm(RpmHelper.scala:20)
at com.typesafe.sbt.packager.rpm.RpmPlugin$$anonfun$projectSettings$31.apply(RpmPlugin.scala:100)
at com.typesafe.sbt.packager.rpm.RpmPlugin$$anonfun$projectSettings$31.apply(RpmPlugin.scala:98)
at scala.Function3$$anonfun$tupled$1.apply(Function3.scala:35)
at scala.Function3$$anonfun$tupled$1.apply(Function3.scala:34)
at scala.Function1$$anonfun$compose$1.apply(Function1.scala:47)
at sbt.$tilde$greater$$anonfun$$u2219$1.apply(TypeFunctions.scala:40)
at sbt.std.Transform$$anon$4.work(System.scala:63)
at sbt.Execute$$anonfun$submit$1$$anonfun$apply$1.apply(Execute.scala:226)
at sbt.Execute$$anonfun$submit$1$$anonfun$apply$1.apply(Execute.scala:226)
at sbt.ErrorHandling$.wideConvert(ErrorHandling.scala:17)
at sbt.Execute.work(Execute.scala:235)
at sbt.Execute$$anonfun$submit$1.apply(Execute.scala:226)
at sbt.Execute$$anonfun$submit$1.apply(Execute.scala:226)
at sbt.ConcurrentRestrictions$$anon$4$$anonfun$1.apply(ConcurrentRestrictions.scala:159)
at sbt.CompletionService$$anon$2.call(CompletionService.scala:28)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
error java.io.IOException: Cannot run program "rpmbuild" (in directory "/Users/iam/Downloads/kafka-manager-master/target/rpm/SPECS"): error=2, No such file or directory
[error] Total time: 2 s, completed Apr 21, 2015 11:46:47 AM
I think it would be nice to have metrics/stats about the Kafka server (message rate, bytes-in rate, bytes-out rate, ...).
AFAIK metrics/stats are not stored in Zookeeper (ref: https://cwiki.apache.org/confluence/display/KAFKA/Kafka+data+structures+in+Zookeeper). Would you consider a pull request that retrieves metrics from the Kafka server? I could use the Kafka REST Proxy, JMX, or ...?
A simpler solution might be to get the consumer offset from curator/zookeeper at /consumers/[groupId]/offsets/[topic]/[partitionId]?
Wdyt?
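A sketch of what that lookup looks like from the command line, assuming Kafka's bundled zookeeper-shell.sh and a high-level consumer that commits offsets to Zookeeper (group/topic/partition values are illustrative):

```shell
# Build the well-known offset znode path for a Zookeeper-committed consumer.
GROUP="my-group"; TOPIC="my-topic"; PARTITION=0
OFFSET_PATH="/consumers/$GROUP/offsets/$TOPIC/$PARTITION"
# bin/zookeeper-shell.sh zkhost:2181 get "$OFFSET_PATH"   # prints the offset
```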
It seems I can't get kafka-manager to use the conf/application.conf file; I have to specify the config options on the command line using -D...
Is this normal behaviour?
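For what it's worth, Play applications read the application.conf packaged inside the jar by default; pointing at an external file with -Dconfig.file at startup is the usual way to override it. A sketch (paths are illustrative):

```shell
# Start kafka-manager against an external config file instead of the
# application.conf bundled in the jar; the path here is illustrative.
CONF="/etc/kafka-manager/application.conf"
# bin/kafka-manager -Dconfig.file="$CONF" -Dhttp.port=9000
```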
Right now there is no visual difference between topics marked for deletion and active topics.
If a topic is undergoing reassignment, the derived replication factor could be wrong. This would make generating assignments troublesome, since the replication factor would be derived incorrectly.
A bar chart of data size by hour or by day would be helpful for understanding how regular a data flow is and how much data is new vs soon-to-expire. It would require adding an additional JMX metric to Kafka.
I am getting the following error. Has anybody faced the same? If yes, any idea what needs to be fixed?
[info] Loading project definition from /Users/pukumar/dev/kafka-manager/kafka-manager-master/project
[warn] There may be incompatibilities among your library dependencies.
[warn] Here are some of the libraries that were evicted:
[warn] * com.typesafe.sbt:sbt-native-packager:0.7.4 -> 1.0.0-M4
[warn] Run 'evicted' to see detailed eviction warnings
[info] Set current project to kafka-manager (in build file:/Users/pukumar/dev/kafka-manager/kafka-manager-master/)
[warn] There may be incompatibilities among your library dependencies.
[warn] Here are some of the libraries that were evicted:
[warn] * org.webjars:jquery:(2.1.1, 1.11.1) -> 2.1.3
[warn] Run 'evicted' to see detailed eviction warnings
[info] Wrote /Users/pukumar/dev/kafka-manager/kafka-manager-master/target/scala-2.11/kafka-manager_2.11-1.2.0.pom
[info] Packaging /Users/pukumar/dev/kafka-manager/kafka-manager-master/target/kafka-manager-1.2.0-assets.jar ...
[info] Done packaging.
[info] Building target platforms: noarch-yahoo-Linux
[info] Executing(%install): /bin/sh -e /var/folders/zz/zyxvpxvq6csfxvn_n0000000000000/T/sbt_44c3e8c6/rpm-tmp.3898
[error] + umask 022
[error] + cd /Users/pukumar/dev/kafka-manager/kafka-manager-master/target/rpm/BUILD
[error] + /bin/rm -rf /Users/pukumar/dev/kafka-manager/kafka-manager-master/target/rpm/buildroot
[error] + /bin/mkdir -p /Users/pukumar/dev/kafka-manager/kafka-manager-master/target/rpm/buildroot
[error] + '[' -e /Users/pukumar/dev/kafka-manager/kafka-manager-master/target/rpm/buildroot ']'
[error] + mv /Users/pukumar/dev/kafka-manager/kafka-manager-master/target/rpm/tmp-buildroot/etc /Users/pukumar/dev/kafka-manager/kafka-manager-master/target/rpm/tmp-buildroot/usr /Users/pukumar/dev/kafka-manager/kafka-manager-master/target/rpm/tmp-buildroot/var /Users/pukumar/dev/kafka-manager/kafka-manager-master/target/rpm/buildroot
[error] + '%{_rpmconfigdir}/brp-compress'
[error] /var/folders/zz/zyxvpxvq6csfxvn_n0000000000000/T/sbt_44c3e8c6/rpm-tmp.3898: line 32: fg: no job control
[error] error: Bad exit status from /var/folders/zz/zyxvpxvq6csfxvn_n0000000000000/T/sbt_44c3e8c6/rpm-tmp.3898 (%install)
[info]
[info]
[info] RPM build errors:
[error] Bad exit status from /var/folders/zz/zyxvpxvq6csfxvn_n0000000000000/T/sbt_44c3e8c6/rpm-tmp.3898 (%install)
java.lang.RuntimeException: Unable to run rpmbuild, check output for details. Errorcode 1
at scala.sys.package$.error(package.scala:27)
at com.typesafe.sbt.packager.rpm.RpmHelper$$anonfun$buildPackage$1.apply(RpmHelper.scala:89)
at com.typesafe.sbt.packager.rpm.RpmHelper$$anonfun$buildPackage$1.apply(RpmHelper.scala:74)
at sbt.IO$.withTemporaryDirectory(IO.scala:291)
at com.typesafe.sbt.packager.rpm.RpmHelper$.buildPackage(RpmHelper.scala:74)
at com.typesafe.sbt.packager.rpm.RpmHelper$.buildRpm(RpmHelper.scala:20)
at com.typesafe.sbt.packager.rpm.RpmPlugin$$anonfun$projectSettings$31.apply(RpmPlugin.scala:100)
at com.typesafe.sbt.packager.rpm.RpmPlugin$$anonfun$projectSettings$31.apply(RpmPlugin.scala:98)
at scala.Function3$$anonfun$tupled$1.apply(Function3.scala:35)
at scala.Function3$$anonfun$tupled$1.apply(Function3.scala:34)
at scala.Function1$$anonfun$compose$1.apply(Function1.scala:47)
at sbt.$tilde$greater$$anonfun$$u2219$1.apply(TypeFunctions.scala:40)
at sbt.std.Transform$$anon$4.work(System.scala:63)
at sbt.Execute$$anonfun$submit$1$$anonfun$apply$1.apply(Execute.scala:226)
at sbt.Execute$$anonfun$submit$1$$anonfun$apply$1.apply(Execute.scala:226)
at sbt.ErrorHandling$.wideConvert(ErrorHandling.scala:17)
at sbt.Execute.work(Execute.scala:235)
at sbt.Execute$$anonfun$submit$1.apply(Execute.scala:226)
at sbt.Execute$$anonfun$submit$1.apply(Execute.scala:226)
at sbt.ConcurrentRestrictions$$anon$4$$anonfun$1.apply(ConcurrentRestrictions.scala:159)
at sbt.CompletionService$$anon$2.call(CompletionService.scala:28)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
error Unable to run rpmbuild, check output for details. Errorcode 1
[error] Total time: 2 s, completed Apr 29, 2015 2:51:28 PM
Does it support consumed message counts and offsets by any chance? If yes, where can I see them?
The KafkaStateActor can be dead on arrival due to ZK failures in preStart. This ends up making it time out on all requests. It would be better if it remembered the exception and re-reported it on subsequent requests.
The default is 1000 ms; it would be better to make it longer.
Users can configure it themselves, but a sensible default is better.
It would be nice if kafka manager supported increasing the number of partitions of a topic.
See 'Increasing replication factor' on this page
We are going to use Docker to install kafka-manager. Is a Docker image already available here that we could leverage for our installation? Another question: I found some images on the Docker site, but I'm not sure how such an image manages networking between the container and external hosts. We have Zookeeper set up on VMs using Puppet but want to install kafka-manager with Docker; will there be any issues with the network configuration?
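A sketch of running kafka-manager in a container against external Zookeeper VMs. The image name below is a third-party community image and an assumption here; the key point is that ZK_HOSTS must contain addresses routable from inside the container:

```shell
# Point the container at externally-reachable Zookeeper addresses;
# "sheepkiller/kafka-manager" is a community image named for illustration.
export ZK_HOSTS="zkvm01:2181,zkvm02:2181,zkvm03:2181"
# docker run -d -p 9000:9000 -e ZK_HOSTS="$ZK_HOSTS" sheepkiller/kafka-manager
```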
Kafka brokers report data sizes by topic-partition in the "Size" metric, part of the kafka.log.Log class. It would be really useful to see breakdowns of this by broker, topic, and topic-partition. The use cases are endless!
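A sketch of polling that metric over JMX with Kafka's bundled JmxTool. The exact MBean object name varies between Kafka versions, so the pattern below is an assumption to adapt (e.g. by checking the names jconsole shows on your brokers):

```shell
# Poll the per-partition log Size mbean from a broker's JMX endpoint.
# Both the JMX URL and the object-name pattern are illustrative assumptions.
JMX_URL="service:jmx:rmi:///jndi/rmi://broker1:9999/jmxrmi"
OBJECT_NAME="kafka.log:type=Log,name=Size,topic=my-topic,partition=0"
# bin/kafka-run-class.sh kafka.tools.JmxTool \
#   --jmx-url "$JMX_URL" --object-name "$OBJECT_NAME"
```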
5 artifacts copied, 0 already retrieved (24459kB/302ms)
[info] Loading project definition from /home/ubuntu/kafka-manager/project
[warn] There may be incompatibilities among your library dependencies.
[warn] Here are some of the libraries that were evicted:
[warn] * com.typesafe.sbt:sbt-native-packager:0.7.4 -> 1.0.0-M4
[warn] Run 'evicted' to see detailed eviction warnings
[info] Set current project to kafka-manager (in build file:/home/ubuntu/kafka-manager/)
Hi - I just deployed Kafka Manager today and it is pretty cool. It abstracts a lot of the day-to-day tasks into a slick web UI. What kind of effort would be involved in adding near-real-time stats like message rates, disk usage, cluster utilization...? I think it would be a nice enhancement. Thanks, Brendan
Messages in /sec 0.00 0.00 0.00 0.00
Bytes in /sec 0.00 0.00 0.00 0.00
Bytes out /sec 0.00 0.00 0.00 0.00
Bytes rejected /sec 0.00 0.00 0.00 0.00
Failed fetch request /sec 0.00 0.00 0.00 0.00
Failed produce request /sec 0.00 0.00 0.00 0.00
It would be nice to be able to remove partitions and topics from the web interface. (First vote would be for topics).
I am getting this error when the Zookeeper host has a path at the end, i.e. zk:2181/replication/kafka
[ERROR] [02/09/2015 21:15:59.637] [kafka-manager-system-akka.actor.default-dispatcher-14] [akka://kafka-manager-system/user/kafka-manager/sjc2/broker-view] / by zero
java.lang.ArithmeticException: / by zero
at kafka.manager.TopicIdentity.<init>(TopicIdentity.scala:61)
at kafka.manager.TopicIdentity$.from(TopicIdentity.scala:74)
at kafka.manager.BrokerViewCacheActor$$anonfun$updateView$1$$anonfun$apply$2$$anonfun$2.apply(BrokerViewCacheActor.scala:79)
at kafka.manager.BrokerViewCacheActor$$anonfun$updateView$1$$anonfun$apply$2$$anonfun$2.apply(BrokerViewCacheActor.scala:79)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
at scala.collection.Iterator$class.foreach(Iterator.scala:743)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1195)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:245)
at scala.collection.AbstractTraversable.map(Traversable.scala:104)
at kafka.manager.BrokerViewCacheActor$$anonfun$updateView$1$$anonfun$apply$2.apply(BrokerViewCacheActor.scala:79)
at kafka.manager.BrokerViewCacheActor$$anonfun$updateView$1$$anonfun$apply$2.apply(BrokerViewCacheActor.scala:77)
at scala.Option.foreach(Option.scala:256)
at kafka.manager.BrokerViewCacheActor$$anonfun$updateView$1.apply(BrokerViewCacheActor.scala:77)
at kafka.manager.BrokerViewCacheActor$$anonfun$updateView$1.apply(BrokerViewCacheActor.scala:76)
at scala.Option.foreach(Option.scala:256)
at kafka.manager.BrokerViewCacheActor.updateView(BrokerViewCacheActor.scala:76)
at kafka.manager.BrokerViewCacheActor.processActorResponse(BrokerViewCacheActor.scala:68)
at kafka.manager.BaseActor$$anonfun$receive$1.applyOrElse(BaseActor.scala:26)
at akka.actor.Actor$class.aroundReceive(Actor.scala:465)
at kafka.manager.BaseActor.aroundReceive(BaseActor.scala:14)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
at akka.actor.ActorCell.invoke(ActorCell.scala:487)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:254)
at akka.dispatch.Mailbox.run(Mailbox.scala:221)
at akka.dispatch.Mailbox.exec(Mailbox.scala:231)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Where do I specify the JMX port?
Suggestion: add support for adding partitions and altering topics.
This is a good tool; however, I do not want to install it on the production server, while I still wish to monitor the topics, etc. Can I install it on a VM that remotely accesses the production server, collecting and displaying metrics?
Same question for web-console, KafkaOffsetMonitor, etc.
Thanks
A binary release of this as an executable jar would be awesome! I'd love to try it, but I'm not very familiar with the Scala toolchain and I'm working through some difficulties with sbt (which I attribute to operator error). Thanks, kafka-manager looks very cool.
Hi,
I've just created a debian package with sbt.
And it creates
/usr/share/kafka-manager/bin/kafka-manager
/etc/default/kafka-manager
/etc/kafka-manager/application.conf
A sane default would be to add this line under the shebang in ..../bin/kafka-manager so that it sources /etc/default/kafka-manager:
#!/usr/bin/env bash
[ -r /etc/default/kafka-manager ] && . /etc/default/kafka-manager
Another sane default would be to add these lines to /etc/default/kafka-manager.
export PIDFILE="/var/run/kafka-manager/play.pid"
export JAVA_OPTS="-Dpidfile.path=$PIDFILE -Dconfig.file=/etc/kafka-manager/application.conf $JAVA_OPTS"
export APPLICATION_SECRET="somekey"
export ZK_HOSTS="zookeeper01:2181,zookeeper02:2181"
Hi,
We want to automate some of the kafka management by
To achieve this, we also need to be able to prime (pre-populate) a kafka-manager instance with a set of Kafka clusters' zkHosts lists. We intend to provide the list via the application conf (as a list of lists) and set that value in the conf during the Puppet install of kafka-manager.
But I am struggling a bit to find a nice bootstrap (startup) point in the lifecycle of the kafka-manager app at which to insert a populate_kafka_clusters() method, maybe because I am very new to Scala and Play. Can you please point me to such a location, or, if there is none, can one be added so that other such initializations can be hooked in at that interception point? Also, can you point me to the code that takes the zkHosts for a cluster and adds them to the backend store (kafka-manager's Zookeeper namespace, /kafka-manager I presume)?
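Until a config-driven bootstrap exists, one possible workaround is to prime clusters after startup through the same HTTP endpoint the "Add Cluster" form posts to. The endpoint path and form field names below are assumptions inferred from the UI form and may differ between versions:

```shell
# Add a cluster via the web app's form endpoint after startup.
# Endpoint path and field names are assumptions; inspect the Add Cluster
# form in your version to confirm them.
KM="http://kafka-manager-host:9000"
NAME="prod-cluster"
ZKHOSTS="zkhost1:2181,zkhost2:2181/KafkaClusterPath"
# curl -s "$KM/clusters" --data-urlencode "name=$NAME" \
#   --data-urlencode "zkHosts=$ZKHOSTS" --data-urlencode "kafkaVersion=0.8.2.0"
```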
Status icons in the topic list would be helpful for a quick bird's-eye view of overall cluster state. One suggestion for the icon colors is:
When we need to reassign replicas from a broker to other brokers, say, when the broker is dead, the only option we have right now is to reassign all replicas randomly, which means replicas on healthy brokers might also be moved around. This can be very inefficient.
I added a new command called drain to a util my team is using (lucaluo/kafkat@d0c9457), which essentially takes all replicas (of one or more topics) off a broker and puts them on a list of healthy brokers. The original replicas on healthy brokers are not moved, which is far more efficient.
I would like to migrate this functionality to kafka-manager as well.
Let me know what you think.
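Outside kafka-manager, the same effect can be approximated today with Kafka's kafka-reassign-partitions.sh by handwriting a reassignment JSON that keeps replicas already on healthy brokers and only replaces the ones on the dead broker. A minimal sketch with illustrative topic and broker ids:

```shell
# Plan that moves only the replica on dead broker 3 to healthy broker 4,
# leaving the existing replicas on brokers 1 and 2 untouched.
cat > drain-plan.json <<'EOF'
{"version":1,
 "partitions":[
   {"topic":"my-topic","partition":0,"replicas":[1,2,4]}
 ]}
EOF
# bin/kafka-reassign-partitions.sh --zookeeper zk:2181 \
#   --reassignment-json-file drain-plan.json --execute
```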
2015-02-08 16:23:49,016 - [INFO] - from org.apache.zookeeper.ClientCnxn in kafka-manager-system-akka.actor.default-dispatcher-7-SendThread(10.65.196.37:2181)
Opening socket connection to server 10.65.196.37/10.65.196.37:2181. Will not attempt to authenticate using SASL (unknown error)
2015-02-08 16:23:49,312 - [WARN] - from org.webjars.RequireJS in play-akka.actor.default-dispatcher-2
Could not read WebJar RequireJS config for: dustjs-linkedin 2.4.0
Please file a bug at: http://github.com/webjars/dustjs-linkedin/issues/new
2015-02-08 16:23:49,354 - [WARN] - from org.webjars.RequireJS in play-akka.actor.default-dispatcher-2
Could not read WebJar RequireJS config for: json 20121008
Please file a bug at: http://github.com/webjars/json/issues/new
2015-02-08 16:23:49,378 - [WARN] - from org.webjars.RequireJS in play-akka.actor.default-dispatcher-2
Could not read WebJar RequireJS config for: requirejs 2.1.10
Please file a bug at: http://github.com/webjars/requirejs/issues/new
2015-02-08 16:23:58,829 - [ERROR] - from kafka.manager.ApiError in pool-3-thread-2
error : Ask timed out on [ActorSelection[Anchor(akka://kafka-manager-system/), Path(/user/kafka-manager)]] after [1000 ms]
akka.pattern.AskTimeoutException: Ask timed out on [ActorSelection[Anchor(akka://kafka-manager-system/), Path(/user/kafka-manager)]] after [1000 ms]
at akka.pattern.PromiseActorRef$$anonfun$1.apply$mcV$sp(AskSupport.scala:333) ~[akka-actor_2.11-2.3.7.jar:na]
at akka.actor.Scheduler$$anon$7.run(Scheduler.scala:117) ~[akka-actor_2.11-2.3.7.jar:na]
at scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:599) ~[scala-library-2.11.4.jar:na]
at scala.concurrent.BatchingExecutor$class.execute(BatchingExecutor.scala:109) ~[scala-library-2.11.4.jar:na]
at scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:597) ~[scala-library-2.11.4.jar:na]
at akka.actor.LightArrayRevolverScheduler$TaskHolder.executeTask(Scheduler.scala:467) ~[akka-actor_2.11-2.3.7.jar:na]
at akka.actor.LightArrayRevolverScheduler$$anon$8.executeBucket$1(Scheduler.scala:419) ~[akka-actor_2.11-2.3.7.jar:na]
at akka.actor.LightArrayRevolverScheduler$$anon$8.nextTick(Scheduler.scala:423) ~[akka-actor_2.11-2.3.7.jar:na]
at akka.actor.LightArrayRevolverScheduler$$anon$8.run(Scheduler.scala:375) ~[akka-actor_2.11-2.3.7.jar:na]
at java.lang.Thread.run(Thread.java:724) ~[na:1.7.0_40-MS]
2015-02-08 16:24:04,065 - [ERROR] - from org.apache.curator.ConnectionState in kafka-manager-system-akka.actor.default-dispatcher-7
Connection timed out for connection string (10.65.196.35:2181,10.65.196.36:2181,10.65.196.37:2181) and timeout (15000) / elapsed (15114)
org.apache.curator.CuratorConnectionLossException: KeeperErrorCode = ConnectionLoss
at org.apache.curator.ConnectionState.checkTimeouts(ConnectionState.java:197) [curator-client-2.7.0.jar:na]
at org.apache.curator.ConnectionState.getZooKeeper(ConnectionState.java:87) [curator-client-2.7.0.jar:na]
at org.apache.curator.CuratorZookeeperClient.getZooKeeper(CuratorZookeeperClient.java:115) ~[curator-client-2.7.0.jar:na]
at org.apache.curator.framework.imps.CuratorFrameworkImpl.getZooKeeper(CuratorFrameworkImpl.java:492) ~[curator-framework-2.7.0.jar:na]
at org.apache.curator.framework.imps.CreateBuilderImpl$11.call(CreateBuilderImpl.java:691) ~[curator-framework-2.7.0.jar:na]
at org.apache.curator.framework.imps.CreateBuilderImpl$11.call(CreateBuilderImpl.java:675) ~[curator-framework-2.7.0.jar:na]
at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:107) ~[curator-client-2.7.0.jar:na]
at org.apache.curator.framework.imps.CreateBuilderImpl.pathInForeground(CreateBuilderImpl.java:671) ~[curator-framework-2.7.0.jar:na]
at org.apache.curator.framework.imps.CreateBuilderImpl.protectedPathInForeground(CreateBuilderImpl.java:453) ~[curator-framework-2.7.0.jar:na]
at org.apache.curator.framework.imps.CreateBuilderImpl.forPath(CreateBuilderImpl.java:443) ~[curator-framework-2.7.0.jar:na]
at org.apache.curator.framework.imps.CreateBuilderImpl.forPath(CreateBuilderImpl.java:423) ~[curator-framework-2.7.0.jar:na]
at org.apache.curator.framework.imps.CreateBuilderImpl.forPath(CreateBuilderImpl.java:44) ~[curator-framework-2.7.0.jar:na]
at kafka.manager.KafkaManagerActor$$anonfun$6.apply(KafkaManagerActor.scala:184) ~[kafka-manager_2.11-1.0-SNAPSHOT.jar:1.0-SNAPSHOT]
at kafka.manager.KafkaManagerActor$$anonfun$6.apply(KafkaManagerActor.scala:184) ~[kafka-manager_2.11-1.0-SNAPSHOT.jar:1.0-SNAPSHOT]
at scala.util.Try$.apply(Try.scala:191) ~[scala-library-2.11.4.jar:na]
at kafka.manager.KafkaManagerActor.&lt;init&gt;(KafkaManagerActor.scala:184) ~[kafka-manager_2.11-1.0-SNAPSHOT.jar:1.0-SNAPSHOT]
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[na:1.7.0_40-MS]
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) ~[na:1.7.0_40-MS]
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[na:1.7.0_40-MS]
at java.lang.reflect.Constructor.newInstance(Constructor.java:526) ~[na:1.7.0_40-MS]
at akka.util.Reflect$.instantiate(Reflect.scala:66) ~[akka-actor_2.11-2.3.7.jar:na]
at akka.actor.ArgsReflectConstructor.produce(Props.scala:352) ~[akka-actor_2.11-2.3.7.jar:na]
at akka.actor.Props.newActor(Props.scala:252) ~[akka-actor_2.11-2.3.7.jar:na]
at akka.actor.ActorCell.newActor(ActorCell.scala:552) ~[akka-actor_2.11-2.3.7.jar:na]
at akka.actor.ActorCell.create(ActorCell.scala:578) ~[akka-actor_2.11-2.3.7.jar:na]
at akka.actor.ActorCell.invokeAll$1(ActorCell.scala:456) ~[akka-actor_2.11-2.3.7.jar:na]
at akka.actor.ActorCell.systemInvoke(ActorCell.scala:478) ~[akka-actor_2.11-2.3.7.jar:na]
at akka.dispatch.Mailbox.processAllSystemMessages(Mailbox.scala:279) ~[akka-actor_2.11-2.3.7.jar:na]
at akka.dispatch.Mailbox.run(Mailbox.scala:220) ~[akka-actor_2.11-2.3.7.jar:na]
at akka.dispatch.Mailbox.exec(Mailbox.scala:231) ~[akka-actor_2.11-2.3.7.jar:na]
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260) ~[scala-library-2.11.4.jar:na]
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.pollAndExecAll(ForkJoinPool.java:1253) ~[scala-library-2.11.4.jar:na]
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1346) ~[scala-library-2.11.4.jar:na]
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979) ~[scala-library-2.11.4.jar:na]
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107) ~[scala-library-2.11.4.jar:na]
Where do I generate the automatic partition assignments? I don't see the option; under Reassign Partitions I just see the following message:
No data found for any recent reassign partitions command.
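For context, "generating" an assignment here means what Kafka's own tooling does with kafka-reassign-partitions.sh --generate: spread each partition's replicas round-robin across the broker list. A minimal sketch of that idea (the object and method names are invented for illustration; this is not kafka-manager's actual implementation):

```scala
// Hypothetical sketch: round-robin replica assignment, in the spirit of
// Kafka's reassignment "generate" step. Broker IDs are arbitrary.
object AssignmentSketch {
  /** Assign `replicas` brokers to each of `partitions` partitions,
    * rotating through `brokers` so leaders and replicas spread evenly. */
  def roundRobin(brokers: Seq[Int], partitions: Int, replicas: Int): Map[Int, Seq[Int]] =
    (0 until partitions).map { p =>
      p -> (0 until replicas).map(r => brokers((p + r) % brokers.size))
    }.toMap
}
```

For example, `roundRobin(Seq(1, 2, 3), partitions = 3, replicas = 2)` places partition 0 on brokers 1 and 2, partition 1 on brokers 2 and 3, and so on.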
I installed the kafka-manager Debian package built using sbt debian:packageBin
, but I cannot see any log files in the /var/log/kafka-manager dir.
Am I missing something?
The 16GB root device of the machine I'm running this on ran out of space due to /var/log/upstart/kafka-manager.log. These logs are rotated daily, so it must have logged a lot in a short time.
The main content appears to be blocks like this:
[INFO] [02/14/2015 13:30:53.858] [kafka-manager-system-akka.actor.default-dispatcher-161] [akka://kafka-manager-system/user/kafka-manager/<cluster>/broker-view] Stopped actor akka://kafka-manager-system/user/kafka-manager/<cluster>/broker-view
[INFO] [02/14/2015 13:30:53.858] [kafka-manager-system-akka.actor.default-dispatcher-161] [akka://kafka-manager-system/user/kafka-manager/<cluster>/broker-view] Cancelling updater...
[INFO] [02/14/2015 13:30:53.858] [kafka-manager-system-akka.actor.default-dispatcher-161] [akka://kafka-manager-system/user/kafka-manager/<cluster>/broker-view] Started actor akka://kafka-manager-system/user/kafka-manager/<cluster>/broker-view
[INFO] [02/14/2015 13:30:53.858] [kafka-manager-system-akka.actor.default-dispatcher-161] [akka://kafka-manager-system/user/kafka-manager/<cluster>/broker-view] Scheduling updater for 10 seconds
[INFO] [02/14/2015 13:30:53.859] [kafka-manager-system-akka.actor.default-dispatcher-161] [akka://kafka-manager-system/user/kafka-manager/<cluster>/broker-view] Updating broker view...
[ERROR] [02/14/2015 13:30:53.860] [kafka-manager-system-akka.actor.default-dispatcher-160] [akka://kafka-manager-system/user/kafka-manager/<cluster>/broker-view] null
java.lang.ArithmeticException
Blocks like this appear every few milliseconds in the log file.
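The ArithmeticException here is almost certainly an integer division by zero somewhere in the broker-view update, e.g. dividing by a broker or partition count that is still zero while the actor restarts. A hedged illustration of that failure mode and the obvious guard (all names are invented; this is not kafka-manager's code):

```scala
// Hypothetical illustration: integer division by zero when a freshly
// (re)started view has no brokers yet, plus a guarded alternative.
object ViewMathSketch {
  // Unsafe: throws java.lang.ArithmeticException when brokerCount == 0.
  def msgsPerBrokerUnsafe(totalMsgs: Long, brokerCount: Int): Long =
    totalMsgs / brokerCount

  // Guarded: treat an empty view as zero throughput per broker.
  def msgsPerBroker(totalMsgs: Long, brokerCount: Int): Long =
    if (brokerCount == 0) 0L else totalMsgs / brokerCount
}
```

The guard alone would not stop the restart loop that floods the log, but it would turn the crash into a benign zero until real data arrives.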
It would be nice if the readme had a basic overview of how to quickly get up and running with kafka-manager.
Hi,
when will Kafka 0.8.0 be supported?
Thanks
After #52 is merged, it would be cool to improve upon that by refreshing JMX-gathered metrics in the background. This would prevent the BrokerViewCacheActor thread from blocking.
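A minimal sketch of the non-blocking pattern being suggested, using a plain ScheduledExecutorService rather than kafka-manager's actual actors (the class and parameter names here are hypothetical): readers always get the last completed snapshot immediately, while a background task does the slow JMX-style fetch.

```scala
import java.util.concurrent.{Executors, TimeUnit}
import java.util.concurrent.atomic.AtomicReference

// Hypothetical sketch: refresh expensive metrics in the background so
// readers of the cache never block on the fetch itself.
class BackgroundMetricsCache(fetch: () => Map[String, Long], periodMs: Long) {
  private val snapshot = new AtomicReference[Map[String, Long]](Map.empty)
  private val scheduler = Executors.newSingleThreadScheduledExecutor()

  scheduler.scheduleAtFixedRate(
    new Runnable { def run(): Unit = snapshot.set(fetch()) },
    0, periodMs, TimeUnit.MILLISECONDS)

  /** Non-blocking read: returns the last completed snapshot. */
  def latest: Map[String, Long] = snapshot.get()

  def shutdown(): Unit = scheduler.shutdownNow()
}
```

In actor terms this corresponds to the BrokerViewCacheActor replying from cached state while a scheduled message triggers the refresh, instead of fetching inline on each request.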
The Kafka version dropdown only offers 0.8.1.1 and 0.8.2.0.
It would be nice if it could detect the version of the cluster, or at least allow selecting 0.8.2.1 from the dropdown.
I didn't see it mentioned or in the screenshots (maybe I missed it), but are you planning to add lag charts to this manager?
E.g. topic foo, partition 1 is 90,000 messages behind for consumer group "bar", charted over time to understand consumer group performance and lag.
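The per-partition lag being asked about is just the broker's log-end offset minus the group's committed offset, floored at zero; charting it is then a matter of sampling that value over time. A small sketch of the computation (offset maps and values are invented for illustration):

```scala
// Hypothetical sketch: consumer-group lag per partition is
// logEndOffset - committedOffset, never negative.
object LagSketch {
  def lag(logEnd: Map[Int, Long], committed: Map[Int, Long]): Map[Int, Long] =
    logEnd.map { case (partition, end) =>
      partition -> math.max(0L, end - committed.getOrElse(partition, 0L))
    }
}
```

With a log-end offset of 100,000 on partition 1 and a committed offset of 10,000 for group "bar", this yields the 90,000-message lag from the example.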