
docker-images's Introduction

Confluent Stream Data Platform on Docker (DEPRECATED)

Note: These images are no longer being updated. Confluent's versions of Docker images for Confluent Platform may be found here.

Experimental docker images for running the Confluent Platform. These images are currently intended for development use, not for production use.

Quickstart

The Docker version of the Confluent Quickstart looks like this:

# Start Zookeeper and expose port 2181 for use by the host machine
docker run -d --name zookeeper -p 2181:2181 confluent/zookeeper

# Start Kafka and expose port 9092 for use by the host machine
docker run -d --name kafka -p 9092:9092 --link zookeeper:zookeeper confluent/kafka

# Start Schema Registry and expose port 8081 for use by the host machine
docker run -d --name schema-registry -p 8081:8081 --link zookeeper:zookeeper \
    --link kafka:kafka confluent/schema-registry

# Start REST Proxy and expose port 8082 for use by the host machine
docker run -d --name rest-proxy -p 8082:8082 --link zookeeper:zookeeper \
    --link kafka:kafka --link schema-registry:schema-registry confluent/rest-proxy

If you're using boot2docker, you'll need to adjust how you run Kafka:

# Get the IP address of the docker machine
DOCKER_MACHINE=`boot2docker ip`

# Start Kafka and expose port 9092 for use by the host machine
# Also configure the broker to use the docker machine's IP address
docker run -d --name kafka -p 9092:9092 --link zookeeper:zookeeper \
    --env KAFKA_ADVERTISED_HOST_NAME=$DOCKER_MACHINE confluent/kafka

If all goes well when you run the quickstart, docker ps should give you something that looks like this:

CONTAINER ID        IMAGE                              COMMAND                  CREATED             STATUS              PORTS                    NAMES
7fc453ca701c        confluent/rest-proxy               "/usr/local/bin/rest-"   2 minutes ago       Up 2 minutes        0.0.0.0:8082->8082/tcp   rest-proxy
4d33d52a98bd        confluent/schema-registry:latest   "/usr/local/bin/schem"   2 minutes ago       Up 2 minutes        0.0.0.0:8081->8081/tcp   schema-registry     
d9613d3bc37d        confluent/kafka:latest             "/usr/local/bin/kafka"   2 minutes ago       Up 2 minutes        0.0.0.0:9092->9092/tcp   kafka               
459afcb7dfcf        confluent/zookeeper:latest         "/usr/local/bin/zk-do"   2 minutes ago       Up 2 minutes        0.0.0.0:2181->2181/tcp   zookeeper           
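
Once the containers are up, you can sanity-check the stack from the host. A minimal smoke test, assuming the default port mappings shown above (both endpoints are standard: Zookeeper's `ruok` four-letter command and the REST Proxy's `GET /topics`):

```sh
# Zookeeper health check: should print "imok"
echo ruok | nc localhost 2181

# List Kafka topics via the REST Proxy (JSON array; may be empty on a fresh install)
curl http://localhost:8082/topics
```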

Running on Multiple Remote Hosts and Clustering

To run across multiple hosts, you need some way for containers on different Docker hosts to reach each other. This is typically done with a service discovery mechanism (so containers and services can find each other) and/or a software-defined network such as Weave or Flannel (so containers can communicate). With that in place, you can use environment variables to specify the IP/hostname and ports of the remote containers and forgo the use of --link. For example, to build a 3-node Zookeeper ensemble with each node on a separate Docker host (zk-1: 172.16.42.101, zk-2: 172.16.42.102, zk-3: 172.16.42.103), plus a 2-node Kafka cluster that connects to it:

docker run --name zk-1 -e zk_id=1 -e zk_server.1=172.16.42.101:2888:3888 -e zk_server.2=172.16.42.102:2888:3888 -e zk_server.3=172.16.42.103:2888:3888 -p 2181:2181 -p 2888:2888 -p 3888:3888 confluent/zookeeper
docker run --name zk-2 -e zk_id=2 -e zk_server.1=172.16.42.101:2888:3888 -e zk_server.2=172.16.42.102:2888:3888 -e zk_server.3=172.16.42.103:2888:3888 -p 2181:2181 -p 2888:2888 -p 3888:3888 confluent/zookeeper
docker run --name zk-3 -e zk_id=3 -e zk_server.1=172.16.42.101:2888:3888 -e zk_server.2=172.16.42.102:2888:3888 -e zk_server.3=172.16.42.103:2888:3888 -p 2181:2181 -p 2888:2888 -p 3888:3888 confluent/zookeeper
docker run --name kafka-1 -e KAFKA_BROKER_ID=1 -e KAFKA_ZOOKEEPER_CONNECT=172.16.42.101:2181,172.16.42.102:2181,172.16.42.103:2181 -p 9092:9092 confluent/kafka
docker run --name kafka-2 -e KAFKA_BROKER_ID=2 -e KAFKA_ZOOKEEPER_CONNECT=172.16.42.101:2181,172.16.42.102:2181,172.16.42.103:2181 -p 9092:9092 confluent/kafka
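
Note that, as in the boot2docker example above, a broker reached from outside its own Docker host generally needs to advertise a routable address. A sketch of the first broker command with that added (172.16.42.104 is a hypothetical address for the host running kafka-1; substitute your own):

```sh
docker run --name kafka-1 -e KAFKA_BROKER_ID=1 \
    -e KAFKA_ZOOKEEPER_CONNECT=172.16.42.101:2181,172.16.42.102:2181,172.16.42.103:2181 \
    -e KAFKA_ADVERTISED_HOST_NAME=172.16.42.104 \
    -p 9092:9092 confluent/kafka
```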

Changing settings

The images can be configured through environment variables passed via the Docker -e | --env flags. For example:

  • For the Zookeeper image use variables prefixed with ZOOKEEPER_, with the variable names written exactly as they appear in the zookeeper.properties file. As an example, to set syncLimit and server.1 you'd run docker run --name zk -e ZOOKEEPER_syncLimit=2 -e ZOOKEEPER_server.1=localhost:2888:3888 confluent/zookeeper.

  • For the Kafka image use variables prefixed with KAFKA_ with an underscore (_) separating each word instead of periods. As an example, to set broker.id and offsets.storage you'd run docker run --name kafka --link zookeeper:zookeeper -e KAFKA_BROKER_ID=2 -e KAFKA_OFFSETS_STORAGE=kafka confluent/kafka.

  • For the Schema Registry image use variables prefixed with SCHEMA_REGISTRY_ with an underscore (_) separating each word instead of periods. As an example, to set kafkastore.topic and debug you'd run docker run --name schema-registry --link zookeeper:zookeeper --link kafka:kafka -e SCHEMA_REGISTRY_KAFKASTORE_TOPIC=_schemas -e SCHEMA_REGISTRY_DEBUG=true confluent/schema-registry.

  • For the Kafka REST Proxy image use variables prefixed with REST_PROXY_ with an underscore (_) separating each word instead of periods. As an example, to set id and zookeeper_connect you'd run docker run --name rest-proxy --link schema-registry:schema-registry --link zookeeper:zookeeper -e REST_PROXY_ID=2 -e REST_PROXY_ZOOKEEPER_CONNECT=192.168.1.101:2182 confluent/rest-proxy.

You can also supply your own configuration file, with the same variable substitution as above, by appending the special suffix CFG_URL to the image's prefix. For example, to download your own ZK configuration file while still using ZOOKEEPER_ variable substitution, you could run docker run --name zk -e ZOOKEEPER_CFG_URL=http://myurl/zookeeper.properties -e ZOOKEEPER_id=1 -e ZOOKEEPER_maxClientCnxns=20 confluent/zookeeper.
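
Conceptually, each image's entrypoint turns the prefixed environment variables into lines of the corresponding properties file. A rough sketch of the KAFKA_ naming convention (illustrative only: `to_property` is a hypothetical helper, the real entrypoint scripts may differ, and the Zookeeper image uses exact property names rather than this uppercase mapping):

```sh
# Sketch: map a KAFKA_-prefixed variable name to its properties-file key,
# e.g. KAFKA_LOG_RETENTION_HOURS -> log.retention.hours
to_property() {
  # Strip the image prefix, lowercase, and turn word separators into dots
  echo "$1" | sed 's/^KAFKA_//' | tr 'A-Z_' 'a-z.'
}

to_property KAFKA_LOG_RETENTION_HOURS   # prints: log.retention.hours
to_property KAFKA_OFFSETS_STORAGE       # prints: offsets.storage
```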

Potential Caveats

Running Kafka in Docker has a few potential caveats.

  • Cluster metadata is populated from the advertised.listeners configuration setting, which defaults to the hostname of the machine the broker is running on.

  • NAT networking requires proper advertisement of the host endpoints. This means either a one-to-one port mapping such as -p 9092:9092, changing advertised.listeners to match the Docker port mapping, or --net=host. Using host networking is recommended.
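
For example, the host-networking approach might look like this (a sketch, not a tested invocation: broker.example.com is a placeholder for a hostname that clients can resolve, and it assumes Zookeeper is reachable on the host at port 2181 as in the quickstart):

```sh
docker run -d --name kafka --net=host \
    --env KAFKA_ZOOKEEPER_CONNECT=localhost:2181 \
    --env KAFKA_ADVERTISED_HOST_NAME=broker.example.com confluent/kafka
```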

Docker Compose

The examples/fullstack directory contains a Docker Compose file for a full Confluent stack. This includes Zookeeper, a Kafka broker, the REST Proxy, and the Schema Registry.

Set up your environment

This command will create a docker machine called confluent with a hostname of confluent. Note you can change the driver to whatever virtualization platform you currently use.

docker-machine create --driver virtualbox confluent

This command will set up your shell to use the confluent virtual machine as your docker host.

eval $(docker-machine env confluent)

Local Host entries

A Kafka broker advertises the hostname of the machine it's running on. This requires the hostname to be resolvable on the client machine. You will need to add a host entry for your docker machine to your hosts file.

The command docker-machine ip <machine name> will return the IP address of your docker machine.

> docker-machine ip confluent
192.168.99.100

Edit your hosts file and add a host entry for the docker machine.

192.168.99.100  confluent
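
On Linux or macOS this can be done in one step (assumes your machine is named confluent and you have sudo rights):

```sh
echo "$(docker-machine ip confluent)  confluent" | sudo tee -a /etc/hosts
```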

Launch Images

cd examples/fullstack
docker-compose up

Connecting

Now all of your services will be available at the host confluent.
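
For example, once the host entry is in place, you should be able to reach each service by name (a smoke test sketch, assuming the default ports from the compose file):

```sh
curl http://confluent:8081/subjects   # Schema Registry: registered schema subjects
curl http://confluent:8082/topics     # REST Proxy: Kafka topics
```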

Building Images

For convenience, a build.sh script is provided to build all variants of images. This includes:

  • confluent-platform - Confluent Platform base images, with all Confluent Platform packages installed. There are separate images for each Scala version. These images are tagged as confluent/platform-$SCALA_VERSION, with the default (2.10.4) also tagged as confluent/platform.
  • confluent/zookeeper - starts Zookeeper on port 2181.
  • confluent/kafka - starts Kafka on 9092.
  • confluent/schema-registry - starts the Schema Registry on 8081.
  • confluent/rest-proxy - starts the Kafka REST Proxy on 8082.
  • confluent-tools - provides commonly used tools, pre-linked to the other containers.

Note that all services are built only using the default Scala version. When run as services, the Scala version should not matter. If you need a specific Scala version, use the corresponding confluent/platform-$SCALA_VERSION image as your FROM line in your derived Dockerfile.
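
For example, a derived image pinned to a specific Scala build might start like this (a sketch: 2.11.7 stands in for whichever Scala version your build actually produced, and my-server.properties is a hypothetical config file of your own):

```dockerfile
FROM confluent/platform-2.11.7

# Layer your own configuration or tooling on top of the pinned base image
COPY my-server.properties /etc/kafka/server.properties
```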

A second script, push.sh, will push the generated images to Docker Hub. First you'll need to be logged in:

docker login --username=yourhubusername --password=yourpassword [email protected]

then execute the script.

docker-images's People

Contributors

cgswong, ept, ewencp, gwenshap, jcustenborder, john-azariah-cba, samjhecht


docker-images's Issues

Chown and chmod to entrypoint

There is a problem with mapping host volumes right now.
Currently the Dockerfile changes permissions only once, at build time, which is not enough when mapping host volumes into the container. Permissions should also be changed inside the entrypoint, as is done in the Postgres image.
Otherwise a mounted volume can have a different uid/gid than the confluent user/group, and the container will fail.

Solution:
Add chmod and chown to the entrypoint script for the volumes inside the container that change during its lifetime:

chmod 700 $KAFKA_LOG_DIRS $LOG_DIR
chown -R confluent:confluent $KAFKA_LOG_DIRS $LOG_DIR

I wrote the example for confluent/kafka, but this applies to all the images, e.g. confluent/zookeeper.

Use thin base image

Don't install the whole platform (Kafka, Zookeeper, etc.) in each image.

P.S. For example, consider using this as the base image for Kafka.

Camus image

It would be awesome to get a Camus image as well.

Zookeeper container fails in CoreOS with "System error: no such file or directory"

This command works in ubuntu 15.04

docker run -d -p 2181:2181 -h zookeeper.lacolhost.com --name zookeeper confluent/zookeeper

but it fails on a CoreOS 717.3.0 machine with:

FATA[0002] Error response from daemon: Cannot start container 9eab63f6fe45896d3e2212a986fa4debf78f6be687e23cf1ae25444b412ffa65: [8] System error: no such file or directory 

Here's the history, to prove it is latest :)

core@core-02 ~ $ docker history confluent/zookeeper
IMAGE               CREATED              CREATED BY                                      SIZE
716bcd319d1a        26 hours ago         /bin/sh -c #(nop) CMD [""]                      0 B
f83bd6daed44        26 hours ago         /bin/sh -c #(nop) ENTRYPOINT ["/usr/local/bin   0 B
1535fb45f83c        26 hours ago         /bin/sh -c #(nop) EXPOSE 2181/tcp 2888/tcp 38   0 B
5cb6b2395149        26 hours ago         /bin/sh -c #(nop) VOLUME [/var/lib/zookeeper]   0 B
bb04b6b4add1        26 hours ago         /bin/sh -c #(nop) USER [confluent]              0 B
7164967fcbcf        26 hours ago         /bin/sh -c rm /etc/kafka/log4j.properties &&    329.5 kB
4b562c558411        26 hours ago         /bin/sh -c #(nop) COPY file:72504669dbbdabba3   1.066 kB
c431a515cb43        26 hours ago         /bin/sh -c #(nop) ENV KAFKA_LOG4J_OPTS=-Dlog4   0 B
4f621c3916a1        26 hours ago         /bin/sh -c #(nop) ENV CONFLUENT_GROUP=conflue   0 B
8b29bb526321        26 hours ago         /bin/sh -c #(nop) ENV CONFLUENT_USER=confluen   0 B
8cb0c6cd3ccc        26 hours ago         /bin/sh -c #(nop) ENV ZK_DATA_DIR=/var/lib/zo   0 B
a8b5b98568ee        3 months ago         /bin/sh -c apt-get update &&     apt-get upgr   281.2 MB
65d74d203b00        3 months ago         /bin/sh -c #(nop) ENV SCALA_VERSION=2.10.4      0 B
1265e16d0c28        3 months ago         /bin/sh -c #(nop) CMD [/bin/bash]               0 B
4f903438061c        3 months ago         /bin/sh -c #(nop) ADD file:64df78b21f6d6583bc   84.96 MB
511136ea3c5a        2.121217 years ago                                                   0 B

Please let me know if you need anything else, thanks!

Network reachability when using Confluent images

Can we add environment variables exposing the Zookeeper and Kafka host IPs, or some other solution for network reachability, when using the Confluent images?

I am using the following

docker run -d --name zookeeper -p 127.0.0.1:2181:2181 --hostname zookeeper confluent/zookeeper

 docker run -d --name kafka --hostname kafka -p 127.0.0.1:9092:9092 --link zookeeper:zookeeper \
     --env KAFKA_LOG_CLEANUP_POLICY=compact confluent/kafka

Then I use the Sarama Go client (with 127.0.0.1 and ports 2181 and 9092 for Zookeeper and Kafka) in a Docker container and get the following error:

2015/08/12 01:34:27 subscription.go:113: creating consumer:
2015/08/12 01:36:20 server.go:1775: http: panic serving 127.0.0.1:55194: kafka: client has run out of available brokers to talk to (Is your cluster reachable?)
goroutine 12 [running]:
net/http.func·011()
    /usr/src/go/src/net/http/server.go:1130 +0xbb

Can we introduce KAFKA_ADVERTISED_HOST_NAME, KAFKA_ADVERTISED_PORT, and similar variables for Zookeeper?

These are what my mappings look like

macbook:src aartikumargupta$ docker ps
CONTAINER ID        IMAGE                 COMMAND                CREATED             STATUS              PORTS                                          NAMES
01cb90eb8abf        019547acd204          "/bin/sh -c /program   21 hours ago        Up About an hour    8080/tcp                                       prickly_stallman1   
70b61fb70de9        confluent/kafka       "/bin/sh -c /kafka-d   21 hours ago        Up About an hour    127.0.0.1:9092->9092/tcp                       kafka               
4e5ff816657f        confluent/zookeeper   "/usr/local/bin/zk-d   21 hours ago        Up About an hour    2888/tcp, 127.0.0.1:2181->2181/tcp, 3888/tcp   zookeeper   

Docker image for 3.0?

Hi, do the Confluent Docker images support 3.0? Or 2.1? Please let me know, thanks.

Another issue: pulling the image from there is very slow. Is there a way to get it faster?

Log files not clearing after log retention byte or time limit

Hi,
I ran the Confluent Kafka and Zookeeper images, changing the default log retention bytes and retention hours using the following command:
docker run -d --name kafka -p 9092:9092 --link zookeeper:zookeeper --env KAFKA_ADVERTISED_HOST_NAME=<my_ip> --env KAFKA_LOG_RETENTION_BYTES=50000000 --env KAFKA_LOG_RETENTION_HOURS=1 confluent/kafka

When I looked at server.properties inside the container using docker exec, the changes had been made.
But the log files are not deleted, and the hard disk keeps filling up.
Please help me.
Thanks in advance

Vulnerability detected in confluent images pulled from docker hub

VULNERABILITY ANALYSIS RESULTS:

DockerHub External Image: confluentinc/cp-schema-registry:5.4.0

[Vulnerability 01]
TITLE: [linux] libgcrypt20 - CVE-2019-13627:

pkg: libgcrypt20: 1.6.3-2+deb8u5

Severity: High

Description: It was discovered that there was an ECDSA timing attack in the libgcrypt20 cryptographic library. Versions affected: 1.8.4-5, 1.7.6-2+deb9u3, and 1.6.3-2+deb8u4. Versions fixed: 1.8.5-2 and 1.6.3-2+deb8u7.

Exploitability: Remotely Exploitable

Solution: libgcrypt timing attack fixed in version 1.8.5+

References:

http://lists.opensuse.org/opensuse-security-announce/2019-09/msg00060.html
http://www.openwall.com/lists/oss-security/2019/10/02/2
https://github.com/gpg/libgcrypt/releases/tag/libgcrypt-1.8.5

[Vulnerability 02]
TITLE: [linux] libcomerr2 - CVE-2019-5094:

pkgs:

libcomerr2: 1.42.12-2+b1
e2fslibs: 1.42.12-2+b1
libss2: 1.42.12-2+b1
e2fsprogs: 1.42.12-2+b1

severity: Medium

Exploitability: Locally Exploitable, low complexity

Description: An exploitable code execution vulnerability exists in the quota file functionality of E2fsprogs 1.45.3. A specially crafted ext4 partition can cause an out-of-bounds write on the heap, resulting in code execution. An attacker can corrupt a partition to trigger this vulnerability.

Solution: For Debian 8 "Jessie", this problem has been fixed in version
1.42.12-2+deb8u1.

For the oldstable distribution (stretch), this problem has been fixed
in version 1.43.4-2+deb9u1.

For the stable distribution (buster), this problem has been fixed in
version 1.44.5-1+deb10u2.

References:

https://lists.debian.org/debian-lts-announce/2019/09/msg00029.html
https://lists.fedoraproject.org/archives/list/[email protected]/message/2AKETJ6BREDUHRWQTV35SPGG5C6H7KSI/
https://seclists.org/bugtraq/2019/Sep/58

Name: cp-schema-registry
Tag: 5.4.0
Digest:sha256:6483e6258e517a2dec9d13d3e8b7fff2a963d9ec6f67bcac554b9fecd88d976b
Status: scanned
LastJobStatus: completed
Score: score9
NumberOfVulns: 2
NumberOfMalware: 0
Source: pushed
CreatedAt: 2020-02-06T18:06:53.269Z
FinishedAt: 2020-02-06T18:29:50.303Z
ImageHash: 27756bdebb20
Size: 1584
OS: Debian
OSVersion: 8.11

We scanned the latest image as well, and Tenable Security reported the same issues. We also ran the latest image through another container-scanning tool we have access to, called "Snyk", and it reported many more vulnerabilities in that image.

update quickstart notes on changing settings

Current docs read:

For the Kafka REST Proxy image use variables prefixed with `REST_PROXY_` with ...

It doesn't work, and does not match the docker-compose.yml you suggested in the "examples/fullstack" directory. I suspect the README.md is either a change-ahead or a change-behind the docker-compose script, which uses RP_* instead. (Three of the four variable prefixes in docker-compose.yml do not match, instead using zk_, SR_, and RP_.)

Thanks, very useful tool.

QuickStart question

Im using your quick start instructions to evaluate the confluent platform.

  • started zookeeper,
  • started kafka.
  • the problem is with schema-registry: it runs without any errors but then stops immediately. I overlooked this at first,
  • started rest-proxy and got an error message about the linked container not running.

Do you have any insight into what the cause might be, or where to look for hidden errors? To be honest, I'm not an experienced Docker user.

# works
# Start Zookeeper and expose port 2181 for use by the host machine
docker run -d --name zookeeper -p 2181:2181 confluent/zookeeper

# works
docker run -d --name kafka -p 9092:9092 --link zookeeper:zookeeper confluent/kafka

# doesn't run - stops without error messages
docker run -d --name schema-registry -p 8081:8081 --link zookeeper:zookeeper \
    --link kafka:kafka confluent/schema-registry

# doesn't work because linked container isn't running
docker run -d --name rest-proxy -p 8082:8082 --link zookeeper:zookeeper \
    --link kafka:kafka --link schema-registry:schema-registry confluent/rest-proxy

Log level control from an env var

I am trying to debug an issue with my docker container. Is there a way to control the log level of the containers from an ENV VAR?

Improve the doc

Add more info & create a compose file.

The following properties should be set:

  • KAFKA_ADVERTISED_HOST_NAME
  • KAFKA_ADVERTISED_PORT

Review this repo.

docker run -d --name kafka -p 9092:9092 --link zookeeper:zookeeper \
    --env KAFKA_ADVERTISED_HOST_NAME=<DOCKER_HOST> \
    --env KAFKA_ADVERTISED_PORT=9092 confluent/kafka

Can't get stack working with docker network

I'm trying to create a docker-compose file to bring up a dev stack on docker 1.9.1, which has the new network support in favour of which --link has been deprecated.

Ignoring the deprecation and using this docker-compose.yml with links works fine:

zookeeper:
    image: confluent/zookeeper
    ports:
        - "2181:2181"

kafka:
    image: confluent/kafka
    ports:
        - "9092:9092"
    links:
        - zookeeper

schema-registry:
    image: confluent/schema-registry
    ports:
        - "8081:8081"
    links:
        - zookeeper
        - kafka

rest-proxy:
    image: confluent/rest-proxy
    ports:
        - "8082:8082"
    links:
        - zookeeper
        - kafka
        - schema-registry

But when I try to use a docker 1.9 network bridge (called confluent) with this docker-compose.yml, the Schema Registry won't start:

zookeeper:
  image: confluent/zookeeper
  container_name: zookeeper
  net: confluent
  ports:
    - 2181:2181

kafka:
  image: confluent/kafka
  container_name: kafka
  net: confluent
  ports:
    - 9092:9092
  environment:
    # Use zookeeper:2181 which is defined in container's /etc/hosts
    KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    # The kafka hostname should point to this container in the other containers' /etc/hosts
    KAFKA_ADVERTISED_HOST_NAME: kafka

schema-registry:
  image: confluent/schema-registry
  container_name: registry
  net: confluent
  ports:
    - 8081:8081
  environment:
    SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL: zookeeper:2181

rest-proxy:
  image: confluent/rest-proxy
  container_name: rest-proxy
  net: confluent
  ports:
    - 8082:8082
  environment:
    RP_SCHEMA_REGISTRY_URL: registry:8081
    RP_ZOOKEEPER_CONNECT: zookeeper:2181

The logs from the registry container look like this:

Starting registry
Attaching to registry
registry | SLF4J: Class path contains multiple SLF4J bindings.
registry | SLF4J: Found binding in [jar:file:/usr/share/java/confluent-common/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
registry | SLF4J: Found binding in [jar:file:/usr/share/java/schema-registry/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
registry | SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
registry | SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
registry | [2015-12-18 17:04:48,636] INFO SchemaRegistryConfig values: 
registry |  master.eligibility = true
registry |  port = 8081
registry |  kafkastore.timeout.ms = 500
registry |  kafkastore.init.timeout.ms = 60000
registry |  debug = false
registry |  kafkastore.zk.session.timeout.ms = 30000
registry |  request.logger.name = io.confluent.rest-utils.requests
registry |  metrics.sample.window.ms = 30000
registry |  schema.registry.zk.namespace = schema_registry
registry |  kafkastore.topic = _schemas
registry |  avro.compatibility.level = backward
registry |  shutdown.graceful.ms = 1000
registry |  response.mediatype.preferred = [application/vnd.schemaregistry.v1+json, application/vnd.schemaregistry+json, application/json]
registry |  metrics.jmx.prefix = kafka.schema.registry
registry |  host.name = 2ea69d1754e9
registry |  metric.reporters = []
registry |  kafkastore.commit.interval.ms = -1
registry |  kafkastore.connection.url = zookeeper:2181
registry |  metrics.num.samples = 2
registry |  response.mediatype.default = application/vnd.schemaregistry.v1+json
registry |  kafkastore.topic.replication.factor = 3
registry |  (io.confluent.kafka.schemaregistry.rest.SchemaRegistryConfig:135)
registry | [2015-12-18 17:04:49,152] INFO Initialized the consumer offset to -1 (io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread:87)
registry | [2015-12-18 17:04:49,807] WARN The replication factor of the schema topic _schemas is less than the desired one of 3. If this is a production environment, it's crucial to add more brokers and increase the replication factor of the topic. (io.confluent.kafka.schemaregistry.storage.KafkaStore:201)
registry | [2015-12-18 17:04:49,886] INFO [kafka-store-reader-thread-_schemas], Starting  (io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread:68)
registry | [2015-12-18 17:05:49,967] ERROR Error starting the schema registry (io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication:57)
registry | io.confluent.kafka.schemaregistry.exceptions.SchemaRegistryInitializationException: Error initializing kafka store while initializing schema registry
registry |  at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.init(KafkaSchemaRegistry.java:164)
registry |  at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.setupResources(SchemaRegistryRestApplication.java:55)
registry |  at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.setupResources(SchemaRegistryRestApplication.java:37)
registry |  at io.confluent.rest.Application.createServer(Application.java:104)
registry |  at io.confluent.kafka.schemaregistry.rest.Main.main(Main.java:42)
registry | Caused by: io.confluent.kafka.schemaregistry.storage.exceptions.StoreInitializationException: io.confluent.kafka.schemaregistry.storage.exceptions.StoreException: Failed to write Noop record to kafka store.
registry |  at io.confluent.kafka.schemaregistry.storage.KafkaStore.init(KafkaStore.java:151)
registry |  at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.init(KafkaSchemaRegistry.java:162)
registry |  ... 4 more
registry | Caused by: io.confluent.kafka.schemaregistry.storage.exceptions.StoreException: Failed to write Noop record to kafka store.
registry |  at io.confluent.kafka.schemaregistry.storage.KafkaStore.getLatestOffset(KafkaStore.java:363)
registry |  at io.confluent.kafka.schemaregistry.storage.KafkaStore.waitUntilKafkaReaderReachesLastOffset(KafkaStore.java:220)
registry |  at io.confluent.kafka.schemaregistry.storage.KafkaStore.init(KafkaStore.java:149)
registry |  ... 5 more
registry exited with code 1

And while registry is trying to start, the kafka container logs this a bunch of times:

kafka  | [2015-12-18 17:05:50,344] INFO Closing socket connection to /172.19.0.2. (kafka.network.Processor)

(I checked, and 172.19.0.2 was the IP address of the registry container.)

And then after it fails, the zookeeper container logs this:

zookeeper  | [2015-12-18 17:05:50,344] WARN caught end of stream exception (org.apache.zookeeper.server.NIOServerCnxn)
zookeeper  | EndOfStreamException: Unable to read additional data from client sessionid 0x151b60caa6b0004, likely client has closed socket
zookeeper  |    at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228)
zookeeper  |    at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
zookeeper  |    at java.lang.Thread.run(Thread.java:745)
zookeeper  | [2015-12-18 17:05:50,348] INFO Closed socket connection for client /172.19.0.2:38991 which had sessionid 0x151b60caa6b0004 (org.apache.zookeeper.server.NIOServerCnxn)
zookeeper  | [2015-12-18 17:05:50,349] WARN caught end of stream exception (org.apache.zookeeper.server.NIOServerCnxn)
zookeeper  | EndOfStreamException: Unable to read additional data from client sessionid 0x151b60caa6b0005, likely client has closed socket
zookeeper  |    at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228)
zookeeper  |    at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
zookeeper  |    at java.lang.Thread.run(Thread.java:745)
zookeeper  | [2015-12-18 17:05:50,350] INFO Closed socket connection for client /172.19.0.2:38992 which had sessionid 0x151b60caa6b0005 (org.apache.zookeeper.server.NIOServerCnxn)
zookeeper  | [2015-12-18 17:05:56,000] INFO Expiring session 0x151b60caa6b0005, timeout of 6000ms exceeded (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper  | [2015-12-18 17:05:56,001] INFO Processed session termination for sessionid: 0x151b60caa6b0005 (org.apache.zookeeper.server.PrepRequestProcessor)
zookeeper  | [2015-12-18 17:06:20,000] INFO Expiring session 0x151b60caa6b0004, timeout of 30000ms exceeded (org.apache.zookeeper.server.ZooKeeperServer)
zookeeper  | [2015-12-18 17:06:20,000] INFO Processed session termination for sessionid: 0x151b60caa6b0004 (org.apache.zookeeper.server.PrepRequestProcessor)

I'm OK with using links for now, but it would be nice to get this resolved.

Schema Registry docker container exits as soon as it starts

I am trying out Confluent in Docker containers. ZooKeeper and Kafka come up correctly. After that, when I try bringing up the Schema Registry, the container exits within 5 seconds.

Here is the log from the schema-registry container:

docker@boot2docker:~$ docker logs schema-registry
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/share/java/confluent-common/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/share/java/schema-registry/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
[2015-06-27 11:09:53,588] INFO SchemaRegistryConfig values: 
    master.eligibility = true
    port = 8081
    kafkastore.timeout.ms = 500
    kafkastore.init.timeout.ms = 60000
    debug = false
    kafkastore.zk.session.timeout.ms = 30000
    request.logger.name = io.confluent.rest-utils.requests
    metrics.sample.window.ms = 30000
    schema.registry.zk.namespace = schema_registry
    kafkastore.topic = _schemas
    avro.compatibility.level = none
    shutdown.graceful.ms = 1000
    response.mediatype.preferred = [application/vnd.schemaregistry.v1+json, application/vnd.schemaregistry+json, application/json]
    metrics.jmx.prefix = kafka.schema.registry
    host.name = fab0b12fd783
    metric.reporters = []
    kafkastore.commit.interval.ms = -1
    kafkastore.connection.url = 172.17.0.5:2181
    metrics.num.samples = 2
    response.mediatype.default = application/vnd.schemaregistry.v1+json
    kafkastore.topic.replication.factor = 3
 (io.confluent.kafka.schemaregistry.rest.SchemaRegistryConfig:135)
[2015-06-27 11:09:54,078] INFO Initialized the consumer offset to -1 (io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread:87)
[2015-06-27 11:09:54,697] WARN Creating the schema topic _schemas using a replication factor of 1, which is less than the desired one of 3. If this is a production environment, it's crucial to add more brokers and increase the replication factor of the topic. (io.confluent.kafka.schemaregistry.storage.KafkaStore:172)
[2015-06-27 11:09:54,923] ERROR Server died unexpectedly:  (io.confluent.kafka.schemaregistry.rest.Main:50)
org.apache.kafka.common.config.ConfigException: DNS resolution failed for url in bootstrap.servers: d5e17bff835a:9092
    at org.apache.kafka.common.utils.ClientUtils.parseAndValidateAddresses(ClientUtils.java:38)
    at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:189)
    at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:129)
    at io.confluent.kafka.schemaregistry.storage.KafkaStore.init(KafkaStore.java:143)
    at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.init(KafkaSchemaRegistry.java:162)
    at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.setupResources(SchemaRegistryRestApplication.java:55)
    at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.setupResources(SchemaRegistryRestApplication.java:37)
    at io.confluent.rest.Application.createServer(Application.java:104)
    at io.confluent.kafka.schemaregistry.rest.Main.main(Main.java:42)
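
The ConfigException above means the registry discovered the broker under its container hostname (d5e17bff835a), which the Schema Registry container cannot resolve. One workaround, following the boot2docker note in the README, is to restart Kafka advertising an address other containers can resolve; the commands below are a sketch (192.168.99.100 is a placeholder — substitute your docker host's IP):

```shell
# Remove the broker that registered an unresolvable container hostname
docker rm -f kafka

# Restart it advertising a resolvable address (placeholder IP shown)
docker run -d --name kafka -p 9092:9092 --link zookeeper:zookeeper \
    --env KAFKA_ADVERTISED_HOST_NAME=192.168.99.100 confluent/kafka
```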

Docker setup for Kafka Connect

Please offer:

  • a Dockerfile
  • a DockerHub-hosted image
  • a docker-compose.yml configuration
  • docker run... instructions in the README

for Kafka Connect, to make it easier for newcomers to get Kafka Connect up and running.
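
Until official images exist, the requested docker run instructions might look something like this — a sketch only, assuming a hypothetical confluent/kafka-connect image that follows the same linking conventions as the existing images (that image name is not a published artifact):

```shell
# Hypothetical: run Kafka Connect linked to the quickstart containers,
# exposing the standard Connect REST port 8083
docker run -d --name kafka-connect -p 8083:8083 \
    --link zookeeper:zookeeper --link kafka:kafka \
    --link schema-registry:schema-registry \
    confluent/kafka-connect
```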

Problem building an ensemble with the zookeeper version shipped in the Confluent Platform

I have the following setup:

  1. Only one physical host with an Ubuntu 15.10
  2. Weave 1.5.0
  3. Confluent Platform 2.0.1

The problem is that it is not possible to get an ensemble working, at least when both ZooKeeper servers are running on the same host and the ensemble peers are declared by hostname instead of by IP. The following commands do not work:

docker run -d --name app_zk_1 -e zk_id=1 -e zk_server.1=app_zk_1:2888:3888 \
    -e zk_server.2=app_zk_2:2888:3888 confluent/zookeeper

docker run -d --name app_zk_2 -e zk_id=2 -e zk_server.1=app_zk_1:2888:3888 \
    -e zk_server.2=app_zk_2:2888:3888 confluent/zookeeper

The problem is that the first started ZooKeeper server (i.e. app_zk_1) cannot resolve the name of the second started server (i.e. app_zk_2), but the second started server is able to resolve the name of the first.

It appears to be related to the following issue.

From the logs of the ZooKeeper servers I can see that the version shipped with the Confluent Platform is 3.4.6-1569965, while the current ZooKeeper release is 3.4.8.

After that I tried a similar setup, but now with:

  1. Only one physical host with an Ubuntu 15.10
  2. Weave 1.5.0
  3. Current version of Zookeeper (by using this dockerized version)

Now the ensemble works fine; it was started with the following commands:

docker run -d --name app_zk_1 uyjco0/baqend_conf_zookeeper app_zk_1,app_zk_2 1

docker run -d --name app_zk_2 uyjco0/baqend_conf_zookeeper app_zk_1,app_zk_2 2
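
For reference, in either setup both servers must agree on a quorum definition equivalent to the zoo.cfg fragment below. This is a sketch: the server.N lines mirror the 2888:3888 quorum/election ports declared above, while the timing values and dataDir are assumed defaults, not taken from either image:

```
tickTime=2000
initLimit=5
syncLimit=2
clientPort=2181
dataDir=/var/lib/zookeeper
server.1=app_zk_1:2888:3888
server.2=app_zk_2:2888:3888
```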

zookeeper image - data and log dir are the same?

In the Dockerfile:

ENV LOG_DIR "/var/log/zookeeper"
ENV ZK_DATA_DIR "/var/log/zookeeper"

but the volume statement:

VOLUME ["/var/log/zookeeper", "/var/lib/zookeeper"]

Should '/var/lib/zookeeper' be the data dir?
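
If separate directories were intended, the data dir presumably should point at /var/lib/zookeeper to match the second volume. A hypothetical fix (this is what the Dockerfile might read, not what it currently ships with):

```
# Logs and data in distinct locations, matching the VOLUME declaration
ENV LOG_DIR "/var/log/zookeeper"
ENV ZK_DATA_DIR "/var/lib/zookeeper"

VOLUME ["/var/log/zookeeper", "/var/lib/zookeeper"]
```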

While starting Schema Registry, getting "Error starting the schema registry" / "Error initializing kafka store while initializing schema registry"

./bin/schema-registry-start ./etc/schema-registry/schema-registry.properties
Picked up _JAVA_OPTIONS: -Djava.net.preferIPv4Stack=true
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hpvertica1/confluent-2.0.1/share/java/confluent-common/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hpvertica1/confluent-2.0.1/share/java/schema-registry/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
[2016-05-11 15:28:26,564] INFO SchemaRegistryConfig values:
    master.eligibility = true
    port = 8081
    kafkastore.timeout.ms = 500
    kafkastore.init.timeout.ms = 60000
    debug = false
    kafkastore.zk.session.timeout.ms = 30000
    schema.registry.zk.namespace = schema_registry
    request.logger.name = io.confluent.rest-utils.requests
    metrics.sample.window.ms = 30000
    kafkastore.topic = _schemas
    avro.compatibility.level = backward
    shutdown.graceful.ms = 1000
    access.control.allow.origin =
    response.mediatype.preferred = [application/vnd.schemaregistry.v1+json, application/vnd.schemaregistry+json, application/json]
    metrics.jmx.prefix = kafka.schema.registry
    host.name = 198.105.244.11
    metric.reporters = []
    kafkastore.commit.interval.ms = -1
    kafkastore.connection.url = localhost:2181
    metrics.num.samples = 2
    response.mediatype.default = application/vnd.schemaregistry.v1+json
    kafkastore.topic.replication.factor = 3
 (io.confluent.kafka.schemaregistry.rest.SchemaRegistryConfig:135)
[2016-05-11 15:28:27,927] INFO Initialized the consumer offset to -1 (io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread:86)
[2016-05-11 15:28:29,330] WARN The replication factor of the schema topic _schemas is less than the desired one of 3. If this is a production environment, it's crucial to add more brokers and increase the replication factor of the topic. (io.confluent.kafka.schemaregistry.storage.KafkaStore:205)
[2016-05-11 15:28:29,679] INFO [kafka-store-reader-thread-_schemas], Starting (io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread:68)

[2016-05-11 15:29:29,812] ERROR Error starting the schema registry (io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication:57)
io.confluent.kafka.schemaregistry.exceptions.SchemaRegistryInitializationException: Error initializing kafka store while initializing schema registry
    at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.init(KafkaSchemaRegistry.java:166)
    at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.setupResources(SchemaRegistryRestApplication.java:55)
    at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.setupResources(SchemaRegistryRestApplication.java:37)
    at io.confluent.rest.Application.createServer(Application.java:109)
    at io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain.main(SchemaRegistryMain.java:43)
Caused by: io.confluent.kafka.schemaregistry.storage.exceptions.StoreInitializationException: io.confluent.kafka.schemaregistry.storage.exceptions.StoreException: Failed to write Noop record to kafka store.
    at io.confluent.kafka.schemaregistry.storage.KafkaStore.init(KafkaStore.java:155)
    at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.init(KafkaSchemaRegistry.java:164)
    ... 4 more
Caused by: io.confluent.kafka.schemaregistry.storage.exceptions.StoreException: Failed to write Noop record to kafka store.
    at io.confluent.kafka.schemaregistry.storage.KafkaStore.getLatestOffset(KafkaStore.java:367)
    at io.confluent.kafka.schemaregistry.storage.KafkaStore.waitUntilKafkaReaderReachesLastOffset(KafkaStore.java:224)
    at io.confluent.kafka.schemaregistry.storage.KafkaStore.init(KafkaStore.java:153)
    ... 5 more
Caused by: java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
    at org.apache.kafka.clients.producer.KafkaProducer$FutureFailure.<init>(KafkaProducer.java:686)
    at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:449)
    at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:339)
    at io.confluent.kafka.schemaregistry.storage.KafkaStore.getLatestOffset(KafkaStore.java:362)
    ... 7 more
Caused by: org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
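
The TimeoutException at the bottom means the registry's internal producer never received topic metadata from a broker. Before digging further, it's worth verifying that a broker is actually registered in the ZooKeeper the registry points at (localhost:2181 above). A quick sanity check with the stock CLI tools — the relative paths assume the confluent-2.0.1 layout used in the command above:

```shell
# ZooKeeper should reply "imok" if it is healthy
echo ruok | nc localhost 2181

# List registered broker ids; an empty result means no broker has joined
./bin/zookeeper-shell localhost:2181 ls /brokers/ids

# Inspect the advertised address the registry's producer will try to reach
./bin/zookeeper-shell localhost:2181 get /brokers/ids/0
```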

Images for 3.0.1 release

Could we get new images up for the 3.0.1 release, specifically for Schema Registry? We'd like to bump from 3.0.0 to 3.0.1 in the Kafka-BigQuery Connector, but our integration tests use Docker and we want to make sure everything runs smoothly before making the jump.

Schema Registry docker container exits as soon as it starts

Hello,
When I try to bring up the Schema Registry Docker instance as per the README, it dies after a few seconds. This seems related to #2, which was fixed in the build I am using.

The OS is CentOS Linux release 7.2.1511 (Core).

Here's what I did.

First, I started the containers as per the README. I even put in a 90-second sleep before starting the Schema Registry to avoid the aforementioned timing issue.

docker run -d --name zookeeper -p 2181:2181 confluent/zookeeper
docker run -d --name kafka -p 9092:9092 --link zookeeper:zookeeper \
    --env KAFKA_ADVERTISED_HOST_NAME="127.0.0.1" --env KAFKA_ADVERTISED_PORT=9092 \
    confluent/kafka
sleep 90
docker run -d --name schema-registry -p 8081:8081 --link zookeeper:zookeeper \
    --link kafka:kafka confluent/schema-registry
docker run -d --name rest-proxy -p 8082:8082 --link zookeeper:zookeeper \
    --link kafka:kafka --link schema-registry:schema-registry confluent/rest-proxy
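
As an aside, a port poll is a more robust alternative to the fixed 90-second sleep — this generic sketch waits only as long as needed for the broker port published above to accept connections:

```shell
# Wait up to 90s for the broker to start listening, then continue
for i in $(seq 1 90); do
  nc -z 127.0.0.1 9092 && break
  sleep 1
done
```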

After a minute, the schema-registry simply dies. docker logs -f schema-registry shows the following:

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/share/java/confluent-common/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/share/java/schema-registry/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
[2016-04-01 06:49:27,076] INFO SchemaRegistryConfig values: 
    master.eligibility = true
    port = 8081
    kafkastore.timeout.ms = 500
    kafkastore.init.timeout.ms = 60000
    debug = false
    kafkastore.zk.session.timeout.ms = 30000
    request.logger.name = io.confluent.rest-utils.requests
    metrics.sample.window.ms = 30000
    schema.registry.zk.namespace = schema_registry
    kafkastore.topic = _schemas
    avro.compatibility.level = backward
    shutdown.graceful.ms = 1000
    response.mediatype.preferred = [application/vnd.schemaregistry.v1+json, application/vnd.schemaregistry+json, application/json]
    metrics.jmx.prefix = kafka.schema.registry
    host.name = 7bb6ad89fb64
    metric.reporters = []
    kafkastore.commit.interval.ms = -1
    kafkastore.connection.url = 172.17.2.44:2181
    metrics.num.samples = 2
    response.mediatype.default = application/vnd.schemaregistry.v1+json
    kafkastore.topic.replication.factor = 3
 (io.confluent.kafka.schemaregistry.rest.SchemaRegistryConfig:135)
[2016-04-01 06:49:27,400] INFO Initialized the consumer offset to -1 (io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread:87)
[2016-04-01 06:49:27,797] WARN Creating the schema topic _schemas using a replication factor of 1, which is less than the desired one of 3. If this is a production environment, it's crucial to add more brokers and increase the replication factor of the topic. (io.confluent.kafka.schemaregistry.storage.KafkaStore:172)
[2016-04-01 06:49:27,854] INFO [kafka-store-reader-thread-_schemas], Starting  (io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread:68)
[2016-04-01 06:50:27,911] ERROR Error starting the schema registry (io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication:57)
io.confluent.kafka.schemaregistry.exceptions.SchemaRegistryInitializationException: Error initializing kafka store while initializing schema registry
    at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.init(KafkaSchemaRegistry.java:164)
    at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.setupResources(SchemaRegistryRestApplication.java:55)
    at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.setupResources(SchemaRegistryRestApplication.java:37)
    at io.confluent.rest.Application.createServer(Application.java:104)
    at io.confluent.kafka.schemaregistry.rest.Main.main(Main.java:42)
Caused by: io.confluent.kafka.schemaregistry.storage.exceptions.StoreInitializationException: io.confluent.kafka.schemaregistry.storage.exceptions.StoreException: Failed to write Noop record to kafka store.
    at io.confluent.kafka.schemaregistry.storage.KafkaStore.init(KafkaStore.java:151)
    at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.init(KafkaSchemaRegistry.java:162)
    ... 4 more
Caused by: io.confluent.kafka.schemaregistry.storage.exceptions.StoreException: Failed to write Noop record to kafka store.
    at io.confluent.kafka.schemaregistry.storage.KafkaStore.getLatestOffset(KafkaStore.java:363)
    at io.confluent.kafka.schemaregistry.storage.KafkaStore.waitUntilKafkaReaderReachesLastOffset(KafkaStore.java:220)
    at io.confluent.kafka.schemaregistry.storage.KafkaStore.init(KafkaStore.java:149)
    ... 5 more

Zookeeper has the following ominous message:

OpenJDK 64-Bit Server VM warning: Cannot open file /var/log/kafka/zookeeper-gc.log due to Permission denied

[2016-04-01 06:47:56,061] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2016-04-01 06:47:56,063] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager)
[2016-04-01 06:47:56,063] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager)
[2016-04-01 06:47:56,063] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager)
[2016-04-01 06:47:56,063] WARN Either no config or no quorum defined in config, running  in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain)
[2016-04-01 06:47:56,076] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2016-04-01 06:47:56,076] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain)
[2016-04-01 06:47:56,081] INFO Server environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT (org.apache.zookeeper.server.ZooKeeperServer)
[2016-04-01 06:47:56,081] INFO Server environment:host.name=8e978e62801c (org.apache.zookeeper.server.ZooKeeperServer)
[2016-04-01 06:47:56,082] INFO Server environment:java.version=1.7.0_95 (org.apache.zookeeper.server.ZooKeeperServer)
[2016-04-01 06:47:56,082] INFO Server environment:java.vendor=Oracle Corporation (org.apache.zookeeper.server.ZooKeeperServer)
[2016-04-01 06:47:56,082] INFO Server environment:java.home=/usr/lib/jvm/java-7-openjdk-amd64/jre (org.apache.zookeeper.server.ZooKeeperServer)
[2016-04-01 06:47:56,082] INFO Server environment:java.class.path=:/usr/bin/../core/build/dependant-libs-2.10.4*/*.jar:/usr/bin/../examples/build/libs//kafka-examples*.jar:/usr/bin/../contrib/hadoop-consumer/build/libs//kafka-hadoop-consumer*.jar:/usr/bin/../contrib/hadoop-producer/build/libs//kafka-hadoop-producer*.jar:/usr/bin/../clients/build/libs/kafka-clients*.jar:/usr/bin/../libs/*.jar:/usr/bin/../share/java/kafka/jopt-simple-3.2.jar:/usr/bin/../share/java/kafka/kafka-clients-0.8.2.2.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/kafka_2.10-0.8.2.2-javadoc.jar:/usr/bin/../share/java/kafka/kafka_2.10-0.8.2.2-scaladoc.jar:/usr/bin/../share/java/kafka/kafka_2.10-0.8.2.2-sources.jar:/usr/bin/../share/java/kafka/kafka_2.10-0.8.2.2-test.jar:/usr/bin/../share/java/kafka/kafka_2.10-0.8.2.2.jar:/usr/bin/../share/java/kafka/log4j-1.2.16.jar:/usr/bin/../share/java/kafka/lz4-1.2.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/scala-library-2.10.4.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.6.jar:/usr/bin/../share/java/kafka/slf4j-log4j12-1.6.1.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.1.7.jar:/usr/bin/../share/java/kafka/zkclient-0.3.jar:/usr/bin/../share/java/kafka/zookeeper-3.4.6.jar:/usr/bin/../core/build/libs/kafka_2.10*.jar (org.apache.zookeeper.server.ZooKeeperServer)
[2016-04-01 06:47:56,082] INFO Server environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer)
[2016-04-01 06:47:56,082] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer)
[2016-04-01 06:47:56,082] INFO Server environment:java.compiler=<NA> (org.apache.zookeeper.server.ZooKeeperServer)
[2016-04-01 06:47:56,082] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer)
[2016-04-01 06:47:56,082] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer)
[2016-04-01 06:47:56,082] INFO Server environment:os.version=3.10.0-327.10.1.el7.x86_64 (org.apache.zookeeper.server.ZooKeeperServer)
[2016-04-01 06:47:56,082] INFO Server environment:user.name=confluent (org.apache.zookeeper.server.ZooKeeperServer)
[2016-04-01 06:47:56,082] INFO Server environment:user.home=/home/confluent (org.apache.zookeeper.server.ZooKeeperServer)
[2016-04-01 06:47:56,082] INFO Server environment:user.dir=/ (org.apache.zookeeper.server.ZooKeeperServer)
[2016-04-01 06:47:56,086] INFO tickTime set to 2000 (org.apache.zookeeper.server.ZooKeeperServer)
[2016-04-01 06:47:56,086] INFO minSessionTimeout set to -1 (org.apache.zookeeper.server.ZooKeeperServer)
[2016-04-01 06:47:56,086] INFO maxSessionTimeout set to -1 (org.apache.zookeeper.server.ZooKeeperServer)
[2016-04-01 06:47:56,096] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory)
[2016-04-01 06:47:56,762] INFO Accepted socket connection from /172.17.2.45:38533 (org.apache.zookeeper.server.NIOServerCnxnFactory)
[2016-04-01 06:47:56,767] INFO Client attempting to establish new session at /172.17.2.45:38533 (org.apache.zookeeper.server.ZooKeeperServer)
[2016-04-01 06:47:56,769] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog)
[2016-04-01 06:47:56,787] INFO Established session 0x153d09409ca0000 with negotiated timeout 6000 for client /172.17.2.45:38533 (org.apache.zookeeper.server.ZooKeeperServer)
[2016-04-01 06:47:56,820] INFO Got user-level KeeperException when processing sessionid:0x153d09409ca0000 type:create cxid:0x4 zxid:0x3 txntype:-1 reqpath:n/a Error Path:/brokers Error:KeeperErrorCode = NoNode for /brokers (org.apache.zookeeper.server.PrepRequestProcessor)
[2016-04-01 06:47:56,839] INFO Got user-level KeeperException when processing sessionid:0x153d09409ca0000 type:create cxid:0xa zxid:0x7 txntype:-1 reqpath:n/a Error Path:/config Error:KeeperErrorCode = NoNode for /config (org.apache.zookeeper.server.PrepRequestProcessor)
[2016-04-01 06:47:56,854] INFO Got user-level KeeperException when processing sessionid:0x153d09409ca0000 type:create cxid:0x10 zxid:0xb txntype:-1 reqpath:n/a Error Path:/admin Error:KeeperErrorCode = NoNode for /admin (org.apache.zookeeper.server.PrepRequestProcessor)
[2016-04-01 06:47:57,067] INFO Got user-level KeeperException when processing sessionid:0x153d09409ca0000 type:setData cxid:0x1a zxid:0xf txntype:-1 reqpath:n/a Error Path:/controller_epoch Error:KeeperErrorCode = NoNode for /controller_epoch (org.apache.zookeeper.server.PrepRequestProcessor)
[2016-04-01 06:47:57,114] INFO Got user-level KeeperException when processing sessionid:0x153d09409ca0000 type:delete cxid:0x29 zxid:0x11 txntype:-1 reqpath:n/a Error Path:/admin/preferred_replica_election Error:KeeperErrorCode = NoNode for /admin/preferred_replica_election (org.apache.zookeeper.server.PrepRequestProcessor)
[2016-04-01 06:49:27,311] INFO Accepted socket connection from /172.17.2.46:50163 (org.apache.zookeeper.server.NIOServerCnxnFactory)
[2016-04-01 06:49:27,313] WARN Connection request from old client /172.17.2.46:50163; will be dropped if server is in r-o mode (org.apache.zookeeper.server.ZooKeeperServer)
[2016-04-01 06:49:27,313] INFO Client attempting to establish new session at /172.17.2.46:50163 (org.apache.zookeeper.server.ZooKeeperServer)
[2016-04-01 06:49:27,314] INFO Established session 0x153d09409ca0001 with negotiated timeout 30000 for client /172.17.2.46:50163 (org.apache.zookeeper.server.ZooKeeperServer)
[2016-04-01 06:49:27,528] INFO Accepted socket connection from /172.17.2.46:50164 (org.apache.zookeeper.server.NIOServerCnxnFactory)
[2016-04-01 06:49:27,528] WARN Connection request from old client /172.17.2.46:50164; will be dropped if server is in r-o mode (org.apache.zookeeper.server.ZooKeeperServer)
[2016-04-01 06:49:27,528] INFO Client attempting to establish new session at /172.17.2.46:50164 (org.apache.zookeeper.server.ZooKeeperServer)
[2016-04-01 06:49:27,535] INFO Established session 0x153d09409ca0002 with negotiated timeout 6000 for client /172.17.2.46:50164 (org.apache.zookeeper.server.ZooKeeperServer)
[2016-04-01 06:49:27,593] INFO Got user-level KeeperException when processing sessionid:0x153d09409ca0002 type:create cxid:0x1 zxid:0x15 txntype:-1 reqpath:n/a Error Path:/consumers/schema-registry-7bb6ad89fb64-8081/ids Error:KeeperErrorCode = NoNode for /consumers/schema-registry-7bb6ad89fb64-8081/ids (org.apache.zookeeper.server.PrepRequestProcessor)
[2016-04-01 06:49:27,600] INFO Got user-level KeeperException when processing sessionid:0x153d09409ca0002 type:create cxid:0x2 zxid:0x16 txntype:-1 reqpath:n/a Error Path:/consumers/schema-registry-7bb6ad89fb64-8081 Error:KeeperErrorCode = NoNode for /consumers/schema-registry-7bb6ad89fb64-8081 (org.apache.zookeeper.server.PrepRequestProcessor)
[2016-04-01 06:49:27,689] INFO Accepted socket connection from /172.17.2.47:49313 (org.apache.zookeeper.server.NIOServerCnxnFactory)
[2016-04-01 06:49:27,691] INFO Client attempting to establish new session at /172.17.2.47:49313 (org.apache.zookeeper.server.ZooKeeperServer)
[2016-04-01 06:49:27,699] INFO Established session 0x153d09409ca0003 with negotiated timeout 30000 for client /172.17.2.47:49313 (org.apache.zookeeper.server.ZooKeeperServer)
[2016-04-01 06:49:27,813] INFO Got user-level KeeperException when processing sessionid:0x153d09409ca0001 type:setData cxid:0x7 zxid:0x1b txntype:-1 reqpath:n/a Error Path:/config/topics/_schemas Error:KeeperErrorCode = NoNode for /config/topics/_schemas (org.apache.zookeeper.server.PrepRequestProcessor)
[2016-04-01 06:49:27,816] INFO Got user-level KeeperException when processing sessionid:0x153d09409ca0001 type:create cxid:0x8 zxid:0x1c txntype:-1 reqpath:n/a Error Path:/config/topics Error:KeeperErrorCode = NodeExists for /config/topics (org.apache.zookeeper.server.PrepRequestProcessor)
[2016-04-01 06:49:27,868] INFO Got user-level KeeperException when processing sessionid:0x153d09409ca0000 type:create cxid:0x39 zxid:0x1f txntype:-1 reqpath:n/a Error Path:/brokers/topics/_schemas/partitions/0 Error:KeeperErrorCode = NoNode for /brokers/topics/_schemas/partitions/0 (org.apache.zookeeper.server.PrepRequestProcessor)
[2016-04-01 06:49:27,872] INFO Got user-level KeeperException when processing sessionid:0x153d09409ca0000 type:create cxid:0x3a zxid:0x20 txntype:-1 reqpath:n/a Error Path:/brokers/topics/_schemas/partitions Error:KeeperErrorCode = NoNode for /brokers/topics/_schemas/partitions (org.apache.zookeeper.server.PrepRequestProcessor)
[2016-04-01 06:49:27,895] INFO Got user-level KeeperException when processing sessionid:0x153d09409ca0002 type:create cxid:0x20 zxid:0x24 txntype:-1 reqpath:n/a Error Path:/consumers/schema-registry-7bb6ad89fb64-8081/owners/_schemas Error:KeeperErrorCode = NoNode for /consumers/schema-registry-7bb6ad89fb64-8081/owners/_schemas (org.apache.zookeeper.server.PrepRequestProcessor)
[2016-04-01 06:49:27,902] INFO Got user-level KeeperException when processing sessionid:0x153d09409ca0002 type:create cxid:0x21 zxid:0x25 txntype:-1 reqpath:n/a Error Path:/consumers/schema-registry-7bb6ad89fb64-8081/owners Error:KeeperErrorCode = NoNode for /consumers/schema-registry-7bb6ad89fb64-8081/owners (org.apache.zookeeper.server.PrepRequestProcessor)
[2016-04-01 06:50:28,253] WARN caught end of stream exception (org.apache.zookeeper.server.NIOServerCnxn)
EndOfStreamException: Unable to read additional data from client sessionid 0x153d09409ca0001, likely client has closed socket
    at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228)
    at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
    at java.lang.Thread.run(Thread.java:745)
[2016-04-01 06:50:28,258] INFO Closed socket connection for client /172.17.2.46:50163 which had sessionid 0x153d09409ca0001 (org.apache.zookeeper.server.NIOServerCnxn)
[2016-04-01 06:50:28,258] WARN caught end of stream exception (org.apache.zookeeper.server.NIOServerCnxn)
EndOfStreamException: Unable to read additional data from client sessionid 0x153d09409ca0002, likely client has closed socket
    at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228)
    at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
    at java.lang.Thread.run(Thread.java:745)
[2016-04-01 06:50:28,258] INFO Closed socket connection for client /172.17.2.46:50164 which had sessionid 0x153d09409ca0002 (org.apache.zookeeper.server.NIOServerCnxn)
[2016-04-01 06:50:34,000] INFO Expiring session 0x153d09409ca0002, timeout of 6000ms exceeded (org.apache.zookeeper.server.ZooKeeperServer)
[2016-04-01 06:50:34,001] INFO Processed session termination for sessionid: 0x153d09409ca0002 (org.apache.zookeeper.server.PrepRequestProcessor)
[2016-04-01 06:50:58,000] INFO Expiring session 0x153d09409ca0001, timeout of 30000ms exceeded (org.apache.zookeeper.server.ZooKeeperServer)
[2016-04-01 06:50:58,001] INFO Processed session termination for sessionid: 0x153d09409ca0001 (org.apache.zookeeper.server.PrepRequestProcessor)

The Kafka logs are:

[2016-04-01 06:47:56,688] INFO Verifying properties (kafka.utils.VerifiableProperties)
[2016-04-01 06:47:56,709] INFO Property advertised.host.name is overridden to 127.0.0.1 (kafka.utils.VerifiableProperties)
[2016-04-01 06:47:56,709] INFO Property advertised.port is overridden to 9092 (kafka.utils.VerifiableProperties)
[2016-04-01 06:47:56,709] INFO Property auto.create.topics.enable is overridden to true (kafka.utils.VerifiableProperties)
[2016-04-01 06:47:56,709] INFO Property broker.id is overridden to 0 (kafka.utils.VerifiableProperties)
[2016-04-01 06:47:56,709] INFO Property delete.topic.enable is overridden to true (kafka.utils.VerifiableProperties)
[2016-04-01 06:47:56,709] INFO Property log.cleaner.enable is overridden to true (kafka.utils.VerifiableProperties)
[2016-04-01 06:47:56,709] INFO Property log.dirs is overridden to /var/lib/kafka (kafka.utils.VerifiableProperties)
[2016-04-01 06:47:56,709] INFO Property log.retention.check.interval.ms is overridden to 300000 (kafka.utils.VerifiableProperties)
[2016-04-01 06:47:56,709] INFO Property log.retention.hours is overridden to 168 (kafka.utils.VerifiableProperties)
[2016-04-01 06:47:56,709] INFO Property log.segment.bytes is overridden to 1073741824 (kafka.utils.VerifiableProperties)
[2016-04-01 06:47:56,710] INFO Property num.io.threads is overridden to 8 (kafka.utils.VerifiableProperties)
[2016-04-01 06:47:56,710] INFO Property num.network.threads is overridden to 3 (kafka.utils.VerifiableProperties)
[2016-04-01 06:47:56,710] INFO Property num.partitions is overridden to 1 (kafka.utils.VerifiableProperties)
[2016-04-01 06:47:56,710] INFO Property num.recovery.threads.per.data.dir is overridden to 1 (kafka.utils.VerifiableProperties)
[2016-04-01 06:47:56,710] INFO Property port is overridden to 9092 (kafka.utils.VerifiableProperties)
[2016-04-01 06:47:56,710] INFO Property socket.receive.buffer.bytes is overridden to 102400 (kafka.utils.VerifiableProperties)
[2016-04-01 06:47:56,710] INFO Property socket.request.max.bytes is overridden to 104857600 (kafka.utils.VerifiableProperties)
[2016-04-01 06:47:56,710] INFO Property socket.send.buffer.bytes is overridden to 102400 (kafka.utils.VerifiableProperties)
[2016-04-01 06:47:56,710] INFO Property zookeeper.connect is overridden to 172.17.2.44:2181 (kafka.utils.VerifiableProperties)
[2016-04-01 06:47:56,710] INFO Property zookeeper.connection.timeout.ms is overridden to 6000 (kafka.utils.VerifiableProperties)
[2016-04-01 06:47:56,731] INFO [Kafka Server 0], starting (kafka.server.KafkaServer)
[2016-04-01 06:47:56,732] INFO [Kafka Server 0], Connecting to zookeeper on 172.17.2.44:2181 (kafka.server.KafkaServer)
[2016-04-01 06:47:56,738] INFO Starting ZkClient event thread. (org.I0Itec.zkclient.ZkEventThread)
[2016-04-01 06:47:56,743] INFO Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT (org.apache.zookeeper.ZooKeeper)
[2016-04-01 06:47:56,743] INFO Client environment:host.name=f4459ee12fce (org.apache.zookeeper.ZooKeeper)
[2016-04-01 06:47:56,743] INFO Client environment:java.version=1.7.0_95 (org.apache.zookeeper.ZooKeeper)
[2016-04-01 06:47:56,743] INFO Client environment:java.vendor=Oracle Corporation (org.apache.zookeeper.ZooKeeper)
[2016-04-01 06:47:56,743] INFO Client environment:java.home=/usr/lib/jvm/java-7-openjdk-amd64/jre (org.apache.zookeeper.ZooKeeper)
[2016-04-01 06:47:56,743] INFO Client environment:java.class.path=:/usr/bin/../core/build/dependant-libs-2.10.4*/*.jar:/usr/bin/../examples/build/libs//kafka-examples*.jar:/usr/bin/../contrib/hadoop-consumer/build/libs//kafka-hadoop-consumer*.jar:/usr/bin/../contrib/hadoop-producer/build/libs//kafka-hadoop-producer*.jar:/usr/bin/../clients/build/libs/kafka-clients*.jar:/usr/bin/../libs/*.jar:/usr/bin/../share/java/kafka/jopt-simple-3.2.jar:/usr/bin/../share/java/kafka/kafka-clients-0.8.2.2.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/kafka_2.10-0.8.2.2-javadoc.jar:/usr/bin/../share/java/kafka/kafka_2.10-0.8.2.2-scaladoc.jar:/usr/bin/../share/java/kafka/kafka_2.10-0.8.2.2-sources.jar:/usr/bin/../share/java/kafka/kafka_2.10-0.8.2.2-test.jar:/usr/bin/../share/java/kafka/kafka_2.10-0.8.2.2.jar:/usr/bin/../share/java/kafka/log4j-1.2.16.jar:/usr/bin/../share/java/kafka/lz4-1.2.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/scala-library-2.10.4.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.6.jar:/usr/bin/../share/java/kafka/slf4j-log4j12-1.6.1.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.1.7.jar:/usr/bin/../share/java/kafka/zkclient-0.3.jar:/usr/bin/../share/java/kafka/zookeeper-3.4.6.jar:/usr/bin/../core/build/libs/kafka_2.10*.jar (org.apache.zookeeper.ZooKeeper)
[2016-04-01 06:47:56,743] INFO Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
[2016-04-01 06:47:56,743] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
[2016-04-01 06:47:56,743] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper)
[2016-04-01 06:47:56,743] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
[2016-04-01 06:47:56,743] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
[2016-04-01 06:47:56,743] INFO Client environment:os.version=3.10.0-327.10.1.el7.x86_64 (org.apache.zookeeper.ZooKeeper)
[2016-04-01 06:47:56,743] INFO Client environment:user.name=confluent (org.apache.zookeeper.ZooKeeper)
[2016-04-01 06:47:56,743] INFO Client environment:user.home=/home/confluent (org.apache.zookeeper.ZooKeeper)
[2016-04-01 06:47:56,743] INFO Client environment:user.dir=/ (org.apache.zookeeper.ZooKeeper)
[2016-04-01 06:47:56,743] INFO Initiating client connection, connectString=172.17.2.44:2181 sessionTimeout=6000 watcher=org.I0Itec.zkclient.ZkClient@76415e70 (org.apache.zookeeper.ZooKeeper)
[2016-04-01 06:47:56,758] INFO Opening socket connection to server 172.17.2.44/172.17.2.44:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2016-04-01 06:47:56,762] INFO Socket connection established to 172.17.2.44/172.17.2.44:2181, initiating session (org.apache.zookeeper.ClientCnxn)
[2016-04-01 06:47:56,789] INFO Session establishment complete on server 172.17.2.44/172.17.2.44:2181, sessionid = 0x153d09409ca0000, negotiated timeout = 6000 (org.apache.zookeeper.ClientCnxn)
[2016-04-01 06:47:56,791] INFO zookeeper state changed (SyncConnected) (org.I0Itec.zkclient.ZkClient)
[2016-04-01 06:47:56,919] INFO Loading logs. (kafka.log.LogManager)
[2016-04-01 06:47:56,925] INFO Logs loading complete. (kafka.log.LogManager)
[2016-04-01 06:47:56,976] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
[2016-04-01 06:47:56,977] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
[2016-04-01 06:47:56,992] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.Acceptor)
[2016-04-01 06:47:56,993] INFO [Socket Server on Broker 0], Started (kafka.network.SocketServer)
[2016-04-01 06:47:57,031] INFO Will not load MX4J, mx4j-tools.jar is not in the classpath (kafka.utils.Mx4jLoader$)
[2016-04-01 06:47:57,065] INFO 0 successfully elected as leader (kafka.server.ZookeeperLeaderElector)
[2016-04-01 06:47:57,130] INFO Registered broker 0 at path /brokers/ids/0 with address 127.0.0.1:9092. (kafka.utils.ZkUtils$)
[2016-04-01 06:47:57,137] INFO [Kafka Server 0], started (kafka.server.KafkaServer)
[2016-04-01 06:47:57,167] INFO New leader is 0 (kafka.server.ZookeeperLeaderElector$LeaderChangeListener)
[2016-04-01 06:49:27,926] INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions [_schemas,0] (kafka.server.ReplicaFetcherManager)
[2016-04-01 06:49:27,947] INFO Completed load of log _schemas-0 with log end offset 0 (kafka.log.Log)
[2016-04-01 06:49:27,950] INFO Created log for partition [_schemas,0] in /var/lib/kafka with properties {segment.index.bytes -> 10485760, file.delete.delay.ms -> 60000, segment.bytes -> 1073741824, flush.ms -> 9223372036854775807, delete.retention.ms -> 86400000, index.interval.bytes -> 4096, retention.bytes -> -1, min.insync.replicas -> 1, cleanup.policy -> compact, unclean.leader.election.enable -> true, segment.ms -> 604800000, max.message.bytes -> 1000012, flush.messages -> 9223372036854775807, min.cleanable.dirty.ratio -> 0.5, retention.ms -> 604800000, segment.jitter.ms -> 0}. (kafka.log.LogManager)
[2016-04-01 06:49:27,950] WARN Partition [_schemas,0] on broker 0: No checkpointed highwatermark is found for partition [_schemas,0] (kafka.cluster.Partition)

I appreciate any help. Thank you.

Kafka can't connect to ZooKeeper on Windows

I am trying to set up the Confluent Stream Data Platform with Docker on my Windows machine. ZooKeeper started up fine and shows up in docker ps.

Then, after setting my KAFKA_ADVERTISED_HOST_NAME, I started Kafka using:

docker run -d --name kafka -p 9092:9092 --link zookeeper:zookeeper \
    --env KAFKA_ADVERTISED_HOST_NAME=$DOCKER_MACHINE confluent/kafka

But it just doesn't connect, and I get errors like:

[2015-11-28 15:54:47,513] INFO Verifying properties (kafka.utils.VerifiableProperties)
[2015-11-28 15:54:47,554] INFO Property advertised.host.name is overridden to 192.168.59.103     (kafka.utils.VerifiableProperties)
[2015-11-28 15:54:47,554] INFO Property auto.create.topics.enable is overridden to true (kafka.utils.VerifiableProperties)
[2015-11-28 15:54:47,554] INFO Property broker.id is overridden to 0 (kafka.utils.VerifiableProperties)
[2015-11-28 15:54:47,554] INFO Property delete.topic.enable is overridden to true (kafka.utils.VerifiableProperties)
[2015-11-28 15:54:47,554] INFO Property log.cleaner.enable is overridden to true (kafka.utils.VerifiableProperties)
[2015-11-28 15:54:47,554] INFO Property log.dirs is overridden to /var/lib/kafka (kafka.utils.VerifiableProperties)
[2015-11-28 15:54:47,555] INFO Property log.retention.check.interval.ms is overridden to 300000 (kafka.utils.VerifiableProperties)
[2015-11-28 15:54:47,555] INFO Property log.retention.hours is overridden to 168 (kafka.utils.VerifiableProperties)
[2015-11-28 15:54:47,556] INFO Property log.segment.bytes is overridden to 1073741824 (kafka.utils.VerifiableProperties)
[2015-11-28 15:54:47,557] INFO Property num.io.threads is overridden to 8 (kafka.utils.VerifiableProperties)
[2015-11-28 15:54:47,557] INFO Property num.network.threads is overridden to 3 (kafka.utils.VerifiableProperties)
[2015-11-28 15:54:47,557] INFO Property num.partitions is overridden to 1 (kafka.utils.VerifiableProperties)
[2015-11-28 15:54:47,557] INFO Property num.recovery.threads.per.data.dir is overridden to 1 (kafka.utils.VerifiableProperties)
[2015-11-28 15:54:47,557] INFO Property port is overridden to 9092 (kafka.utils.VerifiableProperties)
[2015-11-28 15:54:47,557] INFO Property socket.receive.buffer.bytes is overridden to 102400 (kafka.utils.VerifiableProperties)
[2015-11-28 15:54:47,557] INFO Property socket.request.max.bytes is overridden to 104857600 (kafka.utils.VerifiableProperties)
[2015-11-28 15:54:47,557] INFO Property socket.send.buffer.bytes is overridden to 102400 (kafka.utils.VerifiableProperties)
[2015-11-28 15:54:47,558] INFO Property zookeeper.connect is overridden to : (kafka.utils.VerifiableProperties)
[2015-11-28 15:54:47,558] INFO Property zookeeper.connection.timeout.ms is overridden to 6000 (kafka.utils.VerifiableProperties)
[2015-11-28 15:54:47,592] INFO [Kafka Server 0], starting (kafka.server.KafkaServer)
[2015-11-28 15:54:47,594] INFO [Kafka Server 0], Connecting to zookeeper on : (kafka.server.KafkaServer)
[2015-11-28 15:54:47,606] INFO Starting ZkClient event thread. (org.I0Itec.zkclient.ZkEventThread)
[2015-11-28 15:54:47,612] INFO Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT (org.apache.zookeeper.ZooKeeper)
[2015-11-28 15:54:47,612] INFO Client environment:host.name=daf47a8b0b61 (org.apache.zookeeper.ZooKeeper)
[2015-11-28 15:54:47,612] INFO Client environment:java.version=1.7.0_79 (org.apache.zookeeper.ZooKeeper)
[2015-11-28 15:54:47,612] INFO Client environment:java.vendor=Oracle Corporation (org.apache.zookeeper.ZooKeeper)
[2015-11-28 15:54:47,612] INFO Client environment:java.home=/usr/lib/jvm/java-7-openjdk-amd64/jre (org.apache.zookeeper.ZooKeeper)
[2015-11-28 15:54:47,612] INFO Client environment:java.class.path=:/usr/bin/../core/build/dependant-libs-2.10.4*/*.jar:/usr/bin/../examples/build/libs//kafka-examples*.jar:/usr/bin/../contrib/hadoop-consumer/build/libs//kafka-hadoop-consumer*.jar:/usr/bin/../contrib/hadoop-producer/build/libs//kafka-hadoop-producer*.jar:/usr/bin/../clients/build/libs/kafka-clients*.jar:/usr/bin/../libs/*.jar:/usr/bin/../share/java/kafka/jopt-simple-3.2.jar:/usr/bin/../share/java/kafka/kafka-clients-0.8.2.2.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/kafka_2.10-0.8.2.2-javadoc.jar:/usr/bin/../share/java/kafka/kafka_2.10-0.8.2.2-scaladoc.jar:/usr/bin/../share/java/kafka/kafka_2.10-0.8.2.2-sources.jar:/usr/bin/../share/java/kafka/kafka_2.10-0.8.2.2-test.jar:/usr/bin/../share/java/kafka/kafka_2.10-0.8.2.2.jar:/usr/bin/../share/java/kafka/log4j-1.2.16.jar:/usr/bin/../share/java/kafka/lz4-1.2.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/scala-library-2.10.4.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.6.jar:/usr/bin/../share/java/kafka/slf4j-log4j12-1.6.1.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.1.7.jar:/usr/bin/../share/java/kafka/zkclient-0.3.jar:/usr/bin/../share/java/kafka/zookeeper-3.4.6.jar:/usr/bin/../core/build/libs/kafka_2.10*.jar (org.apache.zookeeper.ZooKeeper)
[2015-11-28 15:54:47,612] INFO Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
[2015-11-28 15:54:47,612] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
[2015-11-28 15:54:47,612] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper)
[2015-11-28 15:54:47,612] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
[2015-11-28 15:54:47,612] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
[2015-11-28 15:54:47,612] INFO Client environment:os.version=3.18.11-tinycore64 (org.apache.zookeeper.ZooKeeper)
[2015-11-28 15:54:47,612] INFO Client environment:user.name=confluent (org.apache.zookeeper.ZooKeeper)
[2015-11-28 15:54:47,612] INFO Client environment:user.home=/home/confluent (org.apache.zookeeper.ZooKeeper)
[2015-11-28 15:54:47,612] INFO Client environment:user.dir=/ (org.apache.zookeeper.ZooKeeper)
[2015-11-28 15:54:47,613] INFO Initiating client connection, connectString=: sessionTimeout=6000 watcher=org.I0Itec.zkclient.ZkClient@24c96a1a (org.apache.zookeeper.ZooKeeper)
[2015-11-28 15:54:47,632] INFO Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2015-11-28 15:54:47,638] WARN Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
java.net.ConnectException: Connection refused
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:740)
        at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
        at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
[2015-11-28 15:54:48,742] INFO Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2015-11-28 15:54:48,742] WARN Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
java.net.ConnectException: Connection refused
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:740)
        at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
        at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
[2015-11-28 15:54:49,843] INFO Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2015-11-28 15:54:49,843] WARN Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
java.net.ConnectException: Connection refused
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:740)
        at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
        at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
[2015-11-28 15:54:50,944] INFO Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2015-11-28 15:54:50,944] WARN Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
java.net.ConnectException: Connection refused
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:740)
        at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
        at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
[2015-11-28 15:54:52,047] INFO Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2015-11-28 15:54:52,047] WARN Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
java.net.ConnectException: Connection refused
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:740)
        at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
        at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
[2015-11-28 15:54:53,148] INFO Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2015-11-28 15:54:53,148] WARN Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
java.net.ConnectException: Connection refused
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:740)
        at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
        at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
[2015-11-28 15:54:53,629] INFO Terminate ZkClient event thread. (org.I0Itec.zkclient.ZkEventThread)
[2015-11-28 15:54:54,250] INFO Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2015-11-28 15:54:54,353] INFO Session: 0x0 closed (org.apache.zookeeper.ZooKeeper)
[2015-11-28 15:54:54,353] INFO EventThread shut down (org.apache.zookeeper.ClientCnxn)
[2015-11-28 15:54:54,353] FATAL [Kafka Server 0], Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
org.I0Itec.zkclient.exception.ZkTimeoutException: Unable to connect to zookeeper server within timeout: 6000
        at org.I0Itec.zkclient.ZkClient.connect(ZkClient.java:880)
        at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:98)
        at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:84)
        at kafka.server.KafkaServer.initZk(KafkaServer.scala:157)
        at kafka.server.KafkaServer.startup(KafkaServer.scala:82)
        at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:29)
        at kafka.Kafka$.main(Kafka.scala:46)
        at kafka.Kafka.main(Kafka.scala)
[2015-11-28 15:54:54,354] INFO [Kafka Server 0], shutting down (kafka.server.KafkaServer)
[2015-11-28 15:54:54,358] INFO [Kafka Server 0], shut down completed (kafka.server.KafkaServer)
[2015-11-28 15:54:54,360] FATAL Fatal error during KafkaServerStartable startup. Prepare to shutdown (kafka.server.KafkaServerStartable)
org.I0Itec.zkclient.exception.ZkTimeoutException: Unable to connect to zookeeper server within timeout: 6000
        at org.I0Itec.zkclient.ZkClient.connect(ZkClient.java:880)
        at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:98)
        at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:84)
        at kafka.server.KafkaServer.initZk(KafkaServer.scala:157)
        at kafka.server.KafkaServer.startup(KafkaServer.scala:82)
        at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:29)
        at kafka.Kafka$.main(Kafka.scala:46)
        at kafka.Kafka.main(Kafka.scala)
[2015-11-28 15:54:54,361] INFO [Kafka Server 0], shutting down (kafka.server.KafkaServer)

It's still trying to connect to ZooKeeper on 127.0.0.1. Any pointers here?
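The telling line in the log above is "zookeeper.connect is overridden to :" — the connect string is empty, so the ZooKeeper client falls back to localhost. A plausible explanation (an assumption about the image's start script, not confirmed from its source) is that it builds the connect string from the environment variables Docker's legacy --link convention injects, roughly like this:

```shell
# Sketch (assumption): the image derives zookeeper.connect from the env
# vars that `--link zookeeper:zookeeper` injects, named per Docker's
# legacy link convention (ZOOKEEPER_PORT_2181_TCP_ADDR / _PORT).
zk_connect() {
    echo "${ZOOKEEPER_PORT_2181_TCP_ADDR}:${ZOOKEEPER_PORT_2181_TCP_PORT}"
}

# Without the link variables set, the result is the bare ":" from the log:
unset ZOOKEEPER_PORT_2181_TCP_ADDR ZOOKEEPER_PORT_2181_TCP_PORT
zk_connect   # prints ":"

# With a working link, Docker injects them (address is illustrative):
ZOOKEEPER_PORT_2181_TCP_ADDR=172.17.0.2 ZOOKEEPER_PORT_2181_TCP_PORT=2181 zk_connect
```

So the first thing to check is whether the link actually populated the environment, e.g. with docker exec kafka env | grep -i zookeeper; if nothing shows up, the --link to the zookeeper container didn't take effect.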

Kafka images for Windows containers

Hi,
I want to run the images below in Windows containers on Windows Server 2016, since Linux containers are not supported there. Are there equivalent images that can run in Windows containers?

confluentinc/cp-zookeeper
confluentinc/cp-enterprise-kafka
confluentinc/cp-schema-registry

Schema registry times out connecting to zookeeper

I am trying out the Schema Registry, attempting to connect the Docker container to a ZooKeeper/Kafka instance running on a VM. However, it times out when connecting to ZooKeeper.

When I run the Schema Registry directly on my laptop (i.e. without Docker), it works just fine with the same configuration.

Is there some other configuration or setting I need to make this setup work?

The error I'm getting when running the Docker container is the following:

    metric.reporters = []
    kafkastore.connection.url = mesos-master:2181
    avro.compatibility.level = backward
    debug = false
    shutdown.graceful.ms = 1000
    response.mediatype.preferred = [application/vnd.schemaregistry.v1+json, application/vnd.schemaregistry+json, application/json]
    kafkastore.commit.interval.ms = -1
    response.mediatype.default = application/vnd.schemaregistry.v1+json
    kafkastore.topic = _schemas
    metrics.jmx.prefix = kafka.schema.registry
    access.control.allow.origin = 
    port = 8081
    request.logger.name = io.confluent.rest-utils.requests
    metrics.sample.window.ms = 30000
    kafkastore.zk.session.timeout.ms = 30000
    master.eligibility = true
    kafkastore.topic.replication.factor = 3
    kafkastore.timeout.ms = 500
    host.name = cf1d1b09e022
    schema.registry.zk.namespace = schema_registry
    kafkastore.init.timeout.ms = 60000
    metrics.num.samples = 2
 (io.confluent.kafka.schemaregistry.rest.SchemaRegistryConfig:135)
[2016-04-08 13:19:18,701] INFO Initialized the consumer offset to -1 (io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread:86)
[2016-04-08 13:19:28,924] ERROR Server died unexpectedly:  (io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain:51)
org.I0Itec.zkclient.exception.ZkTimeoutException: Unable to connect to zookeeper server within timeout: 6000
    at org.I0Itec.zkclient.ZkClient.connect(ZkClient.java:1120)
    at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:147)
    at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:122)
    at kafka.utils.ZkUtils$.createZkClientAndConnection(ZkUtils.scala:89)
    at kafka.utils.ZkUtils$.apply(ZkUtils.scala:71)
    at kafka.consumer.ZookeeperConsumerConnector.connectZk(ZookeeperConsumerConnector.scala:181)
    at kafka.consumer.ZookeeperConsumerConnector.<init>(ZookeeperConsumerConnector.scala:129)
    at kafka.javaapi.consumer.ZookeeperConsumerConnector.<init>(ZookeeperConsumerConnector.scala:66)
    at kafka.javaapi.consumer.ZookeeperConsumerConnector.<init>(ZookeeperConsumerConnector.scala:69)
    at io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread.<init>(KafkaStoreReaderThread.java:93)
    at io.confluent.kafka.schemaregistry.storage.KafkaStore.<init>(KafkaStore.java:109)
    at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.<init>(KafkaSchemaRegistry.java:136)
    at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.setupResources(SchemaRegistryRestApplication.java:53)
    at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.setupResources(SchemaRegistryRestApplication.java:37)
    at io.confluent.rest.Application.createServer(Application.java:109)
    at io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain.main(SchemaRegistryMain.java:43)
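Given that the same configuration works outside Docker, the likely difference is name resolution and routing: mesos-master may resolve on the laptop (e.g. via /etc/hosts) but not inside the container's network namespace. A small probe using bash's /dev/tcp can test reachability the same way the registry's ZkClient must, from inside the container (the helper name and container name are illustrative, taken from the quickstart):

```shell
# Probe TCP reachability of HOST:PORT; prints "reachable" or "unreachable".
can_reach() {
    if timeout 2 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null; then
        echo reachable
    else
        echo unreachable
    fi
}

# Run the probe *inside* the container, not on the laptop:
# docker exec schema-registry bash -c "$(declare -f can_reach); can_reach mesos-master 2181"
```

If that prints "unreachable", one option is to map the hostname explicitly when starting the container, e.g. with docker run --add-host mesos-master:<VM-IP> ..., or to use the VM's IP address directly in kafkastore.connection.url.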
