
embeddedkafka / embedded-kafka-schema-registry


A library that provides in-memory instances of both Kafka and Confluent Schema Registry to run your tests against.

License: MIT License

Scala 100.00%

embedded-kafka-schema-registry's Issues

Enable Basic and Bearer REST authentication

Currently this does not seem to be possible, unless I'm missing something.

SchemaRegistryRestApplication uses JAAS, which is configured via the -Djava.security.auth.login.config=sample_jaas.config JVM property. As I understand it, this property cannot simply be set globally; it would have to apply only to the SchemaRegistryRestApplication code, which would therefore need to run in a separate JVM.

One way to make it work is to provide a modified SchemaRegistryRestApplication that overrides the createLoginService method so that it returns a fake LoginService that library users can configure.

There may be an easier or more elegant way to do this, or we may simply decide not to support REST authentication at all, so feel free to join the discussion.
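
A minimal sketch of that idea, assuming createLoginService is indeed overridable as described above (the constructor and Jetty types here are illustrative, not taken from the library):

    import io.confluent.kafka.schemaregistry.rest.{SchemaRegistryConfig, SchemaRegistryRestApplication}
    import org.eclipse.jetty.security.LoginService

    // Hypothetical: hand Jetty a LoginService supplied by the test itself,
    // instead of resolving one through JAAS and
    // -Djava.security.auth.login.config.
    class TestSchemaRegistryApp(config: SchemaRegistryConfig, testLoginService: LoginService)
        extends SchemaRegistryRestApplication(config) {
      override protected def createLoginService(): LoginService = testLoginService
    }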

Extending trait EmbeddedKafka stopped working after updating to v5.4.1

I have some custom code extending net.manub.embeddedkafka.schemaregistry.EmbeddedKafka that stopped working after updating to version 5.4.1 of embedded-kafka-schema-registry. The problem is that the overridden withRunningServers function uses scala.reflect.io.Directory instead of java.nio.file.Path.
The problem appears to be fixed in commit e33d1b6. I have patched it locally for now, but it would be much appreciated if you could publish a patch release (v5.4.1.1) with the fix.

Thanks 🙂

How to access the schema registry server in tests?

There is an error:

Caused by: java.lang.IllegalArgumentException: Login module not specified in JAAS config

How can I set this info for the schema registry? For example:

producer.kafka.sasl.jaas.config="org.apache.kafka.common.security.plain.PlainLoginModule required username="USERNAME" password="PASSWORD";"

Thanks!
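
If the goal is only to hand SASL/JAAS settings to the test clients, one option may be to pass them through the config's custom client properties. A hedged sketch, assuming customProducerProperties accepts arbitrary producer settings as it does in embedded-kafka (USERNAME and PASSWORD are placeholders; newer versions use the io.github.embeddedkafka package instead of net.manub):

    import net.manub.embeddedkafka.schemaregistry.EmbeddedKafkaConfig

    implicit val config: EmbeddedKafkaConfig = EmbeddedKafkaConfig(
      customProducerProperties = Map(
        // Same JAAS string a production client would use.
        "sasl.jaas.config" ->
          """org.apache.kafka.common.security.plain.PlainLoginModule required username="USERNAME" password="PASSWORD";"""
      )
    )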

Not able to start the Kafka broker

I am not getting this error with the plain Embedded Kafka repo, but I need the schema registry too:

ERROR kafka.server.BrokerMetadataCheckpoint - Failed to write meta.properties due to
java.nio.file.AccessDeniedException: C:\Users\AVTUMX~1\AppData\Local\Temp\kafka-logs1790594500627329176
at sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:83)
at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102)
at sun.nio.fs.WindowsFileSystemProvider.newFileChannel(WindowsFileSystemProvider.java:115)
at java.nio.channels.FileChannel.open(FileChannel.java:287)
at java.nio.channels.FileChannel.open(FileChannel.java:335)
at org.apache.kafka.common.utils.Utils.flushDir(Utils.java:953)
at org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:941)
at org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:916)
at kafka.server.BrokerMetadataCheckpoint.liftedTree1$1(BrokerMetadataCheckpoint.scala:214)
at kafka.server.BrokerMetadataCheckpoint.write(BrokerMetadataCheckpoint.scala:204)
at kafka.server.KafkaServer.$anonfun$checkpointBrokerMetadata$2(KafkaServer.scala:772)
at kafka.server.KafkaServer.$anonfun$checkpointBrokerMetadata$2$adapted(KafkaServer.scala:770)
at scala.collection.TraversableLike$WithFilter.$anonfun$foreach$1(TraversableLike.scala:924)
at scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:36)
at scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:38)
at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:923)
at kafka.server.KafkaServer.checkpointBrokerMetadata(KafkaServer.scala:770)
at kafka.server.KafkaServer.startup(KafkaServer.scala:322)
at io.github.embeddedkafka.ops.KafkaOps.startKafka(kafkaOps.scala:53)
at io.github.embeddedkafka.ops.KafkaOps.startKafka$(kafkaOps.scala:26)
at io.github.embeddedkafka.schemaregistry.EmbeddedKafka$.startKafka(EmbeddedKafka.scala:65)
at io.github.embeddedkafka.schemaregistry.EmbeddedKafka$.start(EmbeddedKafka.scala:83)

New version

Hi,

Would it be possible for you to release a new version, please?

I am interested in commit 1175d2a, "Upgrade to embeddedkafka 3.4.0".

Cheers

Build for Scala 2.13

Hi guys :)

Is it possible to get a build for 2.13 as well?

EDIT: I see it's blocked by Confluent; is there a ticket open on their side?

Expose custom properties for schema registry

The call to startSchemaRegistry() within the EmbeddedKafka object does not provide a way of passing custom config to the schema registry server.

Perhaps a customSchemaRegistryProperties field could be added to the config and passed through here?
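
To illustrate the proposal, a sketch of how the call site might look once such a field exists (hypothetical until it lands; the property is just an example taken from SchemaRegistryConfig):

    import net.manub.embeddedkafka.schemaregistry.EmbeddedKafkaConfig

    implicit val config: EmbeddedKafkaConfig = EmbeddedKafkaConfig(
      kafkaPort = 6001,
      schemaRegistryPort = 6002,
      customSchemaRegistryProperties = Map(
        // e.g. match the single-broker test setup
        "kafkastore.topic.replication.factor" -> "1"
      )
    )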

Please publish version 6.2.0

It would be great if a version were published against version 6.2.0 of the Confluent Platform. Thanks for making this library!

Create 7.5.2 release

Hi! Is it possible for you to create a 7.5.2 release for the embedded-kafka-schema-registry?

I am not able to run different specs using the same embedded Kafka

I have tried all the permutations from the standard embedded-kafka docs for running different specs against the same Kafka:

  override def beforeAll(): Unit = {
    println(s"EmbeddedKafka.isRunning: ${EmbeddedKafka.isRunning}")
    if (!EmbeddedKafka.isRunning) EmbeddedKafka.start()
    ()
  }

But it seems to be trying to create a new ZooKeeper instance every time I call the companion object methods:

[info] *****.KafkaIngressProducerSpec *** ABORTED ***
[info]   java.net.BindException: Address already in use
[info]   at java.base/sun.nio.ch.Net.bind0(Native Method)
[info]   at java.base/sun.nio.ch.Net.bind(Net.java:555)
[info]   at java.base/sun.nio.ch.ServerSocketChannelImpl.netBind(ServerSocketChannelImpl.java:337)
[info]   at java.base/sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:294)
[info]   at java.base/sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:89)
[info]   at java.base/sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:81)
[info]   at org.apache.zookeeper.server.NIOServerCnxnFactory.configure(NIOServerCnxnFactory.java:687)
[info]   at org.apache.zookeeper.server.ServerCnxnFactory.configure(ServerCnxnFactory.java:76)
[info]   at io.github.embeddedkafka.ops.ZooKeeperOps.startZooKeeper(zooKeeperOps.scala:26)
[info]   at io.github.embeddedkafka.ops.ZooKeeperOps.startZooKeeper$(zooKeeperOps.scala:13)
[info]   ...
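
For what it's worth, one pattern that avoids starting ZooKeeper more than once is to own the lifecycle in a single aggregating suite. A hedged sketch (FirstSpec and SecondSpec are placeholders for your actual specs):

    import org.scalatest.{BeforeAndAfterAll, Suites}
    import net.manub.embeddedkafka.schemaregistry.EmbeddedKafka

    // Starts Kafka, ZooKeeper, and the schema registry once, runs all the
    // nested specs against them, and tears everything down afterwards.
    class AllKafkaSpecs extends Suites(new FirstSpec, new SecondSpec) with BeforeAndAfterAll {
      override def beforeAll(): Unit = { EmbeddedKafka.start(); () }
      override def afterAll(): Unit = EmbeddedKafka.stop()
    }

Marking the nested specs with @DoNotDiscover keeps the test runner from also executing them on their own.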

Schema registry aborts after startup

Hi!

I tried to use this for an integration test I have been writing. I am on Scala 2.13 and my Kafka libs are at 5.5.0-ccs, so I ended up using version 5.5.0.1 of embedded-kafka-schema-registry. It seems to boot up alright and I initially see the ports in netstat, but the schema registry dies midway through:

        bearer.auth.token = [hidden]
        proxy.port = -1
        schema.reflection = false
        auto.register.schemas = true
        max.schemas.per.subject = 1000
        basic.auth.credentials.source = URL
        specific.avro.reader = false
        value.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy
        schema.registry.url = [http://localhost:8081]
        basic.auth.user.info = [hidden]
        proxy.host = 
        schema.registry.basic.auth.user.info = [hidden]
        bearer.auth.credentials.source = STATIC_TOKEN
        key.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy
 (io.confluent.kafka.serializers.KafkaAvroDeserializerConfig:179)
[2020-05-21 23:23:25,072] INFO KafkaAvroDeserializerConfig values: 
        bearer.auth.token = [hidden]
        proxy.port = -1
        schema.reflection = false
        auto.register.schemas = true
        max.schemas.per.subject = 1000
        basic.auth.credentials.source = URL
        specific.avro.reader = false
        value.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy
        schema.registry.url = [http://localhost:8081]
        basic.auth.user.info = [hidden]
        proxy.host = 
        schema.registry.basic.auth.user.info = [hidden]
        bearer.auth.credentials.source = STATIC_TOKEN
        key.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy
 (io.confluent.kafka.serializers.KafkaAvroDeserializerConfig:179)
**Getting connected to localhost:6001 
[2020-05-21 23:23:42,134] INFO SchemaRegistryConfig values: 
        access.control.allow.headers = 
        access.control.allow.methods = 
        access.control.allow.origin = 
        authentication.method = NONE
        authentication.realm = 
        authentication.roles = [*]
        authentication.skip.paths = []
        avro.compatibility.level = 
        compression.enable = true
        debug = false
        host.name = 192.168.0.152
        idle.timeout.ms = 30000
        inter.instance.headers.whitelist = []
        inter.instance.protocol = http
        kafkastore.bootstrap.servers = [localhost:6001]
        kafkastore.connection.url = 
        kafkastore.group.id = 
        kafkastore.init.timeout.ms = 60000
        kafkastore.sasl.kerberos.kinit.cmd = /usr/bin/kinit
        kafkastore.sasl.kerberos.min.time.before.relogin = 60000
        kafkastore.sasl.kerberos.service.name = 
        kafkastore.sasl.kerberos.ticket.renew.jitter = 0.05
        kafkastore.sasl.kerberos.ticket.renew.window.factor = 0.8
        kafkastore.sasl.mechanism = GSSAPI
        kafkastore.security.protocol = PLAINTEXT
        kafkastore.ssl.cipher.suites = 
        kafkastore.ssl.enabled.protocols = TLSv1.2,TLSv1.1,TLSv1
        kafkastore.ssl.endpoint.identification.algorithm = 
        kafkastore.ssl.key.password = [hidden]
        kafkastore.ssl.keymanager.algorithm = SunX509
        kafkastore.ssl.keystore.location = 
        kafkastore.ssl.keystore.password = [hidden]
        kafkastore.ssl.keystore.type = JKS
        kafkastore.ssl.protocol = TLS
        kafkastore.ssl.provider = 
        kafkastore.ssl.trustmanager.algorithm = PKIX
        kafkastore.ssl.truststore.location = 
        kafkastore.ssl.truststore.password = [hidden]
        kafkastore.ssl.truststore.type = JKS
        kafkastore.timeout.ms = 500
        kafkastore.topic = _schemas
        kafkastore.topic.replication.factor = 3
        kafkastore.write.max.retries = 5
        kafkastore.zk.session.timeout.ms = 30000
        listeners = [http://localhost:6002]
        master.eligibility = true
        metric.reporters = []
        metrics.jmx.prefix = kafka.schema.registry
        metrics.num.samples = 2
        metrics.sample.window.ms = 30000
        metrics.tag.map = []
        mode.mutability = false
        port = 8081
        request.logger.name = io.confluent.rest-utils.requests
        resource.extension.class = []
        resource.extension.classes = []
        resource.static.locations = []
        response.mediatype.default = application/vnd.schemaregistry.v1+json
        response.mediatype.preferred = [application/vnd.schemaregistry.v1+json, application/vnd.schemaregistry+json, application/json]
        rest.servlet.initializor.classes = []
        schema.compatibility.level = backward
        schema.providers = []
        schema.registry.group.id = schema-registry
        schema.registry.inter.instance.protocol = 
        schema.registry.resource.extension.class = []
        schema.registry.zk.namespace = schema_registry
        shutdown.graceful.ms = 1000
        ssl.cipher.suites = []
        ssl.client.auth = false
        ssl.client.authentication = NONE
        ssl.enabled.protocols = []
        ssl.endpoint.identification.algorithm = null
        ssl.key.password = [hidden]
        ssl.keymanager.algorithm = 
        ssl.keystore.location = 
        ssl.keystore.password = [hidden]
        ssl.keystore.reload = false
        ssl.keystore.type = JKS
        ssl.keystore.watch.location = 
        ssl.protocol = TLS
        ssl.provider = 
        ssl.trustmanager.algorithm = 
        ssl.truststore.location = 
        ssl.truststore.password = [hidden]
        ssl.truststore.type = JKS
        websocket.path.prefix = /ws
        websocket.servlet.initializor.classes = []
        zookeeper.set.acl = false
 (io.confluent.kafka.schemaregistry.rest.SchemaRegistryConfig:347)
[2020-05-21 23:23:42,153] INFO Logging initialized @44049ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log:169)
[2020-05-21 23:23:42,232] INFO Adding listener: http://localhost:6002 (io.confluent.rest.ApplicationServer:344)
[2020-05-21 23:23:42,308] INFO Initializing KafkaStore with broker endpoints: PLAINTEXT://localhost:6001 (io.confluent.kafka.schemaregistry.storage.KafkaStore:108)
[2020-05-21 23:23:42,309] INFO Registering schema provider for AVRO: io.confluent.kafka.schemaregistry.avro.AvroSchemaProvider (io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry:210)
[2020-05-21 23:23:42,309] INFO Registering schema provider for JSON: io.confluent.kafka.schemaregistry.json.JsonSchemaProvider (io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry:210)
[2020-05-21 23:23:42,310] INFO Registering schema provider for PROTOBUF: io.confluent.kafka.schemaregistry.protobuf.ProtobufSchemaProvider (io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry:210)
[2020-05-21 23:23:42,345] INFO Creating schemas topic _schemas (io.confluent.kafka.schemaregistry.storage.KafkaStore:193)
[2020-05-21 23:23:42,346] WARN Creating the schema topic _schemas using a replication factor of 1, which is less than the desired one of 3. If this is a production environment, it's crucial to add more brokers and increase the replication factor of the topic. (io.confluent.kafka.schemaregistry.storage.KafkaStore:203)
[2020-05-21 23:23:42,513] INFO Kafka store reader thread starting consumer (io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread:110)
[2020-05-21 23:23:42,527] INFO Initialized last consumed offset to -1 (io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread:144)
[2020-05-21 23:23:42,528] INFO [kafka-store-reader-thread-_schemas]: Starting (io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread:66)
[2020-05-21 23:23:42,623] INFO Wait to catch up until the offset at 0 (io.confluent.kafka.schemaregistry.storage.KafkaStore:304)
[2020-05-21 23:23:42,640] INFO Joining schema registry with Kafka-based coordination (io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry:297)
[2020-05-21 23:23:45,817] INFO Finished rebalance with master election result: Assignment{version=1, error=0, master='sr-1-5e328d55-d8bd-41dc-b711-cfb93e166265', masterIdentity=version=1,host=192.168.0.152,port=6002,scheme=http,masterEligibility=true} (io.confluent.kafka.schemaregistry.masterelector.kafka.KafkaGroupMasterElector:228)
[2020-05-21 23:23:45,821] INFO Wait to catch up until the offset at 1 (io.confluent.kafka.schemaregistry.storage.KafkaStore:304)
[2020-05-21 23:23:45,991] INFO jetty-9.4.24.v20191120; built: 2019-11-20T21:37:49.771Z; git: 363d5f2df3a8a28de40604320230664b9c793c16; jvm 1.8.0_242-b08 (org.eclipse.jetty.server.Server:359)
[2020-05-21 23:23:46,036] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session:333)
[2020-05-21 23:23:46,037] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session:338)
[2020-05-21 23:23:46,038] INFO node0 Scavenging every 660000ms (org.eclipse.jetty.server.session:140)
A provider io.confluent.kafka.schemaregistry.rest.resources.ConfigResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider io.confluent.kafka.schemaregistry.rest.resources.ConfigResource will be ignored.  
A provider io.confluent.kafka.schemaregistry.rest.resources.CompatibilityResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider io.confluent.kafka.schemaregistry.rest.resources.CompatibilityResource will be ignored.  
A provider io.confluent.kafka.schemaregistry.rest.resources.SchemasResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider io.confluent.kafka.schemaregistry.rest.resources.SchemasResource will be ignored.  
A provider io.confluent.kafka.schemaregistry.rest.resources.ServerMetadataResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider io.confluent.kafka.schemaregistry.rest.resources.ServerMetadataResource will be ignored.  
A provider io.confluent.kafka.schemaregistry.rest.resources.SubjectsResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider io.confluent.kafka.schemaregistry.rest.resources.SubjectsResource will be ignored.  
A provider io.confluent.kafka.schemaregistry.rest.resources.ModeResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider io.confluent.kafka.schemaregistry.rest.resources.ModeResource will be ignored.  
A provider io.confluent.kafka.schemaregistry.rest.resources.SubjectVersionsResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider io.confluent.kafka.schemaregistry.rest.resources.SubjectVersionsResource will be ignored.  
[2020-05-21 23:23:46,587] INFO HV000001: Hibernate Validator 6.0.17.Final (org.hibernate.validator.internal.util.Version:21)
[2020-05-21 23:23:46,841] INFO JVM Runtime does not support Modules (org.eclipse.jetty.util.TypeUtil:201)
[2020-05-21 23:23:46,842] INFO Started o.e.j.s.ServletContextHandler@745c35cf{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler:825)
[2020-05-21 23:23:46,867] INFO Started o.e.j.s.ServletContextHandler@7079c42e{/ws,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler:825)
[2020-05-21 23:23:46,881] INFO Started NetworkTrafficServerConnector@4e1c56f1{HTTP/1.1,[http/1.1]}{localhost:6002} (org.eclipse.jetty.server.AbstractConnector:330)
[2020-05-21 23:23:46,881] INFO Started @48777ms (org.eclipse.jetty.server.Server:399)
[2020-05-21 23:23:46,891] INFO KafkaAvroSerializerConfig values: 
        bearer.auth.token = [hidden]
        proxy.port = -1
        schema.reflection = false
        auto.register.schemas = true
        max.schemas.per.subject = 1000
        basic.auth.credentials.source = URL
        value.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy
        schema.registry.url = [http://localhost:6002]
        basic.auth.user.info = [hidden]
        proxy.host = 
        schema.registry.basic.auth.user.info = [hidden]
        bearer.auth.credentials.source = STATIC_TOKEN
        key.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy
 (io.confluent.kafka.serializers.KafkaAvroSerializerConfig:179)
[2020-05-21 23:23:46,891] INFO KafkaAvroSerializerConfig values: 
        bearer.auth.token = [hidden]
        proxy.port = -1
        schema.reflection = false
        auto.register.schemas = true
        max.schemas.per.subject = 1000
        basic.auth.credentials.source = URL
        value.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy
        schema.registry.url = [http://localhost:6002]
        basic.auth.user.info = [hidden]
        proxy.host = 
        schema.registry.basic.auth.user.info = [hidden]
        bearer.auth.credentials.source = STATIC_TOKEN
        key.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy
 (io.confluent.kafka.serializers.KafkaAvroSerializerConfig:179)
[2020-05-21 23:23:46,952] INFO Stopped NetworkTrafficServerConnector@4e1c56f1{HTTP/1.1,[http/1.1]}{localhost:6002} (org.eclipse.jetty.server.AbstractConnector:380)
[2020-05-21 23:23:46,952] INFO node0 Stopped scavenging (org.eclipse.jetty.server.session:158)
[2020-05-21 23:23:46,956] INFO Stopped o.e.j.s.ServletContextHandler@7079c42e{/ws,null,UNAVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler:1016)
[2020-05-21 23:23:46,960] INFO Stopped o.e.j.s.ServletContextHandler@745c35cf{/,null,UNAVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler:1016)
[2020-05-21 23:23:46,962] INFO Shutting down schema registry (io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry:1084)
[2020-05-21 23:23:46,962] INFO [kafka-store-reader-thread-_schemas]: Shutting down (io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread:66)
[2020-05-21 23:23:46,963] INFO [kafka-store-reader-thread-_schemas]: Stopped (io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread:66)
[2020-05-21 23:23:46,963] INFO [kafka-store-reader-thread-_schemas]: Shutdown completed (io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread:66)
[2020-05-21 23:23:46,965] INFO KafkaStoreReaderThread shutdown complete. (io.confluent.kafka.schemaregistry.storage.KafkaStoreReaderThread:253)
[2020-05-21 23:23:46,967] ERROR Unexpected exception in schema registry group processing thread (io.confluent.kafka.schemaregistry.masterelector.kafka.KafkaGroupMasterElector:200)
org.apache.kafka.common.errors.WakeupException
        at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.maybeTriggerWakeup(ConsumerNetworkClient.java:511)
        at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:275)
        at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:233)
        at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:224)
        at io.confluent.kafka.schemaregistry.masterelector.kafka.SchemaRegistryCoordinator.poll(SchemaRegistryCoordinator.java:120)
        at io.confluent.kafka.schemaregistry.masterelector.kafka.KafkaGroupMasterElector$1.run(KafkaGroupMasterElector.java:197)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
org.apache.kafka.common.errors.TimeoutException: Topic topic_test not present in metadata after 60000 ms.
The End {0} 
org.apache.kafka.common.errors.TimeoutException: Topic topic_test not present in metadata after 60000 ms.

The test is quite simple: send a message and finish after the asserts.

  "RefDataKafkaVerticle" should "start kafka and sink a message" in {
    implicit val kafkaEmbeddedConfig: EmbeddedKafkaConfig = EmbeddedKafkaConfig(zooKeeperPort = 6000, kafkaPort = 6001 , schemaRegistryPort = 6002)
    kafkaEmbeddedConfig.customSchemaRegistryProperties

I have since tried binding to 0.0.0.0 and tweaking Kafka versions, to no avail.

Any pointers are deeply appreciated.

EmbeddedKafkaConfig.defaultConfig declared type is wrong

EmbeddedKafkaConfig.defaultConfig is erroneously declared with the EmbeddedKafkaConfig type from the net.manub.embeddedkafka package, while it must be the one from net.manub.embeddedkafka.schemaregistry in order to be imported and implicitly picked up by methods such as EmbeddedKafka.start().
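
Until that's fixed, a hedged workaround is to declare the implicit yourself with the schemaregistry type, so the compiler cannot pick up the wrong one (assuming the companion apply with default arguments, as in embedded-kafka):

    import net.manub.embeddedkafka.schemaregistry.{EmbeddedKafka, EmbeddedKafkaConfig}

    // Explicitly typed so EmbeddedKafka.start() resolves the schemaregistry
    // config instead of the base net.manub.embeddedkafka one.
    implicit val config: EmbeddedKafkaConfig = EmbeddedKafkaConfig()

    EmbeddedKafka.start()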

workaround for sbt

Hi, it may be worth adding to your README that the workaround.scala file in the project directory is needed to use this library. It had me puzzled for a while.
PS: thanks for providing this library and embedded-kafka!
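
For anyone else puzzled by this: as far as I recall, the workaround is a small sbt AutoPlugin that forces Maven "bundle" packaging (used by the javax.ws.rs-api artifact pulled in through Confluent) to resolve as a plain jar. A sketch from memory; check the repo for the authoritative version:

    // project/Workaround.scala
    import sbt._

    object PackagingTypePlugin extends AutoPlugin {
      override val buildSettings = {
        // Side effect at build load time: make sbt treat "bundle" artifacts
        // as ordinary jars.
        sys.props += "packaging.type" -> "jar"
        Nil
      }
    }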

Schema Registry startup error

Using version 6.1.1

I have some tests failing to start with these errors. Other tests work fine, and I'm not sure what triggers the problem:

Jun 25, 2021 8:54:36 AM org.glassfish.jersey.internal.inject.Providers checkProviderRuntime
WARNING: A provider io.confluent.kafka.schemaregistry.rest.resources.SchemasResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider io.confluent.kafka.schemaregistry.rest.resources.SchemasResource will be ignored.
Jun 25, 2021 8:54:36 AM org.glassfish.jersey.internal.inject.Providers checkProviderRuntime
WARNING: A provider io.confluent.kafka.schemaregistry.rest.resources.SubjectsResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider io.confluent.kafka.schemaregistry.rest.resources.SubjectsResource will be ignored.
Jun 25, 2021 8:54:36 AM org.glassfish.jersey.internal.inject.Providers checkProviderRuntime
WARNING: A provider io.confluent.kafka.schemaregistry.rest.resources.CompatibilityResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider io.confluent.kafka.schemaregistry.rest.resources.CompatibilityResource will be ignored.
Jun 25, 2021 8:54:36 AM org.glassfish.jersey.internal.inject.Providers checkProviderRuntime
WARNING: A provider io.confluent.kafka.schemaregistry.rest.resources.ServerMetadataResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider io.confluent.kafka.schemaregistry.rest.resources.ServerMetadataResource will be ignored.
Jun 25, 2021 8:54:36 AM org.glassfish.jersey.internal.inject.Providers checkProviderRuntime
WARNING: A provider io.confluent.kafka.schemaregistry.rest.resources.SubjectVersionsResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider io.confluent.kafka.schemaregistry.rest.resources.SubjectVersionsResource will be ignored.
Jun 25, 2021 8:54:36 AM org.glassfish.jersey.internal.inject.Providers checkProviderRuntime
WARNING: A provider io.confluent.kafka.schemaregistry.rest.resources.ModeResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider io.confluent.kafka.schemaregistry.rest.resources.ModeResource will be ignored.
Jun 25, 2021 8:54:36 AM org.glassfish.jersey.internal.inject.Providers checkProviderRuntime
WARNING: A provider io.confluent.kafka.schemaregistry.rest.resources.ConfigResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider io.confluent.kafka.schemaregistry.rest.resources.ConfigResource will be ignored.

Then it fails with:

08:55:36.940 [pool-10-thread-1] ERROR io.confluent.kafka.schemaregistry.leaderelector.kafka.KafkaGroupLeaderElector - Unexpected exception in schema registry group processing thread
org.apache.kafka.common.errors.WakeupException: null

log level for embedded kafka

Hi guys,
Is there a way to set the default log level for the embedded Kafka instance?
I am doing property-based checks, and they generate too many lines of [info] logging for Travis to display, so I'm wondering if there is finer-grained control over the Kafka log level, or whether I should just set a higher log level for all tests.

Thanks for your time!
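
One option, assuming Logback is the SLF4J backend on your test classpath, is to raise the level of the noisy loggers programmatically before the checks run; a sketch:

    import ch.qos.logback.classic.{Level, Logger}
    import org.slf4j.LoggerFactory

    // Quiet the embedded broker and ZooKeeper; the logger names are the
    // usual package prefixes, adjust to taste.
    LoggerFactory.getLogger("kafka").asInstanceOf[Logger].setLevel(Level.WARN)
    LoggerFactory.getLogger("org.apache.kafka").asInstanceOf[Logger].setLevel(Level.WARN)
    LoggerFactory.getLogger("org.apache.zookeeper").asInstanceOf[Logger].setLevel(Level.WARN)

A logback-test.xml on the test classpath achieves the same thing declaratively.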

Move Kafka Streams to its own project

Would there be any interest in splitting the Kafka Streams components from the main project?
embedded-kafka already has this subdivision, so I figured we could do the same in this project to slim down dependencies further.
@NeQuissimus penny for your thoughts.

I'm also planning to move the kafka-avro-serializer dependency to the test scope, now that Schema Registry can be used with Avro, Protobuf, or JSON.
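
In build.sbt terms, the dependency change would amount to something like this (the version value is a placeholder):

    val confluentVersion = "7.0.0" // placeholder
    libraryDependencies += "io.confluent" % "kafka-avro-serializer" % confluentVersion % Test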

Embedded Kafka problems

Hey, I am seeing these errors with Embedded Kafka.

17:27:02.816 [kafka-scheduler-1] ERROR kafka.server.LogDirFailureChannel - Error while writing to checkpoint file /var/folders/l3/dxb1954d48d5v_lfj71wz7540000gn/T/kafka5845553986696747582/replication-offset-checkpoint
java.io.FileNotFoundException: /var/folders/l3/dxb1954d48d5v_lfj71wz7540000gn/T/kafka5845553986696747582/replication-offset-checkpoint.tmp (No such file or directory)
	at java.base/java.io.FileOutputStream.open0(Native Method)
	at java.base/java.io.FileOutputStream.open(FileOutputStream.java:291)
	at java.base/java.io.FileOutputStream.<init>(FileOutputStream.java:234)
	at java.base/java.io.FileOutputStream.<init>(FileOutputStream.java:184)
	at kafka.server.checkpoints.CheckpointFile.liftedTree1$1(CheckpointFile.scala:94)
	at kafka.server.checkpoints.CheckpointFile.write(CheckpointFile.scala:92)
	at kafka.server.checkpoints.OffsetCheckpointFile.write(OffsetCheckpointFile.scala:67)
	at kafka.server.ReplicaManager.$anonfun$checkpointHighWatermarks$6(ReplicaManager.scala:1769)
	at kafka.server.ReplicaManager.$anonfun$checkpointHighWatermarks$6$adapted(ReplicaManager.scala:1768)
	at scala.collection.IterableOnceOps.foreach(IterableOnce.scala:563)
	at scala.collection.IterableOnceOps.foreach$(IterableOnce.scala:561)
	at scala.collection.AbstractIterable.foreach(Iterable.scala:919)
	at scala.collection.IterableOps$WithFilter.foreach(Iterable.scala:889)
	at kafka.server.ReplicaManager.checkpointHighWatermarks(ReplicaManager.scala:1768)
	at kafka.server.ReplicaManager.$anonfun$startHighWatermarkCheckPointThread$1(ReplicaManager.scala:292)
	at kafka.utils.KafkaScheduler.$anonfun$schedule$2(KafkaScheduler.scala:114)
	at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
	at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
	at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630)
	at java.base/java.lang.Thread.run(Thread.java:831)
17:27:02.817 [kafka-scheduler-1] ERROR kafka.server.ReplicaManager - [ReplicaManager broker=0] Error while writing to highwatermark file in directory /var/folders/l3/dxb1954d48d5v_lfj71wz7540000gn/T/kafka5845553986696747582
org.apache.kafka.common.errors.KafkaStorageException: Error while writing to checkpoint file /var/folders/l3/dxb1954d48d5v_lfj71wz7540000gn/T/kafka5845553986696747582/replication-offset-checkpoint
Caused by: java.io.FileNotFoundException: /var/folders/l3/dxb1954d48d5v_lfj71wz7540000gn/T/kafka5845553986696747582/replication-offset-checkpoint.tmp (No such file or directory)
	at java.base/java.io.FileOutputStream.open0(Native Method)
	at java.base/java.io.FileOutputStream.open(FileOutputStream.java:291)
	at java.base/java.io.FileOutputStream.<init>(FileOutputStream.java:234)
	at java.base/java.io.FileOutputStream.<init>(FileOutputStream.java:184)
	at kafka.server.checkpoints.CheckpointFile.liftedTree1$1(CheckpointFile.scala:94)
	at kafka.server.checkpoints.CheckpointFile.write(CheckpointFile.scala:92)
	at kafka.server.checkpoints.OffsetCheckpointFile.write(OffsetCheckpointFile.scala:67)
	at kafka.server.ReplicaManager.$anonfun$checkpointHighWatermarks$6(ReplicaManager.scala:1769)
	at kafka.server.ReplicaManager.$anonfun$checkpointHighWatermarks$6$adapted(ReplicaManager.scala:1768)
	at scala.collection.IterableOnceOps.foreach(IterableOnce.scala:563)
	at scala.collection.IterableOnceOps.foreach$(IterableOnce.scala:561)
	at scala.collection.AbstractIterable.foreach(Iterable.scala:919)
	at scala.collection.IterableOps$WithFilter.foreach(Iterable.scala:889)
	at kafka.server.ReplicaManager.checkpointHighWatermarks(ReplicaManager.scala:1768)
	at kafka.server.ReplicaManager.$anonfun$startHighWatermarkCheckPointThread$1(ReplicaManager.scala:292)
	at kafka.utils.KafkaScheduler.$anonfun$schedule$2(KafkaScheduler.scala:114)
	at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
	at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
	at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630)
	at java.base/java.lang.Thread.run(Thread.java:831)
17:27:02.838 [LogDirFailureHandler] ERROR kafka.log.LogManager - Shutdown broker because all log dirs in /var/folders/l3/dxb1954d48d5v_lfj71wz7540000gn/T/kafka5845553986696747582 have failed

Process finished with exit code 1

The documentation does not mention anything about this file path, and I have no idea how to override it.

Caused by: java.io.FileNotFoundException: /var/folders/l3/dxb1954d48d5v_lfj71wz7540000gn/T/kafka5845553986696747582/replication-offset-checkpoint.tmp (No such file or directory)

Is anybody able to help with what is going on?
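
The path is a per-run temporary directory that the library creates itself, which is why the docs don't mention it; the errors suggest it was deleted out from under the broker mid-run. If pinning it helps you debug, a hedged sketch, assuming customBrokerProperties override the generated defaults as they do in embedded-kafka (the path is hypothetical):

    import io.github.embeddedkafka.schemaregistry.EmbeddedKafkaConfig

    implicit val config: EmbeddedKafkaConfig = EmbeddedKafkaConfig(
      customBrokerProperties = Map(
        // Keep the broker's log dir somewhere that won't be cleaned up
        // while the test is running.
        "log.dirs" -> "/tmp/embedded-kafka-test-logs"
      )
    )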

Can't connect to schema-registry

Hey 👋,

I am having problems connecting to the schema registry in tests when using this library.
My test works when I run a separate container for the schema registry.

This is the test:

import net.manub.embeddedkafka.schemaregistry.{EmbeddedKafka, EmbeddedKafkaConfig}

      withRunningKafka {
        val operationType = NewSubscription
        val organization  = Organization("47Degrees")
        val repository    = Repository("Scala Exercise")
        val messageEvent  = MessageEvent(operationType, organization, repository)

        val config = KafkaConfig(
          Topic("dummy"),
          BootstrapServer("localhost:6001"),
          GroupId("publisher"),
          SchemaRegistryUrl("http://localhost:6002"),
          UserName("Ben"),
          Password("password")
        )

        val consumedResult = for {
          _                         <- KafkaProducerImplementation.imp[IO](config).publish("1", messageEvent)
          committableConsumerRecord <- KafkaConsumerImplementation.imp[IO](config).consume.take(1)
        } yield committableConsumerRecord

        consumedResult
          .map(_.record.value)
          .interruptAfter(10.seconds)
          .compile
          .lastOrError
          .asserting { listOfMessageEvent =>
            listOfMessageEvent.operationType shouldBe operationType
            listOfMessageEvent.organization shouldBe organization
            listOfMessageEvent.repository shouldBe repository
          }
      }

Error logs:

SLF4J: Actual binding is of type [ch.qos.logback.classic.util.ContextSelectorStaticBinder]



Aug 26, 2021 1:16:52 PM org.glassfish.jersey.internal.inject.Providers checkProviderRuntime
WARNING: A provider io.confluent.kafka.schemaregistry.rest.resources.SubjectsResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider io.confluent.kafka.schemaregistry.rest.resources.SubjectsResource will be ignored. 
Aug 26, 2021 1:16:52 PM org.glassfish.jersey.internal.inject.Providers checkProviderRuntime
WARNING: A provider io.confluent.kafka.schemaregistry.rest.resources.ModeResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider io.confluent.kafka.schemaregistry.rest.resources.ModeResource will be ignored. 
Aug 26, 2021 1:16:52 PM org.glassfish.jersey.internal.inject.Providers checkProviderRuntime
WARNING: A provider io.confluent.kafka.schemaregistry.rest.resources.SchemasResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider io.confluent.kafka.schemaregistry.rest.resources.SchemasResource will be ignored. 
Aug 26, 2021 1:16:52 PM org.glassfish.jersey.internal.inject.Providers checkProviderRuntime
WARNING: A provider io.confluent.kafka.schemaregistry.rest.resources.CompatibilityResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider io.confluent.kafka.schemaregistry.rest.resources.CompatibilityResource will be ignored. 
Aug 26, 2021 1:16:52 PM org.glassfish.jersey.internal.inject.Providers checkProviderRuntime
WARNING: A provider io.confluent.kafka.schemaregistry.rest.resources.ConfigResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider io.confluent.kafka.schemaregistry.rest.resources.ConfigResource will be ignored. 
Aug 26, 2021 1:16:52 PM org.glassfish.jersey.internal.inject.Providers checkProviderRuntime
WARNING: A provider io.confluent.kafka.schemaregistry.rest.resources.ServerMetadataResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider io.confluent.kafka.schemaregistry.rest.resources.ServerMetadataResource will be ignored. 
Aug 26, 2021 1:16:52 PM org.glassfish.jersey.internal.inject.Providers checkProviderRuntime
WARNING: A provider io.confluent.kafka.schemaregistry.rest.resources.SubjectVersionsResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider io.confluent.kafka.schemaregistry.rest.resources.SubjectVersionsResource will be ignored. 
[2021-08-26 13:16:52,503] INFO HV000001: Hibernate Validator 6.1.2.Final (org.hibernate.validator.internal.util.Version:21)
Aug 26, 2021 1:16:52 PM org.glassfish.jersey.server.internal.JerseyResourceContext getResource
WARNING: Lookup and initialization failed for a resource class: class org.glassfish.jersey.server.validation.internal.hibernate.HibernateInjectingConstraintValidatorFactory.
MultiException stack 1 of 1
java.lang.NoClassDefFoundError: org/glassfish/jersey/ext/cdi1x/internal/CdiUtil
	at org.glassfish.jersey.server.validation.internal.hibernate.HibernateInjectingConstraintValidatorFactory.postConstruct(HibernateInjectingConstraintValidatorFactory.java:30)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:78)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:567)
	at org.glassfish.hk2.utilities.reflection.ReflectionHelper.invoke(ReflectionHelper.java:1268)
	at org.jvnet.hk2.internal.Utilities.justPostConstruct(Utilities.java:892)
	at org.jvnet.hk2.internal.ServiceLocatorImpl.postConstruct(ServiceLocatorImpl.java:1026)
	at org.jvnet.hk2.internal.ServiceLocatorImpl.createAndInitialize(ServiceLocatorImpl.java:1074)
	at org.jvnet.hk2.internal.ServiceLocatorImpl.createAndInitialize(ServiceLocatorImpl.java:1064)
	at org.glassfish.jersey.inject.hk2.AbstractHk2InjectionManager.createAndInitialize(AbstractHk2InjectionManager.java:189)
	at org.glassfish.jersey.inject.hk2.ImmediateHk2InjectionManager.createAndInitialize(ImmediateHk2InjectionManager.java:30)
	at org.glassfish.jersey.internal.inject.Injections.getOrCreate(Injections.java:106)
	at org.glassfish.jersey.server.JerseyResourceContextConfigurator.lambda$init$0(JerseyResourceContextConfigurator.java:45)
	at org.glassfish.jersey.server.internal.JerseyResourceContext.getResource(JerseyResourceContext.java:83)
	at org.glassfish.jersey.server.validation.internal.CompositeInjectingConstraintValidatorFactory.postConstruct(CompositeInjectingConstraintValidatorFactory.java:45)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:78)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:567)
	at org.glassfish.hk2.utilities.reflection.ReflectionHelper.invoke(ReflectionHelper.java:1268)
	at org.jvnet.hk2.internal.Utilities.justPostConstruct(Utilities.java:892)
	at org.jvnet.hk2.internal.ServiceLocatorImpl.postConstruct(ServiceLocatorImpl.java:1026)
	at org.jvnet.hk2.internal.ServiceLocatorImpl.createAndInitialize(ServiceLocatorImpl.java:1074)
	at org.jvnet.hk2.internal.ServiceLocatorImpl.createAndInitialize(ServiceLocatorImpl.java:1064)
	at org.glassfish.jersey.inject.hk2.AbstractHk2InjectionManager.createAndInitialize(AbstractHk2InjectionManager.java:189)
	at org.glassfish.jersey.inject.hk2.ImmediateHk2InjectionManager.createAndInitialize(ImmediateHk2InjectionManager.java:30)
	at org.glassfish.jersey.internal.inject.Injections.getOrCreate(Injections.java:106)
	at org.glassfish.jersey.server.JerseyResourceContextConfigurator.lambda$init$0(JerseyResourceContextConfigurator.java:45)
	at org.glassfish.jersey.server.internal.JerseyResourceContext.getResource(JerseyResourceContext.java:83)
	at org.glassfish.jersey.server.validation.internal.ValidationBinder$ConfiguredValidatorProvider.getDefaultValidatorContext(ValidationBinder.java:267)
	at org.glassfish.jersey.server.validation.internal.ValidationBinder$ConfiguredValidatorProvider.getDefaultValidator(ValidationBinder.java:245)
	at org.glassfish.jersey.server.validation.internal.ValidationBinder$ConfiguredValidatorProvider.get(ValidationBinder.java:187)
	at org.glassfish.jersey.server.validation.internal.ValidationBinder$ConfiguredValidatorProvider.get(ValidationBinder.java:157)
	at org.glassfish.jersey.inject.hk2.SupplierFactoryBridge.provide(SupplierFactoryBridge.java:76)
	at org.jvnet.hk2.internal.FactoryCreator.create(FactoryCreator.java:129)
	at org.jvnet.hk2.internal.SystemDescriptor.create(SystemDescriptor.java:463)
	at org.jvnet.hk2.internal.PerLookupContext.findOrCreate(PerLookupContext.java:46)
	at org.jvnet.hk2.internal.Utilities.createService(Utilities.java:2102)
	at org.jvnet.hk2.internal.ServiceLocatorImpl.internalGetService(ServiceLocatorImpl.java:758)
	at org.jvnet.hk2.internal.ServiceLocatorImpl.internalGetService(ServiceLocatorImpl.java:721)
	at org.jvnet.hk2.internal.ServiceLocatorImpl.getService(ServiceLocatorImpl.java:691)
	at org.glassfish.jersey.inject.hk2.AbstractHk2InjectionManager.getInstance(AbstractHk2InjectionManager.java:160)
	at org.glassfish.jersey.inject.hk2.ImmediateHk2InjectionManager.getInstance(ImmediateHk2InjectionManager.java:30)
	at org.glassfish.jersey.server.model.internal.ResourceMethodInvokerConfigurator.lambda$postInit$0(ResourceMethodInvokerConfigurator.java:54)
	at org.glassfish.jersey.server.model.ResourceMethodInvoker$Builder.build(ResourceMethodInvoker.java:194)
	at org.glassfish.jersey.server.internal.routing.RuntimeModelBuilder.createInflector(RuntimeModelBuilder.java:106)
	at org.glassfish.jersey.server.internal.routing.RuntimeModelBuilder.createMethodRouter(RuntimeModelBuilder.java:93)
	at org.glassfish.jersey.server.internal.routing.RuntimeModelBuilder.createResourceMethodRouters(RuntimeModelBuilder.java:287)
	at org.glassfish.jersey.server.internal.routing.RuntimeModelBuilder.buildModel(RuntimeModelBuilder.java:151)
	at org.glassfish.jersey.server.internal.routing.Routing$Builder.buildStage(Routing.java:223)
	at org.glassfish.jersey.server.ApplicationHandler.initialize(ApplicationHandler.java:399)
	at org.glassfish.jersey.server.ApplicationHandler.lambda$initialize$1(ApplicationHandler.java:293)
	at org.glassfish.jersey.internal.Errors.process(Errors.java:292)
	at org.glassfish.jersey.internal.Errors.process(Errors.java:274)
	at org.glassfish.jersey.internal.Errors.processWithException(Errors.java:232)
	at org.glassfish.jersey.server.ApplicationHandler.initialize(ApplicationHandler.java:292)
	at org.glassfish.jersey.server.ApplicationHandler.<init>(ApplicationHandler.java:259)
	at org.glassfish.jersey.servlet.WebComponent.<init>(WebComponent.java:311)
	at org.glassfish.jersey.servlet.ServletContainer.init(ServletContainer.java:154)
	at org.glassfish.jersey.servlet.ServletContainer.init(ServletContainer.java:393)
	at org.eclipse.jetty.servlet.FilterHolder.initialize(FilterHolder.java:140)
	at org.eclipse.jetty.servlet.ServletHandler.lambda$initialize$0(ServletHandler.java:739)
	at java.base/java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:948)
	at java.base/java.util.stream.Streams$ConcatSpliterator.forEachRemaining(Streams.java:734)
	at java.base/java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:762)
	at org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:763)
	at org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:379)
	at org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:911)
	at org.eclipse.jetty.servlet.ServletContextHandler.doStart(ServletContextHandler.java:288)
	at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:72)
	at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:169)
	at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:117)
	at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:97)
	at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:72)
	at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:169)
	at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:117)
	at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:97)
	at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:72)
	at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:169)
	at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:110)
	at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:97)
	at org.eclipse.jetty.server.handler.StatisticsHandler.doStart(StatisticsHandler.java:253)
	at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:72)
	at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:169)
	at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:117)
	at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:97)
	at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:72)
	at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:169)
	at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:110)
	at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:97)
	at org.eclipse.jetty.server.handler.gzip.GzipHandler.doStart(GzipHandler.java:426)
	at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:72)
	at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:169)
	at org.eclipse.jetty.server.Server.start(Server.java:423)
	at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:110)
	at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:97)
	at org.eclipse.jetty.server.Server.doStart(Server.java:387)
	at io.confluent.rest.ApplicationServer.doStart(ApplicationServer.java:232)
	at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:72)
	at io.confluent.rest.Application.start(Application.java:610)
	at net.manub.embeddedkafka.schemaregistry.ops.SchemaRegistryOps.startSchemaRegistry(SchemaRegistryOps.scala:50)
	at net.manub.embeddedkafka.schemaregistry.ops.SchemaRegistryOps.startSchemaRegistry$(SchemaRegistryOps.scala:23)
	at kafkaSpec.KafkaSpec.startSchemaRegistry(KafkaSpec.scala:24)
	at net.manub.embeddedkafka.schemaregistry.EmbeddedKafka.withRunningServers(EmbeddedKafka.scala:42)
	at net.manub.embeddedkafka.schemaregistry.EmbeddedKafka.withRunningServers$(EmbeddedKafka.scala:26)
	at kafkaSpec.KafkaSpec.withRunningServers(KafkaSpec.scala:24)
	at kafkaSpec.KafkaSpec.withRunningServers(KafkaSpec.scala:24)
	at net.manub.embeddedkafka.EmbeddedKafkaSupport.$anonfun$withRunningKafka$2(EmbeddedKafka.scala:106)
	at net.manub.embeddedkafka.EmbeddedKafkaSupport.withTempDir(EmbeddedKafka.scala:148)
	at net.manub.embeddedkafka.EmbeddedKafkaSupport.withTempDir$(EmbeddedKafka.scala:143)
	at kafkaSpec.KafkaSpec.withTempDir(KafkaSpec.scala:24)
	at net.manub.embeddedkafka.EmbeddedKafkaSupport.$anonfun$withRunningKafka$1(EmbeddedKafka.scala:105)
	at net.manub.embeddedkafka.EmbeddedKafkaSupport.$anonfun$withRunningKafka$1$adapted(EmbeddedKafka.scala:104)
	at net.manub.embeddedkafka.EmbeddedKafkaSupport.$anonfun$withRunningZooKeeper$1(EmbeddedKafka.scala:136)
	at net.manub.embeddedkafka.EmbeddedKafkaSupport.withTempDir(EmbeddedKafka.scala:148)
	at net.manub.embeddedkafka.EmbeddedKafkaSupport.withTempDir$(EmbeddedKafka.scala:143)
	at kafkaSpec.KafkaSpec.withTempDir(KafkaSpec.scala:24)
	at net.manub.embeddedkafka.EmbeddedKafkaSupport.withRunningZooKeeper(EmbeddedKafka.scala:133)
	at net.manub.embeddedkafka.EmbeddedKafkaSupport.withRunningZooKeeper$(EmbeddedKafka.scala:130)
	at kafkaSpec.KafkaSpec.withRunningZooKeeper(KafkaSpec.scala:24)
	at net.manub.embeddedkafka.EmbeddedKafkaSupport.withRunningKafka(EmbeddedKafka.scala:104)
	at net.manub.embeddedkafka.EmbeddedKafkaSupport.withRunningKafka$(EmbeddedKafka.scala:103)
	at kafkaSpec.KafkaSpec.withRunningKafka(KafkaSpec.scala:24)
	at kafkaSpec.KafkaSpec.$anonfun$new$2(KafkaSpec.scala:30)
	at org.scalatest.freespec.AsyncFreeSpecLike.transformToOutcomeParam$1(AsyncFreeSpecLike.scala:140)
	at org.scalatest.freespec.AsyncFreeSpecLike.$anonfun$registerTestToRun$1(AsyncFreeSpecLike.scala:141)
	at org.scalatest.AsyncTestSuite.$anonfun$transformToOutcome$1(AsyncTestSuite.scala:240)
	at org.scalatest.freespec.AsyncFreeSpecLike$$anon$1.apply(AsyncFreeSpecLike.scala:408)
	at org.scalatest.AsyncTestSuite.withFixture(AsyncTestSuite.scala:313)
	at org.scalatest.AsyncTestSuite.withFixture$(AsyncTestSuite.scala:312)
	at org.scalatest.freespec.AsyncFreeSpec.withFixture(AsyncFreeSpec.scala:2280)
	at org.scalatest.freespec.AsyncFreeSpecLike.invokeWithAsyncFixture$1(AsyncFreeSpecLike.scala:406)
	at org.scalatest.freespec.AsyncFreeSpecLike.$anonfun$runTest$1(AsyncFreeSpecLike.scala:420)
	at org.scalatest.AsyncSuperEngine.runTestImpl(AsyncEngine.scala:374)
	at org.scalatest.freespec.AsyncFreeSpecLike.runTest(AsyncFreeSpecLike.scala:420)
	at org.scalatest.freespec.AsyncFreeSpecLike.runTest$(AsyncFreeSpecLike.scala:400)
	at org.scalatest.freespec.AsyncFreeSpec.runTest(AsyncFreeSpec.scala:2280)
	at org.scalatest.freespec.AsyncFreeSpecLike.$anonfun$runTests$1(AsyncFreeSpecLike.scala:479)
	at org.scalatest.AsyncSuperEngine.$anonfun$runTestsInBranch$1(AsyncEngine.scala:432)
	at scala.collection.LinearSeqOps.foldLeft(LinearSeq.scala:169)
	at scala.collection.LinearSeqOps.foldLeft$(LinearSeq.scala:165)
	at scala.collection.immutable.List.foldLeft(List.scala:79)
	at org.scalatest.AsyncSuperEngine.traverseSubNodes$1(AsyncEngine.scala:406)
	at org.scalatest.AsyncSuperEngine.runTestsInBranch(AsyncEngine.scala:479)
	at org.scalatest.AsyncSuperEngine.$anonfun$runTestsInBranch$1(AsyncEngine.scala:460)
	at scala.collection.LinearSeqOps.foldLeft(LinearSeq.scala:169)
	at scala.collection.LinearSeqOps.foldLeft$(LinearSeq.scala:165)
	at scala.collection.immutable.List.foldLeft(List.scala:79)
	at org.scalatest.AsyncSuperEngine.traverseSubNodes$1(AsyncEngine.scala:406)
	at org.scalatest.AsyncSuperEngine.runTestsInBranch(AsyncEngine.scala:487)
	at org.scalatest.AsyncSuperEngine.runTestsImpl(AsyncEngine.scala:555)
	at org.scalatest.freespec.AsyncFreeSpecLike.runTests(AsyncFreeSpecLike.scala:479)
	at org.scalatest.freespec.AsyncFreeSpecLike.runTests$(AsyncFreeSpecLike.scala:478)
	at org.scalatest.freespec.AsyncFreeSpec.runTests(AsyncFreeSpec.scala:2280)
	at org.scalatest.Suite.run(Suite.scala:1112)
	at org.scalatest.Suite.run$(Suite.scala:1094)
	at org.scalatest.freespec.AsyncFreeSpec.org$scalatest$freespec$AsyncFreeSpecLike$$super$run(AsyncFreeSpec.scala:2280)
	at org.scalatest.freespec.AsyncFreeSpecLike.$anonfun$run$1(AsyncFreeSpecLike.scala:523)
	at org.scalatest.AsyncSuperEngine.runImpl(AsyncEngine.scala:625)
	at org.scalatest.freespec.AsyncFreeSpecLike.run(AsyncFreeSpecLike.scala:523)
	at org.scalatest.freespec.AsyncFreeSpecLike.run$(AsyncFreeSpecLike.scala:522)
	at org.scalatest.freespec.AsyncFreeSpec.run(AsyncFreeSpec.scala:2280)
	at org.scalatest.tools.SuiteRunner.run(SuiteRunner.scala:45)
	at org.scalatest.tools.Runner$.$anonfun$doRunRunRunDaDoRunRun$13(Runner.scala:1322)
	at org.scalatest.tools.Runner$.$anonfun$doRunRunRunDaDoRunRun$13$adapted(Runner.scala:1316)
	at scala.collection.immutable.List.foreach(List.scala:333)
	at org.scalatest.tools.Runner$.doRunRunRunDaDoRunRun(Runner.scala:1316)
	at org.scalatest.tools.Runner$.$anonfun$runOptionallyWithPassFailReporter$24(Runner.scala:993)
	at org.scalatest.tools.Runner$.$anonfun$runOptionallyWithPassFailReporter$24$adapted(Runner.scala:971)
	at org.scalatest.tools.Runner$.withClassLoaderAndDispatchReporter(Runner.scala:1482)
	at org.scalatest.tools.Runner$.runOptionallyWithPassFailReporter(Runner.scala:971)
	at org.scalatest.tools.Runner$.run(Runner.scala:798)
	at org.scalatest.tools.Runner.run(Runner.scala)
	at org.jetbrains.plugins.scala.testingSupport.scalaTest.ScalaTestRunner.runScalaTest2or3(ScalaTestRunner.java:38)
	at org.jetbrains.plugins.scala.testingSupport.scalaTest.ScalaTestRunner.main(ScalaTestRunner.java:25)
Caused by: java.lang.ClassNotFoundException: org.glassfish.jersey.ext.cdi1x.internal.CdiUtil
	at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:636)
	at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:182)
	at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:519)
	... 176 more

13:16:53.508 [pool-8-thread-1] ERROR i.c.k.s.l.k.KafkaGroupLeaderElector - Unexpected exception in schema registry group processing thread
org.apache.kafka.common.errors.WakeupException: null
	at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.maybeTriggerWakeup(ConsumerNetworkClient.java:514)
	at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:278)
	at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:236)
	at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:227)
	at io.confluent.kafka.schemaregistry.leaderelector.kafka.SchemaRegistryCoordinator.poll(SchemaRegistryCoordinator.java:124)
	at io.confluent.kafka.schemaregistry.leaderelector.kafka.KafkaGroupLeaderElector$1.run(KafkaGroupLeaderElector.java:202)
	at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630)
	at java.base/java.lang.Thread.run(Thread.java:831)
13:16:53.553 [scala-execution-context-global-113] ERROR i.c.k.s.client.rest.RestService - Failed to send HTTP request to endpoint: http://localhost:6002/subjects/dummy-value/versions
java.net.ConnectException: Connection refused
	at java.base/sun.nio.ch.Net.pollConnect(Native Method)
	at java.base/sun.nio.ch.Net.pollConnectNow(Net.java:669)
	at java.base/sun.nio.ch.NioSocketImpl.timedFinishConnect(NioSocketImpl.java:542)
	at java.base/sun.nio.ch.NioSocketImpl.connect(NioSocketImpl.java:597)
	at java.base/java.net.Socket.connect(Socket.java:645)
	at java.base/sun.net.NetworkClient.doConnect(NetworkClient.java:177)
	at java.base/sun.net.www.http.HttpClient.openServer(HttpClient.java:497)
	at java.base/sun.net.www.http.HttpClient.openServer(HttpClient.java:600)
	at java.base/sun.net.www.http.HttpClient.<init>(HttpClient.java:246)
	at java.base/sun.net.www.http.HttpClient.New(HttpClient.java:351)
	at java.base/sun.net.www.http.HttpClient.New(HttpClient.java:372)
	at java.base/sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:1299)
	at java.base/sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1232)
	at java.base/sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:1120)
	at java.base/sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:1051)
	at java.base/sun.net.www.protocol.http.HttpURLConnection.getOutputStream0(HttpURLConnection.java:1419)
	at java.base/sun.net.www.protocol.http.HttpURLConnection.getOutputStream(HttpURLConnection.java:1390)
	at io.confluent.kafka.schemaregistry.client.rest.RestService.sendHttpRequest(RestService.java:266)
	at io.confluent.kafka.schemaregistry.client.rest.RestService.httpRequest(RestService.java:355)
	at io.confluent.kafka.schemaregistry.client.rest.RestService.registerSchema(RestService.java:498)
	at io.confluent.kafka.schemaregistry.client.rest.RestService.registerSchema(RestService.java:489)
	at io.confluent.kafka.schemaregistry.client.rest.RestService.registerSchema(RestService.java:462)
	at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.registerAndGetId(CachedSchemaRegistryClient.java:214)
	at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.register(CachedSchemaRegistryClient.java:276)
	at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.register(CachedSchemaRegistryClient.java:252)
	at io.confluent.kafka.serializers.AbstractKafkaAvroSerializer.serializeImpl(AbstractKafkaAvroSerializer.java:84)
	at io.confluent.kafka.serializers.KafkaAvroSerializer.serialize(KafkaAvroSerializer.java:60)
	at fs2.kafka.vulcan.AvroSerializer$.$anonfun$using$4(AvroSerializer.scala:26)
	at cats.effect.internals.IORunLoop$.liftedTree1$1(IORunLoop.scala:114)
	at cats.effect.internals.IORunLoop$.cats$effect$internals$IORunLoop$$loop(IORunLoop.scala:114)
	at cats.effect.internals.IORunLoop$RestartCallback.signal(IORunLoop.scala:463)
	at cats.effect.internals.IORunLoop$RestartCallback.apply(IORunLoop.scala:484)
	at cats.effect.internals.IORunLoop$RestartCallback.apply(IORunLoop.scala:422)
	at cats.effect.internals.IOShift$Tick.run(IOShift.scala:36)
	at java.base/java.util.concurrent.ForkJoinTask$RunnableExecuteAction.exec(ForkJoinTask.java:1434)
	at java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:295)
	at java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1016)
	at java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1665)
	at java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1598)
	at java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:183)


Error serializing Avro message
org.apache.kafka.common.errors.SerializationException: Error serializing Avro message
Caused by: java.net.ConnectException: Connection refused
	at java.base/sun.nio.ch.Net.pollConnect(Native Method)
	at java.base/sun.nio.ch.Net.pollConnectNow(Net.java:669)
	at java.base/sun.nio.ch.NioSocketImpl.timedFinishConnect(NioSocketImpl.java:542)
	at java.base/sun.nio.ch.NioSocketImpl.connect(NioSocketImpl.java:597)
	at java.base/java.net.Socket.connect(Socket.java:645)
	at java.base/sun.net.NetworkClient.doConnect(NetworkClient.java:177)
	at java.base/sun.net.www.http.HttpClient.openServer(HttpClient.java:497)
	at java.base/sun.net.www.http.HttpClient.openServer(HttpClient.java:600)
	at java.base/sun.net.www.http.HttpClient.<init>(HttpClient.java:246)
	at java.base/sun.net.www.http.HttpClient.New(HttpClient.java:351)
	at java.base/sun.net.www.http.HttpClient.New(HttpClient.java:372)
	at java.base/sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:1299)
	at java.base/sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1232)
	at java.base/sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:1120)
	at java.base/sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:1051)
	at java.base/sun.net.www.protocol.http.HttpURLConnection.getOutputStream0(HttpURLConnection.java:1419)
	at java.base/sun.net.www.protocol.http.HttpURLConnection.getOutputStream(HttpURLConnection.java:1390)
	at io.confluent.kafka.schemaregistry.client.rest.RestService.sendHttpRequest(RestService.java:266)
	at io.confluent.kafka.schemaregistry.client.rest.RestService.httpRequest(RestService.java:355)
	at io.confluent.kafka.schemaregistry.client.rest.RestService.registerSchema(RestService.java:498)
	at io.confluent.kafka.schemaregistry.client.rest.RestService.registerSchema(RestService.java:489)
	at io.confluent.kafka.schemaregistry.client.rest.RestService.registerSchema(RestService.java:462)
	at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.registerAndGetId(CachedSchemaRegistryClient.java:214)
	at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.register(CachedSchemaRegistryClient.java:276)
	at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.register(CachedSchemaRegistryClient.java:252)
	at io.confluent.kafka.serializers.AbstractKafkaAvroSerializer.serializeImpl(AbstractKafkaAvroSerializer.java:84)
	at io.confluent.kafka.serializers.KafkaAvroSerializer.serialize(KafkaAvroSerializer.java:60)
	at fs2.kafka.vulcan.AvroSerializer$.$anonfun$using$4(AvroSerializer.scala:26)
	at cats.effect.internals.IORunLoop$.liftedTree1$1(IORunLoop.scala:114)
	at cats.effect.internals.IORunLoop$.cats$effect$internals$IORunLoop$$loop(IORunLoop.scala:114)
	at cats.effect.internals.IORunLoop$RestartCallback.signal(IORunLoop.scala:463)
	at cats.effect.internals.IORunLoop$RestartCallback.apply(IORunLoop.scala:484)
	at cats.effect.internals.IORunLoop$RestartCallback.apply(IORunLoop.scala:422)
	at cats.effect.internals.IOShift$Tick.run(IOShift.scala:36)
	at java.base/java.util.concurrent.ForkJoinTask$RunnableExecuteAction.exec(ForkJoinTask.java:1434)
	at java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:295)
	at java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1016)
	at java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1665)
	at java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1598)
	at java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:183)


Compilation fails when extending schemaregistry.EmbeddedKafka trait

When my test extends net.manub.embeddedkafka.schemaregistry.EmbeddedKafka, compilation fails with:
multiple overloaded alternatives of method createCustomTopic define default arguments. The members with defaults are defined in trait AdminOps in package ops and trait EmbeddedKafkaSupport in package embeddedkafka.

After adding a call to the createCustomTopic method, compilation fails with:
both method createCustomTopic in trait AdminOps of type (topic: String, topicConfig: Map[String,String], partitions: Int, replicationFactor: Int)(implicit config: net.manub.embeddedkafka.schemaregistry.EmbeddedKafkaConfig)Unit and method createCustomTopic in trait EmbeddedKafkaSupport of type (topic: String, topicConfig: Map[String,String], partitions: Int, replicationFactor: Int)(implicit config: net.manub.embeddedkafka.EmbeddedKafkaConfig)Unit match argument types (String,scala.collection.immutable.Map[String,String],Int,Int)

I'm using "io.github.embeddedkafka" %% "embedded-kafka-schema-registry" % "5.1.0"
with scala "2.12.8"

Publish 6.1.1

👋 Hi there, any chance you could cut a release for 6.1.1 now that #205 is merged?
Thanks!

Maven repository has this depending on embedded-kafka-streams v2.3.0, which brings in all the 2.3.0 artifacts

I have this in my ivy-5.3.1.xml:

<?xml version="1.0" encoding="UTF-8"?>
<ivy-module version="2.0" xmlns:m="http://ant.apache.org/ivy/maven" xmlns:e="http://ant.apache.org/ivy/extra">
	<info organisation="io.github.embeddedkafka"
		module="embedded-kafka-schema-registry_2.12"
		revision="5.3.1"
		status="release"
		publication="20191024133507"
	>
		<license name="MIT" url="http://opensource.org/licenses/MIT" />
		<description homepage="https://github.com/embeddedkafka/embedded-kafka-schema-registry">
		embedded-kafka-schema-registry
		</description>
		<e:sbtTransformHash>e12b99d712bbbb3e5e1423530986d71caa27c134</e:sbtTransformHash>
	</info>
	<configurations>
		<conf name="default" visibility="public" description="runtime dependencies and master artifact can be used with this conf" extends="runtime,master"/>
		<conf name="master" visibility="public" description="contains only the artifact published by this module itself, with no transitive dependencies"/>
		<conf name="compile" visibility="public" description="this is the default scope, used if none is specified. Compile dependencies are available in all classpaths."/>
		<conf name="provided" visibility="public" description="this is much like compile, but indicates you expect the JDK or a container to provide it. It is only available on the compilation classpath, and is not transitive."/>
		<conf name="runtime" visibility="public" description="this scope indicates that the dependency is not required for compilation, but is for execution. It is in the runtime and test classpaths, but not the compile classpath." extends="compile"/>
		<conf name="test" visibility="private" description="this scope indicates that the dependency is not required for normal use of the application, and is only available for the test compilation and execution phases." extends="runtime"/>
		<conf name="system" visibility="public" description="this scope is similar to provided except that you have to provide the JAR which contains it explicitly. The artifact is always available and is not looked up in a repository."/>
		<conf name="sources" visibility="public" description="this configuration contains the source artifact of this module, if any."/>
		<conf name="javadoc" visibility="public" description="this configuration contains the javadoc artifact of this module, if any."/>
		<conf name="optional" visibility="public" description="contains all optional dependencies"/>
	</configurations>
	<publications>
		<artifact name="embedded-kafka-schema-registry_2.12" type="jar" ext="jar" conf="master"/>
	</publications>
	<dependencies>
		<dependency org="org.scala-lang" name="scala-library" rev="2.12.9" force="true" conf="compile->compile(*),master(compile);runtime->runtime(*)"/>
		<dependency org="io.github.embeddedkafka" name="embedded-kafka-streams_2.12" rev="2.3.0" force="true" conf="compile->compile(*),master(compile);runtime->runtime(*)"/>
		<dependency org="io.confluent" name="kafka-avro-serializer" rev="5.3.1" force="true" conf="compile->compile(*),master(compile);runtime->runtime(*)"/>
		<dependency org="io.confluent" name="kafka-schema-registry" rev="5.3.1" force="true" conf="compile->compile(*),master(compile);runtime->runtime(*)">
			<artifact name="kafka-schema-registry" type="jar" ext="jar" conf="compile,runtime"/>
			<artifact name="kafka-schema-registry" type="jar" ext="jar" conf="compile,runtime" m:classifier="tests"/>
		</dependency>
		<dependency org="org.slf4j" name="slf4j-log4j12" rev="1.7.28" force="true" conf="test->runtime(*),master(compile)"/>
		<dependency org="org.scalatest" name="scalatest_2.12" rev="3.0.8" force="true" conf="test->runtime(*),master(compile)"/>
		<dependency org="com.typesafe.akka" name="akka-actor_2.12" rev="2.5.26" force="true" conf="test->runtime(*),master(compile)"/>
		<dependency org="com.typesafe.akka" name="akka-testkit_2.12" rev="2.5.26" force="true" conf="test->runtime(*),master(compile)"/>
	</dependencies>
</ivy-module>

I should add that this is definitely causing problems - I was getting NoSuchMethodError failures when trying to start it up. I've grabbed the 5.4.0-SNAPSHOT, run sbt publishLocal, and am using that instead; the problem has gone.
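
Until a release without the stale dependency is published, a possible sbt-side workaround (sketch only; the target version 2.3.1 below is my assumption, not taken from the POM) is to pin the transitive module:

// build.sbt - force the transitive embedded-kafka-streams version so the
// 2.3.0 one declared above no longer wins. "2.3.1" is an assumed target.
dependencyOverrides += "io.github.embeddedkafka" %% "embedded-kafka-streams" % "2.3.1"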

Non-breaking WakeupException on stop

Hello, I'm aware this is usually just a symptom of something else. In my case everything seems to be working fine, but when I call EmbeddedKafka.stop() there's just this noisy log coming out:

ERROR | i.c.k.s.l.k.KafkaGroupLeaderElector | Unexpected exception in schema registry group processing thread
org.apache.kafka.common.errors.WakeupException: null
	at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.maybeTriggerWakeup(ConsumerNetworkClient.java:514)
	at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:278)
	at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:236)
	at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:227)
	at io.confluent.kafka.schemaregistry.leaderelector.kafka.SchemaRegistryCoordinator.poll(SchemaRegistryCoordinator.java:124)
	at io.confluent.kafka.schemaregistry.leaderelector.kafka.KafkaGroupLeaderElector$1.run(KafkaGroupLeaderElector.java:198)
	at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630)
	at java.base/java.lang.Thread.run(Thread.java:831)

And my test is still passing. Is this a known issue or am I doing something wrong?
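
If it is indeed harmless shutdown noise, one possible mitigation (a sketch assuming a log4j backend, as in this project's test dependencies) is to mute that logger in src/test/resources/log4j.properties:

# Assumed log4j setup: silence the WakeupException logged by the schema
# registry leader elector while the embedded broker shuts down.
log4j.logger.io.confluent.kafka.schemaregistry.leaderelector.kafka.KafkaGroupLeaderElector=OFF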
