scalecube / scalecube-cluster

ScaleCube Cluster is a lightweight JVM implementation of SWIM (Scalable Weakly-consistent Infection-style Process Group Membership Protocol). It provides cluster membership, failure detection, and a gossip protocol library.

Home Page: http://scalecube.github.io/

License: Apache License 2.0

Java 100.00%
cluster-membership gossip-protocol swim-protocol reactor3 reactive-programming service-discovery-protocol distributed-systems distributed-computing failure-detection gossip-protocol-library

scalecube-cluster's Introduction

scalecube-services


MICROSERVICES 2.0

ScaleCube is a library that simplifies the development of reactive and distributed applications by providing an embeddable microservices toolkit. It connects distributed microservices in a way that, viewed collectively, resembles a fabric. It greatly simplifies and streamlines asynchronous programming and provides a tool-set for managing a microservices architecture. ScaleCube Services is built on top of ScaleCube Cluster, which provides built-in service discovery. The discovery layer uses the SWIM protocol and gossip dissemination, which scale well and offer inherent failure detection and a coherent view of cluster state and membership across the swarm of services. ScaleCube Cluster itself is a membership protocol that maintains membership among processes in a distributed system.

An open-source project focused on streamlining reactive programming of microservices-based reactive systems that scale, built by developers for developers.

ScaleCube Services provides a low-latency reactive microservices library with peer-to-peer service registry and discovery based on a gossip protocol, without a single point of failure or bottleneck.

ScaleCube gracefully addresses the cross-cutting concerns of a distributed microservices architecture.

ScaleCube Services Features:
  • Provision and interconnect microservices peers in a cluster
  • Fully Distributed with No single-point-of-failure or single-point-of-bottleneck
  • Fast - Low latency and high throughput
  • Scalable across cores, JVMs, clusters, and regions
  • Built-in Service Discovery and service routing
  • Zero configuration, automatic peer-to-peer service discovery using SWIM cluster membership protocol
  • Simple non-blocking, asynchronous programming model
  • Reactive Streams support.
    • Fire-and-Forget - send a request and do not wait for a reply
    • Request-Response - send a single request and expect a single reply
    • Request-Stream - send a single request and expect a stream of responses
    • Request-Bidirectional - send a stream of requests and expect a stream of responses
  • Built-in failure detection, fault tolerance, and elasticity
  • Routing and balancing strategies for both stateless and stateful services
  • Embeddable into existing applications
  • Natural circuit breaker via scalecube-cluster discovery and failure detection
  • Support for service instance tagging
  • Support for service discovery partitioning using a hierarchy of namespaces in multi-cluster deployments
  • Modular, flexible deployment models and topology
  • Pluggable API-gateway providers (HTTP / WebSocket / RSocket)
  • Pluggable service transports (TCP / Aeron / RSocket)
  • Pluggable encoders (JSON, SBE, Google Protocol Buffers)
  • Pluggable service security authentication and authorization providers

User Guide:

Basic Usage:

The example provisions two cluster nodes and makes a remote interaction:

  1. seed is a member node that provisions no services of its own.
  2. serviceNode is a member that joins the seed member and provisions a GreetingService instance.
  3. Finally, from the seed node, a proxy is created from the GreetingService API and a greeting request is sent.
// service definition
@Service("io.scalecube.Greetings")
public interface GreetingsService {

  @ServiceMethod("sayHello")
  Mono<Greeting> sayHello(String name);
}

// service implementation
public class GreetingServiceImpl implements GreetingsService {

  @Override
  public Mono<Greeting> sayHello(String name) {
    return Mono.just(new Greeting("Nice to meet you " + name + " and welcome to ScaleCube"));
  }
}

// 1. ScaleCube node with no members (container 1)
Microservices seed = Microservices.builder()
    .discovery("seed", ScalecubeServiceDiscovery::new)
    .transport(RSocketServiceTransport::new)
    .startAwait();

// Get the address of the seed member - used to join other members to the cluster.
final Address seedAddress = seed.discovery("seed").address();

// 2. Construct a ScaleCube node which joins the cluster hosting the GreetingService (container 2)
Microservices serviceNode = Microservices.builder()
    .discovery("seed", ep -> new ScalecubeServiceDiscovery(ep)
        .membership(cfg -> cfg.seedMembers(seedAddress)))
    .transport(RSocketServiceTransport::new)
    .services(new GreetingServiceImpl())
    .startAwait();

// 3. Create a service proxy (can be created from any node or container in the cluster),
//    execute the service, and subscribe to incoming service events.
seed.call().api(GreetingsService.class)
    .sayHello("joe")
    .subscribe(greeting -> System.out.println(greeting.message()));

// Await shutdown of all instances.
Mono.whenDelayError(seed.shutdown(), serviceNode.shutdown()).block();

Basic Service Example:

  • RequestOne: send a single request and expect a single reply
  • RequestStream: send a single request and expect a stream of responses
  • RequestBidirectional: send a stream of requests and expect a stream of responses

A service is nothing but an interface declaring which methods we wish to provision in our cluster.

@Service
public interface ExampleService {

  @ServiceMethod
  Mono<String> sayHello(String request);

  @ServiceMethod
  Flux<MyResponse> helloStream();

  @ServiceMethod
  Flux<MyResponse> helloBidirectional(Flux<MyRequest> requests);
}

API-Gateway:

Available API gateways are RSocket, HTTP, and WebSocket.

Basic API-Gateway example:

    Microservices.builder()
        .discovery(options -> options.seeds(seed.discoveryAddress()))
        .services(...) // OPTIONAL: services (if any) as part of this node.

        // configure list of gateways plugins exposing the apis
        .gateway(options -> new WebsocketGateway(options.id("ws").port(8080)))
        .gateway(options -> new HttpGateway(options.id("http").port(7070)))
        .gateway(options -> new RSocketGateway(options.id("rsws").port(9090)))

        .startAwait();

        // HINT: you can try connecting to these ports with the API sandbox to explore the API.
        // https://scalecube.github.io/api-sandbox/app/index.html

Maven

With scalecube-services you may plug-and-play alternative providers for transport, codecs, and discovery. ScaleCube uses ServiceLoader to load providers from the classpath.

You can think of ScaleCube as slf4j for microservices. Currently supported SPIs:
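A minimal sketch of the ServiceLoader mechanism described above; the `ServiceTransportProvider` interface and `SpiLookup` class are illustrative stand-ins, not real scalecube SPI types:

```java
import java.util.ServiceLoader;

// Sketch of the ServiceLoader pattern scalecube relies on: providers are
// discovered from META-INF/services entries on the classpath, so swapping a
// transport is just swapping a jar.
public class SpiLookup {

  /** Illustrative SPI interface -- not an actual scalecube type. */
  public interface ServiceTransportProvider {
    String name();
  }

  public static void main(String[] args) {
    // With a provider jar (e.g. scalecube-services-transport-rsocket) on the
    // classpath, its registered implementation would appear here; with no
    // provider jar, the loader is simply empty.
    ServiceLoader<ServiceTransportProvider> loader =
        ServiceLoader.load(ServiceTransportProvider.class);
    loader.forEach(p -> System.out.println("found transport: " + p.name()));
  }
}
```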

Transport providers:

  • scalecube-services-transport-rsocket: using rsocket to communicate with remote services.

Message codec providers:

Service discovery providers:

Binaries and dependency information for Maven can be found at http://search.maven.org.

https://mvnrepository.com/artifact/io.scalecube

To add a dependency on ScaleCube Services using Maven, use the following:

Maven Central

 <properties>
   <scalecube.version>2.x.x</scalecube.version>
 </properties>

 <!-- -------------------------------------------
   scalecube core and api:
 ------------------------------------------- -->

 <!-- scalecube apis   -->
 <dependency>
  <groupId>io.scalecube</groupId>
  <artifactId>scalecube-services-api</artifactId>
  <version>${scalecube.version}</version>
 </dependency>

 <!-- scalecube services module   -->
 <dependency>
  <groupId>io.scalecube</groupId>
  <artifactId>scalecube-services</artifactId>
  <version>${scalecube.version}</version>
 </dependency>


 <!--

     Plugins / SPIs: below is a list of providers you may choose from to construct your own configuration.
     You are welcome to build/contribute your own plugins; please consider the existing ones as examples.

  -->

 <!-- scalecube transport providers:  -->
 <dependency>
  <groupId>io.scalecube</groupId>
  <artifactId>scalecube-services-transport-rsocket</artifactId>
  <version>${scalecube.version}</version>
 </dependency>

Sponsored by:

We Hire at exberry.io

https://exberry.io/career/

website

https://scalecube.github.io/

scalecube-cluster's People

Contributors

aharonha, alexlikho, artem-v, dependabot[bot], dmytro-lazebnyi, eutkin, io-scalecube-ci, lightzebra, linux-china, matyasberry, olegdokuka, ptupitsyn, ronenhamias, rpuch, sammyvimes, sashapolo, segabriel, smakovskyi, snripa, snyk-bot


scalecube-cluster's Issues

Failure Detector Configurator

Implement the Failure Detector Configurator component. The Failure Detector Configurator is responsible for computing specific FD algorithm parameters based on the required QoS and the given network reliability parameters.

Input:

  • Parameters of failure detector quality of service (QoS) requirements. QoS parameters should be set by configuration.
  • Network reliability parameters. Network Reliability Parameters should be (for now) statically configured with some typical values.

Output:

Computed configuration parameters of the Failure Detector algorithm in use (Tping, K), based on the input parameters.

More on the math behind the FD algorithm can be found in the paper "On Scalable and Efficient Distributed Failure Detectors"; general information about QoS parameters of failure detectors is in "On the Quality of Service of Failure Detectors".
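As a rough back-of-envelope sketch, not the actual formulas from those papers: assuming independent message loss across the direct and indirect probe paths, K and Tping could be derived from QoS inputs like this (all names, margins, and the independence assumption are illustrative):

```java
// Illustrative sketch of a FD configurator: derive SWIM-style parameters from
// QoS inputs, under the simplifying assumption that message losses on the
// K+1 probe paths are independent.
public class FdConfigurator {

  /** Smallest K so that P(all K+1 probe paths lose their messages) <= eps. */
  public static int computeK(double lossProbability, double eps) {
    if (lossProbability <= 0) {
      return 0; // no observed loss -> a direct ping suffices
    }
    // Need lossProbability^(K+1) <= eps  =>  K >= ln(eps)/ln(p) - 1
    int k = (int) Math.ceil(Math.log(eps) / Math.log(lossProbability)) - 1;
    return Math.max(k, 0);
  }

  /** Ping period sized to a round trip plus a delay-variance margin. */
  public static long computeTpingMillis(
      double avgDelayMs, double delayVarianceMs2, double safetyFactor) {
    return (long) Math.ceil(safetyFactor * (2 * avgDelayMs + 3 * Math.sqrt(delayVarianceMs2)));
  }

  public static void main(String[] args) {
    // 1% loss, tolerate <= 1e-5 chance all probes of a live member are lost
    System.out.println(computeK(0.01, 1e-5));          // -> 2
    System.out.println(computeTpingMillis(50, 100, 1.5)); // -> 195
  }
}
```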

ClusterImpl's `members` and `memberAddressIndex` maps must get single-threaded access

In ClusterImpl:

  // State
  private final ConcurrentMap<String, Member> members = new ConcurrentHashMap<>();
  private final ConcurrentMap<Address, String> memberAddressIndex = new ConcurrentHashMap<>();

These two fields get updated in the method io.scalecube.cluster.ClusterImpl#onMemberEvent. They are needed for driving the following functions: members(), otherMembers(), member(String id), and member(Address address).

What's not great is keeping them in ClusterImpl and worrying about concurrent access; that's why they were defined as ConcurrentHashMaps.

The idea of this task is to make these two fields plain HashMaps and guarantee single-threaded access to them. IMHO they (plus the functions they drive) should be moved to MembershipProtocol.
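A minimal sketch of the proposed confinement, assuming a dedicated single-threaded scheduler; `MemberRegistry` and its method names are illustrative, not actual scalecube-cluster classes:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// All access goes through one single-threaded executor, so plain HashMaps are
// safe and no ConcurrentHashMap is needed. Member data is simplified to Strings.
public class MemberRegistry {

  private final ExecutorService scheduler = Executors.newSingleThreadExecutor();
  private final Map<String, String> members = new HashMap<>();      // id -> member
  private final Map<String, String> addressIndex = new HashMap<>(); // address -> id

  public void onMemberAdded(String id, String address, String member) {
    scheduler.execute(() -> {
      members.put(id, member);
      addressIndex.put(address, id);
    });
  }

  public CompletableFuture<Optional<String>> memberByAddress(String address) {
    // Reads hop onto the same thread, guaranteeing a consistent view.
    return CompletableFuture.supplyAsync(
        () -> Optional.ofNullable(addressIndex.get(address)).map(members::get),
        scheduler);
  }

  public void shutdown() {
    scheduler.shutdown();
  }
}
```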

Enhance Cluster metadata algorithm

Run code snippet:

    Cluster cluster0 = Cluster.joinAwait();
    Cluster cluster1 = Cluster.joinAwait(cluster0.address());

    cluster1
        .listenMembership()
        .filter(MembershipEvent::isUpdated)
        .subscribe(
            event -> {
              Map<String, String> metadata = cluster1.metadata(event.member());
              System.out.println(
                  "### metadata: "
                      + metadata
                      + " | "
                      + System.currentTimeMillis()
                      + " | "
                      + Thread.currentThread().getName());
            });

    cluster0.updateMetadataProperty("key", "value1").subscribe(); // 1
    cluster0.updateMetadataProperty("key", "value2").subscribe(); // 2
    cluster0.updateMetadataProperty("key", "value3").subscribe(); // 3

    Thread.currentThread().join();

It may give the following output:

### metadata: {key=value3} | 1544874450633 | sc-cluster-44033-2
### metadata: {key=value3} | 1544874450634 | sc-cluster-44033-2
### metadata: {key=value3} | 1544874450641 | sc-cluster-44033-2

Which tells us the following:
the cluster metadata algorithm made three RPC calls, and on each call it received the latest metadata value. Why the latest? Because the update on cluster0 takes effect immediately, while the incarnation update travels over the network. That's why locally at cluster0 I get value3, but three incarnation updates are in flight to the other nodes.

What's not great: extra network calls to get the metadata. Propose a solution.

Allow transport bind on port 0

When the transport starts, it scans for a free port starting at 4801.
I suggest removing this functionality and simply allowing binding on port 0.

currently the port scan feature is spamming the log:
W 0803-1346:28,394 i.s.t.TransportImpl Can't bind to address 127.0.1.1:4801, try again on different port [cause=io.netty.channel.unix.Errors$NativeIoException: bind(..) failed: Address already in use] [sc-boss-100-1]
I 0803-1346:28,401 i.s.t.TransportImpl Bound to: 127.0.1.1:4802 [sc-boss-100-2]
W 0803-1346:28,403 i.s.t.TransportImpl Can't bind to address 127.0.1.1:4801, try again on different port [cause=io.netty.channel.unix.Errors$NativeIoException: bind(..) failed: Address already in use] [sc-boss-102-1]
W 0803-1346:28,410 i.s.t.TransportImpl Can't bind to address 127.0.1.1:4802, try again on different port [cause=io.netty.channel.unix.Errors$NativeIoException: bind(..) failed: Address already in use] [sc-boss-102-2]
I 0803-1346:28,414 i.s.t.TransportImpl Bound to: 127.0.1.1:4803 [sc-boss-102-1]
W 0803-1346:28,420 i.s.t.TransportImpl Can't bind to address 127.0.1.1:4801, try again on different port [cause=io.netty.channel.unix.Errors$NativeIoException: bind(..) failed: Address already in use] [sc-boss-104-1]
W 0803-1346:28,426 i.s.t.TransportImpl Can't bind to address 127.0.1.1:4802, try again on different port [cause=io.netty.channel.unix.Errors$NativeIoException: bind(..) failed: Address already in use] [sc-boss-104-2]
W 0803-1346:28,436 i.s.t.TransportImpl Can't bind to address 127.0.1.1:4803, try again on different port [cause=io.netty.channel.unix.Errors$NativeIoException: bind(..) failed: Address already in use] [sc-boss-104-1]
I 0803-1346:28,446 i.s.t.TransportImpl Bound to: 127.0.1.1:4804 [sc-boss-104-2]
W 0803-1346:28,453 i.s.t.TransportImpl Can't bind to address 127.0.1.1:4801, try again on different port [cause=io.netty.channel.unix.Errors$NativeIoException: bind(..) failed: Address already in use] [sc-boss-106-1]
W 0803-1346:28,456 i.s.t.TransportImpl Can't bind to address 127.0.1.1:4802, try again on different port [cause=io.netty.channel.unix.Errors$NativeIoException: bind(..) failed: Address already in use] [sc-boss-106-2]
W 0803-1346:28,457 i.s.t.TransportImpl Can't bind to address 127.0.1.1:4803, try again on different port [cause=io.netty.channel.unix.Errors$NativeIoException: bind(..) failed: Address already in use] [sc-boss-106-1]
W 0803-1346:28,457 i.s.t.TransportImpl Can't bind to address 127.0.1.1:4804, try again on different port [cause=io.netty.channel.unix.Errors$NativeIoException: bind(..) failed: Address already in use] [sc-boss-106-2]
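As a plain-JDK illustration of the suggestion (the actual transport uses Netty, so this only sketches the idea): binding to port 0 lets the OS assign a free ephemeral port, removing the need for the scan loop entirely.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;

public class Bind0 {
  public static void main(String[] args) throws IOException {
    try (ServerSocket socket = new ServerSocket()) {
      // Port 0 means "any free port" -- the OS picks one, no retries needed.
      socket.bind(new InetSocketAddress("127.0.0.1", 0));
      int actualPort = socket.getLocalPort(); // the port peers should be told about
      System.out.println("Bound to 127.0.0.1:" + actualPort);
    }
  }
}
```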

Leader Election

add support for leader election

cluster.leaderElection().name("/service/name1").listen();

Link Quality Estimator

Implement Link Quality Estimator component.

This component tracks response times and loss rate of messages sent by Failure Detector Algorithm (Ack, Ping and Ping-Req) and compute network reliability parameters based on some significant amount of analyzed messages:

  • Message loss probability
  • Average message delay
  • Message delay variance

Measured metrics passed as input to Failure Detector Configurator in order to dynamically adjust FD algorithm parameters to meet QoS requirements under changed network conditions.

Note: take into account that this algorithm does not like "0" probabilities. The fact that you haven't seen any message loss doesn't mean loss is an impossible event; it just means your dataset is small. To break "0" probabilities, Laplace's Rule of Succession (see http://en.wikipedia.org/wiki/Rule_of_succession) should be used for computing the message loss probability.
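A minimal sketch of the rule applied to message loss:

```java
// Laplace's Rule of Succession: with `lost` losses observed out of `total`
// messages, estimate (lost + 1) / (total + 2) instead of lost / total, so
// zero observed losses never yields a zero probability.
public class LossEstimator {

  public static double lossProbability(long lost, long total) {
    return (lost + 1.0) / (total + 2.0);
  }

  public static void main(String[] args) {
    System.out.println(lossProbability(0, 100)); // ~0.0098, never exactly 0
    System.out.println(lossProbability(5, 100)); // ~0.0588
  }
}
```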

testNetworkPartitionThenRecovery test fails on travis

[ERROR] Failures:
[ERROR] MembershipProtocolTest.testNetworkPartitionThenRecovery:99->assertTrusted:612 Expected 1 trusted members [0.0.0.0:46437], but actual: [0.0.0.0:46535, 0.0.0.0:46437] ==> expected: <1> but was: <2>

Support batches for gossips

Enhancement for GossipProtocol.

A queue in the GossipProtocol may become uncontrollably large, and the next tick of the algorithm will then create a large GossipRequest. We need to limit the size of a GossipRequest by using some batch_size when composing it.
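A minimal sketch of the batching idea; `drainBatch` and `batchSize` are illustrative names, not the GossipProtocol API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

// Drain at most batchSize gossips per tick instead of the whole queue; the
// remainder waits for the next tick, bounding GossipRequest size.
public class GossipBatcher {

  public static <T> List<T> drainBatch(Queue<T> queue, int batchSize) {
    List<T> batch = new ArrayList<>(batchSize);
    for (int i = 0; i < batchSize; i++) {
      T gossip = queue.poll();
      if (gossip == null) {
        break; // queue exhausted before the batch filled up
      }
      batch.add(gossip);
    }
    return batch;
  }
}
```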

Dead members in GossipProtocolImpl class

Version 1.x has a bug where dead members are not removed from the GossipProtocolImpl class, because 'backpressuredrop' is used, which causes some events to be dropped. What you see in the logs are connect errors to the dead hosts. These errors never go away and keep appearing as the class keeps trying to gossip to these dead hosts.

Unstable network when partial connectivity available

Hi,

We've been experiencing continuously crashing nodes under the following condition:

  • A node is able to connect to a seed node
  • The same node cannot receive incoming connections

This will cause disconnect/dead messages in the whole network, and these messages will keep being generated, as the node with the partial connectivity will keep connecting. At some point there are so many messages being passed around, nodes start dying because of OOM errors.

New Transport implementation based on reactor-netty

POC for alternative cluster transport impl. using reactor-netty lib.
The current self-written transport could be rewritten and still serve the major cluster transport demands: multiplexing, one connection per address, protobuf, request/response correlation, etc.
Candidate libraries: reactor-netty (obviously) and RSocket (makes a lot of sense IMHO).

FailureDetector's logged period might be incorrect.

The FailureDetector component, while receiving messages, attempts to log which period of pings a message belongs to.

It uses the shared field
long period
for that purpose.
Because this field might be updated by another thread (when already in the next ping phase), the log message might be wrong and use the updated period value.
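A minimal sketch of one way to avoid the race: snapshot the period into a local variable when the ping is sent, and let the callback report the snapshot; the names here are illustrative, not the FailureDetector API:

```java
import java.util.concurrent.atomic.AtomicLong;
import java.util.function.LongSupplier;

// The shared period counter keeps advancing, but each ping captures its own
// period into an effectively-final local, so late callbacks log the right one.
public class PeriodLogging {

  private final AtomicLong period = new AtomicLong();

  /** Starts a ping and returns a callback reporting the period it belonged to. */
  public LongSupplier sendPing() {
    long periodAtSend = period.incrementAndGet(); // snapshot for this ping
    return () -> periodAtSend; // safe even if `period` advances meanwhile
  }
}
```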

Get rid of protostuff and protobuf

  • use jackson
  • remove io.scalecube.transport.MessageSchema
  • remove setters at Message
  • remove dependency on :
      <!-- Protostuff/Protobuf -->
      <dependency>
        <groupId>io.protostuff</groupId>
        <artifactId>protostuff-api</artifactId>
        <version>${protostuff.version}</version>
      </dependency>
      <dependency>
        <groupId>io.protostuff</groupId>
        <artifactId>protostuff-runtime</artifactId>
        <version>${protostuff.version}</version>
      </dependency>
      <dependency>
        <groupId>io.protostuff</groupId>
        <artifactId>protostuff-core</artifactId>
        <version>${protostuff.version}</version>
      </dependency>
      <dependency>
        <groupId>io.protostuff</groupId>
        <artifactId>protostuff-runtime-registry</artifactId>
        <version>${protostuff.version}</version>
      </dependency>
      <dependency>
        <groupId>io.protostuff</groupId>
        <artifactId>protostuff-collectionschema</artifactId>
        <version>${protostuff.version}</version>
      </dependency>
      <dependency>
        <groupId>com.google.protobuf</groupId>
        <artifactId>protobuf-java</artifactId>
        <version>${protobuf.version}</version>
      </dependency>

Membership should send SYNC message to both seed and other members at running phase

Currently the ClusterMembership class implements the membership algorithm incorrectly: it sends SYNC messages only to seed nodes, both at the initial phase and at the running phase. At the running phase it should send SYNC both to seed members (periodically, in order to recover from network partitioning) and to other members.

I think we should introduce an additional settings parameter (e.g. seedSyncRatio) which says with what probability we should send the sync to one of the seed members rather than to a random cluster member.
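A minimal sketch of the suggested seed-sync ratio; `selectSyncTarget` and its parameters are illustrative, not the ClusterMembership API:

```java
import java.util.List;
import java.util.Random;

// With probability seedSyncRatio pick a seed member as the SYNC target,
// otherwise pick a random non-seed member (falling back to seeds when there
// are no other members yet).
public class SyncTargetSelector {

  public static <T> T selectSyncTarget(
      List<T> seeds, List<T> others, double seedSyncRatio, Random rnd) {
    boolean pickSeed = others.isEmpty() || rnd.nextDouble() < seedSyncRatio;
    List<T> pool = pickSeed ? seeds : others;
    return pool.get(rnd.nextInt(pool.size()));
  }
}
```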

@artem-v @myroslavlisniak

Use Transport.requestResponse()

Use the new method io.scalecube.transport.Transport#requestResponse for calls on the caller side. Such places are: sending sync (initial and periodic), sending ping and ping-req, and fetching metadata.
FailureDetectorImpl(149, 200), MembershipProtocolImpl(218), MetadataStoreImpl(163).

Migrate to RSocket and Remove Scalecube Transport

Motivation:

The idea is to completely refactor and delete scalecube-transport, which is nowadays only used by scalecube-cluster.

Once we complete this migration and use a pluggable RSocket (Netty or Aeron), Cluster can use RSocket-Aeron as its transport layer and, as a result, drop the current custom transport layer.

The Value:

  1. You want better communication modes than the send and receive provided by ScaleCube's self-invented transport.
  2. You want to enjoy all the power of scalecube-cluster with its SWIM and gossip features, plus tags etc.
  3. We want to use widely adopted standards rather than self-invented ones.

If the answer is yes to the points above, then ScaleCube-cluster is the right answer for you. :)

Design considerations:

  1. Remove the transport module completely. We will no longer have a transport module at all; RSocket will be our abstraction.
  2. Make RSocket pluggable into the cluster so we can choose Aeron, Netty, or any other RSocket implementation - the user will bring their own RSocket(Factory?).

Remove module scalecube-utils

Move the Address classes and their tests to the transport module. Move IdGenerator to cluster (tests should be moved as well). Remove the utils module.

Cluster member has not removed on shutdown when using updateMetadata.

Hello, I am using scalecube-cluster.

Cluster member has not removed on shutdown when using updateMetadata.

The Cluster.shutdown method sends a member-removed gossip message to GossipProtocolImpl. Received gossip message processing is here.

The shutdown message contains metadata, but remoteMembers does not contain metadata.
List.remove calls Member.equals, and the two members are not equal because of the metadata.

Change this to:

.subscribe(remoteMember -> remoteMembers.removeIf(member -> remoteMember.id().equals(member.id())));

A test case (works in Eclipse):
https://github.com/umishu/scalecube-test
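A minimal stand-alone reproduction of the reported behavior (this Member class is a simplified stand-in, not the real io.scalecube.cluster.Member):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Objects;

// equals() includes metadata, so List.remove misses a member stored with
// different metadata, while removeIf on the id works as proposed in the fix.
public class RemoveById {

  static final class Member {
    final String id;
    final String metadata;

    Member(String id, String metadata) {
      this.id = id;
      this.metadata = metadata;
    }

    @Override
    public boolean equals(Object o) {
      if (!(o instanceof Member)) {
        return false;
      }
      Member m = (Member) o;
      return id.equals(m.id) && Objects.equals(metadata, m.metadata);
    }

    @Override
    public int hashCode() {
      return Objects.hash(id, metadata);
    }
  }

  public static void main(String[] args) {
    List<Member> remoteMembers = new ArrayList<>();
    remoteMembers.add(new Member("a", null)); // stored without metadata

    Member leaving = new Member("a", "meta"); // shutdown gossip carries metadata
    remoteMembers.remove(leaving);            // equals() fails -> nothing removed
    System.out.println(remoteMembers.size()); // 1

    remoteMembers.removeIf(m -> m.id.equals(leaving.id)); // proposed fix
    System.out.println(remoteMembers.size()); // 0
  }
}
```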

Investigate reactor.core.Exceptions$ErrorCallbackNotImplemented at 2.1.6

On 2.1.6, during one of the regular test runs, we noticed the following:

E 1108-1439:40,870 i.s.c.g.GossipProtocolImpl Exception on sending GossipReq[15] exception: io.netty.channel.AbstractChannel$AnnotatedConnectException: syscall:getsockopt(..) failed: Connection refused: /127.0.1.1:45297 [sc-cluster-36523-37]reactor.core.Exceptions$ErrorCallbackNotImplemented: io.netty.channel.AbstractChannel$AnnotatedConnectException: syscall:getsockopt(..) failed: Connection refused: /127.0.1.1:45297
Caused by: io.netty.channel.AbstractChannel$AnnotatedConnectException: syscall:getsockopt(..) failed: Connection refused: /127.0.1.1:45297
	at io.netty.channel.unix.Socket.finishConnect(..)(Unknown Source) ~[netty-transport-native-unix-common-4.1.29.Final.jar:4.1.29.Final]
Caused by: io.netty.channel.unix.Errors$NativeConnectException: syscall:getsockopt(..) failed: Connection refused
	at io.netty.channel.unix.Socket.finishConnect(..)(Unknown Source) ~[netty-transport-native-unix-common-4.1.29.Final.jar:4.1.29.Final]

Basically, the error flew up the stack to the try-catch around sending the gossip at .subscribe(), despite the fact that .doOnError() was declared inside transport.send(). Is having doOnError not enough to prevent ErrorCallbackNotImplemented?

reactor.core.Exceptions$ReactorRejectedExecutionException: Scheduler unavailable

We were just about to run a c5.2xlarge perf test with 1 client, 1 gateway, and 1 service. The test didn't start and failed with the following in the logs.

On the client:

I 2018-10-27T20:46:08,358 i.s.b.BenchmarkState Benchmarks settings: BenchmarkSettings{numberThreads=8, concurrency=16, minInterval=PT0.1S, executionTaskDuration=PT6M, executionTaskInterval=PT8S, numOfIterations=9223372036854775807, reporterInterval=PT3S, csvReporterDirectory=reports/benchmarks/i.s.g.b.r.r.RemoteInfiniteStreamBenchmark/2018-10-27-20-46-08, taskName='i.s.g.b.r.r.RemoteInfiniteStreamBenchmark', durationUnit=MILLISECONDS, rateUnit=SECONDS, warmUpDuration=PT1M, rampUpDuration=PT8S, rampUpInterval=PT1S, consoleReporterEnabled=true, injectors=8, messageRate=1, injectorsPerRampUpInterval=1, messagesPerExecutionInterval=1, options={rateLimit=32, gatewayHost=10.200.2.40}} [main]
I 2018-10-27T20:46:09,851 i.s.g.c.r.RSocketClientTransport Connected successfully on 10.200.2.40:9090 [worker-client-sdk-client-epoll-9]
I 2018-10-27T20:46:09,851 i.s.g.c.r.RSocketClientTransport Connected successfully on 10.200.2.40:9090 [worker-client-sdk-client-epoll-10]
E 2018-10-27T20:46:09,970 i.s.b.BenchmarkState Exception occured on setUp at rampUpIteration: 0, cause: io.rsocket.exceptions.ApplicationErrorException: No reachable member with such service: /benchmarks/one, task won't start [worker-client-sdk-client-epoll-9]
io.rsocket.exceptions.ApplicationErrorException: No reachable member with such service: /benchmarks/one
	at io.rsocket.exceptions.Exceptions.from(Exceptions.java:53) ~[rsocket-core-0.11.8.jar:?]
	at io.rsocket.RSocketClient.handleFrame(RSocketClient.java:456) ~[rsocket-core-0.11.8.jar:?]
	at io.rsocket.RSocketClient.handleIncomingFrames(RSocketClient.java:419) ~[rsocket-core-0.11.8.jar:?]
	at reactor.core.publisher.LambdaSubscriber.onNext(LambdaSubscriber.java:130) ~[reactor-core-3.1.8.RELEASE.jar:3.1.8.RELEASE]
	at reactor.core.publisher.MonoFlatMapMany$FlatMapManyInner.onNext(MonoFlatMapMany.java:238) ~[reactor-core-3.1.8.RELEASE.jar:3.1.8.RELEASE]
	at reactor.core.publisher.FluxGroupBy$UnicastGroupedFlux.drainRegular(FluxGroupBy.java:554) ~[reactor-core-3.1.8.RELEASE.jar:3.1.8.RELEASE]
	at reactor.core.publisher.FluxGroupBy$UnicastGroupedFlux.drain(FluxGroupBy.java:630) ~[reactor-core-3.1.8.RELEASE.jar:3.1.8.RELEASE]
	at reactor.core.publisher.FluxGroupBy$UnicastGroupedFlux.subscribe(FluxGroupBy.java:696) ~[reactor-core-3.1.8.RELEASE.jar:3.1.8.RELEASE]
	at reactor.core.publisher.Flux.subscribe(Flux.java:6877) ~[reactor-core-3.1.8.RELEASE.jar:3.1.8.RELEASE]
	at reactor.core.publisher.MonoFlatMapMany$FlatMapManyMain.onNext(MonoFlatMapMany.java:184) ~[reactor-core-3.1.8.RELEASE.jar:3.1.8.RELEASE]
	at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1083) ~[reactor-core-3.1.8.RELEASE.jar:3.1.8.RELEASE]
	at reactor.core.publisher.MonoProcessor.onNext(MonoProcessor.java:389) ~[reactor-core-3.1.8.RELEASE.jar:3.1.8.RELEASE]
	at io.rsocket.internal.ClientServerInputMultiplexer.lambda$new$1(ClientServerInputMultiplexer.java:98) ~[rsocket-core-0.11.8.jar:?]
	at reactor.core.publisher.LambdaSubscriber.onNext(LambdaSubscriber.java:130) [reactor-core-3.1.8.RELEASE.jar:3.1.8.RELEASE]
	at reactor.core.publisher.FluxGroupBy$GroupByMain.drainLoop(FluxGroupBy.java:380) [reactor-core-3.1.8.RELEASE.jar:3.1.8.RELEASE]
	at reactor.core.publisher.FluxGroupBy$GroupByMain.drain(FluxGroupBy.java:316) [reactor-core-3.1.8.RELEASE.jar:3.1.8.RELEASE]
	at reactor.core.publisher.FluxGroupBy$GroupByMain.onNext(FluxGroupBy.java:201) [reactor-core-3.1.8.RELEASE.jar:3.1.8.RELEASE]
	at reactor.core.publisher.FluxMap$MapSubscriber.onNext(FluxMap.java:108) [reactor-core-3.1.8.RELEASE.jar:3.1.8.RELEASE]
	at reactor.core.publisher.FluxMap$MapSubscriber.onNext(FluxMap.java:108) [reactor-core-3.1.8.RELEASE.jar:3.1.8.RELEASE]
	at reactor.netty.channel.FluxReceive.drainReceiver(FluxReceive.java:211) [reactor-netty-0.8.1.RELEASE.jar:0.8.1.RELEASE]
	at reactor.netty.channel.FluxReceive.onInboundNext(FluxReceive.java:327) [reactor-netty-0.8.1.RELEASE.jar:0.8.1.RELEASE]
	at reactor.netty.channel.ChannelOperations.onInboundNext(ChannelOperations.java:310) [reactor-netty-0.8.1.RELEASE.jar:0.8.1.RELEASE]
	at reactor.netty.http.client.HttpClientOperations.onInboundNext(HttpClientOperations.java:537) [reactor-netty-0.8.1.RELEASE.jar:0.8.1.RELEASE]
	at reactor.netty.http.client.WebsocketClientOperations.onInboundNext(WebsocketClientOperations.java:141) [reactor-netty-0.8.1.RELEASE.jar:0.8.1.RELEASE]
	at reactor.netty.channel.ChannelOperationsHandler.channelRead(ChannelOperationsHandler.java:141) [reactor-netty-0.8.1.RELEASE.jar:0.8.1.RELEASE]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310) [netty-codec-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284) [netty-codec-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1434) [netty-transport-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:965) [netty-transport-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:808) [netty-transport-native-epoll-4.1.29.Final-linux-x86_64.jar:4.1.29.Final]
	at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:410) [netty-transport-native-epoll-4.1.29.Final-linux-x86_64.jar:4.1.29.Final]
	at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:310) [netty-transport-native-epoll-4.1.29.Final-linux-x86_64.jar:4.1.29.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:884) [netty-common-4.1.29.Final.jar:4.1.29.Final]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_181]
E 2018-10-27T20:46:09,970 i.s.b.BenchmarkState Exception occured on setUp at rampUpIteration: 1, cause: io.rsocket.exceptions.ApplicationErrorException: No reachable member with such service: /benchmarks/one, task won't start [worker-client-sdk-client-epoll-10]
io.rsocket.exceptions.ApplicationErrorException: No reachable member with such service: /benchmarks/one
	at io.rsocket.exceptions.Exceptions.from(Exceptions.java:53) ~[rsocket-core-0.11.8.jar:?]
	at io.rsocket.RSocketClient.handleFrame(RSocketClient.java:456) ~[rsocket-core-0.11.8.jar:?]
	at io.rsocket.RSocketClient.handleIncomingFrames(RSocketClient.java:419) ~[rsocket-core-0.11.8.jar:?]
	at reactor.core.publisher.LambdaSubscriber.onNext(LambdaSubscriber.java:130) ~[reactor-core-3.1.8.RELEASE.jar:3.1.8.RELEASE]
	at reactor.core.publisher.MonoFlatMapMany$FlatMapManyInner.onNext(MonoFlatMapMany.java:238) ~[reactor-core-3.1.8.RELEASE.jar:3.1.8.RELEASE]
	at reactor.core.publisher.FluxGroupBy$UnicastGroupedFlux.drainRegular(FluxGroupBy.java:554) ~[reactor-core-3.1.8.RELEASE.jar:3.1.8.RELEASE]
	at reactor.core.publisher.FluxGroupBy$UnicastGroupedFlux.drain(FluxGroupBy.java:630) ~[reactor-core-3.1.8.RELEASE.jar:3.1.8.RELEASE]
	at reactor.core.publisher.FluxGroupBy$UnicastGroupedFlux.subscribe(FluxGroupBy.java:696) ~[reactor-core-3.1.8.RELEASE.jar:3.1.8.RELEASE]
	at reactor.core.publisher.Flux.subscribe(Flux.java:6877) ~[reactor-core-3.1.8.RELEASE.jar:3.1.8.RELEASE]
	at reactor.core.publisher.MonoFlatMapMany$FlatMapManyMain.onNext(MonoFlatMapMany.java:184) ~[reactor-core-3.1.8.RELEASE.jar:3.1.8.RELEASE]
	at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1083) ~[reactor-core-3.1.8.RELEASE.jar:3.1.8.RELEASE]
	at reactor.core.publisher.MonoProcessor.onNext(MonoProcessor.java:389) ~[reactor-core-3.1.8.RELEASE.jar:3.1.8.RELEASE]
	at io.rsocket.internal.ClientServerInputMultiplexer.lambda$new$1(ClientServerInputMultiplexer.java:98) ~[rsocket-core-0.11.8.jar:?]
	at reactor.core.publisher.LambdaSubscriber.onNext(LambdaSubscriber.java:130) [reactor-core-3.1.8.RELEASE.jar:3.1.8.RELEASE]
	at reactor.core.publisher.FluxGroupBy$GroupByMain.drainLoop(FluxGroupBy.java:380) [reactor-core-3.1.8.RELEASE.jar:3.1.8.RELEASE]
	at reactor.core.publisher.FluxGroupBy$GroupByMain.drain(FluxGroupBy.java:316) [reactor-core-3.1.8.RELEASE.jar:3.1.8.RELEASE]
	at reactor.core.publisher.FluxGroupBy$GroupByMain.onNext(FluxGroupBy.java:201) [reactor-core-3.1.8.RELEASE.jar:3.1.8.RELEASE]
	at reactor.core.publisher.FluxMap$MapSubscriber.onNext(FluxMap.java:108) [reactor-core-3.1.8.RELEASE.jar:3.1.8.RELEASE]
	at reactor.core.publisher.FluxMap$MapSubscriber.onNext(FluxMap.java:108) [reactor-core-3.1.8.RELEASE.jar:3.1.8.RELEASE]
	at reactor.netty.channel.FluxReceive.drainReceiver(FluxReceive.java:211) [reactor-netty-0.8.1.RELEASE.jar:0.8.1.RELEASE]
	at reactor.netty.channel.FluxReceive.onInboundNext(FluxReceive.java:327) [reactor-netty-0.8.1.RELEASE.jar:0.8.1.RELEASE]
	at reactor.netty.channel.ChannelOperations.onInboundNext(ChannelOperations.java:310) [reactor-netty-0.8.1.RELEASE.jar:0.8.1.RELEASE]
	at reactor.netty.http.client.HttpClientOperations.onInboundNext(HttpClientOperations.java:537) [reactor-netty-0.8.1.RELEASE.jar:0.8.1.RELEASE]
	at reactor.netty.http.client.WebsocketClientOperations.onInboundNext(WebsocketClientOperations.java:141) [reactor-netty-0.8.1.RELEASE.jar:0.8.1.RELEASE]
	at reactor.netty.channel.ChannelOperationsHandler.channelRead(ChannelOperationsHandler.java:141) [reactor-netty-0.8.1.RELEASE.jar:0.8.1.RELEASE]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310) [netty-codec-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284) [netty-codec-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1434) [netty-transport-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:965) [netty-transport-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:808) [netty-transport-native-epoll-4.1.29.Final-linux-x86_64.jar:4.1.29.Final]
	at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:410) [netty-transport-native-epoll-4.1.29.Final-linux-x86_64.jar:4.1.29.Final]
	at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:310) [netty-transport-native-epoll-4.1.29.Final-linux-x86_64.jar:4.1.29.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:884) [netty-common-4.1.29.Final.jar:4.1.29.Final]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_181]
I 2018-10-27T20:46:10,739 i.s.g.c.r.RSocketClientTransport Connected successfully on 10.200.2.40:9090 [worker-client-sdk-client-epoll-11]
E 2018-10-27T20:46:10,746 i.s.b.BenchmarkState Exception occured on setUp at rampUpIteration: 2, cause: io.rsocket.exceptions.ApplicationErrorException: No reachable member with such service: /benchmarks/one, task won't start [worker-client-sdk-client-epoll-11]
I 2018-10-27T20:46:11,739 i.s.g.c.r.RSocketClientTransport Connected successfully on 10.200.2.40:9090 [worker-client-sdk-client-epoll-12]
E 2018-10-27T20:46:11,747 i.s.b.BenchmarkState Exception occured on setUp at rampUpIteration: 3, cause: io.rsocket.exceptions.ApplicationErrorException: No reachable member with such service: /benchmarks/one, task won't start [worker-client-sdk-client-epoll-12]
I 2018-10-27T20:46:12,738 i.s.g.c.r.RSocketClientTransport Connected successfully on 10.200.2.40:9090 [worker-client-sdk-client-epoll-13]
E 2018-10-27T20:46:12,746 i.s.b.BenchmarkState Exception occured on setUp at rampUpIteration: 4, cause: io.rsocket.exceptions.ApplicationErrorException: No reachable member with such service: /benchmarks/one, task won't start [worker-client-sdk-client-epoll-13]
I 2018-10-27T20:46:13,739 i.s.g.c.r.RSocketClientTransport Connected successfully on 10.200.2.40:9090 [worker-client-sdk-client-epoll-14]
E 2018-10-27T20:46:13,750 i.s.b.BenchmarkState Exception occured on setUp at rampUpIteration: 5, cause: io.rsocket.exceptions.ApplicationErrorException: No reachable member with such service: /benchmarks/one, task won't start [worker-client-sdk-client-epoll-14]
I 2018-10-27T20:46:14,737 i.s.g.c.r.RSocketClientTransport Connected successfully on 10.200.2.40:9090 [worker-client-sdk-client-epoll-15]
E 2018-10-27T20:46:14,746 i.s.b.BenchmarkState Exception occured on setUp at rampUpIteration: 6, cause: io.rsocket.exceptions.ApplicationErrorException: No reachable member with such service: /benchmarks/one, task won't start [worker-client-sdk-client-epoll-15]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310) [netty-codec-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284) [netty-codec-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1434) [netty-transport-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:965) [netty-transport-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:808) [netty-transport-native-epoll-4.1.29.Final-linux-x86_64.jar:4.1.29.Final]
	at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:410) [netty-transport-native-epoll-4.1.29.Final-linux-x86_64.jar:4.1.29.Final]
	at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:310) [netty-transport-native-epoll-4.1.29.Final-linux-x86_64.jar:4.1.29.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:884) [netty-common-4.1.29.Final.jar:4.1.29.Final]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_181]
I 2018-10-27T20:46:15,739 i.s.g.c.r.RSocketClientTransport Connected successfully on 10.200.2.40:9090 [worker-client-sdk-client-epoll-16]
E 2018-10-27T20:46:15,747 i.s.b.BenchmarkState Exception occured on setUp at rampUpIteration: 7, cause: io.rsocket.exceptions.ApplicationErrorException: No reachable member with such service: /benchmarks/one, task won't start [worker-client-sdk-client-epoll-16]
io.rsocket.exceptions.ApplicationErrorException: No reachable member with such service: /benchmarks/one
	at io.rsocket.exceptions.Exceptions.from(Exceptions.java:53) ~[rsocket-core-0.11.8.jar:?]
	at io.rsocket.RSocketClient.handleFrame(RSocketClient.java:456) ~[rsocket-core-0.11.8.jar:?]
	at io.rsocket.RSocketClient.handleIncomingFrames(RSocketClient.java:419) ~[rsocket-core-0.11.8.jar:?]
	at reactor.core.publisher.LambdaSubscriber.onNext(LambdaSubscriber.java:130) ~[reactor-core-3.1.8.RELEASE.jar:3.1.8.RELEASE]
	at reactor.core.publisher.MonoFlatMapMany$FlatMapManyInner.onNext(MonoFlatMapMany.java:238) ~[reactor-core-3.1.8.RELEASE.jar:3.1.8.RELEASE]
	at reactor.core.publisher.FluxGroupBy$UnicastGroupedFlux.drainRegular(FluxGroupBy.java:554) ~[reactor-core-3.1.8.RELEASE.jar:3.1.8.RELEASE]
	at reactor.core.publisher.FluxGroupBy$UnicastGroupedFlux.drain(FluxGroupBy.java:630) ~[reactor-core-3.1.8.RELEASE.jar:3.1.8.RELEASE]
	at reactor.core.publisher.FluxGroupBy$UnicastGroupedFlux.subscribe(FluxGroupBy.java:696) ~[reactor-core-3.1.8.RELEASE.jar:3.1.8.RELEASE]
	at reactor.core.publisher.Flux.subscribe(Flux.java:6877) ~[reactor-core-3.1.8.RELEASE.jar:3.1.8.RELEASE]
	at reactor.core.publisher.MonoFlatMapMany$FlatMapManyMain.onNext(MonoFlatMapMany.java:184) ~[reactor-core-3.1.8.RELEASE.jar:3.1.8.RELEASE]
	at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1083) ~[reactor-core-3.1.8.RELEASE.jar:3.1.8.RELEASE]
	at reactor.core.publisher.MonoProcessor.onNext(MonoProcessor.java:389) ~[reactor-core-3.1.8.RELEASE.jar:3.1.8.RELEASE]
	at io.rsocket.internal.ClientServerInputMultiplexer.lambda$new$1(ClientServerInputMultiplexer.java:98) ~[rsocket-core-0.11.8.jar:?]
	at reactor.core.publisher.LambdaSubscriber.onNext(LambdaSubscriber.java:130) [reactor-core-3.1.8.RELEASE.jar:3.1.8.RELEASE]
	at reactor.core.publisher.FluxGroupBy$GroupByMain.drainLoop(FluxGroupBy.java:380) [reactor-core-3.1.8.RELEASE.jar:3.1.8.RELEASE]
	at reactor.core.publisher.FluxGroupBy$GroupByMain.drain(FluxGroupBy.java:316) [reactor-core-3.1.8.RELEASE.jar:3.1.8.RELEASE]
	at reactor.core.publisher.FluxGroupBy$GroupByMain.onNext(FluxGroupBy.java:201) [reactor-core-3.1.8.RELEASE.jar:3.1.8.RELEASE]
	at reactor.core.publisher.FluxMap$MapSubscriber.onNext(FluxMap.java:108) [reactor-core-3.1.8.RELEASE.jar:3.1.8.RELEASE]
	at reactor.core.publisher.FluxMap$MapSubscriber.onNext(FluxMap.java:108) [reactor-core-3.1.8.RELEASE.jar:3.1.8.RELEASE]
	at reactor.netty.channel.FluxReceive.drainReceiver(FluxReceive.java:211) [reactor-netty-0.8.1.RELEASE.jar:0.8.1.RELEASE]
	at reactor.netty.channel.FluxReceive.onInboundNext(FluxReceive.java:327) [reactor-netty-0.8.1.RELEASE.jar:0.8.1.RELEASE]
	at reactor.netty.channel.ChannelOperations.onInboundNext(ChannelOperations.java:310) [reactor-netty-0.8.1.RELEASE.jar:0.8.1.RELEASE]
	at reactor.netty.http.client.HttpClientOperations.onInboundNext(HttpClientOperations.java:537) [reactor-netty-0.8.1.RELEASE.jar:0.8.1.RELEASE]
	at reactor.netty.http.client.WebsocketClientOperations.onInboundNext(WebsocketClientOperations.java:141) [reactor-netty-0.8.1.RELEASE.jar:0.8.1.RELEASE]
	at reactor.netty.channel.ChannelOperationsHandler.channelRead(ChannelOperationsHandler.java:141) [reactor-netty-0.8.1.RELEASE.jar:0.8.1.RELEASE]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310) [netty-codec-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284) [netty-codec-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1434) [netty-transport-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:965) [netty-transport-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:808) [netty-transport-native-epoll-4.1.29.Final-linux-x86_64.jar:4.1.29.Final]
	at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:410) [netty-transport-native-epoll-4.1.29.Final-linux-x86_64.jar:4.1.29.Final]
	at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:310) [netty-transport-native-epoll-4.1.29.Final-linux-x86_64.jar:4.1.29.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:884) [netty-common-4.1.29.Final.jar:4.1.29.Final]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_181]
I 2018-10-27T20:46:16,761 i.s.g.c.r.RSocketClientTransport Connection closed on 10.200.2.40:9090 [worker-client-sdk-client-epoll-14]
I 2018-10-27T20:46:16,761 i.s.g.c.r.RSocketClientTransport Connection closed on 10.200.2.40:9090 [worker-client-sdk-client-epoll-11]
I 2018-10-27T20:46:16,761 i.s.g.c.r.RSocketClientTransport Connection closed on 10.200.2.40:9090 [worker-client-sdk-client-epoll-15]
I 2018-10-27T20:46:16,761 i.s.g.c.r.RSocketClientTransport Connection closed on 10.200.2.40:9090 [worker-client-sdk-client-epoll-13]
I 2018-10-27T20:46:16,761 i.s.g.c.r.RSocketClientTransport Connection closed on 10.200.2.40:9090 [worker-client-sdk-client-epoll-9]
I 2018-10-27T20:46:16,761 i.s.g.c.r.RSocketClientTransport Connection closed on 10.200.2.40:9090 [worker-client-sdk-client-epoll-12]
I 2018-10-27T20:46:16,761 i.s.g.c.r.RSocketClientTransport Connection closed on 10.200.2.40:9090 [worker-client-sdk-client-epoll-10]
I 2018-10-27T20:46:16,762 i.s.g.c.r.RSocketClientTransport Connection closed on 10.200.2.40:9090 [worker-client-sdk-client-epoll-16]

On the gateway:

I 2018-10-27T20:44:32,541 i.s.c.a.Slf4JConfigEventListener Config property changed: [
CA_CERTIFICATES_JAVA_VERSION=null->***,	source=null->env_var,	origin=null
DEFAULT_JAVA_OPTS=null->***,	source=null->env_var,	origin=null
DEFAULT_JMX_OPTS=null->***,	source=null->env_var,	origin=null
DEFAULT_OOM_OPTS=null->***,	source=null->env_var,	origin=null
HOME=null->***,	source=null->env_var,	origin=null
HOSTNAME=null->***,	source=null->env_var,	origin=null
JAVA_DEBIAN_VERSION=null->***,	source=null->env_var,	origin=null
JAVA_HOME=null->***,	source=null->env_var,	origin=null
JAVA_OPTS=null->***,	source=null->env_var,	origin=null
JAVA_VERSION=null->***,	source=null->env_var,	origin=null
LANG=null->***,	source=null->env_var,	origin=null
PATH=null->***,	source=null->env_var,	origin=null
PROGRAM_ARGS=null->***,	source=null->env_var,	origin=null
PWD=null->***,	source=null->env_var,	origin=null
YOURKIT_AGENT=null->***,	source=null->env_var,	origin=null
awt.toolkit=null->***,	source=null->sys_prop,	origin=null
com.sun.management.jmxremote.authenticate=null->***,	source=null->sys_prop,	origin=null
com.sun.management.jmxremote.port=null->***,	source=null->sys_prop,	origin=null
com.sun.management.jmxremote.rmi.port=null->***,	source=null->sys_prop,	origin=null
com.sun.management.jmxremote.ssl=null->***,	source=null->sys_prop,	origin=null
file.encoding=null->***,	source=null->sys_prop,	origin=null
file.encoding.pkg=null->***,	source=null->sys_prop,	origin=null
file.separator=null->***,	source=null->sys_prop,	origin=null
io.scalecube.gateway.discoveryPort=null->***,	source=null->cp,	origin=null->config-gateway.properties
io.scalecube.gateway.seeds=null->***,	source=null->sys_prop,	origin=null
io.scalecube.gateway.servicePort=null->***,	source=null->cp,	origin=null->config-gateway.properties
java.awt.graphicsenv=null->***,	source=null->sys_prop,	origin=null
java.awt.printerjob=null->***,	source=null->sys_prop,	origin=null
java.class.path=null->***,	source=null->sys_prop,	origin=null
java.class.version=null->***,	source=null->sys_prop,	origin=null
java.endorsed.dirs=null->***,	source=null->sys_prop,	origin=null
java.ext.dirs=null->***,	source=null->sys_prop,	origin=null
java.home=null->***,	source=null->sys_prop,	origin=null
java.io.tmpdir=null->***,	source=null->sys_prop,	origin=null
java.library.path=null->***,	source=null->sys_prop,	origin=null
java.rmi.server.hostname=null->***,	source=null->sys_prop,	origin=null
java.rmi.server.randomIDs=null->***,	source=null->sys_prop,	origin=null
java.runtime.name=null->***,	source=null->sys_prop,	origin=null
java.runtime.version=null->***,	source=null->sys_prop,	origin=null
java.specification.name=null->***,	source=null->sys_prop,	origin=null
java.specification.vendor=null->***,	source=null->sys_prop,	origin=null
java.specification.version=null->***,	source=null->sys_prop,	origin=null
java.vendor=null->***,	source=null->sys_prop,	origin=null
java.vendor.url=null->***,	source=null->sys_prop,	origin=null
java.vendor.url.bug=null->***,	source=null->sys_prop,	origin=null
java.version=null->***,	source=null->sys_prop,	origin=null
java.vm.info=null->***,	source=null->sys_prop,	origin=null
java.vm.name=null->***,	source=null->sys_prop,	origin=null
java.vm.specification.name=null->***,	source=null->sys_prop,	origin=null
java.vm.specification.vendor=null->***,	source=null->sys_prop,	origin=null
java.vm.specification.version=null->***,	source=null->sys_prop,	origin=null
java.vm.vendor=null->***,	source=null->sys_prop,	origin=null
java.vm.version=null->***,	source=null->sys_prop,	origin=null
line.separator=null->***,	source=null->sys_prop,	origin=null
log4j.configurationFile=null->***,	source=null->sys_prop,	origin=null
os.arch=null->***,	source=null->sys_prop,	origin=null
os.name=null->***,	source=null->sys_prop,	origin=null
os.version=null->***,	source=null->sys_prop,	origin=null
path.separator=null->***,	source=null->sys_prop,	origin=null
sun.arch.data.model=null->***,	source=null->sys_prop,	origin=null
sun.boot.class.path=null->***,	source=null->sys_prop,	origin=null
sun.boot.library.path=null->***,	source=null->sys_prop,	origin=null
sun.cpu.endian=null->***,	source=null->sys_prop,	origin=null
sun.cpu.isalist=null->***,	source=null->sys_prop,	origin=null
sun.io.unicode.encoding=null->***,	source=null->sys_prop,	origin=null
sun.java.command=null->***,	source=null->sys_prop,	origin=null
sun.java.launcher=null->***,	source=null->sys_prop,	origin=null
sun.jnu.encoding=null->***,	source=null->sys_prop,	origin=null
sun.management.compiler=null->***,	source=null->sys_prop,	origin=null
sun.os.patch.level=null->***,	source=null->sys_prop,	origin=null
sun.rmi.dgc.client.gcInterval=null->***,	source=null->sys_prop,	origin=null
sun.rmi.dgc.server.gcInterval=null->***,	source=null->sys_prop,	origin=null
user.dir=null->***,	source=null->sys_prop,	origin=null
user.home=null->***,	source=null->sys_prop,	origin=null
user.language=null->***,	source=null->sys_prop,	origin=null
user.name=null->***,	source=null->sys_prop,	origin=null
user.timezone=null->***,	source=null->sys_prop,	origin=null
] [main]
I 2018-10-27T20:44:32,557 i.s.c.ConfigRegistryImpl Registered JMX MBean: javax.management.MBeanInfo[description=Information on the management interface of the MBean, attributes=[javax.management.MBeanAttributeInfo[description=Attribute exposed for management, name=Properties, type=java.util.Collection, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Attribute exposed for management, name=Settings, type=java.util.Collection, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Attribute exposed for management, name=Sources, type=java.util.Collection, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Attribute exposed for management, name=Events, type=java.util.Collection, read-only, descriptor={}]], constructors=[javax.management.MBeanConstructorInfo[description=Public constructor of the MBean, name=io.scalecube.config.jmx.JmxConfigRegistry, signature=[javax.management.MBeanParameterInfo[description=, name=p1, type=io.scalecube.config.ConfigRegistry, descriptor={}]], descriptor={}]], operations=[], notifications=[], descriptor={immutableInfo=true, interfaceClassName=io.scalecube.config.jmx.JmxConfigRegistryMBean, mxbean=false}] [main]
I 2018-10-27T20:44:32,577 i.s.g.GatewayRunner ####################################################################### [main]
I 2018-10-27T20:44:32,577 i.s.g.GatewayRunner Starting Gateway on Config{servicePort=5801, discoveryPort=4801, seeds=[10.200.2.40:4801], memberHost=null, memberPort=null} [main]
I 2018-10-27T20:44:32,577 i.s.g.GatewayRunner ####################################################################### [main]
I 2018-10-27T20:44:33,819 i.s.s.d.a.ServiceDiscovery Start scalecube service discovery with config: ClusterConfig{seedMembers=[10.200.2.40:4801], metadata={}, syncInterval=30000, syncTimeout=3000, suspicionMult=5, syncGroup='default', pingInterval=1000, pingTimeout=500, pingReqMembers=3, gossipInterval=200, gossipFanout=3, gossipRepeatMult=3, transportConfig=TransportConfig{listenAddress=null, listenInterface=null, preferIPv6=false, port=4801, connectTimeout=3000, useNetworkEmulator=false, enableEpoll=true, bossThreads=2, workerThreads=0}, memberHost=null, memberPort=null} [rsocket-boss-3-1]
I 2018-10-27T20:44:33,845 i.s.t.BootstrapFactory Use epoll transport [rsocket-boss-3-1]
I 2018-10-27T20:44:33,864 i.s.t.TransportImpl Bound to: 10.200.2.40:4801 [sc-boss-4-1]
I 2018-10-27T20:44:33,936 i.s.g.h.HttpGateway Starting gateway with GatewayConfig{name='http', gatewayClass=io.scalecube.gateway.http.HttpGateway, options={}, port=8080, workerThreadPool=null} [sc-boss-4-1]
I 2018-10-27T20:44:33,974 i.s.g.h.HttpGateway Gateway has been started successfully on /0:0:0:0:0:0:0:0%0:8080 [sc-boss-4-1]
I 2018-10-27T20:44:33,978 i.s.g.w.WebsocketGateway Starting gateway with GatewayConfig{name='ws', gatewayClass=io.scalecube.gateway.websocket.WebsocketGateway, options={}, port=7070, workerThreadPool=null} [sc-boss-4-1]
I 2018-10-27T20:44:33,984 i.s.g.w.WebsocketGateway Gateway has been started successfully on /0:0:0:0:0:0:0:0%0:7070 [sc-boss-4-1]
I 2018-10-27T20:44:33,985 i.s.g.r.w.RSocketWebsocketGateway Starting gateway with GatewayConfig{name='rsws', gatewayClass=io.scalecube.gateway.rsocket.websocket.RSocketWebsocketGateway, options={}, port=9090, workerThreadPool=null} [sc-boss-4-1]
I 2018-10-27T20:44:33,992 i.s.g.r.w.RSocketWebsocketGateway Gateway has been started successfully on /0:0:0:0:0:0:0:0%0:9090 [sc-boss-4-1]
I 2018-10-27T20:44:53,007 i.s.s.d.a.ServiceDiscovery Service Reference was ADDED since new Member has joined the cluster [email protected]:4802{{"id":"6dcdd1be-84c8-45d2-94c8-3450ea4bdb2c","host":"10.200.2.176","port":5802,"contentTypes":["application/json"],"tags":{},"serviceRegistrations":[{"namespace":"benchmarks","tags":{},"methods":[{"action":"failure","tags":{},"communicationMode":"REQUEST_RESPONSE"},{"action":"infiniteStream","tags":{},"communicationMode":"REQUEST_STREAM"},{"action":"one","tags":{},"communicationMode":"REQUEST_RESPONSE"}]},{"namespace":"greeting","tags":{},"methods":[{"action":"pojo/one","tags":{},"communicationMode":"REQUEST_RESPONSE"},{"action":"failing/one","tags":{},"communicationMode":"REQUEST_RESPONSE"},{"action":"many","tags":{},"communicationMode":"REQUEST_STREAM"},{"action":"never/one","tags":{},"communicationMode":"REQUEST_RESPONSE"},{"action":"pojo/many","tags":{},"communicationMode":"REQUEST_STREAM"},{"action":"delay/many","tags":{},"communicationMode":"REQUEST_STREAM"},{"action":"manyStream","tags":{},"communicationMode":"REQUEST_STREAM"},{"action":"failing/many","tags":{},"communicationMode":"REQUEST_STREAM"},{"action":"empty/one","tags":{},"communicationMode":"REQUEST_RESPONSE"},{"action":"empty/many","tags":{},"communicationMode":"REQUEST_STREAM"},{"action":"delay/one","tags":{},"communicationMode":"REQUEST_RESPONSE"},{"action":"one","tags":{},"communicationMode":"REQUEST_RESPONSE"}]}]}=service} : ServiceEndpoint{id='6dcdd1be-84c8-45d2-94c8-3450ea4bdb2c', host='10.200.2.176', port=5802, tags={}, serviceRegistrations=[ServiceRegistration{namespace='benchmarks', tags={}, methods=[ServiceMethodDefinition{action='failure', tags={}}, ServiceMethodDefinition{action='infiniteStream', tags={}}, ServiceMethodDefinition{action='one', tags={}}]}, ServiceRegistration{namespace='greeting', tags={}, methods=[ServiceMethodDefinition{action='pojo/one', tags={}}, ServiceMethodDefinition{action='failing/one', tags={}}, 
ServiceMethodDefinition{action='many', tags={}}, ServiceMethodDefinition{action='never/one', tags={}}, ServiceMethodDefinition{action='pojo/many', tags={}}, ServiceMethodDefinition{action='delay/many', tags={}}, ServiceMethodDefinition{action='manyStream', tags={}}, ServiceMethodDefinition{action='failing/many', tags={}}, ServiceMethodDefinition{action='empty/one', tags={}}, ServiceMethodDefinition{action='empty/many', tags={}}, ServiceMethodDefinition{action='delay/one', tags={}}, ServiceMethodDefinition{action='one', tags={}}]}]} [sc-membership-4801]
E 2018-10-27T20:45:07,669 r.c.p.Operators Operator called default onErrorDropped [sc-io-5-1]
reactor.core.Exceptions$ReactorRejectedExecutionException: Scheduler unavailable
	at reactor.core.Exceptions.failWithRejected(Exceptions.java:249) ~[reactor-core-3.1.8.RELEASE.jar:3.1.8.RELEASE]
	at reactor.core.publisher.Operators.onRejectedExecution(Operators.java:412) ~[reactor-core-3.1.8.RELEASE.jar:3.1.8.RELEASE]
	at reactor.core.publisher.FluxPublishOn$PublishOnConditionalSubscriber.trySchedule(FluxPublishOn.java:759) [reactor-core-3.1.8.RELEASE.jar:3.1.8.RELEASE]
	at reactor.core.publisher.FluxPublishOn$PublishOnConditionalSubscriber.onNext(FluxPublishOn.java:687) [reactor-core-3.1.8.RELEASE.jar:3.1.8.RELEASE]
	at reactor.core.publisher.FluxOnBackpressureBuffer$BackpressureBufferSubscriber.drainFused(FluxOnBackpressureBuffer.java:275) [reactor-core-3.1.8.RELEASE.jar:3.1.8.RELEASE]
	at reactor.core.publisher.FluxOnBackpressureBuffer$BackpressureBufferSubscriber.drain(FluxOnBackpressureBuffer.java:199) [reactor-core-3.1.8.RELEASE.jar:3.1.8.RELEASE]
	at reactor.core.publisher.FluxOnBackpressureBuffer$BackpressureBufferSubscriber.onNext(FluxOnBackpressureBuffer.java:164) [reactor-core-3.1.8.RELEASE.jar:3.1.8.RELEASE]
	at reactor.core.publisher.DirectProcessor$DirectInner.onNext(DirectProcessor.java:297) [reactor-core-3.1.8.RELEASE.jar:3.1.8.RELEASE]
	at reactor.core.publisher.DirectProcessor.onNext(DirectProcessor.java:106) [reactor-core-3.1.8.RELEASE.jar:3.1.8.RELEASE]
	at reactor.core.publisher.SerializedSubscriber.onNext(SerializedSubscriber.java:89) [reactor-core-3.1.8.RELEASE.jar:3.1.8.RELEASE]
	at reactor.core.publisher.DelegateProcessor.onNext(DelegateProcessor.java:64) [reactor-core-3.1.8.RELEASE.jar:3.1.8.RELEASE]
	at reactor.core.publisher.FluxCreate$IgnoreSink.next(FluxCreate.java:573) [reactor-core-3.1.8.RELEASE.jar:3.1.8.RELEASE]
	at reactor.core.publisher.FluxCreate$SerializedSink.next(FluxCreate.java:151) [reactor-core-3.1.8.RELEASE.jar:3.1.8.RELEASE]
	at io.scalecube.transport.MessageHandler.channelRead(MessageHandler.java:33) [scalecube-transport-2.1.1.jar:?]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102) [netty-codec-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310) [netty-codec-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284) [netty-codec-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1434) [netty-transport-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:965) [netty-transport-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:808) [netty-transport-native-epoll-4.1.29.Final-linux-x86_64.jar:4.1.29.Final]
	at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:410) [netty-transport-native-epoll-4.1.29.Final-linux-x86_64.jar:4.1.29.Final]
	at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:310) [netty-transport-native-epoll-4.1.29.Final-linux-x86_64.jar:4.1.29.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:884) [netty-common-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [netty-common-4.1.29.Final.jar:4.1.29.Final]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_181]
Caused by: java.util.concurrent.RejectedExecutionException: Scheduler unavailable
	at reactor.core.Exceptions.<clinit>(Exceptions.java:502) ~[reactor-core-3.1.8.RELEASE.jar:3.1.8.RELEASE]
	at reactor.core.scheduler.Schedulers.workerSchedule(Schedulers.java:708) ~[reactor-core-3.1.8.RELEASE.jar:3.1.8.RELEASE]
	at reactor.core.scheduler.ExecutorServiceWorker.schedule(ExecutorServiceWorker.java:43) ~[reactor-core-3.1.8.RELEASE.jar:3.1.8.RELEASE]
	at reactor.core.publisher.FluxPublishOn$PublishOnConditionalSubscriber.trySchedule(FluxPublishOn.java:755) ~[reactor-core-3.1.8.RELEASE.jar:3.1.8.RELEASE]
	... 33 more
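For context on the `Scheduler unavailable` error above: Reactor raises `ReactorRejectedExecutionException` when an operator such as `publishOn` tries to schedule work on a `Scheduler` whose backing executor has already been disposed (here, during gateway shutdown/teardown while inbound frames are still arriving). A minimal JDK-only sketch of the same failure mode — submitting to an `ExecutorService` that has been shut down — assuming this mirrors what Reactor's `ExecutorServiceWorker` does internally:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.RejectedExecutionException;

public class RejectedSchedulingDemo {

    public static void main(String[] args) {
        ExecutorService executor = Executors.newSingleThreadExecutor();

        // Simulate the scheduler being disposed before work stops arriving.
        executor.shutdown();

        try {
            // Analogous to FluxPublishOn.trySchedule(...) after dispose:
            // the executor refuses new tasks once shut down.
            executor.submit(() -> System.out.println("never runs"));
        } catch (RejectedExecutionException e) {
            // Reactor wraps this situation as ReactorRejectedExecutionException
            // ("Scheduler unavailable") and routes it to onErrorDropped,
            // matching the "Operator called default onErrorDropped" log line.
            System.out.println("task rejected after shutdown");
        }
    }
}
```

This is only an analogy for the mechanism, not the scalecube code path itself; in the trace above the rejection originates from `Schedulers.workerSchedule` inside `FluxPublishOn` while `io.scalecube.transport.MessageHandler` is still pushing inbound messages.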
W 2018-10-27T20:45:07,688 i.s.t.ExceptionHandler Exception caught for channel [id: 0x13a5c4ac, L:/10.200.2.40:4801 - R:/10.200.2.176:49374], reactor.core.Exceptions$ReactorRejectedExecutionException: Scheduler unavailable [sc-io-5-1]
reactor.core.Exceptions$BubblingException: reactor.core.Exceptions$ReactorRejectedExecutionException: Scheduler unavailable
	at reactor.core.Exceptions.bubble(Exceptions.java:154) ~[reactor-core-3.1.8.RELEASE.jar:3.1.8.RELEASE]
	at reactor.core.publisher.Operators.onErrorDropped(Operators.java:263) ~[reactor-core-3.1.8.RELEASE.jar:3.1.8.RELEASE]
	at reactor.core.publisher.FluxTimeout$TimeoutMainSubscriber.onError(FluxTimeout.java:182) ~[reactor-core-3.1.8.RELEASE.jar:3.1.8.RELEASE]
	at reactor.core.publisher.FluxTake$TakeFuseableSubscriber.onError(FluxTake.java:415) ~[reactor-core-3.1.8.RELEASE.jar:3.1.8.RELEASE]
	at reactor.core.publisher.FluxFilterFuseable$FilterFuseableSubscriber.onError(FluxFilterFuseable.java:142) ~[reactor-core-3.1.8.RELEASE.jar:3.1.8.RELEASE]
	at reactor.core.publisher.FluxFilterFuseable$FilterFuseableConditionalSubscriber.onError(FluxFilterFuseable.java:319) ~[reactor-core-3.1.8.RELEASE.jar:3.1.8.RELEASE]
	at reactor.core.publisher.FluxPublishOn$PublishOnConditionalSubscriber.trySchedule(FluxPublishOn.java:759) ~[reactor-core-3.1.8.RELEASE.jar:3.1.8.RELEASE]
	at reactor.core.publisher.FluxPublishOn$PublishOnConditionalSubscriber.onNext(FluxPublishOn.java:687) ~[reactor-core-3.1.8.RELEASE.jar:3.1.8.RELEASE]
	at reactor.core.publisher.FluxOnBackpressureBuffer$BackpressureBufferSubscriber.drainFused(FluxOnBackpressureBuffer.java:275) ~[reactor-core-3.1.8.RELEASE.jar:3.1.8.RELEASE]
	at reactor.core.publisher.FluxOnBackpressureBuffer$BackpressureBufferSubscriber.drain(FluxOnBackpressureBuffer.java:199) ~[reactor-core-3.1.8.RELEASE.jar:3.1.8.RELEASE]
	at reactor.core.publisher.FluxOnBackpressureBuffer$BackpressureBufferSubscriber.onNext(FluxOnBackpressureBuffer.java:164) ~[reactor-core-3.1.8.RELEASE.jar:3.1.8.RELEASE]
	at reactor.core.publisher.DirectProcessor$DirectInner.onNext(DirectProcessor.java:297) ~[reactor-core-3.1.8.RELEASE.jar:3.1.8.RELEASE]
	at reactor.core.publisher.DirectProcessor.onNext(DirectProcessor.java:106) ~[reactor-core-3.1.8.RELEASE.jar:3.1.8.RELEASE]
	at reactor.core.publisher.SerializedSubscriber.onNext(SerializedSubscriber.java:89) ~[reactor-core-3.1.8.RELEASE.jar:3.1.8.RELEASE]
	at reactor.core.publisher.DelegateProcessor.onNext(DelegateProcessor.java:64) ~[reactor-core-3.1.8.RELEASE.jar:3.1.8.RELEASE]
	at reactor.core.publisher.FluxCreate$IgnoreSink.next(FluxCreate.java:573) ~[reactor-core-3.1.8.RELEASE.jar:3.1.8.RELEASE]
	at reactor.core.publisher.FluxCreate$SerializedSink.next(FluxCreate.java:151) ~[reactor-core-3.1.8.RELEASE.jar:3.1.8.RELEASE]
	at io.scalecube.transport.MessageHandler.channelRead(MessageHandler.java:33) ~[scalecube-transport-2.1.1.jar:?]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102) [netty-codec-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310) [netty-codec-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284) [netty-codec-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1434) [netty-transport-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:965) [netty-transport-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:808) [netty-transport-native-epoll-4.1.29.Final-linux-x86_64.jar:4.1.29.Final]
	at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:410) [netty-transport-native-epoll-4.1.29.Final-linux-x86_64.jar:4.1.29.Final]
	at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:310) [netty-transport-native-epoll-4.1.29.Final-linux-x86_64.jar:4.1.29.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:884) [netty-common-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [netty-common-4.1.29.Final.jar:4.1.29.Final]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_181]
Caused by: reactor.core.Exceptions$ReactorRejectedExecutionException: Scheduler unavailable
	at reactor.core.Exceptions.failWithRejected(Exceptions.java:249) ~[reactor-core-3.1.8.RELEASE.jar:3.1.8.RELEASE]
	at reactor.core.publisher.Operators.onRejectedExecution(Operators.java:412) ~[reactor-core-3.1.8.RELEASE.jar:3.1.8.RELEASE]
	... 34 more
Caused by: java.util.concurrent.RejectedExecutionException: Scheduler unavailable
	at reactor.core.Exceptions.<clinit>(Exceptions.java:502) ~[reactor-core-3.1.8.RELEASE.jar:3.1.8.RELEASE]
	at reactor.core.scheduler.Schedulers.workerSchedule(Schedulers.java:708) ~[reactor-core-3.1.8.RELEASE.jar:3.1.8.RELEASE]
	at reactor.core.scheduler.ExecutorServiceWorker.schedule(ExecutorServiceWorker.java:43) ~[reactor-core-3.1.8.RELEASE.jar:3.1.8.RELEASE]
	at reactor.core.publisher.FluxPublishOn$PublishOnConditionalSubscriber.trySchedule(FluxPublishOn.java:755) ~[reactor-core-3.1.8.RELEASE.jar:3.1.8.RELEASE]
	... 33 more
I 2018-10-27T20:45:17,672 i.s.s.d.a.ServiceDiscovery Service Reference was REMOVED since Member have left the cluster [email protected]:4802{{"id":"6dcdd1be-84c8-45d2-94c8-3450ea4bdb2c","host":"10.200.2.176","port":5802,"contentTypes":["application/json"],"tags":{},"serviceRegistrations":[{"namespace":"benchmarks","tags":{},"methods":[{"action":"failure","tags":{},"communicationMode":"REQUEST_RESPONSE"},{"action":"infiniteStream","tags":{},"communicationMode":"REQUEST_STREAM"},{"action":"one","tags":{},"communicationMode":"REQUEST_RESPONSE"}]},{"namespace":"greeting","tags":{},"methods":[{"action":"pojo/one","tags":{},"communicationMode":"REQUEST_RESPONSE"},{"action":"failing/one","tags":{},"communicationMode":"REQUEST_RESPONSE"},{"action":"many","tags":{},"communicationMode":"REQUEST_STREAM"},{"action":"never/one","tags":{},"communicationMode":"REQUEST_RESPONSE"},{"action":"pojo/many","tags":{},"communicationMode":"REQUEST_STREAM"},{"action":"delay/many","tags":{},"communicationMode":"REQUEST_STREAM"},{"action":"manyStream","tags":{},"communicationMode":"REQUEST_STREAM"},{"action":"failing/many","tags":{},"communicationMode":"REQUEST_STREAM"},{"action":"empty/one","tags":{},"communicationMode":"REQUEST_RESPONSE"},{"action":"empty/many","tags":{},"communicationMode":"REQUEST_STREAM"},{"action":"delay/one","tags":{},"communicationMode":"REQUEST_RESPONSE"},{"action":"one","tags":{},"communicationMode":"REQUEST_RESPONSE"}]}]}=service} : ServiceEndpoint{id='6dcdd1be-84c8-45d2-94c8-3450ea4bdb2c', host='10.200.2.176', port=5802, tags={}, serviceRegistrations=[ServiceRegistration{namespace='benchmarks', tags={}, methods=[ServiceMethodDefinition{action='failure', tags={}}, ServiceMethodDefinition{action='infiniteStream', tags={}}, ServiceMethodDefinition{action='one', tags={}}]}, ServiceRegistration{namespace='greeting', tags={}, methods=[ServiceMethodDefinition{action='pojo/one', tags={}}, ServiceMethodDefinition{action='failing/one', tags={}}, 
ServiceMethodDefinition{action='many', tags={}}, ServiceMethodDefinition{action='never/one', tags={}}, ServiceMethodDefinition{action='pojo/many', tags={}}, ServiceMethodDefinition{action='delay/many', tags={}}, ServiceMethodDefinition{action='manyStream', tags={}}, ServiceMethodDefinition{action='failing/many', tags={}}, ServiceMethodDefinition{action='empty/one', tags={}}, ServiceMethodDefinition{action='empty/many', tags={}}, ServiceMethodDefinition{action='delay/one', tags={}}, ServiceMethodDefinition{action='one', tags={}}]}]} [sc-membership-4801]
I 2018-10-27T20:46:09,890 i.s.g.r.w.RSocketWebsocketAcceptor Accepted rsocket websocket: io.rsocket.RSocketClient@63bc74a9, connectionSetup: io.rsocket.ConnectionSetupPayload$DefaultConnectionSetupPayload@2d1a9cf9 [rsocket-worker-1-1]
I 2018-10-27T20:46:09,890 i.s.g.r.w.RSocketWebsocketAcceptor Accepted rsocket websocket: io.rsocket.RSocketClient@15fdff73, connectionSetup: io.rsocket.ConnectionSetupPayload$DefaultConnectionSetupPayload@768c9334 [rsocket-worker-1-2]
E 2018-10-27T20:46:09,947 i.s.s.ServiceCall Failed  to invoke service, No reachable member with such service definition [/benchmarks/one], args [ServiceMessage {headers: {q=/benchmarks/one}, data: null}] [rsocket-worker-1-2]
E 2018-10-27T20:46:09,947 i.s.s.ServiceCall Failed  to invoke service, No reachable member with such service definition [/benchmarks/one], args [ServiceMessage {headers: {q=/benchmarks/one}, data: null}] [rsocket-worker-1-1]
I 2018-10-27T20:46:10,745 i.s.g.r.w.RSocketWebsocketAcceptor Accepted rsocket websocket: io.rsocket.RSocketClient@66086623, connectionSetup: io.rsocket.ConnectionSetupPayload$DefaultConnectionSetupPayload@6237fe3a [rsocket-worker-1-3]
E 2018-10-27T20:46:10,747 i.s.s.ServiceCall Failed  to invoke service, No reachable member with such service definition [/benchmarks/one], args [ServiceMessage {headers: {q=/benchmarks/one}, data: null}] [rsocket-worker-1-3]
I 2018-10-27T20:46:11,746 i.s.g.r.w.RSocketWebsocketAcceptor Accepted rsocket websocket: io.rsocket.RSocketClient@4ecf3ce0, connectionSetup: io.rsocket.ConnectionSetupPayload$DefaultConnectionSetupPayload@4af0a1e1 [rsocket-worker-1-4]
E 2018-10-27T20:46:11,747 i.s.s.ServiceCall Failed  to invoke service, No reachable member with such service definition [/benchmarks/one], args [ServiceMessage {headers: {q=/benchmarks/one}, data: null}] [rsocket-worker-1-4]
I 2018-10-27T20:46:12,745 i.s.g.r.w.RSocketWebsocketAcceptor Accepted rsocket websocket: io.rsocket.RSocketClient@10da0d80, connectionSetup: io.rsocket.ConnectionSetupPayload$DefaultConnectionSetupPayload@14e59b43 [rsocket-worker-1-5]
E 2018-10-27T20:46:12,746 i.s.s.ServiceCall Failed  to invoke service, No reachable member with such service definition [/benchmarks/one], args [ServiceMessage {headers: {q=/benchmarks/one}, data: null}] [rsocket-worker-1-5]
I 2018-10-27T20:46:13,745 i.s.g.r.w.RSocketWebsocketAcceptor Accepted rsocket websocket: io.rsocket.RSocketClient@efc7886, connectionSetup: io.rsocket.ConnectionSetupPayload$DefaultConnectionSetupPayload@4bfee73d [rsocket-worker-1-6]
E 2018-10-27T20:46:13,747 i.s.s.ServiceCall Failed  to invoke service, No reachable member with such service definition [/benchmarks/one], args [ServiceMessage {headers: {q=/benchmarks/one}, data: null}] [rsocket-worker-1-6]
I 2018-10-27T20:46:14,744 i.s.g.r.w.RSocketWebsocketAcceptor Accepted rsocket websocket: io.rsocket.RSocketClient@615b99ad, connectionSetup: io.rsocket.ConnectionSetupPayload$DefaultConnectionSetupPayload@6564015e [rsocket-worker-1-7]
E 2018-10-27T20:46:14,746 i.s.s.ServiceCall Failed  to invoke service, No reachable member with such service definition [/benchmarks/one], args [ServiceMessage {headers: {q=/benchmarks/one}, data: null}] [rsocket-worker-1-7]
I 2018-10-27T20:46:15,746 i.s.g.r.w.RSocketWebsocketAcceptor Accepted rsocket websocket: io.rsocket.RSocketClient@12d12439, connectionSetup: io.rsocket.ConnectionSetupPayload$DefaultConnectionSetupPayload@57d3a807 [rsocket-worker-1-8]
E 2018-10-27T20:46:15,748 i.s.s.ServiceCall Failed  to invoke service, No reachable member with such service definition [/benchmarks/one], args [ServiceMessage {headers: {q=/benchmarks/one}, data: null}] [rsocket-worker-1-8]
I 2018-10-27T20:46:16,765 i.s.g.r.w.RSocketWebsocketAcceptor Client disconnected: io.rsocket.RSocketClient@efc7886 [rsocket-worker-1-6]
I 2018-10-27T20:46:16,765 i.s.g.r.w.RSocketWebsocketAcceptor Client disconnected: io.rsocket.RSocketClient@15fdff73 [rsocket-worker-1-2]
I 2018-10-27T20:46:16,765 i.s.g.r.w.RSocketWebsocketAcceptor Client disconnected: io.rsocket.RSocketClient@66086623 [rsocket-worker-1-3]
I 2018-10-27T20:46:16,765 i.s.g.r.w.RSocketWebsocketAcceptor Client disconnected: io.rsocket.RSocketClient@12d12439 [rsocket-worker-1-8]
I 2018-10-27T20:46:16,765 i.s.g.r.w.RSocketWebsocketAcceptor Client disconnected: io.rsocket.RSocketClient@10da0d80 [rsocket-worker-1-5]
I 2018-10-27T20:46:16,765 i.s.g.r.w.RSocketWebsocketAcceptor Client disconnected: io.rsocket.RSocketClient@63bc74a9 [rsocket-worker-1-1]
I 2018-10-27T20:46:16,765 i.s.g.r.w.RSocketWebsocketAcceptor Client disconnected: io.rsocket.RSocketClient@4ecf3ce0 [rsocket-worker-1-4]
I 2018-10-27T20:46:16,767 i.s.g.r.w.RSocketWebsocketAcceptor Client disconnected: io.rsocket.RSocketClient@615b99ad [rsocket-worker-1-7]

On the service:

I 2018-10-27T20:44:51,112 i.s.c.a.Slf4JConfigEventListener Config property changed: [
CA_CERTIFICATES_JAVA_VERSION=null->***,	source=null->env_var,	origin=null
DEFAULT_JAVA_OPTS=null->***,	source=null->env_var,	origin=null
DEFAULT_JMX_OPTS=null->***,	source=null->env_var,	origin=null
DEFAULT_OOM_OPTS=null->***,	source=null->env_var,	origin=null
HOME=null->***,	source=null->env_var,	origin=null
HOSTNAME=null->***,	source=null->env_var,	origin=null
JAVA_DEBIAN_VERSION=null->***,	source=null->env_var,	origin=null
JAVA_HOME=null->***,	source=null->env_var,	origin=null
JAVA_OPTS=null->***,	source=null->env_var,	origin=null
JAVA_VERSION=null->***,	source=null->env_var,	origin=null
LANG=null->***,	source=null->env_var,	origin=null
PATH=null->***,	source=null->env_var,	origin=null
PROGRAM_ARGS=null->***,	source=null->env_var,	origin=null
PWD=null->***,	source=null->env_var,	origin=null
YOURKIT_AGENT=null->***,	source=null->env_var,	origin=null
awt.toolkit=null->***,	source=null->sys_prop,	origin=null
com.sun.management.jmxremote.authenticate=null->***,	source=null->sys_prop,	origin=null
com.sun.management.jmxremote.port=null->***,	source=null->sys_prop,	origin=null
com.sun.management.jmxremote.rmi.port=null->***,	source=null->sys_prop,	origin=null
com.sun.management.jmxremote.ssl=null->***,	source=null->sys_prop,	origin=null
file.encoding=null->***,	source=null->sys_prop,	origin=null
file.encoding.pkg=null->***,	source=null->sys_prop,	origin=null
file.separator=null->***,	source=null->sys_prop,	origin=null
io.scalecube.gateway.examples.discoveryPort=null->***,	source=null->cp,	origin=null->config-examples.properties
io.scalecube.gateway.examples.seeds=null->***,	source=null->sys_prop,	origin=null
io.scalecube.gateway.examples.servicePort=null->***,	source=null->cp,	origin=null->config-examples.properties
java.awt.graphicsenv=null->***,	source=null->sys_prop,	origin=null
java.awt.printerjob=null->***,	source=null->sys_prop,	origin=null
java.class.path=null->***,	source=null->sys_prop,	origin=null
java.class.version=null->***,	source=null->sys_prop,	origin=null
java.endorsed.dirs=null->***,	source=null->sys_prop,	origin=null
java.ext.dirs=null->***,	source=null->sys_prop,	origin=null
java.home=null->***,	source=null->sys_prop,	origin=null
java.io.tmpdir=null->***,	source=null->sys_prop,	origin=null
java.library.path=null->***,	source=null->sys_prop,	origin=null
java.rmi.server.hostname=null->***,	source=null->sys_prop,	origin=null
java.rmi.server.randomIDs=null->***,	source=null->sys_prop,	origin=null
java.runtime.name=null->***,	source=null->sys_prop,	origin=null
java.runtime.version=null->***,	source=null->sys_prop,	origin=null
java.specification.name=null->***,	source=null->sys_prop,	origin=null
java.specification.vendor=null->***,	source=null->sys_prop,	origin=null
java.specification.version=null->***,	source=null->sys_prop,	origin=null
java.vendor=null->***,	source=null->sys_prop,	origin=null
java.vendor.url=null->***,	source=null->sys_prop,	origin=null
java.vendor.url.bug=null->***,	source=null->sys_prop,	origin=null
java.version=null->***,	source=null->sys_prop,	origin=null
java.vm.info=null->***,	source=null->sys_prop,	origin=null
java.vm.name=null->***,	source=null->sys_prop,	origin=null
java.vm.specification.name=null->***,	source=null->sys_prop,	origin=null
java.vm.specification.vendor=null->***,	source=null->sys_prop,	origin=null
java.vm.specification.version=null->***,	source=null->sys_prop,	origin=null
java.vm.vendor=null->***,	source=null->sys_prop,	origin=null
java.vm.version=null->***,	source=null->sys_prop,	origin=null
line.separator=null->***,	source=null->sys_prop,	origin=null
log4j.configurationFile=null->***,	source=null->sys_prop,	origin=null
os.arch=null->***,	source=null->sys_prop,	origin=null
os.name=null->***,	source=null->sys_prop,	origin=null
os.version=null->***,	source=null->sys_prop,	origin=null
path.separator=null->***,	source=null->sys_prop,	origin=null
sun.arch.data.model=null->***,	source=null->sys_prop,	origin=null
sun.boot.class.path=null->***,	source=null->sys_prop,	origin=null
sun.boot.library.path=null->***,	source=null->sys_prop,	origin=null
sun.cpu.endian=null->***,	source=null->sys_prop,	origin=null
sun.cpu.isalist=null->***,	source=null->sys_prop,	origin=null
sun.io.unicode.encoding=null->***,	source=null->sys_prop,	origin=null
sun.java.command=null->***,	source=null->sys_prop,	origin=null
sun.java.launcher=null->***,	source=null->sys_prop,	origin=null
sun.jnu.encoding=null->***,	source=null->sys_prop,	origin=null
sun.management.compiler=null->***,	source=null->sys_prop,	origin=null
sun.os.patch.level=null->***,	source=null->sys_prop,	origin=null
sun.rmi.dgc.client.gcInterval=null->***,	source=null->sys_prop,	origin=null
sun.rmi.dgc.server.gcInterval=null->***,	source=null->sys_prop,	origin=null
user.dir=null->***,	source=null->sys_prop,	origin=null
user.home=null->***,	source=null->sys_prop,	origin=null
user.language=null->***,	source=null->sys_prop,	origin=null
user.name=null->***,	source=null->sys_prop,	origin=null
user.timezone=null->***,	source=null->sys_prop,	origin=null
] [main]
I 2018-10-27T20:44:51,127 i.s.c.ConfigRegistryImpl Registered JMX MBean: javax.management.MBeanInfo[description=Information on the management interface of the MBean, attributes=[javax.management.MBeanAttributeInfo[description=Attribute exposed for management, name=Properties, type=java.util.Collection, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Attribute exposed for management, name=Settings, type=java.util.Collection, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Attribute exposed for management, name=Sources, type=java.util.Collection, read-only, descriptor={}], javax.management.MBeanAttributeInfo[description=Attribute exposed for management, name=Events, type=java.util.Collection, read-only, descriptor={}]], constructors=[javax.management.MBeanConstructorInfo[description=Public constructor of the MBean, name=io.scalecube.config.jmx.JmxConfigRegistry, signature=[javax.management.MBeanParameterInfo[description=, name=p1, type=io.scalecube.config.ConfigRegistry, descriptor={}]], descriptor={}]], operations=[], notifications=[], descriptor={immutableInfo=true, interfaceClassName=io.scalecube.config.jmx.JmxConfigRegistryMBean, mxbean=false}] [main]
I 2018-10-27T20:44:51,148 i.s.g.e.ExamplesRunner *********************************************************************** [main]
I 2018-10-27T20:44:51,148 i.s.g.e.ExamplesRunner Starting Examples services on Config{servicePort=5802, discoveryPort=4802, numOfThreads=null, seeds=[10.200.2.40:4801], memberHost=null, memberPort=null} [main]
I 2018-10-27T20:44:51,148 i.s.g.e.ExamplesRunner *********************************************************************** [main]
I 2018-10-27T20:44:51,148 i.s.g.e.ExamplesRunner Number of worker threads: 8 [main]
I 2018-10-27T20:44:52,356 i.s.s.d.a.ServiceDiscovery Start scalecube service discovery with config: ClusterConfig{seedMembers=[10.200.2.40:4801], metadata={{"id":"6dcdd1be-84c8-45d2-94c8-3450ea4bdb2c","host":"10.200.2.176","port":5802,"contentTypes":["application/json"],"tags":{},"serviceRegistrations":[{"namespace":"benchmarks","tags":{},"methods":[{"action":"failure","tags":{},"communicationMode":"REQUEST_RESPONSE"},{"action":"infiniteStream","tags":{},"communicationMode":"REQUEST_STREAM"},{"action":"one","tags":{},"communicationMode":"REQUEST_RESPONSE"}]},{"namespace":"greeting","tags":{},"methods":[{"action":"pojo/one","tags":{},"communicationMode":"REQUEST_RESPONSE"},{"action":"failing/one","tags":{},"communicationMode":"REQUEST_RESPONSE"},{"action":"many","tags":{},"communicationMode":"REQUEST_STREAM"},{"action":"never/one","tags":{},"communicationMode":"REQUEST_RESPONSE"},{"action":"pojo/many","tags":{},"communicationMode":"REQUEST_STREAM"},{"action":"delay/many","tags":{},"communicationMode":"REQUEST_STREAM"},{"action":"manyStream","tags":{},"communicationMode":"REQUEST_STREAM"},{"action":"failing/many","tags":{},"communicationMode":"REQUEST_STREAM"},{"action":"empty/one","tags":{},"communicationMode":"REQUEST_RESPONSE"},{"action":"empty/many","tags":{},"communicationMode":"REQUEST_STREAM"},{"action":"delay/one","tags":{},"communicationMode":"REQUEST_RESPONSE"},{"action":"one","tags":{},"communicationMode":"REQUEST_RESPONSE"}]}]}=service}, syncInterval=30000, syncTimeout=3000, suspicionMult=5, syncGroup='default', pingInterval=1000, pingTimeout=500, pingReqMembers=3, gossipInterval=200, gossipFanout=3, gossipRepeatMult=3, transportConfig=TransportConfig{listenAddress=null, listenInterface=null, preferIPv6=false, port=4802, connectTimeout=3000, useNetworkEmulator=false, enableEpoll=true, bossThreads=2, workerThreads=0}, memberHost=null, memberPort=null} [rsocket-boss-3-1]
I 2018-10-27T20:44:52,383 i.s.t.BootstrapFactory Use epoll transport [rsocket-boss-3-1]
I 2018-10-27T20:44:52,403 i.s.t.TransportImpl Bound to: 10.200.2.176:4802 [sc-boss-4-1]
I 2018-10-27T20:44:53,086 i.s.c.m.MembershipProtocolImpl Joined cluster 'default': [{m: [email protected]:4801, s: ALIVE, inc: 0}, {m: [email protected]:4802{{"id":"6dcdd1be-84c8-45d2-94c8-3450ea4bdb2c","host":"10.200.2.176","port":5802,"contentTypes":["application/json"],"tags":{},"serviceRegistrations":[{"namespace":"benchmarks","tags":{},"methods":[{"action":"failure","tags":{},"communicationMode":"REQUEST_RESPONSE"},{"action":"infiniteStream","tags":{},"communicationMode":"REQUEST_STREAM"},{"action":"one","tags":{},"communicationMode":"REQUEST_RESPONSE"}]},{"namespace":"greeting","tags":{},"methods":[{"action":"pojo/one","tags":{},"communicationMode":"REQUEST_RESPONSE"},{"action":"failing/one","tags":{},"communicationMode":"REQUEST_RESPONSE"},{"action":"many","tags":{},"communicationMode":"REQUEST_STREAM"},{"action":"never/one","tags":{},"communicationMode":"REQUEST_RESPONSE"},{"action":"pojo/many","tags":{},"communicationMode":"REQUEST_STREAM"},{"action":"delay/many","tags":{},"communicationMode":"REQUEST_STREAM"},{"action":"manyStream","tags":{},"communicationMode":"REQUEST_STREAM"},{"action":"failing/many","tags":{},"communicationMode":"REQUEST_STREAM"},{"action":"empty/one","tags":{},"communicationMode":"REQUEST_RESPONSE"},{"action":"empty/many","tags":{},"communicationMode":"REQUEST_STREAM"},{"action":"delay/one","tags":{},"communicationMode":"REQUEST_RESPONSE"},{"action":"one","tags":{},"communicationMode":"REQUEST_RESPONSE"}]}]}=service}, s: ALIVE, inc: 0}] [sc-membership-4802]

Keeping the metadata store off-heap?

Currently the metadata store is kept in memory (on-heap).
An alternative is to keep the metadata store off-heap (via a memory-mapped file?).
Another option would be to make the store pluggable, so an implementation can be chosen/changed/swapped.
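A minimal sketch of what a pluggable store could look like: the cluster would depend only on a small interface, with the current in-memory behavior as the default implementation and an off-heap (e.g. memory-mapped) variant plugged in later. All names here are illustrative assumptions, not the actual scalecube API.

```java
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical store SPI: membership code would talk to this interface
// instead of a concrete map, so the backing storage becomes swappable.
interface MetadataStore {
  void put(String memberId, byte[] metadata);
  Optional<byte[]> get(String memberId);
  void remove(String memberId);
}

// Default implementation matching today's behavior: plain in-memory map.
final class InMemoryMetadataStore implements MetadataStore {
  private final Map<String, byte[]> store = new ConcurrentHashMap<>();

  @Override public void put(String memberId, byte[] metadata) {
    store.put(memberId, metadata);
  }

  @Override public Optional<byte[]> get(String memberId) {
    return Optional.ofNullable(store.get(memberId));
  }

  @Override public void remove(String memberId) {
    store.remove(memberId);
  }
}
```

An off-heap variant would implement the same interface over a `MappedByteBuffer`, leaving callers unchanged.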

Adjusted frame length exceeds 32768

After migrating to JSON, the Cluster at Microservices started to produce errors like the following:

I 2018-12-05T09:38:37,755 i.s.s.g.GatewayRunner ####################################################################### [main]
I 2018-12-05T09:38:37,755 i.s.s.g.GatewayRunner Starting Gateway on Config{servicePort=5801, discoveryPort=4801, seeds=[announce:4801], memberHost=null, memberPort=null} [main]
I 2018-12-05T09:38:37,755 i.s.s.g.GatewayRunner ####################################################################### [main]
I 2018-12-05T09:38:40,160 i.s.s.d.a.ServiceDiscovery Start scalecube service discovery with config: ClusterConfig{seedMembers=[announce:4801], metadata={}, syncInterval=30000, syncTimeout=3000, suspicionMult=5, syncGroup='default', pingInterval=1000, pingTimeout=500, pingReqMembers=3, gossipInterval=200, gossipFanout=3, gossipRepeatMult=3, transportConfig=TransportConfig{port=4801, connectTimeout=3000, useNetworkEmulator=false}, memberHost=null, memberPort=null} [rsocket-boss-3-1]
I 2018-12-05T09:38:40,249 i.s.t.TransportImpl Bound cluster transport on 0:0:0:0:0:0:0:0%0:4801 [sc-cluster-io-select-epoll-1]
I 2018-12-05T09:38:43,374 i.s.c.m.MembershipProtocolImpl Timeout getting initial SyncAck from seed members: [announce:4801] [sc-cluster-4801-1]
I 2018-12-05T09:38:43,447 i.s.s.g.w.WebsocketGateway Starting gateway with GatewayConfig{name='ws', gatewayClass=io.scalecube.services.gateway.ws.WebsocketGateway, options={}, port=7070} [sc-cluster-4801-1]
I 2018-12-05T09:38:43,554 i.s.s.g.w.WebsocketGateway Websocket Gateway has been started successfully on /0:0:0:0:0:0:0:0%0:7070 [ws-boss-6-1]
I 2018-12-05T09:38:43,555 i.s.s.g.h.HttpGateway Starting gateway with GatewayConfig{name='http', gatewayClass=io.scalecube.services.gateway.http.HttpGateway, options={}, port=8080} [sc-cluster-4801-1]
I 2018-12-05T09:38:43,560 i.s.s.g.h.HttpGateway HTTP Gateway has been started successfully on /0:0:0:0:0:0:0:0%0:8080 [http-boss-4-1]
I 2018-12-05T09:38:43,561 i.s.s.g.r.RSocketGateway Starting gateway with GatewayConfig{name='rsws', gatewayClass=io.scalecube.services.gateway.rsocket.RSocketGateway, options={}, port=9090} [sc-cluster-4801-1]
I 2018-12-05T09:38:43,632 i.s.s.g.r.RSocketGateway Rsocket Gateway has been started successfully on /0:0:0:0:0:0:0:0%0:9090 [rsws-boss-5-1]
W 2018-12-05T09:39:02,048 i.s.t.ExceptionHandler Exception caught for channel [id: 0x709b11ab, L:/100.106.0.8:4801 - R:/100.102.0.13:32856], Adjusted frame length exceeds 32768: 38152 - discarded [sc-cluster-io-epoll-2]
io.netty.handler.codec.TooLongFrameException: Adjusted frame length exceeds 32768: 38152 - discarded
	at io.netty.handler.codec.LengthFieldBasedFrameDecoder.fail(LengthFieldBasedFrameDecoder.java:522) ~[netty-codec-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.handler.codec.LengthFieldBasedFrameDecoder.failIfNecessary(LengthFieldBasedFrameDecoder.java:500) ~[netty-codec-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.handler.codec.LengthFieldBasedFrameDecoder.exceededFrameLength(LengthFieldBasedFrameDecoder.java:387) ~[netty-codec-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.handler.codec.LengthFieldBasedFrameDecoder.decode(LengthFieldBasedFrameDecoder.java:430) ~[netty-codec-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.handler.codec.LengthFieldBasedFrameDecoder.decode(LengthFieldBasedFrameDecoder.java:343) ~[netty-codec-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:489) ~[netty-codec-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:428) ~[netty-codec-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:265) ~[netty-codec-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1434) [netty-transport-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:965) [netty-transport-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:808) [netty-transport-native-epoll-4.1.29.Final-linux-x86_64.jar:4.1.29.Final]
	at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:410) [netty-transport-native-epoll-4.1.29.Final-linux-x86_64.jar:4.1.29.Final]
	at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:310) [netty-transport-native-epoll-4.1.29.Final-linux-x86_64.jar:4.1.29.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:884) [netty-common-4.1.29.Final.jar:4.1.29.Final]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_181]
W 2018-12-05T09:39:09,336 i.s.t.ExceptionHandler Exception caught for channel [id: 0x56bbfead, L:/100.106.0.8:4801 - R:/100.106.0.10:45456], Adjusted frame length exceeds 32768: 38152 - discarded [sc-cluster-io-epoll-2]
io.netty.handler.codec.TooLongFrameException: Adjusted frame length exceeds 32768: 38152 - discarded
	at io.netty.handler.codec.LengthFieldBasedFrameDecoder.fail(LengthFieldBasedFrameDecoder.java:522) ~[netty-codec-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.handler.codec.LengthFieldBasedFrameDecoder.failIfNecessary(LengthFieldBasedFrameDecoder.java:500) ~[netty-codec-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.handler.codec.LengthFieldBasedFrameDecoder.exceededFrameLength(LengthFieldBasedFrameDecoder.java:387) ~[netty-codec-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.handler.codec.LengthFieldBasedFrameDecoder.decode(LengthFieldBasedFrameDecoder.java:430) ~[netty-codec-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.handler.codec.LengthFieldBasedFrameDecoder.decode(LengthFieldBasedFrameDecoder.java:343) ~[netty-codec-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:489) ~[netty-codec-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:428) ~[netty-codec-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:265) ~[netty-codec-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1434) [netty-transport-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:965) [netty-transport-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:808) [netty-transport-native-epoll-4.1.29.Final-linux-x86_64.jar:4.1.29.Final]
	at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:410) [netty-transport-native-epoll-4.1.29.Final-linux-x86_64.jar:4.1.29.Final]
	at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:310) [netty-transport-native-epoll-4.1.29.Final-linux-x86_64.jar:4.1.29.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:884) [netty-common-4.1.29.Final.jar:4.1.29.Final]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_181]
W 2018-12-05T09:39:13,726 i.s.t.ExceptionHandler Exception caught for channel [id: 0xff64fcb1, L:/100.106.0.8:4801 - R:/100.102.0.14:40988], Adjusted frame length exceeds 32768: 54796 - discarded [sc-cluster-io-epoll-2]
io.netty.handler.codec.TooLongFrameException: Adjusted frame length exceeds 32768: 54796 - discarded
	at io.netty.handler.codec.LengthFieldBasedFrameDecoder.fail(LengthFieldBasedFrameDecoder.java:522) ~[netty-codec-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.handler.codec.LengthFieldBasedFrameDecoder.failIfNecessary(LengthFieldBasedFrameDecoder.java:500) ~[netty-codec-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.handler.codec.LengthFieldBasedFrameDecoder.exceededFrameLength(LengthFieldBasedFrameDecoder.java:387) ~[netty-codec-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.handler.codec.LengthFieldBasedFrameDecoder.decode(LengthFieldBasedFrameDecoder.java:430) ~[netty-codec-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.handler.codec.LengthFieldBasedFrameDecoder.decode(LengthFieldBasedFrameDecoder.java:343) ~[netty-codec-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:489) ~[netty-codec-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:428) ~[netty-codec-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:265) ~[netty-codec-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1434) [netty-transport-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:965) [netty-transport-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:808) [netty-transport-native-epoll-4.1.29.Final-linux-x86_64.jar:4.1.29.Final]
	at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:410) [netty-transport-native-epoll-4.1.29.Final-linux-x86_64.jar:4.1.29.Final]
	at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:310) [netty-transport-native-epoll-4.1.29.Final-linux-x86_64.jar:4.1.29.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:884) [netty-common-4.1.29.Final.jar:4.1.29.Final]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_181]
W 2018-12-05T09:39:14,147 i.s.t.ExceptionHandler Exception caught for channel [id: 0xbfa8518a, L:/100.106.0.8:4801 - R:/100.106.0.11:36020], Adjusted frame length exceeds 32768: 54796 - discarded [sc-cluster-io-epoll-2]
io.netty.handler.codec.TooLongFrameException: Adjusted frame length exceeds 32768: 54796 - discarded
	at io.netty.handler.codec.LengthFieldBasedFrameDecoder.fail(LengthFieldBasedFrameDecoder.java:522) ~[netty-codec-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.handler.codec.LengthFieldBasedFrameDecoder.failIfNecessary(LengthFieldBasedFrameDecoder.java:500) ~[netty-codec-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.handler.codec.LengthFieldBasedFrameDecoder.exceededFrameLength(LengthFieldBasedFrameDecoder.java:387) ~[netty-codec-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.handler.codec.LengthFieldBasedFrameDecoder.decode(LengthFieldBasedFrameDecoder.java:430) ~[netty-codec-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.handler.codec.LengthFieldBasedFrameDecoder.decode(LengthFieldBasedFrameDecoder.java:343) ~[netty-codec-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:489) ~[netty-codec-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:428) ~[netty-codec-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:265) ~[netty-codec-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1434) [netty-transport-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:965) [netty-transport-4.1.29.Final.jar:4.1.29.Final]
	at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:808) [netty-transport-native-epoll-4.1.29.Final-linux-x86_64.jar:4.1.29.Final]
	at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:410) [netty-transport-native-epoll-4.1.29.Final-linux-x86_64.jar:4.1.29.Final]
	at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:310) [netty-transport-native-epoll-4.1.29.Final-linux-x86_64.jar:4.1.29.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:884) [netty-common-4.1.29.Final.jar:4.1.29.Final]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_181]

java.lang.IllegalStateException

This method throws IllegalStateException because it uses block():

  @Override
  public Collection<Member> members() {
    return Mono.fromCallable(() -> Collections.unmodifiableCollection(members.values()))
        .subscribeOn(scheduler)
        .block();
  }

Caused by: java.lang.IllegalStateException: block()/blockFirst()/blockLast() are blocking, which is not supported in thread parallel-2
at reactor.core.publisher.BlockingSingleSubscriber.blockingGet(BlockingSingleSubscriber.java:77) ~[reactor-core-3.2.3.RELEASE.jar:3.2.3.RELEASE]
at reactor.core.publisher.Mono.block(Mono.java:1493) ~[reactor-core-3.2.3.RELEASE.jar:3.2.3.RELEASE]
at io.scalecube.cluster.membership.MembershipProtocolImpl.members(MembershipProtocolImpl.java:265) ~[classes/:?]
at io.scalecube.cluster.membership.MembershipProtocolImpl.otherMembers(MembershipProtocolImpl.java:270) ~[classes/:?]
at io.scalecube.cluster.ClusterImpl.otherMembers(ClusterImpl.java:212) ~[classes/:?]

A fix is to return the collection directly:

  @Override
  public Collection<Member> members() {
    return Collections.unmodifiableCollection(members.values());
  }

Fix MembershipProtocol sync address selection

In MembershipProtocolImpl, fix this TODO:

  private Address selectSyncAddress() {
    // TODO [AK]: During running phase it should send to both seed or not seed members (issue #38)
    return !seedMembers.isEmpty()
        ? seedMembers.get(ThreadLocalRandom.current().nextInt(seedMembers.size()))
        : null;
  }

It looks like the idea was not to limit the algorithm to making Sync calls only to seed nodes; during the running phase Sync should be able to target non-seed members as well (issue #38).
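
A sketch of one possible direction (hypothetical helper, not the project's actual code; addresses are modeled as plain strings for self-containment): once the member is past the initial phase, pick the Sync target at random from the union of seed addresses and already-known member addresses.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ThreadLocalRandom;

public class SyncAddressSelector {

  /**
   * Picks a random Sync target from both seed addresses and addresses of
   * already-known members, instead of limiting the choice to seeds only.
   * Returns null when no candidates are available.
   */
  static String selectSyncAddress(List<String> seedAddresses, List<String> memberAddresses) {
    List<String> candidates = new ArrayList<>(seedAddresses);
    candidates.addAll(memberAddresses);
    if (candidates.isEmpty()) {
      return null;
    }
    return candidates.get(ThreadLocalRandom.current().nextInt(candidates.size()));
  }
}
```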

Transport security based on symmetric key cryptography

Currently all transport messages are sent as plain data and are therefore vulnerable to various attacks (eavesdropping, tampering, or injection of fake events) if a malicious user can listen to server-to-server communication. To introduce a security layer we want to support symmetric-key cryptography: a key is distributed in advance to each node of the cluster (as part of its configuration), and every message sent over the network is encrypted and decrypted with this key.

Initial implementation should use the AES-128 standard. The AES standard is considered one of the most secure and modern encryption standards. Additionally, it is a fast algorithm, and modern CPUs provide hardware instructions to make encryption and decryption very lightweight.
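
A minimal sketch of the intended encryption step using the JDK's built-in AES support (AES/GCM is chosen here for authenticated encryption; class name and IV framing are illustrative, not an actual wire format):

```java
import java.security.SecureRandom;
import java.util.Arrays;
import javax.crypto.Cipher;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.SecretKeySpec;

public class SymmetricTransportCodec {

  private static final int IV_LENGTH = 12; // recommended GCM nonce size
  private static final int TAG_BITS = 128;

  private final SecretKeySpec key; // AES-128 key, pre-distributed via configuration
  private final SecureRandom random = new SecureRandom();

  public SymmetricTransportCodec(byte[] keyBytes) {
    this.key = new SecretKeySpec(keyBytes, "AES"); // 16 bytes -> AES-128
  }

  /** Encrypts a message; the random IV is prepended to the ciphertext. */
  public byte[] encrypt(byte[] plain) throws Exception {
    byte[] iv = new byte[IV_LENGTH];
    random.nextBytes(iv);
    Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
    cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(TAG_BITS, iv));
    byte[] encrypted = cipher.doFinal(plain);
    byte[] out = new byte[IV_LENGTH + encrypted.length];
    System.arraycopy(iv, 0, out, 0, IV_LENGTH);
    System.arraycopy(encrypted, 0, out, IV_LENGTH, encrypted.length);
    return out;
  }

  /** Decrypts a message produced by {@link #encrypt}. */
  public byte[] decrypt(byte[] data) throws Exception {
    Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
    cipher.init(
        Cipher.DECRYPT_MODE, key, new GCMParameterSpec(TAG_BITS, Arrays.copyOf(data, IV_LENGTH)));
    return cipher.doFinal(data, IV_LENGTH, data.length - IV_LENGTH);
  }
}
```

Because CPUs ship AES hardware instructions (AES-NI), this per-message encrypt/decrypt stays lightweight, as noted above.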

Possible extensions of this feature may include (out of the scope for the initial implementation):

  • Support for different configurable encryption algorithms
  • Support for key rotation. Since we trust every node in the cluster that holds the key, a new key can be distributed via gossip. For some time both keys will be active, and the old key is removed once the new key has converged across the cluster.

Create more informative configuration documentation

  1. ClusterConfig contains configuration without javadoc, so without reading the source code it is impossible to understand the meaning of most of the configuration.
  2. Add a TimeUnit to all duration entries. It is hard to tell that they are in milliseconds.
  3. Add a configurable constant gossip period.
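
As an illustration of points 1 and 2, a documented duration entry might look like this (a sketch with a hypothetical field; the real ClusterConfig uses plain numeric millisecond fields):

```java
import java.time.Duration;

public class DocumentedConfig {

  /**
   * Interval between periodic Sync rounds with a randomly selected member.
   * Expressed as a {@link Duration} so the time unit is explicit at the call
   * site instead of an undocumented number of milliseconds.
   */
  private final Duration syncInterval;

  public DocumentedConfig(Duration syncInterval) {
    this.syncInterval = syncInterval;
  }

  public Duration syncInterval() {
    return syncInterval;
  }
}
```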

Gossip on AWS instances of different regions

So I am using scalecube-cluster for membership and gossip among nodes hosted in different AWS regions. The problem I am facing is that although I am able to join a cluster, when I use spreadGossip the other nodes do not receive anything. The problem vanishes if all the nodes are in the same region.

For example:
there are 3 nodes in us-west (N. California), 1 in Sydney, and 1 in Brazil.
The 3 US nodes are able to send and receive messages through spreadGossip, but the Sydney and Brazil nodes are not taking part in the communication. And if I print the size of the cluster it says 4. So although the nodes are in the cluster, they are not able to send/receive messages.

Can anyone shed some light on this? Are there any specific ports that need to be opened on the AWS instances? (I am running the cluster through ClusterConfig clientConfigWithFixedPort with port 3000.)

Listen address is being set as message.sender

I 2018-11-27T10:15:34,677 i.s.s.g.GatewayRunner ####################################################################### [main]
I 2018-11-27T10:15:34,677 i.s.s.g.GatewayRunner Starting Gateway on Config{servicePort=5801, discoveryPort=4801, seeds=[localhost:4801, localhost:4802], memberHost=null, memberPort=null} [main]
I 2018-11-27T10:15:34,677 i.s.s.g.GatewayRunner ####################################################################### [main]
I 2018-11-27T10:15:36,973 i.s.s.d.a.ServiceDiscovery Start scalecube service discovery with config: ClusterConfig{seedMembers=[localhost:4801, localhost:4802], metadata={}, syncInterval=30000, syncTimeout=3000, suspicionMult=5, syncGroup='default', pingInterval=1000, pingTimeout=500, pingReqMembers=3, gossipInterval=200, gossipFanout=3, gossipRepeatMult=3, transportConfig=TransportConfig{port=4801, connectTimeout=3000, useNetworkEmulator=false}, memberHost=null, memberPort=null} [rsocket-boss-3-1]
I 2018-11-27T10:15:37,069 i.s.t.TransportImpl Bound cluster transport on: 0:0:0:0:0:0:0:0%0:4801 [cluster-transport-select-epoll-2]
W 2018-11-27T10:15:37,375 i.s.t.TransportImpl Failed to connect to remote address localhost:4802, cause: io.netty.channel.AbstractChannel$AnnotatedConnectException: syscall:getsockopt(..) failed: Connection refused: localhost/127.0.0.1:4802 [cluster-transport-client-epoll-3]
I 2018-11-27T10:15:40,175 i.s.c.m.MembershipProtocolImpl Timeout getting initial SyncAck from seed members: [localhost:4802, localhost:4801] [sc-cluster-4801-1]
I 2018-11-27T10:15:40,202 i.s.s.g.w.WebsocketGateway Starting gateway with GatewayConfig{name='ws', gatewayClass=io.scalecube.services.gateway.ws.WebsocketGateway, options={}, port=7070, workerThreadPool=null} [sc-cluster-4801-1]
I 2018-11-27T10:15:40,280 i.s.s.g.w.WebsocketGateway Websocket Gateway has been started successfully on /0:0:0:0:0:0:0:0%0:7070 [ws-boss-6-1]
I 2018-11-27T10:15:40,281 i.s.s.g.h.HttpGateway Starting gateway with GatewayConfig{name='http', gatewayClass=io.scalecube.services.gateway.http.HttpGateway, options={}, port=8080, workerThreadPool=null} [sc-cluster-4801-1]
I 2018-11-27T10:15:40,286 i.s.s.g.h.HttpGateway HTTP Gateway has been started successfully on /0:0:0:0:0:0:0:0%0:8080 [http-boss-4-1]
I 2018-11-27T10:15:40,287 i.s.s.g.r.RSocketGateway Starting gateway with GatewayConfig{name='rsws', gatewayClass=io.scalecube.services.gateway.rsocket.RSocketGateway, options={}, port=9090, workerThreadPool=null} [sc-cluster-4801-1]
I 2018-11-27T10:15:40,300 i.s.s.g.r.RSocketGateway Rsocket Gateway has been started successfully on /0:0:0:0:0:0:0:0%0:9090 [rsws-boss-5-1]
I 2018-11-27T10:15:49,687 i.s.c.a.Slf4JConfigEventListener Config property changed: [
sun.nio.ch.bugLevel=null->***,	source=null->sys_prop,	origin=null
] [config-reloader]
W 2018-11-27T10:16:10,179 i.s.t.TransportImpl Failed to connect to remote address localhost:4802, cause: io.netty.channel.AbstractChannel$AnnotatedConnectException: syscall:getsockopt(..) failed: Connection refused: localhost/127.0.0.1:4802 [cluster-transport-client-epoll-3]
I 2018-11-27T10:16:26,888 i.s.s.d.a.ServiceDiscovery ServiceEndpoint was ADDED since new Member has joined the cluster [email protected]:43145/1776851854 : ServiceEndpoint{id='dd5ae5bd-6f10-4262-9c21-e0147af69859', host='100.116.0.5', port=32911, tags={}, serviceRegistrations=[ServiceRegistration{namespace='om2.exchange.trade', tags={}, methods=[ServiceMethodDefinition{action='orderFillEvents', tags={}}, ServiceMethodDefinition{action='orderFillInfo', tags={}}, ServiceMethodDefinition{action='errorEvents', tags={}}, ServiceMethodDefinition{action='marketRateEvents', tags={}}]}, ServiceRegistration{namespace='om2.exchange.userTrade', tags={}, methods=[ServiceMethodDefinition{action='orderFilledEvents', tags={}}, ServiceMethodDefinition{action='positionEvents', tags={}}]}]} [sc-cluster-4801-1]
I 2018-11-27T10:16:26,890 i.s.s.d.a.ServiceDiscovery Publish services registered: ServiceDiscoveryEvent{serviceEndpoint=ServiceEndpoint{id='dd5ae5bd-6f10-4262-9c21-e0147af69859', host='100.116.0.5', port=32911, tags={}, serviceRegistrations=[ServiceRegistration{namespace='om2.exchange.trade', tags={}, methods=[ServiceMethodDefinition{action='orderFillEvents', tags={}}, ServiceMethodDefinition{action='orderFillInfo', tags={}}, ServiceMethodDefinition{action='errorEvents', tags={}}, ServiceMethodDefinition{action='marketRateEvents', tags={}}]}, ServiceRegistration{namespace='om2.exchange.userTrade', tags={}, methods=[ServiceMethodDefinition{action='orderFilledEvents', tags={}}, ServiceMethodDefinition{action='positionEvents', tags={}}]}]}, type=REGISTERED} [sc-cluster-4801-1]
W 2018-11-27T10:16:26,897 i.s.t.TransportImpl Failed to connect to remote address 0:0:0:0:0:0:0:0%0:43145, cause: io.netty.channel.AbstractChannel$AnnotatedConnectException: syscall:getsockopt(..) failed: Connection refused: /0:0:0:0:0:0:0:0:43145 [cluster-transport-client-epoll-3]
I 2018-11-27T10:16:33,427 i.s.s.d.a.ServiceDiscovery ServiceEndpoint was ADDED since new Member has joined the cluster [email protected]:45143/290094393 : ServiceEndpoint{id='90c9d5c2-9320-45de-b83d-d4e068b08f01', host='100.116.0.2', port=40791, tags={}, serviceRegistrations=[ServiceRegistration{namespace='om2.exchange.orders', tags={}, methods=[ServiceMethodDefinition{action='placeOrder', tags={}}, ServiceMethodDefinition{action='cancelOrder', tags={}}, ServiceMethodDefinition{action='orderEvents', tags={}}, ServiceMethodDefinition{action='findOrder', tags={}}, ServiceMethodDefinition{action='userOrderEvents', tags={}}]}]} [sc-cluster-4801-1]
I 2018-11-27T10:16:33,427 i.s.s.d.a.ServiceDiscovery Publish services registered: ServiceDiscoveryEvent{serviceEndpoint=ServiceEndpoint{id='90c9d5c2-9320-45de-b83d-d4e068b08f01', host='100.116.0.2', port=40791, tags={}, serviceRegistrations=[ServiceRegistration{namespace='om2.exchange.orders', tags={}, methods=[ServiceMethodDefinition{action='placeOrder', tags={}}, ServiceMethodDefinition{action='cancelOrder', tags={}}, ServiceMethodDefinition{action='orderEvents', tags={}}, ServiceMethodDefinition{action='findOrder', tags={}}, ServiceMethodDefinition{action='userOrderEvents', tags={}}]}]}, type=REGISTERED} [sc-cluster-4801-1]
W 2018-11-27T10:16:33,429 i.s.t.TransportImpl Failed to connect to remote address 0:0:0:0:0:0:0:0%0:45143, cause: io.netty.channel.AbstractChannel$AnnotatedConnectException: syscall:getsockopt(..) failed: Connection refused: /0:0:0:0:0:0:0:0:45143 [cluster-transport-client-epoll-3]
I 2018-11-27T10:16:34,082 i.s.s.d.a.ServiceDiscovery ServiceEndpoint was ADDED since new Member has joined the cluster [email protected]:44553/1572369528 : ServiceEndpoint{id='ef489e95-7f54-4c56-9ec3-ef46d33c56ca', host='100.112.0.4', port=43307, tags={}, serviceRegistrations=[ServiceRegistration{namespace='om2.exchange.marketdata', tags={}, methods=[ServiceMethodDefinition{action='orderAcc

When a member is restarted, its previous incarnation can be marked as failed faster

Currently, when you restart a node, the cluster waits for the configured timeout and keeps pinging the previous incarnation during that time. But since a member with another memberId is already running on the same address, it can be deduced that the previous member will never return, so it can be removed from the membership table without waiting for the timeouts.
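
A sketch of the detection rule (a simplified model with string ids, not the actual membership table code): when an update arrives for an address already occupied by a member with a different id, the old record can be evicted immediately.

```java
import java.util.HashMap;
import java.util.Map;

public class MembershipTable {

  // address -> memberId of the member currently known at that address
  private final Map<String, String> membersByAddress = new HashMap<>();

  /**
   * Registers a member. If another memberId was already registered on the same
   * address, it belongs to a previous incarnation and is removed right away
   * instead of waiting for suspicion/failure-detection timeouts. Returns the
   * evicted memberId, or null if there was none.
   */
  public String register(String address, String memberId) {
    String previous = membersByAddress.put(address, memberId);
    return (previous != null && !previous.equals(memberId)) ? previous : null;
  }
}
```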

Add basic JMX support

Add some basic JMX support to the library. At the moment the user is left with just logs to see what is going on. For reference on JMX in general, see the official JMX documentation.
Proposed JMX object name: io.scalecube.cluster:name=Cluster
Attributes:
String member()
Int incarnation()
Map<String, String> metadata()
List<String> aliveMembers() // all alive members except local one
List<String> deadMembers() // let's keep the most recent N dead members for fulfilling this method
List<String> suspectedMembers()

In this task it's assumed that a new config property jmxEnabled will be added, true by default. It's also assumed that the new JMX component will be bound to the lifecycle of the ClusterImpl object and live inside it, i.e. not injected from outside as composition, but rather held as an aggregated object which is initialized as the last step, after all the usual cluster components have been started.
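
A minimal sketch of such an MBean using the proposed object name (standard-MBean pattern; the attribute set is abridged and the returned values are placeholders):

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class ClusterJmx {

  /** Standard MBean interface; JMX derives the attributes from the getters. */
  public interface ClusterMonitorMBean {
    String getMember();
    int getIncarnation();
  }

  public static class ClusterMonitor implements ClusterMonitorMBean {
    @Override public String getMember() { return "member-1"; } // placeholder value
    @Override public int getIncarnation() { return 0; }        // placeholder value
  }

  /** Registers the monitor under the proposed JMX object name. */
  public static ObjectName register() throws Exception {
    MBeanServer server = ManagementFactory.getPlatformMBeanServer();
    ObjectName name = new ObjectName("io.scalecube.cluster:name=Cluster");
    server.registerMBean(new ClusterMonitor(), name);
    return name;
  }
}
```

With this in place the attributes are visible in any JMX console (e.g. jconsole) under io.scalecube.cluster.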

In io.scalecube.transport.TransportImpl#bind0() bind on wildcard address and given port

Motivation
In bind0 we look for ip address and bind on it:

  public Mono<Transport> bind0() {
    return Mono.defer(
        () -> {
          ServerBootstrap server =
              bootstrapFactory.serverBootstrap().childHandler(incomingChannelInitializer);

          // Resolve listen IP address
          InetAddress listenAddress =
              Addressing.getLocalIpAddress(
                  config.getListenAddress(), config.getListenInterface(), config.isPreferIPv6());

          // Listen port
          int bindPort = config.getPort();

          return bind0(server, listenAddress, bindPort);
        });
  }

Later on we use the address bound in bind0() as the cluster member address. Hence information from a higher level leaks into the lowest level, which is the Transport class.
The goal of this ticket is to make the bind0 code as clean as possible and to move all [cluster][member][addressing] logic to a higher level, in order to keep a good separation of layers.
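
To illustrate the desired end state, here is a plain-NIO demonstration that binding with only a port yields the wildcard address, with no interface/IP resolution at the transport layer (the real code uses Netty's ServerBootstrap; this is a sketch of the concept, not the actual implementation):

```java
import java.net.InetSocketAddress;
import java.nio.channels.ServerSocketChannel;

public class WildcardBind {

  /**
   * Binds on the wildcard address and a given port (0 = ephemeral, for the
   * demo). No listen-address/interface resolution happens here; deciding which
   * address to advertise as the cluster member address is left to a higher
   * layer.
   */
  public static InetSocketAddress bindWildcard() throws Exception {
    ServerSocketChannel server = ServerSocketChannel.open();
    server.bind(new InetSocketAddress(0)); // no host given -> wildcard address
    InetSocketAddress local = (InetSocketAddress) server.getLocalAddress();
    server.close();
    return local;
  }
}
```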

Is unicast possible?

Suppose there is a cluster with three members A, B and C. Is it possible to send a message from A to just B? Every example included is about broadcasting. Is unicasting (sending a message to just one receiver) possible?
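
Yes, point-to-point messaging is part of the cluster interface. A sketch, assuming the 2.x-era API where Cluster exposes a send(Member, Message) method returning a Mono (check the javadoc of your version for the exact signatures):

```java
// Sketch only: assumes scalecube-cluster 2.x APIs (Cluster#send, Message#fromData).
import io.scalecube.cluster.Cluster;
import io.scalecube.cluster.Member;
import io.scalecube.transport.Message;

public class UnicastExample {

  /** Sends a message from the local node to exactly one member (no gossip, no broadcast). */
  static void sendToOne(Cluster cluster, Member target) {
    cluster.send(target, Message.fromData("hello B")).subscribe();
  }
}
```

The target Member can be looked up from cluster.otherMembers(), and the receiver observes the message on its transport listen stream.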

Use UDP transport protocol for gossip messages

Since the Gossip Protocol by design introduces a lot of reliability, it is safe and more efficient to use UDP transport for sending Gossip messages instead of TCP. Take into account that Failure Detector messages (Ack, Ping, Ping-Req) as well as Sync messages should still continue to use TCP at the transport layer.

Downstream dependencies:

  • Issue #53 Support UDP protocol by Transport
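
A self-contained sketch of the transport idea using plain NIO datagram channels (the real change would live in the Transport layer; names here are illustrative):

```java
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.DatagramChannel;
import java.nio.charset.StandardCharsets;

public class UdpGossipSketch {

  /** Sends a gossip payload over UDP to a peer and returns what the peer read. */
  public static String roundTrip(String payload) throws Exception {
    try (DatagramChannel receiver = DatagramChannel.open();
        DatagramChannel sender = DatagramChannel.open()) {
      receiver.bind(new InetSocketAddress("127.0.0.1", 0)); // ephemeral port
      InetSocketAddress target = (InetSocketAddress) receiver.getLocalAddress();

      // Fire-and-forget send: acceptable because the gossip protocol itself
      // tolerates message loss through repeated dissemination rounds.
      sender.send(ByteBuffer.wrap(payload.getBytes(StandardCharsets.UTF_8)), target);

      ByteBuffer buf = ByteBuffer.allocate(1024);
      receiver.receive(buf); // blocking receive (channel is in blocking mode)
      buf.flip();
      return StandardCharsets.UTF_8.decode(buf).toString();
    }
  }
}
```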

Cluster.updateMetadata must apply some ordering mechanism

Cluster.updateMetadata(), if called three times, results in three gossips, so the order in which the updateMetadata calls are applied on remote nodes is not guaranteed. Let's say the metadata hashmap is
[ k -> "1"], and you make four subsequent updateMetadata calls on node1: [ k -> "1"], [ k -> "2"], [ k -> "3"], [ k -> "4"]. As the client of this function one certainly expects the metadata to be [ k -> "4"] on all nodes (eventually, once the gossips arrive). But in reality the requests arrive at a remote node in non-deterministic order, hence k may end up containing anything.
