
fengjiachun / jupiter

Jupiter is a lightweight distributed service framework with very good performance.

License: Apache License 2.0

Java 100.00%
jupiter rpc rpc-framework java high-performance distributed-systems cluster spring service-consumer service-provider service-discovery service-registry protostuff kryo hessian nio socket netty netty4 microservice

jupiter's Introduction


Jupiter:

  • Jupiter is a lightweight distributed service framework with very good performance.

Jupiter Architecture:

       ═ ═ ═▷ init         ─ ─ ─ ▷ async       ──────▶ sync
----------------------------------------------------------------------------------------

                            ┌ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─
                                       ┌ ─ ─ ─ ┐ │
           ─ ─ ─ ─ ─ ─ ─ ─ ─│ Registry  Monitor ───────────────────────────┐
          │                            └ ─ ─ ─ ┘ │                         │
                            └ ─ ─△─ ─ ─ ─ ─△─ ─ ─                          ▼
          │                                                           ┌ ─ ─ ─ ─
        Notify                   ║         ║                            Telnet │
          │         ═ ═ ═ ═ ═ ═ ═           ═ ═ ═ ═ ═ ═ ═ ═ ═         └ ─ ─ ─ ─
                   ║                                         ║             ▲
          │    Subscribe                                  Register         │
                   ║                                         ║             │
          │  ┌ ─ ─ ─ ─ ─                          ┌ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─    │
                        │─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ▷           ┌ ─ ─ ─ ┐ │   │
          └ ▷│ Consumer           Invoke          │ Provider  Monitor ─────┘
                        │────────────────────────▶           └ ─ ─ ─ ┘ │
             └ ─ ─ ─ ─ ─                          └ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─

---------------------------------------------------------------------------------------

Performance:

Documentation:

A single RPC call:

Thanks to @远墨 for providing the diagram

Quick start:

Project requirements:
  • JDK 1.8 or later
  • Dependency management tool: Maven 3.x
Maven dependency:
<dependency>
    <groupId>org.jupiter-rpc</groupId>
    <artifactId>jupiter-all</artifactId>
    <version>${jupiter.version}</version>
</dependency>
A simple invocation example:
1. Define the service interface:
@ServiceProvider(group = "test", name = "serviceTest")
public interface ServiceTest {
    String sayHelloString();
}

@ServiceProvider:
    - It is recommended to specify the service information on each service interface via this annotation. If you do not want your business code to depend on jupiter, you may omit the annotation and set the service information manually instead.
        + group: service group (optional, default group is 'Jupiter')
        + name: service name (optional, defaults to the fully qualified interface name)
2. Implement the service:
@ServiceProviderImpl(version = "1.0.0")
public class ServiceTestImpl implements ServiceTest {

    @Override
    public String sayHelloString() {
        return "Hello jupiter";
    }
}

@ServiceProviderImpl:
    - It is recommended to specify the service version on each service implementation via this annotation. If you do not want your business code to depend on jupiter, you may omit the annotation and set the version manually instead.
        + version: service version (optional, default version is '1.0.0')
3. Start a registry:
- Option 1: use jupiter's built-in registry:
public class HelloJupiterRegistryServer {

    public static void main(String[] args) {
        // registry server
        RegistryServer registryServer = RegistryServer.Default.createRegistryServer(20001, 1);
        try {
            registryServer.startRegistryServer();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}
- Option 2: use zookeeper as the registry:
The built-in registry is only recommended for test environments; in production use the zookeeper implementation.

// configure zookeeper as the registry
JServer server = new DefaultServer(RegistryService.RegistryType.ZOOKEEPER);
JClient client = new DefaultClient(RegistryService.RegistryType.ZOOKEEPER);

Add the jupiter-registry-zookeeper dependency to both the server and the client (jupiter-all already includes jupiter-registry-zookeeper):

<dependency>
    <groupId>org.jupiter-rpc</groupId>
    <artifactId>jupiter-registry-zookeeper</artifactId>
    <version>${jupiter.version}</version>
</dependency>
4. Start the service provider (Server):
public class HelloJupiterServer {

    public static void main(String[] args) throws Exception {
        JServer server = new DefaultServer().withAcceptor(new JNettyTcpAcceptor(18090));
        // provider
        ServiceTestImpl service = new ServiceTestImpl();
        // local registration
        ServiceWrapper provider = server.serviceRegistry()
                .provider(service)
                .register();
        // connect to the registry server
        server.connectToRegistryServer("127.0.0.1:20001");
        // publish the service to the registry
        server.publish(provider);
        // start the server
        server.start();
    }
}
5. Start the service consumer (Client):
public class HelloJupiterClient {

    public static void main(String[] args) {
        JClient client = new DefaultClient().withConnector(new JNettyTcpConnector());
        // connect to the RegistryServer
        client.connectToRegistryServer("127.0.0.1:20001");
        // automatically manage available connections
        JConnector.ConnectionWatcher watcher = client.watchConnections(ServiceTest.class);
        // wait for a connection to become available
        if (!watcher.waitForAvailable(3000)) {
            throw new ConnectFailedException();
        }

        ServiceTest service = ProxyFactory.factory(ServiceTest.class)
                .version("1.0.0")
                .client(client)
                .newProxyInstance();

        service.sayHelloString();
    }
}

Server/Client code examples

New features

v1.3 adds InvokeType.AUTO: when a method's return type is CompletableFuture or one of its subclasses, the call is automatically adapted to an asynchronous invocation; otherwise it is synchronous. For a concrete demo see here, and see the sketch below.
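For illustration only, a minimal sketch of such an interface under InvokeType.AUTO (the AsyncServiceTest name and its methods are hypothetical, not taken from the jupiter demo):

import java.util.concurrent.CompletableFuture;

@ServiceProvider(group = "test", name = "asyncServiceTest")
public interface AsyncServiceTest {

    // CompletableFuture return type: with InvokeType.AUTO this call is made
    // asynchronously and the future completes when the response arrives
    CompletableFuture<String> sayHelloAsync();

    // plain return type: invoked synchronously
    String sayHello();
}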

Example of use with Spring:
<jupiter:server id="jupiterServer" registryType="default"> <!-- registryType="zookeeper" means using zookeeper as the registry -->
    <jupiter:property registryServerAddresses="127.0.0.1:20001" />
</jupiter:server>

<!-- provider -->
<bean id="serviceTest" class="org.jupiter.example.ServiceTestImpl" />

<jupiter:provider id="serviceTestProvider" server="jupiterServer" providerImpl="serviceTest">
    <jupiter:property weight="100"/>
</jupiter:provider>
<jupiter:client id="jupiterClient" registryType="default"> <!-- registryType="zookeeper" means using zookeeper as the registry -->
    <jupiter:property registryServerAddresses="127.0.0.1:20001" />
</jupiter:client>

<!-- consumer -->
<jupiter:consumer id="serviceTest" client="jupiterClient" interfaceClass="org.jupiter.example.ServiceTest">
    <jupiter:property version="1.0.0.daily" />
    <jupiter:property serializerType="proto_stuff" />
    <jupiter:property loadBalancerType="round_robin" />
    <jupiter:property timeoutMillis="3000" />
    <jupiter:property clusterStrategy="fail_over" />
    <jupiter:property failoverRetries="2" />
    <jupiter:methodSpecials>
        <!-- per-method configuration -->
        <jupiter:methodSpecial methodName="sayHello" timeoutMillis="5000" clusterStrategy="fail_fast" />
    </jupiter:methodSpecials>
</jupiter:consumer>
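For illustration only, a minimal sketch of invoking the consumer defined above from a classpath XML application context (the file name jupiter-consumer.xml is an assumption, and it is assumed the <jupiter:consumer> element registers a proxy bean under its id):

import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

public class HelloJupiterSpringClient {

    public static void main(String[] args) {
        // load the XML configuration shown above
        ApplicationContext ctx = new ClassPathXmlApplicationContext("jupiter-consumer.xml");
        // look up the consumer proxy by its bean id ("serviceTest")
        ServiceTest service = ctx.getBean("serviceTest", ServiceTest.class);
        service.sayHelloString();
    }
}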

SpringServer/SpringClient code examples

Other

jupiter's People

Contributors

berrycol, buptunixguys, chenzhang22, dependabot[bot], dyu, fengjiachun, xcorail, xingguang2013, zdxue, zhanghailin1995


jupiter's Issues

Add some Linux-specific TCP options

public static final JOption<Boolean> SO_REUSEPORT = valueOf("SO_REUSEPORT");

public static final JOption<Boolean> TCP_CORK = valueOf("TCP_CORK");

public static final JOption<Long> TCP_NOTSENT_LOWAT = valueOf("TCP_NOTSENT_LOWAT");

public static final JOption<Integer> TCP_KEEPIDLE = valueOf("TCP_KEEPIDLE");

public static final JOption<Integer> TCP_KEEPINTVL = valueOf("TCP_KEEPINTVL");

public static final JOption<Integer> TCP_KEEPCNT = valueOf("TCP_KEEPCNT");

public static final JOption<Integer> TCP_USER_TIMEOUT = valueOf("TCP_USER_TIMEOUT");

public static final JOption<Boolean> IP_FREEBIND = valueOf("IP_FREEBIND");

public static final JOption<Boolean> IP_TRANSPARENT = valueOf("IP_TRANSPARENT");

public static final JOption<Integer> TCP_FASTOPEN = valueOf("TCP_FASTOPEN");

public static final JOption<Boolean> TCP_FASTOPEN_CONNECT = valueOf("TCP_FASTOPEN_CONNECT");

public static final JOption<Integer> TCP_DEFER_ACCEPT = valueOf("TCP_DEFER_ACCEPT");
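For context, a sketch (not Jupiter's actual implementation) of how a few of these options map onto Netty's native epoll transport, assuming the server runs on Linux with netty-transport-native-epoll on the classpath:

import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.epoll.EpollChannelOption;
import io.netty.channel.epoll.EpollEventLoopGroup;
import io.netty.channel.epoll.EpollServerSocketChannel;

public class EpollOptionsSketch {

    public static ServerBootstrap configure() {
        return new ServerBootstrap()
                .group(new EpollEventLoopGroup())
                .channel(EpollServerSocketChannel.class)
                // allow multiple acceptors to bind the same port
                .option(EpollChannelOption.SO_REUSEPORT, true)
                // complete accept() only once data has arrived
                .option(EpollChannelOption.TCP_DEFER_ACCEPT, 1)
                // TCP Fast Open backlog for pending TFO connections
                .option(EpollChannelOption.TCP_FASTOPEN, 3)
                // coalesce small writes on accepted connections
                .childOption(EpollChannelOption.TCP_CORK, true);
    }
}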

Support JDK 9

Some of the dependency jars may not support JDK 9 yet; this will be addressed gradually.

A brief zk server disconnect loses service information, and the service cannot be republished after reconnecting

Problem reported by @jacksun in the Jupiter (397633380) group:
Registration with zk is an asynchronous process. If the zk server disconnects briefly, the RegisterMeta may never be added to registerMetaSet, so after the connection is re-established, registerMetaSet no longer contains that entry.

@jacksun suggests: 1. start a watchDog that monitors whether every node in registerMetaSet has been registered; 2. add the entry to registerMetaSet at the start of doRegister. In my own zk disconnect tests there is some probability that the registered node is lost.

TCP_FASTOPEN_CONNECT error

Failed to set channel option 'io.netty.channel.epoll.EpollChannelOption#TCP_FASTOPEN_CONNECT' with value 'false' for channel

This error appears after upgrading to the latest version.

Can the performance of AbstractFuture#awaitDone(boolean timed, long nanos) be improved?

When awaiting, could a spin be added to reduce context switches for the waiter?

nanos = deadline - System.nanoTime();
if (nanos <= 0L) { // timeout configured, block the current thread (for the specified duration)
    removeWaiter(q);
    return state;
}
LockSupport.parkNanos(this, nanos);

If the remaining wait time is very short, say under 1000 ns, the context switch is not worth it. See the Guava 23 implementation:
com.google.common.util.concurrent.AbstractFuture#get(long, java.util.concurrent.TimeUnit)
Also note that System.nanoTime() may return 0.
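For illustration, a minimal sketch of the suggested spin-then-park change, continuing the snippet above (the SPIN_THRESHOLD_NANOS constant is an assumption, not Jupiter code):

// below this remaining time, spinning is cheaper than parking the thread
static final long SPIN_THRESHOLD_NANOS = 1000L;

long nanos = deadline - System.nanoTime();
if (nanos <= 0L) {
    removeWaiter(q);
    return state;
}
if (nanos > SPIN_THRESHOLD_NANOS) {
    // only pay for a context switch when the wait is long enough
    LockSupport.parkNanos(this, nanos);
}
// for shorter waits, fall through and re-check the state on the next loop
// iteration instead of parking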

LowCopyProtocolEncoder throws java.lang.NullPointerException

When DefaultProviderProcessor runs doHandleException, the JResponsePayload's outputBuf is not initialized, so LowCopyProtocolEncoder throws a NullPointerException during encode, at this line:
ByteBuf byteBuf = (ByteBuf) response.outputBuf().backingObject();

Exception:

15:30:53.860 [jupiter.acceptor.worker-2-4] WARN o.j.r.p.p.DefaultProviderProcessor - Service error message sent failed: [id: 0xbce01912, L:/127.0.0.1:18090 - R:/127.0.0.1:53846], io.netty.handler.codec.EncoderException: java.lang.NullPointerException
at org.jupiter.transport.netty.handler.LowCopyProtocolEncoder.write(LowCopyProtocolEncoder.java:75)
at io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:738)
at io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:730)
at io.netty.channel.AbstractChannelHandlerContext.access$1900(AbstractChannelHandlerContext.java:38)
at io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.write(AbstractChannelHandlerContext.java:1081)
at io.netty.channel.AbstractChannelHandlerContext$WriteAndFlushTask.write(AbstractChannelHandlerContext.java:1128)
at io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.run(AbstractChannelHandlerContext.java:1070)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute$$$capture(AbstractEventExecutor.java:163)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasksFrom(SingleThreadEventExecutor.java:380)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:355)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:454)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:886)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.NullPointerException
at org.jupiter.transport.netty.handler.LowCopyProtocolEncoder.doEncodeResponse(LowCopyProtocolEncoder.java:118)
at org.jupiter.transport.netty.handler.LowCopyProtocolEncoder.encode(LowCopyProtocolEncoder.java:87)
at org.jupiter.transport.netty.handler.LowCopyProtocolEncoder.write(LowCopyProtocolEncoder.java:66)
... 14 more

Jupiter performance has improved noticeably

turbo now only leads on the existUser case; Jupiter is first on all the other cases.
Please find time to write an article and share the experience.

No reservable CPU for Thread[affinity.jupiter-provider-processor

17:22:39.286 [affinity.jupiter-provider-processor #1] WARN  net.openhft.affinity.LockInventory - Unable to acquire lock on CPU 6 for thread Thread[affinity.jupiter-provider-processor #1,5,main], trying to find another CPU
17:22:39.297 [affinity.jupiter-provider-processor #2] WARN  net.openhft.affinity.LockInventory - Unable to acquire lock on CPU 5 for thread Thread[affinity.jupiter-provider-processor #2,5,main], trying to find another CPU
17:22:39.299 [affinity.jupiter-provider-processor #3] WARN  net.openhft.affinity.LockInventory - Unable to acquire lock on CPU 4 for thread Thread[affinity.jupiter-provider-processor #3,5,main], trying to find another CPU
17:22:39.301 [affinity.jupiter-provider-processor #5] WARN  net.openhft.affinity.LockInventory - Unable to acquire lock on CPU 3 for thread Thread[affinity.jupiter-provider-processor #5,5,main], trying to find another CPU
17:22:39.311 [affinity.jupiter-provider-processor #4] WARN  net.openhft.affinity.LockInventory - Unable to acquire lock on CPU 2 for thread Thread[affinity.jupiter-provider-processor #4,5,main], trying to find another CPU
17:22:39.323 [affinity.jupiter-provider-processor #7] WARN  net.openhft.affinity.LockInventory - Unable to acquire lock on CPU 1 for thread Thread[affinity.jupiter-provider-processor #7,5,main], trying to find another CPU
17:22:39.326 [affinity.jupiter-provider-processor #7] WARN  net.openhft.affinity.LockInventory - No reservable CPU for Thread[affinity.jupiter-provider-processor #7,5,main]
17:22:39.328 [affinity.jupiter-provider-processor #6] WARN  net.openhft.affinity.LockInventory - Unable to acquire lock on CPU 1 for thread Thread[affinity.jupiter-provider-processor #6,5,main], trying to find another CPU
17:22:39.330 [affinity.jupiter-provider-processor #6] WARN  net.openhft.affinity.LockInventory - No reservable CPU for Thread[affinity.jupiter-provider-processor #6,5,main]

Log messages are missing the key error address information

It would help to include the address of the requested host in the failover strategy's error log, to make troubleshooting and recovery in production faster.
dubbo logs the following, which I think is reasonable:

logger.warn("Although retry the method " + invocation.getMethodName()
                            + " in the service " + getInterface().getName()
                            + " was successful by the provider " + invoker.getUrl().getAddress()
                            + ", but there have been failed providers " + providers
                            + " (" + providers.size() + "/" + copyinvokers.size()
                            + ") from the registry " + directory.getUrl().getAddress()
                            + " on the consumer " + NetUtils.getLocalHost()
                            + " using the dubbo version " + Version.getVersion() + ". Last error is: "
                            + le.getMessage(), le);

Suggestion: @ServiceProvider should not be required when defining an interface

Defining an interface currently requires @ServiceProvider. This strongly typed approach lacks flexibility in some scenarios: the project that defines the interfaces must depend on the Jupiter jar, which is inconvenient when the interfaces are shared with other parties. It would be good to also support a weakly typed configuration style instead of requiring @ServiceProvider.

A possible issue with AbstractFuture's isDone method

I think AbstractFuture's isDone method should be changed to

public boolean isDone() {
    return state > COMPLETING;
}

because setting state to COMPLETING and then to NORMAL or EXCEPTIONAL is not atomic. If, during that window, the following is called:

        if (isDone()) {
            notifyListeners(state(), outcome());
        }

listeners may be notified before the result is actually ready.

Not sure whether my understanding is right; please correct me if I am wrong :-)

About protocols

dubbo implements many optional protocols: dubbo, rmi, and so on.
Why does dubbo need all these protocols?
Why doesn't Jupiter implement them?

Possible optimization for overloaded-method matching

From the code I have read, overloaded-method matching is currently done on the server side at runtime. This adds overhead, the implementation is fairly complex, and there may be hidden bugs.

So my idea is: could this matching be done at compile time by the compiler instead? Give every method a unique ID (for example derived from package + class + method name + parameter list); when the client invokes the stub, the compiler has already determined the best-matching ID, and the message actually sent contains that method ID plus the arguments. This moves the matching cost from runtime to compile time, and the implementation would be simpler than searching for the matching method (see the sketch after this issue).

This is just a rough idea; I have not considered the implementation details. What do you think?
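For illustration, a minimal sketch of one way such a unique method ID could be derived (purely hypothetical, not part of Jupiter):

import java.lang.reflect.Method;
import java.util.StringJoiner;

public class MethodIdSketch {

    // build a stable key from class + method name + parameter types, e.g.
    // "org.jupiter.example.ServiceTest#m1(java.lang.String,java.lang.String,java.util.List)"
    public static String methodId(Method method) {
        StringJoiner params = new StringJoiner(",", "(", ")");
        for (Class<?> p : method.getParameterTypes()) {
            params.add(p.getName());
        }
        return method.getDeclaringClass().getName() + "#" + method.getName() + params;
    }
}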

Question about the disruptor integration

Your common module contains a wrapper around disruptor, but I could not find any place in the code that actually uses it. Do you plan to apply it in the related modules?

java.lang.NoClassDefFoundError: io/opentracing/Tracer

Maven configuration:

<dependency>
    <groupId>org.jupiter-rpc</groupId>
    <artifactId>jupiter-all</artifactId>
    <version>1.2.16</version>
</dependency>

When I run the HelloJupiterClient demo from the README, the error in the title is thrown. Stack trace:
Exception in thread "main" java.lang.NoClassDefFoundError: io/opentracing/Tracer
at org.jupiter.tracing.TracerFactory.(TracerFactory.java:38)
at org.jupiter.tracing.OpenTracingContext.(OpenTracingContext.java:33)
at org.jupiter.tracing.OpenTracingFilter.doFilter(OpenTracingFilter.java:59)
at org.jupiter.rpc.DefaultFilterChain.doFilter(DefaultFilterChain.java:34)
at org.jupiter.rpc.consumer.invoker.AbstractInvoker$Chains.invoke(AbstractInvoker.java:146)
at org.jupiter.rpc.consumer.invoker.AbstractInvoker.doInvoke(AbstractInvoker.java:43)
at org.jupiter.rpc.consumer.invoker.SyncInvoker.invoke(SyncInvoker.java:52)
at com.luv.ServiceTest$ByteBuddy$0TeSwTu2.sayHelloString(Unknown Source)
at com.luv.HelloJupiterClient.main(HelloJupiterClient.java:31)
Caused by: java.lang.ClassNotFoundException: io.opentracing.Tracer
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)

Is there a problem with how jupiter-all is packaged for maven? I also tried several earlier versions and the problem was always there. I am curious why nobody has filed this issue before; maybe it is just my environment?

I have only recently started learning RPC frameworks and am studying the Jupiter source, so please don't mind if the question is off the mark...

Exception when an interface argument is null

@ServiceProvider(group = "demo", name = "helloworld")
public interface HelloService {
    String m1(String arg1, String arg2, List<String> arg3);
}
helloService.m1("arg1", null, Arrays.asList("arg3.1", "arg3.2"))

The call fails with:
Exception in thread "main" org.jupiter.rpc.exception.JupiterBizException: java.lang.ClassCastException: java.util.ArrayList cannot be cast to java.lang.String
at org.jupiter.rpc.consumer.future.DefaultInvokeFuture.setException(DefaultInvokeFuture.java:215)
at org.jupiter.rpc.consumer.future.DefaultInvokeFuture.doReceived(DefaultInvokeFuture.java:189)
at org.jupiter.rpc.consumer.future.DefaultInvokeFuture.received(DefaultInvokeFuture.java:243)
at org.jupiter.rpc.consumer.processor.task.MessageTask.run(MessageTask.java:81)
at org.jupiter.rpc.executor.CallerRunsExecutorFactory$1.execute(CallerRunsExecutorFactory.java:36)
at org.jupiter.rpc.consumer.processor.DefaultConsumerProcessor.handleResponse(DefaultConsumerProcessor.java:52)
at org.jupiter.transport.netty.handler.connector.ConnectorHandler.channelRead(ConnectorHandler.java:52)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
at io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
at org.jupiter.transport.netty.handler.IdleStateChecker.channelRead(IdleStateChecker.java:186)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
at io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:141)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:451)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:886)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)

Some questions about Netty

While reading the Jupiter source I got the impression that you have studied Netty much more deeply than I have, so I would like to ask about two Netty issues I have not been able to figure out.

  1. Using a custom DefaultThreadFactory causes a performance drop.
    I measured this on my Windows 10 machine and could not find a reasonable explanation; looking at the code, there seems to be no essential difference.
    Yet the result differs by about 20%.
    netty/netty#7538

  2. Multiple Channels only outperform a single Channel with large message bodies.
    For example, in my test project rpc-benchmark, listUser is the only case where multiple Channels beat a single Channel.
    This is counter-intuitive; I would expect a number of Channels equal to the CPU count to be fastest in every case.

Could REST API support be provided?

With its current architecture, Jupiter can only run on the JAVA technology stack.
In the scenarios I encounter, an SOA system contains microservices in multiple language stacks, such as node, go, and php.
Jupiter could provide a REST API to interoperate with other languages: using http(s) solves the protocol problem, and a standard REST API definition makes it convenient to build clients in other languages.

References
https://dangdangdotcom.github.io/dubbox/rest.html
https://github.com/brpc/brpc/blob/master/docs/cn/http_client.md

Questions about the Reflects.fastInvoke benchmark results

Hi feng, long time no see.
Recently I got interested in how a Jupiter provider invokes local methods. Studying the source, I found the implementation very clever; later I learned that ReflectASM, a high-performance reflection library, is designed the same way, so I decided to benchmark the two. The results, however, seem to favor ReflectASM.

The benchmark code is below:

    <!-- https://mvnrepository.com/artifact/com.esotericsoftware.reflectasm/reflectasm -->
    <dependency>
        <groupId>com.esotericsoftware.reflectasm</groupId>
        <artifactId>reflectasm</artifactId>
        <version>1.05</version>
    </dependency>

public static class Mock{
    static int i = 1;
    public int get() {
        return i++;
    }
}

public static void main(String[] args) {
    int times = 10000000; // number of iterations
    Mock mock = new Mock();
    MethodAccess access = MethodAccess.get(Mock.class);
    int index = access.getIndex("get");
    Class[] classes = new Class[]{};
    Object result=null;

    //warmup
    System.out.println("Warmup");
    long t1 = System.currentTimeMillis();
    for (int i = 0; i< times; i++) {
        result = Reflects.fastInvoke(mock, "get", classes, null);
    }
    long t2 = System.currentTimeMillis();
    System.out.println(String.format("fastInvoke:%d %d",t2-t1,result));
    for (int i = 0; i< times; i++) {
        result = Reflects.invoke(mock, "get", classes, null);
    }
    long t3 = System.currentTimeMillis();
    System.out.println(String.format("invoke:%d %d",t3-t2,result));
    for (int i = 0; i< times; i++) {
        result = mock.get();
    }
    long t4 = System.currentTimeMillis();
    System.out.println(String.format("native:%d %d",t4-t3, result));
    for (int i = 0; i< times; i++) {
        result = access.invoke(mock, "get");
    }
    long t5 = System.currentTimeMillis();
    System.out.println(String.format("ReflectASM:%d %d",t5-t4,result));
    for (int i = 0; i< times; i++) {
        result = access.invoke(mock, index);
    }
    long t6 = System.currentTimeMillis();
    System.out.println(String.format("ReflectASM with index:%d %d",t6-t5, result));
    for (int i = 0; i< times; i++) {
        result = access.invoke(mock, access.getIndex("get", classes));
    }
    long t7 = System.currentTimeMillis();
    System.out.println(String.format("ReflectASM with full match:%d %d",t7-t6, result));


    //benchmark begin
    for (int j = 1; j <= 10; j++) {
        System.out.println();
        System.out.println("iterate " + j);
        System.out.println();
        t1 = System.currentTimeMillis();
        for (int i = 0; i < times; i++) {
            result =  Reflects.fastInvoke(mock, "get", classes, null);
        }
        t2 = System.currentTimeMillis();
        System.out.println(String.format("fastInvoke:%d %d", t2 - t1, result));
        for (int i = 0; i < times; i++) {
            result =  Reflects.invoke(mock, "get", classes, null);
        }
        t3 = System.currentTimeMillis();
        System.out.println(String.format("invoke:%d %d", t3 - t2, result));
        for (int i = 0; i < times; i++) {
            result =  mock.get();
        }
        t4 = System.currentTimeMillis();
        System.out.println(String.format("native:%d %d", t4 - t3, result));
        for (int i = 0; i < times; i++) {
            result =  access.invoke(mock, "get");
        }
        t5 = System.currentTimeMillis();
        System.out.println(String.format("ReflectASM:%d %d", t5 - t4, result));
        for (int i = 0; i < times; i++) {
            result =  access.invoke(mock, index);
        }
        t6 = System.currentTimeMillis();
        System.out.println(String.format("ReflectASM with index:%d %d", t6 - t5, result));
        for (int i = 0; i < times; i++) {
            result =  access.invoke(mock, access.getIndex("get", classes));
        }
        t7 = System.currentTimeMillis();
        System.out.println(String.format("ReflectASM with full match:%d %d", t7 - t6, result));
    }
}

结果:
Warmup
fastInvoke:227 10000000
invoke:2033 20000000
native:51 30000000
ReflectASM:62 40000000
ReflectASM with index:55 50000000
ReflectASM with full match:72 60000000

iterate 1

fastInvoke:109 70000000
invoke:1339 80000000
native:55 90000000
ReflectASM:66 100000000
ReflectASM with index:55 110000000
ReflectASM with full match:65 120000000

iterate 2

fastInvoke:142 130000000
invoke:1336 140000000
native:31 150000000
ReflectASM:36 160000000
ReflectASM with index:31 170000000
ReflectASM with full match:44 180000000

iterate 3

fastInvoke:97 190000000
invoke:1302 200000000
native:30 210000000
ReflectASM:36 220000000
ReflectASM with index:31 230000000
ReflectASM with full match:44 240000000

iterate 4

fastInvoke:95 250000000
invoke:1268 260000000
native:32 270000000
ReflectASM:35 280000000
ReflectASM with index:34 290000000
ReflectASM with full match:44 300000000

iterate 5

fastInvoke:95 310000000
invoke:1271 320000000
native:31 330000000
ReflectASM:35 340000000
ReflectASM with index:31 350000000
ReflectASM with full match:45 360000000

iterate 6

fastInvoke:95 370000000
invoke:1266 380000000
native:32 390000000
ReflectASM:36 400000000
ReflectASM with index:31 410000000
ReflectASM with full match:45 420000000

iterate 7

fastInvoke:95 430000000
invoke:1256 440000000
native:31 450000000
ReflectASM:35 460000000
ReflectASM with index:31 470000000
ReflectASM with full match:44 480000000

iterate 8

fastInvoke:95 490000000
invoke:1260 500000000
native:32 510000000
ReflectASM:36 520000000
ReflectASM with index:31 530000000
ReflectASM with full match:44 540000000

iterate 9

fastInvoke:96 550000000
invoke:1250 560000000
native:31 570000000
ReflectASM:35 580000000
ReflectASM with index:32 590000000
ReflectASM with full match:44 600000000

iterate 10

fastInvoke:94 610000000
invoke:1245 620000000
native:31 630000000
ReflectASM:35 640000000
ReflectASM with index:32 650000000
ReflectASM with full match:44 660000000

As you can see, ReflectASM invoked with a pre-computed index is almost as fast as a direct call; ReflectASM without overload resolution (matching only by method name, not by parameter types) is slightly slower; ReflectASM with parameter-type matching is a bit slower still; and fastInvoke is the slowest, more than twice the previous one. Comparing the two sources, the method-matching logic is the same, but I did not study the ASM part because I could not follow it...

feng, could you check whether my benchmark logic has any problems : )
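As a side note, wall-clock loops like the one above are easy to skew (JIT warm-up, dead-code elimination); a JMH harness would give more reliable numbers. A minimal sketch, assuming the jmh-core and jmh-generator-annprocess dependencies are on the classpath and reusing the Mock, MethodAccess, and Reflects setup from the code above:

import com.esotericsoftware.reflectasm.MethodAccess;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;

@State(Scope.Thread)
public class FastInvokeBenchmark {

    private final Mock mock = new Mock();
    private final MethodAccess access = MethodAccess.get(Mock.class);
    private final int index = access.getIndex("get");
    private final Class[] classes = new Class[]{};

    @Benchmark
    public Object fastInvoke() {
        // Jupiter's reflective fast invoke, as used in the loop above
        return Reflects.fastInvoke(mock, "get", classes, null);
    }

    @Benchmark
    public Object reflectAsmWithIndex() {
        // ReflectASM with a pre-computed method index
        return access.invoke(mock, index);
    }
}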
