
RPC framework based on C++ Workflow. Supports SRPC, Baidu bRPC, Tencent tRPC, thrift protocols.

License: Apache License 2.0

rpc protobuf thrift workflow trpc brpc opentelemetry

srpc's Introduction

Chinese version


srpc-logo

Introduction

SRPC is an enterprise-level RPC system used by almost all online services in Sogou. It handles tens of billions of requests every day, covering search, recommendation, advertising systems, and other types of services.

Based on Sogou C++ Workflow, it is an excellent choice for high-performance, low-latency, lightweight RPC systems. It contains AOP (aspect-oriented) modules that can report metrics and traces to a variety of cloud-native systems, such as OpenTelemetry.

Its main features include:

  • Support multiple RPC protocols: SRPC, bRPC, Thrift, tRPC
  • Support multiple operating systems: Linux, MacOS, Windows
  • Support several IDL formats: Protobuf, Thrift
  • Support several data formats transparently: Json, Protobuf, Thrift Binary
  • Support several compression formats (the framework decompresses automatically): gzip, zlib, snappy, lz4
  • Support several communication protocols transparently: tcp, udp, sctp, tcp ssl
  • With HTTP+JSON, you can communicate with the client or server in any language
  • Use it together with Workflow's Series and Parallel to orchestrate computation and other asynchronous resources
  • Perfectly compatible with all Workflow functions, such as name service, upstream and other components
  • Report Tracing to OpenTelemetry
  • Report Metrics to OpenTelemetry and Prometheus
  • More features...

Installation

srpc has been packaged for Debian and Fedora, so it can be installed either from source code or from the system package.

Reference: Linux, MacOS, Windows Installation and Compilation Guide

Quick Start

Let's quickly learn how to use it in a few steps.

For more detailed usage, please refer to Documents and Tutorial.

1. example.proto

syntax = "proto3"; // You can use either proto2 or proto3. Both are supported by srpc

message EchoRequest {
    string message = 1;
    string name = 2;
};

message EchoResponse {
    string message = 1;
};

service Example {
    rpc Echo(EchoRequest) returns (EchoResponse);
};

2. generate code

protoc example.proto --cpp_out=./ --proto_path=./
srpc_generator protobuf ./example.proto ./

3. server.cc

#include <stdio.h>
#include <signal.h>
#include "example.srpc.h"

using namespace srpc;

class ExampleServiceImpl : public Example::Service
{
public:
    void Echo(EchoRequest *request, EchoResponse *response, RPCContext *ctx) override
    {
        response->set_message("Hi, " + request->name());
        printf("get_req:\n%s\nset_resp:\n%s\n",
                request->DebugString().c_str(), response->DebugString().c_str());
    }
};

void sig_handler(int signo) { }

int main()
{
    signal(SIGINT, sig_handler);
    signal(SIGTERM, sig_handler);

    SRPCServer server_tcp;
    SRPCHttpServer server_http;

    ExampleServiceImpl impl;
    server_tcp.add_service(&impl);
    server_http.add_service(&impl);

    server_tcp.start(1412);
    server_http.start(8811);
    getchar(); // press "Enter" to end.
    server_http.stop();
    server_tcp.stop();

    return 0;
}

4. client.cc

#include <stdio.h>
#include "example.srpc.h"

using namespace srpc;

int main()
{
    Example::SRPCClient client("127.0.0.1", 1412);
    EchoRequest req;
    req.set_message("Hello, srpc!");
    req.set_name("workflow");

    client.Echo(&req, [](EchoResponse *response, RPCContext *ctx) {
        if (ctx->success())
            printf("%s\n", response->DebugString().c_str());
        else
            printf("status[%d] error[%d] errmsg:%s\n",
                    ctx->get_status_code(), ctx->get_error(), ctx->get_errmsg());
    });

    getchar(); // press "Enter" to end.
    return 0;
}

5. make

These compile commands are for Linux only. On other systems, the complete CMake setup in the tutorial is recommended.

g++ -o server server.cc example.pb.cc -std=c++11 -lsrpc
g++ -o client client.cc example.pb.cc -std=c++11 -lsrpc

6. run

Terminal 1:

./server

Terminal 2:

./client

We can also use curl to post an HTTP request:

curl 127.0.0.1:8811/Example/Echo -H 'Content-Type: application/json' -d '{"message":"from curl","name":"CURL"}'

Output of Terminal 1:

get_req:
message: "Hello, srpc!"
name: "workflow"

set_resp:
message: "Hi, workflow"

get_req:
message: "from curl"
name: "CURL"

set_resp:
message: "Hi, CURL"

Output of Terminal 2:

message: "Hi, workflow"

Output of CURL:

{"message":"Hi, CURL"}

Benchmark

  • CPU 2-chip/8-core/32-processor Intel(R) Xeon(R) CPU E5-2630 v3 @2.40GHz
  • Memory all 128G
  • 10 Gigabit Ethernet
  • BAIDU brpc-client in pooled (connection pool) mode

QPS at cross-machine single client → single server under different concurrency

Client = 1
ClientThread = 64, 128, 256, 512, 1024
RequestSize = 32
Duration = 20s
Server = 1
ServerIOThread = 16
ServerHandlerThread = 16

IMG

QPS at cross-machine multi-client → single server under different client processes

Client = 1, 2, 4, 8, 16
ClientThread = 32
RequestSize = 32
Duration = 20s
Server = 1
ServerIOThread = 16
ServerHandlerThread = 16

IMG

QPS at same-machine single client → single server under different concurrency

Client = 1
ClientThread = 1, 2, 4, 8, 16, 32, 64, 128, 256
RequestSize = 1024
Duration = 20s
Server = 1
ServerIOThread = 16
ServerHandlerThread = 16

IMG

QPS at same-machine single client → single server under different request sizes

Client = 1
ClientThread = 100
RequestSize = 16, 32, 64, 128, 256, 512, 1024, 2048, 4096, 8192, 16384, 32768
Duration = 20s
Server = 1
ServerIOThread = 16
ServerHandlerThread = 16

IMG

Latency CDF for fixed QPS at same-machine single client → single server

Client = 1
ClientThread = 50
ClientQPS = 10000
RequestSize = 1024
Duration = 20s
Server = 1
ServerIOThread = 16
ServerHandlerThread = 16
Outlier = 1%

IMG

Latency CDF for fixed QPS at cross-machine multi-client → single server

Client = 32
ClientThread = 16
ClientQPS = 2500
RequestSize = 512
Duration = 20s
Server = 1
ServerIOThread = 16
ServerHandlerThread = 16
Outlier = 1%

IMG

Contact

  • Email - [email protected] - main author
  • Issue - You are very welcome to post questions to issues list.
  • QQ - Group number: 618773193

srpc's People

Contributors

barenboim, bkmgit, chanchann, dengjun101, flyleier, hihybin, holmes1412, kamilucious, kedixa, linqigang888, liuzengh, luciouskami, qianch, wzl12356, yvanwang, zcyc, zhang275


srpc's Issues

ld error when installing srpc on MacOS

On MacOS 10.15.7, the make command fails:

(screenshot of the linker error)

openssl was installed via Homebrew and reports as LibreSSL 2.8.3; workflow is installed correctly. It looks like the architecture setting is wrong.

Environment variables already set:

OPENSSL_ROOT_DIR=/usr/local/opt/openssl 
OPENSSL_LIBRARIES=/usr/local/opt/openssl/lib

The CMake variables are also set:

cmake -DOPENSSL_ROOT_DIR=/usr/local/opt/openssl -DOPENSSL_LIBRARIES=/usr/local/opt/openssl/lib

Fixes for the skeleton code auto-generated from Thrift IDL

srpc automatically generates two skeleton files from the IDL file:

  • If the IDL is protobuf, it generates server.pb_skeleton.cc and client.pb_skeleton.cc;
  • If the IDL is thrift, it generates server.thrift_skeleton.cc and client.thrift_skeleton.cc;

The thrift part of the previously generated code was wrong: it started an SRPCServer and SRPCClient by default. It has been changed to start a ThriftServer and ThriftClient using the ThriftFramed protocol by default. The main() function now looks like this:

int main()
{
    unsigned short port = 1412;
    ThriftServer server; // now starts a ThriftFramed-protocol server by default

    ExampleServiceImpl example_impl;
    server.add_service(&example_impl);

    server.start(port);
    wait_group.wait();
    server.stop();
    return 0;
}

SRPC's Thrift Framed server/client can interoperate with native thrift in other languages and is very convenient to use; everyone is welcome to try it~

make error on aarch64: error: cannot use typeid with -fno-rtti

Environment

EulerOS, aarch64, protobuf-3.5.0, gcc-7.3.0

Steps to reproduce:

cd srpc
make -j128

Error output:

[ 62%] Building CXX object src/compress/CMakeFiles/compress.dir/rpc_compress.cc.o
[ 65%] Building CXX object src/compress/CMakeFiles/compress.dir/rpc_compress_snappy.cc.o
In file included from /usr/include/google/protobuf/message.h:118:0,
                 from /home/x/code/srpc/src/rpc_basic.h:22,
                 from /home/x/code/srpc/src/compress/rpc_compress_snappy.h:20,
                 from /home/x/code/srpc/src/compress/rpc_compress_snappy.cc:19:
/usr/include/google/protobuf/arena.h: In member function ‘void* google::protobuf::Arena::AllocateInternal(bool)’:
/usr/include/google/protobuf/arena.h:654:15: error: cannot use typeid with -fno-rtti
     AllocHook(RTTI_TYPE_ID(T), n);
               ^
/usr/include/google/protobuf/arena.h: In member function ‘T* google::protobuf::Arena::CreateInternalRawArray(size_t)’:
/usr/include/google/protobuf/arena.h:693:15: error: cannot use typeid with -fno-rtti
     AllocHook(RTTI_TYPE_ID(T), n);
               ^
make[3]: *** [src/compress/CMakeFiles/compress.dir/build.make:76: src/compress/CMakeFiles/compress.dir/rpc_compress_snappy.cc.o] Error 1
make[3]: Leaving directory '/home/x/code/srpc/build.cmake'
make[2]: *** [CMakeFiles/Makefile2:405: src/compress/CMakeFiles/compress.dir/all] Error 2
make[2]: Leaving directory '/home/x/code/srpc/build.cmake'
make[1]: *** [Makefile:152: all] Error 2
make[1]: Leaving directory '/home/x/code/srpc/build.cmake'
make: *** [GNUmakefile:13: all] Error 2

How to use srpc's file compression feature

Hello! I want to implement client-side compressed file transfer with workflow, starting with batch file download.
The main idea is to use ParallelWork and SeriesWork: the first http task in a series obtains the file size from the server; if the file exceeds a threshold, further http tasks are appended to the series for chunked download, and the SeriesWork callback verifies file integrity and merges the chunks.
However, workflow itself does not support file compression, while srpc does. I don't know much about rpc, so is there a small example that could clear up my confusion? Also, as a beginner in network services, may I ask whether there is anything wrong with my approach? I'd appreciate your help!

SRPC distributed tracing and performance testing

1. Is there distributed tracing? Are there instrumentation plans and visualization tools?
2. For performance testing, which tools were used, and how were the charts in the WIKI generated?

Is VS2013 supported?

srpc uses the C++11 standard and requires protobuf 3.12 or above. Is that strictly required? Does protobuf 3.5 work? workflow also uses C++11, but VS2013 supports only part of it; can VS2013 be supported?

SRPC supports reporting traces to OpenTelemetry

1. Introduction

SRPC supports generating and reporting tracing and spans, which can be reported in multiple ways, including exporting data locally or to OpenTelemetry.

Since SRPC follows the data specification of OpenTelemetry and the specification of w3c trace context, now we can use RPCSpanOpenTelemetry as the reporting plugin.

The report conforms to the Workflow style, which is pure asynchronous task and therefore has no performance impact on the RPC requests and services.

2. Usage

After constructing the RPCSpanOpenTelemetry plugin, use add_filter() to add it to a server or client.

For tutorial/tutorial-02-srpc_pb_client.cc, add two lines as follows:

int main()                                                                   
{                                                                        
    Example::SRPCClient client("127.0.0.1", 1412); 

    RPCSpanOpenTelemetry span_otel("http://127.0.0.1:55358"); // jaeger http collector ip:port   
    client.add_filter(&span_otel);
    ...

For tutorial/tutorial-01-srpc_pb_server.cc, add the same two lines. We also add the local plugin to print the reported data on the screen:

int main()
{
    SRPCServer server;  

    RPCSpanOpenTelemetry span_otel("http://127.0.0.1:55358");                            
    server.add_filter(&span_otel);                                                 

    RPCSpanDefault span_log; // this plugin will print the tracing info on the screen                                                  
    server.add_filter(&span_log);                                              
    ...

Make the tutorial and run both server and client; tracing information appears on the screen.

We can see that the span_id: 04d070f537f17d00 in the client becomes parent_span_id: 04d070f537f17d00 in the server.

3. Traces on Jaeger

Open Jaeger's UI and we can find our service name Example and method name Echo. There are two span nodes, reported by the server and the client respectively.

As seen on the screen, the client reported span_id: 04d070f537f17d00 and the server reported span_id: 00202cf737f17d00. These spans and the correlated tracing information can be found on Jaeger, too.

4. About Parameters

Parameters such as how long traces are collected and the number of report retries can be specified through the constructor of RPCSpanOpenTelemetry. Code reference: src/module/rpc_span_policies.h

By default, at most 1000 traces are collected per second. Features such as transparently passing tracing information through the srpc framework are also implemented, in conformance with the specifications.

5. Attributes

We can also use add_attributes() to add other information to OTEL_RESOURCE_ATTRIBUTES.

Note that our service name "Example" is also set through these attributes, with the key service.name. If users also provide service.name in OTEL_RESOURCE_ATTRIBUTES, the srpc service name takes precedence. Refer to: OpenTelemetry#resource

6. Log and Baggage

SRPC provides log() and baggage() to carry user data through spans.

API :

void log(const RPCLogVector& fields);
void baggage(const std::string& key, const std::string& value);

As a server, we can use RPCContext to add log annotation:

class ExampleServiceImpl : public Example::Service                                 
{
public: 
    void Echo(EchoRequest *req, EchoResponse *resp, RPCContext *ctx) override
    {
        resp->set_message("Hi back");
        ctx->log({{"event", "info"}, {"message", "rpc server echo() end."}});
    }
};

As a client, we can use RPCClientTask to add log on span:

srpc::SRPCClientTask *task = client.create_Echo_task(...);
task->log({{"event", "info"}, {"message", "log by rpc client echo()."}});

Example CMakeLists for using srpc on Linux

Hello! Will there be an example showing how to write a CMakeLists for srpc on Linux? CMakeLists is quite commonly used nowadays, and as a newcomer to this area the QuickStart alone feels a bit thin; a few more examples would be great :)

About the C++ Workflow project

SRPC is developed on top of Sogou's star open-source project C++ Workflow and integrates with it seamlessly. workflow is Sogou's asynchronous networking and computing engine and includes implementations of several common protocols. Reading up on how to use the workflow project first should make SRPC easier to understand.
GitHub: https://github.com/sogou/workflow

srpc compilation error

gcc/g++ 版本是 11.2.1 20220127 (Red Hat 11.2.1-9) (GCC)
protoc 的版本是 libprotoc 3.11.4

Compiling the srpc source fails with:

[ 61%] Building CXX object src/compress/CMakeFiles/compress.dir/rpc_compress_snappy.cc.o
In file included from /usr/local/include/google/protobuf/message.h:120,
                 from /home/srpc/src/rpc_basic.h:22,
                 from /home/srpc/src/compress/rpc_compress_snappy.h:20,
                 from /home/srpc/src/compress/rpc_compress_snappy.cc:19:
/usr/local/include/google/protobuf/arena.h: In member function ‘void* google::protobuf::Arena::AllocateInternal(bool)’:
/usr/local/include/google/protobuf/arena.h:536:15: error: cannot use ‘typeid’ with ‘-fno-rtti’
  536 |     AllocHook(RTTI_TYPE_ID(T), n);
/usr/local/include/google/protobuf/arena.h: In member function ‘T* google::protobuf::Arena::CreateInternalRawArray(size_t)’:
/usr/local/include/google/protobuf/arena.h:599:15: error: cannot use ‘typeid’ with ‘-fno-rtti’
  599 |     AllocHook(RTTI_TYPE_ID(T), n);

After adding -fno-rtti to CXXFLAGS, the errors become:

/home/srpc/workflow/src/manager/UpstreamManager.cc: In static member function ‘static int UpstreamManager::upstream_add_server(const string&, const string&, const AddressParams*)’:
/home/srpc/workflow/src/manager/UpstreamManager.cc:169:34: error: ‘dynamic_cast’ not permitted with ‘-fno-rtti’
  169 |     UPSGroupPolicy *policy = dynamic_cast<UPSGroupPolicy *>(ns->get_policy(name.c_str()));
(the same ‘dynamic_cast’ not permitted with ‘-fno-rtti’ error repeats for upstream_remove_server, upstream_delete, upstream_main_address_list, upstream_disable_server, upstream_enable_server and upstream_replace_server at lines 185, 197, 211, 223, 239 and 256)

workflow & srpc developer QQ group

Thanks for your interest, everyone. We have set up a QQ group; developers interested in workflow and srpc are welcome to join: 618773193
If verification is required, just mention workflow/srpc.
srpc is newly open-sourced and much of the ecosystem is still to be built; we look forward to hearing from you~

How to use srpc and workflow together seamlessly

I want to use srpc to build a data-IO server. With workflow alone there are ready-made examples to follow (see http_file_server), but with srpc I don't know how to connect the workflow code and srpc seamlessly. The specific problem is as follows:
In the body of the server's Echo function I create an IO read task with a callback, roughly like this:

void Echo(WWIORequest *request, WWIOResponse *response, srpc::RPCContext *ctx) override {
    WFFileIOTask *pread_task;
    pread_task = WFTaskFactory::create_pread_task(fd, buf, size, 0,
                                                          pread_callback);
    pread_task->user_data = response; 
}

Following the http_file_server code, the following four lines (or similar) would need to be integrated into the Echo function:

pread_task->user_data = resp;   /* pass resp pointer to pread task. */
server_task->user_data = buf;   /* to free() in callback() */
server_task->set_callback([](WFHttpTask *t){ free(t->user_data); });
series_of(server_task)->push_back(pread_task);

However, server_task does not exist in the srpc environment, which raises a few questions:
1) Without a server_task-like object, how is the data in buf passed to pread_task's callback?
2) Without a server_task-like object, how should buf be freed? If ctx->get_series()->set_callback() is used, what should the lambda inside look like?
3) Is series_of(server_task)->push_back(pread_task) equivalent, in the srpc context, to workflow's ctx->get_series()->push_back(pread_task)?

Thanks in advance for answering.

Link error building the tutorial with VS2022 on Windows

Hi! Please help me take a look: building the tutorial test cases with VS2022, an object file from openssl cannot be found.
I built openssl with:
perl Configure VC-WIN32 no-asm --prefix=F:\srpc_win32\openssl-1.1.1n\build_win32
nmake
nmake install
The generated libcrypto.lib and libssl.lib were added to the include directories, but there is no comp.obj under openssl's crypto subfolder. What should I do next?
(screenshot)

Build problems on Windows

(screenshot)
I've spent a whole day on this and I'm worn out.
Yesterday I didn't check the issues and didn't use vcpkg; I built every dependency myself with cmake. The tutorial build then failed with a runtime-library mismatch, and after fixing that it failed again with redefinitions. In the end I installed the dependencies with vcpkg as described in the issues, then built workflow and srpc step by step, and the tutorial failed yet again with a new problem.
Honestly, I don't feel like using it any more. The dependency list looks short, but building is one problem after another.

srpc 0.9.4 released!

Improvements

  • Support thrift IDL keywords: exception, extend, typedef;
  • Update span context with OpenTracing specification;
  • Add log() and baggage() for task/context;
  • Add rpc proxy demo;

Bug Fixes

  • Fix segmentation fault when the output directory doesn't exist;
  • Fix attachment crash and RPCBuffer::cut() bug;
  • Fix srpc_generator incorrect dir_prefix;
  • Fix span_id to parent_span_id bug;

Support for empty RPC parameters

Protocol Buffers ships with Empty support, but when it is imported in a .proto file, SRPC generation fails with google/protobuf/empty.proto not found.
So, how can SRPC support empty parameters?
Modifying the generated RPC functions by hand should work, but many places would need changes.
By the way, the generated language is C++.

Compilation error

Building the example fails as below; is something wrong with my srpc installation? The protobuf version is 3.19.4.
(screenshot)

Skeleton code generated by the thrift generator differs from expectations

The thrift files used here are the ones provided by Apache Thrift (renamed to .txt because GitHub's editor doesn't allow uploading .thrift files):
tutorial.txt
shared.txt

After running srpc_generator, parts of the generated code are wrong, as shown in the red boxes in the screenshots.

Summary: I suspect the error occurs because num1 in struct Work in tutorial.thrift is given a default value of 0, which the code generator fails to recognize.

Error compiling server.cc

On Mac, the compile command is:
g++ -o server server.cc example.pb.cc -std=c++11 -lsrpc -I/usr/local/opt/[email protected]/include -lprotobuf

The error is:

ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)

Compilation errors

/usr/local/include/srpc/rpc_task.inl:169:58: error: no type named 'Series' in 'WFServerTask<srpc::BaiduStdRequest, srpc::BaiduStdResponse>'
class RPCSeries : public WFServerTask<RPCREQ, RPCRESP>::Series
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~
/usr/local/include/srpc/rpc_task.inl:449:20: note: in instantiation of member class 'srpc::RPCServerTask<srpc::BaiduStdRequest, srpc::BaiduStdResponse>::RPCSeries' requested here
SERIES *series = dynamic_cast<SERIES *>(series_of(this));
^
/usr/local/include/srpc/rpc_task.inl:132:2: note: in instantiation of member function 'srpc::RPCClientTask<srpc::BaiduStdRequest, srpc::BaiduStdResponse>::message_out' requested here
RPCClientTask(const std::string& service_name,
^
/usr/local/include/srpc/rpc_client.h:57:20: note: in instantiation of member function 'srpc::RPCClientTask<srpc::BaiduStdRequest, srpc::BaiduStdResponse>::RPCClientTask' requested here
auto *task = new TASK(this->service_name,
^
./example.srpc.h:222:21: note: in instantiation of function template specialization 'srpc::RPCClientsrpc::RPCTYPEBRPC::create_rpc_client_task' requested here
auto *task = this->create_rpc_client_task("Echo", std::move(done));
^
1 warning and 3 errors generated.

Multiple services on a ThriftServer

With multiple services on a thrift server, calls fail. Preliminary tracing shows the method name is used as the service name, so the lookup fails and always returns the same service.

Build problems on Windows

Building on Windows 10 with VS2022.
I tried the steps from several issues and got the following errors:
(screenshots)

These header files apparently don't exist on Windows.
A plain #ifndef WIN32 guard obviously won't do; how should this be modified so it builds successfully?

Compilation error

/usr/local/include/srpc/rpc_context.inl:119:60: error: request for member ‘get_attachment’ is ambiguous
return task_->get_resp()->get_attachment(attachment, len);

Fix for uninitialized watch_timeout

Background

workflow provides a very handy first_timeout() interface, which controls the first-reply timeout for each network task when it is issued and enables many rich features. srpc currently uses it to implement watch: after a request is sent to the remote end, the connection is held open, and until the timeout set by first_timeout() expires, the remote end may send responses at any time. This is very useful for watching node changes in service discovery.

The per-task interface srpc provides is called watch_timeout:

struct RPCTaskParams
{
     int send_timeout;
     int watch_timeout;
     // ...
};

Bug description

When this feature was first added, for the case where the global RPCClientParams had been set via RPC_CLIENT_PARAMS_DEFAULT, watch_timeout_ was left uninitialized by oversight and could end up as a random value. Since workflow's first_timeout() interface is in milliseconds, if the field happens to be a small positive integer, a local timeout is easily triggered.

Fix

This has been fixed; please upgrade to the latest version. You are welcome to try the feature, and please report any problems to us. Thanks~

Support for Tencent's tRPC protocol added

With authorization from Tencent, the SRPC project has open-sourced an implementation of Tencent's tRPC protocol. This is also the first open-source implementation of tRPC; colleagues at Tencent are welcome to try it. The server side still only supports connection-pool access; pipelining and out-of-order responses are not supported.

RPC client now defaults to keep-alive connections

Previously the rpc server defaulted to keep-alive connections while the rpc client defaulted to short connections, so users had to modify keep_alive_timeout in the client parameters by hand to get a keep-alive client; without that change, performance suffered badly. The new code makes the client default to keep-alive.

Notes on interoperating with native thrift via the semi-sync interface

The srpc framework implements the thrift framed protocol, so it can interoperate with native thrift.

Since native thrift provides no connection reuse, users have often wrapped their own connection pool and used the semi-sync interface to improve thrift's performance.
srpc provides excellent connection reuse and thread reuse, offers interfaces compatible with the original thrift usage, and far outperforms native thrift on both the client and the server side.

When upgrading a service, however, users may replace the client or the server first and migrate step by step.
When using a native thrift client's semi-sync interface to talk to an srpc thrift server, note the following:

The srpc server's network model is strictly one-request-one-response, which means that when we call thrift's send_method() and recv_method() interfaces on a connection, messages must also be paired one send to one receive. Do not call send_method() several times in a row on the same connection: a native thrift server ignores the extra sends, but the srpc thrift server's internal network model treats this as an error and closes the connection.

Take the Echo defined in our tutorial as an example:

service Example {
    EchoResult Echo(1:string message, 2:string name);
}

We use the following two semi-sync interfaces:

  void send_Echo(const std::string& message, const std::string& name);
  void recv_Echo(EchoResult& _return);

If you wrap native thrift in a connection pool yourself, a common approach might be:

    // create several clients like this; each client is one connection
    std::shared_ptr<TSocket> socket(new TSocket(IP, PORT)); 
    std::shared_ptr<TTransport> transport(new TFramedTransport(socket));
    std::shared_ptr<TProtocol> protocol(new TBinaryProtocol(transport));
    ExampleClient client(protocol);
    transport->open();

    // then manage them in your own connection pool
    conn_pool->add_conn(&client);

At call time, if send and receive are not guaranteed to pair on the same connection, the srpc thrift server treats the traffic as bad packets and closes the connection, and the client ends up in CLOSE_WAIT:

    conn_pool->get_conn()->send_Echo("hello", "srpc"); // send

    ... // do something else

    conn_pool->get_conn()->recv_Echo(ret); // receive; this connection may well differ from the one used to send

Therefore, if you must use native thrift with your own connection pool, do it like this:

    auto *conn = conn_pool->get_conn(); // make sure the connection is held exclusively once taken
    conn->send_Echo("hello", "srpc"); // send

    ... // do something else

    conn->recv_Echo(res); // receive
    conn_pool->put_conn(conn); // return it to the pool

Finally~~~
We still recommend using the srpc thrift client directly: it is simple and convenient, and there is no need to wrap your own connection pool any more.
The client itself is a connection pool, with a clean interface and excellent performance:

    Example::ThriftClient client(IP, PORT); // one step creates a multi-connection asynchronous client

    client.send_Echo("hello", "srpc"); // send

    ... // do something else

    client.recv_Echo(res); // receive

Questions about file transfer

Hi, we are turning a PDF-to-image program into an RPC service: the user sends in a PDF and the service returns the converted images. The data volume is fairly large; can srpc support this scenario?
Another question: grpc supports binary payloads and has many third-party language bindings. To implement the above service with srpc, using HTTP seems very costly because the binary data must be encoded as text; but with raw TCP, wouldn't other languages such as Java have to implement their own client?

protobuf not found when building on Windows

I installed protobuf on Windows and added it to the environment variables. Running cmake on srpc fails with:
CMake Error at src/CMakeLists.txt:17 (find_package):
Could not find a package configuration file provided by "Protobuf" with any
of the following names:

ProtobufConfig.cmake
protobuf-config.cmake

Add the installation prefix of "Protobuf" to CMAKE_PREFIX_PATH or set
"Protobuf_DIR" to a directory containing one of the above files. If
"Protobuf" provides a separate development package or SDK, be sure it has
been installed.

Thrift union support

Is a union defined in thrift IDL currently supported?

I tried defining one, and it was not parsed; no corresponding code was generated.

Reference build steps for srpc on Windows

0. Preparation:

Download the CMake binaries from the CMake website and install them; CMake >= 3.6 is recommended.

We assume your current path is E:/GitHubProjects.

For various reasons, I suggest using vcpkg to install the dependencies.

Open Powershell/cmd/bash, pull vcpkg, and install the dependencies:

git clone https://github.com/microsoft/vcpkg.git
cd vcpkg
.\bootstrap-vcpkg.bat

# install dependencies

# win32
.\vcpkg.exe install zlib:x86-windows protobuf:x86-windows openssl:x86-windows snappy:x86-windows lz4:x86-windows

# amd64
.\vcpkg.exe install zlib:x64-windows protobuf:x64-windows openssl:x64-windows snappy:x64-windows lz4:x64-windows

# The reason for specifying the architecture and installing the libraries for both architectures separately is a strange vcpkg bug that can keep cmake from finding the packages

# Note: global vcpkg integration is not recommended, as it lets vcpkg packages pollute your projects. If you want to integrate vcpkg packages into a single project, use a local nuget repository

1. Build the source:

Pull the code from the official repository and build:

# go back up one directory
cd ..
git clone --recursive https://github.com/sogou/srpc.git
cd srpc
cd workflow

# switch workflow to the windows branch
git checkout windows

# generate the VS solution with cmake; my environment is cmake 3.23.0, Visual Studio 2022

# build the 32-bit version
cmake -B build32 -S . -DCMAKE_TOOLCHAIN_FILE=E:\GitHubProjects\vcpkg\scripts\buildsystems\vcpkg.cmake -G "Visual Studio 17" -A Win32

cmake --build build32 --config Debug
cmake --build build32 --config Release

# build the 64-bit version
cmake -B build64 -S . -DCMAKE_TOOLCHAIN_FILE=E:\GitHubProjects\vcpkg\scripts\buildsystems\vcpkg.cmake -G "Visual Studio 17" -A x64

cmake --build build64 --config Debug
cmake --build build64 --config Release

Next, build srpc:

# go back up one directory
cd ..

# build the 32-bit version
cmake -B build32 -S . -DCMAKE_TOOLCHAIN_FILE=E:\GitHubProjects\vcpkg\scripts\buildsystems\vcpkg.cmake  -G "Visual Studio 17" -A Win32

cmake --build build32 --config Debug
cmake --build build32 --config Release

# build the 64-bit version
cmake -B build64 -S . -DCMAKE_TOOLCHAIN_FILE=E:\GitHubProjects\vcpkg\scripts\buildsystems\vcpkg.cmake  -G "Visual Studio 17" -A x64

cmake --build build64 --config Debug
cmake --build build64 --config Release



2. Build the examples:

# build directly in the srpc directory

# build the 32-bit version
cmake -B buildt32 -S tutorial -DCMAKE_TOOLCHAIN_FILE=E:\GitHubProjects\vcpkg\scripts\buildsystems\vcpkg.cmake  -G "Visual Studio 17" -A Win32

cmake --build buildt32 --config Debug
cmake --build buildt32 --config Release

# build the 64-bit version
cmake -B buildt64 -S tutorial -DCMAKE_TOOLCHAIN_FILE=E:\GitHubProjects\vcpkg\scripts\buildsystems\vcpkg.cmake  -G "Visual Studio 17" -A x64

cmake --build buildt64 --config Debug
cmake --build buildt64 --config Release

That completes the whole srpc build process on Windows; try running the examples to see the result.

Integrated srpc into a project; it builds, but fails at runtime: undefined symbol: _ZTVN8protocol11HttpMessageE

c++filt demangles it to:

vtable for protocol::HttpMessage

System: CentOS 7
ProtoBuf version: 3.13.0
The project's CMakeLists.txt is as follows; after building srpc, the headers and static libraries are placed into the project:

set(SRPC_LIB srpc)
list(APPEND SRPC_INCLUDE_DIR
${ClickHouse_SOURCE_DIR}/contrib/srpc/_include
${ClickHouse_SOURCE_DIR}/contrib/srpc/workflow/_include
)
dbms_target_link_libraries(PRIVATE ${SRPC_LIB})
dbms_target_include_directories(PRIVATE ${SRPC_INCLUDE_DIR})

Where might the problem be?
