
nrpc's People

Contributors

alittlebrighter, cdevienne, dependabot[bot], kulak, mdevan, nogoegst, unkaktus


nrpc's Issues

Improve the response (rebooted)

We recently merged #13, which supports fully explicit responses. It has the big advantage of avoiding a double encoding of the response, and for replies with simple types, the handler and wrapper signatures get even simpler.

After using it in practice, however, I can point out a few drawbacks:

  • The use of "oneof" generates structures that are tedious to deal with.
  • The use of "oneof" introduces ambiguities in the JSON format. Using gogo/protobuf even generates JSON encoding/decoding tests that fail(!)
  • Explicitly defining the replies is tedious and does not improve the readability of the .proto files. It also makes the experience significantly different from gRPC.

These drawbacks are not enough for me to switch back to the former system, but I think a third way may solve all of these problems.

We want to:

  • avoid using "oneof"
  • avoid a double encoding
  • avoid fully explicit definitions

I propose to use a custom encoding for the response:

  • A successful reply is the normal protobuf encoding of the output type
  • An error reply is a "0" byte followed by a protobuf encoding of a nrpc.Error

With this approach, we have an easy, cheap and unmistakable way to detect whether the response is an error (test the first byte of the buffer), at the cost of a single extra byte, and only in the error case.
We should also get a slightly smaller payload, since a successful response encodes only the output type and sends it unwrapped.
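This detection logic is trivial to sketch in Go. The sketch below assumes the payloads are already protobuf-marshaled bytes; the key observation is that a valid, non-empty protobuf message can never start with a 0x00 byte, since field numbers start at 1 (the smallest possible tag byte is 0x08):

```go
package main

import "fmt"

// encodeSuccess returns the reply as-is: a successful reply is just the
// protobuf encoding of the output type, unwrapped.
func encodeSuccess(out []byte) []byte { return out }

// encodeError prefixes the marshaled nrpc.Error with a single 0x00 byte,
// which a valid protobuf message can never start with.
func encodeError(errMsg []byte) []byte {
	return append([]byte{0}, errMsg...)
}

// decodeReply tells the two apart by probing the first byte only.
func decodeReply(buf []byte) (payload []byte, isError bool) {
	if len(buf) > 0 && buf[0] == 0 {
		return buf[1:], true
	}
	return buf, false
}

func main() {
	p, isErr := decodeReply(encodeError([]byte("boom")))
	fmt.Println(isErr, string(p)) // true boom

	p, isErr = decodeReply(encodeSuccess([]byte{0x08, 0x01}))
	fmt.Println(isErr, len(p)) // false 2
}
```

An empty successful reply (an empty protobuf message is zero bytes) is still unambiguous, since an error reply always carries at least the marker byte.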

As for the JSON encoding: since we get to define the wire format, and replies are necessarily JSON objects, we could define an error reply to have the following structure:

{ "__error__": { .... }}

This could be tested even before actually decoding the JSON, by probing the first 15 bytes or so, which remains cheap.
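A minimal sketch of that probe, assuming an encoder we control that emits the `__error__` key first and without leading whitespace:

```go
package main

import (
	"bytes"
	"fmt"
)

// isJSONError probes the first few bytes of an undecoded reply for the
// proposed error wrapper. This assumes the encoder emits the key first
// and without leading whitespace, which we control since we define the
// wire format.
func isJSONError(buf []byte) bool {
	return bytes.HasPrefix(buf, []byte(`{"__error__"`))
}

func main() {
	fmt.Println(isJSONError([]byte(`{"__error__":{"type":1}}`))) // true
	fmt.Println(isJSONError([]byte(`{"filename":["a"]}`)))       // false
}
```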

Thoughts?

Integrate with sentry

When implementing panic handling, it would be nice to have the option of reporting panics to a Sentry instance.

Make the server side subjects a setting (not a constant)

Currently the client side can be configured at runtime to use a different subject.
The server side, on the other hand, uses constants to define the subject it listens to (and parses).
This is inconsistent; it should be fixed either by removing the setting on the client or by adding equivalent settings on the server.

Generated handler Subject() function should take module/service params

The Subject() function on the generated handlers builds the subject of the handler.

It currently inserts wildcards for all the parameters, which is suitable for a subscription that aims to handle any possible parameter value.

In some cases however, a handler instance may need to subscribe to a specific parameter value.
The simplest approach would be for Subject() to take arguments that optionally restrict the module/service subject params.
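A sketch of what such a generated Subject() could look like, for a hypothetical method with two subject parameters (all names here are made up for illustration):

```go
package main

import (
	"fmt"
	"strings"
)

// Subject sketches a generated handler Subject() for a hypothetical method
// with two subject parameters. With no arguments it keeps the wildcards
// used for catch-all subscriptions; each argument passed pins the
// corresponding parameter to a concrete value.
func Subject(params ...string) string {
	parts := []string{"*", "*"} // one slot per subject parameter
	for i, v := range params {
		if i < len(parts) {
			parts[i] = v
		}
	}
	return "demo.myservice." + strings.Join(parts, ".") + ".function1"
}

func main() {
	fmt.Println(Subject())           // demo.myservice.*.*.function1
	fmt.Println(Subject("client42")) // demo.myservice.client42.*.function1
}
```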

Cancel handlers that are too long

If a handler runs too long, the client will get a NATS timeout while the handler will still believe the client received its reply.
Ideally, if the handler reaches a timeout slightly smaller than the client's, we should reply with a TOOLONG or SERVERBUSY error and cancel the handler context.

Dead?

Sounded like a great idea but did not work out. Curious to know the postmortem.

The document has an error

go get github.com/nats-rpc/nrpc/protoc-gen-nrpc

go install github.com/nats-rpc/nrpc/protoc-gen-nrpc

what's the status of the project?

It seems the project has been dormant since April 2019.
I like the idea and starting point.

What should I expect to work if I try using the project?
How much is done vs. not done?

Add a NoRequest type

A common use-case of NATS is having a subject on which an information stream is published; clients interested in it just have to subscribe to the subject to receive the messages.

With NRPC as it is today, the way to formalize such a service is somewhat awkward, because the "client" side has to implement a "Server" interface, possibly with extra methods it is not interested in, and the "server" side uses a "Client" structure to publish the messages.

I propose we add a "NoRequest" message type that creates output-only methods, i.e. methods a client can subscribe to, with a client-friendly API.

For example, given the following .proto:

service Broadcast {
   rpc InfoStream(nrpc.NoRequest) returns (Info) {}
}

The client type would provide a convenient API to subscribe to it:

b := NewBroadcastClient(conn)
sub, err := b.InfoStreamSubscribe(func(info Info) {})

We could also generate a chan-based subscription:

sub, c, err := b.InfoStreamChanSubscribe()

Or a sync subscription:

sub, err := b.InfoStreamSyncSubscribe()
info := sub.Next() // returns a decoded Info directly

custom subjects

I am currently evaluating nRPC for xbus.

One thing we rely on a lot is the permission system: we have subjects that contain IDs so that only certain clients may post on them, and the server does a wildcard subscription so it gets all the requests from all the clients.

So I would like to specify, in the API, the exact subject to use, and it may contain some parameters that I would like the client to pass to the wrapping function, and the handler function on the server to get as a parameter.

Since protobuf allows custom options, I was thinking we could add a "nats_subject" method option.
The default would be the function name, and it would always be prefixed by the service name, and by the package name if applicable.

A proto using this option would look like:

package demo;
service myservice {
    rpc function1(string) returns (bool) {
        option (nrpc.nats_subject) = "{clientid:string}.function1";
    }
}

In this case, the actual subject used by the client "c1" would be: "demo.myservice.c1.function1".
The wrapper function for the client would be something like:

func (myservice) function1(clientid string, request string) (bool, error) {
    // ...
}

On the server side, the handler interface would be:

type myserviceHandler interface {
    function1(clientid string, request string) (bool, error)
}

A side effect is that we would have 1 function = 1 subject, unlike the current implementation, which seems to have one subject per service (unless I missed something).
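A sketch of how the generated client code could expand such a subject template; the {name:type} placeholder syntax follows the example above, everything else (function names, regex) is hypothetical:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// paramRe matches the hypothetical "{name:type}" placeholders of the
// proposed nats_subject option.
var paramRe = regexp.MustCompile(`\{([a-zA-Z_][a-zA-Z0-9_]*):[a-zA-Z]+\}`)

// expandSubject builds the concrete subject for a client: package and
// service names are prefixed, and each placeholder is replaced with the
// value the caller passed to the wrapper function.
func expandSubject(pkg, service, tmpl string, params map[string]string) string {
	expanded := paramRe.ReplaceAllStringFunc(tmpl, func(m string) string {
		name := paramRe.FindStringSubmatch(m)[1]
		return params[name]
	})
	return strings.Join([]string{pkg, service, expanded}, ".")
}

func main() {
	s := expandSubject("demo", "myservice",
		"{clientid:string}.function1",
		map[string]string{"clientid": "c1"})
	fmt.Println(s) // demo.myservice.c1.function1
}
```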

Is this something that could make its way into nRPC if contributed?

Stream reply pattern

I would like to propose the addition of streamed replies.

Use cases

Long-running computations

A method call may take longer than a reasonable timeout. In such a case, the server side could send a first reply as soon as possible that basically says "I'm working on it, hold on", then possibly other empty replies, until the actual result is ready and can be returned. Only then would the client unsubscribe.

Big replies

A method call may have to return a volume of data that is too big to be encoded in a single NATS message, if not in a single protobuf message (which is recommended to remain small).
A streamed reply would allow sending several reply messages, and the client would do whatever is needed with them (save them, assemble a bigger data structure, and so on).

.proto example

A new option "streamed_reply" could enable this pattern for a given method:

message FileList {
    repeated string filename = 1;
}
service Demo {
    rpc ListFiles(nrpc.Void) returns (FileList) {
        option (nrpc.streamed_reply) = true;
    }
}

protocol details

The client subscribes to the reply subject without auto-unsubscribe, because it does not know how many messages will be received.
Each message of the reply can be one of:

  • empty message: basically means "hold on, work in progress"
  • an encoded output type: a chunk of data to be passed downstream
  • a normally encoded error with a special type "EOS", and a message containing the number of messages that were sent, so the client can be sure nothing is missing.
  • a normally encoded error

The duration between two messages must not exceed a typical NATS timeout value (a few seconds). If it does, the client is supposed to return a timeout error.

client mapping

Example of a wrapper for the above .proto definition:

func (c *DemoClient) ListFiles(results chan<- FileList) error

The chan is expected to be consumed in a separate goroutine, which can make things a little more complicated than we would like.

An alternative could make things easier:

func (c *DemoClient) ListFiles(results chan<- FileList, errs chan<- error)

Here there is no need to launch a goroutine; a simple for { select {} } loop is enough to consume the stream.

In both cases, the channels would be closed by the generated code when the reception is over.

server mapping

Example of a Server interface function for the above .proto definition:

type DemoServer interface {
    ListFiles(context.Context, chan<- FileList) error
}

The function would return only when everything is over. As long as the function is running, the nrpc code takes care of sending empty messages to avoid a timeout.

The server implementation has the responsibility to ensure messages are small enough to fit in a NATS message. If a message exceeds the size limit, the context is canceled and the implementation is expected to return.

Other languages support

Nice project, I was going to attempt writing such a tool, it may save me lots of time!

That said, I see no support for languages other than Go. In my case I would also need Java and Python (to start with). I am willing to contribute, but some hints on how to add such support would be appreciated.

Write a spec

The goal is to have an unambiguous description of how nrpc translates protobuf services into NATS patterns. That would serve as the reference for the various language ports.

Stream replies are missing a backpressure mechanism

Currently the client has no means to slow down the emission of messages by the server.

There should be one, and its mere presence should not slow down emissions.
That rules out the client emitting an ack message for each message received.

A possible way would be to send an optional pause/resume command inside a heartbeat message. The client would send it when its local buffer is full (or about to be), and again when it is emptied (or about to be).
The nice thing here is that it is fully compatible with the current protocol, in both directions.
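A sketch of the client-side decision logic. The field name and the 3/4 and 1/4 thresholds are arbitrary choices for illustration; the important property is the "say nothing in between" case, which keeps old peers working:

```go
package main

import "fmt"

// flowControl is a hypothetical optional field piggybacked on the
// heartbeat message; absent (nil) means "no change".
type flowControl struct{ Pause bool }

// nextCommand decides, from the client buffer fill level, whether the next
// heartbeat should carry a pause or resume command.
func nextCommand(buffered, capacity int) *flowControl {
	switch {
	case buffered*4 >= capacity*3:
		return &flowControl{Pause: true} // buffer (nearly) full: ask for a pause
	case buffered*4 <= capacity:
		return &flowControl{Pause: false} // buffer (nearly) drained: resume
	default:
		return nil // in between: say nothing, stay compatible with old servers
	}
}

func main() {
	fmt.Println(nextCommand(9, 10)) // &{true}
	fmt.Println(nextCommand(1, 10)) // &{false}
	fmt.Println(nextCommand(5, 10)) // <nil>
}
```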

Handle concurrent calls

The current implementation of nrpc serializes all the calls server-side, for each service.
Aside from the lack of concurrency, it also means a call may be handled even after the client has already timed out.

A solution to this would solve the following problems:

  • A server should be able to authorize parallel handling of requests, especially with the streamed replies that can be very long.
  • When parallel handling is enabled, a limit should be set on how many parallel calls are permitted
  • If a call cannot be processed soon enough (i.e., too many calls are already being processed and no room opens up for a new one within a given timeout), the server should fail gracefully and inform the client that it is overloaded (so the client can try again later).
  • Streamed replies methods should start the keep-alive as soon as the request arrives, and send an error if the handling cannot be started within a given time.
  • In any case, if the number of pending requests is too big, an error should be returned immediately

I'm starting to work on a solution with those elements in mind.
If any ideas pop up, please share them!

Improve responses

Currently the responses are wrapped in an RPCResponse so an error can be returned.
I think we can avoid this by letting the user define response types like this:

message MyMethodResponse {
  oneof nrpcresp {
    nrpc.Error error = 1;
    string message = 2;
  }
}

When this pattern is recognized by the code generator, the generated handler interface would return (string, error) instead of a MyMethodResponse. The handler would also catch panics.

With this approach we no longer need a double encoding, and the protocol is more explicit. The trade-off is a little more code in the .proto files, but I think it is worth it.

Thoughts?

Support optional fields

When adding optional string name = 4; to my proto file and running protoc with --experimental_allow_proto3_optional, I got the following error:
api.proto: is a proto3 file that contains optional fields, but code generator protoc-gen-nrpc hasn't been updated to support optional fields in proto3. Please ask the owner of this code generator to support proto3 optional.

Yes, I know it's experimental, but I thought at least this optional-fields feature was on its way.

Unable to build when no protobuf options declared

Unable to build helloworld; I get the following error.

It looks like it only happens if no options are included in the proto file

panic: runtime error: invalid memory address or nil pointer dereference [recovered]
        panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x12887e5]

goroutine 1 [running]:
text/template.errRecover(0xc4201cfdb8)
        /usr/local/Cellar/go/1.9.2/libexec/src/text/template/exec.go:140 +0x1df
panic(0x12e1e20, 0x14c7900)
        /usr/local/Cellar/go/1.9.2/libexec/src/runtime/panic.go:491 +0x283
github.com/golang/protobuf/protoc-gen-go/descriptor.(*FileOptions).github.com/golang/protobuf/proto.extensionsRead(0x0, 0x0, 0x14cccc0, 0x0)
        <autogenerated>:1 +0x5
github.com/golang/protobuf/proto.GetExtension(0x14b1d60, 0x0, 0x14cccc0, 0x0, 0x0, 0x0, 0x0)
        /Users/chsullivan/go/src/github.com/golang/protobuf/proto/extensions.go:367 +0x102
main.pkgSubject(0xc420120000, 0x0, 0x0)
        /Users/chsullivan/go/src/github.com/rapidloop/nrpc/protoc-gen-nrpc/main.go:221 +0x4e
reflect.Value.call(0x12d1980, 0x134aaf0, 0x13, 0x1335baa, 0x4, 0xc4201abac0, 0x1, 0x1, 0x1333c60, 0x1, ...)
        /usr/local/Cellar/go/1.9.2/libexec/src/reflect/value.go:434 +0x905
reflect.Value.Call(0x12d1980, 0x134aaf0, 0x13, 0xc4201abac0, 0x1, 0x1, 0x14b3cc0, 0xc42000d160, 0x1322ce0)
        /usr/local/Cellar/go/1.9.2/libexec/src/reflect/value.go:302 +0xa4
text/template.(*state).evalCall(0xc4201cfd38, 0x1322ce0, 0xc420120000, 0x16, 0x12d1980, 0x134aaf0, 0x13, 0x14b3c60, 0xc420148990, 0x13457a3, ...)
        /usr/local/Cellar/go/1.9.2/libexec/src/text/template/exec.go:670 +0x580
text/template.(*state).evalFunction(0xc4201cfd38, 0x1322ce0, 0xc420120000, 0x16, 0xc4201489c0, 0x14b3c60, 0xc420148990, 0xc42000d180, 0x2, 0x2, ...)
        /usr/local/Cellar/go/1.9.2/libexec/src/text/template/exec.go:538 +0x176
text/template.(*state).evalCommand(0xc4201cfd38, 0x1322ce0, 0xc420120000, 0x16, 0xc420148990, 0x0, 0x0, 0x0, 0x12c7b40, 0xc4201aba88, ...)
        /usr/local/Cellar/go/1.9.2/libexec/src/text/template/exec.go:435 +0x53a
text/template.(*state).evalPipeline(0xc4201cfd38, 0x1322ce0, 0xc420120000, 0x16, 0xc420089a90, 0x12c7b40, 0xc4201aba88, 0x98)
        /usr/local/Cellar/go/1.9.2/libexec/src/text/template/exec.go:408 +0x115
text/template.(*state).walk(0xc4201cfd38, 0x1322ce0, 0xc420120000, 0x16, 0x14b3b40, 0xc4201489f0)
        /usr/local/Cellar/go/1.9.2/libexec/src/text/template/exec.go:234 +0x4af
text/template.(*state).walk(0xc4201cfd38, 0x1322ce0, 0xc420120000, 0x16, 0x14b3e40, 0xc420148780)
        /usr/local/Cellar/go/1.9.2/libexec/src/text/template/exec.go:242 +0x11d
text/template.(*Template).execute(0xc420122900, 0x14aea60, 0xc42021a000, 0x1322ce0, 0xc420120000, 0x0, 0x0)
        /usr/local/Cellar/go/1.9.2/libexec/src/text/template/exec.go:197 +0x1f9
text/template.(*Template).Execute(0xc420122900, 0x14aea60, 0xc42021a000, 0x1322ce0, 0xc420120000, 0x0, 0x0)
        /usr/local/Cellar/go/1.9.2/libexec/src/text/template/exec.go:180 +0x53
main.main()
        /Users/chsullivan/go/src/github.com/rapidloop/nrpc/protoc-gen-nrpc/main.go:440 +0x3fd
--nrpc_out: protoc-gen-nrpc: Plugin failed with status code 2.

Implement polling functionality

It would be very useful to be able to ping/poll a service globally and receive multiple responses (one from each running copy of the service).

It would also be helpful to be able to append a token to the end of the subject on both the client and server side. This would allow polling different subsets of the same service, e.g. by region, datacenter, or environment.

references to go-nats vs nats.go

When I tried using this project (great project, by the way), I got a compiler error:

        github.com/nats-rpc/nrpc imports
        github.com/nats-io/go-nats: github.com/nats-io/[email protected]: parsing go.mod:
        module declares its path as: github.com/nats-io/nats.go
                but was required as: github.com/nats-io/go-nats

I noticed that the "go-nats" project is obsolete and points to "nats.go". nrpc seems to reference the obsolete path.

Are there any plans to reference the newer module path?

Is there a workaround for my error message?

The project maintainability is dropping

I tried rebuilding the project and found that things are hard to recompile:

  • the test assert package is gone.
  • the reflection-based protocol buffers library is out.
  • and protocol buffers promises to drop support for the generator:
WARNING: Package "github.com/golang/protobuf/protoc-gen-go/generator" is deprecated.
        A future release of golang/protobuf will delete this package,
        which has long been excluded from the compatibility promise.

Support pub-only functions

REQ/REP is only one pattern in NATS, and some APIs can have no REP, especially when NATS Streaming is involved.

I propose that we add a special type 'Nothing' to nrpc.proto that would indicate that a function will never reply.

service hello {
    rpc hello(string) returns (nrpc.Nothing) {}
}

The code generator would produce functions with no return value accordingly.
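A sketch of what such a generated pub-only wrapper could look like, with a fake publisher standing in for the NATS connection (all names here are illustrative; the real connection type would be *nats.Conn):

```go
package main

import "fmt"

// publisher is the minimal connection surface the generated code needs.
type publisher interface {
	Publish(subject string, data []byte) error
}

// HelloClient mirrors the hypothetical .proto service above.
type HelloClient struct{ nc publisher }

// Hello returns nrpc.Nothing in the .proto, so the generated wrapper
// publishes without a reply inbox and returns nothing besides the
// publish error itself.
func (c *HelloClient) Hello(msg []byte) error {
	return c.nc.Publish("hello.hello", msg)
}

// recorder is a fake publisher used for demonstration.
type recorder struct{ subjects []string }

func (r *recorder) Publish(subject string, data []byte) error {
	r.subjects = append(r.subjects, subject)
	return nil
}

func main() {
	rec := &recorder{}
	c := &HelloClient{nc: rec}
	_ = c.Hello([]byte("hi"))
	fmt.Println(rec.subjects) // [hello.hello]
}
```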

Enable travis

Hi @mdevan

It seems Travis was lost when moving the project to the new organisation. Could you enable it again?

Thanks

Christophe

Json encoding

I would like to be able to use json instead of proto for data encoding.

I can see that the .pb.go files annotate the structs for JSON encoding, so I guess the only addition would be where the marshal/unmarshal is done.
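One way to make that step pluggable is a small interface the generated code marshals through. Only the JSON side is shown here; the protobuf side would delegate to proto.Marshal/proto.Unmarshal, and the FileList struct is a stand-in for a generated .pb.go type:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Encoding abstracts the marshal/unmarshal step so the generated code can
// switch between protobuf and JSON at runtime.
type Encoding interface {
	Marshal(v any) ([]byte, error)
	Unmarshal(data []byte, v any) error
}

type jsonEncoding struct{}

func (jsonEncoding) Marshal(v any) ([]byte, error)   { return json.Marshal(v) }
func (jsonEncoding) Unmarshal(d []byte, v any) error { return json.Unmarshal(d, v) }

// FileList stands in for a generated .pb.go struct, which already carries
// json tags.
type FileList struct {
	Filename []string `json:"filename,omitempty"`
}

func main() {
	var enc Encoding = jsonEncoding{}
	data, _ := enc.Marshal(FileList{Filename: []string{"a.txt"}})
	fmt.Println(string(data)) // {"filename":["a.txt"]}

	var out FileList
	_ = enc.Unmarshal(data, &out)
	fmt.Println(out.Filename[0]) // a.txt
}
```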

Use pointers for Server/Client args and return values

Returning a pointer is nice because, if the function returns an error, most of the time you do not want to return an empty response struct.

Below is an example of how gRPC func signatures look

type routeGuideServer struct {
        ...
}
...

func (s *routeGuideServer) GetFeature(ctx context.Context, point *pb.Point) (*pb.Feature, error) {
        ...
}
...

func (s *routeGuideServer) ListFeatures(rect *pb.Rectangle, stream pb.RouteGuide_ListFeaturesServer) error {
        ...
}
...

func (s *routeGuideServer) RecordRoute(stream pb.RouteGuide_RecordRouteServer) error {
        ...
}
...

func (s *routeGuideServer) RouteChat(stream pb.RouteGuide_RouteChatServer) error {
        ...
}
