grpc-tools

A suite of tools for gRPC debugging and development. Like Fiddler/Charles but for gRPC!

The main tool is grpc-dump, which transparently intercepts network traffic and logs all gRPC and gRPC-Web requests with full metadata as a JSON stream. This stream is easily readable as-is, or you can use tools like jq for more complex visualisation.


This repository currently includes:

  • grpc-dump: a small gRPC proxy that dumps RPC details to a file for debugging, and later analysis/replay.
  • grpc-replay: takes the output from grpc-dump and replays requests to the server.
  • grpc-fixture: a proxy that takes the output from grpc-dump and replays saved responses to client requests.
  • grpc-proxy: a library for writing gRPC intercepting proxies. grpc-dump and grpc-fixture are both built on top of this library.

These tools are in alpha so expect breaking changes between releases. See the changelog for full details.

Installation

The recommended way to install these tools is via Homebrew using:

brew install bradleyjkemp/formulae/grpc-tools

Alternatively, binaries can be downloaded from the GitHub releases page.

Or you can build the tools from source using:

go install github.com/bradleyjkemp/grpc-tools/...

grpc-dump

grpc-dump lets you see all of the gRPC requests being made by applications on your machine without any code changes required to applications or servers.

Simply start grpc-dump and configure your system/application to use it as an HTTP(S) proxy. You'll soon see requests logged in full as a JSON stream with service and method names.

Even if you don't have the original .proto files, grpc-dump will attempt to deserialise messages heuristically to give a human readable form.

# start the proxy (leave out the --port flag to automatically pick one)
grpc-dump --port=12345

# in another terminal, run your application pointing it at the proxy
# Warning: if your application connects to a localhost/127.0.0.1 address then proxy settings
# are usually ignored. To fix this you can use a service like https://readme.localtest.me
http_proxy=http://localhost:12345 my-app

# all the requests made by the application will be logged to standard output in the grpc-dump window e.g.
# {"service": "echo", "method": "Hi", "messages": ["....."] }
# JSON will be logged to STDOUT and any info or warning messages will be logged to STDERR

Many applications expect to talk to a gRPC server over TLS. For this you need to use the --key and --cert flags to point grpc-dump to certificates valid for the domains your application connects to.

The recommended way to generate these files is via the excellent mkcert tool. grpc-dump will automatically use any mkcert generated certificates in the current directory.

# Configure your system to trust mkcert certificates
mkcert -install

# Generate certificates for domains you want to intercept connections to
mkcert mydomain.com *.mydomain.com

# Start grpc-dump using the key and certificate created by mkcert
# Or start grpc-dump from the same directory and it will
# detect them automatically
grpc-dump --key=mydomain.com-key.pem --cert=mydomain.com.pem

More details for using grpc-dump (including the specification for the JSON output) can be found here.

grpc-fixture

# save the (stdout) output of grpc-dump to a file
grpc-dump --port=12345 > my-app.dump

# in another terminal, run your application pointing it at the proxy
http_proxy=http://localhost:12345 my-app

# now run grpc-fixture from the previously saved output
grpc-fixture --port=12345 --dump=my-app.dump

# when running the application again, all requests will
# be intercepted and answered with saved responses,
# no requests will be made to the real gRPC server.
http_proxy=http://localhost:12345 my-app

For applications that expect a TLS server, the same --key and --cert flags can be used as described above for grpc-dump.

More details for using grpc-fixture can be found here.

grpc-tools's People

Contributors: bradleyjkemp, dependabot-preview[bot], marcellanz, nirhaas, testwill

grpc-tools's Issues

mTLS support and grpc-cache

I've created a "grpc-cache" tool using mwitkow's code as a base, but having it be part of this ecosystem would be better. I'd like to add it, but I'm not seeing a way to support mTLS. It seems that this is the right place to add a third alternative. Am I reading this right?

`http_proxy` not working?

Hi, could you please provide a working example?

I'm trying out this great tool using the official gRPC example, but it seems the client always ignores the http_proxy settings.

I start grpc-dump in one terminal:

grpc-dump --port=12345

Then start the server and client, hoping the request will go through proxy localhost:12345:

go run greeter_server/main.go # one terminal
http_proxy=http://localhost:12345 go run greeter_client/main.go   # another terminal

But nothing happens on the grpc-dump side, and the client works regardless of what http_proxy settings I provide. It seems the http_proxy setting is totally ignored.

Can you help? thanks!

grpc-dump doesn't show error/status details

This can be demonstrated with the grpc-go sample code.

  1. Start the sample server
$ go run ./server
  2. Copy helloworld.proto to proto/helloworld/helloworld.proto
  3. Start grpc-dump
$ ~/go/bin/grpc-dump -port 8980 -destination localhost:50052 -proto_roots proto
...
  4. Run the sample client twice:
$ go run ./client --addr localhost:8980
2020/04/01 00:10:48 Greeting: Hello world
$ go run ./client --addr localhost:8980
2020/04/01 00:10:50 Quota failure: violations:<subject:"name:world" description:"Limit one greeting per person" > 
exit status 1
  5. Inspect grpc-dump output:
{"service":"helloworld.Greeter","method":"SayHello","messages":[{"message_origin":"client","raw_message":"CgV3b3JsZA==","message":{"name":"world"},"timestamp":"2020-04-01T00:10:48.645753+03:00"},{"message_origin":"server","raw_message":"CgtIZWxsbyB3b3JsZA==","message":{"message":"Hello world"},"timestamp":"2020-04-01T00:10:48.646446+03:00"}],"metadata":{":authority":["localhost:8980"],"content-type":["application/grpc"],"user-agent":["grpc-go/1.29.0-dev"],"via":["HTTP/2.0 127.0.0.1:8980"]},"metadata_response_headers":{"content-type":["application/grpc"]},"metadata_response_trailers":{}}
{"service":"helloworld.Greeter","method":"SayHello","messages":[{"message_origin":"client","raw_message":"CgV3b3JsZA==","message":{"name":"world"},"timestamp":"2020-04-01T00:10:50.002394+03:00"}],"error":{"code":"ResourceExhausted","message":"Request limit exceeded."},"metadata":{":authority":["localhost:8980"],"content-type":["application/grpc"],"user-agent":["grpc-go/1.29.0-dev"],"via":["HTTP/2.0 127.0.0.1:8980"]},"metadata_response_headers":null,"metadata_response_trailers":{"content-type":["application/grpc"]}}

There are no error details:

{"code":"ResourceExhausted","message":"Request limit exceeded."}

I think it would be better to add the raw grpc-status-details-bin trailer to the output, at least for replay and debugging.

Listen to different interfaces capability

Hello !

I would like the grpc-dump tool to give users the ability to make the proxy listen on a different interface or IP address.

I worked around it by monkey-patching the source code and it worked pretty well; I'm guessing you could implement this easily by just adding a CLI argument and injecting the value into the right function call :)

Thanks !

ResourceExhausted on large payload

I get the following error when proxying through grpc-dump.

failed proxying s2c rpc error: code = ResourceExhausted desc = grpc: received message larger than max (13558592 vs. 4194304)

It would be great to be able to override the default max message size (4 MB).

Improve dumping of long-lived streaming RPCs

Currently, RPCs are only dumped to the output once they finish, which means we miss requests that outlive the proxy. This is also confusing because messages received by the server/client aren't represented in the dump in real time.

A possible solution is to split streaming RPCs across multiple entries in the dump, perhaps one per message? (with something like a request id to link the lines together).
This has the benefit that messages appear in the dump in real time but has the downside that now the dump is much harder to understand (e.g. especially unary RPCs which are no longer a nice single entry).
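As a purely hypothetical illustration of the split format (every field name here is invented for this sketch and is not part of the current dump format), a unary RPC might become several linked lines:

```json
{"id": "req-42", "event": "started", "service": "echo", "method": "Hi"}
{"id": "req-42", "event": "message", "message_origin": "client", "message": {"name": "world"}}
{"id": "req-42", "event": "message", "message_origin": "server", "message": {"message": "Hello world"}}
{"id": "req-42", "event": "finished"}
```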

Include a timestamp for each gRPC message

Each server/client message should have a timestamp logged of when it was received.
This will give users greater debugging power because e.g. they'll be able to find requests that took a long time.

This should be an easy addition to grpc-dump/recorded_server_stream.go

Include string form of error code in dump

Currently if an RPC returns an error then in the dump it appears as something like:
"error":{"code":13, "message":"somestring"}

It'd be nice if this were extended to:
"error":{"code":13, "code_string": "Internal", "message":"somestring"}

This could also be an opportunity for a breaking change: to have "code" be the string (as this is most often going to be used) and have a "code_number" field with the number.

grpc-replay: use the timestamp during replay

I looked at the code and believe that currently the replay is done as fast as possible. However, in many scenarios the replay would be most realistic if it could match the originally recorded time differences from the beginning of the recording.

Would it be possible to have such features to record the timestamp and use that to pace the replay?

grpc-proxy

Hi,

I would like to get more information about the grpc-proxy you built. Can it also be used for TensorFlow Serving (it uses gRPC requests/responses)? I want to read the header, re-write it, forward it to TensorFlow Serving, then read the response and forward it to the gRPC client.

grpc client <-> grpc proxy <-> TensorFlow Serving

Thanks.

How smart can grpc-replay be when inferring dynamic/ID fields?

Related: #1

A very common flow is for the client to send a Create request and receive in response some sort of handle that they can later use to Read/Update/Delete the resource they just created.
It would be really nice if grpc-replay could infer these fields and dynamically substitute them into later requests.

Detection process:

  • If a value is returned in a response that is later used in a request
  • The value is of type string (anything else?)
  • The value is the only field that differs between the actual response and the dump response

Where we are certain a field is an ID we can substitute immediately. When unsure, can prompt the user to tell us if we should treat the field as an ID.

A command-line flag would be useful to force fields to be treated as IDs.

Questions:

  • Under what conditions are we certain that something is an ID?
  • Is it ok to re-try a request if we change our mind about whether something is an ID? (would want to at least inform the user and likely prompt them whether or not to continue)
  • What information do we need to prompt the user for them to be able to decide if something is an ID

Replaying requests without proto definitions

Is there any way to replay requests dumped by grpc-dump when the proto descriptor files / definitions are not known?
Won't the raw_message field in the dump be enough to resend it (without modifications)?

Support marshalling messages from human-readable format

When writing fixtures or writing dumps to be replayed for testing purposes, it would be much nicer to be able to write messages in the human readable form rather than requiring raw bytes.

grpc-replay and grpc-fixture should marshal the human readable format and send this instead of reading the raw_message field.

At the moment JSON Proto parsing doesn't support unknown fields so I think for now this will only be possible when proto definitions have been loaded.

Perhaps messages that have been heuristically decoded could be tagged and a custom https://github.com/jhump/protoreflect-based parser be used for when we don't have protos?

where to set HTTP_PROXY in python app?

Thanks for this library, it really makes gRPC easier to use.

I am trying the example with my Python server but have no idea what the http_proxy is for. If I point grpc-dump at the port my Python server is listening on, the requests are shown, but there are no responses.

My Python server is similar to:

https://github.com/grpc/grpc/blob/master/examples/python/helloworld/greeter_server_with_reflection.py

Clearly I'm missing something. I am trying to find the demo app you are using; could you put the echo app code up as an example? Thanks.

Add timestamps to messages

It would be great if grpc-dump would add timestamps of the time it records messages. This would not only allow to postprocess the output and make timing analyses, but also maybe to use the timestamps to mimic the (relative) timing behavior in grpc-replay.

grpc-dump: Use with iOS

I have been successful in using grpc-dump to listen to gRPC traffic from my Android device. However, whenever I try to do the same thing on my iPhone (running iOS 14), I get the following error:

2020/12/26 21:48:08 http: TLS handshake error from <IP Address>:49345: EOF

Has anyone managed to successfully listen on grpc traffic from iOS?

Include common (frequently used) .proto files as static files

Definitions such as google/longrunning/operations.proto should be baked into the tools so that it's easier to run grpc-dump/etc. by just pointing at your own proto files.

This should be implemented via a custom file loader in the proto parser which uses static files if real ones cannot be found. (real ones should still take priority if they exist)

proxy variable needs http:// prefix

I am following the example, but my application (using the gRPC C++ API on macOS) seems to only work if

http_proxy=http://127.0.0.1:12345

is passed rather than

HTTP_PROXY=127.0.0.1:12345

Only with the lower-case http_proxy, and only with the http:// prefix, do I see output in grpc-dump (0.1.2).

how to use with docker

I tried setting an environment variable to host.docker.internal:grpcdumpPORT in my docker-compose file, but the calls just appear as failing to dial host.docker.internal. Are there any examples of how to use this with Docker?

error: Decompressor is not installed for grpc-encoding "gzip"

For compressed gRPC messages, I get the following error in grpc-dump (0.1.2, macOS):

"error":{"code":12,"message":"grpc: Decompressor is not installed for grpc-encoding \"gzip\""}

The request seems not to be forwarded to the server.

I installed grpc-dump via homebrew. Is there anything I can do to use compression?

grpc-dump prints warning if message definition contains a map

WARN[0075] Failed to search for unknown fields in message =proto_decoder error="failed to enrich decode descriptor: unknown nested field type map[interface {}]interface {}"

Nevertheless, the map is dumped correctly.

Steps to reproduce:

  1. Start greeter_server
$ ~/go/bin/greeter_server
...
  2. Download helloworld.proto to proto/helloworld/helloworld.proto
  3. Edit helloworld.proto and add a map field to HelloRequest:
message HelloRequest {
  string name = 1;
  map<string, string> foo = 2;
}
  4. Start grpc-dump:
$ ~/go/bin/grpc-dump -port 8980 -destination localhost:50051 -proto_roots proto
...
  5. Call SayHello with grpc_cli, first without, then with a map:
$ grpc_cli -remotedb=false -proto_path proto -protofiles helloworld/helloworld.proto \
    call localhost:8980 helloworld.Greeter/SayHello 'name:"world"'
...
$ grpc_cli -remotedb=false -proto_path proto -protofiles helloworld/helloworld.proto \
    call localhost:8980 helloworld.Greeter/SayHello 'name:"world" foo{key:"abc" value:"123"}'
...
  6. grpc-dump prints warnings, but dumps everything correctly:
WARN[0003] Failed to search for unknown fields in message  =proto_decoder error="failed to enrich decode descriptor: unknown nested field type map[interface {}]interface {}"
{"service":"helloworld.Greeter","method":"SayHello","messages":[{"message_origin":"client","raw_message":"CgV3b3JsZA==","message":{"name":"world"},"timestamp":"2020-03-31T23:47:27.013997+03:00"},{"message_origin":"server","raw_message":"CgtIZWxsbyB3b3JsZA==","message":{"message":"Hello world"},"timestamp":"2020-03-31T23:47:27.014589+03:00"}],"metadata":{":authority":["localhost:8980"],"accept-encoding":["identity,gzip"],"content-type":["application/grpc"],"grpc-accept-encoding":["identity,deflate,gzip"],"user-agent":["grpc-c++/1.19.1 grpc-c/7.0.0 (osx; chttp2; gold)"],"via":["HTTP/2.0 127.0.0.1:8980"]},"metadata_response_headers":{"content-type":["application/grpc"]},"metadata_response_trailers":{}}
WARN[0075] Failed to search for unknown fields in message  =proto_decoder error="failed to enrich decode descriptor: unknown nested field type map[interface {}]interface {}"
{"service":"helloworld.Greeter","method":"SayHello","messages":[{"message_origin":"client","raw_message":"CgV3b3JsZBIKCgNhYmMSAzEyMw==","message":{"name":"world","foo":{"abc":"123"}},"timestamp":"2020-03-31T23:48:38.287028+03:00"},{"message_origin":"server","raw_message":"CgtIZWxsbyB3b3JsZA==","message":{"message":"Hello world"},"timestamp":"2020-03-31T23:48:38.28725+03:00"}],"metadata":{":authority":["localhost:8980"],"accept-encoding":["identity,gzip"],"content-type":["application/grpc"],"grpc-accept-encoding":["identity,deflate,gzip"],"user-agent":["grpc-c++/1.19.1 grpc-c/7.0.0 (osx; chttp2; gold)"],"via":["HTTP/2.0 127.0.0.1:8980"]},"metadata_response_headers":{"content-type":["application/grpc"]},"metadata_response_trailers":{}}

Workaround

In proto-files for grpc-dump replace map with repeated MapEntry field.
See: https://developers.google.com/protocol-buffers/docs/proto3#backwards-compatibility
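Per that document, a map field is wire-compatible with a repeated message whose entries have a key field numbered 1 and a value field numbered 2, so the HelloRequest above can be rewritten as:

```proto
message HelloRequest {
  string name = 1;
  repeated FooEntry foo = 2;
}

// Wire-compatible stand-in for map<string, string>.
message FooEntry {
  string key = 1;
  string value = 2;
}
```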

grpc-dump outputs nothing if message contains google.protobuf.Any

Steps to reproduce:

  1. Start greeter_server
$ ~/go/bin/greeter_server
...
  2. Copy helloworld.proto to proto/helloworld/helloworld.proto
    Copy any.proto to proto/google/protobuf/any.proto
  3. Edit helloworld.proto and add a field of type Any to the request:
import "google/protobuf/any.proto";
...
message HelloRequest {
  string name = 1;
  google.protobuf.Any foo = 2;
}
  4. Start grpc-dump:
$ ~/go/bin/grpc-dump -port 8980 -destination localhost:50051 -proto_roots proto
...
  5. Call SayHello with grpc_cli and pass an empty foo:
$ grpc_cli -remotedb=false -proto_path proto -protofiles helloworld/helloworld.proto \
    call localhost:8980 helloworld.Greeter/SayHello 'name:"world" foo{}'
...
  6. grpc-dump prints a blank line instead of normal output.
  7. Now try again without foo:
$ grpc_cli -remotedb=false -proto_path proto -protofiles helloworld/helloworld.proto \
    call localhost:8980 helloworld.Greeter/SayHello 'name:"world"'
  8. grpc-dump records the request/response as expected:
{"service":"helloworld.Greeter","method":"SayHello","messages":[{"message_origin":"client","raw_message":"CgV3b3JsZA==","message":{"name":"world"},"timestamp":"2020-03-31T23:43:03.923154+03:00"},{"message_origin":"server","raw_message":"CgtIZWxsbyB3b3JsZA==","message":{"message":"Hello world"},"timestamp":"2020-03-31T23:43:03.923359+03:00"}],"metadata":{":authority":["localhost:8980"],"accept-encoding":["identity,gzip"],"content-type":["application/grpc"],"grpc-accept-encoding":["identity,deflate,gzip"],"user-agent":["grpc-c++/1.19.1 grpc-c/7.0.0 (osx; chttp2; gold)"],"via":["HTTP/2.0 127.0.0.1:8980"]},"metadata_response_headers":{"content-type":["application/grpc"]},"metadata_response_trailers":{}}
  9. Try adding an Any field to the response.

Workaround

In proto-files for grpc-dump replace standard Any with MyAny of the same structure.
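google.protobuf.Any is itself just a two-field message, so the stand-in only needs to mirror its field numbers:

```proto
// Same wire structure as google.protobuf.Any.
message MyAny {
  string type_url = 1;
  bytes value = 2;
}
```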

Support comments in JSON dump format

For dumps written or edited by hand (e.g. those used with grpc-fixture or grpc-replay) it would be nice to support comments in the JSON dump so that users can document their dumps.

These still need to support streaming decoding so running the entire file through a JSON pre-processor is not an option.

grpc-replay can't assert on RPC errors

Related to #24

grpc-replay is currently unable to assert that the server returns the same error as the dump that is being replayed. At the moment it only checks that the messages are correct.

Include channel information/id for RPCs

Currently the dump format does not include any information about which RPCs came in from the same client/connection. This information is probably accessible through the request context?

Adding this information would be useful to be able to filter for all RPCs from a given client and for grpc-replay to be able to make requests in parallel.

grpc-dump should support proxying to multiple backends

grpc-dump should accept a list of ports and destinations that should be proxied so that e.g. dumping RPCs to multiple micro-services is easily supported.

Acceptance Criteria:

  • --port flag takes a list of ports
  • --destination flag takes a list of destinations
  • Ensure that len(ports) == len(destinations)
  • Start a grpc-dump proxy on all ports with the correct destinations

grpc-dump fails to JSON encode google.protobuf.Any fields embedding proto messages

This might be a duplicate of #88 but I think I have a fix for that and it may fail differently for me.

Given

  1. The communication to be dumped is based on the following protobuf files:
    https://github.com/cloudstateio/go-support/tree/master/protobuf
  2. Some proto files may not be imported by any other proto files; this doesn't really seem to be a problem.
  3. A sample message sequence dumped with grpc-dump can be found here dump_001_no_proto_roots.json
  4. A message sequence dumped with -proto_roots can be found here dump_001_proto_root_with_errors.out

If grpc-dump is run with -proto_roots, it fails with:

WARN[0020] Failed to search for unknown fields in message  =proto_decoder error="failed to enrich decode descriptor: failed to search nested field init: failed to search nested field snapshot: internal: got nil message"
WARN[0020] Failed to search for unknown fields in message  =proto_decoder error="failed to enrich decode descriptor: failed to search nested field init: internal: got nil message"
WARN[0020] Failed to search for unknown fields in message  =proto_decoder error="failed to enrich decode descriptor: failed to search nested field init: failed to search nested field snapshot: internal: got nil message"
WARN[0020] Failed to search for unknown fields in message  =proto_decoder error="failed to enrich decode descriptor: failed to search nested field reply: failed to search nested field client_action: failed to search nested field forward: internal: got nil message"

If grpc-dump is run without -proto_roots it does not fail, but message fields are just named with their corresponding tag numbers:

{
      "message_origin": "server",
      "raw_message": "CpIBCAISLwotCisKKXR5cGUuZ29vZ2xlYXBpcy5jb20vZ29vZ2xlLnByb3RvYnVmLkVtcHR5Il0KQnR5cGUuZ29vZ2xlYXBpcy5jb20vY29tLmV4YW1wbGUuc2hvcHBpbmdjYXJ0LnBlcnNpc3RlbmNlLkl0ZW1BZGRlZBIXChUKCUExNDM2MjM0NxIGRGVsdXhlGAU=",
      "message": {
        "1": {
          "1": "2",
          "2": {
            "1": {
              "1": {
                "1": "type.googleapis.com/google.protobuf.Empty"
              }
            }
          },
          "4": {
            "1": "type.googleapis.com/com.example.shoppingcart.persistence.ItemAdded",
            "2": {
              "1": {
                "1": "A14362347",
                "2": "Deluxe",
                "3": "5"
              }
            }
          }
        }
      },
      "timestamp": "2020-05-01T15:30:57.510682+02:00"
},

This message should look like this if encoded properly:

    {
      "message_origin": "server",
      "raw_message": "CpIBCAISLwotCisKKXR5cGUuZ29vZ2xlYXBpcy5jb20vZ29vZ2xlLnByb3RvYnVmLkVtcHR5Il0KQnR5cGUuZ29vZ2xlYXBpcy5jb20vY29tLmV4YW1wbGUuc2hvcHBpbmdjYXJ0LnBlcnNpc3RlbmNlLkl0ZW1BZGRlZBIXChUKCUExNDM2MjM0NxIGRGVsdXhlGAU=",
      "message": {
        "reply": {
          "commandId": "2",
          "clientAction": {
            "reply": {
              "payload": {
                "@type": "type.googleapis.com/google.protobuf.Empty",
                "value": {}
              }
            }
          },
          "events": [
            {
              "@type": "type.googleapis.com/com.example.shoppingcart.persistence.ItemAdded",
              "item": {
                "productId": "A14362347",
                "name": "Deluxe",
                "quantity": 5
              }
            }
          ]
        }
      },
      "timestamp": "2020-05-01T15:44:10.461715+02:00"
    },

Analysis
a) The error got nil message comes from proto_decoder.unknownFieldResolver#enrichMessage and seems to be triggered by a recursive call to itself whenever the condition dynamicNestedMessage == nil is met here. I read the comment here about message being nil and a potential race condition, but it never occurred during my debugging sessions for this issue.
Returning nil from enrichMessage in this case seems to solve the issue. I'm not sure if this is valid, but I did not observe anything missing while doing so.

b) As soon as this is fixed, I got an error like this

FATA[0013] Failed to marshal rpc   error="json: error calling MarshalJSON for type *dynamic.Message: unknown message type "com.example.shoppingcart.GetShoppingCart"

which comes from dynamic.AnyResolver#Resolve in protoreflect and is triggered by dump#dumpInterceptor, which tries to json.Marshal the RPC struct with its collected messages.
On its way down to marshalling types, json.Marshal ultimately calls dynamic.Messages#MarshalJSON, which fails because the jsonpb.Marshaler config lacks an AnyResolver that could resolve message types that are not well-known or findable via proto.MessageType, and so it ultimately fails with

unknown message type

Fix
Getting messages resolved using proto.MessageType and then registered via proto.RegisterType is probably a bad idea; those should only be used by Go types generated by protoc.
After some debugging and tests I arrived at a solution where dumpInterceptor wraps a message event in a type that implements the json.Marshaler interface and configures an AnyResolver for the jsonpb.Marshaler that knows all the FileDescriptors the proto_descriptor.LoadProtoDirectories function sees while loading the proto files provided by the -proto_roots argument.

This is implemented in this PR #97 and works for the use case mentioned at the beginning of this issue.

grpc-go metadata example hangs when running through grpc-dump

Run the metadata example through grpc-dump and observe that the client hangs.

  1. Start the server
$ go run ./server
server listening at [::]:50051
  2. Start grpc-dump
$ ~/go/bin/grpc-dump -port 8980 -destination localhost:50051
INFO[0000] Listening on 127.0.0.1:8980                  
INFO[0000] Not intercepting TLS connections
  3. Run the client through the proxy
$ go run ./client --addr localhost:8980
--- unary ---
timestamp from header:
 0. Apr  1 10:05:11.029726000
location from header:
 0. MTV
response:
 - this is examples/metadata
timestamp from trailer:
 0. Apr  1 10:05:11.029891000
--- server streaming ---
timestamp from header:
 0. Apr  1 10:05:12.033684000
location from header:
 0. MTV
response:
 - this is examples/metadata
 - this is examples/metadata
 - this is examples/metadata
 - this is examples/metadata
 - this is examples/metadata
 - this is examples/metadata
 - this is examples/metadata
 - this is examples/metadata
 - this is examples/metadata
 - this is examples/metadata
timestamp from trailer:
 0. Apr  1 10:05:12.033878000
--- client streaming ---

The client hangs.

I don't know if it's related to passing metadata or a more general problem with client streaming.

Document grpc-dump output format

Without a proper specification of the dump output format it's hard for users to write parsers, etc. for the output of grpc-dump.

How strict should grpc-replay be when matching requests/responses?

Things that could be matched on:

  • Do request/response messages need to be an exact (byte-level) match of those in the dump?
  • Is it possible to ignore mismatches in certain fields?
  • How are "Status-like" RPCs handled (where you poll with some sort of ID and the response changes over time from e.g. starting -> running -> complete)
    • Is it ok to ignore the exact number of duplicate messages as long as the transitions match?

Send requests using proxied user agent

The upgrade to grpc-go (#33) broke the integration tests because the user agent changed.
This shouldn't happen because grpc-replay should send the user agent from the dump instead of its own default user agent.

I think the same issue will apply to grpc-proxy

grpc-dump should support dumping JSON proto messages

So that dumps can be easily filtered by request/response fields, each RPC in the dump should include a JSON format representation of the messages. This requires that grpc-dump be able to read the .proto files describing the RPCs being called.

Acceptance Criteria:

  • A new flag --proto_dir is added to grpc-dump which is used to point grpc-dump to a directory containing all the .proto files describing messages it will see
  • On startup this directory is scanned and a map created from fully-qualified RPC method to some form of proto descriptor
  • In the dump interceptor, the proto descriptor is looked up for each message and used to unmarshal the []byte message.
  • A new field is added to the message in the dump containing this unmarshalled form: note this should not be a string but should be a proper javascript object so that jq can be used to query it

grpc python http_proxy is sensitive to case and scheme is not optional

βœ— http_proxy=localhost:12345 python greeter_client.py
E0619 00:36:52.389945000 4728600000 http_proxy.cc:62] 'localhost' scheme not supported in proxy URI
Greeter client received: Hello, you!
βœ— HTTP_PROXY=localhost:12345 python greeter_client.py
Greeter client received: Hello, you!
βœ— HTTP_PROXY=http://localhost:12345 python greeter_client.py
Greeter client received: Hello, you!

βœ— http_proxy=http://localhost:12345 python greeter_client.py
Greeter client received: Hello, you!

To test: use https://github.com/swuecho/hello_grpc, but change the port in the server to 50051.

Cannot load proto files

I am having trouble using --proto_descriptors and --proto_roots

I have created a proto descriptor sample using protoc --descriptor_set_out=desc.pb schema.proto

When I run ./grpc-dump -proto_descriptors "desc.pb" the output is

$ ./grpc-dump -proto_descriptors "desc.pb" 
no such file: ""

In dump.go I noticed that both clauses are using protoRoots - this appears wrong

	if protoRoots != "" {
		r, err := proto_decoder.NewFileResolver(strings.Split(protoRoots, ",")...)
		if err != nil {
			return err
		}
		resolvers = append(resolvers, r)
	}
	if protoDescriptors != "" {
		// BUG: this should split protoDescriptors, not protoRoots
		r, err := proto_decoder.NewDescriptorResolver(strings.Split(protoRoots, ",")...)
		if err != nil {
			return err
		}
		resolvers = append(resolvers, r)
	}

Changing the second clause to split protoDescriptors instead did not fix the issue, but the error changed to:

$ ./grpc-dump -proto_descriptors "desc.pb" 
no such file: "desc.pb"

Can you please provide an example command line showing how to reference the proto descriptors?

Thanks

grpc-fixture should match based on destination/authority

Some gRPC services (e.g. the reflection API or server healthcheck/status) are implemented by multiple servers, which can return different results.

grpc-fixture should be able to return different responses depending on the intercepted request destination.

At the moment I think the best solution is to add an "authority" field to the messageTree struct, which the fixture interceptor can then match on.
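A minimal sketch of the idea, keying saved responses by both the :authority pseudo-header and the method. The type and field names here are assumptions for illustration, not grpc-fixture's real internals:

```go
package main

import "fmt"

// fixtureKey keys saved responses by both destination authority and
// fully-qualified method, so two servers implementing the same RPC
// (e.g. the health check service) can return different fixtures.
type fixtureKey struct {
	authority string // value of the :authority pseudo-header
	method    string // fully-qualified method name
}

func main() {
	fixtures := map[fixtureKey]string{
		{"users:443", "/grpc.health.v1.Health/Check"}:  "SERVING",
		{"orders:443", "/grpc.health.v1.Health/Check"}: "NOT_SERVING",
	}
	fmt.Println(fixtures[fixtureKey{"users:443", "/grpc.health.v1.Health/Check"}])  // SERVING
	fmt.Println(fixtures[fixtureKey{"orders:443", "/grpc.health.v1.Health/Check"}]) // NOT_SERVING
}
```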

Related: #1

grpc-dump prints a warning if a nested message is null (not set)

Like this:

WARN[0004] Failed to search for unknown fields in message =proto_decoder error="failed to enrich decode descriptor: failed to search nested field foo: internal: got nil message"

Steps to reproduce:

  1. Start greeter_server:
$ ~/go/bin/greeter_server
...
  2. Download helloworld.proto to proto/helloworld/helloworld.proto
  3. Edit helloworld.proto and add a nested message to HelloRequest:
message HelloRequest {
  string name = 1;
  Foo foo = 2;
}

message Foo {
  string bar = 1;
}
  4. Start grpc-dump:
$ ~/go/bin/grpc-dump -port 8980 -destination localhost:50051 -proto_roots proto
...
  5. Call SayHello with grpc_cli and omit foo:
$ grpc_cli -remotedb=false -proto_path proto -protofiles helloworld/helloworld.proto \
    call localhost:8980 helloworld.Greeter/SayHello 'name:"world"'
...
  6. grpc-dump prints a warning:
WARN[0004] Failed to search for unknown fields in message  =proto_decoder error="failed to enrich decode descriptor: failed to search nested field foo: internal: got nil message"
{"service":"helloworld.Greeter","method":"SayHello","messages":[{"message_origin":"client","raw_message":"CgV3b3JsZA==","message":{"name":"world"},"timestamp":"2020-03-31T22:37:14.401284+03:00"},{"message_origin":"server","raw_message":"CgtIZWxsbyB3b3JsZA==","message":{"message":"Hello world"},"timestamp":"2020-03-31T22:37:14.402273+03:00"}],"metadata":{":authority":["localhost:8980"],"accept-encoding":["identity,gzip"],"content-type":["application/grpc"],"grpc-accept-encoding":["identity,deflate,gzip"],"user-agent":["grpc-c++/1.19.1 grpc-c/7.0.0 (osx; chttp2; gold)"],"via":["HTTP/2.0 127.0.0.1:8980"]},"metadata_response_headers":{"content-type":["application/grpc"]},"metadata_response_trailers":{}}
  7. Now pass an empty foo:
$ grpc_cli -remotedb=false -proto_path proto -protofiles helloworld/helloworld.proto \
    call localhost:8980 helloworld.Greeter/SayHello 'name:"world" foo{}'
  8. No warning:
{"service":"helloworld.Greeter","method":"SayHello","messages":[{"message_origin":"client","raw_message":"CgV3b3JsZBIA","message":{"name":"world","foo":{}},"timestamp":"2020-03-31T22:38:07.585913+03:00"},{"message_origin":"server","raw_message":"CgtIZWxsbyB3b3JsZA==","message":{"message":"Hello world"},"timestamp":"2020-03-31T22:38:07.586199+03:00"}],"metadata":{":authority":["localhost:8980"],"accept-encoding":["identity,gzip"],"content-type":["application/grpc"],"grpc-accept-encoding":["identity,deflate,gzip"],"user-agent":["grpc-c++/1.19.1 grpc-c/7.0.0 (osx; chttp2; gold)"],"via":["HTTP/2.0 127.0.0.1:8980"]},"metadata_response_headers":{"content-type":["application/grpc"]},"metadata_response_trailers":{}}
  9. Repeat steps 3-8 with the response message.
