joedevivo / chatterbox

HTTP/2 Server for Erlang. Boy, that guy was a real chatterbox waddn't he? I didn't think he was ever going to stop with the story.

License: MIT License

Makefile 0.08% Erlang 41.88% HTML 5.73% CSS 19.04% JavaScript 24.71% SCSS 8.56%


chatterbox's Issues

Issues when connecting to nginx server

Hi,

When I connect to nginx (the browser fetches the files correctly over HTTP/2), I get these weird log lines:

12:08:54.911 [error] [client][Stream 1] Flow Control got 570 bytes, Stream window was 0
12:08:54.941 [error] Sending 1 FLOWCONTROL ERROR because NSW = 2147549182

Looking at the nginx logs, it looks as if the request was served correctly:

[20/Dec/2016:12:08:54 +0000] "GET /index.html HTTP/2.0" 200 692 

This is how I did the request:

    {ok, Pid} = h2_client:start_link(https, "askfred.today", 443, [{client_preferred_next_protocols, {client, [<<"h2">>]}}]),

    RequestHeaders = [
               {<<":method">>, <<"GET">>},
               {<<":path">>, <<"/index.html">>},
               {<<":scheme">>, <<"https">>},
               {<<":authority">>, <<"askfred.today">>},
               {<<"accept">>, <<"*/*">>},
               {<<"accept-encoding">>, <<"gzip, deflate">>},
               {<<"user-agent">>, <<"chatterbox/0.1">>}
              ],

    RequestBody = <<>>,
%    {ok, StreamId} = h2_client:send_request(Pid, RequestHeaders, RequestBody),
%    loop_reception(Pid, StreamId).

    {ok, {ResponseHeaders, ResponseBody}} = h2_client:sync_request(Pid, RequestHeaders, RequestBody).

Can you please help?

Move this project into an organization for better development

Dear @joedevivo,

Would it be possible to move this repository into an organization for better development?

Development stopped more than 2 years ago.
The last official release is 0.8.0 (2018-08-27), soon 5 years ago.

I know that @tsloughter works a lot on it and on grpcbox too.

The same goes for your other repositories:

I propose moving them to https://github.com/chatterbox-project.

Thanks in advance.

new release?

It would be interesting to have a release soon that removes the warnings about gen_statem and ssl on OTP 21 :)

How can we know the max number of streams allowed on the server

Hi,

Using h2_client, how can I know the max number of concurrent streams allowed on the server?
I can see in the code that the received SETTINGS frame saves the other endpoint's max concurrent streams in the local connection.

I would like to check the max streams allowed and then, when sending, split the requests into chunks of at most max-streams size.

Example:

Say I have to send 1000 requests: I check the max streams (100), split the requests into 10 chunks, and send and receive the streams until all 1000 are sent without exceeding the server's limit (exceeding it causes {ok, {error, 7}} on a chatterbox server if a limit is set in the MCS).
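A minimal sketch of the chunking described above (module and function names are made up; the actual batching around h2_client is left to the caller):

```erlang
%% chunk/2 splits a list of pending requests into batches of at most
%% MaxStreams elements, so each batch can be run as concurrent streams
%% without exceeding the server's SETTINGS_MAX_CONCURRENT_STREAMS.
-module(chunked_send).
-export([chunk/2]).

-spec chunk([term()], pos_integer()) -> [[term()]].
chunk(Requests, MaxStreams) when length(Requests) =< MaxStreams ->
    [Requests];
chunk(Requests, MaxStreams) ->
    {Batch, Rest} = lists:split(MaxStreams, Requests),
    [Batch | chunk(Rest, MaxStreams)].
```

With 1000 requests and a limit of 100, this yields 10 batches; each batch is sent, its responses are collected, and only then does the next batch start.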

Some guidance would be nice

Cheers

Stream creation concurrency bug

For starters, I'm not 100% certain where to file this bug. I've found a bug via grpcbox's use of streams; however, I don't see a way that it can be fixed externally from the h2_connection pid.

The code in grpcbox:

https://github.com/tsloughter/grpcbox/blob/545ccdfe3f54469d53f373384e81f00b0b51807c/src/grpcbox_client_stream.erl#L75-L86

The issue here is that we're allocating and returning a stream id from inside the h2_connection, but we don't send headers from inside the h2_connection pid. The relevant section of RFC 7540 is here:

https://httpwg.org/specs/rfc7540.html#rfc.section.5.1.1

The identifier of a newly established stream MUST be numerically greater than all streams that the initiating endpoint has opened or reserved. This governs streams that are opened using a HEADERS frame and streams that are reserved using PUSH_PROMISE. An endpoint that receives an unexpected stream identifier MUST respond with a connection error (Section 5.4.1) of type PROTOCOL_ERROR.

Given that we allocate the ids before the headers are sent, concurrent use of the h2_connection can easily crash the entire connection if the headers are sent in a different order than the stream ids are allocated, which is fairly easy to induce with something like:

[spawn(fun() -> use_grpcbox_client() end) || _N <- lists:seq(1, 1000)]

Theoretically grpcbox could somehow insist on serializing stream creation, but it really seems like this should be enforced inside h2_connection. The most obvious fix I can see would be to change h2_connection:new_stream to take the headers as an argument so that they're sent as ids are allocated.
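A toy model of why the proposed change fixes the race (this is not chatterbox code, just an illustration): when a single process both allocates the id and "sends" the HEADERS frame, the ids hit the wire in ascending order no matter how many callers race.

```erlang
-module(ordered_streams).
-export([demo/0]).

%% One process owns both id allocation and the "send", so the two steps
%% can never be reordered by the scheduler.
demo() ->
    Conn = spawn(fun() -> loop(1, []) end),
    [spawn(fun() -> Conn ! {new_stream, headers} end)
     || _ <- lists:seq(1, 100)],
    timer:sleep(200),
    Conn ! {dump, self()},
    receive
        {sent, Ids} ->
            %% ids were "sent" strictly in ascending order
            Ids =:= lists:sort(Ids)
    end.

loop(NextId, Sent) ->
    receive
        {new_stream, _Headers} ->
            %% allocate the id and emit HEADERS atomically
            loop(NextId + 2, [NextId | Sent]);
        {dump, From} ->
            From ! {sent, lists:reverse(Sent)}
    end.
```

In the current API the allocation happens in the connection process but the HEADERS send happens in the caller, which is exactly the gap the comprehension above exploits.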

1.0 API

Opening this for anyone with particular changes they'd like, whether it is improvements or bug fixes, to please let it be known in this issue so I can take them into account.

I'd like to get to a 1.0 release. Right now I try to stay away from changes that aren't backwards compatible because there is no major version to bump. Of course it is expected that before a 1.0 that anything goes, but it isn't fun for users if every release breaks their usage.

There are a number of exposed functions I think can be removed, and the stream callbacks need to be improved. This will have to be done carefully so we don't mark a 1.0 and then find out there are bugs requiring API changes right away :).

I'd also like to dig into h2_connection stream handling and verify there aren't some unnecessary bottlenecks (as in, operations that don't have to be done within the single process). For example, maybe if streams are in a protected ETS table instead of a list it'll be possible to move some operations out of the single process.

A good bit has changed in OTP since this library was written as well, so more moving to use of maps where possible and using uri_string to have a better client request interface.

h2_client:send_request/3 does not check the return from h2_connection:new_stream/1,2

h2_client:send_request/3 does not check the return from h2_connection:new_stream/1,2. When a new stream cannot be created, for example, because the peer connection sets the maximum concurrent streams to 1, h2_connection:new_stream/1,2 returns {error, 7}.

However, because h2_client:send_request/3 does not check this, it returns {ok, {error, 7}} to the caller, which blows up when it next tries to use a stream id of {error, 7}.
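A small pure guard illustrating the missing check (a hypothetical helper, not the library's API): validate whatever new_stream returns before treating it as a stream id.

```erlang
-module(stream_guard).
-export([check_stream_id/1]).

%% h2_connection:new_stream/1,2 returns either a stream id or {error, Code}
%% ({error, 7} corresponds to REFUSED_STREAM, i.e. max concurrent streams
%% was hit). Propagate the error instead of wrapping it as {ok, {error, 7}}.
check_stream_id({error, _Code} = Err) ->
    Err;
check_stream_id(StreamId) when is_integer(StreamId), StreamId > 0 ->
    {ok, StreamId}.
```

With a check like this in h2_client:send_request/3, the caller would see {error, 7} directly instead of crashing later on a bogus stream id.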

Crash when an unknown certificate is used in a connection

Thank you for your work! But I ran into trouble when using chatterbox; I hope you can do me a favor :)

I'm working on APNS, and this is the code (Elixir) I wrote to connect to the APNS server:

    transport = :https
    host = 'gateway.sandbox.push.apple.com'
    port = 2195
    ssl_opts = [{:certfile, path}, :binary]
    :h2_client.start_link(transport, host, port, ssl_opts)

When the certificate file is valid, everything works fine, but if an invalid certificate file is used, chatterbox crashes. The error information is:

** (EXIT from #PID<0.302.0>) an exception was raised:
    ** (MatchError) no match of right hand side value: {:error, {:tls_alert, 'certificate unknown'}}
        (chatterbox) src/h2_connection.erl:148: :h2_connection.init/1
        (stdlib) gen_fsm.erl:378: :gen_fsm.init_it/6
        (stdlib) proc_lib.erl:240: :proc_lib.init_p_do_apply/3

It seems something should be done in h2_connection when the connection fails?
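One possible shape for such a fix (a sketch, not chatterbox's actual code): branch on the connect result in init instead of asserting {ok, Socket}, so a TLS failure becomes a clean stop rather than a MatchError crash.

```erlang
-module(connect_guard).
-export([handle_connect/1]).

%% Instead of `{ok, Socket} = ssl:connect(...)`, which crashes the FSM on
%% any TLS failure, branch on the result and turn errors into a normal stop.
handle_connect({ok, Socket}) ->
    {ok, connected, Socket};
handle_connect({error, Reason}) ->
    %% e.g. Reason = {tls_alert, "certificate unknown"}
    {stop, {shutdown, {connect_failed, Reason}}}.
```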

Couldn't compile hpack on Ubuntu 14.04 (elixir)

I created a new project with mix new hpack_test and added only one dependency: {:chatterbox, git: "git://github.com/joedevivo/chatterbox.git"}. Then mix do deps.get, compile raised an error:

Compiled src/lager_format.erl
==> hpack (compile)
src/hpack.erl:none: error in parse transform 'lager_transform': {function_clause,
                                             [{lager_transform,
                                               '-walk_ast/2-fun-0-',
                                               [{typed_record_field,
                                                 {record_field,19,
                                                  {atom,19,
                                                   connection_max_table_size},
                                                  {integer,19,4096}},
                                                 {type,19,non_neg_integer,
                                                  []}}],
                                               [{file,
                                                 "src/lager_transform.erl"},
                                                {line,60}]},
                                              {lists,map,2,
                                               [{file,"lists.erl"},
                                                {line,1239}]},
                                              {lists,map,2,
                                               [{file,"lists.erl"},
                                                {line,1239}]},
                                              {lager_transform,walk_ast,2,
                                               [{file,
                                                 "src/lager_transform.erl"},
                                                {line,60}]},
                                              {compile,
                                               '-foldl_transform/2-anonymous-2-',
                                               2,
                                               [{file,"compile.erl"},
                                                {line,958}]},
                                              {compile,foldl_transform,2,
                                               [{file,"compile.erl"},
                                                {line,960}]},
                                              {compile,
                                               '-internal_comp/4-anonymous-1-',
                                               2,
                                               [{file,"compile.erl"},
                                                {line,315}]},
                                              {compile,fold_comp,3,
                                               [{file,"compile.erl"},
                                                {line,341}]}]}
Compiling src/hpack.erl failed:
ERROR: compile failed while processing /root/Projects/hpack_test/deps/hpack: rebar_abort
** (Mix) Could not compile dependency :hpack, "/root/.mix/rebar compile skip_deps=true deps_dir="/root/Projects/hpack_test/_build/dev/lib"" command failed. You can recompile this dependency with "mix deps.compile hpack", update it with "mix deps.update hpack" or clean it with "mix deps.clean hpack"

Could you give me some suggestions?

Garbage Collection

Quick question for you. My apologies if I don't understand everything; I've only been using this library for a few days. I am using it as an HTTP/2 client that sends push notifications to Apple.

For example, here is some debugging output:

added stream #3 to {stream_set,client,{peer_subset,500,1,0,5,[{closed_stream,1,<0.2105.0>,[{<<":status">>,<<"200">>},{<<"apns-id">>,<<"b811c1d3-e19a-4721-9687-1f7560e005ee">>}],[],false},{active_stream,3,<0.2297.0>,<0.2105.0>,65535,65535,undefined,false}]},{peer_subset,unlimited,0,0,2,[]}} 

I've noticed that even though I've called h2_client:get_response, the stream stays in the stream_set. Do I need to go through the closed streams and set the garbage property to true? Or is there a better way to mark it after I've dealt with it?

Doesn't compile for erlang versions older than 17

Hello, I'm on Erlang R16B03 and was not able to compile this project. I found a fix and would like to submit it. What is the best way to go about this?

The issue comes from src/http2_connection.erl. Within the connection record, settings_sent is typed as a queue:queue(), but this syntax is only available on OTP 17 and later. I have a fix that should work for both 17 and older versions.
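The usual compatibility trick for this (a sketch; the `pre17` define name is my own convention, typically set via a platform_define in rebar.config) is to guard the namespaced type behind a macro so the record compiles on both sides of OTP 17:

```erlang
-module(queue_compat).
-export([new/0]).

%% Define `pre17` (e.g. {platform_define, "^R", pre17} in rebar.config)
%% when building on R16 and older, where queue:queue() is a syntax error.
-ifdef(pre17).
-type q() :: queue().
-else.
-type q() :: queue:queue().
-endif.

-spec new() -> q().
new() -> queue:new().
```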

Should we use lager in an infrastructural library?

In my opinion, if something goes wrong in a low-level library, it should throw an exception or return an error, like:

exit({something_error, "..."})

or

{error, "...."}

And at the top-level application, we can handle the error depending on our production environment, or log it in our own format.

If the low-level app logs error information by itself, the logs may be incompatible with the top-level app and cause trouble.

In Erlang projects, most developers choose lager to log their errors, and they also set up lager's configuration, so using lager as the file backend is fine there. But in Elixir projects, Logger is the first choice, and unfortunately most developers won't notice lager's configuration. As a result, the file backend's output will be untidy.

So, I suggest removing lager and returning (or throwing) the error information to the top-level app, and using the error_logger module when log output is needed. Just like cowboy:

https://github.com/ninenines/cowboy/search?utf8=%E2%9C%93&q=error_logger&type=
https://github.com/ninenines/cowboy/search?utf8=%E2%9C%93&q=lager&type=
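What the suggestion could look like in practice (a sketch with a made-up toy frame parser): return tagged errors to the caller, and reach for OTP's built-in error_logger, which needs no extra dependency, only when logging is genuinely wanted.

```erlang
-module(no_lager).
-export([handle_frame/1]).

%% Toy parser: a frame header is a 24-bit length followed by an 8-bit type.
parse(<<Len:24, Type:8, _Rest/binary>>) -> {ok, {Type, Len}};
parse(_) -> {error, malformed_frame}.

%% Return the error; log via error_logger and let the top-level
%% application decide how to react.
handle_frame(Bin) ->
    case parse(Bin) of
        {ok, _} = Ok ->
            Ok;
        {error, Reason} = Err ->
            error_logger:error_msg("bad frame: ~p~n", [Reason]),
            Err
    end.
```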

Better handling of stream level frames

Currently the header decoding is done here but it probably should be done in http2_stream:recv_frame.

recv_frame will have to be modified to take in an http2_connection state, and return an updated one as well.

This is good because we can put all the stream extraction stuff in there, and move connection details (like the socket) out of each stream, which means a smaller http2_connection.

This is an "after EUC" kind of thing.

Support OTP20+

In OTP20+ gen_fsm is deprecated and replaced by gen_statem :(

PING ACK flag check is incorrect

Sending a PING frame (without ACK flag set) in either direction between a chatterbox client and a chatterbox server results in an infinite PING loop. This is because the check for the PING ACK flag is incorrect, as shown below.

 route_frame({H, Ping},
             #connection{}=Conn)
     when H#frame_header.type == ?PING,
-         ?NOT_FLAG((#frame_header.flags), ?FLAG_ACK) ->
+         ?NOT_FLAG((H#frame_header.flags), ?FLAG_ACK) ->

This is fixed in PR #100.

Fix handling of RST_STREAM frames

For some background, I ended up hitting a corner case where a bidirectional gRPC stream (via grpcbox) was closing out on an error before any messages were exchanged. The observed behavior ended up being a timeout on a call to grpcbox_client:recv_data/2. Tracing this down it appears that chatterbox isn't properly following the RST_STREAM protocol which leaves grpcbox in limbo.

I tried to base these new state transitions off the diagram in RFC 7540 §5.1. Once I added this, I appear to get the underlying error message that's causing the stream to fail, though I'm not certain of the mechanics of how that's delivered inside grpcbox.

Also, given the grpcbox context, this is based off @tsloughter's 0.9.1 tag, so it won't apply cleanly here. The issues tab is disabled on that fork, so I'm posting this here as I haven't got a clue where better to post it.

diff --git a/src/h2_connection.erl b/src/h2_connection.erl
index 7ee864c..aaa2680 100644
--- a/src/h2_connection.erl
+++ b/src/h2_connection.erl
@@ -718,17 +718,17 @@ route_frame(
       stream_id=StreamId,
       type=?RST_STREAM
       },
-   _Payload},
+   Payload},
   #connection{} = Conn) ->
-    %% TODO: anything with this?
-    %% EC = h2_frame_rst_stream:error_code(Payload),
     Streams = Conn#connection.streams,
     Stream = h2_stream_set:get(StreamId, Streams),
     case h2_stream_set:type(Stream) of
         idle ->
             go_away(?PROTOCOL_ERROR, Conn);
-        _Stream ->
-            %% TODO: RST_STREAM support
+        active ->
+            recv_rs(Stream, Conn, h2_frame_rst_stream:error_code(Payload)),
+            {next_state, connected, Conn};
+        closed ->
             {next_state, connected, Conn}
     end;
 route_frame({H=#frame_header{}, _P},
@@ -1650,6 +1650,21 @@ recv_es(Stream, Conn) ->
             rst_stream(Stream, ?STREAM_CLOSED, Conn)
     end.
 
+-spec recv_rs(Stream :: h2_stream_set:stream(),
+              Conn :: connection(),
+              ErrorCode :: error_code()) ->
+                     ok.
+
+recv_rs(Stream, _Conn, ErrorCode) ->
+    case h2_stream_set:type(Stream) of
+        active ->
+            Pid = h2_stream_set:pid(Stream),
+            h2_stream:send_event(Pid, {recv_rs, ErrorCode});
+        _ ->
+            ok
+    end.
+
+
 -spec recv_pp(h2_stream_set:stream(),
               hpack:headers()) ->
                      ok.
diff --git a/src/h2_stream.erl b/src/h2_stream.erl
index 7a1bb08..d6aa95d 100644
--- a/src/h2_stream.erl
+++ b/src/h2_stream.erl
@@ -346,6 +346,18 @@ open(cast, recv_es,
              Stream}
     end;
 
+open(cast, {recv_rs, _ErrorCode},
+     #stream_state{
+        callback_mod=CB,
+        callback_state=CallbackState
+       }=Stream) ->
+    {ok, NewCBState} = callback(CB, on_end_stream, [], CallbackState),
+    {next_state,
+     closed,
+     Stream#stream_state{
+       callback_state=NewCBState
+      }, 0};
+
 open(cast, {recv_data,
       {#frame_header{
           flags=Flags,
@@ -544,6 +556,8 @@ half_closed_remote(cast,
             {next_state, closed, Stream, 0}
     end;
 
+half_closed_remote(cast, {recv_rs, _ErrorCode}, Stream) ->
+    {next_state, closed, Stream, 0};
 
 half_closed_remote(cast, _,
        #stream_state{}=Stream) ->
@@ -655,6 +669,10 @@ half_closed_local(cast, recv_es,
        response_body = Data,
        callback_state=NewCBState
       }, 0};
+
+half_closed_local(cast, {recv_rs, _ErrorCode}, Stream) ->
+    half_closed_local(cast, recv_es, Stream);
+
 half_closed_local(cast, {send_t, _Trailers},
                   #stream_state{}) ->
     keep_state_and_data;

8.1.2.2. Connection-Specific Header Fields: Sends a HEADERS frame that contains the TE header field that contain any value other than "trailers"

      8.1.2.2. Connection-Specific Header Fields
        × Sends a HEADERS frame that contains the TE header field that contain any value other than "trailers"
          - The endpoint MUST respond with a stream error of type PROTOCOL_ERROR.
            Expected: GOAWAY frame (ErrorCode: PROTOCOL_ERROR)
                      RST_STREAM frame (ErrorCode: PROTOCOL_ERROR)
                      Connection close
              Actual: DATA frame (Length: 16, Flags: 1)

Binary leaking in h2_connection processes

I use chatterbox as a dependency of the inaka/apns4erl project. After some hours of running, the application's processes kept growing in size (process_info(Pid, memory)). Doing some investigation, I found this:

process_info(pid(0,1238,0), binary).
{binary,[{139665827264464,60,1},
         {139665831210656,60,1},
         {139664600088128,60,1},
         {139665831140616,60,1},
         {139665831222376,60,1},
         {139664600244240,60,1},
         {139665827188416,60,1},
         {139664600522856,60,1},
         {139664605657712,60,1},
         {139664605954696,60,1},
         {139664605903872,60,1},
         {139664606052304,60,1},
         {139664600433752,60,1},
         {139664605649112,60,1},
         {139664531257216,60,1},
         {139665827110208,60,1},
         {139664605153864,60,1},
         {139665827411976,60,1},
         {139665827397160,60,1},
         {139665827115824,60,1},
         {139664605243344,60,1},
         {139664605133416,60,1},
         {139664605420536,60,1},
         {139664605405256,60,1},
         {139664637840368,60,...},
         {139664638044448,...},
         {...}|...]}
(myhost@my_ip)48> length(element(2, v(47))).
1851

Do you have a clue where this binary leak can occur?

Message: a HEADERS frame cannot appear in the middle of a stream

I am getting the above error; how can I fix it?

I am using grpcbox.

And my payload is the following size:

{ok,#{payload =>
          {route_groups,#{routes =>
                              [#{info =>
                                     #{account_id => 1755856,
                                       ani_loc_dc => "1201555",
                                       ani_loc_id => 7174,fr_prefix => "1",
                                       fr_rand => 25,fr_route_id => 2301749,
                                       lb_group => "us-east-1",
                                       location_dc => "1973499",
                                       location_id => 138453,noa => 'INT',
                                       partition_id => 38077,prefix => "+",
                                       rds =>
                                           "BET_XXXXX_cc1fb83777e9de5e267afff8fe26885e815e18aa",
                                       route_type => 'CFR',
                                       router => "10.252.132.33",
                                       router_id => "v2.5.cc9bfbc-mod",
                                       sip_rtg_type => 'PROXY',
                                       stirshaken => 'US_CC_orif',
                                       switch_id => 108,utc => 1710259275},
                                 route =>
                                     [#{account_id => 1729502,
                                        addr => ["10.71.17.4:5060"],
                                        dtg => undefined,index => 1,
                                        switch_id => 167,tns => "6651",
                                        trunk_assign_id => 78389},
                                      #{account_id => 1729502,
                                        addr =>
                                            ["10.66.100.9:5130",
                                             "10.71.100.9:5130"],
                                        dtg => "nphoaxsboliv_y",index => 2,
                                        switch_id => 155,tns => "1274",
                                        trunk_assign_id => 80266}]}]}},
      transaction_id => 0}

h2_connection function for sending trailers

I'm playing around with a grpc server, which requires trailers. I discovered that on the client side (using ) I get an exception saying there was no value for the unary call response nearly every time. But as I kept playing with the headers and what I was sending back, I saw it work. In the end I discovered this works consistently:

ResponseHeaders = [{<<":status">>, <<"200">>},
                   {<<"grpc-encoding">>, <<"gzip">>},
                   {<<"content-type">>, <<"application/grpc">>}],
BodyToSend = zlib:gzip(messages:encode_msg(#'BoolValue'{value=true})),
Length = byte_size(BodyToSend),
OutFrame = <<1, Length:32, BodyToSend/binary>>,
h2_connection:send_headers(ConnPid, StreamId, ResponseHeaders),
h2_connection:send_body(ConnPid, StreamId, OutFrame, [{send_end_stream, false}]),

%% sleepy time
timer:sleep(200),

h2_connection:send_headers(ConnPid, StreamId, [{<<"grpc-status">>, <<"0">>}], [{send_end_stream, true}]);

With that timer:sleep/1 call it works every time.

A likely clue, though it just confuses me more right now, is that without the timeout the trailer is actually sent before the body...

Without the timeout:

<- HEADERS(1)                                
    - END_STREAM                             
    + END_HEADERS                            
    :status: 200                             
    grpc-encoding: gzip                      
    content-type: application/grpc           

<- HEADERS(1)                                
    + END_STREAM                             
    + END_HEADERS                            
    grpc-status: 0                           

<- DATA(1)                                   
    - END_STREAM                             
    {27 bytes} 

The new release was created with 'v' prefix

Hi,

The last release (tag) has the v prefix and the previous ones do not.

I just noticed it while updating the dependency version by changing it from 0.4.1 to 0.4.2.

Cheers

P

Receive performance degrades rapidly as the number of concurrent streams grows

In our test, performance degrades rapidly as the number of concurrent streams grows.

We found the following code block in h2_stream_set.erl:

get_from_subset(Id, PeerSubset) ->
    case lists:keyfind(Id, 2, PeerSubset#peer_subset.active) of
        false ->
            #closed_stream{id=Id};
        Stream ->
            Stream
    end.

{StreamID, ClientPid} is stored in a plain list, which can somewhat explain the performance issue we hit.

Our use case is gRPC unary.

I have not checked the overall usage of this data structure yet. My suggestions are the general ones for this kind of problem:

  • Keep the list data structure, but keep it sorted and use binary search to speed up lookups
  • Replace it with a map or even an ETS table
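A sketch of the second suggestion (hypothetical shape; the real closed_stream record has more fields than shown): key the active streams by id in a map so the lookup no longer scans the list.

```erlang
-module(stream_lookup).
-export([get_from_subset/2]).

%% Same contract as the list version, but Active is #{StreamId => Stream},
%% so lookup is effectively O(log n) instead of an O(n) lists:keyfind scan.
get_from_subset(Id, Active) when is_map(Active) ->
    case maps:find(Id, Active) of
        {ok, Stream} -> Stream;
        error -> {closed_stream, Id}
    end.
```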

Server or library?

Hi guys!

First of all, thanks for the awesome work. I have a very basic question:

Is Chatterbox a server, or a library that is to be used by a web server? i.e. can I serve for example a REST API using Chatterbox alone, or do I need something else on top?

The project description states that Chatterbox is a server, but the README describes it as a library, so that got me confused.

Finally, if a different dependency is needed to actually process/answer requests, could you suggest one or a few alternatives?

Thanks in advance!

client needs to be able to stream responses

Right now all the responses are pretty much based on waiting for the whole response and an end_stream flag. #55 introduces the ability to set a callback module to send partial bodies, but on the client side we have no way of consuming data that way. Ideally we should be able to, even if for no other reason than to adequately test #55.

bi-directional streams ?

I have the idea to implement bi-directional streams over HTTP/2:

  1. The client sends a request with headers and a method.
  2. The server responds with headers and waits for the client.
  3. We then enter the following loop until the client or the server decides to stop:
    3.a) the client sends a chunk and waits for the server;
    3.b) the server replies with a chunk and then waits for the client.
  4. The server finishes with terminal trailers.

While it may be hackish to do with the current implementation, is it even possible? Any guidance would be appreciated.

any example to receive streams on the server?

Is it already possible to receive a stream on the server using the ranch protocol? Or maybe there is another way to do it?

In my project I may receive large bodies, and I would rather stream them than collect everything in RAM, so I can store them on the fly. Hopefully it is already possible :)

note: on the client side I was thinking of patching it so I can pass a function as the body instead of only a binary.

8.1.2.6. Malformed Requests and Responses: Sends a HEADERS frame that contains the "content-length" header field which does not equal the sum of the multiple DATA frame payload lengths

      8.1.2.6. Malformed Requests and Responses
        × Sends a HEADERS frame that contains the "content-length" header field which does not equal the sum of the multiple DATA frame payload lengths
          - The endpoint MUST respond with a stream error of type PROTOCOL_ERROR.
            Expected: GOAWAY frame (ErrorCode: PROTOCOL_ERROR)
                      RST_STREAM frame (ErrorCode: PROTOCOL_ERROR)
                      Connection close
              Actual: DATA frame (Length: 16, Flags: 1)

undefined parse transform 'lager_transform'

mix deps.compile chatterbox
===> Compiling chatterbox
===> Compiling src/h2_frame.erl failed
src/h2_frame.erl:none: undefined parse transform 'lager_transform'

** (Mix) Could not compile dependency :chatterbox, "/root/.mix/rebar3 bare compile --paths "/home/mongoosePush/MongoosePush/_build/dev/lib/*/ebin"" command failed. You can recompile this dependency with "mix deps.compile chatterbox", update it with "mix deps.update chatterbox" or clean it with "mix deps.clean chatterbox"

http2_client request crashing in http2_stream.half_closed_local

I'm using http2_client.start_link/3 to connect to a server and send requests. With both sync and async requests I get the following crash when sending the first request:

00:57:50.056 [error] ** State machine #PID<0.409.0> terminating
** Last event in was :recv_es
** When State == :half_closed_local
** Data == {:stream_state, 1, #PID<0.402.0>,
{:ssl,
{:sslsocket, {:gen_tcp, #Port<0.13518>, :tls_connection, :undefined},
#PID<0.406.0>}}, :idle, 65535, 65535, {[], []}, {[], []},
[{":method", "POST"}, {":scheme", "https"},
{":authority", "api.development.push.apple.com"},
{"accept", "application/json"}, {"accept-encoding", "gzip, deflate"},
{":path",
"/3/device/b9f91716e547ea82e8441a86b8a9f57bcd720cf73e02ad35062fbec3ee241ecb"},
{":apns-expiration", "0"}, {":apns-priority", "10"}], :undefined, false,
false,
[{":status", "200"}, {"apns-id", "F42F5426-89BC-750B-3E65-6EFAF71859BF"}],
:undefined, false, false, :undefined, :undefined, #PID<0.368.0>,
{:cb_static, []}, :chatterbox_static_stream}
** Reason for termination =
** {:function_clause,
[{:http2_stream, :half_closed_local,
[:recv_es,
{:stream_state, 1, #PID<0.402.0>,
{:ssl,
{:sslsocket, {:gen_tcp, #Port<0.13518>, :tls_connection, :undefined},
#PID<0.406.0>}}, :idle, 65535, 65535, {[], []}, {[], []},
[{":method", "POST"}, {":scheme", "https"},
{":authority", "api.development.push.apple.com"},
{"accept", "application/json"}, {"accept-encoding", "gzip, deflate"},
{":path",
"/3/device/b9f91716e547ea82e8441a86b8a9f57bcd720cf73e02ad35062fbec3ee241ecb"},
{":apns-expiration", "0"}, {":apns-priority", "10"}], :undefined, false,
false,
[{":status", "200"}, {"apns-id", "F42F5426-89BC-750B-3E65-6EFAF71859BF"}],
:undefined, false, false, :undefined, :undefined, #PID<0.368.0>,
{:cb_static, []}, :chatterbox_static_stream}],
[file: 'src/http2_stream.erl', line: 480]},
{:gen_fsm, :handle_msg, 7, [file: 'gen_fsm.erl', line: 518]},
{:proc_lib, :init_p_do_apply, 3, [file: 'proc_lib.erl', line: 240]}]}

00:57:50.056 [error] gen_fsm <0.409.0> in state half_closed_local terminated with reason: no function clause matching http2_stream:half_closed_local(r
ecv_es, {stream_state,1,<0.402.0>,{ssl,{sslsocket,{gen_tcp,#Port<0.13518>,tls_connection,undefined},<0.406.0>}},...}) line 480
00:57:50.056 [error] CRASH REPORT Process <0.409.0> with 3 neighbours exited with reason: no function clause matching http2_stream:half_closed_local(r
ecv_es, {stream_state,1,<0.402.0>,{ssl,{sslsocket,{gen_tcp,#Port<0.13518>,tls_connection,undefined},<0.406.0>}},...}) line 480 in gen_fsm:terminate/7
line 626
00:57:50.056 [error] Supervisor {<0.363.0>,poolboy_sup} had child undefined started with {'Elixir.Cartel.Pusher.Apns2',start_link,undefined} at <0.368
.0> exit with reason no function clause matching http2_stream:half_closed_local(recv_es, {stream_state,1,<0.402.0>,{ssl,{sslsocket,{gen_tcp,#Port<0.13
518>,tls_connection,undefined},<0.406.0>}},...}) line 480 in context child_terminated
** (exit) exited in: GenServer.call(#PID<0.368.0>, {:send, %Cartel.Message.Apns2{expiration: 0, id: nil, payload: %{aps: %{alert: "ciao"}}, priority:
10, token: "b9f91716e547ea82e8441a86b8a9f57bcd720cf73e02ad35062fbec3ee241ecb", topic: nil}}, 5000)
** (EXIT) an exception was raised:
** (FunctionClauseError) no function clause matching in :http2_stream.half_closed_local/2
(chatterbox) src/http2_stream.erl:480: :http2_stream.half_closed_local(:recv_es, {:stream_state, 1, #PID<0.402.0>, {:ssl, {:sslsocket, {:g
en_tcp, #Port<0.13518>, :tls_connection, :undefined}, #PID<0.406.0>}}, :idle, 65535, 65535, {[], []}, {[], []}, [{":method", "POST"}, {":scheme", "htt
ps"}, {":authority", "api.development.push.apple.com"}, {"accept", "application/json"}, {"accept-encoding", "gzip, deflate"}, {":path", "/3/device/b9f
91716e547ea82e8441a86b8a9f57bcd720cf73e02ad35062fbec3ee241ecb"}, {":apns-expiration", "0"}, {":apns-priority", "10"}], :undefined, false, false, [{":s
tatus", "200"}, {"apns-id", "F42F5426-89BC-750B-3E65-6EFAF71859BF"}], :undefined, false, false, :undefined, :undefined, #PID<0.368.0>, {:cb_static, []
}, :chatterbox_static_stream})
(stdlib) gen_fsm.erl:518: :gen_fsm.handle_msg/7
(stdlib) proc_lib.erl:240: :proc_lib.init_p_do_apply/3
(elixir) lib/gen_server.ex:564: GenServer.call/3
(poolboy) src/poolboy.erl:76: :poolboy.transaction/3

http2_client is leaking processes

Every time an http2_client:sync_request is made, the client creates a new stream and adds it to the list of streams within the http2_connection. The stream doesn't seem to be used again, closed, or removed from the list. What's the expected behavior?

Negative Flow Control Windows

#62 uncovered a weirdness in the flow-control math w.r.t. SETTINGS frames changing the initial window size. The fix is in, but it's only partial.

Right now it looks like *_send_what_we_can is based on non_neg_integers, and it needs to be reevaluated for all integers.
