
ice's Issues

Make srflx gathering configurable

@masterada @trivigy

I think we need to make srflx gathering use only the 'default' interface by default.

While not spec-compliant, I have gotten feedback from multiple people about speed regressions. This wouldn't be a problem if we had had Trickle ICE from day one, but lots of people have already set up their signaling.

I say we make it configurable, and maybe when we do /v3 we can roll back to the 'all interfaces' behavior.

Implement regular nomination

We need to stop using aggressive nomination.

If we have multiple pairs, this can cause each side to make a different choice (since the last pair with USE-CANDIDATE wins).

Right now we just send 'USE-CANDIDATE' on every pair. This means that each side may choose different candidates, depending entirely on when packets arrive.

Also, if the two sides choose different candidates, those candidates will eventually die. ICE only does pings via SendIndications and we have no pongs, so each side just keeps sending pings and eventually dies because it hasn't received any inbound traffic in 30 seconds.
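For illustration, here is the shape of the change from the controlling side; every helper name here (sendBindingRequest, useCandidate, bestValidPair) is hypothetical rather than the pion/ice API:

// Aggressive nomination (current): every check carries USE-CANDIDATE,
// so whichever pair's check lands last can win, and sides may disagree.
for _, pair := range checklist {
	sendBindingRequest(pair, useCandidate(true))
}

// Regular nomination (proposed): plain checks first, then exactly one
// nominating re-check of the pair this controlling agent picked.
for _, pair := range checklist {
	sendBindingRequest(pair, useCandidate(false))
}
if best := bestValidPair(checklist); best != nil {
	sendBindingRequest(best, useCandidate(true))
}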

panic: send on closed channel in github.com/pion/ice.(*Agent).updateConnectionState

panic: send on closed channel

goroutine 2270100 [running]:
github.com/pion/ice.(*Agent).updateConnectionState(...)
        /root/go/pkg/mod/github.com/pion/[email protected]/agent.go:642
github.com/pion/ice.(*controlledSelector).ContactCandidates(0xc001e74ba0)
        /root/go/pkg/mod/github.com/pion/[email protected]/selection.go:214 +0x21c
github.com/pion/ice.(*Agent).startConnectivityChecks.func1.1.1.1(0xc000f2e000)
        /root/go/pkg/mod/github.com/pion/[email protected]/agent.go:615 +0x3a
github.com/pion/ice.(*Agent).run(0xc000f2e000, 0xe4d648, 0x0, 0x0)
        /root/go/pkg/mod/github.com/pion/[email protected]/agent.go:194 +0xfd
github.com/pion/ice.(*Agent).startConnectivityChecks.func1.1.1()
        /root/go/pkg/mod/github.com/pion/[email protected]/agent.go:614 +0x3f
github.com/pion/ice.(*Agent).startConnectivityChecks.func1.1(0xc000f2e000)
        /root/go/pkg/mod/github.com/pion/[email protected]/agent.go:626 +0x110
created by github.com/pion/ice.(*Agent).startConnectivityChecks.func1
        /root/go/pkg/mod/github.com/pion/[email protected]/agent.go:612 +0x186

I'm running a very active instance of https://github.com/anacrolix/confluence, which makes frequent and implicit use of several webtorrent trackers. There are probably hundreds or thousands of webrtc PeerConnections being opened and closed, so unless I'm using the API incorrectly, this is probably a very elusive race condition somewhere in the pion code.

Install instructions?

Got this error when trying to build another project:

package github.com/pions/webrtc/pkg/ice: cannot find package "github.com/pions/webrtc/pkg/ice" in any of:
        /usr/local/go/src/github.com/pions/webrtc/pkg/ice (from $GOROOT)
        /home/chiller/go/src/github.com/pions/webrtc/pkg/ice (from $GOPATH)

So I assumed it could be installed like this:

$ go get github.com/pion/ice
# github.com/pion/dtls
src/github.com/pion/dtls/prf.go:72:10: undefined: curve25519.X25519
$ go version
go version go1.13 linux/amd64

Convert a.run() to a proper synchronization system

Summary

There is a real contention point in how the current synchronization is implemented. a.run() creates a single worker queue in order to do "pseudo locking" and to synchronize execution inside the ice library. Unfortunately this is a really challenging design and creates a lot of issues, e.g. bad performance benchmarks and poor locking capabilities.

It would be nice to change this to improve performance and resolve the synchronization issues.

Describe alternatives you've considered

I did not consider any alternatives, but the best solution would be something that does not require locking at all. Here is one really good place to draw ideas from:
https://drive.google.com/file/d/1nPdvhB0PutEJzdCq5ms6UI58dp50fcAN/view
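For contrast, here are the two synchronization shapes being discussed, as a generic sketch (these are not the actual pion/ice types, and the run-loop line is paraphrased):

// Today: every state mutation is marshalled onto a single task loop,
// roughly a.run(func(a *Agent) { a.state = s }), so unrelated callers
// all queue behind one channel.
//
// Conventional alternative: guard the fields directly with a mutex,
// so callers contend only on the data they actually touch.
import "sync"

type ConnectionState int

type Agent struct {
	mu    sync.Mutex
	state ConnectionState
}

func (a *Agent) setState(s ConnectionState) {
	a.mu.Lock()
	defer a.mu.Unlock()
	a.state = s
}

func (a *Agent) State() ConnectionState {
	a.mu.Lock()
	defer a.mu.Unlock()
	return a.state
}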

panic: send on closed channel in TestConnectivityVNet/Symmetric_NATs_on_both_ends

environment: CI: Test i386 1.13

It fails randomly in the CI environment.

=== RUN   TestConnectivityVNet/Symmetric_NATs_on_both_ends
turn ERROR: 2020/01/25 01:14:13 Packet unhandled in relay src 28.1.1.1:49157
panic: send on closed channel

goroutine 249 [running]:
github.com/pion/turn/v2/internal/client.(*Transaction).WriteResult(0x923d5e0, 0x9223290, 0x841e5a0, 0x92c5ac0, 0x0, 0x0, 0x0, 0x34)
	/go/pkg/mod/github.com/pion/turn/[email protected]/internal/client/transaction.go:83 +0x39
github.com/pion/turn/v2.(*Client).handleSTUNMessage(0x908e320, 0x927a000, 0x34, 0xffff, 0x841e5a0, 0x92c5ac0, 0x0, 0x0)
	/go/pkg/mod/github.com/pion/turn/[email protected]/client.go:471 +0x461
github.com/pion/turn/v2.(*Client).HandleInbound(0x908e320, 0x927a000, 0x34, 0xffff, 0x841e5a0, 0x92c5ac0, 0x92c5ac0, 0x0, 0x0)
	/go/pkg/mod/github.com/pion/turn/[email protected]/client.go:398 +0x202
github.com/pion/turn/v2.(*Client).Listen.func1(0x908e320)
	/go/pkg/mod/github.com/pion/turn/[email protected]/client.go:172 +0xba
created by github.com/pion/turn/v2.(*Client).Listen
	/go/pkg/mod/github.com/pion/turn/[email protected]/client.go:163 +0xd8
FAIL	github.com/pion/ice	36.062s
FAIL

Agent.Write should use best valid pair, not just selected.

In Section 4, the ICE RFC states:

   Selected Pair, Selected Candidate Pair:  The candidate pair used for
      sending and receiving data for a component of a data stream is
      referred to as the "selected pair".  Before selected pairs have
      been produced for a data stream, any valid pair associated with a
      component of a data stream can be used for sending and receiving
      data for the component.  Once there are nominated pairs for each
      component of a data stream, the nominated pairs become the
      selected pairs for the data stream.  The candidates associated
      with the selected pairs are referred to as "selected candidates".
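In code, the requested behavior is roughly the following (a sketch with hypothetical helper names, not the actual pion/ice internals):

// Resolve the pair Agent.Write should use: the selected (nominated)
// pair once one exists, otherwise fall back to the best valid pair,
// per the RFC text quoted above.
func (a *Agent) pairForWriting() (*candidatePair, error) {
	if p := a.getSelectedPair(); p != nil {
		return p, nil
	}
	if p := a.getBestValidCandidatePair(); p != nil {
		return p, nil
	}
	return nil, errNoCandidatePairs
}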

gatherCandidatesRelay can leak sockets

Your environment.

  • Version: 0.5.13

What did you do?

When gathering relay candidates with invalid credentials, a socket remains open after the peer is closed. This can be checked with lsof on a Linux system.

What did you expect?

That all open sockets would be closed when the peer is closed.

What happened?

A socket remained open.

Debugging Notes

I've narrowed this issue down to gatherCandidatesRelay in gather.go. First we create a network socket, using:

locConn, err := a.net.ListenPacket(network, "0.0.0.0:0")
if err != nil {
    return err
}

After that, there are a variety of places we can return from the function with an error. In my case, it was the following (because I had some invalid credentials 😄):

relayConn, err := client.Allocate()
if err != nil {
    return err
}

If we encounter an error in any of these cases, the socket created earlier is never closed (until the process is terminated). Ideally, this function would close the socket if an error occurs, and perhaps log an error (a sketch follows below).

This may also be an issue with the other candidate types, though I have not run into that yet.
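A minimal sketch of the suggested fix, using the common Go pattern of arming a deferred Close that is disarmed only once ownership of the socket is handed off (the placement and the ownershipTransferred name are illustrative):

locConn, err := a.net.ListenPacket(network, "0.0.0.0:0")
if err != nil {
	return err
}
ownershipTransferred := false
defer func() {
	if !ownershipTransferred {
		if cerr := locConn.Close(); cerr != nil {
			a.log.Warnf("failed to close conn: %v", cerr)
		}
	}
}()

// ... client.Allocate() and every other early-return error path above
// now release the socket automatically ...

ownershipTransferred = true // the relay candidate now owns the socket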

Implement restartIce functionality?

Summary

Are you planning to support the ICERestart feature? It's available as an option in the OfferOptions struct, but options are not allowed as a parameter to PeerConnection.CreateOffer.

Motivation

Currently the only way to reconnect to a WebRTC session is to tear down the peer connection and recreate it on both sides. There's quite a bit of communication overhead required to do that, which is normally taken care of by the ICERestart flag.

Google's WebRTC implementation implements the ICE restart functionality.

I don't know if this is on your radar right now or not.

Describe alternatives you've considered

Repeated from above: "Currently the only way to reconnect to a WebRTC session is to tear down the peer connection and recreate it on both sides. There's quite a bit of communication overhead required to do that, which is normally taken care of by the ICERestart flag."

Additional Context

N/A

ICE username mismatch

It looks like there may be an issue with the ICE username. I get an ICE warning indicating that the username is concatenated with another string:

ice WARNING: 2020/01/06 20:58:20 discard message from (host 192.168.0.238:51718), unknown TransactionID 0xad67559870175f62fa4e89b5
ice WARNING: 2020/01/06 20:58:20 discard message from (192.168.0.238:51795), username mismatch expected(734f496d566d7652436e4872705943433a) actual(734f496d566d7652436e4872705943433a6564625675634b515a6e757a69564e4b)
ice WARNING: 2020/01/06 20:58:20 discard message from (192.168.56.1:51796), username mismatch expected(734f496d566d7652436e4872705943433a) actual(734f496d566d7652436e4872705943433a6564625675634b515a6e757a69564e4b)
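Decoding the hex in the warning shows the relationship between the two values: the "actual" username is the "expected" prefix followed by a second ufrag. A quick check:

package main

import (
	"encoding/hex"
	"fmt"
)

func main() {
	expected, _ := hex.DecodeString("734f496d566d7652436e4872705943433a")
	actual, _ := hex.DecodeString("734f496d566d7652436e4872705943433a6564625675634b515a6e757a69564e4b")
	fmt.Printf("expected %q\n", expected) // "sOImVmvRCnHrpYCC:"
	fmt.Printf("actual   %q\n", actual)   // "sOImVmvRCnHrpYCC:edbVucKQZnuziVNK"
}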

This may be a clue: the particular test starts 50 parallel peer connections. I'm not sure whether this use-case is covered by the Pion test-suite.

Your environment.

  • Version: 726a16faa60dccdc8c00df8e1ae6426bf5b3bd05
  • Browser: N/A
  • OS: Windows

What did you do?

I updated libp2p/go-libp2p-webrtc-direct to the latest pion/webrtc (726a16faa60dccdc8c00df8e1ae6426bf5b3bd05) and ran the test-suite with the race detector activated: go test -v -race -count=1 .
Note that the test doesn't fail; it just takes longer due to this issue.

Share one UDP socket with all UDP candidates

Your environment.

  • Version: v0.4.3
  • Browser: n/a

What did you do?

While working on #46, I noticed that a separate UDP socket was created for each of the 3 types of initial candidates: host, srflx and relay.

What did you expect?

All candidates (in the same address family) should share the base UDP socket (the one used for the host candidate).

Current socket allocation:

 (udp soc 1) --------------- host candidate (192.168.1.2:5000)
 (udp soc 2) --------------- srflx candidate (27.1.1.1:49152 related 192.168.1.2:5001)
 (udp soc 3) --------------- relay candidate (1.2.3.4:5678 related 192.168.1.2:5002)

25 candidate pairs (or pings per period)

Sockets 1 and 3 would create NAT bindings, and these could be detected as prflx candidates.

Ideal socket allocation:

 (udp soc 1) -------+------- host candidate (192.168.1.2:5000)
                    +------- srflx candidate (27.1.1.1:49152 related 192.168.1.2:5000)
                    +------- relay candidate (1.2.3.4:5678 related 192.168.1.2:5000)

9 candidate pairs (or pings per period)

What happened?

  • Wasteful socket resource usage
  • Wasteful pings (connectivity checks)
  • Debugging is hard.
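The core of the proposal in miniature: a single socket can serve both as the host candidate's base and as the source of the srflx discovery, so both candidates share the same related address. A rough, runnable sketch using pion/stun (the Google STUN server is just an example):

package main

import (
	"log"
	"net"
	"time"

	"github.com/pion/stun"
)

func main() {
	// One UDP socket: the host candidate's base...
	conn, err := net.ListenPacket("udp4", "0.0.0.0:0")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// ...and the source of the srflx discovery, so the srflx
	// candidate's related address is this very same local port.
	srv, err := net.ResolveUDPAddr("udp4", "stun.l.google.com:19302")
	if err != nil {
		log.Fatal(err)
	}
	req := stun.MustBuild(stun.TransactionID, stun.BindingRequest)
	if _, err = conn.WriteTo(req.Raw, srv); err != nil {
		log.Fatal(err)
	}

	buf := make([]byte, 1500)
	_ = conn.SetReadDeadline(time.Now().Add(5 * time.Second))
	n, _, err := conn.ReadFrom(buf)
	if err != nil {
		log.Fatal(err)
	}
	msg := &stun.Message{Raw: buf[:n]}
	if err = msg.Decode(); err != nil {
		log.Fatal(err)
	}
	var xorAddr stun.XORMappedAddress
	if err = xorAddr.GetFrom(msg); err != nil {
		log.Fatal(err)
	}
	log.Printf("host base %s -> srflx %s:%d", conn.LocalAddr(), xorAddr.IP, xorAddr.Port)
}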

Make IPv6 error message more descriptive

What did you do?

Started gathering an IPv6 ICE candidate on a network that doesn't support IPv6.

What did you expect?

A friendly error message

What happened?

This scary error message was logged:

ice WARNING: 2019/04/14 01:35:50 could not allocate udp6 stun:stun.stunprotocol.org:3478: failed to create STUN client

We should let people know why this message happened and whether it can be ignored.

(reported by @adwpc)

Race when creating Srflx candidates can cause invalid state

When we create a srflx candidate, we:

  • Create a dialer
  • Close the dialer
  • Listen on the same port as the dialer

Between closing the dialer and listening again, there is a short window in which another application could take that port. This has already been seen by one user; we might not be able to use this pattern, unfortunately.
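One possible way out, as a minimal sketch: bind once and reuse the same PacketConn for both the STUN exchange and the candidate itself, so the port is never released in between (illustrative snippet, not the actual pion/ice gathering code):

conn, err := net.ListenPacket("udp4", "0.0.0.0:0")
if err != nil {
	return err
}
// Send the STUN binding request with conn.WriteTo(...) and read the
// response with conn.ReadFrom(...). Afterwards, keep using this same
// conn as the srflx candidate's base; since the socket is never closed
// and re-opened, no other application can grab the port in between.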

Valid peers can create Prflx candidates for addresses they don't control

  1. Attacker sends a BindRequest to Pions and gets saved as a validPair.
  2. Attacker never replies to Pion's BindRequest messages (so it will stay a validPair, not a selectedPair).
  3. Attacker/Pions performs DTLS handshake.
  4. Attacker sends a BindRequest with a higher priority and a spoofed return address (the target), which is also saved as a validPair.
  5. Pions now proceeds to send raw SRTP/data to the target address.
  6. Attacker can send RTCP/keep-alive information to keep the connection open.

This allows an attacker to DDoS a target without sending the data itself.

The fix involves requiring a SuccessResponse on a CandidatePair before writing to it. This may go against what the ICE RFC requires, and it might actually not be possible because of NAT traversal constraints, which is potentially a huge issue.
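Sketched in code, the proposed guard might look like this (all field and method names hypothetical):

// Only pairs whose own binding request has been answered with a
// SuccessResponse are eligible for outbound traffic; a spoofed
// inbound BindRequest alone is no longer enough.
func (p *candidatePair) writable() bool {
	return p.gotSuccessResponse
}

func (a *Agent) write(p *candidatePair, buf []byte) error {
	if !p.writable() {
		return errPairNotVerified
	}
	return p.local.writeTo(buf, p.remote)
}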

Slowdown because of server reflexive address allocation

What happened?

Between v0.2.7 and v0.2.8 there was a change in the way server reflexive addresses are allocated. Rather than collecting only one (default) address, the gathering process now goes through all of the local interfaces, which adds significant load time.

Here is the part that causes this:
https://github.com/pion/ice/blob/v0.2.8/gather.go#L131

Proposal

Create an asynchronous process that dispatches goroutines to collect the server reflexive addresses, but wait only for the default one to come back before continuing with execution (a rough sketch follows the requirements below).

Requirements

  • OnIceCandidate must be called when a new candidate is discovered
  • there must be a method for adding new remote candidates (not sure of the method name)
  • there should be an option for disabling trickle altogether and waiting for all candidates
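Here is the generic shape of the proposal, runnable as-is; all names are illustrative, not the pion/ice API. Gathering starts on every interface, the caller unblocks as soon as the default interface reports back, and the rest trickle in via a callback:

package main

import (
	"fmt"
	"time"
)

func gatherSrflx(ifaces []string, defaultIface string,
	probe func(iface string) string, onCandidate func(string)) string {

	defaultCh := make(chan string, 1)
	for _, iface := range ifaces {
		go func(iface string) {
			c := probe(iface)
			if iface == defaultIface {
				defaultCh <- c
				return
			}
			onCandidate(c) // trickled to the application as it arrives
		}(iface)
	}
	return <-defaultCh // only the default interface blocks startup
}

func main() {
	probe := func(iface string) string { // stand-in for a STUN round-trip
		time.Sleep(50 * time.Millisecond)
		return "srflx-on-" + iface
	}
	first := gatherSrflx([]string{"eth0", "wlan0", "tun0"}, "eth0", probe,
		func(c string) { fmt.Println("trickled:", c) })
	fmt.Println("continue with:", first)
	time.Sleep(100 * time.Millisecond) // let trickled candidates print
}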

Selected pair is not accurate

Tested with v0.5.2

One of the tests below revealed that the selected pair set by one of the two endpoints does not seem to be correct:

Run:

PION_LOG_TRACE=all go test -v -run TestConnectivityVNet/Symmetric_NATs

One end reports correctly:

ice TRACE: 09:52:48.744710 agent.go:474: Set selected candidate pair: prio 72057593450725376 (local, prio 16777215) relay 1.2.3.4:5495 related 0.0.0.0:5237 <-> prflx 28.1.1.1:49157 related :0 (remote, prio 1862270975)

But the other end says:

ice TRACE: 09:52:48.745427 agent.go:474: Set selected candidate pair: prio 72057593987596287 (local, prio 2130706431) host 10.2.0.1:5300 <-> relay 1.2.3.4:5495 related 0.0.0.0:5237 (remote, prio 16777215)

The local host candidate is not the one that succeeded; it should be the prflx candidate.

This is probably because the ice agent does not check the mapped address of the received STUN binding response to identify the corresponding local candidate, and so incorrectly marks the 'host' candidate as the one that saw connectivity.

This does not affect connectivity; however, it can be confusing during debugging or diagnosis (stats, etc.).

Use a random port when gathering the local UDP port

Summary

Use a random port when gathering the local UDP port.

Motivation

listenUDP() loops from minPort to maxPort, so it can iterate many times when many connections already exist.
Starting the loop from a random port would be more effective (see the sketch at the end of this issue).


Additional context

https://github.com/libnice/libnice/blob/master/agent/agent.c
libnice uses nice_rng_generate_int() to calculate the start port.
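A runnable sketch of the idea (a hypothetical function, not the pion/ice listenUDP signature): the whole range is still scanned exactly once, so behavior is unchanged, but the scan begins at a random offset so concurrent agents don't all probe the same low ports first:

package main

import (
	"fmt"
	"math/rand"
	"net"
)

func listenRandomStart(minPort, maxPort int) (net.PacketConn, error) {
	n := maxPort - minPort + 1
	start := rand.Intn(n)
	var lastErr error
	for i := 0; i < n; i++ {
		port := minPort + (start+i)%n // wrap around the range
		conn, err := net.ListenPacket("udp4", fmt.Sprintf("0.0.0.0:%d", port))
		if err == nil {
			return conn, nil
		}
		lastErr = err
	}
	return nil, lastErr
}

func main() {
	conn, err := listenRandomStart(10000, 10100)
	if err != nil {
		panic(err)
	}
	fmt.Println("listening on", conn.LocalAddr())
	conn.Close()
}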

Using coTurn TCP allocations with DataChannel webRTC client

I have configured a coturn server which I need to use with DataChannel for a chat app.

I am trying to reach the point where I can get the coturn server to provide me with a TCP allocation.

I am using the standard call of RTCPeerConnection in JS client, providing the turn URI, username and credentials.

The transport I am providing in the URI params is TCP (?transport=tcp).

With all this, I always receive a UDP allocation, and the server logs ChannelBind requests (UDP based).

Question:

How can I achieve TCP allocations through the webRTC client, to guarantee proper data delivery for the chat app when my relay server is used?

TestTimeout fails on mac

@hugoArregui told me this has been failing on Mac, and it is failing on my Mac. Now I understand what is going on, and I believe it is a bug in the test. The fix is incoming.

Your environment.

  • Version: 14ec8
  • Browser: n/a

What did you do?

Run the unit test on Mac with:

go test -v -run TestTimeout

What did you expect?

The test to pass.

What happened?

=== RUN   TestTimeout
--- FAIL: TestTimeout (30.00s)
    transport_test.go:43: Connection timed out early. (after 29900 ms)

Avoid webrtc/pkg/rtcerr dependency

Summary

Avoid webrtc/pkg/rtcerr dependency

Motivation

It doesn't make sense for pions/ice to depend on pions/webrtc.

Describe alternatives you've considered

We can either:

  • Move rtcerr to its own repo.
  • Avoid using rtcerr in the ice package. This seems more correct, but will probably require translating the errors in pions/webrtc.

candidate_base.go segmentation violation

Your environment.

  • Version: 0.5.7
  • OS: Ubuntu

What did you do?

I run WebRTC NewPeerConnection with Trickle ICE. It connects successfully most of the time, but on rare occasions it panics. Is it possible to add recover() in func (a *Agent) taskLoop()?

What happened?

panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x48 pc=0x815301]

goroutine 2350 [running]:
github.com/pion/ice.(*candidateBase).writeTo(0xc00008e600, 0xc000082c00, 0x64, 0x80, 0xa90a20, 0xc00008e540, 0x0, 0x0, 0x0)
        /home/cch/go/src/github.com/pion/ice/candidate_base.go:162 +0x51
github.com/pion/ice.(*Agent).sendSTUN(0xc000099440, 0xc00015c870, 0xa90a20, 0xc00008e600, 0xa90a20, 0xc00008e540)
        /home/cch/go/src/github.com/pion/ice/candidatepair.go:90 +0x75
github.com/pion/ice.(*Agent).sendBindingRequest(0xc000099440, 0xc00015c870, 0xa90a20, 0xc00008e600, 0xa90a20, 0xc00008e540)
        /home/cch/go/src/github.com/pion/ice/agent.go:855 +0x2eb
github.com/pion/ice.(*controlledSelector).PingCandidate(0xc000136300, 0xa90a20, 0xc00008e600, 0xa90a20, 0xc00008e540)
        /home/cch/go/src/github.com/pion/ice/selection.go:225 +0x3e1
github.com/pion/ice.(*Agent).pingAllCandidates(0xc000099440)
        /home/cch/go/src/github.com/pion/ice/agent.go:512 +0x176
github.com/pion/ice.(*controlledSelector).ContactCandidates(0xc000136300)
        /home/cch/go/src/github.com/pion/ice/selection.go:207 +0xc4
github.com/pion/ice.(*Agent).taskLoop(0xc000099440)
        /home/cch/go/src/github.com/pion/ice/agent.go:587 +0x197
created by github.com/pion/ice.NewAgent
        /home/cch/go/src/github.com/pion/ice/agent.go:395 +0x8aa
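For reference, the recover() guard the reporter asks about has this generic shape (a standalone sketch, not pion/ice code; whether the agent should swallow such panics rather than fix the underlying nil dereference is a separate question):

package main

import "log"

// protect runs fn and converts a panic into a logged error instead of
// crashing the whole process.
func protect(name string, fn func()) {
	defer func() {
		if r := recover(); r != nil {
			log.Printf("%s: recovered from panic: %v", name, r)
		}
	}()
	fn()
}

func main() {
	protect("taskLoop", func() {
		var p *int
		_ = *p // simulated nil-pointer dereference like the one reported
	})
	log.Println("process keeps running")
}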

ICE connection state stuck at 'checking' forever

Summary

When there's no connectivity between two endpoints, both ends keep sending pings forever (the connection never fails).

Motivation

This forces developers to implement an application-level timeout.

Describe alternatives you've considered

(see Motivation)

Additional context

RFC 5389 section 7.2.1. Sending over UDP

   Retransmissions continue until a response is received, or until a
   total of Rc requests have been sent.  Rc SHOULD be configurable and
   SHOULD have a default of 7.  If, after the last request, a duration
   equal to Rm times the RTO has passed without a response (providing
   ample time to get a response if only this final request actually
   succeeds), the client SHOULD consider the transaction to have failed.
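With the RFC's defaults (RTO = 500 ms, Rc = 7, Rm = 16), a STUN transaction is declared failed after 39.5 seconds; the quick calculation below reproduces the schedule. Wiring such a failure timer into the agent is what this issue asks for.

package main

import (
	"fmt"
	"time"
)

func main() {
	rto := 500 * time.Millisecond // RFC 5389 default initial RTO
	rc, rm := 7, 16               // Rc and Rm defaults from the RFC

	var t, lastSend time.Duration
	for i := 0; i < rc; i++ {
		lastSend = t
		fmt.Printf("request %d sent at %v\n", i+1, t)
		t += rto << uint(i) // retransmission interval doubles each time
	}
	// After the final request, wait Rm*RTO before declaring failure:
	fmt.Printf("transaction failed at %v\n", lastSend+time.Duration(rm)*rto) // 39.5s
}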

stun.Username and stun.MessageIntegrity are not checked

Pions does not actually check whether the remote username/password is correct. A malicious entity could use this to hijack a session by sending a STUN packet with a maxed-out priority. The contents would still be encrypted using DTLS/SRTP, but it would kill the session and open those protocols up to man-in-the-middle attacks.

TestRelayOnlyConnection timeout

Running on Ubuntu bionic causes:

=== RUN   TestRelayOnlyConnection
ice ERROR: 2020/01/13 10:49:18 Failed to gather relay candidates: all retransmissions for Hd3JRle0+cm0RXWD failed
ice ERROR: 2020/01/13 10:49:25 Failed to gather relay candidates: all retransmissions for 2zWsf7VjDoXbPAyd failed
goroutine profile: total 11
...
panic: timeout

The cause might be the same as pion/transport#51.

Warning "failed to write packet"

Your environment.

  • Version: v0.2.1
  • Browser: n/a

What did you do?

During a stress test using a data channel (one end sends 100 MB of data to the other), "failed to write packet" warnings started to print out, and eventually the receiver became unresponsive.

What did you expect?

Looking at pion/transport/packetio, there seems to be a buffer which returns an error on Write when the buffer is full. When that happens, I'd hope the packet could be thrown out while the receiver remains operational (the upper layer's retransmission should kick in when necessary).

I don't fully understand how data propagation works in the ice package, but my guess is that when the buffer becomes full, either the buffer is not notifying its upper layer of the presence of data, or the upper layer has not read all the data in the buffer on the last notification (whichever the design intention is).

nil pointer dereference on error handling

Your environment.

  • Version: v0.7.12
[signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0x88be1f]

goroutine 6116 [running]:
github.com/pion/ice.closeConnAndLog(0x0, 0x0, 0xb3b160, 0xc006a56940, 0xc00229fe80, 0x39)
	/go/pkg/mod/github.com/pion/[email protected]/gather.go:26 +0x5f
github.com/pion/ice.(*Agent).gatherCandidatesSrflx.func1(0xc006943550, 0xc005be9400, 0x1, 0xa48beb, 0x12, 0x4b66, 0x0, 0x0, 0x0, 0x0, ...)
	/go/pkg/mod/github.com/pion/[email protected]/gather.go:260 +0x4fd
created by github.com/pion/ice.(*Agent).gatherCandidatesSrflx
	/go/pkg/mod/github.com/pion/[email protected]/gather.go:249 +0x189

What did you do?

I think this is due to listenUDPInPortRange returning an error, in which case conn is nil.

What did you expect?

What happened?

segfault in agent.ok()

Your environment.

  • Version: v0.5.1
  • OS: macOS

What did you do?

I am running a load test that opens a large number of peer connections concurrently, exchanges SDPs between them, and then closes them.

What happened?

After around 40 tests, pion logged

ice ERROR: 2019/07/16 13:27:08 error processing checkCandidatesTimeout handler the agent is closed
ice ERROR: 2019/07/16 13:27:08 error processing checkCandidatesTimeout handler the agent is closed

A few seconds later it segfaulted:

ice ERROR: 2019/07/16 13:27:13 error processing checkCandidatesTimeout handler the agent is closed
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x1d8 pc=0x4221962]

goroutine 264 [running]:
github.com/pion/ice.(*Agent).ok(0x0, 0xc0004de350, 0x4446880)
        /Users/max/go/pkg/mod/github.com/pion/[email protected]/agent.go:153 +0x22
github.com/pion/ice.(*Agent).run(0x0, 0xc0004de350, 0x1, 0xc0004de340)
        /Users/max/go/pkg/mod/github.com/pion/[email protected]/agent.go:560 +0x40
github.com/pion/ice.(*Agent).OnConnectionStateChange(0x0, 0xc0004de340, 0x0, 0x0)
        /Users/max/go/pkg/mod/github.com/pion/[email protected]/agent.go:396 +0x65
github.com/pion/webrtc/v2.(*ICETransport).Start(0xc0001d7dc0, 0xc000318000, 0xc0001b22ca, 0x10, 0xc0001bc398, 0x20, 0x0, 0xc000065df0, 0x0, 0x0)
        /Users/max/go/pkg/mod/github.com/pion/webrtc/[email protected]/icetransport.go:83 +0x120
github.com/pion/webrtc/v2.(*PeerConnection).SetRemoteDescription.func3(0xc0003c8001, 0xc00013b800, 0xc0001b22ca, 0x10, 0xc0001bc398, 0x20, 0xc00067210c, 0x7, 0xc000672114, 0x5f)
        /Users/max/go/pkg/mod/github.com/pion/webrtc/[email protected]/peerconnection.go:961 +0x104
created by github.com/pion/webrtc/v2.(*PeerConnection).SetRemoteDescription
        /Users/max/go/pkg/mod/github.com/pion/webrtc/[email protected]/peerconnection.go:952 +0xe90

TestMulticastDNSOnlyConnection frequently times out in Travis

=== RUN   TestMulticastDNSOnlyConnection
ice ERROR: 2019/06/27 12:08:45 error processing checkCandidatesTimeout handler the agent is closed
goroutine profile: total 43
13 @ 0x4584ff 0x453a4a 0x453036 0x4b3f95 0x4b6745 0x4b6721 0x5f8df0 0x643091 0x64acc3 0x64aca4 0x6520f6 0x486771
#	0x453035	internal/poll.runtime_pollWait+0x55			/home/travis/.gimme/versions/go1.12.6.linux.amd64/src/runtime/netpoll.go:182
#	0x4b3f94	internal/poll.(*pollDesc).wait+0xe4			/home/travis/.gimme/versions/go1.12.6.linux.amd64/src/internal/poll/fd_poll_runtime.go:87
#	0x4b6744	internal/poll.(*pollDesc).waitRead+0x144		/home/travis/.gimme/versions/go1.12.6.linux.amd64/src/internal/poll/fd_poll_runtime.go:92
#	0x4b6720	internal/poll.(*FD).RawRead+0x120			/home/travis/.gimme/versions/go1.12.6.linux.amd64/src/internal/poll/fd_unix.go:534
#	0x5f8def	net.(*rawConn).Read+0x6f				/home/travis/.gimme/versions/go1.12.6.linux.amd64/src/net/rawconn.go:43
#	0x643090	golang.org/x/net/internal/socket.(*Conn).recvMsg+0x3d0	/home/travis/gopath/pkg/mod/golang.org/x/[email protected]/internal/socket/rawconn_msg.go:31
#	0x64acc2	golang.org/x/net/internal/socket.(*Conn).RecvMsg+0x212	/home/travis/gopath/pkg/mod/golang.org/x/[email protected]/internal/socket/socket.go:255
#	0x64aca3	golang.org/x/net/ipv4.(*payloadHandler).ReadFrom+0x1f3	/home/travis/gopath/pkg/mod/golang.org/x/[email protected]/ipv4/payload_cmsg.go:31
#	0x6520f5	github.com/pion/mdns.(*Conn).start+0x155		/home/travis/gopath/pkg/mod/github.com/pion/[email protected]/conn.go:249
8 @ 0x4584ff 0x453a4a 0x453036 0x4b3f95 0x4b5314 0x4b52ea 0x5ddc3a 0x6026ee 0x600546 0x86271a 0x486771
#	0x453035	internal/poll.runtime_pollWait+0x55			/home/travis/.gimme/versions/go1.12.6.linux.amd64/src/runtime/netpoll.go:182
#	0x4b3f94	internal/poll.(*pollDesc).wait+0xe4			/home/travis/.gimme/versions/go1.12.6.linux.amd64/src/internal/poll/fd_poll_runtime.go:87
#	0x4b5313	internal/poll.(*pollDesc).waitRead+0x213		/home/travis/.gimme/versions/go1.12.6.linux.amd64/src/internal/poll/fd_poll_runtime.go:92
#	0x4b52e9	internal/poll.(*FD).ReadFrom+0x1e9			/home/travis/.gimme/versions/go1.12.6.linux.amd64/src/internal/poll/fd_unix.go:219
#	0x5ddc39	net.(*netFD).readFrom+0x79				/home/travis/.gimme/versions/go1.12.6.linux.amd64/src/net/fd_unix.go:208
#	0x6026ed	net.(*UDPConn).readFrom+0x8d				/home/travis/.gimme/versions/go1.12.6.linux.amd64/src/net/udpsock_posix.go:47
#	0x600545	net.(*UDPConn).ReadFrom+0x95				/home/travis/.gimme/versions/go1.12.6.linux.amd64/src/net/udpsock.go:121
#	0x862719	github.com/pion/ice.(*candidateBase).recvLoop+0x209	/home/travis/gopath/src/github.com/pion/ice/candidate_base.go:93
8 @ 0x4584ff 0x468e4b 0x85acc7 0x486771
#	0x85acc6	github.com/pion/ice.(*Agent).taskLoop+0x246	/home/travis/gopath/src/github.com/pion/ice/agent.go:557
6 @ 0x4584ff 0x468e4b 0x85af02 0x486771
#	0x85af01	github.com/pion/ice.(*Agent).taskLoop+0x481	/home/travis/gopath/src/github.com/pion/ice/agent.go:570
2 @ 0x4584ff 0x468e4b 0x6507a1 0x85b8f0 0x486771
#	0x6507a0	github.com/pion/mdns.(*Conn).Query+0x3d0				/home/travis/gopath/pkg/mod/github.com/pion/[email protected]/conn.go:131
#	0x85b8ef	github.com/pion/ice.(*Agent).resolveAndAddMulticastCandidate+0x13f	/home/travis/gopath/src/github.com/pion/ice/agent.go:639
1 @ 0x4584ff 0x42e969 0x42e93f 0x42e6db 0x544833 0x54a799 0x544124 0x546784 0x544fdc 0x8a1d55 0x4580ec 0x486771
#	0x544832	testing.(*T).Run+0x692		/home/travis/.gimme/versions/go1.12.6.linux.amd64/src/testing/testing.go:917
#	0x54a798	testing.runTests.func1+0xa8	/home/travis/.gimme/versions/go1.12.6.linux.amd64/src/testing/testing.go:1157
#	0x544123	testing.tRunner+0x163		/home/travis/.gimme/versions/go1.12.6.linux.amd64/src/testing/testing.go:865
#	0x546783	testing.runTests+0x523		/home/travis/.gimme/versions/go1.12.6.linux.amd64/src/testing/testing.go:1155
#	0x544fdb	testing.(*M).Run+0x2eb		/home/travis/.gimme/versions/go1.12.6.linux.amd64/src/testing/testing.go:1072
#	0x8a1d54	main.main+0x344			_testmain.go:206
#	0x4580eb	runtime.main+0x20b		/home/travis/.gimme/versions/go1.12.6.linux.amd64/src/runtime/proc.go:200
1 @ 0x4584ff 0x468e4b 0x85a94a 0x85d314 0x88f7a5 0x85af44 0x486771
#	0x85a949	github.com/pion/ice.(*Agent).run+0x189				/home/travis/gopath/src/github.com/pion/ice/agent.go:546
#	0x85d313	github.com/pion/ice.(*Agent).Close+0x103			/home/travis/gopath/src/github.com/pion/ice/agent.go:738
#	0x88f7a4	github.com/pion/ice.TestHandlePeerReflexive.func1.1+0xab4	/home/travis/gopath/src/github.com/pion/ice/agent_test.go:279
#	0x85af43	github.com/pion/ice.(*Agent).taskLoop+0x4c3			/home/travis/gopath/src/github.com/pion/ice/agent.go:573
1 @ 0x4584ff 0x468e4b 0x85a94a 0x85d314 0x88ffa6 0x85af44 0x486771
#	0x85a949	github.com/pion/ice.(*Agent).run+0x189				/home/travis/gopath/src/github.com/pion/ice/agent.go:546
#	0x85d313	github.com/pion/ice.(*Agent).Close+0x103			/home/travis/gopath/src/github.com/pion/ice/agent.go:738
#	0x88ffa5	github.com/pion/ice.TestHandlePeerReflexive.func2.1+0x335	/home/travis/gopath/src/github.com/pion/ice/agent_test.go:310
#	0x85af43	github.com/pion/ice.(*Agent).taskLoop+0x4c3			/home/travis/gopath/src/github.com/pion/ice/agent.go:573
1 @ 0x4584ff 0x468e4b 0x873f12 0x873a3b 0x887638 0x884da6 0x544124 0x486771
#	0x873f11	github.com/pion/ice.(*Agent).connect+0x361			/home/travis/gopath/src/github.com/pion/ice/transport.go:44
#	0x873a3a	github.com/pion/ice.(*Agent).Dial+0xaa				/home/travis/gopath/src/github.com/pion/ice/transport.go:15
#	0x887637	github.com/pion/ice.connect+0x4b7				/home/travis/gopath/src/github.com/pion/ice/transport_test.go:187
#	0x884da5	github.com/pion/ice.TestMulticastDNSOnlyConnection+0x435	/home/travis/gopath/src/github.com/pion/ice/mdns_test.go:44
#	0x544123	testing.tRunner+0x163						/home/travis/.gimme/versions/go1.12.6.linux.amd64/src/testing/testing.go:865
1 @ 0x4584ff 0x468e4b 0x873f12 0x873b4b 0x89562a 0x486771
#	0x873f11	github.com/pion/ice.(*Agent).connect+0x361	/home/travis/gopath/src/github.com/pion/ice/transport.go:44
#	0x873b4a	github.com/pion/ice.(*Agent).Accept+0xaa	/home/travis/gopath/src/github.com/pion/ice/transport.go:21
#	0x895629	github.com/pion/ice.connect.func1+0x99		/home/travis/gopath/src/github.com/pion/ice/transport_test.go:182
1 @ 0x5b1a2e 0x5b17fa 0x5ad6ac 0x6a3fc7 0x486771
#	0x5b1a2d	runtime/pprof.writeRuntimeProfile+0x9d			/home/travis/.gimme/versions/go1.12.6.linux.amd64/src/runtime/pprof/pprof.go:708
#	0x5b17f9	runtime/pprof.writeGoroutine+0xc9			/home/travis/.gimme/versions/go1.12.6.linux.amd64/src/runtime/pprof/pprof.go:670
#	0x5ad6ab	runtime/pprof.(*Profile).WriteTo+0x4fb			/home/travis/.gimme/versions/go1.12.6.linux.amd64/src/runtime/pprof/pprof.go:329
#	0x6a3fc6	github.com/pion/transport/test.TimeOut.func1+0x96	/home/travis/gopath/pkg/mod/github.com/pion/[email protected]/test/util.go:18
panic: timeout
goroutine 202 [running]:
github.com/pion/transport/test.TimeOut.func1()
	/home/travis/gopath/pkg/mod/github.com/pion/[email protected]/test/util.go:21 +0x164
created by time.goFunc
	/home/travis/.gimme/versions/go1.12.6.linux.amd64/src/time/sleep.go:169 +0x52

I tried increasing the timeout to 1 minute, but it didn't help.

ICE connectivity problem

Your environment.

  • Version: v2.0.3
  • Browser: n/a

What did you do?

I am experiencing an ICE connectivity problem between two nodes (using v2.0.3). Both ends are behind different NATs, which happen to be the same type: port-restricted cone NAT.

What did you expect?

The ICE connection state (on both ends) should go to "connected".

What happened?

One end's ICE connection state is stuck at "checking" forever, while the other end went to "connected". The data channel I am trying to open never opens.

Unlock of unlocked RWMutex in TestConnectivityVNet/Symmetric_NATs_on_both_ends

=== RUN   TestConnectivityVNet/Symmetric_NATs_on_both_ends
fatal error: sync: Unlock of unlocked RWMutex

goroutine 280 [running]:
runtime.throw(0xa91dbc, 0x20)
	/home/travis/.gimme/versions/go1.13.linux.amd64/src/runtime/panic.go:774 +0x72 fp=0xc000263a28 sp=0xc0002639f8 pc=0x458412
sync.throw(0xa91dbc, 0x20)
	/home/travis/.gimme/versions/go1.13.linux.amd64/src/runtime/panic.go:760 +0x35 fp=0xc000263a48 sp=0xc000263a28 pc=0x458395
sync.(*RWMutex).Unlock(0xc0000db630)
	/home/travis/.gimme/versions/go1.13.linux.amd64/src/sync/rwmutex.go:129 +0xf3 fp=0xc000263a88 sp=0xc000263a48 pc=0x49aa23
runtime.call32(0x0, 0xaa6220, 0xc000263e00, 0x800000008)
	/home/travis/.gimme/versions/go1.13.linux.amd64/src/runtime/asm_amd64.s:539 +0x3b fp=0xc000263ab8 sp=0xc000263a88 pc=0x486fcb
panic(0xa034e0, 0xb2e450)
	/home/travis/.gimme/versions/go1.13.linux.amd64/src/runtime/panic.go:679 +0x1b2 fp=0xc000263b48 sp=0xc000263ab8 pc=0x457f52
runtime.chansend(0xc0002320c0, 0xc000263c60, 0x457000, 0x6a1c10, 0x1)
	/home/travis/.gimme/versions/go1.13.linux.amd64/src/runtime/chan.go:187 +0x678 fp=0xc000263bd0 sp=0xc000263b48 pc=0x42dcf8
runtime.selectnbsend(0xc0002320c0, 0xc000263c60, 0x0)
	/home/travis/.gimme/versions/go1.13.linux.amd64/src/runtime/chan.go:615 +0x44 fp=0xc000263c08 sp=0xc000263bd0 pc=0x42ed74
github.com/pion/transport/vnet.(*UDPConn).onInboundChunk(0xc000020cc0, 0xb44260, 0xc000362e00)
	/home/travis/gopath/pkg/mod/github.com/pion/[email protected]/vnet/conn.go:236 +0xc0 fp=0xc000263c80 sp=0xc000263c08 pc=0x6a1c10
github.com/pion/transport/vnet.(*vNet).onInboundChunk(0xc000253c80, 0xb44260, 0xc000362e00)
	/home/travis/gopath/pkg/mod/github.com/pion/[email protected]/vnet/net.go:112 +0x16a fp=0xc000263d08 sp=0xc000263c80 pc=0x6a752a
github.com/pion/transport/vnet.(*Net).onInboundChunk(0xc0004e2840, 0xb44260, 0xc000362e00)
	/home/travis/gopath/pkg/mod/github.com/pion/[email protected]/vnet/net.go:623 +0x78 fp=0xc000263d30 sp=0xc000263d08 pc=0x6abd18
github.com/pion/transport/vnet.(*Router).processChunks(0xc0000db560, 0x0, 0x0, 0x0)
	/home/travis/gopath/pkg/mod/github.com/pion/[email protected]/vnet/router.go:471 +0x527 fp=0xc000263eb0 sp=0xc000263d30 pc=0x6b01f7
github.com/pion/transport/vnet.(*Router).Start.func1(0xc0000db560, 0xc000038480)
	/home/travis/gopath/pkg/mod/github.com/pion/[email protected]/vnet/router.go:224 +0x54 fp=0xc000263fd0 sp=0xc000263eb0 pc=0x6b1a74
runtime.goexit()
	/home/travis/.gimme/versions/go1.13.linux.amd64/src/runtime/asm_amd64.s:1357 +0x1 fp=0xc000263fd8 sp=0xc000263fd0 pc=0x488ce1
created by github.com/pion/transport/vnet.(*Router).Start
	/home/travis/gopath/pkg/mod/github.com/pion/[email protected]/vnet/router.go:221 +0x12a

goroutine 1 [chan receive]:
testing.(*T).Run(0xc0000f4100, 0xa8b061, 0x14, 0xaa55b0, 0x1)
	/home/travis/.gimme/versions/go1.13.linux.amd64/src/testing/testing.go:961 +0x68a
testing.runTests.func1(0xc0000f4100)
	/home/travis/.gimme/versions/go1.13.linux.amd64/src/testing/testing.go:1202 +0xa7
testing.tRunner(0xc0000f4100, 0xc0000b1cd8)
	/home/travis/.gimme/versions/go1.13.linux.amd64/src/testing/testing.go:909 +0x19a
testing.runTests(0xc0000bab40, 0xe568e0, 0x31, 0x31, 0x0)
	/home/travis/.gimme/versions/go1.13.linux.amd64/src/testing/testing.go:1200 +0x522
testing.(*M).Run(0xc000212080, 0x0)
	/home/travis/.gimme/versions/go1.13.linux.amd64/src/testing/testing.go:1117 +0x300
main.main()
	_testmain.go:242 +0x348

goroutine 279 [runnable]:
sync.runtime_SemacquireMutex(0xc0004e282c, 0x900000000, 0x1)
	/home/travis/.gimme/versions/go1.13.linux.amd64/src/runtime/sema.go:71 +0x47
sync.(*Mutex).lockSlow(0xc0004e2828)
	/home/travis/.gimme/versions/go1.13.linux.amd64/src/sync/mutex.go:138 +0x1c1
sync.(*Mutex).Lock(0xc0004e2828)
	/home/travis/.gimme/versions/go1.13.linux.amd64/src/sync/mutex.go:81 +0x7d
sync.(*RWMutex).Lock(0xc0004e2828)
	/home/travis/.gimme/versions/go1.13.linux.amd64/src/sync/rwmutex.go:98 +0x4a
github.com/pion/transport/vnet.(*udpConnMap).delete(0xc0004e2820, 0xb3a0a0, 0xc0004dfc80, 0x0, 0x0)
	/home/travis/gopath/pkg/mod/github.com/pion/[email protected]/vnet/conn_map.go:78 +0x6d
github.com/pion/transport/vnet.(*vNet).onClosed(0xc000253c80, 0xb3a0a0, 0xc0004dfc80)
	/home/travis/gopath/pkg/mod/github.com/pion/[email protected]/vnet/net.go:286 +0xa6
github.com/pion/transport/vnet.(*UDPConn).Close(0xc000020cc0, 0x20, 0x0)
	/home/travis/gopath/pkg/mod/github.com/pion/[email protected]/vnet/conn.go:152 +0x183
github.com/pion/turn/v2.(*Server).Close(0xc0000e81c0, 0x9e1f00, 0xc0000afe00)
	/home/travis/gopath/pkg/mod/github.com/pion/turn/[email protected]/server.go:115 +0xeb
github.com/pion/ice.(*virtualNet).close(0xc0004e2a20)
	/home/travis/gopath/src/github.com/pion/ice/connectivity_vnet_test.go:27 +0x51
github.com/pion/ice.TestConnectivityVNet.func2(0xc0000f4900)
	/home/travis/gopath/src/github.com/pion/ice/connectivity_vnet_test.go:400 +0x460
testing.tRunner(0xc0000f4900, 0xc0004e2780)
	/home/travis/.gimme/versions/go1.13.linux.amd64/src/testing/testing.go:909 +0x19a
created by testing.(*T).Run
	/home/travis/.gimme/versions/go1.13.linux.amd64/src/testing/testing.go:960 +0x652

goroutine 282 [select]:
github.com/pion/transport/vnet.(*Router).Start.func1(0xc0000db9e0, 0xc000038540)
	/home/travis/gopath/pkg/mod/github.com/pion/[email protected]/vnet/router.go:231 +0x13b
created by github.com/pion/transport/vnet.(*Router).Start
	/home/travis/gopath/pkg/mod/github.com/pion/[email protected]/vnet/router.go:221 +0x12a

goroutine 257 [runnable]:
github.com/pion/transport/vnet.(*UDPConn).ReadFrom(0xc00027e680, 0xc0002de000, 0xffff, 0xffff, 0xb3a0a0, 0xc0003472c0, 0xc000347201, 0x0, 0x0)
	/home/travis/gopath/pkg/mod/github.com/pion/[email protected]/vnet/conn.go:71 +0x16d
github.com/pion/turn/v2.(*Client).Listen.func1(0xc000234240)
	/home/travis/gopath/pkg/mod/github.com/pion/turn/[email protected]/client.go:172 +0x9d
created by github.com/pion/turn/v2.(*Client).Listen
	/home/travis/gopath/pkg/mod/github.com/pion/turn/[email protected]/client.go:169 +0x127

goroutine 259 [chan receive]:
testing.(*T).Run(0xc0000f4800, 0xa8e79a, 0x1b, 0xc0004e2780, 0xc36201)
	/home/travis/.gimme/versions/go1.13.linux.amd64/src/testing/testing.go:961 +0x68a
github.com/pion/ice.TestConnectivityVNet(0xc0000f4800)
	/home/travis/gopath/src/github.com/pion/ice/connectivity_vnet_test.go:366 +0x3be
testing.tRunner(0xc0000f4800, 0xaa55b0)
	/home/travis/.gimme/versions/go1.13.linux.amd64/src/testing/testing.go:909 +0x19a
created by testing.(*T).Run
	/home/travis/.gimme/versions/go1.13.linux.amd64/src/testing/testing.go:960 +0x652

goroutine 287 [runnable]:
github.com/pion/ice.NewAgent.func2(0xc0000d1180)
	/home/travis/gopath/src/github.com/pion/ice/agent.go:431 +0x89
created by github.com/pion/ice.NewAgent
	/home/travis/gopath/src/github.com/pion/ice/agent.go:430 +0x149f

goroutine 294 [runnable]:
net.IP.String(0xc0004e1e70, 0x10, 0x10, 0x50, 0xe505e0)
	/home/travis/.gimme/versions/go1.13.linux.amd64/src/net/ip.go:313 +0x94e
net.ipEmptyString(...)
	/home/travis/.gimme/versions/go1.13.linux.amd64/src/net/ip.go:372
net.(*UDPAddr).String(0xc000237710, 0x0, 0xb35100)
	/home/travis/.gimme/versions/go1.13.linux.amd64/src/net/udpsock.go:38 +0x427
github.com/pion/turn/v2/internal/allocation.(*FiveTuple).Fingerprint(0xc000237770, 0xc0004d2f50, 0xc0002324e0)
	/home/travis/gopath/pkg/mod/github.com/pion/turn/[email protected]/internal/allocation/five_tuple.go:35 +0x71
github.com/pion/turn/v2/internal/allocation.(*Manager).DeleteAllocation(0xc0000a8660, 0xc000237770)
	/home/travis/gopath/pkg/mod/github.com/pion/turn/[email protected]/internal/allocation/allocation_manager.go:119 +0x43
github.com/pion/turn/v2/internal/allocation.(*Allocation).packetHandler(0xc0000c4210, 0xc0000a8660)
	/home/travis/gopath/pkg/mod/github.com/pion/turn/[email protected]/internal/allocation/allocation.go:222 +0x1178
created by github.com/pion/turn/v2/internal/allocation.(*Manager).CreateAllocation
	/home/travis/gopath/pkg/mod/github.com/pion/turn/[email protected]/internal/allocation/allocation_manager.go:113 +0x6e7

goroutine 281 [select]:
github.com/pion/transport/vnet.(*Router).Start.func1(0xc0000db7a0, 0xc0000384e0)
	/home/travis/gopath/pkg/mod/github.com/pion/[email protected]/vnet/router.go:231 +0x13b
created by github.com/pion/transport/vnet.(*Router).Start
	/home/travis/gopath/pkg/mod/github.com/pion/[email protected]/vnet/router.go:221 +0x12a

goroutine 284 [runnable]:
github.com/pion/ice.NewAgent.func2(0xc0000d0f00)
	/home/travis/gopath/src/github.com/pion/ice/agent.go:431 +0x89
created by github.com/pion/ice.NewAgent
	/home/travis/gopath/src/github.com/pion/ice/agent.go:430 +0x149f

goroutine 304 [runnable]:
github.com/pion/ice.(*controllingSelector).Start.func1(0xc000021200)
	/home/travis/gopath/src/github.com/pion/ice/selection.go:30 +0x1af
created by github.com/pion/ice.(*controllingSelector).Start
	/home/travis/gopath/src/github.com/pion/ice/selection.go:29 +0xc7

goroutine 305 [runnable]:
github.com/pion/ice.(*Agent).startConnectivityChecks.func1.1(0xc0000d1180)
	/home/travis/gopath/src/github.com/pion/ice/agent.go:622 +0x1fd
created by github.com/pion/ice.(*Agent).startConnectivityChecks.func1
	/home/travis/gopath/src/github.com/pion/ice/agent.go:612 +0x3ee
FAIL	github.com/pion/ice	30.185s
FAIL

Some tests cause timeout on CI

TestConnectivityLite

environment: CI: Test i386 1.13

=== RUN   TestConnectivityLite
panic: test timed out after 10m0s

goroutine 97 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:1377 +0xcb
created by time.goFunc
	/usr/local/go/src/time/sleep.go:168 +0x37

goroutine 1 [chan receive, 9 minutes]:
testing.(*T).Run(0xa08e0a0, 0x83a5120, 0x14, 0x83bca2c, 0x301)
	/usr/local/go/src/testing/testing.go:961 +0x2cb
testing.runTests.func1(0xa1c80a0)
	/usr/local/go/src/testing/testing.go:1202 +0x5a
testing.tRunner(0xa1c80a0, 0xa096ee0)
	/usr/local/go/src/testing/testing.go:909 +0x9a
testing.runTests(0xa00c1c0, 0x862b440, 0x2f, 0x2f, 0x0)
	/usr/local/go/src/testing/testing.go:1200 +0x22d
testing.(*M).Run(0xa044f40, 0x0)
	/usr/local/go/src/testing/testing.go:1117 +0x13b
main.main()
	_testmain.go:138 +0x104

goroutine 59 [select, 9 minutes]:
github.com/pion/ice.(*Agent).connect(0xa1ee000, 0x841ea20, 0xa05e014, 0xa016101, 0xa08c1a0, 0x10, 0xa05a060, 0x20, 0xa0506c0, 0x4, ...)
	/go/src/github.com/pion/ice/transport.go:57 +0x12a
github.com/pion/ice.(*Agent).Dial(...)
	/go/src/github.com/pion/ice/transport.go:16
github.com/pion/ice.connectWithVNet(0xa1be2c0, 0xa1ee000, 0x0, 0x0)
	/go/src/github.com/pion/ice/connectivity_vnet_test.go:188 +0x41b
github.com/pion/ice.TestConnectivityLite(0xa08e0a0)
	/go/src/github.com/pion/ice/agent_test.go:500 +0x5f9
testing.tRunner(0xa08e0a0, 0x83bca2c)
	/usr/local/go/src/testing/testing.go:909 +0x9a
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:960 +0x2ac

goroutine 63 [select, 9 minutes]:
github.com/pion/transport/vnet.(*UDPConn).ReadFrom(0xa0106a0, 0xa1f8000, 0x2000, 0x2000, 0x0, 0x0, 0x807a5fe, 0x83bce70, 0xa05e450)
	/go/pkg/mod/github.com/pion/[email protected]/vnet/conn.go:71 +0x94
github.com/pion/ice.(*candidateBase).recvLoop(0xa012050)
	/go/src/github.com/pion/ice/candidate_base.go:92 +0xf0
created by github.com/pion/ice.(*candidateBase).start
	/go/src/github.com/pion/ice/candidate_base.go:81 +0xc0

goroutine 67 [select, 9 minutes]:
github.com/pion/transport/vnet.(*UDPConn).ReadFrom(0xa232840, 0xa09e000, 0x5dc, 0x5dc, 0xa232d80, 0xa09e000, 0x14, 0x5dc, 0x0)
	/go/pkg/mod/github.com/pion/[email protected]/vnet/conn.go:71 +0x94
github.com/pion/turn.(*Server).listen.func1(0xe7bb6248, 0xa232840, 0xa058480, 0xa050240)
	/go/pkg/mod/github.com/pion/[email protected]/server.go:258 +0x69
created by github.com/pion/turn.(*Server).listen
	/go/pkg/mod/github.com/pion/[email protected]/server.go:255 +0x2f3

goroutine 60 [select, 9 minutes]:
github.com/pion/ice.(*Agent).taskLoop(0xa1ee000)
	/go/src/github.com/pion/ice/agent.go:741 +0xdb
created by github.com/pion/ice.NewAgent
	/go/src/github.com/pion/ice/agent.go:444 +0x813

goroutine 48 [select, 9 minutes]:
github.com/pion/transport/vnet.(*Router).Start.func1(0xa08e000, 0xa050180)
	/go/pkg/mod/github.com/pion/[email protected]/vnet/router.go:231 +0xcd
created by github.com/pion/transport/vnet.(*Router).Start
	/go/pkg/mod/github.com/pion/[email protected]/vnet/router.go:221 +0xbb

goroutine 71 [select, 9 minutes]:
github.com/pion/transport/vnet.(*UDPConn).ReadFrom(0xa2329c0, 0xa1f4000, 0x2000, 0x2000, 0x0, 0x0, 0x807a5fe, 0x83bce70, 0xa22c810)
	/go/pkg/mod/github.com/pion/[email protected]/vnet/conn.go:71 +0x94
github.com/pion/ice.(*candidateBase).recvLoop(0xa0540a0)
	/go/src/github.com/pion/ice/candidate_base.go:92 +0xf0
created by github.com/pion/ice.(*candidateBase).start
	/go/src/github.com/pion/ice/candidate_base.go:81 +0xc0

goroutine 68 [select, 9 minutes]:
github.com/pion/ice.(*Agent).taskLoop(0xa1be2c0)
	/go/src/github.com/pion/ice/agent.go:741 +0xdb
created by github.com/pion/ice.NewAgent
	/go/src/github.com/pion/ice/agent.go:444 +0x813

goroutine 65 [select, 9 minutes]:
github.com/pion/transport/vnet.(*Router).Start.func1(0xa08e140, 0xa0501c0)
	/go/pkg/mod/github.com/pion/[email protected]/vnet/router.go:231 +0xcd
created by github.com/pion/transport/vnet.(*Router).Start
	/go/pkg/mod/github.com/pion/[email protected]/vnet/router.go:221 +0xbb

goroutine 66 [select, 9 minutes]:
github.com/pion/transport/vnet.(*Router).Start.func1(0xa08e1e0, 0xa050200)
	/go/pkg/mod/github.com/pion/[email protected]/vnet/router.go:231 +0xcd
created by github.com/pion/transport/vnet.(*Router).Start
	/go/pkg/mod/github.com/pion/[email protected]/vnet/router.go:221 +0xbb

goroutine 76 [select, 9 minutes]:
github.com/pion/ice.(*Agent).connect(0xa1be2c0, 0x841ea20, 0xa05e014, 0x0, 0xa016100, 0x10, 0xa1ec000, 0x20, 0x0, 0x1, ...)
	/go/src/github.com/pion/ice/transport.go:57 +0x12a
github.com/pion/ice.(*Agent).Accept(...)
	/go/src/github.com/pion/ice/transport.go:22
github.com/pion/ice.connectWithVNet.func1(0xa1be2c0, 0xa016100, 0x10, 0xa1ec000, 0x20, 0xa23e768, 0xa0506c0)
	/go/src/github.com/pion/ice/connectivity_vnet_test.go:183 +0x5e
created by github.com/pion/ice.connectWithVNet
	/go/src/github.com/pion/ice/connectivity_vnet_test.go:181 +0x3d1

goroutine 73 [select, 9 minutes]:
github.com/pion/transport/vnet.(*UDPConn).ReadFrom(0xa232b20, 0xa208000, 0x2000, 0x2000, 0x2000, 0x841d520, 0xa2337c0, 0x8420d60, 0xa2328c0)
	/go/pkg/mod/github.com/pion/[email protected]/vnet/conn.go:71 +0x94
github.com/pion/ice.(*candidateBase).recvLoop(0xa0542d0)
	/go/src/github.com/pion/ice/candidate_base.go:92 +0xf0
created by github.com/pion/ice.(*candidateBase).start
	/go/src/github.com/pion/ice/candidate_base.go:81 +0xc0
FAIL	github.com/pion/ice	600.013s

TestRelayOnlyConnection

environment: CI: Test i386 1.13

=== RUN   TestRelayOnlyConnection
goroutine profile: total 7
1 @ 0x80750c4 0x804ca12 0x804c9ed 0x804c75c 0x810eefb 0x8111d4a 0x810ebca 0x810ff6d 0x810f21b 0x82fe014 0x8074cf2 0x809da01
#	0x810eefa	testing.(*T).Run+0x2ca		/usr/local/go/src/testing/testing.go:961
#	0x8111d49	testing.runTests.func1+0x59	/usr/local/go/src/testing/testing.go:1202
#	0x810ebc9	testing.tRunner+0x99		/usr/local/go/src/testing/testing.go:909
#	0x810ff6c	testing.runTests+0x22c		/usr/local/go/src/testing/testing.go:1200
#	0x810f21a	testing.(*M).Run+0x13a		/usr/local/go/src/testing/testing.go:1117
#	0x82fe013	main.main+0x103			_testmain.go:138
#	0x8074cf1	runtime.main+0x201		/usr/local/go/src/runtime/proc.go:203

1 @ 0x80750c4 0x804ca12 0x804c9ed 0x804c77c 0x81cb575 0x81cb598 0x81ca60f 0x82d49e0 0x82d2a7d 0x82ca3fe 0x82e17e6 0x810ebca 0x809da01
#	0x81cb574	github.com/pion/turn/v2/internal/client.(*Transaction).WaitForResult+0x424	/go/pkg/mod/github.com/pion/turn/[email protected]/internal/client/transaction.go:92
#	0x81cb597	github.com/pion/turn/v2.(*Client).PerformTransaction+0x447			/go/pkg/mod/github.com/pion/turn/[email protected]/client.go:357
#	0x81ca60e	github.com/pion/turn/v2.(*Client).Allocate+0x1ce				/go/pkg/mod/github.com/pion/turn/[email protected]/client.go:250
#	0x82d49df	github.com/pion/ice.(*Agent).gatherCandidatesRelay+0x2ff			/go/src/github.com/pion/ice/gather.go:433
#	0x82d2a7c	github.com/pion/ice.(*Agent).gatherCandidates+0x10c				/go/src/github.com/pion/ice/gather.go:160
#	0x82ca3fd	github.com/pion/ice.NewAgent+0x84d						/go/src/github.com/pion/ice/agent.go:448
#	0x82e17e5	github.com/pion/ice.TestRelayOnlyConnection+0x3f5				/go/src/github.com/pion/ice/candidate_relay_test.go:59
#	0x810ebc9	testing.tRunner+0x99								/usr/local/go/src/testing/testing.go:909

1 @ 0x80750c4 0x806fd44 0x806f21b 0x80bd917 0x80be4a6 0x80be48b 0x815ad7f 0x816ebcf 0x816d997 0x81cd892 0x809da01
#	0x806f21a	internal/poll.runtime_pollWait+0x4a				/usr/local/go/src/runtime/netpoll.go:184
#	0x80bd916	internal/poll.(*pollDesc).wait+0x36				/usr/local/go/src/internal/poll/fd_poll_runtime.go:87
#	0x80be4a5	internal/poll.(*pollDesc).waitRead+0x165			/usr/local/go/src/internal/poll/fd_poll_runtime.go:92
#	0x80be48a	internal/poll.(*FD).ReadFrom+0x14a				/usr/local/go/src/internal/poll/fd_unix.go:219
#	0x815ad7e	net.(*netFD).readFrom+0x3e					/usr/local/go/src/net/fd_unix.go:208
#	0x816ebce	net.(*UDPConn).readFrom+0x3e					/usr/local/go/src/net/udpsock_posix.go:47
#	0x816d996	net.(*UDPConn).ReadFrom+0x46					/usr/local/go/src/net/udpsock.go:121
#	0x81cd891	github.com/pion/turn/v2.(*Server).packetConnReadLoop+0x211	/go/pkg/mod/github.com/pion/turn/[email protected]/server.go:160

1 @ 0x80750c4 0x806fd44 0x806f21b 0x80bd917 0x80be4a6 0x80be48b 0x815ad7f 0x816ebcf 0x816d997 0x81ce56a 0x809da01
#	0x806f21a	internal/poll.runtime_pollWait+0x4a			/usr/local/go/src/runtime/netpoll.go:184
#	0x80bd916	internal/poll.(*pollDesc).wait+0x36			/usr/local/go/src/internal/poll/fd_poll_runtime.go:87
#	0x80be4a5	internal/poll.(*pollDesc).waitRead+0x165		/usr/local/go/src/internal/poll/fd_poll_runtime.go:92
#	0x80be48a	internal/poll.(*FD).ReadFrom+0x14a			/usr/local/go/src/internal/poll/fd_unix.go:219
#	0x815ad7e	net.(*netFD).readFrom+0x3e				/usr/local/go/src/net/fd_unix.go:208
#	0x816ebce	net.(*UDPConn).readFrom+0x3e				/usr/local/go/src/net/udpsock_posix.go:47
#	0x816d996	net.(*UDPConn).ReadFrom+0x46				/usr/local/go/src/net/udpsock.go:121
#	0x81ce569	github.com/pion/turn/v2.(*Client).Listen.func1+0x69	/go/pkg/mod/github.com/pion/turn/[email protected]/client.go:166

1 @ 0x80750c4 0x806fd44 0x806f21b 0x80bd917 0x80bf0ad 0x80bf08e 0x8169248 0x8190f2c 0x8194532 0x8194511 0x81981f1 0x809da01
#	0x806f21a	internal/poll.runtime_pollWait+0x4a			/usr/local/go/src/runtime/netpoll.go:184
#	0x80bd916	internal/poll.(*pollDesc).wait+0x36			/usr/local/go/src/internal/poll/fd_poll_runtime.go:87
#	0x80bf0ac	internal/poll.(*pollDesc).waitRead+0xec			/usr/local/go/src/internal/poll/fd_poll_runtime.go:92
#	0x80bf08d	internal/poll.(*FD).RawRead+0xcd			/usr/local/go/src/internal/poll/fd_unix.go:534
#	0x8169247	net.(*rawConn).Read+0x47				/usr/local/go/src/net/rawconn.go:43
#	0x8190f2b	golang.org/x/net/internal/socket.(*Conn).recvMsg+0x18b	/go/pkg/mod/golang.org/x/[email protected]/internal/socket/rawconn_msg.go:32
#	0x8194531	golang.org/x/net/internal/socket.(*Conn).RecvMsg+0x141	/go/pkg/mod/golang.org/x/[email protected]/internal/socket/socket.go:255
#	0x8194510	golang.org/x/net/ipv4.(*payloadHandler).ReadFrom+0x120	/go/pkg/mod/golang.org/x/[email protected]/ipv4/payload_cmsg.go:31
#	0x81981f0	github.com/pion/mdns.(*Conn).start+0xf0			/go/pkg/mod/github.com/pion/[email protected]/conn.go:256

1 @ 0x80750c4 0x8083679 0x82cbb5a 0x809da01
#	0x82cbb59	github.com/pion/ice.(*Agent).taskLoop+0x1b9	/go/src/github.com/pion/ice/agent.go:754

1 @ 0x814008c 0x813fefa 0x813cf53 0x81d5057 0x809da01
#	0x814008b	runtime/pprof.writeRuntimeProfile+0x7b			/usr/local/go/src/runtime/pprof/pprof.go:708
#	0x813fef9	runtime/pprof.writeGoroutine+0x79			/usr/local/go/src/runtime/pprof/pprof.go:670
#	0x813cf52	runtime/pprof.(*Profile).WriteTo+0x2c2			/usr/local/go/src/runtime/pprof/pprof.go:329
#	0x81d5056	github.com/pion/transport/test.TimeOut.func1+0x56	/go/pkg/mod/github.com/pion/[email protected]/test/util.go:18

panic: timeout

goroutine 174 [running]:
github.com/pion/transport/test.TimeOut.func1()
	/go/pkg/mod/github.com/pion/[email protected]/test/util.go:21 +0xda
created by time.goFunc
	/usr/local/go/src/time/sleep.go:168 +0x37
FAIL	github.com/pion/ice	56.063s

ICE Connection with relay candidates using coTURN failed

Your environment.

  • Version: pion/[email protected]
  • Browser: n/a

What did you do?

  • Attempted to establish one datachannel between two nodes using coTURN (relay)
  • I set ICETransportPolicy to webrtc.ICETransportPolicyRelay (assuming this makes it use relay candidates only)
  • coTURN ver: Version Coturn-4.5.0.3 'dan Eider'

What did you expect?

Both ends would connect with each other through a relay candidate

What happened?

Gathering relay (TURN) candidates is successful, but the ICE connection state goes to "failed" in about 10 seconds.

Logs

Answerer side

$ SGRTC_LOG_TRACE=all go run main.go -epID server
is acceptor
sgrtc DEBUG: 23:01:33.468397 api.go:163: factory.Writer: &{0xc0000ac000}
sgrtc DEBUG: 23:01:33.468891 api.go:164: factory.DefaultLogLevel: Trace
sgrtc DEBUG: 23:01:33.468905 api.go:165: factory.ScopeLevels: map[]
signaling INFO: 2019/06/07 23:01:33 NewSignaling for server
sgrtc INFO: 2019/06/07 23:01:33 new endpoint: server
sgrtc DEBUG: 23:01:33.469448 api.go:317: Signaling started on server
sgrtc INFO: 2019/06/07 23:01:33 api run loop started
signaling DEBUG: 23:01:33.742738 signaling.go:96: sig(server): ev status
signaling DEBUG: 23:01:33.742766 signaling.go:109: sig(server): ev - PNConnectedCategory
sgrtc TRACE: 23:01:33.742797 api.go:997: signaled to select, but no waiter
signaling DEBUG: 23:01:54.668238 signaling.go:116: sig(server): ev message
signaling DEBUG: 23:01:54.668262 signaling.go:121: sig(server) received on: server
sgrtc DEBUG: 23:01:54.668394 api.go:797: signal received: from=client to=server
sgrtc DEBUG: 23:01:54.668405 api.go:798: signal received: body={"type":"offer","sdp":"v=0\r\no=- 572401240 1559973714 IN IP4 0.0.0.0\r\ns=-\r\nt=0 0\r\na=fingerprint:sha-256 C5:75:D0:B0:E0:CD:73:0B:F3:70:DD:FD:2C:38:7D:83:39:BC:97:41:CE:E9:3A:BB:02:B8:49:71:44:67:3B:DD\r\na=group:BUNDLE 0\r\nm=application 9 DTLS/SCTP 5000\r\nc=IN IP4 0.0.0.0\r\na=setup:active\r\na=mid:0\r\na=sendrecv\r\na=sctpmap:5000 webrtc-datachannel 1024\r\na=ice-ufrag:dCZLYxUikOOSvHee\r\na=ice-pwd:ddqUmulNHWQRLRsqAZTXBUtbqpwpZVdJ\r\na=candidate:foundation 1 udp 16777215 18.237.97.24 10134 typ relay raddr 10.0.0.135 rport 51583 generation 0\r\na=candidate:foundation 2 udp 16777215 18.237.97.24 10134 typ relay raddr 10.0.0.135 rport 51583 generation 0\r\na=end-of-candidates\r\na=setup:actpass\r\n"}
sgrtc DEBUG: 23:01:54.668415 api.go:856: api: new session detected
sgrtc TRACE: 23:01:54.668422 api.go:997: signaled to select, but no waiter
sgrtc DEBUG: 23:01:54.685386 api.go:412: Accept() on socket 0
sgrtc DEBUG: 23:01:54.685429 api.go:320: Endpoint server exists
session INFO: 2019/06/07 23:01:54 using ICE server: turn:ec2-18-237-97-24.us-west-2.compute.amazonaws.com (1560060033, YbOw+gzFEZYE87Yo5nS1nKR/xnw=)
session INFO: 2019/06/07 23:01:54 new session from server to peer client
pc INFO: 2019/06/07 23:01:54 signaling state changed to have-remote-offer
session INFO: 2019/06/07 23:01:54 signaling state has changed: have-remote-offer
ice DEBUG: 23:01:54.793277 agent.go:367: Started agent: isControlling? false, remoteUfrag: "dCZLYxUikOOSvHee", remotePwd: "ddqUmulNHWQRLRsqAZTXBUtbqpwpZVdJ"
ice INFO: 2019/06/07 23:01:54 Setting new connection state: Checking
ice TRACE: 23:01:54.793692 selection.go:205: pinging all candidates
pc INFO: 2019/06/07 23:01:54 ICE connection state changed: checking
session INFO: 2019/06/07 23:01:54 ICE connection state has changed: checking
ice TRACE: 23:01:54.794043 agent.go:706: ping STUN from relay 18.237.97.24:13077 related 10.0.0.135:49389 to relay 18.237.97.24:10134 related 10.0.0.135:51583
pc INFO: 2019/06/07 23:01:54 signaling state changed to stable
session INFO: 2019/06/07 23:01:54 signaling state has changed: stable
signaling DEBUG: 23:01:54.794223 signaling.go:165: signal sent: from=server to=client
signaling DEBUG: 23:01:54.794240 signaling.go:166: signal sent: body={"type":"answer","sdp":"v=0\r\no=- 609153572 1559973714 IN IP4 0.0.0.0\r\ns=-\r\nt=0 0\r\na=fingerprint:sha-256 7D:26:06:62:77:0D:26:7F:ED:D3:75:6B:55:D6:E9:63:1B:C9:E2:A5:BD:F4:2C:38:B3:9E:72:F8:A0:29:46:50\r\na=group:BUNDLE 0\r\nm=application 9 DTLS/SCTP 5000\r\nc=IN IP4 0.0.0.0\r\na=setup:active\r\na=mid:0\r\na=sendrecv\r\na=sctpmap:5000 webrtc-datachannel 1024\r\na=ice-ufrag:UhTUjahXcZjlMKwx\r\na=ice-pwd:MvLRRbgxrDUeulFtGkaNVixfVfOxspQV\r\na=candidate:foundation 1 udp 16777215 18.237.97.24 13077 typ relay raddr 10.0.0.135 rport 49389 generation 0\r\na=candidate:foundation 2 udp 16777215 18.237.97.24 13077 typ relay raddr 10.0.0.135 rport 49389 generation 0\r\na=end-of-candidates\r\n"}
ice TRACE: 23:01:56.794084 selection.go:205: pinging all candidates
ice TRACE: 23:01:56.794176 agent.go:706: ping STUN from relay 18.237.97.24:13077 related 10.0.0.135:49389 to relay 18.237.97.24:10134 related 10.0.0.135:51583
ice TRACE: 23:01:58.793789 selection.go:205: pinging all candidates
ice TRACE: 23:01:58.793869 agent.go:706: ping STUN from relay 18.237.97.24:13077 related 10.0.0.135:49389 to relay 18.237.97.24:10134 related 10.0.0.135:51583
ice TRACE: 23:02:00.793577 selection.go:205: pinging all candidates
ice TRACE: 23:02:00.793652 agent.go:706: ping STUN from relay 18.237.97.24:13077 related 10.0.0.135:49389 to relay 18.237.97.24:10134 related 10.0.0.135:51583
ice TRACE: 23:02:02.793638 selection.go:205: pinging all candidates
ice TRACE: 23:02:02.793723 agent.go:706: ping STUN from relay 18.237.97.24:13077 related 10.0.0.135:49389 to relay 18.237.97.24:10134 related 10.0.0.135:51583
ice TRACE: 23:02:04.793656 selection.go:205: pinging all candidates
ice TRACE: 23:02:04.793741 agent.go:706: ping STUN from relay 18.237.97.24:13077 related 10.0.0.135:49389 to relay 18.237.97.24:10134 related 10.0.0.135:51583
ice TRACE: 23:02:06.793512 selection.go:205: pinging all candidates
ice TRACE: 23:02:06.793599 agent.go:706: ping STUN from relay 18.237.97.24:13077 related 10.0.0.135:49389 to relay 18.237.97.24:10134 related 10.0.0.135:51583
ice TRACE: 23:02:08.793586 selection.go:205: pinging all candidates
ice TRACE: 23:02:08.793678 agent.go:706: ping STUN from relay 18.237.97.24:13077 related 10.0.0.135:49389 to relay 18.237.97.24:10134 related 10.0.0.135:51583
ice TRACE: 23:02:10.793532 selection.go:205: pinging all candidates
ice TRACE: 23:02:10.793724 agent.go:423: max requests reached for pair prio 72057589759737855 (local, prio 16777215) relay 18.237.97.24:13077 related 10.0.0.135:49389 <-> relay 18.237.97.24:10134 related 10.0.0.135:51583 (remote, prio 16777215), marking it as failed

Offerer side

$ SGRTC_LOG_TRACE=all go run main.go -epID client -peerID server
is initiator
sgrtc DEBUG: 23:01:54.223865 api.go:163: factory.Writer: &{0xc0000b4000}
sgrtc DEBUG: 23:01:54.224410 api.go:164: factory.DefaultLogLevel: Trace
sgrtc DEBUG: 23:01:54.224433 api.go:165: factory.ScopeLevels: map[]
signaling INFO: 2019/06/07 23:01:54 NewSignaling for client
sgrtc INFO: 2019/06/07 23:01:54 new endpoint: client
sgrtc DEBUG: 23:01:54.224869 api.go:317: Signaling started on client
sgrtc INFO: 2019/06/07 23:01:54 api run loop started
signaling DEBUG: 23:01:54.478936 signaling.go:96: sig(client): ev status
signaling DEBUG: 23:01:54.478972 signaling.go:109: sig(client): ev - PNConnectedCategory
sgrtc TRACE: 23:01:54.479024 api.go:997: signaled to select, but no waiter
session INFO: 2019/06/07 23:01:54 using ICE server: turn:ec2-18-237-97-24.us-west-2.compute.amazonaws.com (1560060033, YbOw+gzFEZYE87Yo5nS1nKR/xnw=)
session INFO: 2019/06/07 23:01:54 new session from client to peer server
session DEBUG: 23:01:54.492700 session.go:234: [client] adding data channel info for label 'data'
session DEBUG: 23:01:54.492734 session.go:306: ep(client): dc label=data reliable ordered
pc INFO: 2019/06/07 23:01:54 signaling state changed to have-local-offer
session INFO: 2019/06/07 23:01:54 signaling state has changed: have-local-offer
signaling DEBUG: 23:01:54.565490 signaling.go:165: signal sent: from=client to=server
signaling DEBUG: 23:01:54.565522 signaling.go:166: signal sent: body={"type":"offer","sdp":"v=0\r\no=- 572401240 1559973714 IN IP4 0.0.0.0\r\ns=-\r\nt=0 0\r\na=fingerprint:sha-256 C5:75:D0:B0:E0:CD:73:0B:F3:70:DD:FD:2C:38:7D:83:39:BC:97:41:CE:E9:3A:BB:02:B8:49:71:44:67:3B:DD\r\na=group:BUNDLE 0\r\nm=application 9 DTLS/SCTP 5000\r\nc=IN IP4 0.0.0.0\r\na=setup:active\r\na=mid:0\r\na=sendrecv\r\na=sctpmap:5000 webrtc-datachannel 1024\r\na=ice-ufrag:dCZLYxUikOOSvHee\r\na=ice-pwd:ddqUmulNHWQRLRsqAZTXBUtbqpwpZVdJ\r\na=candidate:foundation 1 udp 16777215 18.237.97.24 10134 typ relay raddr 10.0.0.135 rport 51583 generation 0\r\na=candidate:foundation 2 udp 16777215 18.237.97.24 10134 typ relay raddr 10.0.0.135 rport 51583 generation 0\r\na=end-of-candidates\r\na=setup:actpass\r\n"}
signaling DEBUG: 23:01:54.913179 signaling.go:116: sig(client): ev message
signaling DEBUG: 23:01:54.913216 signaling.go:121: sig(client) received on: client
sgrtc DEBUG: 23:01:54.913290 api.go:797: signal received: from=server to=client
sgrtc DEBUG: 23:01:54.913302 api.go:798: signal received: body={"type":"answer","sdp":"v=0\r\no=- 609153572 1559973714 IN IP4 0.0.0.0\r\ns=-\r\nt=0 0\r\na=fingerprint:sha-256 7D:26:06:62:77:0D:26:7F:ED:D3:75:6B:55:D6:E9:63:1B:C9:E2:A5:BD:F4:2C:38:B3:9E:72:F8:A0:29:46:50\r\na=group:BUNDLE 0\r\nm=application 9 DTLS/SCTP 5000\r\nc=IN IP4 0.0.0.0\r\na=setup:active\r\na=mid:0\r\na=sendrecv\r\na=sctpmap:5000 webrtc-datachannel 1024\r\na=ice-ufrag:UhTUjahXcZjlMKwx\r\na=ice-pwd:MvLRRbgxrDUeulFtGkaNVixfVfOxspQV\r\na=candidate:foundation 1 udp 16777215 18.237.97.24 13077 typ relay raddr 10.0.0.135 rport 49389 generation 0\r\na=candidate:foundation 2 udp 16777215 18.237.97.24 13077 typ relay raddr 10.0.0.135 rport 49389 generation 0\r\na=end-of-candidates\r\n"}
pc INFO: 2019/06/07 23:01:54 signaling state changed to stable
session INFO: 2019/06/07 23:01:54 signaling state has changed: stable
ice DEBUG: 23:01:54.946044 agent.go:367: Started agent: isControlling? true, remoteUfrag: "UhTUjahXcZjlMKwx", remotePwd: "MvLRRbgxrDUeulFtGkaNVixfVfOxspQV"
ice INFO: 2019/06/07 23:01:54 Setting new connection state: Checking
ice TRACE: 23:01:54.946098 selection.go:90: pinging all candidates
pc INFO: 2019/06/07 23:01:54 ICE connection state changed: checking
session INFO: 2019/06/07 23:01:54 ICE connection state has changed: checking
ice TRACE: 23:01:54.946148 agent.go:706: ping STUN from relay 18.237.97.24:10134 related 10.0.0.135:51583 to relay 18.237.97.24:13077 related 10.0.0.135:49389
ice TRACE: 23:01:56.950079 selection.go:90: pinging all candidates
ice TRACE: 23:01:56.950190 agent.go:706: ping STUN from relay 18.237.97.24:10134 related 10.0.0.135:51583 to relay 18.237.97.24:13077 related 10.0.0.135:49389
ice TRACE: 23:01:58.949974 selection.go:90: pinging all candidates
ice TRACE: 23:01:58.950056 agent.go:706: ping STUN from relay 18.237.97.24:10134 related 10.0.0.135:51583 to relay 18.237.97.24:13077 related 10.0.0.135:49389
ice TRACE: 23:02:00.950202 selection.go:90: pinging all candidates
ice TRACE: 23:02:00.950325 agent.go:706: ping STUN from relay 18.237.97.24:10134 related 10.0.0.135:51583 to relay 18.237.97.24:13077 related 10.0.0.135:49389
ice TRACE: 23:02:02.949902 selection.go:90: pinging all candidates
ice TRACE: 23:02:02.949993 agent.go:706: ping STUN from relay 18.237.97.24:10134 related 10.0.0.135:51583 to relay 18.237.97.24:13077 related 10.0.0.135:49389
ice TRACE: 23:02:04.949762 selection.go:90: pinging all candidates
ice TRACE: 23:02:04.949901 agent.go:706: ping STUN from relay 18.237.97.24:10134 related 10.0.0.135:51583 to relay 18.237.97.24:13077 related 10.0.0.135:49389
ice TRACE: 23:02:04.950027 selection.go:35: check timeout reached and no valid candidate pair found, marking connection as failed
ice INFO: 2019/06/07 23:02:04 Setting new connection state: Failed
pc INFO: 2019/06/07 23:02:04 ICE connection state changed: failed
session INFO: 2019/06/07 23:02:04 ICE connection state has changed: failed
ice TRACE: 23:02:06.947587 selection.go:90: pinging all candidates
ice TRACE: 23:02:06.947659 agent.go:706: ping STUN from relay 18.237.97.24:10134 related 10.0.0.135:51583 to relay 18.237.97.24:13077 related 10.0.0.135:49389
ice TRACE: 23:02:08.945718 selection.go:90: pinging all candidates
ice TRACE: 23:02:08.945797 agent.go:706: ping STUN from relay 18.237.97.24:10134 related 10.0.0.135:51583 to relay 18.237.97.24:13077 related 10.0.0.135:49389

coTURN log

89459: handle_udp_packet: New UDP endpoint: local addr 0.0.0.0:3478, remote addr 98.207.205.254:51583
89459: session 001000000000000028: realm <sgcoturn.xxx.com> user <>: incoming packet message processed, error 401: Unauthorized
89459: IPv4. Local relay addr: 172.31.23.244:10134
89459: session 001000000000000028: new, realm=<sgcoturn.xxx.com>, username=<1560060033>, lifetime=600
89459: session 001000000000000028: realm <sgcoturn.xxx.com> user <1560060033>: incoming packet ALLOCATE processed, success
89459: handle_udp_packet: New UDP endpoint: local addr 0.0.0.0:3478, remote addr 98.207.205.254:49389
89459: session 001000000000000029: realm <sgcoturn.xxx.com> user <>: incoming packet message processed, error 401: Unauthorized
89459: IPv4. Local relay addr: 172.31.23.244:13077
89459: session 001000000000000029: new, realm=<sgcoturn.xxx.com>, username=<1560060033>, lifetime=600
89459: session 001000000000000029: realm <sgcoturn.xxx.com> user <1560060033>: incoming packet ALLOCATE processed, success
89459: session 001000000000000029: peer 0.0.0.0 lifetime updated: 300
89459: session 001000000000000029: realm <sgcoturn.xxx.com> user <1560060033>: incoming packet CREATE_PERMISSION processed, success
89460: session 001000000000000028: peer 0.0.0.0 lifetime updated: 300
89460: session 001000000000000028: realm <sgcoturn.xxx.com> user <1560060033>: incoming packet CREATE_PERMISSION processed, success

Add dedicated checkList unit tests

@hugoArregui

This might not be needed, but it would be nice to add some basic tests for this (e.g. after n checks it should select this pair, etc.).

Just a few checks to make sure we don't take a relay pair when we have good host candidates, etc. A sketch of what such a test could look like follows.
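As a starting point, a self-contained sketch of one such test. It implements the RFC 8445 candidate and pair priority formulas directly rather than calling pion/ice internals (the helper names here are illustrative, not the library's API), and simply asserts that a host/host pair always outranks a relay/relay pair:

package ice

import "testing"

// candidatePriority implements RFC 8445 §5.1.2.1:
// priority = (2^24)*typePref + (2^8)*localPref + (256 - componentID)
func candidatePriority(typePref, localPref, componentID uint32) uint32 {
	return (1<<24)*typePref + (1<<8)*localPref + (256 - componentID)
}

// pairPriority implements RFC 8445 §6.1.2.3, where g is the controlling
// agent's candidate priority and d is the controlled agent's:
// priority = 2^32*MIN(G,D) + 2*MAX(G,D) + (G>D ? 1 : 0)
func pairPriority(g, d uint32) uint64 {
	min, max, gWins := uint64(d), uint64(g), uint64(0)
	if g < d {
		min, max = uint64(g), uint64(d)
	}
	if g > d {
		gWins = 1
	}
	return (1<<32)*min + 2*max + gWins
}

func TestHostPairOutranksRelayPair(t *testing.T) {
	const (
		hostTypePref  = 126 // RFC 8445 recommended type preference for host
		relayTypePref = 0   // ... and for relay
	)
	host := candidatePriority(hostTypePref, 65535, 1)
	relay := candidatePriority(relayTypePref, 65535, 1)

	if pairPriority(host, host) <= pairPriority(relay, relay) {
		t.Fatal("host/host pair should have a higher priority than relay/relay")
	}
}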

Action Required: Fix Renovate Configuration

There is an error with this repository's Renovate configuration that needs to be fixed. As a precaution, Renovate will stop PRs until it is resolved.

Error type: undefined. Note: this is a nested preset so please contact the preset author if you are unable to fix it yourself.

Implement TURN

This is a covering ticket for all the things we need to do.

pion/webrtc

  • Pass TURN configuration from pion/webrtc to pion/ice
  • Add support for ICETransportPolicy so we can force TURN (see the sketch after this list)

pion/ice

  • TURN MVP. Add basic TURN support via pion/turnc
  • Update checklist/nomination logic so TURN is last (currently we just take first success)
  • Add TURN server to travis-ci, implement a version and confirm it can't regress
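For the ICETransportPolicy piece, a hand-wavy sketch of how relay-only gathering could be gated once the policy is passed down from pion/webrtc. The types here are illustrative stand-ins, not the current pion/ice API:

package main

import "fmt"

// Illustrative stand-ins for pion/ice's candidate types; the real
// library's shapes may differ.
type CandidateType int

const (
	CandidateTypeHost CandidateType = iota
	CandidateTypeServerReflexive
	CandidateTypeRelay
)

type Candidate struct {
	Type    CandidateType
	Address string
}

// filterCandidatesByPolicy keeps only relay candidates when the
// transport policy forces TURN (hypothetical helper; "relayOnly" would
// be derived from the ICETransportPolicy passed down from pion/webrtc).
func filterCandidatesByPolicy(candidates []Candidate, relayOnly bool) []Candidate {
	if !relayOnly {
		return candidates
	}
	filtered := make([]Candidate, 0, len(candidates))
	for _, c := range candidates {
		if c.Type == CandidateTypeRelay {
			filtered = append(filtered, c)
		}
	}
	return filtered
}

func main() {
	gathered := []Candidate{
		{CandidateTypeHost, "10.0.0.135:51583"},
		{CandidateTypeRelay, "18.237.97.24:10134"},
	}
	fmt.Println(filterCandidatesByPolicy(gathered, true)) // only the relay survives
}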

CandidatePair is not validated on Read

Another security concern. If a non-STUN packet is received, it is blindly forwarded to the mux. This increases the attack surface of the application by exposing DTLS/SRTP/etc. to arbitrary packets from the outside world. You can imagine an attack where a bad actor sends DTLS ClientHello packets to all possible ICE ports, and sometimes it will be able to initiate the handshake before the intended peer (hijacking the connection).

The fix is to validate local/remote pairs: you need to receive a STUN success (with the correct username/password) from a given remote Candidate before allowing non-STUN packets from that pair to be forwarded.
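A rough sketch of the gate the read loop needs, with hypothetical types; the real fix has to hook into the agent's existing checklist state, marking a remote as validated only after an authenticated connectivity check succeeds:

package ice

import "net"

// validatedRemotes records remote addresses from which a successful,
// authenticated STUN connectivity check has been seen. Hypothetical
// structure; pion/ice would track this on its candidate pairs.
type validatedRemotes struct {
	addrs map[string]bool
}

func newValidatedRemotes() *validatedRemotes {
	return &validatedRemotes{addrs: make(map[string]bool)}
}

// markValidated is called once a binding success (with the correct
// ufrag/password) has arrived from this remote.
func (v *validatedRemotes) markValidated(remote net.Addr) {
	v.addrs[remote.String()] = true
}

// shouldForward gates the read path: STUN goes to the agent itself,
// and everything else is dropped unless the remote has been validated,
// instead of being blindly handed to the DTLS/SRTP mux.
func (v *validatedRemotes) shouldForward(isSTUN bool, remote net.Addr) bool {
	if isSTUN {
		return true
	}
	return v.addrs[remote.String()]
}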
