vacp2p / nim-libp2p

236 stars · 28 watchers · 51 forks · 36.47 MB

libp2p implementation in Nim

Home Page: https://vacp2p.github.io/nim-libp2p/docs/

License: MIT License

Languages: Nim 99.77%, Shell 0.16%, Dockerfile 0.07%
Topics: libp2p, nim, p2p, p2p-network, peer-to-peer

nim-libp2p's People

Contributors

alejandrocabeza, alrevuelta, arnetheduck, btilford, cheatfate, cskiraly, cyanlemons, decanus, diegomrsantos, dryajov, emizzle, etan-status, ivansete-status, lchenut, markspanbroek, menduist, mratsim, narimiran, onqtam, oskarth, romanzac, sinkingsugar, stefantalpalaru, swader, tersec, tina1998612, vpavlin, yglukhov, yyoncho, zah


nim-libp2p's Issues

.gcsafe. everywhere

Minor, but for sanity's sake: I could not help noticing that {.gcsafe.} is used basically everywhere in the code base.
IIRC there was a bug a long time ago where methods emitted a lot of warnings (which I reported), but I believe that has since been fixed.
I have not tried it yet, but the annotation is usually asking for trouble, and since I see no threading (spawn etc.) it is probably unnecessary and should be removed.
Am I missing anything?

Mplex will freeze the reader if the stream was closed remotely.

This test reproduces the problem.

import unittest, chronos, tables
import libp2p/[switch,
               multistream,
               protocols/identify,
               connection,
               transports/transport,
               transports/tcptransport,
               multiaddress,
               peerinfo,
               crypto/crypto,
               peer,
               protocols/protocol,
               muxers/muxer,
               muxers/mplex/mplex,
               muxers/mplex/types,
               protocols/secure/secio,
               protocols/secure/secure]

const TestCodec = "/test/proto/1.0.0"
type TestProto = ref object of LPProtocol

proc createSwitch(ma: MultiAddress): (Switch, PeerInfo) =
  var peerInfo: PeerInfo = PeerInfo.init(PrivateKey.random(RSA))
  peerInfo.addrs.add(ma)
  let identify = newIdentify(peerInfo)

  proc createMplex(conn: Connection): Muxer =
    result = newMplex(conn)

  let mplexProvider = newMuxerProvider(createMplex, MplexCodec)
  let transports = @[Transport(newTransport(TcpTransport))]
  let muxers = [(MplexCodec, mplexProvider)].toTable()
  let secureManagers = [(SecioCodec, Secure(newSecio(peerInfo.privateKey)))].toTable()
  let switch = newSwitch(peerInfo,
                         transports,
                         identify,
                         muxers,
                         secureManagers)
  result = (switch, peerInfo)

proc testSwitch(): Future[bool] {.async, gcsafe.} =
  var event = newAsyncEvent()

  proc handle(conn: Connection, proto: string) {.async, gcsafe.} =
    await event.wait()
    event.clear()
    await conn.close()

  let ma1: MultiAddress = Multiaddress.init("/ip4/0.0.0.0/tcp/0")
  let ma2: MultiAddress = Multiaddress.init("/ip4/0.0.0.0/tcp/0")

  var peerInfo1, peerInfo2: PeerInfo
  var switch1, switch2: Switch
  var awaiters: seq[Future[void]]

  (switch1, peerInfo1) = createSwitch(ma1)

  let testProto = new TestProto
  testProto.codec = TestCodec
  testProto.handler = handle
  switch1.mount(testProto)

  (switch2, peerInfo2) = createSwitch(ma2)
  awaiters.add(await switch1.start())
  awaiters.add(await switch2.start())
  await switch2.connect(switch1.peerInfo)

  let conn = await switch2.dial(switch1.peerInfo, TestCodec)
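  # start a read, then let the remote handler close the stream; the read is
  # expected to complete within the timeout instead of freezing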
  let msgfut = conn.readLp()
  event.fire()
  var res = await withTimeout(msgfut, 1.seconds)

  await allFutures(switch1.stop(), switch2.stop())
  await allFutures(awaiters)

  result = res

when isMainModule:
  suite "Mplex tests":
    test "Mplex freeze after stream get closed remotely test":
      check waitFor(testSwitch()) == true

Add validation hook in Floodsub and Gossipsub

The current implementation is missing a validation hook that would allow validating incoming messages. If validation succeeds, the message is propagated further; if not, the message is dropped and the peer potentially blacklisted. This needs to comply with the spec and the Go reference implementation.
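
A minimal sketch of what such a hook could look like (the addValidator, TopicValidator and validateMessage names and the Message stand-in are illustrative assumptions, not the current nim-libp2p API):

import chronos, tables

type
  Message = object                 # stand-in for the pubsub RPC message type
    topic: string
    data: seq[byte]
  TopicValidator = proc(topic: string, msg: Message): Future[bool] {.gcsafe.}
  PubSub = ref object
    validators: Table[string, TopicValidator]

proc addValidator(p: PubSub, topic: string, v: TopicValidator) =
  p.validators[topic] = v

proc validateMessage(p: PubSub, msg: Message): Future[bool] {.async.} =
  # true: propagate further; false: drop (and potentially blacklist the peer)
  if msg.topic in p.validators:
    if not await p.validators[msg.topic](msg.topic, msg):
      return false
  return true

when isMainModule:
  proc validator(topic: string, msg: Message): Future[bool] {.async, gcsafe.} =
    result = msg.data.len > 0      # example rule: drop empty messages

  var ps = PubSub(validators: initTable[string, TopicValidator]())
  ps.addValidator("test-topic", validator)
  echo waitFor ps.validateMessage(Message(topic: "test-topic", data: @[1'u8]))        # true
  echo waitFor ps.validateMessage(Message(topic: "test-topic", data: newSeq[byte]())) # false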

method call on nil object and suspicious connection closing behaviour

Intermittent exception, intermittently caught or not, while running make LOG_LEVEL=TRACE testnet1 on nim-beacon-chain's devel branch:

TRC 2020-02-23 13:25:13+01:00 closing connection                         tid=25059
TRC 2020-02-23 13:25:13+01:00 sending encrypted bytes                    tid=25059 bytes=0C00 topic=secio
TRC 2020-02-23 13:25:13+01:00 Writing message                            tid=25059 message=00000022BF1372D352F967D029DCE74FE2B7950A38383425275552433F431C6D0670DEEE93EE topic=secio
TRC 2020-02-23 13:25:13+01:00 Recieved message header                    tid=25059 header=00000022 length=34 topic=secio
TRC 2020-02-23 13:25:13+01:00 Received message body                      tid=25059 buffer=BF1372D352F967D029DCE74FE2B7950A38383425275552433F431C6D0670DEEE93EE length=34 topic=secio
TRC 2020-02-23 13:25:13+01:00 waiting for data                           tid=25059 topic=Mplex
TRC 2020-02-23 13:25:13+01:00 connection closed                          tid=25059 closed=false
TRC 2020-02-23 13:25:13+01:00 read public key from message               tid=25059 pubKey="Secp256k1 key (0381C11F59A277AF11F80924DCEF507A1E563706899B476AEEF34C2831066E3CC5)" topic=identify
TRC 2020-02-23 13:25:13+01:00 read address bytes from message            tid=25059 address=/ip4/0.0.0.0/tcp/9000 topic=identify
TRC 2020-02-23 13:25:13+01:00 read protoVersion from message             tid=25059 protoVersion=ipfs/0.1.0 topic=identify
TRC 2020-02-23 13:25:13+01:00 read agentVersion from message             tid=25059 agentVersion=nim-libp2p/0.0.1 topic=identify
TRC 2020-02-23 13:25:13+01:00 Identify for remote peer succeded          tid=25059 topic=identify
TRC 2020-02-23 13:25:13+01:00 connection's peerInfo                      tid=25059 peerInfo="PeerID: 16Uiu2HAmMPVjzko7GiQjg5bd5SsagcAzCb8GMSTdKUnW6vM8eTfA\nPeer Addrs: /ip4/0.0.0.0/tcp/9000\n" topic=Switch
TRC 2020-02-23 13:25:13+01:00 adding muxer for peer                      tid=25059 peer=16Uiu2HAmMPVjzko7GiQjg5bd5SsagcAzCb8GMSTdKUnW6vM8eTfA topic=Switch
TRC 2020-02-23 13:25:13+01:00 identify: identified remote peer           tid=25059 peer=16Uiu2HAmMPVjzko7GiQjg5bd5SsagcAzCb8GMSTdKUnW6vM8eTfA topic=Switch
TRC 2020-02-23 13:25:13+01:00 read header varint                         tid=25059 topic=MplexCoder varint=12
TRC 2020-02-23 13:25:13+01:00 read data len varint                       tid=25059 topic=MplexCoder varint=0
TRC 2020-02-23 13:25:13+01:00 closing connection                         tid=25059
TRC 2020-02-23 13:25:13+01:00 sending encrypted bytes                    tid=25059 bytes=0C00 topic=secio
TRC 2020-02-23 13:25:13+01:00 Writing message                            tid=25059 message=00000022FC81E99EA84B5504B122454DE9B1503944F43A4F6E34A3BE0DEB3239BBC7D2B8662C topic=secio
TRC 2020-02-23 13:25:13+01:00 Dialing peer                               tid=25059 peer=16Uiu2HAmGF43WzRjXm9kLKHdDCYYQsjViwyvrPY9WpZ5tgnSkbbF topic=Switch
TRC 2020-02-23 13:25:13+01:00 Reusing existing connection                tid=25059 topic=Switch
TRC 2020-02-23 13:25:13+01:00 initiating handshake                       tid=25059 codec="\x13/multistream/1.0.0\n" topic=Multistream
TRC 2020-02-23 13:25:13+01:00 read message from connection               tid=25059 data=@[] id=1 msgType=CloseOut topic=Mplex
TRC 2020-02-23 13:25:13+01:00 picking remote channels                    tid=25059 initiator=false topic=Mplex
TRC 2020-02-23 13:25:13+01:00 closing channel                            tid=25059 id=1 initiator=false msgType=CloseOut topic=Mplex
TRC 2020-02-23 13:25:13+01:00 picking remote channels                    tid=25059 initiator=false topic=Mplex
TRC 2020-02-23 13:25:13+01:00 waiting for data                           tid=25059 topic=Mplex
FAT 2020-02-23 13:25:13+01:00 Fatal exception reached                    tid=25059 err="cannot dispatch; dispatcher is nil\nAsync traceback:\nException message: cannot dispatch; dispatcher is nil\nException type:"

Notice that the stream appears not to be closed here: connection closed tid=25059 closed=false. Giovanni thinks this might stem from a Nim runtime bug where methods are dispatched to the parent class (which would be LPStream, whose close() method would fail with an assertion error).

[Windows] Ambiguous call - getCurrentProcessId

Sanitize tests dealing with possible async exceptions

Since the topic of exception safety was raised: this is a schoolbook example of broken code. On any exception (and there are plenty of them above, even if there's no visible hint of it), this cancel and the close below will not be called. In tests this is annoying because you often get a cascade of failures as ports are not closed correctly, but more generally it's bad because of the proliferation of poor code examples that are later copy-pasted.

Raising exceptions in Nim is very easy and convenient; looking at the outcomes in our code, handling them correctly and consistently turns out to be very hard in practice.

Originally posted by @arnetheduck in https://github.com/notifications/beta/MDE4Ok5vdGlmaWNhdGlvblRocmVhZDcwNjE3NjY1Mjo3MDA4OTAw
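
For reference, a minimal sketch of the try/finally shape this implies for tests (chronos-based; the listener future and the failing check are illustrative stand-ins):

import chronos

proc testSomething() {.async.} =
  let listenerFut = sleepAsync(1.hours)   # stands in for a started listener/transport
  try:
    raise newException(ValueError, "some check fails here")  # any exception in the body...
  finally:
    listenerFut.cancel()                  # ...still releases the resource

when isMainModule:
  try:
    waitFor testSomething()
  except ValueError:
    echo "the exception propagated, but cleanup ran anyway"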

Split up testnative

I added 2 new tests for fragmentation of secio/noise with big packets, and on one of the CI providers (32-bit builds only, AFAIK) they caused the big monolithic testnative to fail even before the new tests were called.
I know @dryajov also had issues in the past with test ordering etc.

The reality is that our tests are OK but not perfect. We barely release all the resources or close what needs to be closed; there are lingering Futures and whatnot.

So my question is: why not just pay the price of slightly slower builds (it should not be too slow if we share nimcache, no?) and split the monolithic testnative into many smaller test programs?
Why was this not done from the start, by the way? Slow build times?

Related to #75

P.S. Until this situation is handled I'm putting on hold adding
https://github.com/sinkingsugar/nim-libp2p-rs-interop

Cancellation support.

Currently nim-libp2p does not have cancellation support, and this can be one more source of leaks for applications that are going to use it.

All API calls which perform open (dial, connect), read or write operations should handle cancellation and clean up allocated resources and/or reset the stream.

An example of such a leak can be seen here:
https://github.com/status-im/nim-beacon-chain/blob/master/beacon_chain/eth2_network.nim#L433-L442

Any other procedure that uses read/connect/dial with a timeout should be able to cancel the pending read/connect/dial operation.
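
A minimal sketch of the cancellation-aware cleanup being asked for, using chronos primitives (Resource and dialLike are illustrative stand-ins, not the nim-libp2p API):

import chronos

type Resource = ref object
  open: bool

proc dialLike(res: Resource): Future[void] {.async.} =
  res.open = true
  try:
    await sleepAsync(10.seconds)   # stands in for a slow connect/read
  except CancelledError as exc:
    res.open = false               # clean up what was allocated before re-raising
    raise exc

when isMainModule:
  let res = Resource()
  let fut = dialLike(res)
  proc run() {.async.} =
    await sleepAsync(10.milliseconds)
    fut.cancel()                   # the caller gives up on the pending dial
    try:
      await fut
    except CancelledError:
      discard
  waitFor run()
  echo "resource still open: ", res.open   # false: cleanup ran on cancellation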

using `finally` ends connections prematurely in `daemonapi`

Commit dde8c01 introduced a bug whereby, using finally, the connection/transport is closed prematurely. This is because finally always triggers, either on an exception within the try block or when leaving the try scope, so the connection/transport is closed before the caller gets a chance to use it.

It should be changed back to proper exception handling, probably something like:

try:
  ...
except Exception as exc:
  # do cleanup - stop transport, etc...
  # re-raise the exception
  raise exc

[bufferstream] readers would be canceled on stream close.

import chronos
import bufferstream

proc testReadCancellation(): Future[bool] {.async.} =
  var stream = newBufferStream()
  proc readMessage1(s: BufferStream): Future[seq[byte]] {.async.} =
    result = await s.read(10)
  proc readMessage2(s: BufferStream): Future[seq[byte]] {.async.} =
    var res = await readMessage1(s)
    res.add(@[0x00'u8, 0x00'u8])
  proc readMessage3(s: BufferStream): Future[seq[byte]] {.async.} =
    var res = await readMessage2(s)
    res.add(@[0x01'u8, 0x01'u8])

  var fut = stream.readMessage3()
  await stream.close()
  var c = await fut

when isMainModule:
  echo waitFor(testReadCancellation())

This code raises CancelledError, but it should not.

Gossipsub tests failed.

 -- CLIENT1 --
Control socket: /tmp/p2pd-4.sock
Peer ID: QmYqE12NbzcAXaAACdDy4dyQgdcjRh5MgaXJF18zHwBLeS
Peer Addrs:
19:20:04.144 DEBUG   addrutil: InterfaceAddresses: from manet: [/ip4/127.0.0.1 /ip6/::1 /ip6/fe80::1 /ip6/fe80::a1:e8fa:ef1d:ec0d /ip4/192.168.58.128 /ip6/fe80::2f86:9328:6572:4419] addr.go:121
19:20:04.144 DEBUG   addrutil: InterfaceAddresses: usable: [/ip4/127.0.0.1 /ip6/::1 /ip4/192.168.58.128] addr.go:133
19:20:04.144 DEBUG   addrutil: adding resolved addr: /ip4/0.0.0.0/tcp/50969 /ip4/127.0.0.1/tcp/50969 [/ip4/127.0.0.1/tcp/50969] addr.go:64
19:20:04.144 DEBUG   addrutil: adding resolved addr: /ip4/0.0.0.0/tcp/50969 /ip4/192.168.58.128/tcp/50969 [/ip4/127.0.0.1/tcp/50969 /ip4/192.168.58.128/tcp/50969] addr.go:64
19:20:04.144 DEBUG   addrutil: adding resolved addr: /ip6/::/tcp/50970 /ip6/::1/tcp/50970 [/ip6/::1/tcp/50970] addr.go:64
19:20:04.144 DEBUG   addrutil: ResolveUnspecifiedAddresses: [/ip4/0.0.0.0/tcp/50969 /ip6/::/tcp/50970 /p2p-circuit] [/ip4/127.0.0.1 /ip6/::1 /ip4/192.168.58.128] [/ip4/127.0.0.1/tcp/50969 /ip4/192.168.58.128/tcp/50969 /ip6/::1/tcp/50970 /p2p-circuit] addr.go:109
/ip4/127.0.0.1/tcp/50969
/ip4/192.168.58.128/tcp/50969
/ip6/::1/tcp/50970
/p2p-circuit
19:20:04.938 DEBUG       p2pd: incoming connection daemon.go:127
19:20:04.938 DEBUG       p2pd: request: 0 [IDENTIFY] conn.go:38
19:20:04.938 DEBUG   addrutil: InterfaceAddresses: from manet: [/ip4/127.0.0.1 /ip6/::1 /ip6/fe80::1 /ip6/fe80::a1:e8fa:ef1d:ec0d /ip4/192.168.58.128 /ip6/fe80::2f86:9328:6572:4419] addr.go:121
19:20:04.939 DEBUG   addrutil: InterfaceAddresses: usable: [/ip4/127.0.0.1 /ip6/::1 /ip4/192.168.58.128] addr.go:133
19:20:04.939 DEBUG   addrutil: adding resolved addr: /ip4/0.0.0.0/tcp/50969 /ip4/127.0.0.1/tcp/50969 [/ip4/127.0.0.1/tcp/50969] addr.go:64
19:20:04.939 DEBUG   addrutil: adding resolved addr: /ip4/0.0.0.0/tcp/50969 /ip4/192.168.58.128/tcp/50969 [/ip4/127.0.0.1/tcp/50969 /ip4/192.168.58.128/tcp/50969] addr.go:64
19:20:04.939 DEBUG   addrutil: adding resolved addr: /ip6/::/tcp/50970 /ip6/::1/tcp/50970 [/ip6/::1/tcp/50970] addr.go:64
19:20:04.939 DEBUG   addrutil: ResolveUnspecifiedAddresses: [/p2p-circuit /ip4/0.0.0.0/tcp/50969 /ip6/::/tcp/50970] [/ip4/127.0.0.1 /ip6/::1 /ip4/192.168.58.128] [/p2p-circuit /ip4/127.0.0.1/tcp/50969 /ip4/192.168.58.128/tcp/50969 /ip6/::1/tcp/50970] addr.go:109
19:20:04.941 DEBUG       p2pd: incoming connection daemon.go:127
19:20:04.942 DEBUG       p2pd: request: 1 [CONNECT] conn.go:38
19:20:04.942 DEBUG       p2pd: connecting to QmV3iNZmHgCgwj5NRixVwC4M6TyKz8rh4DEk7K58Q2Ku2R conn.go:196
19:20:04.942 DEBUG  basichost: host <peer.ID Qm*HwBLeS> dialing <peer.ID Qm*Q2Ku2R> basic_host.go:449
19:20:04.942 DEBUG     swarm2: [<peer.ID Qm*HwBLeS>] swarm dialing peer [<peer.ID Qm*Q2Ku2R>] swarm_dial.go:184
19:20:04.942 DEBUG   addrutil: InterfaceAddresses: from manet: [/ip4/127.0.0.1 /ip6/::1 /ip6/fe80::1 /ip6/fe80::a1:e8fa:ef1d:ec0d /ip4/192.168.58.128 /ip6/fe80::2f86:9328:6572:4419] addr.go:121
19:20:04.942 DEBUG   addrutil: InterfaceAddresses: usable: [/ip4/127.0.0.1 /ip6/::1 /ip4/192.168.58.128] addr.go:133
19:20:04.942 DEBUG   addrutil: adding resolved addr: /ip4/0.0.0.0/tcp/50969 /ip4/127.0.0.1/tcp/50969 [/ip4/127.0.0.1/tcp/50969] addr.go:64
19:20:04.942 DEBUG   addrutil: adding resolved addr: /ip4/0.0.0.0/tcp/50969 /ip4/192.168.58.128/tcp/50969 [/ip4/127.0.0.1/tcp/50969 /ip4/192.168.58.128/tcp/50969] addr.go:64
19:20:04.942 DEBUG   addrutil: adding resolved addr: /ip6/::/tcp/50970 /ip6/::1/tcp/50970 [/ip6/::1/tcp/50970] addr.go:64
19:20:04.942 DEBUG   addrutil: ResolveUnspecifiedAddresses: [/p2p-circuit /ip4/0.0.0.0/tcp/50969 /ip6/::/tcp/50970] [/ip4/127.0.0.1 /ip6/::1 /ip4/192.168.58.128] [/p2p-circuit /ip4/127.0.0.1/tcp/50969 /ip4/192.168.58.128/tcp/50969 /ip6/::1/tcp/50970] addr.go:109
19:20:04.942 DEBUG     swarm2: <peer.ID Qm*HwBLeS> swarm dialing <peer.ID Qm*Q2Ku2R> swarm_dial.go:347
19:20:04.943 DEBUG     swarm2: <peer.ID Qm*HwBLeS> swarm dialing <peer.ID Qm*Q2Ku2R> /ip6/::1/tcp/50974 swarm_dial.go:413
19:20:04.943 DEBUG     swarm2: <peer.ID Qm*HwBLeS> swarm dialing <peer.ID Qm*Q2Ku2R> /ip4/127.0.0.1/tcp/50973 swarm_dial.go:413
19:20:04.943 DEBUG     swarm2: <peer.ID Qm*HwBLeS> swarm dialing <peer.ID Qm*Q2Ku2R> /ip4/192.168.58.128/tcp/50973 swarm_dial.go:413
19:20:04.944 DEBUG     swarm2: <peer.ID Qm*HwBLeS> swarm dialing <peer.ID Qm*Q2Ku2R> /p2p-circuit swarm_dial.go:413
19:20:04.944  INFO     swarm2: got error on dial to /p2p-circuit: <peer.ID Qm*HwBLeS> --> <peer.ID Qm*Q2Ku2R> dial attempt failed: Failed to dial through 0 known relay hosts swarm_dial.go:382
19:20:04.947 DEBUG      secio: 1.1 Identify: <peer.ID Qm*HwBLeS> Remote Peer Identified as <peer.ID Qm*Q2Ku2R> protocol.go:214
19:20:04.949 DEBUG      secio: 1.1 Identify: <peer.ID Qm*HwBLeS> Remote Peer Identified as <peer.ID Qm*Q2Ku2R> protocol.go:214
19:20:04.947 DEBUG      secio: 1.1 Identify: <peer.ID Qm*HwBLeS> Remote Peer Identified as <peer.ID Qm*Q2Ku2R> protocol.go:214
19:20:04.982 DEBUG     swarm2: [<peer.ID Qm*HwBLeS>] opening stream to peer [<peer.ID Qm*Q2Ku2R>] swarm.go:280
19:20:04.982 DEBUG     swarm2: network for <peer.ID Qm*HwBLeS> finished dialing <peer.ID Qm*Q2Ku2R> swarm_dial.go:219
19:20:04.982 DEBUG net/identi: IdentifyConn called twice on: <swarm.Conn[TCP] /ip6/::1/tcp/50970 (QmYqE12NbzcAXaAACdDy4dyQgdcjRh5MgaXJF18zHwBLeS) <-> /ip6/::1/tcp/50974 (QmV3iNZmHgCgwj5NRixVwC4M6TyKz8rh4DEk7K58Q2Ku2R)> id.go:81
19:20:04.985 DEBUG  basichost: protocol negotiation took 1.646815ms basic_host.go:255
19:20:04.986 DEBUG   addrutil: InterfaceAddresses: from manet: [/ip4/127.0.0.1 /ip6/::1 /ip6/fe80::1 /ip6/fe80::a1:e8fa:ef1d:ec0d /ip4/192.168.58.128 /ip6/fe80::2f86:9328:6572:4419] addr.go:121
19:20:04.986 DEBUG   addrutil: InterfaceAddresses: usable: [/ip4/127.0.0.1 /ip6/::1 /ip4/192.168.58.128] addr.go:133
19:20:04.986 DEBUG   addrutil: adding resolved addr: /ip4/0.0.0.0/tcp/50969 /ip4/127.0.0.1/tcp/50969 [/ip4/127.0.0.1/tcp/50969] addr.go:64
19:20:04.986 DEBUG   addrutil: adding resolved addr: /ip4/0.0.0.0/tcp/50969 /ip4/192.168.58.128/tcp/50969 [/ip4/127.0.0.1/tcp/50969 /ip4/192.168.58.128/tcp/50969] addr.go:64
19:20:04.986 DEBUG   addrutil: adding resolved addr: /ip6/::/tcp/50970 /ip6/::1/tcp/50970 [/ip6/::1/tcp/50970] addr.go:64
19:20:04.986 DEBUG   addrutil: ResolveUnspecifiedAddresses: [/ip4/0.0.0.0/tcp/50969 /ip6/::/tcp/50970 /p2p-circuit] [/ip4/127.0.0.1 /ip6/::1 /ip4/192.168.58.128] [/ip4/127.0.0.1/tcp/50969 /ip4/192.168.58.128/tcp/50969 /ip6/::1/tcp/50970 /p2p-circuit] addr.go:109
19:20:04.986 DEBUG net/identi: <peer.ID Qm*HwBLeS> sent listen addrs to <peer.ID Qm*Q2Ku2R>: [/ip4/127.0.0.1/tcp/50969 /ip4/192.168.58.128/tcp/50969 /ip6/::1/tcp/50970 /p2p-circuit] id.go:185
19:20:04.986 DEBUG net/identi: /ipfs/id/1.0.0 sent message to <peer.ID Qm*Q2Ku2R> /ip6/::1/tcp/50974 id.go:125
19:20:04.987 DEBUG  basichost: protocol negotiation took 2.87066ms basic_host.go:255
19:20:04.987 DEBUG     pubsub: PEERUP: Add new peer <peer.ID Qm*Q2Ku2R> using /meshsub/1.0.0 gossipsub.go:80
19:20:04.988 DEBUG   addrutil: InterfaceAddresses: from manet: [/ip4/127.0.0.1 /ip6/::1 /ip6/fe80::1 /ip6/fe80::a1:e8fa:ef1d:ec0d /ip4/192.168.58.128 /ip6/fe80::2f86:9328:6572:4419] addr.go:121
19:20:04.988 DEBUG   addrutil: InterfaceAddresses: usable: [/ip4/127.0.0.1 /ip6/::1 /ip4/192.168.58.128] addr.go:133
19:20:04.988 DEBUG   addrutil: adding resolved addr: /ip4/0.0.0.0/tcp/50969 /ip4/127.0.0.1/tcp/50969 [/ip4/127.0.0.1/tcp/50969] addr.go:64
19:20:04.988 DEBUG   addrutil: adding resolved addr: /ip4/0.0.0.0/tcp/50969 /ip4/192.168.58.128/tcp/50969 [/ip4/127.0.0.1/tcp/50969 /ip4/192.168.58.128/tcp/50969] addr.go:64
19:20:04.988 DEBUG   addrutil: adding resolved addr: /ip6/::/tcp/50970 /ip6/::1/tcp/50970 [/ip6/::1/tcp/50970] addr.go:64
19:20:04.988 DEBUG   addrutil: ResolveUnspecifiedAddresses: [/p2p-circuit /ip4/0.0.0.0/tcp/50969 /ip6/::/tcp/50970] [/ip4/127.0.0.1 /ip6/::1 /ip4/192.168.58.128] [/p2p-circuit /ip4/127.0.0.1/tcp/50969 /ip4/192.168.58.128/tcp/50969 /ip6/::1/tcp/50970] addr.go:109
19:20:04.988 DEBUG net/identi: identify identifying observed multiaddr: /ip6/::1/tcp/50970 [/p2p-circuit /ip4/127.0.0.1/tcp/50969 /ip4/192.168.58.128/tcp/50969 /ip6/::1/tcp/50970] id.go:402
19:20:04.988 DEBUG net/identi: added own observed listen addr: /ip6/::1/tcp/50970 --> /ip6/::1/tcp/50970 id.go:409
19:20:04.988 DEBUG net/identi: <peer.ID Qm*HwBLeS> received listen addrs for <peer.ID Qm*Q2Ku2R>: [/ip6/::1/tcp/50974 /p2p-circuit /ip4/127.0.0.1/tcp/50973 /ip4/192.168.58.128/tcp/50973 /ip6/::1/tcp/50974] id.go:245
19:20:04.988 DEBUG net/identi: /ipfs/id/1.0.0 received message from <peer.ID Qm*Q2Ku2R> /ip6/::1/tcp/50974 id.go:140
19:20:04.988 DEBUG  basichost: host <peer.ID Qm*HwBLeS> finished dialing <peer.ID Qm*Q2Ku2R> basic_host.go:473
19:20:04.992 DEBUG       p2pd: incoming connection daemon.go:127
19:20:04.992 DEBUG       p2pd: request: 8 [PUBSUB] conn.go:38
19:20:04.992 DEBUG  basichost: protocol negotiation took 3.069962ms basic_host.go:255
19:20:04.992 DEBUG     pubsub: JOIN test-topic gossipsub.go:267
19:20:04.993 DEBUG     pubsub: PEERDOWN: Remove disconnected peer <peer.ID Qm*Q2Ku2R> gossipsub.go:85
19:20:04.994 DEBUG     pubsub: PEERDOWN: Remove disconnected peer <peer.ID Qm*Q2Ku2R> gossipsub.go:85
19:20:06.996 DEBUG       p2pd: incoming connection daemon.go:127
19:20:06.996 DEBUG       p2pd: request: 8 [PUBSUB] conn.go:38
19:20:06.998 DEBUG       p2pd: incoming connection daemon.go:127
19:20:06.998 DEBUG       p2pd: request: 8 [PUBSUB] conn.go:38

 -- CLIENT2 --
Control socket: /tmp/p2pd-5.sock
Peer ID: QmV3iNZmHgCgwj5NRixVwC4M6TyKz8rh4DEk7K58Q2Ku2R
Peer Addrs:
19:20:04.842 DEBUG   addrutil: InterfaceAddresses: from manet: [/ip4/127.0.0.1 /ip6/::1 /ip6/fe80::1 /ip6/fe80::a1:e8fa:ef1d:ec0d /ip4/192.168.58.128 /ip6/fe80::2f86:9328:6572:4419] addr.go:121
19:20:04.842 DEBUG   addrutil: InterfaceAddresses: usable: [/ip4/127.0.0.1 /ip6/::1 /ip4/192.168.58.128] addr.go:133
19:20:04.842 DEBUG   addrutil: adding resolved addr: /ip4/0.0.0.0/tcp/50973 /ip4/127.0.0.1/tcp/50973 [/ip4/127.0.0.1/tcp/50973] addr.go:64
19:20:04.842 DEBUG   addrutil: adding resolved addr: /ip4/0.0.0.0/tcp/50973 /ip4/192.168.58.128/tcp/50973 [/ip4/127.0.0.1/tcp/50973 /ip4/192.168.58.128/tcp/50973] addr.go:64
19:20:04.842 DEBUG   addrutil: adding resolved addr: /ip6/::/tcp/50974 /ip6/::1/tcp/50974 [/ip6/::1/tcp/50974] addr.go:64
19:20:04.842 DEBUG   addrutil: ResolveUnspecifiedAddresses: [/p2p-circuit /ip4/0.0.0.0/tcp/50973 /ip6/::/tcp/50974] [/ip4/127.0.0.1 /ip6/::1 /ip4/192.168.58.128] [/p2p-circuit /ip4/127.0.0.1/tcp/50973 /ip4/192.168.58.128/tcp/50973 /ip6/::1/tcp/50974] addr.go:109
/p2p-circuit
/ip4/127.0.0.1/tcp/50973
/ip4/192.168.58.128/tcp/50973
/ip6/::1/tcp/50974
19:20:04.940 DEBUG       p2pd: incoming connection daemon.go:127
19:20:04.940 DEBUG       p2pd: request: 0 [IDENTIFY] conn.go:38
19:20:04.940 DEBUG   addrutil: InterfaceAddresses: from manet: [/ip4/127.0.0.1 /ip6/::1 /ip6/fe80::1 /ip6/fe80::a1:e8fa:ef1d:ec0d /ip4/192.168.58.128 /ip6/fe80::2f86:9328:6572:4419] addr.go:121
19:20:04.940 DEBUG   addrutil: InterfaceAddresses: usable: [/ip4/127.0.0.1 /ip6/::1 /ip4/192.168.58.128] addr.go:133
19:20:04.940 DEBUG   addrutil: adding resolved addr: /ip6/::/tcp/50974 /ip6/::1/tcp/50974 [/ip6/::1/tcp/50974] addr.go:64
19:20:04.940 DEBUG   addrutil: adding resolved addr: /ip4/0.0.0.0/tcp/50973 /ip4/127.0.0.1/tcp/50973 [/ip4/127.0.0.1/tcp/50973] addr.go:64
19:20:04.940 DEBUG   addrutil: adding resolved addr: /ip4/0.0.0.0/tcp/50973 /ip4/192.168.58.128/tcp/50973 [/ip4/127.0.0.1/tcp/50973 /ip4/192.168.58.128/tcp/50973] addr.go:64
19:20:04.940 DEBUG   addrutil: ResolveUnspecifiedAddresses: [/ip6/::/tcp/50974 /p2p-circuit /ip4/0.0.0.0/tcp/50973] [/ip4/127.0.0.1 /ip6/::1 /ip4/192.168.58.128] [/ip6/::1/tcp/50974 /p2p-circuit /ip4/127.0.0.1/tcp/50973 /ip4/192.168.58.128/tcp/50973] addr.go:109
19:20:04.944 DEBUG stream-upg: listener <stream.Listener[TCP] /ip4/0.0.0.0/tcp/50973> got connection: /ip4/0.0.0.0/tcp/50973 <---> /ip4/127.0.0.1/tcp/50969 listener.go:91
19:20:04.944 DEBUG stream-upg: listener <stream.Listener[TCP] /ip4/0.0.0.0/tcp/50973> got connection: /ip4/0.0.0.0/tcp/50973 <---> /ip4/192.168.58.128/tcp/50969 listener.go:91
19:20:04.944 DEBUG stream-upg: listener <stream.Listener[TCP] /ip6/::/tcp/50974> got connection: /ip6/::/tcp/50974 <---> /ip6/::1/tcp/50970 listener.go:91
19:20:04.949 DEBUG      secio: 1.1 Identify: <peer.ID Qm*Q2Ku2R> Remote Peer Identified as <peer.ID Qm*HwBLeS> protocol.go:214
19:20:04.960 DEBUG      secio: 1.1 Identify: <peer.ID Qm*Q2Ku2R> Remote Peer Identified as <peer.ID Qm*HwBLeS> protocol.go:214
19:20:04.960 DEBUG      secio: 1.1 Identify: <peer.ID Qm*Q2Ku2R> Remote Peer Identified as <peer.ID Qm*HwBLeS> protocol.go:214
19:20:04.982 DEBUG stream-upg: listener <stream.Listener[TCP] /ip6/::/tcp/50974> accepted connection: <stream.Conn[TCP] /ip6/::/tcp/50974 (<peer.ID Qm*Q2Ku2R>) <-> /ip6/::1/tcp/50970 (<peer.ID Qm*HwBLeS>)> listener.go:114
19:20:04.982 DEBUG     swarm2: swarm listener accepted connection: <stream.Conn[TCP] /ip6/::/tcp/50974 (<peer.ID Qm*Q2Ku2R>) <-> /ip6/::1/tcp/50970 (<peer.ID Qm*HwBLeS>)> swarm_listen.go:80
19:20:04.982 DEBUG     swarm2: [<peer.ID Qm*Q2Ku2R>] opening stream to peer [<peer.ID Qm*HwBLeS>] swarm.go:280
19:20:04.985 DEBUG  basichost: protocol negotiation took 780.078µs basic_host.go:255
19:20:04.986 DEBUG  basichost: protocol negotiation took 1.57487ms basic_host.go:255
19:20:04.986 DEBUG   addrutil: InterfaceAddresses: from manet: [/ip4/127.0.0.1 /ip6/::1 /ip6/fe80::1 /ip6/fe80::a1:e8fa:ef1d:ec0d /ip4/192.168.58.128 /ip6/fe80::2f86:9328:6572:4419] addr.go:121
19:20:04.986 DEBUG   addrutil: InterfaceAddresses: usable: [/ip4/127.0.0.1 /ip6/::1 /ip4/192.168.58.128] addr.go:133
19:20:04.986 DEBUG   addrutil: adding resolved addr: /ip6/::/tcp/50974 /ip6/::1/tcp/50974 [/ip6/::1/tcp/50974] addr.go:64
19:20:04.986 DEBUG   addrutil: adding resolved addr: /ip4/0.0.0.0/tcp/50973 /ip4/127.0.0.1/tcp/50973 [/ip4/127.0.0.1/tcp/50973] addr.go:64
19:20:04.986 DEBUG   addrutil: adding resolved addr: /ip4/0.0.0.0/tcp/50973 /ip4/192.168.58.128/tcp/50973 [/ip4/127.0.0.1/tcp/50973 /ip4/192.168.58.128/tcp/50973] addr.go:64
19:20:04.986 DEBUG   addrutil: ResolveUnspecifiedAddresses: [/ip6/::/tcp/50974 /p2p-circuit /ip4/0.0.0.0/tcp/50973] [/ip4/127.0.0.1 /ip6/::1 /ip4/192.168.58.128] [/ip6/::1/tcp/50974 /p2p-circuit /ip4/127.0.0.1/tcp/50973 /ip4/192.168.58.128/tcp/50973] addr.go:109
19:20:04.986 DEBUG net/identi: <peer.ID Qm*Q2Ku2R> sent listen addrs to <peer.ID Qm*HwBLeS>: [/ip6/::1/tcp/50974 /p2p-circuit /ip4/127.0.0.1/tcp/50973 /ip4/192.168.58.128/tcp/50973] id.go:185
19:20:04.988 DEBUG net/identi: /ipfs/id/1.0.0 sent message to <peer.ID Qm*HwBLeS> /ip6/::1/tcp/50970 id.go:125
19:20:04.988 DEBUG stream-upg: listener <stream.Listener[TCP] /ip4/0.0.0.0/tcp/50973> accepted connection: <stream.Conn[TCP] /ip4/0.0.0.0/tcp/50973 (<peer.ID Qm*Q2Ku2R>) <-> /ip4/127.0.0.1/tcp/50969 (<peer.ID Qm*HwBLeS>)> listener.go:114
19:20:04.988 DEBUG     swarm2: swarm listener accepted connection: <stream.Conn[TCP] /ip4/0.0.0.0/tcp/50973 (<peer.ID Qm*Q2Ku2R>) <-> /ip4/127.0.0.1/tcp/50969 (<peer.ID Qm*HwBLeS>)> swarm_listen.go:80
19:20:04.988 DEBUG net/identi: error opening initial stream for /ipfs/id/1.0.0: session shutdown id.go:98
19:20:04.988 DEBUG     swarm2: [<peer.ID Qm*Q2Ku2R>] opening stream to peer [<peer.ID Qm*HwBLeS>] swarm.go:280
19:20:04.989 DEBUG   addrutil: InterfaceAddresses: from manet: [/ip4/127.0.0.1 /ip6/::1 /ip6/fe80::1 /ip6/fe80::a1:e8fa:ef1d:ec0d /ip4/192.168.58.128 /ip6/fe80::2f86:9328:6572:4419] addr.go:121
19:20:04.989 DEBUG   addrutil: InterfaceAddresses: usable: [/ip4/127.0.0.1 /ip6/::1 /ip4/192.168.58.128] addr.go:133
19:20:04.989 DEBUG   addrutil: adding resolved addr: /ip4/0.0.0.0/tcp/50973 /ip4/127.0.0.1/tcp/50973 [/ip4/127.0.0.1/tcp/50973] addr.go:64
19:20:04.989 DEBUG   addrutil: adding resolved addr: /ip4/0.0.0.0/tcp/50973 /ip4/192.168.58.128/tcp/50973 [/ip4/127.0.0.1/tcp/50973 /ip4/192.168.58.128/tcp/50973] addr.go:64
19:20:04.989 DEBUG   addrutil: adding resolved addr: /ip6/::/tcp/50974 /ip6/::1/tcp/50974 [/ip6/::1/tcp/50974] addr.go:64
19:20:04.990 DEBUG   addrutil: ResolveUnspecifiedAddresses: [/p2p-circuit /ip4/0.0.0.0/tcp/50973 /ip6/::/tcp/50974] [/ip4/127.0.0.1 /ip6/::1 /ip4/192.168.58.128] [/p2p-circuit /ip4/127.0.0.1/tcp/50973 /ip4/192.168.58.128/tcp/50973 /ip6/::1/tcp/50974] addr.go:109
19:20:04.990 DEBUG net/identi: identify identifying observed multiaddr: /ip6/::/tcp/50974 [/p2p-circuit /ip4/127.0.0.1/tcp/50973 /ip4/192.168.58.128/tcp/50973 /ip6/::1/tcp/50974] id.go:402
19:20:04.990 DEBUG net/identi: <peer.ID Qm*Q2Ku2R> received listen addrs for <peer.ID Qm*HwBLeS>: [/ip4/127.0.0.1/tcp/50969 /ip4/192.168.58.128/tcp/50969 /ip6/::1/tcp/50970 /p2p-circuit /ip6/::1/tcp/50970] id.go:245
19:20:04.990 DEBUG net/identi: /ipfs/id/1.0.0 received message from <peer.ID Qm*HwBLeS> /ip6/::1/tcp/50970 id.go:140
19:20:04.990 DEBUG     pubsub: PEERUP: Add new peer <peer.ID Qm*HwBLeS> using /meshsub/1.0.0 gossipsub.go:80
19:20:04.990 DEBUG       p2pd: incoming connection daemon.go:127
19:20:04.990 DEBUG       p2pd: request: 1 [CONNECT] conn.go:38
19:20:04.991 DEBUG       p2pd: connecting to QmYqE12NbzcAXaAACdDy4dyQgdcjRh5MgaXJF18zHwBLeS conn.go:196
19:20:04.992 DEBUG stream-upg: accept upgrade error: write tcp 192.168.58.128:50973->192.168.58.128:50969: write: broken pipe (/ip4/0.0.0.0/tcp/50973 <--> /ip4/192.168.58.128/tcp/50969) listener.go:107
19:20:04.992 ERROR     pubsub: already have connection to peer:  <peer.ID Qm*HwBLeS> pubsub.go:268
19:20:04.992 DEBUG     pubsub: PEERUP: Add new peer <peer.ID Qm*HwBLeS> using /meshsub/1.0.0 gossipsub.go:80
19:20:04.993 DEBUG     pubsub: PEERDOWN: Remove disconnected peer <peer.ID Qm*HwBLeS> gossipsub.go:85
19:20:04.993 DEBUG       p2pd: incoming connection daemon.go:127
19:20:04.994 DEBUG       p2pd: request: 8 [PUBSUB] conn.go:38
19:20:04.994 DEBUG     pubsub: JOIN test-topic gossipsub.go:267
19:20:06.997 DEBUG       p2pd: incoming connection daemon.go:127
19:20:06.997 DEBUG       p2pd: request: 8 [PUBSUB] conn.go:38
19:20:06.999 DEBUG       p2pd: incoming connection daemon.go:127
19:20:06.999 DEBUG       p2pd: request: 8 [PUBSUB] conn.go:38

    /Users/tester/Projects/nim-libp2p/tests/testdaemon.nim(145, 43): Check failed: waitFor(pubsubTest({PSGossipSub})) == true
    waitFor(pubsubTest({PSGossipSub})) was false
    true was true
  [FAILED] GossipSub test

Multibase: Implement encodings.

Implement encodings in order of priority:

  • base64
  • base64pad
  • base64url
  • base64urlpad
  • base16
  • base16upper
  • base32z
  • base10
  • base8
  • base2
  • base1
  • identity
  • base32hex
  • base32hexupper
  • base32hexpad
  • base32hexpadupper
  • base32
  • base32upper
  • base32pad
  • base32padupper
  • base58btc
  • base58flickr

`minprotobuf` broken for out-of-order fields

In protobuf, keys may appear in any order, and keys may be repeated even for scalar fields in which case the last seen value should be used.

The minprotobuf API cannot be used correctly because it looks for keys and then advances an internal offset, meaning that if code looks for keys in order 1, 2 while the data is in order 2, 1, field 2 will not be found.

As a consequence, all current usages of it can be considered broken as well.

https://developers.google.com/protocol-buffers/docs/encoding#order
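
For illustration, a self-contained sketch (not the minprotobuf API) of order-independent lookup: every call rescans the buffer from the start and the last occurrence of the field wins, so data encoded in order 2, 1 decodes the same as 1, 2. Only wire types 0 (varint) and 2 (length-delimited, skipped) are handled here:

proc readVarint(data: openArray[byte], offset: var int): uint64 =
  # decode a protobuf varint starting at `offset`, advancing it
  var shift = 0
  while offset < data.len:
    let b = data[offset]
    inc offset
    result = result or (uint64(b and 0x7F'u8) shl shift)
    if (b and 0x80'u8) == 0'u8:
      return
    shift += 7
  raise newException(ValueError, "truncated varint")

proc getVarintField(data: openArray[byte], field: uint64): uint64 =
  # rescan the whole buffer and return the *last* occurrence of `field`,
  # regardless of where it appears
  var offset = 0
  var found = false
  while offset < data.len:
    let key = readVarint(data, offset)
    let fieldNum = key shr 3
    let wireType = key and 0x7
    if wireType == 0:                      # varint
      let value = readVarint(data, offset)
      if fieldNum == field:
        result = value                     # last seen value wins
        found = true
    elif wireType == 2:                    # length-delimited: skip the payload
      let plen = int(readVarint(data, offset))
      offset += plen
    else:
      raise newException(ValueError, "unsupported wire type")
  if not found:
    raise newException(ValueError, "field not found")

when isMainModule:
  # field 2 = 2 encoded first, field 1 = 1 encoded second (out of order)
  let buf = [0x10'u8, 0x02, 0x08, 0x01]
  echo getVarintField(buf, 1)   # 1
  echo getVarintField(buf, 2)   # 2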

Code sanity

Since we are talking about "sanitizing" and "security" (for instance status-im/nimbus-eth1#164),
I'm collecting here a list of risky code.
I'm not making any changes yet, but keeping it under the spotlight and up for discussion.

Wild casting

This is an example of bad practice: while string, seq[byte], seq[uint8] and TaintedString are all equivalent at a low level, that is something that might change, and will likely change outside of our control.
In this specific case both seq and string are also basically references (not exactly, but similar behaviour for the compiler).
Moreover it is lossy, as there is no UTF-8 sanity check either.
https://github.com/status-im/nim-libp2p/blob/88a030d8fbd76023354f14ff10ba740786eb46a4/libp2p/muxers/mplex/mplex.nim#L88
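
A hedged sketch of the explicit alternative: copy byte-by-byte instead of casting, so the conversion keeps working even if the in-memory layouts of string and seq[byte] ever diverge (no UTF-8 validation is implied either way):

# copy-based conversions; explicit and layout-independent, unlike cast[]
proc toBytes(s: string): seq[byte] =
  result = newSeq[byte](s.len)
  for i, c in s:
    result[i] = byte(c)

proc toString(b: seq[byte]): string =
  result = newString(b.len)
  for i, v in b:
    result[i] = char(v)

when isMainModule:
  echo toBytes("abc")              # @[97, 98, 99]
  echo toString(@[97'u8, 98, 99])  # abc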

Wild gcsafe

#68
I see way too many refs and way too many unsafe gcsafe annotations.

new pattern

I'm 50/50 on this: I actually like the idea of newX expressing heap/ref allocations, but I often hear we want to change it into T.init (cc @arnetheduck).
I made a list of those newX calls in our code base (some are from system.nim):

131: newSeq
184: newException
2: newSkContext
43: newConnection
17: newDaemonApi
1: newStringTable
3: newStringOfCap
47: newFuture
1: newPool
7: newAsyncEvent
7: newString
2: newMultistreamHandshakeException
22: newMultistream
2: newInvalidVarintException
1: newInvalidVarintSizeException
2: newChannel
16: newLPStreamEOFError
3: newStreamInternal
4: newLPStreamLimitError
21: newMplex
13: newStream
4: newMuxer
5: newMuxerProvider
9: newSeqOfCap
9: newIdentify
4: newMessage
5: newTimedCache
6: newMCache
3: newAsyncLock
15: newPubSub
63: newBufferStream
12: newPubSubPeer
9: newNoise
2: newPlainText
2: newSecioConn
4: newSecio
7: newStandardSwitch
48: newTransport
5: newSwitch
2: newAlreadyPipedError
4: newNotWritableError
8: newLPStreamIncompleteError
2: newChronosStream
2: newAsyncStreamReader
2: newAsyncStreamWriter
6: newLPStreamReadError
9: newLPStreamIncorrectError
4: newLPStreamWriteError
6: newNoPubSubException
3: newTestSelectStream
2: newTestLsStream
2: newTestNaStream

Error: undeclared identifier: 'allFutures'

Running nimble install I see

      Info: Dependency on chronos@any version already satisfied
  Verifying dependencies for [email protected]

But nimble test results in

../libp2p/transports/transport.nim(38, 29) template/generic instantiation from here
../libp2p/transports/transport.nim(41, 9) Error: undeclared identifier: 'allFutures'

Workaround

This works:

nimble install https://github.com/status-im/nim-chronos.git
nimble test

Expected behaviour

Running nimble install or make dep gets me all the required dependencies.

Env

oskarth@localhost /home/oskarth/git/nim-libp2p> nim -v
Nim Compiler Version 0.19.6 [Linux: amd64]
oskarth@localhost /home/oskarth/git/nim-libp2p> nimble -v
nimble v0.9.0 compiled at 2019-07-01 03:16:05

mplex header read incorrectly

https://github.com/status-im/nim-libp2p/blob/b1a34f478efe6d1212cfccecaf4dd8bf89fc2b1f/libp2p/muxers/mplex/coder.nim#L52

Reading the header calls readMplexVarint, which raises when the varint is bigger than the max message size; that limit, however, is not applicable to the header itself.

https://github.com/status-im/nim-libp2p/blob/b1a34f478efe6d1212cfccecaf4dd8bf89fc2b1f/libp2p/muxers/mplex/coder.nim#L64

Converting to MessageType is not safe: 7 is not a valid value for the enum.
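
A hedged sketch of the range check this implies; the enum values follow the seven mplex message types that also appear in the trace logs above, but the helper itself is illustrative, not the actual coder.nim code:

type
  MessageType = enum
    # the seven mplex message types, 0 through 6
    New, MsgIn, MsgOut, CloseIn, CloseOut, ResetIn, ResetOut

proc toMessageType(raw: uint64): MessageType =
  # validate the raw varint before converting, so out-of-range values such as 7
  # raise instead of producing an invalid enum value
  if raw > uint64(MessageType.high.ord):
    raise newException(ValueError, "invalid mplex message type: " & $raw)
  MessageType(int(raw))

when isMainModule:
  echo toMessageType(4)       # CloseOut
  try:
    discard toMessageType(7)  # out of range for the enum
  except ValueError as exc:
    echo exc.msg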

Can't Load Daemon - Need Shared Lib?

I try to run the chat.nim example. It gives me: Could not find daemon executable!. So I compile daemonapi.nim, but I get the same error. So I add the daemonapi executable to PATH, but still the same error.

So finally I look in the code and it expects p2pd on line 28 of daemonapi.nim, but p2pd is not part of the repo. Is this module using IPFS on its own, or does IPFS have to already be running on the machine? I wanted to use it standalone in my application.

thanks

Add timeouts and message size limits to mplex

Currently mplex doesn't support any sort of timeout functionality, so streams would hang indefinitely if the remote dies unexpectedly or is too slow.

Message size limiting is also missing.
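
A minimal sketch of bounding a pending read with a timeout using chronos (chronos already provides wait/withTimeout; readWithTimeout and slowRead here are illustrative stand-ins rather than the mplex API):

import chronos

proc readWithTimeout(fut: Future[seq[byte]], timeout: Duration): Future[seq[byte]] {.async.} =
  # if the pending read does not finish in time, cancel it and raise,
  # so the caller is released instead of hanging forever
  if not await fut.withTimeout(timeout):
    fut.cancel()
    raise newException(AsyncTimeoutError, "read timed out")
  result = fut.read()

proc slowRead(): Future[seq[byte]] {.async.} =
  # stands in for a remote peer that never answers
  await sleepAsync(5.seconds)
  result = @[1'u8, 2, 3]

when isMainModule:
  try:
    discard waitFor readWithTimeout(slowRead(), 100.milliseconds)
  except AsyncTimeoutError:
    echo "reader released instead of hanging"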

TransportOSError in chat example

Repro:

  • nim c -r --threads:on examples/chat.nim
  • ./example/chat

Then, open another instance on the same machine. Result:

λ examples\chat.exe
chat.nim(130)            chat
asyncmacro2.nim(334)     main
asyncmacro2.nim(36)      main_continue
chat.nim(108)            mainIter
stream.nim(1299)         start
stream.nim(935)          resumeAccept
stream.nim(804)          acceptPipeLoop
common.nim(476)          raiseTransportOsError
[[reraised from:
chat.nim(130)            chat
asyncloop.nim(915)       waitFor
asyncfutures2.nim(432)   read
]]
Error: unhandled exception: (5) Access is denied.

Async traceback:
  chat.nim(130)        chat
  asyncmacro2.nim(334) main
  asyncmacro2.nim(36)  main_continue
  chat.nim(108)        mainIter
  stream.nim(1299)     start
  stream.nim(935)      resumeAccept
  stream.nim(804)      acceptPipeLoop
  common.nim(476)      raiseTransportOsError
Exception message: (5) Access is denied.

Exception type: [TransportOsError]

[WIP] Roadmap

Cryptography

  • NIST P-256/384/521 curves, required to perform DHE.
  • NIST P-256/384/521 ECDSA required for peer identification.
  • RSA required for peer identification
  • ED25519 required for peer identification
  • SECP256k1 required for peer identification
  • ASN.1 DER encoder/decoder for (ECDSA, RSA public keys/private keys/signatures)
  • Key interface
  • Curve25519 required for noise-libp2p
  • Poly1305 required for noise-libp2p
  • ChaCha20 required for noise-libp2p

https://github.com/libp2p/go-libp2p-crypto

Network Interfaces

Storage and utility

Protocol negotiation

Stream Multiplexer

Connections

Connection manager

Transports

Protocols

Name resolution

Metrics

Swarm

This is effectively the main loop of any libp2p node, which performs all the logic.

Connecting to the daemon from the native implementation using Secp256k1 fails in secio.

When connecting to the go daemon from the native implementation using the Secp256k1 curve, the daemon fails inside secio; other key types supported by libp2p, such as ECDSA or RSA, work fine.

The Secp256k1 implementation used in libp2p is https://github.com/status-im/nim-secp256k1, which is a wrapper of a fork https://github.com/status-im/secp256k1 of the core bitcoin secp256k1.

The specific failure that's thrown by the daemon is ERROR secio: 2.1 Verify: failed: %s malformed signature: no header magic protocol.go:314.

Here is a quick script to reproduce it:

import options
import chronos
import ../libp2p/[standard_setup,
                  daemon/daemonapi,
                  peerinfo,
                  crypto/crypto]

type
  NativePeerInfo = peerinfo.PeerInfo

proc main() {.async.} =
  var protos = @["/test-stream"]

  let nativeNode = newStandardSwitch(privKey = some(PrivateKey.random(PKScheme.Secp256k1)))
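  # the node above is keyed with Secp256k1; the dial to the go daemon below is
  # what fails during the secio handshake, while other key types work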
  let awaiters = await nativeNode.start()
  let daemonNode = await newDaemonApi()
  let daemonPeer = await daemonNode.identity()

  proc daemonHandler(api: DaemonAPI, stream: P2PStream) {.async.} =
    discard

  await daemonNode.addHandler(protos, daemonHandler)
  discard await nativeNode.dial(NativePeerInfo.init(daemonPeer.peer,
                                                    daemonPeer.addresses),
                                                    protos[0])
  await sleepAsync(1.seconds)

waitFor(main())

Question about usability and planning.

I am loving Nim so far and want to use it to experiment with p2p technology; for that reason I was really thrilled to see you've won a grant to actually rewrite this in Nim instead of using a wrapper. I want to inquire about the intentions for this repository and its current state: is now a good time for me to begin experimenting with it, or will you implement a tagged release system that signals when the library is ready for general use? I've had mixed luck with the provided examples.

Finally, will it support compilation to the JS backend? I am especially interested in the browser.

Less confusing init patterns

Just an idea to make the api more intuitive about resource management.

Init patterns

While trying to figure out why many tests were failing in the test-stream branch, I noticed that the resource creation and destruction API is counter-intuitive.
Most of the debugged issues were just lingering resources in the tests.

We are a networking library, so it's very common that releasing resources is spelled close(), and that is indeed the case here.
Yet most of those resources are created with a newXX pattern, while the most natural would be openXX, so that when we write code we can automatically think and write something like the following (I know this pattern will often only be useful in tests, but it still matters):

let mplexDial = openMplex(conn)
# e.g. in the case of a test
defer: await mplexDial.close()

Without the need to think about it and without the need for any linter (very broken in Nim anyway), because we already know that an open has a matching close.

Also, new is often associated with fire-and-forget ref objects.


new pattern research

(the same newX usage counts as listed under "Code sanity" above)

tests/testnative.nim fails with --threads:on

GC-safety issues which you can see by adding a "nim.cfg" or "config.nims" containing --threads:on in the "tests" subdir:

Hint: rpcmsg [Processing]
/mnt/sda3/storage/CODE/status/nim-beacon-chain/vendor/nim-libp2p/libp2p/crypto/ecnist.nim(885, 7) Warning: 'sign' is not GC-safe as it performs an indirect call here [GcUnsafe2]
/mnt/sda3/storage/CODE/status/nim-beacon-chain/vendor/nim-libp2p/libp2p/crypto/crypto.nim(430, 6) Warning: 'sign' is not GC-safe as it calls 'sign' [GcUnsafe2]
/mnt/sda3/storage/CODE/status/nim-beacon-chain/vendor/nim-libp2p/libp2p/protocols/pubsub/rpcmsg.nim(143, 6) Warning: 'sign' is not GC-safe as it calls 'sign' [GcUnsafe2]
/mnt/sda3/storage/CODE/status/nim-beacon-chain/vendor/nim-libp2p/libp2p/protocols/pubsub/rpcmsg.nim(171, 6) Error: 'makeMessage' is not GC-safe as it calls 'sign'

Multiaddress test fail

Error: unhandled exception: cannot open: C:\Users\Bruno\repos\nim-libp2p\libp2p\crypto\BearSSL\src\codec\ccopy.c [IOError]
stack trace: (most recent call last)
C:\Users\Bruno\repos\nim-libp2p\libp2p.nimble(19) testTask
C:\Users\Bruno\repos\Nim\lib\system\nimscript.nim(237) exec
C:\Users\Bruno\repos\Nim\lib\system\nimscript.nim(237, 7) Error: unhandled exception: FAILED: nim c -r tests/testmultiaddress

`Connection.readLP` exception unsafe

LPStreamIncompleteError or LPStreamReadError will cause
https://github.com/status-im/nim-libp2p/blob/d42833947a4baddf21da8ac3105e2d5956a6daac/libp2p/connection.nim#L121 to return a seq of length 10 initialized here: https://github.com/status-im/nim-libp2p/blob/d42833947a4baddf21da8ac3105e2d5956a6daac/libp2p/connection.nim#L118

https://github.com/status-im/nim-libp2p/blob/d42833947a4baddf21da8ac3105e2d5956a6daac/libp2p/connection.nim#L130 will return a partially initialized seq instead

If readExactly leaks some other exception (as it will, now or in the future when that code changes without this code being updated), it will pass through, leaving s in a broken and inconsistent state, having read an unknown number of bytes. Given that exceptions are rarely handled correctly, this will cause further issues when s is reused incorrectly, swallowing the root cause that was hidden here.
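
A self-contained sketch of the exception-safe shape being suggested: data is only returned once it has been completely read, and any failure propagates after the stream is put into a known state, instead of yielding a partially initialized seq. FakeStream and its procs are stand-ins, not the nim-libp2p API:

import chronos

type FakeStream = ref object
  data: seq[byte]
  pos: int
  closed: bool

proc readExactly(s: FakeStream, n: int): Future[seq[byte]] {.async.} =
  # either returns exactly n bytes or raises; never a partial buffer
  if s.pos + n > s.data.len:
    raise newException(IOError, "stream ended before " & $n & " bytes were read")
  result = s.data[s.pos ..< s.pos + n]
  s.pos += n

proc readLpSafe(s: FakeStream, size: int): Future[seq[byte]] {.async.} =
  try:
    result = await s.readExactly(size)   # fully read, or an exception propagates
  except CatchableError as exc:
    s.closed = true                      # leave the stream in a known state
    raise exc

when isMainModule:
  let s = FakeStream(data: @[1'u8, 2, 3])
  try:
    discard waitFor s.readLpSafe(10)     # asks for more than is available
  except IOError:
    echo "error propagated, stream marked closed: ", s.closed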

[bufferstream] pushTo() could get stuck/freeze on close.

import chronos
import bufferstream

proc createMessage(tmplate: string, size: int): seq[byte] =
  result = newSeq[byte](size)
  for i in 0 ..< len(result):
    result[i] = byte(tmplate[i mod len(tmplate)])

proc testWriteNeverFinish(): Future[bool] {.async.} =
  var stream = newBufferStream()
  var message = createMessage("MESSAGE", DefaultBufferSize * 2 + 1)
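  # the message is larger than the buffer, so pushTo blocks waiting for a reader;
  # closing the stream should complete the pending pushTo instead of leaving it stuck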
  var fut = stream.pushTo(message)
  await stream.close()
  try:
    await wait(fut, 100.milliseconds)
    result = true
  except AsyncTimeoutError:
    result = false

when isMainModule:
  echo waitFor(testWriteNeverFinish())

Expecting true, but currently it's false.

Refactor switch to be a state machine

Use a finite state machine to manage the different steps of connection establishment and upgrade. This makes everything more robust and less prone to ordering attacks, i.e. muxing can happen if and only if the channel has been secured (i.e. if a secure manager has been previously provided).
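
A minimal sketch of the idea (names are illustrative, not the Switch API): model the upgrade as an explicit state machine so that muxing can only follow securing:

type
  ConnState = enum
    Connected, Secured, Muxed, Upgraded

proc transition(current, next: ConnState): ConnState =
  # only the strict Connected -> Secured -> Muxed -> Upgraded order is allowed,
  # so muxing cannot happen before the channel has been secured
  if ord(next) != ord(current) + 1:
    raise newException(ValueError, "illegal transition: " & $current & " -> " & $next)
  next

when isMainModule:
  var state = Connected
  state = state.transition(Secured)      # ok: secure first
  state = state.transition(Muxed)        # ok: mux only after securing
  doAssert state == Muxed
  # state = Connected.transition(Muxed)  # would raise: muxing before securing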

Make peerid spec compliant

The current implementation of peer id deviates from the spec in that it defaults to the CIDv0 representation instead of CIDv1, as mandated by the spec in https://github.com/libp2p/specs/blob/master/peer-ids/peer-ids.md#string-representation. In particular this excerpt states:

Implementations parsing IDs from text MUST support both base58 CIDv0 and CIDv1 in base32, 
and they MUST generate base32-encoded CIDv1 by default. Generating CIDv0 is allowed as an 
opt-in (behind a flag).

Which is reversed in the current implementation.

I'll speculate that the reason is that most early implementations of libp2p defaulted to the CIDv0 format.

In addition, it seems like CIDv1 handling is completely missing from the current implementation of peer id, though most (all?) required primitives seem to be available.

Roadmap - first beta

Commit e623e70 marks the first functional and interoperable version of the stack. This effectively graduates it to a solid alpha that can be consumed by third party applications. Additionally, with this in place, we can start chipping away at making the stack more robust, cleaner and improving the overall code structure and developer experience.

This issue is a rough outline of what's next, some priorities and major areas of focus to further move the stack closer to v1 are:

  • Cleanup and Refactor
    • The current codebase requires a thorough cleanup
    • Some suggested refactoring was the use of Result in critical codepaths and the use of the .init() pattern for object creation
    • Overall code structure improvements
    • Move away from manual protobuf parsing
  • Performance
    • Profile, benchmark and improve performance
  • Further testing and validation of spec adherence
    • More overall testing coverage, alongside more thorough interop, hopefully using daemons from other implementations (Rust, C++, etc.)

The current implementation was written based on the minimal Eth2 networking requirements outlined in ethresearch/p2p#4, hence some additional (possibly post v1) improvements would be:

  • Add missing functionality
    • NAT traversal
    • Discovery (DHTs)
    • Connection Management
    • Circuit Relaying
    • Yamux

Peer/PeerID issues.

In the recently merged PR #32 there were changes made to peer.nim which I think need to be reconsidered.

  1. https://github.com/status-im/nim-libp2p/blob/master/libp2p/peer.nim#L21-L25
type
  PeerID* = object
    data*: seq[byte]
    privateKey*: Option[PrivateKey]
    publicKey: Option[PublicKey]

The PeerID type now has optional privateKey and publicKey fields.

privateKey

The privateKey field is currently used, and will be used, only to represent and store the local peer's private key. All other peers stored in memory or in a database will have this field empty, because we will not know the private keys of remote nodes. The size of an empty Option[PrivateKey] is 80 bytes.

publicKey

Now about the publicKey field. A PeerID itself is a base58-encoded string of multihash(sha256, "remote peer's public key"); this is done for all public keys whose size exceeds 42 bytes (e.g. RSA, ECNIST, which are the most used keys in the current network). Public keys from secp256k1 or ed25519, which are less than 42 bytes, are stored unhashed as multihash(identity, "remote peer's public key").

So in 50% of cases the public key cannot be recovered from the PeerID. To check whether a remote PeerID has a public key, and to retrieve that public key, two procedures were already present:

IMHO these two procedures are enough to retrieve the public key from a PeerID when it is needed.

Storing an empty Option[PublicKey] also requires 80 bytes; initialized with RSA/ECNIST public keys it takes much more.

So the total minimal overhead on the size of a PeerID is 160 bytes.

Also, from my point of view, the PeerID logically must be kept separate from the peer's public key as well as from the local peer's private key, because the current situation looks like we are trying to keep the original data (i.e. the private and public keys) together with its hashed value.

cc @arnetheduck @zah @yglukhov @dryajov @kdeme
