
discovery's Issues

Verification not passed errors

Getting a bunch of errors like the one below on MainNet:

Verification not passed for message [WhoAreYouPacketImpl{header=Header{header={protocolId=discv5, version=0x0001, flag=WHOAREYOU, nonce=0xb26c437c64c748562af2bd9a, authDataSize=24}, authData=WhoAreYouAuthData{idNonce=0x596d39ebb3e01f0835e97aee8cf16ae0, enrSeq=902}}, cipherMsgSize=0}] from node NodeRecord{seq=55832, publicKey=0x0238e67d0988a0ef5e6e878aee8c01d3c2150131f39ec2b908a8954419004674a4, udpAddress=Optional[/118.137.82.183:1235], tcpAddress=Optional[/118.137.82.183:9000], asBase64=-NG4QGpf-fGoYMKgPtiUeIW4r200wXanc_kCdI56d-yjye9ca2u36-w7CXTf-jFuecj-uXUnyht3vpe6BscqFkX0CuaC2hiHYXR0bmV0c4gAAAAAAAAAAIRldGgykLUwPyoAAAAA__________-CaWSCdjSCaXCEdolSt4NpcDaQAAAAAAAAAAAAAP__dolSt4lzZWNwMjU2azGhAjjmfQmIoO9eboeK7owB08IVATHznsK5CKiVRBkARnSkg3RjcIIjKIN1ZHCCBNOEdWRwNoIjKA, nodeId=0x12288b9761ae1339677660c71ff8b1f98bf351b33ab8d39f4a64c209b37c519e} in status AUTHENTICATED

Likely this message should just be logged at debug level if there really is something wrong with the incoming message.

DiscoveryNetworkTest intermittently fails

Description

The DiscoveryNetworkTest.test() method fails in ~25% of runs on macOS with the following exception:

org.opentest4j.AssertionFailedError: expected: <false> but was: <true>
    at org.junit.jupiter.api.AssertionUtils.fail(AssertionUtils.java:55)
    at org.junit.jupiter.api.AssertFalse.assertFalse(AssertFalse.java:40)
    at org.junit.jupiter.api.AssertFalse.assertFalse(AssertFalse.java:35)
    at org.junit.jupiter.api.Assertions.assertFalse(Assertions.java:210)
    at org.ethereum.beacon.discovery.DiscoveryNetworkTest.test(DiscoveryNetworkTest.java:151)

Frequency: 25%

Versions

  • Software version: 5bb7c5a
  • Java version: java 11.0.8 2020-07-14 LTS
    Java(TM) SE Runtime Environment 18.9 (build 11.0.8+10-LTS)
    Java HotSpot(TM) 64-Bit Server VM 18.9 (build 11.0.8+10-LTS, mixed mode)
  • OS Name & Version: macOS Catalina 10.15.5

`NPE` during handshake

[  101390] [  1501.844034] [service_teku-besu-0--teku]                      [I] 2023-08-25 01:35:24.257 DEBUG - Failed to read message [HandshakeMessagePacketImpl{header=Header{header={protocolId=discv5, version=0x0001, flag=HANDSHAKE, nonce=0xcd9d74c536ab3c6a75c26773, authDataSize=131}, authData=HandshakeAuthData{srcNodeId=0x91cee533c2d769ed9d975db8818b429c165caa62b4a0e1138d4ad5e87c2e0675, idSignature=0x6df7fa17d06442612f4440e0de394bdb96c98da79ac9b978debfdc1a7a46da8d4a58ccb789dc51b7ff04236b1b6f0e41f1c5749b4deccffe8af9ec1b8b16cefd, ephemeralPubKey=0x03dc1529c9919ab052d0ea5083bb4c746d537f59b708fcfc16ad022d0000b6b267, enrBytes=0x}}, cipherMsgSize=35}] from node Optional[NodeRecord{seq=3, publicKey=0x0351e208b8ac6f80b4571514262dcf5267e4bfafcfa6aacc04f50feb3c8909c93e, udpAddress=Optional[/10.0.20.100:4000], tcpAddress=Optional[/10.0.20.100:4000], asBase64=-Ly4QIXaXc24eRykr61igZRUBcpd-j6APdwf1XgnKKBYwSIJGEr8JF0pP5oJQiI6tlSJWnsuRcSIonkftH8h7Z--jUMDh2F0dG5ldHOI__________-EZXRoMpBpQ4ceBAAGZv__________gmlkgnY0gmlwhAoAFGSJc2VjcDI1NmsxoQNR4gi4rG-AtFcVFCYtz1Jn5L-vz6aqzAT1D-s8iQnJPohzeW5jbmV0cw-DdGNwgg-gg3VkcIIPoA, nodeId=0x91cee533c2d769ed9d975db8818b429c165caa62b4a0e1138d4ad5e87c2e0675, customFields={tcp=4000, udp=4000, attnets=0xffffffffffffffff, syncnets=0x0f, eth2=0x6943871e04000666ffffffffffffffff, ip=0x0a001464, id=V4, secp256k1=0x0351e208b8ac6f80b4571514262dcf5267e4bfafcfa6aacc04f50feb3c8909c93e}}] in status INITIAL
[  101390] [  1501.844034] [service_teku-besu-0--teku]                      [I] java.lang.NullPointerException: Cannot invoke "org.ethereum.beacon.discovery.pipeline.info.RequestInfo.getRequest()" because "requestInfo" is null
[  101390] [  1501.844034] [service_teku-besu-0--teku]                      [I] at org.ethereum.beacon.discovery.schema.NodeSession.lambda$cancelAllRequests$8(NodeSession.java:214) ~[discovery-22.12.0.jar:22.12.0]
[  101390] [  1501.844034] [service_teku-besu-0--teku]                      [I] at java.lang.Iterable.forEach(Iterable.java:75) ~[?:?]
[  101390] [  1501.844034] [service_teku-besu-0--teku]                      [I] at org.ethereum.beacon.discovery.schema.NodeSession.cancelAllRequests(NodeSession.java:210) ~[discovery-22.12.0.jar:22.12.0]
[  101390] [  1501.844034] [service_teku-besu-0--teku]                      [I] at org.ethereum.beacon.discovery.pipeline.handler.HandshakeMessagePacketHandler.markHandshakeAsFailed(HandshakeMessagePacketHandler.java:159) ~[discovery-22.12.0.jar:22.12.0]
[  101390] [  1501.844034] [service_teku-besu-0--teku]                      [I] at org.ethereum.beacon.discovery.pipeline.handler.HandshakeMessagePacketHandler.handle(HandshakeMessagePacketHandler.java:134) ~[discovery-22.12.0.jar:22.12.0]
[  101390] [  1501.844034] [service_teku-besu-0--teku]                      [I] at reactor.core.publisher.FluxPeekFuseable$PeekFuseableConditionalSubscriber.onNext(FluxPeekFuseable.java:489) ~[reactor-core-3.5.8.jar:3.5.8]
[  101390] [  1501.844034] [service_teku-besu-0--teku]                      [I] at reactor.core.publisher.FluxPeekFuseable$PeekFuseableConditionalSubscriber.onNext(FluxPeekFuseable.java:503) ~[reactor-core-3.5.8.jar:3.5.8]
[  101390] [  1501.844034] [service_teku-besu-0--teku]                      [I] at reactor.core.publisher.FluxPeekFuseable$PeekFuseableConditionalSubscriber.onNext(FluxPeekFuseable.java:503) ~[reactor-core-3.5.8.jar:3.5.8]
[  101390] [  1501.844034] [service_teku-besu-0--teku]                      [I] at reactor.core.publisher.FluxPeekFuseable$PeekFuseableConditionalSubscriber.onNext(FluxPeekFuseable.java:503) ~[reactor-core-3.5.8.jar:3.5.8]
[  101390] [  1501.844034] [service_teku-besu-0--teku]                      [I] at reactor.core.publisher.FluxPeekFuseable$PeekFuseableConditionalSubscriber.onNext(FluxPeekFuseable.java:503) ~[reactor-core-3.5.8.jar:3.5.8]
[  101390] [  1501.844034] [service_teku-besu-0--teku]                      [I] at reactor.core.publisher.FluxPeekFuseable$PeekFuseableConditionalSubscriber.onNext(FluxPeekFuseable.java:503) ~[reactor-core-3.5.8.jar:3.5.8]
[  101390] [  1501.844034] [service_teku-besu-0--teku]                      [I] at reactor.core.publisher.FluxPeekFuseable$PeekFuseableConditionalSubscriber.onNext(FluxPeekFuseable.java:503) ~[reactor-core-3.5.8.jar:3.5.8]
[  101390] [  1501.844034] [service_teku-besu-0--teku]                      [I] at reactor.core.publisher.FluxPeekFuseable$PeekFuseableConditionalSubscriber.onNext(FluxPeekFuseable.java:503) ~[reactor-core-3.5.8.jar:3.5.8]
[  101390] [  1501.844034] [service_teku-besu-0--teku]                      [I] at reactor.core.publisher.FluxReplay$SizeBoundReplayBuffer.replayNormal(FluxReplay.java:877) ~[reactor-core-3.5.8.jar:3.5.8]
[  101390] [  1501.844034] [service_teku-besu-0--teku]                      [I] at reactor.core.publisher.FluxReplay$SizeBoundReplayBuffer.replay(FluxReplay.java:965) ~[reactor-core-3.5.8.jar:3.5.8]
[  101390] [  1501.844034] [service_teku-besu-0--teku]                      [I] at reactor.core.publisher.ReplayProcessor.tryEmitNext(ReplayProcessor.java:508) ~[reactor-core-3.5.8.jar:3.5.8]
[  101390] [  1501.844034] [service_teku-besu-0--teku]                      [I] at reactor.core.publisher.InternalManySink.emitNext(InternalManySink.java:27) ~[reactor-core-3.5.8.jar:3.5.8]
[  101390] [  1501.844034] [service_teku-besu-0--teku]                      [I] at reactor.core.publisher.ReplayProcessor.onNext(ReplayProcessor.java:495) ~[reactor-core-3.5.8.jar:3.5.8]
[  101390] [  1501.844034] [service_teku-besu-0--teku]                      [I] at reactor.core.publisher.FluxCreate$IgnoreSink.next(FluxCreate.java:639) ~[reactor-core-3.5.8.jar:3.5.8]
[  101390] [  1501.844034] [service_teku-besu-0--teku]                      [I] at reactor.core.publisher.FluxCreate$SerializedFluxSink.next(FluxCreate.java:161) ~[reactor-core-3.5.8.jar:3.5.8]
[  101390] [  1501.844034] [service_teku-besu-0--teku]                      [I] at org.ethereum.beacon.discovery.pipeline.PipelineImpl.push(PipelineImpl.java:52) ~[discovery-22.12.0.jar:22.12.0]
[  101390] [  1501.844034] [service_teku-besu-0--teku]                      [I] at reactor.core.publisher.FluxPeekFuseable$PeekFuseableConditionalSubscriber.onNext(FluxPeekFuseable.java:489) [reactor-core-3.5.8.jar:3.5.8]
[  101390] [  1501.844034] [service_teku-besu-0--teku]                      [I] at reactor.core.publisher.FluxReplay$SizeBoundReplayBuffer.replayNormal(FluxReplay.java:877) [reactor-core-3.5.8.jar:3.5.8]
[  101390] [  1501.844040] [service_teku-besu-0--teku]                      [I] at reactor.core.publisher.FluxReplay$SizeBoundReplayBuffer.replay(FluxReplay.java:965) [reactor-core-3.5.8.jar:3.5.8]
[  101390] [  1501.844040] [service_teku-besu-0--teku]                      [I] at reactor.core.publisher.ReplayProcessor.tryEmitNext(ReplayProcessor.java:508) [reactor-core-3.5.8.jar:3.5.8]
[  101390] [  1501.844040] [service_teku-besu-0--teku]                      [I] at reactor.core.publisher.InternalManySink.emitNext(InternalManySink.java:27) [reactor-core-3.5.8.jar:3.5.8]
[  101390] [  1501.844040] [service_teku-besu-0--teku]                      [I] at reactor.core.publisher.ReplayProcessor.onNext(ReplayProcessor.java:495) [reactor-core-3.5.8.jar:3.5.8]
[  101390] [  1501.844040] [service_teku-besu-0--teku]                      [I] at reactor.core.publisher.FluxCreate$IgnoreSink.next(FluxCreate.java:639) [reactor-core-3.5.8.jar:3.5.8]
[  101390] [  1501.844040] [service_teku-besu-0--teku]                      [I] at reactor.core.publisher.FluxCreate$SerializedFluxSink.next(FluxCreate.java:161) [reactor-core-3.5.8.jar:3.5.8]
[  101390] [  1501.844040] [service_teku-besu-0--teku]                      [I] at org.ethereum.beacon.discovery.network.IncomingMessageSink.channelRead0(IncomingMessageSink.java:31) [discovery-22.12.0.jar:22.12.0]
[  101390] [  1501.844040] [service_teku-besu-0--teku]                      [I] at org.ethereum.beacon.discovery.network.IncomingMessageSink.channelRead0(IncomingMessageSink.java:20) [discovery-22.12.0.jar:22.12.0]
[  101390] [  1501.844040] [service_teku-besu-0--teku]                      [I] at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:99) [netty-transport-4.1.96.Final.jar:4.1.96.Final]
[  101390] [  1501.844040] [service_teku-besu-0--teku]                      [I] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) [netty-transport-4.1.96.Final.jar:4.1.96.Final]
[  101390] [  1501.844040] [service_teku-besu-0--teku]                      [I] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) [netty-transport-4.1.96.Final.jar:4.1.96.Final]
[  101390] [  1501.844040] [service_teku-besu-0--teku]                      [I] at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) [netty-transport-4.1.96.Final.jar:4.1.96.Final]
[  101390] [  1501.844040] [service_teku-besu-0--teku]                      [I] at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-codec-4.1.96.Final.jar:4.1.96.Final]
[  101390] [  1501.844040] [service_teku-besu-0--teku]                      [I] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) [netty-transport-4.1.96.Final.jar:4.1.96.Final]
[  101390] [  1501.844040] [service_teku-besu-0--teku]                      [I] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) [netty-transport-4.1.96.Final.jar:4.1.96.Final]
[  101390] [  1501.844040] [service_teku-besu-0--teku]                      [I] at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) [netty-transport-4.1.96.Final.jar:4.1.96.Final]
[  101390] [  1501.844040] [service_teku-besu-0--teku]                      [I] at io.netty.handler.logging.LoggingHandler.channelRead(LoggingHandler.java:280) [netty-handler-4.1.96.Final.jar:4.1.96.Final]
[  101390] [  1501.844040] [service_teku-besu-0--teku]                      [I] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) [netty-transport-4.1.96.Final.jar:4.1.96.Final]
[  101390] [  1501.844040] [service_teku-besu-0--teku]                      [I] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) [netty-transport-4.1.96.Final.jar:4.1.96.Final]
[  101390] [  1501.844040] [service_teku-besu-0--teku]                      [I] at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) [netty-transport-4.1.96.Final.jar:4.1.96.Final]
[  101390] [  1501.844040] [service_teku-besu-0--teku]                      [I] at io.netty.handler.traffic.AbstractTrafficShapingHandler.channelRead(AbstractTrafficShapingHandler.java:506) [netty-handler-4.1.96.Final.jar:4.1.96.Final]
[  101390] [  1501.844040] [service_teku-besu-0--teku]                      [I] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) [netty-transport-4.1.96.Final.jar:4.1.96.Final]
[  101390] [  1501.844040] [service_teku-besu-0--teku]                      [I] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) [netty-transport-4.1.96.Final.jar:4.1.96.Final]
[  101390] [  1501.844040] [service_teku-besu-0--teku]                      [I] at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) [netty-transport-4.1.96.Final.jar:4.1.96.Final]
[  101390] [  1501.844040] [service_teku-besu-0--teku]                      [I] at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) [netty-transport-4.1.96.Final.jar:4.1.96.Final]
[  101390] [  1501.844040] [service_teku-besu-0--teku]                      [I] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) [netty-transport-4.1.96.Final.jar:4.1.96.Final]
[  101390] [  1501.844045] [service_teku-besu-0--teku]                      [I] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) [netty-transport-4.1.96.Final.jar:4.1.96.Final]
[  101390] [  1501.844045] [service_teku-besu-0--teku]                      [I] at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) [netty-transport-4.1.96.Final.jar:4.1.96.Final]
[  101390] [  1501.844045] [service_teku-besu-0--teku]                      [I] at io.netty.channel.nio.AbstractNioMessageChannel$NioMessageUnsafe.read(AbstractNioMessageChannel.java:97) [netty-transport-4.1.96.Final.jar:4.1.96.Final]
[  101390] [  1501.844517] [service_teku-besu-0--teku]                      [I] at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) [netty-transport-4.1.96.Final.jar:4.1.96.Final]
[  101390] [  1501.844517] [service_teku-besu-0--teku]                      [I] at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) [netty-transport-4.1.96.Final.jar:4.1.96.Final]
[  101390] [  1501.844517] [service_teku-besu-0--teku]                      [I] at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) [netty-transport-4.1.96.Final.jar:4.1.96.Final]
[  101390] [  1501.844517] [service_teku-besu-0--teku]                      [I] at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) [netty-transport-4.1.96.Final.jar:4.1.96.Final]
[  101390] [  1501.844517] [service_teku-besu-0--teku]                      [I] at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) [netty-common-4.1.96.Final.jar:4.1.96.Final]
[  101390] [  1501.844517] [service_teku-besu-0--teku]                      [I] at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.96.Final.jar:4.1.96.Final]
[  101390] [  1501.844517] [service_teku-besu-0--teku]                      [I] at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [netty-common-4.1.96.Final.jar:4.1.96.Final]
[  101390] [  1501.844517] [service_teku-besu-0--teku]                      [I] at java.lang.Thread.run(Thread.java:833) [?:?]
[  101390] [  1501.865032] [service_teku-besu-0--teku]                      [I] 2023-08-25 01:35:24.299 DEBUG - Cancelling all requests in session NodeSession{0x91cee533c2d769ed9d975db8818b429c165caa62b4a0e1138d4ad5e87c2e0675 (INITIAL)}
[  101390] [  1501.865053] [service_teku-besu-0--teku]                      [I] 2023-08-25 01:35:24.299 DEBUG - Acct schedule not ok: 2645 > 2*1000 from ChannelTC901969167

full trace here:
https://ethereum.antithesis.com/report/eL6y6R864ppt1SUzoKzXGu5pubdUU9t1/1yGq_5zY8JIxqoRE8q4syRZv8CHoyAQY1Y-_X4dvBS8.html?auth=v2.public.eyJzY29wZSI6eyJSZXBvcnRTY29wZVYxIjp7ImFzc2V0IjoiMXlHcV81elk4Skl4cW9SRThxNHN5Ulp2OENIb3lBUVkxWS1fWDRkdkJTOC5odG1sIiwicmVwb3J0X2lkIjoiZUw2eTZSODY0cHB0MVNVem9LelhHdTVwdWJkVVU5dDEifX0sIm5iZiI6IjIwMjMtMDgtMzBUMDc6NTQ6NTRaIn0bWJGgX4B6hXPiBi0zIdZZYvCKM-JXmKBBzawc7kMTjfimv2r-fgYGXi3_Pd-lFayXg8HwWMpkjvLXt6YZcTMK

Pack the Nodes message more tightly

Description

Currently, ENRs for the Nodes message are packed assuming the maximum ENR size, i.e. at most 4 ENRs per packet so that the maximum packet size is respected in the worst case.
If the ENRs are actually smaller, more records could be packed into a single message.

Also note that the message can be sent within both OrdinaryMessage and Handshake packets. However, for a Nodes message (as a response message) this situation is highly unlikely with an honest remote node, so simply detecting and dropping outgoing messages that are too large would probably be enough. (see comment #72 (comment))
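The description above can be sketched as a greedy packer that fills each message by the actual encoded size of every ENR rather than the worst-case maximum. This is an illustrative sketch, not the library's API; `maxPayloadBytes` is an assumed budget for the ENR bytes of one packet.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical greedy packer: fill each NODES message by the actual
// encoded size of every ENR instead of assuming the worst-case maximum.
public class NodesPacker {

  // Splits the given ENRs (already RLP-encoded) into messages whose
  // combined ENR bytes stay within maxPayloadBytes.
  static List<List<byte[]>> pack(List<byte[]> enrs, int maxPayloadBytes) {
    List<List<byte[]>> messages = new ArrayList<>();
    List<byte[]> current = new ArrayList<>();
    int used = 0;
    for (byte[] enr : enrs) {
      if (!current.isEmpty() && used + enr.length > maxPayloadBytes) {
        messages.add(current); // current message is full; start a new one
        current = new ArrayList<>();
        used = 0;
      }
      current.add(enr);
      used += enr.length;
    }
    if (!current.isEmpty()) {
      messages.add(current);
    }
    return messages;
  }
}
```

With three 100-byte ENRs and a 250-byte budget this yields two messages (two ENRs, then one), instead of the fixed 4-per-packet split regardless of actual sizes.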

Continue retrying bootnodes

Description

To avoid winding up with an empty node table, ensure that bootnodes are continuously retried even if they have failed previously, rather than being dropped from the node table entirely.

This doesn't mean they need to stay in the k-buckets; just ensure that if they drop out of the k-buckets, we periodically ping them to check whether they are back online, so they can be used to bootstrap again if the k-buckets wind up empty. A periodic ping task should be enough to achieve this, as it will re-add them to the k-buckets if they reply and there is room.
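The periodic ping task could be sketched roughly as follows. This is only an illustration of the scheduling shape; `BootnodeRetryTask` and the `pinger` callback are hypothetical names standing in for the real liveness check, not the library's API.

```java
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;

// Hypothetical periodic task: keep pinging the configured bootnodes so
// they can re-enter the k-buckets after dropping out.
public class BootnodeRetryTask {
  private final ScheduledExecutorService scheduler =
      Executors.newSingleThreadScheduledExecutor();

  // pinger stands in for the real PING that re-adds live nodes to the buckets.
  public void start(List<String> bootnodeEnrs, Consumer<String> pinger, long periodSeconds) {
    scheduler.scheduleAtFixedRate(
        () -> bootnodeEnrs.forEach(pinger), // ping every bootnode each period
        0, periodSeconds, TimeUnit.SECONDS);
  }

  public void stop() {
    scheduler.shutdownNow();
  }
}
```

The key property is that the bootnode list lives outside the k-buckets, so a bootnode being evicted never removes it from the retry loop.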

Handle unexpected WHOAREYOU

When communicating with peers, we may receive a WHOAREYOU at any time and should handle it cleanly. Currently we only expect it after we have sent a random packet or while we are authenticated. However, it's possible that a peer is trying to connect to us at the same time we are connecting to them, resulting in both sides sending WHOAREYOU to each other. We need to handle that case cleanly somehow.
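One way to think about this is as a small decision table over the session state. The states and actions below are illustrative, not the library's actual enums; the point is only that a WHOAREYOU arriving in an unexpected state needs an explicit policy rather than falling through to an error.

```java
// Hypothetical decision table: how a session might react to a WHOAREYOU
// in any state, including the simultaneous-connect race.
public class WhoAreYouPolicy {
  enum SessionState { INITIAL, RANDOM_PACKET_SENT, AUTHENTICATED }
  enum Action { START_HANDSHAKE, RESTART_HANDSHAKE, DROP }

  static Action onWhoAreYou(SessionState state, boolean haveOutstandingRequest) {
    switch (state) {
      case RANDOM_PACKET_SENT:
        return Action.START_HANDSHAKE;   // the expected case today
      case AUTHENTICATED:
        return Action.RESTART_HANDSHAKE; // peer appears to have lost our session keys
      default:
        // Simultaneous connect: only answer if we actually sent them something
        // the WHOAREYOU could be responding to; otherwise drop it.
        return haveOutstandingRequest ? Action.START_HANDSHAKE : Action.DROP;
    }
  }
}
```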

Adding SECP256R1 support to be NIST compliant

In Besu there is ongoing work to add SECP256R1 as an alternative elliptic curve. It is intended for private networks, which need to be NIST compliant.

After adding support for the new elliptic curve, I ran Besu's acceptance tests. All of them fail because the peer discovery does not yet support any other elliptic curve. Besu's peer discovery uses, among other classes, util/Functions. This class has SECP256K1 hardcoded, and I would like to add support for SECP256R1 as well.

I would like to know what you think about supporting a different elliptic curve, and if you like the idea, whether you have any suggestions for a good approach to implementing it.
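As a starting point, one possible direction is to parameterize the curve name instead of hardcoding it, which standard JCA key generation already allows. This is only a sketch, not the discovery library's API: the JDK's built-in provider ships secp256r1, whereas secp256k1 typically requires an external provider such as Bouncy Castle (as the library already uses).

```java
import java.security.GeneralSecurityException;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.spec.ECGenParameterSpec;

// Illustrative sketch: curve-agnostic EC key generation via a named curve,
// rather than a hardcoded SECP256K1 dependency.
public class CurveAgnosticKeys {

  static KeyPair generate(String curveName) {
    try {
      KeyPairGenerator gen = KeyPairGenerator.getInstance("EC");
      gen.initialize(new ECGenParameterSpec(curveName)); // e.g. "secp256r1"
      return gen.generateKeyPair();
    } catch (GeneralSecurityException e) {
      throw new IllegalArgumentException("Unsupported curve: " + curveName, e);
    }
  }
}
```

The harder part is the signing and ECDH code paths in util/Functions, which would need the same curve parameter threaded through; the sketch above only covers key generation.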

discovery-tasks-1 | ERROR | DefaultSchedulers | Unhandled exception (1)

2020-06-04 07:57:28.481+00:00 | TimeTickChannel-0 | INFO  | teku-event-log | Slot Event  *** Slot: 59987, Block:    ... empty, Epoch: 1874, Finalized checkpoint: 1872, Finalized root: d0278a..3bf3, Peers: 24
2020-06-04 07:57:30.645+00:00 | discovery-tasks-1 | ERROR | DefaultSchedulers | Unhandled exception:
reactor.core.Exceptions$ErrorCallbackNotImplemented: java.util.NoSuchElementException: No value present
Caused by: java.util.NoSuchElementException: No value present
        at java.util.Optional.orElseThrow(Optional.java:375) ~[?:?]
        at org.ethereum.beacon.discovery.pipeline.handler.NodeIdToSession.getRemoteSocketAddress(NodeIdToSession.java:125) ~[discovery-0.3.6.jar:0.3.6]
        at org.ethereum.beacon.discovery.pipeline.handler.NodeIdToSession.getOrCreateSession(NodeIdToSession.java:93) ~[discovery-0.3.6.jar:0.3.6]
        at org.ethereum.beacon.discovery.pipeline.handler.NodeIdToSession.handle(NodeIdToSession.java:87) ~[discovery-0.3.6.jar:0.3.6]
        at reactor.core.publisher.FluxPeekFuseable$PeekFuseableSubscriber.onNext(FluxPeekFuseable.java:189) ~[reactor-core-3.3.0.RELEASE.jar:3.3.0.RELEASE]
        at reactor.core.publisher.FluxPeekFuseable$PeekFuseableSubscriber.onNext(FluxPeekFuseable.java:203) ~[reactor-core-3.3.0.RELEASE.jar:3.3.0.RELEASE]
        at reactor.core.publisher.FluxPeekFuseable$PeekFuseableSubscriber.onNext(FluxPeekFuseable.java:203) ~[reactor-core-3.3.0.RELEASE.jar:3.3.0.RELEASE]
        at reactor.core.publisher.FluxReplay$SizeBoundReplayBuffer.replayNormal(FluxReplay.java:811) ~[reactor-core-3.3.0.RELEASE.jar:3.3.0.RELEASE]
        at reactor.core.publisher.FluxReplay$SizeBoundReplayBuffer.replay(FluxReplay.java:895) ~[reactor-core-3.3.0.RELEASE.jar:3.3.0.RELEASE]
        at reactor.core.publisher.ReplayProcessor.onNext(ReplayProcessor.java:442) ~[reactor-core-3.3.0.RELEASE.jar:3.3.0.RELEASE]
        at reactor.core.publisher.FluxCreate$IgnoreSink.next(FluxCreate.java:618) ~[reactor-core-3.3.0.RELEASE.jar:3.3.0.RELEASE]
        at reactor.core.publisher.FluxCreate$SerializedSink.next(FluxCreate.java:153) ~[reactor-core-3.3.0.RELEASE.jar:3.3.0.RELEASE]
        at org.ethereum.beacon.discovery.pipeline.PipelineImpl.push(PipelineImpl.java:43) ~[discovery-0.3.6.jar:0.3.6]
        at org.ethereum.beacon.discovery.DiscoveryManagerImpl.executeTaskImpl(DiscoveryManagerImpl.java:151) ~[discovery-0.3.6.jar:0.3.6]
        at org.ethereum.beacon.discovery.DiscoveryManagerImpl.ping(DiscoveryManagerImpl.java:162) ~[discovery-0.3.6.jar:0.3.6]
        at org.ethereum.beacon.discovery.task.LiveCheckTasks.lambda$add$2(LiveCheckTasks.java:55) ~[discovery-0.3.6.jar:0.3.6]
        at org.ethereum.beacon.discovery.scheduler.ErrorHandlingScheduler.runAndHandleError(ErrorHandlingScheduler.java:70) ~[discovery-0.3.6.jar:0.3.6]
        at org.ethereum.beacon.discovery.scheduler.ErrorHandlingScheduler.lambda$execute$1(ErrorHandlingScheduler.java:46) ~[discovery-0.3.6.jar:0.3.6]
        at org.ethereum.beacon.discovery.scheduler.Scheduler.lambda$execute$0(Scheduler.java:35) ~[discovery-0.3.6.jar:0.3.6]
        at org.ethereum.beacon.discovery.scheduler.ExecutorScheduler.lambda$execute$0(ExecutorScheduler.java:32) ~[discovery-0.3.6.jar:0.3.6]
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) [?:?]
        at java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?]
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304) [?:?]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
        at java.lang.Thread.run(Thread.java:830) [?:?]

From @benjaminion Consensys/teku#2058

Unhandled IllegalArgumentException

Description

Exception reaching end of netty pipeline:

16:41:56.388 WARN  - An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
reactor.core.Exceptions$ErrorCallbackNotImplemented: java.lang.IllegalArgumentException: Provided length 32 is too big: the value has size 18 and has only 18 bytes from 0
Caused by: java.lang.IllegalArgumentException: Provided length 32 is too big: the value has size 18 and has only 18 bytes from 0
	at com.google.common.base.Preconditions.checkArgument(Preconditions.java:478) ~[guava-28.1-jre.jar:?]
	at org.apache.tuweni.bytes.ArrayWrappingBytes.slice(ArrayWrappingBytes.java:74) ~[tuweni-bytes-0.10.0.jar:0.10.0]
	at org.ethereum.beacon.discovery.packet.UnknownPacket.isWhoAreYouPacket(UnknownPacket.java:36) ~[discovery-0.3.6.jar:0.3.6]
	at org.ethereum.beacon.discovery.pipeline.handler.WhoAreYouAttempt.lambda$handle$1(WhoAreYouAttempt.java:40) ~[discovery-0.3.6.jar:0.3.6]
	at org.ethereum.beacon.discovery.pipeline.HandlerUtil.requireCondition(HandlerUtil.java:45) ~[discovery-0.3.6.jar:0.3.6]
	at org.ethereum.beacon.discovery.pipeline.handler.WhoAreYouAttempt.handle(WhoAreYouAttempt.java:38) ~[discovery-0.3.6.jar:0.3.6]
	at reactor.core.publisher.FluxPeekFuseable$PeekFuseableSubscriber.onNext(FluxPeekFuseable.java:189) ~[reactor-core-3.3.0.RELEASE.jar:3.3.0.RELEASE]
	at reactor.core.publisher.FluxPeekFuseable$PeekFuseableSubscriber.onNext(FluxPeekFuseable.java:203) ~[reactor-core-3.3.0.RELEASE.jar:3.3.0.RELEASE]
	at reactor.core.publisher.FluxReplay$SizeBoundReplayBuffer.replayNormal(FluxReplay.java:811) ~[reactor-core-3.3.0.RELEASE.jar:3.3.0.RELEASE]
	at reactor.core.publisher.FluxReplay$SizeBoundReplayBuffer.replay(FluxReplay.java:895) ~[reactor-core-3.3.0.RELEASE.jar:3.3.0.RELEASE]
	at reactor.core.publisher.ReplayProcessor.onNext(ReplayProcessor.java:442) ~[reactor-core-3.3.0.RELEASE.jar:3.3.0.RELEASE]
	at reactor.core.publisher.FluxCreate$IgnoreSink.next(FluxCreate.java:618) ~[reactor-core-3.3.0.RELEASE.jar:3.3.0.RELEASE]
	at reactor.core.publisher.FluxCreate$SerializedSink.next(FluxCreate.java:153) ~[reactor-core-3.3.0.RELEASE.jar:3.3.0.RELEASE]
	at org.ethereum.beacon.discovery.pipeline.PipelineImpl.push(PipelineImpl.java:43) ~[discovery-0.3.6.jar:0.3.6]
	at reactor.core.publisher.LambdaSubscriber.onNext(LambdaSubscriber.java:160) ~[reactor-core-3.3.0.RELEASE.jar:3.3.0.RELEASE]
	at reactor.core.publisher.FluxReplay$SizeBoundReplayBuffer.replayNormal(FluxReplay.java:811) ~[reactor-core-3.3.0.RELEASE.jar:3.3.0.RELEASE]
	at reactor.core.publisher.FluxReplay$SizeBoundReplayBuffer.replay(FluxReplay.java:895) ~[reactor-core-3.3.0.RELEASE.jar:3.3.0.RELEASE]
	at reactor.core.publisher.ReplayProcessor.onNext(ReplayProcessor.java:442) ~[reactor-core-3.3.0.RELEASE.jar:3.3.0.RELEASE]
	at reactor.core.publisher.FluxCreate$IgnoreSink.next(FluxCreate.java:618) ~[reactor-core-3.3.0.RELEASE.jar:3.3.0.RELEASE]
	at reactor.core.publisher.FluxCreate$SerializedSink.next(FluxCreate.java:153) ~[reactor-core-3.3.0.RELEASE.jar:3.3.0.RELEASE]
	at org.ethereum.beacon.discovery.network.IncomingMessageSink.channelRead0(IncomingMessageSink.java:31) ~[discovery-0.3.6.jar:0.3.6]
	at org.ethereum.beacon.discovery.network.IncomingMessageSink.channelRead0(IncomingMessageSink.java:20) ~[discovery-0.3.6.jar:0.3.6]
	at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105) ~[netty-all-4.1.42.Final.jar:4.1.42.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) [netty-all-4.1.42.Final.jar:4.1.42.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) [netty-all-4.1.42.Final.jar:4.1.42.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352) [netty-all-4.1.42.Final.jar:4.1.42.Final]
	at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102) [netty-all-4.1.42.Final.jar:4.1.42.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) [netty-all-4.1.42.Final.jar:4.1.42.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) [netty-all-4.1.42.Final.jar:4.1.42.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352) [netty-all-4.1.42.Final.jar:4.1.42.Final]
	at io.netty.handler.logging.LoggingHandler.channelRead(LoggingHandler.java:241) [netty-all-4.1.42.Final.jar:4.1.42.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) [netty-all-4.1.42.Final.jar:4.1.42.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) [netty-all-4.1.42.Final.jar:4.1.42.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352) [netty-all-4.1.42.Final.jar:4.1.42.Final]
	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1422) [netty-all-4.1.42.Final.jar:4.1.42.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) [netty-all-4.1.42.Final.jar:4.1.42.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) [netty-all-4.1.42.Final.jar:4.1.42.Final]
	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:931) [netty-all-4.1.42.Final.jar:4.1.42.Final]
	at io.netty.channel.nio.AbstractNioMessageChannel$NioMessageUnsafe.read(AbstractNioMessageChannel.java:93) [netty-all-4.1.42.Final.jar:4.1.42.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:700) [netty-all-4.1.42.Final.jar:4.1.42.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:635) [netty-all-4.1.42.Final.jar:4.1.42.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:552) [netty-all-4.1.42.Final.jar:4.1.42.Final]
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:514) [netty-all-4.1.42.Final.jar:4.1.42.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor$6.run(SingleThreadEventExecutor.java:1044) [netty-all-4.1.42.Final.jar:4.1.42.Final]
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-all-4.1.42.Final.jar:4.1.42.Final]
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [netty-all-4.1.42.Final.jar:4.1.42.Final]
	at java.lang.Thread.run(Thread.java:834) [?:?]

Stream ordering of buckets.streamClosestPeers(targetId) does not follow the xor ordering (Spec Mismatch)

Description

Stream ordering of buckets.streamClosestPeers(targetId) does not follow the xor ordering (Spec Mismatch)

Acceptance Criteria

  • Stream ordering should exactly adhere to XOR distance

Steps to Reproduce (Bug)

  1. Call buckets.streamClosestPeers(targetId), then sort the resulting stream by XOR distance and compare it to the original stream — notice the discrepancy

Implementation notes

Vocab:

  • Move left - Move from a distance d to d-1.
  • Move right - Move from a distance d to d+1.

Currently, we start off at the target XOR distance and then simultaneously move left and right, adding buckets to our iterator. But that is wrong. We should prioritize the left buckets first, then move right only if we don't get enough peers. Moving left guarantees that at least cpl (common prefix length) bits match.

go-libp2p-kbucket follows the exact rationale described above. (Except that for them, left is right, since they use common prefix length rather than logDistance.)
geth does an n² traversal: it gets all the peers from all buckets and sorts them.

Preferred implementation:
If we worry about optimization: the node table is a centrally used data structure, so we have to design it to be as efficient as possible. We can add an additional parameter n, stream buckets in the order described above, and stop scanning further buckets (hasMoreBucketsToScan) once we have put more than n entries into a TreeSet ordered by an xor-distance comparator.

If we don't worry about optimization: stream all peers and add a .sorted() at the end of the stream. We may use a simple HashSet instead of a TreeSet, since we are already sorting at the end.

But as it is only going to be 16*256 entries, I'm not sure how much approach 1 would help us (especially since most calls would fall on bucket 256 anyway).
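To make the acceptance criterion concrete, here is a minimal sketch (not the library's API; class and method names are illustrative) of the xor ordering the bucket stream should match: a peer id is closer to the target when `id xor target` is numerically smaller.

```java
import java.math.BigInteger;
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class XorOrdering {
  // Smaller xor result means closer to the target.
  static Comparator<BigInteger> byXorDistanceTo(BigInteger target) {
    return Comparator.comparing((BigInteger id) -> id.xor(target));
  }

  public static void main(String[] args) {
    BigInteger target = BigInteger.valueOf(0b1010);
    List<BigInteger> peers =
        Stream.of(0b0001, 0b1000, 0b1011, 0b0110)
            .map(BigInteger::valueOf)
            .sorted(byXorDistanceTo(target))
            .collect(Collectors.toList());
    // Distances: 0b1011^0b1010=1, 0b1000^0b1010=2, 0b0001^0b1010=11, 0b0110^0b1010=12
    System.out.println(peers); // prints [11, 8, 1, 6]
  }
}
```

A `streamClosestPeers(targetId)` that exactly adheres to xor distance should yield peers in the same order as sorting with a comparator like this one.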

Unhandled NullPointerException

Description

java.lang.NullPointerException
	at org.ethereum.beacon.discovery.scheduler.ExpirationScheduler.cancel(ExpirationScheduler.java:55)
	at org.ethereum.beacon.discovery.scheduler.ExpirationScheduler.put(ExpirationScheduler.java:39)
	at org.ethereum.beacon.discovery.task.RecursiveLookupTasks.addTimeout(RecursiveLookupTasks.java:66)
	at org.ethereum.beacon.discovery.task.RecursiveLookupTasks.lambda$add$1(RecursiveLookupTasks.java:51)
	at org.ethereum.beacon.discovery.scheduler.ErrorHandlingScheduler.runAndHandleError(ErrorHandlingScheduler.java:70)
	at org.ethereum.beacon.discovery.scheduler.ErrorHandlingScheduler.lambda$execute$1(ErrorHandlingScheduler.java:46)

hash(homeNodeId) can be cached

Each time a RandomPacket is received, the sourceNodeId must be calculated from the tag field, which requires computing hash(homeNodeId):

Call to getSourceNodeId(homeNodeId)
https://github.com/PegaSysEng/discovery/blob/840e90bea80699790eb750839be7402fe8b598b0/src/main/java/org/ethereum/beacon/discovery/pipeline/handler/UnknownPacketTagToSender.java#L49

hash(homeNodeId) seems to happen each time:
https://github.com/PegaSysEng/discovery/blob/840e90bea80699790eb750839be7402fe8b598b0/src/main/java/org/ethereum/beacon/discovery/packet/UnknownPacket.java#L58

This recomputation can be avoided by caching hash(homeNodeId).
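One possible shape for the fix, as a hedged sketch (the class, field, and supplier names are illustrative, not the library's actual API): wrap the expensive hash computation in a memoizing supplier so it runs at most once.

```java
import java.util.function.Supplier;

public class CachedHash {
  // Returns a supplier that computes the value on first call and reuses it.
  static <T> Supplier<T> memoize(Supplier<T> delegate) {
    return new Supplier<>() {
      private volatile T value;

      @Override
      public T get() {
        T result = value;
        if (result == null) {
          synchronized (this) {
            result = value;
            if (result == null) {
              result = value = delegate.get();
            }
          }
        }
        return result;
      }
    };
  }

  public static void main(String[] args) {
    int[] calls = {0};
    Supplier<String> hashedHomeNodeId =
        memoize(
            () -> {
              calls[0]++; // stands in for the expensive hash(homeNodeId)
              return "hashed";
            });
    hashedHomeNodeId.get();
    hashedHomeNodeId.get();
    System.out.println(calls[0]); // prints 1: the hash is computed only once
  }
}
```

Since homeNodeId never changes for a running node, an eagerly computed final field would work just as well as lazy memoization.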

Check for recently evicted peers before adding them to the node table.

Description

An enhancement when adding peers to node table.

Discussion + Additional Information:

Let's say that a peer was dead (didn't reply to our last 10 pings).

We as a node evicted that particular peer.

While we are performing walks, the evicted peer might still get added back to our table if it is received from another lazy peer during a FIND_NODES RPC.

We would then still use that peer for our lookups, etc. Combined with exponential backoff, this might actually slow down discovery.

Ping: @ajsutton. Does this make sense, or is it downright wrong?
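The proposed enhancement could look something like the following sketch (class and method names are hypothetical, not the actual node-table internals): remember recently evicted peers for a cooldown window and consult that cache before re-adding a peer learned from a FIND_NODES response.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.HashMap;
import java.util.Map;

public class EvictionCache {
  private final Map<String, Instant> evictedAt = new HashMap<>();
  private final Duration cooldown;

  EvictionCache(Duration cooldown) {
    this.cooldown = cooldown;
  }

  // Called when the node table drops a peer that failed liveness checks.
  void recordEviction(String nodeId) {
    evictedAt.put(nodeId, Instant.now());
  }

  // True if the peer was evicted within the cooldown window and should not
  // be re-added from a FIND_NODES response yet.
  boolean isRecentlyEvicted(String nodeId) {
    Instant when = evictedAt.get(nodeId);
    if (when == null) {
      return false;
    }
    if (when.plus(cooldown).isBefore(Instant.now())) {
      evictedAt.remove(nodeId); // cooldown expired, peer may be retried
      return false;
    }
    return true;
  }
}
```

The cooldown length is a tuning choice: too short and dead peers churn back in, too long and a peer that was briefly offline is penalized unnecessarily.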

Replace web3j crypto

Replace the web3j crypto functions with something more reliable. Possibly using BouncyCastle directly or maybe Tuweni crypto. In particular the ECDSA functions.

DiscoveryIntegrationTest intermittently fails

Description

As part of the CI build, this test (shouldRecoverAfterErrorWhileDecodingInboundMessage) will intermittently fail. If you push another commit to the PR, half of the time it will succeed. I've been unable to reproduce this failure locally. There's already a 30 second timeout (in waitFor), so I don't think it's a slow test that's taking too long.

DiscoveryIntegrationTest > shouldRecoverAfterErrorWhileDecodingInboundMessage() FAILED
    java.util.concurrent.TimeoutException at DiscoveryIntegrationTest.java:369

https://github.com/ConsenSys/discovery/blob/1ffc42f4a295f00007ec17e4db7707a29aa269bb/src/test/java/org/ethereum/beacon/discovery/integration/DiscoveryIntegrationTest.java#L367-L371

Add a new method, NodeRecordFactory.fromEnr(String enr)

Description

Add a new method, NodeRecordFactory.fromEnr(String enr), that will parse a string (represented as enr:enrBase64) and convert it to a NodeRecord. NodeRecordFactory.fromEnr(String enr) should be very similar to the existing method NodeRecordFactory.fromBase64(String enrBase64); the only difference is that the new method should handle the enr: prefix.

The need for this new method revealed itself while fixing teku issue 3583.
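A minimal sketch of the requested behaviour, assuming fromBase64 already works (the helper class and method here are illustrative; the real code would live in NodeRecordFactory): strip the enr: prefix and delegate the remainder to the existing base64 parser.

```java
public class EnrParsing {
  // The "enr:" prefix comes from the EIP-778 text encoding; what follows it
  // is the URL-safe base64 payload that fromBase64 already accepts.
  static String stripEnrPrefix(String enr) {
    return enr.startsWith("enr:") ? enr.substring("enr:".length()) : enr;
  }

  public static void main(String[] args) {
    System.out.println(stripEnrPrefix("enr:-Ku4QHqV")); // prints -Ku4QHqV
  }
}
```

fromEnr(enr) could then simply be fromBase64(stripEnrPrefix(enr)), optionally rejecting inputs that lack the prefix if strictness is preferred.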

Investigate lack of peers reported by MainNet bootnodes

Description

The Teku MainNet bootnode reports far fewer peers than other bootnodes with most distances having no reported peers.

Our bootnode ENR: enr:-KG4QJRlj4pHagfNIm-Fsx9EVjW4rviuZYzle3tyddm2KAWMJBDGAhxfM2g-pDaaiwE8q19uvLSH4jyvWjypLMr3TIcEhGV0aDKQ9aX9QgAAAAD__________4JpZIJ2NIJpcIQDE8KdiXNlY3AyNTZrMaEDhpehBDbZjM_L9ek699Y7vhUJ-eAdMyQW_Fil522Y0fODdGNwgiMog3VkcIIjKA

For exploratory testing I've been using a new class in the discovery test source tree:

package org.ethereum.beacon.discovery;

import static org.ethereum.beacon.discovery.util.Functions.PRIVKEY_SIZE;

import java.util.ArrayList;
import java.util.List;
import java.util.Optional;
import java.util.Random;
import org.apache.tuweni.bytes.Bytes;
import org.ethereum.beacon.discovery.schema.NodeRecord;
import org.ethereum.beacon.discovery.schema.NodeRecordBuilder;
import org.ethereum.beacon.discovery.schema.NodeRecordFactory;
import org.ethereum.beacon.discovery.util.Functions;
import org.ethereum.beacon.discovery.util.Utils;
import org.web3j.crypto.ECKeyPair;

public class Playground {
  public static void main(String[] args) {
    ECKeyPair keyPair = Functions.generateECKeyPair(new Random(1));
    final Bytes privateKey =
        Bytes.wrap(Utils.extractBytesFromUnsignedBigInt(keyPair.getPrivateKey(), PRIVKEY_SIZE));

    final DiscoverySystem system =
        new DiscoverySystemBuilder()
            .listen("0.0.0.0", 9000)
            .privateKey(privateKey)
            .localNodeRecord(
                new NodeRecordBuilder()
                    .privateKey(privateKey)
                    .address("180.150.110.29", 9000)
                    .seq(0)
                    .build())
            .newAddressHandler(
                (oldRecord, proposedRecord) -> {
                  System.out.println("Proposing address: " + proposedRecord);
                  return Optional.of(proposedRecord);
                })
            .build();
    final NodeRecord efBootnode =
        NodeRecordFactory.DEFAULT.fromEnr(
            "enr:-Ku4QHqVeJ8PPICcWk1vSn_XcSkjOkNiTg6Fmii5j6vUQgvzMc9L1goFnLKgXqBJspJjIsB91LTOleFmyWWrFVATGngBh2F0dG5ldHOIAAAAAAAAAACEZXRoMpC1MD8qAAAAAP__________gmlkgnY0gmlwhAMRHkWJc2VjcDI1NmsxoQKLVXFOhp2uX6jeT0DvvDpPcU8FWMjQdR4wMuORMhpX24N1ZHCCIyg");
    final NodeRecord bootnode1 =
        NodeRecordFactory.DEFAULT.fromEnr(
            "enr:-KG4QJRlj4pHagfNIm-Fsx9EVjW4rviuZYzle3tyddm2KAWMJBDGAhxfM2g-pDaaiwE8q19uvLSH4jyvWjypLMr3TIcEhGV0aDKQ9aX9QgAAAAD__________4JpZIJ2NIJpcIQDE8KdiXNlY3AyNTZrMaEDhpehBDbZjM_L9ek699Y7vhUJ-eAdMyQW_Fil522Y0fODdGNwgiMog3VkcIIjKA");
    final NodeRecord bootnode2 =
        NodeRecordFactory.DEFAULT.fromEnr(
            "enr:-KG4QL-eqFoHy0cI31THvtZjpYUu_Jdw_MO7skQRJxY1g5HTN1A0epPCU6vi0gLGUgrzpU-ygeMSS8ewVxDpKfYmxMMGhGV0aDKQtTA_KgAAAAD__________4JpZIJ2NIJpcIQ2_DUbiXNlY3AyNTZrMaED8GJ2vzUqgL6-KD1xalo1CsmY4X1HaDnyl6Y_WayCo9GDdGNwgiMog3VkcIIjKA");
    final NodeRecord node = efBootnode;
    system.start().join();

    system.ping(node).join();
    System.out.println("Pinged node " + node.getNodeId());
    final List<Integer> all = new ArrayList<>();
    for (int i = 0; i <= 258; i++) {
      System.out.println("------------ Distance " + i);
      system.findNodes(node, List.of(i)).join();
      all.add(i);
    }
    System.out.println("------------ All");
    system.findNodes(node, all).join();
    System.exit(0);
  }
}

checkTalkMessageHandling() intermittently fails

Description

DiscoveryIntegrationTest.checkTalkMessageHandling() fails intermittently on CircleCI. Locally, it fails almost all the time.

The failure is due to a timeout:


java.util.concurrent.TimeoutException
	at java.base/java.util.concurrent.CompletableFuture.timedGet(CompletableFuture.java:1886)
	at java.base/java.util.concurrent.CompletableFuture.get(CompletableFuture.java:2021)
	at org.ethereum.beacon.discovery.integration.DiscoveryIntegrationTest.waitFor(DiscoveryIntegrationTest.java:312)
	at org.ethereum.beacon.discovery.integration.DiscoveryIntegrationTest.waitFor(DiscoveryIntegrationTest.java:308)
	at org.ethereum.beacon.discovery.integration.DiscoveryIntegrationTest.checkTalkMessageHandling(DiscoveryIntegrationTest.java:226)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
	at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:675)
	at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
	at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:125)
	at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:132)
	at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:124)
	at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:74)
	at org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115)
	at org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105)
	at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:104)
	at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:62)
	at org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:43)
	at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:35)
	at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104)
	at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98)
	at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$6(TestMethodTestDescriptor.java:202)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:198)
	at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:135)
	at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:69)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:135)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:125)
	at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:135)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:123)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:122)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:80)
	at java.base/java.util.ArrayList.forEach(ArrayList.java:1540)
	at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:139)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:125)
	at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:135)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:123)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:122)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:80)
	at java.base/java.util.ArrayList.forEach(ArrayList.java:1540)
	at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:139)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:125)
	at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:135)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:123)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:122)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:80)
	at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:32)
	at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57)
	at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:51)
	at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:220)
	at org.junit.platform.launcher.core.DefaultLauncher.lambda$execute$6(DefaultLauncher.java:188)
	at org.junit.platform.launcher.core.DefaultLauncher.withInterceptedStreams(DefaultLauncher.java:202)
	at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:181)
	at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:128)
	at org.gradle.api.internal.tasks.testing.junitplatform.JUnitPlatformTestClassProcessor$CollectAllTestClassesExecutor.processAllTestClasses(JUnitPlatformTestClassProcessor.java:99)
	at org.gradle.api.internal.tasks.testing.junitplatform.JUnitPlatformTestClassProcessor$CollectAllTestClassesExecutor.access$000(JUnitPlatformTestClassProcessor.java:79)
	at org.gradle.api.internal.tasks.testing.junitplatform.JUnitPlatformTestClassProcessor.stop(JUnitPlatformTestClassProcessor.java:75)
	at org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.stop(SuiteTestClassProcessor.java:61)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
	at org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:33)
	at org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:94)
	at com.sun.proxy.$Proxy2.stop(Unknown Source)
	at org.gradle.api.internal.tasks.testing.worker.TestWorker.stop(TestWorker.java:132)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
	at org.gradle.internal.remote.internal.hub.MessageHubBackedObjectConnection$DispatchWrapper.dispatch(MessageHubBackedObjectConnection.java:182)
	at org.gradle.internal.remote.internal.hub.MessageHubBackedObjectConnection$DispatchWrapper.dispatch(MessageHubBackedObjectConnection.java:164)
	at org.gradle.internal.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:412)
	at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:64)
	at org.gradle.internal.concurrent.ManagedExecutorImpl$1.run(ManagedExecutorImpl.java:48)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
	at org.gradle.internal.concurrent.ThreadFactoryImpl$ManagedThreadRunnable.run(ThreadFactoryImpl.java:56)
	at java.base/java.lang.Thread.run(Thread.java:834)

Discovery: ConcurrentModificationException in Bucket.getLiveNodes

Description

java.util.concurrent.CompletionException: java.util.ConcurrentModificationException
	at java.base/java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:315)
	at java.base/java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:320)
	at java.base/java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:936)
	at java.base/java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:911)
	at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:510)
	at java.base/java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2162)
	at java.base/java.util.concurrent.CompletableFuture$Timeout.run(CompletableFuture.java:2874)
	at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)
	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
	at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
	at java.base/java.lang.Thread.run(Thread.java:833)
Caused by: java.util.ConcurrentModificationException
	at java.base/java.util.ArrayList$ArrayListSpliterator.tryAdvance(ArrayList.java:1604)
	at java.base/java.util.stream.ReferencePipeline.forEachWithCancel(ReferencePipeline.java:129)
	at java.base/java.util.stream.AbstractPipeline.copyIntoWithCancel(AbstractPipeline.java:527)
	at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:513)
	at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499)
	at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:921)
	at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
	at java.base/java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:682)
	at org.ethereum.beacon.discovery.storage.KBucket.getLiveNodes(KBucket.java:52)
	at org.ethereum.beacon.discovery.storage.KBuckets.lambda$getLiveNodeRecords$0(KBuckets.java:50)
	at java.base/java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:273)
	at java.base/java.util.stream.Streams$StreamBuilderImpl.forEachRemaining(Streams.java:411)
	at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509)
	at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499)
	at java.base/java.util.stream.StreamSpliterators$WrappingSpliterator.forEachRemaining(StreamSpliterators.java:310)
	at java.base/java.util.stream.Streams$ConcatSpliterator.forEachRemaining(Streams.java:735)
	at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509)
	at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499)
	at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:921)
	at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
	at java.base/java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:682)
	at org.ethereum.beacon.discovery.storage.KBucketsIterator.updateCurrentBatch(KBucketsIterator.java:61)
	at org.ethereum.beacon.discovery.storage.KBucketsIterator.hasNext(KBucketsIterator.java:40)
	at java.base/java.util.Iterator.forEachRemaining(Iterator.java:132)
	at java.base/java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1845)
	at java.base/java.util.stream.Streams$ConcatSpliterator.forEachRemaining(Streams.java:734)
	at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509)
	at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499)
	at java.base/java.util.stream.StreamSpliterators$WrappingSpliterator.forEachRemaining(StreamSpliterators.java:310)
	at java.base/java.util.stream.Streams$ConcatSpliterator.forEachRemaining(Streams.java:735)
	at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509)
	at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499)
	at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:921)
	at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
	at java.base/java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:682)
	at tech.pegasys.teku.networking.p2p.connection.ConnectionManager.lambda$connectToBestPeers$1(ConnectionManager.java:114)
	at tech.pegasys.teku.networking.eth2.peers.Eth2PeerSelectionStrategy.selectPeersToConnect(Eth2PeerSelectionStrategy.java:86)
	at tech.pegasys.teku.networking.p2p.connection.ConnectionManager.connectToBestPeers(ConnectionManager.java:107)
	at tech.pegasys.teku.networking.p2p.connection.ConnectionManager.lambda$searchForPeers$2(ConnectionManager.java:131)
	at tech.pegasys.teku.infrastructure.async.SafeFuture.lambda$finish$22(SafeFuture.java:304)
	at java.base/java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:934)
	... 10 more
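One possible direction for a fix, as a hedged sketch (this is not the actual KBucket implementation; names are illustrative): back the bucket with a collection whose iterators operate on a snapshot, so a concurrent mutation during a stream cannot throw ConcurrentModificationException.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.stream.Collectors;

public class SnapshotBucket {
  // CopyOnWriteArrayList iterators see a snapshot of the list at creation
  // time, so add()/remove() from another thread cannot invalidate them.
  private final List<String> nodes = new CopyOnWriteArrayList<>();

  void add(String node) {
    nodes.add(node);
  }

  List<String> getLiveNodes() {
    // Safe to stream even while another thread mutates the bucket.
    return nodes.stream().collect(Collectors.toList());
  }
}
```

Alternatives include synchronizing all bucket access on one lock, or taking an explicit `new ArrayList<>(nodes)` copy before streaming; copy-on-write suits this workload because reads (lookups) vastly outnumber writes (table updates).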

Don't include address in ENR until it is known

Description

Support starting with an ENR that contains no local IP information, then fill in the local address when it is determined from PONG responses.

Should still support specifying an initial IP to handle the case where a user explicitly sets the external IP.
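The desired behaviour could be sketched as follows (a minimal illustration with hypothetical names, not the DiscoverySystemBuilder API): start with an empty advertised address unless one was explicitly configured, and adopt the first address reported back in a PONG.

```java
import java.net.InetSocketAddress;
import java.util.Optional;
import java.util.concurrent.atomic.AtomicReference;

public class AdvertisedAddress {
  private final AtomicReference<Optional<InetSocketAddress>> address;

  // 'configured' is present only when the user explicitly set an external IP.
  AdvertisedAddress(Optional<InetSocketAddress> configured) {
    this.address = new AtomicReference<>(configured);
  }

  // Called when a PONG reports how a peer sees us; fills only an unset
  // address, so an explicitly configured IP is never overwritten.
  void onPongReportedAddress(InetSocketAddress reported) {
    address.compareAndSet(Optional.empty(), Optional.of(reported));
  }

  Optional<InetSocketAddress> get() {
    return address.get();
  }
}
```

In the real system, filling in the address would also require bumping the ENR sequence number and re-signing the record.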
