* Remove Initialized from NodeStreamMessage, return Initialized from ControlMessageHandler to avoid deadlocking queue with backpressure
* Revert a few things
* scalafmt
* Move initialization cancellable into connection graph so we can cancel it if PeerMessageSender.disconnect() is called
* Create ConnectionGraph.stop()
* fix compile
* Move methods out of PeerManager.onInitialization()
* Add PersistentPeerData, QueriedPeerData
* Segregate PeerData -> {AttemptToConnectPeerData, PersistentPeerData}, handle the cases differently in managePeerAfterInitialization()
* Remove call to sync() in BitcoinSServerMain
* Fix bug where we were attempting to stop peers that had already had their connections fail
* reduce log level for peer discovery failures
* simplify re-query invalid headers test case
* Cleanup test
* Cleanup another test
* Fix re-query invalid headers unit test
* fix unit test
* Empty commit to run CI
* Empty commit to re-run CI
* Empty commit to run CI
* Fix bug where we just need to awaitAllSync() rather than attempt to sync()
* Use awaitSyncAndIBD, mv awaitSyncAndIBD to NodeTestUtil
* Replace more NodeUnitTest.syncNeutrinoNode() with NodeTestUtil.awaitSyncAndIBD
* Replace more NodeUnitTest.syncNeutrinoNode() with NodeTestUtil.awaitSyncAndIBD
* Remove explicit call to sync()
* Remove extra call to NodeUnitTest.syncNeutrinoNode() in test fixture
* Add NeutrinoNodeNotConnnectedWithBitcoinds
* Don't switch sync peer if it's the same peer we are currently syncing from
* Fix another test
* Remove another sync() call
* Add carve-out with MisBehavingPeers for the case where the badPeer was our last peer
* Remove more .sync() calls
* Make PeerMessageSender.reconnect() return a Future that is completed when the connection is established
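A minimal sketch of the pattern behind this change (names and structure here are illustrative, not the actual bitcoin-s implementation): `reconnect()` creates a `Promise` and only completes it once the underlying connection is (re)established, so callers can sequence work on the returned `Future` instead of polling connection state.

```scala
import scala.concurrent.{Future, Promise}

class ReconnectSketch {
  // Hypothetical helper: opens the connection and invokes the callback
  // once the connection is established.
  private def initiateConnection(onConnected: () => Unit): Unit = {
    // ... open the TCP connection ...
    onConnected()
  }

  def reconnect(): Future[Unit] = {
    val connectedP = Promise[Unit]()
    initiateConnection(() => connectedP.success(()))
    connectedP.future // completes when the connection is established
  }
}
```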
* Replace hard require() with AsyncUtil.retryUntilSatisfiedF()
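The motivation for this swap, as a sketch (parameter names are assumptions; `AsyncUtil.retryUntilSatisfiedF` comes from the bitcoin-s async-utils module): a hard `require()` throws immediately if the condition isn't true yet, whereas polling the async condition tolerates races during startup.

```scala
import scala.concurrent.duration._
import scala.concurrent.{ExecutionContext, Future}
import org.bitcoins.asyncutil.AsyncUtil

// Before: throws if no peers are connected at the instant of the check.
//   require(peerManager.peers.nonEmpty, "no peers connected")

// After: retry the condition until it holds or retries are exhausted.
def waitForPeers(hasPeers: () => Boolean)(implicit
    ec: ExecutionContext): Future[Unit] =
  AsyncUtil.retryUntilSatisfiedF(
    () => Future.successful(hasPeers()),
    interval = 100.millis,
    maxTries = 50
  )
```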
* Defensively try to sync every time a peer is initialized
* Turn logging OFF
* Fix DataMessageHandlerTest, remove calls to NeutrinoNode.sync() in test fixtures
* First attempt at implementing inactivityChecks()
* Move lastParsedMessage logic into PeerMessageSender
* Add bitcoin-s.node.inactivity-timeout config option
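The new option would be set in `bitcoin-s.conf`; the value below is illustrative only (the PR's actual default may differ), using HOCON's duration syntax:

```
bitcoin-s.node.inactivity-timeout = 20 minutes
```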
* implement unit test
* Fix CommonSettings
* scalafmt
* scalafmt
* Rename lastSuccessfulParsedMsg -> lastSuccessfulParsedMsgOpt
* make sure we disconnect in the case the stream terminates with an error
* Reduce log level
* WIP: Implement socks5 proxy in PeerMessageSender
* Get something working
* Refactor to use Either when passing a socks5 message or non socks5 ByteString downstream
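A sketch of the `Either`-based routing (the types here are hypothetical stand-ins): while the SOCKS5 handshake is in flight the stage emits `Left` values for the proxy negotiation logic; once the proxy reports success, raw peer bytes flow downstream as `Right(ByteString)` and are parsed as ordinary p2p messages.

```scala
import akka.util.ByteString

sealed trait Socks5MessageResponse
final case class Socks5Reply(bytes: ByteString) extends Socks5MessageResponse

def handleDownstream(elem: Either[Socks5MessageResponse, ByteString]): Unit =
  elem match {
    case Left(reply)  => () // advance the SOCKS5 handshake state machine
    case Right(bytes) => () // feed bytes to the p2p message parser
  }
```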
* Socks5Message -> Socks5MessageResponse
* Revert things
* More cleanup
* Fix rebase
* Move socks5Handler() Flow into Socks5Connection
* Revert NeutrinoNode
* Implement auth for socks5 proxy in stream
* Cleanups
* Make PeerData.peerMessageSender not wrapped in Future
* Remove peerMessageSenderApi param
* Small cleanups
* Rename DataMessageHandlerState -> NodeState
* Cleanups in DataMessageHandler
* Refactor StreamDataMessageWrapper -> NodeStreamMessage
* More small fixes
* Make handleDataPayload() take PeerData rather than Peer
* Move peers into NodeState and remove as param in DataMessageHandler
* replacePeers whenever PeerManager.{onP2PClientDisconnected,onInitialization} is called
* rework ValidatingHeaders.toString()
* Empty commit to run CI
* Fix duplicate filter header sync by adding delay before attempting to sync filter headers
* Fix bug where we don't wait for AsyncUtil.nonBlockingSleep()
* Fix bug in DataMessageHandler.isFiltersSynced()
* Try alternative implementation to fix bug
* Fix valid states for CompactFilterMessage, revert PeerFinder delay
* WIP: Try to move byte streaming/parsing of p2p messages out of P2PClient
* WIP2: Work on killing the actor, replace it with a stream
* Get basic ability to send/receive version message working
* Transition PeerMessageReceiverState to Initializing inside of PeerMessagesender.connect()
* Refactor things out of PeerMessageSender.connect(), add some flow logs
* Get NeutrinoNodeTest "be able to sync" passing
* Fix some bugs, create ConnectionGraph helper class
* Use killswitch rather than Source.maybe to disconnect peer
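The kill-switch pattern, sketched with illustrative names (not the actual bitcoin-s graph): materializing a `UniqueKillSwitch` into the connection graph lets `disconnect()` terminate the stream deterministically, instead of keeping a `Source.maybe` promise around just to end it.

```scala
import akka.actor.ActorSystem
import akka.stream.{KillSwitches, UniqueKillSwitch}
import akka.stream.scaladsl.{Keep, Sink, Source}
import akka.util.ByteString

object KillSwitchSketch extends App {
  implicit val system: ActorSystem = ActorSystem("killswitch-sketch")

  val (killSwitch: UniqueKillSwitch, doneF) =
    Source
      .repeat(ByteString("peer-bytes")) // stand-in for the inbound TCP bytes
      .viaMat(KillSwitches.single)(Keep.right)
      .toMat(Sink.ignore)(Keep.both)
      .run()

  // Later, e.g. from PeerMessageSender.disconnect():
  killSwitch.shutdown() // completes the stream; doneF then completes
}
```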
* WIP: Debug
* Switch halfClose to false on Tcp.outgoingConnection() to not keep lingering connections
* Delete P2PClientActorTest
* Delete all P2PClient stuff
* Attempt implementing reconnection logic in PeerMessageSender
* remove supervisor
* Empty commit to re-run CI
* Small cleanups
* Implement sendResponseTimeout()
* Restore logback-test.xml
* Add callback to log error message on reconnect
* Increase queueSize/maxConcurrentOffers size
* WIP
* WIP2
* Rebase with P2PClientCallbacks
* scalafmt
* Rework stream materialization in PeerManager to have DataMessageHandler encapsulated in the stream
* WIP: Fix compile
* Get everything compiling, ignore Uncached tests for now
* Increase queue size
* WIP: Make queue re-usable based on PeerManager.{start()/stop()}
* Get things compiling after rebase
* Try to handle the case where a SendToPeer in the queue references a peer that has been disconnected
* Empty commit to re-run CI
* Add sleep to let version/verack handshake complete in P2PClientActorTest
* Empty commit to re-run CI
* Reduce usage of bitcoind in P2PClientActorTest from 3 bitcoinds -> 2 bitcoinds
* Add error message to PeerFinder.stop() so we know what peer was not getting removed
* Cleanup error message
* Fix scalafmt, add state to log message
* Fix bug in PeerMessageReceiverState.stopReconnect() which didn't send DisconnectedPeer() to the queue
* Empty commit to re-run CI
* Empty commit to re-run CI
* Reduce log level of onP2PClientDisconnected
* Empty commit to re-run CI
* Small cleanup
* scalafmt
* Get a new reference to ChainHandler in more places where node.syncFromNewPeer() is called
* Fix rebase
* Commit to run on CI
* Empty commit to run CI
* Empty commit to run CI
* Empty commit to re-run CI
* Empty commit to re-run CI
* Try to reproduce with logs on CI
* Empty commit to re-run CI
* WIP
* Rework onP2PClientDisconnected to return a new DataMessageHandlerState
* Save comment about bug
* Add a helper method switchSyncToPeer to take into account the previous DataMessageHandlerState if we need to start a new sync because of disconnection
* Empty commit to re-run CI
* Empty commit to re-run CI
* Cleanup
* Fix case where we weren't sending getheaders to the new peer when the old peer was disconnected while in the DoneSyncing state
* Revert logback-test.xml
* remove comment
* Try using syncHelper() rather than getHeaderSyncHelper() to make sure we sync filters as well if needed
* Re-add log
* Fix bug where we weren't starting to sync filter headers
* Tighten dmhState type to SyncDataMessageHandler on syncFilters(), clean up unnecessary code
* Empty commit to re-run CI
* Empty commit to re-run CI
* Add PeerMessageSenderApi.gossipGetHeadersMessage(), use it in Node.sync()
* Rework invalid headers test case to not need to reach into internals of akka stream
* Rework re-query headers test case to not need to reach into internals of akka stream
* Rework switch peers test case to not need to reach into internals of akka stream
* Use peerManager.offer() rather than reaching into DataMessageHandler to send messages to stream
* Use gossipGetHeadersMessage() after IBD completes to query all peers instead of just one
* Empty commit to re-run CI
* Empty commit to re-run CI
* Empty commit to re-run CI