* Move methods out of PeerManager.onInitialization()
* Add PersistentPeerData, QueriedPeerData
* Segregate PeerData -> {AttemptToConnectPeerData, PersistentPeerData}, handle the cases differently in managePeerAfterInitialization()
* Remove call to sync() in BitcoinSServerMain
* Fix bug where we were attempting to stop peers whose connections had already failed
* Reduce log level for peer discovery failures
* simplify re-query invalid headers test case
* Cleanup test
* Cleanup another test
* Fix re-query invalid headers unit test
* Fix unit test
* Empty commit to run CI
* Empty commit to re-run CI
* Empty commit to run CI
* Fix bug where we just need to awaitAllSync() rather than attempt to sync()
* Use awaitSyncAndIBD, mv awaitSyncAndIBD to NodeTestUtil
* Replace more NodeUnitTest.syncNeutrinoNode() with NodeTestUtil.awaitSyncAndIBD
* Replace more NodeUnitTest.syncNeutrinoNode() with NodeTestUtil.awaitSyncAndIBD
* Remove explicit call to sync()
* Remove extra call to NodeUnitTest.syncNeutrinoNode() in test fixture
* Add NeutrinoNodeNotConnectedWithBitcoinds
* Don't switch sync peer if it's the same peer we are currently syncing from
* Fix another test
* Remove another sync() call
* Add carve-out in MisBehavingPeers for the case where the badPeer was our last peer
* Remove more .sync() calls
* Make PeerMessageSender.reconnect() return a Future that is completed when the connection is established (see Sketch 1 below the list)
* Replace hard require() with AsyncUtil.retryUntilSatisfiedF()
* Defensively try to sync every time a peer is initialized
* Turn logging OFF
* Fix DataMessageHandlerTest, remove calls to NeutrinoNode.sync() in test fixtures
* First attempt at implementing inactivityChecks() (see Sketch 2 below the list)
* Move lastParsedMessage logic into PeerMessageSender
* Add bitcoin-s.node.inactivity-timeout config option
* Implement unit test
* Fix CommonSettings
* scalafmt
* scalafmt
* Rename lastSuccessfulParsedMsg -> lastSuccessfulParsedMsgOpt
* make sure we disconnect in the case the stream terminates with an error
* Reduce log level
* WIP: Implement socks5 proxy in PeerMessageSender
* Get something working
* Refactor to use Either when passing a socks5 message or non-socks5 ByteString downstream (see Sketch 3 below the list)
* Socks5Message -> Socks5MessageResponse
* Revert things
* More cleanup
* Fix rebase
* Move socks5Handler() Flow into Socks5Connection
* Revert NeutrinoNode
* Implement auth for socks5 proxy in stream
* Cleanups
* Make PeerData.peerMessageSender not wrapped in Future
* Remove peerMessageSenderApi param
* Small cleanups
* Rename DataMessageHandlerState -> NodeState
* Cleanups in DataMessageHandler
* Refactor StreamDataMessageWrapper -> NodeStreamMessage (see Sketch 4 below the list)
* More small fixes
* Make handleDataPayload() take PeerData rather than Peer
* Move peers into NodeState and remove as param in DataMessageHandler
* replacePeers whenever PeerManager.{onP2PClientDisconnected,onInitialization} is called
* Rework ValidatingHeaders.toString()
* Empty commit to run CI
* Fix duplicate filter header sync by adding delay before attempting to sync filter headers
* Fix bug where we don't wait for AsyncUtil.nonBlockingSleep()
* Fix bug in DataMessageHandler.isFiltersSynced()
* Try alternative implementation to fix bug
* Fix valid states for CompactFilterMessage, revert PeerFinder delay
* WIP: Try to move byte streaming/parsing of p2p messages out of P2PClient
* WIP2: Work on killing the actor, replace it with a stream
* Get basic ability to send/receive version message working
* Transition PeerMessageReceiverState to Initializing inside of PeerMessagesender.connect()
* Refactor things out of PeerMessageSender.connect(), add some flow logs
* Get NeutrinoNodeTest "be able to sync" passing
* Fix some bugs, create ConnectionGraph helper class
* Use a KillSwitch rather than Source.maybe to disconnect peers (see Sketch 5 below the list)
* WIP: Debug
* Switch halfClose to false on Tcp.outgoingConnection() to not keep lingering connections
* Delete P2PClientActorTest
* Delete all P2PClient stuff
* Attempt implementing reconnection logic in PeerMessageSender
* Remove supervisor
* Empty commit to re-run CI
* Small cleanups
* Implement sendResponseTimeout()
* Restore logback-test.xml
* Add callback to log error message on reconnect
* Increase queueSize/maxConcurrentOffers size
* WIP
* WIP2
* Rebase with P2PClientCallbacks
* scalafmt
* Rework stream materialization in PeerManager to have DataMessageHandler encapsulated in the stream
* WIP: Fix compile
* Get everything compiling, ignore Uncached tests for now
* Increase queue size
* WIP: Make queue re-usable based on PeerManager.{start()/stop()}
* Get things compiling after rebase
* Try to handle case where we have SendToPeer in queue with peer that has been disconnected
* Empty commit to re-run CI
* Add sleep to let version/verack handshake complete in P2PClientActorTest
* Empty commit to re-run CI
* Reduce usage of bitcoind in P2PClientActorTest from 3 bitcoinds -> 2 bitcoinds
* Add error message to PeerFinder.stop() so we know which peer was not getting removed
* Cleanup error message
* Fix scalafmt, add state to log message
* Fix bug in PeerMessageReceiverState.stopReconnect() where we didn't send DisconnectedPeer() to the queue
* Empty commit to re-run CI
* Empty commit to re-run CI
* Reduce log level of onP2PClientDisconnected
* Empty commit to re-run CI
* Small cleanup
* scalafmt
* Get a new reference to ChainHandler in more places where node.syncFromNewPeer() is called
* Fix rebase
* Commit to run on CI
* Empty commit to run CI
* Empty commit to run CI
* Empty commit to re-run CI
* Empty commit to re-run CI
* Try to reproduce with logs on CI
* Empty commit to re-run CI
* WIP
* Rework onP2PClientDisconnected to return a new DataMessageHandlerState
* Save comment about bug
* Add a helper method switchSyncToPeer that takes into account the previous DataMessageHandlerState if we need to start a new sync because of a disconnection
* Empty commit to re-run CI
* Empty commit to re-run CI
* Cleanup
* Fix case where we weren't sending getheaders to the new peer when the old peer was disconnected while in state DoneSyncing
* Revert logback-test.xml
* Remove comment
* Try using syncHelper() rather than getHeaderSyncHelper() to make sure we sync filters as well if needed
* Re-add log
* Fix bug where we weren't starting to sync filter headers
* Tighten dmhState type to SyncDataMessageHandler on syncFilters(), clean up unnecessary code
* Empty commit to re-run CI
* Empty commit to re-run CI
* Add PeerMessageSenderApi.gossipGetHeadersMessage(), use it in Node.sync()
* Rework invalid headers test case to not need to reach into internals of akka stream
* Rework re-query headers test case to not need to reach into internals of akka stream
* Rework switch peers test case to not need to reach into internals of akka stream
* Use peerManager.offer() rather than reaching into DataMessageHandler to send messages to stream
* Use gossipGetHeadersMessage() after finishing IBD to query all peers instead of just one
* Empty commit to re-run CI
* Empty commit to re-run CI
* Empty commit to re-run CI
* Get PeerMessageSenderApi using akka streams for outbound p2p messages
* Use offer method rather than accessing queue directly
* Fix flaky unit test
* Empty commit to re-run CI
* Move methods for requesting filterheaders/filters into PeerManager, now use akka stream for those outbound p2p message
* Move sendInventoryMessage to PeerManager
* Move sendGetHeadersMessage() methods to PeerManager
* WIP: move more methods to PeerMessageSenderApi
* WIP2
* Initialize stream before calling PeerFinder.start() so outbound messages get processed
* Rebase
* Make queue buffer size dependent on maxConnectedPeers (see Sketch 6 below the list)
* Change state to HeaderSync() if we are re-querying for block headers
* Empty commit to re-run CI
* Remove PeerMessageSender from handleDataPayload()
* Limit access to PeerManager.peerMessageSenders
* Revert a few things
* Fix rebase issues
* Fix rebase
* Turn down logging
* Rebase
* Remove guard that checks peer size before labeling as MisBehavingPeer
* Fix small bug where we needed to switch syncPeer and weren't doing so
* Empty commit to run CI
* Empty commit to re-run CI
* Fix test case where we weren't waiting for Node.sync() to return Some(peer)
* Empty commit to re-run CI
* Fix another reference where we were calling Node.sync() too soon after Node.start()
* scalafmt
* Add another retryUntilSatisfied() on NeutrinoNode.sync()
* Remove awaitPeerWithServices()
* Empty commit to run CI
* Rework Node.sync() to return Future[Option[Peer]] rather than Future[Unit], returning the peer we are syncing with if one could be found (see Sketch 7 below the list)
* Turn logging OFF again
* Empty commit to re-run CI
* Use AsyncUtil.retryUntilSatisfied() when calling node.sync() after starting the node, to make sure we have a peer to sync from in a test case
* Await on the re-started node, not a stale reference, in NeutrinoNodeWithWalletTest
* Fix second reference
* Empty commit to re-run CI
* Move P2PClientCallbacks.onStop() into disconnection logic rather than Actor.postStop()
* Rename onStop -> onDisconnect
* Add reconnect flag to P2PCallbacks.onDisconnect so we don't attempt to reconnect when not necessary
* Rename flag to forceReconnect; check getPeerConnectionCount in PeerManager.onP2PClientDisconnected to see if we have 0 connections, and reconnect if we do
* Add PeerManager.isStarted flag, guard reconnection logic with this flag in onP2PClientDisconnected()
* Clear PeerFinder._peerData when PeerFinder.stop() is called
* WIP: Move disconnection logic into stream
* Rework ordering of PeerManager.stop() to shutdown queue after peers are removed
* Empty commit to re-run CI
* Await getConnectionCount asynchronously in test case
* Try increasing queue size
* Bump queue size to 16
* Put initialization and initialization-timeout logic in the queue rather than callbacks
* Send various control messages to the queue rather than handling them via callbacks
* Empty commit to re-run CI
* Empty commit to re-run CI
* Empty commit to re-run CI
* Remove P2PCallbacks altogether
* Re-add PeerMessageReceiverTest
* WIP
* Reset PeerManager.dataMessageHandler to None on PeerManager.stop()
* WIP2
* Always set dataMessageHandler.peerData when we process a new message in the stream
* Remove PeerManager from PeerData
* Make PeerFinder mutable inside of PeerManager, pass the queue into PeerFinder to decouple PeerManager and PeerFinder
* Don't verify actor system shutdown for now
* Drain data message stream before PeerManager.stop() is complete
* Try to fix race condition where peers.length gets mutated as peers get disconnected
* Adjust logic to check if we have a _new_ peer, not a specific peer count
* rework PeerManager.stop() ordering
* Rework PeerFinder.stop() to use Future.traverse()
* Make ControlMessageHandler take PeerManager rather than Node as a param
* refactor PeerData to not take a reference to Node
* Move ControlMessageHandler out of {Node,NeutrinoNode}
* Fix ReConnectionTest
* Cleanup
* Revert logback-test.xml
* Fix connectioncount test case
* Get P2PClientTest passing consistently
* Empty commit to re-run CI
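
---

The sketches below are referenced from the commit list above. They are illustrative reconstructions from the commit messages alone, not the actual bitcoin-s code.

**Sketch 1** — `PeerMessageSender.reconnect()` returning a `Future` that completes once the connection is established. Only the method name comes from the commits; the class shape and wiring are assumptions. With akka-streams TCP, the materialized `Future[OutgoingConnection]` provides exactly this signal:

```scala
import java.net.InetSocketAddress
import akka.actor.ActorSystem
import akka.stream.scaladsl.{Keep, Sink, Source, Tcp}
import akka.util.ByteString
import scala.concurrent.Future

// Hypothetical shape; only PeerMessageSender.reconnect() is named in the commits.
class PeerMessageSender(peer: InetSocketAddress)(implicit system: ActorSystem) {

  /** Reconnects to the peer. The returned Future completes when the TCP
    * connection is actually established, so callers can sequence on it
    * instead of racing the connection attempt.
    */
  def reconnect(): Future[Tcp.OutgoingConnection] = {
    val (_, connectionF) = Source
      .maybe[ByteString] // keeps the connection open until explicitly completed
      .viaMat(Tcp(system).outgoingConnection(peer))(Keep.both)
      .toMat(Sink.ignore)(Keep.left)
      .run()
    // Tcp.outgoingConnection materializes a Future[OutgoingConnection] that
    // completes on a successful connect and fails if the connect fails.
    connectionF
  }
}
```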
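**Sketch 2** — the inactivity bookkeeping implied by the `inactivityChecks()`, `lastSuccessfulParsedMsgOpt`, and `bitcoin-s.node.inactivity-timeout` commits. The field and config names are from the commit messages; the check logic itself is an assumption:

```scala
import java.time.Instant
import scala.concurrent.duration._

class PeerConnection(
    // would be read from the bitcoin-s.node.inactivity-timeout HOCON key;
    // the default here is purely illustrative
    inactivityTimeout: FiniteDuration = 5.minutes) {

  // updated each time we successfully parse an inbound p2p message
  @volatile private[this] var lastSuccessfulParsedMsgOpt: Option[Instant] = None

  def markParsedMsg(): Unit = lastSuccessfulParsedMsgOpt = Some(Instant.now())

  /** True if we have parsed at least one message but nothing within the window. */
  def isInactive(now: Instant = Instant.now()): Boolean =
    lastSuccessfulParsedMsgOpt.exists { last =>
      now.toEpochMilli - last.toEpochMilli > inactivityTimeout.toMillis
    }
}

// inactivityChecks() would then walk the connected peers and disconnect any
// peer for which isInactive returns true.
```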
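**Sketch 3** — the `Either` refactor in the SOCKS5 proxy stream. The commits suggest that while the proxy handshake is in flight the connection emits `Left(Socks5MessageResponse)`, and once the tunnel is established raw bytes flow through as `Right(ByteString)`. `Socks5MessageResponse` is named in the commits; its field and the flow below are assumptions about the downstream side:

```scala
import akka.NotUsed
import akka.stream.scaladsl.Flow
import akka.util.ByteString

// Field is an assumption; only the type name appears in the commits.
final case class Socks5MessageResponse(bytes: ByteString)

// Downstream p2p parsing only ever needs the Right side: handshake responses
// are consumed by the proxy logic, raw tunnel bytes continue on.
val p2pBytes: Flow[Either[Socks5MessageResponse, ByteString], ByteString, NotUsed] =
  Flow[Either[Socks5MessageResponse, ByteString]].collect { case Right(bytes) =>
    bytes
  }
```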
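**Sketch 4** — one plausible shape for the `NodeStreamMessage` ADT. Only `NodeStreamMessage`, `DisconnectedPeer`, `SendToPeer`, and the `forceReconnect` flag are named in the commits; the other members, fields, and placeholder types are illustrative assumptions:

```scala
// Placeholder types for the sketch.
final case class Peer(host: String, port: Int)
final case class NetworkMessage(bytes: Vector[Byte])

sealed trait NodeStreamMessage

object NodeStreamMessage {
  /** An inbound p2p payload to be handled by DataMessageHandler */
  case class DataMessageWrapper(msg: NetworkMessage, peer: Peer) extends NodeStreamMessage

  /** Control event: a peer completed the version/verack handshake */
  case class InitializedPeer(peer: Peer) extends NodeStreamMessage

  /** Control event: a peer failed to initialize within the timeout */
  case class InitializationTimeout(peer: Peer) extends NodeStreamMessage

  /** Control event: a peer's connection was torn down */
  case class DisconnectedPeer(peer: Peer, forceReconnect: Boolean) extends NodeStreamMessage

  /** An outbound message destined for a specific peer */
  case class SendToPeer(msg: NetworkMessage, peer: Peer) extends NodeStreamMessage
}
```

Funneling control events through the same queue as data messages, rather than through callbacks, serializes all state transitions in one stream; that is the design the later "put initialization and initialization-timeout logic in the queue rather than callbacks" commits describe.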
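**Sketch 5** — using a `KillSwitch` instead of `Source.maybe` to disconnect a peer, with `halfClose = false` on `Tcp.outgoingConnection()`. The exact graph in bitcoin-s may differ; this is an assumed wiring:

```scala
import java.net.InetSocketAddress
import akka.actor.ActorSystem
import akka.stream.{KillSwitches, OverflowStrategy, UniqueKillSwitch}
import akka.stream.scaladsl.{Keep, Sink, Source, SourceQueueWithComplete, Tcp}
import akka.util.ByteString

// Illustrative wiring for a single outbound peer connection.
def connect(peer: InetSocketAddress)(implicit system: ActorSystem)
    : (SourceQueueWithComplete[ByteString], UniqueKillSwitch) = {
  val ((queue, killSwitch), _) = Source
    .queue[ByteString](16, OverflowStrategy.backpressure)
    .viaMat(KillSwitches.single)(Keep.both)
    // halfClose = false: closing our write side tears down the whole
    // connection instead of leaving a lingering half-open socket.
    .via(Tcp(system).outgoingConnection(remoteAddress = peer, halfClose = false))
    .toMat(Sink.ignore)(Keep.both)
    .run()
  (queue, killSwitch)
}

// Disconnecting is now deterministic:
//   killSwitch.shutdown()
// rather than completing the Promise materialized by Source.maybe.
```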
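**Sketch 6** — deriving the queue buffer size from `maxConnectedPeers`. Only the dependency between the two values is stated in the commits; the formula and values below are assumptions:

```scala
import akka.actor.ActorSystem
import akka.stream.OverflowStrategy
import akka.stream.scaladsl.{Sink, Source}

implicit val system: ActorSystem = ActorSystem("node")

// Hypothetical sizing rule: headroom for a burst from every connected peer.
val maxConnectedPeers: Int = 8
val queueSize: Int = maxConnectedPeers * 2

val queue = Source
  .queue[String](
    bufferSize = queueSize,
    overflowStrategy = OverflowStrategy.backpressure,
    // allow every connected peer to have an offer() in flight at once
    maxConcurrentOffers = maxConnectedPeers)
  .to(Sink.foreach(println))
  .run()

// Producers use queue.offer(msg) rather than touching stream internals,
// matching the "use offer method rather than accessing queue directly" commit.
```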
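**Sketch 7** — `Node.sync()` returning `Future[Option[Peer]]`, and retrying in tests until a sync peer is found. The signature change is from the commits; the retry loop is a generic stand-in for the `AsyncUtil.retryUntilSatisfied()` helper the commits mention, whose exact signature is not asserted here:

```scala
import scala.concurrent.{ExecutionContext, Future}

final case class Peer(host: String, port: Int)

// Signature from the commits: sync() now reports which peer (if any) we sync from.
trait Node {
  def sync(): Future[Option[Peer]]
}

// Inline stand-in for the retry helper used in the tests; a real helper would
// also sleep between attempts.
def retryUntilSome[A](maxTries: Int)(f: () => Future[Option[A]])(implicit
    ec: ExecutionContext): Future[A] =
  f().flatMap {
    case Some(a)              => Future.successful(a)
    case None if maxTries > 1 => retryUntilSome(maxTries - 1)(f)
    case None                 => Future.failed(new RuntimeException("no peer to sync from"))
  }

// Test usage: after node.start(), keep calling sync() until a peer is found,
// instead of assuming a peer is connected immediately.
def awaitSyncPeer(node: Node)(implicit ec: ExecutionContext): Future[Peer] =
  retryUntilSome(maxTries = 10)(() => node.sync())
```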