* simplify re-query invalid headers test case
* Cleanup test
* Cleanup another test
* Fix re-query invalid headers unit test
* fix unit test
* Empty commit to run CI
* Empty commit to re-run CI
* Empty commit to run CI
* Fix bug where we just need to awaitAllSync() rather than attempt to sync()
* Use awaitSyncAndIBD, move awaitSyncAndIBD to NodeTestUtil
* Replace more NodeUnitTest.syncNeutrinoNode() with NodeTestUtil.awaitSyncAndIBD
* Replace more NodeUnitTest.syncNeutrinoNode() with NodeTestUtil.awaitSyncAndIBD
* Remove explicit call to sync()
* Remove extra call to NodeUnitTest.syncNeutrinoNode() in test fixture
* Add NeutrinoNodeNotConnectedWithBitcoinds
* Don't switch sync peer if it's the same peer we are currently syncing from
* Fix another test
* Remove another sync() call
* Add carve-out with MisBehavingPeers for the case where the badPeer was our last peer
* Remove more .sync() calls
* Make PeerMessageSender.reconnect() return a Future that is completed when the connection is established
* Replace hard require() with AsyncUtil.retryUntilSatisfiedF()
* Defensively try to sync every time a peer is initialized
* Turn logging OFF
* Fix DataMessageHandlerTest, remove calls to NeutrinoNode.sync() in test fixtures
* First attempt at implementing inactivityChecks()
* Move lastParsedMessage logic into PeerMessageSender
* Add bitcoin-s.node.inactivity-timeout config option
* implement unit test
* Fix CommonSettings
* scalafmt
* scalafmt
* Rename lastSuccessfulParsedMsg -> lastSuccessfulParsedMsgOpt
* make sure we disconnect in the case the stream terminates with an error
* Reduce log level
* WIP: Implement socks5 proxy in PeerMessageSender
* Get something working
* Refactor to use Either when passing a socks5 message or non socks5 ByteString downstream
* Socks5Message -> Socks5MessageResponse
* Revert things
* More cleanup
* Fix rebase
* Move socks5Handler() Flow into Socks5Connection
* Revert NeutrinoNode
* Implement auth for socks5 proxy in stream
* Cleanups
* Make PeerData.peerMessageSender not wrapped in Future
* Remove peerMessageSenderApi param
* Small cleanups
* Rename DataMessageHandlerState -> NodeState
* Cleanups in DataMessageHandler
* Refactor StreamDataMessageWrapper -> NodeStreamMessage
* More small fixes
* Make handleDataPayload() take PeerData rather than Peer
* Move peers into NodeState and remove as param in DataMessageHandler
* replacePeers whenever PeerManager.{onP2PClientDisconnected,onInitialization} is called
* rework ValidatingHeaders.toString()
* Empty commit to run CI
* Fix duplicate filter header sync by adding delay before attempting to sync filter headers
* Fix bug where we don't wait for AsyncUtil.nonBlockingSleep()
* Fix bug in DataMessageHandler.isFiltersSynced()
* Try alternative implementation to fix bug
* Fix valid states for CompactFilterMessage, revert PeerFinder delay
* WIP: Try to move byte streaming/parsing of p2p messages out of P2PClient
* WIP2: Work on killing the actor, replace it with a stream
* Get basic ability to send/receive version message working
* Transition PeerMessageReceiverState to Initializing inside of PeerMessagesender.connect()
* Refactor things out of PeerMessageSender.connect(), add some flow logs
* Get the NeutrinoNodeTest "be able to sync" test passing
* Fix some bugs, create ConnectionGraph helper class
* Use killswitch rather than Source.maybe to disconnect peer
* WIP: Debug
* Switch halfClose to false on Tcp.outgoingConnection() to not keep lingering connections
* Delete P2PClientActorTest
* Delete all P2PClient stuff
* Attempt implementing reconnection logic in PeerMessageSender
* remove supervisor
* Empty commit to re-run CI
* Small cleanups
* Implement sendResponseTimeout()
* Restore logback-test.xml
* Add callback to log error message on reconnect
* Increase queueSize/maxConcurrentOffers size
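The `bitcoin-s.node.inactivity-timeout` option added above is a HOCON setting. A minimal sketch of how it might appear in an `application.conf`; the 20-minute value and surrounding layout are illustrative assumptions, not the project's defaults:

```hocon
bitcoin-s {
  node {
    # Disconnect a peer if no message has been successfully parsed
    # within this window (value shown is an illustrative assumption)
    inactivity-timeout = 20 minutes
  }
}
```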
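The "Replace hard require() with AsyncUtil.retryUntilSatisfiedF()" commit swaps a one-shot assertion for polling an asynchronous condition until it holds. A minimal self-contained sketch of that pattern, assuming a signature similar to the bitcoin-s helper (the implementation here is illustrative, not the library's code):

```scala
import scala.concurrent.{ExecutionContext, Future, blocking}
import scala.concurrent.duration._

object RetryUtil {
  // Poll an async condition instead of failing immediately with a
  // hard require(): re-check every `interval` up to `maxTries` times.
  def retryUntilSatisfiedF(
      conditionF: () => Future[Boolean],
      interval: FiniteDuration = 100.millis,
      maxTries: Int = 50)(implicit ec: ExecutionContext): Future[Unit] = {
    conditionF().flatMap {
      case true => Future.unit
      case false if maxTries <= 1 =>
        Future.failed(new RuntimeException("Condition was never satisfied"))
      case false =>
        // Sleep off the thread pool's fast path, then retry
        Future(blocking(Thread.sleep(interval.toMillis)))
          .flatMap(_ => retryUntilSatisfiedF(conditionF, interval, maxTries - 1))
    }
  }
}
```

Compared with `require()`, this tolerates conditions that only become true after the node finishes an async step (e.g. a peer completing its handshake).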
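The "Use killswitch rather than Source.maybe to disconnect peer" commit refers to the Akka Streams `KillSwitches` API, which materializes a handle that can complete a running stream from outside it. A minimal sketch of the technique (a stand-in stream, not the actual NeutrinoNode TCP wiring):

```scala
import akka.Done
import akka.actor.ActorSystem
import akka.stream.scaladsl.{Keep, KillSwitches, Sink, Source}
import scala.concurrent.Future

object KillSwitchDisconnect {
  // Materialize a UniqueKillSwitch alongside the stream so the peer
  // connection can be torn down deterministically later.
  def run()(implicit system: ActorSystem): Future[Done] = {
    val (killSwitch, done) = Source
      .repeat("peer-bytes") // stand-in for the peer's byte stream
      .viaMat(KillSwitches.single)(Keep.right)
      .toMat(Sink.ignore)(Keep.both)
      .run()
    killSwitch.shutdown() // completes the stream, disconnecting the peer
    done
  }
}
```

Unlike completing a `Source.maybe` promise, `killSwitch.shutdown()` (or `abort(ex)`) works regardless of where the switch sits in the graph and distinguishes clean completion from failure.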