* Fix bug in PeerManager.connect() where we would throw an exception when our PeerFinder had not seen the peer before
* Remove ReConnectionTest case where we constantly try to reconnect to a peer we disconnected, revert logback-test.xml
Refactor ReConnectionTest to use NodeTestUtil.awaitConnectionCount()
Refactor more of codebase to use NodeTestUtil.awaitConnectionCount()
Refactor PeerManagerTest to use NodeTestUtil.awaitConnectionCount()
Refactor more of nodeTest to use NodeTestUtil.awaitConnectionCount()
use bitcoinds.size for expectedConnectionCount
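The awaitConnectionCount() refactors above replace ad-hoc sleeps in the tests with a single polling helper, and expectedConnectionCount is now derived from bitcoinds.size (one connection per bitcoind started by the fixture). Below is a minimal sketch of such a helper, assuming a generic connectionCount callback in place of the real node accessor; the names and signature are illustrative, not the actual NodeTestUtil API:

```scala
import akka.actor.ActorSystem
import akka.pattern.after
import scala.concurrent.{ExecutionContext, Future}
import scala.concurrent.duration._

object NodeTestHelpers {

  // Illustrative polling helper: repeatedly query the node's connection count
  // until it matches the expected value or we run out of attempts.
  def awaitConnectionCount(
      connectionCount: () => Future[Int],
      expectedConnectionCount: Int,
      interval: FiniteDuration = 100.millis,
      maxTries: Int = 50)(implicit
      system: ActorSystem,
      ec: ExecutionContext): Future[Unit] = {
    connectionCount().flatMap { count =>
      if (count == expectedConnectionCount) Future.unit
      else if (maxTries <= 0)
        Future.failed(new RuntimeException(
          s"Expected $expectedConnectionCount connections but found $count"))
      else
        after(interval, system.scheduler)(
          awaitConnectionCount(connectionCount,
                               expectedConnectionCount,
                               interval,
                               maxTries - 1))
    }
  }
}
```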
* Consolidate PeerFinder.start() logic into peerConnectionScheduler, add the bitcoin-s.node.try-peers-start-delay config setting to indicate how long to wait before we start attempting to connect to peers in PeerFinder
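A hedged example of how a delay setting like this can be expressed and read with Typesafe Config; the key name follows the commit message above, while the value, unit, and parsing details of the real bitcoin-s setting may differ:

```scala
import com.typesafe.config.ConfigFactory
import java.util.concurrent.TimeUnit
import scala.concurrent.duration.FiniteDuration

// Parse a HOCON snippet containing the delay before PeerFinder starts
// trying peers (the 30 second value here is only an example).
val config = ConfigFactory.parseString(
  """bitcoin-s.node.try-peers-start-delay = 30 seconds""")

val tryPeersStartDelay: FiniteDuration =
  FiniteDuration(
    config.getDuration("bitcoin-s.node.try-peers-start-delay", TimeUnit.SECONDS),
    TimeUnit.SECONDS)
```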
* Cleanup
* Revert logback-test.xml
* Add documentation
* Empty commit to run CI
* Rework NodeAppConfig.peers to return Vector[Peer] rather than Vector[String]
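Returning Vector[Peer] instead of Vector[String] means the raw host:port entries from config are parsed once, up front, rather than at every use site. The Peer case class and parser below are simplified stand-ins for illustration, not the actual bitcoin-s types:

```scala
import java.net.InetSocketAddress

// Simplified stand-in for the node's Peer type.
final case class Peer(socket: InetSocketAddress)

// Parse "host" or "host:port" entries from config into Peer values,
// defaulting to the mainnet p2p port in this sketch when no port is given.
def parsePeers(entries: Vector[String], defaultPort: Int = 8333): Vector[Peer] = {
  entries.map { entry =>
    val (host, port) = entry.split(":") match {
      case Array(h, p) => (h, p.toInt)
      case Array(h)    => (h, defaultPort)
      case _           => sys.error(s"Malformed peer entry: $entry")
    }
    Peer(InetSocketAddress.createUnresolved(host, port))
  }
}
```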
* Add higher priority to paramPeers
* Add LargeTransactionTest
* Remove unused ByteArrayOutputStream
* Add block test case for 00000000ce4a4666cce2205d760d37b5579cdedf3ac9e4295557e8ac962cde55
* Cache txId to avoid re-computing txids, which can take a lot of resources
* Cache WitnessTransaction.toBaseTx, remove println
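The txId and toBaseTx caching above amounts to computing these values at most once per transaction instance instead of on every access. A minimal sketch of the pattern with a simplified transaction type (not the real bitcoin-s Transaction hierarchy):

```scala
import java.security.MessageDigest
import scodec.bits.ByteVector

// Simplified transaction that caches its double-SHA256 id instead of
// recomputing it on every access.
final case class SimpleTransaction(bytes: ByteVector) {

  // lazy val: computed at most once per instance, then reused
  lazy val txId: ByteVector = {
    val sha256 = MessageDigest.getInstance("SHA-256")
    ByteVector(sha256.digest(sha256.digest(bytes.toArray)))
  }
}
```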
* remove BlockTest as it didn't capture the bug
* Remove lazy evaluation of bytes in {Transaction, TransactionInput, TransactionOutput}
* use ByteVector.concat in BytesUtil.writeCmpctSizeUInt()
* Move cmpct.bytes into ByteVector.concat()
* Remove unnecessary lazy byteSize
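The ByteVector.concat change builds the length-prefixed serialization in one pass instead of via repeated appends. Below is an illustrative take on writing a Bitcoin CompactSize (varint) prefix that way; it shows the encoding, not the exact bitcoin-s BytesUtil implementation:

```scala
import scodec.bits.{ByteOrdering, ByteVector}

// Encode a Bitcoin CompactSize length prefix followed by the payload,
// joining the pieces with a single ByteVector.concat.
def writeCompactSizePrefixed(payload: ByteVector): ByteVector = {
  val len = payload.length
  val prefix =
    if (len < 0xfdL) ByteVector(len.toByte)
    else if (len <= 0xffffL)
      ByteVector(0xfd.toByte) ++ ByteVector.fromLong(len, 2, ByteOrdering.LittleEndian)
    else if (len <= 0xffffffffL)
      ByteVector(0xfe.toByte) ++ ByteVector.fromLong(len, 4, ByteOrdering.LittleEndian)
    else
      ByteVector(0xff.toByte) ++ ByteVector.fromLong(len, 8, ByteOrdering.LittleEndian)
  ByteVector.concat(Vector(prefix, payload))
}
```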
* Add test case for 3a8dd04bc1f8179d0b85c8e1a1e89d058833ae64a9a8c3681da3ca329297beb1
* Fix bug where we were classifying scripts as MultiSigScriptPubKey that did not have maxSigs public keys in the Script
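In a bare multisig script, OP_m <pubkey_1> ... <pubkey_n> OP_n OP_CHECKMULTISIG, the OP_n value declares how many public keys precede it, and the fix above rejects scripts whose actual pubkey count does not match that declaration. An illustrative version of the check using simplified token types rather than bitcoin-s's ScriptToken hierarchy:

```scala
// Simplified script tokens for illustration only.
sealed trait Token
final case class OpNum(n: Int) extends Token                // OP_1 .. OP_16
final case class PushedPubKey(bytes: Vector[Byte]) extends Token
case object OpCheckMultiSig extends Token

// OP_m <pubkeys...> OP_n OP_CHECKMULTISIG is only treated as multisig when
// exactly n public keys appear between OP_m and OP_n.
def isMultiSig(asm: Vector[Token]): Boolean = asm match {
  case OpNum(requiredSigs) +: pubKeys :+ OpNum(maxSigs) :+ OpCheckMultiSig =>
    pubKeys.length == maxSigs &&
      pubKeys.forall(_.isInstanceOf[PushedPubKey]) &&
      requiredSigs <= maxSigs
  case _ => false
}
```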
* Remove redundant comment
* Get test case set up for tx 1c1f50e
* Fix bug in OP_CHECKLOCKTIMEVERIFY.isValidAsm(), check that the ScriptNumber is less than or equal to 5 bytes in size
* Fix bug in OP_CHECKSEQUENCEVERIFY.isValidAsm(), check that the ScriptNumber is less than or equal to 5 bytes in size
* Fix bug to check <= rather than <
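Per BIP65/BIP112, the argument to OP_CHECKLOCKTIMEVERIFY and OP_CHECKSEQUENCEVERIFY is evaluated as a script number of at most 5 bytes, and the follow-up fix makes the bound inclusive (<= 5 rather than < 5). A standalone illustration of the size check and of script-number decoding, independent of the bitcoin-s ScriptNumber type:

```scala
import scodec.bits.ByteVector

// BIP65/BIP112: the numeric argument to OP_CLTV / OP_CSV may be at most
// 5 bytes when interpreted as a script number. Note the bound is <= 5.
def isValidLocktimeConstant(scriptNumBytes: ByteVector): Boolean =
  scriptNumBytes.size <= 5

// Decode a little-endian, sign-magnitude script number (illustrative).
def decodeScriptNum(bytes: ByteVector): Long = {
  if (bytes.isEmpty) 0L
  else {
    val unsigned = bytes.toArray.zipWithIndex.foldLeft(0L) {
      case (acc, (b, i)) => acc | ((b & 0xffL) << (8 * i))
    }
    val signBit = 0x80L << (8 * (bytes.size - 1))
    if ((unsigned & signBit) != 0) -(unsigned & ~signBit) else unsigned
  }
}
```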
* Revert Constants.scala, ScriptNumberUtil.scala, remove superfluous 'return'
* Try specifying the GraalVM download URL to fetch a modern native image
try explicitly stating version #
Add tgz+
try using simple OS to get things working
Try runner.os
* Add comment for where to find new jdks in the future
* Move NodeState back to node module
* Refactor peerWithServicesDataMap into NodeState
* More usage of state.peerDataMap in stream
* Fix log message
* Fix compile
* Move PeerFinder into NodeState
* WIP: Move PeerFinder init into NeutrinoNode.start()
* Get things mostly working again after rebase
* Fix bug in handling of headers message where we wouldn't transition to DoneSyncing if a peer sent us 0 headers
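The headers fix boils down to: when a peer answers getheaders with zero headers, we are caught up with that peer and must leave the header-sync state rather than wait for more headers. A schematic transition with hypothetical state names; the real NodeState ADT carries more variants and data:

```scala
// Hypothetical, trimmed-down node states for illustration.
sealed trait SyncState
case object HeaderSync extends SyncState
case object DoneSyncing extends SyncState

final case class BlockHeaderLite(hash: String, prevHash: String)

// If a peer sends zero headers while we are header-syncing, we are caught up
// and must transition to DoneSyncing instead of staying stuck in HeaderSync.
def handleHeadersMessage(state: SyncState,
                         headers: Vector[BlockHeaderLite]): SyncState = {
  state match {
    case HeaderSync if headers.isEmpty => DoneSyncing
    case HeaderSync                    => HeaderSync // keep requesting more headers
    case DoneSyncing                   => DoneSyncing
  }
}
```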
* Move SendToPeer into stream
* scalafmt
* Empty commit to run CI
* Re-add DataMessageHandlerTest
* Re-enable disconnecting portion of NeutrinoNodeTest
* Empty commit to run CI
* Remove disconnection part of test again
* Empty commit to re-run CI
* Empty commit to re-run CI
* Fix bug where number needed to be interpreted as a UInt32 rather than Int32 by the ScriptInterpreter in the case of OP_CSV
* Add static test vector, fix another occurrence of bug
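BIP112 compares the OP_CHECKSEQUENCEVERIFY argument against the input's nSequence, an unsigned 32-bit field; reading it as a signed Int32 turns values with bit 31 set (such as the BIP68 disable flag) into negative numbers and breaks the comparison. A small illustration of the difference, independent of the ScriptInterpreter:

```scala
// nSequence value with the BIP68 disable flag (bit 31) set.
val sequenceBits: Int = 0x80000000

// Signed interpretation (Int32): negative, which breaks locktime comparisons.
val asInt32: Long = sequenceBits.toLong                 // -2147483648

// Unsigned interpretation (UInt32): the value the protocol intends.
val asUInt32: Long = sequenceBits.toLong & 0xffffffffL  // 2147483648

assert(asInt32 < 0 && asUInt32 == 2147483648L)
```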
* Get ChainApi.nextBlockHeaderBatchRange() tests passing
* Get all tests passing in chainTest/test
* Get all tests passing
* Add BlockHeaderDAO.getBlockchainFrom(header,startHeight), use it for filter marker computation
* Add carve out for IBD when syncing filter headers to avoid loading entire block header chain into memory
* Cleanup
* Fix inconsistencies between ChainApi.{nextBlockHeaderBatchRange, nextFilterHeaderBatchRange}
* Fix bug when requesting FilterSyncMarker for filter headers outside of our in memory blockchain's range
* Rework buildNHeaders to be faster, move it to ChainUnitTest
* Don't initiate disconnect logic from bitcoind, it's flaky for some reason
* Fix bug where we weren't switching sync to another peer on disconnect for the case where we were waiting for disconnection
* Empty commit to run CI
* Empty commit to run CI
* Add NodeShuttingDown
* Add NodeShutdown to NodeStreamMessage, use it in PeerManager.stop()
* Add guard to NeutrinoNode.stop() to check whether the isStarted flag is set when stop() is called; if we aren't started, don't attempt to destroy the akka stream
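A sketch of the start/stop guard described in the last item, using an AtomicBoolean and placeholder tear-down comments; the real NeutrinoNode manages an akka stream and returns its own types:

```scala
import java.util.concurrent.atomic.AtomicBoolean
import scala.concurrent.Future

// Illustrative node with a guarded stop(): only tear down resources that
// start() actually created.
class GuardedNode {
  private val isStarted = new AtomicBoolean(false)

  def start(): Future[GuardedNode] = {
    isStarted.set(true)
    // ... materialize the underlying stream here ...
    Future.successful(this)
  }

  def stop(): Future[GuardedNode] = {
    if (isStarted.compareAndSet(true, false)) {
      // ... shut down the materialized stream here ...
      Future.successful(this)
    } else {
      // stop() called without a prior start(): nothing to destroy
      Future.successful(this)
    }
  }
}
```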