Commit Graph

391 Commits

Olaoluwa Osuntokun
555de44d9f Revert "Merge pull request #4895 from wpaulino/disallow-premature-chan-updates"
This reverts commit 6e6384114c, reversing
changes made to 98ea433271.
2021-02-09 19:55:45 -08:00
Conner Fromknecht
b1fee734ec
discovery/sync_manager: remove unneeded markGraphSyncing
AFAICT it's not possible to flip back from being synced_to_chain, so we
remove the underlying call that could reflect this. The method is moved
into the test file since it's still used to test correctness of other
portions of the flow.
2021-01-29 00:19:48 -08:00
Conner Fromknecht
e42301dee2
lntest: call markGraphSynced from gossipSyncer
Rather than performing this call in the SyncManager, we give each
gossipSyncer the ability to mark the first sync completed. This permits
pinned syncers to contribute towards the rpc-level synced_to_graph
value, allowing the value to be true after the first pinned syncer or
regular syncer completes. Unlike regular syncers, pinned syncers can
proceed in parallel, possibly decreasing the waiting time if consumers
rely on this field before proceeding to load their application.
2021-01-29 00:19:48 -08:00
Conner Fromknecht
fcd5cb625a
config: expose gossip.pinned-syncers for conf
The pinned syncer set is exposed as a comma-separated list of pubkeys.
2021-01-29 00:19:47 -08:00
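Purely as illustration of the option above, an lnd.conf-style snippet might look like this; the pubkeys are placeholders, not real nodes:

    ; Keep a dedicated gossip syncer pinned to these peers for the lifetime
    ; of the connection (comma-separated pubkeys).
    gossip.pinned-syncers=<pubkey1>,<pubkey2>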
Conner Fromknecht
340414356d
discovery: perform initial historical sync for pinned peers 2021-01-29 00:19:47 -08:00
Conner Fromknecht
2f0d56d539
discovery: add support for PinnedSyncers
A pinned syncer is an ActiveSyncer that is configured to always remain
active for the lifetime of the connection. Pinned syncers do not count
towards the total NumActiveSyncer count, which only covers the syncers
that are rotated periodically.

This feature allows nodes to more tightly synchronize their routing
tables by ensuring they are always receiving gossip from a distinguished
subset of peers.
2021-01-29 00:19:47 -08:00
Conner Fromknecht
9e932f2a64
discovery/sync_manager: Pause/Resume HistoricalSyncTicker
This gives each initial historical syncer an equal amount of time before
being rotated, even if some fail.
2021-01-29 00:19:47 -08:00
Conner Fromknecht
ef0cd82c1f
discovery/sync_manager: make setHistoricalSyncer closure 2021-01-29 00:19:46 -08:00
Conner Fromknecht
72fbd1283b
discovery/sync_manager: break out IsGraphSynced check 2021-01-29 00:19:46 -08:00
Conner Fromknecht
7c6aa20bd8
discovery: handle err for linter 2021-01-29 00:19:46 -08:00
Wilmer Paulino
7ef1f3f636
discovery: use source of ann upon confirmed channel ann batch
We do this instead of using the source of the AnnounceSignatures
message, as we filter out the source when broadcasting any
announcements, leading to the remote node not receiving our channel
update. Note that this is done more for the sake of correctness and to
address a flake within the integration tests, as channel updates are
sent directly and reliably to channel counterparts.
2021-01-06 13:16:44 -08:00
Wilmer Paulino
00d4e92362
discovery: prevent rebroadcast of premature channel updates
As similarly done with premature channel announcements, we'll no longer
allow premature channel updates to be rebroadcast once mature. This is
no longer necessary as channel announcements that we're not aware of are
usually broadcast to us with their accompanying channel updates.
2021-01-06 12:52:41 -08:00
Wilmer Paulino
871a6f1690
discovery: prevent rebroadcast of previously premature announcements 2020-12-08 15:18:08 -08:00
Wilmer Paulino
a4f33ae63c
discovery: adhere to proper channel chunk splitting for ReplyChannelRange 2020-12-08 15:18:07 -08:00
Wilmer Paulino
c5fc7334a4
discovery: limit NumBlocks to best known height for outgoing QueryChannelRange
This is done to ensure we don't receive replies for channels in blocks
not currently known to us, which we wouldn't be able to process.
2020-12-08 15:18:06 -08:00
Olaoluwa Osuntokun
13a2598ded
discovery: add new option to toggle gossip rate limiting
In this commit, we add a new option to toggle gossip rate limiting. This
new option can be useful in contexts that require near instant
propagation of gossip messages like integration tests.
2020-11-30 16:38:56 -08:00
Olaoluwa Osuntokun
7e298f1434
Merge pull request #3367 from cfromknecht/batched-graph-updates
Batched graph updates
2020-11-25 18:40:40 -08:00
Wilmer Paulino
791ba3eb50
discovery: rate limit incoming channel updates
This change was largely motivated by an increase in disk usage as a
result of channel update spam. With an in-memory graph, this would've
gone mostly undetected except for the increased bandwidth usage, which
this change doesn't aim to solve yet. To minimize the effect on disks, we
begin to rate limit channel updates in two ways. Keep-alive updates,
those which only increase their timestamps to signal liveness, are now
limited to one per lnd's rebroadcast interval (current default of 24H).
Non-keep-alive updates are now limited to one per block per direction.
2020-11-25 15:38:08 -08:00
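A rough sketch of the two limits described above; the type and field names are assumptions for illustration, not the actual gossiper code:

    package sketch

    import "time"

    // edgeKey identifies one direction of a channel.
    type edgeKey struct {
        chanID    uint64
        direction int
    }

    // updateLimiter enforces the two limits: keep-alive updates at most once
    // per rebroadcast interval, other updates at most once per block per
    // direction.
    type updateLimiter struct {
        rebroadcastInterval time.Duration
        lastKeepAlive       map[edgeKey]time.Time
        lastUpdateHeight    map[edgeKey]uint32
    }

    func newUpdateLimiter(interval time.Duration) *updateLimiter {
        return &updateLimiter{
            rebroadcastInterval: interval,
            lastKeepAlive:       make(map[edgeKey]time.Time),
            lastUpdateHeight:    make(map[edgeKey]uint32),
        }
    }

    func (l *updateLimiter) allow(key edgeKey, keepAlive bool,
        now time.Time, height uint32) bool {

        if keepAlive {
            // At most one keep-alive per rebroadcast interval.
            if now.Sub(l.lastKeepAlive[key]) < l.rebroadcastInterval {
                return false
            }
            l.lastKeepAlive[key] = now
            return true
        }

        // At most one non-keep-alive update per block per direction.
        if last, ok := l.lastUpdateHeight[key]; ok && last == height {
            return false
        }
        l.lastUpdateHeight[key] = height
        return true
    }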
Conner Fromknecht
f8154c65c5
discovery/gossiper: increase validation barrier size to 1000
This allows for 1000 different validation operations to proceed
concurrently. Now that we are batching operations at the db level, the
average number of outstanding requests will be higher since the commit
latency has increased. To compensate, we allow for more outstanding
requests to keep the gossiper busy while batches are constructed.
2020-11-24 16:39:47 -08:00
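The barrier sizing above can be pictured as a counting semaphore; a minimal sketch (not lnd's actual ValidationBarrier) follows:

    package sketch

    // barrier caps how many validation operations may run at once; with
    // 1000 slots, up to 1000 validations proceed concurrently while later
    // arrivals block until a slot frees up.
    type barrier chan struct{}

    func newBarrier(slots int) barrier {
        return make(barrier, slots)
    }

    func (b barrier) acquire() { b <- struct{}{} }
    func (b barrier) release() { <-b }

    // validateAsync runs validate on its own goroutine once a slot is free.
    func validateAsync(b barrier, validate func()) {
        b.acquire()
        go func() {
            defer b.release()
            validate()
        }()
    }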
Conner Fromknecht
fb9218d100
discovery/gossiper: channel announcements can't be outdated 2020-11-24 16:38:14 -08:00
Andras Banki-Horvath
d89f51d1d0
multi: add reset closure to kvdb.Update
As with kvdb.View, this commit adds a reset closure to the
kvdb.Update call in order to be able to reset external state if the
underlying db backend needs to retry the transaction.
2020-11-05 17:57:12 +01:00
Andras Banki-Horvath
2a358327f4
multi: add reset closure to kvdb.View
This commit adds a reset() closure to the kvdb.View function which will
be called before each retry (including the first) of the view
transaction. The reset() closure can be used to reset external state
(eg slices or maps) where the view closure puts intermediate results.
2020-11-05 17:57:12 +01:00
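A sketch of the reset-closure pattern the two commits above describe; the retry loop and signatures are assumptions, the point being that external state filled in by the transaction closure is cleared before every attempt (including the first):

    package sketch

    // viewWithReset mimics the View/Update pattern above: reset is invoked
    // before every attempt, so any external state the transaction closure
    // fills in can be cleared if the backend retries.
    func viewWithReset(maxAttempts int, f func() error, reset func()) error {
        var err error
        for i := 0; i < maxAttempts; i++ {
            reset()
            if err = f(); err == nil {
                return nil
            }
        }
        return err
    }

    // collect shows the caller side: intermediate results land in a slice
    // that the reset closure truncates before each retry, avoiding
    // duplicated entries.
    func collect(read func() ([]string, error)) ([]string, error) {
        var results []string
        err := viewWithReset(3,
            func() error {
                items, err := read()
                if err != nil {
                    return err
                }
                results = append(results, items...)
                return nil
            },
            func() { results = results[:0] },
        )
        return results, err
    }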
yyforyongyu
ef38b12fda
multi: use timeout field in dialer 2020-09-16 11:50:04 +08:00
eugene
49d8f04197 multi: migrate instances of mockSigner to the mock package
This commit moves all localized instances of mock implementations of
the Signer interface to the lntest/mock package. This allows us to
remove a lot of code and have it housed under a single interface in
many cases.
2020-08-28 15:43:51 -04:00
Conner Fromknecht
cff52f7622
Merge pull request #4352 from matheusdtech/discovery-lock-premature
discovery: correctly lock premature messages
2020-06-26 22:50:09 -07:00
Brian Mancini
28931390ff discovery: prevent endBlock overflow in replyChanRangeQuery
Modifies syncer.replyChanRangeQuery method to use the LastBlockHeight
method on the query. LastBlockHeight safely calculates the ending
block height and prevents an overflow of start_block + num_blocks.

Prior to this change, query messages that had a start_block +
num_blocks that overflows uint32_max would return zero results in the
reply message.

Tests are added to cover the fix and ensure proper start and end values
are supplied to the channel graph filter.
2020-06-18 16:48:09 -04:00
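A minimal sketch of the overflow-safe end-height calculation described above; the helper name mirrors the LastBlockHeight mentioned in the commit message, but this is not the actual implementation:

    package sketch

    import "math"

    // lastBlockHeight returns the last block covered by a query that starts
    // at firstHeight and spans numBlocks, clamping the result so that
    // firstHeight+numBlocks cannot wrap around uint32.
    func lastBlockHeight(firstHeight, numBlocks uint32) uint32 {
        if numBlocks == 0 {
            return firstHeight
        }

        // Do the arithmetic in 64 bits, then clamp to the uint32 range.
        end := uint64(firstHeight) + uint64(numBlocks) - 1
        if end > math.MaxUint32 {
            return math.MaxUint32
        }
        return uint32(end)
    }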
Matheus Degiovani
44f83731bc discovery: Correctly lock premature announcements
This reworks the locking behavior of the Gossiper so that a race
condition on channel updates and block notifications doesn't cause any
loss of messages.

This fixes an issue that manifested mostly as flakes on itests during
WaitForNetworkChannelOpen calls.

The previous behavior allowed ChannelUpdates to be missed if they
happened concurrently to block notifications. The
processNetworkAnnouncement call would check for the current block height,
then lock the gossiper and add the msg to the prematureAnnouncements
list. New blocks would trigger an update to the current block height
then a lock and check of the aforementioned list.

However, especially during itests, it could happen that the missing lock
before checking the height could cause a race condition if the following
sequence of events happened:

- A new ChannelUpdate message was received and started processing on a
  separate goroutine
- The isPremature() call was made and verified that the ChannelUpdate
  was in fact premature
- The goroutine was scheduled out
- A new block started processing in the gossiper. It updated the block
  height, asked and was granted the lock for the gossiper and verified
  there were zero premature announcements. The lock was released.
- The goroutine processing the ChannelUpdate asked for the gossiper lock
  and was granted it. It added the ChannelUpdate in the
  prematureAnnouncements list. This can never be processed now.

The way to fix this behavior is to ensure that both the isPremature
checks done inside processNetworkAnnouncement and the best block updates
are made inside the same critical section (i.e. while holding the same
lock), so that they can't both check and update the
prematureAnnouncements list concurrently.
2020-06-05 15:58:33 -03:00
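A condensed sketch of that fix: the prematurity check, the premature queue, and the best-block update all happen inside one critical section (names are illustrative, not the gossiper's actual fields):

    package sketch

    import "sync"

    type gossiper struct {
        mu         sync.Mutex
        bestHeight uint32
        premature  map[uint32][]string // height -> queued announcements
    }

    func newGossiper() *gossiper {
        return &gossiper{premature: make(map[uint32][]string)}
    }

    // processAnnouncement checks prematurity and queues the message under
    // the same lock, so a concurrent block notification cannot slip in
    // between the height check and the enqueue.
    func (g *gossiper) processAnnouncement(msg string, msgHeight uint32) bool {
        g.mu.Lock()
        defer g.mu.Unlock()

        if msgHeight > g.bestHeight {
            g.premature[msgHeight] = append(g.premature[msgHeight], msg)
            return false // queued until the chain catches up
        }
        return true // mature, safe to process now
    }

    // onNewBlock updates the height and drains newly mature announcements
    // under the same lock used by processAnnouncement.
    func (g *gossiper) onNewBlock(height uint32) []string {
        g.mu.Lock()
        defer g.mu.Unlock()

        g.bestHeight = height
        var ready []string
        for h, msgs := range g.premature {
            if h <= height {
                ready = append(ready, msgs...)
                delete(g.premature, h)
            }
        }
        return ready
    }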
Matheus Degiovani
ccc8f8e48f discovery: Log new blocks
This should help debug some flaky itests.
2020-06-05 13:31:40 -03:00
Conner Fromknecht
d0d2ca403d
multi: rename ReadTx to RTx 2020-05-26 18:20:37 -07:00
Roei Erez
ae2c37e043 Ensure chain notifier is started before accessed.
The use case comes from the RPC layer, which is ready before the
chain notifier that is used in the sub-server.
2020-04-30 12:54:33 +03:00
Conner Fromknecht
0f94b8dc62
multi: return input.Signature from SignOutputRaw 2020-04-10 14:27:35 -07:00
Conner Fromknecht
ec784db511
multi: remove returned error from WipeChannel
The linter complains about not checking the return value from
WipeChannel in certain places. Instead of checking we simply remove the
returned error because the in-memory modifications cannot fail.
2020-04-02 17:39:29 -07:00
Conner Fromknecht
4e793497c8
Merge pull request #2669 from cfromknecht/use-netann-in-discovery
netann+discovery+server: consolidate network announcements to netann pkg
2020-03-23 13:38:06 -07:00
Conner Fromknecht
92456d063d
discovery: remove unused updateChanPolicies struct 2020-03-19 13:43:57 -07:00
Conner Fromknecht
5c2fc4a2d6
discovery/gossiper: use netann pkg for signing channel updates 2020-03-19 13:43:39 -07:00
Olaoluwa Osuntokun
ace7a78494
discovery: convert to use new kvdb abstraction 2020-03-18 19:35:07 -07:00
Conner Fromknecht
089ac647d8
discovery/chan_series: use netann.ChannelUpdateFromEdge helper 2020-03-17 16:24:25 -07:00
Conner Fromknecht
7b0d564692
discovery: move remotePubFromChanInfo to gossiper, remove utils 2020-03-17 16:24:10 -07:00
Conner Fromknecht
6a813e3433
discovery/multi: move CreateChanAnnouncement to netann 2020-03-17 16:23:54 -07:00
Conner Fromknecht
d82aacbdc5
discovery/utils: use netann.ChannelUpdateFromEdge 2020-03-17 16:23:37 -07:00
Conner Fromknecht
df44d19936
discovery/multi: move SignAnnouncement to netann 2020-03-17 16:23:01 -07:00
Wilmer Paulino
57b69e3b1a
discovery: check ChainHash in QueryChannelRange messages
If the provided ChainHash in a QueryChannelRange message does not match
that of our current chain, then we should send a blank response, rather
than reply with channels for the wrong chain.
2020-01-17 11:51:09 -08:00
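A hedged sketch of the behavior above, with stand-in types rather than the real lnwire messages: if the query's chain hash doesn't match ours, the reply carries no channels.

    package sketch

    type queryChannelRange struct {
        chainHash              [32]byte
        firstHeight, numBlocks uint32
    }

    type replyChannelRange struct {
        chainHash              [32]byte
        firstHeight, numBlocks uint32
        shortChanIDs           []uint64
    }

    // replyToRangeQuery returns a blank reply when the query targets a
    // chain we don't follow, instead of answering with our own chain's
    // channels.
    func replyToRangeQuery(ourChain [32]byte, q queryChannelRange,
        channelsInRange func(first, num uint32) []uint64) replyChannelRange {

        reply := replyChannelRange{
            chainHash:   q.chainHash,
            firstHeight: q.firstHeight,
            numBlocks:   q.numBlocks,
        }
        if q.chainHash != ourChain {
            // Unknown chain: reply with no channels.
            return reply
        }
        reply.shortChanIDs = channelsInRange(q.firstHeight, q.numBlocks)
        return reply
    }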
Wilmer Paulino
1bacdfb41e
discovery: interpret block range from ReplyChannelRange messages
We move from our legacy way of interpreting ReplyChannelRange messages
which was incorrect. Previously, we'd rely on the Complete field of the
ReplyChannelRange message to determine when our peer had sent all of
their replies. Now, we properly adhere to the specification by
interpreting the block ranges of these messages as intended.

Due to the large number of nodes deployed with the previous method, we
still maintain the legacy behavior and detect when we are communicating
with such nodes, so that we are still able to sync with them for
backwards compatibility.
2020-01-06 14:03:13 -08:00
Wilmer Paulino
d688e13d35
discovery: remove unnecessary test check
It's not possible to send another reply once all replies have been sent
without another request. The purpose of the check is also done within
another test, TestGossipSyncerReplyChanRangeQueryNoNewChans, so it can
be removed from here.
2020-01-06 14:02:31 -08:00
Wilmer Paulino
c7c0853531
discovery: cover requested range in ReplyChannelRange messages
In order to properly adhere to the spec, when handling a
QueryChannelRange message, we must reply with a series of
ReplyChannelRange messages that, when consumed together, cover the
entirety of the requested block range.
2020-01-06 14:00:15 -08:00
Wilmer Paulino
1f781ea431
discovery: use inclusive range in FilterChannelRange
FilterChannelRange takes an inclusive range, so it was possible for us
to return channels for an additional block that was not requested.
2020-01-06 14:00:14 -08:00
Johan T. Halseth
c04ef68cc3
Merge pull request #3826 from arik-so/wrong_chain_error_fix
fix order in wrong chain error message
2019-12-12 09:46:31 +01:00
Arik Sosman
e83df875ad
fix wrong chain error message 2019-12-11 17:43:24 -08:00
Olaoluwa Osuntokun
6a9b96122d
discovery: properly set FirstBlockHeight and NumBlocks in responses
In this commit we fix a bug in `lnd` that could cause other
implementations which implement a strict version of the spec to
disconnect when trying to sync their channel graph using the gossip
query feature. Before this commit, we would echo the fields of the
`QueryChannelRange` request back in the response, causing some clients
to reject the response as the `FirstBlockHeight` and `NumBlocks` fields
would be identical for each chunk of the response.

In order to remedy this, we now properly set these two fields with each
returned chunk. Note that even after this commit, we keep our existing
behavior surrounding the `Complete` field as is. Otherwise, current
`lnd` clients which rely on this field (rather than the two
aforementioned fields) wouldn't be able to properly detect when a set of
responses to their query was "complete".

Partially fixes #3728.
2019-12-10 17:05:58 -08:00
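The per-chunk fields described above could be computed roughly as follows; how the channels are grouped into chunks is assumed here, the point being that consecutive chunks tile the queried range instead of all echoing the original query:

    package sketch

    // chunkSpan describes the block range one ReplyChannelRange chunk
    // covers. Successive chunks tile the full queried range: the first
    // chunk starts at the query's first height, each later chunk starts
    // right after the previous one ends, and the final chunk runs to the
    // end of the queried range.
    type chunkSpan struct {
        firstBlockHeight uint32
        numBlocks        uint32
    }

    // chunkSpans takes the last block height contained in each chunk of
    // channels and derives the per-chunk FirstBlockHeight/NumBlocks fields.
    func chunkSpans(queryFirst, queryNum uint32,
        chunkEndHeights []uint32) []chunkSpan {

        spans := make([]chunkSpan, 0, len(chunkEndHeights))
        start := queryFirst
        queryEnd := queryFirst + queryNum - 1

        for i, end := range chunkEndHeights {
            if i == len(chunkEndHeights)-1 {
                // The last chunk covers everything up to the query's end.
                end = queryEnd
            }
            spans = append(spans, chunkSpan{
                firstBlockHeight: start,
                numBlocks:        end - start + 1,
            })
            start = end + 1
        }
        return spans
    }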
Conner Fromknecht
5e27b5022c
multi: remove LocalFeatures and GlobalFeatures 2019-11-08 05:32:00 -08:00
Conner Fromknecht
16318c5a41
multi: merge local+global features from remote peer 2019-11-08 05:31:47 -08:00
cryptagoras
0ad6c4748f
discovery/gossiper: fix minor typo
It was missing a space
"addingto waiting batch" -> "adding to waiting batch"
2019-10-15 14:16:03 +03:00
Olaoluwa Osuntokun
3f8526a0ca
peer+lnpeer: add new methods to expose local+global features for lnpeer interface 2019-09-25 18:26:01 -07:00
Wilmer Paulino
04a7cda3d5
Merge pull request #3534 from alrs/discovery-test-improvements
discovery: Goroutine Test Fixes and Linting
2019-09-25 16:12:30 -07:00
Lars Lehtonen
0cae1e69ab discovery: error string lint fixes
discovery: lint fix to remove append loop
2019-09-25 18:42:38 +00:00
Lars Lehtonen
58c23074d1 discovery: use error channels with test goroutines 2019-09-25 18:41:42 +00:00
Joost Jager
c80feeb4b3
routing+discovery: extract local channel manager
The policy update logic that resided partly in the gossiper and
partly in the rpc server is extracted into its own object.

This prepares for additional validation logic to be added for policy
updates that would otherwise make the gossiper heavier.

It is also a small first step towards separation of our own channel data
from the rest of the graph.
2019-09-23 13:07:08 +02:00
Joost Jager
4b2eb9cb81
discovery: push max htlc migration further up the call tree
As a preparation for making the gossiper less responsible for validating
and supplementing local channel policy updates, this commit moves the
on-the-fly max htlc migration up the call tree. The plan for a follow up
commit is to move it out of the gossiper completely for local channel
updates, so that we don't need to return a list of final applied policies
anymore.
2019-09-23 13:07:06 +02:00
Joost Jager
339ff357d1
channeldb: invalidate channel signature cache on update 2019-09-23 13:07:04 +02:00
Joost Jager
5090bb27ad
discovery: remove redundant signature setting
The signature is retrieved but never used, and is then overwritten with
a new signature.
2019-09-23 13:07:02 +02:00
Conner Fromknecht
1d41d4d666
multi: move WaitPredicate, WaitNoError, WaitInvariant to lntest/wait 2019-09-19 12:46:29 -07:00
Johan T. Halseth
92123c603d
gossiper: retransmit self NodeAnnouncement 2019-09-16 10:54:42 +02:00
Johan T. Halseth
24004fcb37
gossiper+server: define SelfNodeAnnouncement 2019-09-16 10:54:42 +02:00
Johan T. Halseth
e36d15582c
discovery/gossiper test: add TestRetransmit
This commit adds a test that ensures outdated announcements are
retransmitted when the RetransmitTicker ticks.
2019-09-16 10:54:38 +02:00
Johan T. Halseth
70d63abe9f
discovery/test: set global test timestamp 2019-09-16 10:23:01 +02:00
Johan T. Halseth
8b9fd039ec
discovery/gossiper test: remove mockGraphSource.SelfEdges 2019-09-16 10:23:01 +02:00
Johan T. Halseth
e201fbe396
discovery+server: RetransmitDelay->RetransmitTicker
Also let retransmitStaleChannels take a timestamp, to make it easier to
test.
2019-09-16 10:23:01 +02:00
Johan T. Halseth
74c9551564
discovery+server: make RebroadcastInterval part of config 2019-09-16 10:23:00 +02:00
Johan T. Halseth
3d8f194670
discovery/gossiper: extract adding nodeAnnouncement into method 2019-09-16 10:23:00 +02:00
Joost Jager
3d7de2ad39
multi: remove dead code 2019-09-10 17:21:59 +02:00
Valentine Wallace
8ce7f82da0 discovery+switch: apply zero forwarding policy updates in-memory as well as on disk
In this commit, we fix a bug where if a user updates a forwarding policy to be
zero, the update will be applied to the policy correctly on-disk, but not
in-memory.

We solve this issue by having the gossiper return the list of on-disk updated
policies and passing these policies to the switch, so the switch can assume
that zero-valued fields are intentional and not just uninitialized.
2019-09-09 23:39:44 -07:00
Wilmer Paulino
2e122a807b
Merge pull request #3406 from cfromknecht/die-spew
pilot+discovery: die spew
2019-08-22 15:33:56 -07:00
Wilmer Paulino
e15e524637
discovery: prevent broadcast of anns received during initial graph sync
There's no need to broadcast these as we assume that online nodes have
already received them. For nodes that were offline, they should receive
them as part of their initial graph sync.
2019-08-21 12:06:33 -07:00
Conner Fromknecht
e2a53f71d0
pilot+discovery: remove info spews 2019-08-20 14:13:05 -07:00
Wilmer Paulino
c405e89197
discovery: check non-nil syncer upon historical sync tick 2019-08-13 18:23:05 -07:00
Wilmer Paulino
977c139f3c
discovery: handle graph synced status after stalled initial historical sync
This ensures that the graph synced status is marked true at some point
once a historical sync has completed. Before this commit, a stalled
historical sync could cause us to never mark the graph as synced.
2019-08-06 17:56:55 -07:00
Wilmer Paulino
af4234f680
discovery: allow the SyncManager to report whether the graph is synced 2019-08-06 17:56:54 -07:00
Conner Fromknecht
8cebddfe50
discovery/gossiper: thread IgnoreHistoricalFilters to sync manager 2019-07-30 17:25:47 -07:00
Conner Fromknecht
a3e690e253
discovery/sync_manager: init all syncers with IgnoreHistoricalFilters 2019-07-30 17:25:31 -07:00
Conner Fromknecht
35a2de23a3
discovery/syncer: add flag to prevent historical gossip filter dump 2019-07-30 17:25:15 -07:00
Conner Fromknecht
a4097c113a
discovery/gossiper: remove unused SynchronizeNode method 2019-07-15 14:11:18 -07:00
Conner Fromknecht
933e723ec7
Merge pull request #3178 from federicobond/once-refactor
multi: replace manual CAS with sync.Once in several more modules
2019-07-08 20:33:44 -07:00
Olaoluwa Osuntokun
9c957193cf
discovery: remove retries from DNS based SampleNodeAddrs, allow down seeds
In this commit, we modify the `SampleNodeAddrs` method to no longer
retry itself. Instead, we'll now leave this task to the caller of
this method. Additionally, we'll no longer return with an error if we
can't hit a particular seed. Instead, we'll log the error and move onto
the next seed. Finally, we'll also no longer require that the DNS seed
has a secondary seed in order to support a wider array of DNS seeds.
2019-06-28 16:10:47 -07:00
Wilmer Paulino
67132d4ee3
discovery: set source of node announcement broadcast to belonging node
We do this to ensure the node announcement propagates to our channel
counterparty. At times, the node announcement does not propagate to them
when opening our first channel due to a race condition between
IsPublicNode and processing announcement signatures. This isn't
necessary for channel updates and announcement signatures as we send
those to our channel counterparty directly through the reliable sender.
2019-06-20 13:59:37 -07:00
Federico Bond
0a9141763e multi: replace manual CAS with sync.Once in several more modules 2019-06-12 09:37:26 -03:00
Federico Bond
9bd3055fb8 discovery,fundingmanager: avoid serialization in NotifyWhenOnline 2019-06-04 16:36:21 -03:00
Conner Fromknecht
f8287b0080
Merge pull request #2985 from johng/sub-batch
Broadcast gossip announcements in sub batches
2019-05-28 17:05:06 -07:00
Johan T. Halseth
6ba6982ae7
discovery/sync_manager_test: add TestSyncManagerHistoricalSyncOnReconnect
TestSyncManagerHistoricalSyncOnReconnect tests that the sync manager will
re-trigger a historical sync when a new peer connects after a historical
sync has completed, but we have lost all peers.
2019-05-24 11:05:30 +02:00
Johan T. Halseth
526486ae24
discovery/sync_manager: restart historical sync on first connected peer
To handle the case where we have been without peers, and get a new
connection, we reset the historical scan booleans when the first active
syncer is connected to trigger another historical sync.
2019-05-24 11:05:29 +02:00
John Griffith
cf2885dd4a discovery: test we calculate and generate correct sub batch sizes 2019-05-23 10:51:25 +01:00
John Griffith
9eb5fe9587 discovery: split gossiper announcement into sub batches 2019-05-23 10:51:25 +01:00
Johan T. Halseth
c4415f0400
Merge pull request #3044 from cfromknecht/spelling-fixes
multi: fix spelling mistakes
2019-05-07 08:50:36 +02:00
Conner Fromknecht
17b2140cb5
multi: fix spelling mistakes 2019-05-04 15:35:37 -07:00
Olaoluwa Osuntokun
985902be27
Merge pull request #2916 from cfromknecht/split-syncer-query-reply
discovery: make gossip replies synchronous
2019-04-29 17:40:13 -07:00
Johan T. Halseth
ee257fd0eb
multi: move Route to sub-pkg routing/route 2019-04-29 14:52:33 +02:00
Conner Fromknecht
5251ebe117
discovery/syncer_test: fix benign off-by-one in DelayDOS test
Prior to this change, the numQueryResponses that we calculated would be
one more than what we actually wanted since it didn't account for the
initial QueryChannelRange msg. This resulted in the test sending one
extra delayed query than was configured. This doesn't fundamentally
impact the test, but does make what happens in the test more reflective
of the configuration.
2019-04-26 20:05:23 -07:00
Conner Fromknecht
bf4543e2bd
discovery/syncer: make gossip sends synchronous
This commit makes all replies in the gossip syncer synchronous, meaning
that they will wait for each message to be successfully written to the
remote peer before attempting to send the next. This helps throttle
messages the remote peer has requested, preventing unintended
disconnects when the remote peer is slow to process messages. This
change also helps reduce congestion in the peer by forcing the syncer to
buffer the messages instead of dumping them into the peer's queue.
2019-04-26 20:05:10 -07:00
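In effect, the change above boils down to waiting on each write before issuing the next; a rough sketch with an assumed sendMessage callback (sync=true meaning "block until written"):

    package sketch

    // sendRepliesSync writes each reply and waits for the result before
    // sending the next one, so a slow peer naturally throttles the syncer
    // instead of having every reply dumped into its outgoing queue at once.
    func sendRepliesSync(replies []string,
        sendMessage func(msg string, sync bool) error) error {

        for _, reply := range replies {
            // Block until this message is written (or fails) before moving
            // on to the next reply.
            if err := sendMessage(reply, true); err != nil {
                return err
            }
        }
        return nil
    }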
Conner Fromknecht
23d10336c2
discovery/syncer: separate query + reply handlers
This commit creates a distinct replyHandler, completely isolating the
requesting state machine from the processing of queries from the remote
peer. Before, the two were interlaced, and the syncer could only reply
to messages in certain states. Now the two will be completely separated,
which is a preliminary step to making the replies synchronous (as
otherwise we would be blocking our own requesting state machine).

With this change, the channelGraphSyncer of each peer will drive the
replyHandler of the other. The two can now operate independently, or
even be spun up conditionally depending on advertised support for gossip
queries, as shown below:

          A                                 B
 channelGraphSyncer ---control-msg--->
                                        replyHandler
 channelGraphSyncer <--control-msg----
           gossiper <--gossip-msgs----

                    <--control-msg---- channelGraphSyncer
       replyHandler
                    ---control-msg---> channelGraphSyncer
                    ---gossip-msgs---> gossiper
2019-04-26 20:03:14 -07:00
Wilmer Paulino
d68842ee9e
discovery: queue active syncers until initial historical sync signal
In this commit, we begin to queue any active syncers until the initial
historical sync has completed. We do this to ensure we can properly
handle any new channel updates at tip. This is required for fresh nodes
that are syncing the channel graph for the first time. If we begin
accepting updates at tip while the initial historical sync is still
ongoing, then we risk not processing certain updates since we've yet to
learn of the channels themselves.
2019-04-24 13:20:57 -07:00
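One way to model the queueing described above is a one-shot signal that gates the start of active syncers; a sketch with assumed names (the real SyncManager differs):

    package sketch

    import "sync"

    // syncManager sketches the gating: active syncers queue up until the
    // initial historical sync signal fires.
    type syncManager struct {
        mu             sync.Mutex
        historicalDone chan struct{} // closed once the initial sync completes
        pendingActive  []func()      // active syncers waiting to start
    }

    func newSyncManager() *syncManager {
        return &syncManager{historicalDone: make(chan struct{})}
    }

    // queueOrStart starts an active syncer immediately if the initial
    // historical sync already finished, and queues it otherwise.
    func (m *syncManager) queueOrStart(start func()) {
        m.mu.Lock()
        select {
        case <-m.historicalDone:
            m.mu.Unlock()
            start()
            return
        default:
        }
        m.pendingActive = append(m.pendingActive, start)
        m.mu.Unlock()
    }

    // markHistoricalSyncComplete signals completion and releases every
    // queued active syncer.
    func (m *syncManager) markHistoricalSyncComplete() {
        m.mu.Lock()
        close(m.historicalDone)
        pending := m.pendingActive
        m.pendingActive = nil
        m.mu.Unlock()

        for _, start := range pending {
            start()
        }
    }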
Wilmer Paulino
07136a5bc2
discovery: handle initial historical sync disconnection
In this commit, we add logic to handle a peer with whom we're performing
an initial historical sync disconnecting. This is required to ensure we
get as much of the graph as possible when starting a fresh node. It will
also prove useful in ensuring we do not get stalled once we prevent active
GossipSyncers from starting until the initial historical sync has
completed.
2019-04-24 13:20:55 -07:00