Adds the boilerplate needed for PeerManager and OnionMessenger to work
together, with some corresponding docs and misc updates mostly due to the
PeerManager public API changing.
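As a rough sketch of the wiring (the real MessageHandler and PeerManager carry more generics and trait bounds than shown, so treat this as illustrative), the message-handler bundle handed to PeerManager gains a slot for an onion message handler, which an OnionMessenger can fill:

```rust
// Simplified stand-ins, not the actual LDK types: the point is only that the
// handler bundle now has an onion_message_handler slot alongside the channel
// and gossip handlers.
struct MessageHandler<CM, RM, OM> {
    chan_handler: CM,           // e.g. a ChannelManager
    route_handler: RM,          // e.g. a P2PGossipSync
    onion_message_handler: OM,  // e.g. an OnionMessenger (the new piece)
}

struct PeerManager<CM, RM, OM> {
    message_handler: MessageHandler<CM, RM, OM>,
}

impl<CM, RM, OM> PeerManager<CM, RM, OM> {
    fn new(message_handler: MessageHandler<CM, RM, OM>) -> Self {
        PeerManager { message_handler }
    }
}

fn main() {
    // Unit types stand in for the real handlers.
    let _peer_manager = PeerManager::new(MessageHandler {
        chan_handler: (),
        route_handler: (),
        onion_message_handler: (),
    });
}
```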
If a user restores from a backup that they know is stale, they'd
like to force-close all of their channels (or at least the ones
they know are stale) *without* broadcasting the latest state,
asking their peers to do so instead. This simply adds methods to do
so, renaming the existing `force_close_channel` and
`force_close_all_channels` methods to disambiguate further.
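A sketch of what the disambiguated API looks like from a caller's point of view; the method names follow the rename described above, but the exact signatures here are illustrative rather than authoritative:

```rust
// Illustrative only; the real ChannelManager methods take channel ids (and,
// in later versions, the counterparty node id) and return a Result. The point
// is the split: one entry point still broadcasts the latest local commitment,
// the other asks the peer to broadcast theirs instead.
struct ChannelManager;

impl ChannelManager {
    /// The old `force_close_channel` behaviour under a more explicit name:
    /// broadcast our latest local commitment transaction.
    fn force_close_broadcasting_latest_txn(&self, _channel_id: &[u8; 32]) { /* ... */ }

    /// For stale backups: do NOT broadcast our (possibly outdated) state;
    /// send an error message so the peer force-closes with their state.
    fn force_close_without_broadcasting_txn(&self, _channel_id: &[u8; 32]) { /* ... */ }
}

fn main() {
    let manager = ChannelManager;
    let stale_channel_id = [0u8; 32]; // placeholder id
    // Restored from a backup we know is stale: don't broadcast our state.
    manager.force_close_without_broadcasting_txn(&stale_channel_id);
}
```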
BackgroundProcessor can take an optional P2PGossipSync and an optional
RapidGossipSync, but such an API may be easy to misuse. Each has a
reference to a NetworkGraph, which could be different between the two,
but only one is actually used.
Instead, allow passing one object wrapped in a GossipSync enum. Also,
fix a bug where the NetworkGraph is not persisted on shutdown if only a
RapidGossipSync is given.
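A minimal sketch of the idea (the real GossipSync enum is generic over the sync and graph types, so the names below are simplified): one enum replaces the two Option parameters, and a single accessor also covers the NetworkGraph-persistence case that was previously missed when only a RapidGossipSync was supplied:

```rust
// Simplified stand-ins for the real types.
struct NetworkGraph;
struct P2PGossipSync { graph: NetworkGraph }
struct RapidGossipSync { graph: NetworkGraph }

enum GossipSync {
    P2P(P2PGossipSync),
    Rapid(RapidGossipSync),
    None,
}

impl GossipSync {
    // One place to fetch the graph that needs persisting on shutdown, which
    // also covers the RapidGossipSync-only case that was previously missed.
    fn network_graph(&self) -> Option<&NetworkGraph> {
        match self {
            GossipSync::P2P(p2p) => Some(&p2p.graph),
            GossipSync::Rapid(rapid) => Some(&rapid.graph),
            GossipSync::None => None,
        }
    }
}

fn main() {
    let sync = GossipSync::Rapid(RapidGossipSync { graph: NetworkGraph });
    assert!(sync.network_graph().is_some());
}
```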
Instead of implementing EventHandler for P2PGossipSync, implement it on
NetworkGraph. This allows RapidGossipSync to handle events, too, by
delegating to its NetworkGraph.
P2PGossipSync logs before delegating to NetworkGraph in its
EventHandler. In order to share this handling with RapidGossipSync,
NetworkGraph needs to take a logger so that it can implement
EventHandler instead.
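A simplified sketch of the delegation (the real types are generic over a Logger deref rather than taking the logger by value, and handle the actual lightning Event type):

```rust
// Stand-in types; only the delegation pattern is the point here.
trait Logger { fn log(&self, msg: &str); }

struct Event; // stand-in for the lightning Event type

struct NetworkGraph<L: Logger> {
    logger: L,
}

impl<L: Logger> NetworkGraph<L> {
    fn handle_event(&self, _event: &Event) {
        // Logging that used to live in P2PGossipSync's EventHandler now
        // happens here, next to the graph update itself.
        self.logger.log("updating network graph from event");
    }
}

struct RapidGossipSync<L: Logger> {
    network_graph: NetworkGraph<L>,
}

impl<L: Logger> RapidGossipSync<L> {
    fn handle_event(&self, event: &Event) {
        // Delegate to the shared NetworkGraph handling.
        self.network_graph.handle_event(event);
    }
}

fn main() {
    struct PrintLogger;
    impl Logger for PrintLogger {
        fn log(&self, msg: &str) { println!("{msg}"); }
    }
    let sync = RapidGossipSync { network_graph: NetworkGraph { logger: PrintLogger } };
    sync.handle_event(&Event);
}
```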
NetGraphMsgHandler implements RoutingMessageHandler to handle gossip
messages defined in BOLT 7 and maintains a view of the network by
updating NetworkGraph. Rename it to P2PGossipSync, which better
describes its purpose, and to contrast with RapidGossipSync.
Create a wrapper struct for rapid gossip sync that can be passed to
BackgroundProcessor's start method, allowing it to only start pruning
the network graph upon rapid gossip sync's completion.
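Roughly, the wrapper plays this role (field and helper names below are illustrative, not the exact LDK API): the background processor can check whether the initial rapid sync has completed before it starts pruning the graph.

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// Simplified stand-ins for the real types.
struct NetworkGraph;
impl NetworkGraph {
    fn remove_stale_channels(&self) { /* prune channels with old updates */ }
}

struct RapidGossipSync {
    network_graph: NetworkGraph,
    initial_sync_complete: AtomicBool,
}

impl RapidGossipSync {
    fn is_initial_sync_complete(&self) -> bool {
        self.initial_sync_complete.load(Ordering::Acquire)
    }
}

// Inside the background processor's prune timer (simplified):
fn maybe_prune(rapid_sync: &RapidGossipSync) {
    if rapid_sync.is_initial_sync_complete() {
        // Only prune once the snapshot has been applied, otherwise we would
        // throw away channels the sync is about to (re)populate.
        rapid_sync.network_graph.remove_stale_channels();
    }
}

fn main() {
    let sync = RapidGossipSync {
        network_graph: NetworkGraph,
        initial_sync_complete: AtomicBool::new(false),
    };
    maybe_prune(&sync); // no-op until the initial sync completes
}
```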
The main loop of the background processor has this line:
`peer_manager.process_events(); // Note that this may block on ChannelManager's locking`
which does, indeed, sometimes block waiting on the `ChannelManager`
to finish whatever it's doing. Specifically, it's the only place in
the background processor loop that we block waiting on the
`ChannelManager`, so if the `ChannelManager` is relatively busy, we
may end up being blocked there most of the time.
This should be fine, except today we had a user whose node was
particularly slow in processing some channel updates, resulting in
the background processor being blocked there (as expected). Then,
when the channel updates were completed (and persisted), the next
thing the background processor did was hand the user events to
process, creating yet more channel updates. Ultimately, the user's
node crashed before finishing the event processing. This left us
with an updated monitor on disk and an outdated manager, and they
lost the channel on startup.
Here we simply move the above quoted line to after the normal event
processing, ensuring the next thing we do after blocking on
`ChannelManager` locks is persist the manager, prior to event
handling.
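A heavily simplified sketch of one loop iteration after the change (the real loop also runs timers, pruning, and shutdown checks; the closure parameters here just stand in for the real calls):

```rust
// After process_events() stops blocking on ChannelManager locks, the very
// next step is persisting the manager; further event handling only happens
// on the next iteration.
fn loop_iteration(
    handle_pending_events: impl Fn(),        // ChannelManager/ChainMonitor events
    peer_manager_process_events: impl Fn(),  // may block on ChannelManager's locking
    manager_needs_persist: impl Fn() -> bool,
    persist_manager: impl Fn(),
) {
    // 1) Hand pending events to the user's event handler.
    handle_pending_events();

    // 2) Moved here from the top of the loop:
    peer_manager_process_events();

    // 3) If the ChannelManager changed, persist it before the next round of
    //    event handling can create further channel updates.
    if manager_needs_persist() {
        persist_manager();
    }
}

fn main() {
    loop_iteration(|| (), || (), || true, || println!("manager persisted"));
}
```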
Instead of creating a separate trait for persisting NetworkGraph, use
and rename the existing ChannelManagerPersister to handle them both.
`persist_graph` is then called on removal of stale channels and on
exit.
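Schematically, the combined persister ends up looking something like this (the real trait is generic over the ChannelManager and NetworkGraph types; the impl below is purely illustrative):

```rust
use std::io;

// Simplified stand-ins for the real types.
struct ChannelManager;
struct NetworkGraph;

trait Persister {
    fn persist_manager(&self, manager: &ChannelManager) -> Result<(), io::Error>;
    fn persist_graph(&self, graph: &NetworkGraph) -> Result<(), io::Error>;
}

struct FilePersister;

impl Persister for FilePersister {
    fn persist_manager(&self, _manager: &ChannelManager) -> Result<(), io::Error> {
        // e.g. serialize the manager and write it to disk atomically
        Ok(())
    }
    fn persist_graph(&self, _graph: &NetworkGraph) -> Result<(), io::Error> {
        // e.g. serialize the graph and write it to disk atomically
        Ok(())
    }
}

fn main() -> Result<(), io::Error> {
    let persister = FilePersister;
    persister.persist_manager(&ChannelManager)?;
    persister.persist_graph(&NetworkGraph)
}
```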
Because many lightning nodes can take quite some time to respond to
pings, the five second ping timer can sometimes cause spurious
disconnects even though a peer is online. However, in part as a
response to mobile users, where a connection may be lost as a result
of only a short time with the app in a "paused" state, we had a
rather aggressive ping time to ensure we would disconnect quickly.
However, since we now just use a fixed time for the "went to
sleep" detection, we can somewhat increase the ping timer. We still
want to be fairly aggressive to avoid sending HTLCs to a peer that
is offline, but the tradeoff between spurious disconnections and
stuck payments likely doesn't need to be quite as aggressive.
In the sample client (and likely other downstream users), event
processing may block on slow operations (e.g. Bitcoin Core RPCs)
and ChannelManager persistence may take some time. This should be
fine, except that we consider this a case of possible backgrounding
and disconnect all of our peers when it happens.
Instead, we now avoid counting event processing time toward the
time between PeerManager events.
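Sketched out, the idea is to time only the idle wait itself when deciding whether we were likely suspended; the names and thresholds below are illustrative, not the actual LDK internals:

```rust
use std::time::{Duration, Instant};

// Illustrative thresholds: a ~100ms wait taking many seconds suggests the
// process was paused/suspended rather than merely busy.
const EXPECTED_WAIT: Duration = Duration::from_millis(100);
const SUSPEND_THRESHOLD: Duration = Duration::from_secs(10);

fn loop_iteration(
    handle_events_and_persist: impl Fn(),
    wait_for_next_update: impl Fn(Duration),
    disconnect_all_peers: impl Fn(),
) {
    // Slow work first; its duration must not count toward the sleep check.
    handle_events_and_persist();

    // Only the wait itself is timed.
    let wait_started = Instant::now();
    wait_for_next_update(EXPECTED_WAIT);
    if wait_started.elapsed() > SUSPEND_THRESHOLD {
        // The short wait took far too long: assume we were backgrounded and
        // that our peers have likely lost the connection anyway.
        disconnect_all_peers();
    }
}

fn main() {
    loop_iteration(
        || std::thread::sleep(Duration::from_millis(5)), // pretend event handling
        |d| std::thread::sleep(d),                       // pretend the idle wait
        || println!("disconnecting all peers"),
    );
}
```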