Create a wrapper struct for rapid gossip sync that can be passed to
BackgroundProcessor's start method, allowing it to only start pruning
the network graph upon rapid gossip sync's completion.
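A minimal sketch of the idea, assuming hypothetical `RapidGossipSync` and `GossipSync` types with a simple `prune_allowed()` check; this is not the actual LDK API.

```rust
use std::sync::atomic::{AtomicBool, Ordering};

/// Stand-in for a rapid gossip sync handle that records when the initial sync
/// snapshot has been applied.
pub struct RapidGossipSync {
    initial_sync_complete: AtomicBool,
}

impl RapidGossipSync {
    pub fn new() -> Self {
        Self { initial_sync_complete: AtomicBool::new(false) }
    }
    pub fn mark_complete(&self) {
        self.initial_sync_complete.store(true, Ordering::Release);
    }
}

/// The wrapper handed to the background processor's start method.
pub enum GossipSync<'a> {
    /// Plain P2P gossip: pruning is always allowed.
    P2P,
    /// Rapid gossip sync: only prune once the initial sync has completed.
    Rapid(&'a RapidGossipSync),
}

impl<'a> GossipSync<'a> {
    /// Consulted by the background processor before pruning the network graph.
    pub fn prune_allowed(&self) -> bool {
        match self {
            GossipSync::P2P => true,
            GossipSync::Rapid(rgs) => rgs.initial_sync_complete.load(Ordering::Acquire),
        }
    }
}
```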
The main loop of the background processor has this line:
`peer_manager.process_events(); // Note that this may block on ChannelManager's locking`
which does, indeed, sometimes block waiting on the `ChannelManager`
to finish whatever it's doing. Specifically, it's the only place in
the background processor loop that we block waiting on the
`ChannelManager`, so if the `ChannelManager` is relatively busy, we
may end up being blocked there most of the time.
This should be fine, except today we had a user whose node was
particularly slow in processing some channel updates, resulting in
the background processor being blocked there (as expected). Then,
when the channel updates were completed (and persisted) the next
thing the background processor did was hand the user events to
process, creating yet more channel updates. Ultimately, the user's
node crashed before finishing the event processing. This left us
with an updated monitor on disk and an outdated manager, and they
lost the channel on startup.
Here we simply move the above quoted line to after the normal event
processing, ensuring the next thing we do after blocking on
`ChannelManager` locks is persist the manager, prior to event
handling.
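A rough sketch of the reordered loop, using stand-in traits (`ChannelManagerLike`, `PeerManagerLike`, `PersisterLike`) rather than the real background-processor types:

```rust
trait ChannelManagerLike {
    fn process_pending_events(&self);
    fn needs_persistence(&self) -> bool;
}
trait PeerManagerLike {
    /// May block on the ChannelManager's locks.
    fn process_events(&self);
}
trait PersisterLike {
    fn persist_manager(&self) -> std::io::Result<()>;
}

fn run<CM: ChannelManagerLike, PM: PeerManagerLike, P: PersisterLike>(
    channel_manager: &CM,
    peer_manager: &PM,
    persister: &P,
    stop: &std::sync::atomic::AtomicBool,
) -> std::io::Result<()> {
    use std::sync::atomic::Ordering;
    while !stop.load(Ordering::Acquire) {
        // Handle pending events first; this may itself generate new channel
        // updates that need persisting.
        channel_manager.process_pending_events();
        // The potentially long-blocking call now runs *after* event handling,
        // so once it returns the very next thing we do is persist the manager
        // rather than hand the user more events to process.
        peer_manager.process_events();
        if channel_manager.needs_persistence() {
            persister.persist_manager()?;
        }
    }
    Ok(())
}
```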
Instead of creating a separate trait for persisting the NetworkGraph, use and rename the existing ChannelManagerPersister to handle both the ChannelManager and the NetworkGraph. persist_graph is then called on removal of stale channels and on exit.
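The combined persister might look roughly like the following, with illustrative signatures and stand-in types rather than the actual trait definition:

```rust
// Stand-in types for illustration only.
pub struct ChannelManager;
pub struct NetworkGraph;

/// The renamed persister handles both objects.
pub trait Persister {
    fn persist_manager(&self, channel_manager: &ChannelManager) -> std::io::Result<()>;
    /// Called when stale channels are removed from the graph and again on
    /// background-processor exit.
    fn persist_graph(&self, network_graph: &NetworkGraph) -> std::io::Result<()>;
}
```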
Because many lightning nodes can take quite some time to respond to
pings, the five second ping timer can sometimes cause spurious
disconnects even though a peer is online. However, in part as a
response to mobile users where a connection may be lost as result
of only a short time with the app in a "paused" state, we had a
rather aggressive ping time to ensure we would disconnect quickly.
However, since we now just use a fixed time for the "went to
sleep" detection, we can somewhat increase the ping timer. We still
want to be fairly aggressive to avoid sending HTLCs to a peer that
is offline, but the tradeoff between spurious disconnections and
stuck payments likely doesn't need to be quite as aggressive.
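Purely illustrative constants (the relaxed value below is a placeholder, not necessarily the value the code actually uses):

```rust
/// The previous, rather aggressive ping interval that could cause spurious
/// disconnects of slow-to-respond peers.
pub const OLD_PING_TIMER_SECS: u64 = 5;
/// A somewhat relaxed ping interval, now that "went to sleep" detection uses
/// its own fixed time (placeholder value).
pub const PING_TIMER_SECS: u64 = 10;
```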
In the sample client (and likely other downstream users), event
processing may block on slow operations (e.g. Bitcoin Core RPCs)
and ChannelManager persistence may take some time. This should be
fine, except that we consider this a case of possible backgrounding
and disconnect all of our peers when it happens.
Instead, we here avoid considering event processing time in the
time between PeerManager events.
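A small sketch of the timing change, with a hypothetical `SUSPEND_THRESHOLD` and closure parameters standing in for the real event-processing and peer-management calls:

```rust
use std::time::{Duration, Instant};

/// Illustrative threshold: if this much wall-clock time passes while we expect
/// to be sleeping, assume the host was suspended.
const SUSPEND_THRESHOLD: Duration = Duration::from_secs(60);

fn iteration(
    process_events: impl Fn(),        // event handling; may block on slow RPCs
    process_peer_events: impl Fn(),   // PeerManager pass
    disconnect_all_peers: impl Fn(),
    sleep: impl Fn(Duration),
) {
    // Event handling (and ChannelManager persistence) may legitimately take a
    // long time; it should not count toward the "were we suspended?" check.
    process_events();

    // Start the clock only after event processing, so the measured gap covers
    // just the sleep between PeerManager passes.
    let awake_at = Instant::now();
    sleep(Duration::from_millis(100));
    if awake_at.elapsed() > SUSPEND_THRESHOLD {
        // Only a genuine host suspend (not slow event processing) lands here.
        disconnect_all_peers();
    }
    process_peer_events();
}
```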
We update the `Channel::update_time_counter` field (which is copied
into `ChannelUpdate::timestamp`) only when the channel is
initialized or closes, and when a new block is connected. However,
if a peer disconnects or reconnects, we may wish to generate
`ChannelUpdate` updates in between new blocks. In such a case, we
need to make sure the `timestamp` field is newer than any previous
updates' `timestamp` fields, which we do here by simply
incrementing it when the channel status is changed.
As a side effect of this we have to update
`test_background_processor` to ensure it eventually succeeds even
if the serialization of the `ChannelManager` changes after the test
begins.
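Conceptually, the bump looks something like this sketch, where the field and method names are illustrative rather than the actual `Channel` internals:

```rust
struct Channel {
    /// Copied into `ChannelUpdate::timestamp`; normally advanced when the
    /// channel is initialized or closed and when a new block is connected.
    update_time_counter: u32,
    peer_connected: bool,
}

impl Channel {
    fn set_peer_connected(&mut self, connected: bool) {
        if self.peer_connected != connected {
            self.peer_connected = connected;
            // No new block has arrived to advance the counter, so bump it here
            // to keep `timestamp` strictly newer than any previous update's.
            self.update_time_counter += 1;
        }
    }
}
```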
I realized on my own node that I don't have any visibility into how
long a monitor or manager persistence call takes, potentially
blocking other operations. This change makes that much clearer by adding
a relevant log_trace!() print immediately before and immediately
after persistence.
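Roughly, the added logging simply brackets the persistence call; the `log_trace!` macro below is a local stand-in and the persister closure is hypothetical:

```rust
// Local stand-in for a tracing macro.
macro_rules! log_trace {
    ($($arg:tt)*) => { println!($($arg)*) };
}

fn persist_with_logging(persist: impl Fn() -> std::io::Result<()>) -> std::io::Result<()> {
    log_trace!("Persisting ChannelManager...");
    let res = persist();
    log_trace!("Done persisting ChannelManager.");
    res
}
```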
Scorer uses time to determine how much to penalize a channel after a
failure occurs. Parameterizing it by time cleans up the code such that
no-std support is in a single AlwaysPresent struct, which implements the
Time trait. Time is implemented for std::time::Instant when std is
available.
This parameterization also allows for deterministic testing since a
clock could be devised to advance forward as needed.
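A sketch of what the parameterization enables; `Time`, `AlwaysPresent`, and the decay rule below are illustrative reconstructions rather than the exact items, and a test clock that advances on demand could implement `Time` for deterministic tests:

```rust
use std::time::Duration;

/// Abstraction over a clock so the scorer can work without std and be tested
/// deterministically.
pub trait Time {
    fn now() -> Self;
    fn elapsed(&self) -> Duration;
}

/// With std available, the monotonic clock backs the trait.
impl Time for std::time::Instant {
    fn now() -> Self { std::time::Instant::now() }
    fn elapsed(&self) -> Duration { std::time::Instant::elapsed(self) }
}

/// no-std stand-in: time never advances, so failure penalties never decay.
pub struct AlwaysPresent;
impl Time for AlwaysPresent {
    fn now() -> Self { AlwaysPresent }
    fn elapsed(&self) -> Duration { Duration::from_secs(0) }
}

/// A scorer generic over its clock.
pub struct Scorer<T: Time> {
    last_failure: Option<T>,
    base_penalty_msat: u64,
}

impl<T: Time> Scorer<T> {
    pub fn note_channel_failure(&mut self) {
        self.last_failure = Some(T::now());
    }

    pub fn channel_penalty_msat(&self) -> u64 {
        match &self.last_failure {
            // Illustrative decay rule: halve the penalty after an hour.
            Some(t) if t.elapsed() > Duration::from_secs(3600) => self.base_penalty_msat / 2,
            Some(_) => self.base_penalty_msat,
            None => 0,
        }
    }
}
```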
NetworkGraph is owned by NetGraphMsgHandler, but DefaultRouter requires
a reference to it. Introduce shared ownership to NetGraphMsgHandler so
that both can use the same NetworkGraph.
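A minimal sketch of the shared-ownership arrangement using `Arc`; the struct contents are illustrative:

```rust
use std::sync::Arc;

pub struct NetworkGraph;

pub struct NetGraphMsgHandler {
    network_graph: Arc<NetworkGraph>,
}

pub struct DefaultRouter {
    network_graph: Arc<NetworkGraph>,
}

fn main() {
    let graph = Arc::new(NetworkGraph);
    // The gossip handler and the router reference the same graph.
    let _handler = NetGraphMsgHandler { network_graph: Arc::clone(&graph) };
    let _router = DefaultRouter { network_graph: Arc::clone(&graph) };
}
```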
Upon receiving a PaymentPathFailed event, the failing payment may be
retried on a different path. To avoid using the channel responsible for
the failure, a scorer should be notified of the failure before being
used to find a new route.
Add a payment_path_failed method to routing::Score and call it in
InvoicePayer's event handler. Introduce a LockableScore parameterization
to InvoicePayer so the scorer is locked only once before calling
find_route.
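A simplified sketch of the flow, where the `Scorer`, `LockableScore`, and `find_route` below are illustrative stand-ins rather than the actual API:

```rust
use std::collections::HashMap;
use std::sync::{Mutex, MutexGuard};

/// Illustrative scorer: tracks a penalty per short channel id.
pub struct Scorer {
    penalties_msat: HashMap<u64, u64>,
}

impl Scorer {
    pub fn payment_path_failed(&mut self, short_channel_id: u64) {
        // Penalize the channel blamed for the failure so a retry avoids it.
        *self.penalties_msat.entry(short_channel_id).or_insert(0) += 1_000;
    }
}

/// A shared, lockable wrapper so the event handler and the router can use the
/// same scorer, locking it only once per `find_route` call.
pub struct LockableScore(Mutex<Scorer>);

impl LockableScore {
    pub fn lock(&self) -> MutexGuard<'_, Scorer> {
        self.0.lock().unwrap()
    }
}

/// Sketch of the InvoicePayer event-handling flow for PaymentPathFailed.
fn handle_payment_path_failed(scorer: &LockableScore, failing_scid: u64) {
    // Notify the scorer of the failure before finding a new route.
    scorer.lock().payment_path_failed(failing_scid);
    // When retrying, lock the scorer exactly once around route finding.
    let locked = scorer.lock();
    let _route = find_route(&locked);
}

fn find_route(_scorer: &Scorer) -> Option<Vec<u64>> {
    // Route finding would consult the per-channel penalties here.
    None
}
```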