Closures due to feerate disagreements are a specific closure reason
which admins can understand and tune their config (in the form of
their `FeeEstimator`) to avoid, so having a separate
`ClosureReason` for them is useful.
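As a rough sketch of how a node operator might react to such a
closure when handling events (the `PeerFeerateTooLow` variant name
and its fields are assumptions here, not necessarily the exact shape
added):

```rust
use lightning::events::{ClosureReason, Event};

fn note_closure(event: &Event) {
    if let Event::ChannelClosed { channel_id, reason, .. } = event {
        match reason {
            // Hypothetical variant for feerate disagreements: worth
            // revisiting the FeeEstimator backend or its configured bounds.
            ClosureReason::PeerFeerateTooLow { .. } => {
                eprintln!("Channel {:?} closed over a feerate disagreement", channel_id);
            },
            other => eprintln!("Channel {:?} closed: {:?}", channel_id, other),
        }
    }
}
```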
In the next commit we'll add a second field to
`ChannelError::Close`, so here we prep by converting existing uses
of the variant to go through the constructor function, which is
almost a full-file sed.
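A sketch of why the constructor helps (names assumed from context,
not the actual channel.rs definitions):

```rust
// Illustrative only: once every call site goes through the constructor,
// adding a second field to `Close` means touching one function rather
// than every `Err(ChannelError::Close(..))` in the file.
enum ChannelError {
    Warn(String),
    Close(String), // the second field lands here in the next commit
}

impl ChannelError {
    fn close(msg: String) -> Self {
        ChannelError::Close(msg)
    }
}
```

Call sites then read `Err(ChannelError::close(msg))` rather than
constructing the variant directly.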
When calling MessageRouter::create_blinded_path, ChannelManager was
including disconnected peers. Filter peers such that only connected ones
are included. Otherwise, the resulting BlindedPath may not work.
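A minimal sketch of the filtering idea (simplified types, not
ChannelManager's actual peer-state structures):

```rust
// Only connected peers should be handed to the MessageRouter so the
// resulting BlindedPath's introduction node can actually forward onion
// messages to us.
struct PeerState {
    node_id: [u8; 33],
    is_connected: bool,
}

fn connected_peers(peers: &[PeerState]) -> Vec<[u8; 33]> {
    peers
        .iter()
        .filter(|peer| peer.is_connected)
        .map(|peer| peer.node_id)
        .collect()
}
```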
When an offer is short-lived, the likelihood of a channel used in a
compact blinded path going away is low. Require passing the absolute
expiry of an offer to ChannelManager::create_offer_builder so that it
can be used to determine whether or not a compact blinded path should be
used.
Use the same criteria for creating blinded paths for refunds as well.
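A sketch of the resulting decision, assuming some short-lived
threshold (the constant below is illustrative, not LDK's actual
value):

```rust
use core::time::Duration;

// Assumed threshold for treating an offer as short-lived.
const ASSUMED_SHORT_LIVED_OFFER_EXPIRY: Duration =
    Duration::from_secs(24 * 60 * 60);

fn should_use_compact_paths(
    absolute_expiry: Option<Duration>, now: Duration,
) -> bool {
    match absolute_expiry {
        // No expiry means the offer may live indefinitely, so prefer
        // paths keyed by node id over SCIDs that may stop existing.
        None => false,
        Some(expiry) =>
            expiry.saturating_sub(now) <= ASSUMED_SHORT_LIVED_OFFER_EXPIRY,
    }
}
```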
There's no need to save space when creating reply paths since they are
sent in onion messages rather than in QR codes. Use normal blinded paths
for these instead as they are less likely to become invalid in case of
channel closure.
Using compact blinded paths isn't always necessary or desirable. For
instance, reply paths are communicated via onion messages where space
isn't at a premium, unlike in QR codes. Additionally, long-lived paths could
become invalid if the channel associated with the SCID is closed.
Refactor MessageRouter::create_blinded_paths into two methods: one for
compact blinded paths and one for normal blinded paths.
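A simplified sketch of the split (signatures abbreviated; the
stand-in types below are not LDK's):

```rust
pub struct NodeId(pub [u8; 33]);
pub struct BlindedPath; // stand-in for the real blinded-path type

pub trait MessageRouterSketch {
    /// Full-size paths identifying hops by node id; fine for reply paths
    /// carried inside onion messages where space isn't at a premium.
    fn create_blinded_paths(
        &self, recipient: NodeId, peers: Vec<NodeId>,
    ) -> Result<Vec<BlindedPath>, ()>;

    /// Compact, SCID-based paths for space-constrained uses such as
    /// offers in QR codes; these may become invalid if the referenced
    /// channel closes.
    fn create_compact_blinded_paths(
        &self, recipient: NodeId, peers: Vec<NodeId>,
    ) -> Result<Vec<BlindedPath>, ()>;
}
```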
An upcoming change to the MessageRouter trait will require reusing
DefaultMessageRouter::create_blinded_paths. Move the code to a utility
function to facilitate this.
Our "what is the channel's current state" structs have slowly
grown to be rather nontrivial, and now include eight structs with
many fields. Thus, it makes sense to pull them out of
`ln::channelmanager` and into their own module.
This also makes things easier for the C bindings which don't
support `pub use` from a private module.
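An illustrative layout (file and item names assumed):

```rust
// ln/channel_state.rs: the state structs get their own public module,
// which the C bindings can re-export from directly.
pub struct ChannelDetails { /* ...many fields... */ }
pub struct ChannelCounterparty { /* ...counterparty fields... */ }

// ln/channelmanager.rs keeps existing import paths working:
// pub use crate::ln::channel_state::{ChannelDetails, ChannelCounterparty};
```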
This removes the dependency of watched_outputs on
per_commitment_claimable_outpoints. It is required since we will
no longer have direct access to per_commitment_claimable_outpoints
once we start publishing PersistClaimInfo as part of #3049.
This never really made a lot of sense from an API perspective, but
was required to avoid handing the background processor an explicit
`OnionMessenger`, which we are now doing. Thus, we can simply drop
these bounds as unnecessary.
This adds an `OnionMessenger::process_pending_events_async`
mirroring the method of the same name in `ChannelManager`. However, unlike the one in
`ChannelManager`, this processes the events in parallel by spawning
all futures and using the new `MultiFuturePoller`.
Because `OnionMessenger` just generates a stream of messages to
store/fetch, we first process all the events to store new messages,
`await` them, then process all the events to fetch stored messages,
ensuring reordering shouldn't result in lost messages (unless we
race with a peer disconnection, which could happen anyway).
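A minimal sketch of the two-phase processing (placeholder `Event`
and `handle` types, with the `futures` crate's `join_all` standing
in for `MultiFuturePoller`):

```rust
use futures::future::join_all;

enum Event {
    InterceptedMsg(&'static str),
    PeerConnected(&'static str),
}

async fn handle(_event: Event) {
    // Store or fetch a message, possibly hitting an async KV store.
}

async fn process_pending_events_async(
    intercepted: Vec<Event>, connected: Vec<Event>,
) {
    // Phase one: persist every freshly intercepted message, in parallel.
    join_all(intercepted.into_iter().map(handle)).await;
    // Phase two: only after every store has completed, replay stored
    // messages for peers that have (re)connected.
    join_all(connected.into_iter().map(handle)).await;
}
```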
In the next commit, `OnionMessenger` events are handled in parallel
using rust async. When we do that, we'll want to handle
`OnionMessageIntercepted` events prior to
`OnionMessagePeerConnected` ones.
While we'd generally prefer to handle all events in the order they
were generated, if we want to handle them in parallel, we don't
want an `OnionMessageIntercepted` event to start being processed,
then handle an `OnionMessagePeerConnected` prior to the first
completing. This could cause us to store a freshly-intercepted
message for a peer in a DB that was just wiped because the peer
is now connected.
This does run the risk of processing an `OnionMessagePeerConnected`
event prior to an `OnionMessageIntercepted` event (because a peer
connected, then disconnected, then we received a message for that
peer, all before any events were handled), but that is somewhat
less likely, and discarding a message in a rare race is better than
leaving a message lying around undelivered.
Thus, here, we store `OnionMessenger` events in separate `Vec`s
which we can pull from in message-type-order.
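A sketch of that bookkeeping (field and type names are assumptions,
not `OnionMessenger`'s actual fields):

```rust
use std::sync::Mutex;

struct PendingMessengerEvents {
    // Stand-ins for the real event payloads.
    intercepted_msgs: Mutex<Vec<String>>,
    peer_connecteds: Mutex<Vec<String>>,
}

impl PendingMessengerEvents {
    /// Drains intercepted-message events first so they are always
    /// handled before any peer-connected events queued alongside them.
    fn drain_in_handling_order(&self) -> Vec<String> {
        let mut events: Vec<String> =
            self.intercepted_msgs.lock().unwrap().drain(..).collect();
        events.extend(self.peer_connecteds.lock().unwrap().drain(..));
        events
    }
}
```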
One of the most common first steps in troubleshooting routefinding
issues is to ask for the local channel state to determine what the
available HTLC bounds are. While we log first-hop channel details
when we decline to use them, this doesn't tell us if we have
missing channels, and thus here we log all first-hop channels at
the start.
We also take this opportunity to log the limits that were violated
any time we log that we're not using a channel, rather than only
when it's a first hop.
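Roughly the shape of the new up-front logging (the struct stands in
for ChannelDetails and stderr for the configured Logger):

```rust
struct FirstHop {
    short_channel_id: Option<u64>,
    next_outbound_htlc_limit_msat: u64,
}

fn log_first_hops(first_hops: &[FirstHop]) {
    for hop in first_hops {
        eprintln!(
            "Considering first hop with SCID {:?} and outbound HTLC limit {} msat",
            hop.short_channel_id, hop.next_outbound_htlc_limit_msat,
        );
    }
}
```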