In the next commit, we'll use the new `node_counter`s to remove a
`HashMap` from the router, using a `Vec` to store all our per-node
information. In order to make finding entries in that `Vec` cheap,
here we store the source and destination `node_counter`s in
`ChannelInfo`, giving us the counters for both ends of a channel
without doing a second `HashMap` lookup.
These counters are simply a unique number describing each node.
They have no specific meaning, but start at 0 and count up, with
counters being reused after a node has been deleted.
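In other words, allocation is a simple free-list. A minimal sketch of
the idea (names here are illustrative, not the actual graph internals):

```rust
// Sketch of the counter allocation described above. Not the real
// network-graph code; names and types are illustrative.
struct NodeCounters {
    next_counter: u32,
    // Counters freed when their node was deleted; reused first.
    removed_counters: Vec<u32>,
}

impl NodeCounters {
    fn allocate(&mut self) -> u32 {
        // Prefer reusing a freed counter, else mint the next one.
        if let Some(counter) = self.removed_counters.pop() {
            counter
        } else {
            let counter = self.next_counter;
            self.next_counter += 1;
            counter
        }
    }

    fn free(&mut self, counter: u32) {
        self.removed_counters.push(counter);
    }
}
```

Because counters stay dense, per-node data can later live in a `Vec`
indexed directly by counter.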
This includes when building TxCreationKeys, as well as for open_channel
and accept_channel messages. Note: this is only for places where we are
retrieving the current per commitment point, which excludes
channel_reestablish.
For quite some time, LDK has force-closed channels if the peer
sends us a feerate update which is below our `FeeEstimator`'s
concept of a channel lower-bound. This is intended to ensure that
channel feerates are always sufficient to get our commitment
transaction confirmed on-chain if we do need to force-close.
However, we've never checked our channel feerate regularly - if a
peer is offline (or just uninterested in updating the channel
feerate) and the prevailing feerates on-chain go up, we'll simply
ignore it and allow our commitment transaction to sit around with a
feerate too low to get confirmed.
Here we rectify this oversight by force-closing channels with stale
feerates, checking after each block. However, because fee
estimators are often buggy and force-closures piss off users, we
only do so rather conservatively. Specifically, we only force-close
if a channel's feerate is below the lowest `FeeEstimator`-provided
lower-bound seen across the last day.
Further, because fee estimators are often especially buggy on
startup (and because peers haven't had a chance to update the
channel feerates yet), we don't force-close channels until we have
a full day of feerate lower-bound history.
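Conceptually, the per-block check looks something like the following
sketch (the constant and names are assumptions, not the actual
`ChannelManager` code):

```rust
// Sketch of tracking the feerate lower-bound over the last day and
// deciding staleness. Illustrative only.
const BLOCKS_PER_DAY: usize = 144;

struct FeerateLowerBoundHistory {
    // One `FeeEstimator` lower-bound sample per connected block.
    samples: Vec<u32>,
}

impl FeerateLowerBoundHistory {
    fn block_connected(&mut self, current_lower_bound_sat_per_kw: u32) {
        self.samples.push(current_lower_bound_sat_per_kw);
        if self.samples.len() > BLOCKS_PER_DAY {
            self.samples.remove(0);
        }
    }

    fn should_force_close(&self, channel_feerate_sat_per_kw: u32) -> bool {
        // Require a full day of history: fee estimators are often
        // especially buggy on startup.
        if self.samples.len() < BLOCKS_PER_DAY {
            return false;
        }
        // Only close if the channel is below the *lowest* bound seen.
        let day_min = self.samples.iter().copied().min().unwrap();
        channel_feerate_sat_per_kw < day_min
    }
}
```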
This conservatism should keep the incidence of the new
force-closures low, but some increase in force-closures is still
expected, with the extent depending on the user's `FeeEstimator`.
Fixes #993
Closures due to feerate disagreements are a specific closure reason
which node admins can understand and tune their config (in the form
of their `FeeEstimator`) to avoid, so having a separate
`ClosureReason` for them is useful.
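As a sketch of why this helps (the variant and field names here are
assumptions about the shape such a reason might take, not the exact
API):

```rust
// Hypothetical shape of a feerate-disagreement closure reason; not
// necessarily the exact variant LDK exposes.
enum ClosureReason {
    PeerFeerateTooLow { peer_feerate_sat_per_kw: u32, required_feerate_sat_per_kw: u32 },
}

fn alert_admin(reason: &ClosureReason) {
    match reason {
        ClosureReason::PeerFeerateTooLow { peer_feerate_sat_per_kw, required_feerate_sat_per_kw } => {
            // Actionable for an admin: loosen the `FeeEstimator`'s
            // lower-bound, or accept closures of low-feerate channels.
            println!(
                "channel closed: peer feerate {} below required {}",
                peer_feerate_sat_per_kw, required_feerate_sat_per_kw
            );
        }
    }
}
```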
In the next commit we'll add a second field to
`ChannelError::Close` so here we prep by converting existing calls
to the constructor function, which is almost a full-file sed.
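Roughly (the constructor name is an assumption):

```rust
// Sketch: call sites construct via a function rather than the enum
// variant directly, so a new field can later be added in one place.
enum ChannelError {
    Close(String),
}

impl ChannelError {
    fn close(msg: String) -> Self {
        ChannelError::Close(msg)
    }
}

// Before: Err(ChannelError::Close("...".to_owned()))
// After:  Err(ChannelError::close("...".to_owned()))
```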
When calling MessageRouter::create_blinded_paths, ChannelManager was
including disconnected peers. Filter peers such that only connected ones
are included. Otherwise, the resulting BlindedPath may not work.
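The filtering amounts to something like this sketch (the peer-state
representation is simplified here):

```rust
use bitcoin::secp256k1::PublicKey;

// Keep only connected peers as candidate introduction nodes for the
// blinded path; illustrative, not the actual `ChannelManager` code.
fn connected_peers(per_peer_state: &[(PublicKey, bool)]) -> Vec<PublicKey> {
    per_peer_state
        .iter()
        .filter(|(_, is_connected)| *is_connected)
        .map(|(node_id, _)| *node_id)
        .collect()
}
```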
When an offer is short-lived, the likelihood of a channel used in a
compact blinded path going away is low. Require passing the absolute
expiry of an offer to ChannelManager::create_offer_builder so that it
can be used to determine whether or not a compact blinded path
should be used.
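The decision might look like the following sketch (the threshold and
names are assumptions):

```rust
use core::time::Duration;

// Offers expiring within a day are considered short-lived; only those
// use compact, SCID-based paths. Illustrative values.
const MAX_SHORT_LIVED_ABSOLUTE_EXPIRY: Duration = Duration::from_secs(60 * 60 * 24);

fn should_use_compact_paths(absolute_expiry: Duration, now: Duration) -> bool {
    // A compact path's SCID may stop being usable if the channel
    // closes, an acceptable risk only for short-lived offers.
    absolute_expiry.saturating_sub(now) <= MAX_SHORT_LIVED_ABSOLUTE_EXPIRY
}
```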
Use the same criteria for creating blinded paths for refunds as well.
There's no need to save space when creating reply paths since they are
sent in onion messages rather than encoded in QR codes. Use normal
blinded paths
for these instead as they are less likely to become invalid in case of
channel closure.
Using compact blinded paths isn't always necessary or desirable. For
instance, reply paths are communicated via onion messages where space
isn't at a premium, unlike in QR codes. Additionally, long-lived paths could
become invalid if the channel associated with the SCID is closed.
Refactor MessageRouter::create_blinded_paths into two methods: one for
compact blinded paths and one for normal blinded paths.
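The resulting trait shape is roughly as follows (signatures heavily
simplified; the real methods also take a secp context and richer
per-peer data):

```rust
// Placeholder types so the sketch stands alone; the real ones come
// from `bitcoin::secp256k1` and LDK's blinded-path module.
pub struct PublicKey;
pub struct BlindedPath;

pub trait MessageRouter {
    /// Full-pubkey paths: larger, but independent of any channel
    /// staying open. Suited to reply paths and long-lived offers.
    fn create_blinded_paths(
        &self, recipient: PublicKey, peers: Vec<PublicKey>,
    ) -> Result<Vec<BlindedPath>, ()>;

    /// Compact SCID-based paths: smaller, for QR-code-bound
    /// short-lived offers, at the risk of the channel closing.
    fn create_compact_blinded_paths(
        &self, recipient: PublicKey, peers: Vec<PublicKey>,
    ) -> Result<Vec<BlindedPath>, ()>;
}
```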
An upcoming change to the MessageRouter trait will require reusing
DefaultMessageRouter::create_blinded_paths. Move the code to a utility
function to facilitate this.
Our "what is the channel's current state" structs have slowly
grown to be rather nontrivial, and now include eight structs with
many fields. Thus, it makes sense to pull them out of
`ln::channelmanager` and into their own module.
This also makes things easier for the C bindings which don't
support `pub use` from a private module.
This removes watched_outputs' dependency on
per_commitment_claimable_outpoints. This is required since we will
no longer have direct access to per_commitment_claimable_outpoints
once we start publishing PersistClaimInfo as part of #3049.
This never really made a lot of sense from an API perspective, but
was required to avoid handing the background processor an explicit
`OnionMessenger`, which we are now doing. Thus, we can simply drop
these bounds as unnecessary.
This adds an `OnionMessenger::process_pending_events_async`
mirroring the equivalent in `ChannelManager`. However, unlike the one in
`ChannelManager`, this processes the events in parallel by spawning
all futures and using the new `MultiFuturePoller`.
Because `OnionMessenger` just generates a stream of messages to
store/fetch, we first process all the events to store new messages,
`await` them, then process all the events to fetch stored messages,
ensuring reordering shouldn't result in lost messages (unless we
race with a peer disconnection, which could happen anyway).
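A sketch of the two-phase processing (names illustrative;
`futures::future::join_all` stands in here for the new
`MultiFuturePoller`):

```rust
use futures::future::join_all;

enum Event {
    OnionMessageIntercepted,   // payload elided in this sketch
    OnionMessagePeerConnected, // payload elided in this sketch
}

async fn process_pending_events_async<F, Fut>(events: Vec<Event>, handler: F)
where
    F: Fn(Event) -> Fut,
    Fut: core::future::Future<Output = ()>,
{
    let mut store_futs = Vec::new();
    let mut fetch_futs = Vec::new();
    for event in events {
        match event {
            Event::OnionMessageIntercepted => store_futs.push(handler(event)),
            Event::OnionMessagePeerConnected => fetch_futs.push(handler(event)),
        }
    }
    // Store all freshly-intercepted messages and wait for completion
    // before fetching stored messages for newly-connected peers.
    join_all(store_futs).await;
    join_all(fetch_futs).await;
}
```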
In the next commit, `OnionMessenger` events are handled in parallel
using rust async. When we do that, we'll want to handle
`OnionMessageIntercepted` events prior to
`OnionMessagePeerConnected` ones.
While we'd generally prefer to handle all events in the order they
were generated, if we want to handle them in parallel, we don't
want an `OnionMessageIntercepted` event to start being processed,
then handle an `OnionMessagePeerConnected` prior to the first
completing. This could cause us to store a freshly-intercepted
message for a peer in a DB that was just wiped because the peer
is now connected.
This does run the risk of processing an `OnionMessagePeerConnected`
event prior to an `OnionMessageIntercepted` event (because a peer
connected, then disconnected, then we received a message for that
peer, all before any events were handled), but that is somewhat less
likely, and discarding a message in a rare race is better than
leaving a message lying around undelivered.
Thus, here, we store `OnionMessenger` events in separate `Vec`s
which we can pull from in message-type order.
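Sketched, the queues look like this (field names illustrative):

```rust
use std::sync::Mutex;

struct InterceptedMsg; // payload elided
struct PeerConnected;  // payload elided

struct PendingEvents {
    // Always drained and fully handled before `peer_connecteds`, so a
    // freshly-intercepted message can't be written to a peer's store
    // after that store was wiped on reconnection.
    intercepted_msgs: Mutex<Vec<InterceptedMsg>>,
    peer_connecteds: Mutex<Vec<PeerConnected>>,
}
```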