In e06484b0f4, we added specific
handling for outbound-channel initial monitor updates failing -
in such a case we have a counterparty who tried to open a second
channel with the same funding info we just gave them, causing us
to force-close our outbound channel as it shows up as
duplicate-funding. It's largely harmless as it leads to a spurious
force-closure of a channel with a peer doing something absurd,
however it causes the `full_stack_target` fuzzer to fail.
Sadly, in 574c77e7bc, as we were
dropping `PermanentFailure` handling for updates, we accidentally
dropped handling for initial updates as well.
Here we fix the issue (again) and add a test.
We'd previously assumed that LDK would receive
`funding_transaction_generated` prior to our peer learning the txid
and panicked if the peer tried to open a redundant channel to us
with the same funding outpoint.
While this assumption is generally safe, some users may have
out-of-band protocols where they notify their LSP about a funding
outpoint first, or this may be violated in the future with
collaborative transaction construction protocols, e.g. the upcoming
dual-funding protocol.
Currently the channel shutdown sequence has a number of steps which
all the shutdown callsites have to call. Because many shutdown
cases are rare error cases, it's relatively easy to miss a call and
leave users without `Event`s or miss some important cleanup.
One of those steps, calling `issue_channel_close_events`, is rather
easy to remove, as it only generates two events, which can simply
be moved to another shutdown step.
Here we remove `issue_channel_close_events` by moving
`ChannelClosed` event generation into `finish_force_close_channel`.
Currently the channel shutdown sequence has a number of steps which
all the shutdown callsites have to call. Because many shutdown
cases are rare error cases, it's relatively easy to miss a call and
leave users without `Event`s or miss some important cleanup.
One of those steps, calling `issue_channel_close_events`, is rather
easy to remove, as it only generates two events, which can simply
be moved to another shutdown step.
Here we move the first of the two events, `DiscardFunding`, into
`finish_force_close_channel`.
If we promote our channel to `AwaitingChannelReady` after adding
funding info, but still have `MONITOR_UPDATE_IN_PROGRESS` set, we
haven't broadcasted the funding transaction yet and thus should
return values from `unbroadcasted_funding[_txid]` and generate a
`DiscardFunding` event.
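As a minimal standalone sketch of that rule (states, flags, and names
here are simplified illustrations, not the actual LDK internals):

```rust
// Funding is still "unbroadcasted" while the initial monitor update is
// pending, even after the channel was promoted to AwaitingChannelReady.
enum ChannelState { FundingNegotiated, AwaitingChannelReady, ChannelReady }

const MONITOR_UPDATE_IN_PROGRESS: u32 = 1 << 0;

struct Channel {
    state: ChannelState,
    flags: u32,
    funding_txid: Option<[u8; 32]>,
}

impl Channel {
    /// Returns the funding txid if it cannot have been broadcast yet, so
    /// the caller can generate a `DiscardFunding` event on force-close.
    fn unbroadcasted_funding_txid(&self) -> Option<[u8; 32]> {
        match self.state {
            ChannelState::FundingNegotiated => self.funding_txid,
            ChannelState::AwaitingChannelReady
                if self.flags & MONITOR_UPDATE_IN_PROGRESS != 0 =>
            {
                // Promoted past funding, but the initial monitor update
                // hasn't completed, so the funding tx was never broadcast.
                self.funding_txid
            }
            _ => None,
        }
    }
}
```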
62f8669654 added two
`htlc_maximum_msat.unwrap_or`s, but used a default value of 0,
spuriously causing all HTLCs to fail if we don't have an HTLC
maximum value. This should be mostly harmless, but we should fix it
anyway.
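To illustrate the bug class (sketch only, not the actual routing code),
a max-check with a default of 0 rejects every HTLC when the field is
absent, while the fix falls back to something permissive such as the
channel's capacity:

```rust
// Hypothetical helper showing the failure mode described above.
fn htlc_exceeds_max(
    amount_msat: u64, htlc_maximum_msat: Option<u64>, capacity_msat: u64,
) -> bool {
    // Buggy: `htlc_maximum_msat.unwrap_or(0)` makes every amount "exceed"
    // the maximum whenever the value is unset.
    // Fixed: fall back to a permissive default instead, e.g. the capacity:
    amount_msat > htlc_maximum_msat.unwrap_or(capacity_msat)
}
```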
In 4b5db8c3ce, `channelmanager::PendingHTLCRouting` was made
public, exposing a `FinalOnionHopData` field to the world. However,
`FinalOnionHopData` was left crate-private, making the enum
impossible to construct.
There isn't a strong reason for this (even though the
`FinalOnionHopData` API is somewhat confusing, being separated from
the rest of the onion structs), so we expose it here.
To avoid exposing a node's identity in a blinded path, only create
one-hop blinded paths if the node has been announced, and thus has
public channels. Otherwise, there is no way to route a payment to the
node, exposing its identity needlessly.
When constructing blinded payment paths for Bolt12Invoice, delegate to
Router::create_blinded_payment_paths which may produce multi-hop blinded
paths. Fall back to one-hop blinded paths if the Router fails or returns
no paths.
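The delegate-then-fall-back flow looks roughly like this (stub type and
hypothetical helper, not the actual LDK plumbing):

```rust
struct BlindedPath;

// Prefer the Router's (possibly multi-hop) paths; otherwise fall back to
// one-hop paths with the recipient as its own introduction node.
fn blinded_paths_or_fallback(
    from_router: Result<Vec<BlindedPath>, ()>,
    one_hop_fallback: impl FnOnce() -> Vec<BlindedPath>,
) -> Vec<BlindedPath> {
    match from_router {
        Ok(paths) if !paths.is_empty() => paths,
        // The Router failed or returned no paths.
        _ => one_hop_fallback(),
    }
}
```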
The Router trait is used to find a Route for paying a node. Expand the
interface with a create_blinded_payment_paths method for creating such
paths to a recipient node.
Provide an implementation for DefaultRouter that creates two-hop
blinded paths where the recipient's peers serve as the introduction
nodes.
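A standalone sketch of the expanded interface (stub types; the real
method signature carries more context, e.g. a secp context and the
recipient's payment data):

```rust
struct PublicKey;
struct ChannelDetails;
struct BlindedPath;
struct BlindedPayInfo;

trait Router {
    // ...the existing find_route method used when paying a node...

    /// Creates blinded payment paths terminating at `recipient`, given
    /// the recipient's current channels, whose counterparties can serve
    /// as introduction nodes.
    fn create_blinded_payment_paths(
        &self, recipient: PublicKey, first_hops: Vec<ChannelDetails>,
        amount_msats: u64,
    ) -> Result<Vec<(BlindedPayInfo, BlindedPath)>, ()>;
}
```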
When constructing blinded paths for Offer and Refund, delegate to
MessageRouter::create_blinded_paths which may produce multi-hop blinded
paths. Fall back to one-hop blinded paths if the MessageRouter fails or
returns no paths.
Likewise, do the same for InvoiceRequest and Bolt12Invoice reply paths.
When finding a route through a blinded path, a random CLTV offset may be
added to the path in order to preserve privacy. This needs to be
accounted for in the blinded path's PaymentConstraints. Add
CLTV_FAR_FAR_AWAY to the max_cltv_expiry constraint to allow for such
offsets.
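Sketched as arithmetic (surrounding names are illustrative; the constant
matches LDK's usual two-weeks-of-blocks value):

```rust
// Roughly two weeks of blocks, leaving room for a random shadow offset
// added during route-finding for privacy.
const CLTV_FAR_FAR_AWAY: u32 = 14 * 24 * 6;

fn max_cltv_expiry(current_height: u32, path_cltv_delta: u32) -> u32 {
    current_height + path_cltv_delta + CLTV_FAR_FAR_AWAY
}
```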
When we fail to accept a counterparty's funding for various
reasons, we should ensure we call the correct cleanup methods in
`internal_funding_created` to remove the temporary data for the
channel in our various internal structs (primarily the SCID alias
map).
This adds the missing cleanup, using `convert_chan_phase_err`
consistently in all the error paths.
This also ensures we get a `ChannelClosed` event when relevant.
ChannelManager is parameterized by a Router in order to find routes when
sending and retrying payments. For the offers flow, it needs to be able
to construct blinded paths (e.g., in the offer and in reply paths).
Instead of adding yet another parameter to ChannelManager, require that
any Router also implements MessageRouter. Implement this for
DefaultRouter by delegating to a DefaultMessageRouter.
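In trait terms the requirement is simply a supertrait bound, roughly
(bodies elided):

```rust
trait MessageRouter { /* find_path, create_blinded_paths, ... */ }

// Any Router must now also be a MessageRouter, so ChannelManager's single
// router parameter covers both payment routing and blinded-path creation.
trait Router: MessageRouter { /* find_route, ... */ }
```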
The MessageRouter trait is used to find an OnionMessagePath to a
Destination (e.g., to a BlindedPath). Expand the interface with a
create_blinded_paths method for creating such paths to a recipient.
Provide a default implementation creating two-hop blinded paths where
the recipient's peers serve as introduction nodes.
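A standalone sketch of the addition (stub types; the real method also
takes a secp context):

```rust
struct PublicKey;
struct BlindedPath;

trait MessageRouter {
    // ...the existing find_path method to a Destination...

    /// Creates blinded paths terminating at `recipient`, using its direct
    /// `peers` as candidate introduction nodes.
    fn create_blinded_paths(
        &self, recipient: PublicKey, peers: Vec<PublicKey>,
    ) -> Result<Vec<BlindedPath>, ()>;
}
```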
The RouteBlinding feature flag signals support for relaying payments
over blinded paths. It is used for paying BOLT 12 invoices, which are
required to include at least one blinded path.
Previously, we cleaned up monitor updates both when the consolidation
threshold was hit and on new block connects. With this change we will
only clean up when our consolidation criteria are met. We also remove
the monitor read from the cleanup logic in the case of update
consolidation.
Note: in the case of a channel-closing monitor update, we still need
to read the old monitor before persisting the new one in order to
determine the cleanup range.
Because we decay the bucket information in the background, there's
not much reason to try to decay them immediately prior to updating,
and in removing that we can also clean up a good bit of dead code,
which we do here.
Now that the serialization format of `no-std` and `std`
`ProbabilisticScorer`s both just use `Duration` since UNIX epoch
and don't care about time except when decaying, we don't need to
warn users to not mix the scorers across `no-std` and `std` flags.
Fixes #2539
This is a good gut-check to ensure we don't end up taking a ton of
time decaying channel liquidity info.
It currently clocks in around 1.25ms on an i7-1360P.
Now that we don't access time via the `Time` trait in
`ProbabilisticScorer`, we can finally drop the `Time` bound
entirely, removing the `ProbabilisticScorerUsingTime` and type
alias indirection and replacing it with a simple struct.
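Roughly, the simplification looks like this (fields illustrative):

```rust
// Previously:
//
//     pub struct ProbabilisticScorerUsingTime<G, L, T: Time> { /* ... */ }
//     pub type ProbabilisticScorer<G, L> =
//         ProbabilisticScorerUsingTime<G, L, ConfiguredTime>;
//
// Now simply:
pub struct ProbabilisticScorer<G, L> {
    network_graph: G,
    logger: L,
    // ...liquidity state...
}
```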
In the coming commits, the `T: Time` bound on `ProbabilisticScorer`
will be removed. In order to enable that, we need to switch over to
using the `ScoreUpdate`-provided current time (as a `Duration`
since the unix epoch), making the `T` bound entirely unused.
In the coming commits, the `T: Time` bound on `ProbabilisticScorer`
will be removed. In order to enable that, we need to pass the
current time (as a `Duration` since the unix epoch) through the
score updating pipeline, allowing us to keep the
`*last_updated_time` fields up-to-date as we go.
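As a sketch of the threaded parameter (stub `Path` type, method list
abridged):

```rust
use core::time::Duration;

struct Path;

trait ScoreUpdate {
    /// Each update now carries the current time as a `Duration` since the
    /// UNIX epoch, so the scorer can maintain its `last_updated`-style
    /// fields without reading a clock itself.
    fn payment_path_failed(
        &mut self, path: &Path, short_channel_id: u64,
        duration_since_epoch: Duration,
    );
    fn payment_path_successful(
        &mut self, path: &Path, duration_since_epoch: Duration,
    );
}
```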
Now that we aren't decaying during scoring, it doesn't matter at what
point in the history bucket logic we set the last_updated time, so we
simply update it when we've just updated the history buckets.
Because scoring is an incredibly performance-sensitive operation,
doing liquidity information decay (and especially fetching the
current time!) during scoring isn't really a great idea. Now that
we decay liquidity information in the background, we don't have any
reason to decay during scoring, and we ultimately remove it
entirely here.
Because scoring is an incredibly performance-sensitive operation,
doing liquidity information decay (and especially fetching the
current time!) during scoring isn't really a great idea. Now that
we decay liquidity information in the background, we don't have any
reason to decay during scoring, and we remove the historical bucket
liquidity decaying here.
This implements decaying in the `ProbabilisticScorer`'s
`ScoreLookup::decay_liquidity_certainty` implementation, using
floats for accuracy since we're no longer particularly
time-sensitive. Further, it (finally) removes score entries which
have decayed to zero.
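The half-life math reduces to a float exponentiation, roughly (bucket
layout elided):

```rust
// 0.5^(elapsed / half_life): e.g. after two half-lives a bucket retains a
// quarter of its weight. Entries whose buckets all reach zero get removed.
fn decayed(value: u16, elapsed_secs: f64, half_life_secs: f64) -> u16 {
    let factor = 0.5f64.powf(elapsed_secs / half_life_secs);
    (value as f64 * factor) as u16
}
```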
In the next commit, we'll start to use the new
`ScoreUpdate::decay_liquidity_certainty` to decay our bounds in the
background. This will result in the `last_updated` field getting
updated regularly on decay, rather than only on update. While this
isn't an issue for the regular liquidity bounds, it poses a problem
for the historical liquidity buckets, which are decayed on a
separate (and by default much longer) timer. If we didn't move to
tracking their decays separately, we'd never let the `last_updated`
field get old enough for the historical buckets to decay at all.
Instead, here we introduce a new `Duration` in `ChannelLiquidity`
which tracks the last time the historical liquidity buckets were
updated. We initialize it to a copy of `last_updated` on
deserialization if it is missing.
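As a sketch of the split tracking (field names illustrative, other
fields elided):

```rust
use core::time::Duration;

struct ChannelLiquidity {
    /// Bumped on every bounds update and on background decay.
    last_updated: Duration,
    /// Bumped only when the historical buckets themselves change, so the
    /// (much longer) historical half-life can actually elapse.
    offset_history_last_updated: Duration,
}
```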
Rather than relying on fetching the current time during
routefinding, here we introduce a new trait method to `ScoreUpdate`
to do so. This largely mirrors what we do with the `NetworkGraph`,
and allows us to take on much more expensive operations (floating
point exponentiation) in our decaying.
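A sketch of the new hook (stub trait, illustrative doc comment):

```rust
use core::time::Duration;

trait ScoreUpdate {
    // ...payment result callbacks...

    /// Called periodically (e.g. from a background processor) with the
    /// time since the UNIX epoch, letting the scorer do its relatively
    /// expensive decay work outside of routefinding.
    fn time_passed(&mut self, duration_since_epoch: Duration);
}
```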
We are intending to release without having completed our async
signing logic, which sadly means we need to cfg-gate it to ensure
we restore the previous state of panicking on signer errors, rather
than putting us in a stuck state with no way to recover.
Here we add a new `async_signing` cfg flag and use it to gate all
the new logic from #2558, effectively reverting commits
1da29290e7 through
014a336e59.
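The gating pattern, roughly (function names are illustrative; the cfg
flag is enabled via something like RUSTFLAGS="--cfg=async_signing"):

```rust
#[cfg(async_signing)]
fn on_signer_unavailable() {
    // Opt-in builds keep the new #2558 behavior: track the signature as
    // pending and resume once the asynchronous signer produces it.
}

#[cfg(not(async_signing))]
fn on_signer_unavailable() {
    // Default builds restore the previous behavior: treat signer errors
    // as fatal rather than risking a permanently stuck channel.
    panic!("Signer returned an error or is unavailable");
}
```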
In the coming commits, we'll stop relying on fetching the time
during routefinding, preferring to decay score data in the
background instead.
The first step towards this is passing the current time through
into the scorer when updating.
In the next commits we'll need `f64`'s `powf`, which is only
available in `std`. For `no-std`, here we depend on `libm` (a
`rust-lang` org project), which we can use for `powf`.
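The std/no-std split looks roughly like this (helper name illustrative;
exact float handling may differ):

```rust
#[cfg(feature = "std")]
fn powf64(n: f64, exp: f64) -> f64 {
    n.powf(exp)
}

#[cfg(not(feature = "std"))]
fn powf64(n: f64, exp: f64) -> f64 {
    // libm provides `pow` for f64 in no-std environments.
    libm::pow(n, exp)
}
```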