62f8669654 added two
`htlc_maximum_msat.unwrap_or`s, but used a default value of 0,
spuriously causing all HTLCs to fail if we don't have an htlc
maximum value. This should be mostly harmless, but we should fix it
anyway.
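To illustrate the failure mode (a minimal sketch, not the actual diff): with an `unwrap_or(0)` default, a channel with no known htlc maximum rejects every HTLC, whereas the absence of a maximum should impose no limit.

```rust
/// Sketch: whether an HTLC fits under a channel's (optional) htlc maximum.
fn htlc_within_maximum(amount_msat: u64, htlc_maximum_msat: Option<u64>) -> bool {
    // Buggy: a missing maximum defaults to 0, so every HTLC fails.
    // amount_msat <= htlc_maximum_msat.unwrap_or(0)

    // Fixed: a missing maximum imposes no limit.
    amount_msat <= htlc_maximum_msat.unwrap_or(u64::MAX)
}
```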
In 4b5db8c3ce, `channelmanager::PendingHTLCRouting` was made
public, exposing a `FinalOnionHopData` field to the world. However,
`FinalOnionHopData` was left crate-private, making the enum
impossible to construct.
There isn't a strong reason for this (even though the
`FinalOnionHopData` API is somewhat confusing, being separated from
the rest of the onion structs), so we expose it here.
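The underlying Rust pattern, as a minimal sketch (module layout hypothetical): a `pub` enum variant holding a type that is not nameable outside the crate can be matched on, but never constructed, by downstream users.

```rust
mod msgs {
    // `pub`, but the module is private and the type is not re-exported, so
    // downstream crates cannot name or construct it.
    pub struct FinalOnionHopData { /* ... */ }
}

pub enum PendingHTLCRouting {
    // Downstream code can match this variant but cannot build one.
    Receive { payment_data: msgs::FinalOnionHopData },
    // ...
}
```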
Since `lightning-invoice` now depends on the `bitcoin` crate
directly, also depending on the `bitcoin_hashes` crate is redundant
and just confuses users, as the `std` feature was only being set on
`bitcoin`. Thus, we drop the explicit dependency here and use
`bitcoin::hashes` instead.
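Concretely, since the `bitcoin` crate re-exports the hashes crate as `bitcoin::hashes`, imports only need a path change, e.g.:

```rust
// Before: via the separate `bitcoin_hashes` dependency.
// use bitcoin_hashes::sha256::Hash as Sha256;

// After: the same type via the `bitcoin` crate's re-export.
use bitcoin::hashes::sha256::Hash as Sha256;
```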
When we made the `PrivateRoute` inner `RouteHint` `pub`, we failed
to note that the `PrivateRoute::new` constructor actually verifies
a length invariant. Thus, we un-export the inner field and force
users to go back through the `new` fn.
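A sketch of the invariant in question (constant and check simplified; the real constructor bounds the hint's serialized length): making the field `pub` would let callers bypass this entirely.

```rust
impl PrivateRoute {
    /// Creates a new `PrivateRoute`, failing if the hint is too long to fit
    /// in an invoice.
    pub fn new(hops: RouteHint) -> Result<PrivateRoute, CreationError> {
        if hops.0.len() > MAX_PRIVATE_ROUTE_HOPS { // simplified length check
            return Err(CreationError::RouteTooLong);
        }
        Ok(PrivateRoute(hops))
    }
}
```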
To avoid exposing a node's identity in a blinded path, only create
one-hop blinded paths if the node has been announced, and thus has
public channels. Otherwise, there is no way to route a payment to the
node anyway, so the path would expose its identity needlessly.
When constructing blinded payment paths for Bolt12Invoice, delegate to
Router::create_blinded_payment_paths which may produce multi-hop blinded
paths. Fall back to one-hop blinded paths if the Router fails or
returns no paths.
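The delegation-with-fallback shape, sketched with simplified signatures (the one-hop helper here is hypothetical):

```rust
// Prefer the Router's (possibly multi-hop) blinded payment paths; fall back
// to a one-hop path if the call errors or yields nothing.
let paths = match router.create_blinded_payment_paths(
    recipient, first_hops, tlvs, amount_msats, &secp_ctx,
) {
    Ok(paths) if !paths.is_empty() => paths,
    _ => one_hop_blinded_payment_paths(recipient)?,
};
```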
The Router trait is used to find a Route for paying a node. Expand the
interface with a create_blinded_payment_paths method for creating such
paths to a recipient node.
Provide an implementation for DefaultRouter that creates two-hop
blinded paths where the recipient's peers serve as the introduction
nodes.
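Sketched with signatures close to (but simplified from) the actual ones, the expanded trait looks roughly like:

```rust
pub trait Router {
    /// Finds a Route for paying a node, as before.
    fn find_route(
        &self, payer: &PublicKey, route_params: &RouteParameters,
        first_hops: Option<&[&ChannelDetails]>, inflight_htlcs: InFlightHtlcs,
    ) -> Result<Route, LightningError>;

    /// New: creates blinded payment paths terminating at the recipient.
    fn create_blinded_payment_paths<T: secp256k1::Signing + secp256k1::Verification>(
        &self, recipient: PublicKey, first_hops: Vec<ChannelDetails>,
        tlvs: ReceiveTlvs, amount_msats: u64, secp_ctx: &Secp256k1<T>,
    ) -> Result<Vec<(BlindedPayInfo, BlindedPath)>, ()>;
}
```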
When constructing blinded paths for Offer and Refund, delegate to
MessageRouter::create_blinded_paths which may produce multi-hop blinded
paths. Fall back to one-hop blinded paths if the MessageRouter fails
or returns no paths.
Likewise, do the same for InvoiceRequest and Bolt12Invoice reply paths.
When finding a route through a blinded path, a random CLTV offset may be
added to the path in order to preserve privacy. This needs to be
accounted for in the blinded path's PaymentConstraints. Add
CLTV_FAR_FAR_AWAY to the max_cltv_expiry constraint to allow for such
offsets.
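Illustratively (variable names assumed), the recipient-side constraint gains headroom for that random offset:

```rust
// Leave room for the random CLTV offset the route-finder may add for privacy,
// on top of the final CLTV delta from the current chain tip.
let payment_constraints = PaymentConstraints {
    max_cltv_expiry: best_block_height + CLTV_FAR_FAR_AWAY
        + min_final_cltv_expiry_delta as u32,
    htlc_minimum_msat: 1,
};
```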
When we fail to accept a counterparty's funding for various
reasons, we should ensure we call the correct cleanup methods in
`internal_funding_created` to remove the temporary data for the
channel in our various internal structs (primarily the SCID alias
map).
This adds the missing cleanup, using `convert_chan_phase_err`
consistently in all the error paths.
This also ensures we get a `ChannelClosed` event when relevant.
ChannelManager is parameterized by a Router in order to find routes when
sending and retrying payments. For the offers flow, it needs to be able
to construct blinded paths (e.g., in the offer and in reply paths).
Instead of adding yet another parameter to ChannelManager, require that
any Router also implements MessageRouter. Implement this for
DefaultRouter by delegating to a DefaultMessageRouter.
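A sketch of the resulting bound (method lists elided):

```rust
// Requiring MessageRouter as a supertrait avoids adding yet another type
// parameter to ChannelManager: anything that can route payments can also
// build blinded paths for onion messages.
pub trait Router: MessageRouter {
    // ...payment-routing methods (find_route, create_blinded_payment_paths)...
}
```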
The MessageRouter trait is used to find an OnionMessagePath to a
Destination (e.g., to a BlindedPath). Expand the interface with a
create_blinded_paths method for creating such paths to a recipient.
Provide a default implementation creating two-hop blinded paths where
the recipient's peers serve as introduction nodes.
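Sketched with simplified signatures, the expanded trait looks roughly like:

```rust
pub trait MessageRouter {
    /// Finds an OnionMessagePath to the destination, as before.
    fn find_path(
        &self, sender: PublicKey, peers: Vec<PublicKey>, destination: Destination,
    ) -> Result<OnionMessagePath, ()>;

    /// New: creates blinded paths terminating at the recipient, using its
    /// peers as introduction nodes.
    fn create_blinded_paths<T: secp256k1::Signing + secp256k1::Verification>(
        &self, recipient: PublicKey, peers: Vec<PublicKey>, secp_ctx: &Secp256k1<T>,
    ) -> Result<Vec<BlindedPath>, ()>;
}
```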
The RouteBlinding feature flag signals support for relaying payments
over blinded paths. It is used when paying BOLT 12 invoices, which
are required to include at least one blinded path.
Previously, we cleaned up monitor updates both when the consolidation
threshold was reached and on new block connections. With this change
we only clean up when our consolidation criteria are met. We also
remove the monitor read from the cleanup logic in the
update-consolidation case.
Note: In the case of a channel-closing monitor update, we still need
to read the old monitor before persisting the new one in order to
determine the cleanup range.
Because we decay the bucket information in the background, there's
not much reason to decay it again immediately prior to updating, and
removing that decay also lets us clean up a good bit of dead code,
which we do here.
There are some edge cases in our scoring where the information really
should be decayed prior to an update but hasn't yet been. Rather
than trying to fix them exactly, we instead decay the scorer a bit
more often, which largely solves them and also gives us somewhat
more accurate bounds on our channels, allowing us to immediately
reuse a channel at an amount similar to what just failed, albeit at
a substantial penalty.
Now that the serialization formats of the `no-std` and `std`
`ProbabilisticScorer`s both just use `Duration` since the UNIX epoch
and don't care about time except when decaying, we don't need to
warn users against mixing the scorers across `no-std` and `std` flags.
Fixes #2539
This is a good gut-check to ensure we don't end up taking a ton of
time decaying channel liquidity info.
It currently clocks in around 1.25ms on an i7-1360P.
Now that we don't access time via the `Time` trait in
`ProbabilisticScorer`, we can finally drop the `Time` bound
entirely, removing the `ProbabilisticScorerUsingTime` and type
alias indirection and replacing it with a simple struct.
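Roughly, the indirection being removed (bounds abbreviated):

```rust
// Before: the `T: Time` parameter forced a type-alias indirection.
pub struct ProbabilisticScorerUsingTime<G: Deref<Target = NetworkGraph<L>>, L: Deref, T: Time> { /* ... */ }
pub type ProbabilisticScorer<G, L> = ProbabilisticScorerUsingTime<G, L, ConfiguredTime>;

// After: a plain struct with no time parameter.
pub struct ProbabilisticScorer<G: Deref<Target = NetworkGraph<L>>, L: Deref> { /* ... */ }
```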
In the coming commits, the `T: Time` bound on `ProbabilisticScorer`
will be removed. In order to enable that, we need to switch over to
using the `ScoreUpdate`-provided current time (as a `Duration`
since the unix epoch), making the `T` bound entirely unused.
In the coming commits, the `T: Time` bound on `ProbabilisticScorer`
will be removed. In order to enable that, we need to pass the
current time (as a `Duration` since the unix epoch) through the
score updating pipeline, allowing us to keep the
`*last_updated_time` fields up-to-date as we go.
Now that we aren't decaying during scoring, the point at which we set
the last_updated time in the history bucket logic doesn't matter, so
we should simply update it whenever we've just updated the history
buckets.
Because scoring is an incredibly performance-sensitive operation,
doing liquidity information decay (and especially fetching the
current time!) during scoring isn't really a great idea. Now that
we decay liquidity information in the background, we don't have any
reason to decay during scoring, and we ultimately remove it
entirely here.
Because scoring is an incredibly performance-sensitive operation,
doing liquidity information decay (and especially fetching the
current time!) during scoring isn't really a great idea. Now that
we decay liquidity information in the background, we don't have any
reason to decay during scoring, and we remove the historical bucket
liquidity decaying here.
This implements decaying in the `ProbabilisticScorer`'s
`ScoreUpdate::decay_liquidity_certainty` implementation, using
floats for accuracy since we're no longer particularly
time-sensitive. Further, it (finally) removes score entries which
have decayed to zero.
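The core of the float-based decay, as a sketch (helper name hypothetical):

```rust
use std::time::Duration;

/// Decays a liquidity offset by the number of elapsed half-lives. Floats are
/// fine here since this runs in the background, off the scoring hot path.
fn decay_offset(offset_msat: u64, elapsed: Duration, half_life: Duration) -> u64 {
    let half_lives = elapsed.as_secs_f64() / half_life.as_secs_f64();
    (offset_msat as f64 * 0.5f64.powf(half_lives)) as u64
}
```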
In the next commit, we'll start to use the new
`ScoreUpdate::decay_liquidity_certainty` to decay our bounds in the
background. This will result in the `last_updated` field getting
updated regularly on decay, rather than only on update. While this
isn't an issue for the regular liquidity bounds, it poses a problem
for the historical liquidity buckets, which are decayed on a
separate (and by default much longer) timer. If we didn't move to
tracking their decays separately, we'd never let the `last_updated`
field get old enough for the historical buckets to decay at all.
Instead, here we introduce a new `Duration` in `ChannelLiquidity`
which tracks when the historical liquidity buckets were last
updated. We initialize it to a copy of `last_updated` on
deserialization if it is missing.
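Sketched (field names assumed), the tracked state becomes:

```rust
struct ChannelLiquidity {
    min_liquidity_offset_msat: u64,
    max_liquidity_offset_msat: u64,
    liquidity_history: HistoricalLiquidityTracker,
    /// When the liquidity bounds were last modified.
    last_updated: Duration,
    /// When the historical buckets were last updated or decayed; tracked
    /// separately since the historical half-life is much longer by default.
    offset_history_last_updated: Duration,
}
```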
Rather than relying on fetching the current time during
routefinding, here we introduce a new trait method to `ScoreUpdate`
to do so. This largely mirrors what we do with the `NetworkGraph`,
and allows us to take on much more expensive operations (floating
point exponentiation) in our decaying.
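A sketch of the new method, with a signature assumed from the description above:

```rust
pub trait ScoreUpdate {
    // ...existing update methods...

    /// Decays any stored liquidity information, intended to be called from a
    /// background timer rather than lazily during routefinding.
    fn decay_liquidity_certainty(&mut self, duration_since_epoch: Duration);
}
```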
We are intending to release without having completed our async
signing logic, which sadly means we need to cfg-gate it to ensure
we restore the previous state of panicking on signer errors, rather
than putting us in a stuck state with no way to recover.
Here we add a new `async_signing` cfg flag and use it to gate all
the new logic from #2558, effectively reverting commits
1da29290e7 through
014a336e59.
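The gating pattern, as a minimal sketch (the helper is hypothetical, not the actual diff):

```rust
/// With `async_signing` unset, we restore the pre-#2558 behavior of treating
/// signer errors as unrecoverable; with it set, errors surface for a retry.
fn handle_signer_result<S>(res: Result<S, ()>) -> Option<S> {
    #[cfg(not(async_signing))]
    return Some(res.expect("signer errors are unsupported without async_signing"));
    #[cfg(async_signing)]
    return res.ok();
}
```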
In the coming commits, we'll stop relying on fetching the time
during routefinding, preferring to decay score data in the
background instead.
The first step towards this is passing the current time through into
the scorer when updating.
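Concretely, the update entry points gain a current-time parameter, roughly (signatures simplified from the actual trait):

```rust
pub trait ScoreUpdate {
    fn payment_path_failed(
        &mut self, path: &Path, short_channel_id: u64, duration_since_epoch: Duration,
    );
    fn payment_path_successful(&mut self, path: &Path, duration_since_epoch: Duration);
    // ...probe results gain the same `duration_since_epoch` parameter...
}
```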