Previously, we cleaned up monitor updates both when the consolidation
threshold was reached and on new block connects. With this change we
only clean up when our consolidation criteria are met. We also remove
the monitor read from the cleanup logic in the case of update
consolidation.
Note: in the case of a channel-closing monitor update, we still need
to read the old monitor before persisting the new one in order to
determine the cleanup range.
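As a rough sketch of the rule this leaves us with (names here are illustrative, not the exact persister internals), stale per-update files are only pruned when an update lands on the consolidation boundary:

```rust
/// Illustrative only: prune old update files solely when the update counter
/// crosses the consolidation threshold, rather than also on block connects.
fn should_cleanup(update_id: u64, maximum_pending_updates: u64) -> bool {
    update_id % maximum_pending_updates == 0
}
```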
Because we decay the bucket information in the background, there's
not much reason to try to decay it immediately prior to updating,
and in removing that we can also clean up a good bit of dead code,
which we do here.
There are some edge cases in our scoring where the information really
should have been decayed prior to an update but hasn't been yet.
Rather than trying to fix them exactly, we instead decay the scorer a
bit more often, which largely solves them and also gives us somewhat
more accurate bounds on our channels, allowing us to immediately
reuse a channel for amounts similar to one that just failed, but at a
substantial penalty.
Now that the serialization format of `no-std` and `std`
`ProbabilisticScorer`s both just use `Duration` since UNIX epoch
and don't care about time except when decaying, we don't need to
warn users to not mix the scorers across `no-std` and `std` flags.
Fixes #2539
This is a good gut-check to ensure we don't end up taking a ton of
time decaying channel liquidity info.
It currently clocks in at around 1.25ms on an i7-1360P.
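The shape of such a gut-check, with the scorer setup elided and `decay_all` standing in for the background decay pass, might look like:

```rust
use std::time::{Duration, Instant};

/// Times a single background decay pass; purely illustrative scaffolding.
fn time_decay_pass(mut decay_all: impl FnMut()) -> Duration {
    let start = Instant::now();
    decay_all();
    start.elapsed()
}
```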
Now that we don't access time via the `Time` trait in
`ProbabilisticScorer`, we can finally drop the `Time` bound
entirely, removing `ProbabilisticScorerUsingTime` and its type-alias
indirection and replacing them with a simple struct.
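Schematically (field contents elided and names abbreviated; see `lightning::routing::scoring` for the real definitions), the change looks like:

```rust
use core::marker::PhantomData;

pub trait Time { fn now() -> Self; }

// Before: generic over a time source, hidden behind a type alias.
pub struct ProbabilisticScorerUsingTime<G, L, T: Time> {
    network_graph: G,
    logger: L,
    _marker: PhantomData<T>,
}

// After: time arrives via explicit `Duration` parameters, so the alias
// indirection collapses into a plain struct.
pub struct ProbabilisticScorer<G, L> {
    network_graph: G,
    logger: L,
}
```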
In the coming commits, the `T: Time` bound on `ProbabilisticScorer`
will be removed. In order to enable that, we need to switch over to
using the `ScoreUpdate`-provided current time (as a `Duration`
since the UNIX epoch), making the `T` bound entirely unused.
In the coming commits, the `T: Time` bound on `ProbabilisticScorer`
will be removed. In order to enable that, we need to pass the
current time (as a `Duration` since the UNIX epoch) through the
score updating pipeline, allowing us to keep the
`*last_updated_time` fields up-to-date as we go.
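Sketched against a simplified `ScoreUpdate` (the real trait in `lightning::routing::scoring` has more methods), the update pipeline ends up threading the time through like so:

```rust
use core::time::Duration;

/// Simplified stand-in for the routing `Path` type.
pub struct Path;

pub trait ScoreUpdate {
    /// Each updating method now receives the current time explicitly, as a
    /// `Duration` since the UNIX epoch, so the `*last_updated_time` fields
    /// can be kept current without the scorer fetching the time itself.
    fn payment_path_failed(&mut self, path: &Path, short_channel_id: u64,
        duration_since_epoch: Duration);
    fn payment_path_successful(&mut self, path: &Path,
        duration_since_epoch: Duration);
}
```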
Now that we aren't decaying during scoring, it no longer matters
where in the history bucket logic we set the last_updated time, so
we should simply update it whenever we've just updated the history
buckets.
Because scoring is an incredibly performance-sensitive operation,
doing liquidity information decay (and especially fetching the
current time!) during scoring isn't really a great idea. Now that
we decay liquidity information in the background, we don't have any
reason to decay during scoring, and we ultimately remove it
entirely here.
Because scoring is an incredibly performance-sensitive operation,
doing liquidity information decay (and especially fetching the
current time!) during scoring isn't really a great idea. Now that
we decay liquidity information in the background, we don't have any
reason to decay during scoring, and we remove the historical bucket
liquidity decaying here.
This implements decaying in the `ProbabilisticScorer`'s
`ScoreUpdate::decay_liquidity_certainty` implementation, using
floats for accuracy since we're no longer particularly
time-sensitive. Further, it (finally) removes score entries which
have decayed to zero.
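A minimal sketch of the approach, with stand-in names for the per-channel state (the real `ChannelLiquidity` tracks more than two offsets):

```rust
use std::collections::HashMap;
use std::time::Duration;

struct Liquidity { min_offset_msat: u64, max_offset_msat: u64 }

/// Applies half-life decay in floating point, then drops fully-decayed entries.
fn decay_liquidity_certainty(
    liquidities: &mut HashMap<u64, Liquidity>, elapsed: Duration, half_life: Duration,
) {
    let factor = 0.5f64.powf(elapsed.as_secs_f64() / half_life.as_secs_f64());
    for liq in liquidities.values_mut() {
        liq.min_offset_msat = (liq.min_offset_msat as f64 * factor) as u64;
        liq.max_offset_msat = (liq.max_offset_msat as f64 * factor) as u64;
    }
    // (Finally) remove entries which have decayed to zero.
    liquidities.retain(|_, liq| liq.min_offset_msat != 0 || liq.max_offset_msat != 0);
}
```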
In the next commit, we'll start to use the new
`ScoreUpdate::decay_liquidity_certainty` to decay our bounds in the
background. This will result in the `last_updated` field getting
updated regularly on decay, rather than only on update. While this
isn't an issue for the regular liquidity bounds, it poses a problem
for the historical liquidity buckets, which are decayed on a
separate (and by default much longer) timer. If we didn't move to
tracking their decays separately, we'd never let the `last_updated`
field get old enough for the historical buckets to decay at all.
Instead, here we introduce a new `Duration` in `ChannelLiquidity`
which tracks when the historical liquidity buckets were last
updated. We initialize it to a copy of `last_updated` on
deserialization if it is missing.
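Approximately (field names paraphrase LDK's, and other fields are elided):

```rust
use core::time::Duration;

struct ChannelLiquidity {
    min_liquidity_offset_msat: u64,
    max_liquidity_offset_msat: u64,
    /// Touched on every update *and* every background decay pass.
    last_updated: Duration,
    /// New: tracked separately so frequent background decays of the liquidity
    /// bounds don't keep the historical buckets' (much longer) decay timer
    /// from ever firing. Defaults to `last_updated` when absent on read.
    offset_history_last_updated: Duration,
}
```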
Rather than relying on fetching the current time during
routefinding, here we introduce a new `ScoreUpdate` trait method for
decaying in the background. This largely mirrors what we do with the
`NetworkGraph`, and allows us to take on much more expensive
operations (floating-point exponentiation) in our decaying.
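In signature form, the new method is roughly (paraphrased, not copied from the LDK source):

```rust
use core::time::Duration;

pub trait ScoreUpdate {
    /// Decays all stored liquidity knowledge given the current wall-clock
    /// time, meant to be driven from a background timer rather than from the
    /// routefinding hot path.
    fn decay_liquidity_certainty(&mut self, duration_since_epoch: Duration);
}
```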
We are intending to release without having completed our async
signing logic, which sadly means we need to cfg-gate it to ensure
we restore the previous state of panicking on signer errors, rather
than putting us in a stuck state with no way to recover.
Here we add a new `async_signing` cfg flag and use it to gate all
the new logic from #2558, effectively reverting commits
1da29290e7 through
014a336e59.
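The gating pattern itself is straightforward; a hedged sketch (the function and its body are illustrative, only the `async_signing` cfg name comes from the text above):

```rust
/// With the cfg flag set, the new behavior from #2558 applies.
#[cfg(async_signing)]
fn on_signer_unavailable() {
    // Queue the outbound message and retry once the signature arrives.
}

/// Without it, we restore the previous behavior of treating signer errors
/// as fatal rather than entering an unrecoverable stuck state.
#[cfg(not(async_signing))]
fn on_signer_unavailable() {
    panic!("Signer returned an error");
}
```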
In the coming commits, we'll stop relying on fetching the time
during routefinding, preferring to decay score data in the
background instead.
The first step towards this is passing the current time through into
the scorer when updating.
In the next commits we'll need `f64`'s `powf`, which is only
available in `std`. For `no-std`, here we depend on `libm` (a
`rust-lang` org project), which we can use for `powf`.
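This enables a small shim along these lines (names illustrative; `libm::pow` is `libm`'s `f64` power function):

```rust
/// `f64::powf` is only available with `std`; fall back to `libm` otherwise.
#[cfg(feature = "std")]
fn powf64(base: f64, exp: f64) -> f64 {
    base.powf(exp)
}

#[cfg(not(feature = "std"))]
fn powf64(base: f64, exp: f64) -> f64 {
    libm::pow(base, exp)
}
```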
Although this new check is unreachable right now, it helps prevent potential
future errors where we incorrectly fail blinded HTLCs with an unblinded error.
If a blinded HTLC errors when added to a Channel, such as if the recipient has
already sent a shutdown message, they should malformed-fail backwards with
error code INVALID_ONION_BLINDING and a zeroed out onion hash per BOLT 4.
If a blinded HTLC does not satisfy the receiver's requirements, e.g. bad CLTV
or amount, they should malformed-fail backwards with error code
INVALID_ONION_BLINDING and a zeroed out onion hash per BOLT 4.
If a recipient behind a multihop blinded path fails to decode their onion
payload, they should fail backwards with error code INVALID_ONION_BLINDING and
a zeroed out onion hash per BOLT 4.
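Concretely, per BOLT 4 the error code combines the BADONION and PERM bits with failure code 24, and the onion hash accompanying it must be all zeroes:

```rust
const BADONION: u16 = 0x8000;
const PERM: u16 = 0x4000;
/// invalid_onion_blinding, per BOLT 4.
const INVALID_ONION_BLINDING: u16 = BADONION | PERM | 24;
/// The sha256_of_onion sent alongside it must be zeroed out.
const ZEROED_ONION_HASH: [u8; 32] = [0; 32];
```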
And use it in the multihop blinded path receive failure test. Will be used in
the next commit to test receiving an invalid blinded final onion payload.
We can't use the existing get_route test util here because blinded payments
rely on the sender adding a random shadow CLTV offset to the final hop; without
this the payment will be failed with cltv-expiry-too-soon.
If the blinded recipient behind a multihop blinded path needs to fail
back a malformed HTLC, they should use error code
INVALID_ONION_BLINDING and a zeroed out onion hash per BOLT 4.
If an HTLC fails after its RAA is processed, it is failed back with
ChannelManager::fail_htlc_backwards_internal. This method will now
correctly inform the channel that the HTLC is blinded, so that it
constructs an update_malformed message accordingly.
Useful for failing blinded payments back with malformed, and will also be
useful in the future when we move onion decoding into
process_pending_htlc_forwards, after which Channel::fail_htlc will be used for
all malformed HTLCs.
Currently it returns only update_fail, but we'll want it to be able to return
update_malformed as well in upcoming commits. We'll use this for correctly
failing blinded received HTLCs backwards with malformed and
invalid_onion_blinding error per BOLT 4.
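One way to picture the widened return type (illustrative shape only, using LDK's message structs; the actual change may be structured differently):

```rust
use lightning::ln::msgs::{UpdateFailHTLC, UpdateFailMalformedHTLC};

/// Either kind of HTLC failure message a Channel may need to send.
enum FailHTLCMessage {
    Fail(UpdateFailHTLC),
    Malformed(UpdateFailMalformedHTLC),
}
```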
For context, blinded HTLCs where we are not the intro node must always be
failed back with malformed and invalid_onion_blinding error per BOLT 4.
Prior to supporting blinded payments, the only way for an update_malformed to
be returned from Channel was if an onion was actually found to be malformed
during initial update_add processing. This meant that any malformed HTLCs would
never live in the holding cell but instead would be returned directly upon
initial RAA processing.
Now, we need to be able to store these HTLCs in the holding cell
because the HTLC failure necessitating an update_malformed may come
long after the RAA is initially processed, and we may not be in a
state to send the update_malformed message at that time.
Therefore, add a new holding cell HTLC variant for blinded non-intro node
HTLCs, which will signal to Channel to fail with malformed and the correct
error code.
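A sketch of the shape of such a variant (names approximate; the surrounding enum and its existing variants are elided):

```rust
enum HoldingCellHTLCUpdate {
    /// Fails the HTLC with update_fail_malformed_htlc once we can send it.
    FailMalformedHTLC {
        htlc_id: u64,
        /// INVALID_ONION_BLINDING for blinded non-intro-node HTLCs.
        failure_code: u16,
        /// Zeroed out, per BOLT 4.
        sha256_of_onion: [u8; 32],
    },
}
```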
Will be used in the next commit(s) to let us know to fail blinded received
HTLCs backwards with the malformed and invalid_onion_blinding error per BOLT 4.