At various points we've had issues where `Event` processing for a
user may take a long time, causing other things to stall. However,
due to the lack of logging during `Event` processing itself, this
can be rather difficult to debug. In
85eb8145fb we attempted to add
logging for this, but in doing so ended up with more verbose
logging than we were comfortable with.
Instead, here, we log only when we actually process an `Event`,
directly at the callsite that passes the `Event` to the user.
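As a rough illustration only (not LDK's actual code), the idea is a
single log line at the point each `Event` is handed to the user's
handler; the `Event` variants, handler shape, and use of `println!`
in place of the configured logger are assumptions:

```rust
#[derive(Debug)]
enum Event { PaymentSent, PaymentFailed }

fn process_pending_events<H: FnMut(Event)>(pending: Vec<Event>, mut handler: H) {
    for event in pending {
        // One log line per event, emitted only when we actually hand it off.
        println!("Processing event: {:?}", event);
        handler(event);
    }
}

fn main() {
    process_pending_events(vec![Event::PaymentSent, Event::PaymentFailed], |_event| {
        // user-provided event handling
    });
}
```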
Fixes #3331
When we handle a `ChannelMonitorUpdate` completion we always
complete everything that was waiting on any updates to the same
channel all at once. Thus, we need to skip all of that processing
if there are pending updates besides the one that was just
completed.
We handled this correctly for open channels, but the shortcut for
closed channels ignored any other pending updates entirely.
Here we fix this, which is ultimately required for the tests added
in a few commits to pass.
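A minimal sketch of that rule, with illustrative names and types
(not LDK's actual data structures): when one update completes, the
queued work for the channel is only released if no other updates
remain in flight.

```rust
use std::collections::HashMap;

type ChannelId = [u8; 32];

struct MonitorTracker {
    // update_ids still in flight, per channel
    in_flight: HashMap<ChannelId, Vec<u64>>,
}

impl MonitorTracker {
    /// Returns true if everything waiting on this channel may now complete.
    fn update_completed(&mut self, channel_id: ChannelId, completed_id: u64) -> bool {
        if let Some(pending) = self.in_flight.get_mut(&channel_id) {
            pending.retain(|id| *id != completed_id);
            if !pending.is_empty() {
                // Other updates are still pending for this channel, so we
                // must skip completing the queued work for now.
                return false;
            }
            self.in_flight.remove(&channel_id);
        }
        true
    }
}
```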
c99d3d785d updated
`ChannelMonitorUpdate::update_id` to continue counting up even
after the channel is closed. It, however, accidentally updated the
`ChannelMonitorUpdate` application logic to skip testing that
`ChannelMonitorUpdate`s are well-ordered after the channel has been
closed (in an attempt to ensure other checks in the same
conditional block were applied).
This fixes that oversight.
Closing channels requires a two-step process: first,
`update_maps_on_chan_removal` is called while holding the same
per-peer lock under which the channel reached the terminal state,
then, after dropping those lock(s), `finish_close_channel` is
called.
Because the channel is closed and thus no further
`ChannelMonitorUpdate`s are generated for the off-chain state, we'd
previously applied the `ChannelMonitorUpdate` in
`finish_close_channel`. This was tweaked somewhat in
c99d3d785d when we stopped using
`u64::MAX` for any updates after closure. However, we worked around
the races that this implied by setting the `update_id` only when we
go to apply the `ChannelMonitorUpdate`, rather than when we create
it.
In a coming commit, we'll need to have an `update_id` immediately
upon creation (to track in-flight updates that haven't reached
application yet). This implies that we can no longer apply closure
`ChannelMonitorUpdate`s after dropping the per-peer lock(s), as the
updates must be well-ordered with any later updates to the same
channel, even after it has been closed.
Thus, here, we add `ChannelMonitorUpdate` handling to
`update_maps_on_chan_removal`, renaming it `locked_close_channel`
to better capture its new purpose.
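A simplified sketch of the resulting two-step flow; the helper names
mirror the description above, but the types, fields, and bodies are
assumptions rather than LDK's real implementation.

```rust
use std::collections::HashMap;
use std::sync::Mutex;

struct Channel;
struct ChannelMonitorUpdate { update_id: u64 }

struct PeerState {
    channels: HashMap<[u8; 32], Channel>,
    in_flight_monitor_updates: Vec<ChannelMonitorUpdate>,
    next_update_id: u64,
}

// Step one: runs while the per-peer lock is held, so the closing
// `ChannelMonitorUpdate` gets an `update_id` that is well-ordered with
// any later updates for the same channel.
fn locked_close_channel(peer_state: &mut PeerState, channel_id: [u8; 32]) -> Option<Channel> {
    let chan = peer_state.channels.remove(&channel_id)?;
    let update_id = peer_state.next_update_id;
    peer_state.next_update_id += 1;
    peer_state.in_flight_monitor_updates.push(ChannelMonitorUpdate { update_id });
    Some(chan)
}

// Step two: runs after the per-peer lock has been dropped.
fn finish_close_channel(_chan: Channel) { /* emit events, clean up, ... */ }

fn close(per_peer_state: &Mutex<PeerState>, channel_id: [u8; 32]) {
    let closed = {
        let mut peer_state = per_peer_state.lock().unwrap();
        locked_close_channel(&mut peer_state, channel_id)
    }; // per-peer lock dropped here
    if let Some(chan) = closed {
        finish_close_channel(chan);
    }
}
```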
In the coming commits we'll start handling `ChannelMonitorUpdate`s
during channel closure in-line rather than after dropping locks via
`finish_close_channel`. In order to make that easy, here we add a
new `REMAIN_LOCKED_UPDATE_ACTIONS_PROCESSED_LATER` variant to
`handle_new_monitor_update!` which can attempt to apply an update
without dropping the locks or immediately processing
`MonitorUpdateCompletionAction`s.
During block connection, we cannot apply `ChannelMonitorUpdate`s if
we're running during the startup sequence (i.e. before the user has
called any methods outside of block connection). We previously
handled this by simply always pushing any `ChannelMonitorUpdate`s
generated during block connection into the
`pending_background_events` queue.
However, this results in `ChannelMonitorUpdate`s going through the
background queue even when we could just apply them immediately.
Here we explicitly check `background_events_processed_since_startup`
and use that to decide whether to apply updates immediately or push
them through the background queue.
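A minimal sketch of that decision; the flag and queue names follow
the description above, while the surrounding types and the apply
function are illustrative assumptions.

```rust
struct ChannelMonitorUpdate;

fn apply_monitor_update(_update: ChannelMonitorUpdate) { /* hand off to the chain monitor */ }

fn handle_block_connection_update(
    background_events_processed_since_startup: bool,
    pending_background_events: &mut Vec<ChannelMonitorUpdate>,
    update: ChannelMonitorUpdate,
) {
    if background_events_processed_since_startup {
        // Normal operation: apply the update immediately.
        apply_monitor_update(update);
    } else {
        // Still in the startup sequence: defer via the background queue.
        pending_background_events.push(update);
    }
}
```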
On startup, if we have a channel which was closed immediately
before shutdown such that the `ChannelMonitorUpdate` marking the
channel as closed is still in-flight, it doesn't make sense to
generate a fresh `ChannelMonitorUpdate` marking the channel as
closed immediately after the existing in-flight one.
Here we detect this case and drop the extra update; though the
extra update is not all that harmful, dropping it does avoid some
test changes in the coming commits.
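A short sketch of the check, with simplified assumed structures: we
only generate a fresh closure update if no in-flight update already
marks the channel as closed.

```rust
struct InFlightUpdate { closes_channel: bool }

fn needs_fresh_closure_update(in_flight_updates: &[InFlightUpdate]) -> bool {
    // If some still-in-flight update already closes the channel, adding
    // another closure update on startup is redundant.
    !in_flight_updates.iter().any(|upd| upd.closes_channel)
}
```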
When deciding if we should remove a `PeerState` entry we want to
ensure we don't remove it if there are pending updates in
`in_flight_monitor_updates`. Previously this was done with a simple
`in_flight_monitor_updates.is_empty()` check; however, this can prevent
removal of `PeerState` entries if a channel had an update at some
point (leaving an entry in the map) but the update was ultimately
completed.
Instead, we need to iterate over the entries in
`in_flight_monitor_updates` and decline to remove `PeerState`s only
if at least one entry still has a pending update in flight.
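A sketch of the refined check; the map's key and value types here
are simplified assumptions.

```rust
use std::collections::HashMap;

struct ChannelMonitorUpdate;

fn peer_has_in_flight_updates(
    in_flight_monitor_updates: &HashMap<[u8; 32], Vec<ChannelMonitorUpdate>>,
) -> bool {
    // A channel that completed all its updates may leave an empty entry
    // behind; that alone should not block removal of the `PeerState`.
    in_flight_monitor_updates.values().any(|updates| !updates.is_empty())
}
```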
2e685ffb24 removed a field from UserConfig, which
broke this fuzz target. The hardcoded strings needed to be updated to remove
this field from them too.
Previously, upon receipt of a GossipTimestampFilter message, we would
immediately start unloading the entire network graph on our
unsuspecting peer.
This commit modifies our behavior to only do so if the timestamp of
the filter message is at least six hours old. Otherwise, we only
send updated sync data as it comes in.
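As a hedged illustration of the six-hour rule (the constant and
function shape are assumptions, not the actual implementation):

```rust
const FULL_SYNC_THRESHOLD_SECS: u32 = 6 * 60 * 60;

fn should_start_full_gossip_sync(filter_first_timestamp: u32, now_unix: u32) -> bool {
    // Only dump the full network graph if the peer asked for gossip
    // starting at least six hours in the past; otherwise just forward
    // new gossip as it arrives.
    filter_first_timestamp < now_unix.saturating_sub(FULL_SYNC_THRESHOLD_SECS)
}
```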
This option was added to force users to opt into breaking compat with
< 0.0.116, so it should be fine to remove for the 0.1 release. Otherwise,
receiving payments to static invoices would always require flipping this on.
Previously, LDK offered two ways to limit log output: filtering at
runtime on the client side by matching on any given `Record`'s `Level`
value, and filtering at compile time through the `max_level_*` feature
flags.
It turns out the latter approach was always broken when used outside of
the `lightning` crate. Here, we therefore simply drop the feature-based
filtering.
`historical_estimated_payment_success_probability` exposes the
historical success probability estimator publicly, but only allows
fetching data from channels where we have sufficient information.
In the previous commit,
`live_estimated_payment_success_probability` was added to enable
querying the live bounds success probability estimator.
Sadly, while the historical success probability estimator falls
back to the live probability estimator, it does so with a different
final parameter to `success_probability`, making
`live_estimated_payment_success_probability` not useful for
calculating the actual historical model output when we have
insufficient data.
To address this, here we add a new parameter to
`historical_estimated_payment_success_probability` which determines
whether it should fall back to data from the live model when the
historical data is insufficient.
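A hedged sketch of the intended fallback behavior; the trait, method
names, and the `allow_fallback` parameter are assumptions made for
illustration, not LDK's exact API.

```rust
trait SuccessProbabilityEstimator {
    // Returns None when we have insufficient historical data for the channel.
    fn historical_probability(&self, scid: u64, amount_msat: u64) -> Option<f64>;
    // Probability derived from the live liquidity bounds.
    fn live_probability(&self, scid: u64, amount_msat: u64) -> Option<f64>;
}

fn query_success_probability(
    estimator: &impl SuccessProbabilityEstimator,
    scid: u64,
    amount_msat: u64,
    allow_fallback: bool,
) -> Option<f64> {
    match estimator.historical_probability(scid, amount_msat) {
        Some(p) => Some(p),
        // Insufficient historical data: optionally fall back to the live
        // liquidity-bounds model, parameterized as the historical model
        // would parameterize it.
        None if allow_fallback => estimator.live_probability(scid, amount_msat),
        None => None,
    }
}
```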
We already expose the estimated success probability from the
historical liquidity bounds from
`historical_estimated_payment_success_probability`, but we don't
do that for the live liquidity bounds.
Here we add a `live_estimated_payment_success_probability` which
exposes the probability result from the live liquidity bounds as
well.
Our current architecture requires `GossipVerifier`'s type signature
to include a circular reference to itself. Previously, we directly
gave that as `Self`, which, however, did not work under all
circumstances when setting it dynamically. Here we take `Self` by
`Arc` to mitigate this issue.
A recent change accidentally inverted the returned monitor update for
the case where an update is applied after the channel has been closed.
This commit corrects that mistake.
We want to remove this before release so that we can later work on a
way to avoid persisting it, instead deriving it from other persisted
data, and free up the TLV.
Note that the "added in 0.0.124" comment was incorrect, as it was
actually added in #3137; since the comment was stale, it's safe to
remove.