Currently `claim_funds` and `claim_funds_internal` call
`claim_funds_from_hop` and then surface an `Event` to the user
informing them of the forwarded/claimed payment based on its
result. In both places we assume that a claim has "completed" even if
a monitor update is being done async.
Instead, here we push that event generation through a
`MonitorUpdateCompletionAction` and a call to
`handle_monitor_update_completion_action`. This will allow us to
hold the event(s) until async monitor updates complete in the
future.
When `claim_funds` has to claim multiple HTLCs as a part of a
single MPP payment, it currently does so holding the
`channel_state` lock for the entire duration of the claim loop.
Here we swap that for taking the lock once for each HTLC. This
allows us to be more flexible with locks going forward, and
ultimately isn't a huge change - if our counterparty intends to
force-close a channel, us choosing to ignore it by holding the
`channel_state` lock for the duration of the claim isn't going to
result in a commitment update; it will just result in the preimage
already being in the `ChannelMonitor`.
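As a rough illustration of the locking change (all types here are
simplified stand-ins rather than the real `ChannelManager`
structures), the claim loop ends up looking roughly like this:

```rust
use std::collections::HashMap;
use std::sync::Mutex;

struct ChannelState {
    // Maps a channel id to the preimages claimed on that channel; stands in
    // for the real per-channel state.
    by_id: HashMap<[u8; 32], Vec<[u8; 32]>>,
}

struct HTLCSource {
    channel_id: [u8; 32],
}

fn claim_mpp_parts(channel_state: &Mutex<ChannelState>, parts: &[HTLCSource], preimage: [u8; 32]) {
    for htlc in parts {
        // The lock is taken and released once per HTLC rather than being held
        // around the whole loop.
        let mut state = channel_state.lock().unwrap();
        state.by_id.entry(htlc.channel_id).or_default().push(preimage);
        // `state` drops here, releasing the lock before the next iteration.
    }
}
```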
Currently `claim_funds` does all HTLC claims in one `channel_state`
lock, ensuring that we always make claims from channels which are
open. It can thus avoid ever having to generate a
`ChannelMonitorUpdate` containing a preimage for a closed channel,
which we only do in `claim_funds_internal` (for forwarded payments).
In the next commit we'll change the locking of
`claim_funds_from_hop` so that `claim_funds` is no longer under a
single lock but takes a lock for each claim. This allows us to be
more flexible with locks going forward, and ultimately isn't a huge
change - if our counterparty intends to force-close a channel, us
choosing to ignore it by holding the `channel_state` lock for the
duration of the claim isn't going to result in a commitment update;
it will just result in the preimage already being in the
`ChannelMonitor`.
This adds a new enum, `MonitorUpdateCompletionAction`, and a method
to execute the "actions". They are intended to be run once a
(potentially-async) `ChannelMonitorUpdate` persistence completes;
however, that behavior will be implemented in a future PR. For now,
this adds the relevant infrastructure which will allow us to
prepare `claim_funds` for better monitor async handling.
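A minimal sketch of the idea, with names, variants and fields assumed
for illustration rather than taken from the actual implementation:

```rust
enum MonitorUpdateCompletionAction {
    /// Surface a payment-claimed style event to the user.
    PaymentClaimed { payment_hash: [u8; 32], amount_msat: u64 },
    /// Surface an event noting a payment we forwarded and the fee we earned.
    EmitForwardedPaymentEvent { fee_earned_msat: Option<u64> },
}

#[derive(Debug)]
enum Event {
    PaymentClaimed { payment_hash: [u8; 32], amount_msat: u64 },
    PaymentForwarded { fee_earned_msat: Option<u64> },
}

struct Manager {
    pending_events: Vec<Event>,
}

impl Manager {
    /// Intended to run once the (potentially async) `ChannelMonitorUpdate`
    /// persistence completes; until that lands it is simply called right
    /// after the update is applied.
    fn handle_monitor_update_completion_action(&mut self, action: MonitorUpdateCompletionAction) {
        match action {
            MonitorUpdateCompletionAction::PaymentClaimed { payment_hash, amount_msat } => {
                self.pending_events.push(Event::PaymentClaimed { payment_hash, amount_msat });
            }
            MonitorUpdateCompletionAction::EmitForwardedPaymentEvent { fee_earned_msat } => {
                self.pending_events.push(Event::PaymentForwarded { fee_earned_msat });
            }
        }
    }
}
```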
In the next commits we'll move to generating `PaymentClaimed`
events while handling `ChannelMonitorUpdate`s rather than directly
in line. Thus, as a prerequisite, here we move to storing the info
required to generate the `PaymentClaimed` event in a separate map.
Note that while this does introduce a new map which is written as
an even value which users cannot opt out of, the map is only filled
in when users use asynchronous `ChannelMonitor` updates, and only
after a future PR. As these are still considered beta, breaking
downgrades for such users is considered acceptable in that future
PR (which will likely be one LDK version later).
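A minimal sketch of the new map, assuming a payment-hash-keyed map
and simplified stand-in types:

```rust
use std::collections::HashMap;

// Everything required to build the eventual `PaymentClaimed` event.
struct ClaimingPayment {
    amount_msat: u64,
}

#[derive(Debug)]
struct PaymentClaimed {
    payment_hash: [u8; 32],
    amount_msat: u64,
}

#[derive(Default)]
struct Manager {
    claimable_payments: HashMap<[u8; 32], ClaimingPayment>,
    pending_events: Vec<PaymentClaimed>,
}

impl Manager {
    /// Stash the event data when the claim begins...
    fn begin_claim(&mut self, payment_hash: [u8; 32], amount_msat: u64) {
        self.claimable_payments.insert(payment_hash, ClaimingPayment { amount_msat });
    }

    /// ...and only turn it into an event once the monitor update(s) complete.
    fn finish_claim(&mut self, payment_hash: [u8; 32]) {
        if let Some(claim) = self.claimable_payments.remove(&payment_hash) {
            self.pending_events
                .push(PaymentClaimed { payment_hash, amount_msat: claim.amount_msat });
        }
    }
}
```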
When a channel is force-closed, if a `ChannelMonitor` update is
completed but a `ChannelManager` persist has not yet happened,
HTLCs which were removed in the latest (persisted) `ChannelMonitor`
update will not be failed even though they do not appear in the
commitment transaction which went on chain. This is because the
`ChannelManager` thinks the `ChannelMonitor` is responsible for
them (as the `ChannelManager` is stale), but the `ChannelMonitor`
has no knowledge of such HTLCs at all (as it is not stale).
The fix for this is relatively simple - we need to check for this
specific case and fail back such HTLCs when deserializing a
`ChannelManager`.
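A simplified sketch of the check, using hypothetical types rather
than the real `ChannelManager`/`ChannelMonitor` interfaces:

```rust
use std::collections::HashSet;

// The (newer) monitor's view of which HTLCs it is still tracking.
struct MonitorView {
    tracked_htlc_ids: HashSet<u64>,
}

/// On startup, any HTLC the stale manager still thinks is in flight on a
/// closed channel, but which the monitor no longer knows about, should be
/// failed back since it cannot appear in the on-chain commitment.
fn htlcs_to_fail_on_startup(manager_pending_htlc_ids: &[u64], monitor: &MonitorView) -> Vec<u64> {
    manager_pending_htlc_ids
        .iter()
        .copied()
        .filter(|id| !monitor.tracked_htlc_ids.contains(id))
        .collect()
}
```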
If, after forwarding a payment to our counterparty, we restart with
a ChannelMonitor update having been persisted, but the
corresponding ChannelManager update was not persisted, we'll still
have the forwarded HTLC in the `forward_htlcs` map on start. This
will cause us to generate a (spurious) `PendingHTLCsForwardable`
event. However, when we go to forward said HTLC, we'll notice the
channel has been closed and leave it up to the `ChannelMonitor` to
finalize the HTLC.
This is all fine today - we won't lose any funds, we'll just
generate an excess forwardable event and then fail to forward.
However, in the future when we allow for forward-time channel
changes this could break. Thus, it's worth adding tests for this
behavior today, and, while we're at it, removing the spurious
forwardable HTLCs event.
Soon we're going to need to return an error when `ChannelManager` is unable to
find a route, so we'll need a way to distinguish between that and the user
supplying an invalid route.
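A minimal sketch of the kind of distinction we need, with a
hypothetical error type rather than whatever the codebase ultimately
uses:

```rust
#[derive(Debug)]
enum SendError {
    /// The caller handed us a `Route` that fails basic sanity checks.
    InvalidRoute(&'static str),
    /// We tried to find a route ourselves and none was available.
    RouteNotFound,
}

fn check_route(paths: &[Vec<u64>]) -> Result<(), SendError> {
    if paths.is_empty() {
        return Err(SendError::InvalidRoute("Route must contain at least one path"));
    }
    Ok(())
}
```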
Currently we loop over `htlcs_to_fail` locking `channel_state` for each
element only to call `get_htlc_inbound_temp_fail_err_and_data` with the
same inputs on each iteration. This is unnecessary; we can refactor and
call `get_htlc_inbound_temp_fail_err_and_data` outside of the loop.
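A simplified sketch of the refactor, with a stand-in helper in place
of the real `get_htlc_inbound_temp_fail_err_and_data`:

```rust
struct Channel {
    short_channel_id: u64,
}

fn temp_fail_err_and_data(chan: &Channel) -> (u16, Vec<u8>) {
    // Stand-in for `get_htlc_inbound_temp_fail_err_and_data`; the failure
    // code/data only depend on the channel, not on the individual HTLC.
    (0x1000 | 7, chan.short_channel_id.to_be_bytes().to_vec())
}

fn fail_holding_cell_htlcs(chan: &Channel, htlcs_to_fail: &[[u8; 32]]) {
    // Computed once, outside the loop, instead of once per element.
    let (failure_code, onion_failure_data) = temp_fail_err_and_data(chan);
    for htlc_id in htlcs_to_fail {
        // ... fail each HTLC backwards using the same code/data ...
        let _ = (htlc_id, failure_code, &onion_failure_data);
    }
}
```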
Currently `fail_htlc_backwards_internal` takes ownership of its source
and reason parameters; however, they are not consumed, so we can
borrow them instead.
Includes refactoring to use local variables before the function call.
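A minimal sketch of the signature change, with simplified stand-in
types:

```rust
struct HTLCSource {
    channel_id: [u8; 32],
}

struct HTLCFailReason {
    failure_code: u16,
}

// Before: fn fail_htlc_backwards_internal(source: HTLCSource, reason: HTLCFailReason)
fn fail_htlc_backwards_internal(source: &HTLCSource, reason: &HTLCFailReason) {
    // The parameters are only read, never consumed, so references suffice.
    let _ = (source.channel_id, reason.failure_code);
}

fn caller() {
    let source = HTLCSource { channel_id: [0; 32] };
    let reason = HTLCFailReason { failure_code: 0x4000 | 15 };
    fail_htlc_backwards_internal(&source, &reason);
    // `source` and `reason` remain usable after the call.
    fail_htlc_backwards_internal(&source, &reason);
}
```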
We create `HTLCFailReason` inline in function calls in a bunch of places
in the `channelmanager` module. We can make the code more terse with no
loss of clarity by implementing a couple of constructor methods.
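A minimal sketch of such constructors, with the struct's fields
assumed for illustration:

```rust
struct HTLCFailReason {
    failure_code: u16,
    data: Vec<u8>,
}

impl HTLCFailReason {
    fn reason(failure_code: u16, data: Vec<u8>) -> Self {
        Self { failure_code, data }
    }

    fn from_failure_code(failure_code: u16) -> Self {
        Self { failure_code, data: Vec::new() }
    }
}

fn example() -> HTLCFailReason {
    // Call sites become e.g.:
    HTLCFailReason::from_failure_code(0x4000 | 15)
}
```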
See ChannelManager::forward_intercepted_htlc and
ChannelManager::get_intercept_scid for details
Co-authored-by: John Cantrell <johncantrell97@gmail.com>
Co-authored-by: Valentine Wallace <vwallace@protonmail.com>
And store the pending intercepted HTLC in pending_intercepted_htlcs
Co-authored-by: John Cantrell <johncantrell97@gmail.com>
Co-authored-by: Valentine Wallace <vwallace@protonmail.com>
No HTLCs are intercepted yet; that will be added in upcoming commit(s).
Co-authored-by: John Cantrell <johncantrell97@gmail.com>
Co-authored-by: Valentine Wallace <vwallace@protonmail.com>
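A rough, purely illustrative sketch of the interception flow; all
types and signatures below are hypothetical stand-ins and not the
crate's actual API:

```rust
use std::collections::HashMap;

type InterceptId = u64;

struct PendingIntercept {
    amt_msat: u64,
    requested_next_hop_scid: u64,
}

#[derive(Default)]
struct Node {
    next_fake_scid: u64,
    pending_intercepted_htlcs: HashMap<InterceptId, PendingIntercept>,
}

impl Node {
    /// Returns a fake scid the user can place in an invoice route hint.
    fn get_intercept_scid(&mut self) -> u64 {
        self.next_fake_scid += 1;
        self.next_fake_scid
    }

    /// Called internally when a forward targets a fake scid: park the HTLC
    /// in the pending-intercepts map instead of forwarding it.
    fn intercept(&mut self, id: InterceptId, amt_msat: u64, scid: u64) {
        self.pending_intercepted_htlcs
            .insert(id, PendingIntercept { amt_msat, requested_next_hop_scid: scid });
    }

    /// Called by the user to release the intercepted HTLC toward a real channel.
    fn forward_intercepted_htlc(&mut self, id: InterceptId) -> Result<PendingIntercept, ()> {
        self.pending_intercepted_htlcs.remove(&id).ok_or(())
    }
}
```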
For the Windows build only, the
`TestPersister::chain_sync_monitor_persistences` lock has a lock order
before the `ChannelManager::per_peer_state` lock. This fix ensures that
the `per_peer_state` lock isn't held before the
`TestPersister::chain_sync_monitor_persistences` lock is acquired.
As the `channel_state` lock will be removed, we prepare for that by
flipping the lock order so that the `pending_inbound_payments` and
`pending_outbound_payments` locks are taken before the `channel_state`
lock.
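A minimal sketch of the ordering rule being enforced, with
hypothetical field types:

```rust
use std::collections::HashMap;
use std::sync::Mutex;

struct Manager {
    pending_outbound_payments: Mutex<HashMap<[u8; 32], u64>>,
    channel_state: Mutex<HashMap<[u8; 32], u64>>,
}

impl Manager {
    fn do_work(&self) {
        // New order: the payments lock first, then `channel_state`.
        let payments = self.pending_outbound_payments.lock().unwrap();
        let channels = self.channel_state.lock().unwrap();
        let _ = (payments.len(), channels.len());
        // Acquiring them in the opposite order elsewhere would risk a deadlock.
    }
}
```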
Now that `handle_channel_resumption` can't fail, the error handling
in `post_handle_chan_restoration` is dead code. Removing it
makes `post_handle_chan_restoration` only a single block, so here
we simply remove the macro and inline the single block into the two
places the macro was used.
There is no reason anymore for `handle_chan_restoration_locked` to
be a macro, and our long-term desire is to move away from macros as
they substantially bloat our compilation time (and binary size).
Thus, we simply convert the `handle_chan_restoration_locked` macro
into a function here.
When we process a `channel_reestablish` message we free the HTLC
update holding cell as things may have changed while we were
disconnected. However, some time ago, to handle freeing from the
holding cell when a monitor update completes, we added a holding
cell freeing check in `get_and_clear_pending_msg_events`. This
leaves the in-`channel_reestablish` holding cell clear redundant,
as doing it immediately or in `get_and_clear_pending_msg_events` is
not a user-visible difference.
Thus, we remove the redundant code here, substantially simplifying
`handle_chan_restoration_locked` while we're at it.
As it was previously omitted, we clarify here from which version onward users can expect the `user_channel_id` to be randomized for inbound channels.
LND nodes have very broken fee estimators, causing them to suggest
feerates that don't even meet a current mempool minimum feerate
when fees go up over the course of hours. This can cause us to
reject their feerate estimates as they're not high enough, even
though their new feerate is higher than what we had already (which
is the feerate we'll use to broadcast a closing transaction). This
implies we force-close the channel and broadcast something with a
feerate lower than our counterparty was offering.
Here we simply accept such feerates as they are better than what we
had. We really should also close the channel, but only after we
get their signature on the new feerate. That should happen by
checking channel feerates every time we see a new block so is
orthogonal to this code.
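A minimal sketch of the acceptance rule described above (a
hypothetical helper, not the actual channel code):

```rust
/// Accept an `update_fee` that is below our estimator's minimum as long as it
/// improves on the feerate we already have, since that old feerate is what
/// we'd otherwise broadcast on force-close.
fn should_accept_update_fee(
    proposed_sat_per_kw: u32,
    current_channel_sat_per_kw: u32,
    our_minimum_sat_per_kw: u32,
) -> bool {
    proposed_sat_per_kw >= our_minimum_sat_per_kw
        || proposed_sat_per_kw > current_channel_sat_per_kw
}
```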
Ultimately the fix is anchor outputs plus package-based relay in
Bitcoin Core; however, we're still quite some ways from that, so it's
not worth needlessly closing channels for now.