Currently `claim_funds` does all HTLC claims in one `channel_state`
lock, ensuring that we always make claims from channels which are
open. It can thus avoid ever having to generate a
`ChannelMonitorUpdate` containing a preimage for a closed channel,
which we only do in `claim_funds_internal` (for forwarded payments).
In the next commit we'll change the locking of
`claim_funds_from_hop` so that `claim_funds` is no longer under a
single lock but takes a lock for each claim. This allows us to be
more flexible with locks going forward, and ultimately isn't a huge
change - if our counterparty intends to force-close a channel, our
choosing to ignore it by holding the `channel_state` lock for the
duration of the claim isn't going to result in a commitment update;
it will just result in the preimage already being in the
`ChannelMonitor`.
This adds a new enum, `MonitorUpdateCompletionAction`, and a method
to execute the "actions". They are intended to be run once a
(potentially-async) `ChannelMonitorUpdate` persistence completes;
however, that behavior will be implemented in a future PR. For now,
this adds the relevant infrastructure, which will allow us to
prepare `claim_funds` for better async monitor update handling.
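Roughly, the new infrastructure has the following shape (an illustrative sketch; the variant and field names here are simplified stand-ins, not the exact LDK definitions):

```rust
// Illustrative sketch only -- not the exact LDK types.
enum MonitorUpdateCompletionAction {
    /// Emit the `PaymentClaimed` event for a payment we just finished claiming.
    PaymentClaimed { payment_hash: [u8; 32] },
    /// Emit some other queued event once the update is persisted.
    EmitEvent { event: String },
}

fn handle_monitor_update_completion_actions(
    pending_events: &mut Vec<String>,
    actions: Vec<MonitorUpdateCompletionAction>,
) {
    for action in actions {
        match action {
            MonitorUpdateCompletionAction::PaymentClaimed { payment_hash } => {
                pending_events.push(format!("PaymentClaimed({:02x?})", payment_hash));
            },
            MonitorUpdateCompletionAction::EmitEvent { event } => pending_events.push(event),
        }
    }
}
```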
In the next commits we'll move to generating `PaymentClaimed`
events while handling `ChannelMonitorUpdate`s rather than directly
inline. Thus, as a prerequisite, here we move to storing the info
required to generate the `PaymentClaimed` event in a separate map.
Note that while this does introduce a new map which is written as
an even value which users cannot opt out of, the map is only filled
in when users use asynchronous `ChannelMonitor` updates, and only
after a future PR. As those are still considered beta, breaking
downgrades for such users is considered acceptable in that future
PR (which will likely land one LDK version later).
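Conceptually, the new storage is a map along these lines (an illustrative sketch with simplified names and fields, not the exact LDK types):

```rust
use std::collections::HashMap;

// Illustrative sketch only: the data needed to later build a `PaymentClaimed`
// event, keyed by payment hash and held outside the channel lock.
struct ClaimingPayment {
    amount_msat: u64,
    receiver_node_id: [u8; 33],
}

#[derive(Default)]
struct ClaimablePayments {
    // Payments we've started claiming but for which we haven't yet generated
    // a `PaymentClaimed` event.
    pending_claiming_payments: HashMap<[u8; 32], ClaimingPayment>,
}
```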
If we try to send any onion error with the `UPDATE` flag in
response to a phantom receive, we should always swap it for
something generic that doesn't require a `channel_update`.
Here we use `temporary_node_failure`.
Test provided by Valentine Wallace <vwallace@protonmail.com>
When we receive a phantom HTLC with a bogus/modified CLTV, we
should fail back with `incorrect_cltv_expiry`, but that requires a
`channel_update`, which we cannot generate for a phantom HTLC which
has no corresponding channel. Thus, instead, we have to fall back
to `final_incorrect_cltv_expiry`, which carries no `channel_update`.
Fixes #1879
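In BOLT 4 terms, the intended failure-code selection for phantom receives looks roughly like the sketch below; the constants are the standard BOLT 4 values, but the function itself is illustrative rather than the actual LDK code:

```rust
// BOLT 4 failure-code flags and codes (these constants are from the spec).
const UPDATE: u16 = 0x1000;
const NODE: u16 = 0x2000;
const TEMPORARY_NODE_FAILURE: u16 = NODE | 2;
const INCORRECT_CLTV_EXPIRY: u16 = UPDATE | 13;
const FINAL_INCORRECT_CLTV_EXPIRY: u16 = 18;

// Illustrative sketch only: pick a failure code we can actually send for a
// phantom receive, where no real channel (and thus no `channel_update`) exists.
fn phantom_failure_code(proposed_code: u16) -> u16 {
    if proposed_code == INCORRECT_CLTV_EXPIRY {
        // The final-hop variant of the error carries no `channel_update`.
        FINAL_INCORRECT_CLTV_EXPIRY
    } else if proposed_code & UPDATE != 0 {
        // Any other error requiring a `channel_update` gets swapped for a
        // generic node-level failure.
        TEMPORARY_NODE_FAILURE
    } else {
        proposed_code
    }
}
```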
The spec mandates that we copy the `sha256_of_onion` field
from the `UpdateFailMalformedHTLC` message into the error message
we send back to the sender; however, we simply ignored it. Here we
copy it into the message correctly.
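A minimal sketch of the idea, using simplified stand-in types rather than the real LDK message structs:

```rust
// Illustrative sketch only: simplified stand-ins for the real message and
// failure types.
struct UpdateFailMalformedHtlc {
    failure_code: u16,
    sha256_of_onion: [u8; 32],
}

struct RelayedFailure {
    failure_code: u16,
    data: Vec<u8>,
}

// When relaying a malformed-HTLC failure back toward the sender, the onion
// hash from the downstream message must be carried along rather than dropped.
fn relay_malformed_failure(msg: &UpdateFailMalformedHtlc) -> RelayedFailure {
    RelayedFailure {
        failure_code: msg.failure_code,
        data: msg.sha256_of_onion.to_vec(),
    }
}
```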
This replaces `final_expiry_too_soon` with
`incorrect_or_unknown_payment_details`, as was done in
https://github.com/lightning/bolts/pull/608. Note that the
rationale for this (that it may expose whether or not you are the
final recipient of the payment) does not currently apply to us -
we don't apply different final CLTV values to different payments.
However, we might in the future, and this makes us slightly
more consistent with other nodes.
Like the previous commit, here we update the `update_fee`+commit
logic to simply push the fee update into the holding cell and then
use the standard holding-cell-freeing codepaths to actually send
the commitment update. This removes a substantial amount of code,
reducing redundant codepaths and keeping channel state machine
logic in `channel.rs`.
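Conceptually, the `update_fee` path now just records the pending feerate (an illustrative sketch, assuming a holding-cell fee field along the lines of the real one):

```rust
// Illustrative sketch only: the fee update is merely recorded here; the
// actual `update_fee` + `commitment_signed` go out whenever the holding cell
// is next freed, via the same path as any other queued update.
struct Channel {
    holding_cell_update_fee: Option<u32>,
}

impl Channel {
    fn queue_update_fee(&mut self, feerate_per_kw: u32) {
        // A later queued feerate simply replaces an earlier, not-yet-sent one.
        self.holding_cell_update_fee = Some(feerate_per_kw);
    }
}
```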
We currently free the channel holding cells in
`get_and_clear_pending_msg_events`, blocking outbound messages
while we do so. This is fine, but it may block the message pipeline
longer than we need to. In the next commit we'll push
timer-originating channel fee updates out through the holding cell
pipeline, leaning more heavily on that freeing going forward.
Thus, to avoid a regression in message latency, here we free the
holding cells right after processing all timer events. This also
avoids needing to change tests in the next commit.
When we batch HTLC updates, we currently do the explicit queueing
plus the commitment generation in the `ChannelManager`. This is a
bit strange, as it's ultimately really a `Channel` responsibility to
generate commitments at the correct time; the abstraction leaks
into `ChannelManager`, with the `send_htlc` and
`get_update_fail_htlc` method docs carrying clear warnings that
`send_commitment` MUST be called prior to calling other
`Channel` methods.
Luckily `Channel` already has an update queue - the holding cell.
Thus, we can trivially rewrite the batch update logic as inserting
the desired updates into the holding cell and then asking all
channels to clear their holding cells.
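A sketch of the pattern, with simplified stand-ins for the real `Channel` internals:

```rust
// Illustrative sketch only: a channel-level update queue (the "holding
// cell"). Batch operations just push entries; a later single pass frees the
// cell and builds one commitment update covering everything queued.
enum HoldingCellUpdate {
    AddHtlc { payment_hash: [u8; 32], amount_msat: u64 },
    FailHtlc { htlc_id: u64, err_packet: Vec<u8> },
    ClaimHtlc { htlc_id: u64, payment_preimage: [u8; 32] },
}

struct Channel {
    holding_cell_htlc_updates: Vec<HoldingCellUpdate>,
}

impl Channel {
    // Queue a failure without generating a commitment update yet.
    fn queue_fail_htlc(&mut self, htlc_id: u64, err_packet: Vec<u8>) {
        self.holding_cell_htlc_updates.push(HoldingCellUpdate::FailHtlc { htlc_id, err_packet });
    }

    // Drain the queue; in the real code this is where a single commitment
    // update is built covering all queued changes.
    fn maybe_free_holding_cell(&mut self) -> Vec<HoldingCellUpdate> {
        std::mem::take(&mut self.holding_cell_htlc_updates)
    }
}
```

With this shape, `ChannelManager` only needs to ask each channel to free its holding cell at an appropriate time, rather than tracking itself when commitments must be generated.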
When a channel is force-closed, if a `ChannelMonitor` update has
been completed but a `ChannelManager` persist has not yet happened,
HTLCs which were removed in the latest (persisted) `ChannelMonitor`
update will not be failed back, even though they do not appear in
the commitment transaction which went on chain. This is because the
`ChannelManager` (which is stale) thinks the `ChannelMonitor` is
responsible for them, but the `ChannelMonitor` (which is not stale)
has no knowledge of the HTLCs at all.
The fix for this is relatively simple - we need to check for this
specific case and fail back such HTLCs when deserializing a
`ChannelManager`.
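The check amounts to something like the following (an illustrative sketch; the real code works over the deserialized channels and monitors rather than plain maps):

```rust
use std::collections::{HashMap, HashSet};

// Illustrative sketch only: on `ChannelManager` deserialization, any HTLC the
// (stale) manager still tracks for a closed channel but which the (newer)
// `ChannelMonitor` no longer knows about can never be claimed or failed on
// chain, so it must be failed back immediately.
fn htlcs_to_fail_on_startup(
    manager_htlcs: &HashMap<[u8; 32], Vec<[u8; 32]>>,     // channel id -> HTLC hashes the manager tracks
    monitor_htlcs: &HashMap<[u8; 32], HashSet<[u8; 32]>>, // channel id -> HTLC hashes the monitor tracks
) -> Vec<([u8; 32], [u8; 32])> {
    let mut to_fail = Vec::new();
    for (chan_id, htlcs) in manager_htlcs {
        let monitored = monitor_htlcs.get(chan_id);
        for hash in htlcs {
            if monitored.map_or(true, |set| !set.contains(hash)) {
                to_fail.push((*chan_id, *hash));
            }
        }
    }
    to_fail
}
```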
If, after forwarding a payment to our counterparty, we restart with
a `ChannelMonitor` update having been persisted, but the
corresponding `ChannelManager` update not persisted, we'll still
have the forwarded HTLC in the `forward_htlcs` map on startup. This
will cause us to generate a (spurious) `PendingHTLCsForwardable`
event. However, when we go to forward said HTLC, we'll notice the
channel has been closed and leave it up to the `ChannelMonitor` to
finalize the HTLC.
This is all fine today - we won't lose any funds, we'll just
generate an excess forwardable event and then fail to forward.
However, in the future, when we allow for forward-time channel
changes, this could break. Thus, it's worth adding tests for this
behavior today and, while we're at it, removing the spurious
forwardable-HTLCs event.
Soon we're going to need to return an error when `ChannelManager` is unable to
find a route, so we'll need a way to distinguish between that and the user
supplying an invalid route.
Currently we loop over `htlcs_to_fail`, locking `channel_state` for each
element, only to call `get_htlc_inbound_temp_fail_err_and_data` with the
same inputs on each iteration. This is unnecessary; we can refactor and
call `get_htlc_inbound_temp_fail_err_and_data` once, outside of the loop.
Currently `fail_htlc_backwards_internal` takes ownership of its source
and reason parameters; however, they are not consumed, so we can borrow
them instead. This includes refactoring call sites to use local variables
before the function call.
We create `HTLCFailReason` inline in function calls in a bunch of places
in the `channelmanager` module; we can make the code more terse, with no
loss of clarity, by implementing a couple of constructor methods.
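Something along the lines of the following (a simplified sketch; the real `HTLCFailReason` has more variants and fields):

```rust
// Illustrative sketch only: a simplified version of the failure-reason type
// and the constructors that tidy up the call sites.
enum HTLCFailReason {
    Reason { failure_code: u16, data: Vec<u8> },
}

impl HTLCFailReason {
    fn reason(failure_code: u16, data: Vec<u8>) -> Self {
        Self::Reason { failure_code, data }
    }

    fn from_failure_code(failure_code: u16) -> Self {
        Self::Reason { failure_code, data: Vec::new() }
    }
}
```

Call sites can then use, e.g., `HTLCFailReason::from_failure_code(0x4000 | 10)` rather than spelling out the variant and an empty data vector each time.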
See `ChannelManager::forward_intercepted_htlc` and
`ChannelManager::get_intercept_scid` for details.
Co-authored-by: John Cantrell <johncantrell97@gmail.com>
Co-authored-by: Valentine Wallace <vwallace@protonmail.com>
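The intended flow, sketched with simplified stand-in types (the real API lives on `ChannelManager`, per the methods referenced above):

```rust
// Illustrative sketch only: simplified stand-ins for the intercept flow, not
// the exact LDK event or method signatures.
struct InterceptedHtlc {
    intercept_id: u64,
    requested_next_hop_scid: u64, // an scid previously handed out for interception
    expected_outbound_amount_msat: u64,
}

trait HtlcInterceptor {
    // Forward the held HTLC over a channel of our choosing...
    fn forward_intercepted_htlc(&self, intercept_id: u64, channel_id: [u8; 32], amt_msat: u64);
    // ...or give up and fail it back toward the sender.
    fn fail_intercepted_htlc(&self, intercept_id: u64);
}

fn handle_intercept<H: HtlcInterceptor>(
    node: &H,
    htlc: InterceptedHtlc,
    outbound_channel: Option<[u8; 32]>,
) {
    match outbound_channel {
        Some(channel_id) => node.forward_intercepted_htlc(
            htlc.intercept_id, channel_id, htlc.expected_outbound_amount_msat),
        None => node.fail_intercepted_htlc(htlc.intercept_id),
    }
}
```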
And store the pending intercepted HTLC in `pending_intercepted_htlcs`
Co-authored-by: John Cantrell <johncantrell97@gmail.com>
Co-authored-by: Valentine Wallace <vwallace@protonmail.com>
No HTLCs are intercepted yet; that will be added in upcoming commit(s)
Co-authored-by: John Cantrell <johncantrell97@gmail.com>
Co-authored-by: Valentine Wallace <vwallace@protonmail.com>
In the Windows build only, the
`TestPersister::chain_sync_monitor_persistences` lock has a lock order
requiring it to be acquired before the `ChannelManager::per_peer_state`
lock. This fix ensures that the `per_peer_state` lock isn't held when
the `TestPersister::chain_sync_monitor_persistences` lock is acquired.
As the `channel_state` lock will be removed, we prepare for that by
flipping the lock order of the `pending_inbound_payments` and
`pending_outbound_payments` locks to come before the `channel_state` lock.