We want to move all outbound-payment-related logic into this new
module to help break up ChannelManager, so that future payment-retry
work doesn't further increase ChannelManager's size.
This change follows the rationale of commit 62236c7 and addresses the
last remaining redundant local commitment broadcast.
There's no need to broadcast our local commitment transaction if we've
already seen a confirmed one as it'll be immediately rejected as a
duplicate/conflict.
This will also help prevent dispatching spurious events for bumping
commitment and HTLC transactions through anchor outputs since the
dispatch for said events follows the same flow as our usual commitment
broadcast.
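
As a rough sketch of the guard this adds (the field and function names
below are stand-ins, not the actual `ChannelMonitor` internals):

    // Hypothetical, simplified view of the monitor state involved.
    struct MonitorState {
        // Set once some commitment transaction spending the funding output has confirmed.
        funding_spend_confirmed: Option<[u8; 32]>, // txid of the confirmed spend
        latest_holder_commitment_tx: Vec<u8>,      // serialized tx, stand-in type
    }

    fn maybe_broadcast_holder_commitment(state: &MonitorState, broadcast: impl Fn(&[u8])) {
        // If a commitment has already confirmed, broadcasting ours would be
        // rejected as a duplicate/conflict, so skip it (and the anchor bump
        // events that would otherwise be dispatched along the same path).
        if state.funding_spend_confirmed.is_some() {
            return;
        }
        broadcast(state.latest_holder_commitment_tx.as_slice());
    }
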
If, after forwarding an intercepted payment to our counterparty, we
restart with a ChannelMonitor update having been persisted, but the
corresponding ChannelManager update not having been persisted,
we'll still have the intercepted HTLC in the
`pending_intercepted_htlcs` map on start (and potentially a pending
`HTLCIntercepted` event). This will cause us to allow the user to
handle the forwarded HTLC twice, potentially double-forwarding it.
This builds on 0bb87ddad7, which
provided a preemptive fix for the general relay case (though it was
not an actual issue at the time). We simply check for the HTLCs
having been forwarded on startup and remove them from the map.
Fixes #1858
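
A sketch of the startup pruning this describes, with stand-in types in
place of the real `ChannelManager` internals (the actual check
reconstructs which intercepts were already forwarded from the data
available at read time):

    use std::collections::{HashMap, HashSet};

    // Stand-ins; only the fields needed for the sketch.
    type InterceptId = [u8; 32];
    struct PendingInterceptedHtlc { forwarded_htlc_id: u64 }

    /// On deserialization, drop any intercepted HTLCs which were already
    /// forwarded to the counterparty, so we never hand the same HTLC to the
    /// user (or an `HTLCIntercepted` event) twice.
    fn prune_forwarded_intercepts(
        pending_intercepted_htlcs: &mut HashMap<InterceptId, PendingInterceptedHtlc>,
        already_forwarded: &HashSet<u64>,
    ) {
        pending_intercepted_htlcs
            .retain(|_, htlc| !already_forwarded.contains(&htlc.forwarded_htlc_id));
    }
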
If a user calls `abandon_payment`, then restarts without freshly
persisting the `ChannelManager`, the payment will still be pending
on restart. This was unclear from the docs (and the docs seemed to
imply otherwise). Because this doesn't materially impact the
usability of `abandon_payment` (users shouldn't be calling
`retry_payment` on an abandoned payment anyway), we simply document it.
Fixes #1804.
Define an interface for BOLT 12 `invoice_request` messages. The
underlying format consists of the original bytes and the parsed
contents.
The bytes are later needed when constructing an `invoice` message. This
is because it must mirror all the `offer` and `invoice_request` TLV
records, including unknown ones, which aren't represented in the
contents.
The contents will be used in `invoice` messages to avoid duplication.
Some fields, while required in a typical user-pays-merchant flow, may
not be necessary in the merchant-pays-user flow (e.g., refund, ATM).
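
A sketch of the shape this describes (field names in the contents are
illustrative, not necessarily the exact ones used):

    /// An `invoice_request` message: the raw TLV bytes plus the parsed contents.
    /// The bytes are kept so a later `invoice` can mirror every `offer` and
    /// `invoice_request` TLV record, including ones we didn't understand.
    struct InvoiceRequest {
        bytes: Vec<u8>,
        contents: InvoiceRequestContents,
    }

    /// Parsed, semantic view of the message, reused when building an `invoice`
    /// to avoid duplication. Fields shown here are illustrative.
    struct InvoiceRequestContents {
        payer_id: [u8; 33],        // the payer's public key
        amount_msats: Option<u64>, // may be absent, e.g. in merchant-pays-user flows
        features: Vec<u8>,         // invoice_request feature bits
        payer_note: Option<String>,
    }
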
Currently `claim_funds` and `claim_funds_internal` call
`claim_funds_from_hop` and then surface an `Event` to the user
informing them of the forwarded/claimed payment based on its
result. In both places we assume that a claim "completed" even if
a monitor update is being done async.
Instead, here we push that event generation through a
`MonitorUpdateCompletionAction` and a call to
`handle_monitor_update_completion_action`. This will allow us to
hold the event(s) until async monitor updates complete in the
future.
When `claim_funds` has to claim multiple HTLCs as a part of a
single MPP payment, it currently does so holding the
`channel_state` lock for the entire duration of the claim loop.
Here we swap that for taking the lock once for each HTLC. This
allows us to be more flexible with locks going forward, and
ultimately isn't a huge change - if our counterparty intends to
force-close a channel, us choosing to ignore it by holding the
`channel_state` lock for the duration of the claim isn't going to
result in a commitment update, it will just result in the preimage
already being in the `ChannelMonitor`.
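
A simplified sketch of the locking change, with a plain `Mutex`
standing in for `channel_state` and hypothetical helper names:

    use std::sync::Mutex;

    struct ChannelState { /* channels, pending messages, ... */ }

    fn claim_mpp_parts(
        channel_state: &Mutex<ChannelState>,
        htlc_ids: &[u64],
        claim_one: impl Fn(&mut ChannelState, u64),
    ) {
        // Before: one lock held across the whole loop. After: take and release
        // the lock once per HTLC, so other processing can interleave between
        // individual claims.
        for htlc_id in htlc_ids {
            let mut state = channel_state.lock().unwrap();
            claim_one(&mut *state, *htlc_id);
            // lock guard dropped here, at the end of each iteration
        }
    }
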
Currently `claim_funds` does all HTLC claims in one `channel_state`
lock, ensuring that we always make claims from channels which are
open. It can thus avoid ever having to generate a
`ChannelMonitorUpdate` containing a preimage for a closed channel,
which we only do in `claim_funds_internal` (for forwarded payments).
In the next commit we'll change the locking of
`claim_funds_from_hop` so that `claim_funds` is no longer under a
single lock but takes a lock for each claim. This allows us to be
more flexible with locks going forward, and ultimately isn't a huge
change - if our counterparty intends to force-close a channel, us
choosing to ignore it by holding the `channel_state` lock for the
duration of the claim isn't going to result in a commitment update,
it will just result in the preimage already being in the
`ChannelMonitor`.
This adds a new enum, `MonitorUpdateCompletionAction`, and a method
to execute the "actions". They are intended to be run once a
(potentially-async) `ChannelMonitorUpdate` persistence completes;
however, that behavior will be implemented in a future PR. For now,
this adds the relevant infrastructure which will allow us to
prepare `claim_funds` for better monitor async handling.
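
Roughly the shape of the new infrastructure; the variant and field
names below are illustrative rather than the exact ones added:

    // Illustrative stand-ins for real LDK types.
    struct PaymentHash([u8; 32]);
    enum Event { PaymentClaimed { payment_hash: PaymentHash, amount_msat: u64 } }

    /// Something to do once a (potentially-async) ChannelMonitorUpdate
    /// persistence completes.
    enum MonitorUpdateCompletionAction {
        /// Surface a `PaymentClaimed`-style event for a payment we claimed.
        PaymentClaimed { payment_hash: PaymentHash, amount_msat: u64 },
        /// Surface some other queued event to the user.
        EmitEvent { event: Event },
    }

    fn handle_monitor_update_completion_actions(
        actions: Vec<MonitorUpdateCompletionAction>,
        pending_events: &mut Vec<Event>,
    ) {
        for action in actions {
            match action {
                MonitorUpdateCompletionAction::PaymentClaimed { payment_hash, amount_msat } =>
                    pending_events.push(Event::PaymentClaimed { payment_hash, amount_msat }),
                MonitorUpdateCompletionAction::EmitEvent { event } =>
                    pending_events.push(event),
            }
        }
    }
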
In the next commits we'll move to generating `PaymentClaimed`
events while handling `ChannelMonitorUpdate`s rather than directly
in line. Thus, as a prerequisite, here we move to storing the info
required to generate the `PaymentClaimed` event in a separate map.
Note that while this does introduce a new map which is written as
an even value that users cannot opt out of, the map is only filled
in when users use asynchronous `ChannelMonitor` updates, and only
after a future PR. As these are still considered beta, breaking
downgrades for such users is considered acceptable in the future PR
(which will likely be one LDK version later).
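
A sketch of the separate map, with illustrative names; the real entry
stores whatever is needed to later build the `PaymentClaimed` event:

    use std::collections::HashMap;

    // Illustrative stand-ins.
    #[derive(Clone, Copy, PartialEq, Eq, Hash)]
    struct PaymentHash([u8; 32]);
    enum PaymentPurpose { InvoicePayment { payment_secret: [u8; 32] }, SpontaneousPayment }

    /// Everything needed to generate a `PaymentClaimed` event once the
    /// corresponding ChannelMonitorUpdate(s) complete.
    struct ClaimingPayment {
        amount_msat: u64,
        payment_purpose: PaymentPurpose,
    }

    /// Keyed by payment hash; serialized as an even (required) value, but only
    /// populated for users of async ChannelMonitor updates, and only after the
    /// follow-up PR described above.
    type PendingClaimedPayments = HashMap<PaymentHash, ClaimingPayment>;
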
If we try to send any onion error with the `UPDATE` flag in
response to a phantom receipt, we should always swap it for
something generic that doesn't require a `channel_update` in it.
Here we use `temporary_node_failure`.
Test provided by Valentine Wallace <vwallace@protonmail.com>
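
A minimal sketch of the swap; the BOLT 4 flag and code constants are
standard, while the function itself is illustrative:

    // BOLT 4 failure-code flags and codes.
    const UPDATE: u16 = 0x1000;
    const NODE: u16 = 0x2000;
    const TEMPORARY_NODE_FAILURE: u16 = NODE | 2;

    /// For failures on a phantom receive there is no real channel, so we can
    /// never attach the `channel_update` that UPDATE-flagged errors require.
    /// Swap any such error for the generic `temporary_node_failure`.
    fn phantom_failure(failure_code: u16, data: Vec<u8>) -> (u16, Vec<u8>) {
        if failure_code & UPDATE != 0 {
            // temporary_node_failure carries no data.
            (TEMPORARY_NODE_FAILURE, Vec::new())
        } else {
            (failure_code, data)
        }
    }
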
When we receive a phantom HTLC with a bogus/modified CLTV, we
should fail back with `incorrect_cltv_expiry`, but that requires a
`channel_update`, which we cannot generate for a phantom HTLC which
has no corresponding channel. Thus, instead, we have to fall back
to a generic error which doesn't require a `channel_update`.
Fixes #1879
When we're constructing an `HTLCFailReason`, we should check that the
data we set is at least the required length for the given failure
code. We add that check here.
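
A sketch of the kind of check meant here, spelled out for a few BOLT 4
codes whose failure data has a fixed-size prefix (the real check covers
every code we construct):

    /// Minimum expected `data` length for a few BOLT 4 failure codes
    /// (illustrative subset).
    fn min_data_len(failure_code: u16) -> usize {
        match failure_code {
            18 => 4,       // final_incorrect_cltv_expiry: cltv_expiry (u32)
            19 => 8,       // final_incorrect_htlc_amount: incoming_htlc_amt (u64)
            0x400f => 12,  // PERM|15 incorrect_or_unknown_payment_details: htlc_msat (u64) + height (u32)
            0x100d => 4,   // UPDATE|13 incorrect_cltv_expiry: cltv_expiry (u32), then a channel_update
            _ => 0,
        }
    }

    fn debug_check_failure_data(failure_code: u16, data: &[u8]) {
        debug_assert!(data.len() >= min_data_len(failure_code),
            "onion failure data too short for code {}", failure_code);
    }
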
The spec mandates that we copy the `sha256_hash_of_onion` field
from the `UpdateFailMalformedHTLC` message into the error message
we send back to the sender; however, we simply ignored it. Here we
copy it into the message correctly.
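
In effect, the relayed error for a malformed HTLC is the peer's failure
code with the 32-byte `sha256_of_onion` as its data; a sketch
(struct and function names illustrative):

    /// The relevant fields of the peer's `update_fail_malformed_htlc`.
    struct UpdateFailMalformedHTLC {
        sha256_of_onion: [u8; 32],
        failure_code: u16, // must have the BADONION bit set
    }

    /// The (code, data) we relay back towards the payment's origin. Per the
    /// spec the data must carry the peer's `sha256_of_onion`, which we
    /// previously dropped.
    fn relay_malformed_failure(msg: &UpdateFailMalformedHTLC) -> (u16, Vec<u8>) {
        (msg.failure_code, msg.sha256_of_onion.to_vec())
    }
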
This replaces `final_expiry_too_soon` with
`incorrect_or_unknown_payment` as was done in
https://github.com/lightning/bolts/pull/608. Note that the
rationale for this (that it may expose whether you are the final
recipient for the payment or not) does not currently apply to us -
we don't apply different final CLTV values to different payments.
However, we might in the future, and this will make us slightly
more consistent with other nodes.
Now that `HTLCFailReason` is opaque and in `onion_utils`, we should
encapsulate it so that `ChannelManager` can no longer directly
access its inner fields.
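
A sketch of what the encapsulation looks like: constructors instead of
reaching into fields (the internal representation shown is illustrative):

    /// Opaque failure reason owned by onion_utils; ChannelManager constructs
    /// one via these methods but no longer touches its internals.
    pub struct HTLCFailReason(HTLCFailReasonRepr);

    enum HTLCFailReasonRepr {
        Reason { failure_code: u16, data: Vec<u8> },
        ErrorPacket { err_packet: Vec<u8> }, // a pre-built onion error packet
    }

    impl HTLCFailReason {
        pub fn reason(failure_code: u16, data: Vec<u8>) -> Self {
            Self(HTLCFailReasonRepr::Reason { failure_code, data })
        }
        pub fn from_failure_code(failure_code: u16) -> Self {
            Self::reason(failure_code, Vec::new())
        }
    }
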
Like the previous commit, here we update the update_fee+commit
logic to simply push the fee update into the holding cell and then
use the standard holding-cell-freeing codepaths to actually send
the commitment update. This removes a substantial amount of code,
reducing redundant codepaths and keeping channel state machine
logic in channel.rs.
We currently free the channel holding cells in
`get_and_clear_pending_msg_events`, blocking outbound messages
while we do so. This is fine, but may block the message pipeline
longer than we need to. In the next commit we'll push
timer-originating channel fee updates out through the holding cell
pipeline, leaning more heavily on that holding-cell-freeing step.
Thus, to avoid a regression in message latency, here we clear the
holding cells after processing all timer events. This also avoids
needing to change tests in the next commit.
When we batch HTLC updates, we currently do the explicit queueing
plus the commitment generation in the `ChannelManager`. This is a
bit strange, as it's ultimately a `Channel` responsibility to
generate commitments at the correct time, with the abstraction
leaking into `ChannelManager`: the `send_htlc` and
`get_update_fail_htlc` method docs contain clear comments about
how `send_commitment` MUST be called prior to calling other
`Channel` methods.
Luckily `Channel` already has an update queue - the holding cell.
Thus, we can trivially rewrite the batch update logic as inserting
the desired updates into the holding cell and then asking all
channels to clear their holding cells.
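
A sketch of the resulting flow, with a heavily simplified `Channel`
and hypothetical method names:

    use std::collections::HashMap;

    // The holding cell, reduced to a queue of pending updates.
    enum HoldingCellUpdate { FailHTLC { htlc_id: u64 }, FulfillHTLC { htlc_id: u64, preimage: [u8; 32] } }

    struct Channel { holding_cell: Vec<HoldingCellUpdate> }

    impl Channel {
        /// Queue an update; the Channel decides when to commit it.
        fn queue_update(&mut self, update: HoldingCellUpdate) {
            self.holding_cell.push(update);
        }
        /// Drain the holding cell, yielding the batch to build one commitment from.
        fn maybe_free_holding_cell(&mut self) -> Option<Vec<HoldingCellUpdate>> {
            if self.holding_cell.is_empty() { None } else { Some(self.holding_cell.drain(..).collect()) }
        }
    }

    /// ChannelManager-side batching: push every update into the relevant
    /// channel's holding cell, then ask each channel to free its cell once.
    fn batch_updates(channels: &mut HashMap<u64, Channel>, updates: Vec<(u64, HoldingCellUpdate)>) {
        for (channel_id, update) in updates {
            if let Some(chan) = channels.get_mut(&channel_id) { chan.queue_update(update); }
        }
        for chan in channels.values_mut() {
            let _batch_for_one_commitment = chan.maybe_free_holding_cell();
        }
    }
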
When a channel is force-closed, if a `ChannelMonitor` update is
completed but a `ChannelManager` persist has not yet happened,
HTLCs which were removed in the latest (persisted) `ChannelMonitor`
update will not be failed even though they do not appear in the
commitment transaction which went on chain. This is because the
`ChannelManager` thinks the `ChannelMonitor` is responsible for
them (as it is stale), but the `ChannelMonitor` has no knowledge of
the HTLC at all (as it is not stale).
The fix for this is relatively simple - we need to check for this
specific case and fail back such HTLCs when deserializing a
`ChannelManager`.
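
A sketch of the deserialization-time check, using stand-in types; the
real code compares the manager's idea of in-flight HTLCs against what
each `ChannelMonitor` is actually tracking:

    use std::collections::HashSet;

    type HTLCId = u64;

    /// HTLCs the (stale) ChannelManager believes are pending on a force-closed
    /// channel, minus those the (fresher) ChannelMonitor knows about. Anything
    /// left over will never resolve on chain, so it must be failed back now.
    fn htlcs_to_fail_on_startup(
        manager_pending_htlcs: &HashSet<HTLCId>,
        monitor_known_htlcs: &HashSet<HTLCId>,
    ) -> Vec<HTLCId> {
        manager_pending_htlcs
            .difference(monitor_known_htlcs)
            .copied()
            .collect()
    }
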
If, after forwarding a payment to our counterparty, we restart with
a ChannelMonitor update having been persisted, but the
corresponding ChannelManager update was not persisted, we'll still
have the forwarded HTLC in the `forward_htlcs` map on start. This
will cause us to generate a (spurious) `PendingHTLCsForwardable`
event. However, when we go to forward said HTLC, we'll notice the
channel has been closed and leave it up to the `ChannelMonitor` to
finalize the HTLC.
This is all fine today - we won't lose any funds, we'll just
generate an excess forwardable event and then fail to forward.
However, in the future when we allow for forward-time channel
changes this could break. Thus, it's worth adding tests for this
behavior today, and, while we're at it, removing the spurious
forwardable HTLCs event.
Since `ChannelMonitor`s will now re-derive signers rather than
persisting them, we can no longer use the `OnlyReadsKeysInterface`
concrete implementation.
To do so, we introduce a new serialization version that doesn't store a
channel's signer, and instead stores its signer's `channel_keys_id`.
This is a unique identifier that can be provided to our `KeysInterface`
to re-derive all private key material for said channel.
We choose to not upgrade the minimum compatible serialization version
until a later time, which will also remove any signer serialization
logic on implementations of `KeysInterface` and `Sign`.
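
Sketched (de)serialization shape under the new version; the trait and
exact parameters below are stand-ins for the real `KeysInterface`
surface:

    // Stand-in for the relevant part of the keys interface.
    trait KeysInterface {
        type Signer;
        /// Re-derive all private key material for a channel from its stable id.
        fn derive_channel_signer(&self, channel_value_satoshis: u64, channel_keys_id: [u8; 32]) -> Self::Signer;
    }

    struct ChannelFields { channel_value_satoshis: u64, channel_keys_id: [u8; 32] }

    /// New version: persist only the 32-byte `channel_keys_id`; the signer
    /// itself is no longer written out.
    fn write_signer_fields(fields: &ChannelFields, out: &mut Vec<u8>) {
        out.extend_from_slice(&fields.channel_keys_id);
    }

    /// On read, the signer is re-derived from the id instead of deserialized.
    fn read_signer<K: KeysInterface>(keys: &K, fields: &ChannelFields) -> K::Signer {
        keys.derive_channel_signer(fields.channel_value_satoshis, fields.channel_keys_id)
    }
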
Now that `ready_channel` is also called on startup when deserializing
channels, we rename it to `provide_channel_parameters`, a more indicative name.
We also derive `PartialEq` on ChannelTransactionParameters to allow
implementations to determine whether `provide_channel_parameters` calls
are idempotent after the channel parameters have already been provided.
`get_channel_signer` previously had two different responsibilities:
generating a unique `channel_keys_id` and using said ID to derive channel
keys. We split it into two methods, `generate_channel_keys_id`
and `derive_channel_signer`, such that we can use the latter to fulfill
our goal of re-deriving signers instead of persisting them. There's no
point in storing data that can be easily re-derived.
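
A sketch of the resulting split; the method names are the ones described
above, while the parameter lists are illustrative:

    // Stand-in for the relevant slice of `KeysInterface`.
    trait KeysInterface {
        type Signer;

        /// Responsibility 1: produce a unique, stable identifier for a new
        /// channel's keys.
        fn generate_channel_keys_id(&self, inbound: bool, channel_value_satoshis: u64, user_channel_id: u128) -> [u8; 32];

        /// Responsibility 2: deterministically derive the channel's signer
        /// from that id, so the signer never needs to be persisted.
        fn derive_channel_signer(&self, channel_value_satoshis: u64, channel_keys_id: [u8; 32]) -> Self::Signer;
    }
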