While the HTLC-claim process happens across all MPP parts under one
lock, this doesn't imply that they are claimed fully atomically on
disk. Ultimately, an application can crash after persisting one
`ChannelMonitorUpdate` out of multiple monitor updates needed for
the full claim.
Previously, this would leave us in a very bad state - because of
the all-channels-available check in `claim_funds` we'd refuse to
claim the payment again on restart (even though the
`PaymentReceived` event will be passed to the user again), and we'd
end up having partially claimed the payment!
The fix for the consistency part of this issue is pretty
straightforward - just check for this condition on startup and
complete the claim across all channels/`ChannelMonitor`s if we
detect it.
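A conceptual sketch of that startup pass, using hypothetical types
in place of LDK's internals:

```rust
use std::collections::HashMap;

// Hypothetical stand-in for one MPP part's ChannelMonitor state;
// LDK's real types and claim plumbing differ.
struct PartState { has_preimage: bool }

/// If any part of an MPP payment already carries the preimage,
/// replay it into the monitors of every other part so the claim
/// completes across all channels.
fn complete_partial_claims(claims: &mut HashMap<[u8; 32], Vec<PartState>>) {
    for (_payment_hash, parts) in claims.iter_mut() {
        if parts.iter().any(|p| p.has_preimage) {
            for part in parts.iter_mut().filter(|p| !p.has_preimage) {
                part.has_preimage = true; // re-apply the claim on startup
            }
        }
    }
}
```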
This still leaves us in a confused state from the perspective of
the user, however - we've actually claimed a payment but when they
call `claim_funds` we return `false` indicating it could not be
claimed.
The `ChannelMonitor` had a field for the counterparty's
`cur_revocation_points`. Somewhat confusingly, this actually stored
the counterparty's *per-commitment* points, not the (derived)
revocation points.
Here we correct this by simply renaming the references as
appropriate. Note the update in `channel.rs` makes the variable
names align correctly.
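For context, a sketch of the BOLT 3 derivation that distinguishes
the two (written against recent `bitcoin`/`secp256k1` crate APIs;
LDK wraps this in its `chan_utils` helpers):

```rust
use bitcoin::hashes::{sha256, Hash, HashEngine};
use secp256k1::{PublicKey, Scalar, Secp256k1, Verification};

/// The revocation pubkey is *derived* from the counterparty's
/// per-commitment point plus our revocation basepoint - the monitor
/// itself only ever stores the per-commitment point.
fn derive_revocation_pubkey<C: Verification>(
    secp: &Secp256k1<C>,
    revocation_basepoint: &PublicKey,
    per_commitment_point: &PublicKey,
) -> PublicKey {
    let tweak = |a: &PublicKey, b: &PublicKey| {
        let mut eng = sha256::Hash::engine();
        eng.input(&a.serialize());
        eng.input(&b.serialize());
        let hash = sha256::Hash::from_engine(eng).to_byte_array();
        // An out-of-range hash is astronomically unlikely.
        Scalar::from_be_bytes(hash).unwrap()
    };
    let base = revocation_basepoint
        .mul_tweak(secp, &tweak(revocation_basepoint, per_commitment_point))
        .unwrap();
    let point = per_commitment_point
        .mul_tweak(secp, &tweak(per_commitment_point, revocation_basepoint))
        .unwrap();
    base.combine(&point).unwrap()
}
```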
In `HTLCUpdate` and `OnchainEvent` tracking, we store the HTLC
value (rounded down to whole satoshis). This is somewhat
confusingly referred to as the `onchain_value_satoshis` even though
it refers to the commitment transaction output value, not the value
available on chain (which may have been reduced by an
HTLC-Timeout/HTLC-Success transaction).
Several fields used in tracking on-chain HTLC outputs were
named `input_idx` despite referring to the output index in the
commitment transaction. Here they are all renamed
`commitment_tx_output_idx` for clarity.
This update also includes a minor refactor. The return type of
`pending_monitor_events` has been changed to a `Vec` of tuples,
pairing each funding `OutPoint` with the `Vec` of `MonitorEvent`s
generated for that channel.
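A sketch of the new shape (the trait below is hypothetical and
exists only to illustrate the return type; the imports are LDK's
real types):

```rust
use lightning::chain::channelmonitor::MonitorEvent;
use lightning::chain::transaction::OutPoint;

/// Illustration only: each funding `OutPoint` is paired with the
/// `MonitorEvent`s its `ChannelMonitor` generated.
trait PendingMonitorEvents {
    fn pending_monitor_events(&self) -> Vec<(OutPoint, Vec<MonitorEvent>)>;
}
```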
We've also renamed `source/sink_channel_id` to `prev/next_channel_id`
for clarity.
The `chain::Listen` interface provides a block-connection-based
alternative to the `chain::Confirm` interface, which supports
providing transaction data at a time separate from the block
connection time.
For users who are downloading the full headers tree (e.g. from a
node over the Bitcoin P2P protocol) but who are not downloading
full blocks (e.g. because they're using BIP 157/158 filtering),
there is no API that exactly matches their event stream -
`chain::Listen` requires full blocks for each block connection,
while `chain::Confirm` requires breaking each connection event
into two calls.
Given it's incredibly trivial to take a `TransactionData` in
addition to a `Block` in `chain::Listen`, we do so here, adding a
default-implementation `block_connected` which simply creates the
`TransactionData` - which is ultimately what all of the
`chain::Listen` implementations currently do anyway.
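A sketch of the resulting trait shape (names mirror LDK's
`chain::Listen`; exact signatures and `bitcoin` type paths vary by
version):

```rust
use bitcoin::{Block, BlockHeader};
use lightning::chain::transaction::TransactionData;

// Illustrative copy; the real trait is `lightning::chain::Listen`.
trait Listen {
    /// New entry point taking pre-extracted transaction data.
    fn filtered_block_connected(
        &self, header: &BlockHeader, txdata: &TransactionData, height: u32,
    );

    /// Default implementation: build the `TransactionData` from a
    /// full block, which implementors were already doing themselves.
    fn block_connected(&self, block: &Block, height: u32) {
        let txdata: Vec<_> = block.txdata.iter().enumerate().collect();
        self.filtered_block_connected(&block.header, &txdata, height);
    }

    fn block_disconnected(&self, header: &BlockHeader, height: u32);
}
```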
Closes #1128.
When handling the broadcast of a local commitment transaction
(with associated CSV delays prior to spendability), we incorrectly
handled the CSV delays on HTLC transactions. This caused us to miss
spendable outputs for HTLCs which were awaiting a CSV delay.
Further, because of this, we could hit an assertion as
`get_claimable_balances` asserted that HTLCs were resolved after
the funding spend was resolved, which was not true if the HTLC did
not have a CSV delay attached (due to the above bug or due to it
being an HTLC claim by our counterparty).
This fixes both bugs, also converting some assertions to
`debug_assert`s to avoid future issues as balance mis-calculation
is not currently an indication of potential funds loss.
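A minimal sketch of the corrected timing rule (hypothetical
helper, not LDK's API):

```rust
/// For HTLCs on *our own* commitment transaction, the
/// `to_self_delay` CSV starts at confirmation of the second-stage
/// HTLC-Success/HTLC-Timeout transaction - not at confirmation of
/// the commitment transaction, and not at all for HTLC outputs the
/// counterparty claims.
fn htlc_output_spendable_at(
    htlc_tx_conf_height: Option<u32>, // None until the HTLC tx confirms
    to_self_delay: u16,
) -> Option<u32> {
    htlc_tx_conf_height.map(|conf| conf + to_self_delay as u32)
}
```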
Thanks to Cash App for reporting this bug.
To support the feature of generating invoices that can be paid to any of
multiple nodes, a key manager needs to be able to share an inbound_payment_key
and phantom secret key. This is because a phantom payment may be received by
any node participating in the invoice, so all nodes must be able to decrypt the
phantom payment (and therefore must share decryption key(s)) in the act of
pretending to be the phantom node. Thus we add a new `PhantomKeysManager` that
supports these features.
To be more specific, the inbound payment key must be shared because it is used
to decrypt the payment details for verification (LDK avoids storing inbound
payment data by encrypting payment metadata in the payment hash and/or payment
secret).
The phantom secret must be shared because it enables any real node included in the
phantom invoice to decrypt the final layer of the onion packet, since the onion
is encrypted by the sender using the phantom public key provided in the
invoice.
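A hedged construction sketch (constructor arguments follow
`PhantomKeysManager::new` in LDK versions contemporary with this
change; the module path and seed values here are placeholders):

```rust
use lightning::chain::keysinterface::PhantomKeysManager;
use std::time::{SystemTime, UNIX_EPOCH};

fn main() {
    // Placeholders: real deployments load both seeds from secure
    // storage. `cross_node_seed` must be identical on every node
    // named in the phantom invoice; `per_node_seed` stays unique.
    let per_node_seed = [1u8; 32];
    let cross_node_seed = [2u8; 32];
    let now = SystemTime::now().duration_since(UNIX_EPOCH).unwrap();
    let _keys = PhantomKeysManager::new(
        &per_node_seed,
        now.as_secs(),
        now.subsec_nanos(),
        &cross_node_seed,
    );
}
```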
This removes one more place where we directly access the node_id
secret key in `ChannelManager`, slowly marching towards allowing
the node_id secret key to be offline in the signer.
More importantly, it allows more ChannelAnnouncement logic to move
into the `Channel` without having to pass the node secret key
around, avoiding the announcement logic being split across two
files.
Fix build errors
Create script using p2wsh for comparison
Use p2wpkh for generating the payment script
Add spendable_outputs sanity check
Return err in spendable_outputs
Doc updates in keysinterface
We don't expect users to ever change behavior based on the string
contained in a `MonitorUpdateErr`, except to log it, so there's
little reason not to just log it ourselves and return `()` for
errors. We do so here, simplifying the callsite in `ChainMonitor`
as well.
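The pattern, sketched with stand-in types rather than LDK's actual
signatures:

```rust
/// Rather than handing the caller an error wrapping a static string
/// it will only ever log, log at the failure site and return a bare
/// `()`. `eprintln!` stands in for LDK's `log_error!` macro.
fn apply_update(update_is_stale: bool) -> Result<(), ()> {
    if update_is_stale {
        eprintln!("Monitor update failed: update is for a stale channel state");
        return Err(());
    }
    Ok(())
}
```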
Previously, monitor updates were allowed freely even after a
funding-spend transaction confirmed. This would allow a race
condition where we could receive a payment (including the
counterparty revoking their broadcasted state!) and accept it
without recourse as long as the ChannelMonitor receives the block
first and the full commitment update dance occurs after the block
is connected but before the ChannelManager receives it.
Obviously this is an incredibly contrived race given the
counterparty would be risking their full channel balance for it,
but it's worth fixing nonetheless as it makes the potential
ChannelMonitor states simpler to reason about.
The test in this commit also tests the behavior changed in the
previous commit.
`ChannelMonitorUpdate`s may contain multiple updates, including, e.g.,
a payment preimage after a commitment transaction update. While
such updates are generally not generated today, we shouldn't return
early out of the update loop, causing us to miss any updates after
an earlier update fails.
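Sketched with hypothetical types, the fixed loop records the
failure but keeps applying later updates:

```rust
struct Update { stale: bool }

fn apply_one(update: &Update) -> Result<(), ()> {
    if update.stale { Err(()) } else { Ok(()) }
}

/// Apply every update even if an earlier one fails, so e.g. a
/// payment preimage following a failed commitment update is never
/// silently dropped.
fn apply_updates(updates: &[Update]) -> Result<(), ()> {
    let mut ret = Ok(());
    for update in updates {
        if apply_one(update).is_err() {
            ret = Err(());
        }
    }
    ret
}
```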
We currently assume our counterparty is naive and misconfigured and
may force-close a channel to get an HTLC we just forwarded them.
There shouldn't be any reason to do this - we don't have any such
bug, and we shouldn't start by assuming our counterparties are
buggy. Worse, this results in refusing to forward payments today,
failing HTLCs for largely no reason.
Instead, we keep a fairly conservative check, but not one which
will fail HTLC forwarding spuriously - testing only that the HTLC
doesn't expire for a few blocks from now.
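A minimal sketch of the relaxed check (the buffer constant is an
illustrative assumption, not LDK's value):

```rust
/// Reject a just-forwarded HTLC only if it expires within a few
/// blocks of the current tip, rather than assuming the counterparty
/// will force-close to claim it.
const CLTV_REJECT_BUFFER: u32 = 3; // assumption: "a few blocks"

fn accept_forwarded_htlc(htlc_cltv_expiry: u32, current_height: u32) -> bool {
    htlc_cltv_expiry > current_height + CLTV_REJECT_BUFFER
}
```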
Fixes #1114.
I realized on my own node that I don't have any visibility into how
long a monitor or manager persistence call takes, potentially
blocking other operations. This makes such calls much more
visible by adding a log_trace!() print immediately before and
immediately after persistence.
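The shape of the instrumentation, sketched without LDK's logger:

```rust
/// Bracket a persistence call with trace-level prints so slow disk
/// writes show up in logs; `eprintln!` stands in for LDK's
/// `log_trace!` macro.
fn persist_with_trace<F: FnOnce() -> std::io::Result<()>>(
    persist: F,
) -> std::io::Result<()> {
    eprintln!("Persisting ChannelMonitor...");
    let res = persist();
    eprintln!("Done persisting ChannelMonitor");
    res
}
```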
If we go to send a payment, add the HTLC(s) to the channel(s),
commit the ChannelMonitor updates to disk, and then crash, we'll
come back up with no pending payments but HTLC(s) ready to be
claimed/failed.
This makes it rather impractical to write a payment sender/retryer,
as you cannot guarantee atomicity - you cannot guarantee you'll
have retry data persisted even if the HTLC(s) are actually pending.
Because ChannelMonitors are *the* atomically-persisted data in LDK,
we lean on their current HTLC data to figure out which HTLC(s) are a
part of an outbound payment, rebuilding the pending payments list
on reload.
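Conceptually (hypothetical types; the real reload logic lives in
`ChannelManager` deserialization):

```rust
use std::collections::{HashMap, HashSet};

// Stand-ins for a monitor's view of in-flight outbound HTLCs.
struct OutboundHtlc { payment_id: [u8; 32] }
struct Monitor { outbound_htlcs: Vec<OutboundHtlc> }

/// Rebuild the pending-payments map on reload: any outbound HTLC
/// found in a ChannelMonitor but missing from the deserialized
/// manager state becomes a tracked in-flight payment part again.
fn rebuild_pending_payments(
    monitors: &[Monitor],
    known_payments: &HashSet<[u8; 32]>,
) -> HashMap<[u8; 32], usize> {
    let mut pending: HashMap<[u8; 32], usize> = HashMap::new();
    for monitor in monitors {
        for htlc in &monitor.outbound_htlcs {
            if !known_payments.contains(&htlc.payment_id) {
                *pending.entry(htlc.payment_id).or_insert(0) += 1;
            }
        }
    }
    pending
}
```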