bcaba29f92 started returning
pre-built `Route`s from the router in the `chanmon_consistency`
fuzzer. In doing so, it didn't properly fill in the `route_params`
field, which is expected to match the requested parameters. This
causes a debug assertion failure when sending.
Here we fix this by setting the correct `route_params`.
Now that the core features required for `async_signing` are in
place, we can go ahead and expose it publicly (rather than behind a
`cfg`-flag). We still don't have full async support for
`get_per_commitment_point`, but only one case in channel
reconnection remains. The overall logic may still have some
hiccups, but it's been in use in production at a major LDK user for
some time now. Thus, it doesn't really make sense to hide it behind
a `cfg`-flag, even if the feature is only 99% complete. Further, the
new paths exposed are very restricted to signing operations that
run async, so the risk for existing users should be incredibly low.
This consolidates the common `if during_startup { push background event }
else { apply ChannelMonitorUpdate }` pattern by inlining it
in `handle_new_monitor_update`.
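A minimal sketch of that pattern, using stand-in types rather than the actual LDK definitions (the function and `BackgroundEvent` shape here are simplified for illustration):

```rust
// Minimal sketch with stand-in types (not the actual LDK code) of the repeated
// pattern being consolidated into `handle_new_monitor_update`.
struct ChannelMonitorUpdate {
    update_id: u64,
}

enum BackgroundEvent {
    // An update regenerated on startup, to be replayed once startup completes.
    MonitorUpdateRegeneratedOnStartup(ChannelMonitorUpdate),
}

fn handle_new_monitor_update(
    during_startup: bool,
    update: ChannelMonitorUpdate,
    pending_background_events: &mut Vec<BackgroundEvent>,
    apply_update: impl FnOnce(ChannelMonitorUpdate),
) {
    if during_startup {
        // Monitors can't be updated yet, so queue the update as a background event.
        pending_background_events
            .push(BackgroundEvent::MonitorUpdateRegeneratedOnStartup(update));
    } else {
        // Otherwise, apply the ChannelMonitorUpdate immediately.
        apply_update(update);
    }
}
```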
One of the largest gaps in our async persistence functionality has
been preimage (claim) updates to closed channels. Here we finally
implement support for this (for updates which are generated during
startup).
Thanks to all the work we've built up over the past many commits,
this is a fairly straightforward patch, removing the
immediate-completion logic from `claim_mpp_part` and adding the
required in-flight tracking logic to
`apply_post_close_monitor_update`.
Like in the during-runtime case in the previous commit, we sadly
can't use the `handle_new_monitor_update` macro wholesale, as it also
handles `Channel` resumption, which we don't do here.
On startup, we walk the preimages and payment HTLC sets on all our
`ChannelMonitor`s, re-claiming all payments which we recently
claimed. This ensures all HTLCs in any claimed payments are claimed
across all channels.
In doing so, we expect to see the same payment multiple times;
after all, it may have been received as multiple HTLCs across
multiple channels. In such cases, there's no reason to redundantly
claim the same set of HTLCs again and again. In the current code,
doing so may lead to redundant `PaymentClaimed` events, and in a
coming commit will instead cause an assertion failure.
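As a rough illustration of that dedup, a minimal sketch with stand-in `PaymentHash`/`PaymentPreimage` types and a hypothetical `claim_payment` callback (not the actual LDK code):

```rust
use std::collections::HashSet;

// Stand-in types for illustration only.
type PaymentHash = [u8; 32];
type PaymentPreimage = [u8; 32];

// Walk the payments found in the ChannelMonitors on startup and re-claim each
// payment only once, even if its HTLCs appear in several monitors.
fn replay_startup_claims(
    monitor_payments: Vec<(PaymentHash, PaymentPreimage)>,
    claim_payment: impl Fn(PaymentHash, PaymentPreimage),
) {
    let mut already_claimed: HashSet<PaymentHash> = HashSet::new();
    for (payment_hash, preimage) in monitor_payments {
        // Skip payments already re-claimed in this pass, avoiding redundant
        // PaymentClaimed events for the same MPP payment.
        if already_claimed.insert(payment_hash) {
            claim_payment(payment_hash, preimage);
        }
    }
}
```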
One of the largest gaps in our async persistence functionality has
been preimage (claim) updates to closed channels. Here we finally
implement support for this (for updates at runtime).
Thanks to all the work we've built up over the past many commits,
this is a well-contained patch within `claim_mpp_part`, pushing
the generated `ChannelMonitorUpdate`s through the same pipeline we
use for open channels.
Sadly we can't use the `handle_new_monitor_update` macro wholesale,
as it also handles `Channel` resumption, which we don't do here.
In d1c340a0e1 we added support in
`handle_new_monitor_update!` for handling updates without dropping
locks.
In the coming commits we'll start handling `ChannelMonitorUpdate`s
"like normal" for updates against closed channels. Here we set up
the first step by adding a new `POST_CHANNEL_CLOSE` variant on
`handle_new_monitor_update!` which attempts to handle the
`ChannelMonitorUpdate` and handles completion actions if it
finishes immediately, just like the pre-close variant.
In c99d3d785d we added a new
`apply_post_close_monitor_update` method which takes a
`ChannelMonitorUpdate` (possibly) for a channel which has been
closed, sets the `update_id` to the right value to keep our updates
well-ordered, and then applies it.
Setting the `update_id` at application time here is fine - updates
don't really have an order after the channel has been closed, they
can be applied in any order - and was done for practical reasons
as calculating the right `update_id` at generation time takes a
bit more work on startup, and was impossible without new
assumptions during claim.
In the previous commit we added exactly the new assumption we need
at claiming (as it's required for the next few commits anyway), so
now the only thing stopping us is the extra complexity.
In the coming commits, we'll move to tracking post-close
`ChannelMonitorUpdate`s as in-flight like any other updates, which
requires having an `update_id` at generation-time so that we know
what updates are still in-flight.
Thus, we go ahead and eat the complexity here, creating
`update_id`s when the `ChannelMonitorUpdate`s are generated for
closed-channel updates, like we do for channels which are still
live.
We also ensure that we always insert `ChannelMonitorUpdate`s in the
pending updates set when we push the background event, avoiding a
race where we push an update as a background event, then, while it's
being processed, another update finishes and the post-update actions
get run.
Here we add a test that disables a channel signer's ability
to return commitment points when they are first derived for a channel.
We also fit in a couple of cleanups: removing a comment referencing a
previous design with a `HolderCommitmentPoint::Uninitialized` variant,
as well as adding coverage for updating channel maps in async closing
signed.
Here we handle the case where our signer is pending the next commitment
point when we try to send `channel_ready`. We set a flag to remember to
send this message when our signer is unblocked. This follows the same
general pattern as everywhere else where we're waiting on a commitment
point from the signer in order to send a message.
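A simplified model of that flag pattern, with illustrative names and a raw 33-byte array standing in for a commitment point (not the actual channel code):

```rust
// Simplified model of the "set a flag, retry when the signer is unblocked"
// pattern; names and types are illustrative only.
struct ChannelState {
    signer_pending_channel_ready: bool,
}

impl ChannelState {
    // Try to send channel_ready; if the signer hasn't produced the next
    // commitment point yet, remember that we still owe the message.
    fn maybe_send_channel_ready(&mut self, next_point: Option<[u8; 33]>) -> Option<[u8; 33]> {
        match next_point {
            Some(point) => {
                self.signer_pending_channel_ready = false;
                Some(point)
            }
            None => {
                self.signer_pending_channel_ready = true;
                None
            }
        }
    }

    // Called once the async signer is unblocked: retry any deferred message.
    fn signer_unblocked(&mut self, next_point: Option<[u8; 33]>) -> Option<[u8; 33]> {
        if self.signer_pending_channel_ready {
            self.maybe_send_channel_ready(next_point)
        } else {
            None
        }
    }
}
```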
Similar to `open_channel`, if a signer cannot provide a commitment point
immediately, we set a flag to remember we're waiting for a point to send
`accept_channel`. We make sure to get the first two points before moving
on, so when we advance our commitment we always have a point available.
For all of our async signing logic in channel establishment v1, we set
signer flags in the method where we create the raw lightning message
object. To keep things consistent, this commit moves setting the signer
flags to where we create `funding_created`, since they were previously
being set elsewhere.
While we're doing this cleanup, we also slightly refactor our
`funding_signed` method to move some code out of an indent, as well
as remove a log to fix a nit from #3152.
In the event that a signer cannot provide a commitment point
immediately, we set a flag to remember we're waiting for this before we
can send `open_channel`. We make sure to get the first two commitment
points, so when we advance commitments, we always have a commitment
point available.
When initializing a context, we set the `signer_pending_open_channel`
flag to false, and leave setting this flag for where we attempt to
generate a message.
When checking to send messages when a signer is unblocked, we must
handle both when we haven't gotten any commitment point, as well as when
we've gotten the first but not the second point.
Following a previous commit adding `HolderCommitmentPoint` elsewhere, we
make the transition to use those commitment points and remove the
existing one.
We are choosing to move the `HolderCommitmentPoint` (the struct that
tracks commitment points retrieved from the signer + the commitment
number) to handle channel establishment, where we have no commitment
point at all. Previously we introduced this struct to track when we were
pending a commitment point (because of an async signer) during normal
channel operation, which assumed we always had a commitment point to
start out with.
Initially we tried to add an `Uninitialized` variant
that held no points, but that meant that we needed to handle that case
everywhere, which left a bunch of scary/unnecessary unwraps/expects.
Instead, we just hold an optional `HolderCommitmentPoint` struct
on our unfunded channels, and a non-optional `HolderCommitmentPoint`
for funded channels.
This commit starts that transition. A following commit will remove the
holder commitment point from the current `ChannelContext`.
This also makes small fixups to the comments on the
`HolderCommitmentPoint` variants.
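A simplified sketch of the shape described above, with illustrative types only (the actual LDK definitions carry more state):

```rust
// Simplified sketch; not the actual LDK definitions.
type CommitmentPoint = [u8; 33];

enum HolderCommitmentPoint {
    // We have the current point but are still waiting on the signer for the next.
    PendingNext { transaction_number: u64, current: CommitmentPoint },
    // Both the current and the next point are available.
    Available { transaction_number: u64, current: CommitmentPoint, next: CommitmentPoint },
}

struct UnfundedChannelContext {
    // May be None while the (async) signer has not yet returned the first point.
    holder_commitment_point: Option<HolderCommitmentPoint>,
}

struct FundedChannelContext {
    // Funded channels always have at least the current point available.
    holder_commitment_point: HolderCommitmentPoint,
}
```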
Now that we track the latest `ChannelMonitorUpdate::update_id` for
each closed channel in
`PeerState::closed_channel_monitor_update_ids`, we should always
have a `PeerState` entry for the channel counterparty any time we
go to claim an HTLC on a channel, even if it's closed.
Here we make this a hard assertion as we'll need to access that
`PeerState` in the coming commits to track in-flight updates
against closed channels.
UnknownPaymentContext is used when payment::ReceiveTlvs doesn't contain
a PaymentContext. This is only needed for a legacy BlindedPaymentPath.
Since these paths are short-lived, UnknownPaymentContext is no longer
needed. Remove it and require that payment::ReceiveTlvs always contains
a PaymentContext.
Any such path would fail authentication since the payment::ReceiveTlvs
would be missing an HMAC and Nonce, so this is a good time to remove
UnknownPaymentContext.
When receiving a payment over a BlindedPaymentPath, a PaymentContext is
included but was not authenticated. The previous commit adds an HMAC of
the payment::ReceiveTlvs (which contains the PaymentContext) and the
nonce used to create the HMAC. This commit verifies the authenticity
when parsing the InboundOnionPayload. This prevents a malicious actor
from forging it.
In order to authenticate a PaymentContext, an HMAC and Nonce must be
included along with it in payment::ReceiveTlvs. Compute the HMAC when
constructing a BlindedPaymentPath and include it in the recipient's
BlindedPaymentTlvs. Authentication will be added in an upcoming commit.
Now that NodeSigner::get_inbound_payment_key returns an ExpandedKey
instead of KeyMaterial, the latter is no longer needed. Remove
KeyMaterial and replace its uses with [u8; 32].
NodeSigner::get_inbound_payment_key_material returns KeyMaterial, which
is used for constructing an ExpandedKey. Change the trait to return an
ExpandedKey directly instead. This allows for direct access to the
ExpandedKey when a NodeSigner reference is available. Otherwise, it
would either need to be reconstructed or passed in separately.
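As a rough sketch of the API shape change, with hypothetical, simplified types (the real NodeSigner and ExpandedKey are richer):

```rust
// Rough sketch of the trait change; types are simplified for illustration.
struct ExpandedKey {
    // The real type holds several sub-keys derived from the 32-byte material.
}

impl ExpandedKey {
    fn new(key_material: [u8; 32]) -> Self {
        let _ = key_material; // real code derives sub-keys here
        ExpandedKey {}
    }
}

trait NodeSigner {
    // Previously: `fn get_inbound_payment_key_material(&self) -> KeyMaterial;`
    // with every caller rebuilding the ExpandedKey from the returned material.
    // Now the signer hands back the ExpandedKey directly.
    fn get_inbound_payment_key(&self) -> ExpandedKey;
}
```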
When receiving a PaymentContext from a blinded payment, the context must
be authenticated. Otherwise, the context can be forged and would appear
within a PaymentPurpose. Add functions for constructing and verifying an
HMAC for the ReceiveTlvs, which contains the PaymentContext.
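A minimal sketch of what such construct/verify helpers could look like, using HMAC-SHA256 from `bitcoin::hashes` with a hypothetical `Nonce` type and raw key bytes (the real LDK helpers and ReceiveTlvs serialization differ):

```rust
use bitcoin::hashes::{sha256, Hash, HashEngine, Hmac, HmacEngine};

// Hypothetical nonce type; the real Nonce and ReceiveTlvs serialization live in
// LDK's blinded-path code and differ from this sketch.
struct Nonce([u8; 16]);

// Compute an HMAC over the serialized payment context, keyed by the node's
// inbound payment key and bound to a per-path nonce.
fn hmac_for_payment_context(
    expanded_key: &[u8; 32], nonce: &Nonce, context_bytes: &[u8],
) -> Hmac<sha256::Hash> {
    let mut engine = HmacEngine::<sha256::Hash>::new(expanded_key);
    engine.input(&nonce.0);
    engine.input(context_bytes);
    Hmac::from_engine(engine)
}

// Verify the HMAC received alongside the context when the payment comes back.
fn verify_payment_context(
    expanded_key: &[u8; 32], nonce: &Nonce, context_bytes: &[u8], hmac: &Hmac<sha256::Hash>,
) -> Result<(), ()> {
    // Production code should compare in constant time.
    if &hmac_for_payment_context(expanded_key, nonce, context_bytes) == hmac {
        Ok(())
    } else {
        Err(())
    }
}
```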
This commit adds counterparty node IDs to `PaymentForwarded`
to address the potential ambiguity of using `ChannelIds` alone,
especially in cases like v1 0conf opens where `ChannelIds`
may not be unique. Including the counterparty node IDs
provides better clarity and makes the information more useful.
A claim transaction with locktime T can only be mined at block heights
of T+1 or above, so it should only be broadcast at height T or above.
Due to an off-by-one bug, we were broadcasting some claim transactions
too early at T-1.
AFAICT, nothing bad resulted from this bug -- later rebroadcasts of the
transaction would eventually succeed once the correct height was
reached.
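The corrected relationship between locktime and broadcast height, as a tiny illustrative check (hypothetical helper, not the actual LDK code):

```rust
// A transaction with locktime T is only valid in blocks at height T+1 or
// above, so broadcasting is safe once the current chain height reaches T.
fn should_broadcast_claim(claim_locktime: u32, current_height: u32) -> bool {
    // The off-by-one version effectively checked `current_height + 1 >= claim_locktime`,
    // broadcasting one block too early at T-1.
    current_height >= claim_locktime
}
```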
Following up on the previous commit, where we added debug_asserts
within `build_closing_transaction` to ensure neither
`value_to_holder` nor `value_to_counterparty` underflows, we now also
force-close the channel in the (presumably impossible) event that it
did happen.
When batch claiming was first added, it was only done so for claims
which were not pinnable, i.e. those which can only be claimed by us.
This was the conservative choice - pinning of outputs claimed by a batch
would leave the entire batch unable to confirm on-chain. However, if
pinning is considered an attack that can be executed with a high
probability of success, then there is no reason not to batch claims of
pinnable outputs together, separate from unpinnable outputs.
Whether specific outputs are pinnable can change over time - those that
are not pinnable will eventually become pinnable at the height at which
our counterparty can spend them. Outputs are treated as pinnable if
they're within `COUNTERPARTY_CLAIMABLE_WITHIN_BLOCKS_PINNABLE` of that
height.
Aside from outputs being pinnable or not, locktimes are also a factor
for batching claims. HTLC-timeout claims have locktimes fixed by the
counterparty's signature and thus can only be aggregated with other
HTLCs of the same CLTV, which we have to check for.
The complexity required here is worth it - aggregation can save users a
significant amount of fees in the case of a force-closure, and directly
impacts the number of UTXOs needed as a reserve for anchors.
Co-authored-by: Matt Corallo <git@bluematt.me>
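A rough sketch of the batching key this implies, with stand-in types and an assumed value for `COUNTERPARTY_CLAIMABLE_WITHIN_BLOCKS_PINNABLE` (illustrative only, not the actual aggregation logic):

```rust
use std::collections::HashMap;

// Stand-in claim type for illustration only.
struct PackageTemplate {
    // Height at which the counterparty can also spend this output.
    counterparty_spendable_height: u32,
    // Fixed CLTV for HTLC-timeout claims (locked by the counterparty's signature), if any.
    htlc_timeout_locktime: Option<u32>,
}

// Assumed value, purely for illustration.
const COUNTERPARTY_CLAIMABLE_WITHIN_BLOCKS_PINNABLE: u32 = 12;

// Group claims so pinnable and unpinnable outputs are never mixed, and
// HTLC-timeout claims are only batched with others sharing the same locktime.
fn batch_claims(claims: Vec<PackageTemplate>, current_height: u32) -> Vec<Vec<PackageTemplate>> {
    let mut batches: HashMap<(bool, Option<u32>), Vec<PackageTemplate>> = HashMap::new();
    for claim in claims {
        let pinnable = current_height + COUNTERPARTY_CLAIMABLE_WITHIN_BLOCKS_PINNABLE
            >= claim.counterparty_spendable_height;
        let locktime = claim.htlc_timeout_locktime;
        batches.entry((pinnable, locktime)).or_default().push(claim);
    }
    batches.into_values().collect()
}
```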
This allows us to make the PaymentSendFailure error type private, as well as
reduce the visibility of the vestigial send_payment_with_route method that was
already made test and fuzz-only in a previous commit.
Removes the final usage of PaymentSendFailure from public API.
This (confusing) error dates from prior versions of LDK, where users had to
handle payment retries themselves. Since auto-retry was introduced, the only
non-deprecated use remaining was for probe send errors. Probes only have
one path, though, so refactor ProbeSendFailure to omit usage of
PaymentSendFailure.
We don't make this error private yet because it's still used by some fuzzing
code as well as internally to outbound_payments, but it isn't returned by any
public functions anymore.