Async payments include the original invoice request in the payment onion.
Since invreqs may include blinded paths, it's important to factor them
into our max path length calculations, as they may take up a significant
portion of the 1300-byte onion.
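For illustration only (the constant name and helper below are hypothetical,
not LDK's API), the budget math amounts to subtracting the serialized invreq
size from the fixed onion payload space:

```rust
/// Total size of the onion packet's hop payload area, per BOLT 4.
const ONION_HOP_DATA_LEN: usize = 1300;

/// Hypothetical helper: given the serialized length of the invoice request
/// (including any blinded paths it carries), return how many onion bytes are
/// left for encoding the route's hop payloads.
fn remaining_onion_budget(invreq_serialized_len: usize) -> usize {
    ONION_HOP_DATA_LEN.saturating_sub(invreq_serialized_len)
}

fn main() {
    // A large invreq (e.g. one carrying several blinded paths) leaves
    // noticeably less room for hops, shortening the maximum path length.
    assert_eq!(remaining_onion_budget(400), 900);
    assert_eq!(remaining_onion_budget(2000), 0);
}
```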
While in the last commit we began including invoice requests in async payment
onions on initial send, further work is needed to include them on retry. Here
we begin storing invreqs in our retry data, and pass them along for inclusion
in the onion on payment retry.
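A minimal sketch of the idea, with illustrative (non-LDK) types: the retry
state keeps the original invreq around so it can be re-serialized into the
onion on every retry attempt, not just the initial send.

```rust
// Illustrative stand-ins, not LDK's actual types.
struct InvoiceRequest {
    serialized: Vec<u8>,
}

struct AsyncPaymentRetryData {
    payment_hash: [u8; 32],
    total_msat: u64,
    // Stored alongside the rest of the retry data; `None` for payments that
    // are not async/static-invoice payments.
    invoice_request: Option<InvoiceRequest>,
}

fn build_retry_onion(retry: &AsyncPaymentRetryData) -> Vec<u8> {
    let mut onion_payload = Vec::new();
    // ... regular hop data would be encoded here ...
    if let Some(invreq) = &retry.invoice_request {
        // Re-include the invreq so the recipient can verify the payment
        // against it, exactly as on the initial send.
        onion_payload.extend_from_slice(&invreq.serialized);
    }
    onion_payload
}
```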
Per BOLTs PR 1149, when paying a static invoice we need to include our original
invoice request in the HTLC onion since the recipient wouldn't have received it
previously.
Past commits have set us up to include invoice requests in outbound async
payment onions. Here we actually pull the invoice request from where it's
stored in outbound_payments and pass it into the correct utility for inclusion
in the onion on initial send.
Per BOLTs PR 1149, when paying a static invoice we need to include our original
invoice request in the HTLC onion since the recipient wouldn't have received it
previously.
When transitioning outbound payments from AwaitingInvoice to
StaticInvoiceReceived, include the invreq in the new state's outbound payment
storage for future inclusion in an async payment onion.
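A sketch of the transition, with hypothetical variant fields rather than
LDK's actual `PendingOutboundPayment` definition:

```rust
// Illustrative stand-ins only.
struct InvoiceRequest;
struct Route;

enum OutboundPaymentState {
    AwaitingInvoice {
        invoice_request: InvoiceRequest,
    },
    StaticInvoiceReceived {
        // Carried over from AwaitingInvoice so it can later be written into
        // the async payment onion.
        invoice_request: InvoiceRequest,
        route: Route,
    },
}

fn on_static_invoice(state: OutboundPaymentState, route: Route) -> OutboundPaymentState {
    match state {
        OutboundPaymentState::AwaitingInvoice { invoice_request } => {
            OutboundPaymentState::StaticInvoiceReceived { invoice_request, route }
        }
        other => other, // unexpected invoice; leave the state untouched
    }
}
```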
Per BOLTs PR 1149, when paying a static invoice we need to include our original
invoice request in the HTLC onion since the recipient wouldn't have received it
previously.
Add a new invoice request parameter to outbound_payments and channelmanager
send-to-route internal utils. As of this commit the invreq will always be
passed in as None, to be updated in future commits.
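The plumbing pattern, sketched with hypothetical names: the internal helper
grows an `Option<&InvoiceRequest>` parameter and all existing call sites
simply pass `None` for now.

```rust
// Illustrative stand-ins, not LDK's actual signatures.
struct InvoiceRequest;
struct Route;
struct OnionPacket;

fn send_payment_along_route(
    route: &Route,
    invoice_request: Option<&InvoiceRequest>,
) -> OnionPacket {
    let _ = (route, invoice_request);
    // ...would eventually forward `invoice_request` into onion construction...
    OnionPacket
}

fn main() {
    let route = Route;
    // Existing call sites keep their behavior by passing `None`.
    let _onion = send_payment_along_route(&route, None);
}
```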
Per BOLTs PR 1149, when paying a static invoice we need to include our original
invoice request in the HTLC onion since the recipient wouldn't have received it
previously.
Add a new invoice request parameter to onion_utils::create_payment_onion. As of
this commit it will always be passed in as None, to be updated in future
commits.
Per BOLTs PR 1149, when paying a static invoice we need to include our original
invoice request in the HTLC onion since the recipient wouldn't have received it
previously.
Add a new invoice request parameter to onion_utils::build_onion_payloads.
As of this commit it will always be passed in as None, to be updated in future
commits.
Per BOLTs PR 1149, when paying a static invoice we need to include our original
invoice request in the HTLC onion since the recipient wouldn't have received it
previously.
Per BOLTs PR 1149, when paying a static invoice we need to include our original
invoice request in the HTLC onion since the recipient wouldn't have received it
previously.
We use an experimental TLV type for this new onion payload field, since the
spec is still not merged in the BOLTs.
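For illustration, a self-contained sketch of encoding the invreq as a single
TLV record in the onion payload; the type number below is a placeholder, not
the value actually proposed in the spec or used by LDK:

```rust
// Placeholder type number, chosen only for the example.
const EXPERIMENTAL_INVREQ_TLV_TYPE: u64 = 0x1_0000_0001;

/// BigSize encoding per BOLT 1.
fn write_bigsize(out: &mut Vec<u8>, val: u64) {
    match val {
        0..=0xfc => out.push(val as u8),
        0xfd..=0xffff => {
            out.push(0xfd);
            out.extend_from_slice(&(val as u16).to_be_bytes());
        }
        0x1_0000..=0xffff_ffff => {
            out.push(0xfe);
            out.extend_from_slice(&(val as u32).to_be_bytes());
        }
        _ => {
            out.push(0xff);
            out.extend_from_slice(&val.to_be_bytes());
        }
    }
}

/// Appends a single type-length-value record.
fn write_tlv_record(out: &mut Vec<u8>, tlv_type: u64, value: &[u8]) {
    write_bigsize(out, tlv_type);
    write_bigsize(out, value.len() as u64);
    out.extend_from_slice(value);
}

fn main() {
    let invreq_bytes = vec![0u8; 64]; // stand-in for the serialized invreq
    let mut payload = Vec::new();
    write_tlv_record(&mut payload, EXPERIMENTAL_INVREQ_TLV_TYPE, &invreq_bytes);
    assert!(payload.len() > invreq_bytes.len());
}
```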
The method doesn't actually use its &self parameter, and removing it
makes it more obvious that we won't deadlock by calling the method while
the outbound_payments lock is already acquired.
"Release" is overloaded in the trait's release_pending_messages method, since
the latter releases pending async payments onion messages to the peer manager,
vs the release_held_htlc method handles the release_held_htlc onion message by
attempting to send an HTLC to the recipient.
RouteNotFound did not fit here because that error is reserved for failing to
find a route for a payment, whereas here we are failing to create a blinded
path back to ourselves.
When we first get a public channel confirmed at six blocks, we
broadcast a `channel_announcement` once and then move on. As long
as it makes it into our local network graph that should be okay, as
we should send peers our network graph contents as they seek to
sync. However, it's possible an ill-timed shutdown could cause this
to fail, and relying on peers to do a full historical sync from us
may delay `channel_announcement` propagation.
Instead, here, we re-broadcast our `channel_announcement`s every
six blocks for a week, which should be way more than robust enough
to get them properly across the P2P network.
Fixes #2418
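A back-of-the-envelope sketch of the schedule described above (not the
actual implementation), assuming a week is roughly 1008 blocks at 144
blocks per day:

```rust
const REBROADCAST_INTERVAL_BLOCKS: u32 = 6;
const REBROADCAST_WINDOW_BLOCKS: u32 = 1008; // ~1 week of blocks

/// Starting at the height where the channel reached six confirmations,
/// re-announce every six blocks for roughly one week.
fn should_rebroadcast_announcement(announced_at_height: u32, current_height: u32) -> bool {
    let Some(age) = current_height.checked_sub(announced_at_height) else { return false };
    age > 0 && age <= REBROADCAST_WINDOW_BLOCKS && age % REBROADCAST_INTERVAL_BLOCKS == 0
}

fn main() {
    assert!(should_rebroadcast_announcement(100_000, 100_006));
    assert!(!should_rebroadcast_announcement(100_000, 100_007));
    assert!(!should_rebroadcast_announcement(100_000, 101_014)); // past the week-long window
}
```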
Because the new startup `ChannelMonitor` persistence semantics rely
on new information stored in `ChannelMonitor` only for claims made
in the upgraded code, users upgrading from previous versions of LDK
must apply the old `ChannelMonitor` persistence semantics at least
once (as the old code will be used to handle partial claims).
When we discover we've only partially claimed an MPP HTLC during
`ChannelManager` reading, we need to add the payment preimage to
all other `ChannelMonitor`s that were a part of the payment.
We previously did this with a direct call on the `ChannelMonitor`,
requiring users write the full `ChannelMonitor` to disk to ensure
that updated information made it.
This adds quite a bit of delay during initial startup - fully
resilvering each `ChannelMonitor` just to handle this one case is
incredibly excessive.
Over the past few commits we dropped the need to pass HTLCs
directly to the `ChannelMonitor`s, using background events to
provide `ChannelMonitorUpdate`s instead.
Thus, here we finally drop the requirement to resilver
`ChannelMonitor`s on startup.
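An illustrative sketch of the shift, with hypothetical types mirroring the
idea rather than LDK's internal definitions: instead of handing the
preimage directly to each `ChannelMonitor` at startup, a background event
carrying a `ChannelMonitorUpdate` is queued and applied through the normal
monitor-update pipeline, so no full monitor rewrite is needed.

```rust
// Illustrative stand-ins only.
struct PaymentPreimage([u8; 32]);
struct ChannelId([u8; 32]);

enum MonitorUpdateStep {
    PaymentPreimage { preimage: PaymentPreimage },
}

struct ChannelMonitorUpdate {
    update_id: u64,
    steps: Vec<MonitorUpdateStep>,
}

enum BackgroundEvent {
    MonitorUpdateRegeneratedOnStartup {
        channel_id: ChannelId,
        update: ChannelMonitorUpdate,
    },
}

/// Queues a preimage-bearing update for the given channel rather than
/// mutating the monitor directly during deserialization.
fn queue_startup_preimage_update(
    pending: &mut Vec<BackgroundEvent>,
    channel_id: ChannelId,
    update_id: u64,
    preimage: PaymentPreimage,
) {
    pending.push(BackgroundEvent::MonitorUpdateRegeneratedOnStartup {
        channel_id,
        update: ChannelMonitorUpdate {
            update_id,
            steps: vec![MonitorUpdateStep::PaymentPreimage { preimage }],
        },
    });
}
```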
When we claim an MPP payment, then crash before persisting all the
relevant `ChannelMonitor`s, we rely on the payment data being
available in the `ChannelManager` on restart to re-claim any parts
that haven't yet been claimed. This is fine as long as the
`ChannelManager` was persisted before the `PaymentClaimable` event
was processed, which is generally the case in our
`lightning-background-processor`, but may not be in other cases or
in a somewhat rare race.
In order to fix this, we need to track where all the MPP parts of
a payment are in the `ChannelMonitor`, allowing us to re-claim any
missing pieces without reference to any `ChannelManager` data.
Further, in order to properly generate a `PaymentClaimed` event
against the re-started claim, we have to store various payment
metadata with the HTLC list as well.
Here we finally implement claiming using the new MPP part list and
metadata stored in `ChannelMonitor`s. In doing so, we use much more
of the existing HTLC-claiming pipeline in `ChannelManager`,
utilizing the on-startup background events flow as well as properly
re-applying the RAA-blockers to ensure preimages cannot be lost.
In the next commit we'll start using (much of) the normal HTLC
claim pipeline to replay payment claims on startup. In order to do
so, however, we have to properly handle cases where we get a
`DuplicateClaim` back from the channel for an inbound-payment HTLC.
Here we do so, handling the `MonitorUpdateCompletionAction` and
allowing an already-completed RAA blocker.
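A rough sketch of the idea, with hypothetical types: a `DuplicateClaim`
result still runs the completion action and tolerates an RAA blocker that
was already resolved in a previous run.

```rust
// Illustrative stand-ins only.
enum ClaimResult {
    Claimed,
    DuplicateClaim,
}

#[derive(PartialEq)]
enum RaaBlocker {
    AwaitingMonitorUpdate,
}

fn handle_inbound_claim(
    result: ClaimResult,
    raa_blockers: &mut Vec<RaaBlocker>,
    run_completion_action: impl FnOnce(),
) {
    match result {
        ClaimResult::Claimed => run_completion_action(),
        ClaimResult::DuplicateClaim => {
            // The claim already made it to disk in a previous run; the
            // completion action still needs to run so events and blockers
            // get cleaned up, and the blocker may already be gone.
            run_completion_action();
            raa_blockers.retain(|b| *b != RaaBlocker::AwaitingMonitorUpdate);
        }
    }
}
```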
Here we move the logic which transitions claimable payments from
`claimable_payments` to `pending_claiming_payments` into a new
utility function on `ClaimablePayments`. This will allow us to call
this new logic during `ChannelManager` deserialization in a few
commits.
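Roughly, the new utility looks like this (names approximate, not LDK's
exact signature):

```rust
use std::collections::HashMap;

type PaymentHash = [u8; 32];
struct ClaimablePayment;

struct ClaimablePayments {
    claimable_payments: HashMap<PaymentHash, ClaimablePayment>,
    pending_claiming_payments: HashMap<PaymentHash, ClaimablePayment>,
}

impl ClaimablePayments {
    /// Moves the payment into the pending-claim map, returning whether it
    /// was found and not already being claimed.
    fn begin_claiming_payment(&mut self, hash: PaymentHash) -> bool {
        if self.pending_claiming_payments.contains_key(&hash) {
            return false;
        }
        match self.claimable_payments.remove(&hash) {
            Some(payment) => {
                self.pending_claiming_payments.insert(hash, payment);
                true
            }
            None => false,
        }
    }
}
```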
In a coming commit we'll use the existing `ChannelManager` claim
flow to claim HTLCs which we found partially claimed on startup,
necessitating having a full `ChannelManager` when we go to do so.
Here we move the re-claim logic further down in the
`ChannelManager`-read logic so that the full `ChannelManager` is
available when it runs.
When we claim an MPP payment, then crash before persisting all the
relevant `ChannelMonitor`s, we rely on the payment data being
available in the `ChannelManager` on restart to re-claim any parts
that haven't yet been claimed. This is fine as long as the
`ChannelManager` was persisted before the `PaymentClaimable` event
was processed, which is generally the case in our
`lightning-background-processor`, but may not be in other cases or
in a somewhat rare race.
In order to fix this, we need to track where all the MPP parts of
a payment are in the `ChannelMonitor`, allowing us to re-claim any
missing pieces without reference to any `ChannelManager` data.
Further, in order to properly generate a `PaymentClaimed` event
against the re-started claim, we have to store various payment
metadata with the HTLC list as well.
Here we store the required MPP parts and metadata in
`ChannelMonitor`s and make them available to `ChannelManager` on
load.
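A hypothetical sketch of the shape of the persisted data; field and method
names here are illustrative stand-ins, not LDK's:

```rust
// Stand-in types.
struct ChannelId([u8; 32]);
struct PaymentHash([u8; 32]);

/// One HTLC of the MPP payment, identified by the channel it arrived on.
struct MppClaimPart {
    counterparty_node_id: [u8; 33],
    channel_id: ChannelId,
    htlc_id: u64,
    amount_msat: u64,
}

/// Everything needed to finish the claim and surface a `PaymentClaimed`
/// event after a restart, without consulting `ChannelManager` state.
struct PaymentClaimDetails {
    payment_hash: PaymentHash,
    mpp_parts: Vec<MppClaimPart>,
    amount_msat: u64,
    receiver_node_id: Option<[u8; 33]>,
}

/// Stand-in for the monitor: on load it exposes the claims it has recorded.
struct MonitorStub {
    pending_payment_claims: Vec<PaymentClaimDetails>,
}

impl MonitorStub {
    fn stored_payment_claims(&self) -> &[PaymentClaimDetails] {
        &self.pending_payment_claims
    }
}
```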
When we claim an MPP payment, then crash before persisting all the
relevant `ChannelMonitor`s, we rely on the payment data being
available in the `ChannelManager` on restart to re-claim any parts
that haven't yet been claimed. This is fine as long as the
`ChannelManager` was persisted before the `PaymentClaimable` event
was processed, which is generally the case in our
`lightning-background-processor`, but may not be in other cases or
in a somewhat rare race.
In order to fix this, we need to track where all the MPP parts of
a payment are in the `ChannelMonitor`, allowing us to re-claim any
missing pieces without reference to any `ChannelManager` data.
Further, in order to properly generate a `PaymentClaimed` event
against the re-started claim, we have to store various payment
metadata with the HTLC list as well.
Here we take the first step, building a list of MPP parts and
metadata in `ChannelManager` and passing it through to
`ChannelMonitor` in the `ChannelMonitorUpdate`s.
When we started tracking which channels had MPP parts claimed
durably on-disk in their `ChannelMonitor`, we did so with a tuple.
This was fine in that it was only ever accessed in two places, but
as we will start tracking it through to the `ChannelMonitor`s
themselves in the coming commit(s), it is useful to have it in a
struct instead.
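The refactor pattern, with illustrative field names:

```rust
struct ChannelId([u8; 32]);

// Before: a bare tuple accessed positionally.
type MppClaimedPartTuple = ([u8; 33], ChannelId, u64);

// After: the same data with names, so code paths that start threading it
// through to the `ChannelMonitor`s can't mix the fields up.
struct MppClaimedPart {
    counterparty_node_id: [u8; 33],
    channel_id: ChannelId,
    htlc_id: u64,
}

fn upgrade(part: MppClaimedPartTuple) -> MppClaimedPart {
    let (counterparty_node_id, channel_id, htlc_id) = part;
    MppClaimedPart { counterparty_node_id, channel_id, htlc_id }
}
```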
In aa09c33a17 we added a new secret
in `ChannelManager` with which to derive inbound `PaymentId`s. We
added read support for the new field, but forgot to add writing
support for it. Here we fix this oversight.
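A generic illustration of the failure mode (not LDK's serialization code):
a field that is read on deserialization but never written silently reverts
to its default after a persist/reload cycle.

```rust
#[derive(Debug, Default, PartialEq)]
struct Manager {
    inbound_payment_id_secret: Option<[u8; 32]>,
}

fn write(m: &Manager, out: &mut Vec<u8>) {
    // The fix: actually serialize the field so it survives a restart.
    match m.inbound_payment_id_secret {
        Some(secret) => {
            out.push(1);
            out.extend_from_slice(&secret);
        }
        None => out.push(0),
    }
}

fn read(input: &[u8]) -> Manager {
    let mut m = Manager::default();
    if input.len() >= 33 && input[0] == 1 {
        let mut secret = [0u8; 32];
        secret.copy_from_slice(&input[1..33]);
        m.inbound_payment_id_secret = Some(secret);
    }
    m
}

fn main() {
    let m = Manager { inbound_payment_id_secret: Some([42u8; 32]) };
    let mut buf = Vec::new();
    write(&m, &mut buf);
    // Before the fix the write side was missing, so a reload would have
    // produced `None` here.
    assert_eq!(read(&buf), m);
}
```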
`or_default` is generally less readable than writing out the value
being constructed, as `Default` is opaque while explicit constructors
generally are not. Thus, we ignore the clippy lint (ideally we could
invert it and ban the use of `Default` in the crate entirely, but
alas).
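As a generic illustration of the readability argument:

```rust
use std::collections::HashMap;

fn main() {
    let mut by_peer: HashMap<&'static str, Vec<u64>> = HashMap::new();

    // With `or_default()` the reader has to know (or look up) what
    // `Vec::default()` is before they know what's being inserted.
    by_peer.entry("alice").or_default().push(1);

    // Spelling the constructor out makes the inserted value obvious at the
    // call site, which is why the lint is ignored.
    by_peer.entry("bob").or_insert_with(Vec::new).push(2);

    assert_eq!(by_peer["alice"], vec![1]);
    assert_eq!(by_peer["bob"], vec![2]);
}
```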
These structs are meant for the `MonitorUpdatingPersister`
implementation, but some external persister implementations may still
want to reuse them, so we make them public.