This finally completes the piping of the `payment_metadata` from
the BOLT11 invoice on the sending side, through the onion on both
the sending and receiving ends, to the user in the receive events.
When we receive an HTLC, we want to pass the `payment_metadata`
through to the `PaymentClaimable` event. This does most of the
internal refactoring required to do so - storing a
`RecipientOnionFields` in the inbound HTLC tracking structs,
including the `payment_metadata`.
In the future this struct will allow us to do MPP keysend receipts
(as it now stores an optional `payment_secret` for all inbound
payments) as well as custom TLV receipts (as the struct is
extensible to store additional fields and the internal API supports
filtering for fields which are consistent across HTLCs).
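A minimal sketch of that consistency filtering, with hypothetical names standing in for the internal structs: each inbound HTLC remembers the onion fields it arrived with, and when assembling an MPP payment we surface only the fields every part agrees on.

```rust
// Hypothetical stand-in for the internal per-HTLC onion-fields storage.
#[derive(Clone, PartialEq, Eq, Debug)]
struct RecipientFieldsSketch {
    payment_secret: Option<[u8; 32]>,
    payment_metadata: Option<Vec<u8>>,
}

impl RecipientFieldsSketch {
    /// Drop any field that differs between this HTLC's onion and another's,
    /// so the user only sees data the sender committed to on every part.
    fn retain_consistent(&mut self, other: &Self) {
        if self.payment_secret != other.payment_secret { self.payment_secret = None; }
        if self.payment_metadata != other.payment_metadata { self.payment_metadata = None; }
    }
}

fn main() {
    let mut first = RecipientFieldsSketch {
        payment_secret: Some([42; 32]),
        payment_metadata: Some(vec![1, 2, 3]),
    };
    let second = RecipientFieldsSketch {
        payment_secret: Some([42; 32]),
        payment_metadata: None, // this part omitted the metadata
    };
    first.retain_consistent(&second);
    assert_eq!(first.payment_secret, Some([42; 32])); // consistent, kept
    assert_eq!(first.payment_metadata, None);         // inconsistent, dropped
}
```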
If we receive an HTLC and are processing it as a potential MPP
part, we always `continue` in the per-HTLC loop whenever we call
the `fail_htlc` macro, thus it's nicer to actually do the
`continue` inside the macro rather than at the callsites.
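A toy illustration of the pattern (names are illustrative, not LDK internals): the macro itself performs the `continue`, so callsites cannot forget it.

```rust
// The macro records the failure and moves to the next HTLC itself; the
// `continue` expands in place inside the per-HTLC loop below.
macro_rules! fail_htlc {
    ($failed:expr, $htlc:expr) => {{
        $failed.push($htlc);
        continue;
    }};
}

fn main() {
    let htlcs = vec![1u64, 0, 3, 0, 5];
    let mut failed = Vec::new();
    let mut accepted = Vec::new();
    for htlc in htlcs {
        if htlc == 0 {
            fail_htlc!(failed, htlc); // no bare `continue` needed here
        }
        accepted.push(htlc);
    }
    assert_eq!(failed, vec![0, 0]);
    assert_eq!(accepted, vec![1, 3, 5]);
}
```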
If we add an entry to `claimable_payments` we have to ensure we
actually accept the HTLC we're considering, otherwise we'll end up
with an empty `claimable_payments` entry.
This adds the new `payment_metadata` to `RecipientOnionFields`,
passing the metadata from BOLT11 invoices through the send pipeline
and finally copying it into the onion when sending HTLCs.
This completes send-side support for the new payment metadata
feature.
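A hedged sketch of that send-side flow, with local stand-in types rather than the real structs: the metadata parsed from the invoice rides in the recipient-onion struct and is copied into the final hop's onion bytes unchanged.

```rust
// Local stand-ins for the recipient-onion struct and the relevant pieces of
// a parsed BOLT11 invoice; names are illustrative.
struct OnionFieldsSketch {
    payment_secret: Option<[u8; 32]>,
    payment_metadata: Option<Vec<u8>>,
}

struct InvoiceSketch {
    payment_secret: [u8; 32],
    payment_metadata: Option<Vec<u8>>,
}

fn onion_fields_from_invoice(invoice: &InvoiceSketch) -> OnionFieldsSketch {
    OnionFieldsSketch {
        payment_secret: Some(invoice.payment_secret),
        // Copied verbatim: the recipient must see exactly the bytes it chose
        // when it constructed the invoice.
        payment_metadata: invoice.payment_metadata.clone(),
    }
}

fn main() {
    let invoice = InvoiceSketch {
        payment_secret: [9; 32],
        payment_metadata: Some(vec![0xde, 0xad]),
    };
    let onion = onion_fields_from_invoice(&invoice);
    assert_eq!(onion.payment_secret, Some([9; 32]));
    assert_eq!(onion.payment_metadata, Some(vec![0xde, 0xad]));
}
```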
Now that we guarantee `claim_payment` will always succeed, we have
to let the user know what the deadline is. We still fail payments
if they haven't been claimed in time, and we now expose that
deadline in `PaymentClaimable`.
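A sketch of how a recipient might consume the new field; the event shape below is a local stand-in for `PaymentClaimable` (whose deadline is a block height), so check the actual release docs for exact names.

```rust
// Stand-in for the claimable-payment event with its deadline field.
struct ClaimableSketch {
    amount_msat: u64,
    claim_deadline: Option<u32>, // claim before this block height, if set
}

fn should_claim_now(ev: &ClaimableSketch, current_height: u32) -> bool {
    match ev.claim_deadline {
        // Leave a few blocks of margin rather than racing the deadline.
        Some(deadline) => current_height + 3 < deadline,
        None => true,
    }
}

fn main() {
    let ev = ClaimableSketch { amount_msat: 10_000, claim_deadline: Some(800_000) };
    assert!(ev.amount_msat > 0);
    assert!(should_claim_now(&ev, 799_000));
    assert!(!should_claim_now(&ev, 799_999)); // too close to the deadline
}
```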
There's no reason to hold a lock on `per_peer_state` while we're
claiming from a since-closed channel via a `ChannelMonitorUpdate`,
which we stop doing here.
This passes the new `RecipientOnionFields` through the internal
sending APIs, ensuring we have access to the full struct when we
go to construct the sending onion so that we can include any new
fields added there.
While most lightning nodes don't (currently) support providing a
payment secret or payment metadata for spontaneous payments,
there's no specific technical reason why we shouldn't support
sending those fields to a recipient.
Further, when we eventually move to allowing custom TLV entries in
the recipient's onion TLV stream, we'll want to support it for
spontaneous payments as well.
Here we simply add the new `RecipientOnionFields` struct as an
argument to the spontaneous payment send methods. We don't yet
plumb it through the payment sending logic; that will come when
the new struct replaces the existing payment-secret arguments
throughout the sending pipeline.
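A sketch of the new call shape. The constructor name mirrors the `spontaneous_empty()` convenience discussed for keysend, but the types are local stand-ins, not the real API.

```rust
// Stand-in for the recipient-onion struct passed to spontaneous sends.
#[derive(Default)]
struct OnionFieldsSketch {
    payment_secret: Option<[u8; 32]>,
    payment_metadata: Option<Vec<u8>>,
}

impl OnionFieldsSketch {
    // Keysend recipients today expect neither a secret nor metadata, so the
    // common case is simply an empty struct.
    fn spontaneous_empty() -> Self { Self::default() }
}

fn send_spontaneous_sketch(_preimage: [u8; 32], onion: OnionFieldsSketch) {
    // A real sender would serialize `onion` into the final hop's TLVs.
    assert!(onion.payment_secret.is_none() && onion.payment_metadata.is_none());
}

fn main() {
    send_spontaneous_sketch([7; 32], OnionFieldsSketch::spontaneous_empty());
}
```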
This moves the public payment sending API from passing an explicit
`PaymentSecret` to a new `RecipientOnionFields` struct (which
currently only contains the `PaymentSecret`). This gives us
substantial additional flexibility as we look to add support for
`PaymentMetadata`, a new (well, year-or-two-old) BOLT11 invoice
extension which provides additional data to be sent to the recipient.
In the future, we should also add the ability to add custom TLV
entries in the `RecipientOnionFields` struct.
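Call sites change roughly as follows; the `secret_only` constructor mirrors the convenience helper for the common BOLT11 case, sketched here with a stand-in type.

```rust
// Stand-in for the new argument type replacing a bare `PaymentSecret`.
struct OnionFieldsSketch {
    payment_secret: Option<[u8; 32]>,
}

impl OnionFieldsSketch {
    // The struct can later grow metadata and custom TLVs without
    // breaking the send API again.
    fn secret_only(payment_secret: [u8; 32]) -> Self {
        Self { payment_secret: Some(payment_secret) }
    }
}

fn main() {
    // Callers that previously passed the payment secret directly now wrap
    // it, leaving room for future fields:
    let onion = OnionFieldsSketch::secret_only([1; 32]);
    assert!(onion.payment_secret.is_some());
}
```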
Many of the fields in `HTLCSource::OutboundRoute` are used to
rebuild the pending-outbound-payment map on reload if the
`ChannelManager` was not serialized though `ChannelMonitor`(s)
were after an HTLC was sent. As of 0.0.114, however, such payments
are not retryable without allowing them to fail and doing a full,
fresh, send.
Thus, some of the fields can be safely removed - we only really
care about having enough information to provide the user a failure
event, not being able to retry.
Here we drop one such field - the `payment_secret`, making our
`ChannelMonitorUpdate`s another handful of bytes smaller.
Previously, LDK would refuse to claim a payment if a channel on
which the payment was received had been closed between when the
HTLC was received and when we went to claim it. This makes sense in
the payment case - why pay an on-chain fee to claim the HTLC when
presumably the sender may retry later. Long ago it also reduced
total code in the claim pipeline.
However, this doesn't make sense if you're trying to do an atomic
swap or some other protocol that requires atomicity with some other
action - if your money got claimed elsewhere you need to be able to
claim the HTLC in lightning no matter what. Further, this is an
over-optimization - there should be a very, very low likelihood
that a channel closes between when we receive the last HTLC for a
payment and when the user goes to claim it. Since we now have
code to handle this anyway we should allow it.
Fixes #2017.
Currently, users don't have good way of being notified when channel open
negotiations have succeeded and new channels are pending confirmation on
chain. To this end, we add a new `ChannelPending` event that is emitted
when we send or receive a `funding_signed` message, i.e., at the
last moment before we begin waiting for confirmations.
We track whether the event has previously been emitted in `Channel`
and remove it from `internal_funding_created` entirely. Hence, we
now only emit the event after `ChannelMonitorUpdate` completion, or
upon channel reestablishment. This mitigates a race condition where
we would neither persist the event nor regenerate it on restart,
therefore potentially losing it, if an async `ChannelMonitorUpdate`
did not complete before the `ChannelManager` was persisted.
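A sketch of consuming the new event; field names loosely mirror `ChannelPending` but are reproduced from memory, so treat them as illustrative.

```rust
// Local stand-in for the event enum; only the new variant is fleshed out.
enum EventSketch {
    ChannelPending {
        channel_id: [u8; 32],
        funding_txid: [u8; 32],
    },
    Other,
}

fn handle_event(event: EventSketch) {
    match event {
        EventSketch::ChannelPending { channel_id, funding_txid } => {
            // Negotiation succeeded: the funding transaction is signed and
            // we're now waiting for it to confirm before `ChannelReady`.
            println!("channel {:02x?} pending in funding tx {:02x?}",
                &channel_id[..4], &funding_txid[..4]);
        }
        EventSketch::Other => {}
    }
}

fn main() {
    handle_event(EventSketch::ChannelPending {
        channel_id: [1; 32],
        funding_txid: [2; 32],
    });
}
```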
If the `ChainMonitor` gets an async monitor update completion, this
means the `ChannelManager` needs to be polled for event processing.
Here we wake it using the new multi-`Future`-await `Sleeper`, or
the existing `select` block in the async BP.
Fixes #2052.
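For the async background processor, the wiring amounts to selecting over both wake sources. A sketch assuming a tokio runtime (an assumption; the real background processor is runtime-agnostic), with the two parameters as illustrative stand-ins for the `ChannelManager` and `ChainMonitor` wake futures:

```rust
// Wake when either the ChannelManager needs persisting/event processing or
// a ChainMonitor async monitor update completes.
async fn wait_for_wake(
    chan_manager_fut: impl std::future::Future<Output = ()>,
    chain_monitor_fut: impl std::future::Future<Output = ()>,
) {
    tokio::select! {
        _ = chan_manager_fut => {},
        _ = chain_monitor_fut => {},
    }
    // ...process events, then loop and re-arm fresh futures...
}
```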
In `no-std`, we exposed `wait` functions which rely on a dummy
`Condvar` which never actually sleeps. This is somewhat nonsensical,
not to mention confusing to users. Instead, we simply remove the
`wait` methods in `no-std` builds.
Rather than having three ways to await a `ChannelManager` being
persistable, this moves to just exposing the awaitable `Future` and
having sleep functions on that.
In the next commits we'll be adding a second notify pipeline - from
the `ChainMonitor` back to the background processor. This will
cause the `background-processor` to need to await multiple wakers
at once, which we cannot do in the current scheme without taking on
a full async runtime.
Building a multi-future waiter isn't so bad, and notably will allow
us to remove the existing sleep pipeline entirely, reducing the
complexity of our wakers implementation by only having one notify
path to consider.
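A toy version of that single notify path, using a shared flag and `Condvar` in place of LDK's actual `Future`/`Sleeper` machinery: any number of wake sources can share one sleeper without pulling in an async runtime.

```rust
use std::sync::{Arc, Condvar, Mutex};

// One wake source; several of these can point at the same sleeper state.
struct NotifierSketch {
    woken: Arc<(Mutex<bool>, Condvar)>,
}

impl NotifierSketch {
    fn notify(&self) {
        let (lock, cv) = &*self.woken;
        *lock.lock().unwrap() = true;
        cv.notify_all();
    }
}

struct SleeperSketch {
    woken: Arc<(Mutex<bool>, Condvar)>,
}

impl SleeperSketch {
    // One shared flag serves multiple wake sources, which is what lets the
    // background processor await the ChannelManager *and* the ChainMonitor.
    fn new() -> (Self, NotifierSketch, NotifierSketch) {
        let woken = Arc::new((Mutex::new(false), Condvar::new()));
        (
            SleeperSketch { woken: woken.clone() },
            NotifierSketch { woken: woken.clone() },
            NotifierSketch { woken },
        )
    }

    fn wait(&self) {
        let (lock, cv) = &*self.woken;
        let mut flag = lock.lock().unwrap();
        while !*flag {
            flag = cv.wait(flag).unwrap();
        }
        *flag = false; // reset so the next wait sleeps again
    }
}

fn main() {
    let (sleeper, chan_manager_src, _chain_monitor_src) = SleeperSketch::new();
    std::thread::spawn(move || chan_manager_src.notify());
    sleeper.wait(); // returns once either source notifies
}
```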
`Send` is rather useless on a `no-std` target - we don't have
threads and are just needlessly restricting ourselves, so here we
skip it for the wakers callback.
These are useful, but we previously couldn't use them due to our
MSRV. Now that we can, we should use them, so we expose them via
our normal debug_sync wrappers.
As long as the lock order on such locks is still valid, we should allow
them regardless of whether they were constructed at the same location or
not. Note that we can only really enforce this if we require one lock
call per line, or if we have access to symbol columns (as we do on Linux
and macOS). We opt for a smaller patch by relying on the latter.
This was previously triggered by some recent test changes to
`test_manager_serialize_deserialize_inconsistent_monitor`. When the
test ends and the nodes are dropped, causing us to persist each of
them, we'd detect
a possible lockorder violation deadlock across three different `Mutex`
instances that are held at the same location when serializing our
`per_peer_states` in `ChannelManager::write`.
The presumed lockorder violation happens because the first `Mutex` held
shares the same construction location with the third one, while the
second `Mutex` has a different construction location. When we hold the
second one, we consider the first as a dependency, and then consider the
second as a dependency when holding the third, causing a circular
dependency (since the third shares the same construction location as the
first).
As the inline comment suggests, though, this isn't considered a
lockorder violation that could result in a deadlock, since we are
under a dependent write lock which no one else can have access to.
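A toy of the false-positive scenario (illustrative only): `Mutex`es constructed in a loop share one construction site, so a checker keyed purely on construction sites sees site-A -> site-B -> site-A and reports a cycle, even though holding all three guards at once here is perfectly safe.

```rust
use std::sync::Mutex;

fn main() {
    // Both per-peer Mutexes are created on the same line: one construction
    // site ("site A"), analogous to the per-peer-state locks.
    let per_peer: Vec<Mutex<u32>> = (0..2).map(Mutex::new).collect();
    let other = Mutex::new(99); // a different construction site ("site B")

    let _first = per_peer[0].lock().unwrap(); // holds site A
    let _second = other.lock().unwrap();      // site B: records A -> B
    let _third = per_peer[1].lock().unwrap(); // site A: records B -> A, an
                                              // apparent (but false) cycle
}
```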
If routing nodes take less fees and pay the final node more than
`amt_to_forward`, the receiver may see that `total_msat` has been met
before all of the sender's intended HTLCs have arrived. The receiver
may then prematurely claim the payment and release the payment hash,
allowing routing nodes to claim the remaining HTLCs. Using the onion
value `amt_to_forward` to determine when `total_msat` has been met
allows the sender to control the set total.
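A worked example of the hazard and the fix, with illustrative values: the sender intends three 40k-msat parts (`total_msat` = 120k), and a malicious routing node overpays the first two HTLCs at 60k each.

```rust
// One received MPP part: what arrived on the HTLC vs. what the sender
// committed to in the onion.
struct PartSketch {
    htlc_value_msat: u64,
    amt_to_forward_msat: u64,
}

fn main() {
    let total_msat: u64 = 120_000;
    let received = [
        PartSketch { htlc_value_msat: 60_000, amt_to_forward_msat: 40_000 },
        PartSketch { htlc_value_msat: 60_000, amt_to_forward_msat: 40_000 },
    ];

    // Buggy check: HTLC values already sum to 120k, so the receiver would
    // claim now, release the preimage, and let the routing node pocket the
    // sender's third 40k HTLC.
    let htlc_sum: u64 = received.iter().map(|p| p.htlc_value_msat).sum();
    assert!(htlc_sum >= total_msat);

    // Fixed check: onion amounts sum to only 80k, so we keep waiting for
    // the sender's final part before claiming.
    let onion_sum: u64 = received.iter().map(|p| p.amt_to_forward_msat).sum();
    assert!(onion_sum < total_msat);
}
```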
Final nodes previously had stricter requirements on HTLC contents
matching onion value compared to intermediate nodes. This allowed
for probing, i.e. the last intermediate node could overshoot the
value by a small amount and conclude from the acceptance or rejection
of the HTLC whether the next node was the destination. This also
applies to the msat amount; however, that change was already present.
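A generic sketch of the relaxed comparison, shown for the msat amount the text names (the same shape applies to the other onion-vs-HTLC checks this change relaxes):

```rust
// Final hops now tolerate (small) overshoot rather than requiring exact
// equality, matching intermediate-node behavior and closing the probe.
fn final_hop_value_ok(htlc_value_msat: u64, onion_value_msat: u64) -> bool {
    // Previously (final node only): htlc_value_msat == onion_value_msat,
    // which leaked whether the next hop was the destination when the last
    // intermediate node overshot by a tiny amount.
    htlc_value_msat >= onion_value_msat
}

fn main() {
    assert!(final_hop_value_ok(100_001, 100_000)); // overshoot accepted
    assert!(!final_hop_value_ok(99_999, 100_000)); // undershoot still fails
}
```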