Makes it easier to add new arguments without a ton of resulting test changes.
Useful for route blinding testing because we need to check for malformed HTLCs,
which is not currently supported by `reconnect_nodes`. It also makes it easier to
tell what is being checked in relevant tests.
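As an illustration of the pattern (all names here are hypothetical, not
the actual test-util API): expectations become struct fields with
defaults, so adding a new one no longer touches every call site.

```rust
// Illustrative only - the real test-util struct has more fields and
// different names.
struct ReconnectArgs {
    send_channel_ready: (bool, bool),
    pending_htlc_adds: (usize, usize),
    expect_malformed_htlcs: bool, // hypothetical: the route-blinding need
}

impl Default for ReconnectArgs {
    fn default() -> Self {
        Self {
            send_channel_ready: (false, false),
            pending_htlc_adds: (0, 0),
            expect_malformed_htlcs: false,
        }
    }
}

fn reconnect_nodes(args: ReconnectArgs) {
    // Drive the reconnection, asserting each configured expectation.
    let _ = args;
}

fn example_call_site() {
    // Only the expectations that matter to this test are spelled out.
    reconnect_nodes(ReconnectArgs {
        expect_malformed_htlcs: true,
        ..Default::default()
    });
}
```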
Given we build `InFlightHtlcs` per route-fetch call, there's no
reason to pass them out by reference rather than simply giving the
user the full object. This also allows them to tweak the in-flight
set before fetching a route.
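A minimal sketch of the resulting flow (signatures approximate; the
in-flight tweak itself is hypothetical and elided):

```rust
use bitcoin::secp256k1::PublicKey;
use lightning::ln::channelmanager::ChannelDetails;
use lightning::ln::msgs::LightningError;
use lightning::routing::router::{InFlightHtlcs, Route, RouteParameters, Router};

fn route_with_tweaked_inflight<R: Router>(
    router: &R,
    payer: &PublicKey,
    params: &RouteParameters,
    first_hops: Option<&[&ChannelDetails]>,
    inflight: InFlightHtlcs, // e.g. from ChannelManager::compute_inflight_htlcs()
) -> Result<Route, LightningError> {
    // Because we own the set, we could adjust it here before routing,
    // e.g. to account for an HTLC we are about to send on a known path
    // (hypothetical adjustment, elided in this sketch).
    router.find_route(payer, params, first_hops, inflight)
}
```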
Useful for penultimate hops in routes to take an extra fee, if for example they
opened a JIT channel to the payee and want them to help bear the channel open
cost.
This is one of a series of commits to make sure methods are moved by
chunks so they are easily reviewable in diffs. Unfortunately they are
not purely move-only as fields need to be updated for things to
compile, but these should be quite clear.
This commit also uses the `context` field where needed for compilation
and tests to pass due to the above change.
This commit refactors a significant portion of the receive validation in
`ChannelManager::process_pending_htlc_forwards` now that we repurpose
previous MPP validation logic to accommodate keysends. It also removes
a previous restriction on claiming, and adds tests for sending and
receiving MPP keysends.
Now that the `get_available_balances` min/max bounds are exact, we
can stop doing all the explicit checks in `send_htlc` entirely,
instead comparing against the `get_available_balances` bounds and
failing if the amount is out of those bounds.
This breaks support for sending amounts below the dust limit if
there is some amount of dust exposure remaining before we hit our
cap; however, we will no longer generate such routes anyway.
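The check then collapses to something of this shape (field names follow
LDK's `AvailableBalances`, but this is a sketch, not the exact
`send_htlc` code):

```rust
// Simplified stand-in for the subset of `AvailableBalances` used here.
struct AvailableBalances {
    next_outbound_htlc_minimum_msat: u64,
    next_outbound_htlc_limit_msat: u64,
}

fn check_htlc_amount(amount_msat: u64, balances: &AvailableBalances) -> Result<(), String> {
    // With exact bounds, one range comparison replaces the pile of
    // explicit per-constraint checks.
    if amount_msat < balances.next_outbound_htlc_minimum_msat
        || amount_msat > balances.next_outbound_htlc_limit_msat
    {
        return Err(format!(
            "Cannot send {} msat, bounds are [{}, {}]",
            amount_msat,
            balances.next_outbound_htlc_minimum_msat,
            balances.next_outbound_htlc_limit_msat,
        ));
    }
    Ok(())
}
```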
This was a fairly old introduction to the spec to allow nodes to indicate
to their peers which chains they are interested in (i.e. which chains
they will open channels and gossip for).
We don't do any of the handling of this message in this commit and leave
that to the very next commit, so the behaviour is effectively the same
(ignore networks preference).
When we generated a `ChannelMonitorUpdate` during `ChannelManager`
deserialization, we must ensure that it gets processed before any
other `ChannelMonitorUpdate`s. The obvious hook for this is when
taking the `total_consistency_lock`, which makes it unlikely we'll
regress by forgetting this.
Here we add that call in the `PersistenceNotifierGuard`, with a
test-only atomic bool to test that this criterion is met.
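Conceptually (a simplified, self-contained sketch; the real guard is on
`ChannelManager` and the flag there is test-only):

```rust
use core::sync::atomic::{AtomicBool, Ordering};

struct PersistenceNotifierGuard;

impl PersistenceNotifierGuard {
    fn notify_on_drop(
        background_events_processed: &AtomicBool,
        process_background_events: impl FnOnce(),
    ) -> Self {
        // Flush `ChannelMonitorUpdate`s generated at deserialization
        // *before* any other operation can generate its own updates...
        process_background_events();
        // ...and record that we did, so tests can assert the ordering.
        background_events_processed.store(true, Ordering::Release);
        PersistenceNotifierGuard
    }
}
```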
This PR aims to create a "stateless" scorer. Instead of passing
in fee params at construction-time, we want to parametrize the
scorer with an associated "parameter" type, which is then
passed to the router function itself, and allows passing
different parameters per route-finding call.
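A sketch of the pattern (trait and method shapes are illustrative; the
actual `Score` trait carries more per-call context, e.g. channel usage):

```rust
pub trait Score {
    /// Per-call parameters, e.g. fee penalty multipliers, instead of
    /// state baked in at construction time.
    type ScoreParams;

    fn channel_penalty_msat(
        &self,
        short_channel_id: u64,
        score_params: &Self::ScoreParams,
    ) -> u64;
}

// The router then threads the parameters through each scoring call:
fn cheapest_channel<S: Score>(
    scorer: &S,
    params: &S::ScoreParams,
    candidates: &[u64],
) -> Option<u64> {
    candidates
        .iter()
        .copied()
        .min_by_key(|scid| scorer.channel_penalty_msat(*scid, params))
}
```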
`rust-bitcoin v0.30.0` introduces concrete variants for data members of
block `Header`s. To avoid having to update these across every use, we
introduce new helpers to create dummy blocks and headers, such that the
update process is a bit more straightforward.
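For example, a helper along these lines (a sketch against rust-bitcoin
`v0.30.0`'s typed `Header` fields; the helpers actually added may differ
in shape):

```rust
use bitcoin::block::{Header, Version};
use bitcoin::hash_types::{BlockHash, TxMerkleNode};
use bitcoin::hashes::Hash;
use bitcoin::pow::CompactTarget;

// Builds a structurally-valid dummy header for tests; the typed
// `Version` and `CompactTarget` wrappers are what v0.30.0 introduced.
fn dummy_header(prev_blockhash: BlockHash, time: u32) -> Header {
    Header {
        version: Version::from_consensus(0x2000_0000),
        prev_blockhash,
        merkle_root: TxMerkleNode::all_zeros(),
        time,
        bits: CompactTarget::from_consensus(42),
        nonce: 42,
    }
}
```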
PaymentParameters already includes this value.
This sets us up to better support route blinding, since there is no known
`final_cltv_expiry_delta` when paying to a blinded route.
While these transactions were still valid, we incorrectly assumed that
they would propagate with a locktime of `current_height + 1`, when in
reality, only those with a locktime strictly lower than the next height
in the chain are allowed to enter the mempool.
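Concretely, for height-based locktimes the rule is
`locktime < next_height`; a tiny sanity-check sketch:

```rust
/// Height-based locktimes only (values below 500_000_000): a tx may
/// enter the mempool at `next_height` iff its locktime is strictly lower.
fn locktime_is_mempool_valid(locktime: u32, next_height: u32) -> bool {
    locktime < next_height
}

fn main() {
    let current_height = 100;
    // A locktime equal to the current height is fine...
    assert!(locktime_is_mempool_valid(current_height, current_height + 1));
    // ...but `current_height + 1` is only valid once the next block lands.
    assert!(!locktime_is_mempool_valid(current_height + 1, current_height + 1));
}
```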
Previously, our local signatures would always be deterministic, whether
we'd grind for low R value signatures or not. For peers supporting
SegWit, Bitcoin Core will generally use a transaction's witness-txid, as
opposed to its txid, to advertise transactions. Therefore, to ensure a
transaction has the best chance to propagate across node mempools in the
network, each of its broadcast attempts should have a unique/distinct
witness-txid, which we can achieve by introducing random nonce data when
generating local signatures, such that they are no longer deterministic.
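rust-secp256k1 exposes the needed primitive as
`sign_ecdsa_with_noncedata`; a minimal sketch (LDK's signer plumbs the
entropy through its own traits):

```rust
use bitcoin::secp256k1::ecdsa::Signature;
use bitcoin::secp256k1::{All, Message, Secp256k1, SecretKey};

fn sign_with_fresh_entropy(
    secp: &Secp256k1<All>,
    msg: &Message,
    key: &SecretKey,
    entropy: &[u8; 32], // fresh randomness per broadcast attempt
) -> Signature {
    // Mixing `entropy` into the RFC 6979 nonce yields a different
    // signature - and thus a different witness-txid - each time.
    secp.sign_ecdsa_with_noncedata(msg, key, entropy)
}
```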
Untractable packages are those which cannot have their fees updated once
signed, which is why they weren't retried. There's no harm in retrying
these packages by simply re-broadcasting them though, as the fee market
could have spontaneously spiked when we first broadcast them, leading to
our transactions not propagating throughout node mempools unless
broadcast manually.
Now that we guarantee `claim_payment` will always succeed we have
to let the user know what the deadline is. We still fail payments
if they haven't been claimed in time, which we now expose in
`PaymentClaimable`.
While most lightning nodes don't (currently) support providing a
payment secret or payment metadata for spontaneous payments,
there's no specific technical reason why we shouldn't support
sending those fields to a recipient.
Further, when we eventually move to allowing custom TLV entries in
the recipient's onion TLV stream, we'll want to support it for
spontaneous payments as well.
Here we simply add the new `RecipientOnionFields` struct as an
argument to the spontaneous payment send methods. We don't yet
plumb it through the payment sending logic, which will come when we
plumb the new struct through the sending logic to replace the
existing payment secret arguments.
This moves the public payment sending API from passing an explicit
`PaymentSecret` to a new `RecipientOnionFields` struct (which
currently only contains the `PaymentSecret`). This gives us
substantial additional flexibility as we look to add support for
`PaymentMetadata`, a new (well, year-or-two-old) BOLT11 invoice
extension that provides additional data sent to the recipient.
In the future, we should also add the ability to add custom TLV
entries in the `RecipientOnionFields` struct.
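For illustration, construction goes through helpers on the struct (a
sketch; `secret_only` and `spontaneous_empty` follow the new API, but
see the docs for the authoritative shape):

```rust
use lightning::ln::channelmanager::RecipientOnionFields;
use lightning::ln::PaymentSecret;

fn onion_fields_examples(payment_secret: PaymentSecret) {
    // Normal invoice payment: only a payment secret for now.
    let _invoice_onion = RecipientOnionFields::secret_only(payment_secret);

    // Spontaneous (keysend) payment: no secret or metadata.
    let _keysend_onion = RecipientOnionFields::spontaneous_empty();
}
```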
Previously, LDK would refuse to claim a payment if a channel on
which the payment was received had been closed between when the
HTLC was received and when we went to claim it. This makes sense in
the payment case - why pay an on-chain fee to claim the HTLC when
presumably the sender may retry later. Long ago it also reduced
total code in the claim pipeline.
However, this doesn't make sense if you're trying to do an atomic
swap or some other protocol that requires atomicity with some other
action - if your money got claimed elsewhere you need to be able to
claim the HTLC in lightning no matter what. Further, this is an
over-optimization - there should be a very, very low likelihood
that a channel closes between when we receive the last HTLC for a
payment and the user goes to claim the payment. Since we now have
code to handle this anyway we should allow it.
Fixes #2017.
This is largely motivated by some follow-up work for anchors that will
introduce an event handler for `BumpTransaction` events, which we can
now include in this new top-level `events` module.
This results in a new, potentially redundant, `ChannelMonitorUpdate`
that must be applied to `ChannelMonitor`s to broadcast the holder's
latest commitment transaction.
This is a behavior change for anchor channels since their commitments
may require additional fees to be attached through a child anchor
transaction. Recall that anchor transactions are only generated by the
event consumer after processing a `BumpTransactionEvent::ChannelClose`
event, which is yielded after applying a
`ChannelMonitorUpdateStep::ChannelForceClosed` monitor update. Assuming
the node operator is not watching the mempool to generate these anchor
transactions without LDK, an anchor channel which we had to fail when
deserializing our `ChannelManager` would have its commitment transaction
broadcast by itself, potentially exposing the node operator to loss of
funds if the commitment transaction's fee is not enough to be accepted
into the network's mempools.
When we receive an `update_fulfill_htlc` message, we immediately try
to "claim" the HTLC against the `HTLCSource`. If there is one, this
works great: we immediately generate a `ChannelMonitorUpdate` for
the corresponding inbound HTLC and persist that before we ever get
to processing our counterparty's `commitment_signed` and persisting
the corresponding `ChannelMonitorUpdate`.
However, if there isn't one (and this is the first successful HTLC
for a payment we sent), we immediately generate a `PaymentSent`
event and queue it up for the user. Then, a millisecond later, we
receive the `commitment_signed` from our peer, removing the HTLC
from the latest local commitment transaction as a side-effect of
the `ChannelMonitorUpdate` applied.
If the user has processed the `PaymentSent` event by that point,
great, we're done. However, if they have not, and we crash prior to
persisting the `ChannelManager`, on startup we get confused about
the state of the payment. We'll force-close the channel for being
stale, and see an HTLC which was removed and is no longer present
in the latest commitment transaction (which we're broadcasting).
Because we claim corresponding inbound HTLCs before updating a
`ChannelMonitor`, we assume such HTLCs have failed - attempting to
fail after having claimed should be a noop. However, in the
sent-payment case we now generate a `PaymentFailed` event for the
user, allowing an HTLC to complete without giving the user a
preimage.
Here we address this issue by storing the payment preimages for
claimed outbound HTLCs in the `ChannelMonitor`, in addition to the
existing inbound HTLC preimages already stored there. This allows
us to fix the specific issue described by checking for a preimage
and switching the type of event generated in response. In addition,
it reduces the risk of future confusion by ensuring we don't fail
HTLCs which were claimed but not fully committed to before a crash.
It does not, however, fully fix the issue here - because the
preimages are removed once the HTLC has been fully removed from all
available commitment transactions, we may still hit this issue if we
are substantially delayed in persisting the `ChannelManager` from the
time we receive the `update_fulfill_htlc` until after a full
commitment-signed dance completes. The full fix for this issue
is to delay the persistence of the `ChannelMonitorUpdate` until
after the `PaymentSent` event has been processed. This avoids the
issue entirely, ensuring we process the event before updating the
`ChannelMonitor`, the same as we ensure the upstream HTLC has been
claimed before updating the `ChannelMonitor` for forwarded
payments.
The full solution will be implemented in a later work, however this
change still makes sense at that point as well - if we were to
delay the initial `commitment_signed` `ChannelMonitorUpdate` until
after the `PaymentSent` event has been processed (which likely
requires a database update on the users' end), we'd hold our
`commitment_signed` + `revoke_and_ack` response for two DB writes
(i.e. `fsync()` calls), making our commitment transaction
processing a full `fsync` slower. By making this change first, we
can instead delay the `ChannelMonitorUpdate` from the
counterparty's final `revoke_and_ack` message until the event has
been processed, giving us a full network roundtrip to do so and
avoiding delaying our response as long as an `fsync` is faster than
a network roundtrip.
Our existing lockorder tests assume that a read lock on a thread
that is already holding the same read lock is totally fine. This
isn't at all true. The `std` `RwLock` behavior is
platform-dependent - on most platforms readers can starve writers
as readers will never block for a pending writer. However, on
platforms where this is not the case, one thread trying to take a
write lock may deadlock with another thread that already holds a
read lock and attempts to take it again.
Worse, our in-tree `FairRwLock` exhibits this behavior explicitly
on all platforms to avoid the starvation issue.
Thus, we shouldn't have any special handling for allowing recursive
read locks, so we simply remove it here.
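To make the failure mode concrete, a sketch that can deadlock on a
writer-priority lock (which `FairRwLock` is on every platform):

```rust
use std::sync::{Arc, RwLock};
use std::thread;
use std::time::Duration;

fn main() {
    let lock = Arc::new(RwLock::new(0u32));
    let lock2 = Arc::clone(&lock);

    let _first_read = lock.read().unwrap(); // this thread takes a read lock

    let writer = thread::spawn(move || {
        // Blocks until all readers are gone...
        let _write = lock2.write().unwrap();
    });

    // Give the writer time to queue behind our reader (illustrative).
    thread::sleep(Duration::from_millis(100));

    // ...and on platforms where a pending writer blocks new readers,
    // this recursive read blocks behind the writer: deadlock.
    let _second_read = lock.read().unwrap();
    writer.join().unwrap();
}
```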
fbc08477e8 purported to "move" the
`final_cltv_expiry_delta` field to `PaymentParameters` from
`RouteParameters`. However, for naive backwards-compatibility
reasons it left the existing one in place and only added a new,
redundant field in `PaymentParameters`.
It turns out there's really no reason for this - if we take a more
critical eye towards backwards compatibility we can figure out the
correct value in every `PaymentParameters` while deserializing.
We do this here - making `PaymentParameters` a `ReadableArgs`
taking a "default" `cltv_expiry_delta` when it goes to read. This
allows existing `RouteParameters` objects to pass the read
`final_cltv_expiry_delta` field in to be used if the new field
wasn't present.
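A sketch of the pattern with simplified signatures (LDK's real
`ReadableArgs` read goes through its TLV machinery):

```rust
use std::io::Read;

struct PaymentParametersSketch {
    final_cltv_expiry_delta: u32,
    // ...
}

impl PaymentParametersSketch {
    /// `ReadableArgs`-style read: the caller passes the legacy
    /// `RouteParameters::final_cltv_expiry_delta` as the default,
    /// used only when the new field is absent from the serialized data.
    fn read<R: Read>(
        _reader: &mut R,
        default_final_cltv_expiry_delta: u32,
    ) -> Result<Self, ()> {
        // TLV read of the optional new field, elided in this sketch:
        let read_field: Option<u32> = None;
        Ok(Self {
            final_cltv_expiry_delta: read_field
                .unwrap_or(default_final_cltv_expiry_delta),
        })
    }
}
```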
This field was previously useful in manual retries for users to know when all
paths of a payment have failed and it is safe to retry. Now that we support
automatic retries in ChannelManager and no longer support manual retries, the
field is no longer useful.
For backwards compat, we now always write false for this field. If we didn't do
this, previous versions would default this field's value to true, which can be
problematic because some clients have relied on the field to indicate when a
full payment retry is safe.
Prior to this, we returned `PaymentSendFailure` from auto-retry send payment
methods. This implied that we might return a `PartialFailure` from them, which
has never been the case. So it makes sense to rework the errors to be a better
fit for the methods.
We're taking error handling in a totally different direction now to make it
more asynchronous, see `send_payment_internal` for more information.