This is useful in bindings, as the `lightning::io` module is used in
the public interface, but it is also useful for users who want to
refer to the `io` module as used in lightning irrespective of the
feature flags.
`ChannelManager::fail_htlc_backwards`' bool return value is quite
confusing - just because it returns false doesn't mean the payment
wasn't (already) failed. Worse, in some race cases around shutdown
where a payment was claimed before an unclean shutdown and then
retried on startup, `fail_htlc_backwards` could return true even
though a duplicate copy of the same payment was claimed, with the
claim event not yet having been seen by the user.
While it's possible to use it correctly, it's somewhat confusing to
have a return value at all, and it definitely lends itself to misuse.
Instead, we should push users towards a model where they don't care
if `fail_htlc_backwards` succeeds - either they've locally marked
the payment as failed (prior to seeing any `PaymentReceived`
events) and will fail any attempts to pay it, or they have not and
the payment is still receivable until its timeout time is reached.
We can revisit this decision based on user feedback, but will need
to very carefully document the potential failure modes here if we
do.
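To make the suggested model concrete, here is a minimal user-side sketch (the storage type and helper names are hypothetical, not part of LDK): mark the payment failed locally first, then ask the `ChannelManager` to fail any pending HTLCs backwards without looking at a return value.

```rust
use std::collections::HashSet;

/// Hypothetical user-side bookkeeping for payments we no longer accept.
struct PaymentStore {
    locally_failed: HashSet<[u8; 32]>,
}

impl PaymentStore {
    /// Abandon an invoice: record the failure locally *before* processing any
    /// further `PaymentReceived` events, then fail pending HTLCs backwards
    /// without caring whether any were actually pending.
    fn abandon(&mut self, payment_hash: [u8; 32], fail_backwards: impl FnOnce(&[u8; 32])) {
        self.locally_failed.insert(payment_hash);
        fail_backwards(&payment_hash);
    }

    /// Whether an incoming payment for this hash should still be honored.
    fn still_receivable(&self, payment_hash: &[u8; 32]) -> bool {
        !self.locally_failed.contains(payment_hash)
    }
}
```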
As an additional sanity check, before claiming a payment, we check
that the full amount the payment should be for is available in
`claimable_htlcs`. Concretely, this prevents one
somewhat-absurd edge case where a user may receive an MPP payment,
wait many *blocks* before claiming it, allowing us to fail the
pending HTLCs and the sender to retry some subset of the payment
before we go to claim. More generally, this is just good
belt-and-suspenders against any edge cases we may have missed.
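A minimal sketch of the check (simplified types, not LDK's actual code): sum the HTLC parts we currently hold for the payment hash and refuse to claim unless they cover the expected amount.

```rust
struct ClaimableHTLC { value_msat: u64 }

/// Returns true only if the parts currently held for this payment add up to
/// (at least) the amount the payment should be for.
fn safe_to_claim(parts: &[ClaimableHTLC], expected_amt_msat: u64) -> bool {
    let total_msat: u64 = parts.iter().map(|htlc| htlc.value_msat).sum();
    // If we already failed some parts back and the sender has only re-sent a
    // subset so far, this stops us from claiming a partial payment.
    total_msat >= expected_amt_msat
}
```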
If we crashed during a payment claim and then detected a partial
claim on restart, we should ensure the user is aware that the
payment has been claimed. We do so here by using the new
partial-claim detection logic to create a `PaymentClaimed` event.
In a previous version of the 0-conf code we did not correctly
handle 0-conf channels having their funding transaction reorg'd out
(and the real SCID possibly changing on us).
This supports routing outbound over 0-conf channels by utilizing
the outbound SCID alias that we assign to all channels to refer to
the selected channel when routing.
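Conceptually (field names assumed for illustration), the router's view of our own channels falls back to the alias when no confirmed SCID exists yet:

```rust
/// Simplified view of one of our channels as handed to the router.
struct FirstHopChannel {
    /// Set once the funding transaction has confirmed and has a real SCID.
    short_channel_id: Option<u64>,
    /// Alias we assigned locally; always available, even for 0-conf channels.
    outbound_scid_alias: u64,
}

/// The SCID to use when referring to this channel in a route.
fn scid_for_outbound_routing(chan: &FirstHopChannel) -> u64 {
    chan.short_channel_id.unwrap_or(chan.outbound_scid_alias)
}
```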
If our peer sets a minimum depth of 0, and we're configured to
trust ourselves not to double-spend our own funding transactions,
send a `funding_locked` message immediately after `funding_signed`.
Note that some special care has to be taken around the
`channel_state` values - `ChannelFunded` no longer implies the
funding transaction is confirmed on-chain. Thus, for example, the
should-we-re-broadcast logic has to now accept `channel_state`
values greater than `ChannelFunded` as indicating we may still need
to re-broadcast our funding transaction, unless `minimum_depth` is
greater than 0.
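The two decisions above roughly reduce to the following (enum and field names are illustrative, not LDK's actual `channel_state` machinery):

```rust
#[derive(PartialEq, PartialOrd)]
enum ChannelStage { FundingCreated, FundingSigned, Funded, ShuttingDown }

struct ChannelView {
    stage: ChannelStage,
    /// Confirmation depth required before the channel is usable; 0 for 0-conf.
    minimum_depth: u32,
    /// Whether we trust ourselves not to double-spend our own funding txn.
    trust_own_funding_0conf: bool,
}

impl ChannelView {
    /// Send `funding_locked` right after `funding_signed` only in the 0-conf case.
    fn send_funding_locked_immediately(&self) -> bool {
        self.minimum_depth == 0 && self.trust_own_funding_0conf
    }

    /// With 0-conf, reaching `Funded` (or later) no longer implies the funding
    /// transaction confirmed, so it may still need to be re-broadcast.
    fn may_need_funding_rebroadcast(&self) -> bool {
        self.minimum_depth == 0 && self.stage >= ChannelStage::Funded
    }
}
```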
Further note that this starts writing `Channel` objects with a
`MIN_SERIALIZATION_VERSION` of 2. Thus, LDK versions prior to
0.0.99 (July 2021) will now refuse to read serialized
Channels/ChannelManagers.
In the next few commits we add support for 0conf channels, allowing
us to have an active channel with HTLC and other updates flying
prior to having an SCID available. This would break several
assumptions made in `ChannelManager`, which we address here by
looking at SCID aliases in addition to SCIDs.
Implements `build_route_from_hops`, which provides a simple way to build
a route from us (payer) to the target node (payee) via the given hops
(which should exclude the payer, but include the payee). This may be
useful, e.g., for probing the chosen path.
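To make the hop-list convention concrete, a small sketch (with a placeholder `NodeId` type; the real function takes LDK's own types and further routing parameters):

```rust
/// Placeholder for a node's public key in this sketch.
type NodeId = [u8; 33];

/// Build the `hops` argument for `build_route_from_hops`: the payer (us) is
/// excluded, intermediate nodes come first, and the payee is the final entry.
fn hops_for_probe(intermediates: &[NodeId], payee: NodeId) -> Vec<NodeId> {
    let mut hops = intermediates.to_vec();
    hops.push(payee);
    hops
}
```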
While the HTLC-claim process happens across all MPP parts under one
lock, this doesn't imply that they are claimed fully atomically on
disk. Ultimately, an application can crash after persisting one
`ChannelMonitorUpdate` out of multiple monitor updates needed for
the full claim.
Previously, this would leave us in a very bad state - because of
the all-channels-available check in `claim_funds` we'd refuse to
claim the payment again on restart (even though the
`PaymentReceived` event will be passed to the user again), and we'd
end up having partially claimed the payment!
The fix for the consistency part of this issue is pretty
straightforward - just check for this condition on startup and
complete the claim across all channels/`ChannelMonitor`s if we
detect it.
This still leaves us in a confused state from the perspective of
the user, however - we've actually claimed a payment but when they
call `claim_funds` we return `false` indicating it could not be
claimed.
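A rough sketch of the startup consistency pass (all names invented; the real logic operates on `ChannelMonitor`s and `ChannelManager` state): any preimage already known for one part of an MPP payment is replayed to every other channel holding a part of it.

```rust
use std::collections::{HashMap, HashSet};

type PaymentHash = [u8; 32];
type PaymentPreimage = [u8; 32];
type ChannelId = [u8; 32];

/// On startup, complete any claim that only made it to a subset of the
/// channels involved before we crashed.
fn complete_partial_claims(
    known_preimages: &HashMap<PaymentHash, PaymentPreimage>,
    parts_by_payment: &HashMap<PaymentHash, HashSet<ChannelId>>,
    mut apply_preimage: impl FnMut(ChannelId, PaymentPreimage),
) {
    for (payment_hash, preimage) in known_preimages {
        if let Some(channels) = parts_by_payment.get(payment_hash) {
            for channel_id in channels {
                // Re-applying a preimage a channel already knows is a no-op,
                // so doing this unconditionally on every startup is safe.
                apply_preimage(*channel_id, *preimage);
            }
        }
    }
}
```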
The `ChannelMonitor` had a field for the counterparty's
`cur_revocation_points`. Somewhat confusingly, this actually stored
the counterparty's *per-commitment* points, not the (derived)
revocation points.
Here we correct this by simply renaming the references as
appropriate. Note the update in `channel.rs` makes the variable
names align correctly.
In `HTLCUpdate` and `OnchainEvent` tracking, we store the HTLC
value (rounded down to whole satoshis). This is somewhat
confusingly referred to as the `onchain_value_satoshis` even though
it refers to the commitment transaction output value, not the value
available on chain (which may have been reduced by an
HTLC-Timeout/HTLC-Success transaction).
Several fields used in tracking on-chain HTLC outputs were
named `input_idx` despite referring to the output index in the
commitment transaction. Here they are all renamed
`commitment_tx_output_idx` for clarity.
For direct channels, the channel liquidity is known with certainty. Use
this knowledge in ProbabilisticScorer by penalizing with either the
per-hop penalty or `u64::max_value()`, depending on the amount.
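In pseudo-Rust (parameter names assumed), the first-hop case collapses to:

```rust
/// Penalty for a channel whose liquidity we know exactly (i.e. our own).
fn direct_channel_penalty_msat(amount_msat: u64, liquidity_msat: u64, base_penalty_msat: u64) -> u64 {
    if amount_msat > liquidity_msat {
        // The payment cannot possibly fit over this channel.
        u64::max_value()
    } else {
        // The payment certainly fits; only the flat per-hop penalty applies.
        base_penalty_msat
    }
}
```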
Scorers could benefit from having the channel's EffectiveCapacity rather
than a u64 msat value. For instance, ProbabilisticScorer can give a more
accurate penalty when given the ExactLiquidity variant. Pass a struct
wrapping the effective capacity, the proposed amount, and any in-flight
HTLC value.
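The struct is roughly shaped like this (names are an approximation of what the scorer is handed, not a verbatim copy of the API):

```rust
/// What we know about a channel's ability to carry a payment.
enum EffectiveCapacity {
    /// Our own channel, so the liquidity is known exactly.
    ExactLiquidity { liquidity_msat: u64 },
    /// Only the advertised maximum HTLC size is known.
    MaximumHTLC { amount_msat: u64 },
    /// The funding amount is known from the chain.
    Total { capacity_msat: u64 },
    /// No information available.
    Unknown,
}

/// Everything a scorer needs to judge a candidate channel, in one place.
struct ChannelUsage {
    /// The amount proposed to flow over the channel, including later hops' fees.
    amount_msat: u64,
    /// Amounts already committed to in-flight HTLCs over this channel.
    inflight_htlc_msat: u64,
    /// The best available knowledge of the channel's capacity.
    effective_capacity: EffectiveCapacity,
}
```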
For route hints, the aggregate next hops path penalty and CLTV delta
should be computed after considering each hop rather than before.
Otherwise, these aggregate values will include values from the current
hop, too.
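The ordering fix amounts to evaluating the current hop with only the aggregates of the hops after it, and folding its own values in afterwards, e.g. (simplified, invented types):

```rust
struct HintHop { penalty_msat: u64, cltv_expiry_delta: u32 }

/// Walk a route hint from the hop closest to the payee back towards the payer.
fn walk_route_hint(hops: &[HintHop]) {
    let mut aggregate_penalty_msat = 0u64;
    let mut aggregate_cltv_delta = 0u32;
    for hop in hops.iter().rev() {
        // Evaluate `hop` using only the aggregates of the hops *after* it on
        // the path, i.e. before its own contribution is added...
        let _ = (aggregate_penalty_msat, aggregate_cltv_delta);
        // ...then fold its values in for the next (earlier) hop's evaluation.
        aggregate_penalty_msat = aggregate_penalty_msat.saturating_add(hop.penalty_msat);
        aggregate_cltv_delta = aggregate_cltv_delta.saturating_add(hop.cltv_expiry_delta);
    }
}
```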
When scoring route hints, the amount passed to the scorer should include
any fees needed for subsequent hops. This worked correctly for single-
hop hints since there are no further hops, but not for multi-hop hints
(except for the final hop).
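A simplified sketch of the accumulation (basic BOLT fee formula, invented names): the amount handed to the scorer for an earlier hinted hop includes the fees of every hop after it.

```rust
struct HintedChannel { fee_base_msat: u64, fee_proportional_millionths: u64 }

/// For each hinted hop (ordered payer-to-payee), the amount the scorer should
/// see: the final amount plus fees charged by all subsequent hinted hops.
fn amounts_to_score(hops: &[HintedChannel], final_value_msat: u64) -> Vec<u64> {
    let mut amount_msat = final_value_msat;
    let mut amounts = vec![0u64; hops.len()];
    for (idx, hop) in hops.iter().enumerate().rev() {
        amounts[idx] = amount_msat;
        // The preceding hop must forward enough to also cover this hop's fee.
        amount_msat += hop.fee_base_msat
            + amount_msat * hop.fee_proportional_millionths / 1_000_000;
    }
    amounts
}
```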
Using EffectiveCapacity in scoring gives more accurate success
probabilities when the maximum HTLC value is less than the channel
capacity. Change EffectiveCapacity to prefer the channel's capacity
over its maximum HTLC limit, but still use the latter for route finding.
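Roughly (assumed names), when choosing the value used for success-probability estimation:

```rust
/// Capacity used for computing success probability in scoring. The max-HTLC
/// limit still constrains what the router will actually try to send.
fn capacity_for_scoring_msat(capacity_sat: Option<u64>, htlc_maximum_msat: Option<u64>) -> Option<u64> {
    match (capacity_sat, htlc_maximum_msat) {
        // Prefer the channel's full capacity when we know it.
        (Some(capacity_sat), _) => Some(capacity_sat * 1000),
        // Otherwise fall back to the advertised maximum HTLC value.
        (None, Some(max_msat)) => Some(max_msat),
        (None, None) => None,
    }
}
```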
In fc77c57c3c we stopped using the
`FinalOnionHopData` in `OnionPayload::Invoice` directly and intend
to remove it eventually. However, in the next few commits we need
access to the payment secret when claiming a payment, as we create
a new `PaymentPurpose` during the claim process for a new event.
In order to get access to a `PaymentPurpose` without having access
to the `FinalOnionHopData` we here change the storage of
`claimable_htlcs` to store a single `PaymentPurpose` explicitly
with each set of claimable HTLCs.
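The storage change is, in spirit (types heavily simplified):

```rust
use std::collections::HashMap;

type PaymentHash = [u8; 32];
struct ClaimableHTLC { value_msat: u64 }
enum PaymentPurpose {
    InvoicePayment { payment_preimage: Option<[u8; 32]>, payment_secret: [u8; 32] },
    SpontaneousPayment([u8; 32]),
}

// Before: the purpose had to be reconstructed from each HTLC's onion data.
type ClaimableHtlcsOld = HashMap<PaymentHash, Vec<ClaimableHTLC>>;
// After: one `PaymentPurpose` is stored explicitly alongside each set of HTLCs,
// so it is available at claim time without touching `FinalOnionHopData`.
type ClaimableHtlcsNew = HashMap<PaymentHash, (PaymentPurpose, Vec<ClaimableHTLC>)>;
```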
In fc77c57c3c we stopped using the
`FinalOnionHopData` in `OnionPayload::Invoice` directly and renamed
it `_legacy_hop_data` with the intent of removing it in a few
versions. However, we continue to check that it was included in the
serialized data, meaning we would not be able to remove it without
breaking the ability to serialize full `ChannelManager`s.
This fixes that by making `_legacy_hop_data` an `Option` which we
will happily handle just fine if it's `None`.
We have a bunch of fancy infrastructure to ensure we can connect
blocks using all our different connection interfaces, but we only
bother to use it in a few select tests.
This expands our use of `ConnectStyle` to most of our tests by
simply randomizing the style in each test. This makes our tests
non-deterministic, but we print the connection style at start so
that it's easy to reproduce a failure deterministically.
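The idea, in miniature (variant set abridged, randomness source simplified to keep the sketch dependency-free):

```rust
use std::time::{SystemTime, UNIX_EPOCH};

#[derive(Clone, Copy, Debug)]
enum ConnectStyle { BestBlockFirst, TransactionsFirst, FullBlockViaListen }

/// Pick a connection style at random for this test run, printing the choice so
/// a failure can be reproduced by hard-coding the printed style.
fn random_connect_style() -> ConnectStyle {
    let nanos = SystemTime::now().duration_since(UNIX_EPOCH).unwrap().subsec_nanos();
    let style = match nanos % 3 {
        0 => ConnectStyle::BestBlockFirst,
        1 => ConnectStyle::TransactionsFirst,
        _ => ConnectStyle::FullBlockViaListen,
    };
    println!("Using block connection style: {:?}", style);
    style
}
```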
In the next commit we'll randomize the `ConnectStyle` used in each
test. However, some tests are slightly too prescriptive, which we
address here in a few places.