P2PGossipSync logs before delegating to NetworkGraph in its
EventHandler. In order to share this handling with RapidGossipSync,
NetworkGraph needs to take a logger so that it can implement
EventHandler instead.
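A rough sketch of the resulting shape, using simplified, hypothetical
types rather than the actual LDK definitions:

```rust
use std::ops::Deref;

pub trait Logger {
    fn log(&self, msg: &str);
}

// Hypothetical stand-in for the event the graph cares about.
pub enum Event {
    PaymentPathFailed { short_channel_id: Option<u64> },
}

pub trait EventHandler {
    fn handle_event(&self, event: &Event);
}

// The graph now owns (a reference to) a logger...
pub struct NetworkGraph<L: Deref> where L::Target: Logger {
    logger: L,
}

// ...so the logging previously done in P2PGossipSync's EventHandler can move
// into an implementation on the graph itself, shared with RapidGossipSync.
impl<L: Deref> EventHandler for NetworkGraph<L> where L::Target: Logger {
    fn handle_event(&self, event: &Event) {
        if let Event::PaymentPathFailed { short_channel_id: Some(scid) } = event {
            self.logger.log(&format!("Marking channel {} as failed", scid));
            // ...update the graph accordingly...
        }
    }
}
```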
`ChannelManager::fail_htlc_backwards`' bool return value is quite
confusing - just because it returns false doesn't mean the payment
wasn't (already) failed. Worse, in some race cases around shutdown
where a payment was claimed before an unclean shutdown and then
retried on startup, `fail_htlc_backwards` could return true even
though the payment (or a duplicate copy of it) had already been
claimed, just with the claim event not yet seen by the user.
While it's possible to use it correctly, it's somewhat confusing to
have a return value at all, and it definitely lends itself to misuse.
Instead, we should push users towards a model where they don't care
if `fail_htlc_backwards` succeeds - either they've locally marked
the payment as failed (prior to seeing any `PaymentReceived`
events) and will fail any attempts to pay it, or they have not and
the payment is still receivable until its timeout time is reached.
We can revisit this decision based on user feedback, but will need
to very carefully document the potential failure modes here if we
do.
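A sketch of the model we want to push users towards, using
hypothetical application-side types and with the `ChannelManager`
call abstracted as a callback:

```rust
use std::collections::HashSet;

#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct PaymentHash([u8; 32]);

/// Hypothetical application-side bookkeeping for invoices we've given up on.
struct InvoiceStore {
    abandoned: HashSet<PaymentHash>,
}

impl InvoiceStore {
    /// Stop wanting to be paid for `hash`. We never look at whether anything
    /// was actually failed - either HTLCs were pending and get failed back
    /// now, or the local marker below makes us reject any future receipt.
    fn abandon_invoice(&mut self, hash: PaymentHash,
        fail_htlc_backwards: impl Fn(&PaymentHash)) {
        self.abandoned.insert(hash);
        fail_htlc_backwards(&hash);
    }

    /// On a `PaymentReceived`-style event: only claim if we have not already
    /// given up on this payment locally.
    fn should_claim(&self, hash: &PaymentHash) -> bool {
        !self.abandoned.contains(hash)
    }
}
```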
As an additional sanity check, before claiming a payment we verify
that the full amount the payment is supposed to be for is available
in `claimable_htlcs`. Concretely, this prevents one
somewhat-absurd edge case where a user may receive an MPP payment,
wait many *blocks* before claiming it, allowing us to fail the
pending HTLCs and the sender to retry some subset of the payment
before we go to claim. More generally, this is just good
belt-and-suspenders against any edge cases we may have missed.
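A minimal sketch of the check, with hypothetical names:

```rust
// Only claim if the HTLCs we are actually holding cover the amount the
// payment is supposed to be for.
struct ClaimableHTLC { value_msat: u64 }

fn have_full_payment(htlcs: &[ClaimableHTLC], expected_total_msat: u64) -> bool {
    let available_msat: u64 = htlcs.iter().map(|h| h.value_msat).sum();
    // If we failed some MPP parts back and the sender only re-sent a subset,
    // `available_msat` comes up short and we refuse to claim.
    available_msat >= expected_total_msat
}
```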
If we crashed during a payment claim and then detected a partial
claim on restart, we should ensure the user is aware that the
payment has been claimed. We do so here by using the new
partial-claim detection logic to create a `PaymentClaimed` event.
This supports routing outbound over 0-conf channels by utilizing
the outbound SCID alias that we assign to all channels to refer to
the selected channel when routing.
If our peer sets a minimum depth of 0, and we're configured to trust
ourselves not to double-spend our own funding transactions, send a
`funding_locked` message immediately after `funding_signed`.
Note that some special care has to be taken around the
`channel_state` values - `ChannelFunded` no longer implies the
funding transaction is confirmed on-chain. Thus, for example, the
should-we-re-broadcast logic has to now accept `channel_state`
values greater than `ChannelFunded` as indicating we may still need
to re-broadcast our funding transaction, unless `minimum_depth` is
greater than 0.
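A sketch of that decision, using simplified hypothetical stages
rather than the real `channel_state` values:

```rust
#[derive(PartialEq, PartialOrd)]
enum ChannelStage { FundingSent, ChannelFunded, ShutdownInitiated }

fn may_need_funding_rebroadcast(stage: ChannelStage, minimum_depth: u32,
    funding_confirmed: bool) -> bool {
    if funding_confirmed {
        return false;
    }
    if minimum_depth == 0 {
        // 0-conf: the channel may be well past ChannelFunded while the
        // funding transaction is still unconfirmed, so keep re-broadcasting.
        true
    } else {
        // Non-zero depth: reaching ChannelFunded implied confirmation, so
        // only earlier stages can still need a re-broadcast.
        stage < ChannelStage::ChannelFunded
    }
}
```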
Further note that this starts writing `Channel` objects with a
`MIN_SERIALIZATION_VERSION` of 2. Thus, LDK versions prior to
0.0.99 (July 2021) will now refuse to read serialized
Channels/ChannelManagers.
In the next few commits we add support for 0conf channels, allowing
us to have an active channel with HTLC and other updates flying
prior to having an SCID available. This would break several
assumptions made in `ChannelManager`, which we address here by
looking at SCID aliases in addition to SCIDs.
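As a sketch of that lookup, with a hypothetical map type:

```rust
use std::collections::HashMap;

#[derive(Clone, Copy)]
struct ChannelId([u8; 32]);

struct ChannelIndex {
    /// Keyed by both real SCIDs (once known) and outbound SCID aliases.
    short_to_chan_info: HashMap<u64, ChannelId>,
}

impl ChannelIndex {
    fn insert_channel(&mut self, chan_id: ChannelId, outbound_scid_alias: u64,
        real_scid: Option<u64>) {
        // The alias always exists; the real SCID only exists once the funding
        // transaction has confirmed.
        self.short_to_chan_info.insert(outbound_scid_alias, chan_id);
        if let Some(scid) = real_scid {
            self.short_to_chan_info.insert(scid, chan_id);
        }
    }

    fn channel_for(&self, scid_or_alias: u64) -> Option<&ChannelId> {
        self.short_to_chan_info.get(&scid_or_alias)
    }
}
```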
While the HTLC-claim process happens across all MPP parts under one
lock, this doesn't imply that they are claimed fully atomically on
disk. Ultimately, an application can crash after persisting one
`ChannelMonitorUpdate` out of multiple monitor updates needed for
the full claim.
Previously, this would leave us in a very bad state - because of
the all-channels-available check in `claim_funds` we'd refuse to
claim the payment again on restart (even though the
`PaymentReceived` event will be passed to the user again), and we'd
end up having partially claimed the payment!
The fix for the consistency part of this issue is pretty
straightforward - just check for this condition on startup and
complete the claim across all channels/`ChannelMonitor`s if we
detect it.
This still leaves us in a confused state from the perspective of
the user, however - we've actually claimed a payment but when they
call `claim_funds` we return `false` indicating it could not be
claimed.
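A rough sketch of the consistency part of the fix, with hypothetical
types and helper names (the user-facing side is handled by the
`PaymentClaimed` event change above):

```rust
use std::collections::HashMap;

#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct PaymentHash([u8; 32]);
#[derive(Clone, Copy)]
struct PaymentPreimage([u8; 32]);

struct Monitor {
    preimages: HashMap<PaymentHash, PaymentPreimage>,
}

/// On startup: if any monitor already knows the preimage for a payment we
/// still list as claimable, the claim was only partially persisted before a
/// crash - push the preimage to the remaining monitors and drop the payment
/// from the claimable set so we do not leave it half-claimed.
fn complete_partial_claims(
    monitors: &mut [Monitor],
    claimable_htlcs: &mut HashMap<PaymentHash, u64>,
) {
    let mut partial: Vec<(PaymentHash, PaymentPreimage)> = Vec::new();
    for mon in monitors.iter() {
        for (hash, preimage) in &mon.preimages {
            if claimable_htlcs.contains_key(hash) {
                partial.push((*hash, *preimage));
            }
        }
    }
    for (payment_hash, preimage) in partial {
        for mon in monitors.iter_mut() {
            mon.preimages.entry(payment_hash).or_insert(preimage);
        }
        claimable_htlcs.remove(&payment_hash);
    }
}
```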
In `HTLCUpdate` and `OnchainEvent` tracking, we store the HTLC
value (rounded down to whole satoshis). This is somewhat
confusingly referred to as the `onchain_value_satoshis` even though
it refers to the commitment transaction output value, not the value
available on chain (which may have been reduced by an
HTLC-Timeout/HTLC-Success transaction).
In fc77c57c3c we stopped using the
`FinalOnionHopData` in `OnionPayload::Invoice` directly and intend
to remove it eventually. However, in the next few commits we need
access to the payment secret when claiming a payment, as we create
a new `PaymentPurpose` during the claim process for a new event.
In order to get access to a `PaymentPurpose` without having access
to the `FinalOnionHopData` we here change the storage of
`claimable_htlcs` to store a single `PaymentPurpose` explicitly
with each set of claimable HTLCs.
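A simplified sketch of the new storage shape, with hypothetical
types:

```rust
use std::collections::HashMap;

#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct PaymentHash([u8; 32]);
struct PaymentSecret([u8; 32]);
struct PaymentPreimage([u8; 32]);

enum PaymentPurpose {
    InvoicePayment { payment_secret: PaymentSecret },
    SpontaneousPayment(PaymentPreimage),
}

struct ClaimableHTLC { value_msat: u64 }

struct PaymentsHeld {
    // Before: HashMap<PaymentHash, Vec<ClaimableHTLC>>, with the secret only
    // reachable via each HTLC's FinalOnionHopData. The purpose is now stored
    // once per payment, alongside its HTLCs.
    claimable_htlcs: HashMap<PaymentHash, (PaymentPurpose, Vec<ClaimableHTLC>)>,
}
```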
In fc77c57c3c we stopped using the
`FinalOnionHopData` in `OnionPayload::Invoice` directly and renamed
it `_legacy_hop_data` with the intent of removing it in a few
versions. However, we continue to check that it was included in the
serialized data, meaning we would not be able to remove it without
breaking the ability to serialize full `ChannelManager`s.
This fixes that by making the `_legacy_hop_data` an `Option`, which
we will happily handle just fine if it's `None`.
This update also includes a minor refactor. The return type of
`pending_monitor_events` has been changed to a `Vec` of tuples
containing an `OutPoint`, associating each `Vec` of `MonitorEvent`s
with its funding outpoint.
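As a sketch of the changed signature (hypothetical trait wrapper,
simplified types):

```rust
struct Txid([u8; 32]);
struct OutPoint { txid: Txid, index: u16 }

enum MonitorEvent { HTLCEvent, CommitmentTxConfirmed }

trait Watch {
    // Before: fn pending_monitor_events(&self) -> Vec<MonitorEvent>;
    // Events are now grouped under the funding outpoint of the monitor that
    // produced them, so callers know which channel each event belongs to.
    fn pending_monitor_events(&self) -> Vec<(OutPoint, Vec<MonitorEvent>)>;
}
```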
We've also renamed `source/sink_channel_id` to `prev/next_channel_id`
for clarity.
We only use it to check the amount when processing MPP parts, but
store the full object (including new payment metadata) in it.
Because we now store the amount in the parent structure, there is
no need for it at all in the `OnionPayload`. Sadly, for
serialization compatibility, we need it to continue to exist, at
least temporarily, but we can avoid populating the new fields in
that case.
...instead of accessing it via the `OnionPayload::Invoice` form.
This may be useful if we add MPP keysend support, but is directly
useful to allow us to drop `FinalOnionHopData` from `OnionPayload`.
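A simplified sketch of the resulting layout, with hypothetical field
names:

```rust
struct FinalOnionHopData { total_msat: u64 }

enum OnionPayload {
    Invoice {
        // Kept only for serialization compatibility; may be None going forward.
        _legacy_hop_data: Option<FinalOnionHopData>,
    },
    Spontaneous,
}

struct ClaimableHTLC {
    value_msat: u64,
    /// The sender-declared total for the (possibly multi-part) payment, now
    /// stored directly instead of being read out of `OnionPayload::Invoice`.
    total_msat: u64,
    onion_payload: OnionPayload,
}
```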
In general, we should never be automatically force-closing our
users' channels unless there is some immediate risk of funds loss
(i.e. because of some HTLC(s) which are timing out soon). In any
other case, we should trust the user to be able to figure out what
is going on and close their channels manually instead of trying to
be overly clever and automate closures if we think the channel is
useless.
In this case, even if a peer has some required feature that does
not allow us to communicate with them, there is a strong
possibility that some LDK upgrade may allow us to in the future. In
the meantime, there is no reason to go on-chain unless the user
needs funds immediately. In such a case, the user should already
have logic to force-close channels with peers which are not
available for any reason.
The `chain::Listen` interface provides a block-connection-based
alternative to the `chain::Confirm` interface, which supports
providing transaction data at a time separate from the block
connection time.
For users who are downloading the full headers tree (e.g. from a
node over the Bitcoin P2P protocol) but who are not downloading
full blocks (e.g. because they're using BIP 157/158 filtering)
there is no API that exactly matches their event stream -
`chain::Listen` requires a full block for each connected block,
`chain::Confirm` requires breaking each connection event into two
calls.
Given it's incredibly trivial to take a `TransactionData` in
addition to a `Block` in `chain::Listen`, we do so here, adding a
default-implementation `block_connected` which simply creates the
`TransactionData` - something all of the current `chain::Listen`
implementations ultimately do anyway.
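A sketch of the trait shape with simplified stand-in types (the real
block and transaction types come from the `bitcoin` crate):

```rust
struct Transaction;
struct Block { header: [u8; 80], txdata: Vec<Transaction> }

/// (index-within-block, transaction) pairs.
type TransactionData<'a> = [(usize, &'a Transaction)];

trait Listen {
    /// Full-block users keep calling this; it now has a default body which
    /// simply derives the transaction data from the block.
    fn block_connected(&self, block: &Block, height: u32) {
        let txdata: Vec<_> = block.txdata.iter().enumerate().collect();
        self.filtered_block_connected(&block.header, &txdata, height);
    }

    /// BIP 157/158 users call this directly with whichever of the block's
    /// transactions they actually fetched.
    fn filtered_block_connected(&self, header: &[u8; 80],
        txdata: &TransactionData<'_>, height: u32);
}
```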
Closes #1128.
`ChannelDetails::outbound_capacity_msat` describes the total amount
available for sending across several HTLCs, basically just our
balance minus the reserve value maintained by our counterparty.
However, when routing we use it to guess the maximum amount we can
send in a single additional HTLC, which it is not.
There are numerous reasons why our balance may not match the amount
we can send in a single HTLC, whether it be the HTLC in-flight
limit, the channel's HTLC maximum, or our feerate buffer.
This commit splits the `outbound_capacity_msat` field into two -
`outbound_capacity_msat` and `outbound_htlc_limit_msat`, setting us
up for correctly handling our next-HTLC-limit in the future.
This also addresses the first of the reasons why the values may
not match - the max-in-flight limit. The inaccuracy is ultimately
tracked as #1126.
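Sketched with the field names used above (a simplified struct, not
the full `ChannelDetails`):

```rust
struct ChannelDetails {
    /// Roughly our balance minus the reserve our counterparty requires of us:
    /// the total we could send spread across several HTLCs.
    outbound_capacity_msat: u64,
    /// What we can actually place in a single additional HTLC, after limits
    /// such as the counterparty's max-in-flight value are applied.
    outbound_htlc_limit_msat: u64,
}
```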