In the next commit, we will reload lost pending payments from
ChannelMonitors during restart. However, in order to avoid
re-adding pending payments which have already been fulfilled, we
must ensure that we do not fully remove pending payments until all
HTLCs for the payment have been fully removed from their
ChannelMonitors.
We do so here, introducing a new PendingOutboundPayment variant
called `Completed` which only tracks the set of pending HTLCs.
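Roughly, the shape this takes is sketched below; the variant and
field names are illustrative assumptions rather than the exact
identifiers:

    use std::collections::HashSet;

    // Sketch only: names are assumptions for illustration.
    enum PendingOutboundPayment {
        Retryable {
            // One session key per in-flight HTLC (i.e. per path).
            session_privs: HashSet<[u8; 32]>,
            // ... metadata needed to retry the payment ...
        },
        // The payment is resolved, but the entry (and its per-HTLC
        // session keys) is kept until every HTLC has been irrevocably
        // removed from its ChannelMonitor.
        Completed {
            session_privs: HashSet<[u8; 32]>,
        },
    }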
When an HTLC has been failed, we track it up until the point there
exists no broadcastable commitment transaction which has the HTLC
present, at which point Channel returns the HTLCSource back to the
ChannelManager, which fails the HTLC backwards appropriately.
When an HTLC is fulfilled, however, we fulfill on the backwards path
immediately. This is great for claiming upstream HTLCs, but when we
want to track pending payments, we need to ensure we can check with
ChannelMonitor data to rebuild pending payments. In order to do so,
we need an event similar to the HTLC failure event, but for
fulfills instead.
Specifically, if we force-close a channel, we remove its off-chain
`Channel` object entirely, at which point, on reload, we may notice
HTLC(s) which are not present in our pending payments map (as they
may have received a payment preimage, but not fully committed to
it). Thus, we'd conclude we still have a retryable payment, which
is untrue.
This commit does so, informing the ChannelManager, via a new return
element where appropriate, of the HTLCSource corresponding to the
fulfilled HTLC.
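Conceptually, the new return element carries something like this
(a sketch; the struct name and fields are assumptions):

    // Sketch: data handed back from Channel's commitment/RAA processing.
    // `finalized_claimed_htlcs` is the new element described above: the
    // sources of fulfilled HTLCs which no longer appear in any
    // broadcastable commitment transaction, which ChannelManager uses
    // to finalize its pending-payment tracking.
    struct CommitmentStepResult {
        finalized_claimed_htlcs: Vec<HTLCSource>,
        // ... failed HTLCs, forwards, monitor updates, etc. ...
    }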
This resolves several user complaints (and issues in the sample
node) where startup is substantially delayed as we're always
waiting for the chain data to sync.
Further, in an upcoming PR, we'll be reloading pending payments
from ChannelMonitors on restart, at which point we'll need the
change here which avoids handling events until after the user
has confirmed the `ChannelMonitor` has been persisted to disk.
It will avoid a race where we
* send a payment/HTLC (persisting the monitor to disk with the
HTLC pending),
* force-close the channel, removing the channel entry from the
ChannelManager entirely,
* persist the ChannelManager,
* connect a block which contains a fulfill of the HTLC, generating
a claim event,
* handle the claim event while the `ChannelMonitor` is being
persisted,
* persist the ChannelManager (before the ChannelMonitor is
persisted fully),
* restart, reloading the HTLC as a pending payment in the
ChannelManager, which now has no references to it except from
the ChannelMonitor which still has the pending HTLC,
* replay the block connection, generating a duplicate PaymentSent
event.
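A minimal sketch of the ordering this enforces, with monitor
persistence abstracted into a simple status (all names below are
invented for illustration):

    // Invented names; illustrates the ordering only.
    enum MonitorPersistState { Pending, Persisted }

    fn maybe_process_events<E>(
        state: MonitorPersistState, pending: &mut Vec<E>, handle: impl Fn(E),
    ) {
        // Hold events (e.g. the PaymentSent from an on-chain claim) until
        // the user confirms the ChannelMonitor hit disk; otherwise a crash
        // in between lets the block connection replay and duplicate them.
        if let MonitorPersistState::Persisted = state {
            for ev in pending.drain(..) {
                handle(ev);
            }
        }
    }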
In the next commit we'll need ChainMonitor to "see" when a monitor
persistence completes, which means `monitor_updated` needs to move
to `ChainMonitor`. The simplest way to then communicate that
information to `ChannelManager` is via `MonitorEvent`s, which seems
to line up ok, even if they're now constructed in multiple
different places.
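A sketch of the rough shape (the variant name and its fields are
assumptions):

    // Sketch; variant name and fields are assumptions.
    pub enum MonitorEvent {
        // ... existing variants (on-chain HTLC resolutions, etc.) ...

        // Emitted by ChainMonitor once a ChannelMonitor update has been
        // durably persisted, identified by the channel's funding outpoint
        // and the update id, so ChannelManager can resume the channel and
        // release any held-back events.
        UpdateCompleted {
            funding_txo: OutPoint,
            monitor_update_id: u64,
        },
    }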
Failed payments may be retried, but calling get_route may return a Route
with the same failing path. Add a routing::Score trait used to
parameterize get_route, which calls it to determine how much a channel
should be penalized, expressed as the amount in msats the sender is
willing to pay to avoid the channel.
Also, add a Scorer struct that implements routing::Score with a
constant penalty. Subsequent changes will allow for more robust scoring
by feeding back payment path success and failure to the scorer via event
handling.
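A rough sketch of the interface (the exact signatures, including any
extra parameters such as the channel's endpoints, may differ):

    // Sketch; the real trait/method signatures may carry more context.
    pub trait Score {
        /// The penalty for routing through the given channel, expressed
        /// in millisatoshis the sender is willing to pay to avoid it;
        /// get_route consults this when comparing candidate paths.
        fn channel_penalty_msat(&self, short_channel_id: u64) -> u64;
    }

    /// A Score implementation applying the same constant penalty to
    /// every channel; later changes can adjust the penalty based on
    /// observed payment path successes and failures.
    pub struct Scorer {
        base_penalty_msat: u64,
    }

    impl Score for Scorer {
        fn channel_penalty_msat(&self, _short_channel_id: u64) -> u64 {
            self.base_penalty_msat
        }
    }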
The interface for get_route will change to take a scorer. Using
get_route_and_payment_hash whenever possible allows for keeping the
scorer inside get_route_and_payment_hash rather than at every call site.
Replace get_route with get_route_and_payment_hash wherever possible.
Additionally, update get_route_and_payment_hash to use the known invoice
features and the sending node's logger.
This makes it more practical for users to track channels prior to
funding, especially if the channel fails because the peer rejects
it for a parameter mismatch.
In the event of a channel close, if the funding transaction has
not yet been broadcast, a DiscardFunding event is issued along with
the ChannelClosed event.
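For example, a user's event handler might treat the pair along these
lines (the exact field sets of both events are assumptions):

    // Sketch of handling the two events together; fields are assumptions.
    use lightning::util::events::Event;

    fn handle_event(event: Event) {
        match event {
            Event::DiscardFunding { channel_id, .. } => {
                // The channel closed before its funding transaction was
                // broadcast; the included transaction can be forgotten
                // and its inputs safely reused.
                let _ = channel_id;
            },
            Event::ChannelClosed { channel_id, reason, .. } => {
                println!("channel {:?} closed: {:?}", channel_id, reason);
            },
            _ => {},
        }
    }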
If we attempt to send a payment, but the HTLC cannot be sent due
to local channel limits, we'll provide the user an error but end up
with an entry in our pending payment map. This will result in a
memory leak, as we'll never reclaim the pending payment map entry.
This is because we want to retain the ability to retry completely
failed payments.
Upcoming commits will remove these payments on timeout to prevent
DoS issues.
Also test that this removal allows retrying single-path payments.
When we are prepared to forward HTLCs, we generate a
PendingHTLCsForwardable event with a time in the future when the
user should tell us to forward. This provides some basic batching
of forward events, improving privacy slightly.
After we generate the event, we expect users to spawn a timer in
the background and let us know when it finishes. However, if the
user shuts down before the timer fires, the user will restart and
have no idea that HTLCs are waiting to be forwarded/received.
To fix this, instead of serializing PendingHTLCsForwardable events
to disk while they're pending (before the user starts the timer),
we simply regenerate them when a ChannelManager is deserialized
with HTLCs pending.
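A sketch of that regeneration at read time (the constant and field
names are approximations):

    // Approximate sketch of the behavior on ChannelManager
    // deserialization.
    if !forward_htlcs.is_empty() {
        // HTLCs were pending when we shut down: emit a fresh forwardable
        // event rather than relying on one having been persisted.
        pending_events_read.push(events::Event::PendingHTLCsForwardable {
            time_forwardable:
                Duration::from_millis(MIN_HTLC_RELAY_HOLDING_CELL_MILLIS),
        });
    }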
Fixes #1042
We want to reuse send_payment's internal functions for retries,
so some now need to be parameterized by PaymentId to avoid
generating a new PaymentId on retry.
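Illustrative only; the real internal helpers carry more parameters
than shown here:

    // Illustrative: the internal send path takes an explicit PaymentId
    // so that a retry reuses the original id instead of minting a new
    // one.
    fn send_payment_internal(
        route: &Route, payment_hash: PaymentHash,
        payment_secret: &Option<PaymentSecret>, payment_id: PaymentId,
    ) -> Result<(), PaymentSendFailure> {
        // ... build the per-path HTLCs and track them under `payment_id` ...
        unimplemented!()
    }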
When we detect that a channel `is_shutdown()`, or when we call
`force_shutdown()` on it, we notify the user with an
Event::ChannelClosed informing them of the channel id and closure
reason.
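The new event carries roughly the following (the field names and the
contents of the reason type are assumptions):

    // Sketch of the new event variant; exact fields may differ.
    pub enum Event {
        // ... existing variants ...

        ChannelClosed {
            /// The id of the channel which has been closed.
            channel_id: [u8; 32],
            /// Why the channel was closed, e.g. a cooperative close, a
            /// local or remote force-close, or a processing error.
            reason: ClosureReason,
        },
    }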
Going forward, all lightning messages have a TLV stream suffix,
allowing new fields to be added as needed. In the P2P protocol,
messages have an explicit length, so there is no implied length in
the TLV stream itself. HTLCFailureMsg enum variants have messages
in them, but without a size prefix or any explicit end. Thus, if a
HTLCFailureMsg is read as a part of a ChannelManager, with a TLV
stream at the end, there is no way to differentiate between the end
of the message and the next field(s) in the ChannelManager.
Here we add two new variant values for HTLCFailureMsg variants in
the read path, allowing us to switch to the new values if/when we
add new TLV fields in UpdateFailHTLC or UpdateFailMalformedHTLC so
that older versions can still read the new TLV fields.
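A sketch of the read path (the variant ids and reader helpers are
approximations of the serialization utilities, not the exact code):

    // Approximate sketch; ids and helper signatures may differ.
    impl Readable for HTLCFailureMsg {
        fn read<R: Read>(reader: &mut R) -> Result<Self, DecodeError> {
            match <u8 as Readable>::read(reader)? {
                // Legacy values: the contained message is read with its
                // fixed set of fields and no TLV suffix, so its end is
                // implied by those fields alone.
                0 => Ok(HTLCFailureMsg::Relay(Readable::read(reader)?)),
                1 => Ok(HTLCFailureMsg::Malformed(Readable::read(reader)?)),
                // New values: the contained message is length-prefixed,
                // so TLV fields added to it later can be read (or
                // skipped) without being confused with the
                // ChannelManager's own TLV stream.
                2 => {
                    let len: u64 = Readable::read(reader)?;
                    let mut rd = FixedLengthReader::new(reader, len);
                    Ok(HTLCFailureMsg::Relay(Readable::read(&mut rd)?))
                },
                3 => {
                    let len: u64 = Readable::read(reader)?;
                    let mut rd = FixedLengthReader::new(reader, len);
                    Ok(HTLCFailureMsg::Malformed(Readable::read(&mut rd)?))
                },
                _ => Err(DecodeError::InvalidValue),
            }
        }
    }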