If a user restores from a backup that they know is stale, they'd
like to force-close all of their channels (or at least the ones
they know are stale) *without* broadcasting the latest state,
asking their peers to do so instead. This simply adds methods to do
so, renaming the existing `force_close_channel` and
`force_close_all_channels` methods to disambiguate further.
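As a rough sketch of the intended usage (the post-rename method names below are assumptions for illustration, not necessarily the exact API):

```rust
/// Illustrative stand-in for LDK's (heavily generic) `ChannelManager`.
struct ChannelManager;
type ChannelId = [u8; 32];

impl ChannelManager {
    /// The renamed "normal" force-close: broadcasts our latest commitment.
    fn force_close_broadcasting_latest_txn(&self, _channel_id: &ChannelId) {}
    /// The new variant: asks the peer to broadcast *their* latest state,
    /// so we never put a stale (potentially revoked) commitment on chain.
    fn force_close_without_broadcasting_txn(&self, _channel_id: &ChannelId) {}
}

fn recover_from_stale_backup(manager: &ChannelManager, stale_channels: &[ChannelId]) {
    for id in stale_channels {
        // Restored from a known-stale backup: never broadcast our own state.
        manager.force_close_without_broadcasting_txn(id);
    }
}
```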
As with the previous commit, `commit_upfront_shutdown_pubkey` is another
static field that cannot change after the initial channel handshake. We
therefore move it out from its existing place in `ChannelConfig`.
NetGraphMsgHandler implements RoutingMessageHandler to handle gossip
messages defined in BOLT 7 and maintains a view of the network by
updating NetworkGraph. Rename it to P2PGossipSync, which better
describes its purpose, and to contrast with RapidGossipSync.
`ChannelManager::fail_htlc_backwards`' bool return value is quite
confusing - just because it returns false doesn't mean the payment
wasn't (already) failed. Worse, in some race cases around shutdown
where a payment was claimed before an unclean shutdown and then
retried on startup, `fail_htlc_backwards` could return true even
though a duplicate copy of the same payment was claimed, but the
claim event had not yet been seen by the user.
While it's possible to use it correctly, it's somewhat confusing to
have a return value at all, and it definitely lends itself to misuse.
Instead, we should push users towards a model where they don't care
if `fail_htlc_backwards` succeeds - either they've locally marked
the payment as failed (prior to seeing any `PaymentReceived`
events) and will fail any attempts to pay it, or they have not and
the payment is still receivable until its timeout time is reached.
We can revisit this decision based on user feedback, but will need
to very carefully document the potential failure modes here if we
do.
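A minimal sketch of the intended push model, with stand-in types (the real `ChannelManager` is heavily generic):

```rust
use std::collections::HashSet;

type PaymentHash = [u8; 32];
struct ChannelManager;
impl ChannelManager {
    /// After this change: no return value, so callers cannot (mis)branch
    /// on whether an HTLC was actually failed.
    fn fail_htlc_backwards(&self, _payment_hash: &PaymentHash) {}
}

struct Invoices { failed: HashSet<PaymentHash> }

impl Invoices {
    fn expire(&mut self, manager: &ChannelManager, hash: PaymentHash) {
        // 1) Locally mark the payment failed *before* processing further
        //    PaymentReceived events, so later attempts to pay it fail too.
        self.failed.insert(hash);
        // 2) Fire-and-forget: we don't care whether an HTLC was pending.
        manager.fail_htlc_backwards(&hash);
    }
}
```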
This update also includes a minor refactor. The return type of
`pending_monitor_events` has been changed to a `Vec` of tuples,
pairing each funding `OutPoint` with the `Vec` of `MonitorEvent`s
produced by the monitor for that outpoint.
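In sketch form (types are stand-ins for LDK's):

```rust
struct OutPoint;
struct MonitorEvent;

trait Watch {
    // Before: fn pending_monitor_events(&self) -> Vec<MonitorEvent>;
    // After: each batch of events is keyed by the funding outpoint of
    // the monitor that produced it.
    fn pending_monitor_events(&self) -> Vec<(OutPoint, Vec<MonitorEvent>)>;
}
```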
We've also renamed `source/sink_channel_id` to `prev/next_channel_id`
for clarity.
The spec actually requires we never send `announcement_signatures`
(and, thus, `channel_announcement`s) until after six confirmations.
However, we would happily have sent them prior to that as long as
we exchange `funding_locked` messages with our counterparty. Thanks
to re-broadcasting, this issue is largely harmless; however, it could
have some negative interactions with less-robust peers. Much more
importantly, this represents an important step towards supporting
0-conf channels, where `funding_locked` messages may be exchanged
before we even have an SCID to construct the messages with.
Because there is no ACK mechanism for `announcement_signatures` we
rely on existing channel updates to stop rebroadcasting them - if
we sent a `commitment_signed` after an `announcement_signatures`
and later receive a `revoke_and_ack`, we know our counterparty also
received our `announcement_signatures`. This may resolve some rare
edge-cases where we send a `funding_locked` which our counterparty
receives, but lose connection before the `announcement_signatures`
(usually the very next message) arrives.
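A rough sketch of that implicit-ACK tracking (state and method names here are illustrative, not LDK's actual ones):

```rust
#[derive(PartialEq)]
enum AnnSigsState { NotSent, Sent, CommittedAfter, Acked }

struct Channel { ann_sigs: AnnSigsState }

impl Channel {
    fn sent_announcement_signatures(&mut self) {
        self.ann_sigs = AnnSigsState::Sent;
    }
    fn sent_commitment_signed(&mut self) {
        if self.ann_sigs == AnnSigsState::Sent {
            self.ann_sigs = AnnSigsState::CommittedAfter;
        }
    }
    fn received_revoke_and_ack(&mut self) {
        // A revoke_and_ack for a commitment_signed sent *after* our
        // announcement_signatures proves the peer received the latter.
        if self.ann_sigs == AnnSigsState::CommittedAfter {
            self.ann_sigs = AnnSigsState::Acked;
        }
    }
    /// On reconnect: keep rebroadcasting until implicitly ACKed.
    fn should_resend_announcement_signatures(&self) -> bool {
        matches!(self.ann_sigs, AnnSigsState::Sent | AnnSigsState::CommittedAfter)
    }
}
```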
Sadly, because the set of places where an `announcement_signatures`
may now be generated closely mirrors where `funding_locked` messages
may be generated, yet the two are now separate, a substantial amount
of code motion is required to provide the relevant parameters about
current block information and to ensure we can return new
`announcement_signatures` messages.
If we have not yet sent `funding_locked` only because of a pending
channel monitor update, we shouldn't consider a channel
`is_usable`. This has a number of downstream effects, including
not attempting to route payments through the channel, not sending
private `channel_update` messages to our counterparty, and not
sending `channel_announcement` messages even if our counterparty has
already signed for it.
We further gate generation of `node_announcement`s on `is_usable`,
preventing generation of those or `announcement_signatures` until
we've sent our `funding_locked`.
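Roughly, the gating amounts to something like this (field names illustrative):

```rust
struct Channel {
    funding_locked_sent: bool,
    monitor_update_pending: bool,
}

impl Channel {
    fn is_usable(&self) -> bool {
        // Not usable until funding_locked has actually gone out, i.e. not
        // while it is held back by a pending channel monitor update.
        self.funding_locked_sent && !self.monitor_update_pending
    }
}
```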
Finally, `during_funding_monitor_fail` is updated to test a case
where we see the funding transaction lock in but have a pending
monitor update failure, then receive `funding_locked` from our
counterparty and ensure we don't generate the above messages until
after the monitor update completes.
A single PaymentSent event is generated when a payment is fulfilled.
This occurs when the preimage is revealed on the first claimed HTLC.
For subsequent HTLCs, the event is not generated.
In order to score channels involved in a successful payment, the
scorer must be notified of each successful path involved in that payment.
Add a PaymentPathSuccessful event for this purpose. Generate it whenever
a part is removed from a pending outbound payment. This avoids duplicate
events when reconnecting to a peer.
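A sketch of feeding the new event into a scorer (types simplified; LDK's actual event also carries payment identifiers):

```rust
struct RouteHop { short_channel_id: u64 }
enum Event {
    PaymentPathSuccessful { path: Vec<RouteHop> },
    // ... other variants elided
}
trait Score { fn payment_path_successful(&mut self, path: &[RouteHop]); }

fn handle_event<S: Score>(scorer: &mut S, event: Event) {
    if let Event::PaymentPathSuccessful { path } = event {
        // One event per removed path part, so the scorer learns about
        // every successful path exactly once, even across reconnects.
        scorer.payment_path_successful(&path);
    }
}
```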
I realized on my own node that I don't have any visibility into how
long a monitor or manager persistence call takes, potentially
blocking other operations. This makes it much clearer by adding a
relevant `log_trace!()` print immediately before and immediately
after persistence.
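Illustratively, the change wraps each persistence call like so (a sketch, not the exact log text):

```rust
macro_rules! log_trace { ($($arg:tt)*) => { println!($($arg)*) } }

fn persist_manager(write_to_disk: impl Fn() -> std::io::Result<()>) -> std::io::Result<()> {
    log_trace!("Persisting ChannelManager...");
    let res = write_to_disk();
    log_trace!("Done persisting ChannelManager.");
    res
}
```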
The payment_hash may not uniquely identify the payment if it has been
reused. Include the `payment_id` in `PaymentSent` events so it can be
correlated with the `send_payment` call.
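A sketch of the resulting correlation (types simplified; the id is the one returned by `send_payment`):

```rust
use std::collections::HashMap;

type PaymentId = [u8; 32];
type PaymentHash = [u8; 32];
enum Event { PaymentSent { payment_id: Option<PaymentId>, payment_hash: PaymentHash } }

struct Outbound { description: String }

fn handle(sent: Event, in_flight: &mut HashMap<PaymentId, Outbound>) {
    if let Event::PaymentSent { payment_id: Some(id), .. } = sent {
        // The payment_hash alone may be ambiguous if it was reused; the
        // payment_id is unique per send_payment call.
        if let Some(outbound) = in_flight.remove(&id) {
            println!("payment '{}' fulfilled", outbound.description);
        }
    }
}
```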
In the next commit, we'll be originating monitor updates both from
the ChainMonitor and from the ChannelManager, making simple
sequential update IDs impossible.
Further, the existing async monitor update API was somewhat hard to
work with - instead of being able to generate monitor_updated
callbacks whenever a persistence process finishes, you had to
ensure you only did so once all previous updates had also been
persisted.
Here we eat the complexity for the user by moving to an opaque
type for monitor updates, tracking which updates are in-flight for
the user and only generating monitor-persisted events once all
pending updates have been committed.
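A conceptual sketch of that tracking (not the actual `ChainMonitor` internals):

```rust
use std::collections::{HashMap, HashSet};

type OutPoint = [u8; 36];
#[derive(Clone, PartialEq, Eq, Hash)]
struct MonitorUpdateId(u64); // opaque to the user

#[derive(Default)]
struct InFlight(HashMap<OutPoint, HashSet<MonitorUpdateId>>);

impl InFlight {
    fn persistence_started(&mut self, funding: OutPoint, id: MonitorUpdateId) {
        self.0.entry(funding).or_default().insert(id);
    }
    /// Returns true only when the *last* pending update for this monitor
    /// completes, i.e. when a monitor-persisted event may be generated.
    fn persistence_completed(&mut self, funding: OutPoint, id: MonitorUpdateId) -> bool {
        let all_done = match self.0.get_mut(&funding) {
            Some(set) => { set.remove(&id); set.is_empty() }
            None => false,
        };
        if all_done { self.0.remove(&funding); }
        all_done
    }
}
```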
In the next commit we'll need ChainMonitor to "see" when a monitor
persistence completes, which means `monitor_updated` needs to move
to `ChainMonitor`. The simplest way to then communicate that
to `ChannelManager` is via `MonitorEvent`s, which seems
to line up ok, even if they're now constructed by multiple
different places.
As ChainMonitor will need to see those errors in a coming PR,
we need to return errors via Persister so that our ChainMonitor
chain::Watch implementation sees them.
test_simple_monitor_permanent_update_fail and
test_simple_monitor_temporary_update_fail both have a mode where
they use either chain::Watch or persister to return errors.
As we won't be doing any returns directly from the chain::Watch
wrapper in a coming commit, the chain::Watch-return form of the
test will no longer make sense.
Exposing a `RwLock<HashMap<>>` directly was always a bit strange,
and in upcoming changes we'd like to change the internal
data structure in `ChainMonitor`.
Further, the use of `RwLock` and `HashMap` meant we weren't able
to expose the ChannelMonitors themselves to users in bindings,
leaving a bindings/rust API gap.
Thus, we take this opportunity to expose ChannelMonitors directly
via a wrapper, hiding the internals of `ChainMonitor` behind
getters. We also update tests to use the new API.
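A sketch of the wrapper approach (simplified types; the real API returns the actual `ChannelMonitor` generics):

```rust
use std::collections::HashMap;
use std::ops::Deref;
use std::sync::{RwLock, RwLockReadGuard};

type OutPoint = [u8; 36];
struct ChannelMonitor;

struct ChainMonitor { monitors: RwLock<HashMap<OutPoint, ChannelMonitor>> }

/// Holds the read lock for as long as the caller uses the monitor,
/// without exposing the RwLock<HashMap<..>> itself.
struct LockedChannelMonitor<'a> {
    lock: RwLockReadGuard<'a, HashMap<OutPoint, ChannelMonitor>>,
    funding_txo: OutPoint,
}
impl<'a> Deref for LockedChannelMonitor<'a> {
    type Target = ChannelMonitor;
    fn deref(&self) -> &ChannelMonitor {
        self.lock.get(&self.funding_txo).expect("checked at construction")
    }
}

impl ChainMonitor {
    fn get_monitor(&self, funding_txo: OutPoint) -> Result<LockedChannelMonitor<'_>, ()> {
        let lock = self.monitors.read().unwrap();
        if lock.contains_key(&funding_txo) {
            Ok(LockedChannelMonitor { lock, funding_txo })
        } else {
            Err(())
        }
    }
    fn list_monitors(&self) -> Vec<OutPoint> {
        self.monitors.read().unwrap().keys().cloned().collect()
    }
}
```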
The interface for get_route will change to take a scorer. Using
get_route_and_payment_hash whenever possible allows for keeping the
scorer inside get_route_and_payment_hash rather than at every call site.
Replace get_route with get_route_and_payment_hash wherever possible.
Additionally, update get_route_and_payment_hash to use the known invoice
features and the sending node's logger.
When we detect that a channel `is_shutdown()`, or when we call
`force_shutdown()` on it, we notify the user with an
`Event::ChannelClosed` informing them of the channel id and closure
reason.
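In sketch form (fields simplified; LDK's variant also carries a user_channel_id, and the reason is a dedicated enum):

```rust
enum ClosureReason { HolderForceClosed, CooperativeClosure }
enum Event { ChannelClosed { channel_id: [u8; 32], reason: ClosureReason } }

fn handle(event: Event) {
    if let Event::ChannelClosed { channel_id, reason } = event {
        match reason {
            ClosureReason::HolderForceClosed => { /* we force-closed it */ }
            ClosureReason::CooperativeClosure => { /* clean mutual close */ }
        }
        let _ = channel_id; // bookkeeping, user notification, etc.
    }
}
```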
Now that NetworkGraph uses interior mutability, the RwLock used around
it in NetGraphMsgHandler is no longer needed. This allows for shared
ownership without a lock.
This adds the new range-based closing_signed negotiation specified
in https://github.com/lightningnetwork/lightning-rfc/pull/847 as
well as cleans up the existing closing_signed negotiation to unify
the new codepaths and the old ones.
Note that the new range-based closing_signed negotiation allows the
channel fundee to ultimately select the fee out of a range specified
by the funder, from which we, of course, always select the highest
allowed amount. Thus, we've added an extra round of closing_signed
in the common case, as we will not simply accept the first fee we
see, always preferring to make the funder pay as much as they're
willing to.
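As a toy illustration of why the extra round appears: the funder proposes a [min, max] fee range, and as fundee we counter at the top of that range (the funder pays the closing fee), rather than accepting the first proposal outright:

```rust
/// Fundee-side fee selection under the range-based negotiation: always
/// take the highest fee the funder allows, since the funder pays it.
fn fundee_pick_closing_fee(funder_min_sat: u64, funder_max_sat: u64) -> u64 {
    debug_assert!(funder_min_sat <= funder_max_sat);
    funder_max_sat
}
```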
When we added the support for external signing, many of the
signing functions were allowed to return an error, closing the
channel in such a case. `sign_closing_transaction` is one such
function which can now return an error, except instead of handling
it properly we'd simply never send a `closing_signed` message,
hanging the channel until users intervene and force-close it.
Piping the channel-closing error back through the various callsites
(several of which already have pending results by the time they
call `maybe_propose_first_closing_signed`) may be rather
complicated, so instead we simply attempt to propose the initial
`closing_signed` in `get_and_clear_pending_msg_events` like we do
for holding-cell freeing.
Further, since we now (possibly) generate a `ChannelMonitorUpdate`
on `shutdown`, we may need to wait for monitor updating to complete
before we can send a `closing_signed`, meaning we need to handle
the send asynchronously anyway.
This simplifies a few function interfaces and has no impact on
behavior, aside from a few message-ordering edge-cases, as seen in
the two small test changes required.
If we receive an update_fee but do not receive a commitment_signed,
we should not persist the pending fee update to disk or hold on to
it after our peer disconnects.
To keep the code as readable as possible, we add a state enum
which matches the relevant states from InboundHTLCState, allowing
for more simple code comparison between inbound HTLC handling and
update_fee handling.
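A sketch of such an enum (variant names illustrative, chosen to mirror the corresponding `InboundHTLCState` stages):

```rust
enum FeeUpdateState {
    // Inbound: update_fee received but no commitment_signed yet. This
    // state is deliberately not persisted and is dropped on disconnect,
    // per the requirement above.
    RemoteAnnounced,
    // commitment_signed received; awaiting our revoke_and_ack before the
    // new feerate takes effect on our commitment transaction.
    AwaitingRemoteRevokeToAnnounce,
    // Outbound: we sent the update_fee ourselves.
    Outbound,
}
```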
When we send an update_fee to our counterparty on an outbound
channel, if we need to re-send a commitment update after
reconnection, the update_fee must be present in the re-sent
commitment update messages. However, we were always setting the
update_fee field in the commitment update to None, causing us to
generate invalid commitment signatures and get channel
force-closures.
This fixes the issue by correctly detecting when an update_fee
needs to be re-sent, doing so when required.
When a shutdown script is omitted from open_channel or accept_channel,
it must be provided when sending shutdown. Generate the shutdown
script at channel closing time in this case rather than at channel
opening.
This requires producing a ChannelMonitorUpdate with the shutdown script
since it is no longer known at ChannelMonitor creation.
It is useful for accounting and informational reasons for users to
be informed when a payment has been successfully forwarded. Thus,
when an HTLC which represents a forwarded leg is claimed, we
generate a new `PaymentForwarded` event.
This requires some additional plumbing to return HTLC values from
`OnchainEvent`s. Further, when we have to go on-chain to claim the
inbound side of the payment, we do not inform the user of the fee
reward, as we cannot calculate it until we see what is confirmed
on-chain.
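A sketch of consuming the new event (fields as described above; when the claim happened on-chain the fee cannot yet be computed, so no amount is reported):

```rust
enum Event {
    PaymentForwarded { fee_earned_msat: Option<u64>, claim_from_onchain_tx: bool },
}

fn handle(event: Event, total_fees_msat: &mut u64) {
    if let Event::PaymentForwarded { fee_earned_msat, claim_from_onchain_tx } = event {
        match fee_earned_msat {
            Some(fee) => *total_fees_msat += fee,
            // Inbound side claimed via an on-chain transaction: the fee is
            // unknown until we see what confirms.
            None => debug_assert!(claim_from_onchain_tx),
        }
    }
}
```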
Substantial code structure rewrites by:
Valentine Wallace <vwallace@protonmail.com>