The spec actually requires we never send `announcement_signatures`
(and, thus, `channel_announcement`s) until after six confirmations.
However, we would happily have sent them prior to that, as long as
we had exchanged `funding_locked` messages with our counterparty.
Thanks to re-broadcasting, this issue is largely harmless; however,
it could have some negative interactions with less-robust peers.
Much more importantly, this represents a significant step towards
supporting 0-conf channels, where `funding_locked` messages may be
exchanged before we even have an SCID to construct the messages
with.
Because there is no ACK mechanism for `announcement_signatures`, we
rely on existing channel updates to stop rebroadcasting them - if
we sent a `commitment_signed` after an `announcement_signatures`
and later receive a `revoke_and_ack`, we know our counterparty also
received our `announcement_signatures`. This may resolve some rare
edge-cases where we send a `funding_locked` which our counterparty
receives, but lose connection before the `announcement_signatures`
(usually the very next message) arrives.
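As a rough illustration of the implicit-ACK idea described above (not
rust-lightning's actual `Channel` fields - all names here are
hypothetical), the tracking could look like this sketch:

```rust
// A minimal sketch of implicit-ACK tracking via the commitment dance.
// All field and method names are illustrative, not LDK's.
#[derive(Default)]
struct AnnouncementSigTracker {
    /// Set when we send `announcement_signatures`.
    sent_announcement_sigs: bool,
    /// Set when we later send a `commitment_signed`, which follows the
    /// earlier `announcement_signatures` in the ordered message stream.
    commitment_signed_after_sigs: bool,
    /// Once true, the counterparty must have received our
    /// `announcement_signatures`, so we can stop rebroadcasting.
    announcement_sigs_acked: bool,
}

impl AnnouncementSigTracker {
    fn on_send_announcement_signatures(&mut self) {
        self.sent_announcement_sigs = true;
    }
    fn on_send_commitment_signed(&mut self) {
        if self.sent_announcement_sigs {
            self.commitment_signed_after_sigs = true;
        }
    }
    fn on_receive_revoke_and_ack(&mut self) {
        // A `revoke_and_ack` responds to our `commitment_signed`.
        // Since messages are delivered in order, the counterparty must
        // also have seen the `announcement_signatures` sent before it.
        if self.commitment_signed_after_sigs {
            self.announcement_sigs_acked = true;
        }
    }
    fn should_rebroadcast(&self) -> bool {
        self.sent_announcement_sigs && !self.announcement_sigs_acked
    }
}
```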
Sadly, the set of places where an `announcement_signatures` may now
be generated closely mirrors where `funding_locked` messages may be
generated, yet the two remain separate; this results in a
substantial amount of code motion, providing relevant parameters
about current block information and ensuring we can return new
`announcement_signatures` messages.
If we have not yet sent `funding_locked` only because of a pending
channel monitor update, we shouldn't consider a channel
`is_usable`. This has a number of downstream effects, including
not attempting to route payments through the channel, not sending
private `channel_update` messages to our counterparty, and not
sending `channel_announcement` messages even if our counterparty
has already signed for one.
We further gate generation of `node_announcement`s on `is_usable`,
preventing generation of those or `announcement_signatures` until
we've sent our `funding_locked`.
Finally, `during_funding_monitor_fail` is updated to test a case
where we see the funding transaction lock in but have a pending
monitor update failure, then receive `funding_locked` from our
counterparty and ensure we don't generate the above messages until
after the monitor update completes.
A single PaymentSent event is generated when a payment is fulfilled.
This occurs when the preimage is revealed on the first claimed HTLC.
For subsequent HTLCs, the event is not generated.
In order to score channels involved in a successful payment, the
scorer must be notified of each successful path involved in the payment.
Add a PaymentPathSuccessful event for this purpose. Generate it whenever
a part is removed from a pending outbound payment. This avoids duplicate
events when reconnecting to a peer.
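Conceptually, the per-path flow looks something like this sketch, using
simplified stand-ins for LDK's actual `Event` and pending-payment
types:

```rust
// Simplified stand-ins for illustration; not LDK's actual types.
type PaymentId = [u8; 32];
type PaymentHash = [u8; 32];

#[derive(Debug)]
enum Event {
    /// Generated once, when the preimage is first revealed.
    PaymentSent { payment_id: PaymentId, payment_hash: PaymentHash },
    /// Generated once per successful path, as each part is removed
    /// from the pending outbound payment.
    PaymentPathSuccessful { payment_id: PaymentId, path_len: usize },
}

struct PendingOutboundPayment {
    payment_id: PaymentId,
    payment_hash: PaymentHash,
    /// One entry per in-flight path; here just the hop count.
    pending_paths: Vec<usize>,
}

fn on_path_fulfilled(
    payment: &mut PendingOutboundPayment,
    path_idx: usize,
    first_claim: bool,
    events: &mut Vec<Event>,
) {
    if first_claim {
        events.push(Event::PaymentSent {
            payment_id: payment.payment_id,
            payment_hash: payment.payment_hash,
        });
    }
    // Generating the event as the part is removed (rather than on raw
    // message receipt) means a redundant claim after reconnection finds
    // no part left to remove, so no duplicate event is generated.
    let path_len = payment.pending_paths.remove(path_idx);
    events.push(Event::PaymentPathSuccessful {
        payment_id: payment.payment_id,
        path_len,
    });
}
```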
I realized on my own node that I don't have any visibility into how
long a monitor or manager persistence call takes, potentially
blocking other operations. This change makes such delays much more
visible by adding log_trace!() prints immediately before and
immediately after persistence.
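The pattern, as a self-contained sketch: `Persister` here is a
hypothetical stand-in trait, and `eprintln!` stands in for the real
`log_trace!()` macro:

```rust
use std::time::Instant;

// Stand-in trait for illustration; not LDK's actual `Persister`.
trait Persister {
    fn persist_manager(&self, serialized: &[u8]) -> std::io::Result<()>;
}

fn persist_with_tracing<P: Persister>(
    persister: &P,
    data: &[u8],
) -> std::io::Result<()> {
    let start = Instant::now();
    // Print immediately before the potentially-blocking call...
    eprintln!("TRACE: Persisting ChannelManager...");
    let res = persister.persist_manager(data);
    // ...and immediately after, so slow persisters stand out in logs.
    eprintln!(
        "TRACE: Done persisting ChannelManager ({} ms).",
        start.elapsed().as_millis()
    );
    res
}
```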
The payment_hash may not uniquely identify the payment if it has been
reused. Include the payment_id in PaymentSent events so it can be
correlated with the `send_payment` call.
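For illustration, a consumer might do the correlation roughly like this
sketch (types are simplified stand-ins):

```rust
use std::collections::HashMap;

// Simplified stand-in; LDK's PaymentId is also a 32-byte identifier.
type PaymentId = [u8; 32];

struct OutstandingPayment {
    invoice: String,
}

fn handle_payment_sent(
    payment_id: PaymentId,
    outstanding: &mut HashMap<PaymentId, OutstandingPayment>,
) {
    // The payment_hash alone may be ambiguous if it was reused across
    // payments; the payment_id is unique per send_payment call.
    if let Some(payment) = outstanding.remove(&payment_id) {
        println!("Payment for invoice {} completed", payment.invoice);
    }
}
```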
In the next commit, we'll be originating monitor updates both from
the ChainMonitor and from the ChannelManager, making simple
sequential update IDs impossible.
Further, the existing async monitor update API was somewhat hard to
work with - instead of being able to generate monitor_updated
callbacks whenever a persistence process finishes, you had to
ensure you only did so once all previous updates had also been
persisted.
Here we eat the complexity for the user by moving to an opaque
type for monitor updates, tracking which updates are in-flight for
the user and only generating monitor-persisted events once all
pending updates have been committed.
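A minimal sketch of the in-flight tracking idea, with illustrative
names rather than the exact new API:

```rust
use std::collections::HashSet;

// Opaque to the user: they hand it back on completion, in any order.
#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct MonitorUpdateId(u64);

#[derive(Default)]
struct MonitorHolder {
    pending_updates: HashSet<MonitorUpdateId>,
}

impl MonitorHolder {
    fn update_started(&mut self, id: MonitorUpdateId) {
        self.pending_updates.insert(id);
    }
    /// Called when an async persistence finishes, in any order.
    /// Returns true once no updates remain in flight, i.e. when it is
    /// safe to surface the monitor-persisted event.
    fn update_completed(&mut self, id: MonitorUpdateId) -> bool {
        self.pending_updates.remove(&id);
        self.pending_updates.is_empty()
    }
}
```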
In the next commit we'll need ChainMonitor to "see" when a monitor
persistence completes, which means `monitor_updated` needs to move
to `ChainMonitor`. The simplest way to then communicate that
information to `ChannelManager` is via `MonitorEvent`s, which seems
to line up ok, even if they're now constructed in multiple
different places.
As ChainMonitor will need to see those errors in a coming PR,
we need to return errors via `Persister` so that `ChainMonitor`'s
`chain::Watch` implementation sees them.
test_simple_monitor_permanent_update_fail and
test_simple_monitor_temporary_update_fail both have a mode where
they use either chain::Watch or persister to return errors.
As we won't be doing any returns directly from the chain::Watch
wrapper in a coming commit, the chain::Watch-return form of the
test will no longer make sense.
Exposing a `RwLock<HashMap<>>` directly was always a bit strange,
and in upcoming changes we'd like to change the internal
data structure in `ChainMonitor`.
Further, the use of `RwLock` and `HashMap` meant we weren't able
to expose the ChannelMonitors themselves to users in bindings,
leaving a bindings/rust API gap.
Thus, we take this opportunity to expose ChannelMonitors directly
via a wrapper, hiding the internals of `ChainMonitor` behind
getters. We also update tests to use the new API.
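The wrapper pattern, roughly - stand-in types, and the real
`ChainMonitor` internals differ:

```rust
use std::collections::HashMap;
use std::sync::{RwLock, RwLockReadGuard};

// Stand-ins for illustration only.
#[derive(PartialEq, Eq, Hash, Clone, Copy)]
struct OutPoint([u8; 36]);
struct ChannelMonitor;

struct ChainMonitor {
    monitors: RwLock<HashMap<OutPoint, ChannelMonitor>>,
}

/// A read guard scoped to one monitor, keeping `ChainMonitor`'s
/// internal data structure private (and bindings-friendly).
struct LockedChannelMonitor<'a> {
    lock: RwLockReadGuard<'a, HashMap<OutPoint, ChannelMonitor>>,
    funding_txo: OutPoint,
}

impl<'a> std::ops::Deref for LockedChannelMonitor<'a> {
    type Target = ChannelMonitor;
    fn deref(&self) -> &ChannelMonitor {
        self.lock.get(&self.funding_txo).expect("checked on construction")
    }
}

impl ChainMonitor {
    fn get_monitor(
        &self,
        funding_txo: OutPoint,
    ) -> Result<LockedChannelMonitor<'_>, ()> {
        let lock = self.monitors.read().unwrap();
        if lock.contains_key(&funding_txo) {
            Ok(LockedChannelMonitor { lock, funding_txo })
        } else {
            Err(())
        }
    }
    fn list_monitors(&self) -> Vec<OutPoint> {
        self.monitors.read().unwrap().keys().cloned().collect()
    }
}
```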
The interface for get_route will change to take a scorer. Using
get_route_and_payment_hash whenever possible allows for keeping the
scorer inside get_route_and_payment_hash rather than at every call site.
Replace get_route with get_route_and_payment_hash wherever possible.
Additionally, update get_route_and_payment_hash to use the known invoice
features and the sending node's logger.
When we detect that a channel `is_shutdown()` or when we call
`force_shutdown()` on it, we notify the user with an
`Event::ChannelClosed`, informing them of the channel id and the
closure reason.
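A sketch of what such an event might look like, with fields and
variants simplified relative to the actual `Event::ChannelClosed` and
`ClosureReason` types:

```rust
// Simplified illustration of a closure-notification event.
#[derive(Debug)]
enum ClosureReason {
    /// The counterparty force-closed, including their error message.
    CounterpartyForceClosed { peer_msg: String },
    /// We force-closed, e.g. via an explicit `force_shutdown()` call.
    HolderForceClosed,
    /// The channel closed cooperatively.
    CooperativeClosure,
    /// We hit an error processing the channel.
    ProcessingError { err: String },
}

#[derive(Debug)]
enum Event {
    ChannelClosed {
        channel_id: [u8; 32],
        reason: ClosureReason,
    },
}

fn notify_closure(
    channel_id: [u8; 32],
    reason: ClosureReason,
    events: &mut Vec<Event>,
) {
    events.push(Event::ChannelClosed { channel_id, reason });
}
```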
Now that NetworkGraph uses interior mutability, the RwLock used around
it in NetGraphMsgHandler is no longer needed. This allows for shared
ownership without a lock.
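Roughly, the ownership change looks like this sketch (stand-in types):

```rust
use std::sync::{Arc, RwLock};

// With interior mutability, NetworkGraph locks its own maps internally
// and exposes &self methods, so callers never need an outer lock.
struct NetworkGraph {
    channels: RwLock<Vec<u64>>, // internal locking, hidden from callers
}

impl NetworkGraph {
    fn update_channel(&self, scid: u64) {
        // &self, not &mut self: mutation goes through the inner lock.
        self.channels.write().unwrap().push(scid);
    }
}

struct NetGraphMsgHandler {
    // Was effectively Arc<RwLock<NetworkGraph>>; now shared directly.
    network_graph: Arc<NetworkGraph>,
}
```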
This adds the new range-based closing_signed negotiation specified
in https://github.com/lightningnetwork/lightning-rfc/pull/847 as
well as cleans up the existing closing_signed negotiation to unify
the new codepaths and the old ones.
Note that the new range-based closing_signed negotiation allows the
channel fundee to ultimately select the fee out of a range specified
by the funder, from which we, of course, always select the highest
allowed amount. Thus, we've added an extra round of closing_signed
in the common case, as we will not simply accept the first fee we
see, always preferring to make the funder pay as much as they're
willing to.
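A sketch of the fee-selection rule described above, with illustrative
field names (the actual message fields are specified in lightning-rfc
PR #847):

```rust
// Illustrative shape of the funder's advertised fee range.
struct FeeRange {
    min_fee_satoshis: u64,
    max_fee_satoshis: u64,
}

/// As fundee, pick the highest fee the funder's range allows, capped
/// by our own acceptable maximum: the funder pays the closing fee, so
/// we prefer the largest value they offered.
fn select_closing_fee(funder_range: &FeeRange, our_max_fee: u64) -> Option<u64> {
    // If the ranges don't overlap, negotiation cannot conclude on
    // this round.
    if funder_range.min_fee_satoshis > our_max_fee {
        return None;
    }
    Some(funder_range.max_fee_satoshis.min(our_max_fee))
}
```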
When we added the support for external signing, many of the
signing functions were allowed to return an error, closing the
channel in such a case. `sign_closing_transaction` is one such
function which can now return an error, except instead of handling
it properly we'd simply never send a `closing_signed` message,
hanging the channel until users intervene and force-close it.
Piping the channel-closing error back through the various callsites
(several of which already have pending results by the time they
call `maybe_propose_first_closing_signed`) may be rather
complicated, so instead we simply attempt to propose the initial
`closing_signed` in `get_and_clear_pending_msg_events` like we do
for holding-cell freeing.
Further, since we now (possibly) generate a `ChannelMonitorUpdate`
on `shutdown`, we may need to wait for monitor updating to complete
before we can send a `closing_signed`, meaning we need to handle
the send asynchronously anyway.
This simplifies a few function interfaces and has no impact on
behavior, aside from a few message-ordering edge-cases, as seen in
the two small test changes required.
If we receive an update_fee but do not receive a commitment_signed,
we should not persist the pending fee update to disk or hold on to
it after our peer disconnects.
To keep the code as readable as possible, we add a state enum which
matches the relevant states from InboundHTLCState, allowing for
simpler comparison between inbound HTLC handling and update_fee
handling.
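A sketch of the enum shape, with illustrative variant names following
the pattern described:

```rust
// Mirrors the relevant subset of InboundHTLCState so fee-update
// handling reads like HTLC handling. Names are illustrative.
enum FeeUpdateState {
    /// Our counterparty sent `update_fee`, but has not yet committed
    /// to it with a `commitment_signed`. This state is *not* persisted
    /// to disk and is dropped on disconnect, per the fix above.
    RemoteAnnounced,
    /// The fee update was included in a commitment transaction our
    /// counterparty signed; we're waiting on their `revoke_and_ack`.
    AwaitingRemoteRevokeToAnnounce,
    /// We sent this `update_fee` on an outbound channel.
    Outbound,
}
```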
When we send an update_fee to our counterparty on an outbound
channel, if we need to re-send a commitment update after
reconnection, the update_fee must be present in the re-sent
commitment update messages. However, we were always setting the
update_fee field in the commitment update to None, causing us to
generate invalid commitment signatures and get channel
force-closures.
This fixes the issue by correctly detecting when an update_fee
needs to be re-sent, doing so when required.
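Conceptually, the fix looks something like this sketch (hypothetical
names, not the actual `Channel` code):

```rust
// Illustrative stand-ins for the relevant channel state.
struct UpdateFee {
    feerate_per_kw: u32,
}

struct ChannelState {
    is_outbound: bool,
    /// Feerate we sent in an `update_fee` which the counterparty has
    /// not yet irrevocably committed to, if any.
    pending_update_fee: Option<u32>,
}

/// When rebuilding the commitment update after reconnection, include
/// any pending `update_fee` instead of unconditionally using `None`
/// (the old behavior, which produced commitment signatures that did
/// not match the counterparty's view of the feerate).
fn resend_update_fee(chan: &ChannelState) -> Option<UpdateFee> {
    if chan.is_outbound {
        chan.pending_update_fee
            .map(|feerate_per_kw| UpdateFee { feerate_per_kw })
    } else {
        None
    }
}
```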
When a shutdown script is omitted from open_channel or accept_channel,
it must be provided when sending shutdown. Generate the shutdown script
at channel closing time in this case rather than at channel opening.
This requires producing a ChannelMonitorUpdate with the shutdown script
since it is no longer known at ChannelMonitor creation.
For accounting and informational purposes, it is useful for users to
be notified when a payment has been successfully forwarded. Thus,
when an HTLC which represents a forwarded leg is claimed, we
generate a new `PaymentForwarded` event.
This requires some additional plumbing to return HTLC values from
`OnchainEvent`s. Further, when we have to go on-chain to claim the
inbound side of the payment, we do not inform the user of the fee
reward, as we cannot calculate it until we see what is confirmed
on-chain.
Substantial code structure rewrites by:
Valentine Wallace <vwallace@protonmail.com>
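A sketch of the event and the fee computation, with fields simplified
relative to the actual `Event::PaymentForwarded`:

```rust
// Simplified illustration of the forwarding event.
#[derive(Debug)]
enum Event {
    PaymentForwarded {
        /// The fee earned: inbound HTLC value minus outbound HTLC
        /// value. `None` when the inbound side had to be claimed
        /// on-chain, since the value received depends on what confirms.
        fee_earned_msat: Option<u64>,
        /// True if the claim came from an on-chain transaction rather
        /// than an `update_fulfill_htlc` message.
        claim_from_onchain_tx: bool,
    },
}

fn forwarded_event(
    inbound_value_msat: Option<u64>,
    outbound_value_msat: u64,
    onchain: bool,
) -> Event {
    let fee_earned_msat = inbound_value_msat
        .and_then(|inbound| inbound.checked_sub(outbound_value_msat));
    Event::PaymentForwarded {
        fee_earned_msat,
        claim_from_onchain_tx: onchain,
    }
}
```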
When receiving an update_fulfill_htlc message, we immediately
forward the claim backwards along the payment path before waiting
for a full commitment_signed dance. This is great, but can cause
duplicative claims if a node sends an update_fulfill_htlc message,
disconnects, reconnects, and then has to re-send its
update_fulfill_htlc message again.
While there was code to handle this, it treated it as a channel
error on the inbound channel, which is incorrect - this is an
expected, albeit incredibly rare, condition. Instead, we handle
these double-claims correctly, simply ignoring them.
With debug_assertions enabled, we also check that the previous
close of the same HTLC was a fulfill, and that we are not moving
from an HTLC failure to an HTLC claim after it's too late.
A test is also added, which hits all three failure cases in
`Channel::get_update_fulfill_htlc`.
Found by the chanmon_consistency fuzzer.
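A sketch of the intended double-claim handling, with stand-in types:

```rust
// How a previous resolution of the same HTLC was recorded.
enum HtlcResolution {
    Fulfilled,
    Failed,
}

enum ClaimResult {
    /// First claim: produce an `update_fulfill_htlc` to forward back.
    NewClaim,
    /// Duplicate claim after a reconnect: expected, simply ignored.
    DuplicateClaim,
}

fn get_update_fulfill_htlc(
    prev: Option<&HtlcResolution>,
) -> Result<ClaimResult, &'static str> {
    match prev {
        None => Ok(ClaimResult::NewClaim),
        // A repeated fulfill is an expected, albeit rare, condition
        // after reconnection - not a channel error.
        Some(HtlcResolution::Fulfilled) => Ok(ClaimResult::DuplicateClaim),
        Some(HtlcResolution::Failed) => {
            // With debug_assertions, catch moving from a failure to a
            // claim after it's too late.
            debug_assert!(false, "Tried to fulfill an HTLC we already failed");
            Err("cannot fulfill an HTLC that was already failed")
        }
    }
}
```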
If we are a public node and have a private channel, our
counterparty needs to know the fees which we will charge to forward
payments to them. Without sending them a channel_update, they have
no way to learn that information, resulting in the channel being
effectively useless for outbound-from-us payments.
This commit fixes our lack of channel_update messages to private
channel counterparties, ensuring we always send them a
channel_update after the channel funding is confirmed.
We use `Channel::is_live()` to gate inclusion of a channel in
`ChannelManager::list_usable_channels()` and when sending an
HTLC to select whether a channel is available for
forwarding through/sending to.
In both of these cases, we should still consider a channel `is_live()`
while it is pending a monitor update. Some clients may update monitors
asynchronously, thus we may simply be waiting a short duration for a
monitor update to complete, and shouldn't fail all forwarding HTLCs
during that time.
After #851, we always ensure any holding cells are freed when
sending P2P messages, making this change much more trivially
correct - instead of having to ensure that we always free the holding
cell when a channel becomes live again after adding something to the
holding cell, we can simply rely on the fact that it always happens.
Fixes #661.
Users who get PaymentPreimages via `get_payment_secret_preimage`
need to provide the PaymentPreimage back in `claim_funds`, but they
aren't actually given the preimage anywhere.
This commit gives users the PaymentPreimage in the
`PaymentReceived` event.
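A sketch of the event shape, with fields simplified relative to the
actual `Event::PaymentReceived`:

```rust
// Simplified stand-ins for illustration.
#[derive(Debug, Clone, Copy)]
struct PaymentPreimage([u8; 32]);
#[derive(Debug, Clone, Copy)]
struct PaymentSecret([u8; 32]);

#[derive(Debug)]
enum Event {
    PaymentReceived {
        payment_hash: [u8; 32],
        amt: u64,
        /// `Some` when we generated the preimage ourselves (the
        /// payment-secret flow) and the user never otherwise sees it;
        /// they pass it back to `claim_funds` to claim the payment.
        payment_preimage: Option<PaymentPreimage>,
        payment_secret: PaymentSecret,
    },
}
```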
Like the payment_secret parameter, this parameter has been the source
of much confusion, so we just drop it.
Users should prefer to do this check when registering the payment
secret instead of at claim-time.