`ChannelId` was weirdly listed in the re-export section of the docs and
reachable via multiple paths. Here we opt to make the `channel_id`
module private and leave only the `ChannelId` struct itself exposed.
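In sketch form (module path simplified from the real crate layout),
this is the standard Rust pattern of a private module with a single
public re-export:

```rust
// lib.rs (sketch; real paths differ):
mod channel_id;                        // module is now private
pub use crate::channel_id::ChannelId;  // single public path to the type
```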
In the `chanmon_consistency` fuzz, we currently "persist" the
`ChannelManager` on each loop iteration. With the new logic in the
past few commits to reduce the frequency of `ChannelManager`
persistences, this behavior now leaves a gap in our test coverage:
we would never catch a missed persistence notification.
In order to catch (common-case) persistence misses, we update the
`chanmon_consistency` fuzzer to no longer persist the
`ChannelManager` unless the waker was woken and signaled to
persist, possibly reloading with a previous `ChannelManager` if we
were not signaled.
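A rough sketch of the new loop shape (names like `persist_flag` and
`latest_serialized` are illustrative, not the fuzzer's actual
bindings):

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// Sketch: only "persist" when the waker actually signaled persistence.
fn fuzz_iteration(
    persist_flag: &AtomicBool,       // set when the waker signals a persist
    latest_serialized: &mut Vec<u8>, // last "persisted" ChannelManager bytes
    serialize_manager: impl Fn() -> Vec<u8>,
) {
    if persist_flag.swap(false, Ordering::AcqRel) {
        *latest_serialized = serialize_manager();
    }
    // A simulated restart reloads from `latest_serialized`, which may be
    // a stale ChannelManager if the waker never fired. If the fuzzer then
    // detects an inconsistency, we missed a persistence notification.
}
```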
When reloading nodes A or C, the chanmon_consistency fuzzer
currently calls `get_and_clear_pending_msg_events` on the node,
potentially causing additional `ChannelMonitor` or `ChannelManager`
updates, just to check that no unexpected messages are generated.
There's not much reason to do so: the fuzzer could always swap in
a different command to call the same method, and the additional
checking requires some awkward monitor-persistence introspection.
Here we simplify the fuzzer by removing this logic entirely.
When we handle an inbound `channel_reestablish` from a peer, it
generally doesn't change any state and thus doesn't need a
`ChannelManager` persistence. Here we avoid said persistence where
possible.
If we receive an invalid `ChannelUpdate` message, it can
cause us to force-close the channel, which should result in a
`ChannelManager` persistence, though it's not critical to do so.
When a peer connects and we send some `channel_reestablish`
messages or create a `per_peer_state` entry, there's really no
reason to persist the `ChannelManager`. None of the
possible actions we take immediately results in a change to the
persisted contents of the `ChannelManager`; only the peer's later
`channel_reestablish` message does.
Many channel related messages don't actually change the channel
state in a way that changes the persisted channel. For example,
an `update_add_htlc` or `update_fail_htlc` message simply adds the
change to a queue, changing the channel state when we receive a
`commitment_signed` message.
In these cases there's really no reason to wake the background
processor at all - there's no response message and there's no state
update. However, note that if we close the channel we should
persist the `ChannelManager`. If we send an error message without
closing the channel, we should wake the background processor
without persisting.
Here we move to the appropriate `NotifyOption` on some of the
simpler channel message handlers.
As we now signal events-available separately from
persistence-needed, the `NotifyOption` enum should include a
separate variant for events-but-no-persistence, which we add here.
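A minimal sketch of the resulting enum, plus an illustrative handler
showing how the new variant slots in (the handler is a sketch, not
LDK's actual code):

```rust
/// Sketch of the options a message handler can return.
enum NotifyOption {
    /// Persisted state changed: wake the background processor and
    /// re-persist the ChannelManager.
    DoPersist,
    /// Nothing persisted changed, but messages or events are pending:
    /// wake the background processor without persisting.
    SkipPersistHandleEvents,
    /// Nothing changed and nothing is pending: don't wake at all.
    SkipPersistNoEvents,
}

// Illustrative: an update_add_htlc only queues the HTLC until the
// commitment_signed arrives, so there's nothing to persist and no
// response message due yet.
fn handle_update_add_htlc(/* msg: &msgs::UpdateAddHTLC */) -> NotifyOption {
    NotifyOption::SkipPersistNoEvents
}
```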
Currently, when a `ChannelManager` generates a notification for the
background processor, any pending events are handled and the
`ChannelManager` is always re-persisted.
Many channel related messages don't actually change the channel
state in a way that changes the persisted channel. For example,
an `update_add_htlc` or `update_fail_htlc` message simply adds the
change to a queue, changing the channel state when we receive a
`commitment_signed` message.
In these cases we shouldn't be re-persisting the `ChannelManager`,
as its persisted state hasn't changed at all. In anticipation of
doing so in the next few commits, here we make the public API
handle the two concepts (somewhat) separately. The notification
still goes out via a single waker; however, whether or not to
persist is now handled via a separate atomic bool.
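A minimal sketch of the split, assuming a condvar-based waker (names
simplified from the actual implementation):

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::{Condvar, Mutex};

/// Sketch: one waker for "something to do", plus a separate flag for
/// "and the ChannelManager also needs to be re-persisted".
struct EventNotifier {
    woken: Mutex<bool>,
    condvar: Condvar,
    needs_persist: AtomicBool,
}

impl EventNotifier {
    /// Wake the background processor; optionally flag that the
    /// ChannelManager must also be re-persisted.
    fn notify(&self, persist: bool) {
        if persist {
            // Set the flag before waking so the woken thread sees it.
            self.needs_persist.store(true, Ordering::Release);
        }
        *self.woken.lock().unwrap() = true;
        self.condvar.notify_all();
    }

    /// Called by the background processor after it wakes: returns
    /// whether this round also requires a re-persist.
    fn take_needs_persist(&self) -> bool {
        self.needs_persist.swap(false, Ordering::AcqRel)
    }
}
```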
Prior to any actions which may generate a `ChannelMonitorUpdate`,
and in general after startup,
`ChannelManager::process_background_events` must be called. This is
mostly accomplished by doing so on taking the
`total_consistency_lock` via the `PersistenceNotifierGuard`. In
order to skip this call in block connection logic, the
`PersistenceNotifierGuard::optionally_notify` constructor did not
call the `process_background_events` method.
However, this is very easy to misuse - `optionally_notify` does not
convey to the reader that they need to call
`process_background_events` at all.
Here we fix this by adding a separate
`optionally_notify_skipping_background_events` method, making the
requirements much clearer to callers.
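A stub-level sketch of the resulting API (types and bodies heavily
simplified; the real constructors also take a closure computing a
`NotifyOption`):

```rust
// Stubs standing in for the real lightning::ln::channelmanager types.
struct ChannelManager;
impl ChannelManager {
    fn process_background_events(&self) { /* ... */ }
}
struct PersistenceNotifierGuard<'a> { cm: &'a ChannelManager }

impl<'a> PersistenceNotifierGuard<'a> {
    /// Sketch: the common constructor now always runs
    /// process_background_events after taking the lock.
    fn optionally_notify(cm: &'a ChannelManager) -> Self {
        // ... take total_consistency_lock ...
        cm.process_background_events();
        PersistenceNotifierGuard { cm }
    }

    /// Sketch: block-connection logic must now name the skip
    /// explicitly, making the requirement obvious at call sites.
    fn optionally_notify_skipping_background_events(cm: &'a ChannelManager) -> Self {
        // ... take total_consistency_lock, but do NOT run
        // process_background_events - the caller handles that ...
        PersistenceNotifierGuard { cm }
    }
}
```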
This adds a test for monitor update actions being completed on
startup if a monitor update completed "while we were shut down"
(or, really, the manager didn't get persisted after the update
completed).
Because some of these tests require connecting blocks without
calling `get_and_clear_pending_msg_events`, we need to split up
the block connection utilities to only optionally call
sanity-checks.
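In sketch form (names illustrative, not the test framework's actual
helpers), the split separates block-feeding from sanity-checking:

```rust
// Illustrative stubs for the test framework's types.
struct Node;
struct Block;

/// The usual helper: connect a block and run the sanity checks.
fn connect_block(node: &Node, block: &Block) {
    do_connect_block(node, block, true);
}

/// For tests which connect blocks without having drained
/// get_and_clear_pending_msg_events first.
fn connect_block_without_checks(node: &Node, block: &Block) {
    do_connect_block(node, block, false);
}

fn do_connect_block(_node: &Node, _block: &Block, sanity_checks: bool) {
    // ... feed the block to the node's chain listeners ...
    if sanity_checks {
        // e.g. assert no unexpected message events were generated
    }
}
```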
`expect_payment_forwarded` takes a bool to indicate that the
inbound channel on which we received a forwarded payment has been
closed, but then ignores it in favor of looking at the fee in the
event. While this is generally correct, it is incorrect when we
process an event after the channel was closed which was generated
before the channel closed.
Instead, we examine the bool we were already passed and use that.
When we forward a payment and receive an `update_fulfill_htlc`
message from the downstream channel, we immediately claim the HTLC
on the upstream channel, before even doing a `commitment_signed`
dance on the downstream channel. This implies that our
`ChannelMonitorUpdate`s "go out" in the right order - first we
ensure we'll get our money by writing the preimage down, then we
write the update which resolves the HTLC, giving the money away on
the downstream channel.
This is safe as long as `ChannelMonitorUpdate`s complete in the
order in which they are generated, but of course looking forward we
want to support asynchronous updates, which may complete in any
order.
Thus, here, we enforce the correct ordering by blocking the
downstream `ChannelMonitorUpdate` until the upstream one completes.
Like the `PaymentSent` event handling, we do so only for the
`revoke_and_ack` `ChannelMonitorUpdate`, ensuring the
preimage-containing upstream update has a full RTT to complete
before we actually manage to slow anything down.
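A simplified sketch of the blocking scheme (data structures are
illustrative, not LDK's actual ones):

```rust
use std::collections::VecDeque;

/// Sketch: a queued monitor update which may be held back until an
/// update on another channel completes.
struct PendingUpdate {
    update: u64, // stand-in for a real ChannelMonitorUpdate
    /// If set, don't hand this update to the persister until the
    /// named (channel, update_id) pair completes - e.g. hold the
    /// downstream RAA update until the upstream preimage update is
    /// durably written.
    blocked_on: Option<([u8; 32], u64)>,
}

struct UpdateQueue { pending: VecDeque<PendingUpdate> }

impl UpdateQueue {
    /// Called when (chan, id) completes; returns newly-unblocked
    /// updates, in order, for the persister.
    fn update_completed(&mut self, chan: [u8; 32], id: u64) -> Vec<PendingUpdate> {
        for upd in self.pending.iter_mut() {
            if upd.blocked_on == Some((chan, id)) {
                upd.blocked_on = None;
            }
        }
        let mut ready = Vec::new();
        while self.pending.front().map_or(false, |u| u.blocked_on.is_none()) {
            ready.push(self.pending.pop_front().unwrap());
        }
        ready
    }
}
```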
When we need to rebroadcast a `commitment_signed` on reconnect in
response to a previous update (i.e. one which doesn't itself contain
any new updates), we previously hacked in support for it by passing a
`-1` for the number of expected `update_add_htlc`s. This is a mess, and
with the introduction of `ReconnectArgs` we can now clean it up
easily with a new bool.
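Sketched out (field names are illustrative, not necessarily the test
framework's exact ones), the magic value becomes an explicit flag:

```rust
// Before (sketch): -1 was overloaded to mean "expect a bare
// commitment_signed retransmission carrying no update_add_htlcs".
//   reconnect_nodes(..., /* pending_htlc_adds: */ (-1, 0), ...);

// After (sketch): ReconnectArgs carries an explicit bool instead.
struct ReconnectArgs {
    /// How many update_add_htlcs each node should retransmit.
    pending_htlc_adds: (usize, usize),
    /// Whether each node should retransmit a commitment_signed in
    /// response to a previous update, replacing the -1 hack above.
    pending_responding_commitment_signed: (bool, bool),
    // ... other reconnect expectations elided ...
}
```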
Long ago, for reasons lost to the ages, the
`PersistenceNotifierGuard::optionally_notify` constructor didn't
take a `ChannelManager` reference, but rather explicit references
to the fields of it that it needs.
This is cumbersome and useless, so we fix it here.
In the next commit, we separate `ChannelManager`'s concept of
waking a listener to both be persisted and to allow the user to
handle events. Here we rename the future-fetching method in
anticipation of this split.
This can happen due to races between the client's call to
`block_connected` and the addition of a newly-created
`ChannelMonitor` to the `ChainMonitor` via `watch_channel` when
handling `funding_created`.