The fields provided by `DataLossProtect` have been mandatory since
https://github.com/lightning/bolts/pull/754/commits/6656b70, regardless
of whether the `option_data_loss_protect` or `option_static_remotekey`
feature bits are set.
We move the fields out of `DataLossProtect` to make encoding definitions
more succinct with `impl_writeable_msg!` and to reduce boilerplate.
This paves the way for completely removing `OptionalField` in subsequent
commits.
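
For illustration, a rough sketch of the shape of this change; the type and
field names below follow BOLT 2 and the prose above, not the exact LDK
definitions:

```rust
// Rough before/after sketch. `Secret`/`Point` are stand-in aliases and the
// surrounding message fields are elided; this is not the real LDK code.
type Secret = [u8; 32];
type Point = [u8; 33];

// Before: the fields lived in a separate struct behind an optional wrapper.
struct DataLossProtect {
    your_last_per_commitment_secret: Secret,
    my_current_per_commitment_point: Point,
}
struct ChannelReestablishBefore {
    // ... other `channel_reestablish` fields ...
    data_loss_protect: Option<DataLossProtect>,
}

// After: the fields are inlined into the message itself, since they are always
// present on the wire, which lets the encoding be written more succinctly.
struct ChannelReestablishAfter {
    // ... other `channel_reestablish` fields ...
    your_last_per_commitment_secret: Secret,
    my_current_per_commitment_point: Point,
}
```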
`PeerManager` takes a `MessageHandler` struct which contains all
the known message handlers for it to pass messages to. It then,
separately, takes a `CustomMessageHandler`. This makes no sense; we
should simply include the `CustomMessageHandler` in the
`MessageHandler` struct for consistency.
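
A minimal sketch of the resulting shape, assuming simplified generics and
field names rather than the exact LDK definition:

```rust
// Illustrative only: the bounds and exact field names of the real
// `MessageHandler` are simplified here.
pub struct MessageHandler<CM, RM, OM, CustomM> {
    pub chan_handler: CM,                // channel message handler
    pub route_handler: RM,               // gossip/routing message handler
    pub onion_message_handler: OM,       // onion message handler
    pub custom_message_handler: CustomM, // previously passed to `PeerManager` separately
}
```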
If we have more than
127 / `MAX_BUFFER_DRAIN_TICK_INTERVALS_PER_PEER` (31) peers,
`awaiting_pong_timer_tick_intervals` can overflow before we hit
the limit. This isn't super harmful; we'll still disconnect peers
as long as they don't send *any* messages between two pings, but it
does cause us to not disconnect peers which are extremely slow in
responding to messages, e.g. because they are overloaded.
PaymentParameters already includes this value.
This sets us up to better support route blinding, since there is no
known final_cltv_expiry_delta when paying to a blinded route.
Currently `BackgroundProcessor` tests create persister directories in the
current working directory and rely on cleaning up in a `Drop` implementation.
Unfortunately, it seems that in the async tests the nodes are not
`drop()`ed for some reason and so the directories created by those
tests remain behind in the current working directory.
This commit at least ensures that these test directories are created
in the OS's temporary directory using `temp_dir()`. It doesn't aim to
solve the lack of cleanup in the async tests.
This is a partial fix for #2224, but I believe it's enough to resolve
it: any temp directories that do remain will be purged by the OS at
some stage and are overwritten by subsequent tests if there is a
conflict.
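
A sketch of the approach, with a hypothetical helper and directory prefix
rather than the exact test code:

```rust
// Build each test's persister path under the OS temp dir instead of the
// current working directory. Helper name and prefix are illustrative.
use std::env::temp_dir;
use std::path::PathBuf;

fn test_persist_dir(test_name: &str) -> PathBuf {
    let mut dir = temp_dir();
    dir.push(format!("ldk_bg_processor_{}", test_name));
    dir
}
```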
If a `Notifier` has an internal `FutureState` which gathers some
sleeper callbacks, but is never actually woken, those callbacks
will leak due to a circular `Arc` reference when the `Notifier` is
`drop`'d.
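
A generic illustration of that kind of cycle (stand-in types, not the actual
`Notifier`/`FutureState` internals):

```rust
use std::sync::{Arc, Mutex};

struct Callback { state: Mutex<Option<Arc<State>>> }
struct State { callbacks: Mutex<Vec<Arc<Callback>>> }

fn main() {
    let state = Arc::new(State { callbacks: Mutex::new(Vec::new()) });
    // The callback holds the state, and the state holds the callback.
    let cb = Arc::new(Callback { state: Mutex::new(Some(Arc::clone(&state))) });
    state.callbacks.lock().unwrap().push(cb);
    // Dropping our handle doesn't free anything: each `Arc`'s refcount stays
    // above zero because of the cycle, so both allocations leak.
    drop(state);
}
```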
Because `Notifier`s are rarely `drop`'d in production, this isn't a
huge deal, but shows up materially in bindings tests as they spawn
many nodes over the course of a short test.
Fixes #2232
Previously, we were requiring any `UPDATE` onion errors to include
a `channel_update`, as the spec mandates[1]. If we saw an onion
error which was missing one, we treated the sender as a misbehaving,
spec-violating node and simply removed it from our graph.
Sadly, it appears at least some versions of CLN are such nodes, and
opt to not include `channel_update` at all if they're returning a
`temporary_channel_failure`. This causes us to completely remove
CLN nodes from our graph after they fail to forward our HTLC.
While CLN is violating the spec here, there's not a lot of reason
not to accept such errors, so we now do, treating them simply as any
other failure and letting the scorer handle it.
[1] The spec says `Please note that the channel_update field is
mandatory in messages whose failure_code includes the UPDATE flag`
however it doesn't repeat this in the requirements section, so it's
not crazy that someone missed it when implementing.
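
As a sketch of the behavior change (illustrative only; the flag value is from
BOLT 4, and the handling shown is not the actual LDK code):

```rust
const UPDATE: u16 = 0x1000; // BOLT 4 `UPDATE` failure-code flag

fn handle_onion_failure(failure_code: u16, channel_update: Option<&[u8]>) {
    if failure_code & UPDATE != 0 && channel_update.is_none() {
        // Old behavior: treat the sender as spec-violating and remove it from
        // our network graph.
        // New behavior: do nothing special here; fall through and let the
        // scorer penalize the failing hop like any other payment failure.
    }
    // ... common failure handling / scoring ...
}
```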
In a synchronous `BackgroundProcessor`, exit is requested by setting
an atomic flag, which is most likely to happen while we're asleep.
Thus, we previously checked for the exit condition after the sleep
(and after we persisted the `ChannelManager`, if needed, though
that ordering is no longer required and dates back to when we didn't
re-persist after breaking out of the main loop).
For an async `background-processor`, this is also fine; however,
because of the relatively longer sleep time, if the exit flag is
set via a sleep check returning true during event processing, we
may end up delaying exit rather substantially.
In order to avoid this, we simply check for the exit condition both
before and immediately after the sleep in `background-processor`.
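
A minimal, self-contained sketch of that pattern, with an illustrative flag,
interval, and loop body rather than the actual `background-processor`
internals:

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::thread;
use std::time::Duration;

fn run_loop(exit_flag: Arc<AtomicBool>) {
    loop {
        // ... process events, persist if needed ...
        if exit_flag.load(Ordering::Acquire) { break; } // added: exit promptly after event processing
        thread::sleep(Duration::from_millis(100));
        if exit_flag.load(Ordering::Acquire) { break; } // existing: check after the sleep
    }
}

fn main() {
    let exit_flag = Arc::new(AtomicBool::new(false));
    let flag = Arc::clone(&exit_flag);
    let handle = thread::spawn(move || run_loop(flag));
    exit_flag.store(true, Ordering::Release); // request exit
    handle.join().unwrap();
}
```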
While these transactions were still valid, we incorrectly assumed that
they would propagate with a locktime of `current_height + 1`, when in
reality, only those with a locktime strictly lower than the next height
in the chain are allowed to enter the mempool.
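
A sketch of that rule for height-based locktimes:

```rust
// A transaction with a height-based locktime can only enter the mempool if the
// locktime is strictly lower than the next block height, i.e. at most the
// current height. (Time-based locktimes follow a different rule.)
fn locktime_is_final(tx_locktime: u32, current_height: u32) -> bool {
    tx_locktime < current_height + 1
}
```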
In a future commit, we plan to correctly enforce that the spending
transaction has a valid locktime relative to the chain for the node
broadcasting it in `TestBroadcaster::broadcast_transaction`. We catch
up these test node instances to their expected height, such that we do
not fail said enforcement.
The `height` argument passed to `OnchainTxHandler::block_disconnected`
represents the height being disconnected, and not the current height.
Due to this incorrect assumption, we'd generate a claim with a locktime
in the future.
Ultimately, we shouldn't be generating claims within
`block_disconnected`. Rather, we should retry the claim at a later block
height, since the bitcoin blockchain does not ever roll back without
connecting a new block. Addressing this is left for future work.
This attempts to rebroadcast/fee-bump each pending claim a monitor is
tracking for a force-closed channel. This is crucial in preventing
certain classes of pinning attacks and ensures reliability if an
earlier broadcast failed. For implementations of `FeeEstimator` that also
support mempool fee estimation, we may broadcast a fee-bumped claim
instead, ensuring we can also react to mempool fee spikes between
blocks.