Currently `BackgroundProcessor` tests create persister directories in the
current working directory and rely on cleaning up in a `Drop` implementation.
Unfortunately, it seems that in the async tests the nodes are not
`drop()`ed for some reason, so the directories created by those tests
remain behind in the current working directory.
This commit at least ensures that these test directories are created
under the OS's temporary directory via `temp_dir()`. It doesn't aim to
solve the lack of cleanup in the async tests.
This is only a partial fix for #2224, but I believe it's enough to
resolve it: any temp directories that do remain will eventually be
purged by the OS, and subsequent tests will overwrite them if the names
conflict.
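As an illustration of the approach, here is a minimal sketch of rooting a test persister directory in the OS temp dir instead of the current working directory (the helper `persist_dir_for_test` is hypothetical, not the actual test code):

```rust
use std::env::temp_dir;
use std::path::PathBuf;

// Hypothetical helper: build a per-test persister directory under the OS
// temp dir instead of the current working directory.
fn persist_dir_for_test(test_name: &str) -> PathBuf {
	let mut dir = temp_dir();
	dir.push(format!("bgp_test_{}", test_name));
	dir
}

fn main() {
	let dir = persist_dir_for_test("test_background_processor");
	std::fs::create_dir_all(&dir).expect("failed to create test dir");
	// Even if the test never cleans up, `dir` lives under temp_dir() and
	// will eventually be purged by the OS.
}
```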
If a `Notifier` has an internal `FutureState` which gathers some
sleeper callbacks, but is never actually woken, those callbacks
will leak due to a circular `Arc` reference when the `Notifier` is
`drop`'d.
Because `Notifier`s are rarely `drop`'d in production this isn't a
huge deal, but it shows up materially in bindings tests, as they spawn
many nodes over the course of a short test.
Fixes #2232
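As a generic illustration (not LDK's actual `Notifier`/`FutureState` types), the sketch below shows how a strong self-referential `Arc` keeps an allocation alive after its owner is dropped, and how downgrading that edge to `Weak` breaks the cycle:

```rust
use std::sync::{Arc, Mutex, Weak};

struct State {
	// A strong `Arc` back to ourselves (e.g. captured by a stored callback)
	// forms a cycle: the strong count can never reach zero.
	strong_self: Mutex<Option<Arc<State>>>,
	// Holding only a `Weak` reference instead breaks the cycle.
	weak_self: Mutex<Option<Weak<State>>>,
}

fn main() {
	let state = Arc::new(State {
		strong_self: Mutex::new(None),
		weak_self: Mutex::new(None),
	});
	// Leaky version: the allocation now owns a strong reference to itself,
	// so dropping our handle below would leak it.
	*state.strong_self.lock().unwrap() = Some(Arc::clone(&state));
	// Fixed version: drop the strong self-reference, keep only a weak one.
	*state.strong_self.lock().unwrap() = None;
	*state.weak_self.lock().unwrap() = Some(Arc::downgrade(&state));
	drop(state); // only the weak edge remains, so the allocation is freed
}
```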
Previously, we required any `UPDATE` onion error to include a
`channel_update`, as the spec mandates[1]. If we saw an onion error
missing one, we treated the sender as a node misbehaving relative to
the spec and simply removed it from our network graph.
Sadly, it appears at least some versions of CLN are such nodes, and
opt to not include `channel_update` at all if they're returning a
`temporary_channel_failure`. This causes us to completely remove
CLN nodes from our graph after they fail to forward our HTLC.
While CLN is violating the spec here, there's little reason not to
tolerate this, so we now do, treating the error simply as any other
failure and letting the scorer handle it.
[1] The spec says `Please note that the channel_update field is
mandatory in messages whose failure_code includes the UPDATE flag`, but
it doesn't repeat this in the requirements section, so it's not crazy
that someone missed it when implementing.
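A simplified sketch of the behavior change (the function and names below are illustrative, not LDK's actual failure-handling code):

```rust
// The UPDATE flag bit in BOLT 4 onion failure codes.
const UPDATE: u16 = 0x1000;

/// Illustrative handling of an onion failure whose code carries the UPDATE
/// flag but which arrived without the mandated channel_update.
fn handle_onion_failure(failure_code: u16, channel_update: Option<&[u8]>) {
	if failure_code & UPDATE != 0 && channel_update.is_none() {
		// Previously: treat the origin as misbehaving and remove it from
		// the network graph entirely.
		// Now: tolerate the omission (some CLN versions leave the update
		// out on temporary_channel_failure) and fall through to the normal
		// path, letting the scorer penalize the failing channel instead.
	}
	// ... normal failure handling / scorer update ...
}
```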
In a synchronous `BackgroundProcessor`, exit is requested by setting
an atomic flag, which most likely happens while we're asleep.
Thus, we previously checked for the exit condition after the sleep
(and after we persisted the `ChannelManager`, if required, though
this is no longer required and dates back to when we didn't do a
re-persist after breaking out of the main loop).
For an async `background-processor` this is also fine. However, given
the relatively longer sleep time, if the exit flag is set (via a sleep
check returning true) during event processing, we may end up delaying
exit rather substantially.
In order to avoid this, we simply check for the exit condition both
before and immediately after the sleep in `background-processor`.
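A minimal sketch of the resulting loop shape (the `should_exit` flag and sleep duration are illustrative, not the actual `background-processor` code):

```rust
use std::sync::Arc;
use std::sync::atomic::{AtomicBool, Ordering};
use std::time::Duration;

fn run_background_loop(should_exit: Arc<AtomicBool>) {
	loop {
		// ... process events, persist the ChannelManager if needed ...

		// Check before sleeping: the flag may have been set while we were
		// busy with (potentially long) event processing.
		if should_exit.load(Ordering::Acquire) { break; }

		std::thread::sleep(Duration::from_millis(100));

		// Check again immediately after the sleep, since setting the flag
		// most likely happened while we were asleep.
		if should_exit.load(Ordering::Acquire) { break; }
	}
	// ... final re-persist on exit ...
}
```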
While these transactions were still valid, we incorrectly assumed that
they would propagate with a locktime of `current_height + 1`, when in
reality, only those with a locktime strictly lower than the next height
in the chain are allowed to enter the mempool.
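Put differently, a height-based locktime is mempool-valid only if the transaction could be included in the very next block; a minimal sketch of that check (ignoring time-based locktimes above 500,000,000):

```rust
/// A height-based nLockTime is final for the next block only if it is
/// strictly lower than that block's height, i.e. at most the current tip.
fn is_final_for_next_block(locktime: u32, current_height: u32) -> bool {
	locktime < current_height + 1 // equivalently: locktime <= current_height
}
```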
In a future commit, we plan to correctly enforce in
`TestBroadcaster::broadcast_transaction` that the spending transaction
has a valid locktime relative to the chain of the node broadcasting it.
Here we catch up these test node instances to their expected height, so
that we do not fail said enforcement.
The `height` argument passed to `OnchainTxHandler::block_disconnected`
represents the height being disconnected, not the current height.
Because we incorrectly assumed it was the current height, we'd generate
a claim with a locktime in the future.
Ultimately, we shouldn't be generating claims within
`block_disconnected`. Rather, we should retry the claim at a later block
height, since the bitcoin blockchain does not ever roll back without
connecting a new block. Addressing this is left for future work.
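To make the off-by-one concrete (illustrative helper, not the actual handler code):

```rust
/// Highest height-based locktime that remains broadcastable after
/// disconnecting the block at `disconnected_height`.
fn max_final_locktime_after_disconnect(disconnected_height: u32) -> u32 {
	// After the disconnect the chain tip is `disconnected_height - 1`, so a
	// claim may use a locktime of at most that new tip height. Mistaking
	// `disconnected_height` for the current height and computing
	// `current_height + 1` instead yields a locktime in the future.
	disconnected_height - 1
}
```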
This attempts to rebroadcast/fee-bump each pending claim a monitor is
tracking for a force-closed channel. This is crucial in preventing
certain classes of pinning attacks and ensures reliability if
broadcasting fails. For implementations of `FeeEstimator` that also
support mempool fee estimation, we may broadcast a fee-bumped claim
instead, ensuring we can also react to mempool fee spikes between
blocks.
In the next commit, we plan to extend the `OnchainTxHandler` to retry
pending claims on a timer. This timer may fire much more frequently
than incoming blocks, so we want to avoid manually bumping feerates
(currently by 25%) each time our fee estimator provides a lower feerate
than before.
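For illustration, such a retry could be driven from a periodic task roughly like the sketch below (the `RebroadcastsClaims` trait and `rebroadcast_pending_claims` method are stand-ins for whatever entry point the monitor ends up exposing):

```rust
use std::time::Duration;

// Hypothetical stand-in for the monitor's rebroadcast entry point.
trait RebroadcastsClaims {
	fn rebroadcast_pending_claims(&self);
}

// Periodically re-attempt broadcasting (and, where the fee estimator
// supports mempool estimates, fee-bumping) every pending claim tracked
// for force-closed channels.
fn drive_rebroadcasts<M: RebroadcastsClaims>(monitor: &M, interval: Duration) {
	loop {
		monitor.rebroadcast_pending_claims();
		std::thread::sleep(interval);
	}
}
```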