When generating a `channel_update`, either in response to a fee
configuration change or an HTLC failure, we currently poll the
channel to check whether the peer is connected when setting the
disabled bit in the `channel_update`. This can lead to cases where
we set the disabled bit even though the peer *just* disconnected,
without ever generating a follow-up broadcast `channel_update` with
the disabled bit unset.
While a node generally shouldn't rebroadcast a `channel_update` it
received in an onion, there's nothing inherently stopping it from
doing so. Obviously, in the fee-update case we expect the message to
propagate.
Luckily, since we already "stage" disable-changed updates, we can
check the staged state and use that to set the disabled bit in all
`channel_update` cases.
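A minimal sketch of the idea, using a simplified, hypothetical enum
rather than LDK's actual internals: the staged states record that a
disable/enable change is pending but hasn't been broadcast yet, so the
disabled bit placed in any `channel_update` should match what we last
advertised, not the peer's instantaneous connectivity.

```rust
// Hypothetical, simplified status enum for illustration only.
#[derive(Clone, Copy)]
enum ChannelUpdateStatus {
    Enabled,        // advertised enabled
    DisabledStaged, // advertised enabled, disable pending broadcast
    EnabledStaged,  // advertised disabled, enable pending broadcast
    Disabled,       // advertised disabled
}

// Derive the disabled bit from the staged state rather than polling
// whether the peer happens to be connected right now.
fn channel_update_disabled_bit(status: ChannelUpdateStatus) -> bool {
    match status {
        ChannelUpdateStatus::Enabled | ChannelUpdateStatus::DisabledStaged => false,
        ChannelUpdateStatus::Disabled | ChannelUpdateStatus::EnabledStaged => true,
    }
}
```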
At times, PRs can go through multiple pushes in a short amount of time,
spawning a workflow run for each. Most of the time, there's no need to
keep the previous jobs running if the code itself has changed (e.g., via
a force push), and we'd benefit from having those slots used by other
PRs/branches instead.
u16 arrays are used in the historical liquidity range tracker.
Previously, we read them without applying the stride multiplier,
repeatedly reading overlapping bytes at the wrong offsets and
corrupting the data as we went.
This applies the correct stride multiplier, fixing the issue.
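An illustrative sketch of the correct read, using a hypothetical helper
rather than the actual scorer serialization code: each u16 occupies two
bytes, so element `i` must be read at byte offset `i * 2`.

```rust
// Hypothetical helper, not the real deserialization code: read N
// big-endian u16 values from a byte slice. Element `i` lives at byte
// offset `i * 2`; using an offset of `i` (stride 1) would re-read
// overlapping bytes and corrupt every element after the first.
fn read_u16_array<const N: usize>(bytes: &[u8]) -> Option<[u16; N]> {
    if bytes.len() < N * 2 {
        return None;
    }
    let mut out = [0u16; N];
    for i in 0..N {
        let off = i * 2; // the stride multiplier that was previously missing
        out[i] = u16::from_be_bytes([bytes[off], bytes[off + 1]]);
    }
    Some(out)
}
```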
In our test utilities, we generally refer to a `Node` struct which
holds a `ChannelManager` and a number of other structs. However, we
use the same utilities in benchmarking, where we have a different
`Node`-like struct. This made it impossible to move from macros to
functions, as we'd end up needing multiple concrete types in a given
context.
Thus, here, we take the pain and introduce some wrapper traits
which encapsulate what we need from `Node`, swapping some of our
macros out for functions.
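A rough sketch of the shape of this, with hypothetical stand-in types
(the real test and benchmark structs hold many more fields): a small
trait exposes just what the shared helpers need, so a single function
can accept both `Node` flavors where a macro was previously required.

```rust
// Stand-in types for illustration only; not the real test/bench structs.
struct ChannelManager;
struct TestNode { manager: ChannelManager /* plus logger, router, ... */ }
struct BenchNode { manager: ChannelManager }

// Wrapper trait encapsulating what the shared utilities need from a `Node`.
trait NodeHolder {
    fn channel_manager(&self) -> &ChannelManager;
}

impl NodeHolder for TestNode {
    fn channel_manager(&self) -> &ChannelManager { &self.manager }
}
impl NodeHolder for BenchNode {
    fn channel_manager(&self) -> &ChannelManager { &self.manager }
}

// Formerly a macro; now a plain function generic over anything `Node`-like.
fn do_something_with_node<N: NodeHolder>(node: &N) {
    let _cm = node.channel_manager();
    // ... shared test/bench logic ...
}
```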
In 6090d9e6a8 we swapped out old debug assertions that checked that
a lock was `try_lock`able for ones that test that certain locks
aren't held when we need to be able to take them in a nearby branch.
However, another such assertion slipped in later, during the
`ChannelMonitorUpdate` handling rework; it is replaced with the new
debug assertions here.
This fixes two potential panics in the test that could occur if the
`BackgroundProcessor` for `nodes[0]` consumed the `ChannelPending` event
before we consumed it manually in `end_open_channel`. The first panic
would happen within the event handler, since `ChannelPending` was not
being handled. The second panic would happen upon expecting the
`ChannelPending` event after handling `nodes[1]`'s `funding_signed`, if
the `BackgroundProcessor` handled the event first. To ensure we still
reliably observe the `ChannelPending` event once it's possible, we let
the `BackgroundProcessor` consume the event and notify us when it has.
Now that we guarantee `claim_payment` will always succeed, we have
to let the user know what the deadline for claiming is. We still fail
payments that haven't been claimed in time, so we now expose that
deadline in `PaymentClaimable`.
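A simplified sketch of how a consumer might use this, with a
hypothetical, stripped-down event shape (the real `PaymentClaimable`
event carries many more fields): the deadline is a block height by
which the payment must be claimed before it is failed back.

```rust
// Hypothetical, stripped-down event for illustration; not LDK's actual type.
struct PaymentClaimable {
    amount_msat: u64,
    claim_deadline: Option<u32>, // block height by which we must claim
}

fn handle_claimable(event: &PaymentClaimable, current_height: u32) {
    match event.claim_deadline {
        Some(deadline) if current_height >= deadline => {
            // Too late: the payment will be (or has already been) failed back.
        }
        _ => {
            // Still within the deadline (or none applies), so claim now,
            // e.g. by invoking the claim-payment API with the preimage.
        }
    }
}
```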
There's no reason to hold a lock on `per_peer_state` while we're
claiming from a since-closed channel via a `ChannelMonitorUpdate`,
which we stop doing here.
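A generic sketch of the locking pattern, with hypothetical types
standing in for the real ones: copy what's needed out of
`per_peer_state`, drop the guard, and only then do the
`ChannelMonitorUpdate` work.

```rust
use std::collections::HashMap;
use std::sync::Mutex;

// Hypothetical simplification: `per_peer_state` maps a counterparty node id
// to some per-peer data we need to look up before claiming.
fn claim_from_closed_channel(
    per_peer_state: &Mutex<HashMap<[u8; 33], u64>>,
    counterparty_node_id: [u8; 33],
) {
    let peer_data = {
        let peers = per_peer_state.lock().unwrap();
        peers.get(&counterparty_node_id).copied()
    }; // guard dropped here, before the monitor-update work

    if let Some(_data) = peer_data {
        // ... build and apply the `ChannelMonitorUpdate` without holding the
        // `per_peer_state` lock ...
    }
}
```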