Each message handler advertises which features it supports, and a custom
message handler may support features unknown to LDK. Therefore, a peer's
features should be checked against the handlers' combined features
rather than against the features LDK knows about. Additionally, fail the
connection if the peer requires features unknown to the handlers; in the
converse case, where we require features the peer doesn't understand,
the peer should already fail the connection itself.
When checking features, rather than checking against which features LDK
knows about, it is more useful to check against a given peer's features.
Add `Features::requires_unknown_bits_from` so that the given features
are used as the point of comparison instead.
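As a sketch of the intended use (only `requires_unknown_bits_from` comes
from this commit; the surrounding function is hypothetical plumbing):

```rust
use lightning::ln::features::InitFeatures;

// Hypothetical connection-setup check: compare the peer's required bits
// against the features our handlers actually provide, rather than
// against the set of features LDK itself knows about.
fn check_peer_features(peer: &InitFeatures, ours: &InitFeatures) -> Result<(), ()> {
    // Fail the connection if the peer requires bits unknown to our handlers.
    if peer.requires_unknown_bits_from(ours) {
        return Err(());
    }
    Ok(())
}
```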
CustomMessageHandler implementations may need to advertise support for
features. Add methods to CustomMessageHandler to provide these and
combine them with features from other message handlers.
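Roughly, the new trait surface looks like this (a sketch; in the actual
commit the methods live on `CustomMessageHandler` itself, and exact
signatures may differ):

```rust
use bitcoin::secp256k1::PublicKey;
use lightning::ln::features::{InitFeatures, NodeFeatures};

// Sketch: a custom handler reports the features it provides, and the
// `PeerManager` ORs them with those of the chan/route/onion message
// handlers when building `Init` and node announcement messages.
pub trait CustomMessageFeatureProvider {
    /// Feature bits to advertise in node announcements.
    fn provided_node_features(&self) -> NodeFeatures;
    /// Feature bits to advertise in the `Init` message sent to this peer.
    fn provided_init_features(&self, their_node_id: &PublicKey) -> InitFeatures;
}
```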
The `lightning-custom-message` crate will need access to `Features::or`
in order to combine the features of a composite handler. Expose this via
a `core::ops::BitOr` implementation.
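In miniature, the pattern looks like the following self-contained toy
(in LDK the impl would live in the features module, on
`Features<T: sealed::Context>`):

```rust
use core::ops::BitOr;

// Toy stand-in for LDK's `Features`: `or` unions the feature bit flags.
#[derive(Clone, Debug, PartialEq)]
struct Features { flags: Vec<u8> }

impl Features {
    fn or(mut self, o: Self) -> Self {
        let bytes = core::cmp::max(self.flags.len(), o.flags.len());
        self.flags.resize(bytes, 0);
        for (i, byte) in o.flags.iter().enumerate() {
            self.flags[i] |= byte;
        }
        self
    }
}

// Exposing `or` through the standard operator lets downstream crates
// (like `lightning-custom-message`) combine features without `or`
// itself needing to be public API.
impl BitOr for Features {
    type Output = Self;
    fn bitor(self, o: Self) -> Self { self.or(o) }
}

fn main() {
    let a = Features { flags: vec![0b01] };
    let b = Features { flags: vec![0b10] };
    assert_eq!((a | b).flags, vec![0b11]);
}
```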
It's a bit confusing when we see only "Peer sent a garbage
channel_reestablish" when a peer uses lnd's SCB (static channel backup)
feature to ask us to broadcast the latest state. This updates the error
message to be a bit clearer.
A while back, in tests, we added an `AChannelManager` trait, which
is implemented for all `ChannelManager`s and can be used as a
bound when we need a `ChannelManager`, rather than having to
duplicate all of `ChannelManager`'s bounds everywhere.
Here we do the same thing for `PeerManager`, but make it public and
use it to clean up `lightning-net-tokio` and
`lightning-background-processor`.
We should likely do the same for `AChannelManager`, but that's left
as a followup.
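The pattern, reduced to a self-contained toy (the real `APeerManager`
has one associated type per `PeerManager` generic; everything here is
illustrative):

```rust
// A `PeerManager` stand-in with a couple of generic parameters; the
// real type has many more, which is what makes the bounds painful.
pub struct PeerManager<Descriptor, CMH> {
    pub descriptor: Descriptor,
    pub custom_handler: CMH,
}

// The "A<Type>" pattern: a trait with a blanket impl for all
// `PeerManager`s, so callers write a single bound instead of repeating
// every generic parameter at each use site.
pub trait APeerManager {
    type Descriptor;
    type CMH;
    fn as_ref(&self) -> &PeerManager<Self::Descriptor, Self::CMH>;
}

impl<Descriptor, CMH> APeerManager for PeerManager<Descriptor, CMH> {
    type Descriptor = Descriptor;
    type CMH = CMH;
    fn as_ref(&self) -> &PeerManager<Descriptor, CMH> { self }
}

// A consumer like `lightning-net-tokio` can now take `PM: APeerManager`.
pub fn process_events<PM: APeerManager>(pm: &PM) {
    let _pm = pm.as_ref();
    // ...
}
```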
The previous commits set up the ability for us to hold
`ChannelMonitorUpdate`s which are pending until we're ready to pass
them to users and have them be applied. However, if the
`ChannelManager` is persisted while we're waiting to give the user
a `ChannelMonitorUpdate` we'll be confused on restart - seeing our
latest `ChannelMonitor` state as stale compared to our
`ChannelManager` - a critical error.
Luckily, the solution is trivial: we simply need to store the
pending `ChannelMonitorUpdate` state and load it with the
`ChannelManager` data, allowing stale monitors on load as long as
we have the missing pending updates between where the monitor is and the
latest `ChannelMonitor` state.
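Conceptually, the check at load time becomes something like this
(hypothetical names; not LDK's actual deserialization code):

```rust
// Sketch: a stale `ChannelMonitor` is acceptable at load time as long
// as the `ChannelManager` data carries every update needed to catch
// the monitor up to the channel's latest state.
fn monitor_is_usable(
    monitor_latest_update_id: u64, // last update the ChannelMonitor saw
    channel_latest_update_id: u64, // last update the Channel generated
    pending_update_ids: &[u64],    // pending updates stored with the manager
) -> bool {
    (monitor_latest_update_id + 1..=channel_latest_update_id)
        .all(|id| pending_update_ids.contains(&id))
}

fn main() {
    // Monitor at update 5, channel at 7, updates 6 and 7 stored pending:
    // previously a fatal "stale monitor" error, now recoverable.
    assert!(monitor_is_usable(5, 7, &[6, 7]));
    assert!(!monitor_is_usable(5, 7, &[7])); // a gap is still fatal
}
```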
This adds handling of the new `EventCompletionAction`s after
`Event`s are handled, letting `ChannelMonitorUpdate`s which were
blocked fly after a relevant `Event`.
This will allow us to block `ChannelMonitorUpdate`s on `Event`
processing in the next commit.
Note that this gets dangerously close to breaking forwards
compatibility: if we have an `Event` with an `EventCompletionAction`
tied to it, we persist a new, even TLV in the `ChannelManager`.
Hopefully this should be uncommon, as it implies an `Event` was delayed
until after a full round-trip to a peer.
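In outline, the new flow looks like this (types and names here are
stand-ins, not LDK's API):

```rust
// Stand-in for the action persisted alongside an `Event`.
enum CompletionAction {
    ReleaseMonitorUpdates { channel_id: u64 },
}

fn process_pending_events(
    pending: Vec<(String, Option<CompletionAction>)>, // (event, action) pairs
    handle_event: impl Fn(&str),
    unblock_monitor_updates: impl Fn(u64),
) {
    for (event, action) in pending {
        // The user handles the event first...
        handle_event(&event);
        // ...and only then does its completion action run, letting any
        // `ChannelMonitorUpdate`s blocked on this event fly.
        if let Some(CompletionAction::ReleaseMonitorUpdates { channel_id }) = action {
            unblock_monitor_updates(channel_id);
        }
    }
}
```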
In the coming commits, we need to delay `ChannelMonitorUpdate`s
until certain future actions complete (specifically, `Event`
handling). However, because we should only notify users of a given
`ChannelMonitorUpdate` once, and updates must be provided in-order, we
need to track which ones have or have not been given to users and, once
updating resumes, fly the ones that haven't already made it to
users.
To do this we simply add a `bool` to each `ChannelMonitorUpdate` in the
set stored in the `Channel`, indicating whether that update has flown,
and decline to provide new updates back to the `ChannelManager` while
any update still has its flown bit unset.
Further, because we'll now be releasing `ChannelMonitorUpdate`s
which were already stored in the pending list, we need to support
receiving a `Completed` result for a monitor update which isn't the
only one pending (or which even arrives out of order), so we also
rewrite the way monitor updates are marked completed.
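A condensed sketch of that bookkeeping (names hypothetical):

```rust
// Each queued update records whether it has been handed to the user yet.
struct PendingUpdate {
    update_id: u64,
    flown: bool,
}

// A new update may only fly if nothing earlier is still blocked;
// otherwise updates would reach the user out of order.
fn try_release_new_update(pending: &mut Vec<PendingUpdate>, update_id: u64) -> bool {
    let blocked = pending.iter().any(|update| !update.flown);
    pending.push(PendingUpdate { update_id, flown: !blocked });
    !blocked
}

// `Completed` results may now arrive for any pending update, not just
// the oldest one, so completion removes by id instead of popping the
// front of the queue.
fn mark_completed(pending: &mut Vec<PendingUpdate>, update_id: u64) {
    pending.retain(|update| update.update_id != update_id);
}
```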
As pointed out in https://github.com/lightning/bolts/pull/754/commits/6656b70,
we can move the `shutdown_scriptpubkey` field into the TLV streams of
`OpenChannel` and `AcceptChannel` without affecting the resulting encoding.
We use `WithoutLength` encoding here to ensure that we do not encode a
length prefix along with the `Script`, as is normally the case.
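As a schematic illustration (not LDK's serialization code): a TLV record
already frames its value's length, so a length-prefixed script inside it
would encode the length twice.

```rust
// Build a (type, length, value) TLV record. With `WithoutLength`-style
// encoding the value is written as raw bytes; the record's own length
// field is the only framing.
fn tlv_record(tlv_type: u8, value: &[u8]) -> Vec<u8> {
    let mut out = vec![tlv_type, value.len() as u8];
    out.extend_from_slice(value);
    out
}

fn main() {
    let script = [0x00, 0x14, 0xab]; // stand-in script bytes
    // Default `Script` serialization would prepend an inner `0x03`
    // length byte, producing a different (and redundant) wire encoding.
    assert_eq!(tlv_record(0x01, &script), vec![0x01, 0x03, 0x00, 0x14, 0xab]);
}
```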
The fields provided by `DataLossProtect` have been mandatory since
https://github.com/lightning/bolts/pull/754/commits/6656b70, regardless
of whether the `option_data_loss_protect` or `option_static_remotekey`
feature bits are set.
We move the fields out of `DataLossProtect` to make encoding definitions
more succinct with `impl_writeable_msg!` and to reduce boilerplate.
This paves the way for completely removing `OptionalField` in subsequent
commits.
`PeerManager` takes a `MessageHandler` struct which contains all
the known message handlers for it to pass messages to. It then,
separately, takes a `CustomMessageHandler`. This makes no sense; we
should simply include the `CustomMessageHandler` in the
`MessageHandler` struct for consistency.
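The consolidated struct, roughly (field names follow the commit loosely;
generic bounds elided):

```rust
// Sketch: the custom handler becomes one more field of `MessageHandler`
// rather than a separate argument to `PeerManager::new`.
pub struct MessageHandler<CM, RM, OM, CustomM> {
    pub chan_handler: CM,
    pub route_handler: RM,
    pub onion_message_handler: OM,
    pub custom_message_handler: CustomM,
}
```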
If we have more than
127 / `MAX_BUFFER_DRAIN_TICK_INTERVALS_PER_PEER` (31) peers,
`awaiting_pong_timer_tick_intervals` can overflow before we hit
the limit. This isn't super harmful; we'll still disconnect peers
as long as they don't send *any* messages between two pings, but it
does mean we may fail to disconnect peers which are extremely slow to
respond to messages, e.g. because they are overloaded.
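A minimal illustration of the arithmetic, assuming (as the text implies)
an `i8` counter checked against
`MAX_BUFFER_DRAIN_TICK_INTERVALS_PER_PEER * peer_count`; the widening
fix shown here is just one option:

```rust
fn main() {
    const MAX_BUFFER_DRAIN_TICK_INTERVALS_PER_PEER: i8 = 4; // 127 / 4 == 31
    let peer_count: i8 = 32; // one more than 31

    // Computing the limit in i8 overflows (4 * 32 == 128 > i8::MAX):
    // it panics in debug builds and wraps to -128 in release builds.
    // let limit = MAX_BUFFER_DRAIN_TICK_INTERVALS_PER_PEER * peer_count;

    // Widening the arithmetic avoids the overflow entirely.
    let limit = MAX_BUFFER_DRAIN_TICK_INTERVALS_PER_PEER as i32 * peer_count as i32;
    assert_eq!(limit, 128);
}
```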
PaymentParameters already includes this value.
This sets us up to better support route blinding, since there is no known
final_cltv_delta when paying to a blinded route.
Currently `BackgroundProcessor` tests create persister directories in the
current working directory and rely on cleaning up in a `Drop` implementation.
Unfortunately, it seems that in the async tests the nodes are not
`drop()`ed for some reason, and so the directories created by those
tests remain behind in the current working directory.
This commit at least ensures that these test directories are created in
the OS's temporary directory using `temp_dir()`. It doesn't aim to
solve the lack of cleanup in the async tests.
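A sketch of the approach (the naming scheme is illustrative; `nanos`
just keeps concurrent test runs from colliding):

```rust
use std::env::temp_dir;

// Give each test's persister a path under the OS temp directory
// instead of the current working directory.
fn persister_dir(test_name: &str) -> std::path::PathBuf {
    let nanos = std::time::SystemTime::now()
        .duration_since(std::time::UNIX_EPOCH)
        .unwrap()
        .subsec_nanos();
    temp_dir().join(format!("{test_name}_{nanos}"))
}
```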
Partial fix for #2224, but I believe it's enough to resolve it: any
temp directories that do remain will be purged by the OS at some stage,
and are overwritten by subsequent tests if there is a conflict.
If a `Notifier` has an internal `FutureState` which gathers some
sleeper callbacks but is never actually woken, those callbacks
will leak due to a circular `Arc` reference when the `Notifier` is
`drop`'d.
Because `Notifier`s are rarely `drop`'d in production this isn't a
huge deal, but shows up materially in bindings tests as they spawn
many nodes over the course of a short test.
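For reference, the leak pattern in a self-contained form (names
illustrative, not LDK's internals):

```rust
use std::sync::{Arc, Mutex};

// A state object holds callbacks, and each callback captures an `Arc`
// back to that same state: a reference cycle. Unless a wake-up drains
// the callbacks and breaks the cycle, dropping the outer `Arc` never
// frees the state.
struct FutureState {
    callbacks: Mutex<Vec<Box<dyn Fn() + Send>>>,
}

fn main() {
    let state = Arc::new(FutureState { callbacks: Mutex::new(Vec::new()) });
    let state_ref = Arc::clone(&state);
    state.callbacks.lock().unwrap().push(Box::new(move || {
        let _keep_alive = &state_ref; // the callback keeps the state alive
    }));
    // Strong count is still 2 here, so this drop leaks the `FutureState`.
    drop(state);
}
```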
Fixes #2232