Whenever we go to send bytes to a peer, we need to construct a
waker for tokio to call back into if we need to finish sending
later. That waker needs some reference to the peer's read task to
wake it up, hidden behind a single `*const ()`. To do this, we
previously simply stored a `Box<tokio::mpsc::Sender>` in that
pointer, which required a `clone` for each waker construction. This
led to substantial malloc traffic.
Instead, here, we replace that box with an `Arc`, leaving a single
`tokio::mpsc::Sender` floating around and simply adjusting its
refcount whenever we construct a new waker, which we can do without
allocating.
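Roughly, the new waker plumbing looks like the following sketch
(`SendData` here is a hypothetical stand-in for the real per-peer
sender type, and the actual wake logic is elided):

```rust
use std::sync::Arc;
use std::task::{RawWaker, RawWakerVTable, Waker};

// Hypothetical stand-in for the per-peer state the waker carries (in
// practice the `tokio::mpsc::Sender` used to wake the read task).
struct SendData;

static VTABLE: RawWakerVTable =
	RawWakerVTable::new(clone_raw, wake_raw, wake_by_ref_raw, drop_raw);

unsafe fn clone_raw(ptr: *const ()) -> RawWaker {
	// Cloning a waker is now just a refcount bump - no allocation.
	Arc::increment_strong_count(ptr as *const SendData);
	RawWaker::new(ptr, &VTABLE)
}

unsafe fn wake_raw(ptr: *const ()) {
	let arc = Arc::from_raw(ptr as *const SendData);
	// ... signal the read task via the sender ...
	drop(arc); // wake() consumes our reference
}

unsafe fn wake_by_ref_raw(ptr: *const ()) {
	let arc = Arc::from_raw(ptr as *const SendData);
	// ... signal the read task via the sender ...
	std::mem::forget(arc); // wake_by_ref() does not own the reference
}

unsafe fn drop_raw(ptr: *const ()) {
	drop(Arc::from_raw(ptr as *const SendData));
}

fn waker_for(send_data: &Arc<SendData>) -> Waker {
	// The only cost per waker is the atomic refcount bump in `clone`.
	let ptr = Arc::into_raw(Arc::clone(send_data)) as *const ();
	unsafe { Waker::from_raw(RawWaker::new(ptr, &VTABLE)) }
}
```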
ChainHash is more appropriate for places where an arbitrary BlockHash is
not desirable. The type was only introduced in later versions of the
bitcoin crate, which is why BlockHash had been used instead.
Using ChainHash also makes it easier to check if ChannelManager is
compatible with an Offer.
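For example, with a recent `bitcoin` crate the compatibility check
reduces to a direct comparison (hypothetical helper shown):

```rust
use bitcoin::blockdata::constants::ChainHash;
use bitcoin::Network;

// Hypothetical helper: an Offer's chain hash can be compared directly
// against the chain hash of the network we were initialized with.
fn supports_chain(offer_chain: ChainHash, our_network: Network) -> bool {
	offer_chain == ChainHash::using_genesis_block(our_network)
}
```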
We use `tokio`'s `io-util` feature to provide the
`Async{Read,Write}Ext` traits, which allow us to simply launch a
read future or call `poll_write` directly, as well as `split` the
`TcpStream` into read and write halves. However, these traits aren't
actually doing much for us - they are really just wrapping the
`readable` future (which we can trivially use ourselves) and
`poll_write` isn't doing anything for us that `poll_write_ready`
can't.
Similarly, the split logic is really just `Arc`ing the
`TcpStream` and busy-waiting whenever one operation is already in
progress to prevent concurrent reads/writes. However, there's no reason to prevent
concurrent access at the stream level - we aren't ever concurrently
writing or reading (though we may concurrently read and write,
which is fine).
Worse, the `io-util` feature broke MSRV (though they're likely to
fix this upstream) and carries two additional dependencies (only
one on the latest upstream tokio).
Thus, we simply drop the dependency here.
Fixes #2527.
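For reference, reading without `Async{Read,Write}Ext` is a simple loop
over the `readable` future plus a non-blocking read (a sketch, not the
exact code used here):

```rust
use tokio::net::TcpStream;

// Wait for readability ourselves, then do a non-blocking read - no
// `AsyncReadExt` required.
async fn read_some(stream: &TcpStream, buf: &mut [u8]) -> std::io::Result<usize> {
	loop {
		stream.readable().await?;
		match stream.try_read(buf) {
			Ok(n) => return Ok(n),
			// Readiness was a false positive (e.g. consumed elsewhere).
			Err(ref e) if e.kind() == std::io::ErrorKind::WouldBlock => continue,
			Err(e) => return Err(e),
		}
	}
}
```

Because both calls take `&TcpStream`, the read and write sides can
share the stream without any `split` at all.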
The `tokio` `macros` feature depends on `proc-macro2`, which
recently broke its MSRV in a patch version. Such crates aren't
reasonable for us to have as dependencies, so instead we replace
the one trivial use we have of `tokio::select!()` with our own
manual future.
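For a two-branch select, the manual future is short. A minimal sketch,
assuming both futures are `Unpin` (the real replacement can be
specialized to the two concrete futures involved):

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};

enum Either<A, B> { Left(A), Right(B) }

// Polls both futures, completing with whichever is ready first -
// roughly what a two-branch `tokio::select!` expands to.
struct Select<A, B> { a: A, b: B }

impl<A: Future + Unpin, B: Future + Unpin> Future for Select<A, B> {
	type Output = Either<A::Output, B::Output>;
	fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
		let this = self.get_mut();
		if let Poll::Ready(out) = Pin::new(&mut this.a).poll(cx) {
			return Poll::Ready(Either::Left(out));
		}
		if let Poll::Ready(out) = Pin::new(&mut this.b).poll(cx) {
			return Poll::Ready(Either::Right(out));
		}
		Poll::Pending
	}
}
```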
If the `networks` field is present in a received `Init` message, we
need to make sure our genesis chain hash matches one of those;
otherwise, we should disconnect the peer.
We now also always send our genesis chain hash in `Init` messages to
our peers.
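The check itself is trivial (field and helper names here are
illustrative, not the actual ones):

```rust
use bitcoin::blockdata::constants::ChainHash;

// Illustrative names: if the peer told us which networks it supports,
// ours must be among them, or we disconnect.
fn should_disconnect(our_chain: ChainHash, peer_networks: Option<&[ChainHash]>) -> bool {
	match peer_networks {
		Some(networks) => !networks.contains(&our_chain),
		None => false, // no `networks` field - nothing to check
	}
}
```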
A while back, in tests, we added an `AChannelManager` trait, which
is implemented for all `ChannelManager`s and can be used as a bound
when we need a `ChannelManager`, rather than having to duplicate all
the bounds of `ChannelManager` everywhere.
Here we do the same thing for `PeerManager`, but make it public and
use it to clean up `lightning-net-tokio` and
`lightning-background-processor`.
We should likely do the same for `AChannelManager`, but that's left
as a followup.
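Heavily simplified, the pattern looks like this (the real trait
carries an associated type per `PeerManager` generic):

```rust
// Simplified stand-ins; the real `PeerManager` has many more generics.
struct PeerManager<D, M> { descriptor: D, handler: M }

trait APeerManager {
	type D;
	type M;
	fn as_ref(&self) -> &PeerManager<Self::D, Self::M>;
}

// Blanket impl for every concrete instantiation, so call sites can take
// a single `PM: APeerManager` bound instead of repeating all generics.
impl<D, M> APeerManager for PeerManager<D, M> {
	type D = D;
	type M = M;
	fn as_ref(&self) -> &PeerManager<D, M> { self }
}
```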
`PeerManager` takes a `MessageHandler` struct which contains all
the known message handlers for it to pass messages to. It then,
separately, takes a `CustomMessageHandler`. This makes no sense; we
should simply include the `CustomMessageHandler` in the
`MessageHandler` struct for consistency.
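After the change, the struct looks roughly like this (field names
illustrative):

```rust
// The custom handler now lives alongside the other handlers rather
// than being a separate `PeerManager::new` argument.
struct MessageHandler<CM, RM, OM, CustomM> {
	chan_handler: CM,
	route_handler: RM,
	onion_message_handler: OM,
	custom_message_handler: CustomM,
}
```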
The `lightning-net-tokio` crate-level example contained a carryover
from when it was the primary notifier of the background processor,
and now just showed an "example" of creating a method that calls
another method with the same parameters and then does event
processing (which doesn't make sense; the BP should do that).
Instead, the examples are simply removed and the documentation is
tweaked to include recent changes.
This is largely motivated by some follow-up work for anchors that will
introduce an event handler for `BumpTransaction` events, which we can
now include in this new top-level `events` module.
Long ago, we used the `no_connection_possible` flag to signal that a
peer has some unknown feature set or some other condition prevents
us from ever connecting to the given peer. In that case we'd
automatically force-close all channels with the given peer. This
was somewhat surprising to users so we removed the automatic
force-close, leaving the flag serving no LDK-internal purpose.
Distilling the concept of "can we connect to this peer again in the
future" to a simple flag turns out to be rife with edge cases, so
users actually using the flag to force-close channels would likely
cause surprising behavior.
Thus, there's really not a lot of reason to keep the flag,
especially given it's untested and likely to be broken in subtle
ways anyway.
Now that we allow `handle_channel_announcement` to (indirectly)
spawn async tasks which will complete later, we have to ensure it
can apply backpressure all the way up to the TCP socket to ensure
we don't end up with too many buffers allocated for UTXO
validation.
We do this by adding a new method to `RoutingMessageHandler` which
allows it to signal if there are "many" checks pending and
`channel_announcement` messages should be delayed. The actual
`PeerManager` implementation thereof is done in the next commit.
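Sketched out, the hook looks like this (treat the exact signature as
illustrative):

```rust
trait RoutingMessageHandler {
	/// Returns true if there are "many" pending UTXO validation checks,
	/// in which case `channel_announcement` reads should be paused.
	fn processing_queue_high(&self) -> bool;
	// ... message-handling methods elided ...
}
```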
The `chain::Access` trait (and the `chain::AccessError` enum) is a
bit strange - it only really makes sense if users import it via the
`chain` module, otherwise they're left with a trait just called
`Access`. Worse, for bindings users it's always just called
`Access`, in part because many downstream languages don't have a
mechanism to import a module and then refer to it.
Further, it's stuck dangling in the `chain` top-level mod.rs file,
sitting in a module that doesn't use it at all (it's only used in
`routing::gossip`).
Instead, we give it its full name - `UtxoLookup` (and rename the
error enum to `UtxoLookupError`) and put it in a new
`routing::utxo` module, next to `routing::gossip`.
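The renamed interface, roughly (signatures simplified and
illustrative):

```rust
use bitcoin::{BlockHash, TxOut};

pub enum UtxoLookupError {
	UnknownChain,
	UnknownTx,
}

pub trait UtxoLookup {
	/// Resolves a short channel ID on the given chain to its funding UTXO.
	fn get_utxo(&self, genesis_hash: &BlockHash, short_channel_id: u64)
		-> Result<TxOut, UtxoLookupError>;
}
```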
Also swaps `PublicKey` for `NodeId` in `get_next_node_announcement`
and `InitSyncTracker` to avoid unnecessary deserialization that came
from changing `UnsignedNodeAnnouncement`.
Secrets should not be exposed in memory at the interface level, as
doing so would make it impossible to implement the interface against
a hardware security module/secure element.
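To illustrate the contrast (hypothetical trait and method names):

```rust
use bitcoin::secp256k1::ecdsa::Signature;

trait NodeSignerSketch {
	// Not this - an HSM can never hand back the raw secret:
	// fn get_node_secret(&self) -> SecretKey;

	// But this - only the operation's result crosses the interface:
	fn sign_message(&self, msg: &[u8]) -> Signature;
}
```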