Long ago, we used the `no_connection_possible` flag to signal that a
peer has some unknown feature set or that some other condition
prevents us from ever connecting to the given peer. In that case we'd
automatically force-close all channels with the given peer. This
was somewhat surprising to users, so we removed the automatic
force-close, leaving the flag serving no LDK-internal purpose.
Distilling the concept of "can we connect to this peer again in the
future" into a simple flag turns out to be rife with edge cases, so
users actually using the flag to force-close channels would likely
cause surprising behavior.
Thus, there's really not much reason to keep the flag, especially
given it's untested and likely broken in subtle ways anyway.
Now that we allow `handle_channel_announcement` to (indirectly)
spawn async tasks which will complete later, we have to ensure it
can apply backpressure all the way up to the TCP socket so that we
don't end up with too many buffers allocated for UTXO validation.
We do this by adding a new method to `RoutingMessageHandler` which
allows it to signal if there are "many" checks pending and
`channel_announcement` messages should be delayed. The actual
`PeerManager` implementation thereof is done in the next commit.
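For illustration, a minimal sketch of such a signal, assuming a method
named `processing_queue_high` (the method name and trait shape here are
assumptions, not necessarily the exact LDK API):
```rust
/// Sketch only: the backpressure hook described above, expressed as a
/// trait method on the routing-message side.
pub trait RoutingMessageHandlerBackpressure {
    /// Returns true when "many" UTXO validation checks are pending, in
    /// which case the caller should stop reading `channel_announcement`
    /// messages from the socket until the queue drains.
    fn processing_queue_high(&self) -> bool;
}
```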
Also swaps `PublicKey` for `NodeId` in `get_next_node_announcement`
and `InitSyncTracker` to avoid unnecessary deserialization that came
from changing `UnsignedNodeAnnouncement`.
Adds the macro `get_pubkey_from_node_id`
to parse `PublicKey`s back from `NodeId`s for signature
verification, as well as `make_funding_redeemscript_from_slices`
to avoid parsing back and forth between types.
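As a rough sketch of the conversion involved (the module path and the
`as_slice` accessor are assumptions), parsing a `PublicKey` back out of a
`NodeId` can be deferred until a signature actually needs verifying:
```rust
use bitcoin::secp256k1::{Error, PublicKey};
use lightning::routing::gossip::NodeId;

// Sketch: NodeId stores the raw serialized bytes, so deserializing back
// to a PublicKey is only needed at signature-verification time.
fn pubkey_for_verification(node_id: &NodeId) -> Result<PublicKey, Error> {
    PublicKey::from_slice(node_id.as_slice())
}
```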
Every macro used inside an exported macro needed to:
1. be exported as well, and
2. be called via the `$crate` namespace so it works in other crates.
Some structs in `lightning::util::ser` needed to be made public as they were used inside the exported macros.
Use the macros like this:
```rust
lightning::impl_writeable_tlv_based!(...)
```
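As a generic illustration of the pattern (the macro names below are made
up), an exported macro must reference its helpers through `$crate` so
that downstream crates can expand it:
```rust
// Both macros must be #[macro_export]ed, and the outer one must call
// the inner via $crate:: so expansion works outside this crate.
#[macro_export]
macro_rules! _write_field {
    ($w:expr, $v:expr) => {
        $w.push($v)
    };
}

#[macro_export]
macro_rules! write_all {
    ($w:expr, $($v:expr),*) => {
        $( $crate::_write_field!($w, $v); )*
    };
}
```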
The bindings have exported `io::Error` as, basically,
`io::ErrorKind`, for quite some time, so there's little reason to
not just export `io::ErrorKind` as well.
As we move towards specifying supported/required feature bits in the
module(s) where they are supported, the global `known` feature set
constructors no longer make sense.
In anticipation of removing the `known` constructor, this commit
removes all remaining references to it outside of features.rs.
As we remove the concept of a global "known/supported" feature set
in LDK, we should also remove the concept of a global "required"
feature set. This does so by moving the checks for specific
required features into handlers.
Specifically, it allows a handler's `peer_connected` method to
return an `Err` if the peer should be disconnected. Only one such
required feature bit is currently set - `static_remote_key`, which
is required in `ChannelManager`.
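A rough sketch of what that check looks like inside a handler (the exact
trait signature is an assumption; the point is the `Err` return
requesting disconnection):
```rust
use bitcoin::secp256k1::PublicKey;
use lightning::ln::msgs::Init;

struct Handler;

impl Handler {
    // Sketch mirroring the shape of `peer_connected` after this change.
    fn peer_connected(&self, _their_node_id: &PublicKey, msg: &Init) -> Result<(), ()> {
        if !msg.features.supports_static_remote_key() {
            // Required feature missing: ask the caller to disconnect.
            return Err(());
        }
        Ok(())
    }
}
```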
Similar to how we OR our InitFeatures and NodeFeatures across both our channel
and routing message handlers, we also want to OR the features of our onion
message handler.
When we broadcast a node announcement, the features we support are really a
combination of all the various features our different handlers support. This
commit captures this concept by OR'ing our NodeFeatures across both our channel
and routing message handlers.
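Assuming `Features` supports bitwise OR (as the OR'ing described here
implies) and that each handler exposes a `provided_node_features()`-style
accessor, the combination is just:
```rust
use lightning::ln::features::NodeFeatures;

// Sketch: the announced feature set is the OR of each handler's set. In
// the real code each input comes from a handler accessor; here they are
// plain values for illustration.
fn combined_node_features(chan: NodeFeatures, route: NodeFeatures) -> NodeFeatures {
    chan | route
}
```
The same shape applies to `InitFeatures` when building the `Init`
message, described next.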
When we go to send an Init message to new peers, the features we
support are really a combination of all the various features our
different handlers support. This commit captures this concept by
OR'ing our InitFeatures across both our Channel and Routing
handlers.
Note that this also disables setting the `initial_routing_sync`
flag in init messages, as was intended in
e742894492, per the comment added on
`clear_initial_routing_sync`, though this should not be a behavior
change in practice as nodes which support gossip queries ignore the
initial routing sync flag.
Like we now do for `NodeFeatures`, this converts to asking our
registered `ChannelMessageHandler` for our `InitFeatures` instead
of hard-coding them to the global LDK known set.
This allows handlers to set different feature bits based on what
our configuration actually supports rather than what LDK supports
in aggregate.
Some `NodeFeatures` will, in the future, represent features which
are not enabled by the `ChannelManager`, but by other message
handlers. Thus, it doesn't make sense to determine the
node feature bits in the `ChannelManager`.
The simplest fix for this is to change to generating the
node_announcement in `PeerManager`, asking all the connected
handlers which feature bits they support and simply OR'ing them
together. While this may not be sufficient, as it doesn't consider
feature bit dependencies, support for those could be handled at the
feature level in the future.
This commit moves the `broadcast_node_announcement` function to
`PeerHandler` but does not yet implement feature OR'ing.
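After the move, a minimal usage sketch looks like the following (the
RGB/alias/address argument shapes follow the API described above, but
treat the exact signature as an assumption):
```rust
// Sketch: broadcasting a node announcement via the PeerManager, which
// will eventually assemble the feature bits by querying its handlers.
peer_manager.broadcast_node_announcement(
    [0; 3],   // node color (RGB)
    [0; 32],  // node alias
    vec![],   // publicly-reachable addresses, if any
);
```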
Adds the boilerplate needed for PeerManager and OnionMessenger to work
together, with some corresponding docs and misc updates mostly due to the
PeerManager public API changing.
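Concretely, the wiring amounts to handing the `OnionMessenger` to the
`PeerManager` alongside the existing handlers; the field names in this
sketch follow the pattern described above and are assumptions:
```rust
// Sketch: the PeerManager's MessageHandler bundle gains an onion
// message handler next to the channel and routing handlers.
let message_handler = MessageHandler {
    chan_handler: channel_manager,
    route_handler: gossip_sync,
    onion_message_handler: onion_messenger,
};
```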
Instead of backfilling gossip by buffering (up to) ten messages at
a time, only buffer one message at a time, as the peers' outbound
socket buffer drains. This moves the outbound backfill messages out
of `PeerHandler` and into the operating system buffer, where it
arguably belongs.
Not buffering causes us to walk the gossip B-Trees somewhat more
often, but avoids allocating vecs for the responses. While it's
probably (without having benchmarked it) a net performance loss, it
simplifies buffer tracking and leaves us with more room to play
with the buffer sizing constants as we add onion message forwarding,
which is an important win.
Note that because we change how often we check if we're out of
messages to send before pinging, we slightly change how many
messages are exchanged at once, impacting the
`test_do_attempt_write_data` constants.
Prior to this change, we could have failed to decode a valid payload of size
>253. This is because we were decoding the length (a BigSize, big-endian) as a
VarInt (little-endian).
Found in #1652.
This required adapting `onion_utils::decode_next_hop` to work for both payments
and onion messages.
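For reference, BOLT-1's BigSize uses big-endian multi-byte encodings,
unlike the little-endian bitcoin-style VarInt it superficially
resembles. A self-contained sketch of the correct read (canonicality
checks omitted for brevity):
```rust
use std::io::{self, Read};

// Sketch: decode a BOLT BigSize. All multi-byte forms are big-endian.
fn read_big_size<R: Read>(r: &mut R) -> io::Result<u64> {
    let mut first = [0u8; 1];
    r.read_exact(&mut first)?;
    Ok(match first[0] {
        0xff => {
            let mut b = [0u8; 8];
            r.read_exact(&mut b)?;
            u64::from_be_bytes(b)
        }
        0xfe => {
            let mut b = [0u8; 4];
            r.read_exact(&mut b)?;
            u32::from_be_bytes(b) as u64
        }
        0xfd => {
            let mut b = [0u8; 2];
            r.read_exact(&mut b)?;
            u16::from_be_bytes(b) as u64
        }
        n => n as u64,
    })
}
```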
Currently we just print out the path_id of any onion messages we receive. In
the future, these received onion messages will be redirected to their
respective handlers: i.e. an invoice_request will go to an InvoiceHandler,
custom onion messages will go to a custom handler, etc.
We need to add a new Packet struct because onion message packet hop_data fields
can be of variable length, whereas regular payment packets are always 1366
bytes.
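A sketch of the new struct's shape, following the description above
(the field layout is illustrative, not a claim about the exact LDK
definition):
```rust
use bitcoin::secp256k1::PublicKey;

// Sketch: onion message packets need their own type because hop_data is
// variable-length, unlike the fixed 1366-byte payment onion (1 version
// byte + 33 pubkey bytes + 1300 hop-data bytes + 32 hmac bytes).
pub struct Packet {
    pub version: u8,
    pub public_key: PublicKey,
    /// Variable-length for onion messages.
    pub hop_data: Vec<u8>,
    pub hmac: [u8; 32],
}
```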
Co-authored-by: Valentine Wallace <vwallace@protonmail.com>
Co-authored-by: Jeffrey Czyz <jkczyz@gmail.com>
It's somewhat strange to have a trait method which is named after
the intended action, rather than the action that occurred, leaving
it up to the implementor what action they want to take.
In any place where fail_htlc_backwards_internal was called for a phantom payment
failure, we weren't encoding the onion failure as if the phantom were the one
failing. Instead, we were encoding the failure as if it were coming from the
second-to-last hop. This caused our failures to not be parsed properly on the
payer's side.
Places we were encoding failures incorrectly include:
* on failure of a call to inbound_payment::verify
* on a user call to fail_htlc_backwards
Also drop some unnecessary panics when reading OnionHopData objects. This also
enables one of the phantom failure tests because we can construct OnionHopDatas
with invalid amounts.
Lastly, remove a bogus comment.
Somehow, our channel type implementation doesn't echo back the
channel type as we believe it was negotiated, as it should. Though
the spec doesn't explicitly require this, some implementations may
require it, and it appears to have been in the BOLTs from the start
of the channel type logic.