Commit graph

181 commits

Valentine Wallace
c106d4ff9f
OR NodeFeatures from both Channel and Routing message handlers
When we broadcast a node announcement, the features we support are really a
combination of all the various features our different handlers support. This
commit captures this concept by OR'ing our NodeFeatures across both our channel
and routing message handlers.
2022-09-09 15:58:20 -04:00
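The OR'ing described above can be sketched as follows; `FeatureFlags`, `ProvidesNodeFeatures`, and `provided_node_features` are simplified stand-ins for illustration, not the actual LDK types:

```rust
/// Simplified stand-in for a feature-bit vector such as NodeFeatures.
#[derive(Clone, Default)]
struct FeatureFlags {
    bits: Vec<u8>,
}

impl FeatureFlags {
    /// Bitwise-OR another set of flags into this one, growing as needed.
    fn or(mut self, other: &FeatureFlags) -> FeatureFlags {
        if other.bits.len() > self.bits.len() {
            self.bits.resize(other.bits.len(), 0);
        }
        for (i, byte) in other.bits.iter().enumerate() {
            self.bits[i] |= *byte;
        }
        self
    }
}

/// Each message handler reports the features it supports.
trait ProvidesNodeFeatures {
    fn provided_node_features(&self) -> FeatureFlags;
}

/// The node-announcement features are simply the OR of every handler's set.
fn node_features(
    chan_handler: &dyn ProvidesNodeFeatures,
    route_handler: &dyn ProvidesNodeFeatures,
) -> FeatureFlags {
    chan_handler
        .provided_node_features()
        .or(&route_handler.provided_node_features())
}
```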
Matt Corallo
e34a2bc722 Add a new InitFeatures constructor to capture the types of flags
When `ChannelMessageHandler` implementations wish to return an
`InitFeatures` which contains all the known flags that are relevant
to channel handling, but not gossip handling, they currently need
to do so by manually constructing an `InitFeatures` with all known
flags and then clearing the ones they don't want.

Instead of spreading this logic out across the codebase, this
consolidates such construction to one place in features.rs.
2022-09-09 15:36:46 +00:00
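A rough before/after of the consolidation this commit describes; the flag constants and constructor names below are purely illustrative, not the real contents of features.rs:

```rust
/// Illustrative bit positions; the real known-flag sets live in features.rs.
const CHANNEL_FLAGS: u64 = 0b0000_1111; // flags relevant to channel handling
const GOSSIP_FLAGS: u64 = 0b0011_0000;  // flags only relevant to gossip
const ALL_KNOWN: u64 = CHANNEL_FLAGS | GOSSIP_FLAGS;

struct InitFlags(u64);

impl InitFlags {
    /// Old pattern, repeated at call sites: build the full known set, then
    /// clear the gossip-only bits that channel handling doesn't want.
    fn known_then_clear_gossip() -> InitFlags {
        InitFlags(ALL_KNOWN & !GOSSIP_FLAGS)
    }

    /// New pattern: a single constructor returning only the flags relevant to
    /// channel handling, so call sites stop duplicating the clearing logic.
    fn known_channel_features() -> InitFlags {
        InitFlags(CHANNEL_FLAGS)
    }
}
```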
Matt Corallo
06cb48afd4 OR InitFeatures from both Channel and Routing message handlers
When we go to send an Init message to new peers, the features we
support are really a combination of all the various features our
different handlers support. This commit captures this concept by
OR'ing our InitFeatures across both our Channel and Routing
handlers.

Note that this also disables setting the `initial_routing_sync`
flag in init messages, as was intended in
e742894492, per the comment added on
`clear_initial_routing_sync`, though this should not be a behavior
change in practice as nodes which support gossip queries ignore the
initial routing sync flag.
2022-09-09 15:36:46 +00:00
Matt Corallo
950ccc4340 Fetch our InitFeatures from ChannelMessageHandler
Like we now do for `NodeFeatures`, this converts to asking our
registered `ChannelMessageHandler` for our `InitFeatures` instead
of hard-coding them to the global LDK known set.

This allows handlers to set different feature bits based on what
our configuration actually supports rather than what LDK supports
in aggregate.
2022-09-09 15:36:46 +00:00
Matt Corallo
989cb064b5 Move broadcast_node_announcement to PeerManager
Some `NodeFeatures` will, in the future, represent features which
are not enabled by the `ChannelManager`, but by other message
handlers. Thus, it doesn't make sense to determine the
node feature bits in the `ChannelManager`.

The simplest fix for this is to change to generating the
node_announcement in `PeerManager`, asking all the connected
handlers which feature bits they support and simply OR'ing them
together. While this may not be sufficient in the future as it
doesn't consider feature bit dependencies, support for those could
be handled at the feature level in the future.

This commit moves the `broadcast_node_announcement` function to
`PeerHandler` but does not yet implement feature OR'ing.
2022-09-08 19:50:36 +00:00
Matt Corallo
6b0afbe4d4 Send channel_{announcement,update} msgs on connection, not timer
When we connect to a new peer, immediately send them any
channel_announcement and channel_update messages for any public
channels we have with other peers. This allows us to stop sending
those messages on a timer when they have not changed and ensures
we are sending messages when we have peers connected, rather than
broadcasting at startup when we have no peers connected.
2022-09-08 19:50:36 +00:00
Valentine Wallace
b9b02b48fb
Refuse to send and forward OMs to disconnected peers
We also refuse to connect to peers that don't advertise onion message
forwarding support.
2022-09-02 16:27:30 -04:00
Valentine Wallace
f4834def9d
Implement buffering onion messages for peers.
In this commit, we check if a peer's outbound buffer has room for onion
messages, and if so pull them from an implementer of a new trait,
OnionMessageProvider.

This makes sure channel messages are prioritized over OMs, and OMs are
prioritized over gossip.

The onion_message module remains private until further rate limiting is added.
2022-08-26 19:03:07 -04:00
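A simplified sketch of the buffering scheme; the `OnionMessageProvider` trait name comes from the commit text, but the peer struct, buffer limit, and method names here are illustrative assumptions:

```rust
use std::collections::VecDeque;

/// Cap on messages queued per peer before we stop pulling more onion messages.
const OUTBOUND_BUFFER_LIMIT: usize = 10;

/// Implementers hand out the next onion message destined for a given peer.
trait OnionMessageProvider {
    fn next_onion_message_for_peer(&self, peer_node_id: [u8; 33]) -> Option<Vec<u8>>;
}

struct Peer {
    node_id: [u8; 33],
    /// Channel messages, onion messages, and gossip all land here, but they
    /// are enqueued in priority order: channel msgs first, then OMs, then gossip.
    outbound_buffer: VecDeque<Vec<u8>>,
}

impl Peer {
    /// Only pull more onion messages while the buffer has room, so channel
    /// messages are never crowded out and gossip stays behind OMs.
    fn fill_outbound_buffer_with_oms(&mut self, om_provider: &dyn OnionMessageProvider) {
        while self.outbound_buffer.len() < OUTBOUND_BUFFER_LIMIT {
            match om_provider.next_onion_message_for_peer(self.node_id) {
                Some(msg) => self.outbound_buffer.push_back(msg),
                None => break,
            }
        }
    }
}
```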
Valentine Wallace
544f72b37c
PeerManager: bump the read pause limit
...to make sure we can still get channel messages out after enqueuing some big gossip messages.
2022-08-26 19:03:05 -04:00
Valentine Wallace
4adff1039f
Add boilerplate for sending and receiving onion messages in PeerManager
Adds the boilerplate needed for PeerManager and OnionMessenger to work
together, with some corresponding docs and misc updates mostly due to the
PeerManager public API changing.
2022-08-26 19:02:59 -04:00
Valentine Wallace
47e818f198
Separate gossip broadcasts into their own queue in PeerManager
This allows us to better prioritize channel messages over gossip broadcasts and
lays groundwork for rate limiting onion messages more simply, since they won't
be competing with gossip broadcasts for space in the main message queue.
2022-08-25 14:57:43 -04:00
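Roughly, the split looks like this (field and method names are illustrative, not the actual PeerManager internals):

```rust
use std::collections::VecDeque;

struct Peer {
    /// Channel messages and other must-deliver traffic.
    outbound_buffer: VecDeque<Vec<u8>>,
    /// Best-effort gossip broadcasts, kept in their own queue so they can be
    /// rate limited or dropped without touching the main message queue.
    gossip_broadcast_buffer: VecDeque<Vec<u8>>,
}

impl Peer {
    /// Pick the next message to write: always drain channel messages before
    /// touching queued gossip broadcasts.
    fn next_outbound_msg(&mut self) -> Option<Vec<u8>> {
        self.outbound_buffer
            .pop_front()
            .or_else(|| self.gossip_broadcast_buffer.pop_front())
    }
}
```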
Valentine Wallace
ab149dc9d5
PeerMan: rename drop_gossip util to be more accurate
It's more accurate to name it as dropping gossip broadcasts, as it won't drop
all gossip.
2022-08-25 14:57:43 -04:00
Valentine Wallace
1de698fdd9
PeerMan: fix bug in drop_gossip util
Fixes a flipped bool that was introduced in
4a1ee5f9a9
2022-08-25 14:56:50 -04:00
Matt Corallo
8ffaacb3c3 Drop unnecessary if {...; bool} pattern in PeerManager 2022-08-16 00:52:44 +00:00
Matt Corallo
7717fa23a8 Backfill gossip without buffering directly in LDK
Instead of backfilling gossip by buffering (up to) ten messages at
a time, only buffer one message at a time, as the peers' outbound
socket buffer drains. This moves the outbound backfill messages out
of `PeerHandler` and into the operating system buffer, where they
arguably belong.

Not buffering causes us to walk the gossip B-Trees somewhat more
often, but avoids allocating vecs for the responses. While it's
probably (without having benchmarked it) a net performance loss, it
simplifies buffer tracking and leaves us with more room to play
with the buffer sizing constants as we add onion message forwarding,
which is an important win.

Note that because we change how often we check if we're out of
messages to send before pinging, we slightly change how many
messages are exchanged at once, impacting the
`test_do_attempt_write_data` constants.
2022-08-15 21:35:05 +00:00
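A sketch of the drain-driven backfill described above, with illustrative types standing in for the real `PeerHandler` and gossip store:

```rust
use std::collections::VecDeque;

/// Simplified per-peer state; the real PeerHandler tracks much more.
struct Peer {
    outbound_buffer: VecDeque<Vec<u8>>,
    /// Where this peer is in the gossip backfill (e.g. a short channel id).
    sync_position: Option<u64>,
}

/// Stand-in for the gossip store that is walked during backfill.
trait GossipBackfill {
    /// Return the next announcement at or after `position`, plus the next position.
    fn next_backfill_msg(&self, position: u64) -> Option<(Vec<u8>, u64)>;
}

impl Peer {
    /// Called as the peer's socket buffer drains: instead of queueing a batch
    /// of ten messages, top the buffer up one message at a time and let the
    /// operating system's socket buffer do the buffering.
    fn maybe_queue_backfill(&mut self, store: &dyn GossipBackfill) {
        if !self.outbound_buffer.is_empty() { return; }
        if let Some(pos) = self.sync_position {
            match store.next_backfill_msg(pos) {
                Some((msg, next_pos)) => {
                    self.outbound_buffer.push_back(msg);
                    self.sync_position = Some(next_pos);
                }
                None => self.sync_position = None, // backfill complete
            }
        }
    }
}
```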
Matt Corallo
4f6da92c4e Clarify comment on BUFFER_DRAIN_MSGS_PER_TICK. 2022-08-10 19:29:39 +00:00
Valentine Wallace
4a1ee5f9a9 Use util methods in Peer to decide when to forward
This consolidates our various checks on peer buffer space into the
`Peer` impl itself, making the thresholds at which we stop taking
various actions on a peer more readable as a whole.

This commit was primarily authored by `Valentine Wallace
<vwallace@protonmail.com>` with some amendments by `Matt Corallo
<git@bluematt.me>`.
2022-08-10 19:29:39 +00:00
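A minimal sketch of such util methods; the thresholds and method names here are illustrative rather than the real peer_handler.rs constants:

```rust
use std::collections::VecDeque;

// Illustrative thresholds; the real constants live in peer_handler.rs.
const OUTBOUND_BUFFER_LIMIT_READ_PAUSE: usize = 12;
const OUTBOUND_BUFFER_LIMIT_DROP_GOSSIP: usize = 20;

struct Peer {
    outbound_buffer: VecDeque<Vec<u8>>,
}

impl Peer {
    /// Stop reading from this peer's socket once we have this much queued,
    /// applying backpressure instead of buffering without bound.
    fn should_pause_read(&self) -> bool {
        self.outbound_buffer.len() > OUTBOUND_BUFFER_LIMIT_READ_PAUSE
    }

    /// Skip forwarding gossip broadcasts to a peer that is already backed up;
    /// gossip is best-effort and the peer can catch up via queries later.
    fn should_drop_gossip_broadcast(&self) -> bool {
        self.outbound_buffer.len() > OUTBOUND_BUFFER_LIMIT_DROP_GOSSIP
    }
}
```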
Matt Corallo
19b5a48dde Remove scary disconnect warnings on PeerManager new connection fns
In 4703d4e725 we changed
PeerManager::socket_disconnected to no longer require that it not be
called for sockets which the PeerManager itself decided to disconnect.
However, we forgot to remove the scary warnings on the
`new_{inbound,outbound}_connection` functions which warned of the
old behavior.

We do so here.
2022-07-25 18:21:00 +00:00
Matt Corallo
0627c0c88a Fix some test theoretical lock inversions
In the next commit we add lockorder testing based on the line each
mutex was created on rather than the particular mutex instance.
This causes some additional test failures because of lockorder
inversions for the same mutex across different tests; these are
fixed here.
2022-07-13 19:28:29 +00:00
Jeffrey Czyz
67736b7480
Parameterize NetworkGraph with Logger
P2PGossipSync logs before delegating to NetworkGraph in its
EventHandler. In order to share this handling with RapidGossipSync,
NetworkGraph needs to take a logger so that it can implement
EventHandler instead.
2022-06-06 13:02:43 -05:00
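Illustratively, the parameterization looks something like the following; the `Deref`-based logger bound mirrors LDK's usual pattern, but the fields and method below are simplified stand-ins:

```rust
use std::ops::Deref;

trait Logger {
    fn log(&self, msg: &str);
}

/// Illustrative shape of the change: the graph now holds (a reference to) a
/// logger so it can log from its own EventHandler implementation.
struct NetworkGraph<L: Deref>
where
    L::Target: Logger,
{
    logger: L,
    // ... channels, nodes ...
}

impl<L: Deref> NetworkGraph<L>
where
    L::Target: Logger,
{
    fn handle_network_update(&self, description: &str) {
        // Logging that previously happened in P2PGossipSync can now happen
        // here, shared by both the P2P and rapid gossip sync paths.
        self.logger.log(description);
    }
}
```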
Jeffrey Czyz
574870e9f8
Move network_graph.rs to gossip.rs
The routing::network_graph module contains a few structs related to p2p
gossip. So renaming the module to 'gossip' seems more appropriate.
2022-06-02 15:15:30 -07:00
Jeffrey Czyz
ac35492877
Rename NetGraphMsgHandler to P2PGossipSync
NetGraphMsgHandler implements RoutingMessageHandler to handle gossip
messages defined in BOLT 7 and maintains a view of the network by
updating NetworkGraph. Rename it to P2PGossipSync, which better
describes its purpose, and to contrast with RapidGossipSync.
2022-06-02 15:15:30 -07:00
Elias Rohrer
e98f68aee6 Rename FundingLocked to ChannelReady. 2022-05-30 17:07:09 -07:00
valentinewallace
b20aea1cb0
Merge pull request #1472 from TheBlueMatt/2022-06-less-secp-ctx
Pull secp256k1 contexts from per-peer to per-PeerManager
2022-05-17 16:10:09 -04:00
Matt Corallo
e5c988e00c
Merge pull request #1429 from TheBlueMatt/2022-04-drop-no-conn-possible 2022-05-14 19:35:47 +00:00
Matt Corallo
e6aaf7c72d Pull secp256k1 contexts from per-peer to per-PeerManager
Instead of including a `Secp256k1` context per
`PeerChannelEncryptor`, which is relatively expensive memory-wise
and nontrivial CPU-wise to construct, we should keep one for all
peers and simply reuse it.

This is relatively trivial so we do so in this commit.

Since it's trivial to do so, we also take this opportunity to
randomize the new PeerManager context.
2022-05-11 20:02:29 +00:00
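A sketch of the before/after using the rust-secp256k1 crate; the struct names and the exact randomization call site are illustrative:

```rust
use secp256k1::{Secp256k1, SignOnly};

// Before: each PeerChannelEncryptor carried its own context, paying the
// construction cost (precomputed tables) once per connected peer.
struct PerPeerEncryptor {
    secp_ctx: Secp256k1<SignOnly>,
    // ... per-peer noise/handshake state ...
}

// After: one context lives on the PeerManager and is shared by every peer.
struct Manager {
    secp_ctx: Secp256k1<SignOnly>,
}

impl Manager {
    fn new(random_bytes: [u8; 32]) -> Self {
        let mut secp_ctx = Secp256k1::signing_only();
        // Re-randomize the shared context once at construction to harden
        // against side channels; cheap since it now happens only once.
        secp_ctx.seeded_randomize(&random_bytes);
        Manager { secp_ctx }
    }
}
```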
Matt Corallo
b5a63070f5
Merge pull request #1023 from TheBlueMatt/2021-07-par-gossip-processing 2022-05-11 17:24:16 +00:00
Matt Corallo
46009a5f83 Add a few more simple tests of the PeerHandler
These increase coverage and caught previous lockorder inversions.
2022-05-10 23:40:20 +00:00
Matt Corallo
101bcd8da5 Drop a needless match in favor of an if let 2022-05-10 23:40:20 +00:00
Matt Corallo
96fc0f3453 Drop PeerHolder as it now only has one field 2022-05-10 23:40:20 +00:00
Matt Corallo
eb17464e78 Keep the same read buffer unless the last message was overly large
This avoids repeatedly deallocating and reallocating a Vec for the peer
read buffer after every message/header.
2022-05-10 23:40:20 +00:00
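A sketch of the buffer reuse, assuming an illustrative default size and method name:

```rust
/// Typical messages are small; only occasionally (e.g. large gossip batches)
/// does the buffer need to grow. Size chosen here purely for illustration.
const DEFAULT_READ_BUF: usize = 4096;

struct PeerReadState {
    read_buffer: Vec<u8>,
}

impl PeerReadState {
    /// Prepare the buffer to receive `len` bytes. Reuse the existing
    /// allocation unless the previous message forced it to balloon, in which
    /// case shrink back to the default size before reading the next message.
    fn prepare_read(&mut self, len: usize) {
        if self.read_buffer.capacity() > DEFAULT_READ_BUF && len <= DEFAULT_READ_BUF {
            self.read_buffer = Vec::with_capacity(DEFAULT_READ_BUF);
        }
        self.read_buffer.clear();
        self.read_buffer.resize(len, 0);
    }
}
```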
Matt Corallo
ae4ceb71a5 Create a simple FairRwLock to avoid readers starving writers
Because we handle messages (which can take some time, persisting
things to disk or validating cryptographic signatures) with the
top-level read lock, but require the top-level write lock to
connect new peers or handle disconnection, we are particularly
sensitive to writer starvation issues.

Rust's libstd RwLock does not provide any fairness guarantees,
using whatever the OS provides as-is. On Linux, pthreads defaults
to starving writers, which Rust's RwLock exposes to us (without
any configurability).

Here we work around that issue by blocking readers if there are
pending writers, optimizing for readable code over
perfectly-optimized blocking.
2022-05-10 23:40:20 +00:00
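A minimal sketch of such a fairness wrapper (not the exact LDK implementation): readers check for pending writers and queue up behind them rather than piling onto the shared read lock.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::{RwLock, RwLockReadGuard, RwLockWriteGuard};

pub struct FairRwLock<T> {
    lock: RwLock<T>,
    waiting_writers: AtomicUsize,
}

impl<T> FairRwLock<T> {
    pub fn new(t: T) -> Self {
        Self { lock: RwLock::new(t), waiting_writers: AtomicUsize::new(0) }
    }

    pub fn write(&self) -> RwLockWriteGuard<'_, T> {
        // Advertise the pending writer so new readers hold off.
        self.waiting_writers.fetch_add(1, Ordering::AcqRel);
        let guard = self.lock.write().unwrap();
        self.waiting_writers.fetch_sub(1, Ordering::AcqRel);
        guard
    }

    pub fn read(&self) -> RwLockReadGuard<'_, T> {
        // If a writer is queued, take (and immediately release) the write
        // lock first so we line up behind it instead of starving it.
        if self.waiting_writers.load(Ordering::Acquire) != 0 {
            drop(self.lock.write().unwrap());
        }
        self.lock.read().unwrap()
    }
}
```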
Matt Corallo
97711aef96 Limit blocked PeerManager::process_events waiters to two
Only one instance of PeerManager::process_events can run at a time,
and each run always finishes all available work before returning.
Thus, having several threads blocked on the process_events lock
doesn't accomplish anything but blocking more threads.

Here we limit the number of blocked calls on process_events to two
- one processing events and one blocked at the top which will
process all available events after the first completes.
2022-05-10 23:40:20 +00:00
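A sketch of the waiter limiting, with illustrative names; an atomic counter tracks calls in flight and the third concurrent caller simply returns:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Mutex;

struct EventProcessor {
    /// Number of calls currently inside process_events (running or waiting).
    calls_in_flight: AtomicUsize,
    process_events_lock: Mutex<()>,
}

impl EventProcessor {
    fn process_events(&self) {
        // One call is running and one is already queued behind it; the queued
        // call will observe all of our pending work, so a third call can bail.
        if self.calls_in_flight.fetch_add(1, Ordering::AcqRel) >= 2 {
            self.calls_in_flight.fetch_sub(1, Ordering::AcqRel);
            return;
        }
        {
            let _single_processor = self.process_events_lock.lock().unwrap();
            // ... drain and handle every pending event here ...
        }
        self.calls_in_flight.fetch_sub(1, Ordering::AcqRel);
    }
}
```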
Matt Corallo
4f50a94a3f Avoid the peers write lock unless we need it in timer_tick_occurred
Similar to the previous commit, this avoids "blocking the world" on
every timer tick unless we need to disconnect peers.
2022-05-10 23:40:20 +00:00
Matt Corallo
a5adda18dc Avoid taking the peers write lock during event processing
Because the peers write lock "blocks the world", and happens after
each read event, always taking the write lock has pretty severe
impacts on parallelism. Instead, here, we only take the global
write lock if we have to disconnect a peer.
2022-05-10 23:40:20 +00:00
Matt Corallo
a731efcb68 Process messages with only the top-level read lock held
Users are required to only ever call `read_event` serially
per-peer, thus we actually don't need any locks while we're
processing messages - we can only be processing messages in one
thread per-peer.

That said, we do need to ensure that another thread doesn't
disconnect the peer we're processing messages for, as that could
result in a peer_disconnected call while we're processing a
message for the same peer - somewhat nonsensical.

This significantly improves parallelism especially during gossip
processing as it avoids waiting on the entire set of individual
peer locks to forward a gossip message while several other threads
are validating gossip messages with their individual peer locks
held.
2022-05-10 23:40:20 +00:00
Matt Corallo
7c8b098698 Process messages from peers in parallel in PeerManager.
This adds the required locking to process messages from different
peers simultaneously in `PeerManager`. Note that channel messages
are still processed under a global lock in `ChannelManager`, and
most work is still processed under a global lock in gossip message
handling, but parallelizing message deserialization and message
decryption is somewhat helpful.
2022-05-10 23:40:20 +00:00
Jeffrey Czyz
65920818db
Merge pull request #1389 from lightning-signer/2022-03-bitcoin
Update bitcoin crate to 0.28.1
2022-05-05 14:08:16 -05:00
Devrandom
28d33ff9e0 bitcoin crate 0.28.1 2022-05-05 18:04:42 +02:00
Matt Corallo
171dfeee07
Merge pull request #1452 from tnull/2022-04-honor-gossip-timestamp-filters
Initiate gossip sync only after receiving GossipTimestampFilter.
2022-05-03 19:16:29 +00:00
Elias Rohrer
f8a196c8e3 Initiate sync only after receiving GossipTimestampFilter. 2022-05-03 09:13:09 +02:00
Matt Corallo
e39d63c7d4 Do not force-close channels when we cannot communicate with peers
In general, we should never be automatically force-closing our
users' channels unless there is some immediate risk of funds loss
(i.e., because of some HTLC(s) which are timing out soon). In any
other case, we should trust the user to be able to figure out what
is going on and close their channels manually instead of trying to
be overly clever and automate closures if we think the channel is
useless.

In this case, even if a peer has some required feature that does
not allow us to communicate with them, there is a strong
possibility that some LDK upgrade may allow us to in the future. In
the meantime, there is no reason to go on-chain unless the user
needs funds immediately. In such a case, the user should already
have logic to force-close channels with peers which are not
available for any reason.
2022-04-28 02:50:06 +00:00
Matt Corallo
d0a1b2b220 Log gossip query msgs at GOSSIP instead of TRACE as they're huge 2022-04-14 02:13:22 +00:00
Matt Corallo
b010aeb5f1
Merge pull request #1326 from Psycho-Pirate/peers
Added option to send remote IP to peers
2022-03-23 21:02:17 +00:00
Matt Corallo
cb1d795559
Merge pull request #1374 from TheBlueMatt/2022-03-bindings-cleanups
Trivial Bindings Cleanups
2022-03-23 00:46:31 +00:00
psycho-pirate
fc2f793d19 Argument added in lightning-net-tokio/src/lib.rs and comments updated 2022-03-23 04:44:28 +05:30
psycho-pirate
20a81e5c14 Added network address to methods, added filter_address function with tests, and updated documentation 2022-03-23 04:44:28 +05:30
Matt Corallo
ad19d35a09 Tag some type aliases with (C-not exported)
Type aliases are now more robustly being exported in the C bindings
generator, which requires ensuring we don't include some type
aliases which make no sense in bindings.
2022-03-21 20:09:30 +00:00
Matt Corallo
3cca221f8b Send a gossip_timestamp_filter on connect to enable gossip sync
On connection, if our peer supports gossip queries, and we never
send a `gossip_timestamp_filter`, our peer is supposed to never
send us gossip outside of explicit queries. Thus, we'll end up
always having stale gossip information after the first few
connections we make to peers.

The solution is to send a dummy `gossip_timestamp_filter`
immediately after connecting to peers.
2022-03-17 22:18:33 +00:00
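A sketch of sending the dummy filter on connect; the message fields follow BOLT 7, but the trait and hook below are simplified stand-ins:

```rust
/// BOLT 7 gossip_timestamp_filter fields.
struct GossipTimestampFilter {
    chain_hash: [u8; 32],
    first_timestamp: u32,
    timestamp_range: u32,
}

trait MessageSender {
    fn send_gossip_timestamp_filter(&self, peer_node_id: [u8; 33], msg: GossipTimestampFilter);
}

/// Called once a peer that advertises gossip_queries support finishes the
/// handshake: ask for (roughly) everything so the peer keeps streaming new
/// gossip to us instead of staying silent outside explicit queries.
fn on_peer_connected(sender: &dyn MessageSender, peer_node_id: [u8; 33], chain_hash: [u8; 32]) {
    let filter = GossipTimestampFilter {
        chain_hash,
        // A dummy "send me everything from here on" filter; the exact
        // timestamp chosen matters less than sending *some* filter.
        first_timestamp: 0,
        timestamp_range: u32::MAX,
    };
    sender.send_gossip_timestamp_filter(peer_node_id, filter);
}
```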
Matt Corallo
b1fb7fdb9b Rename RoutingMessageHandler::sync_routing_table to peer_connected
It's somewhat strange to have a trait method which is named after
the intended action, rather than the action that occurred, leaving
it up to the implementor what action they want to take.
2022-03-17 22:04:48 +00:00