Commit graph

154 commits

Author SHA1 Message Date
Matt Corallo
b5a63070f5
Merge pull request #1023 from TheBlueMatt/2021-07-par-gossip-processing 2022-05-11 17:24:16 +00:00
Matt Corallo
46009a5f83 Add a few more simple tests of the PeerHandler
These increase coverage and caught previous lockorder inversions.
2022-05-10 23:40:20 +00:00
Matt Corallo
101bcd8da5 Drop a needless match in favor of an if let 2022-05-10 23:40:20 +00:00
Matt Corallo
96fc0f3453 Drop PeerHolder as it now only has one field 2022-05-10 23:40:20 +00:00
Matt Corallo
eb17464e78 Keep the same read buffer unless the last message was overly large
This avoids repeatedly deallocating and re-allocating a Vec for the
peer read buffer after every message/header.
2022-05-10 23:40:20 +00:00
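
A minimal sketch of the buffer-reuse idea described in the commit above, assuming a hypothetical `read_buffer` field and size constant rather than the actual `PeerManager` internals:

```rust
const USUAL_BUF_CAP: usize = 4 * 1024; // illustrative threshold, not LDK's constant

struct PeerReadState {
    read_buffer: Vec<u8>,
}

impl PeerReadState {
    /// Size the buffer for the next read without throwing the allocation away
    /// each time; only drop it if the previous message blew it up.
    fn prepare_for_next_read(&mut self, next_len: usize) {
        if self.read_buffer.capacity() > USUAL_BUF_CAP {
            // The last message was overly large; don't keep the oversized
            // allocation around for every subsequent small message.
            self.read_buffer = Vec::new();
        }
        self.read_buffer.clear();
        self.read_buffer.resize(next_len, 0);
    }
}
```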
Matt Corallo
ae4ceb71a5 Create a simple FairRwLock to avoid readers starving writers
Because we handle messages (which can take some time, persisting
things to disk or validating cryptographic signatures) with the
top-level read lock, but require the top-level write lock to
connect new peers or handle disconnection, we are particularly
sensitive to writer starvation issues.

Rust's libstd RwLock does not provide any fairness guarantees,
using whatever the OS provides as-is. On Linux, pthreads defaults
to starving writers, which Rust's RwLock exposes to us (without
any configurability).

Here we work around that issue by blocking readers if there are
pending writers, optimizing for readable code over
perfectly-optimized blocking.
2022-05-10 23:40:20 +00:00
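
A rough sketch of that approach (not the exact upstream `FairRwLock`; field and method names are illustrative): readers queue behind any advertised pending writer by briefly taking the write lock themselves.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::{RwLock, RwLockReadGuard, RwLockWriteGuard};

pub struct FairRwLock<T> {
    lock: RwLock<T>,
    waiting_writers: AtomicUsize,
}

impl<T> FairRwLock<T> {
    pub fn new(t: T) -> Self {
        FairRwLock { lock: RwLock::new(t), waiting_writers: AtomicUsize::new(0) }
    }

    pub fn write(&self) -> RwLockWriteGuard<'_, T> {
        // Advertise the pending writer so that new readers hold off.
        self.waiting_writers.fetch_add(1, Ordering::AcqRel);
        let guard = self.lock.write().unwrap();
        self.waiting_writers.fetch_sub(1, Ordering::AcqRel);
        guard
    }

    pub fn read(&self) -> RwLockReadGuard<'_, T> {
        if self.waiting_writers.load(Ordering::Acquire) != 0 {
            // Queue behind the pending writer: taking (and immediately
            // dropping) the write lock blocks until the writer has had its turn.
            let _guard = self.lock.write().unwrap();
        }
        self.lock.read().unwrap()
    }
}
```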
Matt Corallo
97711aef96 Limit blocked PeerManager::process_events waiters to two
Only one instance of PeerManager::process_events can run at a time,
and each run always finishes all available work before returning.
Thus, having several threads blocked on the process_events lock
doesn't accomplish anything but blocking more threads.

Here we limit the number of blocked calls on process_events to two
- one processing events and one blocked at the top which will
process all available events after the first completes.
2022-05-10 23:40:20 +00:00
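
A rough sketch of the counting trick described above, with illustrative field names rather than the actual `PeerManager` internals:

```rust
use std::sync::atomic::{AtomicI32, Ordering};
use std::sync::Mutex;

struct EventProcessor {
    /// Number of threads currently inside (or waiting to enter) process_events.
    blocked_event_processors: AtomicI32,
    event_processing_lock: Mutex<()>,
}

impl EventProcessor {
    fn process_events(&self) {
        // If one thread is already processing and a second is queued to re-run
        // afterwards, a third caller adds nothing: its work will be picked up
        // by the queued run, so just return.
        if self.blocked_event_processors.fetch_add(1, Ordering::AcqRel) >= 2 {
            self.blocked_event_processors.fetch_sub(1, Ordering::AcqRel);
            return;
        }
        {
            let _guard = self.event_processing_lock.lock().unwrap();
            // ... drain and handle all currently-pending events here ...
        }
        self.blocked_event_processors.fetch_sub(1, Ordering::AcqRel);
    }
}
```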
Matt Corallo
4f50a94a3f Avoid the peers write lock unless we need it in timer_tick_occurred
Similar to the previous commit, this avoids "blocking the world" on
every timer tick unless we need to disconnect peers.
2022-05-10 23:40:20 +00:00
Matt Corallo
a5adda18dc Avoid taking the peers write lock during event processing
Because the peers write lock "blocks the world", and happens after
each read event, always taking the write lock has pretty severe
impacts on parallelism. Instead, here, we only take the global
write lock if we have to disconnect a peer.
2022-05-10 23:40:20 +00:00
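
For illustration only (the types and helpers below are stand-ins, not LDK's API), the pattern in the two commits above boils down to doing the work under the read lock and taking the global write lock only when peers actually need to be removed:

```rust
use std::collections::HashMap;
use std::sync::RwLock;

type PeerId = [u8; 33];
struct Peer { needs_disconnect: bool }

fn drive_peers(peers: &RwLock<HashMap<PeerId, Peer>>) {
    // Do the per-peer work under the (shared) read lock and just note who
    // needs to go...
    let mut to_disconnect = Vec::new();
    {
        let peers_read = peers.read().unwrap();
        for (id, peer) in peers_read.iter() {
            if peer.needs_disconnect {
                to_disconnect.push(*id);
            }
        }
    }
    // ...and only "block the world" with the write lock if someone actually
    // has to be removed.
    if !to_disconnect.is_empty() {
        let mut peers_write = peers.write().unwrap();
        for id in to_disconnect {
            peers_write.remove(&id);
        }
    }
}
```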
Matt Corallo
a731efcb68 Process messages with only the top-level read lock held
Users are required to only ever call `read_event` serially
per-peer, thus we actually don't need any locks while we're
processing messages - we can only be processing messages in one
thread per-peer.

That said, we do need to ensure that another thread doesn't
disconnect the peer we're processing messages for, as that could
result in a peer_disconnected call while we're processing a
message for the same peer - somewhat nonsensical.

This significantly improves parallelism especially during gossip
processing as it avoids waiting on the entire set of individual
peer locks to forward a gossip message while several other threads
are validating gossip messages with their individual peer locks
held.
2022-05-10 23:40:20 +00:00
Matt Corallo
7c8b098698 Process messages from peers in parallel in PeerManager.
This adds the required locking to process messages from different
peers simultaneously in `PeerManager`. Note that channel messages
are still processed under a global lock in `ChannelManager`, and
most work is still processed under a global lock in gossip message
handling, but parallelizing message deserialization and message
decryption is somewhat helpful.
2022-05-10 23:40:20 +00:00
Jeffrey Czyz
65920818db
Merge pull request #1389 from lightning-signer/2022-03-bitcoin
Update bitcoin crate to 0.28.1
2022-05-05 14:08:16 -05:00
Devrandom
28d33ff9e0 bitcoin crate 0.28.1 2022-05-05 18:04:42 +02:00
Matt Corallo
171dfeee07
Merge pull request #1452 from tnull/2022-04-honor-gossip-timestamp-filters
Initiate gossip sync only after receiving GossipTimestampFilter.
2022-05-03 19:16:29 +00:00
Elias Rohrer
f8a196c8e3 Initiate sync only after receiving GossipTimestampFilter. 2022-05-03 09:13:09 +02:00
Matt Corallo
d0a1b2b220 Log gossip query msgs at GOSSIP instead of TRACE as they're huge 2022-04-14 02:13:22 +00:00
Matt Corallo
b010aeb5f1
Merge pull request #1326 from Psycho-Pirate/peers
Added option to send remote IP to peers
2022-03-23 21:02:17 +00:00
Matt Corallo
cb1d795559
Merge pull request #1374 from TheBlueMatt/2022-03-bindings-cleanups
Trivial Bindings Cleanups
2022-03-23 00:46:31 +00:00
psycho-pirate
fc2f793d19 Argument added in lightning-net-tokio/src/lib.rs and comments updated 2022-03-23 04:44:28 +05:30
psycho-pirate
20a81e5c14 Added network address to methods and a filter_address function with tests; updated documentation 2022-03-23 04:44:28 +05:30
Matt Corallo
ad19d35a09 Tag some type aliases with (C-not exported)
Type aliases are now more robustly being exported in the C bindings
generator, which requires ensuring we don't include some type
aliases which make no sense in bindings.
2022-03-21 20:09:30 +00:00
Matt Corallo
3cca221f8b Send a gossip_timestamp_filter on connect to enable gossip sync
On connection, if our peer supports gossip queries, and we never
send a `gossip_timestamp_filter`, our peer is supposed to never
send us gossip outside of explicit queries. Thus, we'll end up
always having stale gossip information after the first few
connections we make to peers.

The solution is to send a dummy `gossip_timestamp_filter`
immediately after connecting to peers.
2022-03-17 22:18:33 +00:00
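
A hedged sketch of that idea; the struct mirrors BOLT 7's `gossip_timestamp_filter` fields, but the surrounding types and the exact window chosen upstream are assumptions:

```rust
/// Fields per BOLT 7's gossip_timestamp_filter message.
struct GossipTimestampFilter {
    chain_hash: [u8; 32],
    first_timestamp: u32,
    timestamp_range: u32,
}

/// On peer connection, if the peer advertises gossip queries, enqueue a filter
/// covering (roughly) everything so the peer starts relaying gossip to us.
fn initial_gossip_filter(chain_hash: [u8; 32], now: u32) -> GossipTimestampFilter {
    GossipTimestampFilter {
        chain_hash,
        // Start a bit in the past so recently-broadcast gossip is included;
        // the one-week window used here is illustrative.
        first_timestamp: now.saturating_sub(60 * 60 * 24 * 7),
        timestamp_range: u32::MAX,
    }
}
```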
Matt Corallo
b1fb7fdb9b Rename RoutingMessageHandler::sync_routing_table to peer_connected
It's somewhat strange to have a trait method which is named after
the intended action, rather than the action that occurred, leaving
it up to the implementor what action they want to take.
2022-03-17 22:04:48 +00:00
Matt Corallo
252280d6bf Reduce the number of timer ticks a peer is allowed to take
In 2d3a210897, we increased the
default ping timer in `lightning-background-processor` to ten
seconds from five. However, we didn't change the timer count at
which we disconnect peers if they're not responding, which we
likely should have done. We do so here, as well as update the
documentation for `PeerManager::timer_tick_occurred` to suggest
always ticking the timer every ten seconds instead of five.
2022-03-07 19:07:14 +00:00
Matt Corallo
94639137c3 Correct handling of UnknownRequiredFeature deserialization
Quite some time ago, `UnknownRequiredFeature` was only used when a
gossip message had a missing required feature. These days, it's also
used for any required TLV which we do not understand in any
message. However, the handling of it was never updated in
`PeerManager`, leaving it printing a warning about gossip and
ignoring the message entirely.

Instead, we send a warning message and disconnect.

Closes #1236, as caught by @jkczyz.
2022-01-26 02:12:35 +00:00
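
In outline (with stand-in types rather than LDK's real error/message plumbing), the corrected handling looks roughly like this:

```rust
enum DecodeError { UnknownRequiredFeature, ShortRead }

struct WarningMessage { channel_id: [u8; 32], data: String }

/// Returns Err(()) when the caller should disconnect the peer.
fn on_decode_error(err: DecodeError, send_warning: impl Fn(WarningMessage)) -> Result<(), ()> {
    match err {
        DecodeError::UnknownRequiredFeature => {
            // Previously this was logged as bogus gossip and silently ignored;
            // now we tell the peer why and drop the connection.
            send_warning(WarningMessage {
                channel_id: [0; 32],
                data: "Received a message with an unknown required feature".to_owned(),
            });
            Err(())
        }
        _ => Ok(()),
    }
}
```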
Matt Corallo
2d7b06e619 Send warning instead of error when we encounter bogus gossip 2022-01-11 19:48:20 +00:00
Matt Corallo
e137cfb3c4 Send warning messages when appropriate in gossip handling pipeline 2022-01-11 19:48:20 +00:00
Matt Corallo
1b3249a192 Handle sending and receiving warning messages 2022-01-11 19:48:20 +00:00
Matt Corallo
391fbfbe1a Re-broadcast our own gossip even if it's the same as the last broadcast
Even if our gossip hasn't changed, we should be willing to
re-broadcast it to our peers. All our peers may have been
disconnected the last time we broadcasted it.
2021-11-23 22:18:00 +00:00
Matt Corallo
74828d2435 Separate ChannelAnnouncement and ChannelUpdate broadcast conditions
When a `ChannelUpdate` message is generated for broadcast as a part
of a `BroadcastChannelAnnouncement` event, it may be newer than our
previous `ChannelUpdate` and need to be broadcast. However, if the
`ChannelAnnouncement` had already been seen we wouldn't
re-broadcast either message as the `handle_channel_announcement`
call would fail, short-circuiting the condition to broadcast both.

Instead, we split the broadcast of each message as well as the
conditional so that we always attempt to handle each message and
update our local graph state, then broadcast the message if its
update was processed successfully.
2021-11-23 22:17:18 +00:00
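
Schematically (the handler and broadcast functions below are stand-ins for the real graph-handling plumbing), the fix replaces one short-circuiting condition with two independent ones:

```rust
struct ChannelAnnouncement;
struct ChannelUpdate;

fn handle_channel_announcement(_msg: &ChannelAnnouncement) -> Result<bool, ()> { Ok(true) }
fn handle_channel_update(_msg: &ChannelUpdate) -> Result<bool, ()> { Ok(true) }
fn broadcast<M>(_msg: &M) { /* hand the message to connected peers */ }

fn on_broadcast_channel_announcement(ann: &ChannelAnnouncement, upd: &ChannelUpdate) {
    // Before: `if handle_ann(ann).is_ok() && handle_upd(upd).is_ok() { broadcast both }`
    // short-circuited, so an already-seen announcement suppressed a newer update.
    if handle_channel_announcement(ann).unwrap_or(false) {
        broadcast(ann);
    }
    if handle_channel_update(upd).unwrap_or(false) {
        broadcast(upd);
    }
}
```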
Elias Rohrer
3b4b74bc66 Add a new log-level for gossip messages. 2021-11-22 18:19:08 +01:00
Jeffrey Czyz
bcdd852279
Parameterize NetGraphMsgHandler with NetworkGraph
NetworkGraph is owned by NetGraphMsgHandler, but DefaultRouter requires
a reference to it. Introduce shared ownership to NetGraphMsgHandler so
that both can use the same NetworkGraph.
2021-11-01 13:14:14 -05:00
Matt Corallo
070e22bf09
Merge pull request #1137 from TheBlueMatt/2021-10-ping-fixes
Give peers which are sending us messages/receiving messages from us longer to respond to ping
2021-10-28 20:57:21 +00:00
Matt Corallo
0caa8bb5d5 Log peer public key more thoroughly when logging in peer_handler 2021-10-28 20:06:47 +00:00
Matt Corallo
be123f7d22 Give peers one timer tick to finish handshake before disconnecting
This ensures we don't let a hung connection stick around forever if
the peer never completes the initial handshake.

This also resolves a race where, on receiving a second connection
from a peer, we may reset their_node_id to None to prevent sending
messages even though the `channel_encryptor` reports
`is_ready_for_encryption()`. Sending pings only checks the
`channel_encryptor` status, not `their_node_id`, resulting in an
`unwrap` on `None` in `enqueue_message`.
2021-10-28 20:06:47 +00:00
Matt Corallo
ed4a39fe1e Give peers which are sending us messages longer to respond to ping
See comment for rationale.
2021-10-28 20:06:47 +00:00
Matt Corallo
3f9a7de188 Util-ify enqueueing an encoded message in peer_handler
This marginally simplifies coming commits.
2021-10-28 20:06:47 +00:00
Matt Corallo
e496c9beb6 Constify the ratio in buf limits between forward and init sync msgs 2021-10-28 20:06:47 +00:00
Matt Corallo
4a58e9ad83 Add PeerManager::disconnect_all_peers to avoid complexity in BP
In the coming commits, simply calling `timer_tick_occurred` will no
longer disconnect all peers, so it's helpful to have a utility
method.
2021-10-26 02:04:34 +00:00
Matt Corallo
df8bde9b2e Move the two-AtomicUsize counter in peer_handler to a util struct
We also take this opportunity to drop byte_utils::le64_to_array, as
our MSRV now supports the native to_le_bytes() call.
2021-10-18 22:04:56 +00:00
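
For reference, the native std calls that replace the dropped helper (a trivial illustration, not the old helper's exact signature):

```rust
// u64 -> little-endian bytes, now available directly on the integer types:
fn le64_to_array(n: u64) -> [u8; 8] {
    n.to_le_bytes()
}

// And the inverse:
fn slice_to_le64(bytes: [u8; 8]) -> u64 {
    u64::from_le_bytes(bytes)
}
```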
Matt Corallo
ab49e4101f Make method type on trait impl explicit to help bindings generator
Associated types in C bindings are somewhat of a misnomer - we
concretize each trait to a single struct. Thus, different trait
implementations must still have the same type, which defeats the
point of associated types.

In this particular case, however, we can reasonably special-case
the `Infallible` type, as an instance of it existing implies
something has gone horribly wrong.

In order to help our bindings code figure out how to do so when
referencing a parent trait's associated type, we specify the
explicit type in the implementation method signature.
2021-09-23 04:02:58 +00:00
Matt Corallo
401d03599d Drop redundant generic bounds when the trait requires the bounds 2021-09-22 23:46:40 +00:00
Matt Corallo
e82318d374 Use Infallible for the unconstructable default custom message type
When we landed custom messages, we used the empty tuple for the
custom message type for `IgnoringMessageHandler`. This was fine,
except that we also implemented `Writeable` to panic when writing
a `()`. Later, we added support for anchor output construction in
CommitmentTransaction, signified by setting a field to `Some(())`,
which is serialized as-is.

This causes us to panic when writing a `CommitmentTransaction`
with `opt_anchors` set. Note that we never set it inside of LDK,
but downstream users may.

Instead, we implement `Writeable` to write nothing for `()` and use
`core::convert::Infallible` for the default custom message type as
it is, appropriately, unconstructable.

This also makes it easier to implement various things in bindings,
as we can always assume `Infallible`-conversion logic is
unreachable.
2021-09-22 22:01:40 +00:00
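
A simplified, stand-alone sketch of the two serialization rules described above (the trait here is a stand-in for LDK's `Writeable`):

```rust
use core::convert::Infallible;

trait Writeable {
    fn write(&self, buf: &mut Vec<u8>);
}

// `()` now serializes to nothing, so presence markers like `Some(())`
// (e.g. opt_anchors) can be written without panicking.
impl Writeable for () {
    fn write(&self, _buf: &mut Vec<u8>) {}
}

// Infallible has no values, so this impl can never actually run; it exists
// only so the default (unconstructable) custom message type satisfies the
// trait bounds.
impl Writeable for Infallible {
    fn write(&self, _buf: &mut Vec<u8>) {
        match *self {}
    }
}
```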
vss96
2f0f0c9bf3 Update docs to specify where `process_events` is called 2021-09-21 17:13:22 +00:00
Matt Corallo
801d6e5256
Merge pull request #1068 from TheBlueMatt/2021-09-ser-cleanup
Simplify Message Serialization and Parse TLV Suffix
2021-09-18 01:42:29 +00:00
Matt Corallo
07b674ecb1 Drop writer size hinting/message vec preallocation
In order to avoid significant malloc traffic, messages previously
explicitly stated their serialized length, allowing for Vec
preallocation during the message serialization pipeline. This added
some amount of complexity in the serialization code, but did avoid
some realloc() calls.

Instead, here, we drop all the complexity in favor of a fixed 2KiB
buffer for all message serialization. This should not only be
simpler with a similar reduction in realloc() traffic, but also
may reduce heap fragmentation by allocating identically-sized
buffers more often.
2021-09-18 01:01:41 +00:00
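
In spirit (the constant and trait names here are illustrative, not the exact ones in the codebase), the new scheme is simply:

```rust
/// Every outbound message gets one identically-sized starting allocation
/// rather than a per-message size hint.
const MSG_BUF_ALLOC_SIZE: usize = 2048;

trait SerializeInto {
    fn serialize_into(&self, buf: &mut Vec<u8>);
}

fn encode_message<M: SerializeInto>(msg: &M) -> Vec<u8> {
    // 2KiB covers the vast majority of messages, so realloc() remains rare,
    // and identically-sized buffers may reduce heap fragmentation.
    let mut buf = Vec::with_capacity(MSG_BUF_ALLOC_SIZE);
    msg.serialize_into(&mut buf);
    buf
}
```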
Matt Corallo
24065c89a9
Merge pull request #1074 from p2pderivatives/add-node-id-to-custom-msg-cb
Add node id to custom message callback
2021-09-15 18:54:15 +00:00
Jeffrey Czyz
eff9a47075
Refactor PaymentFailureNetworkUpdate event
MessageSendEvent::PaymentFailureNetworkUpdate served as a hack to pass
an HTLCFailChannelUpdate from ChannelManager to NetGraphMsgHandler via
PeerManager. Instead, remove the event entirely and move the contained
data (renamed NetworkUpdate) to Event::PaymentFailed to be processed by
an event handler.
2021-09-15 11:50:27 -05:00
Tibo-lg
2c6a078d2c Add node id to custom message callback 2021-09-15 09:29:29 +09:00
Matt Corallo
45853b3a95
Merge pull request #1031 from p2pderivatives/dlc-version-generic
Dlc version generic
2021-08-25 17:22:20 +00:00