Commit graph

3484 commits

Matt Corallo
96fc0f3453 Drop PeerHolder as it now only has one field 2022-05-10 23:40:20 +00:00
Matt Corallo
eb17464e78 Keep the same read buffer unless the last message was overly large
This avoids repeatedly deallocating-allocating a Vec for the peer
read buffer after every message/header.
2022-05-10 23:40:20 +00:00
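
A minimal sketch of the reuse pattern this commit describes; the threshold and helper name here are illustrative assumptions, not LDK's actual code:

```rust
// Hypothetical buffer-recycling helper: keep the allocation between
// messages unless the last message left the buffer overly large.
const MAX_KEPT_CAPACITY: usize = 8 * 1024; // assumed "overly large" cutoff

fn recycle_read_buffer(buf: &mut Vec<u8>) {
    if buf.capacity() > MAX_KEPT_CAPACITY {
        *buf = Vec::new(); // drop the oversized allocation
    } else {
        buf.clear(); // reuse the allocation for the next message/header
    }
}
```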
Matt Corallo
ae4ceb71a5 Create a simple FairRwLock to avoid readers starving writers
Because we handle messages (which can take some time, persisting
things to disk or validating cryptographic signatures) with the
top-level read lock, but require the top-level write lock to
connect new peers or handle disconnection, we are particularly
sensitive to writer starvation issues.

Rust's libstd RwLock does not provide any fairness guarantees,
using whatever the OS provides as-is. On Linux, pthreads defaults
to starving writers, which Rust's RwLock exposes to us (without
any configurability).

Here we work around that issue by blocking readers if there are
pending writers, optimizing for readable code over
perfectly-optimized blocking.
2022-05-10 23:40:20 +00:00
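
The workaround boils down to a pending-writer counter that readers check before taking the lock. A minimal Rust sketch of that idea, assuming a simple yield-based backoff (LDK's actual `FairRwLock` may differ in detail):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::{RwLock, RwLockReadGuard, RwLockWriteGuard};

// Readers back off while any writer is queued, trading a little reader
// throughput for a guarantee that writers make progress.
pub struct FairRwLock<T> {
    lock: RwLock<T>,
    waiting_writers: AtomicUsize,
}

impl<T> FairRwLock<T> {
    pub fn new(t: T) -> Self {
        Self { lock: RwLock::new(t), waiting_writers: AtomicUsize::new(0) }
    }

    pub fn write(&self) -> RwLockWriteGuard<'_, T> {
        self.waiting_writers.fetch_add(1, Ordering::AcqRel);
        let guard = self.lock.write().unwrap();
        self.waiting_writers.fetch_sub(1, Ordering::AcqRel);
        guard
    }

    pub fn read(&self) -> RwLockReadGuard<'_, T> {
        // Yield while a writer is waiting so it cannot be starved by a
        // continuous stream of incoming readers.
        while self.waiting_writers.load(Ordering::Acquire) != 0 {
            std::thread::yield_now();
        }
        self.lock.read().unwrap()
    }
}
```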
Matt Corallo
f90983180e Wake reader future when we fail to flush socket buffer
This avoids any extra calls to `read_event` after a write fails to
flush the write buffer fully, as is required by the PeerManager
API (though it isn't critical).
2022-05-10 23:40:20 +00:00
Matt Corallo
97711aef96 Limit blocked PeerManager::process_events waiters to two
Only one instance of PeerManager::process_events can run at a time,
and each run always finishes all available work before returning.
Thus, having several threads blocked on the process_events lock
doesn't accomplish anything but blocking more threads.

Here we limit the number of blocked calls on process_events to two
- one processing events and one blocked at the top which will
process all available events after the first completes.
2022-05-10 23:40:20 +00:00
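
One way to express the "at most two callers" rule is an atomic count of threads inside the function. A sketch under that assumption (names hypothetical, not LDK's internals):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Mutex;

struct EventProcessor {
    process_events_lock: Mutex<()>,
    // Threads currently inside process_events, running or blocked.
    callers: AtomicUsize,
}

impl EventProcessor {
    fn process_events(&self) {
        // If one caller is running and one is already queued behind it,
        // the queued caller will see all pending work; a third caller
        // adds nothing, so it bails out immediately.
        if self.callers.fetch_add(1, Ordering::AcqRel) >= 2 {
            self.callers.fetch_sub(1, Ordering::AcqRel);
            return;
        }
        let guard = self.process_events_lock.lock().unwrap();
        // ...drain and handle all available events here...
        drop(guard);
        self.callers.fetch_sub(1, Ordering::AcqRel);
    }
}
```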
Matt Corallo
4f50a94a3f Avoid the peers write lock unless we need it in timer_tick_occurred
Similar to the previous commit, this avoids "blocking the world" on
every timer tick unless we need to disconnect peers.
2022-05-10 23:40:20 +00:00
Matt Corallo
a5adda18dc Avoid taking the peers write lock during event processing
Because the peers write lock "blocks the world", and happens after
each read event, always taking the write lock has pretty severe
impacts on parallelism. Instead, here, we only take the global
write lock if we have to disconnect a peer.
2022-05-10 23:40:20 +00:00
Matt Corallo
b1550524cf [net-tokio] Call PeerManager::process_events without blocking reads
Unlike very ancient versions of lightning-net-tokio, this does not
rely on a single global process_events future, but instead has one
per connection. This could still cause significant contention, so
we'll ensure only two process_events calls can exist at once in
the next few commits.
2022-05-10 23:40:20 +00:00
Matt Corallo
a731efcb68 Process messages with only the top-level read lock held
Users are required to only ever call `read_event` serially
per-peer, thus we actually don't need any locks while we're
processing messages - we can only be processing messages in one
thread per-peer.

That said, we do need to ensure that another thread doesn't
disconnect the peer we're processing messages for, as that could
result in a `peer_disconnected` call while we're processing a
message for the same peer - somewhat nonsensical.

This significantly improves parallelism especially during gossip
processing as it avoids waiting on the entire set of individual
peer locks to forward a gossip message while several other threads
are validating gossip messages with their individual peer locks
held.
2022-05-10 23:40:20 +00:00
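
The locking scheme this and the previous commits describe, reduced to a hypothetical sketch (a top-level RwLock over the peer map, one Mutex per peer; not LDK's exact types):

```rust
use std::collections::HashMap;
use std::sync::{Mutex, RwLock};

struct Peer {
    read_buffer: Vec<u8>, // plus decryption state, pending messages, ...
}

struct PeerMap {
    // The top-level RwLock guards the map; each peer has its own Mutex.
    peers: RwLock<HashMap<u64, Mutex<Peer>>>,
}

impl PeerMap {
    // Callers must invoke read_event serially per-peer, so the read
    // lock plus this one peer's Mutex is all the locking we need.
    fn read_event(&self, peer_id: u64, data: &[u8]) {
        let peers = self.peers.read().unwrap();
        if let Some(peer) = peers.get(&peer_id) {
            let mut peer = peer.lock().unwrap();
            peer.read_buffer.extend_from_slice(data);
            // ...decrypt and handle any complete messages...
        }
        // Disconnection requires the write lock, so no peer can be
        // removed while we hold the read lock here.
    }
}
```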
Matt Corallo
7c8b098698 Process messages from peers in parallel in PeerManager.
This adds the required locking to process messages from different
peers simultaneously in `PeerManager`. Note that channel messages
are still processed under a global lock in `ChannelManager`, and
most work is still processed under a global lock in gossip message
handling, but parallelizing message deserialization and message
decryption is somewhat helpful.
2022-05-10 23:40:20 +00:00
Matt Corallo
6418c9ef0d
Merge pull request #1444 from ViktorTigerstrom/2022-04-use-counterparty-htlc-max-for-chan-updates
Set `ChannelUpdate` `htlc_maximum_msat` using the peer's value
2022-05-03 22:44:26 +00:00
Viktor Tigerström
224d470d38 Add correct ChannelUpdate htlc_maximum_msat test 2022-05-03 22:42:37 +02:00
Viktor Tigerström
5c7bfa7392 Set ChannelUpdate htlc_maximum_msat using the peer's value
Use the `counterparty_max_htlc_value_in_flight_msat` value, not the
`holder_max_htlc_value_in_flight_msat` value, when creating the
`htlc_maximum_msat` value for `ChannelUpdate` messages.

BOLT 7 specifies that the field MUST be less than or equal to
`max_htlc_value_in_flight_msat` received from the peer, which we
currently are not guaranteed to adhere to by using the holder value.
2022-05-03 22:42:37 +02:00
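
A hypothetical helper mirroring the fix, assuming both inputs are in msat (the real computation lives inside LDK's channel code):

```rust
// Cap the advertised htlc_maximum_msat at the counterparty's
// max-in-flight limit (and never above the channel's value), per the
// BOLT 7 requirement quoted above.
fn htlc_maximum_msat_for_update(
    channel_value_msat: u64,
    counterparty_max_htlc_value_in_flight_msat: u64,
) -> u64 {
    channel_value_msat.min(counterparty_max_htlc_value_in_flight_msat)
}
```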
Viktor Tigerström
d13061c512 Add test for configurable in-flight limit 2022-05-03 22:42:37 +02:00
Viktor Tigerström
0beaaa76de Make our in-flight limit percentage configurable
Add a config field
`ChannelHandshakeConfig::max_inbound_htlc_value_in_flight_percent_of_channel`
which sets the percentage of the channel value we cap the total value of
outstanding inbound HTLCs to.

This field can be set to a value between 1 and 100, expressed as a
whole-number percentage of the channel value.

Note that:
* If configured to a value other than the default of 10, any new
channels created with the non-default value will cause versions of LDK
prior to 0.0.104 to refuse to read the `ChannelManager`.

* This caps the total value for inbound HTLCs in-flight only, and
there's currently no way to configure the cap for the total value of
outbound HTLCs in-flight.

* The requirements for your node being online to ensure the safety of
HTLC-encumbered funds are different from those for non-HTLC-encumbered
funds.
This makes this an important knob to restrict exposure to loss due to
being offline for too long. See
`ChannelHandshakeConfig::our_to_self_delay` and
`ChannelConfig::cltv_expiry_delta` for more information.

Default value: 10.
Minimum value: 1, any values less than 1 will be treated as 1 instead.
Maximum value: 100, any values larger than 100 will be treated as 100
instead.
2022-05-03 22:35:43 +02:00
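
A sketch of the cap computation under the clamping rules stated above; the helper name is illustrative, not LDK's internal function:

```rust
// value_sat * 10 * percent equals value_msat * percent / 100.
fn max_inbound_htlc_value_in_flight_msat(
    channel_value_satoshis: u64,
    configured_percent: u8,
) -> u64 {
    // Out-of-range values are treated as the nearest bound (1 or 100).
    let percent = configured_percent.clamp(1, 100) as u64;
    channel_value_satoshis * 10 * percent
}
```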
Matt Corallo
171dfeee07
Merge pull request #1452 from tnull/2022-04-honor-gossip-timestamp-filters
Initiate gossip sync only after receiving GossipTimestampFilter.
2022-05-03 19:16:29 +00:00
Matt Corallo
5feef253ec
Merge pull request #1460 from TheBlueMatt/2022-04-liquidity-log
Provide a utility to log the ProbabilisticScorer's contents
2022-05-03 16:53:10 +00:00
Elias Rohrer
f8a196c8e3 Initiate sync only after receiving GossipTimestampFilter. 2022-05-03 09:13:09 +02:00
Matt Corallo
e3a305cdb6
Merge pull request #1447 from andozw/seana.20220422.avoid-storing-FinalOnionHopData
Avoid storing a full FinalOnionHopData in OnionPayload::Invoice
2022-05-02 20:21:44 +00:00
Matt Corallo
75c3df018c Provide a utility to log the ProbabilisticScorer's contents
I wrote this when debugging a user's scorer and figured it may be
useful upstream.
2022-05-02 20:00:44 +00:00
Matt Corallo
fc77c57c3c Avoid storing a full FinalOnionHopData in OnionPayload::Invoice
We only use it to check the amount when processing MPP parts, but
we store the full object (including the new payment metadata) there.
Because we now store the amount in the parent structure, there is
no need for it at all in the `OnionPayload`. Sadly, for
serialization compatibility, we need it to continue to exist, at
least temporarily, but we can avoid populating the new fields in
that case.
2022-05-02 11:41:02 -07:00
Matt Corallo
62f8df5bcb Store total payment amount in ClaimableHTLC explicitly
...instead of accessing it via the `OnionPayload::Invoice` form.
This may be useful if we add MPP keysend support, but is directly
useful to allow us to drop `FinalOnionHopData` from `OnionPayload`.
2022-05-02 09:37:23 -07:00
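
The rough shape after this change, heavily simplified and with assumed field names: the MPP total lives on the HTLC itself rather than being read out of `OnionPayload::Invoice`.

```rust
enum OnionPayload {
    Invoice, // FinalOnionHopData now needed only for serialization compat
    Spontaneous([u8; 32]), // keysend preimage
}

struct ClaimableHTLC {
    value: u64,      // this part's amount, in msat
    total_msat: u64, // total expected across all parts of the payment
    onion_payload: OnionPayload,
}
```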
Matt Corallo
26c0150c12 Pass FinalOnionHopData to payment verify by reference, not clone 2022-05-02 09:37:23 -07:00
Matt Corallo
9bdce47f0e
Merge pull request #1451 from TheBlueMatt/2022-04-moar-mpp-fail-test
Add test coverage for failure of inconsistent MPP parts
2022-04-29 19:50:37 +00:00
Matt Corallo
dc8479a620
Merge pull request #1454 from TheBlueMatt/2022-04-fuzz-underflow
Reject channels if the total reserves are larger than the funding
2022-04-28 21:56:49 +00:00
Matt Corallo
f53d13bcb8
Merge pull request #1425 from valentinewallace/2021-04-wumbo
Wumbo!
2022-04-28 21:14:19 +00:00
Matt Corallo
92c87bae19 Correct error when a peer opens a channel with a huge push_msat
The calculation uses the reserve, so we should mention it in the
error we send to our peers.
2022-04-28 19:46:22 +00:00
Matt Corallo
2826af75a5 Reject channels if the total reserves are larger than the funding
The `full_stack_target` fuzzer managed to find a subtraction
underflow in the new `Channel::get_htlc_maximum` function where we
subtract both sides' reserve values from the channel funding. Such
a channel is obviously completely useless, so we should reject it
during opening instead of integer-underflowing later.

Thanks to Chaincode Labs for providing the fuzzing resources which
found this bug!
2022-04-28 19:46:13 +00:00
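
A sketch of the guard: validate at channel open that both sides' reserves fit in the funding amount, instead of underflowing later (function name and error type are illustrative):

```rust
fn value_after_reserves(
    funding_satoshis: u64,
    holder_reserve_satoshis: u64,
    counterparty_reserve_satoshis: u64,
) -> Result<u64, &'static str> {
    // checked_sub returns None on underflow, turning the fuzzer-found
    // crash into a clean rejection of the useless channel.
    funding_satoshis
        .checked_sub(holder_reserve_satoshis)
        .and_then(|v| v.checked_sub(counterparty_reserve_satoshis))
        .ok_or("total channel reserves exceed the funding value")
}
```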
Valentine Wallace
5cfe19ef02
Enable wumbo channels to be created
Also redefine MAX_FUNDING_SATOSHIS_NO_WUMBO to no longer be off-by-one.
2022-04-28 15:01:21 -04:00
Valentine Wallace
e49f738630
config: add max_funding_satoshis to enforce for inbound channels
and a bonus grammar fix
2022-04-28 15:01:17 -04:00
Matt Corallo
7a8344a738 Add test coverage for failure of inconsistent MPP parts
When we receive multiple HTLCs which claim to be a part of the same
MPP but which are inconsistent for some reason, we should fail the
inconsistent HTLCs but keep the first HTLCs up until the first
inconsistency.

This works, but it turns out there was no test coverage, so we add
some here.
2022-04-28 02:44:19 +00:00
Matt Corallo
b9d097b07b Add a get_route!() macro for tests which only need a route 2022-04-28 02:44:19 +00:00
Matt Corallo
62edee5689
Merge pull request #1435 from TheBlueMatt/2022-04-1126-first-step 2022-04-28 02:43:04 +00:00
Matt Corallo
61629bc00e Consolidate Channel balance fetching into one fn returning struct
Some simple code motion to clean up how channel balances get
fetched.
2022-04-27 20:21:20 +00:00
valentinewallace
df1c4ee150
Merge pull request #1405 from TheBlueMatt/2022-04-log-scoring
Log as channel liquidities are/not updated in ProbabilisticScorer
2022-04-27 12:34:08 -04:00
valentinewallace
54e9c07ad9
Merge pull request #1453 from TheBlueMatt/2022-04-listen-filtered-blocks
Expand `chain::Listen` trivially to accept filtered block data
2022-04-27 12:25:30 -04:00
Matt Corallo
7181b53aa4
Merge pull request #1421 from TheBlueMatt/2022-04-less-gossip-trace-spam
Log gossip query msgs at GOSSIP instead of TRACE as they're huge
2022-04-27 15:59:48 +00:00
Matt Corallo
6e298ff30d Explicitly log a warning when a payment failed on the first hop 2022-04-27 00:42:08 +00:00
Matt Corallo
d629a7edb7 Expand chain::Listen trivially to accept filtered block data
The `chain::Listen` interface provides a block-connection-based
alternative to the `chain::Confirm` interface, which supports
providing transaction data at a time separate from the block
connection time.

For users who are downloading the full headers tree (e.g. from a
node over the Bitcoin P2P protocol) but who are not downloading
full blocks (e.g. because they're using BIP 157/158 filtering)
there is no API that matches exactly their event stream -
`chain::Listen` requires full blocks for each block,
`chain::Confirm` requires breaking each connection event into two
calls.

Given it's incredibly trivial to take a `TransactionData` in
addition to a `Block` in `chain::Listen`, we do so here, adding a
default-implementation `block_connected` which simply creates the
`TransactionData` - which is ultimately what all of the
`chain::Listen` implementations currently do anyway.

Closes #1128.
2022-04-26 19:14:19 +00:00
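
A sketch approximating the shape of the change; exact paths and signatures in LDK and the `bitcoin` crate of that era may differ:

```rust
use bitcoin::blockdata::block::{Block, BlockHeader};
use bitcoin::blockdata::transaction::Transaction;

// Each transaction paired with its index within the block.
type TransactionData<'a> = [(usize, &'a Transaction)];

trait Listen {
    fn filtered_block_connected(
        &self, header: &BlockHeader, txdata: &TransactionData, height: u32,
    );

    // Default implementation: build the TransactionData from the full
    // block, which is what implementations previously did by hand.
    fn block_connected(&self, block: &Block, height: u32) {
        let txdata: Vec<_> = block.txdata.iter().enumerate().collect();
        self.filtered_block_connected(&block.header, &txdata, height);
    }
}
```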
valentinewallace
72069bfc9d
Merge pull request #1436 from TheBlueMatt/2022-04-event-process-try-lock
Reorder the BP loop to make manager persistence more reliable
2022-04-26 13:02:29 -04:00
Matt Corallo
050b19cd9b Reorder the BP loop to make manager persistence more reliable
The main loop of the background processor has this line:
`peer_manager.process_events(); // Note that this may block on ChannelManager's locking`
which does, indeed, sometimes block waiting on the `ChannelManager`
to finish whatever it's doing. Specifically, it's the only place in
the background processor loop that we block waiting on the
`ChannelManager`, so if the `ChannelManager` is relatively busy, we
may end up being blocked there most of the time.

This should be fine, except today we had a user whose node was
particularly slow in processing some channel updates, resulting in
the background processor being blocked there (as expected). Then,
when the channel updates were completed (and persisted) the next
thing the background processor did was hand the user events to
process, creating yet more channel updates. Ultimately, the user's
node crashed before finishing the event processing. This left us
with an updated monitor on disk and an outdated manager, and they
lost the channel on startup.

Here we simply move the above quoted line to after the normal event
processing, ensuring the next thing we do after blocking on
`ChannelManager` locks is persist the manager, prior to event
handling.
2022-04-26 15:29:16 +00:00
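
A schematic of the reordered loop, not self-contained and heavily simplified (the real background processor also handles timers, persistence signaling, errors, and shutdown):

```rust
loop {
    // Normal event processing first...
    channel_manager.process_pending_events(&event_handler);
    chain_monitor.process_pending_events(&event_handler);
    // ...then the one call that may block on ChannelManager's locks.
    peer_manager.process_events();
    // The next action after (possibly) blocking is persisting the
    // manager, before further event handling can create new updates.
    persister.persist_manager(&*channel_manager)?;
}
```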
valentinewallace
d1b984d864
Merge pull request #1448 from TheBlueMatt/2022-04-fix-warnings
Fix several "unused" warnings introduced in #1417
2022-04-25 14:48:07 -04:00
Valentine Wallace
fa59544972
channel: refactor max funding consts
MAX_FUNDING_SATOSHIS will no longer be accurately named once wumbo is merged.
Also, we'll want to check that wumbo channels don't exceed the total bitcoin supply.
2022-04-25 14:07:46 -04:00
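
The two limits in play, with an assumed combined check (the helper is hypothetical; the constant values follow BOLT 2 and the Bitcoin supply cap):

```rust
pub const MAX_FUNDING_SATOSHIS_NO_WUMBO: u64 = (1 << 24) - 1; // non-wumbo cap
pub const TOTAL_BITCOIN_SUPPLY_SATOSHIS: u64 = 21_000_000 * 100_000_000;

fn funding_satoshis_ok(funding_satoshis: u64, wumbo_supported: bool) -> bool {
    funding_satoshis < TOTAL_BITCOIN_SUPPLY_SATOSHIS
        && (wumbo_supported || funding_satoshis <= MAX_FUNDING_SATOSHIS_NO_WUMBO)
}
```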
Valentine Wallace
7bf7b3a446
channelmanager: remove bogus panic warning from docs 2022-04-25 14:04:07 -04:00
Matt Corallo
68ee7ef5c7 Sort Event variants somewhat more sensibly 2022-04-25 15:04:21 +00:00
Matt Corallo
1af705579b Separate ChannelDetails' outbound capacity from the next HTLC max
`ChannelDetails::outbound_capacity_msat` describes the total amount
available for sending across several HTLCs, basically just our
balance minus the reserve value maintained by our counterparty.
However, when routing we use it to guess the maximum amount we can
send in a single additional HTLC, which it is not.

There are numerous reasons why our balance may not match the amount
we can send in a single HTLC, whether the HTLC in-flight limit, the
channel's HTLC maximum, or our feerate buffer.

This commit splits the `outbound_capacity_msat` field into two -
`outbound_capacity_msat` and `outbound_htlc_limit_msat`, setting us
up for correctly handling our next-HTLC-limit in the future.

This also addresses the first of the reasons why the values may
not match - the max-in-flight limit. The inaccuracy is ultimately
tracked as #1126.
2022-04-25 15:04:21 +00:00
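
An illustrative computation of why the two fields differ, using only the max-in-flight limit this commit addresses (names and inputs are assumptions for the sketch):

```rust
// The amount one more HTLC can carry is bounded by the counterparty's
// max-in-flight headroom (among other limits), not just by our
// spendable balance.
fn outbound_htlc_limit_msat(
    outbound_capacity_msat: u64,
    counterparty_max_htlc_value_in_flight_msat: u64,
    pending_outbound_htlcs_msat: u64,
) -> u64 {
    let headroom = counterparty_max_htlc_value_in_flight_msat
        .saturating_sub(pending_outbound_htlcs_msat);
    outbound_capacity_msat.min(headroom)
}
```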
Matt Corallo
fa9fc0d094 Fix several "unused" warnings introduced in #1417 2022-04-24 16:03:26 +00:00
Matt Corallo
637fb88037
Merge pull request #1378 from ViktorTigerstrom/2022-03-include-htlc-min-max
Include htlc min/max for ChannelDetails and ChannelCounterparty
2022-04-21 18:09:20 +00:00
Viktor Tigerström
3c67687066 Add htlc min/max to create invoice functions 2022-04-21 12:27:51 +02:00
Viktor Tigerström
63f0a31b59 Add outbound min/max to ChannelCounterparty 2022-04-21 12:27:51 +02:00