Commit graph

3549 commits

Matt Corallo
8a5670245a Add internal docs for ChannelMonitor::payment_preimages 2022-05-28 00:02:49 +00:00
Matt Corallo
a12d37e063 Drop return value from fail_htlc_backwards, clarify docs
`ChannelManager::fail_htlc_backwards`' bool return value is quite
confusing - just because it returns false doesn't mean the payment
wasn't (already) failed. Worse, in some race cases around shutdown
where a payment was claimed before an unclean shutdown and then
retried on startup, `fail_htlc_backwards` could return true even
though a duplicate copy of the same payment was claimed, with the
claim event not yet seen by the user.

While it's possible to use it correctly, it's somewhat confusing to
have a return value at all, and it definitely lends itself to misuse.

Instead, we should push users towards a model where they don't care
if `fail_htlc_backwards` succeeds - either they've locally marked
the payment as failed (prior to seeing any `PaymentReceived`
events) and will fail any attempts to pay it, or they have not and
the payment is still receivable until its timeout is reached.

We can revisit this decision based on user feedback, but will need
to very carefully document the potential failure modes here if we
do.
2022-05-28 00:02:49 +00:00
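
A minimal sketch of the usage model this pushes toward, with hypothetical stand-in types rather than LDK's real `ChannelManager`; the point is that the caller no longer branches on a return value:

```rust
// Hypothetical stand-ins for illustration; only the shape of
// `fail_htlc_backwards` (no return value) mirrors this commit.
struct PaymentHash([u8; 32]);
struct LocalStore;
impl LocalStore {
    fn mark_failed(&mut self, _hash: &PaymentHash) { /* persist locally */ }
}
struct ChannelManagerLike;
impl ChannelManagerLike {
    fn fail_htlc_backwards(&self, _hash: &PaymentHash) { /* nothing returned */ }
}

fn abandon_receivable_payment(
    cm: &ChannelManagerLike, store: &mut LocalStore, hash: PaymentHash,
) {
    // Mark the payment failed locally *first*, so any later
    // `PaymentReceived` event for it gets rejected rather than acted on...
    store.mark_failed(&hash);
    // ...then fail back whatever is pending, without caring whether this
    // "succeeded": either HTLCs were failed now, or there was nothing
    // (left) to fail.
    cm.fail_htlc_backwards(&hash);
}
```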
Matt Corallo
11c2f12baa Do additional pre-flight checks before claiming a payment
As an additional sanity check, before claiming a payment we verify
that the full amount the payment should be for is available in
`claimable_htlcs`. Concretely, this prevents one somewhat-absurd
edge case where a user may receive an MPP payment, then wait many
*blocks* before claiming it, allowing us to fail the pending HTLCs
and the sender to retry some subset of the payment before we go to
claim. More generally, this is just good belt-and-suspenders
protection against any edge cases we may have missed.
2022-05-28 00:02:49 +00:00
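
A rough sketch of this pre-flight check with a simplified stand-in for the claimable-HTLC data; MPP parts each carry the sender's intended `total_msat`, which the parts on hand must add up to:

```rust
// Simplified stand-in for LDK's internal claimable-HTLC structure.
struct ClaimableHtlc {
    value_msat: u64, // this part's amount
    total_msat: u64, // the sender's intended total for the payment
}

/// Returns true only if the HTLCs on hand add up to the full amount the
/// sender committed to, so we never claim a partial set.
fn preflight_ok(htlcs: &[ClaimableHtlc]) -> bool {
    let expected_total = match htlcs.first() {
        Some(htlc) => htlc.total_msat,
        None => return false,
    };
    let sum: u64 = htlcs.iter().map(|h| h.value_msat).sum();
    sum >= expected_total
}
```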
Matt Corallo
0e2542176b Provide a redundant Event::PaymentClaimed on restart if needed
If we crashed during a payment claim and then detected a partial
claim on restart, we should ensure the user is aware that the
payment has been claimed. We do so here by using the new
partial-claim detection logic to create a `PaymentClaimed` event.
2022-05-28 00:02:49 +00:00
Matt Corallo
0a2a40c4fd Add a PaymentClaimed event to indicate a payment was claimed
This replaces the return value of `claim_funds` with an event. It
does not yet change behavior in any material way.
2022-05-28 00:02:49 +00:00
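
An illustrative sketch of consuming the new event in place of a `claim_funds` return value; the enum below is a stand-in, not LDK's full `Event` type, though the field names follow the new variant:

```rust
// Stand-in event enum for illustration only.
enum Event {
    PaymentClaimed { payment_hash: [u8; 32], amount_msat: u64 },
    Other,
}

fn handle_event(ev: Event) {
    match ev {
        // Only here -- not at the `claim_funds` call site -- do we treat
        // the payment as settled, e.g. by shipping the goods.
        Event::PaymentClaimed { payment_hash, amount_msat } => {
            println!("claimed {} msat for hash {:02x?}", amount_msat, payment_hash);
        }
        Event::Other => {}
    }
}
```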
Matt Corallo
28c70ac506 Ensure all HTLCs for a claimed payment are claimed on startup
While the HTLC-claim process happens across all MPP parts under one
lock, this doesn't imply that they are claimed fully atomically on
disk. Ultimately, an application can crash after persisting one
`ChannelMonitorUpdate` out of multiple monitor updates needed for
the full claim.

Previously, this would leave us in a very bad state - because of
the all-channels-available check in `claim_funds` we'd refuse to
claim the payment again on restart (even though the
`PaymentReceived` event will be passed to the user again), and we'd
end up having partially claimed the payment!

The fix for the consistency part of this issue is pretty
straightforward - just check for this condition on startup and
complete the claim across all channels/`ChannelMonitor`s if we
detect it.

This still leaves us in a confused state from the perspective of
the user, however - we've actually claimed a payment but when they
call `claim_funds` we return `false` indicating it could not be
claimed.
2022-05-26 00:53:11 +00:00
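
A sketch of that startup check under simplified assumptions: the types are hypothetical stand-ins, and claims are keyed here by preimage for brevity (in reality by payment hash):

```rust
use std::collections::HashMap;

// Hypothetical stand-ins for LDK internals, for illustration only.
struct Monitor { preimages: Vec<[u8; 32]> }
struct PendingClaims { unclaimed: HashMap<[u8; 32], u64> } // preimage -> msat pending

fn complete_partial_claims(monitors: &[Monitor], pending: &mut PendingClaims) {
    for monitor in monitors {
        for preimage in &monitor.preimages {
            // A preimage a monitor already knows means a claim began
            // before the crash; finish it across the remaining channels
            // instead of refusing to claim again.
            if pending.unclaimed.remove(preimage).is_some() {
                // ...replay the claim across all channels here (elided).
            }
        }
    }
}
```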
Matt Corallo
bd1e20d49e Store an events::PaymentPurpose with each claimable payment
In fc77c57c3c we stopped using the
`FinalOnionHopData` in `OnionPayload::Invoice` directly and intend
to remove it eventually. However, in the next few commits we need
access to the payment secret when claiming a payment, as we create
a new `PaymentPurpose` for a new event during the claim process.

In order to get access to a `PaymentPurpose` without having access
to the `FinalOnionHopData` we here change the storage of
`claimable_htlcs` to store a single `PaymentPurpose` explicitly
with each set of claimable HTLCs.
2022-05-19 18:16:53 +00:00
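
The shape of the storage change, sketched with simplified stand-in types (LDK's real `PaymentPurpose` also carries preimage fields):

```rust
use std::collections::HashMap;

// Simplified stand-ins; the point is the shape of the map change.
struct ClaimableHtlc;
enum PaymentPurpose {
    InvoicePayment { payment_secret: [u8; 32] },
    SpontaneousPayment,
}

// Before: a claim had to rebuild the purpose from each HTLC's
// `FinalOnionHopData`.
type ClaimableBefore = HashMap<[u8; 32], Vec<ClaimableHtlc>>;
// After: one `PaymentPurpose` is stored alongside the HTLC set, so the
// claim path can emit an event without touching onion data.
type ClaimableAfter = HashMap<[u8; 32], (PaymentPurpose, Vec<ClaimableHtlc>)>;
```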
Matt Corallo
2a4259566e Enable removal of OnionPayload::Invoice::_legacy_hop_data later
In fc77c57c3c we stopped using the
`FinalOnionHopData` in `OnionPayload::Invoice` directly and renamed
it `_legacy_hop_data` with the intent of removing it in a few
versions. However, we continue to check that it was included in the
serialized data, meaning we would not be able to remove it without
breaking the ability to serialize full `ChannelManager`s.

This fixes that by making `_legacy_hop_data` an `Option`, which we
will happily handle just fine if it's `None`.
2022-05-19 18:16:53 +00:00
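
A generic sketch of the downgrade-compatibility pattern being applied; LDK actually uses its TLV serialization macros rather than hand-rolled code like this:

```rust
// Simplified stand-in types for illustration.
struct FinalOnionHopData { total_msat: u64 }

struct InvoicePayload {
    // Still written out so older versions can read it, but no longer
    // required on read, so a future version can stop writing it.
    _legacy_hop_data: Option<FinalOnionHopData>,
}

fn read_payload(field_present: bool) -> InvoicePayload {
    InvoicePayload {
        _legacy_hop_data: if field_present {
            Some(FinalOnionHopData { total_msat: 0 }) // parsed in reality
        } else {
            None // happily handled: absence is not an error
        },
    }
}
```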
Matt Corallo
36817e0539
Merge pull request #1476 from tnull/2022-05-maximum-path-length
Consider maximum path length during path finding.
2022-05-18 19:22:42 +00:00
Elias Rohrer
87c9684c53 Consider maximum path length during path finding. 2022-05-18 19:20:00 +02:00
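
A sketch of the constraint itself, assuming the classic 20-hop onion limit from BOLT 4 (the real router also budgets for larger per-hop payloads):

```rust
// Assumption: the onion packet can encode at most 20 hops.
const MAX_PATH_LENGTH: usize = 20;

/// Reject candidate paths the onion packet could not encode.
fn path_is_usable(hops: &[u64 /* short_channel_id */]) -> bool {
    hops.len() <= MAX_PATH_LENGTH
}
```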
valentinewallace
b20aea1cb0
Merge pull request #1472 from TheBlueMatt/2022-06-less-secp-ctx
Pull secp256k1 contexts from per-peer to per-PeerManager
2022-05-17 16:10:09 -04:00
Matt Corallo
998cdc0865
Merge pull request #1418 from bruteforcecat/timeout-outbound-paymnet-retires-instead-of-count-out
add timeout retry strategy to outbound payment
2022-05-17 17:32:46 +00:00
KaFai Choi
b7d6d0dc8e
add timeout retry strategy to outbound payment 2022-05-17 13:27:22 +07:00
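
A sketch mirroring the shape of the payer's retry strategies, simplified for illustration:

```rust
use std::time::{Duration, Instant};

// Simplified mirror of the payer's retry strategies.
enum Retry {
    Attempts(usize),
    Timeout(Duration),
}

fn should_retry(strategy: &Retry, attempts_so_far: usize, first_attempt: Instant) -> bool {
    match strategy {
        // Count-based: give up after N retries.
        Retry::Attempts(max) => attempts_so_far <= *max,
        // Time-based (this PR): keep retrying until the deadline passes,
        // however many attempts that takes.
        Retry::Timeout(max_elapsed) => first_attempt.elapsed() <= *max_elapsed,
    }
}
```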
Matt Corallo
342698fe5a
Merge pull request #1485 from ViktorTigerstrom/2022-05-add-force-close-param
Add missing `counterparty_node_id` in `force_close_channel` calls
2022-05-16 21:48:51 +00:00
Viktor Tigerström
d543ac04c4 Add missing counterparty_node_id in force_close_channel calls 2022-05-16 22:25:46 +02:00
Arik Sosman
a5629e5ca2
Merge pull request #1479 from ViktorTigerstrom/2022-05-pass-counterparty-id-to-functions
Pass `counterparty_node_id` to `ChannelManager` functions
2022-05-16 12:44:16 -07:00
valentinewallace
257a6f3e48
Merge pull request #1475 from atalw/2022-04-paymentforwarded-event
Expose `next_channel_id` in `PaymentForwarded` event
2022-05-16 14:21:39 -04:00
atalw
1ae1de97fd
Add next_channel_id in PaymentForwarded event
This update also includes a minor refactor. The return type of
`pending_monitor_events` has been changed to a `Vec` of tuples
including the `OutPoint` type, associating each `Vec` of
`MonitorEvent`s with a funding outpoint.

We've also renamed `source/sink_channel_id` to `prev/next_channel_id`
in favour of clarity.
2022-05-15 09:41:18 +05:30
Matt Corallo
e5c988e00c
Merge pull request #1429 from TheBlueMatt/2022-04-drop-no-conn-possible 2022-05-14 19:35:47 +00:00
Viktor Tigerström
c1798443b0 Update OpenChannelRequest documentation
As the `counterparty_node_id` is now required to be passed back to the
`ChannelManager` to accept or reject an inbound channel request, the
documentation is updated to reflect that.
2022-05-14 20:33:05 +02:00
Viktor Tigerström
70fa465924 Pass counterparty_node_id to accept_inbound_channel 2022-05-14 20:32:44 +02:00
Viktor Tigerström
c581bab8be Pass counterparty_node_id to funding_transaction_generated 2022-05-14 20:32:44 +02:00
Viktor Tigerström
14e52cd7a6 Pass counterparty_node_id to force_close_channel 2022-05-14 20:32:44 +02:00
Viktor Tigerström
84a6e7bc51 Pass counterparty_node_id to close_channel functions 2022-05-14 20:32:44 +02:00
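
The rough shape of the API change across this series, with stand-in types (`ApiError` here approximates LDK's `APIError`): the caller must now identify the peer alongside the channel:

```rust
// Stand-in types for illustration only.
struct PublicKey;
struct ApiError;
struct ChannelManagerLike;

impl ChannelManagerLike {
    fn force_close_channel(
        &self,
        _channel_id: &[u8; 32],
        _counterparty_node_id: &PublicKey, // newly required by this series
    ) -> Result<(), ApiError> {
        // ...look the channel up under the named peer and close it (elided).
        Ok(())
    }
}
```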
KaFai Choi
10f9795149
move Time trait from scoring mod to util::time and make it visible within the crate 2022-05-14 09:52:59 +07:00
Viktor Tigerström
7893ddc721 Add counterparty_node_id to FundingGenerationReady 2022-05-14 02:15:32 +02:00
Jeffrey Czyz
d166044f27
Merge pull request #1480 from tnull/2022-05-fix-test-warnings
Fix unused code warnings in tests.
2022-05-13 12:20:56 -05:00
Elias Rohrer
79f42d72e2 Fix unused code warnings. 2022-05-13 18:03:51 +02:00
Matt Corallo
e6aaf7c72d Pull secp256k1 contexts from per-peer to per-PeerManager
Instead of including a `Secp256k1` context per
`PeerChannelEncryptor`, which is relatively expensive memory-wise
and nontrivial CPU-wise to construct, we should keep one for all
peers and simply reuse it.

This is relatively trivial so we do so in this commit.

Since it's trivial to do so, we also take this opportunity to
randomize the new `PeerManager` context.
2022-05-11 20:02:29 +00:00
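
A sketch of the shared-context approach; `Secp256k1::new` and `seeded_randomize` are the real `secp256k1` crate calls, while the surrounding struct is a stand-in for `PeerManager`:

```rust
use bitcoin::secp256k1::{All, Secp256k1};

// One randomized context owned by the manager and shared by all peers,
// instead of one per `PeerChannelEncryptor`.
struct PeerManagerLike {
    secp_ctx: Secp256k1<All>,
}

impl PeerManagerLike {
    fn new(random_bytes: [u8; 32]) -> Self {
        let mut secp_ctx = Secp256k1::new();
        // Re-randomize once at startup for side-channel hardening;
        // `seeded_randomize` works without the `rand` feature.
        secp_ctx.seeded_randomize(&random_bytes);
        Self { secp_ctx }
    }
}
```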
Matt Corallo
b5a63070f5
Merge pull request #1023 from TheBlueMatt/2021-07-par-gossip-processing 2022-05-11 17:24:16 +00:00
Matt Corallo
3cae233491
Merge pull request #1477 from dunxen/2022-05-invoice-expiry-nits 2022-05-11 16:12:15 +00:00
Duncan Dean
ea016cd824
Address post-ACK formatting nits from #1474 2022-05-11 10:57:38 +02:00
Matt Corallo
46009a5f83 Add a few more simple tests of the PeerHandler
These increase coverage and caught previous lockorder inversions.
2022-05-10 23:40:20 +00:00
Matt Corallo
cc7f859c01 Add support for testing recvd messages in TestChannelMessageHandler 2022-05-10 23:40:20 +00:00
Matt Corallo
45c1411b16 Require PartialEq for wire::Message in cfg(test)
...and implement wire::Type for `()` for `feature = "_test_utils"`.
2022-05-10 23:40:20 +00:00
Matt Corallo
101bcd8da5 Drop a needless match in favor of an if let 2022-05-10 23:40:20 +00:00
Matt Corallo
b222be233b [net-tokio] Explicitly yield after processing messages from a peer
This somewhat reduces instances of peers being disconnected after a
single timer interval, at least on Tokio 1.14.
2022-05-10 23:40:20 +00:00
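
A sketch of the yield pattern in a hypothetical read loop; `tokio::task::yield_now` is the actual Tokio call involved:

```rust
// Hypothetical read loop: without the yield, a peer with a deep message
// backlog can monopolize the executor thread long enough for other
// peers to be seen as unresponsive at the next timer tick.
async fn read_loop(mut conn: tokio::net::TcpStream) -> std::io::Result<()> {
    use tokio::io::AsyncReadExt;
    let mut buf = [0u8; 4096];
    loop {
        let len = conn.read(&mut buf).await?;
        if len == 0 { return Ok(()); }
        // ...hand `buf[..len]` to the message processor (elided)...
        // Explicitly give other connection futures a turn before
        // reading (and processing) the next chunk from this peer.
        tokio::task::yield_now().await;
    }
}
```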
Matt Corallo
96fc0f3453 Drop PeerHolder as it now only has one field 2022-05-10 23:40:20 +00:00
Matt Corallo
eb17464e78 Keep the same read buffer unless the last message was overly large
This avoids repeatedly deallocating and reallocating a Vec for the
peer read buffer after every message/header.
2022-05-10 23:40:20 +00:00
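
A sketch of the buffer-reuse policy; the size threshold is illustrative rather than LDK's actual constant:

```rust
// Illustrative threshold, not LDK's real limit.
const USUAL_BUFFER_LIMIT: usize = 4096;

fn recycle_read_buffer(buf: &mut Vec<u8>) {
    if buf.capacity() > USUAL_BUFFER_LIMIT {
        // An overly large message grew the buffer; don't pin that much
        // memory for the connection's lifetime.
        *buf = Vec::with_capacity(USUAL_BUFFER_LIMIT);
    } else {
        // Common case: clear in place, no dealloc/realloc per message.
        buf.clear();
    }
}
```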
Matt Corallo
ae4ceb71a5 Create a simple FairRwLock to avoid readers starving writers
Because we handle messages (which can take some time, persisting
things to disk or validating cryptographic signatures) with the
top-level read lock, but require the top-level write lock to
connect new peers or handle disconnection, we are particularly
sensitive to writer starvation issues.

Rust's libstd RwLock does not provide any fairness guarantees,
using whatever the OS provides as-is. On Linux, pthreads defaults
to starving writers, which Rust's RwLock exposes to us (without
any configurability).

Here we work around that issue by blocking readers if there are
pending writers, optimizing for readable code over
perfectly-optimized blocking.
2022-05-10 23:40:20 +00:00
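
A compact sketch in the same spirit as the `FairRwLock` this commit adds: a pending-writer counter around std's `RwLock`, trading a reader spin for simplicity:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::{RwLock, RwLockReadGuard, RwLockWriteGuard};

pub struct FairRwLock<T> {
    lock: RwLock<T>,
    waiting_writers: AtomicUsize,
}

impl<T> FairRwLock<T> {
    pub fn new(t: T) -> Self {
        Self { lock: RwLock::new(t), waiting_writers: AtomicUsize::new(0) }
    }
    pub fn write(&self) -> RwLockWriteGuard<'_, T> {
        // Announce the pending writer before blocking on the lock.
        self.waiting_writers.fetch_add(1, Ordering::AcqRel);
        let guard = self.lock.write().unwrap();
        self.waiting_writers.fetch_sub(1, Ordering::AcqRel);
        guard
    }
    pub fn read(&self) -> RwLockReadGuard<'_, T> {
        // Don't grab a read lock while a writer is queued, or a steady
        // stream of readers could starve it indefinitely.
        while self.waiting_writers.load(Ordering::Acquire) != 0 {
            std::thread::yield_now();
        }
        self.lock.read().unwrap()
    }
}
```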
Matt Corallo
f90983180e Wake reader future when we fail to flush socket buffer
This avoids any extra calls to `read_event` after a write fails to
flush the write buffer fully, as is required by the PeerManager
API (though it isn't critical).
2022-05-10 23:40:20 +00:00
Matt Corallo
97711aef96 Limit blocked PeerManager::process_events waiters to two
Only one instance of PeerManager::process_events can run at a time,
and each run always finishes all available work before returning.
Thus, having several threads blocked on the process_events lock
doesn't accomplish anything but blocking more threads.

Here we limit the number of blocked calls on process_events to two
- one processing events and one blocked at the top which will
process all available events after the first completes.
2022-05-10 23:40:20 +00:00
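
A sketch of the two-waiter limit under simplified assumptions: one thread processes, at most one queues behind it, and everyone else returns immediately since the queued thread will pick up their work:

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Mutex;

struct EventProcessor {
    processing: Mutex<()>,
    have_blocked_waiter: AtomicBool,
}

impl EventProcessor {
    fn process_events(&self) {
        let _guard = match self.processing.try_lock() {
            Ok(g) => g,
            Err(_) => {
                // Someone is processing. Become the single queued waiter,
                // or bail if that slot is already taken -- the waiter will
                // process all events available once it gets the lock.
                if self.have_blocked_waiter.swap(true, Ordering::AcqRel) {
                    return;
                }
                let g = self.processing.lock().unwrap();
                self.have_blocked_waiter.store(false, Ordering::Release);
                g
            }
        };
        // ...drain all pending events here (elided)...
    }
}
```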
Matt Corallo
4f50a94a3f Avoid the peers write lock unless we need it in timer_tick_occurred
Similar to the previous commit, this avoids "blocking the world" on
every timer tick unless we need to disconnect peers.
2022-05-10 23:40:20 +00:00
Matt Corallo
a5adda18dc Avoid taking the peers write lock during event processing
Because the peers write lock "blocks the world", and happens after
each read event, always taking the write lock has pretty severe
impacts on parallelism. Instead, here, we only take the global
write lock if we have to disconnect a peer.
2022-05-10 23:40:20 +00:00
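
A sketch of the deferred-write-lock pattern with stand-in types: per-peer work happens under the shared read lock, and the write lock is taken only if someone actually needs disconnecting:

```rust
use std::collections::HashMap;
use std::sync::RwLock;

// Simplified stand-in for per-peer state.
struct Peer { needs_disconnect: bool }

fn process_events(peers: &RwLock<HashMap<u64, Peer>>) {
    let mut to_disconnect = Vec::new();
    {
        let peers_read = peers.read().unwrap();
        for (id, peer) in peers_read.iter() {
            // ...send queued messages, etc. (elided)...
            if peer.needs_disconnect {
                to_disconnect.push(*id);
            }
        }
    } // read lock dropped here
    // Only "block the world" when there is actually work for it.
    if !to_disconnect.is_empty() {
        let mut peers_write = peers.write().unwrap();
        for id in to_disconnect {
            peers_write.remove(&id);
        }
    }
}
```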
Matt Corallo
b1550524cf [net-tokio] Call PeerManager::process_events without blocking reads
Unlike very ancient versions of lightning-net-tokio, this does not
rely on a single global process_events future, but instead has one
per connection. This could still cause significant contention, so
we'll ensure only two process_events calls can exist at once in
the next few commits.
2022-05-10 23:40:20 +00:00
Matt Corallo
a731efcb68 Process messages with only the top-level read lock held
Users are required to only ever call `read_event` serially
per-peer, thus we actually don't need any locks while we're
processing messages - we can only be processing messages in one
thread per-peer.

That said, we do need to ensure that another thread doesn't
disconnect the peer we're processing messages for, as that could
result in a `peer_disconnected` call while we're processing a
message for the same peer - somewhat nonsensical.

This significantly improves parallelism especially during gossip
processing as it avoids waiting on the entire set of individual
peer locks to forward a gossip message while several other threads
are validating gossip messages with their individual peer locks
held.
2022-05-10 23:40:20 +00:00
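
A sketch of the lock hierarchy this lands on, with simplified types: the top-level `RwLock` guards only the peer map, while each peer's state sits behind its own `Mutex`:

```rust
use std::collections::HashMap;
use std::sync::{Mutex, RwLock};

struct Peer { /* decryption state, pending reads, ... */ }
struct PeerMap {
    peers: RwLock<HashMap<u64, Mutex<Peer>>>,
}

impl PeerMap {
    fn read_event(&self, peer_id: u64, data: &[u8]) {
        let peers = self.peers.read().unwrap(); // shared: many peers at once
        if let Some(peer) = peers.get(&peer_id) {
            let _peer = peer.lock().unwrap(); // exclusive: just this peer
            // ...decrypt and handle `data` with no global lock (elided)...
            let _ = data;
        }
        // Disconnection takes the write lock, so it cannot yank the peer
        // entry out from under us mid-message.
    }
}
```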
Matt Corallo
7c8b098698 Process messages from peers in parallel in PeerManager.
This adds the required locking to process messages from different
peers simultaneously in `PeerManager`. Note that channel messages
are still processed under a global lock in `ChannelManager`, and
most work is still processed under a global lock in gossip message
handling, but parallelizing message deserialization and message
decryption is somewhat helpful.
2022-05-10 23:40:20 +00:00
Matt Corallo
edd3030905
Merge pull request #1474 from dunxen/2022-05-actually-add-expiry
lightning-invoice/utils: Actually add expiry to invoices
2022-05-10 23:23:57 +00:00
Duncan Dean
3369b2962d
Add expiry to non-phantom invoice utils 2022-05-10 20:22:01 +02:00
Duncan Dean
15a840147a
Actually add expiry to phantom invoice utils 2022-05-10 20:18:06 +02:00