If we don't currently have the preimage for an inbound HTLC, that
does not guarantee we can never claim it, but only that we cannot
claim it unless we receive the preimage from the channel we
forwarded the HTLC out on.
Thus, we cannot consider a channel to have no claimable balances if
the only remaining output on the commitment transaction is an
inbound HTLC for which we do not have the preimage, as we may be
able to claim it in the future.
This commit addresses this issue by adding a new `Balance` variant
- `MaybePreimageClaimableHTLCAwaitingTimeout`, which is generated
until the HTLC output is spent.
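As a rough illustration, a variant along these lines captures the
"maybe claimable" state (the field names here are assumptions for the
sketch, not LDK's actual definition):

```rust
// Illustrative sketch only; fields are assumptions, not LDK's exact enum.
pub enum Balance {
    /// An inbound HTLC for which we do not yet have the preimage. We may
    /// still learn the preimage from the downstream channel, so this output
    /// cannot be written off until it is actually spent on-chain.
    MaybePreimageClaimableHTLCAwaitingTimeout {
        /// The amount we would claim if we learn the preimage in time.
        claimable_amount_satoshis: u64,
        /// The height at which the counterparty may be able to time the HTLC out.
        claimable_height: u32,
    },
}
```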
Fixes #1620
This uses the various new tracking added in the prior commits to
expose a new `Balance` type - `CounterpartyRevokedOutputClaimable`.
Some nontrivial work is required, however, as we now have to track
HTLC outputs as spendable in a transaction that comes *after* an
HTLC-Success/HTLC-Timeout transaction, which we previously didn't
need to do. Thus, we have to check if an
`onchain_events_awaiting_threshold_conf` event spends a commitment
transaction's HTLC output while walking events. Further, because
we now need to track HTLC outputs after the
HTLC-Success/HTLC-Timeout confirms, and because we have to track
the counterparty's `to_self` output as a contentious output which
could be claimed by either party, we have to examine the
`OnchainTxHandler`'s set of outputs to spend when determining if
certain outputs are still spendable.
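A minimal sketch of the kind of check this implies, with stand-in
types rather than the real monitor internals:

```rust
use bitcoin::{OutPoint, Txid};

// Illustrative stand-in for an entry in `onchain_events_awaiting_threshold_conf`;
// the real LDK event type carries more information.
struct OnchainEvent {
    spent_outpoint: OutPoint,
}

// While walking pending on-chain events, check whether any event spends the
// given HTLC output of a commitment transaction.
fn htlc_output_spent(events: &[OnchainEvent], commitment_txid: Txid, vout: u32) -> bool {
    let htlc_outpoint = OutPoint { txid: commitment_txid, vout };
    events.iter().any(|ev| ev.spent_outpoint == htlc_outpoint)
}
```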
Two new tests are added which test various transaction formats, and
hopefully provide good test coverage of the various revoked output
paths.
The test intended to disconnect a previously connected transaction,
but didn't disconnect enough blocks to do so, leading to it
confirming two conflicting transactions.
In the next few commits this will become an assertion failure.
Instead of backfilling gossip by buffering (up to) ten messages at
a time, only buffer one message at a time, as the peer's outbound
socket buffer drains. This moves the outbound backfill messages out
of `PeerHandler` and into the operating system buffer, where they
arguably belong.
Not buffering causes us to walk the gossip B-Trees somewhat more
often, but avoids allocating vecs for the responses. While it's
probably (without having benchmarked it) a net performance loss, it
simplifies buffer tracking and leaves us with more room to play
with the buffer sizing constants as we add onion message forwarding,
which is an important win.
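A rough sketch of the resulting flow, with illustrative stand-ins for
the actual `PeerHandler` types and constants:

```rust
// All names here are illustrative stand-ins, not the actual PeerHandler API.
struct Peer {
    pending_outbound_buffer: Vec<Vec<u8>>,
    gossip_sync_cursor: u64, // short channel id we last backfilled past
}

const OUTBOUND_BUFFER_LIMIT: usize = 10;

fn maybe_backfill_gossip<F>(peer: &mut Peer, next_gossip_after: F)
where
    F: Fn(u64) -> Option<(u64, Vec<u8>)>,
{
    // Only top up once the buffer has drained; the OS socket buffer, not
    // `PeerHandler`, now holds the bulk of the backfill.
    if peer.pending_outbound_buffer.len() >= OUTBOUND_BUFFER_LIMIT {
        return;
    }
    // Walk the gossip B-Tree from where we left off and enqueue one message.
    if let Some((scid, msg)) = next_gossip_after(peer.gossip_sync_cursor) {
        peer.gossip_sync_cursor = scid;
        peer.pending_outbound_buffer.push(msg);
    }
}
```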
Note that because we change how often we check if we're out of
messages to send before pinging, we slightly change how many
messages are exchanged at once, impacting the
`test_do_attempt_write_data` constants.
Prior to this change, we could have failed to decode a valid payload of size
>= 253. This is because we were decoding the length (a BigSize, big-endian) as
a VarInt (little-endian).
Found in #1652.
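For reference, a self-contained sketch of BigSize decoding as BOLT 1
specifies it (big-endian multi-byte forms; canonical-encoding checks
omitted for brevity):

```rust
use std::io::{self, Read};

// BigSize per BOLT 1: values < 0xfd are a single byte; larger values use a
// tag byte followed by a big-endian integer. A little-endian VarInt reader
// applied to these bytes returns the wrong length, which was the bug.
fn read_big_size<R: Read>(r: &mut R) -> io::Result<u64> {
    let mut tag = [0u8; 1];
    r.read_exact(&mut tag)?;
    match tag[0] {
        0xff => {
            let mut b = [0u8; 8];
            r.read_exact(&mut b)?;
            Ok(u64::from_be_bytes(b))
        }
        0xfe => {
            let mut b = [0u8; 4];
            r.read_exact(&mut b)?;
            Ok(u32::from_be_bytes(b) as u64)
        }
        0xfd => {
            let mut b = [0u8; 2];
            r.read_exact(&mut b)?;
            Ok(u16::from_be_bytes(b) as u64)
        }
        n => Ok(n as u64),
    }
}
```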
This consolidates our various checks on peer buffer space into the
`Peer` impl itself, making the thresholds at which we stop taking
various actions on a peer more readable as a whole.
This commit was primarily authored by `Valentine Wallace
<vwallace@protonmail.com>` with some amendments by `Matt Corallo
<git@bluematt.me>`.
Pre-existing to this PR, we were reading next packet bytes with io::Read::read,
which is not guaranteed to read all the bytes we need, but may return after
reading only *some* of them.
We fix this by using read_exact, which is guaranteed to read all the next-hop
packet bytes or return an error.
This required adapting `onion_utils::decode_next_hop` to work for both payments
and onion messages.
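A small sketch of the difference, assuming a hypothetical fixed-length
hop-data read:

```rust
use std::io::{self, Read};

// Why `read` alone was a bug: a single call may legally return after filling
// only part of the buffer. `read_exact` loops until the buffer is full (or
// returns an error), which is what decoding a fixed-size hop packet needs.
fn read_hop_data<R: Read>(r: &mut R, len: usize) -> io::Result<Vec<u8>> {
    let mut buf = vec![0u8; len];
    // `r.read(&mut buf)?` could return Ok(n) with n < len; read_exact cannot.
    r.read_exact(&mut buf)?;
    Ok(buf)
}
```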
Currently we just print out the path_id of any onion messages we receive. In
the future, these received onion messages will be redirected to their
respective handlers: e.g. an invoice_request will go to an InvoiceHandler,
custom onion messages will go to a custom handler, etc.
This adds several utilities in service of adding
`OnionMessenger::send_onion_message`, which can send to either an unblinded
pubkey or a blinded route. Sending custom TLVs and sending an onion message
containing a reply path are not yet supported.
We also need to split the `construct_keys_callback` macro into two macros to
avoid an unused assignment warning.
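Sketching the destination type this implies, with stub types standing
in for the real key and blinded-route types:

```rust
// Stub types standing in for the real secp256k1 public key and blinded route.
struct PublicKey;
struct BlindedRoute;

// An onion message can be sent either directly to a node id, or to a blinded
// route whose introduction point forwards it onward.
enum Destination {
    Node(PublicKey),
    BlindedRoute(BlindedRoute),
}
```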
This method will help us avoid retrieving our node secret, something we want to
get rid of entirely. It will be used in upcoming commits when decoding the
onion message packet, and in future PRs to help us get rid of
KeysInterface::get_node_secret usages across the codebase
We need to add a new `Packet` struct because onion message packet `hop_data`
fields can be of variable length, whereas regular payment packets are always
1366 bytes.
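A sketch of the shape such a struct might take (field names are
assumptions; 1366 = 1 version byte + 33 pubkey bytes + 1300 hop-data
bytes + 32 HMAC bytes for payment onions):

```rust
// Illustrative sketch; field names are assumptions, not LDK's exact struct.
pub struct Packet {
    pub version: u8,
    pub public_key: [u8; 33],
    /// Variable-length, unlike the fixed 1300-byte payment onion hop data.
    pub hop_data: Vec<u8>,
    pub hmac: [u8; 32],
}
```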
Co-authored-by: Valentine Wallace <vwallace@protonmail.com>
Co-authored-by: Jeffrey Czyz <jkczyz@gmail.com>
It is the proportion of the channel value to configure as the
`their_channel_reserve_satoshis` for both outbound and inbound channels.
It determines the minimum balance that the other node has to maintain on
their side at all times.
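For illustration, a hypothetical helper showing how a proportional
reserve could be derived; the parts-per-million representation is an
assumption of this sketch:

```rust
// Hypothetical helper: derive the counterparty's reserve from a proportional
// config value expressed in parts per million of the channel value.
fn their_channel_reserve_satoshis(
    channel_value_satoshis: u64,
    proportional_millionths: u32, // e.g. 10_000 == 1% of the channel value
) -> u64 {
    (channel_value_satoshis as u128 * proportional_millionths as u128 / 1_000_000) as u64
}
```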
Currently `decode_update_add_htlc_onion` returns the `channel_state`
lock to ensure that `internal_update_add_htlc` holds a single
`channel_state` lock during the entire function execution. This is
unnecessary, and since we are moving the channel storage to the
`per_peer_state`, this no longer achieves the goal it was intended for.
We therefore avoid returning the `channel_state` from
`decode_update_add_htlc_onion`, and just retake the lock in
`internal_update_add_htlc` instead.
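A schematic sketch of the new pattern, using a plain `Mutex` stand-in
for the real channel state:

```rust
use std::sync::Mutex;

struct ChannelState; // stand-in for the real channel state

// The decode step takes and drops the lock internally rather than handing
// the guard back to the caller.
fn decode_update_add_htlc_onion(channel_state: &Mutex<ChannelState>) {
    let _lock = channel_state.lock().unwrap();
    // ... decode the onion, then drop the lock on return ...
}

fn internal_update_add_htlc(channel_state: &Mutex<ChannelState>) {
    decode_update_add_htlc_onion(channel_state);
    // Retake the lock here instead of receiving it from the decode step.
    let _lock = channel_state.lock().unwrap();
    // ... apply the HTLC update ...
}
```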
Blinded routes can be provided as destinations for onion messages, when the
recipient prefers to remain anonymous.
We also add supporting utilities for constructing blinded path keys, and
control TLVs structs representing blinded payloads prior to being
encoded/encrypted. These utilities and structs will be re-used in upcoming
commits for sending and receiving/forwarding onion messages.
Finally, add utilities for reading the padding from an onion message's
encrypted TLVs without an intermediate Vec.
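One way to consume padding without an intermediate Vec, as a hedged
sketch: discard the bytes through a small fixed scratch buffer.

```rust
use std::io::{self, Read};

// Read and discard `remaining` padding bytes without allocating a Vec.
fn skip_padding<R: Read>(r: &mut R, mut remaining: u64) -> io::Result<()> {
    let mut scratch = [0u8; 32];
    while remaining > 0 {
        let take = remaining.min(scratch.len() as u64) as usize;
        r.read_exact(&mut scratch[..take])?;
        remaining -= take as u64;
    }
    Ok(())
}
```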
In 4703d4e725 we changed `PeerManager::socket_disconnected` to no
longer require that it not be called on sockets which the
`PeerManager` itself decided to disconnect. However, we forgot to
remove the scary warnings on the `new_{inbound,outbound}_connection`
functions which warned of the old behavior.
We do so here.
We add `HTLCHandlingFailedConditions` to express the failure parameters
that will be enforced by a new macro, `expect_pending_htlcs_forwardable_conditions`.
Saturating a channel beyond 1/4 of its capacity seems like a more
reasonable threshold for avoiding a path than 1/2, especially given
we should still be willing to send a payment with a lower
saturation limit if it comes to that.
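In sketch form (names illustrative):

```rust
// A candidate path is now considered saturating beyond a quarter of the
// channel's capacity, rather than half.
fn saturates_channel(amount_to_send_msat: u64, effective_capacity_msat: u64) -> bool {
    amount_to_send_msat > effective_capacity_msat / 4
}
```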
This requires an (obvious) change to some router tests, but also
requires a change to the `fake_network_test`, opting to simply
remove some over-limit test code there - `fake_network_test` was
our first ever functional test, and while it worked great to ensure
LDK worked at all on day one, we now have a rather large breadth
of functional tests, and a broad "does it work at all" test is no
longer all that useful.
When an HTLC fails, we currently rely on the scorer learning the
failed channel and assigning an infinite (`u64::max_value()`)
penalty to the channel so as to avoid retrying over the exact same
path (if there's only one available path). This is common when
trying to pay a mobile client behind an LSP if the mobile client is
currently offline.
This leads to the scorer being overly conservative in some cases -
returning `u64::max_value()` when a given path hasn't been tried
for a given payment may not be the best decision, even if that
channel failed 50 minutes ago.
By tracking channels which failed on a payment part level and
explicitly refusing to route over them we can relax the
requirements on the scorer, allowing it to make different decisions
on how to treat channels that failed relatively recently without
causing payments to retry the same path forever.
This does have the drawback that it could allow two separate parts
of a payment to traverse the same path even though that path just
failed; however, this should only occur if the payment is going to
fail anyway, at least as long as the scorer is properly learning.
Closes #1241, superseding #1252.
In the next commit we add lockorder testing based on the line each
mutex was created on rather than the particular mutex instance.
This causes some additional test failures because of lockorder
inversions for the same mutex across different tests, which are
fixed here.
In order to avoid failing to find paths due to the new channel
saturation limit, if we fail to find enough paths, we simply
disable the saturation limit for further path finding iterations.
Because we can now increase the maximum sent over a given channel
during routefinding, we may now generate redundant paths for the
same payment. Because this is wasteful in the network, we add an
additional pass during routefinding to merge redundant paths.
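A sketch of what the merge pass does, with a stand-in `Path` type
keyed by its sequence of short channel ids:

```rust
use std::collections::HashMap;

// Stand-in for a collected route; the real router type carries more data.
struct Path {
    hops: Vec<u64>, // short channel ids, in order
    amount_msat: u64,
}

// If two collected paths traverse exactly the same hops, fold their amounts
// together so we don't send redundant parts over the network.
fn merge_redundant_paths(paths: Vec<Path>) -> Vec<Path> {
    let mut merged: HashMap<Vec<u64>, Path> = HashMap::new();
    for path in paths {
        let amt = path.amount_msat;
        merged
            .entry(path.hops.clone())
            .and_modify(|existing| existing.amount_msat += amt)
            .or_insert(path);
    }
    merged.into_values().collect()
}
```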
Note that two tests which previously attempted to send exactly the
available liquidity over a channel which charged an absolute fee
need updating - in those cases the router will first collect a path
that is saturation-limited, then attempt to collect a second path
without a saturation limit while still honoring the existing
utilized capacity on the channel, causing failure as the absolute
fee must be included.