Some `NodeFeatures` will, in the future, represent features which
are not enabled by the `ChannelManager`, but by other message
handlers. Thus, it doesn't make sense to determine the
node feature bits in the `ChannelManager`.
The simplest fix for this is to instead generate the
node_announcement in `PeerManager`, asking all the connected
handlers which feature bits they support and simply OR'ing them
together. While this may not be sufficient in the future as it
doesn't consider feature bit dependencies, support for those could
be handled at the feature level in the future.
This commit moves the `broadcast_node_announcement` function to
`PeerHandler` but does not yet implement feature OR'ing.
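Conceptually, the eventual OR'ing could look like the following sketch, with a hypothetical handler trait and raw feature bytes standing in for the real `NodeFeatures` type (illustrative only, not LDK's actual API):

```rust
// Hypothetical sketch of feature OR'ing across message handlers.
trait ProvidesNodeFeatures {
    /// Raw feature bytes this handler wants set in our node_announcement.
    fn provided_node_features(&self) -> Vec<u8>;
}

fn combined_node_features(handlers: &[&dyn ProvidesNodeFeatures]) -> Vec<u8> {
    let mut combined: Vec<u8> = Vec::new();
    for handler in handlers {
        let features = handler.provided_node_features();
        if features.len() > combined.len() {
            combined.resize(features.len(), 0);
        }
        // Simple byte-wise OR; real feature logic would also need to
        // handle dependency bits, as noted above.
        for (i, &byte) in features.iter().enumerate() {
            combined[i] |= byte;
        }
    }
    combined
}
```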
When we connect to a new peer, immediately send them any
channel_announcement and channel_update messages for any public
channels we have with other peers. This allows us to stop sending
those messages on a timer when they have not changed and ensures
we are sending messages when we have peers connected, rather than
broadcasting at startup when we have no peers connected.
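Roughly (a sketch with hypothetical types, not the actual `PeerManager` API):

```rust
use std::collections::VecDeque;

struct Peer {
    outbound_buffer: VecDeque<Vec<u8>>,
}

/// On connect, queue the (pre-serialized) channel_announcement and
/// channel_update for each public channel we're a party to, rather
/// than rebroadcasting them on a timer.
fn on_peer_connected(peer: &mut Peer, public_channels: &[(Vec<u8>, Vec<u8>)]) {
    for (announcement, update) in public_channels {
        peer.outbound_buffer.push_back(announcement.clone());
        peer.outbound_buffer.push_back(update.clone());
    }
}
```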
When we fail to forward a probe HTLC at all and immediately fail it
(e.g. due to the first hop channel closing) we'd previously
spuriously generate only a `PaymentPathFailed` event. This violates
the expected API, as users expect a `ProbeFailed` event instead.
This fixes the oversight by ensuring we generate the correct event.
Thanks to @jkczyz for pointing out this issue.
`fail_holding_cell_htlcs` calls through to
`fail_htlc_backwards_internal` for HTLCs that need to be
failed-backwards but opts to generate its own payment failure
events for `HTLCSource::OutboundRoute` HTLCs. There is no reason
for that as `fail_htlc_backwards_internal` will also happily
generate (now-)equivalent events for `HTLCSource::OutboundRoute`
HTLCs.
Thus, we can drop the redundant code and always call
`fail_htlc_backwards_internal` for each HTLC in
`fail_holding_cell_htlcs`.
The `rejected_by_dest` field of the `PaymentPathFailed` event has
always been a bit of a misnomer, as it's really more about whether
a payment should be retried than about where it failed. Now is as
good a time as any to rename it.
When our counterparty is the payment destination and we receive
an `HTLCFailReason::Reason` in `fail_htlc_backwards_internal` we
currently always set `rejected_by_dest` in the `PaymentPathFailed`
event, implying the HTLC should *not* be retried.
There are a number of cases where we use `HTLCFailReason::Reason`,
but most should reasonably be treated as retryable even if our
counterparty was the destination (i.e. `!rejected_by_dest`):
* If an HTLC times out on-chain, this doesn't imply that the
payment is no longer retryable, though the peer may well be
offline so retrying may not be very useful,
* If a commitment transaction "containing" a dust HTLC is
confirmed on-chain, this definitely does not imply the payment
is no longer retryable,
* If the channel we intended to relay over was closed (or
force-closed) we should retry over another path,
* If the channel we intended to relay over did not have enough
capacity we should retry over another path,
* If we received an update_fail_malformed_htlc message from our
peer, we likely should *not* retry; however, this should be
exceedingly rare, and nearly never appears in practice
Thus, this commit simply disables the behavior here, opting to
treat all `HTLCFailReason::Reason` errors as retryable.
Note that prior to 93e645daf4 this
change would not have made sense as it would have resulted in us
retrying the payment over the same channel in some cases, however
we now "blame" our own channel and will avoid it when routing for
the same payment.
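As a rough sketch of the resulting logic (the enum shape and names are assumptions, not the actual types):

```rust
enum HTLCFailReason {
    /// An encrypted onion failure received from a downstream node.
    LightningError { err: Vec<u8> },
    /// A failure we observed locally (timeout, closed channel, etc.).
    Reason { failure_code: u16, data: Vec<u8> },
}

/// `onion_error_from_destination` stands in for the result of decoding
/// an onion failure and attributing it to the final node.
fn rejected_by_dest(reason: &HTLCFailReason, onion_error_from_destination: bool) -> bool {
    match reason {
        HTLCFailReason::LightningError { .. } => onion_error_from_destination,
        // Previously `true` when our counterparty was the destination;
        // now always treated as retryable, per the cases above.
        HTLCFailReason::Reason { .. } => false,
    }
}
```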
We've seen a bit of user confusion about the requirements for event
handling, largely because the idempotency and consistency
requirements weren't super clearly phrased. While we're at it, we
also consolidate some documentation out of the event handling
function onto the trait itself.
Fixes #1675.
Previously, we wouldn't mark a dust HTLC as permanently resolved if
the commitment transaction went on chain. This resulted in us
always considering the HTLC as pending on restart, when we load the
pending payments set from the monitors.
Fixes #1653.
If we receive a channel_update for one of our private channels, we
will not log the message at the usual TRACE log level as the
message falls into the gossip range. However, updates for our own
channels aren't *just* gossip, as we store that info and it changes
how we generate invoices. Thus, we add a log in `ChannelManager`
here at the DEBUG log level.
This allows users who don't wish to block a full thread to receive
persistence events.
The `Future` added here is really just a trivial list of callbacks,
but from that we can build a (somewhat inefficient)
std::future::Future implementation and can (at least once a mapping
for Box<dyn Trait> is added) include the future in no-std bindings
as well.
Fixes #1595
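A minimal sketch of such a callback-list future (an assumed shape, not the actual implementation):

```rust
use std::sync::{Arc, Mutex};

#[derive(Default)]
struct FutureState {
    complete: bool,
    callbacks: Vec<Box<dyn FnOnce() + Send>>,
}

/// Completes at most once; callbacks registered after completion fire
/// immediately. A std::future::Future impl can be layered on top by
/// registering a callback which wakes the task's Waker.
#[derive(Clone, Default)]
pub struct Future(Arc<Mutex<FutureState>>);

impl Future {
    pub fn register_callback(&self, callback: Box<dyn FnOnce() + Send>) {
        let mut state = self.0.lock().unwrap();
        if state.complete {
            drop(state);
            callback();
        } else {
            state.callbacks.push(callback);
        }
    }

    pub fn complete(&self) {
        // Take the callbacks out before invoking them so we don't hold
        // the lock while user code runs.
        let callbacks = {
            let mut state = self.0.lock().unwrap();
            state.complete = true;
            std::mem::take(&mut state.callbacks)
        };
        for callback in callbacks {
            callback();
        }
    }
}
```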
In this commit, we check if a peer's outbound buffer has room for onion
messages and, if so, pull them from an implementer of a new trait,
OnionMessageProvider.
Makes sure channel messages are prioritized over OMs, and OMs are prioritized
over gossip.
The onion_message module remains private until further rate limiting is added.
Adds the boilerplate needed for PeerManager and OnionMessenger to work
together, with some corresponding docs and misc updates mostly due to the
PeerManager public API changing.
This allows us to better prioritize channel messages over gossip broadcasts and
lays groundwork for rate limiting onion messages more simply, since they won't
be competing with gossip broadcasts for space in the main message queue.
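The resulting priority order, as a sketch (structure and names are assumptions):

```rust
use std::collections::VecDeque;

struct Peer {
    channel_messages: VecDeque<Vec<u8>>,
    onion_messages: VecDeque<Vec<u8>>,
    gossip_broadcasts: VecDeque<Vec<u8>>,
}

impl Peer {
    /// Channel messages always go first, then onion messages; gossip
    /// broadcasts are only sent when nothing else is waiting.
    fn next_outbound_message(&mut self) -> Option<Vec<u8>> {
        if let Some(msg) = self.channel_messages.pop_front() {
            return Some(msg);
        }
        if let Some(msg) = self.onion_messages.pop_front() {
            return Some(msg);
        }
        self.gossip_broadcasts.pop_front()
    }
}
```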
If we don't currently have the preimage for an inbound HTLC, that
does not guarantee we can never claim it, but instead only that we
cannot claim it unless we receive the preimage from the channel we
forwarded the HTLC out on.
Thus, we cannot consider a channel to have no claimable balances if
the only remaining output on the commitment transaction is an
inbound HTLC for which we do not have the preimage, as we may be
able to claim it in the future.
This commit addresses this issue by adding a new `Balance` variant
- `MaybePreimageClaimableHTLCAwaitingTimeout`, which is generated
until the HTLC output is spent.
Fixes #1620
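The rough shape of the new variant (the field names here are assumptions):

```rust
enum Balance {
    /// An inbound HTLC for which we do not have the preimage today,
    /// but which we may still be able to claim if we learn the
    /// preimage from the downstream channel. Generated until the HTLC
    /// output is spent on-chain.
    MaybePreimageClaimableHTLCAwaitingTimeout {
        claimable_amount_satoshis: u64,
        claimable_height: u32,
    },
    // ... existing variants elided ...
}
```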
This uses the various new tracking added in the prior commits to
expose a new `Balance` type - `CounterpartyRevokedOutputClaimable`.
Some nontrivial work is required, however, as we now have to track
HTLC outputs as spendable in a transaction that comes *after* an
HTLC-Success/HTLC-Timeout transaction, which we previously didn't
need to do. Thus, we have to check if an
`onchain_events_awaiting_threshold_conf` event spends a commitment
transaction's HTLC output while walking events. Further, because
we now need to track HTLC outputs after the
HTLC-Success/HTLC-Timeout confirms, and because we have to track
the counterparty's `to_self` output as a contentious output which
could be claimed by either party, we have to examine the
`OnchainTxHandler`'s set of outputs to spend when determining if
certain outputs are still spendable.
Two new tests are added which test various different transaction
formats, and hopefully provide good test coverage of the various
revoked output paths.
The test intended to disconnect a transaction previously connected
but didn't disconnect enough blocks to do so, leading to it
confirming two conflicting transactions.
In the next few commits this will become an assertion failure.
Instead of backfilling gossip by buffering (up to) ten messages at
a time, only buffer one message at a time, as the peer's
socket buffer drains. This moves the outbound backfill messages out
of `PeerHandler` and into the operating system buffer, where it
arguably belongs.
Not buffering causes us to walk the gossip B-Trees somewhat more
often, but avoids allocating vecs for the responses. While it's
probably (without having benchmarked it) a net performance loss, it
simplifies buffer tracking and leaves us with more room to play
with the buffer sizing constants as we add onion message forwarding,
which is an important win.
Note that because we change how often we check if we're out of
messages to send before pinging, we slightly change how many
messages are exchanged at once, impacting the
`test_do_attempt_write_data` constants.
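Sketched out (names are hypothetical), the drain-driven backfill looks something like:

```rust
use std::collections::VecDeque;

struct Peer {
    outbound_buffer: VecDeque<Vec<u8>>,
    /// Position of the last gossip message we backfilled to this peer.
    gossip_sync_progress: u64,
}

impl Peer {
    /// Called whenever the peer's socket buffer drains: pull at most
    /// one more gossip message instead of pre-buffering ten.
    fn maybe_queue_gossip_backfill(
        &mut self,
        next_gossip_after: impl Fn(u64) -> Option<(u64, Vec<u8>)>,
    ) {
        if self.outbound_buffer.is_empty() {
            if let Some((pos, msg)) = next_gossip_after(self.gossip_sync_progress) {
                self.gossip_sync_progress = pos;
                self.outbound_buffer.push_back(msg);
            }
        }
    }
}
```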
Prior to this change, we could have failed to decode a valid payload of size
>253. This is because we were decoding the length (a BigSize, big-endian) as a
VarInt (little-endian).
Found in #1652.
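For reference, lightning's BigSize encodes its multi-byte forms big-endian, unlike bitcoin's little-endian VarInt; a self-contained decoder sketch:

```rust
use std::io::{self, Read};

/// Decodes a BOLT-style BigSize: a one-byte discriminant followed by a
/// big-endian u16/u32/u64 for values of 0xfd and above.
fn read_big_size<R: Read>(r: &mut R) -> io::Result<u64> {
    let mut first = [0u8; 1];
    r.read_exact(&mut first)?;
    match first[0] {
        0xff => {
            let mut b = [0u8; 8];
            r.read_exact(&mut b)?;
            Ok(u64::from_be_bytes(b))
        }
        0xfe => {
            let mut b = [0u8; 4];
            r.read_exact(&mut b)?;
            Ok(u32::from_be_bytes(b) as u64)
        }
        0xfd => {
            let mut b = [0u8; 2];
            r.read_exact(&mut b)?;
            Ok(u16::from_be_bytes(b) as u64)
        }
        n => Ok(n as u64),
    }
}
```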
This consolidates our various checks on peer buffer space into the
`Peer` impl itself, making the thresholds at which we stop taking
various actions on a peer more readable as a whole.
This commit was primarily authored by `Valentine Wallace
<vwallace@protonmail.com>` with some amendments by `Matt Corallo
<git@bluematt.me>`.
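The consolidated checks read roughly like the following (threshold values are illustrative, not the actual constants):

```rust
use std::collections::VecDeque;

struct Peer {
    pending_outbound_buffer: VecDeque<Vec<u8>>,
}

impl Peer {
    /// Stop reading from the peer's socket once our queue backs up.
    fn should_pause_read(&self) -> bool {
        self.pending_outbound_buffer.len() > 10
    }
    /// Only queue broadcast gossip while the queue is short.
    fn should_buffer_gossip_broadcast(&self) -> bool {
        self.pending_outbound_buffer.len() < 5
    }
    /// Only backfill historical gossip once the queue has drained.
    fn should_buffer_gossip_backfill(&self) -> bool {
        self.pending_outbound_buffer.is_empty()
    }
}
```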
It was always somewhat strange to have a bunch of notification
logic in `channelmanager`, and with the next commit adding a bunch
more, it's moved here first.
Prior to this PR, we were reading the next packet bytes with io::Read::read,
which is not guaranteed to read all the bytes we need, only guaranteed to read
*some* bytes.
We fix this to use read_exact, which is guaranteed to read all the next hop
packet bytes.
This required adapting `onion_utils::decode_next_hop` to work for both payments
and onion messages.
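The bug class in miniature (a hypothetical helper for illustration):

```rust
use std::io::{self, Read};

fn read_next_packet_bytes<R: Read>(r: &mut R, len: usize) -> io::Result<Vec<u8>> {
    let mut buf = vec![0u8; len];
    // Before: `let n = r.read(&mut buf)?;` could legally return n < len,
    // silently leaving the tail of `buf` zeroed.
    // After: read_exact either fills the whole buffer or returns an error.
    r.read_exact(&mut buf)?;
    Ok(buf)
}
```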
Currently we just print out the path_id of any onion messages we receive. In
the future, these received onion messages will be redirected to their
respective handlers, e.g. an invoice_request will go to an InvoiceHandler,
custom onion messages will go to a custom handler, etc.