Currently we entirely ignore the BADONION bit when deciding how to
handle HTLC failures. This opens us up to an attack where a
malicious node always fails HTLCs backwards via
`update_fail_malformed_htlc` with an error code of
`BADONION|NODE|PERM|X`. In this case, we may decide to interpret
this as a permanent node failure of the node which encrypted the
onion, i.e. the counterparty of the node that sent the
`update_fail_malformed_htlc` message and ultimately failed the
HTLC.
Thus, any node we route through could cause us to fully remove its
counterparty from our network graph. Luckily, we do not currently
do any persistent tracking of removed nodes, and thus will re-add
the removed node once it is re-announced or on restart. However, we
are likely to add such persistent tracking (at least in-memory) in
the future.
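For reference, a minimal sketch of the relevant bit test, using the
BOLT 4 flag values (the helper name here is hypothetical):

```rust
// BOLT 4 failure-code flag bits.
const BADONION: u16 = 0x8000;
const PERM: u16 = 0x4000;
const NODE: u16 = 0x2000;

// A BADONION failure blames the onion itself, not the node that
// encrypted it, so it must never be read as a permanent failure of
// that node.
fn is_permanent_node_failure(error_code: u16) -> bool {
    (error_code & BADONION) == 0
        && (error_code & (PERM | NODE)) == (PERM | NODE)
}
```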
For non-gossip-broadcast messages, our current flow is to first
serialize the message into a `Vec`, and then allocate a new `Vec`
into which we write the encrypted+MAC'd message and header.
This is somewhat wasteful, and it's rather simple to instead
allocate only one buffer and encrypt the message in place.
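A minimal sketch of the single-allocation approach, with the cipher
stubbed out and the BOLT 8 framing sizes (an 18-byte encrypted
length header, a 16-byte payload MAC) hard-coded:

```rust
// Stub: real code runs the BOLT 8 ChaCha20-Poly1305 transport cipher.
fn encrypt_in_place(_buf: &mut [u8]) {}

fn build_outbound(msg_bytes: &[u8]) -> Vec<u8> {
    // One allocation, sized for the final ciphertext...
    let mut buf = Vec::with_capacity(18 + msg_bytes.len() + 16);
    buf.resize(18, 0);                // placeholder for the length header
    buf.extend_from_slice(msg_bytes); // serialize the plaintext here
    buf.resize(buf.len() + 16, 0);    // room for the payload MAC
    encrypt_in_place(&mut buf);       // ...then encrypt it in place
    buf
}
```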
In 47e818f198, forwarding broadcasted
gossip messages was split into a separate per-peer message buffer.
However, both it and the original regular-message queue are
encrypted immediately when messages are enqueued. Because the
lightning P2P transport encryption is order-dependent, this causes
messages to fail their MAC checks, as messages from the two queues
may not be sent to peers in the order in which they were
encrypted.
The fix is to simply queue broadcast gossip messages unencrypted,
encrypting them when we add them to the regular outbound buffer.
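A sketch of the fixed flow, with hypothetical field names and a
stubbed cipher: gossip waits in plaintext and only passes through
the noise state as it enters the single, wire-ordered outbound
buffer.

```rust
use std::collections::VecDeque;

const OUTBOUND_BUFFER_LIMIT: usize = 10;

struct NoiseState;
impl NoiseState {
    fn encrypt(&mut self, plaintext: &[u8]) -> Vec<u8> {
        plaintext.to_vec() // stub; really the order-dependent cipher
    }
}

struct Peer {
    gossip_broadcast_buffer: VecDeque<Vec<u8>>, // plaintext, safe to hold
    pending_outbound_buffer: VecDeque<Vec<u8>>, // encrypted, wire order
}

fn flush_gossip(peer: &mut Peer, noise: &mut NoiseState) {
    while peer.pending_outbound_buffer.len() < OUTBOUND_BUFFER_LIMIT {
        match peer.gossip_broadcast_buffer.pop_front() {
            // Encrypt at enqueue-to-wire time, so the cipher state
            // advances in exactly the order bytes hit the socket.
            Some(msg) => {
                let enc = noise.encrypt(&msg);
                peer.pending_outbound_buffer.push_back(enc);
            }
            None => break,
        }
    }
}
```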
I'm not sure why rustc didn't complain about the unused parameter
or why we're allowed to get away without explicitly bounding the
`Sign` in the `KeysInterface`, but the current code requires all
`BlindedPath` construction to explicitly turbofish an unused type.
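A self-contained sketch of the problem, with trait and type names
simplified: because `Signer` never appears in an argument or return
type, rustc cannot infer it, so every caller must spell out a type
that has no effect.

```rust
trait Sign {}
trait KeysInterface { type Signer: Sign; }

struct BlindedPath;

// `Signer` is unconstrained by the signature: this compiles, but
// forces callers to turbofish a type parameter that is never used.
fn new_path<Signer: Sign, K: KeysInterface>(_keys: &K) -> BlindedPath {
    BlindedPath
}

struct MySigner;
impl Sign for MySigner {}
struct MyKeys;
impl KeysInterface for MyKeys { type Signer = MySigner; }

fn main() {
    // Without the turbofish: "type annotations needed".
    let _path = new_path::<MySigner, _>(&MyKeys);
}
```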
The bindings generator is pretty naive in its generic resolution
and doesn't like `where` clauses for bounds that are simple traits.
This should eventually change, but for now it's simplest to just
inline the relevant generic bounds.
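For example, under this constraint the inline form is preferred
over the semantically identical `where` form:

```rust
trait Sign {}

// Preferred: an inline bound the generator can resolve.
fn handle_message<S: Sign>(_signer: &S) {}

// Equivalent Rust, but the generator's naive resolution rejects it.
fn handle_message_where<S>(_signer: &S) where S: Sign {}
```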
The C bindings generator isn't capable of figuring out if a blanket
impl applies in a given context, and instead opts to always write
out any relevant impls for a trait. Thus, blanket impls should be
disabled when building with `#[cfg(c_bindings)]`.
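A minimal sketch of the gating, with hypothetical trait names:

```rust
trait Writeable {
    fn encode(&self) -> Vec<u8>;
}
trait LightningMessage {}

// Anything that is a `LightningMessage` gets `Writeable` for free,
// but only on native builds. Under c_bindings the generator writes
// out each concrete impl itself and would conflict with this one.
#[cfg(not(c_bindings))]
impl<T: LightningMessage> Writeable for T {
    fn encode(&self) -> Vec<u8> { Vec::new() }
}
```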
Now that `{Signed,}RawInvoice` implement the std `Hash` trait,
having a method called `hash` is ambiguous.
Instead, we rename the `hash` methods to `signed_hash` to make it
clear that the hash is the one used for the purpose of signing the
invoice.
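A simplified sketch of the rename:

```rust
// `RawInvoice` now also implements the std `Hash` trait...
#[derive(Hash)]
struct RawInvoice {
    data: Vec<u8>,
}

impl RawInvoice {
    // ...so the inherent method, previously `hash`, is renamed to
    // make clear it returns the hash that gets signed rather than a
    // std-style hash.
    fn signed_hash(&self) -> [u8; 32] {
        [0; 32] // stub; really SHA-256 of the serialized invoice data
    }
}
```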
Similar to how we OR our InitFeatures and NodeFeatures across both our channel
and routing message handlers, we also want to OR the features of our onion
message handler.
When ChannelMessageHandler implementations wish to return a NodeFeatures which
contains all the known flags that are relevant to channel handling, but not
gossip handling, they currently need to do so by manually constructing a
NodeFeatures with all known flags and then clearing the ones they don't want.
Instead of spreading this logic across the codebase, this consolidates such
construction into one place in features.rs.
When we broadcast a node announcement, the features we support are really a
combination of all the various features our different handlers support. This
commit captures this concept by OR'ing our NodeFeatures across both our channel
and routing message handlers.
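A self-contained sketch of both ideas, with all names hypothetical and the
feature vector reduced to a byte vector:

```rust
#[derive(Clone)]
struct NodeFeatures { flags: Vec<u8> }

impl NodeFeatures {
    // Lives in features.rs: the known flags relevant to channel
    // handling, built once here rather than "known minus gossip
    // bits" at every call site.
    fn known_channel_features() -> Self {
        NodeFeatures { flags: vec![0b0000_0010] } // stub flag set
    }
}

// The node's advertised features are the union of what its
// individual handlers support.
impl std::ops::BitOr for NodeFeatures {
    type Output = NodeFeatures;
    fn bitor(mut self, rhs: NodeFeatures) -> NodeFeatures {
        if self.flags.len() < rhs.flags.len() {
            self.flags.resize(rhs.flags.len(), 0);
        }
        for (i, byte) in rhs.flags.iter().enumerate() {
            self.flags[i] |= byte;
        }
        self
    }
}
```

With this in place, the features we announce are simply the OR of each
handler's provided set.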
5a8ede09fb updated the documentation
on `get_claimable_balance` to note that if a channel went on-chain
with an LDK version older than 0.0.108 some
counterparty-revoked-output claimable balances may be missing.
However, this failed to account for the fact that we rely on the
entirely-new-in-0.0.111
`confirmed_commitment_tx_counterparty_output` field for some
balances as well.
Thus, the comment should have been in terms of 0.0.111, not
0.0.108.
When we receive a block we always test if we should send our
channel_ready via `check_get_channel_ready`. If the channel in
question requires confirmations, we quickly return if the funding
transaction has not yet confirmed (or even been defined), however
for 0conf channels the checks are necessarily more involved.
In any case, we wish to panic if the funding transaction has
confirmations prior to when it should have been broadcast. This
is useful as it is easy for users to violate our broadcast-time
invariants without noticing and the panic gives us an opportunity
to catch it.
Sadly, in the case of 0conf channels, if we haven't yet seen the
funding transaction at all but receive a block, we hit this sanity
check, as we don't check whether the funding transaction actually
has any confirmations prior to panicking.
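A sketch of the corrected check, with hypothetical field names: the
broadcast-time assertion only fires once we have actually seen the
funding transaction confirm.

```rust
struct Channel {
    // `None` until we've seen the funding transaction confirm, which
    // for a 0conf channel may be long after the channel is usable.
    funding_tx_confirmation_height: Option<u32>,
    // The earliest height at which broadcasting was permitted.
    earliest_allowed_broadcast_height: u32,
}

fn check_funding_broadcast_invariant(chan: &Channel) {
    if let Some(conf_height) = chan.funding_tx_confirmation_height {
        // Easy for users to violate without noticing, so panic loudly.
        assert!(
            conf_height >= chan.earliest_allowed_broadcast_height,
            "Funding tx confirmed before it could have been broadcast"
        );
    }
    // No confirmations seen yet (the usual 0conf case): nothing to
    // assert.
}
```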
When `ChannelMessageHandler` implementations wish to return an
`InitFeatures` which contains all the known flags that are relevant
to channel handling, but not gossip handling, they currently need
to do so by manually constructing an `InitFeatures` with all known
flags and then clearing the ones they don't want.
Instead of spreading this logic out across the codebase, this
consolidates such construction into one place in features.rs.
When we go to send an Init message to new peers, the features we
support are really a combination of all the various features our
different handlers support. This commit captures this concept by
OR'ing our InitFeatures across both our Channel and Routing
handlers.
Note that this also disables setting the `initial_routing_sync`
flag in init messages, as was intended in
e742894492, per the comment added on
`clear_initial_routing_sync`, though this should not be a behavior
change in practice as nodes which support gossip queries ignore the
initial routing sync flag.
Like we now do for `NodeFeatures`, this converts to asking our
registered `ChannelMessageHandler` for our `InitFeatures` instead
of hard-coding them to the global LDK known set.
This allows handlers to set different feature bits based on what
our configuration actually supports rather than what LDK supports
in aggregate.
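A sketch of the shape of this, with trait and method names modeled
on the description rather than taken verbatim, and the feature
vector reduced to a bitmask:

```rust
#[derive(Clone, Copy)]
struct InitFeatures(u64); // stub; real feature vectors grow as needed

impl std::ops::BitOr for InitFeatures {
    type Output = InitFeatures;
    fn bitor(self, rhs: InitFeatures) -> InitFeatures {
        InitFeatures(self.0 | rhs.0)
    }
}

trait ChannelMessageHandler {
    // What this handler's configuration actually supports, not the
    // global LDK-known set.
    fn provided_init_features(&self) -> InitFeatures;
}
trait RoutingMessageHandler {
    fn provided_init_features(&self) -> InitFeatures;
}

fn init_features(
    chan_handler: &dyn ChannelMessageHandler,
    route_handler: &dyn RoutingMessageHandler,
) -> InitFeatures {
    chan_handler.provided_init_features()
        | route_handler.provided_init_features()
}
```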
Some `NodeFeatures` will, in the future, represent features which
are not enabled by the `ChannelManager`, but by other message
handlers. Thus, it doesn't make sense to determine the node feature
bits in the `ChannelManager`.
The simplest fix for this is to instead generate the
node_announcement in `PeerManager`, asking all the connected
handlers which feature bits they support and simply OR'ing them
together. While this may not be sufficient down the road, as it
doesn't consider feature-bit dependencies, support for those could
be added at the feature level in the future.
This commit moves the `broadcast_node_announcement` function to
`PeerHandler` but does not yet implement feature OR'ing.
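Once the feature OR'ing follow-up lands, the announcement's features
reduce to a simple fold over the registered handlers, roughly (all
names hypothetical):

```rust
#[derive(Clone, Copy)]
struct NodeFeatures(u64); // stub bitmask

impl std::ops::BitOr for NodeFeatures {
    type Output = NodeFeatures;
    fn bitor(self, rhs: NodeFeatures) -> NodeFeatures {
        NodeFeatures(self.0 | rhs.0)
    }
}

trait MessageHandler {
    fn provided_node_features(&self) -> NodeFeatures;
}

// PeerManager asks every registered handler and ORs the results;
// feature-bit dependencies, if ever needed, would be enforced inside
// the Features type itself.
fn node_announcement_features(handlers: &[&dyn MessageHandler]) -> NodeFeatures {
    handlers
        .iter()
        .fold(NodeFeatures(0), |acc, h| acc | h.provided_node_features())
}
```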
When we connect to a new peer, immediately send them any
channel_announcement and channel_update messages for any public
channels we have with other peers. This allows us to stop sending
those messages on a timer when they have not changed and ensures
we are sending messages when we have peers connected, rather than
broadcasting at startup when we have no peers connected.
When we fail to forward a probe HTLC at all and instead immediately
fail it (e.g. due to the first-hop channel closing), we'd
previously spuriously generate only a `PaymentPathFailed` event.
This violates the expected API, as users expect a `ProbeFailed`
event instead.
This fixes the oversight by ensuring we generate the correct event.
Thanks to @jkczyz for pointing out this issue.
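From the user's side, the expected contract looks roughly like this
(event payloads heavily simplified):

```rust
// Simplified stand-ins for the real event variants.
enum Event {
    ProbeFailed { payment_id: u64 },
    PaymentPathFailed { payment_id: u64 },
}

fn handle_event(event: Event) {
    match event {
        // Probes, even ones that fail before leaving the first hop,
        // must surface here...
        Event::ProbeFailed { payment_id } => {
            println!("probe {payment_id} failed; inform the scorer");
        }
        // ...never here, which is reserved for real payment attempts.
        Event::PaymentPathFailed { payment_id } => {
            println!("payment {payment_id} path failed; maybe retry");
        }
    }
}
```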
`fail_holding_cell_htlcs` calls through to
`fail_htlc_backwards_internal` for HTLCs that need to be
failed-backwards, but opts to generate its own payment failure
events for `HTLCSource::OutboundRoute` HTLCs. There is no reason
for that, as `fail_htlc_backwards_internal` will also happily
generate (now-)equivalent events for `HTLCSource::OutboundRoute`
HTLCs.
Thus, we can drop the redundant code and always call
`fail_htlc_backwards_internal` for each HTLC in
`fail_holding_cell_htlcs`.
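A sketch of the resulting flow, with types and signatures heavily
simplified:

```rust
// Types and signatures heavily simplified.
enum HTLCSource {
    OutboundRoute { session_id: u64 },
    PreviousHopChannel { channel_id: [u8; 32] },
}

struct ChannelManager;

impl ChannelManager {
    fn fail_htlc_backwards_internal(&self, _source: HTLCSource, _hash: [u8; 32]) {
        // Fails the HTLC backwards and, for OutboundRoute HTLCs,
        // generates the payment failure events itself.
    }

    fn fail_holding_cell_htlcs(&self, htlcs: Vec<(HTLCSource, [u8; 32])>) {
        for (source, payment_hash) in htlcs {
            // No special-casing of HTLCSource::OutboundRoute here any
            // more; the internal helper emits equivalent events for
            // every variant.
            self.fail_htlc_backwards_internal(source, payment_hash);
        }
    }
}
```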