We do this to prevent payment failures while the `ChannelUpdate` for the
new `ChannelConfig` still propagates throughout the network. In a
follow-up commit, we'll honor forwarding HTLCs that were constructed based on
either the previous or current `ChannelConfig`.
To handle expiration (i.e., when we should stop honoring the previous
config), we rely on `ChannelManager`'s `timer_tick_occurred` method. After
enough ticks, the previous config is cleared from memory, and only the
current config applies moving forward.
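As a rough sketch of that expiry mechanism (the constant and field names here are assumptions for illustration, not necessarily the ones used in the channel state):

```rust
// Hypothetical names; the real logic lives in the per-channel state driven by
// `ChannelManager::timer_tick_occurred`.
const EXPIRE_PREV_CONFIG_TICKS: usize = 5;

struct ConfigState<C> {
    current: C,
    // The previous config plus the number of timer ticks it has survived.
    previous: Option<(C, usize)>,
}

impl<C> ConfigState<C> {
    // Called once per channel on each `timer_tick_occurred`.
    fn tick(&mut self) {
        let expired = match self.previous.as_mut() {
            Some((_, ticks)) => {
                *ticks += 1;
                *ticks >= EXPIRE_PREV_CONFIG_TICKS
            }
            None => false,
        };
        if expired {
            // Enough time has passed for the new `ChannelUpdate` to propagate;
            // stop accepting HTLCs built against the old config.
            self.previous = None;
        }
    }

    // While the previous config is retained, an inbound HTLC's forwarding
    // parameters may match either config.
    fn accepts<F: Fn(&C) -> bool>(&self, matches: F) -> bool {
        matches(&self.current) || self.previous.as_ref().map_or(false, |(c, _)| matches(c))
    }
}
```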
A new `update_channel_config` method is exposed on the `ChannelManager`
to update the `ChannelConfig` for a set of channels atomically. New
`ChannelUpdate` events are generated for each eligible channel.
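A rough model of the behavior described above (this is not the actual `ChannelManager` code, and the real method signature may differ):

```rust
use std::collections::HashMap;

#[derive(Clone)]
struct Config {
    forwarding_fee_base_msat: u32,
    cltv_expiry_delta: u16,
}

struct Channel {
    config: Config,
    is_usable: bool,
}

struct Manager {
    channels: HashMap<[u8; 32], Channel>,
    pending_broadcasts: Vec<[u8; 32]>,
}

impl Manager {
    // All-or-nothing: reject the whole update if any channel id is unknown.
    fn update_channel_config(&mut self, channel_ids: &[[u8; 32]], new_config: &Config) -> Result<(), ()> {
        if channel_ids.iter().any(|id| !self.channels.contains_key(id)) {
            return Err(());
        }
        for id in channel_ids {
            let chan = self.channels.get_mut(id).unwrap();
            chan.config = new_config.clone();
            // Only channels currently usable for forwarding get a fresh
            // `ChannelUpdate` broadcast right away.
            if chan.is_usable {
                self.pending_broadcasts.push(*id);
            }
        }
        Ok(())
    }
}
```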
Note that as currently implemented, a buggy and/or
auto-policy-management client could spam the network with updates as
there is no rate-limiting in place. This could already be done with
`broadcast_node_announcement`, though users are less inclined to update
that as frequently, since its data is mostly static.
As we prepare to expose an API to update a channel's `ChannelConfig`,
we'll also want to expose the struct itself to consumers so that they
have insight into the current `ChannelConfig` applied to each channel.
`ChannelConfig` now has its static fields removed. We introduce a new
`LegacyChannelConfig` struct that preserves the serialization format
previously defined by `ChannelConfig`, remaining backwards compatible
with clients running 0.0.107 and earlier.
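To illustrate the compatibility wrapper (a hand-rolled sketch, not the actual serialization macros; the exact field layout and encoding are assumed here):

```rust
use std::io::{self, Write};

// The slimmed-down, update-only config (sketch).
struct ChannelConfig {
    forwarding_fee_base_msat: u32,
    cltv_expiry_delta: u16,
}

// Wrapper that still writes the old field layout, including the static fields
// that were moved out, so 0.0.107-and-earlier readers can parse it.
struct LegacyChannelConfig {
    options: ChannelConfig,
    announced_channel: bool,
    commit_upfront_shutdown_pubkey: bool,
}

impl LegacyChannelConfig {
    fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
        // Illustrative encoding only; the real code keeps whatever layout
        // `ChannelConfig` used before the split.
        w.write_all(&self.options.forwarding_fee_base_msat.to_be_bytes())?;
        w.write_all(&self.options.cltv_expiry_delta.to_be_bytes())?;
        w.write_all(&[self.announced_channel as u8])?;
        w.write_all(&[self.commit_upfront_shutdown_pubkey as u8])?;
        Ok(())
    }
}
```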
As with the previous commit, `commit_upfront_shutdown_pubkey` is another
static field that cannot change after the initial channel handshake. We
therefore move it out of `ChannelConfig`.
In the near future, we plan to allow users to update their
`ChannelConfig` after the initial channel handshake. In order to reuse
the same struct and expose it to users, we opt to move out all static
fields that cannot be updated after the initial channel handshake.
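For a sense of the resulting shape (the destination struct and exact field sets are assumptions for illustration):

```rust
// Fields fixed at the channel handshake live separately...
struct ChannelHandshakeConfig {
    announced_channel: bool,
    commit_upfront_shutdown_pubkey: bool,
}

// ...while `ChannelConfig` keeps only the fields a user may later update.
struct ChannelConfig {
    forwarding_fee_proportional_millionths: u32,
    forwarding_fee_base_msat: u32,
    cltv_expiry_delta: u16,
}
```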
Instead of implementing EventHandler for P2PGossipSync, implement it on
NetworkGraph. This allows RapidGossipSync to handle events, too, by
delegating to its NetworkGraph.
P2PGossipSync logs before delegating to NetworkGraph in its
EventHandler. In order to share this handling with RapidGossipSync,
NetworkGraph needs to take a logger so that it can implement
EventHandler instead.
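A toy model of the resulting shape (not the real LDK traits or event definitions):

```rust
trait Logger { fn log(&self, msg: &str); }
trait EventHandler { fn handle_event(&self, event: &Event); }

enum Event { PaymentPathFailed { short_channel_id: u64, is_permanent: bool } }

// NetworkGraph now owns a logger, so the log-then-update handling previously
// done in P2PGossipSync can live here and be shared by both sync flavors.
struct NetworkGraph<L: Logger> { logger: L }

impl<L: Logger> EventHandler for NetworkGraph<L> {
    fn handle_event(&self, event: &Event) {
        let Event::PaymentPathFailed { short_channel_id, is_permanent } = event;
        self.logger.log(&format!(
            "Applying path failure for channel {} (permanent: {})",
            short_channel_id, is_permanent
        ));
        // ...then update the graph accordingly (channel/node failure).
    }
}

// Either sync implementation can simply delegate to its graph.
struct RapidGossipSync<'a, L: Logger> { network_graph: &'a NetworkGraph<L> }

impl<'a, L: Logger> EventHandler for RapidGossipSync<'a, L> {
    fn handle_event(&self, event: &Event) {
        self.network_graph.handle_event(event);
    }
}
```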
P2PGossipSync has a Secp256k1 context field, which it only uses to pass
to NetworkGraph methods. Move the field to NetworkGraph so other callers
don't need to pass in a Secp256k1 context.
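Sketch of the end state (field name assumed), using the secp256k1 API for building a verification-only context:

```rust
use bitcoin::secp256k1::{Secp256k1, VerifyOnly};

struct NetworkGraph {
    // The graph owns a verification-only context for checking gossip
    // signatures, so callers no longer thread one through.
    secp_ctx: Secp256k1<VerifyOnly>,
    // ...channel and node maps elided...
}

impl NetworkGraph {
    fn new() -> Self {
        NetworkGraph { secp_ctx: Secp256k1::verification_only() }
    }
}
```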
NetGraphMsgHandler implements RoutingMessageHandler to handle gossip
messages defined in BOLT 7 and maintains a view of the network by
updating NetworkGraph. Rename it to P2PGossipSync to better describe its
purpose and to contrast it with RapidGossipSync.
A NetworkUpdate indicating ChannelClosed actually corresponds to a
channel failure, as described in BOLT 4:

  0x2000 (NODE): node failure (otherwise channel)
Rename the enum variant to ChannelFailure and rename NetworkGraph
methods close_channel_from_update and fail_node to channel_failed and
node_failed, respectively.
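For reference, a minimal check of the BOLT 4 bit cited above (constants per BOLT 4; the helper name is just for illustration):

```rust
const NODE: u16 = 0x2000;

fn is_node_failure(failure_code: u16) -> bool {
    failure_code & NODE != 0
}

fn main() {
    assert!(is_node_failure(0x2002));   // temporary_node_failure = NODE | 2
    assert!(!is_node_failure(0x1007));  // temporary_channel_failure = UPDATE | 7
}
```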
Create a wrapper struct for rapid gossip sync that can be passed to
BackgroundProcessor's start method, allowing it to only start pruning
the network graph upon rapid gossip sync's completion.
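A rough model of that gating (the method name here is an assumption, not necessarily the wrapper's actual API):

```rust
use std::sync::atomic::{AtomicBool, Ordering};

struct RapidGossipSync {
    // Flipped once the initial rapid sync snapshot has been applied.
    initial_sync_complete: AtomicBool,
}

impl RapidGossipSync {
    fn is_initial_sync_complete(&self) -> bool {
        self.initial_sync_complete.load(Ordering::Acquire)
    }
}

fn maybe_prune(prune_network_graph: impl Fn(), rapid_sync: Option<&RapidGossipSync>) {
    // Pruning a half-populated graph would discard data the sync is about to
    // provide, so wait until the initial sync completes (or prune immediately
    // when no rapid gossip sync is in use).
    if rapid_sync.map_or(true, |s| s.is_initial_sync_complete()) {
        prune_network_graph();
    }
}
```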
If the user broadcasts a funding transaction before the counterparty
provides a `funding_signed`, we will panic in `check_get_channel_ready`.
This is expected - the user did something which may lead to loss of
funds, and we *really* need to let them know.
However, the fuzzer can do this, and we shouldn't treat it as a bug;
it's a totally expected panic. Thus, we disable the panic in fuzz.
Thanks to Chaincode for providing fuzzing resources which managed
to hit this panic.
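The general shape of such a fuzz-only-allowed condition (sketch; the real check is in `check_get_channel_ready`):

```rust
fn funding_broadcast_before_funding_signed() {
    #[cfg(not(fuzzing))]
    panic!("Broadcasting a funding transaction before funding_signed may lead to loss of funds");
    // With `--cfg=fuzzing` we fall through instead of aborting, since the
    // fuzzer is expected to drive the API into states a real user must not.
}
```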
This is useful in bindings, as the `lightning::io` module is used in
the public interface, but also useful for users who want to refer to
the `io` module as used in lightning irrespective of the feature flags.
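Sketch of what this buys downstream code (assuming the module exposes the usual `Read`/`Error` items):

```rust
use lightning::io;

fn takes_reader<R: io::Read>(_reader: R) {
    // `io::Read`/`io::Error` here are exactly the ones used in lightning's
    // public interface, whether the crate was built with std or its no-std
    // shim, so callers don't have to guess based on feature flags.
}
```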
`ChannelManager::fail_htlc_backwards`' bool return value is quite
confusing - just because it returns false doesn't mean the payment
wasn't (already) failed. Worse, in some race cases around shutdown,
where a payment was claimed before an unclean shutdown and then
retried on startup, `fail_htlc_backwards` could return true even
though a duplicate copy of the same payment was claimed but the
claim event has not yet been seen by the user.
While it's possible to use it correctly, it's somewhat confusing to
have a return value at all, and it definitely lends itself to misuse.
Instead, we should push users towards a model where they don't care
whether `fail_htlc_backwards` succeeds - either they've locally marked
the payment as failed (prior to seeing any `PaymentReceived` events)
and will fail any attempts to pay it, or they have not and the
payment is still receivable until its timeout is reached.
We can revisit this decision based on user feedback, but will need
to very carefully document the potential failure modes here if we
do.
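A toy model of that usage pattern (not the real `ChannelManager` API):

```rust
use std::collections::HashSet;

struct Node {
    // Payments we've given up on, checked when a `PaymentReceived` arrives.
    locally_failed: HashSet<[u8; 32]>,
}

impl Node {
    fn fail_htlc_backwards(&mut self, _payment_hash: &[u8; 32]) {
        // Fails any HTLCs still pending for this hash; a no-op if the payment
        // was already claimed or failed. Deliberately returns nothing.
    }

    fn expire_payment(&mut self, payment_hash: [u8; 32]) {
        // Mark the payment failed locally *before* acting so that any
        // `PaymentReceived` that races in afterwards gets rejected.
        self.locally_failed.insert(payment_hash);
        self.fail_htlc_backwards(&payment_hash);
    }
}
```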
As additional sanity checks, before claiming a payment, we check
that the full amount the payment should be for is available in
`claimable_htlcs`. Concretely, this prevents one
somewhat-absurd edge case where a user may receive an MPP payment,
wait many *blocks* before claiming it, allowing us to fail the
pending HTLCs and the sender to retry some subset of the payment
before we go to claim. More generally, this is just good
belt-and-suspenders against any edge cases we may have missed.
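A sketch of the check itself (names hypothetical):

```rust
struct ClaimableHtlc { value_msat: u64 }

fn have_full_payment(htlcs: &[ClaimableHtlc], expected_total_msat: u64) -> bool {
    let available_msat: u64 = htlcs.iter().map(|h| h.value_msat).sum();
    // If some MPP parts were already failed back (e.g. because the user waited
    // long enough for them to time out), this comes up short and we must not
    // claim until the sender retries those parts.
    available_msat >= expected_total_msat
}
```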
If we crashed during a payment claim and then detected a partial
claim on restart, we should ensure the user is aware that the
payment has been claimed. We do so here by using the new
partial-claim detection logic to create a `PaymentClaimed` event.