As the map values are no longer only `channel_id`s, but now also
include a `counterparty_node_id`, the map is renamed to better
correspond to what is actually stored in it.
When we receive a `channel_reestablish` with a `data_loss_protect`
that proves we're running with a stale state, instead of
force-closing the channel, we immediately panic. This lines up with
our refusal to run if we find a `ChannelMonitor` which is stale
compared to our `ChannelManager` during `ChannelManager`
deserialization. Ultimately both are an indication of the same
thing - that the API requirements on `chain::Watch` were violated.
In the "running with outdated state but ChannelMonitor(s) and
ChannelManager lined up" case specifically, it's likely we're running
off of an old backup, in which case connecting to peers with
channels still live is explicitly dangerous. That said, because
this could be an operator error that is correctable, panicking
instead of force-closing may allow for normal operation again in
the future (cc #1207).
In any case, we provide instructions in the panic message for how
to force-close channels prior to peer connection, as well as a note
on how to broadcast the latest state if users are willing to take
the risk.
Note that this is still somewhat unsafe until we resolve #1563.
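As a rough illustration of the "force-close everything before connecting to
peers" recovery path the panic message points at (the trait and method names
below are hypothetical stand-ins; the real `ChannelManager` force-close API
differs across LDK releases):

```rust
// Hypothetical types for illustration only; a real deployment would call the
// ChannelManager's force-close methods before starting the peer handler.
trait Manager {
    fn list_channel_ids(&self) -> Vec<[u8; 32]>;
    fn force_close_without_broadcast(&self, channel_id: &[u8; 32]);
}

fn close_all_before_connecting<M: Manager>(manager: &M) {
    for channel_id in manager.list_channel_ids() {
        // Avoid broadcasting our (possibly stale) commitment transaction -
        // doing so risks handing the counterparty a justice-transaction claim.
        manager.force_close_without_broadcast(&channel_id);
    }
}
```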
This is mostly motivated by the fact that payments may happen while the
latest `ChannelUpdate` indicating our new `ChannelConfig` is still
propagating throughout the network. By temporarily allowing the previous
config, we can help reduce payment failures across the network.
We do this to prevent payment failures while the `ChannelUpdate` for the
new `ChannelConfig` still propagates throughout the network. In a follow
up commit, we'll honor forwarding HTLCs that were constructed based on
either the previous or current `ChannelConfig`.
To handle expiration (when we should stop allowing the previous config),
we rely on the ChannelManager's `timer_tick_occurred` method. After
enough ticks, the previous config is cleared from memory, and only the
current config applies moving forward.
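A rough sketch of that expiry mechanism, with made-up field and constant names
(the real implementation lives inside `Channel`/`ChannelManager`):

```rust
// Number of timer ticks to keep honoring the previous config; illustrative only.
const EXPIRE_PREV_CONFIG_TICKS: usize = 5;

struct ConfigState {
    current: u32,                   // stand-in for the live ChannelConfig
    previous: Option<(u32, usize)>, // old config plus remaining ticks
}

impl ConfigState {
    fn update_config(&mut self, new: u32) {
        // Keep honoring the old config while the new ChannelUpdate propagates.
        self.previous = Some((self.current, EXPIRE_PREV_CONFIG_TICKS));
        self.current = new;
    }

    fn timer_tick_occurred(&mut self) {
        if let Some((prev, ticks)) = self.previous.take() {
            if ticks > 1 {
                self.previous = Some((prev, ticks - 1));
            }
            // Otherwise the previous config has expired and only `current` applies.
        }
    }
}
```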
A new `update_channel_config` method is exposed on the `ChannelManager`
to update the `ChannelConfig` for a set of channels atomically. New
`ChannelUpdate` events are generated for each eligible channel.
Note that as currently implemented, a buggy and/or
auto-policy-management client could spam the network with updates as
there is no rate-limiting in place. This could already be done with
`broadcast_node_announcement`, though users are less inclined to update
that as frequently since its data is mostly static.
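As a rough usage sketch (the exact `update_channel_config` signature has
shifted between LDK releases, so the call itself is only described in a
comment):

```rust
use lightning::util::config::ChannelConfig;

fn main() {
    // Build the new per-channel policy. These ChannelConfig fields exist in
    // recent LDK releases, though their defaults have changed over time.
    let mut new_config = ChannelConfig::default();
    new_config.forwarding_fee_base_msat = 1_000;
    new_config.forwarding_fee_proportional_millionths = 5_000;
    new_config.cltv_expiry_delta = 72;

    // The config is then applied atomically to a set of channels via
    // `ChannelManager::update_channel_config(..)`, which also queues a
    // `ChannelUpdate` broadcast for each updated public channel.
}
```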
As we prepare to expose an API to update a channel's ChannelConfig,
we'll also want to expose this struct to consumers such that they have
insights into the current ChannelConfig applied for each channel.
ChannelConfig now has its static fields removed. We introduce a new
LegacyChannelConfig struct that maintains the serialization as
previously defined by ChannelConfig to remain backwards compatible with
clients running 0.0.107 and earlier.
As with the previous commit, `commit_upfront_shutdown_pubkey` is another
static field that cannot change after the initial channel handshake. We
therefore move it out from its existing place in `ChannelConfig`.
In the near future, we plan to allow users to update their
`ChannelConfig` after the initial channel handshake. In order to reuse
the same struct and expose it to users, we opt to move out all static
fields that cannot be updated after the initial channel handshake.
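Illustratively (these are not LDK's exact struct definitions), the split looks
roughly like this, with handshake-time fields segregated from fields that
remain updatable:

```rust
// Fixed at the initial channel handshake and never updated afterwards.
struct StaticHandshakeFields {
    announced_channel: bool,
    commit_upfront_shutdown_pubkey: bool,
}

// Remains in ChannelConfig and may be updated over the channel's lifetime.
struct UpdatableFields {
    forwarding_fee_proportional_millionths: u32,
    forwarding_fee_base_msat: u32,
    cltv_expiry_delta: u16,
}
```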
If the user broadcasts a funding transaction before the
counterparty provides a `funding_signed` we will panic in
`check_get_channel_ready`. This is expected - the user did
something which may lead to loss of funds, and we *really* need to
let them know.
However, the fuzzer can do this, and we shouldn't treat it as a bug -
it's a totally expected panic. Thus, we disable the panic in fuzz.
Thanks to Chaincode for providing fuzzing resources which managed
to hit this panic.
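The shape of the change, as a hedged sketch rather than the literal LDK code:

```rust
// The panic still fires for real users (who risk losing funds), but is skipped
// under the `fuzzing` cfg, where this event ordering is intentionally produced.
fn check_funding_order(funding_broadcast: bool, funding_signed_received: bool) {
    if funding_broadcast && !funding_signed_received && !cfg!(fuzzing) {
        panic!("Funding transaction broadcast before the counterparty sent funding_signed");
    }
}
```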
If our peer sets a minimum depth of 0, and we're configured to trust
ourselves not to double-spend our own funding transactions, send a
`funding_locked` message immediately after `funding_signed`.
Note that some special care has to be taken around the
`channel_state` values - `ChannelFunded` no longer implies the
funding transaction is confirmed on-chain. Thus, for example, the
should-we-re-broadcast logic has to now accept `channel_state`
values greater than `ChannelFunded` as indicating we may still need
to re-broadcast our funding transaction, unless `minimum_depth` is
greater than 0.
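A sketch of the adjusted re-broadcast rule - the enum variants and comparison
below are illustrative, not the real `channel_state` values:

```rust
#[derive(PartialEq, PartialOrd)]
enum ChannelStage { FundingSent, ChannelFunded, ShuttingDown }

fn may_need_funding_rebroadcast(stage: &ChannelStage, minimum_depth: u32) -> bool {
    // Before 0-conf support only the pre-funded stage mattered; now a channel
    // past ChannelFunded may still have an unconfirmed funding transaction
    // whenever minimum_depth is zero.
    *stage == ChannelStage::FundingSent
        || (*stage >= ChannelStage::ChannelFunded && minimum_depth == 0)
}
```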
Further note that this starts writing `Channel` objects with a
`MIN_SERIALIZATION_VERSION` of 2. Thus, LDK versions prior to
0.0.99 (July 2021) will now refuse to read serialized
Channels/ChannelManagers.
In the next few commits we add support for 0conf channels, allowing
us to have an active channel with HTLC and other updates flying
prior to having an SCID available. This would break several
assumptions made in `ChannelManager`, which we address here by
looking at SCID aliases in addition to SCIDs.
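Conceptually (with hypothetical map names), SCID lookups now consult the alias
map as well:

```rust
use std::collections::HashMap;

// Hypothetical resolver: both the real SCID (once assigned) and the outbound
// SCID alias map to the same channel, so forwards and routing hints work
// before the funding transaction has enough confirmations.
struct ScidResolver {
    real_scids: HashMap<u64, [u8; 32]>,
    scid_aliases: HashMap<u64, [u8; 32]>,
}

impl ScidResolver {
    fn channel_for(&self, scid: u64) -> Option<[u8; 32]> {
        self.real_scids
            .get(&scid)
            .or_else(|| self.scid_aliases.get(&scid))
            .copied()
    }
}
```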
While the HTLC-claim process happens across all MPP parts under one
lock, this doesn't imply that they are claimed fully atomically on
disk. Ultimately, an application can crash after persisting one
`ChannelMonitorUpdate` out of multiple monitor updates needed for
the full claim.
Previously, this would leave us in a very bad state - because of
the all-channels-available check in `claim_funds` we'd refuse to
claim the payment again on restart (even though the
`PaymentReceived` event will be passed to the user again), and we'd
end up having partially claimed the payment!
The fix for the consistency part of this issue is pretty
straightforward - just check for this condition on startup and
complete the claim across all channels/`ChannelMonitor`s if we
detect it.
This still leaves us in a confused state from the perspective of
the user, however - we've actually claimed a payment but when they
call `claim_funds` we return `false` indicating it could not be
claimed.
The `ChannelMonitor` had a field for the counterparty's
`cur_revocation_points`. Somewhat confusingly, this actually stored
the counterparty's *per-commitment* points, not the (derived)
revocation points.
Here we correct this by simply renaming the references as
appropriate. Note the update in `channel.rs` makes the variable
names align correctly.
During a `channel_reestablish`, we now send a warning message when we receive an old commitment transaction from the peer.
In addition, this commit includes an update to the functional tests to make sure that the receiver will generate warning messages.
Signed-off-by: Vincenzo Palazzo <vincenzopalazzodev@gmail.com>
Use the `counterparty_max_htlc_value_in_flight_msat` value, and not the
`holder_max_htlc_value_in_flight_msat` value when creating the
`htlc_maximum_msat` value for `ChannelUpdate` messages.
BOLT 7 specifies that the field MUST be less than or equal to
`max_htlc_value_in_flight_msat` received from the peer, which we
currently are not guaranteed to adhere to by using the holder value.
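A hedged sketch of the corrected bound (additional limits may also apply in the
real computation):

```rust
use std::cmp;

// Per BOLT 7, htlc_maximum_msat in our ChannelUpdate must not exceed the
// max_htlc_value_in_flight_msat the *counterparty* sent us, so bound by that
// value rather than by our own (holder) limit.
fn htlc_maximum_msat_for_update(
    channel_value_msat: u64,
    counterparty_max_htlc_value_in_flight_msat: u64,
) -> u64 {
    cmp::min(channel_value_msat, counterparty_max_htlc_value_in_flight_msat)
}
```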
Add a config field
`ChannelHandshakeConfig::max_inbound_htlc_value_in_flight_percent_of_channel`
which caps the total value of outstanding inbound HTLCs to a percentage
of the channel value.
This field can be set to a value between 1 and 100, where the value is
the cap expressed as a whole percentage of the channel value.
Note that:
* If configured to a value other than the default of 10, any new
channels created with the non-default value will cause versions of LDK
prior to 0.0.104 to refuse to read the `ChannelManager`.
* This caps the total value for inbound HTLCs in-flight only, and
there's currently no way to configure the cap for the total value of
outbound HTLCs in-flight.
* The requirements for your node being online to ensure the safety of
HTLC-encumbered funds are different from those for non-HTLC-encumbered
funds.
This makes this an important knob to restrict exposure to loss due to
being offline for too long. See
`ChannelHandshakeConfig::our_to_self_delay` and
`ChannelConfig::cltv_expiry_delta` for more information.
Default value: 10.
Minimum value: 1, any values less than 1 will be treated as 1 instead.
Maximum value: 100, any values larger than 100 will be treated as 100
instead.
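For example (the `UserConfig` sub-struct holding this knob has been renamed
across LDK releases, e.g. `own_channel_config` vs. `channel_handshake_config`;
the newer name is assumed here):

```rust
use lightning::util::config::UserConfig;

fn main() {
    // Cap the total inbound in-flight HTLC value at 25% of the channel value
    // for all newly-created channels.
    let mut config = UserConfig::default();
    config
        .channel_handshake_config
        .max_inbound_htlc_value_in_flight_percent_of_channel = 25;
    // `config` is then passed to the ChannelManager at construction time.
}
```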
In 2826af75a5 we fixed a fuzz crash
in which the total reserve values in a channel were greater than
the funding amount, checked when an incoming channel is accepted.
This, however, did not fix the same issue for outbound channels,
where a peer can accept a channel with a nonsense reserve value in
the `accept_channel` message. The `full_stack_target` fuzzer
eventually found its way into the same issue, which this resolves.
Thanks (again) to Chaincode Labs for providing the fuzzing
resources which found this bug!
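The added check amounts to something like the following sketch (parameter names
are illustrative):

```rust
// Reject an accept_channel whose reserve requirements could never be met:
// together, the two reserves must fit inside the channel's funding amount.
fn check_reserves(
    funding_satoshis: u64,
    holder_selected_reserve_satoshis: u64,
    counterparty_selected_reserve_satoshis: u64,
) -> Result<(), &'static str> {
    if holder_selected_reserve_satoshis
        .saturating_add(counterparty_selected_reserve_satoshis)
        > funding_satoshis
    {
        return Err("Bogus channel reserves: sum exceeds the funding amount");
    }
    Ok(())
}
```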
Currently, if a channel's funding is locked in and then later
reorg'd back to half of the channel's minimum-depth we will
immediately force-close the channel. However, this can happen at
the fork-point while processing a reorg, and generally reorgs do
not reduce the block height at all, making this a rather useless
endeavor.
Ideally we'd never auto-force-close channels at all due to a reorg,
instead simply marking it as inactive until the funding
transaction is re-confirmed (or allowing the user to attempt to
force-close or force-closing once we're confident we have
completed reorg processing if we're at risk of losing funds
already received in the channel).
Sadly, we currently do not support changing a channel's SCID and
updating our SCID maps, so we cannot yet remove the automated
force-close logic. Still, there is no reason to do it until a
funding transaction has been removed from the chain.
This implements that change - only force-closing once a channel's
funding transaction has been reorg'd out (still potentially at a
reorg's fork point). This continues to imply a 1-confirmation
channel will always be force-closed after a reorg of the funding
transaction, and will imply a similar behavior with 0-conf
channels.