Channel serialization should happen "as if
remove_uncommitted_htlcs_and_mark_paused had just been called".
This is true for the most part, but outbound RemoteRemoved HTLCs
were being serialized as normal, even though
`remove_uncommitted_htlcs_and_mark_paused` resets them to
`Committed`.
This led to a bug identified by the `chanmon_consistency_target`
fuzzer wherein, if we receive an update_*_htlc message but not the
corresponding commitment_signed prior to a serialization roundtrip,
we'd force-close the channel due to the peer "attempting to
fail/claim an HTLC which was already failed/claimed".
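In effect the fix is a one-variant change at write time. A minimal
sketch, assuming a simplified `OutboundHTLCState` (the real enum has
more variants and carries associated data):
```
// Hypothetical, simplified outbound HTLC state machine.
enum OutboundHTLCState {
    LocalAnnounced,
    Committed,
    RemoteRemoved,
}

fn write_htlc_state_byte(state: &OutboundHTLCState) -> u8 {
    match state {
        OutboundHTLCState::LocalAnnounced => 0,
        OutboundHTLCState::Committed => 1,
        // The fix: serialize RemoteRemoved as if
        // remove_uncommitted_htlcs_and_mark_paused had already run,
        // i.e. as Committed, so the peer's not-yet-committed removal
        // is simply replayed on reconnect rather than rejected as a
        // double-fail/claim.
        OutboundHTLCState::RemoteRemoved => 1,
    }
}
```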
Previously we handled most of the logic of announcement_signatures
in ChannelManager, rather than Channel. This is somewhat unique as
far as our message processing goes, but it also avoided having to
pass the node_secret in to the Channel.
Eventually, we'll move the node_secret behind the signer anyway, so
there isn't much reason for this, and storing the
announcement_signatures-provided signatures in the Channel allows
us to recreate the channel_announcement later for rebroadcast,
which may be useful.
Currently our serialization is very compact, and contains version
numbers to indicate which versions of the code can read a given
serialized struct. However, if you want to add a new field without
needlessly breaking the ability of previous versions of the code to
read the struct, there is not a good way to do so.
This adds dummy, currently empty, TLVs to the major structs we
serialize out for users, providing an easy place to put new
optional fields without breaking previous versions.
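A minimal sketch of the idea; the fixed-width type/length encoding
below is purely illustrative (the real on-disk format is more
compact), but the extension point is the same - a record stream
appended after the versioned fields:
```
use std::io::{self, Write};

// Append a stream of (type, length, value) records after the fixed,
// versioned fields. Old readers stop after the fields they know;
// new readers can skip unknown optional records rather than failing.
fn write_tlv_suffix<W: Write>(w: &mut W, records: &[(u64, &[u8])]) -> io::Result<()> {
    for (typ, value) in records {
        w.write_all(&typ.to_be_bytes())?;
        w.write_all(&(value.len() as u64).to_be_bytes())?;
        w.write_all(value)?;
    }
    Ok(())
}

fn main() -> io::Result<()> {
    let mut buf: Vec<u8> = Vec::new();
    // ... versioned struct fields would be written here ...
    write_tlv_suffix(&mut buf, &[])?; // currently empty, ready for new fields
    Ok(())
}
```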
Previously, if we got disconnected from a peer while there were
HTLCs pending forwarding in the holding cell, we'd clear them and
fail them all backwards. This is largely fine, but since we now
have support for handling such HTLCs on reconnect, we might as
well not do so, instead relying on our timeout logic to fail them
backwards if it takes too long to forward them.
If there are no pending channel update messages when monitor updating
is restored (though there may be an RAA to send), and we're
connected to our peer and not awaiting a remote RAA, we need to
free anything in our holding cell.
However, we don't want to immediately free the holding cell during
channel_monitor_updated as it presents a somewhat bug-prone case of
reentrancy:
a) it would re-enter user code around a monitor update while being
called from user code notifying us of the same monitor being
updated, making deadlocks very likely (in fact, our fuzzers
would have a bug here!),
b) the re-entrancy only occurs in a very rare case, making it
likely users will not hit it in testing, only deadlocking in
production.
Thus, we add a holding-cell-free pass over each channel in
get_and_clear_pending_msg_events. This fits nicely with the
anticipated usage - users almost certainly need to process new
network messages immediately after monitor updating has been
restored in order to send the messages which were not sent while
monitor updating was paused.
Without this, chanmon_fail_consistency was able to find a stuck
condition where we sit on an HTLC failure in our holding cell and
don't ever handle it (at least until we have other actions to take
which empty the holding cell).
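Roughly, the new pass looks like the following sketch, with field
and method names that are illustrative rather than the actual
internals:
```
struct HtlcUpdate;
struct Channel {
    monitor_update_paused: bool,
    peer_connected: bool,
    awaiting_remote_raa: bool,
    holding_cell: Vec<HtlcUpdate>,
}

impl Channel {
    fn free_holding_cell_htlcs(&mut self) {
        // Build and queue the held update_*_htlc messages plus a
        // fresh commitment_signed (elided).
        self.holding_cell.clear();
    }
}

// Extra pass at the top of get_and_clear_pending_msg_events:
fn free_holding_cells(channels: &mut [Channel]) {
    for chan in channels.iter_mut() {
        // Only free when nothing blocks it: no in-flight monitor
        // update, peer connected, and not waiting on the peer's
        // revoke_and_ack.
        if chan.monitor_update_paused || !chan.peer_connected || chan.awaiting_remote_raa {
            continue;
        }
        chan.free_holding_cell_htlcs();
    }
}
```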
Currently, we only send an update_channel message after
disconnecting a peer and waiting some time. We do not send a
followup when the peer has been reconnected for some time.
This changes that behavior to make the disconnect and reconnect
channel updates symmetric, and also simplifies the state machine
somewhat to make it more clear.
Finally, it serializes the current announcement state so that we
usually know when we need to send a new update_channel.
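One way to picture the simplified, symmetric state machine, with
hypothetical variant names (not necessarily those used in the
code): a change is staged on dis/reconnect and only broadcast as an
update_channel after it survives a timer tick.
```
enum ChannelUpdateStatus {
    Enabled,
    DisabledStaged, // peer disconnected; announce disabled on next tick
    EnabledStaged,  // peer reconnected; announce enabled on next tick
    Disabled,
}

// Returns the new status and whether to broadcast an update_channel.
fn tick(status: ChannelUpdateStatus, connected: bool) -> (ChannelUpdateStatus, bool) {
    use ChannelUpdateStatus::*;
    match (status, connected) {
        (Enabled, false) => (DisabledStaged, false),
        (DisabledStaged, true) => (Enabled, false),
        (DisabledStaged, false) => (Disabled, true),
        (Disabled, true) => (EnabledStaged, false),
        (EnabledStaged, false) => (Disabled, false),
        (EnabledStaged, true) => (Enabled, true),
        (s, _) => (s, false),
    }
}
```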
Bitcoin Core's current policy will reject a p2wsh output as dust if
it is under 330 satoshis. A typical p2wsh output is 43 bytes, to
which Core's `GetDustThreshold()` adds a minimal spend of 67 bytes
(even if a p2wsh witnessScript might be smaller). `dustRelayFee` is
set to 3000 sat/kb, thus (43 + 67) * 3000 / 1000 = 330. As all our
time-sensitive outputs are p2wsh, a value of 330 sats is the lower
bound desired to ensure good propagation of transactions. We give
our counterparty a bit of margin and pick 660 satoshis as the
accepted `dust_limit_satoshis` upper bound.
As this reasoning is tricky and error-prone, we hardcode it instead
of letting the user pick a nonsensical value.
Further, this lower bound of 330 sats is also hardcoded as another
constant (MIN_DUST_LIMIT_SATOSHIS) instead of being dynamically
computed from the feerate (`derive_holder_dust_limit_satoshis`),
reducing the risk of non-propagating transactions in case of a
failing fee estimation.
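The arithmetic above, written out as constants (names other than
MIN_DUST_LIMIT_SATOSHIS are illustrative):
```
const P2WSH_OUTPUT_SIZE: u64 = 43; // a typical p2wsh output, in bytes
const P2WSH_MIN_SPEND_SIZE: u64 = 67; // Core's assumed minimal spend size
const DUST_RELAY_FEE_SAT_PER_KB: u64 = 3000;

// (43 + 67) * 3000 / 1000 = 330 sats
pub const MIN_DUST_LIMIT_SATOSHIS: u64 =
    (P2WSH_OUTPUT_SIZE + P2WSH_MIN_SPEND_SIZE) * DUST_RELAY_FEE_SAT_PER_KB / 1000;

// Twice the lower bound, giving our counterparty some margin.
pub const MAX_DUST_LIMIT_SATOSHIS: u64 = 2 * MIN_DUST_LIMIT_SATOSHIS;
```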
During the block API refactor, we started calling
Channel::force_shutdown when a channel is closed due to a bogus
funding tx. However, we still set the channel's state to Shutdown
prior to doing so, tripping an assertion in force_shutdown (which
checks that the channel is not already closed).
This removes the state-set call and adds a (long-overdue) test for
this case.
Fixes: 60b962a18e
Instead of relying on the user to ensure the funding transaction is
correct (and panicking when it is confirmed), we should check it is
correct when it is generated. By taking the full funding transaction
from the user on generation, we can also handle broadcasting for
them instead of doing so via an event.
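A hedged usage sketch with stub types (the real method takes a
bitcoin Transaction, and its exact signature may differ):
```
struct Transaction;
struct ChannelManager;

impl ChannelManager {
    fn funding_transaction_generated(&self, _temporary_channel_id: &[u8; 32], _tx: Transaction) {
        // Check the transaction actually contains the expected
        // funding output (instead of panicking at confirmation
        // time), then hold it for broadcast once the peer sends
        // funding_signed. Elided.
    }
}

fn wallet_create_funding_tx(_value_sats: u64, _script_pubkey: &[u8]) -> Transaction {
    Transaction // wallet-side construction and signing, elided
}

// On a FundingGenerationReady-style event, the wallet builds the
// *full* funding transaction and hands it over, rather than just
// its outpoint:
fn on_funding_generation_ready(
    manager: &ChannelManager,
    temporary_channel_id: [u8; 32],
    value_sats: u64,
    output_script: &[u8],
) {
    let funding_tx = wallet_create_funding_tx(value_sats, output_script);
    manager.funding_transaction_generated(&temporary_channel_id, funding_tx);
}
```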
Electrum clients primarily operate in a world where they query (and
subscribe to notifications for) transactions by script_pubkeys.
They may never learn very much about the actual blockchain and
orient their events around individual transactions, not the
blockchain.
This makes our ChannelManager interface somewhat more amenable to
such a client by splitting `block_connected` into
`transactions_confirmed` and `update_best_block`. The first handles
checking the funding transaction and storing its height/confirmation
block, whereas the second handles funding_locked and reorg logic.
Sadly, this interface is somewhat easy to misuse - notifying the
channel of the funding transaction being reorganized out of the
chain is complicated when the only notification received is that
a new block is connected at a given height. This will be addressed
in a future commit.
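A sketch of how an Electrum-backed client might drive the split
interface, using stub types (the parameter lists here are
illustrative; the real methods take block headers and indexed
transaction data):
```
struct Header;
struct Tx;
struct ChannelManager;

impl ChannelManager {
    // Records that these transactions (e.g. our funding tx)
    // confirmed at `height`, saying nothing about the chain tip.
    fn transactions_confirmed(&self, _header: &Header, _txs: &[Tx], _height: u32) {}
    // Advances our view of the best chain: fires funding_locked once
    // enough confirmations have accumulated, and runs reorg logic.
    fn update_best_block(&self, _header: &Header, _height: u32) {}
}

// The client drives the manager from script_pubkey subscription
// results plus chain-tip notifications:
fn on_script_history(mgr: &ChannelManager, header: &Header, txs: &[Tx], height: u32) {
    mgr.transactions_confirmed(header, txs, height);
}
fn on_new_tip(mgr: &ChannelManager, header: &Header, height: u32) {
    mgr.update_best_block(header, height);
}
```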
Previously, we expected every block to be connected in-order,
allowing us to track confirmations by simply incrementing a counter
for each new block connected. In anticipation of moving to an
update-height model in the next commit, this moves to tracking
confirmations by simply storing the height at which the funding
transaction was confirmed.
This commit also corrects our "funding was reorganized out of the
best chain" heuristic: instead of a flat 6 blocks, it uses half the
required confirmation count as the point at which we force-close.
Even so, for low confirmation counts (e.g. 1 block), an ill-timed
reorg may still cause spurious force-closes, though that behavior
is not new in this commit.
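A sketch of the height-based tracking and the corrected heuristic,
with illustrative names:
```
struct FundingState {
    funding_height: Option<u32>, // height at which the funding tx confirmed
    minimum_depth: u32,          // confirmations required for funding_locked
}

impl FundingState {
    fn confirmations(&self, best_height: u32) -> u32 {
        match self.funding_height {
            Some(h) if best_height >= h => best_height - h + 1,
            _ => 0,
        }
    }

    // Corrected heuristic: once we have seen the funding tx confirm,
    // a reorg leaving it with fewer than half the required
    // confirmations is treated as "reorged out" and we force-close.
    // With minimum_depth == 1 this can still fire spuriously, as
    // noted above.
    fn funding_reorged_out(&self, best_height: u32) -> bool {
        self.funding_height.is_some()
            && self.confirmations(best_height) < (self.minimum_depth + 1) / 2
    }
}
```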
We allow users to configure the to_self_delay, which is analogous to
the cltv_expiry_delta in terms of its security context, so we should
allow users to specify both.
We similarly bound it on the lower end, but reduce that bound
somewhat now that it is configurable.
Useful for constructing route hints for private channels in invoices.
Co-authored-by: Valentine Wallace <vwallace@protonmail.com>
Co-authored-by: Antoine Riard <ariard@student.42.fr>
This will be used to expose forwarding info for route hints in the next commit.
Co-authored-by: Valentine Wallace <vwallace@protonmail.com>
Co-authored-by: Antoine Riard <ariard@student.42.fr>
This will be filled in in upcoming commits, then exposed in ChannelDetails
to allow constructing route hints for invoices.
Also update the cltv_expiry_delta comment in msgs::ChannelUpdate.
Co-authored-by: Valentine Wallace <vwallace@protonmail.com>
Co-authored-by: Antoine Riard <ariard@student.42.fr>
ChannelMonitor keeps track of the last block connected. However, it is
initialized with the default block hash, which is a problem if the
ChannelMonitor is serialized before a block is connected. Instead, pass
ChannelManager's last_block_hash, which is initialized with a "birthday"
hash, when creating a new ChannelMonitor.
Tracking the last block was only used to de-duplicate block_connected
calls, but this is no longer required as of the previous commit.
Further, the ChannelManager can pass the latest block hash when needing
to create a ChannelMonitor rather than have each Channel maintain an
up-to-date copy. This is implemented in the next commit.
Calling block_connected for the same block was necessary when clients
were expected to re-fetch filtered blocks with relevant transactions. At
the time, all listeners were notified of the connected block again for
re-scanning. Thus, de-duplication logic was put in place.
However, this requirement is no longer applicable for ChannelManager as
of commit bd39b20f64, so remove this
logic.
The instructions for `ChannelManagerReadArgs` indicate that you need
to connect blocks on a newly-deserialized `ChannelManager` in a
separate pass from the newly-deserialized `ChannelMonitors` as the
`ChannelManager` assumes the ability to update the monitors during
block [dis]connected events, saying that users need to:
```
4) Reconnect blocks on your ChannelMonitors
5) Move the ChannelMonitors into your local chain::Watch.
6) Disconnect/connect blocks on the ChannelManager.
```
This is fine for `ChannelManager`'s purpose, but is very awkward
for users. Notably, our new `lightning-block-sync` implemented
on-load reconnection in the most obvious (and performant) way -
connecting the blocks all at once, violating the
`ChannelManagerReadArgs` API.
Luckily, the events in question really don't need to be processed
with the same urgency as most channel monitor updates. The only two
monitor updates which can occur in block_[dis]connected are either
a) in block_connected, we identify a now-confirmed commitment
transaction, closing one of our channels, or
b) in block_disconnected, the funding transaction is reorganized
out of the chain, making our channel no longer funded.
In the case of (a), sending a monitor update which broadcasts a
conflicting holder commitment transaction is far from
time-critical, though we should still ensure we do it. In the case
of (b), we should try to broadcast our holder commitment transaction
when we can, but within a few minutes is fine on the scale of
block mining anyway.
Note that in both cases we cannot simply move the logic to
ChannelMonitor::block_[dis]connected, as this could result in us
broadcasting a commitment transaction from ChannelMonitor, then
revoking the now-broadcasted state, and only then receiving the
block_[dis]connected event in the ChannelManager.
Thus, we move both events into an internal event queue and process
them in timer_chan_freshness_every_min().
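A sketch of the deferral, where everything except the
timer_chan_freshness_every_min name is illustrative:
```
use std::sync::Mutex;

enum BackgroundEvent {
    // In the real code this carries the funding outpoint and the
    // ChannelMonitorUpdate to apply.
    ClosingMonitorUpdate,
}

struct ChannelManager {
    pending_background_events: Mutex<Vec<BackgroundEvent>>,
}

impl ChannelManager {
    // block_[dis]connected pushes onto pending_background_events
    // instead of applying the monitor update inline; the timer
    // drains the queue.
    fn timer_chan_freshness_every_min(&self) {
        let events: Vec<BackgroundEvent> =
            self.pending_background_events.lock().unwrap().drain(..).collect();
        for event in events {
            match event {
                BackgroundEvent::ClosingMonitorUpdate => {
                    // Apply the deferred update now. By this point the
                    // ChannelManager has fully processed the block
                    // event, so we cannot broadcast a commitment state
                    // we are about to revoke.
                }
            }
        }
    }
}
```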
Channel::force_shutdown previously always returned a
`ChannelMonitorUpdate`, but expected it to only be applied in the
case that it *also* returned a Some for the funding transaction
output.
This is confusing; instead we move the `ChannelMonitorUpdate`
inside the Option, making it hold a tuple instead.
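In sketch form, with stub types (the real function also returns the
failed-HTLC list and takes further arguments):
```
struct OutPoint;
struct ChannelMonitorUpdate;

struct Channel {
    funding_txo: Option<OutPoint>,
}

impl Channel {
    // The update now lives inside the Option: callers can no longer
    // get (and mistakenly apply) an update without a funding output
    // to go with it.
    fn force_shutdown(&mut self) -> Option<(OutPoint, ChannelMonitorUpdate)> {
        self.funding_txo.take().map(|txo| (txo, ChannelMonitorUpdate))
    }
}
```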
ChainMonitor accesses a set of ChannelMonitors behind a single Mutex.
As a result, update_channel operations cannot be parallelized. It also
requires using a RefCell around a ChannelMonitor when implementing
chain::Listen.
Moving the Mutex into ChannelMonitor avoids these problems and aligns it
better with other interfaces. Note, however, that get_funding_txo and
get_outputs_to_watch now clone the underlying data rather than returning
references.
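A sketch of the resulting layout, with fields simplified:
```
use std::sync::Mutex;

// The Mutex moves inside ChannelMonitor, so a ChainMonitor can
// update different channels in parallel without one big lock around
// the whole monitor set.
struct ChannelMonitorImpl {
    outputs_to_watch: Vec<Vec<u8>>,
}

pub struct ChannelMonitor {
    inner: Mutex<ChannelMonitorImpl>,
}

impl ChannelMonitor {
    // Accessors clone rather than return references, since a
    // reference could not outlive the lock guard.
    pub fn get_outputs_to_watch(&self) -> Vec<Vec<u8>> {
        self.inner.lock().unwrap().outputs_to_watch.clone()
    }
}
```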
* Implemented protocol.
* Made feature optional.
* Verify that the default value is true.
* Verify that on shutdown,
if Channel.supports_shutdown_anysegwit is enabled,
the script can be a witness program.
* Added a test that verifies that a scriptpubkey
for an unreleased segwit version is handled successfully.
* Added a test that verifies that
if a node has option_shutdown_anysegwit disabled,
a scriptpubkey with an unreleased segwit version on shutdown
throws an error.
* Added peer InitFeatures to handle_shutdown
* Check if shutdown script is valid when given upfront.
* Added a test to verify that an invalid script results in an error.
* Added a test to check that if a segwit script with version 0 is provided,
the updated anysegwit check detects it and returns unsupported.
* An empty script is only allowed when sent as upfront shutdown script,
so make sure that check is only done for accept/open_channel situations.
* Instead of reimplementing a variant of is_witness_script,
  just call it and verify that the witness version is not 0
  (see the sketch below).
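As referenced above, a sketch of the anysegwit check written out by
hand against the BIP 141 witness-program rules (the real code calls
the bitcoin crate's script helpers rather than re-deriving this):
```
fn is_valid_anysegwit_shutdown_script(script: &[u8]) -> bool {
    // A witness program is one version opcode followed by a single
    // direct push of a 2-to-40-byte program, so 4..=42 bytes total.
    if script.len() < 4 || script.len() > 42 {
        return false;
    }
    // OP_1..OP_16 (0x51..0x60): a future segwit version. Version 0
    // (OP_0) is deliberately excluded here; v0 scripts must be
    // exactly p2wpkh or p2wsh and are validated separately.
    if script[0] < 0x51 || script[0] > 0x60 {
        return false;
    }
    let push_len = script[1] as usize;
    push_len == script.len() - 2 && (2..=40).contains(&push_len)
}
```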