There is a possible race condition when both the latest block hash and
height are needed. Combine these in one struct and place them behind a
single lock.
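A minimal sketch of the idea (type and field names are illustrative,
not the actual API): keeping the hash and height in one struct behind
a single lock means a reader always observes a consistent pair.
```rust
use std::sync::RwLock;

struct BestBlock {
    block_hash: [u8; 32],
    height: u32,
}

struct ChainState {
    best_block: RwLock<BestBlock>,
}

impl ChainState {
    fn best_block_info(&self) -> ([u8; 32], u32) {
        // A single read-lock yields a hash/height pair observed together,
        // rather than two reads that may race with a block connection.
        let best = self.best_block.read().unwrap();
        (best.block_hash, best.height)
    }
}
```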
Instead of relying on the user to ensure the funding transaction is
correct (and panicking when it is confirmed), we should check that it
is correct when it is generated. By taking the full funding
transaction from the user on generation, we can also handle
broadcasting for them instead of doing so via an event.
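An illustrative sketch of the check (the function name and signature
are assumptions, not the exact ChannelManager API): because the full
transaction is handed in up front, the expected funding output can be
verified at generation time and the transaction broadcast on the
user's behalf later.
```rust
use bitcoin::{Script, Transaction};

// Hypothetical free-standing helper; the real method lives on the
// channel-management side and takes a channel identifier as well.
fn funding_transaction_generated(
    expected_funding_script: &Script,
    funding_transaction: &Transaction,
) -> Result<u16, &'static str> {
    // Reject a malformed funding transaction now, instead of panicking
    // once it confirms on-chain.
    funding_transaction
        .output
        .iter()
        .position(|txo| txo.script_pubkey == *expected_funding_script)
        .map(|idx| idx as u16)
        .ok_or("funding transaction does not pay to the channel's funding script")
    // On success, the transaction can be stored and broadcast once the
    // counterparty's funding_signed arrives.
}
```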
When we force-close a channel, for whatever reason, it is nice to
send an error message to our peer. This allows them to close the
channel on their end instead of trying to send through it and
failing. Further, it may induce them to broadcast their commitment
transaction, possibly getting that confirmed and saving us on fees.
This commit adds a few more cases where we should have been sending
error messages but weren't. It also includes an almost-global
replace in tests of the second argument in
`check_closed_broadcast!()` from false to true (indicating an error
message is expected). There are only a few exceptions, notably
those where the closure is the result of our counterparty having
sent *us* an error message.
Previously, we expected every block to be connected in-order,
allowing us to track confirmations by simply incrementing a counter
for each new block connected. In anticipation of moving to an
update-height model in the next commit, this moves to tracking
confirmations by simply storing the height at which the funding
transaction was confirmed.
This commit also corrects our "funding was reorganized out of the
best chain" heuristic: instead of a flat 6 blocks, it uses half the
required confirmation count as the point at which we force-close.
Even so, for low confirmation counts (e.g., 1 block), an ill-timed
reorg may still cause spurious force-closes, though that behavior
is not new in this commit.
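A minimal sketch of the new tracking (field names are illustrative):
with the confirmation height stored, the confirmation count can be
recomputed from any best height rather than incremented per block.
```rust
struct Channel {
    // Height at which the funding transaction confirmed, if it has.
    funding_tx_confirmation_height: Option<u32>,
}

impl Channel {
    fn funding_confirmations(&self, best_height: u32) -> u32 {
        match self.funding_tx_confirmation_height {
            Some(conf_height) if best_height >= conf_height => best_height - conf_height + 1,
            _ => 0,
        }
    }
}
```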
While it's not necessarily a common operation on a running node,
`get_our_node_id()` is used incredibly heavily in tests, and there
is no reason to not eat the extra ~64 bytes to just cache it.
This expands the assertions on block ordering to apply to
`#[cfg(test)]` builds in addition to normal builds, requiring that
unit and functional tests have syntactically-valid (i.e. the previous
block hash pointer and the heights match the blocks) blockchains.
This requires a reasonably nontrivial diff in the functional tests,
but the changes are mostly straightforward.
Sadly the connected-in-order tests have to be skipped in our normal
test suite as many tests violate it. Luckily we can still enforce
it in the tests which run in other crates.
Co-authored-by: Matt Corallo <git@bluematt.me>
Co-authored-by: Jeffrey Czyz <jkczyz@gmail.com>
We allow users to configure the to_self_delay, which is analogous to
the cltv_expiry_delta in terms of its security context, so we should
allow users to specify both.
We similarly bound it on the lower end, but reduce that bound
somewhat now that it is configurable.
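A hedged sketch of the knob (field and constant names are
illustrative, not the exact config fields): the delay becomes
user-configurable while a lower bound is still enforced.
```rust
pub struct ChannelHandshakeConfig {
    /// Blocks the counterparty must wait to claim their funds after
    /// broadcasting their (non-revoked) commitment transaction.
    pub our_to_self_delay: u16,
}

const MIN_TO_SELF_DELAY: u16 = 144; // illustrative lower bound (~1 day)

fn validate_to_self_delay(config: &ChannelHandshakeConfig) -> Result<(), &'static str> {
    if config.our_to_self_delay < MIN_TO_SELF_DELAY {
        return Err("configured to_self_delay is below the minimum we enforce");
    }
    Ok(())
}
```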
Useful for constructing route hints for private channels in invoices.
Co-authored-by: Valentine Wallace <vwallace@protonmail.com>
Co-authored-by: Antoine Riard <ariard@student.42.fr>
This will be used to expose forwarding info for route hints in the next commit.
Co-authored-by: Valentine Wallace <vwallace@protonmail.com>
Co-authored-by: Antoine Riard <ariard@student.42.fr>
`wait` doesn't capture enough of what's going on, but also Java
doesn't accept methods just called `wait`, as it conflicts with
existing sync primitives on all Objects.
When a block is disconnected, the hash of the disconnected block was
used to update the last connected block. However, this amounts to a
no-op because these hashes should be equal. Successive disconnections
would update the hash but leave it one block off.
Normally, this is not a problem because the last block_disconnected should
be followed by block_connected since the former is triggered by a chain
re-org. However, this assumes the user calls the API correctly and that
no failure occurs that would prevent block_connected from being called
(e.g., if fetching the connected block fails).
Instead, update the last block hash with the disconnected block's
previous block hash.
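A minimal sketch of the fix, assuming the rust-bitcoin header type is
available at the call site:
```rust
use bitcoin::{BlockHash, BlockHeader};
use std::sync::RwLock;

struct Tracker {
    last_block_hash: RwLock<BlockHash>,
}

impl Tracker {
    fn block_disconnected(&self, header: &BlockHeader) {
        // Point at the new chain tip (the parent of the disconnected
        // block) instead of re-storing the disconnected block's hash,
        // so repeated disconnections stay consistent even if no
        // block_connected call follows.
        *self.last_block_hash.write().unwrap() = header.prev_blockhash;
    }
}
```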
ChannelManager reads channel_state and last_block_hash while processing
funding_created and funding_signed messages. It writes these while
processing block_connected and block_disconnected events. To avoid any
potential deadlocks, have each site hold these locks independently of
one another and in a consistent order.
Additionally, use a RwLock instead of Mutex for last_block_hash since
exclusive access is not needed in funding_created / funding_signed and
cannot be guaranteed in block_connected / block_disconnected because of
the reads in the former.
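An illustrative sketch of the locking discipline (fields simplified):
the two locks are never held at the same time, they are taken in a
consistent order, and last_block_hash only needs shared read access
on the message-handling paths.
```rust
use std::sync::{Mutex, RwLock};

struct ChannelManager {
    channel_state: Mutex<Vec<u8>>,     // stand-in for the channel map
    last_block_hash: RwLock<[u8; 32]>, // written only on block (dis)connection
}

impl ChannelManager {
    fn funding_created(&self) {
        {
            let _channel_state = self.channel_state.lock().unwrap();
            // ... handle the message while holding only channel_state ...
        } // channel_state released before touching last_block_hash
        let _block_hash = *self.last_block_hash.read().unwrap();
        // ... use the hash without holding channel_state ...
    }
}
```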
ChannelMonitor keeps track of the last block connected. However, it is
initialized with the default block hash, which is a problem if the
ChannelMonitor is serialized before a block is connected. Instead, pass
ChannelManager's last_block_hash, which is initialized with a "birthday"
hash, when creating a new ChannelMonitor.
Tracking the last block was only used to de-duplicate block_connected
calls, but this is no longer required as of the previous commit.
Further, the ChannelManager can pass the latest block hash when needing
to create a ChannelMonitor rather than have each Channel maintain an
up-to-date copy. This is implemented in the next commit.
When ChannelMonitors are persisted, they need to store the most recent
block hash seen. However, for newly created channels the default block
hash is used. If persisted before a block is connected, the funding
output may be missed when syncing after a restart. Instead, initialize
ChannelManager with a "birthday" hash so it can be used later when
creating channels.
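A minimal sketch of the idea (the constructor shape is illustrative):
the caller supplies the current chain tip, so a ChannelMonitor created
before any block is connected still records a real starting hash.
```rust
use bitcoin::BlockHash;

struct ChannelManager {
    last_block_hash: BlockHash,
}

impl ChannelManager {
    fn new(chain_tip_at_creation: BlockHash) -> Self {
        // "Birthday" hash: the tip at the time the manager was created,
        // not the all-zero default.
        ChannelManager { last_block_hash: chain_tip_at_creation }
    }
}
```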
Creates a MessageSendEvent for sending a reply_channel_range message.
This event will be fired when handling inbound query_channel_range
messages in the NetGraphMessageHandler.
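A hedged sketch of the event shape (stand-in types; the real event
carries the peer's public key and the BOLT 7 wire message struct):
```rust
enum MessageSendEvent {
    SendReplyChannelRange {
        node_id: [u8; 33], // stand-in for the peer's PublicKey
        msg: ReplyChannelRange,
    },
}

// Stand-in for the reply_channel_range wire message fields.
struct ReplyChannelRange {
    chain_hash: [u8; 32],
    first_blocknum: u32,
    number_of_blocks: u32,
    sync_complete: bool,
    short_channel_ids: Vec<u64>,
}
```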
The instructions for `ChannelManagerReadArgs` indicate that you need
to connect blocks on a newly-deserialized `ChannelManager` in a
separate pass from the newly-deserialized `ChannelMonitors` as the
`ChannelManager` assumes the ability to update the monitors during
block [dis]connected events, saying that users need to:
```
4) Reconnect blocks on your ChannelMonitors
5) Move the ChannelMonitors into your local chain::Watch.
6) Disconnect/connect blocks on the ChannelManager.
```
This is fine for `ChannelManager`'s purpose, but is very awkward
for users. Notably, our new `lightning-block-sync` implemented
on-load reconnection in the most obvious (and performant) way -
connecting the blocks all at once, violating the
`ChannelManagerReadArgs` API.
Luckily, the events in question really don't need to be processed
with the same urgency as most channel monitor updates. The only two
monitor updates which can occur in block_[dis]connected are either
a) in block_connected, we identify a now-confirmed commitment
transaction, closing one of our channels, or
b) in block_disconnected, the funding transaction is reorganized
out of the chain, making our channel no longer funded.
In the case of (a), sending a monitor update which broadcasts a
conflicting holder commitment transaction is far from
time-critical, though we should still ensure we do it. In the case
of (b), we should try to broadcast our holder commitment transaction
when we can, but within a few minutes is fine on the scale of
block mining anyway.
Note that in both cases we cannot simply move the logic to
ChannelMonitor::block_[dis]connected, as this could result in us
broadcasting a commitment transaction from ChannelMonitor, then
revoking the now-broadcasted state, and only then receiving the
block_[dis]connected event in the ChannelManager.
Thus, we move both events into an internal event queue and process
them in timer_chan_freshness_every_min().
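A minimal sketch of the pattern (names illustrative): updates produced
during block_[dis]connected are queued and drained from the timer
method instead of being applied inline.
```rust
use std::sync::Mutex;

struct PendingMonitorEvent; // stand-in for the queued update

struct ChannelManager {
    pending_background_events: Mutex<Vec<PendingMonitorEvent>>,
}

impl ChannelManager {
    fn block_disconnected(&self) {
        // Queue the ChannelMonitorUpdate rather than applying it here.
        self.pending_background_events.lock().unwrap().push(PendingMonitorEvent);
    }

    fn timer_chan_freshness_every_min(&self) {
        // Drain and apply the queued updates; a few minutes of latency
        // is fine on the scale of block mining.
        for _event in self.pending_background_events.lock().unwrap().drain(..) {
            // ... apply the update via chain::Watch ...
        }
    }
}
```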
Channel::force_shutdown previously always returned a
`ChannelMonitorUpdate`, but expected it to only be applied in the
case that it *also* returned a Some for the funding transaction
output.
This is confusing; instead, we move the `ChannelMonitorUpdate`
inside the Option, making it hold a tuple instead.
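A hedged sketch of the shape change (stand-in types): the update can
no longer be returned without the funding output it applies to.
```rust
struct OutPoint;
struct ChannelMonitorUpdate;

// Before: the update was always returned, with a separate Option for
// the funding transaction output.
type ForceShutdownResultOld = (ChannelMonitorUpdate, Option<OutPoint>);
// After: the update only exists alongside the funding output.
type ForceShutdownResultNew = Option<(OutPoint, ChannelMonitorUpdate)>;
```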
If the ChannelManager never receives any blocks, it'll return a default blockhash
on deserialization. It's preferable for this to be an Option instead.
Add a utility for syncing a set of chain listeners to a common chain
tip. This is required before creating an SpvClient when the chain
listener used with the client is actually a set of listeners, each of
which may have left off at a different block. This would occur when
the listeners had been persisted individually at different frequencies
(e.g., a ChainMonitor's individual ChannelMonitors).
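A conceptual sketch only (names are illustrative, not the exact
lightning-block-sync API, and reorg handling is omitted): each listener
is paired with the hash of the last block it saw, the blocks it missed
are replayed to it, and only then is the set combined and handed to the
SpvClient.
```rust
trait Listen {
    fn block_connected(&self, block_hash: [u8; 32], height: u32);
}

fn synchronize_listeners(
    chain: &[([u8; 32], u32)],               // best chain as (hash, height), oldest first
    listeners: Vec<([u8; 32], &dyn Listen)>, // (last block hash seen, listener)
) {
    for (last_seen_hash, listener) in listeners {
        // Replay every block this listener missed, bringing it to the
        // common chain tip.
        let resume_at = chain
            .iter()
            .position(|(hash, _)| *hash == last_seen_hash)
            .map(|idx| idx + 1)
            .unwrap_or(0);
        for (hash, height) in &chain[resume_at..] {
            listener.block_connected(*hash, *height);
        }
    }
}
```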