Instead of relying on the user to ensure the funding transaction is
correct (and panicking when it is confirmed), we should check it is
correct when it is generated. By taking the full funding transaction
from the user on generation, we can also handle broadcasting for
them instead of doing so via an event.
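For example, the intended flow now looks roughly like the sketch
below (the `wallet` helper is hypothetical and ChannelManager's
generic parameters are elided; only `FundingGenerationReady` and
`funding_transaction_generated` come from this change):

```rust
use lightning::util::events::Event;

// Sketch only: hand the *complete* funding transaction back to the
// ChannelManager so it can sanity-check the funding output and broadcast the
// transaction itself, rather than doing so via an event.
fn handle_event(channel_manager: &ChannelManager, wallet: &Wallet, event: Event) {
    if let Event::FundingGenerationReady {
        temporary_channel_id, channel_value_satoshis, output_script, ..
    } = event {
        // Hypothetical wallet call: build and sign a transaction paying
        // `channel_value_satoshis` to `output_script`.
        let funding_tx = wallet.create_funding_tx(channel_value_satoshis, &output_script);
        channel_manager.funding_transaction_generated(&temporary_channel_id, funding_tx);
    }
}
```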
When we force-close a channel, for whatever reason, it is nice to
send an error message to our peer. This allows them to close the
channel on their end instead of trying to send through it and
failing. Further, it may induce them to broadcast their commitment
transaction, possibly getting that confirmed and saving us on fees.
This commit adds a few more cases where we should have been sending
error messages but weren't. It also includes an almost-global
replace in tests of the second argument in
`check_closed_broadcast!()` from false to true (indicating an error
message is expected). There are only a few exceptions, notably
those where the closure is the result of our counterparty having
sent *us* an error message.
Electrum clients primarily operate in a world where they query (and
subscribe to notifications for) transactions by script_pubkeys.
They may never learn very much about the actual blockchain, and they
orient their events around individual transactions rather than the
chain itself.
This makes our ChannelManager interface somewhat more amenable to
such a client by splitting `block_connected` into
`transactions_confirmed` and `update_best_block`. The first handles
checking the funding transaction and storing its height/confirmation
block, whereas the second handles funding_locked and reorg logic.
Sadly, this interface is somewhat easy to misuse - notifying the
channel of the funding transaction being reorganized out of the
chain is complicated when the only notification received is that
a new block is connected at a given height. This will be addressed
in a future commit.
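As a rough sketch of how an Electrum-backed client might drive the
split interface (the `electrum` calls are placeholders and the exact
argument order is paraphrased):

```rust
// Report the confirmed transactions we care about first, then the new tip.
let (block_header, block_height, txdata) = electrum.confirmed_txs_for_watched_scripts();
channel_manager.transactions_confirmed(&block_header, &txdata, block_height);

let (tip_header, tip_height) = electrum.best_tip();
channel_manager.update_best_block(&tip_header, tip_height);
```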
Previously, we expected every block to be connected in-order,
allowing us to track confirmations by simply incrementing a counter
for each new block connected. In anticipation of moving to an
update-height model in the next commit, this moves to tracking
confirmations by simply storing the height at which the funding
transaction was confirmed.
This commit also corrects our "funding was reorganized out of the
best chain" heuristic: instead of a flat 6 blocks, it uses half the
required confirmation count as the point at which we force-close.
Even still, for low confirmation counts (eg 1 block), an ill-timed
reorg may still cause spurious force-closes, though that behavior
is not new in this commit.
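Illustrative arithmetic only (the field names here are not the real
ones):

```rust
// Confirmations are derived from stored heights rather than bumped on every
// block_connected call.
let funding_confirmations = if best_block_height >= funding_confirmed_at_height {
    best_block_height - funding_confirmed_at_height + 1
} else {
    0
};

// Reorg heuristic: once locked in, force-close if a reorg drops us below half
// the confirmation count we originally required, rather than a flat 6 blocks.
if channel_was_locked_in && funding_confirmations < required_confirmations / 2 {
    // force-close and broadcast our latest commitment transaction
}
```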
While it's not necessarily a common operation on a running node,
`get_our_node_id()` is used incredibly heavily in tests, and there
is no reason to not eat the extra ~64 bytes to just cache it.
chain::Filter::register_output may return an in-block dependent
transaction that spends the output. Test the scenario where the txdata
given to ChainMonitor::block_connected includes a commitment transaction
whose HTLC output is spent in the same block, though the spending
transaction is not included in txdata.
Instead, it is returned by chain::Filter::register_output when given the
commitment transaction's HTLC output. This is a common scenario for
Electrum clients, which are provided filtered txdata.
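A hypothetical Filter backed by an Electrum-style index might look
like the sketch below (the trait signatures are paraphrased from this
change, and the spend cache is invented for illustration):

```rust
use std::collections::HashMap;
use std::sync::Mutex;

use bitcoin::{Script, Transaction, Txid};
use lightning::chain::{Filter, WatchedOutput};

// Spends we have already seen for a watched script are handed back as
// (index-in-block, transaction) so the ChainMonitor can process them as if
// they had been part of the connected block's txdata.
struct ElectrumFilter {
    known_spends: Mutex<HashMap<Script, (usize, Transaction)>>,
}

impl Filter for ElectrumFilter {
    fn register_tx(&self, _txid: &Txid, _script_pubkey: &Script) {}

    fn register_output(&self, output: WatchedOutput) -> Option<(usize, Transaction)> {
        self.known_spends.lock().unwrap().get(&output.script_pubkey).cloned()
    }
}
```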
This expands the assertions on block ordering to apply to
`#[cfg(test)]` builds in addition to normal builds, requiring that
unit and functional tests use syntactically-valid blockchains (ie
ones whose previous-block hash pointers line up and whose heights
match their position in the chain).
This requires a reasonably nontrivial diff in the functional tests;
however, the changes are mostly straightforward.
Many functional tests rely on being able to call block_connected
arbitrarily, jumping back in time to confirm a transaction at a
specific height. Instead, this takes us one step towards having a
well-formed blockchain in the functional tests.
We also take this opportunity to reduce the number of blocks
connected during tests, requiring a number of constant tweaks in
various functional tests.
Co-authored-by: Valentine Wallace <vwallace@protonmail.com>
Co-authored-by: Matt Corallo <git@bluematt.me>
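For illustration, the sort of "syntactically valid" chain these
assertions expect looks roughly like this (values other than the
header linkage are the usual test dummies):

```rust
use bitcoin::blockdata::block::{Block, BlockHeader};
use bitcoin::blockdata::constants::genesis_block;
use bitcoin::network::constants::Network;

// Each connected block commits to the hash of the previous one and heights
// increase one at a time, instead of jumping back to confirm a transaction
// at an arbitrary height.
let mut prev_hash = genesis_block(Network::Testnet).header.block_hash();
let mut chain = Vec::new();
for _ in 0..10 {
    let block = Block {
        header: BlockHeader {
            version: 0x2000_0000,
            prev_blockhash: prev_hash,
            merkle_root: Default::default(),
            time: 42,
            bits: 42,
            nonce: 42,
        },
        txdata: Vec::new(),
    };
    prev_hash = block.header.block_hash();
    chain.push(block);
}
// connect `chain` in order, at heights 1..=10
```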
Sadly the connected-in-order tests have to be skipped in our normal
test suite as many tests violate it. Luckily we can still enforce
it in the tests which run in other crates.
Co-authored-by: Matt Corallo <git@bluematt.me>
Co-authored-by: Jeffrey Czyz <jkczyz@gmail.com>
We allow users to configure the to_self_delay, which is analogous to
the cltv_expiry_delta in terms of its security context, so we should
allow users to specify both.
We similarly bound it on the lower end, but reduce that bound
somewhat now that it is configurable.
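A minimal sketch of the new knob (treat the exact config path and
field name as illustrative):

```rust
use lightning::util::config::UserConfig;

let mut config = UserConfig::default();
// Require roughly one week of blocks before our counterparty can claim
// their to_self output after a unilateral close.
config.own_channel_config.our_to_self_delay = 6 * 24 * 7; // 1008 blocks
```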
Useful for constructing route hints for private channels in invoices.
Co-authored-by: Valentine Wallace <vwallace@protonmail.com>
Co-authored-by: Antoine Riard <ariard@student.42.fr>
This will be used to expose forwarding info for route hints in the next commit.
Co-authored-by: Valentine Wallace <vwallace@protonmail.com>
Co-authored-by: Antoine Riard <ariard@student.42.fr>
This will be filled in in upcoming commits, then exposed in ChannelDetails
to allow constructing route hints for invoices.
Also update the cltv_expiry_delta comment in msgs::ChannelUpdate.
Co-authored-by: Valentine Wallace <vwallace@protonmail.com>
Co-authored-by: Antoine Riard <ariard@student.42.fr>
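The end goal, very roughly (the `counterparty_forwarding_info` shape
and field names are assumptions about what these commits add, not a
confirmed API):

```rust
// Pull the counterparty's forwarding parameters for a private channel out of
// ChannelDetails and use them as the last hop of an invoice route hint.
for chan in channel_manager.list_usable_channels() {
    if let (Some(scid), Some(fwd)) = (chan.short_channel_id, chan.counterparty_forwarding_info) {
        let _hint_data = (
            chan.remote_network_id,         // the node the payer must reach first
            scid,                           // the channel to route the last hop over
            fwd.fee_base_msat,
            fwd.fee_proportional_millionths,
            fwd.cltv_expiry_delta,
        );
        // ... feed `_hint_data` into the invoice's route hints ...
    }
}
```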
We currently only use it to override the graph-specific features
returned in the route, though we should also use it to enable or
disable MPP.
Note that tests which relied on MPP behavior have had all of their
get_route calls upgraded to provide the MPP flag.
In the past we skipped doing this since invoice parsing occurs in a
different crate. However, we need to accept InvoiceFeatures in routing
now that we support MPP route collection, to detect if we can select
multiple paths or not. Further, we should probably take
rust-lightning-invoice as either a module or a subcrate in this repo.
This is largely useful for bindings, and the off-github discussion
around #814 concluded these should be pub, but the PR was not
updated to capture this. Now that the bindings support generation
for the structs, expose them.
`wait` doesn't capture enough of what's going on, but also Java
doesn't accept methods just called `wait`, as it conflicts with
existing sync primitives on all Objects.
When a block is disconnected, the hash of the disconnected block was
used to update the last connected block. However, this amounts to a
no-op because these hashes should be equal. Successive disconnections
would update the hash but leave it one block off.
Normally, this is not a problem because the last block_disconnected should
be followed by block_connected since the former is triggered by a chain
re-org. However, this assumes the user calls the API correctly and that
no failure occurs that would prevent block_connected from being called
(e.g., if fetching the connected block fails).
Instead, update the last block hash with the disconnected block's
previous block hash.
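In sketch form (structure simplified; only the assignment is the
point):

```rust
fn block_disconnected(&self, header: &BlockHeader) {
    // was: *self.last_block_hash.lock().unwrap() = header.block_hash();
    *self.last_block_hash.lock().unwrap() = header.prev_blockhash;
    // ...
}
```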
ChannelManager reads channel_state and last_block_hash while processing
funding_created and funding_signed messages. It writes these while
processing block_connected and block_disconnected events. To avoid any
potential deadlocks, have each site hold these locks independently of one
another and in a consistent order.
Additionally, use a RwLock instead of Mutex for last_block_hash since
exclusive access is not needed in funding_created / funding_signed and
cannot be guaranteed in block_connected / block_disconnected because of
the reads in the former.
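A toy illustration of that discipline (types and names are simplified
stand-ins, not the real ChannelManager fields):

```rust
use std::sync::{Mutex, RwLock};

struct Manager {
    channel_state: Mutex<Vec<u64>>,
    last_block_hash: RwLock<[u8; 32]>,
}

impl Manager {
    fn handle_funding_message(&self) {
        // Copy out the block hash and drop the read guard before touching
        // channel_state, so message handling never nests the two locks.
        let block_hash = *self.last_block_hash.read().unwrap();
        let mut channels = self.channel_state.lock().unwrap();
        channels.push(block_hash[0] as u64);
    }

    fn block_connected(&self) {
        // Chain events likewise take one lock at a time, in a fixed order.
        self.channel_state.lock().unwrap().clear();
        *self.last_block_hash.write().unwrap() = [1u8; 32];
    }
}
```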