To enable gossip_queries message decoding, this commit implements the
wire module's Encoding trait for each message type. It also adds these
messages to the wire module's Message enum and to its read function so
that a buffer containing them can be decoded.
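As a rough illustration of the shape of the change (struct and trait names here are ours, not the crate's; the type numbers are the BOLT #7 values): each gossip_queries message gets a wire type number, a variant in the Message enum, and an arm in read().

```rust
use std::io::{self, Read};

struct QueryChannelRange {}  // chain_hash, first_blocknum, number_of_blocks, ...
struct ReplyChannelRange {}  // chain_hash, first_blocknum, ..., short_channel_ids

trait Encoding { const TYPE: u16; }  // two-byte type prefix on the wire
impl Encoding for QueryChannelRange { const TYPE: u16 = 263; }
impl Encoding for ReplyChannelRange { const TYPE: u16 = 264; }

enum Message {
    QueryChannelRange(QueryChannelRange),
    ReplyChannelRange(ReplyChannelRange),
    Unknown(u16),
}

fn read<R: Read>(buf: &mut R) -> Result<Message, io::Error> {
    let mut ty = [0u8; 2];
    buf.read_exact(&mut ty)?;
    match u16::from_be_bytes(ty) {
        t if t == QueryChannelRange::TYPE =>
            Ok(Message::QueryChannelRange(QueryChannelRange {})), // decode body here
        t if t == ReplyChannelRange::TYPE =>
            Ok(Message::ReplyChannelRange(ReplyChannelRange {})),
        t => Ok(Message::Unknown(t)),
    }
}
```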
Support for the gossip_queries feature flag (bits 6/7) is added to the
Features struct. This feature is available in the Init and Node
contexts. The gossip_queries feature is not fully implemented so this
feature is disabled when sent to peers in the Init message.
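For illustration, a sketch of the feature-bit bookkeeping (this is not the real Features struct, just the idea): gossip_queries occupies bits 6 (required) and 7 (optional) per BOLT #9, both of which live in byte 0 of the flags.

```rust
struct Features { flags: Vec<u8> }

impl Features {
    fn supports_gossip_queries(&self) -> bool {
        self.flags.get(0).map_or(false, |&b| (b & ((1 << 6) | (1 << 7))) != 0)
    }
    fn set_gossip_queries_optional(&mut self) {
        if self.flags.is_empty() { self.flags.push(0); }
        self.flags[0] |= 1 << 7;
    }
    // gossip_queries isn't fully implemented yet, so the bits are cleared
    // from the features we actually send to peers in Init.
    fn clear_gossip_queries(&mut self) {
        if let Some(b) = self.flags.get_mut(0) { *b &= !((1 << 6) | (1 << 7)); }
    }
}
```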
This takes the now-public `NetworkGraph` message handling functions
and splits them all into two methods - one which takes a required
Secp256k1 context and verifies signatures, and one which takes only
the unsigned part of the message and no Secp256k1 context.
This both clarifies and simplifies the public API, all without
duplicating code.
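A sketch of the resulting API shape, with the Secp256k1 context abstracted as a verification closure (names and signatures here are illustrative, not the exact ones in the crate): the signed method verifies and then delegates, so the graph-update logic exists exactly once.

```rust
struct UnsignedChannelUpdate { /* short_channel_id, timestamp, fees, ... */ }
struct ChannelUpdate { signature: Vec<u8>, contents: UnsignedChannelUpdate }
struct NetworkGraph { /* channels, nodes */ }

#[derive(Debug)]
enum GraphError { InvalidSignature }

impl NetworkGraph {
    /// Signature-checking variant: requires a secp context (stood in for here
    /// by `verify`) and rejects the update if the signature is bad.
    pub fn update_channel<F>(&mut self, msg: &ChannelUpdate, verify: F) -> Result<(), GraphError>
    where F: Fn(&[u8], &UnsignedChannelUpdate) -> bool {
        if !verify(&msg.signature, &msg.contents) {
            return Err(GraphError::InvalidSignature);
        }
        self.update_channel_unsigned(&msg.contents)
    }

    /// Unsigned variant: no secp context and no signature check, e.g. for the
    /// routing fuzzer where signature validation is deliberately skipped.
    pub fn update_channel_unsigned(&mut self, _msg: &UnsignedChannelUpdate) -> Result<(), GraphError> {
        // ... apply the update to the in-memory graph ...
        Ok(())
    }
}
```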
Finally, this adds an assertion in the Router fuzzer to make sure
the constants used for message deserialization are correct.
This makes the public utility methods in `NetworkGraph` able to do
UTXO lookups, a la `NetworkMsgHandler`'s `RoutingMessageHandler`
implementation, slightly simplifying the public interface.
We also take this opportunity to verify signatures before calling
out to UTXO lookups, under the "do actions in order of
cheapest-to-most-expensive to reduce DoS surface" principle.
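A tiny sketch of that ordering (helper names are hypothetical): the cheap local signature check runs first, and only then the comparatively expensive UTXO lookup, so garbage messages can't make us do expensive work.

```rust
fn handle_channel_announcement(
    check_signatures: impl Fn() -> Result<(), &'static str>,
    lookup_utxo: impl Fn() -> Result<u64, &'static str>, // returns capacity in sats
) -> Result<u64, &'static str> {
    check_signatures()?; // cheap: pure CPU, no I/O
    lookup_utxo()        // expensive: may hit disk or a remote node
}
```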
These functions were created but previously not exported; however,
they are useful if we want to skip signature checking when accepting
routing messages (which we really should be doing in the routing
fuzzer).
Like the previous commit for channel-closed monitor updates for
inbound channels during processing of a funding_created message,
this resolves a more general issue for closing outbound channels
which have sent a funding_created but not yet received a
funding_signed.
This issue was also detected by full_stack_target.
To make similar issues easier to detect in testing and fuzzing, an
additional assertion is added to panic on updates to a channel
monitor before registering it.
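For illustration, roughly what the test-utility assertion looks like (names are made up): applying an update to a channel monitor that was never registered is treated as a bug and panics, making ordering mistakes loud in tests and fuzzing.

```rust
use std::collections::HashMap;

struct TestChainMonitor { monitors: HashMap<[u8; 32], String /* monitor state */> }

impl TestChainMonitor {
    fn add_monitor(&mut self, channel_id: [u8; 32], monitor: String) {
        self.monitors.insert(channel_id, monitor);
    }
    fn update_monitor(&mut self, channel_id: [u8; 32], update: &str) {
        let monitor = self.monitors.get_mut(&channel_id)
            .expect("update applied to a channel monitor before it was registered");
        monitor.push_str(update);
    }
}
```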
The full_stack_target managed to find a bug where, if we receive
a funding_created message which has a channel_id identical to an
existing channel, we'll end up
(a) having the monitor update for the new channel fail (due to
duplicate outpoint),
(b) creating a monitor update for the new channel as we
force-close it,
(c) panicking because the force-close monitor update is applied to
the original channel and is considered out-of-order.
Obviously we shouldn't be creating a force-close monitor update for
a channel which can never appear on chain, so we stop doing so here and
add a test which previously failed and checks a few
duplicate-channel-id cases.
In updating the router fuzzer, it was discovered that a remote peer
can cause us to overflow while multiplying the channel capacity value.
Since the value should never exceed 21 million BTC, we just add a
check for that.
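A sketch of that sanity check (the constant and field names are illustrative): reject any claimed capacity above total bitcoin supply before multiplying it by anything, so the later math cannot overflow.

```rust
const MAX_VALUE_MSAT: u64 = 21_000_000 * 100_000_000 * 1_000; // 21M BTC in msat

fn effective_capacity_msat(capacity_sats: u64) -> Option<u64> {
    let msat = capacity_sats.checked_mul(1_000)?;
    if msat > MAX_VALUE_MSAT { return None; } // peer is lying or the message is junk
    Some(msat)
}
```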
We had code in the router to support sending a payment via a single
hop across channels exclusively provided by the next-/last-hop hints.
However, in updating the fuzzer, I noted that this case not only
didn't work, but panicked in some cases.
Here, we both fix the panic, as well as write a new test which
ensures we don't break support for such routing in the future.
If we receive a preimage for an outgoing HTLC that solves an output on a
backwards force-closed channel, we need to claim the output on-chain.
Note that this commit also gets rid of the channel monitor redundantly setting
`self.counterparty_payment_script` in `check_spend_counterparty_transaction`.
Co-authored-by: Antoine Riard <ariard@student.42.fr>
Co-authored-by: Valentine Wallace <vwallace@protonmail.com>
Now callers will separately retrieve the claim requests/
holder revokable script and the new watched holder outputs.
This will be used in the next commit for times when we
need to get holder claim requests, but don't have access to
the holder commitment transaction.
Helpful for debugging. I also included a change to the provide_preimage
method signature, which will be used in an upcoming commit, because
commit-wise it was easier to combine the changes.
If the channel is hitting the chain right as we receive a preimage,
previous to this commit the relevant ChannelMonitor would never
learn of this preimage.
The ChannelMonitor already monitors the chain for counterparties
revealing preimages, and will give the HTLCSources back to the
ChannelManager for claiming. Thus it's unnecessary for the ChannelManager
to monitor these HTLCs itself.
See is_resolving_htlc_output:
- if the counterparty broadcasted and then claimed one of the HTLCs we
offered them, line 2015 is where the ChannelMonitor gives the ChannelManager
the HTLC source
- if we broadcasted and they claimed an HTLC we offered them, line 2025 is
where the ChannelMonitor gives the ChannelManager the HTLC source
If a persister returns a temporary failure, the channel monitor should be able
to be put on ice and then revived later. If a persister returns a permanent
failure, the channel should be force closed.
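A sketch (not the exact trait definitions) of how the persister's return value maps onto channel handling: a temporary failure freezes the channel until the write is retried and the update reapplied, while a permanent failure leads to a force-close.

```rust
enum PersistError { TemporaryFailure, PermanentFailure }

trait Persist {
    fn persist_new_channel(&self, channel_id: [u8; 32], monitor_bytes: &[u8]) -> Result<(), PersistError>;
    fn update_persisted_channel(&self, channel_id: [u8; 32], monitor_bytes: &[u8]) -> Result<(), PersistError>;
}

fn handle_persist_result(res: Result<(), PersistError>) {
    match res {
        Ok(()) => { /* proceed normally */ }
        Err(PersistError::TemporaryFailure) => {
            // Put the channel "on ice": hold further updates, retry persistence
            // later, and revive the channel once the write succeeds.
        }
        Err(PersistError::PermanentFailure) => {
            // The monitor can no longer be safely persisted: force-close the channel.
        }
    }
}
```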
Intended to be a cross-platform implementation of the
channelmonitor::Persist trait.
This adds a new lightning-persister crate that uses the lightning
crate's newly exposed test utilities.
Notably, this crate is pretty small right now. However, due to
future plans to add more data persistence (e.g. persisting the
ChannelManager, etc) and a desire to avoid pulling filesystem usage
into the core lightning package, it is best for it to be
separated out.
Note: Windows necessitates the use of OpenOptions with the `write`
permission enabled to `sync_all` on a newly opened channel's
data file.
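A minimal sketch of that caveat (paths and helper names are illustrative): the handle must be opened with write access for sync_all() to succeed on Windows, so OpenOptions is used explicitly.

```rust
use std::fs::OpenOptions;
use std::io::Write;

fn write_monitor_file(path: &str, data: &[u8]) -> std::io::Result<()> {
    let mut f = OpenOptions::new()
        .create(true)
        .write(true)   // required on Windows for sync_all() to succeed
        .truncate(true)
        .open(path)?;
    f.write_all(data)?;
    f.sync_all()?;     // flush file contents and metadata to disk
    Ok(())
}
```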
- The ChainMonitor should:
Whenever a new channel is added or an existing one is updated, the
update should be conveyed to the persister and persisted to disk.
Even if the update errors while it's being applied, the updated
monitor still needs to be persisted (see the sketch below).
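A sketch of that persistence rule with the types stripped away (names are ours): the monitor is handed to the persister whether or not applying the update errored, and the update error still takes precedence when reporting back to the caller.

```rust
fn apply_and_persist<E>(
    apply_update: impl FnOnce() -> Result<(), E>,
    persist_monitor: impl FnOnce() -> Result<(), E>,
) -> Result<(), E> {
    let update_res = apply_update();
    let persist_res = persist_monitor(); // persist even if apply_update failed
    update_res.and(persist_res)
}
```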
We remove test_no_failure_dust_htlc_local_commitment from our test
framework, as this test, which deliberately throws a junk transaction
into our monitor-parsing code, is now hitting new assertions.
This test was added in #333, but it appears to have been an oversight,
as the property it intends to verify (i.e. that dust HTLCs are not
canceled back when a junk commitment transaction is seen) doesn't
currently break.
This test is a mutation to underscore the detection logic bug
we had before #653. The HTLC value routed is above the remaining
balance, thus inverting the HTLC and `to_remote` outputs. The HTLC
will come second and wouldn't be seen by pre-#653 detection,
as we were enumerate()'ing over a watched-outputs vector (Vec<TxOut>),
thus implicitly relying on output ordering for correct filtering of
spending children.
Previously, outputs were monitored based on a txid and an index yielded
from an enumeration over the outputs selected by the monitoring
code. This has always been broken, but was only discovered while
introducing anchor outputs, as those always rank first per BIP69.
We didn't have test cases where an HTLC was bigger than a party's
balance on a holder commitment and thus not ranking first.
The next commit introduces test coverage.
In review of the final doc changes in #649, I noticed there
appeared to be redundant monitored-outpoints functions in
`ChannelMonitor` - `get_monitored_outpoints()` and
`get_outputs_to_watch()`.
In 6f08779b04,
get_monitored_outpoints() was added, with its behavior largely the
same as today's - only returning the set of remote commitment txn
outputs that we've learned about on-chain. This is clearly not
sufficient, and in 73dce207dd,
`get_outputs_to_watch` was added which was overly cautious to
ensure nothing was missed. Still, the author of 73dce207dd
(me) seemed entirely unaware of the work in 6f08779b04
(also me), despite the function being the literal next function in
the same file. This is presumably because it was assumed that
`get_monitored_outpoints` referred to outpoints whose spends we should
monitor for (which is true), while `get_outputs_to_watch` referred to
outputs for which we should watch for the transaction containing said
output (which is not true), or something of that
nature. Specifically, it is the expected behavior that the only
time we care about `Filter::register_tx` is for the funding
transaction (which we aren't aware of the inputs of), but for all
other transactions we register interest on the basis of an outpoint
in the previous transaction (ie via `Filter::register_output`).
Here we drop the broken-on-day-one `get_monitored_outpoints()`
version, but assert in testing that the values which it would return
are all present in `get_outputs_to_watch()`.
This also pays a fee on the transactions we generate in response to
SpendableOutputDescriptors in tests.
This fixes the known issues in #630, though we should test for
standardness in other ways as well.
This resolves a number of bugs around how we calculate feerates on
closing transactions:
* We previously calculated the weight wrong, both by always
counting two outputs instead of the number of outputs that
actually exist in the closing transaction, and by not counting
the witness redeemscript.
* We use assertions to check the calculated weight matches what we
actually build (with debug_assertions for variable-length sigs).
* As an additional sanity check, we really should check that the
transaction had at least min-relay-fee when we were the channel
initiator (see the sketch below).
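A rough sketch of the corrected accounting (constants and helper names are illustrative approximations, not the crate's actual values): only outputs that actually exist are counted, the 2-of-2 witness is included, and the initiator's negotiated fee is checked against min relay fee.

```rust
const P2WPKH_OUTPUT_WEIGHT: u64 = (8 + 1 + 22) * 4;   // amount + script length + script
const FUNDING_WITNESS_WEIGHT: u64 = 2 + 74 + 74 + 72; // item count, two sigs, redeemscript (approx.)

fn closing_tx_weight(base_weight: u64, num_outputs: u64) -> u64 {
    base_weight + num_outputs * P2WPKH_OUTPUT_WEIGHT + FUNDING_WITNESS_WEIGHT
}

fn sanity_check_closing_fee(fee_sats: u64, weight: u64, min_relay_feerate_sat_per_kw: u64, we_are_initiator: bool) {
    if we_are_initiator {
        // The initiator pays the closing fee, so it must at least make the tx relayable.
        assert!(fee_sats >= weight * min_relay_feerate_sat_per_kw / 1000);
    }
}
```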
It was noticed (via clippy) by @casey that we were taking and then
immediately dropping the total_consistency_lock because `let _ =`
doesn't actually bind the response to anything. This appears to be
a consequence of wanting `if let Some(_) =` to not hold a ref to
the contained value at all, but is relatively surprising to me.
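The pattern in question, shown on a standalone lock (the lock type here is illustrative; the point is the binding behavior): binding to `_` drops the guard immediately, so the lock is not actually held for the rest of the scope.

```rust
use std::sync::RwLock;

fn example(total_consistency_lock: &RwLock<()>) {
    let _ = total_consistency_lock.read().unwrap();      // guard dropped at end of this statement
    let _guard = total_consistency_lock.read().unwrap(); // guard held until end of scope
    // ... work that actually runs under the lock ...
}
```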
Previously, we had a concept of "rescanning" blocks when we detected
a need to monitor for a new set of outputs in future blocks while
connecting a block. In such cases, we'd need to possibly learn about
these new spends later in the *same block*, requiring clients who
filter blocks to get a newly-filtered copy of the same block. While
redoing the chain access API, it became increasingly clear this was
an overly complicated API feature, and it seems likely most clients
will not use it anyway.
Further, any client who *does* filter blocks can simply update their
filtering algorithm to include any descendants of matched
transactions in the filter results, avoiding the need for rescan
support entirely.
Thus, it was decided that we'd move forward without rescan support
in #649, however to avoid significant further changes in the
already-large 649, we decided to fully remove support in a
follow-up.
Here, we remove the API features that existed for rescan and fix
the few tests to not rely on it.
After this commit, we now only ever have one possible version of
block connection transactions, making it possible to be
significantly more confident in our test coverage actually
capturing all realistic scenarios.
Given the chain::Watch interface is defined in terms of ChannelMonitor
and ChannelMonitorUpdateErr, move channelmonitor.rs from the ln module
to the chain module.
Transaction data from a block may be filtered before it is passed to
block_connected functions, which may need the index of each transaction
within the block. Rather than define each function in terms of a slice
of tuples, define a type alias for the slice where it can be documented.
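A sketch of such an alias (the placeholder `Transaction` stands in for bitcoin's transaction type): each entry pairs a transaction with its index within the block, so filtered data still carries positional information.

```rust
struct Transaction; // placeholder for bitcoin::blockdata::transaction::Transaction

/// Transaction data from a block, possibly filtered, where each transaction is
/// paired with its index within the original block.
pub type TransactionData<'a> = [(usize, &'a Transaction)];

fn block_connected(txdata: &TransactionData<'_>, height: u32) {
    for &(index_in_block, _tx) in txdata.iter() {
        // ... check each (indexed) transaction against watched outputs ...
        let _ = (index_in_block, height);
    }
}
```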
Outputs to watch are tracked by ChannelMonitor as of
73dce207dd. Instead of determining new
outputs to watch independently using ChainWatchedUtil, do so by
comparing against outputs already tracked. Thus, ChainWatchedUtil and
WatchEvent are no longer needed.