On each block, for each `ChannelMonitor`, we log a status statement
in `OnchainTxHandler::update_claims_view_from_requests`. This can add
up to quite a bit of volume, and is generally not very interesting
given we don't actually do anything when there are no claims to bump.
Here we drop the log if we have no claims to work with, but retain
it if we process any claims.
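A minimal sketch of the gating, assuming the claim requests arrive as a
slice (the type and log message here are illustrative):

```rust
// Only emit the per-block status log when there is actually work to do.
fn log_claims_status<T>(requests: &[T]) {
    if requests.is_empty() {
        return; // previously we logged here regardless
    }
    println!("Updating claims view with {} claim requests", requests.len());
}
```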
On a high-traffic/channel node, `Channel .* does not qualify for a
feerate change.*` is our most common log, and it doesn't provide
much useful information. It's logged in two cases - (a) where the
estimator feerate is less than the current channel feerate, but not
by more than half, and (b) where we'd like to update the channel
feerate but the peer is disconnected or the channel isn't available
for updates.
Because these conditions can persist, and we log them once a minute,
the volume of logs can add up quickly. Here we simply remove the
log in case (a), though leave (b), as it's anticipated to be somewhat
quieter and does indicate a persistent issue that should be
addressed (possibly by closing the channel).
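A rough sketch of the case (a) condition, assuming `new_feerate` comes
from the fee estimator and `current_feerate` is the channel's current
value (names illustrative):

```rust
// Case (a): the estimator's feerate dropped, but by less than half, so a fee
// update isn't worthwhile. This is the branch whose log was removed; the
// case (b) "channel cannot currently be updated" log is kept.
fn skip_fee_update(new_feerate: u32, current_feerate: u32) -> bool {
    new_feerate <= current_feerate && new_feerate * 2 > current_feerate
}
```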
Multiple times we've had users wonder why they see `Error handling
message from.*; ignoring: Couldn't find channel for update` in
their logs and ask whether it's related to their channel
force-closing. While this does indicate a peer is sending us gossip
out of order (and thus misbehaving), it's not relevant to channel
operation, and the logged message and level should reflect that.
Thus, here, we move the level to Gossip and add "gossip" between
"handling" and "message" (so it reads "Error handling gossip
message from.*").
Fixes #2471
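A minimal sketch of the demoted statement, assuming LDK's `log_gossip!`
macro and the surrounding handler variables (names illustrative):

```rust
// Demoted to the Gossip level, so it only appears when gossip-level logging
// is enabled, and reworded to make clear it concerns a gossip message.
log_gossip!(logger, "Error handling gossip message from {}; ignoring: {}",
    counterparty_node_id, err);
```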
In order to continuously monitor our dependencies for security
vulnerabilities, we introduce a new CI job that will use `cargo audit`
to check for any known vulnerabilities.
This job is run on a daily schedule. For each new advisory, a new issue
will be created.
.. as the `electrsd` crate doesn't support it.
While we previously did so in our CI script, we now also `cfg`-gate the
tests and dependencies for easier handling.
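A minimal sketch of such a `cfg` gate, assuming the tests are simply
compiled out on Windows (module name illustrative); the `electrsd`
dev-dependency itself can likewise live under a
`[target.'cfg(not(windows))'.dev-dependencies]` table:

```rust
// Compile these tests (and their `electrsd`-based helpers) only off-Windows.
#[cfg(all(test, not(target_os = "windows")))]
mod integration_tests {
    // ... tests exercising `electrsd` ...
}
```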
Preallocate space for 8 items in the vec, as sketched after the list
below. I chose this value for:
1. features
2. description
3. payment hash
4. expire time
5. min_final_cltv
6. payment secret
7. route hint
8. for the memes
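The sketch, with a hypothetical `TaggedField` standing in for whatever
the entries above are stored as:

```rust
// Seven fields we know we'll push, plus one spare (for the memes).
let mut fields: Vec<TaggedField> = Vec::with_capacity(8);
```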
As part of the ongoing async signer work, our holder signatures must
also be capable of being obtained asynchronously. We expose a new
`ChannelMonitor::signer_unblocked` method to retry pending onchain
claims by re-signing and rebroadcasting transactions. Unfortunately, we
cannot retry said claims without them being registered first, so if
we're not able to obtain the signature synchronously, we must return the
transaction as unsigned and ensure it is not broadcast.
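A hypothetical usage sketch (the argument list is illustrative; the real
method takes the user's own broadcaster, fee estimator, and logger
handles):

```rust
// Once the signer can produce signatures again, retry any pending onchain
// claims by re-signing and rebroadcasting the previously-unsigned transactions.
monitor.signer_unblocked(&broadcaster, &fee_estimator, &logger);
```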
This method is meant to be used as a last resort when a user is forced
to broadcast the current state, even if it is stale, in an attempt to
claim their funds in the channel. Previously, we'd return the commitment
and HTLC transactions so that users could broadcast them themselves.
Doing so required a different code path to obtain these transactions
than our usual path when force closing, one which was not tested. It's
not worth maintaining both, and it's much simpler for us to broadcast
instead.
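A sketch of the resulting flow, assuming a broadcast-style entry point
on the monitor in the vein of `broadcast_latest_holder_commitment_txn`
(exact parameters illustrative):

```rust
// Instead of fetching signed transactions and broadcasting them ourselves,
// the monitor signs them and hands them to our `BroadcasterInterface`.
monitor.broadcast_latest_holder_commitment_txn(&broadcaster, &fee_estimator, &logger);
```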
Previously, we only had blanket impls for `KVStore`. However, in order
to enable the use of `dyn KVStore + Send + Sync` instead of a `KVStore`
generic, we here also add the corresponding blanket implementations for
said type signature.
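A self-contained sketch of the pattern with toy trait names (not LDK's
actual API): the generic blanket impl implicitly requires `Sized`, so a
second impl is needed to cover the trait object.

```rust
trait KVStore {
    fn write(&self, key: &str, value: &[u8]);
}

trait Persister {
    fn persist(&self, key: &str, value: &[u8]);
}

// Blanket impl for any sized `K: KVStore`...
impl<K: KVStore> Persister for K {
    fn persist(&self, key: &str, value: &[u8]) { self.write(key, value) }
}

// ...plus one for the unsized trait object, so users can hold e.g. an
// `Arc<dyn KVStore + Send + Sync>` without threading a `KVStore` generic.
impl Persister for dyn KVStore + Send + Sync {
    fn persist(&self, key: &str, value: &[u8]) { self.write(key, value) }
}
```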
The whole point of full_stack_target is to just expose our entire
API to the fuzzer and see what happens. Sadly, we're really only
exposing a small subset of our API. This improves that by exposing
a handful of other assorted methods from `ChannelManager` and
`PeerManager`.
A client node might choose not to handle `Event::BumpTransaction`
events and leave bumping / anchor output spending to a trusted
counterparty.
However, `Event::BumpTransaction` currently doesn't offer any clear
indication which channel and/or counterparty it is referring to. In
order to allow filtering these events, we here expose the `channel_id`
and `counterparty_node_id` fields.
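A hypothetical filtering sketch using the new fields (the `ChannelClose`
variant and field layout are illustrative; `trusted_peer` is the
delegate's node id):

```rust
// Ignore bump events for channels whose anchor spending is delegated to a
// trusted counterparty; handle everything else as usual.
if let Event::BumpTransaction(BumpTransactionEvent::ChannelClose {
    counterparty_node_id, ..
}) = &event {
    if *counterparty_node_id == trusted_peer {
        return;
    }
}
```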
This exposes details around pending HTLCs in `ChannelDetails`. The state
of each HTLC in the state machine is also included, so it can be
determined which protocol message the HTLC is waiting on in order to
advance.
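A hypothetical inspection sketch (field names such as
`pending_inbound_htlcs` and `state` are illustrative of the shape of the
exposed data):

```rust
for channel in channel_manager.list_channels() {
    for htlc in &channel.pending_inbound_htlcs {
        // The state indicates which protocol message (e.g. the peer's
        // revoke_and_ack or commitment_signed) the HTLC is waiting on.
        println!("inbound HTLC of {} msat in state {:?}", htlc.amount_msat, htlc.state);
    }
}
```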