https://github.com/tkaitchuck/aHash/pull/196 bumped the MSRV of
`ahash` in a patch release, which makes it rather difficult for us
to have it as a dependency.
Further, it seems that `ahash` hasn't been particularly robust in
the past, notably
https://github.com/tkaitchuck/aHash/issues/163 and
https://github.com/tkaitchuck/aHash/issues/166.
Luckily, `core` provides `SipHasher` even on no-std (sadly it's
SipHash-2-4, unlike the SipHash-1-3 used by the `DefaultHasher` in
`std`). Thus, we drop the `ahash` dependency entirely here and
simply wrap `SipHasher` for our `no-std` HashMaps.
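A minimal sketch of what such a wrapper can look like (the type
name is illustrative, not LDK's actual implementation; the keys
would be seeded randomly, per the following commits):

```rust
#![allow(deprecated)] // `core::hash::SipHasher` is deprecated but available
use core::hash::{BuildHasher, SipHasher};

/// A `BuildHasher` producing keyed SipHash-2-4 hashers, usable as
/// the hash state of a no-std `HashMap`.
#[derive(Clone, Copy)]
pub struct RandomState {
    k0: u64,
    k1: u64,
}

impl BuildHasher for RandomState {
    type Hasher = SipHasher;
    fn build_hasher(&self) -> SipHasher {
        SipHasher::new_with_keys(self.k0, self.k1)
    }
}
```

Since hash maps are generic over a `BuildHasher`, a wrapper like
this slots in wherever `ahash`'s hasher was used before.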
In the next commit we'll drop the `ahash` dependency in favor of
directly calling `getrandom` to seed our hash tables. However,
we'd like to depend on `getrandom` only on certain platforms *and*
only when certain features (no-std) are set.
This introduces an indirection crate to do so: we depend on the
indirection crate whenever `no-std` is set, and it in turn depends
on `getrandom` only on the platforms `getrandom` supports.
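A sketch of the shape such an indirection crate could take (the
function name is hypothetical, and this assumes `getrandom` 0.2's
`getrandom::getrandom` entry point):

```rust
// When built for a platform `getrandom` supports (with the implicit
// `getrandom` feature enabled via a target-specific dependency),
// forward to it...
#[cfg(feature = "getrandom")]
pub fn fill_bytes(dest: &mut [u8]) -> Result<(), ()> {
    getrandom::getrandom(dest).map_err(|_| ())
}

// ...otherwise surface an error so callers can fall back to some
// other (non-random) seed source.
#[cfg(not(feature = "getrandom"))]
pub fn fill_bytes(_dest: &mut [u8]) -> Result<(), ()> {
    Err(())
}
```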
When the `max_total_routing_fee_msat` parameter was added to
`RouteParameters`, the serialization logic used `map` to get the
max fee, accidentally writing an `Option<Option<u64>>`, which was
then read as an `Option<u64>`. Thus, any `Route` written with its
`route_params` set will fail to be read back.
Luckily, this is an incredibly rarely-used bit of code, so only one
user managed to hit it.
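Roughly, the mismatch looked like this (names simplified; an
illustration rather than the actual serialization code):

```rust
struct RouteParameters { max_total_routing_fee_msat: Option<u64> }

fn demo(route_params: Option<&RouteParameters>) {
    // Write side: `map`ping over the outer `Option` yields a nested
    // `Option<Option<u64>>`...
    let written: Option<Option<u64>> =
        route_params.map(|p| p.max_total_routing_fee_msat);
    // ...while the read side expected a plain `Option<u64>`, so any
    // `Route` written with `route_params` set failed to deserialize.
    let _ = written;
}
```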
`Route`'s blinded_path serialization logic writes a blinded path
`Option` per path hop; however, on read we (correctly) read only
one blinded path `Option` per path. This causes serialization of
`Route`s with blinded paths to fail to round-trip.
Here we fix this by writing one blinded path `Option` per path.
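A minimal illustration of the fix (placeholder types, not LDK's
actual serialization code):

```rust
struct Hop;
struct Path { hops: Vec<Hop>, blinded_tail: Option<u8> }

// Buggy: one blinded path `Option` written per *hop*...
fn write_buggy(paths: &[Path], out: &mut Vec<Option<u8>>) {
    for path in paths {
        for _hop in &path.hops {
            out.push(path.blinded_tail);
        }
    }
}

// ...fixed: one per *path*, matching what the reader consumes.
fn write_fixed(paths: &[Path], out: &mut Vec<Option<u8>>) {
    for path in paths {
        out.push(path.blinded_tail);
    }
}
```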
`fails_paying_for_bolt12_invoice` tests that we fail to send a
payment if the router returns `Ok` but includes a bogus route (one
with 0-length paths). While this marginally increases our test
coverage, in the next commit we'll be testing that all routes
round-trip serialization, which fails here as bogus routes are not
supported in deserialization.
Because this isn't particularly critical test coverage, we simply
opt to drop the test entirely here.
When an `std::future::Future` is `poll()`ed, we're only supposed to
use the latest `Waker` provided. However, we currently push an
`StdWaker` onto our callback list every time `poll` is called,
waking every `Waker` but also using more and more memory until the
`Future` itself is woken.
Here we fix this by removing any `StdWaker`s stored for a given
`Future` when it is `drop`ped or prior to pushing a new `StdWaker`
onto the list when `poll`ed.
Sadly, the introduction of a `Drop` impl for `Future` means we can
no longer trivially destructure the struct, requiring a few methods
to take `Future`s by reference rather than by ownership and to
`clone` a few `Arc`s.
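A sketch of the shape this fix could take (type and field names are
illustrative, not necessarily LDK's internals):

```rust
use std::sync::{Arc, Mutex};
use std::task::Waker;

struct FutureState {
    // Wakers registered via `poll`, tagged with the owning
    // `Future`'s unique index.
    std_future_callbacks: Vec<(usize, Waker)>,
}

struct Future {
    state: Arc<Mutex<FutureState>>,
    self_idx: usize,
}

impl Future {
    // Called from `poll`: keep only the latest `Waker` per `Future`.
    fn register_waker(&self, waker: &Waker) {
        let mut state = self.state.lock().unwrap();
        state.std_future_callbacks.retain(|(idx, _)| *idx != self.self_idx);
        state.std_future_callbacks.push((self.self_idx, waker.clone()));
    }
}

impl Drop for Future {
    fn drop(&mut self) {
        // Drop any waker we registered so it can't leak past our
        // lifetime.
        let mut state = self.state.lock().unwrap();
        state.std_future_callbacks.retain(|(idx, _)| *idx != self.self_idx);
    }
}
```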
Fixes #2874
When an `std::future::Future` is `poll()`ed, we're only supposed to
use the latest `Waker` provided. However, we currently push an
`StdWaker` onto our callback list every time `poll` is called,
waking every `Waker` but also using more and more memory until the
`Future` itself is woken.
Here we take a step towards fixing this by giving each `Future` a
unique index and storing which `Future` an `StdWaker` came from in
the callback list. This sets us up to deduplicate `StdWaker`s by
`Future`s in the next commit.
In the next commit we'll fix a memory leak due to keeping too many
`std::task::Waker` callbacks in `FutureState` from redundant `poll`
calls, but first we need to split handling of `StdWaker`-based
future wake callbacks from normal ones, which we do here.
On each block, for each `ChannelMonitor`, we log two status
statements in `OnChainTx::update_claims_view_from_matched_txn`.
This can add up to quite a bit, and is generally not very
interesting when there are no claims to bump and we thus don't
actually do anything.
Here we drop both logs if we have no claims to work with, but
retain them if we process any claims.
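The change amounts to gating the logging on actually having work to
do, roughly (illustrative only, not the actual code):

```rust
struct Claim;

// Emit the status log only when there are claims to process,
// rather than on every block for every monitor.
fn process_claims(claims: &[Claim]) {
    if claims.is_empty() {
        return; // nothing to bump; stay quiet
    }
    println!("Processing {} claim(s)", claims.len());
    // ... claim-bumping logic ...
}
```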
On each block, for each `ChannelMonitor`, we log a status statement
in `OnChainTx::update_claims_view_from_requests`. This can add up
to quite a bit, and is generally not very interesting when there
are no claims to bump and we thus don't actually do anything.
Here we drop the log if we have no claims to work with, but retain
it if we process any claims.
On a high-traffic, high-channel-count node, `Channel .* does not
qualify for a feerate change.*` is our most common log, and it
doesn't provide much useful information. It's logged in two cases -
(a) where the estimator feerate is less than the current channel
feerate, but by less than half (i.e. the current feerate is less
than twice the estimate), and (b) where we'd like to update the
channel feerate but the peer is disconnected or the channel is not
available for updates.
Because these conditions can persist, and we log them once a
minute, the volume of logs can add up quickly. Here we simply
remove the log in case (a), though we leave (b) as it's anticipated
to be somewhat quieter and does indicate a persistent issue that
should be addressed (possibly by closing the channel).
Multiple times we've had users wonder why they see `Error handling
message from.*; ignoring: Couldn't find channel for update` in
their logs and wonder if it's related to their channel
force-closing. While this does indicate a peer is sending us gossip
out of order (and thus misbehaving), it's not relevant to channel
operation, and the logged message and level should indicate that.
Thus, here, we move the level to Gossip and add "gossip" between
"handling" and "message" (so it reads "Error handling gossip
message from.*").
Fixes #2471
In order to continuously monitor our dependencies for security
vulnerabilities, we introduce a new CI job that will use `cargo audit`
to check for any known vulnerabilities.
This job is run on a daily schedule. For each new advisory, a new issue
will be created.
.. as the `electrsd` crate doesn't support it.
While we previously did so in our CI script, we now also `cfg`-gate the
tests and dependencies for easier handling.
Preallocate space for 8 items in the vec (see the sketch after
this list). I chose this value for:
1. features
2. description
3. payment hash
4. expire time
5. min_final_cltv
6. payment secret
7. route hint
8. for the memes
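Concretely, this is just a `Vec::with_capacity(8)` up front; a
minimal illustration (the field type is a stand-in):

```rust
struct TaggedField; // stand-in for the actual invoice field type

fn build_fields() -> Vec<TaggedField> {
    // Room for the eight fields listed above, avoiding reallocations
    // as they are pushed one by one.
    let mut fields = Vec::with_capacity(8);
    fields.push(TaggedField); // features, description, payment hash, ...
    fields
}
```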
As part of the ongoing async signer work, our holder signatures must
also be capable of being obtained asynchronously. We expose a new
`ChannelMonitor::signer_unblocked` method to retry pending onchain
claims by re-signing and rebroadcasting transactions. Unfortunately, we
cannot retry said claims without them being registered first, so if
we're not able to obtain the signature synchronously, we must return the
transaction as unsigned and ensure it is not broadcast.
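A hedged sketch of the pattern this describes (the types here are
illustrative, not LDK's internals):

```rust
enum SignResult {
    Signed(Vec<u8>), // fully-signed transaction bytes
    Pending,         // signature unavailable synchronously
}

fn maybe_broadcast(sign: impl FnOnce() -> SignResult, broadcast: impl FnOnce(Vec<u8>)) {
    match sign() {
        SignResult::Signed(tx) => broadcast(tx),
        // Keep the claim registered but don't broadcast anything; a
        // later `ChannelMonitor::signer_unblocked` call re-signs and
        // rebroadcasts it.
        SignResult::Pending => {},
    }
}
```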
This method is meant to be used as a last resort when a user is forced
to broadcast the current state, even if it is stale, in an attempt to
claim their funds in the channel. Previously, we'd return the
commitment and HTLC transactions so that users could broadcast
them themselves. Doing so required a different, untested code path
to obtain these transactions than the one we usually take when
force closing. It's not worth maintaining both, and it's much
simpler for us to broadcast instead.
Previously, we only had blanket impls for `KVStore`. However, in order
to enable the use of `dyn KVStore + Send + Sync` instead of a `KVStore`
generic, here we also add the corresponding blanket implementations for
said type signature.
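Sketched, the shape of the addition looks something like this (the
trait and its method are heavily simplified relative to LDK's
actual `KVStore`):

```rust
use std::sync::Arc;

trait KVStore {
    fn read(&self, namespace: &str, key: &str) -> Result<Vec<u8>, std::io::Error>;
}

// With an impl like this, code written against a `K: KVStore`
// generic can also be handed an `Arc<dyn KVStore + Send + Sync>`
// trait object.
impl KVStore for Arc<dyn KVStore + Send + Sync> {
    fn read(&self, namespace: &str, key: &str) -> Result<Vec<u8>, std::io::Error> {
        self.as_ref().read(namespace, key)
    }
}
```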
The whole point of full_stack_target is to just expose our entire
API to the fuzzer and see what happens. Sadly, we're really only
exposing a small subset of our API. This improves the situation by
exposing a handful of other assorted methods from ChannelManager
and PeerManager.