In our route tests we need some "random" bytes which the router
uses to randomize amounts. We generate these by building an actual
`KeysManager` and then deriving random bytes via the
`EntropySource` trait. However, `get_route` (what we're normally
testing) doesn't actually use the random bytes, and even if it did,
a `KeysManager` built from a fixed seed is just a fancy way of
creating a constant, so there's really no reason to do all the
fancy crypto.
Instead, here, we change our routing tests and benchmarks to simply
use `[42; 32]` as the "random" bytes.
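For illustration, a test then needs only a constant array where it
previously built a whole `KeysManager`. This is a sketch: the exact
`get_route` signature varies across LDK versions, and the
surrounding variables (`payer_pubkey`, `route_params`, etc.) are
assumed from a typical test setup:

```rust
// Deterministic "random" seed for the router; no KeysManager needed.
let random_seed_bytes: [u8; 32] = [42; 32];
// Illustrative call, mirroring a typical route test:
let route = get_route(
	&payer_pubkey, &route_params, &network_graph.read_only(), None,
	&logger, &scorer, &Default::default(), &random_seed_bytes,
).unwrap();
```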
Until now, our routing benchmarks used a synthetic scorer,
generated by scoring random paths to build up some history. This is
pretty far removed from real-world routing conditions, as
alternative paths generally have no scoring information and even
the paths we do take have only one or two past scoring results.
Instead, we fetch a static serialized scorer, generated from probes
sent roughly once a minute. This means future changes to the
scorer's data may
be harder to benchmark, but makes for substantially more realistic
benchmarks for changes which don't impact the serialized state.
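Loading that state in the benchmark then looks roughly like this (a
sketch assuming a recent LDK where `ProbabilisticScorer`'s read
arguments are the decay parameters, the network graph, and a
logger; the file path and surrounding variables are illustrative):

```rust
use lightning::routing::scoring::ProbabilisticScorer;
use lightning::util::ser::ReadableArgs;

// Deserialize the pre-generated scorer state rather than
// synthesizing one from random paths.
let scorer_bytes = std::fs::read("bench/scorer.bin").unwrap();
let scorer = ProbabilisticScorer::read(
	&mut &scorer_bytes[..],
	(decay_params, &network_graph, &logger),
).unwrap();
```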
Define an interface for BOLT 12 static invoice messages. The underlying
format consists of the original bytes and the parsed contents.
The bytes are needed later for serialization, because the invoice
must mirror all of the offer's TLV records, including unknown ones,
which aren't represented in the parsed contents.
Invoices may be created from an offer.
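Conceptually, the representation pairs the raw TLV stream with the
parsed fields (a sketch; field and type names are illustrative):

```rust
pub struct StaticInvoice {
	/// The original serialized TLV bytes, retained so
	/// re-serialization round-trips every offer TLV record,
	/// including unknown ones.
	bytes: Vec<u8>,
	/// The parsed, known fields.
	contents: InvoiceContents,
}
```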
When we have `ChannelMonitorUpdate`s which are completing both
synchronously and asynchronously, we need to consider a channel as
unblocked based on the `ChannelManager` monitor update queue,
rather than by checking the `update_id`s.
Consider the case where a channel is updated, leading to a
`ChannelMonitorUpdate` which completes asynchronously. The update
completes, but prior to the `ChannelManager` receiving the
`MonitorEvent::Completed` it generates a further
`ChannelMonitorUpdate`. This second update completes synchronously.
As a result, when the `MonitorEvent` is processed, the event's
`monitor_update_id` refers to the first update, but no updates
remain queued, so the channel should be considered unblocked.
Here we fix this by looking only at the `ChannelManager` update
queue, rather than at the `monitor_update_id` of the
`MonitorEvent`.
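Reduced to its essence, the new check is just an emptiness test on
the in-flight update queue (a hypothetical distillation of the real
logic; names are illustrative):

```rust
// A channel is unblocked once no `ChannelMonitorUpdate`s remain
// queued for it, regardless of the `monitor_update_id` carried by
// the completed `MonitorEvent`.
fn channel_unblocked(in_flight_monitor_updates: &[ChannelMonitorUpdate]) -> bool {
	in_flight_monitor_updates.is_empty()
}
```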
While we don't anticipate many users having both synchronous and
asynchronous persists in the same application, there isn't much
cost to supporting it, which we do here.
Found by the `chanmon_consistency` fuzz target.
Now that we are gearing up to support fully async monitor storage,
we really need to fuzz monitor updates not completing before a
reload, which we do here in the `chanmon_consistency` fuzzer.
While there are more parts to async monitor updating that we need
to fuzz, this at least gets us started by having basic async
restart cases handled. In the future, we should extend this to make
sure some basic properties (e.g. claim/balance consistency) remain
true through `chanmon_consistency` runs.
This includes when building `TxCreationKeys`, as well as for
`open_channel` and `accept_channel` messages. Note: this is only
for places where we are retrieving the current per-commitment
point, which excludes `channel_reestablish`.
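For reference, retrieving the current point goes through the
channel signer along these lines (a sketch; in recent LDK versions
the call is fallible to allow for async signers, while older
versions returned the point directly):

```rust
// Fetch the commitment point for the current commitment number.
let per_commitment_point = signer
	.get_per_commitment_point(commitment_number, &secp_ctx)
	.expect("signer must be available here");
```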
For quite some time, LDK has force-closed channels if the peer
sends us a feerate update which is below our `FeeEstimator`'s
concept of a channel lower-bound. This is intended to ensure that
channel feerates are always sufficient to get our commitment
transaction confirmed on-chain if we do need to force-close.
However, we've never checked our channel feerate regularly - if a
peer is offline (or just uninterested in updating the channel
feerate) and the prevailing feerates on-chain go up, we'll simply
ignore it and allow our commitment transaction to sit around with a
feerate too low to get confirmed.
Here we rectify this oversight by force-closing channels with stale
feerates, checking after each block. However, because fee
estimators are often buggy and force-closures piss off users, we
only do so rather conservatively. Specifically, we only force-close
if a channel's feerate is below the lowest minimum feerate our
`FeeEstimator` has provided over the last day.
Further, because fee estimators are often especially buggy on
startup (and because peers haven't had a chance to update the
channel feerates yet), we don't force-close channels until we have
a full day of feerate lower-bound history.
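A sketch of the bookkeeping this implies (names and structure are
illustrative, not the actual implementation):

```rust
use std::collections::VecDeque;

/// Roughly one day of blocks.
const FEERATE_TRACKING_BLOCKS: usize = 144;

struct FeerateLowerBounds {
	per_block: VecDeque<u32>,
}

impl FeerateLowerBounds {
	/// Record this block's `FeeEstimator` lower bound and, once a
	/// full day of history exists, return the most conservative
	/// (lowest) bound seen. `None` means "don't force-close yet".
	fn on_block(&mut self, current_lower_bound: u32) -> Option<u32> {
		self.per_block.push_back(current_lower_bound);
		if self.per_block.len() > FEERATE_TRACKING_BLOCKS {
			self.per_block.pop_front();
		}
		if self.per_block.len() < FEERATE_TRACKING_BLOCKS {
			return None;
		}
		self.per_block.iter().copied().min()
	}
}
```

A channel whose feerate sits below the returned bound would then be
force-closed.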
This conservatism should keep spurious force-closures to a minimum,
but some increase in force-closures is still expected, depending on
the user's `FeeEstimator`.
Fixes #993
When we connect 100 blocks in a row, requiring the fuzz input to
contain 100 fee estimator results is unnecessary, so add a bool
that lets us skip those reads.
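Sketched, the input handling becomes something like this (the
helper names are hypothetical, standing in for the fuzzer's actual
input-reading machinery):

```rust
// One input byte decides whether the 100-block connect loop
// consumes a fresh fee estimate per block or reuses a single value.
let reuse_fee = next_byte(&mut input) != 0;
let mut fee = next_fee(&mut input);
for _ in 0..100 {
	if !reuse_fee {
		fee = next_fee(&mut input);
	}
	connect_block_with_fee(fee);
}
```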
Closures due to feerate disagreements are a specific failure mode
which admins can understand and avoid by tuning their config (in
the form of their `FeeEstimator`), so having a separate
`ClosureReason` for them is useful.
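Sketched, the new variant records both sides of the disagreement so
an admin can compare them against their estimator's output (field
names illustrative):

```rust
pub enum ClosureReason {
	// ... existing variants ...
	/// The peer's feerate was too low to meet our required minimum.
	PeerFeerateTooLow {
		peer_feerate_sat_per_kw: u32,
		required_feerate_sat_per_kw: u32,
	},
}
```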
In the next commit we'll add a second field to
`ChannelError::Close`, so here we prep by converting existing call
sites to the constructor function, which is almost a full-file sed.
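The conversion itself is mechanical, along these lines (a sketch;
the error string and surrounding function are invented for
illustration):

```rust
fn example_check(valid: bool) -> Result<(), ChannelError> {
	if !valid {
		// Before: constructing the variant directly at every site:
		//     return Err(ChannelError::Close("bad params".to_owned()));
		// After: going through the constructor, so the next commit
		// can populate its new second field in one place:
		return Err(ChannelError::close("bad params".to_owned()));
	}
	Ok(())
}
```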