If we end up "paying" for an `htlc_minimum_msat` with fees, we
increment `already_collected_value_msat` by more than the amount
of the path that we collected (whose `value_contribution_msat` is
higher than the total payment amount, despite having been reduced
down to the payment amount).
This already throws off our total value collection target, and in
the coming commit(s) it would also throw off our path selection
calculations.
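To make the accounting concrete, here is a small illustration with made-up
numbers (the amounts and variable bindings are hypothetical, not taken from
the router code):

```rust
fn main() {
    // We want to collect 10_000 msat, but a hop's htlc_minimum_msat forces
    // the amount sent over this path up to 15_000 msat, with the extra
    // 5_000 msat effectively paid for via fees.
    let total_payment_msat: u64 = 10_000;
    let value_contribution_msat: u64 = 15_000; // bumped to meet htlc_minimum_msat
    let mut already_collected_value_msat: u64 = 0;

    // We credit the full (bumped) contribution, not the 10_000 msat that
    // actually counts toward the payment, overshooting the collection target.
    already_collected_value_msat += value_contribution_msat;
    assert!(already_collected_value_msat > total_payment_msat);
}
```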
In general we should avoid, as much as possible, taking paths that we
are confident will not work, but we should be willing to try each
payment at least once, even if it's over a channel that failed
recently. A full Bitcoin penalty for such a channel seems
reasonable - lightning fees are unlikely to ever reach that point,
so such channels will be scored much worse than any other potential
path, while still being below `u64::max_value()`.
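For reference, the arithmetic behind a "full Bitcoin" penalty staying below
`u64::max_value()` (the constant name here is just for illustration):

```rust
// 21 million BTC expressed in millisatoshis:
const FULL_BITCOIN_PENALTY_MSAT: u64 = 21_000_000 * 100_000_000 * 1_000; // ~2.1 * 10^18
// u64::max_value() is roughly 1.8 * 10^19, so this penalty still sits below
// the "impossible" value while dwarfing any realistic lightning fee.
```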
When we consider sending an HTLC over a given channel impossible
due to our current knowledge of the channel's liquidity, we
currently always assign a penalty of `u64::max_value()`. However,
because we now refuse to retry a payment along the same path in
the router itself, we can now make this value configurable. This
allows users to have a relatively high knowledge decay interval
without the side-effect of refusing to try the only available path
in cases where a channel is intermittently available.
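A minimal sketch of what this looks like, with simplified stand-in types
rather than the real scorer (the field name here is an assumption, not
necessarily the exact knob):

```rust
// Simplified scoring parameters: instead of hard-coding u64::max_value(),
// the scorer consults a user-configurable penalty when its liquidity
// knowledge says the HTLC cannot fit through the channel.
struct ScoringParameters {
    considered_impossible_penalty_msat: u64,
}

fn channel_penalty_msat(
    params: &ScoringParameters, htlc_fits: bool, base_penalty_msat: u64,
) -> u64 {
    if htlc_fits {
        base_penalty_msat
    } else {
        params.considered_impossible_penalty_msat
    }
}
```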
When an HTLC fails, we currently rely on the scorer learning the
failed channel and assigning an infinite (`u64::max_value()`)
penalty to the channel so as to avoid retrying over the exact same
path (if there's only one available path). This is common when
trying to pay a mobile client behind an LSP if the mobile client is
currently offline.
This leads to the scorer being overly conservative in some cases -
returning `u64::max_value()` when a given path hasn't been tried
for a given payment may not be the best decision, even if that
channel failed 50 minutes ago.
By tracking channels which failed on a payment part level and
explicitly refusing to route over them we can relax the
requirements on the scorer, allowing it to make different decisions
on how to treat channels that failed relatively recently without
causing payments to retry the same path forever.
This does have the drawback that it could allow two separate parts
of a payment to traverse the same path even though that path just
failed; however, this should only occur if the payment is going to
fail anyway, at least as long as the scorer is properly learning.
Closes #1241, superseding #1252.
ReadOnlyNetworkGraph uses BTreeMap to store its nodes and channels, but
these data structures are not supported by C bindings. Expose look-up
functions on these maps in lieu of such support.
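A simplified sketch of the idea, using stand-in types rather than the real
graph structures: the map stays private and a lookup method is exposed,
which the bindings can map to a plain function call.

```rust
use std::collections::BTreeMap;

pub struct ReadOnlyGraph {
    // Keyed by short channel ID; the value type is simplified here.
    channels: BTreeMap<u64, String>,
}

impl ReadOnlyGraph {
    // Bindings-friendly accessor in lieu of exposing the BTreeMap itself.
    pub fn channel(&self, short_channel_id: u64) -> Option<&String> {
        self.channels.get(&short_channel_id)
    }
}
```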
Dependabot has a ton of issues with its rust integration that makes
it wholly useless, and very annoying:
* It has no concept of MSRV, opening PRs that are not going to pass
CI.
* It has no concept of patch-level - if we depend on tokio 1.X,
that means any version of tokio > 1.X, but dependabot insists on
opening a PR to "update us" to tokio 1.X + 1, even though it
doesn't impact what version our users use (and often violates
MSRV).
* It has no concept of dependencies that rely on each other,
causing it to open a PR to update us to bitcoin_hashes X + 1,
even though we're still depending on rust-bitcoin Y which
depends on bitcoin_hashes X, causing build failure.
* It hogs CI resources, getting CI run twice: once for the branch
and once for the PR.
* It creates branches directly on the rust-lightning repo, making
it look like the work is somehow connected to the
lightningdevkit project, even though it isn't, and spamming the
local clones of project contributors.
At the end of the day, dependabot has never meaningfully
contributed to notifying us of an important dependency, and,
really, we don't have enough dependencies for it to matter.
In order to avoid failing to find paths due to the new channel
saturation limit, if we fail to find enough paths, we simply
disable the saturation limit for further path finding iterations.
Because we can now increase the maximum sent over a given channel
during routefinding, we may now generate redundant paths for the
same payment. Because this is wasteful in the network, we add an
additional pass during routefinding to merge redundant paths.
Note that two tests which previously attempted to send exactly the
available liquidity over a channel which charged an absolute fee
need updating - in those cases the router will first collect a path
that is saturation-limited, then attempt to collect a second path
without a saturation limit while still honoring the existing
utilized capacity on the channel, causing failure as the absolute
fee must be included.
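A heavily simplified sketch of the two-pass behavior described above (all
helper names and types are stand-ins, not the router's internals):

```rust
struct Path { value_msat: u64 }

// Placeholder for the actual pathfinding pass; the flag controls whether
// the per-channel saturation limit is honored.
fn find_paths(target_msat: u64, respect_saturation_limit: bool) -> Vec<Path> {
    let _ = (target_msat, respect_saturation_limit);
    Vec::new()
}

fn collected_msat(paths: &[Path]) -> u64 {
    paths.iter().map(|p| p.value_msat).sum()
}

fn route(target_msat: u64) -> Vec<Path> {
    // First pass honors the saturation limit.
    let mut paths = find_paths(target_msat, true);
    if collected_msat(&paths) < target_msat {
        // Not enough value collected: retry with the limit disabled, while
        // the capacity already used on each channel is still accounted for.
        let missing = target_msat - collected_msat(&paths);
        paths.extend(find_paths(missing, false));
    }
    // Redundant paths over identical hops may now exist; a final pass would
    // merge them before building the route.
    paths
}
```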
Currently we only opt to split a payment into an MPP if we have
completely and totally used a channel's available capacity (up to
the announced htlc_max or on-chain capacity, whichever is lower).
This is obviously destined to fail as channels are unlikely to have
their full capacity available.
Here we do the minimum viable fix by simply limiting each channel to
contributing at most a configurable power-of-1/2 of its capacity. We
default this new
configuration knob to 1 (1/2 of the channel) so as to avoid a
substantial change but in the future we may consider changing this
to 2 (1/4) or even 3 (1/8).
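A sketch of the resulting per-channel cap, treating the knob as a right
shift (the function name is illustrative):

```rust
// A power-of-1/2 limit is a right shift of the channel's effective
// capacity: 1 => 1/2, 2 => 1/4, 3 => 1/8.
fn channel_usage_cap_msat(effective_capacity_msat: u64, saturation_power_of_half: u8) -> u64 {
    effective_capacity_msat >> saturation_power_of_half
}

// With the default of 1, a 1_000_000_000 msat channel contributes at most
// 500_000_000 msat to any single payment.
```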
`LowerBoundedFeeEstimator` is a wrapper for `Deref`s to `FeeEstimator`s
that limits the get_est_sat_per_1000_weight() method to no less than 253
sats/kW.
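A minimal sketch of the wrapper, with the trait pared down to a single
method (the real `FeeEstimator` also takes a confirmation target):

```rust
use core::ops::Deref;

const FEERATE_FLOOR_SATS_PER_KW: u32 = 253;

pub trait FeeEstimator {
    fn get_est_sat_per_1000_weight(&self) -> u32;
}

pub struct LowerBoundedFeeEstimator<F: Deref>(pub F) where F::Target: FeeEstimator;

impl<F: Deref> LowerBoundedFeeEstimator<F> where F::Target: FeeEstimator {
    pub fn get_est_sat_per_1000_weight(&self) -> u32 {
        // Round any estimate below the floor up to 253 sats per kilo-weight.
        self.0.get_est_sat_per_1000_weight().max(FEERATE_FLOOR_SATS_PER_KW)
    }
}
```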
Because we serialize `Instant`s using wallclock time in
`ProbabilisticScorer`, if time goes backwards across restarts we
may end up with `Instant`s in the future, which causes Rust prior
to 1.60 to panic when calculating durations. Here we simply avoid
this by setting the time to `now` if we get a time in the future.
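A sketch of the clamping, assuming the timestamp is stored as a duration
since the UNIX epoch (as wallclock serialization implies); the function
name is illustrative:

```rust
use std::time::{Duration, Instant, SystemTime};

fn instant_from_wallclock(stored_since_epoch: Duration) -> Instant {
    let now_since_epoch = SystemTime::now()
        .duration_since(SystemTime::UNIX_EPOCH)
        .expect("system clock set before the UNIX epoch");
    // A stored time in the future yields zero elapsed time, i.e. `now`,
    // so we never construct an `Instant` that lies in the future.
    let elapsed = now_since_epoch
        .checked_sub(stored_since_epoch)
        .unwrap_or(Duration::from_secs(0));
    Instant::now().checked_sub(elapsed).unwrap_or_else(Instant::now)
}
```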
As the map values are no longer only `channel_id`s, but also
`counterparty_node_id`s, the map is renamed to better correspond to
what's actually stored in the map.
When we send payment probes, we generate the [`PaymentHash`] based on a
probing cookie secret and a random [`PaymentId`]. This allows us to
discern probes from real payments, without keeping additional state.
Using this field just for MPP doesn't make sense when it could
intuitively also be used to indicate single-path payments. We therefore
rename `max_mpp_path_count` to `max_path_count` and make sure that a
value of 1 ensures MPP is not even tried.
This fixes an insta-panic in `ChannelMonitor` deserialization where
we always `unwrap` a previous value to determine the default value
of a later field. However, because we always ran the `unwrap`
before the previous field was read, we'd always panic.
The fix is rather simple - use an `OptionDeserWrapper` for
`default_value` fields and only fill in the default value if no
value was read while walking the TLV stream.
The only complexity comes from our desire to support
`read_tlv_field` calls that use an explicit field rather than an
`Option` of some sort, which requires some statement which can
assign both an `OptionDeserWrapper<T>` variable and a `T` variable.
We settle on `x = t.into()` and implement `From<T> for
OptionDeserWrapper<T>` which works, though it requires users to
specify types explicitly due to Rust determining expression types
prior to macro execution, completely guessing with no knowledge for
integer expressions (see
https://github.com/rust-lang/rust/issues/91369).
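A sketch of the conversion described above, with the wrapper simplified
(the real one constrains `T` further):

```rust
pub struct OptionDeserWrapper<T>(pub Option<T>);

impl<T> From<T> for OptionDeserWrapper<T> {
    fn from(t: T) -> Self { OptionDeserWrapper(Some(t)) }
}

// `x = t.into()` now works whether `x` is a plain `T` or the wrapper, though
// integer literals may need explicit type annotations, per the issue linked
// above.
```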
A user might want to explicitly penalize or prioritize a particular
node. We now allow them to do so by specifying a manual penalty
override for a given node that is then returned by the scorer.
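A minimal sketch of the override, with stand-in types (the method name and
keying are assumptions, not necessarily the exact API):

```rust
use std::collections::HashMap;

// Nodes are keyed here by their serialized public key.
struct ManualPenalties {
    overrides: HashMap<[u8; 33], u64>,
}

impl ManualPenalties {
    fn set_manual_penalty(&mut self, node_id: [u8; 33], penalty_msat: u64) {
        self.overrides.insert(node_id, penalty_msat);
    }

    // The manual value, when present, replaces whatever the scorer computed.
    fn penalty_msat(&self, node_id: &[u8; 33], computed_msat: u64) -> u64 {
        *self.overrides.get(node_id).unwrap_or(&computed_msat)
    }
}
```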