We set up an MPP scenario with two paths in which we need to overpay to
reach `htlc_minimum_msat`. We then fail the overpaid path and check that
on retry our `max_total_routing_fee_msat` only accounts for the path
fees, but not for the fees overpaid in the first attempt.
We exclude any candidate hop if using it would cause the aggregated
path routing fees to exceed `max_total_routing_fee_msat`.
Moreover, we return an error if the aggregated fees over all paths of
the selected route would surpass `max_total_routing_fee_msat`.
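As a sketch, the per-hop check amounts to the following (illustrative names, not LDK's actual internals):

```rust
// Hypothetical helper: skip a candidate hop if adding its fee would push
// the path's aggregated routing fees over the user-provided limit.
fn may_use_hop(
    path_fees_msat: u64, hop_fee_msat: u64, max_total_routing_fee_msat: Option<u64>,
) -> bool {
    match max_total_routing_fee_msat {
        Some(limit) => path_fees_msat.saturating_add(hop_fee_msat) <= limit,
        None => true, // no limit configured
    }
}
```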
Currently, users have no means to upper-bound the total fees accruing
when finding a route. Here, we add a corresponding field to
`RouteParameters` which will be used to limit the candidate set during
path finding in the following commits.
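For illustration, assuming the new field is a public `Option<u64>` on `RouteParameters` (with `None` meaning no limit), usage could look like:

```rust
// Hypothetical usage sketch; the field layout of `RouteParameters` is
// simplified here.
let route_params = RouteParameters {
    payment_params,
    final_value_msat: 1_000_000,
    // Fail route-finding if the total fees across all paths of the
    // route would exceed 5_000 msat.
    max_total_routing_fee_msat: Some(5_000),
};
```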
`ChannelManager::finish_force_close_channel` exists to do cleanups
which must happen without the `per_peer_state` mutex held. However,
because it lacked lock assertions, several changes snuck in
recently which resulted in it running with peer-state locks held,
risking a deadlock if some HTLCs need to be failed.
We've run into this several times in the wild, likely due to
https://github.com/ElementsProject/lightning/issues/6200 wherein a node on the
path will error with 0x1000 but not provide a channel update (a spec
violation).
Previously, we would blame the inbound edge even though the buggy peer wanted
us to blame the outbound edge. Since this issue seems to be recurring and our
blaming the inbound edge is causing us to punish innocent channels, trust the
peer that the outbound edge is the one to blame.
Previously this value would be incorrectly set to true because we wouldn't
account for blinded hops when determining if we were processing the last hop's
failure packet.
Our ultimate goal with this field is to set
`PaymentPathFailed::payment_failed_permanently`, so use this name
rather than flipping a bool back and forth across methods.
While there is no great way to handle a true failure to persist a
`ChannelMonitorUpdate`, it is confusing for users for there to be
no error variant at all on an I/O operation.
Thus, here we re-add the error variant removed over the past
handful of commits, but rather than handle it in a truly unsafe
way, we simply panic, optimizing for maximum mutex poisoning to
ensure any future operations fail and return immediately.
In the future, we may consider changing the handling of this to
instead set some "disconnect all peers and fail all operations"
bool to give the user a better chance to shutdown in a semi-orderly
fashion, but there's only so much that can be done in lightning if
we truly cannot persist new updates.
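The "mutex poisoning" behavior relied on here is a standard property of `std::sync::Mutex`, demonstrated by this self-contained (non-LDK) snippet:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    let state = Arc::new(Mutex::new(0u32));
    let state_clone = Arc::clone(&state);
    // Panicking while a guard is live poisons the mutex...
    let _ = thread::spawn(move || {
        let _guard = state_clone.lock().unwrap();
        panic!("simulated persistence failure");
    }).join();
    // ...so all future lock attempts fail immediately.
    assert!(state.lock().is_err());
}
```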
In general, doc comments on trait impl blocks are not very visible
in rustdoc output, and unless they provide useful information they
should be elided.
Here we drop useless doc comments on `ChainMonitor`'s `Watch` impl
methods.
Now that `handle_new_monitor_update` can no longer return an error,
it similarly no longer needs any code to handle errors. Here we
remove that code, substantially reducing macro variants.
Since we now (almost) support async monitor update persistence, the
documentation on `ChannelMonitorUpdateStatus` can be updated to no
longer suggest users must keep a local copy that persists before
returning. However, because there are still a few remaining issues,
we note that async support is currently in beta and explicitly warn
of the potential for loss of funds.
Fixes #1684
`MonitorEvent::CommitmentTxConfirmed` has always been the result
of us force-closing the channel, not the counterparty doing so.
Thus, it was always a bit of a misnomer. Worse, it carried over
into the channel's `ClosureReason` in the event API.
Here we simply rename it and use the proper `ClosureReason`.
When a `ChannelMonitorUpdate` fails to apply, it generally means
we cannot reach our storage backend. This is, in general, a
critical issue, but one that is often only transient.
Sadly, users see the failure variant and return it on any I/O
error, resulting in channel force-closures due to transient issues.
Users don't generally expect force-closes in most cases, and
luckily with async `ChannelMonitorUpdate`s supported we don't take
any risk by "delaying" the `ChannelMonitorUpdate` indefinitely.
Thus, here we drop the `PermanentFailure` variant entirely, making
all failures instead be "the update is in progress, but won't ever
complete", which is equivalent if we do not close the channel
automatically.
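A sketch of the contract a persister now follows on a transient failure (the `MyStore` type, `write`, and `schedule_retry` are hypothetical; LDK's actual `Persist` trait signature differs):

```rust
impl MyStore {
    fn persist_update(&self, key: &str, serialized_monitor: &[u8]) -> ChannelMonitorUpdateStatus {
        match self.write(key, serialized_monitor) {
            Ok(()) => ChannelMonitorUpdateStatus::Completed,
            Err(_transient_io_error) => {
                // No force-close: report the update as in-progress and
                // retry in the background. Once the write succeeds, the
                // `ChainMonitor` is notified (via
                // `channel_monitor_updated`) and the channel resumes.
                self.schedule_retry(key.to_owned());
                ChannelMonitorUpdateStatus::InProgress
            },
        }
    }
}
```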
Two of the payment tests currently rely on failing to persist
`ChannelMonitorUpdate`s as their method of failing payments before
they even get out the door.
In the coming commits we'll drop the persist-failure error codes,
so here we rewrite these tests to rely on trying to send more than
is available in a channel.
Earlier @benthecarman re-exported `RouteHint` to make life easier
for developers that use `lightning-invoice` and don't use the
`lightning` crate.
This only solved part of the issue. To create a `RouteHint` the
developer must also have access to `RouteHintHop`.
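With both re-exports in place, an invoice-only crate can now write:

```rust
// No direct dependency on the `lightning` crate needed for this import.
use lightning_invoice::{RouteHint, RouteHintHop};
```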
See also:
PR https://github.com/lightningdevkit/rust-lightning/pull/2572
commit 79b426f49b
In bindings, we can't use unbounded generic types, and thus have to
rip out the `ScoreParams` and replace them with static
`ProbabilisticScoringFeeParameters` universally. To make this easier,
using `Default::default()` everywhere allows the type to change out
from under the test without the test needing to change.
Our "what is the success probability of paying over a channel with
the given liquidity bounds" calculation currently assumes the
probability of where the liquidity lies in a channel is constant
across the entire capacity of a channel. This is obviously a
somewhat dubious assumption given most nodes don't materially
rebalance and flows within the network often push liquidity
"towards the edges".
Here we add an option to consider this when scoring channels during
routefinding. Specifically, if a new `linear_success_probability`
flag is unset on `ProbabilisticScoringFeeParameters`, rather than
assuming a PDF of `1` (across the channel's capacity scaled from 0
to 1), we use `(x - 0.5)^2` (up to normalization).
This assumes liquidity is likely to be near the edges, which
matches experimental results. Further, calculating the CDF (i.e.
integral) between arbitrary points on the PDF is trivial, which we
do as our main scoring function.
While this (finally) introduces floats into our scoring, it's not
practical to exponentiate using fixed precision. Benchmarks show
this is a performance regression, but not a huge one, and it is
more than made up for by the increase in payment success rates.
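Concretely, as a standalone sketch (not LDK's exact code): with the unnormalized PDF `f(x) = (x - 0.5)^2` on `[0, 1]` and antiderivative `F(x) = (x - 0.5)^3 / 3`, the probability that a payment of `amount` succeeds, given liquidity known to lie in `[min, max]`, is `(F(max) - F(amount)) / (F(max) - F(min))`, so the normalization constant cancels:

```rust
fn antiderivative(x: f64) -> f64 {
    let d = x - 0.5;
    d * d * d / 3.0
}

/// All arguments are scaled to the channel's capacity, i.e. lie in
/// [0, 1], with min_liquidity <= amount < max_liquidity.
fn success_probability(amount: f64, min_liquidity: f64, max_liquidity: f64) -> f64 {
    (antiderivative(max_liquidity) - antiderivative(amount))
        / (antiderivative(max_liquidity) - antiderivative(min_liquidity))
}

fn main() {
    // With no information (bounds [0, 1]) a payment of half the capacity
    // still succeeds with probability 0.5, but a small payment is no
    // longer treated as a near-certain success (~0.756 here vs 0.9 under
    // the flat model).
    assert!((success_probability(0.5, 0.0, 1.0) - 0.5).abs() < 1e-9);
    println!("{:.3}", success_probability(0.1, 0.0, 1.0));
}
```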
When we started considering the in-flight amounts when scoring, we
took the approach of considering the in-flight amount as an
effective reduction in the channel's total capacity. That was fine
while scoring with a flat success probability PDF; however, in the
next commit we'll move to a highly nonlinear one, which makes this
a pretty confusing heuristic.
Here, instead, we move to considering the in-flight amount as
simply an extension of the amount we're trying to send over the
channel, which is equivalent for the flat success probability PDF,
but makes much more sense in a nonlinear world.
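In sketch form (illustrative names only):

```rust
// Old heuristic: treat in-flight HTLCs as a reduction in capacity.
fn old_effective_capacity(capacity_msat: u64, inflight_msat: u64) -> u64 {
    capacity_msat.saturating_sub(inflight_msat)
}
// New heuristic: treat in-flight HTLCs as part of the amount sent.
fn new_effective_amount(amount_msat: u64, inflight_msat: u64) -> u64 {
    amount_msat.saturating_add(inflight_msat)
}
```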
If we are examining a channel for which we have no information at
all, we traditionally assume the HTLC success probability is
proportional to the channel's capacity. While this may be the case,
it is not the case that a tiny payment over a huge channel is
guaranteed to succeed, as we assume. Rather, the probability of
such success is likely closer to 50% than 100%.
Here we try to capture this by simply scaling the success
probability for channels where we have no information down
linearly. We pick 75% as the upper bound rather arbitrarily; while
50% may be more accurate, it's possible it would lead to an
over-reliance on channels which we have paid through in the past,
which aren't necessarily always the best candidates.
Note that we only do this scaling for the historical bucket
tracker, as there we can be confident we've never seen a successful
HTLC completion on the given channel. If we were to apply the same
scaling to the simple liquidity bounds based scoring we'd penalize
channels we've never tried over those we've only ever failed to pay
over, which is obviously not a good outcome.
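One plausible reading of the scaling, as a standalone sketch (LDK's actual fixed-point math differs):

```rust
// For a channel with no historical data: assume success probability
// proportional to how much of the capacity remains after the payment,
// capped at 75% rather than 100%.
fn no_info_success_probability(amount_msat: u64, capacity_msat: u64) -> f64 {
    0.75 * (1.0 - amount_msat as f64 / capacity_msat as f64)
}
```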
Our "what is the success probability of paying over a channel with
the given liquidity bounds" calculation is reused in a few places,
and is a key assumption across our main score calculation and the
historical bucket score calculations.
Here we break it out into a function to make it easier to
experiment with different success probability calculations.
Note that this drops the numerator + 1 in the liquidity scorer,
which was added to compensate for the divisor + 1 (which exists to
avoid divide-by-zero), making the new math slightly less correct,
though not by any material amount.
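A hypothetical shape for the broken-out helper (LDK's actual signature and fixed-point details may differ):

```rust
/// Success probability of sending `amount_msat` given the current
/// liquidity bounds, as a numerator/denominator pair (flat PDF).
fn success_probability(
    amount_msat: u64, min_liquidity_msat: u64, max_liquidity_msat: u64,
) -> (u64, u64) {
    debug_assert!(min_liquidity_msat <= amount_msat && amount_msat < max_liquidity_msat);
    let numerator = max_liquidity_msat - amount_msat;
    // The divisor + 1 guards against divide-by-zero; the matching
    // numerator + 1 is the part dropped in this commit.
    let denominator = max_liquidity_msat - min_liquidity_msat + 1;
    (numerator, denominator)
}
```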