We previously ignored the amount a transaction's existing inputs were already
attempting to spend when deciding whether we should add more inputs
during coin selection. This would result in us attaching more inputs
than necessary to satisfy our target amount. In the case of HTLC
transactions, we'd burn the HTLC amount completely, since the pre-signed
transaction has zero fee (input amount == output amount).
Along the way, we also fix the slight overpayment in anchor
transactions. We now properly account for the fees the transaction
already pays, simply by treating them as part of the anchor
input amount.
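Roughly, the accounting change looks like the following sketch (names and structure here are illustrative, not the actual coin-selection code):

```rust
// Hypothetical helper illustrating the fix: credit the value the transaction's
// existing inputs already spend (plus any fee the pre-signed transaction
// already pays, as with anchors) toward the target, and only ask coin
// selection for the remaining shortfall.
fn additional_value_needed_sat(
    target_output_value_sat: u64,
    target_fee_sat: u64,
    existing_input_value_sat: u64,
    fee_already_paid_sat: u64, // zero for HTLC txs, nonzero when bumping via an anchor
) -> u64 {
    let effective_input_value_sat = existing_input_value_sat + fee_already_paid_sat;
    (target_output_value_sat + target_fee_sat).saturating_sub(effective_input_value_sat)
}
```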
Since we don't know the total input amount of an external claim (those
which come from anchor channels), we can't limit our feerate bumps by the
amount of funds we have available to use. Instead, we choose to limit it
by a margin of the new feerate estimate.
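As a hedged sketch (the margin constant and names below are assumptions, not the actual values used), the cap looks something like:

```rust
// Illustrative only: when bumping an external claim (total input amount
// unknown), don't let the target feerate exceed a fixed multiple of the
// latest estimate.
const EXTERNAL_CLAIM_FEERATE_MARGIN: u32 = 2; // assumed value, for illustration

fn limit_external_claim_feerate(proposed_feerate: u32, new_estimate: u32) -> u32 {
    proposed_feerate.min(new_estimate.saturating_mul(EXTERNAL_CLAIM_FEERATE_MARGIN))
}
```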
If the last hop was provided by a route hint, we assume it's not an announced channel.
If, furthermore, only a single route hint is provided, we refrain from probing
all the way to the end and instead probe only up to the second-to-last channel.
Optimally, we'd do this not based on the above-mentioned assumption but
rather by checking for inclusion in our network graph. However, we don't
have access to our graph in `ChannelManager`.
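In code, the trimming amounts to something like the following sketch (a hypothetical helper operating on a plain list of hops rather than LDK's actual path types):

```rust
// Drop the final hop from the path we probe when it was produced by a lone
// route hint and is therefore assumed to be unannounced.
fn hops_to_probe(mut path_hops: Vec<u64 /* short channel ids */>, last_hop_from_single_hint: bool) -> Vec<u64> {
    if last_hop_from_single_hint && path_hops.len() > 1 {
        // Probe only up to the second-to-last channel.
        let _ = path_hops.pop();
    }
    path_hops
}
```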
We add a `ChannelManager::send_preflight_probes` method that can be used
to send pre-flight probes given some [`RouteParameters`]. Additionally,
we add convenience methods for sending spontaneous probes and for sending
pre-flight probes for a given invoice.
As pre-flight probes might take up some of the available liquidity, we
here introduce a limit: channels whose available liquidity is less than the
required amount times
`UserConfig::preflight_probing_liquidity_limit_multiplier` won't be used
to send pre-flight probes.
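The liquidity gate can be pictured roughly as follows (only the names come from the text above; the exact form of the check is an assumption):

```rust
// A channel is only used for a pre-flight probe if its available liquidity is
// at least the probed amount times the configured multiplier.
fn usable_for_preflight_probe(
    available_liquidity_msat: u64,
    probe_amount_msat: u64,
    preflight_probing_liquidity_limit_multiplier: u64,
) -> bool {
    available_liquidity_msat
        >= probe_amount_msat.saturating_mul(preflight_probing_liquidity_limit_multiplier)
}
```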
This commit is more or less a carbon copy of the pre-flight
probing code recently added to LDK Node.
When sending preflight probes, we want to exclude last hops that are
possibly announced. To this end, we here include a new field in
`RouteHop` that will be `true` when we either definitely know the hop to be
announced, or when there exist public channels between the hop's
counterparties that this hop might refer to (i.e., be an alias for).
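The shape of the addition is roughly the following (the field name and doc text here are illustrative; the other `RouteHop` fields are omitted):

```rust
pub struct RouteHop {
    // ...existing fields omitted for brevity...
    /// Set when we definitely know this hop's channel is announced, or when a
    /// public channel exists between the same counterparties that this hop
    /// could be an alias for. Only `false` if the hop is surely unannounced.
    pub maybe_announced_channel: bool,
}
```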
Sadly the pinning introduced in 050f5a9029
was brittle in the face of any further syn updates, and has already
broken.
Here we fix it by looking up the actual version of syn to pin.
Note that this dependency is somewhat nonsense as it's actually only
a `criterion` dependency, pulled in even though we haven't set the
bench flag (as we aren't yet using `resolver = 2`).
Our `Trusted*` wrappers in `chan_utils` expose additional inner
fields by reference. However, because they were not explicitly
marked as returning a reference with the wrapped struct's
lifetime, rustc considered them to return a reference with
the wrapper struct's lifetime.
This is unnecessarily restrictive, and resulted in the addition of
a clone in 9850c5814a which we remove
here.
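The issue and fix look roughly like this standalone illustration (hypothetical types, not the actual `chan_utils` code):

```rust
struct Keys { payment_point: Vec<u8> }
struct TrustedWrapper<'a> { inner: &'a Keys }

impl<'a> TrustedWrapper<'a> {
    // Before: returning `&Vec<u8>` borrows from `&self`, i.e. from the
    // short-lived wrapper, so callers had to clone to keep the data around.
    // After: naming the wrapped struct's lifetime lets the reference live as
    // long as the wrapped struct itself.
    fn payment_point(&self) -> &'a Vec<u8> {
        &self.inner.payment_point
    }
}
```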
Scoring buckets are stored as fixed-point integers, with a 5-bit
fractional part (i.e. a value of 1.0 is stored as "32"). Now that
we also have 32 buckets, this leads to the codebase having many
references to 32 which could reasonably be confused for each other.
Thus, we add a constant here for the value 1.0 in our fixed-point
scheme.
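Concretely, the constant is just the fixed-point representation of 1.0 (the names in this sketch are illustrative):

```rust
// With a 5-bit fractional part, 1.0 is represented as 1 << 5 == 32.
const BUCKET_FIXED_POINT_ONE: u16 = 32;

// e.g. marking a bucket as fully hit without a bare, ambiguous `32`:
fn saturate_bucket(buckets: &mut [u16; 32], idx: usize) {
    buckets[idx] = BUCKET_FIXED_POINT_ONE;
}
```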
`historical_estimated_channel_liquidity_probabilities` previously
decayed to `Some(([0; 8], [0; 8]))`. This was thought to be useful
in that it allowed identification of cases where data was previously
available but is now decayed away vs cases where data was never
available. However, with the introduction of
`historical_estimated_payment_success_probability` (which uses the
existing scoring routines so will decay to `None`) this is
unnecessarily confusing.
Given that data which has decayed to zero won't be used anyway,
there's little reason to keep the old behavior, and we now decay to
`None`.
We also take this opportunity to split the overloaded
`get_decayed_buckets`, removing unnecessary code during scoring.
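In effect, the accessor now behaves like this sketch (a simplified signature; the real method takes a channel and target and uses the 8-bucket representation that existed at this point):

```rust
// Return `None` once the historical data has fully decayed, rather than
// `Some(([0; 8], [0; 8]))`.
fn decayed_buckets_or_none(
    decayed_min: [u16; 8],
    decayed_max: [u16; 8],
) -> Option<([u16; 8], [u16; 8])> {
    if decayed_min.iter().all(|&b| b == 0) && decayed_max.iter().all(|&b| b == 0) {
        None
    } else {
        Some((decayed_min, decayed_max))
    }
}
```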
Points in the 0th minimum bucket either indicate we sent a payment
which is < 1/16,384th of the channel's capacity or, more likely,
we failed to send a payment. In either case, averaging the success
probability across the full range of upper-bounds doesn't make a
whole lot of sense - if we've never managed to send a "real"
payment over a channel, we should be considering it quite poor.
To address this, we special-case the 0th minimum bucket and only
look at the largest-offset max bucket when calculating the success
probability.
The lower-bound of the scoring history buckets generally never gets
used - if we try to send a payment and it fails, we don't learn
a new lower-bound for the liquidity of a channel, and if we
successfully send a payment we only learn a lower-bound that
applied *before* we sent the payment, not after it completed.
If we assume channels have some "steady-state" liquidity, then
tracking our liquidity estimates *after* a payment doesn't really
make sense - we're not super likely to make a second payment across
the same channel immediately (or, if we are, we can use our
un-decayed liquidity estimates for that). By the time we do go to
use the same channel again, we'd assume that it's back at its
"steady-state" and the impacts of our payment have been lost.
To combat both of these effects, here we "subtract" the impact of
any just-successful payments from our liquidity estimates prior to
updating the historical buckets.
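As a sketch (a hypothetical helper; the actual update path differs), the historical buckets are fed the liquidity as it stood before the successful payment:

```rust
// On payment success our live estimate of the channel's liquidity drops by the
// amount sent; add that amount back before bucketing so the history reflects
// the assumed steady state rather than the momentary dip.
fn history_bucket_bounds_msat(
    min_liquidity_msat: u64,
    max_liquidity_msat: u64,
    just_sent_msat: u64, // zero when recording a failure
) -> (u64, u64) {
    (
        min_liquidity_msat.saturating_add(just_sent_msat),
        max_liquidity_msat.saturating_add(just_sent_msat),
    )
}
```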
Currently we store our historical estimates of channel liquidity in
eight evenly-sized buckets, each representing a full octile of the
channel's total capacity. This lacks precision, especially at the
edges of channels where liquidity is expected to lie.
To mitigate this, we'd originally checked if a payment lies within
a bucket by comparing it to a sliding scale of 64ths of the
channel's capacity. This allowed us to assign penalties to payments
that fall anywhere above the bottom 64th or below the
top 64th of a channel.
However, this still lacks material precision - on a 1 BTC channel
we could only consider failures for HTLCs above 1.5 million sats.
With today's lightning usage often including 1-100 sat payments in
tips, this is a rather significant lack of precision.
Here we rip out the existing buckets and replace them with 32
*unequal* sized buckets. This allows us to focus our precision at
the edges of a channel (where the liquidity is likely to lie, and
where precision helps the most).
We set the size of the edge buckets to 1/16,384th of the channel,
with the size increasing exponentially until it approaches the
inner buckets. For backwards compatibility, the buckets divide
evenly into the old octiles, allowing us to convert the existing buckets
into the new ones cleanly.
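The table below is only an illustration of such a layout, in units of 1/16,384ths of the channel capacity (the actual boundaries in the codebase may differ); the point is that edge buckets are a single 1/16,384th wide, widths grow toward the middle, and every old octile boundary still lands on a bucket boundary:

```rust
// Illustrative bucket boundaries (33 boundaries => 32 buckets). Multiples of
// 2,048 (the old octile boundaries) all appear, so legacy 8-bucket data can be
// mapped onto the new buckets cleanly.
const BUCKET_START_POS_16384THS: [u16; 33] = [
    0, 1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, 3072, 4096, 6144, 8192,
    10240, 12288, 13312, 14336, 15360, 15872, 16128, 16256, 16320, 16352, 16368,
    16376, 16380, 16382, 16383, 16384,
];

// Map a payment amount to the bucket containing it.
fn amount_to_bucket(amount_msat: u64, capacity_msat: u64) -> usize {
    let pos = (amount_msat.saturating_mul(16384) / capacity_msat.max(1)).min(16384) as u16;
    BUCKET_START_POS_16384THS
        .partition_point(|&start| start <= pos)
        .saturating_sub(1)
        .min(31)
}
```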
This allows us to consider HTLCs down to 6,000 sats for 1 BTC
channels. In order to avoid failing to penalize channels which have
always failed, we drop the sliding scale for comparisons and simply
check if the payment is above the minimum bucket we're analyzing and
below *or in* the maximum one. This generates somewhat more
pessimistic scores, but fixes the lower bound where we suddenly
assign a 0% failure probability.
While this does represent a regression in routing performance, in
some cases the impact of not having to examine as many nodes
dominates, leading to a performance increase.
On a Xeon E3-1220 v5, the `large_mpp_routes` benchmark shows a 15%
performance increase, while the more stable benchmarks show an 8%
and 15% performance regression.
We're working with rust-bitcoin to remove the `core2` dependency
at https://github.com/rust-bitcoin/rust-bitcoin/pull/2066 but until
that lands and we can upgrade rust-bitcoin we're stuck with it. In
the meantime, we should still pass our MSRV tests.
`blockstream.info` is currently down, causing our CI to fail. This
shouldn't really be a thing, so we drop the blockstream.info-based
test here.
More generally, I'm not really a fan of having tests which run
(outside of CI) and call out to external servers - a developer
working on LDK shouldn't have to have internet access to run our
test suite and shouldn't be registering their presence with a third
party to run our tests.
Previously, we'd leave the payment secret field empty while sending
probes, which resulted in them being rejected
with `(PERM|invalid_onion_payload)` by Eclair nodes.
In order to mitigate the issue, we just set a random payment secret.
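A minimal sketch of the mitigation (using the `rand` crate for illustration; LDK draws randomness from its own entropy source):

```rust
use rand::RngCore;

// Generate a random 32-byte payment secret for probes instead of an all-zero
// one, which Eclair rejects as an invalid onion payload.
fn random_probe_payment_secret() -> [u8; 32] {
    let mut secret = [0u8; 32];
    rand::thread_rng().fill_bytes(&mut secret);
    secret
}
```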
`ChannelId` was weirdly listed in the re-export section of the docs and
reachable via multiple paths. Here we opt to make the `channel_id`
module private and leave only the `ChannelId` struct itself exposed.
In the `chanmon_consistency` fuzz, we currently "persist" the
`ChannelManager` on each loop iteration. With the new logic in the
past few commits to reduce the frequency of `ChannelManager`
persistences, this behavior now leaves a gap in our test coverage -
missing persistence notifications.
In order to catch (common-case) persistence misses, we update the
`chanmon_consistency` fuzzer to no longer persist the
`ChannelManager` unless the waker was woken and signaled to
persist, possibly reloading with a previous `ChannelManager` if we
were not signaled.
When reloading nodes A or C, the chanmon_consistency fuzzer
currently calls `get_and_clear_pending_msg_events` on the node,
potentially causing additional `ChannelMonitor` or `ChannelManager`
updates, just to check that no unexpected messages are generated.
There's not much reason to do so; the fuzzer could always swap in
a different command to call the same method, and the additional
checking requires some weird monitor persistence introspection.
Here we simplify the fuzzer by removing this logic.