Strings defined by third parties may contain control characters. Provide
a wrapper such that these are replaced when displayed. Useful in node
aliases and offer fields.
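A minimal sketch of such a wrapper (the name `PrintableString` and the choice of U+FFFD as the replacement character are illustrative assumptions, not necessarily what the crate uses):

```rust
use core::fmt;

/// Wraps a third-party-controlled string so that control characters are
/// replaced with U+FFFD whenever the string is displayed.
pub struct PrintableString<'a>(pub &'a str);

impl<'a> fmt::Display for PrintableString<'a> {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        for c in self.0.chars() {
            let c = if c.is_control() { core::char::REPLACEMENT_CHARACTER } else { c };
            fmt::Write::write_char(f, c)?;
        }
        Ok(())
    }
}
```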
This is part of moving the `Router` trait into `ChannelManager`, which will allow `ChannelManager` to fetch routes on the fly as part of supporting trampoline payments.
If we send payments over a path where a channel ended up being
closed, we'll remove it before we call
`ProbabilisticScorer::payment_path_failed`. This should be
fine, except that `payment_path_failed` does not break out of its
scoring loop if a channel is missing, causing it to assign a
minimum available-liquidity of the payment amount even to channels
which our attempt never arrived at.
The fix is simple - add the missing check and break.
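A hedged sketch of the fixed loop shape, with types and names simplified to stand-ins:

```rust
use std::collections::HashMap;

struct ChannelInfo; // stand-in for the graph's per-channel data

/// Walk the failed path, scoring hops until we hit either the failed
/// channel or a channel missing from the graph entirely.
fn score_failed_path(channels: &HashMap<u64, ChannelInfo>, path_scids: &[u64], failed_scid: u64) {
    for &scid in path_scids {
        if channels.get(&scid).is_none() {
            // The channel was removed (e.g. closed) before we scored the
            // failure; later hops never saw the payment, so don't assign
            // them a minimum available-liquidity bound.
            break;
        }
        if scid == failed_scid {
            // ... record the failure at this channel, then stop ...
            break;
        }
        // ... otherwise record that the payment transited this channel ...
    }
}
```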
Because we now never generate an `EffectiveCapacity` with an
`htlc_maximum_msat` set to `None`, making it non-`Option`al
effectively removes dead code, which we do here.
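A sketch of the shape of the change (the surrounding enum is simplified and its other variants elided):

```rust
pub enum EffectiveCapacity {
    /// Before this change the field was `htlc_maximum_msat: Option<u64>`;
    /// since every constructor now fills it in, the `Option` was dead code.
    Total { capacity_msat: u64, htlc_maximum_msat: u64 },
    /// Other variants (unknown capacity, etc.) elided from this sketch.
    Unknown,
}
```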
We currently construct `DirectedChannelInfo`s for routing before
checking if the given direction has its directional info filled in.
We then always check for directional info before actually deciding
to route over a channel, as otherwise we assume the channel is not
online.
This makes for somewhat redundant checks, and `DirectedChannelInfo`
isn't, by itself, a very useful API. Because fetching the HTLC maximum
or effective channel capacity gives spurious data if no directional
info is available, there's little reason to have that data
available, so here we check for directional info first. This
effectively merges `DirectedChannelInfo` and
`DirectedChannelInfoWithUpdate`.
In 56b07e52aa we made
`MultiThreadedLockableScore` fully bindings-compatible. However, that
commit did not add a `WriteableScore` implementation for it. This was
an oversight, as it is a `WriteableScore` in Rust and needs to be one
for use in other parts of the API.
Here we add the required impl in a way the bindings generator can
handle, along with conversion utilities.
While applying gossip info from an RGS server, a number of harmless
errors might arise which should be ignored. E.g., the client should
not fail if it sees duplicate gossip for the same channel or a
duplicate update.
Even at relatively high payment volumes, decaying knowledge of each
individual channel every hour causes aggressive retrying of
channels as we quickly forget the state of a channel. Even with the
historical tracker, this isn't fully remedied, as we'll track the
history bounds with the decayed value.
Instead, we decay every six hours here, reducing how often we'll
retry a channel due to decay.
In addition to this, the decay likely needs to be substantially
more linear, as tracked in #1752.
We had some user confusion on how the probabilistic scorer works,
especially in reference to the half-life parameter. This attempts
to clarify how the bounds work, and how they are decayed.
We never actually implemented the `NetworkGraph::node_failed` method. The
scorer now handles non-permanent failures by downgrading nodes, so we
don't want `NetworkGraph` to handle that case at all.
The method is renamed to `node_failed_permanent` to explicitly indicate
that this is the only case it handles. We also add tracking in the form
of two maps as fields of `NetworkGraph`, namely, `removed_nodes` and
`removed_channels`. We track these removed graph entries to ensure we
don't just resync them right away from gossip soon after removing them.
We stop tracking these removed entries whenever `remove_stale_channels_and_tracking()`
is called and the entries have been tracked for longer than
`REMOVED_ENTRIES_TRACKING_AGE_LIMIT_SECS`, which is currently set to one
week.
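A hedged sketch of the tracking and pruning described above, with simplified key and timestamp types (the real maps presumably carry richer values):

```rust
use std::collections::HashMap;

type NodeId = [u8; 33]; // stand-in for the graph's node id type

/// One week, per the description above.
const REMOVED_ENTRIES_TRACKING_AGE_LIMIT_SECS: u64 = 60 * 60 * 24 * 7;

struct RemovalTracking {
    /// Removed entry -> unix time at which we first tracked its removal.
    removed_nodes: HashMap<NodeId, u64>,
    removed_channels: HashMap<u64, u64>, // keyed by short channel id
}

impl RemovalTracking {
    /// Called from `remove_stale_channels_and_tracking()`: drop removal
    /// markers older than the age limit so the entries may be re-added
    /// from fresh gossip.
    fn prune(&mut self, now: u64) {
        self.removed_nodes
            .retain(|_, t| now.saturating_sub(*t) < REMOVED_ENTRIES_TRACKING_AGE_LIMIT_SECS);
        self.removed_channels
            .retain(|_, t| now.saturating_sub(*t) < REMOVED_ENTRIES_TRACKING_AGE_LIMIT_SECS);
    }
}
```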
To avoid scoring based on incredibly old historical liquidity data,
we add a new half-life here which is used to (very slowly) decay
historical liquidity tracking buckets.
This simplifies adding additional half-lives in
`DirectedChannelLiquidity` by storing a reference to the full
parameter set rather than only the single current half-life.
Our current `ProbabilisticScorer` attempts to build a model of the
current liquidity across the payment channel network. This works
fine to ignore channels we *just* tried to pay through, but it
fails to remember patterns over longer time horizons.
Specifically, there are *many* channels within the network that are
very often either fully saturated in one direction, or are
regularly rebalanced and rarely saturated. While our model may
discover that, when it decays its offsets or if there is a
temporary change in liquidity, it effectively forgets the "normal"
state of the channel.
This causes substantially suboptimal routing in practice, and
avoiding discarding older knowledge when new datapoints come in is
a potential solution to this.
Here, we implement one such design, using the decaying buckets
added in the previous commit to calculate a probability of payment
success based on a weighted average of recent liquidity estimates
for a channel.
For each min/max liquidity bucket pair (where the min liquidity is
less than the max liquidity), we can calculate the probability that
a payment succeeds using our traditional `amount / capacity`
formula. From there, we weigh the probability by the number of
points in each bucket pair, calculating a total probability for the
payment, and assigning a penalty using the same log-probability
calculation used for the non-historical penalties.
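A hedged sketch of that weighted-average calculation (the bucket indexing and per-pair probability here are simplifications of the real scorer, using bucket edges rather than anything finer):

```rust
/// Buckets split the channel's liquidity range into eighths;
/// `min_history` counts from the bottom of the range and `max_history`
/// from the top, each holding decaying point counts per bucket.
fn historical_success_probability(
    min_history: &[u16; 8],
    max_history: &[u16; 8],
    amount_msat: u64,
    capacity_msat: u64,
) -> Option<f64> {
    let mut weighted_prob = 0.0;
    let mut total_points = 0.0;
    for (min_idx, &min_pts) in min_history.iter().enumerate() {
        if min_pts == 0 { continue; }
        for (max_idx, &max_pts) in max_history.iter().enumerate() {
            if max_pts == 0 { continue; }
            // Pairs are only valid while the min bucket lies below the max.
            if min_idx + max_idx >= 8 { break; }
            // Liquidity bounds implied by this bucket pair, in msat.
            let min_liq = capacity_msat * min_idx as u64 / 8;
            let max_liq = capacity_msat * (8 - max_idx) as u64 / 8;
            // Traditional success probability for a uniform liquidity
            // estimate between the bounds, clamped to [0, 1].
            let prob = if amount_msat >= max_liq { 0.0 }
                else if amount_msat < min_liq { 1.0 }
                else { (max_liq - amount_msat) as f64 / (max_liq - min_liq) as f64 };
            // Weigh by the number of points in this bucket pair.
            let weight = min_pts as f64 * max_pts as f64;
            weighted_prob += prob * weight;
            total_points += weight;
        }
    }
    if total_points == 0.0 { None } else { Some(weighted_prob / total_points) }
}
```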
This introduces two new fields to the `ChannelLiquidity` struct:
`min_liquidity_offset_history` and `max_liquidity_offset_history`,
each an array of 8 `u16`s. Each entry represents the proportion of
time that we spent with the min or max liquidity offset in the
given 1/8th of the channel's liquidity range. I.e., the first bucket
in `min_liquidity_offset_history` represents the proportion of time
we've thought the channel's minimum liquidity to be lower than 1/8th
of the channel's capacity.
Each bucket is stored, effectively, as a fixed-point number with
5 bits for the fractional part, which is incremented by one (i.e., 32)
each time we update our liquidity estimates and decide our
estimates are in that bucket. We then decay each bucket by
2047/2048.
Thus, memory of a payment sticks around for more than 8,000
data points, though the majority of that memory decays after 1,387
data points.
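A hedged sketch of the bucket bookkeeping described above, with the update and decay folded into one illustrative function:

```rust
/// Each of the 8 buckets is a `u16` fixed-point value with 5 fractional
/// bits, so "one point" is 32. On each update we decay every bucket by
/// 2047/2048 and add one point to the bucket the current estimate is in.
fn update_history_buckets(buckets: &mut [u16; 8], liquidity_offset_msat: u64, capacity_msat: u64) {
    for b in buckets.iter_mut() {
        *b = ((*b as u32) * 2047 / 2048) as u16;
    }
    // Which eighth of the channel's liquidity range the estimate falls in.
    let idx = (liquidity_offset_msat * 8 / capacity_msat.max(1)).min(7) as usize;
    buckets[idx] = buckets[idx].saturating_add(32);
}
```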
When we log liquidity updates, if we decline to update anything
because the new bounds are already within the old bounds, the
directionality of the log entries was reversed.
While we're at it, we also log the old values.
In the bindings, we don't directly export any `std` types. Instead,
we provide trivial wrappers around them which expose only a
bindings-compatible API (and whose code lives in-crate, where the
bindings generator can see it).
We never quite finished this for `MultiThreadedLockableScore` - due
to some limitations in the bindings generator and the way the
scores are used in `lightning-invoice`, the scoring API ended up
being further concretized in patches for the bindings. Luckily the
scoring interface has been rewritten somewhat, and it so happens
that we can now generate bindings with the upstream code.
The final piece of the puzzle is done here, where we add a struct
which wraps `MutexGuard` so that we can expose the lock for
`MultiThreadedLockableScore`.
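A hedged sketch of the wrapper idea (names are illustrative): an in-crate struct holds the `MutexGuard`, giving the bindings generator a concrete, exportable type rather than a raw `std` generic:

```rust
use std::sync::{Mutex, MutexGuard};

/// In-crate wrapper around the guard; derefs to the inner scorer in Rust.
pub struct LockedScore<'a, S>(MutexGuard<'a, S>);

pub struct MultiThreadedLockableScoreSketch<S> {
    score: Mutex<S>,
}

impl<S> MultiThreadedLockableScoreSketch<S> {
    /// Exposes the lock through the named wrapper type.
    pub fn lock(&self) -> LockedScore<'_, S> {
        LockedScore(self.score.lock().unwrap())
    }
}
```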
As we move towards specifying supported/required feature bits in the
module(s) where they are supported, the global `known` feature set
constructors no longer make sense.
In anticipation of removing the `known` constructor, this commit
removes all remaining references to it outside of features.rs.
As we move towards specifying supported/required feature bits in the
module(s) where they are supported, the global `known` feature set
constructors no longer make sense.
Here we stop relying on the `known` method in the `routing` module,
which was only used in tests.
As we remove the concept of a global "known/supported" feature set
in LDK, we should also remove the concept of a global "required"
feature set. This does so by moving the checks for specific
required features into handlers.
Specifically, it allows the handler's `peer_connected` method to
return an `Err` if the peer should be disconnected. Only one such
required feature bit is currently set - `static_remote_key`, which
is required in `ChannelManager`.
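A hedged sketch of such a check, with the message type and feature-bit representation simplified to stand-ins (`1 << 12` is only illustrative here):

```rust
struct InitMsg { features: u64 } // stand-in for the wire `Init` message

/// Illustrative bit position for the required feature.
const STATIC_REMOTE_KEY: u64 = 1 << 12;

/// Returning `Err(())` signals the peer manager to disconnect the peer.
fn peer_connected(init_msg: &InitMsg) -> Result<(), ()> {
    if init_msg.features & STATIC_REMOTE_KEY == 0 {
        // Peer lacks a feature this handler requires.
        return Err(());
    }
    Ok(())
}
```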
The C bindings generator isn't capable of figuring out if a blanket
impl applies in a given context, and instead opts to always write
out any relevant impls for a trait. Thus, blanket impls should be
disabled when building with `#[cfg(c_bindings)]`.
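For example, a blanket impl of roughly this shape would be compiled out for bindings builds:

```rust
trait Score { /* ... */ }

// The bindings generator can't tell where this impl applies, so it is
// gated off when building for the C bindings.
#[cfg(not(c_bindings))]
impl<'a, T: Score> Score for &'a mut T { /* forward each method */ }
```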
When we broadcast a node announcement, the features we support are really a
combination of all the various features our different handlers support. This
commit captures this concept by OR'ing our NodeFeatures across both our channel
and routing message handlers.
When we go to send an Init message to new peers, the features we
support are really a combination of all the various features our
different handlers support. This commit captures this concept by
OR'ing our InitFeatures across both our Channel and Routing
handlers.
Note that this also disables setting the `initial_routing_sync`
flag in init messages, as was intended in
e742894492, per the comment added on
`clear_initial_routing_sync`, though this should not be a behavior
change in practice as nodes which support gossip queries ignore the
initial routing sync flag.
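A minimal sketch of the feature-combining described in the two commits above, with the feature types reduced to plain bitflags for illustration:

```rust
// Stand-in bitflag type; the real code ORs `NodeFeatures`/`InitFeatures`.
#[derive(Clone, Copy)]
struct Features(u64);

impl core::ops::BitOr for Features {
    type Output = Features;
    fn bitor(self, rhs: Features) -> Features { Features(self.0 | rhs.0) }
}

/// The features we advertise are the union of what each handler provides.
fn provided_features(chan_handler: Features, route_handler: Features) -> Features {
    chan_handler | route_handler
}
```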
The `rejected_by_dest` field of the `PaymentPathFailed` event has
always been a bit of a misnomer, as it's really more about retry
than about where a payment failed. Now is as good a time as any to
rename it.
Added two methods, `process_path_inflight_htlcs` and
`remove_path_inflight_htlcs`, that update the `payment_cache` map with
path information for payments that may have failed, succeeded, or been
given up on.
Introduced `AccountForInflightHtlcs`, which will wrap our user-provided
scorer. We move the `S: Score` type parameterization from the `Router`
to `find_route`, so we can use our newly introduced
`AccountForInflightHtlcs`.
`AccountForInflightHtlcs` keeps a map of inflight HTLCs keyed by short
channel id and direction, and gives us the value that is being used up.
This map will in turn be populated prior to calling `find_route`, where
we’ll use `create_inflight_map`, to generate a current map of all
inflight HTLCs based on what was stored in `payment_cache`.
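A hedged sketch of the wrapper's accounting (the `Score` trait shape and the choice to reduce effective capacity are simplifications for illustration):

```rust
use std::collections::HashMap;

trait Score {
    fn channel_penalty_msat(&self, scid: u64, amount_msat: u64, capacity_msat: u64) -> u64;
}

struct AccountForInflightHtlcs<'a, S: Score> {
    scorer: &'a S,
    /// (short_channel_id, direction) -> msat already in flight.
    inflight_htlcs: HashMap<(u64, bool), u64>,
}

impl<'a, S: Score> AccountForInflightHtlcs<'a, S> {
    fn channel_penalty_msat(
        &self, scid: u64, direction: bool, amount_msat: u64, capacity_msat: u64,
    ) -> u64 {
        let used = self.inflight_htlcs.get(&(scid, direction)).copied().unwrap_or(0);
        // Score as if the channel's usable capacity were already reduced
        // by what we have in flight over it.
        self.scorer.channel_penalty_msat(scid, amount_msat, capacity_msat.saturating_sub(used))
    }
}
```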
If we receive a ChannelAnnouncement message but we already have the
channel, there's no reason to do a chain lookup. Instead of
immediately calling the user-provided `chain::Access` when handling
a ChannelAnnouncement, we first check if we have the corresponding
channel in the graph.
Note that if we do have the corresponding channel but it was not
previously checked against the blockchain, we should still check
with the `chain::Access` and update if necessary.
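A hedged sketch of the short-circuit, with the graph entry reduced to the one bit that matters here:

```rust
use std::collections::HashMap;

/// Stand-in for a graph entry; we only care whether it was ever
/// verified against the chain.
struct KnownChannel { verified_on_chain: bool }

/// Only consult the user-provided `chain::Access` when the announced
/// channel is new to us, or known but never checked on-chain.
fn needs_chain_lookup(channels: &HashMap<u64, KnownChannel>, scid: u64) -> bool {
    match channels.get(&scid) {
        Some(chan) if chan.verified_on_chain => false, // already checked: skip
        _ => true, // unknown, or known but unverified: do the lookup
    }
}
```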