Useful since we're working on getting rid of KeysInterface::get_node_secret to
complete support for remote signing.
Will be used in upcoming work to check whether an outbound onion
message blinded path has our node id as the introduction node id.
In 56b07e52aa we made
`MultiThreadedLockableScore` fully bindings-compatible. However, that
commit did not add a `WriteableScore` implementation for it. This was
an oversight, as it is a `WriteableScore` in Rust and needs to be one
for use in other parts of the API.
Here we add the required impl in a way the bindings generator can
handle, along with conversion utilities.
This uses the work done in the preceding commits to implement
encoding a user's custom TLV in outbound onion messages and decoding
custom TLVs in inbound onion messages, which are then provided to the
new CustomOnionMessageHandler.
As we move towards async monitor updates being a supported use case,
we shouldn't call an async monitor update "failed", but rather
"in progress". This simply updates the internal channel.rs enum name
to reflect the new thinking.
If the initial ChannelMonitor persistence is done asynchronously
but does not complete before the node restarts (with a
ChannelManager persistence), we'll start back up with a channel
present but no corresponding ChannelMonitor.
Because the Channel is pending-monitor-update and has not yet
broadcast its initial funding transaction or sent channel_ready, this
is neither a violation of our API contract nor a safety violation.
However, the previous code would refuse to deserialize the
ChannelManager, treating it as an API contract violation.
The solution is to test for this case explicitly and drop the
channel entirely as if the peer disconnected before we received
the funding_signed for outbound channels or before sending the
channel_ready for inbound channels.
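A rough sketch of that check (all types, fields, and the function name below are simplified stand-ins, not LDK's real deserialization code):

```rust
use std::collections::HashMap;

// Simplified stand-ins for the real Channel/ChannelMonitor types.
struct ChannelStub { funding_txo: u64, funding_broadcast: bool, sent_channel_ready: bool }

fn pair_channels_with_monitors(
    channels: Vec<ChannelStub>, monitors: &HashMap<u64, ()>,
) -> Result<Vec<ChannelStub>, &'static str> {
    let mut kept = Vec::new();
    for chan in channels {
        if monitors.contains_key(&chan.funding_txo) {
            // Normal path: the channel has its ChannelMonitor.
            kept.push(chan);
        } else if !chan.funding_broadcast && !chan.sent_channel_ready {
            // The initial monitor persist never completed before
            // shutdown: drop the channel, as if the peer had
            // disconnected before funding_signed/channel_ready.
            continue;
        } else {
            // A funded channel without a monitor is a real
            // inconsistency and must still fail deserialization.
            return Err("missing ChannelMonitor for funded channel");
        }
    }
    Ok(kept)
}
```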
As we integrate support for anchor outputs, we'll want to know
whether each input we're working with came from an anchor outputs
channel. Instead of threading an `opt_anchors` boolean through
several methods on `PackageSolvingData` and `PackageTemplate`, we
store a reference in each `PackageSolvingData` variant that features
a change in behavior between channels with and without anchor
outputs.
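As an illustration of the pattern (variant names and weight values below are made up, and a plain flag stands in for the stored reference):

```rust
// Each variant whose handling differs under anchor outputs carries the
// flag itself, so methods no longer need an `opt_anchors` parameter.
enum PackageSolvingDataSketch {
    HolderHtlcOutput { amount_msat: u64, opt_anchors: bool },
    RevokedOutput { amount_msat: u64 }, // behavior identical either way
}

impl PackageSolvingDataSketch {
    fn weight(&self) -> u64 {
        match self {
            // Read the flag from the variant rather than an argument.
            Self::HolderHtlcOutput { opt_anchors, .. } =>
                if *opt_anchors { 706 } else { 666 }, // placeholder values
            Self::RevokedOutput { .. } => 312, // placeholder value
        }
    }
}
```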
While applying gossip info from the RGS server, a number of harmless
errors may arise and should be ignored. E.g., the client should not
fail on duplicate gossip for the same channel or a duplicate update.
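Sketch of the tolerant handling, with a stub error enum standing in for LDK's real graph-update error types:

```rust
// Stub error type; the real code inspects LDK's graph-update errors.
enum GraphUpdateError { DuplicateGossip, Other(&'static str) }

fn apply_rgs_update(res: Result<(), GraphUpdateError>) -> Result<(), &'static str> {
    match res {
        Ok(()) => Ok(()),
        // Harmless: we already knew this channel or update; keep going.
        Err(GraphUpdateError::DuplicateGossip) => Ok(()),
        // Anything else is a genuine failure and should surface.
        Err(GraphUpdateError::Other(e)) => Err(e),
    }
}
```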
OnionMessageContents specifies the data TLV that the sender wants in the onion
message. This enum only has one variant for now, Custom. When offers are added,
additional variants for invoice, invoice_request, and invoice_error will be
added.
This commit does not actually implement sending the custom OM contents, just
the API change.
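The shape of the enum is roughly as follows (the trait bound is shown as a stub here, with the future variants as comments):

```rust
// Stub bound; see the handler commit below for the trait's requirements.
pub trait CustomOnionMessageContents { fn tlv_type(&self) -> u64; }

pub enum OnionMessageContents<T: CustomOnionMessageContents> {
    /// A custom message chosen by the user.
    Custom(T),
    // Later: Invoice, InvoiceRequest, InvoiceError (once offers land).
}
```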
OnionMessenger::new will now take a custom onion message handler trait
implementation. This handler will be used in upcoming commit(s) to handle
inbound custom onion messages.
The new trait also specifies what custom messages are supported via
its associated type, CustomMessage. This associated type must
implement a new CustomOnionMessageContents trait, which requires
custom messages to support being written, being read, and supplying
their TLV type.
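A user-side sketch of such a message type (trait stubs stand in for LDK's `Writeable` and the new trait; `TicketMessage` and TLV type 4242 are made up for illustration):

```rust
use std::io::{Error, Write};

// Stubs modeling the requirements: written, read, and a TLV type.
trait Writeable { fn write<W: Write>(&self, w: &mut W) -> Result<(), Error>; }
trait CustomOnionMessageContents: Writeable {
    /// The TLV type this message is sent under in the onion payload.
    fn tlv_type(&self) -> u64;
}

struct TicketMessage { ticket_id: u64 }

impl Writeable for TicketMessage {
    fn write<W: Write>(&self, w: &mut W) -> Result<(), Error> {
        w.write_all(&self.ticket_id.to_be_bytes())
    }
}

impl CustomOnionMessageContents for TicketMessage {
    fn tlv_type(&self) -> u64 { 4242 }
}
```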
Useful in decoding a custom message, so (a) the message type can be provided to
the handler and (b) None can be returned if the message type is unknown.
Used in upcoming commit(s) to support custom onion messages.
Even at relatively high payment volumes, decaying knowledge of each
individual channel every hour causes aggressive retrying of
channels as we quickly forget the state of a channel. Even with the
historical tracker, this isn't fully remedied, as we'll track the
history bounds with the decayed value.
Instead, we decay every six hours here, reducing how often we'll
retry a channel due to decay.
In addition to this, the decay likely needs to be substantially
more linear, as tracked in #1752.
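To illustrate the effect of the longer half-life, a toy decay calculation (assuming simple halving per whole elapsed half-life; the real scorer decays fractionally):

```rust
// Decay a liquidity offset by halving once per elapsed half-life.
fn decayed_offset(offset_msat: u64, elapsed_secs: u64, half_life_secs: u64) -> u64 {
    offset_msat >> (elapsed_secs / half_life_secs).min(63)
}

fn main() {
    let offset = 1_000_000u64; // msat
    // 1-hour half-life: after a day, the knowledge is entirely gone.
    assert_eq!(decayed_offset(offset, 86_400, 3_600), 0);
    // 6-hour half-life: a meaningful bound survives the same day.
    assert_eq!(decayed_offset(offset, 86_400, 21_600), 62_500);
}
```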
We had some user confusion on how the probabilistic scorer works,
especially in reference to the half-life parameter. This attempts
to clarify how the bounds work, and how they are decayed.
We never implemented the `NetworkGraph::node_failed` method. The
scorer now handles non-permanent failures by downgrading nodes, so we
don't want the network graph handling those as well.
The method is renamed to `node_failed_permanent` to explicitly indicate
that this is the only case it handles. We also add tracking in the form
of two maps as fields of `NetworkGraph`, namely, `removed_nodes` and
`removed_channels`. We track these removed graph entries to ensure we
don't just resync them right away from gossip soon after removing them.
We stop tracking these removed entries whenever
`remove_stale_channels_and_tracking()` is called and the entries have
been tracked for longer than `REMOVED_ENTRIES_TRACKING_AGE_LIMIT_SECS`,
which is currently set to one week.
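A simplified model of that tracking (ids and field types are simplified; the real `NetworkGraph` differs):

```rust
use std::collections::HashMap;

const REMOVED_ENTRIES_TRACKING_AGE_LIMIT_SECS: u64 = 60 * 60 * 24 * 7; // one week

struct TrackerSketch {
    // Removed entry id -> UNIX time at which we removed it.
    removed_nodes: HashMap<u64, u64>,
    removed_channels: HashMap<u64, u64>,
}

impl TrackerSketch {
    // Stand-in for the pruning done in remove_stale_channels_and_tracking():
    // once an entry has been tracked past the limit, forget it so it can
    // be resynced from fresh gossip.
    fn prune_tracking(&mut self, now: u64) {
        self.removed_nodes
            .retain(|_, removed_at| now < *removed_at + REMOVED_ENTRIES_TRACKING_AGE_LIMIT_SECS);
        self.removed_channels
            .retain(|_, removed_at| now < *removed_at + REMOVED_ENTRIES_TRACKING_AGE_LIMIT_SECS);
    }
}
```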
After a `Notifier` has been `notify`'d, attempts to `get_future`
should return a `Future` which is pre-completed; however, this was
not the case.
This commit simply fixes the behavior, adding a test to demonstrate
the issue.
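The fixed contract, as a minimal model (not LDK's actual `Notifier` internals; completion state stands in for a real future):

```rust
use std::sync::Mutex;

struct NotifierModel { pending_notify: Mutex<bool> }

impl NotifierModel {
    fn new() -> Self { Self { pending_notify: Mutex::new(false) } }
    fn notify(&self) { *self.pending_notify.lock().unwrap() = true; }
    // get_future() must hand out a pre-completed future (consuming the
    // pending notification) if notify() already fired.
    fn get_future_is_complete(&self) -> bool {
        std::mem::take(&mut *self.pending_notify.lock().unwrap())
    }
}

fn main() {
    let n = NotifierModel::new();
    n.notify();
    assert!(n.get_future_is_complete()); // previously this could be false
}
```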
To avoid scoring based on incredibly old historical liquidity data,
we add a new half-life here which is used to (very slowly) decay
historical liquidity tracking buckets.
This simplifies adding additional half-lives in
DirectedChannelLiquidity by storing a reference to the full parameter
set rather than only the single current half-life.
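Roughly (struct shapes are simplified sketches, not the real types):

```rust
struct ParamsSketch {
    liquidity_offset_half_life: u64,
    historical_no_updates_half_life: u64,
}

// Before: only the one half-life was threaded through.
struct DirectedLiquidityBefore { half_life: u64 }

// After: a reference to the full parameter set, so further half-lives
// (like the historical one) are available without extra plumbing.
struct DirectedLiquidityAfter<'a> { params: &'a ParamsSketch }

impl<'a> DirectedLiquidityAfter<'a> {
    fn offset_half_life(&self) -> u64 { self.params.liquidity_offset_half_life }
    fn historical_half_life(&self) -> u64 { self.params.historical_no_updates_half_life }
}
```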
Our current `ProbabilisticScorer` attempts to build a model of the
current liquidity across the payment channel network. This works
fine to ignore channels we *just* tried to pay through, but it
fails to remember patterns over longer time horizons.
Specifically, there are *many* channels within the network that are
very often either fully saturated in one direction, or are
regularly rebalanced and rarely saturated. While our model may
discover that, it effectively forgets the "normal" state of the
channel whenever it decays its offsets or there is a temporary change
in liquidity.
This causes substantially suboptimal routing in practice; one
potential solution is to avoid discarding older knowledge when new
data points come in.
Here, we implement one such design, using the decaying buckets
added in the previous commit to calculate a probability of payment
success based on a weighted average of recent liquidity estimates
for a channel.
For each min/max liquidity bucket pair (where the min liquidity is
less than the max liquidity), we can calculate the probability that
a payment succeeds using our traditional `amount / capacity`
formula. From there, we weigh the probability by the number of
points in each bucket pair, calculating a total probability for the
payment, and assigning a penalty using the same log-probability
calculation used for the non-historical penalties.
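In sketch form (bucket geometry and math are simplified to floats with a made-up helper name; the real code uses fixed-point arithmetic and differs in details):

```rust
fn historical_success_probability(
    min_buckets: &[u16; 8], max_buckets: &[u16; 8],
    amount_msat: u64, capacity_msat: u64,
) -> Option<f64> {
    let (mut weighted_prob, mut total_weight) = (0.0f64, 0.0f64);
    for (i, min_pts) in min_buckets.iter().enumerate() {
        // Max buckets are offsets from the top, so pair (i, j) is valid
        // only while the implied min liquidity stays below the max.
        for (j, max_pts) in max_buckets.iter().enumerate().take(8 - i) {
            let weight = *min_pts as f64 * *max_pts as f64;
            if weight == 0.0 { continue; }
            // Pair (i, j) claims liquidity within
            // [i/8, (8 - j)/8] of the channel's capacity.
            let min_liq = capacity_msat as f64 * i as f64 / 8.0;
            let max_liq = capacity_msat as f64 * (8 - j) as f64 / 8.0;
            let amt = amount_msat as f64;
            let prob = if amt <= min_liq { 1.0 }
                else if amt >= max_liq { 0.0 }
                else { (max_liq - amt) / (max_liq - min_liq) };
            weighted_prob += weight * prob;
            total_weight += weight;
        }
    }
    if total_weight == 0.0 { None } else { Some(weighted_prob / total_weight) }
}
// The penalty is then derived from -log10 of this probability, as for
// the non-historical liquidity penalty.
```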
This introduces two new fields to the `ChannelLiquidity` struct -
`min_liquidity_offset_history` and `max_liquidity_offset_history`,
both an array of 8 `u16`s. Each entry represents the proportion of
time that we spent with the min or max liquidity offset in the
given 1/8th of the channel's liquidity range, i.e. the first bucket
in `min_liquidity_offset_history` represents the proportion of time
we've thought the channel's minimum liquidity is lower than 1/8th of
the channel's capacity.
Each bucket is stored, effectively, as a fixed-point number with
5 bits for the fractional part, which is incremented by one (i.e. 32)
each time we update our liquidity estimates and decide our estimates
are in that bucket. We then decay each bucket by 2047/2048.
Thus, memory of a payment sticks around for more than 8,000
data points, though the majority of that memory decays after 1,387
data points.
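The update rule, in sketch form (u16 buckets with 5 fractional bits as described above; rounding details may differ from the real code):

```rust
fn note_datapoint(buckets: &mut [u16; 8], current_bucket: usize) {
    // Credit the bucket our estimate currently falls in with 1.0,
    // i.e. 32 in 5-fractional-bit fixed point (saturating, since
    // a frequently-hit bucket approaches the top of the u16 range)...
    buckets[current_bucket] = buckets[current_bucket].saturating_add(32);
    // ...then decay every bucket by 2047/2048.
    for b in buckets.iter_mut() {
        *b = ((*b as u32) * 2047 / 2048) as u16;
    }
}
// A single observation's contribution thus decays as
// 32 * (2047/2048)^n over the n data points that follow it.
```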
When we log liquidity updates, if we declined to update anything
because the new bounds were already within the old bounds, the
directionality of the log entries was reversed.
While we're at it, we also log the old values.