As we prepare to expose an API to update a channel's ChannelConfig,
we'll also want to expose this struct to consumers so that they have
insight into the ChannelConfig currently applied to each channel.
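A minimal sketch of the shape this could take, assuming a `config` field on
`ChannelDetails` (stand-in definitions, not the exact LDK types):

```rust
// Hypothetical sketch: ChannelDetails carries the ChannelConfig currently
// applied to the channel. Field names here are assumptions.
#[derive(Clone, Copy, Debug)]
pub struct ChannelConfig {
    pub forwarding_fee_proportional_millionths: u32,
    pub forwarding_fee_base_msat: u32,
    pub cltv_expiry_delta: u16,
}

pub struct ChannelDetails {
    pub channel_id: [u8; 32],
    /// The config currently applied to this channel, if known.
    pub config: Option<ChannelConfig>,
    // ... other fields elided ...
}

fn print_config(details: &ChannelDetails) {
    if let Some(config) = details.config {
        println!(
            "channel {:02x?}: base fee {} msat, {} ppm, cltv delta {}",
            details.channel_id,
            config.forwarding_fee_base_msat,
            config.forwarding_fee_proportional_millionths,
            config.cltv_expiry_delta,
        );
    }
}
```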
Provide a wrapper struct for 32-byte node aliases, which implements
Display for printing. Support the UTF-8 character encoding, but replace
control characters and terminate at the first null character. Fall back
to ASCII if the byte sequence is an invalid encoding.
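A minimal sketch of such a wrapper, assuming unprintable bytes are swapped
for the Unicode replacement character (the exact substitution behavior is an
assumption):

```rust
use core::fmt;

/// Hedged sketch of a 32-byte alias wrapper; the real type may differ in
/// naming and in how it substitutes unprintable bytes.
pub struct NodeAlias(pub [u8; 32]);

impl fmt::Display for NodeAlias {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        // Only consider bytes up to the first null terminator.
        let end = self.0.iter().position(|b| *b == 0).unwrap_or(self.0.len());
        let bytes = &self.0[..end];
        match core::str::from_utf8(bytes) {
            // Valid UTF-8: print it, replacing control characters.
            Ok(alias) => {
                for c in alias.chars() {
                    let c = if c.is_control() { '\u{fffd}' } else { c };
                    write!(f, "{}", c)?;
                }
            }
            // Invalid UTF-8: fall back to printable ASCII only.
            Err(_) => {
                for b in bytes {
                    let c = if b.is_ascii() && !b.is_ascii_control() {
                        *b as char
                    } else {
                        '\u{fffd}'
                    };
                    write!(f, "{}", c)?;
                }
            }
        }
        Ok(())
    }
}
```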
Instead of implementing EventHandler for P2PGossipSync, implement it on
NetworkGraph. This allows RapidGossipSync to handle events, too, by
delegating to its NetworkGraph.
P2PGossipSync logs before delegating to NetworkGraph in its
EventHandler. In order to share this handling with RapidGossipSync,
NetworkGraph needs to take a logger so that it can implement
EventHandler instead.
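A simplified sketch of the resulting delegation, using stand-in trait and
type definitions rather than the exact LDK ones:

```rust
// Simplified sketch: NetworkGraph owns a logger and implements EventHandler,
// so both gossip sync objects can simply forward events to it.
pub enum Event { PaymentPathFailed /* ... */ }

pub trait Logger { fn log(&self, msg: &str); }

pub trait EventHandler { fn handle_event(&self, event: &Event); }

pub struct NetworkGraph<L: Logger> { logger: L /* plus channels, nodes, ... */ }

impl<L: Logger> EventHandler for NetworkGraph<L> {
    fn handle_event(&self, _event: &Event) {
        // Log, then apply any network update contained in the event.
        self.logger.log("Applying network update from event");
    }
}

pub struct P2PGossipSync<'a, L: Logger> { network_graph: &'a NetworkGraph<L> }
pub struct RapidGossipSync<'a, L: Logger> { network_graph: &'a NetworkGraph<L> }

impl<'a, L: Logger> EventHandler for P2PGossipSync<'a, L> {
    fn handle_event(&self, event: &Event) { self.network_graph.handle_event(event) }
}

impl<'a, L: Logger> EventHandler for RapidGossipSync<'a, L> {
    fn handle_event(&self, event: &Event) { self.network_graph.handle_event(event) }
}
```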
P2PGossipSync has a Secp256k1 context field, which it only uses to pass
to NetworkGraph methods. Move the field to NetworkGraph so other callers
don't need to pass in a Secp256k1 context.
NetGraphMsgHandler implements RoutingMessageHandler to handle gossip
messages defined in BOLT 7 and maintains a view of the network by
updating NetworkGraph. Rename it to P2PGossipSync, which better
describes its purpose, and to contrast with RapidGossipSync.
A NetworkUpdate indicating ChannelClosed actually corresponds to a
channel failure as described in BOLT 4:
0x2000 (NODE): node failure (otherwise channel)
Rename the enum variant to ChannelFailure and rename NetworkGraph
methods close_channel_from_update and fail_node to channel_failed and
node_failed, respectively.
Create a wrapper struct for rapid gossip sync that can be passed to
BackgroundProcessor's start method, allowing it to defer pruning the
network graph until rapid gossip sync completes.
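One possible shape for such a wrapper, with assumed names and an assumed
completion check (`is_initial_sync_complete` is hypothetical):

```rust
// Hypothetical sketch: a wrapper telling the background processor which kind
// of gossip sync is in use, so graph pruning can wait for a rapid sync.
pub enum GossipSync<P, R> {
    /// Full P2P gossip sync; the graph can be pruned on the usual timer.
    P2P(P),
    /// Rapid gossip sync; defer pruning until the initial sync completes.
    Rapid(R),
    /// No gossip sync at all.
    None,
}

pub trait RapidSyncStatus {
    fn is_initial_sync_complete(&self) -> bool;
}

impl<P, R: RapidSyncStatus> GossipSync<P, R> {
    /// Whether it is safe to prune the network graph right now.
    pub fn should_prune(&self) -> bool {
        match self {
            GossipSync::P2P(_) => true,
            GossipSync::Rapid(rapid) => rapid.is_initial_sync_complete(),
            GossipSync::None => true,
        }
    }
}
```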
This supports routing outbound over 0-conf channels by using the
outbound SCID alias, which we assign to all channels, to refer to the
selected channel when routing.
Implements `build_route_from_hops`, which provides a simple way to build
a route from us (payer) to the target node (payee) via the given hops
(which should exclude the payer, but include the payee). This may be
useful, e.g., for probing the chosen path.
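A rough sketch of the hops convention via a hypothetical helper; the real
`build_route_from_hops` also takes routing parameters, the network graph, a
logger, and a random seed, with a signature not shown here:

```rust
use bitcoin::secp256k1::PublicKey;

// Hypothetical helper illustrating which nodes belong in the hops argument.
fn hops_for_probe(intermediate_hops: &[PublicKey], payee: PublicKey) -> Vec<PublicKey> {
    // The hops passed to `build_route_from_hops` exclude us (the payer) but
    // include the payee as the final entry.
    let mut hops = intermediate_hops.to_vec();
    hops.push(payee);
    hops
}
```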
For direct channels, the channel liquidity is known with certainty. Use
this knowledge in ProbabilisticScorer by penalizing with either the
per-hop penalty or u64::max_value, depending on whether the amount fits
within the known liquidity.
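Roughly, the scoring logic for a direct channel reduces to the following (a
sketch of the idea, not the actual ProbabilisticScorer code):

```rust
// For a channel directly connected to us, the available liquidity is known
// exactly, so scoring can be all-or-nothing.
fn direct_channel_penalty_msat(
    amount_msat: u64, available_liquidity_msat: u64, base_penalty_msat: u64,
) -> u64 {
    if amount_msat <= available_liquidity_msat {
        // The payment certainly fits: charge only the per-hop base penalty.
        base_penalty_msat
    } else {
        // The payment certainly does not fit: effectively forbid the channel.
        u64::max_value()
    }
}
```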
Scorers could benefit from having the channel's EffectiveCapacity rather
than a u64 msat value. For instance, ProbabilisticScorer can give a more
accurate penalty when given the ExactLiquidity variant. Pass a struct
wrapping the effective capacity, the proposed amount, and any in-flight
HTLC value.
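Approximately, the data handed to the scorer looks like this (variant and
field names are assumptions and may differ from the shipped API):

```rust
// Sketch of the effective capacity and the per-channel usage struct.
pub enum EffectiveCapacity {
    /// Liquidity known exactly, e.g. for a channel directly connected to us.
    ExactLiquidity { liquidity_msat: u64 },
    /// Only the channel's advertised maximum HTLC value is known.
    MaximumHTLC { amount_msat: u64 },
    /// The channel's total on-chain capacity is known.
    Total { capacity_msat: u64 },
    /// Effectively unlimited for scoring purposes.
    Infinite,
    /// Nothing is known about the channel.
    Unknown,
}

pub struct ChannelUsage {
    /// The amount being sent over the channel, in millisatoshis.
    pub amount_msat: u64,
    /// Amount already committed to in-flight HTLCs over this channel.
    pub inflight_htlc_msat: u64,
    /// The channel's effective capacity, as defined above.
    pub effective_capacity: EffectiveCapacity,
}
```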
For route hints, the aggregate next hops path penalty and CLTV delta
should be computed after considering each hop rather than before.
Otherwise, these aggregate values will include values from the current
hop, too.
When scoring route hints, the amount passed to the scorer should include
any fees needed for subsequent hops. This worked correctly for single-
hop hints, since there are no further hops, but not for hops within
multi-hop hints (except the final hop).
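The idea of the fix, as a hypothetical helper:

```rust
// The amount scored at a route-hint hop must include the fees charged by all
// hops that come after it, not just the final payment amount.
fn amount_to_score_msat(final_value_msat: u64, fees_of_subsequent_hops_msat: &[u64]) -> u64 {
    final_value_msat + fees_of_subsequent_hops_msat.iter().sum::<u64>()
}
```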
Using EffectiveCapacity in scoring gives more accurate success
probabilities when the maximum HTLC value is less than the channel
capacity. Change EffectiveCapacity to prefer the channel's capacity
over its maximum HTLC limit, but still use the latter for route finding.
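A hedged sketch of that preference, as a hypothetical helper (route finding
continues to respect the HTLC maximum separately):

```rust
// Report the full channel capacity for scoring when it is known, otherwise
// fall back to the announced htlc_maximum_msat.
fn scoring_capacity_msat(capacity_sats: Option<u64>, htlc_maximum_msat: Option<u64>) -> Option<u64> {
    capacity_sats.map(|sats| sats * 1_000).or(htlc_maximum_msat)
}
```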
`ChannelDetails::outbound_capacity_msat` describes the total amount
available for sending across several HTLCs, basically just our
balance minus the reserve value maintained by our counterparty.
However, when routing we use it to guess the maximum amount we can
send in a single additional HTLC, which it is not.
There are numerous reasons why our balance may not match the amount
we can send in a single HTLC, whether due to the in-flight HTLC limit,
the channel's HTLC maximum, or our feerate buffer.
This commit splits the `outbound_capacity_msat` field into two -
`outbound_capacity_msat` and `outbound_htlc_limit_msat`, setting us
up for correctly handling our next-HTLC-limit in the future.
This also addresses the first of the reasons why the values may
not match - the max-in-flight limit. The inaccuracy is ultimately
tracked as #1126.
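A hedged sketch of the resulting fields, with doc comments paraphrased from
the text above (shipped field names may differ slightly):

```rust
// Sketch only; other ChannelDetails fields are elided.
pub struct ChannelDetails {
    /// The total amount we can send across all HTLCs on this channel:
    /// roughly our balance minus the reserve our counterparty requires.
    pub outbound_capacity_msat: u64,
    /// An upper bound on the amount we can send in a single additional HTLC,
    /// currently accounting for the counterparty's max-in-flight limit.
    pub outbound_htlc_limit_msat: u64,
    // ... other fields elided ...
}
```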