C-Lightning versions prior to 0.10 (incorrectly) enforce that the
reply_channel_range first_blocknum field is set to at least the
value they sent in their query_channel_range message. Sending a 0
results in them responding with an Error message, spuriously
closing open channels.
Further, C-Lightning versions prior to 0.10 require that the
reply_channel_range first_blocknum be either the block implied as
the last block of the previous reply_channel_range or one greater.
This is not only a creative interpretation of the spec; a perfectly
reasonable implementation may still receive an Error message when
its replies are split by an empty block.
This code is extracted and modified from a previous version of
the original query_channel_range PR in commit
44ba52ccf1. The original commit is by
`bmancini55 <bmancini@gmail.com>`.
Lightning invoices allow for zero or more multi-hop route hints. Update
get_route's interface to accept such hints, although only the last hop
from each is used for the time being.
Moves RouteHint from lightning-invoice crate to lightning crate. Adds a
PrivateRoute wrapper around RouteHint for use in lightning-invoice.
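As a rough sketch (fields heavily simplified; the real hop type also
carries fees, a CLTV delta, and HTLC limits), using only the last hop
of each multi-hop hint looks like:

    struct RouteHintHop { src_node_id: [u8; 33], short_channel_id: u64 }
    struct RouteHint(Vec<RouteHintHop>);

    // Only the final hop of each hint is considered for now.
    fn usable_last_hops(hints: &[RouteHint]) -> impl Iterator<Item = &RouteHintHop> + '_ {
        hints.iter().filter_map(|hint| hint.0.last())
    }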
NetworkGraph is one of the largest structures we generally
deserialize, so it makes for a good benchmark, even if it isn't the
most complicated one.
As of this commit, on an Intel 2687W v3, these benchmarks take:
    test routing::network_graph::benches::read_network_graph ... bench: 2,101,420,078 ns/iter (+/- 6,649,020)
    test routing::network_graph::benches::write_network_graph ... bench: 344,696,835 ns/iter (+/- 229,061)
We currently copy the features object of each channel as we walk
the graph during route calculation. This implies a significant
amount of malloc traffic, as the feature flags are stored on the
heap.
Instead, because the features being referenced live in the network
graph, which we hold a reference to, we can simply store references
to them.
This nontrivially improves our get_route benchmark by around 5%.
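A minimal sketch of the change (types and names ours, not the real
API):

    struct ChannelFeatures { flags: Vec<u8> } // heap-allocated flag bytes

    // Before: each hop considered cloned the features, one malloc each.
    struct HopBefore { features: ChannelFeatures }

    // After: the graph outlives the route calculation, so hops borrow.
    struct HopAfter<'a> { features: &'a ChannelFeatures }

    fn consider_channel<'a>(features_in_graph: &'a ChannelFeatures) -> HopAfter<'a> {
        HopAfter { features: features_in_graph }
    }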
While walking the graph doing Dijkstra's, we may decrease the
amount being sent along one path, and not others, based on the
htlc_minimum_msat value. This may result in a lower relative fee
on that path in comparison to others. In the extreme, this may
result in finding a second path to a node with a lower fee than the
first path found (normally impossible in a Dijkstra's
implementation, as we walk next hops in fee-order).
In such a case, we end up with parts of our state invalid as
further hops beyond that node have been filled in using the
original total fee information.
Instead, we simply track which nodes have been processed and ignore
any further updates to them (as an update implies we've reduced the
amount we're sending to find a lower absolute fee, but a higher
relative fee, which isn't a better route).
We check that we are in exactly this case in test builds with new
in-line assertions. Note that these assertions successfully detect
the bug in the previous commit.
Sadly this requires an extra HashMap lookup every time we pop an
element off of our heap, though we can skip a number of heap pushes
during the channel walking.
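In sketch form (types simplified, names ours), the per-pop check is
a standard visited-set guard:

    use std::cmp::Reverse;
    use std::collections::{BinaryHeap, HashSet};

    fn walk(mut heap: BinaryHeap<(Reverse<u64>, u32)>) {
        let mut processed: HashSet<u32> = HashSet::new();
        while let Some((Reverse(total_fee), node_id)) = heap.pop() {
            // The extra lookup mentioned above: one probe per heap pop.
            // A later entry for an already-processed node implies a higher
            // relative fee, so it is skipped rather than corrupting state.
            if !processed.insert(node_id) {
                continue;
            }
            let _ = total_fee; // ... fill in this hop, push its neighbors ...
        }
    }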
This is the same code as was recently failing in our benchmarks,
adapted to use a random starting seed instead of a fixed one and
a smaller iteration count to reduce runtime.
When walking the network graph to calculate a route, we always
calculate the minimum fee which is required to make one further
hop. This avoids some extra hop processing at the end of each path
selection loop (saving around 10% runtime in our benchmarks).
However, if we use the real value we expect to send over a channel
in that calculation, we may find an alternate path to the same node
which is more expensive but capacity-constrained, and consider it
cheaper because the smaller amount it carries makes the relative
fee paid lower.
Instead, we can use the `minimal_value_contribution_msat`, which is
constant throughout an entire path-finding iteration, as the amount,
preventing any change of basis in the relative fee paid.
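For illustration with the standard BOLT 7 fee formula (helper name
ours), the question is only which amount the proportional part is
computed over:

    // Fee a hop charges to forward `amount_msat` (BOLT 7).
    fn hop_fee_msat(amount_msat: u64, fee_base_msat: u64, fee_proportional_millionths: u64) -> u64 {
        fee_base_msat + amount_msat * fee_proportional_millionths / 1_000_000
    }

    // Compare hops by hop_fee_msat(minimal_value_contribution_msat, ..),
    // never by the (possibly capacity-reduced) value actually sent over
    // the channel, so the comparison basis cannot shift mid-search.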
Currently, the "best source" for a given node tracked during
Dijkstra's is updated with different criteria from the heap
sorting, resulting in loops in calculated paths.
This adds a test for the specific failure currently seen, utilizing
the new path-htlc-minimum tracking in the heap entries in place of
the per-hop htlc-minimum values which the MPP changeset partially
used.
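A sketch of the invariant being restored (types simplified, names
ours): the relaxation test and the heap key must agree on one score:

    use std::cmp::Reverse;
    use std::collections::{BinaryHeap, HashMap};

    fn relax(
        heap: &mut BinaryHeap<(Reverse<u64>, u32)>,
        best_source: &mut HashMap<u32, (u64, u32)>,
        total_fee_msat: u64, node_id: u32, source_id: u32,
    ) {
        // Update the recorded best source only when the heap-ordering key
        // improves, so pop order and stored state can never disagree.
        if best_source.get(&node_id).map_or(true, |&(fee, _)| total_fee_msat < fee) {
            best_source.insert(node_id, (total_fee_msat, source_id));
            heap.push((Reverse(total_fee_msat), node_id));
        }
    }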
If we walk the network graph and find a route that meets our
payment amount and has a liquidity limit far in excess of that
amount, we may select the same path several times in the
main path-gathering loop.
Instead, if the path we selected was not limited by the available
liquidity, we set the available liquidity of the path's middle hop
to zero, disabling that channel in future path finding steps.
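A rough sketch of that step (names ours), assuming available
liquidity is tracked per short channel id:

    use std::collections::HashMap;

    // After collecting a path that was not liquidity-limited, zero out a
    // middle channel so the next iteration of the path-gathering loop
    // cannot return the identical path.
    fn disable_middle_hop(available_liquidity_msat: &mut HashMap<u64, u64>, path_scids: &[u64]) {
        if let Some(mid_scid) = path_scids.get(path_scids.len() / 2) {
            available_liquidity_msat.insert(*mid_scid, 0);
        }
    }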
Previously, we'd happily send funds through a path where, while
generating the path, we reduce the value being sent at some middle
hop to meet an htlc_maximum, making a later hop invalid because it
no longer meets its htlc_minimum. Instead, we need to track the
path's htlc-minimum while we're walking the graph.
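In sketch form (names ours), walking dest->source, the minimum the
path so far imposes grows by each hop's fee:

    use std::cmp;

    // Minimum value that must cross the current channel for every hop
    // closer to the destination to still meet its htlc_minimum_msat.
    fn updated_path_htlc_minimum_msat(
        prev_path_min_msat: u64, // minimum required by the rest of the path
        hop_fee_msat: u64,       // fee this hop charges on what it forwards
        hop_htlc_minimum_msat: u64,
    ) -> u64 {
        cmp::max(prev_path_min_msat.saturating_add(hop_fee_msat), hop_htlc_minimum_msat)
    }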
The new MPP routing algorithm attempts to build paths with a higher
value than our payment to find paths which may allow us to pay a
fee to meet an htlc_minimum limit. This is great if we're
min-bounded; however, it results in us rejecting paths where we are
bounded by a maximum (after fees) near the end of the path and then
bounded by a minimum earlier in the path. Further, it means we will
not find the cheapest path where paths have a lower relative fee
but a higher absolute fee.
Instead, we calculate routes using the actual amount we wish to
send. To maintain the previous behavior of searching for cheaper
paths where we can "pay the difference" to meet an htlc_minimum, we
detect whether we were minimum-bounded during graph walking and, if
so, walk the graph again with a higher value.
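A sketch of the resulting two-pass flow (names, the stub, and the
padding factor are ours, purely illustrative):

    fn find_path(amount_msat: u64) -> Option<Vec<u64>> {
        let (path, hit_htlc_minimum) = walk_graph(amount_msat)?;
        if hit_htlc_minimum {
            // Minimum-bounded: retrying with a higher value may find a
            // path that is cheaper overall despite paying the difference.
            if let Some((padded_path, _)) = walk_graph(amount_msat + amount_msat / 20) {
                return Some(padded_path);
            }
        }
        Some(path)
    }

    // Stub for the real graph walk; the bool reports whether a candidate
    // hop was rejected for falling below its htlc_minimum_msat.
    fn walk_graph(_amount_msat: u64) -> Option<(Vec<u64>, bool)> { None }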
Useful for constructing route hints for private channels in invoices.
Co-authored-by: Valentine Wallace <vwallace@protonmail.com>
Co-authored-by: Antoine Riard <ariard@student.42.fr>
Modify NetGraphMsgHandler::handle_query_channel_range to always use
first_blocknum=0 in replies. This is spec compliant after changes that
make sequence completion explicit using sync_complete.
Modifies NetGraphMsgHandler::handle_query_channel_range to use a constant
max value in replies. Modifies tests to generate 8000 channels instead
of making this value configurable.
We currently only use it to override the graph-specific features
returned in the route, though we should also use it to enable or
disable MPP.
Note that tests which relied on MPP behavior have had all of their
get_route calls upgraded to provide the MPP flag.
Initial implementation of handling query_channel_range messages in
NetGraphMsgHandler. Enqueues a sequence of reply messages in the
pending message events buffer.
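A rough sketch of the enqueueing shape (the batch constant and the
struct are simplified stand-ins, not the real message type):

    const MAX_SCIDS_PER_REPLY: usize = 8000; // illustrative batch limit

    struct ReplyChannelRange {
        first_blocknum: u32,
        number_of_blocks: u32,
        short_channel_ids: Vec<u64>,
    }

    // Split the SCIDs matching the queried range into a reply sequence
    // destined for the pending message events buffer.
    fn build_replies(first_blocknum: u32, number_of_blocks: u32, scids: &[u64]) -> Vec<ReplyChannelRange> {
        scids.chunks(MAX_SCIDS_PER_REPLY).map(|chunk| ReplyChannelRange {
            first_blocknum,
            number_of_blocks,
            short_channel_ids: chunk.to_vec(),
        }).collect()
    }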
We could have constructed a slightly inconsistent path: since we
reduce the value being transferred along the way, we could have
violated htlc_minimum_msat on some channels we already passed
(walking in the dest->source direction). Here, we recompute the
fees, so that if that's the case, we make up the currently
underpaid htlc_minimum_msat with fees.
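As a sketch of that recomputation (names ours), walking dest->source
and rounding the amount crossing each channel up to its minimum, with
the shortfall booked as fee:

    struct Hop {
        fee_base_msat: u64,
        fee_proportional_millionths: u64,
        htlc_minimum_msat: u64,
    }

    // Total the source must send so that each hop (ordered dest->source)
    // sees at least its htlc_minimum_msat; shortfalls become extra fees.
    fn required_amount_msat(hops_dest_to_source: &[Hop], value_msat: u64) -> u64 {
        let mut amt = value_msat;
        for hop in hops_dest_to_source {
            let fee = hop.fee_base_msat + amt * hop.fee_proportional_millionths / 1_000_000;
            amt = (amt + fee).max(hop.htlc_minimum_msat);
        }
        amt
    }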