Now that NetworkGraph uses interior mutability, the RwLock used around
it in NetGraphMsgHandler is no longer needed. This allows for shared
ownership without a lock.
In preparation for giving NetworkGraph shared ownership, wrap individual
fields in RwLock. This allows removing the outer RwLock used in
NetGraphMsgHandler.
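A minimal sketch of the shape this enables, assuming illustrative field and type names (the real `NetworkGraph` fields differ):

```rust
use std::collections::BTreeMap;
use std::sync::{Arc, RwLock};

// Illustrative stand-ins for the real channel/node info types.
struct ChannelInfo;
struct NodeInfo;

pub struct NetworkGraph {
    // Interior mutability: each field carries its own lock...
    channels: RwLock<BTreeMap<u64, ChannelInfo>>,
    nodes: RwLock<BTreeMap<Vec<u8>, NodeInfo>>,
}

impl NetworkGraph {
    // ...so updates only need `&self`, not `&mut self`.
    pub fn update_channel(&self, short_channel_id: u64, info: ChannelInfo) {
        self.channels.write().unwrap().insert(short_channel_id, info);
    }
}

// Shared ownership without wrapping the whole graph in an outer RwLock.
fn share(graph: NetworkGraph) -> (Arc<NetworkGraph>, Arc<NetworkGraph>) {
    let graph = Arc::new(graph);
    (Arc::clone(&graph), graph)
}
```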
BOLT 11 details how route hints from the payee are picked up via the
lightning invoice. This PR adds support for using multiple multi-hop
route hints from the payee, as picked up from the invoice.
The route hints are processed in the following manner (a simplified
sketch follows the list):
- `get_route()` receives the hints in `last_hops`.
- Every `RouteHintHop` in a `RouteHint` is processed based on the
feasibility of its channel capacity and fees.
- If a `RouteHintHop` cannot be used, the preceding `RouteHintHop`s
are not processed.
- A direct route from `first_hops_targets` to the first
`RouteHintHop` is only checked if the respective `RouteHint` was
processed from the payee's end all the way to its first
`RouteHintHop`.
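A simplified sketch of the back-to-front processing described above (the types and the helper are illustrative, not the actual `get_route` code):

```rust
// Simplified stand-ins for the types named above.
struct RouteHintHop {
    fee_base_msat: u64,
    fee_proportional_millionths: u64,
    htlc_minimum_msat: Option<u64>,
}
struct RouteHint(Vec<RouteHintHop>);

// Walk a hint from the payee's end toward its first hop, stopping at the
// first hop that is not feasible for the amount it would have to forward.
// Returns how many hops (counted from the payee side) are usable.
fn usable_hops_from_payee(hint: &RouteHint, mut amount_msat: u64) -> usize {
    let mut usable = 0;
    for hop in hint.0.iter().rev() {
        if hop.htlc_minimum_msat.map_or(false, |min| amount_msat < min) {
            break; // This hop fails, so preceding hops are not processed.
        }
        // Add the fee this hop charges for forwarding `amount_msat`.
        amount_msat += hop.fee_base_msat
            + amount_msat * hop.fee_proportional_millionths / 1_000_000;
        usable += 1;
    }
    usable
}
```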
`partial_route_hint_test`, `ignores_empty_last_hops_test`,
`multi_hint_last_hops_test` and `last_hops_with_public_channel_test`
test usage of partial route hints for building an optimal route,
processing of empty route hint hops, complete usage of private route
hints, and the presence of public channels in route hints, respectively.
Resolves: #945
After the merge of #984, Jeff pointed out that `ChannelDetails` has
become a bit of a "bag of variables", and that a few of the variable
names in #984 were more confusing than necessary in context.
This addresses several issues by:
* Splitting counterparty parameters into a separate
`ChannelCounterpartyParameters` struct,
* using the name `unspendable_punishment_reserve` for both outbound
and inbound channel reserves, differentiating them by whether or not
they appear in the counterparty parameters struct,
* Using the name `force_close_spend_delay` instead of
`spend_csv_on_our_commitment_funds` to better communicate what
is occurring.
This adds four new fields in `ChannelDetails` (a short sketch follows
the list):
1. holder_selected_ and counterparty_selected_channel_reserve_satoshis
are useful to determine what amount of the channel is
unavailable for payments.
2. confirmations_required is useful when awaiting funding
confirmation to determine how long you will need to wait.
3. to_self_delay is useful to determine how long it will take to
receive funds after a force-close.
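A hedged sketch of the resulting shape and how the new fields might be consumed (field set abbreviated, exact types illustrative; the reserve and delay fields are shown under the renamed identifiers described above):

```rust
/// Counterparty-specific parameters, split out of `ChannelDetails`.
pub struct ChannelCounterpartyParameters {
    /// Reserve the counterparty requires us to keep unspendable.
    pub unspendable_punishment_reserve: u64,
    // ... other counterparty fields elided ...
}

pub struct ChannelDetails {
    pub counterparty: ChannelCounterpartyParameters,
    /// Reserve we require the counterparty to keep unspendable.
    pub unspendable_punishment_reserve: Option<u64>,
    /// Confirmations still needed before the channel is usable.
    pub confirmations_required: Option<u32>,
    /// Blocks our funds are locked for after a force-close.
    pub force_close_spend_delay: Option<u16>,
    // ... remaining fields elided ...
}

// Example: report what part of the channel is unavailable for payments and
// how long a force-close would lock our funds.
fn summarize(details: &ChannelDetails) {
    let reserved = details.unspendable_punishment_reserve.unwrap_or(0)
        + details.counterparty.unspendable_punishment_reserve;
    println!("{} sats reserved and unavailable for payments", reserved);
    if let Some(confs) = details.confirmations_required {
        println!("still awaiting {} funding confirmation(s)", confs);
    }
    if let Some(delay) = details.force_close_spend_delay {
        println!("force-close funds locked for {} blocks", delay);
    }
}
```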
Fixes #983.
Previous to this PR, TLV serialization involved iterating from 0 to the highest
given TLV type. This worked until we decided to implement keysend, which has a
TLV type of ~5.48 billion.
So instead, we now specify the type of whatever is being (de)serialized:
an `Option`, a `Vec` type, or a non-`Option` (specified in the
serialization macros as "required").
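A hedged illustration of the idea (not the actual LDK macros or wire encoding): records are written keyed by their explicit type, so a huge type number costs nothing extra:

```rust
use std::io::{self, Write};

// Write each (type, value) record directly, in ascending type order, instead
// of iterating every possible type from 0 up to the highest one present.
// (Simplified fixed-width encoding; the real format uses BigSize varints.)
fn write_tlv_stream<W: Write>(w: &mut W, mut records: Vec<(u64, Vec<u8>)>) -> io::Result<()> {
    records.sort_by_key(|(t, _)| *t);
    for (typ, value) in records {
        w.write_all(&typ.to_be_bytes())?;
        w.write_all(&(value.len() as u64).to_be_bytes())?;
        w.write_all(&value)?;
    }
    Ok(())
}

// A huge type like keysend's (~5.48 billion) is just one record, not billions
// of loop iterations.
fn write_keysend_preimage<W: Write>(w: &mut W, preimage: [u8; 32]) -> io::Result<()> {
    write_tlv_stream(w, vec![(5_482_373_484, preimage.to_vec())])
}
```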
Lightning invoices allow for zero or more multi-hop route hints. Update
get_route's interface to accept such hints, although only the last hop
from each is used for the time being.
Moves RouteHint from lightning-invoice crate to lightning crate. Adds a
PrivateRoute wrapper around RouteHint for use in lightning-invoice.
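A sketch of the resulting types, with illustrative simplifications (the real `RouteHintHop` has more fields and `get_route` takes many more parameters):

```rust
// Illustrative stand-in for the per-hop data.
pub struct RouteHintHop {
    pub short_channel_id: u64,
}

// In the `lightning` crate: a multi-hop route hint is an ordered list of hops.
pub struct RouteHint(pub Vec<RouteHintHop>);

// In `lightning-invoice`: a thin wrapper used when building/parsing invoices.
pub struct PrivateRoute(pub RouteHint);

// `get_route` accepts zero or more multi-hop hints; for now only the last hop
// of each hint is actually used when building the route.
pub fn last_hops_in_use(last_hops: &[&RouteHint]) -> Vec<u64> {
    last_hops
        .iter()
        .filter_map(|hint| hint.0.last().map(|hop| hop.short_channel_id))
        .collect()
}
```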
NetworkGraph is one of the largest structures we generally
deserialize, so it makes for a good benchmark, even if it isn't the
most complicated one.
As of this commit, on an Intel 2687W v3, these benchmarks take:
test routing::network_graph::benches::read_network_graph ... bench: 2,101,420,078 ns/iter (+/- 6,649,020)
test routing::network_graph::benches::write_network_graph ... bench: 344,696,835 ns/iter (+/- 229,061)
We currently copy the features objects in each channel as we walk
the graph during route calculation. This implies a significant
amount of malloc traffic as the feature flags objects are stored
on the heap.
Instead, because the features being referenced live in the network
graph, which we hold a reference to, we can simply store references
to them.
This nontrivially improves our get_route benchmark by around 5%.
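Conceptually (a hedged sketch with stand-in types, not the router's actual structs):

```rust
// Before: each candidate hop cloned the heap-allocated feature flags.
struct HopCandidateOwned {
    features: Vec<u8>, // stand-in for a cloned ChannelFeatures
}

// After: candidates borrow the flags from the NetworkGraph, which outlives
// the route calculation, so no per-hop allocation or copy is needed.
struct HopCandidate<'graph> {
    features: &'graph [u8], // stand-in for &'graph ChannelFeatures
}

fn candidate<'g>(graph_features: &'g [u8]) -> HopCandidate<'g> {
    HopCandidate { features: graph_features }
}
```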
While walking the graph doing Dijkstra's, we may decrease the
amount being sent along one path, and not others, based on the
htlc_minimum_msat value. This may result in a lower relative fee
on that path in comparison to others. In the extreme, this may
result in finding a second path to a node with a lower fee than the
first path found (normally impossible in a Dijkstra's
implementation, as we walk next hops in fee-order).
In such a case, we end up with parts of our state invalid as
further hops beyond that node have been filled in using the
original total fee information.
Instead, we simply track which nodes have been processed and ignore
any further updates to them (as it implies we've reduced the amount
we're sending to find a lower absolute fee, but a higher relative
fee, which isn't a better route).
We check that we are in exactly this case in test builds with new
in-line assertions. Note that these assertions successfully detect
the bug in the previous commit.
Sadly this requires an extra HashMap lookup every time we pop an
element off of our heap, though we can skip a number of heap pushes
during the channel walking.
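A minimal generic sketch of the idea (Dijkstra over an arbitrary adjacency map, not the actual router code): once a node has been popped and settled, any later heap entries for it are skipped.

```rust
use std::cmp::Reverse;
use std::collections::{BinaryHeap, HashMap, HashSet};

// Settle each node at most once: later (worse) heap entries for a node that
// was already processed are ignored, keeping downstream state consistent.
fn dijkstra(adj: &HashMap<u32, Vec<(u32, u64)>>, source: u32) -> HashMap<u32, u64> {
    let mut dist: HashMap<u32, u64> = HashMap::new();
    let mut processed: HashSet<u32> = HashSet::new();
    let mut heap = BinaryHeap::new();
    heap.push(Reverse((0u64, source)));
    while let Some(Reverse((cost, node))) = heap.pop() {
        // The extra lookup mentioned above: skip already-settled nodes.
        if !processed.insert(node) { continue; }
        dist.insert(node, cost);
        for &(next, weight) in adj.get(&node).into_iter().flatten() {
            // Skipping settled neighbors saves some heap pushes.
            if !processed.contains(&next) {
                heap.push(Reverse((cost + weight, next)));
            }
        }
    }
    dist
}
```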
This is the same code as was recently failing in our benchmarks,
adapted to use a random starting seed instead of a fixed one and
a smaller iteration count to reduce runtime.
When walking the network graph to calculate a route, we always
calculate the minimum fee which is required to make one further
hop. This avoids some extra hop processing at the end of each path
selection loop (saving around 10% runtime in our benchmarks).
However, if we use the real value which we expect
to send over a channel in that calculation, we may find an
alternate path to the same node which is more expensive but
capacity-constrained, resulting in us considering it cheaper as the
relative fee paid will be lower.
Instead, we can use the `minimal_value_contribution_msat`, which is
a constant through an entire path finding iteration, as the amount,
preventing any basis change in the relative fee paid.
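In other words (a hedged sketch with illustrative types): the fee used for comparing candidate hops is computed over a constant basis rather than the path-specific amount.

```rust
struct Hop { fee_base_msat: u64, fee_proportional_millionths: u64 }

// Fee this hop charges to forward `amount_msat`.
fn fee_for(hop: &Hop, amount_msat: u64) -> u64 {
    hop.fee_base_msat + amount_msat * hop.fee_proportional_millionths / 1_000_000
}

// Comparing hops over the constant `minimal_value_contribution_msat` keeps the
// fee basis identical across candidate paths, so a capacity-constrained
// alternative cannot look cheaper merely because it carries a smaller amount.
fn cheaper(a: &Hop, b: &Hop, minimal_value_contribution_msat: u64) -> bool {
    fee_for(a, minimal_value_contribution_msat) < fee_for(b, minimal_value_contribution_msat)
}
```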
Currently, the "best source" for a given node tracked during
Dijkstra's is updated with a different critera from the heap
sorting, resulting in loops in calculated paths.
This adds a test for the specific failure currently seen, utilizing
the new path-htlc-minimum tracking in the heap entries in place of
the per-hop htlc-minimum values which the MPP changeset partially
used.
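A hedged, generic sketch of the invariant: whatever score orders the heap entries must also decide whether a node's recorded best source is overwritten (types here are illustrative).

```rust
// The single ordering key shared by the heap and the best-source map.
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
struct Score(u64);

// Only overwrite the recorded best source when the new entry is strictly
// better by the same criterion the heap sorts on; otherwise a node can be
// re-expanded with a worse entry and form a loop.
fn maybe_update(best: &mut std::collections::HashMap<u32, (Score, u32)>,
                node: u32, via: u32, score: Score) -> bool {
    match best.get(&node) {
        Some((existing, _)) if *existing <= score => false,
        _ => { best.insert(node, (score, via)); true }
    }
}
```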
If we walk the network graph and find a route that meets our
payment amount and has a liquidity limit far in excess of our
payment amount, we may select the same path several times in the
main path-gathering loop.
Instead, if the path we selected was not limited by the available
liquidity, we set the middle hop in the path's liquidity available
to zero, disabling that channel in future path finding steps.
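A hedged sketch of the bookkeeping (simplified types; the real code tracks per-channel liquidity in a map):

```rust
struct PathHop { short_channel_id: u64, available_liquidity_msat: u64 }

// If the selected path was not liquidity-limited, zero out one middle hop's
// recorded liquidity so later iterations cannot pick the identical path again.
fn disable_if_unconstrained(path: &mut [PathHop], value_sent_msat: u64) {
    let limited = path.iter().any(|h| h.available_liquidity_msat <= value_sent_msat);
    if !limited && !path.is_empty() {
        // Knock out a middle hop so this exact path is not found again.
        let mid = path.len() / 2;
        path[mid].available_liquidity_msat = 0;
    }
}
```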
Previously, we'd happily send funds through a path where, while
generating the path, in some middle hop we reduce the value being
sent to meet an htlc_maximum, making a later hop invalid due to it
no longer meeting its htlc_minimum. Instead, we need to track the
path's htlc-minimum while we're transiting the graph.
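Roughly (a hedged sketch with illustrative types): as the walk prepends a hop to the path, the path-level htlc-minimum is carried forward, inflated by that hop's fees, and combined with the hop's own `htlc_minimum_msat`.

```rust
struct HopInfo {
    htlc_minimum_msat: u64,
    fee_base_msat: u64,
    fee_proportional_millionths: u64,
}

// Minimum value the path-so-far can carry once `hop` is prepended: the prior
// path minimum must still be met after this hop takes its fee, and this hop's
// own htlc_minimum_msat must be met too.
fn extend_path_htlc_minimum(prior_path_min_msat: u64, hop: &HopInfo) -> u64 {
    let prior_plus_fees = prior_path_min_msat
        + hop.fee_base_msat
        + prior_path_min_msat * hop.fee_proportional_millionths / 1_000_000;
    prior_plus_fees.max(hop.htlc_minimum_msat)
}
```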
The new MPP routing algorithm attempts to build paths with a higher
value than our payment to find paths which may allow us to pay a
fee to meet an htlc_minimum limit. This is great if we're
minimum-bounded; however, it results in us rejecting paths where we are
bounded by a maximum near the end of the path, with fees, and then
bounded by a minimum earlier in the path. Further, it means we will
not find the cheapest path where paths have a lower relative fee
but a higher absolute fee.
Instead, we calculate routes using the actual amount we wish to
send. To maintain the previous behavior of searching for cheaper
paths where we can "pay the difference" to meet an htlc_minimum, we
detect if we were minimum-bounded during graph walking and, if we
are, we walk the graph again with a higher value.
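A sketch of the resulting control flow, assuming a hypothetical `search_paths` helper that reports whether the walk was minimum-bounded:

```rust
struct Route; // stand-in for the real Route

// Stand-in for one graph walk: returns a route (if any) plus whether the walk
// hit an htlc_minimum limit while gathering paths.
fn search_paths(_value_msat: u64) -> (Option<Route>, bool) {
    (None, false)
}

fn find_route_with_retry(final_value_msat: u64) -> Option<Route> {
    // First pass: search with the actual amount we wish to send.
    let (route, hit_minimum_limit) = search_paths(final_value_msat);
    if route.is_some() || !hit_minimum_limit {
        return route;
    }
    // Second pass with a padded value so fees can be paid on top of the
    // amount to satisfy the minimum (the factor here is illustrative).
    let (route, _) = search_paths(final_value_msat.saturating_mul(2));
    route
}
```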