This change implements non-strict forwarding, allowing the node to
forward an HTLC along a channel other than the one specified
by short_channel_id in the onion payload, so long as the receiver has
the same node public key intended by short_channel_id
([BOLT 4](https://github.com/lightning/bolts/blob/57ce4b1e05/04-onion-routing.md#non-strict-forwarding)).
This can improve payment reliability when there are multiple channels
with the same peer, e.g. when outbound liquidity is replenished by
opening a new channel.
The forwarding strategy now chooses the channel with the lowest
outbound liquidity that can still forward the HTLC, maximizing the
probability of successfully forwarding subsequent HTLCs.
Fixes #1278.
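As a rough illustration of that selection rule (a minimal sketch with hypothetical types; the actual forwarding code differs in detail):

```rust
/// Hypothetical summary of a candidate channel to the next-hop peer.
struct CandidateChannel {
    scid: u64,
    outbound_capacity_msat: u64,
}

/// Pick the channel with the least outbound liquidity that can still carry
/// `amt_msat`, keeping better-funded channels free for later, larger HTLCs.
fn select_forwarding_channel(candidates: &[CandidateChannel], amt_msat: u64) -> Option<u64> {
    candidates
        .iter()
        .filter(|c| c.outbound_capacity_msat >= amt_msat)
        .min_by_key(|c| c.outbound_capacity_msat)
        .map(|c| c.scid)
}
```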
In the next commit, we'll use the new `node_counter`s to remove a
`HashMap` from the router, using a `Vec` to store all our per-node
information. In order to make finding entries in that `Vec` cheap,
here we store the source and destination `node_counter`s in
`ChannelInfo`, giving us the counters for both ends of a channel
without doing a second `HashMap` lookup.
These counters are simply unique numbers identifying each node.
They have no specific meaning, but start at 0 and count up, with
counters being reused after a node has been deleted.
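A minimal sketch of the idea (field and type names here are illustrative, not the exact LDK definitions): caching both counters on the channel lets per-node data live in a `Vec` indexed by counter, so both endpoints are a slice index away rather than a second `HashMap` lookup.

```rust
/// Illustrative per-channel record caching the counters of both endpoints.
struct ChannelInfoSketch {
    node_one_counter: u32,
    node_two_counter: u32,
    // ... other channel fields elided ...
}

/// Per-node routing state stored densely, indexed by node_counter.
#[derive(Default, Clone)]
struct PerNodeState {
    best_fee_msat: u64,
}

/// With the counters cached on the channel, look up both endpoints' entries
/// by index into a `Vec` sized to the current highest counter.
fn endpoints<'a>(
    chan: &ChannelInfoSketch,
    per_node: &'a [PerNodeState],
) -> (&'a PerNodeState, &'a PerNodeState) {
    (
        &per_node[chan.node_one_counter as usize],
        &per_node[chan.node_two_counter as usize],
    )
}
```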
Currently, every block connection triggers the persistence of all
ChannelMonitors with an updated best_block. This approach poses
challenges for large node operators managing thousands of channels.
Furthermore, it leads to a thundering herd problem
(https://en.wikipedia.org/wiki/Thundering_herd_problem), overwhelming
the storage with simultaneous requests.
To address this issue, we now persist ChannelMonitors at a
regular cadence, spreading their persistence across blocks to
mitigate spikes in write operations.
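One way to picture the spreading (a sketch under assumptions: the cadence value and the per-monitor offset derivation below are illustrative, not LDK's actual choices, and monitors with real channel updates would still be persisted as those occur):

```rust
/// Hypothetical cadence: refresh each monitor's persisted best_block once
/// every N blocks instead of on every block connection.
const PERSIST_CADENCE_BLOCKS: u32 = 144;

/// Decide whether a given monitor is due for its periodic persistence at
/// this height. Deriving a stable per-monitor offset spreads the writes
/// across the cadence window instead of all monitors writing on one block.
fn should_persist_on_block(monitor_seed: u32, block_height: u32) -> bool {
    let offset = monitor_seed % PERSIST_CADENCE_BLOCKS;
    block_height % PERSIST_CADENCE_BLOCKS == offset
}
```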
Outcome: After doing this, LDK's IO footprint should be reduced
by a factor of ~50. The time required to sync each block
will be significantly reduced, particularly for nodes with thousands
of channels, as write latency plays a significant role in this process.
As a result, the Node/ChainMonitor will be blocked for a shorter
duration, leading to further efficiency gains.
When creating blinded paths, introduction nodes are limited to peers
with at least three channels to prevent easily guessing the recipient.
Relax this check when the recipient is unannounced since they won't be
in the NetworkGraph.
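A sketch of the relaxed check (illustrative helper, not the exact LDK code):

```rust
/// Whether a peer may serve as the introduction node of a blinded path.
/// Announced recipients require a well-connected introduction node so the
/// recipient can't be easily guessed; unannounced recipients aren't in the
/// NetworkGraph, so the channel-count requirement is relaxed.
fn may_use_as_introduction(peer_channel_count: usize, recipient_is_announced: bool) -> bool {
    !recipient_is_announced || peer_channel_count >= 3
}
```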
DefaultRouter will ignore blinded paths where the sender is the
introduction node. Similar to message paths, advance such paths by one
so that payments may be sent to them.
Similar to blinded paths for onion messages, if given a blinded payment
path where we are the introduction node, the path must be advanced by
one in order to use it.
When using advance_path_by_one while we are the introduction node, any
error results in the first hop of the input blinded path being
removed. Instead, only remove the first hop on success. Otherwise, the
path would be invalid.
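A sketch of the corrected ordering (stand-in types; `advance_path_by_one` is the real helper, but its signature is not reproduced here): do the fallible re-blinding work on a copy, and only shorten the caller's path once that step has succeeded.

```rust
/// Illustrative stand-in for a blinded path: a list of blinded hops.
#[derive(Clone)]
struct BlindedPathSketch {
    hops: Vec<[u8; 33]>,
}

/// Stand-in for the cryptographic step of re-blinding toward the next hop,
/// which may fail.
fn advance_crypto(_path: &mut BlindedPathSketch) -> Result<(), ()> {
    Ok(())
}

/// Advance the path by one hop, leaving the caller's path untouched on
/// failure so it is never left in an invalid, half-advanced state.
fn advance_path_by_one_sketch(path: &mut BlindedPathSketch) -> Result<(), ()> {
    let mut advanced = path.clone();
    advance_crypto(&mut advanced)?;
    advanced.hops.remove(0);
    *path = advanced;
    Ok(())
}
```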
When creating blinded paths for receiving onion payments, allow using
the recipient's only peer as the introduction node when the recipient is
unannounced. This allows for sending payments without going through an
intermediary, which is useful for testing or when only connected to an
LSP with an empty NetworkGraph.
When creating blinded paths for receiving onion messages, allow using
the recipient's only peer as the introduction node when the recipient is
unannounced. This allows for sending messages without going through an
intermediary, which is useful for testing or when only connected to an
LSP with an empty NetworkGraph.
UserConfig::manually_handle_bolt12_invoices allows deferring payment of
BOLT12 invoices by generating Event::InvoiceReceived. Expose
ChannelManager::send_payment_for_bolt12_invoice to allow users to pay
the Bolt12Invoice included in the event. While the event contains the
PaymentId for reference, that parameter is now removed from the method
in favor of extracting the PaymentId from the invoice's payer_metadata.
BOLT12 invoices are automatically paid once they have been verified.
Users may want to manually pay them by first performing additional
checks. Add a manually_handle_bolt12_invoices configuration option that,
when set, generates an Event::InvoiceReceived instead of paying the
invoice.
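Roughly, the resulting flow looks like this (a sketch with stand-in types; the real items are `UserConfig::manually_handle_bolt12_invoices`, `Event::InvoiceReceived`, and `ChannelManager::send_payment_for_bolt12_invoice`, whose exact signatures aren't reproduced here):

```rust
/// Stand-ins for the types involved; only the shape of the flow matters.
struct Bolt12Invoice;
enum Event {
    InvoiceReceived { invoice: Bolt12Invoice },
    // ... other events elided ...
}

struct ChannelManager;
impl ChannelManager {
    /// Pays a previously received invoice; the pending payment is located via
    /// the PaymentId recovered from the invoice's payer_metadata.
    fn send_payment_for_bolt12_invoice(&self, _invoice: &Bolt12Invoice) -> Result<(), ()> {
        Ok(())
    }
}

/// With manually_handle_bolt12_invoices set, invoices surface as events
/// instead of being paid automatically, so the application can vet them.
fn handle_event(manager: &ChannelManager, event: Event) {
    if let Event::InvoiceReceived { invoice } = event {
        // Run any application-specific checks on the invoice here, then pay it.
        let _ = manager.send_payment_for_bolt12_invoice(&invoice);
    }
}
```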
Add a builder for creating static invoices for an offer. Building produces a
semantically valid static invoice for the offer, which can then be signed with
the key associated with the offer's signing pubkey.
Some users may want to handle a Bolt12Invoice asynchronously, either in
a different process or by first performing additional verification
before paying the invoice. Add an InvoiceReceived event to facilitate
this.
In our route tests we need some "random" bytes for the router to
use when randomizing amounts. We generate these by building an actual
`KeysManager` and then deriving some random bytes using the
`EntropySource` trait. However, `get_route` (what we're normally
testing) doesn't actually use the random bytes, and even if it did,
using a `KeysManager` is just a fancy way of creating a constant,
so there's really no reason to do all the fancy crypto.
Instead, here, we change our routing tests and benchmarks to simply
use `[42; 32]` as the "random" bytes.
Until now, our routing benchmarks used a synthetic scorer,
generated by scoring random paths to build up some history. This is
pretty far removed from real-world routing conditions, as
alternative paths generally have no scoring information and even
the paths we do take have only one or two past scoring results.
Instead, we fetch a static serialized scorer, generated using
minutely probes. This means future changes to the scorer's data may
be harder to benchmark, but makes for substantially more realistic
benchmarks for changes which don't impact the serialized state.
Define an interface for BOLT 12 static invoice messages. The underlying
format consists of the original bytes and the parsed contents.
The bytes are later needed for serialization, because the serialized
message must mirror all the offer TLV records, including unknown ones,
which aren't represented in the contents.
Invoices may be created from an offer.
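A rough picture of that layout (illustrative names, not the exact LDK definitions); serialization writes the stored bytes so unknown TLV records survive a parse/serialize round trip:

```rust
/// Illustrative shape of the message: the raw TLV bytes plus parsed contents.
struct StaticInvoiceSketch {
    bytes: Vec<u8>,
    contents: InvoiceContentsSketch,
}

/// Parsed fields the code actually uses. Unknown TLV records aren't kept
/// here, which is why the original bytes must be retained for serialization.
struct InvoiceContentsSketch {
    amount_msats: u64,
    // ... other parsed fields elided ...
}

impl StaticInvoiceSketch {
    /// Writing the original bytes preserves every offer TLV record, including
    /// ones this version of the code doesn't understand.
    fn write(&self, out: &mut Vec<u8>) {
        out.extend_from_slice(&self.bytes);
    }
}
```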