When an offer is short-lived, the likelihood of a channel used in a
compact blinded path going away is low. Require passing the absolute
expiry of an offer to ChannelManager::create_offer_builder so that it
can be used to determine whether or not a compact blinded path should be
used.
Use the same criteria for creating blinded paths for refunds as well.
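As a rough illustration of the decision, here is a sketch with a hypothetical one-day cutoff and an invented helper name; the actual constant and signature in the codebase may differ:

    use core::time::Duration;

    // Hypothetical threshold under which an offer counts as short-lived.
    const MAX_SHORT_LIVED_RELATIVE_EXPIRY: Duration = Duration::from_secs(60 * 60 * 24);

    fn should_use_compact_paths(absolute_expiry: Option<Duration>, now: Duration) -> bool {
        match absolute_expiry {
            // A channel referenced by SCID is unlikely to close before a
            // short-lived offer expires, so the compact form is safe.
            Some(expiry) => expiry <= now.saturating_add(MAX_SHORT_LIVED_RELATIVE_EXPIRY),
            // Without an expiry the offer may outlive any given channel.
            None => false,
        }
    }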
There's no need to save space when creating reply paths since they are
part of onion messages rather than QR codes. Use normal blinded paths
for these instead, as they are less likely to become invalid in the
case of channel closure.
Using compact blinded paths isn't always necessary or desirable. For
instance, reply paths are communicated via onion messages, where space
isn't at a premium, unlike in QR codes. Additionally, long-lived paths
could become invalid if the channel associated with the SCID is closed.
Refactor MessageRouter::create_blinded_paths into two methods: one for
compact blinded paths and one for normal blinded paths.
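A minimal sketch of the resulting trait shape, with simplified signatures (the real trait takes recipient keys, peer lists, etc.):

    struct BlindedPath; // stand-in for the real blinded path type

    trait MessageRouter {
        // Node-id based paths: larger, but robust to channel closure.
        // Suitable for reply paths carried inside onion messages.
        fn create_blinded_paths(&self) -> Result<Vec<BlindedPath>, ()>;

        // SCID-based paths: smaller (good for QR codes), but they become
        // invalid if the referenced channel closes.
        fn create_compact_blinded_paths(&self) -> Result<Vec<BlindedPath>, ()> {
            // Assumed default: fall back to full paths, treating compact
            // paths as an optimization rather than a requirement.
            self.create_blinded_paths()
        }
    }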
An upcoming change to the MessageRouter trait will require reusing
DefaultMessageRouter::create_blinded_paths. Move the code to a utility
function to facilitate this.
Our "what is the channel's current state" structs have slowly
grown to be rather nontrivial, and now include eight structs with
many fields. Thus, it makes sense to pull them out of
`ln::channelmanager` and into their own module.
This also makes things easier for the C bindings, which don't
support `pub use` from a private module.
This removes the dependency of watched_outputs on
per_commitment_claimable_outpoints. This is required since we will
no longer have direct access to per_commitment_claimable_outpoints
once we start publishing PersistClaimInfo as part of #3049.
This never really made a lot of sense from an API perspective, but
was required to avoid handing the background processor an explicit
`OnionMessenger`, which we are now doing. Thus, we can simply drop
these bounds as unnecessary.
This adds an `OnionMessenger::process_pending_events_async`
mirroring the method of the same name in `ChannelManager`. However,
unlike the one in
`ChannelManager`, this processes the events in parallel by spawning
all futures and using the new `MultiFuturePoller`.
Because `OnionMessenger` just generates a stream of messages to
store/fetch, we first process all the events to store new messages,
`await` them, then process all the events to fetch stored messages,
ensuring reordering shouldn't result in lost messages (unless we
race with a peer disconnection, which could happen anyway).
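Conceptually, the two-phase processing looks like the following sketch, using futures::future::join_all as a stand-in for the new MultiFuturePoller:

    use futures::future::join_all;
    use std::future::Future;

    async fn process_two_phase<F: Future<Output = ()>>(
        store_events: Vec<F>, fetch_events: Vec<F>,
    ) {
        // Phase 1: run every "store this intercepted message" future in
        // parallel and wait for all of them to complete.
        join_all(store_events).await;
        // Phase 2: only then replay stored messages for reconnected peers,
        // so a fresh store can never race with a wipe-and-fetch.
        join_all(fetch_events).await;
    }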
In the next commit, `OnionMessenger` events are handled in parallel
using Rust async. When we do that, we'll want to handle
`OnionMessageIntercepted` events prior to
`OnionMessagePeerConnected` ones.
While we'd generally prefer to handle all events in the order they
were generated, if we want to handle them in parallel, we don't
want an `OnionMessageIntercepted` event to start being processed,
and then handle an `OnionMessagePeerConnected` prior to the first
completing. This could cause us to store a freshly-intercepted
message for a peer in a DB that was just wiped because the peer
is now connected.
This does run the risk of processing an `OnionMessagePeerConnected`
event prior to an `OnionMessageIntercepted` event (because a peer
connected, then disconnected, then we received a message for that
peer, all before any events were handled), but that ordering is
somewhat less likely, and discarding a message in a rare race is
better than leaving a message lying around undelivered.
Thus, here, we store `OnionMessenger` events in separate `Vec`s
which we can pull from in message-type-order.
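A sketch of that ordering (event payloads and names invented for illustration):

    enum MessageEvent {
        Intercepted(Vec<u8>),    // a message held for an offline peer
        PeerConnected([u8; 33]), // a peer (re)connected
    }

    #[derive(Default)]
    struct PendingEvents {
        intercepted: Vec<MessageEvent>,
        peer_connected: Vec<MessageEvent>,
    }

    impl PendingEvents {
        // Drain in message-type order: all Intercepted events strictly
        // before any PeerConnected ones, regardless of arrival order.
        fn drain(&mut self) -> Vec<MessageEvent> {
            let mut events = core::mem::take(&mut self.intercepted);
            events.append(&mut self.peer_connected);
            events
        }
    }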
One of the most common first steps in troubleshooting routefinding
issues is to ask for the local channel state to determine what the
available HTLC bounds are. While we log first-hop channel details
when we decline to use them, this doesn't tell us if we have
missing channels, and thus here we log all first-hop channels at
the start.
We also take this opportunity to log the limits that were violated
any time we log that we're not using a channel, rather than only
when it's a first hop.
- Introduce a new struct for keeping expectations organized (see the
  sketch after this list).
- Add a boolean field to track whether a response is expected,
  and hence whether a `reply_path` should be included with the response.
- Update Ping and Pong roles for bidirectional communication.
- Introduce a panic for when there is no responder and we were expecting
  to include a `reply_path`.
- Refactor `handle_custom_message` code.
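A hypothetical sketch of the expectation tracking above (all names invented; not the actual test code):

    struct Responder; // stand-in for the messenger's responder handle

    struct Expectations {
        // Whether a response, and hence a reply_path, is expected.
        expect_response: bool,
    }

    fn check(exp: &Expectations, responder: Option<&Responder>) {
        if exp.expect_response && responder.is_none() {
            // Mirrors the panic described above: we can't include a
            // reply_path without a responder.
            panic!("expected a response but no responder was provided");
        }
    }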
Additionally, expand the `handle_onion_message_response` return type:
1. Introduce a new function in OnionMessenger to create blinded paths.
2. Use it in handle_onion_message_response to create a reply_path for
the right variant and use it in onion_message.
3. Expand the return type of handle_onion_message_response to handle
   three cases:
   1. Ok(None) in case no response is to be sent.
   2. Ok(Some(SendSuccess)) and Err(SendError) in case of successful and
      unsuccessful queueing up of response messages, respectively.

This allows the user to get access to the success/failure status of
sending the response and handle it accordingly.
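The expanded return shape, sketched with simplified stand-ins for the real SendSuccess/SendError types:

    enum SendSuccess { Buffered }
    enum SendError { PathNotFound }

    // Ok(None): no response to send.
    // Ok(Some(_)): the response was queued successfully.
    // Err(_): queueing the response failed.
    type ResponseResult = Result<Option<SendSuccess>, SendError>;

    fn handle(result: ResponseResult) {
        match result {
            Ok(None) => { /* nothing to do */ }
            Ok(Some(SendSuccess::Buffered)) => { /* response on its way */ }
            Err(SendError::PathNotFound) => { /* surface the failure */ }
        }
    }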
This commit modifies handle_onion_message_response to be accessible
publicly as handle_onion_message, enabling users to respond
asynchronously. Additionally, a new test is introduced to validate this
functionality.
Add a method to BlindedPath that, given a network graph, will compact
the IntroductionNode into the DirectedShortChannelId variant. Call this method
from DefaultMessageRouter so that Offer paths use the compact
representation (along with reply paths). This leaves payment paths in
Bolt12Invoice using the NodeId variant, as the compact representation
isn't as useful there.
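A sketch of the compaction step; the variant shapes follow the names above, while the lookup closure standing in for the network graph is invented:

    enum IntroductionNode {
        NodeId([u8; 33]),                  // full public key (33 bytes)
        DirectedShortChannelId(bool, u64), // direction flag + SCID (much smaller)
    }

    fn compact_introduction_node(
        intro: IntroductionNode,
        scid_lookup: impl Fn(&[u8; 33]) -> Option<(bool, u64)>,
    ) -> IntroductionNode {
        match intro {
            IntroductionNode::NodeId(pk) => match scid_lookup(&pk) {
                Some((direction, scid)) =>
                    IntroductionNode::DirectedShortChannelId(direction, scid),
                // No known channel in the graph: keep the full node id.
                None => IntroductionNode::NodeId(pk),
            },
            other => other,
        }
    }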