Our "what is the channel's current state" structs have slowly
grown to be rather nontrivial, and now include eight structs with
many fields. Thus, it makes sense to pull them out of
`ln::channelmanager` and into their own module.
This also makes things easier for the C bindings which don't
support `pub use` from a private module.
This removes the dependency of watched_outputs on
per_commitment_claimable_outpoints. This is required since we will
no longer have direct access to per_commitment_claimable_outpoints
once we start publishing PersistClaimInfo as part of #3049.
This never really made a lot of sense from an API perspective, but
was required to avoid handing the background processor an explicit
`OnionMessenger`, which we are now doing. Thus, we can simply drop
these bounds as unnecessary.
When `OnionMessenger` first developed a timer and events interface,
we accessed the `OnionMessenger` indirectly via the `PeerManager`.
While this is a fairly awkward interface, it avoided a large pile
of generics on the background processor interfaces. However, since
we now have an `AOnionMessenger` trait, this concern is no longer
significant. Further, because we now want to use the built-in
`OnionMessenger` async event processing method, we really need a
direct reference to the `OnionMessenger` in the background
processor, which we add here optionally.
This adds an `OnionMessenger::process_pending_events_async`
mirroring the same in `ChannelManager`. However, unlike the one in
`ChannelManager`, this processes the events in parallel by spawning
all futures and using the new `MultiFuturePoller`.
Because `OnionMessenger` just generates a stream of messages to
store/fetch, we first process all the events to store new messages,
`await` them, then process all the events to fetch stored messages,
ensuring reordering shouldn't result in lost messages (unless we
race with a peer disconnection, which could happen anyway).
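A minimal sketch of that two-phase pattern, assuming a generic async event
handler and the `futures` crate's `join_all` (illustrative names only, not the
actual `OnionMessenger` internals or the `MultiFuturePoller` itself):

```rust
use lightning::events::Event;

// Illustrative only: run all "store an intercepted message" futures to
// completion before any "peer connected, fetch stored messages" futures start.
async fn process_in_two_phases<F, Fut>(
	intercepted: Vec<Event>, peer_connected: Vec<Event>, handler: F,
) where
	F: Fn(Event) -> Fut,
	Fut: std::future::Future<Output = ()>,
{
	// Spawn every store-side future and poll them concurrently, similar in
	// spirit to what the new `MultiFuturePoller` does for the real futures.
	futures::future::join_all(intercepted.into_iter().map(&handler)).await;
	// Only once all stores complete do we handle fetch-side events, so a
	// reordering cannot drop a message intercepted just before its peer
	// reconnected.
	futures::future::join_all(peer_connected.into_iter().map(&handler)).await;
}
```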
In the next commit, `OnionMessenger` events are handled in parallel
using rust async. When we do that, we'll want to handle
`OnionMessageIntercepted` events prior to
`OnionMessagePeerConnected` ones.
While we'd generally prefer to handle all events in the order they
were generated, if we want to handle them in parallel, we don't
want an `OnionMessageIntercepted` event to start being processed,
then handle an `OnionMessagePeerConnected` prior to the first
completing. This could cause us to store a freshly-intercepted
message for a peer in a DB that was just wiped because the peer
is now connected.
This does run the risk of processing an `OnionMessagePeerConnected`
event prior to an `OnionMessageIntercepted` event (because a peer
connected, then disconnected, then we received a message for that
peer all before any events were handled), but that is somewhat less
likely, and discarding a message in a rare race is better than
leaving a message lying around undelivered.
Thus, here, we store `OnionMessenger` events in separate `Vec`s
which we can pull from in message-type-order.
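A hedged sketch of that idea, with illustrative names (the real field names
and surrounding types may differ):

```rust
use lightning::events::Event;

// Queue the two event types separately so they can later be drained in
// message-type order (all intercepts first, then all peer-connecteds) rather
// than strict generation order.
#[derive(Default)]
struct PendingOnionMessengerEvents {
	intercepted_msgs: Vec<Event>,
	peer_connecteds: Vec<Event>,
}

impl PendingOnionMessengerEvents {
	fn enqueue(&mut self, event: Event) {
		match event {
			Event::OnionMessageIntercepted { .. } => self.intercepted_msgs.push(event),
			Event::OnionMessagePeerConnected { .. } => self.peer_connecteds.push(event),
			// `OnionMessenger` only ever generates the two events above.
			_ => debug_assert!(false, "Unexpected event type"),
		}
	}
}
```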
One of the most common first steps in troubleshooting routefinding
issues is to ask for the local channel state to determine what the
available HTLC bounds are. While we log first-hop channel details
when we decline to use them, this doesn't tell us if we have
missing channels, and thus here we log all first-hop channels at
the start.
We also take this opportunity to log the limits that were violated
any time we log that we're not using a channel, rather than only
when it's a first hop.
- Introduce a new struct for keeping expectations organized (see the sketch after this list).
- Add a boolean field to track whether a response is expected,
and hence whether a `reply_path` should be included with the response.
- Update Ping and Pong roles for bidirectional communication.
- Introduce panic for when there is no responder and we were expecting
to include a `reply_path`.
- Refactor `handle_custom_message` code.
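A hypothetical sketch of what such an expectation record could look like
(names and the `Option<()>` responder stand-in are illustrative, not the
actual test code):

```rust
// Each expectation records which message we expect next and whether the
// handler should attach a `reply_path` when responding.
#[derive(Clone, Copy, PartialEq, Debug)]
enum TestCustomMessage { Ping, Pong }

struct OnHandleCustomMessage {
	expect: TestCustomMessage,
	include_reply_path: bool,
}

// `Option<()>` simply models whether a responder was handed to us along with
// the message; handling panics if a reply_path was expected but no responder
// was provided.
fn check(expectation: &OnHandleCustomMessage, received: TestCustomMessage, responder: Option<()>) {
	assert_eq!(received, expectation.expect);
	if expectation.include_reply_path && responder.is_none() {
		panic!("expected to include a reply_path, but no responder was provided");
	}
}
```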
And expand the `handle_onion_message_response` return type.
1. Introduce a new function in OnionMessenger to create blinded paths.
2. Use it in handle_onion_message_response to create a reply_path for
the right variant and use it in onion_message.
3. Expand the return type of handle_onion_message_response to handle three cases:
   1. Ok(None) in case no response needs to be sent.
   2. Ok(Some(SendSuccess)) and Err(SendError) in case of successful and
      unsuccessful queueing of the response message, respectively.
This allows the user to access the success/failure status of sending the
response and handle it accordingly.
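For example, a caller might consume the expanded return type along these
lines (a hedged sketch, assuming `SendSuccess` and `SendError` live in
`lightning::onion_message::messenger`; the function name is illustrative):

```rust
use lightning::onion_message::messenger::{SendError, SendSuccess};

fn log_response_outcome(result: Result<Option<SendSuccess>, SendError>) {
	match result {
		// No response needed to be sent for this message.
		Ok(None) => {},
		// The response was successfully queued (possibly awaiting the peer's
		// connection) and will be sent by the OnionMessenger.
		Ok(Some(_success)) => println!("Response queued for sending"),
		// Queueing the response failed; log, drop, or retry as appropriate.
		Err(e) => println!("Failed to queue response: {:?}", e),
	}
}
```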
This commit modifies handle_onion_message_response to be accessible
publicly as handle_onion_message, enabling users to respond
asynchronously. Additionally, a new test is introduced to validate this
functionality.
Add a method to BlindedPath that, given a network graph, will compact the
IntroductionNode into the DirectedShortChannelId variant. Call this method
from DefaultMessageRouter so that Offer paths use the compact
representation (along with reply paths). This leaves payment paths in
Bolt12Invoice using the NodeId variant, as the compact representation
isn't as useful there.
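Conceptually, the compaction looks something like the following sketch
(illustrative, not necessarily the exact method added; it assumes the
`IntroductionNode`/`Direction` types in `lightning::blinded_path` and channel
lookups on `ReadOnlyNetworkGraph`):

```rust
use lightning::blinded_path::{BlindedPath, Direction, IntroductionNode};
use lightning::routing::gossip::{NodeId, ReadOnlyNetworkGraph};

// Replace a NodeId introduction node with a (direction, scid) pair referencing
// a public channel the introduction node is a party to.
fn compact_introduction_node(path: &mut BlindedPath, graph: &ReadOnlyNetworkGraph) {
	let compact = if let IntroductionNode::NodeId(pubkey) = &path.introduction_node {
		let node_id = NodeId::from_pubkey(pubkey);
		graph.node(&node_id).and_then(|info| info.channels.iter().find_map(|scid| {
			graph.channel(*scid).map(|chan| {
				// The direction records which end of the channel the node is.
				let dir = if chan.node_one == node_id { Direction::NodeOne } else { Direction::NodeTwo };
				IntroductionNode::DirectedShortChannelId(dir, *scid)
			})
		}))
	} else { None };
	if let Some(compact) = compact {
		path.introduction_node = compact;
	}
}
```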
Instead of passing Vec<PublicKey> to MessageRouter::create_blinded_paths,
pass Vec<ForwardNode>. This way callers can include a short_channel_id
for a more compact BlindedPath encoding.
When sending an onion message to a blinded path, the short channel id
between hops isn't needed in each hop's encrypted_payload since it is not
a payment. However, using the short channel id instead of the node id
gives a more compact representation. Update BlindedPath::new_for_message
to allow for this.
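A hedged sketch of the caller-side change, assuming the `ForwardNode` struct
lives in `lightning::blinded_path::message` with a `node_id` and an optional
`short_channel_id`:

```rust
use bitcoin::secp256k1::PublicKey;
use lightning::blinded_path::message::ForwardNode;

// Each intermediate hop is now described by a ForwardNode rather than a bare
// PublicKey; attaching the SCID of the channel towards the next hop lets the
// blinded path use the more compact SCID-based encrypted_payload encoding.
fn intermediate_hops(hop_a: PublicKey, hop_b: PublicKey, scid_a_to_b: u64) -> Vec<ForwardNode> {
	vec![
		ForwardNode { node_id: hop_a, short_channel_id: Some(scid_a_to_b) },
		// With `None`, this hop falls back to the node-id-based encoding.
		ForwardNode { node_id: hop_b, short_channel_id: None },
	]
}
```

The resulting hops can then be handed to `BlindedPath::new_for_message` (or
returned from a `MessageRouter`) in place of the old `Vec<PublicKey>`.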
ddf75afd16 introduced the ability to re-exchange our `ChannelOpen`
after a peer disconnects if we didn't complete funding on our end.
It did not implement nor consider what would happen if we
re-connected after we created our own funding transactions, and
currently it panics (and even if it did not, it would replay the
`FundingTransactionGenerated` event to users).
While we'd very much like to replay the `open_channel` flow even
if we have already received an `accept_channel` and funded the
channel, we cannot as the peer will likely provide different key
material in their `accept_channel`, causing us to need a different
funding transaction.
Thus, here, we simply close channels which have been funded but not
yet signed when our peer disconnects.