PackageTemplate aims to replace InputMaterial, introducing a clean
interface for manipulating a wide range of claim types without
OnchainTxHandler being aware of the specific content of each.
It will be used in the next commits.
Package.rs gathers the interfaces used to communicate between the
onchain channel transaction parser (ChannelMonitor) and the
output-claiming logic (OnchainTxHandler). These interfaces are data
structures, generated per-case by ChannelMonitor and consumed
blindly by OnchainTxHandler.
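As a rough illustration of that boundary (all shapes here are
hypothetical sketches, not the actual package.rs definitions):

```rust
// Hypothetical sketch: ChannelMonitor generates per-case claim data,
// while OnchainTxHandler only ever consumes the case-agnostic wrapper.
struct OutPoint { txid: [u8; 32], vout: u32 }

// Per-case content, produced by ChannelMonitor (variants illustrative).
enum ClaimData {
    RevokedOutput { amount_sat: u64 },
    CounterpartyReceivedHtlc { amount_sat: u64, cltv_expiry: u32 },
}

// The interface OnchainTxHandler sees: aggregate properties only.
struct PackageTemplate {
    inputs: Vec<(OutPoint, ClaimData)>,
}

impl PackageTemplate {
    // OnchainTxHandler can size fees without knowing claim specifics.
    fn package_amount(&self) -> u64 {
        self.inputs.iter().map(|(_, data)| match data {
            ClaimData::RevokedOutput { amount_sat } => *amount_sat,
            ClaimData::CounterpartyReceivedHtlc { amount_sat, .. } => *amount_sat,
        }).sum()
    }
}
```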
Previously we handled most of the logic of announcement_signatures
in ChannelManager, rather than Channel. This is somewhat unique as
far as our message processing goes, but it also avoided having to
pass the node_secret into the Channel.
Eventually, we'll move the node_secret behind the signer anyway, so
there isn't much reason for this, and storing the
announcement_signatures-provided signatures in the Channel allows
us to recreate the channel_announcement later for rebroadcast,
which may be useful.
Currently our serialization is very compact and contains version
numbers indicating which versions of the code can read a given
serialized struct. However, if you want to add a new field without
needlessly breaking the ability of previous versions of the code to
read the struct, there is no good way to do so.
This adds dummy, currently empty, TLVs to the major structs we
serialize out for users, providing an easy place to put new
optional fields without breaking previous versions.
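To illustrate the idea, here is a minimal sketch of a TLV suffix. The
encoding and helper names are assumptions for illustration, not LDK's
actual serialization macros:

```rust
use std::io::{self, Write};

// Write a single TLV record: 1-byte type, 2-byte big-endian length, value.
fn write_tlv<W: Write>(w: &mut W, typ: u8, value: &[u8]) -> io::Result<()> {
    w.write_all(&[typ])?;
    w.write_all(&(value.len() as u16).to_be_bytes())?;
    w.write_all(value)
}

// After the existing fixed, versioned fields, append a TLV suffix. Old
// readers skip records whose types they don't know, so new optional
// fields can be added later under new types without breaking them.
fn write_struct_with_tlv_suffix<W: Write>(w: &mut W, some_field: u64) -> io::Result<()> {
    w.write_all(&some_field.to_be_bytes())?; // existing compact field
    write_tlv(w, 1, &[])?; // dummy, currently-empty TLV reserved for the future
    Ok(())
}
```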
Previously, if we got disconnected from a peer while there were
HTLCs pending forwarding in the holding cell, we'd clear them and
fail them all backwards. This is largely fine, but since we now
have support for handling such HTLCs on reconnect, we might as
well keep them, relying on our timeout logic to fail them
backwards if it takes too long to forward them.
If there are no pending channel update messages when monitor
updating is restored (though there may be an RAA to send), and
we're connected to our peer and not awaiting a remote RAA, we need
to free anything in our holding cell.
However, we don't want to immediately free the holding cell during
channel_monitor_updated as it presents a somewhat bug-prone case of
reentrancy:
a) it would re-enter user code around a monitor update while being
called from user code notifying us of the same monitor being
updated, making deadlocks very likely (in fact, our fuzzers
would have a bug here!),
b) the re-entrancy only occurs in a very rare case, making it
likely users will not hit it in testing, only deadlocking in
production.
Thus, we add a holding-cell-free pass over each channel in
get_and_clear_pending_msg_events. This fits nicely with the
anticipated usage - users almost certainly need to process new
network messages immediately after monitor updating has been
restored in order to send messages which were not sent originally
when monitor updating was paused.
Without this, chanmon_fail_consistency was able to find a stuck
condition where we sit on an HTLC failure in our holding cell and
don't ever handle it (at least until we have other actions to take
which empty the holding cell).
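A sketch of the deferred free, with hypothetical field and method
names standing in for the real ChannelManager internals:

```rust
// Rather than freeing the holding cell inside channel_monitor_updated
// (re-entering user code), we defer it to the next
// get_and_clear_pending_msg_events call, which runs on the user's own
// call stack.
struct Channel {
    holding_cell_pending: bool,
}

impl Channel {
    // Generate any update messages deferred while monitor updating was paused.
    fn free_holding_cell(&mut self) -> Vec<String> {
        self.holding_cell_pending = false;
        vec![]
    }
}

struct ChannelManager {
    channels: Vec<Channel>,
}

impl ChannelManager {
    fn get_and_clear_pending_msg_events(&mut self) -> Vec<String> {
        let mut events = Vec::new();
        // Holding-cell-free pass: no re-entrancy into user code from
        // within a monitor-update callback.
        for chan in self.channels.iter_mut() {
            if chan.holding_cell_pending {
                events.extend(chan.free_holding_cell());
            }
        }
        // ... then append the usual pending message events ...
        events
    }
}
```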
Both break_chan_entry and try_chan_entry do almost identical work,
differing only in whether they `break` or `return` in response to
an error. Because we will now also need an option that does
neither, we break the common code out into a shared
`convert_chan_err` macro.
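A simplified sketch of the pattern (the real macros also handle
channel removal and shutdown, which is elided here):

```rust
// The shared conversion lives in convert_chan_err!; the wrappers
// differ only in what they do with the converted error.
macro_rules! convert_chan_err {
    ($e: expr) => {
        format!("channel error: {}", $e)
    };
}

macro_rules! try_chan_entry {
    ($res: expr) => {
        match $res {
            Ok(v) => v,
            Err(e) => return Err(convert_chan_err!(e)),
        }
    };
}
// break_chan_entry! is identical, except the Err arm `break`s out of
// the enclosing loop instead of returning.

fn handle(msg: Result<u32, &'static str>) -> Result<u32, String> {
    let v = try_chan_entry!(msg);
    Ok(v + 1)
}
```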
The channel restoration code in channel monitor updating and peer
reconnection both do incredibly similar things, and there is
little reason to have them be separate. Sadly, because they
require holding a lock as well as references to elements inside
it, it's not practical to make them utility functions, so instead
we introduce a two-step macro here which will eventually be used
for both.
Because we still support pre-NLL Rust, the macro has to be in two
parts - one which runs with the channel_state lock, and one which
does not.
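A minimal sketch of the two-step shape, with heavily simplified
state and hypothetical macro bodies:

```rust
// Step one runs while the channel_state lock is held, extracts what it
// needs, and drops the lock; step two runs lock-free. Pre-NLL borrowck
// can't verify this split as a single utility function.
use std::sync::Mutex;

struct ChannelState { pending_msgs: Vec<String> }

macro_rules! handle_chan_restoration_locked {
    ($guard: expr) => {{
        let pending: Vec<String> = $guard.pending_msgs.drain(..).collect();
        drop($guard); // release the lock before the second step
        pending
    }};
}

macro_rules! post_handle_chan_restoration {
    ($pending: expr) => {
        for msg in $pending {
            println!("send: {}", msg); // user-visible work, outside the lock
        }
    };
}

fn restore(state: &Mutex<ChannelState>) {
    let mut guard = state.lock().unwrap();
    let pending = handle_chan_restoration_locked!(guard);
    post_handle_chan_restoration!(pending);
}
```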
We currently generate duplicative PaymentFailed/PaymentSent events
in two cases:
a) If we receive an update_fulfill_htlc message, followed by a
disconnect, then a resend of the same update_fulfill_htlc
message, we will generate a PaymentSent event for each message.
b) When a Channel is closed, any outbound HTLCs which were relayed
through it are simply dropped when the Channel is. From there,
the ChannelManager relies on the ChannelMonitor having a copy of
the relevant fail-/claim-back data and processes the HTLC
fail/claim when the ChannelMonitor tells it to.
If, due to an on-chain event, an HTLC is failed/claimed, and
then we serialize the ChannelManager, but do not re-serialize
the relevant ChannelMonitor, we may end up getting a duplicative
event.
In order to provide the expected consistency, we add explicit
tracking of pending outbound payments using their unique
session_priv field which is generated when the payment is sent.
Then, before generating PaymentFailed/PaymentSent events, we check
that the session_priv for the payment is still pending.
This fixes #209.
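A sketch of the tracking idea, with simplified types standing in for
the real ChannelManager state:

```rust
// session_priv is unique per payment attempt: record it at send time
// and consume it exactly once, so duplicate fulfill/fail signals can't
// generate a second event.
use std::collections::HashSet;

#[derive(Default)]
struct PendingOutboundPayments {
    session_privs: HashSet<[u8; 32]>,
}

impl PendingOutboundPayments {
    fn payment_sent(&mut self, session_priv: [u8; 32]) {
        self.session_privs.insert(session_priv);
    }

    // Returns true only the first time; a replayed update_fulfill_htlc
    // or a stale ChannelMonitor event finds the entry already removed.
    fn should_generate_event(&mut self, session_priv: &[u8; 32]) -> bool {
        self.session_privs.remove(session_priv)
    }
}
```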
To avoid the caller's data structures having to store HTLC-related
information when a revokeable output is claimed on top of a
commitment or second-stage HTLC transaction, we split
`keysinterface::sign_justice_transaction` into two new halves,
`keysinterface::sign_justice_revoked_output` and
`keysinterface::sign_justice_revoked_htlc`.
Further, this split offers more flexibility to signer policy, as a
commitment's revokeable output might be of far greater value than
the HTLC ones.
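A sketch of the resulting split (the trait name, placeholder types,
and parameter lists are abbreviated assumptions; see the actual
keysinterface for the exact signatures):

```rust
struct Transaction;
struct SecretKey;
struct HTLCOutputInCommitment;
struct Signature;

trait JusticeSigner {
    // Revoked balance output on a counterparty commitment tx: no HTLC
    // data needed at all.
    fn sign_justice_revoked_output(
        &self, justice_tx: &Transaction, input: usize, amount: u64,
        per_commitment_key: &SecretKey,
    ) -> Result<Signature, ()>;

    // Revoked output on a commitment or second-stage HTLC tx: only this
    // variant carries the HTLC descriptor.
    fn sign_justice_revoked_htlc(
        &self, justice_tx: &Transaction, input: usize, amount: u64,
        per_commitment_key: &SecretKey, htlc: &HTLCOutputInCommitment,
    ) -> Result<Signature, ()>;
}
```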
If an event caused us to set the persist flag in a
PersistenceNotifier in between wait calls, we would still wait,
potentially not persisting a ChannelManager when we should.
Worse, for wait_timeout, this caused us to always wait up to the
timeout and then always return true, indicating a persistence is
needed.
Instead, we simply check the persist flag before waiting, returning
immediately if it is set.
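A simplified sketch of the fixed wait logic (the wait_timeout
variant is omitted, and the real notifier holds more state):

```rust
// Re-check the persist flag before blocking, so a notify that landed
// between wait calls returns immediately instead of being slept through.
use std::sync::{Condvar, Mutex};

struct PersistenceNotifier {
    persist_flag: Mutex<bool>,
    condvar: Condvar,
}

impl PersistenceNotifier {
    fn wait(&self) {
        let mut flag = self.persist_flag.lock().unwrap();
        // Returns immediately if a persist was requested since the last wait.
        while !*flag {
            flag = self.condvar.wait(flag).unwrap();
        }
        *flag = false; // consume the notification
    }

    fn notify(&self) {
        *self.persist_flag.lock().unwrap() = true;
        self.condvar.notify_all();
    }
}
```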
Currently, when a user calls `ChannelManager::timer_tick_occurred`
we always set the persister's update flag to true. This results in
a ChannelManager persistence after each timer tick, even when
nothing happened.
Instead, we add a new flag to `PersistenceNotifierGuard` indicating
whether to skip setting the update flag.
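A sketch using a hypothetical guard shape (the real guard is more
involved):

```rust
// Callers that may not have changed anything (like timer_tick_occurred)
// construct the guard with should_persist = false, so dropping it does
// not request a persistence.
struct PersistenceNotifier;
impl PersistenceNotifier {
    fn notify(&self) { /* set the persist flag, wake waiters */ }
}

struct PersistenceNotifierGuard<'a> {
    notifier: &'a PersistenceNotifier,
    should_persist: bool,
}

impl<'a> Drop for PersistenceNotifierGuard<'a> {
    fn drop(&mut self) {
        if self.should_persist {
            self.notifier.notify();
        }
    }
}
```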
Currently, we only send an update_channel message after
disconnecting a peer and waiting some time. We do not send a
follow-up once the peer has been reconnected for some time.
This changes that behavior to make the disconnect and reconnect
channel updates symmetric, and also simplifies the state machine
somewhat to make it more clear.
Finally, it serializes the current announcement state so that we
usually know when we need to send a new update_channel.
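A hypothetical sketch of such a serialized state machine (variant
names and states assumed for illustration):

```rust
// Tracking what we last told the network lets us decide, after a
// reconnect or a reload, whether a fresh update_channel is needed.
enum ChannelUpdateStatus {
    // Last update_channel (if any) marked the channel enabled.
    Enabled,
    // Peer disconnected; waiting out a grace period before disabling.
    DisabledStaged,
    // Peer reconnected; waiting out a grace period before re-enabling.
    EnabledStaged,
    // Last update_channel marked the channel disabled.
    Disabled,
}
```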