+ remove MaybeReadableArgs trait as it is now unused
+ remove onion_utils::DecodeInput as it would've now needed to be parameterized
by the CustomOnionMessageHandler trait, and we'd like to avoid either
implementing DecodeInput in messenger or having onion_utils depend on
onion_message::*
Co-authored-by: Matt Corallo <git@bluematt.me>
Co-authored-by: Valentine Wallace <vwallace@protonmail.com>
The bindings have exported `io::Error` as, basically,
`io::ErrorKind`, for quite some time, so there's little reason to
not just export `io::ErrorKind` as well.
As we're moving towards async monitor updates being a supported
use case, we shouldn't call an async monitor update "failed", but
rather "in progress". This simply updates the internal channel.rs
enum name to reflect the new thinking.
If the initial ChannelMonitor persistence is done asynchronously
but does not complete before the node restarts (with a
ChannelManager persistence), we'll start back up with a channel
present but no corresponding ChannelMonitor.
Because the Channel is pending-monitor-update and has not yet
broadcasted its initial funding transaction or sent channel_ready,
this is not a violation of our API contract nor a safety violation.
However, the previous code would refuse to deserialize the
ChannelManager, treating it as an API contract violation.
The solution is to test for this case explicitly and drop the
channel entirely as if the peer disconnected before we received
the funding_signed for outbound channels or before sending the
channel_ready for inbound channels.
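
A rough sketch of the reload check described above, using stand-in
types and a hypothetical helper name (the real logic lives in
`ChannelManager`'s deserialization path and is more involved):

```rust
/// Stand-in channel state for illustration only; LDK's real types differ.
struct Channel {
    funding_broadcast: bool,
    sent_channel_ready: bool,
}

/// Hypothetical helper: decide what to do with a channel read from disk
/// when no corresponding ChannelMonitor was provided on reload.
/// Ok(()) means "forget the channel", Err(()) means "refuse to deserialize".
fn handle_missing_monitor(chan: &Channel) -> Result<(), ()> {
    if !chan.funding_broadcast && !chan.sent_channel_ready {
        // The initial monitor persistence never completed, but we also never
        // committed to the channel on-chain or told the peer it was ready.
        // Drop it as if the peer disconnected during funding.
        Ok(())
    } else {
        // A monitor is genuinely missing for a channel that may have funds
        // at stake; refuse to deserialize the ChannelManager.
        Err(())
    }
}
```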
OnionMessenger::new will now take a custom onion message handler trait
implementation. This handler will be used in upcoming commit(s) to handle
inbound custom onion messages.
The new trait also specifies what custom messages are supported via its
associated type, CustomMessage. This associated type must implement a new
CustomOnionMessageContents trait, which requires custom messages to support
being written, being read, and supplying their TLV type.
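
The rough shape of the two traits described above could look like the
following sketch. Signatures are simplified and reconstructed here
(e.g. the serialization and error types are stand-ins), so treat the
details as an approximation rather than LDK's exact API:

```rust
use std::io;

/// A custom onion message, able to report its TLV type and serialize itself.
pub trait CustomOnionMessageContents {
    /// The TLV type number under which this message is sent.
    fn tlv_type(&self) -> u64;
    /// Serialize the message contents (simplified here to a byte vector).
    fn write(&self) -> Vec<u8>;
}

/// Handler for inbound custom onion messages, passed to OnionMessenger::new.
pub trait CustomOnionMessageHandler {
    /// The supported custom message type.
    type CustomMessage: CustomOnionMessageContents;
    /// Called when a custom onion message addressed to us is received.
    fn handle_custom_message(&self, msg: Self::CustomMessage);
    /// Attempt to read a custom message of the given TLV type; `Ok(None)`
    /// means the type is not known to this handler.
    fn read_custom_message<R: io::Read>(
        &self, message_type: u64, buffer: &mut R,
    ) -> Result<Option<Self::CustomMessage>, io::Error>;
}
```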
If we receive a monitor event from a forwarded-to channel which
contains a preimage for an HTLC, we have to propagate that preimage
back to the forwarded-from channel monitor. However, once we have
that update, we're running in a relatively unsafe state - we have
the preimage in memory, but if we were to crash, the forwarded-to
channel monitor will not regenerate the update with the preimage
for us. If we haven't managed to write the monitor update to the
forwarded-from channel by that point, we've lost the preimage, and,
thus, money!
When a `chain::Watch` `ChannelMonitor` update method is called, the
user has three options:
(a) persist the monitor update immediately and return success,
(b) fail to persist the monitor update immediately and return
failure,
(c) return a flag indicating the monitor update is in progress and
will complete in the future.
(c) is rather harmless, and in some deployments should be expected
to be the return value for all monitor update calls, but currently
requires returning `Err(ChannelMonitorUpdateErr::TemporaryFailure)`,
which isn't very descriptive and sounds scarier than it is.
Instead, here, we change the return type to a single enum (rather
than a `Result`) and rename `TemporaryFailure` to
`UpdateInProgress`.
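
In other words, the three options map onto something like the enum
below. This is a sketch: the variant names follow the description
above rather than quoting the final API.

```rust
/// Sketch of the new return type for `chain::Watch` monitor-update calls.
enum ChannelMonitorUpdateStatus {
    /// (a) The update was persisted immediately.
    Completed,
    /// (b) The update could not be persisted and the channel must be closed.
    PermanentFailure,
    /// (c) Persistence is in progress and will complete asynchronously;
    /// previously expressed as `Err(ChannelMonitorUpdateErr::TemporaryFailure)`.
    UpdateInProgress,
}
```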
Prior to this commit, the `forward_htlcs` lock was only held at the
same time as the `channel_state` lock during the write process of
the `ChannelManager`. This commit removes that lock order dependency
by taking the `channel_state` lock only temporarily during the write
process.
As we are eventually removing the `channel_state` lock, this commit
moves the `forward_htlcs` map out of the `channel_state` lock, to ease
that process.
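
A simplified sketch of the data-layout change (field and type names
here are stand-ins, not LDK's actual definitions):

```rust
use std::collections::HashMap;
use std::sync::Mutex;

struct ChannelHolder { /* per-channel state; previously also held forward_htlcs */ }
struct PendingForward { /* details of an HTLC awaiting forwarding */ }

struct ChannelManager {
    // Previously the forwarding map lived inside `channel_state`, so
    // serializing the ChannelManager kept both under one lock.
    channel_state: Mutex<ChannelHolder>,
    // Now a separate lock: the write path takes `channel_state` only
    // briefly, removing the fixed lock-order dependency between the two.
    forward_htlcs: Mutex<HashMap<u64, Vec<PendingForward>>>,
}
```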
See doc updates for more info on the edge case this prevents, and
there isn't really a strong reason why we would need to broadcast
the latest state immediately. Specifically, in the case of HTLC
claims (the most important reason to ensure we have state on chain
if it cannot be persisted), we will still force-close if there are
HTLCs which need claiming and are going to expire.
Surprisingly, there were no tests which failed as a result of this
change, but a new one has been added.
Now that the feature contexts track the full set of all known
features, rather than the set of supported features, all defined
features should be listed in the context definition macro.
This adds a compile-time assertion to check that all bits for known
features are set in the context known set.
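
The idea of the compile-time check, illustrated as a standalone const
assertion using `assert!` in a const item (available on recent Rust).
This is a sketch of the technique, not LDK's actual macro, and the
mask values are made up:

```rust
// Bits a context claims to know about, and the bits actually defined as
// required/optional for that context.
const KNOWN_FEATURE_MASK: u64 = 0b0011_1111;
const REQUIRED_FEATURE_MASK: u64 = 0b0000_0101;
const OPTIONAL_FEATURE_MASK: u64 = 0b0011_1010;

// Fails to compile if any defined feature bit is missing from the known set.
const _: () = assert!(
    (KNOWN_FEATURE_MASK & (REQUIRED_FEATURE_MASK | OPTIONAL_FEATURE_MASK))
        == (REQUIRED_FEATURE_MASK | OPTIONAL_FEATURE_MASK)
);
```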
Now that the `*Features::known` constructor has been removed, there
is no reason to define feature bits as either optional or required in
`features.rs` - that logic now belongs in the modules that are
responsible for the given features.
Instead, we only list all features in each context.
As we move towards specifying supported/required feature bits in the
module(s) where they are supported, the global `known` feature set
constructors no longer make sense.
Here we (finally) remove the `known` constructor entirely,
modifying tests in the `features` module as required.
As we move towards specifying supported/required feature bits in the
module(s) where they are supported, the global `known` feature set
constructors no longer make sense.
In anticipation of removing the `known` constructor, this commit
removes all remaining references to it outside of features.rs.
As we move towards specifying supported/required feature bits in the
module(s) where they are supported, the global `known` feature set
constructors no longer make sense.
Here we stop relying on the `known` method in the channel modules.
As we move towards specifying supported/required feature bits in the
module(s) where they are supported, the global `known` feature set
constructors no longer make sense.
Here we stop relying on the `known` method in the
functional_test_utils module.
Historically, LDK has considered the "set of known/supported
feature bits" to be an LDK-level thing. Increasingly this doesn't
make sense - different message handlers may provide or require
different feature sets.
In a previous PR, we began the process of transitioning, with
feature bits sent to peers being sourced from the attached message
handler.
This commit makes further progress by moving the concept of which
feature bits are supported by our ChannelManager into
channelmanager.rs itself, via the new `provided_*_features`
methods, rather than in features.rs via the `known_channel_features`
and `known` methods.
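
A minimal sketch of that direction, with stand-in types (the real
`provided_*_features` methods on `ChannelManager` cover more feature
flags and take configuration into account):

```rust
struct InitFeatures {
    static_remote_key: bool,
    payment_secret: bool,
}

struct ChannelManager;
impl ChannelManager {
    /// The feature bits this handler provides, defined here rather than in a
    /// global `known()`/`known_channel_features()` set in features.rs.
    fn provided_init_features(&self) -> InitFeatures {
        InitFeatures { static_remote_key: true, payment_secret: true }
    }
}

struct PeerManager { chan_handler: ChannelManager }
impl PeerManager {
    fn init_features_to_send(&self) -> InitFeatures {
        // Feature bits sent to peers are sourced from the attached handler.
        self.chan_handler.provided_init_features()
    }
}
```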
Each test featuring HTLCs had a minimum and maximum feerate case. This
is no longer necessary for the zero HTLC transaction anchors variant as
the commitment feerate does not impact whether HTLCs can be trimmed;
only the dust limit does.
With the zero fee HTLC transaction anchors variant, HTLCs can no longer
be trimmed due to their amount being too low to have a mempool-valid
HTLC transaction. Now they can only be trimmed based on the dust limit
of each party within the channel.
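
A sketch of the trim decision described above, simplified from the
BOLT 3 rules (function and parameter names here are illustrative):

```rust
/// Whether an HTLC would be "trimmed" (paid to fees rather than getting its
/// own output on the commitment transaction).
fn is_htlc_trimmed(
    htlc_amount_sat: u64,
    dust_limit_sat: u64,
    feerate_per_kw: u64,
    htlc_tx_weight: u64, // weight of the HTLC-success or HTLC-timeout tx
    zero_fee_htlc_anchors: bool,
) -> bool {
    if zero_fee_htlc_anchors {
        // HTLC transactions are zero-fee, so the commitment feerate no longer
        // matters: only the dust limit decides whether an HTLC is trimmed.
        htlc_amount_sat < dust_limit_sat
    } else {
        // Otherwise the fee needed for a mempool-valid HTLC transaction is
        // added to the threshold, so higher feerates trim more HTLCs.
        let htlc_tx_fee_sat = feerate_per_kw * htlc_tx_weight / 1000;
        htlc_amount_sat < dust_limit_sat + htlc_tx_fee_sat
    }
}
```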
There's no need to broadcast our local commitment transaction if we've
already seen a confirmed one, as it'll be immediately rejected as a
duplicate/conflict.
This will also help prevent dispatching spurious events for bumping
commitment and HTLC transactions through anchor outputs (once
implemented in future work), as the dispatch for said events follows
the same flow as our usual commitment broadcast.
As we remove the concept of a global "known/supported" feature set
in LDK, we should also remove the concept of a global "required"
feature set. This does so by moving the checks for specific
required features into handlers.
Specifically, it allows the handler `peer_connected` method to
return an `Err` if the peer should be disconnected. Only one such
required feature bit is currently set - `static_remote_key`, which
is required in `ChannelManager`.
In the next commit we'll enforce counterparty `InitFeatures`
matching our required set in `ChannelManager`, implying they must
be set for many tests where they previously did not need to be (as
they were enforced in `PeerManager`, which is not used in
functional tests).
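
A rough sketch of the handler-side check (types are simplified
stand-ins, and the real `peer_connected` also receives the peer's
node id):

```rust
/// Stand-in for the peer's advertised features in its `init` message.
struct InitFeatures { static_remote_key: bool }
struct Init { features: InitFeatures }

struct ChannelManager;
impl ChannelManager {
    /// Returning `Err(())` tells the PeerManager to disconnect the peer.
    fn peer_connected(&self, init_msg: &Init) -> Result<(), ()> {
        if !init_msg.features.static_remote_key {
            // `static_remote_key` is required by this handler.
            return Err(());
        }
        Ok(())
    }
}
```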