This field was previously useful for manual retries, letting users know when all
paths of a payment have failed and it is safe to retry. Now that we support
automatic retries in ChannelManager and no longer support manual retries, the
field is no longer useful.
For backwards compat, we now always write false for this field. If we didn't do
this, previous versions would default this field's value to true, which can be
problematic because some clients have relied on the field to indicate when a
full payment retry is safe.
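As a rough sketch of the compat trick (not LDK's actual TLV serialization code; names here are illustrative), the writer simply hard-codes `false` for the deprecated field so that old readers, which default a missing field to `true`, never see an unsafe value:

```rust
use std::io::{self, Write};

/// Illustrative writer for a payment-failure event. The deprecated
/// "all paths failed" flag is still written, but always as `false`, because
/// older readers default a missing field to `true` and may treat that as
/// "safe to retry the full payment".
fn write_payment_failed_event<W: Write>(
    writer: &mut W,
    payment_retryable: bool,
) -> io::Result<()> {
    let legacy_all_paths_failed = false; // never claim a full-payment retry is safe
    writer.write_all(&[legacy_all_paths_failed as u8])?;
    writer.write_all(&[payment_retryable as u8])?;
    Ok(())
}
```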
An overflow can occur when multiplying the offer amount by the requested
quantity when no amount is given in the request. Return an error instead
of overflowing.
An overflow can occur when multiplying the offer amount by the requested
quantity when checking if the given amount is enough. Return an error
instead of overflowing.
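Both checks boil down to the same guard; a minimal sketch, assuming plain `u64` millisatoshi amounts rather than the actual offers types:

```rust
/// Illustrative error type standing in for the offers semantic-error variant.
#[derive(Debug, PartialEq)]
struct InvalidAmount;

/// Multiply the per-item offer amount by the requested quantity, returning an
/// error rather than wrapping on overflow.
fn total_amount_msats(offer_amount_msats: u64, quantity: u64) -> Result<u64, InvalidAmount> {
    offer_amount_msats.checked_mul(quantity).ok_or(InvalidAmount)
}

fn main() {
    // A reasonable request succeeds...
    assert_eq!(total_amount_msats(10_000, 3), Ok(30_000));
    // ...while an overflowing one is rejected instead of silently wrapping.
    assert!(total_amount_msats(u64::MAX, 2).is_err());
}
```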
Fuzz testing bech32 decoding along with deserializing the underlying
message can result in overly exhaustive searches. Instead, the message
deserializations are now fuzzed separately. Add fuzzing for bech32
decoding.
In order to fuzz test Bech32Encode parsing independently of the underlying
message deserialization, the trait needs to be exposed. Conditionally
expose it only for fuzzing.
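One way to express that gating (a sketch with a simplified stand-in for the real trait and module layout) is to re-export the trait publicly only when building with `--cfg=fuzzing`:

```rust
// Crate-private module owning the parsing trait in normal builds.
mod parse {
    /// Simplified stand-in for the Bech32Encode trait used by offers parsing.
    pub trait Bech32Encode: Sized {
        /// Human-readable part expected at the front of the bech32 string.
        const BECH32_HRP: &'static str;
    }
}

// Fuzz builds (compiled with `--cfg=fuzzing`) re-export the trait so a fuzz
// target can drive bech32 parsing directly, without also exercising the much
// larger message-deserialization search space.
#[cfg(fuzzing)]
pub use parse::Bech32Encode;
```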
An invoice is serialized as a TLV stream and encoded as bytes. Add a
fuzz test that parses the TLV stream and deserializes the underlying
Invoice. Then compare the original bytes with those obtained by
re-serializing the Invoice.
An invoice request is serialized as a TLV stream and encoded as bytes.
Add a fuzz test that parses the TLV stream and deserializes the
underlying InvoiceRequest. Then compare the original bytes with those
obtained by re-serializing the InvoiceRequest.
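Both round-trip targets share the same shape; a hedged sketch, written generically because the exact offers parsing and `Writeable` APIs aren't reproduced here:

```rust
use std::convert::TryFrom;

/// Hypothetical bound standing in for "can be re-serialized to bytes"
/// (the real targets use LDK's Writeable trait on Invoice/InvoiceRequest).
trait ToBytes {
    fn to_bytes(&self) -> Vec<u8>;
}

/// Round-trip fuzz body: if the raw bytes parse as a valid TLV stream and
/// message, re-serializing must reproduce the input byte-for-byte, which
/// catches lossy or non-canonical (de)serialization.
fn do_round_trip_test<M>(data: &[u8])
where
    M: TryFrom<Vec<u8>> + ToBytes,
{
    if let Ok(msg) = M::try_from(data.to_vec()) {
        assert_eq!(msg.to_bytes(), data);
    }
}
```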
Forcing users to pass a genesis block hash has ended up being
error-prone largely due to byte-swapping questions for bindings
users. Further, our API is currently inconsistent - in
`ChannelManager` we take a `bitcoin::Network` but in `NetworkGraph`
we take the genesis block hash.
Luckily `NetworkGraph` is the only remaining place where we require
users pass the genesis block hash, so swapping it for a `Network`
is a simple change.
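After the change, constructing a `NetworkGraph` looks roughly like this (a sketch assuming the constructor now takes a `Network` plus a logger; exact module paths and trait signatures depend on the LDK and rust-bitcoin versions in use):

```rust
use bitcoin::network::constants::Network;
use lightning::routing::gossip::NetworkGraph;
use lightning::util::logger::{Logger, Record};

/// Throwaway logger just to satisfy the constructor in this sketch.
struct NullLogger;
impl Logger for NullLogger {
    fn log(&self, _record: &Record) {}
}

fn main() {
    let logger = NullLogger;
    // Previously this took the genesis block hash, which bindings users often
    // got wrong due to byte-order confusion; now the Network enum suffices and
    // the hash is derived internally.
    let _network_graph = NetworkGraph::new(Network::Bitcoin, &logger);
}
```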
Prior to this, we returned PaymentSendFailure from the auto-retry send-payment
methods. This implied that we might return a PartialFailure from them, which
has never been the case. So it makes sense to rework the errors to be a better
fit for the methods.
We're taking error handling in a totally different direction now to make it
more asynchronous, see send_payment_internal for more information.
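As a hedged illustration of the shape of such a non-partially-failing error (variant names here are illustrative, not necessarily what LDK ships):

```rust
/// Sketch of an error for the auto-retrying send methods: either the payment
/// is queued (and any failures surface later as events), or it is rejected
/// up front for one of these reasons. There is no PartialFailure variant.
#[derive(Debug)]
enum RetryableSendFailure {
    /// The invoice or payment parameters have already expired.
    PaymentExpired,
    /// No route to the destination could be found at all.
    RouteNotFound,
    /// A payment with the same payment_id is already in flight.
    DuplicatePayment,
}
```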
The `Channel::get_shutdown` docs are very clear - if the channel
jumps to `Shutdown` as a result of not being funded when we go to
initiate shutdown we should not generate a `ChannelMonitorUpdate`
as there's no need to bother with the shutdown script - we're
force-closing anyway.
However, this wasn't actually implemented, potentially generating a
spurious monitor update.
Building on the previous commits, this finishes our transition to
doing all message-sending in the monitor update completion
pipeline, unifying our immediate- and async- `ChannelMonitor`
update and persistence flows.
In the previous commit, we moved all our `ChannelMonitorUpdate`
pipelines to use a new async path via the
`handle_new_monitor_update` macro. This avoids having two message
sending pathways and simply sends messages in the "monitor update
completed" flow, which is shared between sync and async monitor
updates.
Here we reuse the new macro for handling `funding_signed` messages
when doing an initial `ChannelMonitor` persistence. This provides
a similar benefit, simplifying the code a trivial amount, but
importantly allows us to fully remove the original
`handle_monitor_update_res` macro.
We currently have two codepaths on most channel update functions -
most methods return a set of messages to send a peer iff the
`ChannelMonitorUpdate` succeeds, but if it does not we push the
messages back into the `Channel` and then pull them back out when
the `ChannelMonitorUpdate` completes and send them then. This adds
a substantial amount of complexity in very critical codepaths.
Instead, here we swap all our channel update codepaths to
immediately set the channel-update-required flag and only return a
`ChannelMonitorUpdate` to the `ChannelManager`. Internally in the
`Channel` we store a queue of `ChannelMonitorUpdate`s, which will
become critical in future work to surface pending
`ChannelMonitorUpdate`s to users at startup so they can complete.
This leaves some redundant work in `Channel` to be cleaned up
later. Specifically, we still generate the messages which we will
now ignore and regenerate later.
This commit updates the `ChannelMonitorUpdate` pipeline across all
the places we generate them.
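A hedged sketch of the queueing idea (field and method names are illustrative, not the exact `Channel` internals):

```rust
/// Stand-in for LDK's ChannelMonitorUpdate: the changes a ChannelMonitor must
/// durably persist before the channel may safely make further progress.
struct ChannelMonitorUpdate {
    update_id: u64,
}

/// Sketch of a channel that queues its in-flight monitor updates instead of
/// stashing pre-built peer messages for the ChannelManager to resend.
struct Channel {
    /// Updates handed to the ChannelManager whose persistence has not yet
    /// completed. Outbound messages are only generated once this is empty.
    pending_monitor_updates: Vec<ChannelMonitorUpdate>,
}

impl Channel {
    /// Called when persistence of `completed_id` (and everything before it)
    /// finishes; returns whether the channel is now free to generate and send
    /// its next messages.
    fn monitor_update_completed(&mut self, completed_id: u64) -> bool {
        self.pending_monitor_updates.retain(|upd| upd.update_id > completed_id);
        self.pending_monitor_updates.is_empty()
    }
}
```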
The TODO mentioned in `handle_monitor_update_res` about how we
might forget about HTLCs in case of permanent monitor update
failure still applies in spite of all our changes. If a channel is
dropped, monitor-pending updates may in general be lost if the
monitor update failed to persist.
This was always the case, and is ultimately the general form of
the specific TODO, so we simply leave comments there.
In a previous PR, we added a `MonitorUpdateCompletionAction` enum
which described actions to take after a `ChannelMonitorUpdate`
persistence completes. At the time, it was only used to execute
actions in-line, however in the next commit we'll start (correctly)
leaving the existing actions until after monitor updates complete.
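Conceptually, each action is "work deferred until persistence finishes"; a simplified sketch of the enum (the real variants carry richer types):

```rust
/// Simplified sketch of a completion action: side effects that must wait until
/// the corresponding ChannelMonitorUpdate is durably persisted.
enum MonitorUpdateCompletionAction {
    /// Only surface a payment-claimed notification once the claim (and its
    /// preimage) is persisted, so a crash can't lose it after the user was
    /// told the payment succeeded.
    PaymentClaimed { payment_hash: [u8; 32] },
    /// Emit some other queued user-facing event once the update completes.
    EmitEvent { event_description: String },
}
```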
Because we store some (not large, but not zero) state per-peer,
it's useful to limit the number of peers we have connected, at
least with some buffer.
Much more importantly, each channel has a relatively large cost,
especially around the `ChannelMonitor`s we have to build for each.
Thus, here, we limit the number of channels per-peer which aren't
(yet) on-chain, as well as limit the number of (inbound) peers
which don't have a (funded-on-chain) channel.
Fixes #1889
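A hedged sketch of the sort of accounting this involves (constants and names are illustrative, not LDK's actual limits):

```rust
/// Illustrative caps, not LDK's actual limit values.
const MAX_UNFUNDED_CHANNELS_PER_PEER: usize = 4;
const MAX_NO_CHANNEL_PEERS: usize = 250;

/// Minimal per-peer view for the sketch.
struct PeerState {
    unfunded_channel_count: usize,
    has_funded_channel: bool,
}

/// Decide whether to accept a new inbound channel open from `peer`, given how
/// many connected peers currently have no channel funded on-chain with us.
fn should_accept_inbound_channel(peer: &PeerState, peers_without_funded_channels: usize) -> bool {
    if peer.unfunded_channel_count >= MAX_UNFUNDED_CHANNELS_PER_PEER {
        // This peer already has too many channels that aren't (yet) on-chain.
        return false;
    }
    if !peer.has_funded_channel && peers_without_funded_channels >= MAX_NO_CHANNEL_PEERS {
        // Too many connected peers with no funded channel; don't take on more
        // per-peer state for this one.
        return false;
    }
    true
}
```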
Long ago, we used the `no_connection_possible` flag to signal that a
peer has some unknown feature set or that some other condition prevents
us from ever connecting to the given peer. In that case we'd
automatically force-close all channels with the given peer. This
was somewhat surprising to users so we removed the automatic
force-close, leaving the flag serving no LDK-internal purpose.
Distilling the concept of "can we connect to this peer again in the
future" to a simple flag turns out to be ripe with edge cases, so
users actually using the flag to force-close channels would likely
cause surprising behavior.
Thus, there's really not a lot of reason to keep the flag,
especially given it's untested and likely to be broken in subtle
ways anyway.
This fixes new errors in `full_stack_target` pointed out by
Chaincode's generous fuzzing infrastructure. Specifically, there's
no reason to check the error message in the
`funding_transaction_generated` return value - it can only return
a failure if the channel has closed since the funding transaction
was generated (which is fine) or if the signer refuses to sign
(which can't happen in fuzzing).
In general, we should be checking if a `Peer` has `their_features`
set as the "is this peer connected and have they finished the
handshake" flag as it indicates an `Init` message was received.
While none of these appear to be reachable bugs, there were a
number of places where we checked other flags for this purpose,
which may lead to sending messages before `Init` in the future.
Here we clean these cases up to always use the correct check (via
the new util method).
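A hedged sketch of the check the new util method centralizes (names illustrative):

```rust
/// Stand-in for the features a peer advertises in its Init message.
struct InitFeatures;

/// Minimal sketch of per-peer connection state.
struct Peer {
    /// Set once the peer's Init message has been received.
    their_features: Option<InitFeatures>,
}

impl Peer {
    /// The one canonical check for "has this peer finished the handshake":
    /// an Init message was received iff `their_features` is set.
    fn handshake_complete(&self) -> bool {
        self.their_features.is_some()
    }
}
```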
If we have a peer that sends a non-`Init` first message, we'll call
`peer_disconnected` without ever having called `peer_connected`
(which has to wait until we have an `Init` message). This is a
violation of our API guarantees, though it should generally not be an
issue.
Because this bug was repeated in a few places, we also take this
opportunity to DRY up the logic which checks the peer state before
calling `peer_disconnected`.
Found by the new `ChannelManager` assertions and the
`full_stack_target` fuzzer.
Over the next few commits, this macro will replace the
`handle_monitor_update_res` macro. It takes a different approach -
instead of receiving the message(s) that need to be re-sent after
the monitor update completes and pushing them back into the
channel, we'll not get the messages from the channel at all until
we're ready for them.
This will unify our message sending so that messages are only fetched
and sent in the common monitor-update-completed code,
rather than both there *and* in the functions that call `Channel`
when new messages are originated.
In order to support fully async `ChannelMonitor` updating, we need
to ensure that we can replay `ChannelMonitorUpdate`s if we shut
down after persisting a `ChannelManager` but without completing a
`ChannelMonitorUpdate` persistence. In order to support that we
(obviously) have to store the `ChannelMonitorUpdate`s in the
`ChannelManager`, which we do here inside the `Channel`.
We do so now because in the coming commits we will start using the
async persistence flow for all updates, and while we won't yet
support fully async monitor updating it's nice to get some of the
foundational structures in place now.
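A hedged sketch of how such stored updates can be replayed after a restart (types and fields heavily simplified):

```rust
/// Heavily simplified stand-ins for the real types.
struct ChannelMonitorUpdate { update_id: u64 }
struct ChannelMonitor { latest_update_id: u64 }

/// On startup, re-apply any ChannelMonitorUpdates the ChannelManager generated
/// whose persistence wasn't known to have completed before shutdown. Updates
/// the monitor has already applied are skipped by comparing update IDs.
fn replay_pending_updates(monitor: &mut ChannelMonitor, pending: &[ChannelMonitorUpdate]) {
    for update in pending {
        if update.update_id > monitor.latest_update_id {
            // In the real flow this re-applies and re-persists the update; the
            // sketch just advances the monitor's update counter.
            monitor.latest_update_id = update.update_id;
        }
    }
}
```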