This resolves several user complaints (and issues in the sample
node) where startup is substantially delayed as we're always
waiting for the chain data to sync.
Further, in an upcoming PR, we'll be reloading pending payments
from ChannelMonitors on restart, at which point we'll need the
change here, which avoids handling events until after the user
has confirmed the `ChannelMonitor` has been persisted to disk.
This avoids a race where we:
* send a payment/HTLC (persisting the monitor to disk with the
HTLC pending),
* force-close the channel, removing the channel entry from the
ChannelManager entirely,
* persist the ChannelManager,
* connect a block which contains a fulfill of the HTLC, generating
a claim event,
* handle the claim event while the `ChannelMonitor` is being
persisted,
* persist the ChannelManager (before the ChannelMonitor is
persisted fully),
* restart, reloading the HTLC as a pending payment in the
ChannelManager, which now has no references to it except from
the ChannelMonitor which still has the pending HTLC,
* replay the block connection, generating a duplicate PaymentSent
event.
In the next commit, we'll be originating monitor updates both from
the ChainMonitor and from the ChannelManager, making simple
sequential update IDs impossible.
Further, the existing async monitor update API was somewhat hard to
work with - instead of being able to generate monitor_updated
callbacks whenever a persistence process finishes, you had to
ensure you only did so once all previous updates had also been
persisted.
Here we eat the complexity for the user by moving to an opaque
type for monitor updates, tracking which updates are in-flight for
the user and only generating monitor-persisted events once all
pending updates have been committed.
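As a rough sketch of the idea (the names here are illustrative, not
LDK's actual types): each in-flight persistence gets an opaque id,
and completion is only signaled once the per-monitor set of pending
ids drains.

    use std::collections::HashSet;

    /// Opaque handle identifying one in-flight monitor update.
    #[derive(Clone, Copy, PartialEq, Eq, Hash)]
    struct MonitorUpdateId(u64);

    struct MonitorState {
        /// Updates handed to the persister that haven't completed yet.
        pending_updates: HashSet<MonitorUpdateId>,
    }

    impl MonitorState {
        /// Called when the user reports a single persistence finished.
        /// Returns true only once *all* pending updates are done, i.e.
        /// when it is safe to emit one monitor-persisted event.
        fn update_completed(&mut self, id: MonitorUpdateId) -> bool {
            self.pending_updates.remove(&id);
            self.pending_updates.is_empty()
        }
    }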
In the next commit we'll need ChainMonitor to "see" when a monitor
persistence completes, which means `monitor_updated` needs to move
to `ChainMonitor`. The simplest way to then communicate that
information to `ChannelManager` is via `MonitorEvent`s, which seems
to line up ok, even if they're now constructed in multiple
different places.
Failed payments may be retried, but calling get_route may return a Route
with the same failing path. Add a routing::Score trait used to
parameterize get_route, which it calls to determine how much a channel
should be penalized, in terms of the msats a sender is willing to pay
to avoid the channel.
Also, add a Scorer struct that implements routing::Score with a
constant penalty. Subsequent changes will allow for more robust scoring
by feeding back payment path success and failure to the scorer via event
handling.
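Conceptually (the real routing::Score signature takes more context,
e.g. the channel's endpoints, and has changed across releases - this
is only a hedged sketch):

    trait Score {
        /// Penalty, in msats one is willing to pay to avoid routing
        /// through the given channel.
        fn channel_penalty_msat(&self, short_channel_id: u64) -> u64;
    }

    /// Applies the same fixed penalty to every channel.
    struct Scorer {
        base_penalty_msat: u64,
    }

    impl Score for Scorer {
        fn channel_penalty_msat(&self, _short_channel_id: u64) -> u64 {
            self.base_penalty_msat
        }
    }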
As ChainMonitor will need to see persistence errors in a coming PR,
we need to return errors via Persister so that our ChainMonitor
chain::Watch implementation sees them.
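Schematically, with simplified stand-ins for the actual persistence
trait and error type:

    /// Simplified stand-in for LDK's monitor-update error type.
    enum ChannelMonitorUpdateErr {
        TemporaryFailure,
        PermanentFailure,
    }

    trait Persist {
        /// Persistence failures are returned to the caller, so the
        /// chain::Watch implementation wrapping this persister can
        /// observe and propagate them.
        fn persist_monitor(&self) -> Result<(), ChannelMonitorUpdateErr>;
    }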
test_simple_monitor_permanent_update_fail and
test_simple_monitor_temporary_update_fail both have a mode where
they use either chain::Watch or persister to return errors.
As we won't be returning errors directly from the chain::Watch
wrapper in a coming commit, the chain::Watch-return form of the
test will no longer make sense.
Exposing a `RwLock<HashMap<>>` directly was always a bit strange,
and in upcoming changes we'd like to change the internal
datastructure in `ChainMonitor`.
Further, the use of `RwLock` and `HashMap` meant we weren't able
to expose the ChannelMonitors themselves to users in bindings,
leaving a bindings/rust API gap.
Thus, we take this opportunity to expose ChannelMonitors directly
via a wrapper, hiding the internals of `ChainMonitor` behind
getters. We also update tests to use the new API.
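The shape of the wrapper, with stand-in types (a hedged sketch of
the approach, not the exact API):

    use std::collections::HashMap;
    use std::sync::{RwLock, RwLockReadGuard};

    struct ChannelMonitor; // stand-in for the real monitor type
    #[derive(Clone, Copy, PartialEq, Eq, Hash)]
    struct OutPoint; // stand-in for the funding outpoint key

    /// Holds the internal read lock while derefing to one monitor,
    /// so users never touch the map or lock directly.
    struct LockedChannelMonitor<'a> {
        lock: RwLockReadGuard<'a, HashMap<OutPoint, ChannelMonitor>>,
        funding_txo: OutPoint,
    }

    impl<'a> std::ops::Deref for LockedChannelMonitor<'a> {
        type Target = ChannelMonitor;
        fn deref(&self) -> &ChannelMonitor {
            &self.lock[&self.funding_txo]
        }
    }

    struct ChainMonitor {
        monitors: RwLock<HashMap<OutPoint, ChannelMonitor>>,
    }

    impl ChainMonitor {
        fn get_monitor(&self, funding_txo: OutPoint)
            -> Result<LockedChannelMonitor<'_>, ()>
        {
            let lock = self.monitors.read().unwrap();
            if lock.contains_key(&funding_txo) {
                Ok(LockedChannelMonitor { lock, funding_txo })
            } else {
                Err(())
            }
        }
    }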
The interface for get_route will change to take a scorer. Using
get_route_and_payment_hash whenever possible allows for keeping the
scorer inside get_route_and_payment_hash rather than at every call site.
Replace get_route with get_route_and_payment_hash wherever possible.
Additionally, update get_route_and_payment_hash to use the known invoice
features and the sending node's logger.
This makes it more practical for users to track channels prior to
funding, especially if the channel fails because the peer rejects
it for a parameter mismatch.
When a channel is closed, if the funding transaction has yet to be
broadcast, a DiscardFunding event is issued along with the
ChannelClosed event.
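For illustration, a user's event handler might look roughly like
this (the enum below is a stand-in for the relevant LDK events):

    enum Event {
        DiscardFunding { channel_id: [u8; 32] },
        ChannelClosed { channel_id: [u8; 32] },
    }

    fn handle_event(event: Event) {
        match event {
            Event::DiscardFunding { channel_id } => {
                // The funding tx was never broadcast: drop it from any
                // pending-broadcast queue rather than airing it later.
                println!("discard funding for channel {:02x?}", channel_id);
            }
            Event::ChannelClosed { channel_id } => {
                println!("channel {:02x?} closed", channel_id);
            }
        }
    }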
If we attempt to send a payment, but the HTLC cannot be sent due to
local channel limits, we'll provide the user an error but end up
with an entry in our pending payment map. This will result in a
memory leak as we'll never reclaim the pending payment map entry.
This is because we want the ability to retry completely failed
payments.
Upcoming commits will remove these payments on timeout to prevent
DoS issues.
Also test that this removal allows retrying single-path payments.
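A hypothetical sketch of that cleanup (the tick counting and the
constant below are illustrative, not LDK's actual mechanics):

    use std::collections::HashMap;

    struct PendingPayment {
        ticks_since_failure: u8,
    }

    const TIMEOUT_TICKS: u8 = 7; // illustrative threshold

    fn timer_tick(pending_payments: &mut HashMap<u64, PendingPayment>) {
        // Drop fully-failed payments that have sat around long enough,
        // reclaiming the map entry and bounding memory use.
        pending_payments.retain(|_, payment| {
            payment.ticks_since_failure += 1;
            payment.ticks_since_failure <= TIMEOUT_TICKS
        });
    }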
When we are prepared to forward HTLCs, we generate a
PendingHTLCsForwardable event with a time in the future when the
user should tell us to forward. This provides some basic batching
of forward events, improving privacy slightly.
After we generate the event, we expect users to spawn a timer in
the background and let us know when it finishes. However, if the
user shuts down before the timer fires, the user will restart and
have no idea that HTLCs are waiting to be forwarded/received.
To fix this, instead of serializing PendingHTLCsForwardable events
to disk while they're pending (before the user starts the timer),
we simply regenerate them when a ChannelManager is deserialized
with HTLCs pending.
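The expected user flow looks roughly like this (using a std thread
for brevity; real applications typically use their runtime's timer,
and the Event stand-in below elides the other variants):

    use std::thread;
    use std::time::Duration;

    enum Event {
        PendingHTLCsForwardable { time_forwardable: Duration },
    }

    fn handle_event(event: Event) {
        match event {
            Event::PendingHTLCsForwardable { time_forwardable } => {
                thread::spawn(move || {
                    // Wait at least `time_forwardable` so forwards get
                    // batched, then tell the ChannelManager to process
                    // them, e.g. via process_pending_htlc_forwards().
                    thread::sleep(time_forwardable);
                });
            }
        }
    }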
Fixes #1042
We want to reuse send_payment internal functions for retries,
so some now need to be parameterized by PaymentId to avoid
generating a new PaymentId on retry.
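Schematically (the function names and signatures here are
illustrative only):

    #[derive(Clone, Copy, PartialEq, Eq, Hash)]
    struct PaymentId([u8; 32]);

    fn send_payment() -> PaymentId {
        let payment_id = PaymentId([0u8; 32]); // freshly generated in practice
        send_payment_internal(payment_id);
        payment_id
    }

    fn retry_payment(payment_id: PaymentId) {
        // Reuse the caller-supplied id rather than generating a new
        // one, so the retry is tracked under the same pending entry.
        send_payment_internal(payment_id);
    }

    fn send_payment_internal(_payment_id: PaymentId) { /* ... */ }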
If a counterparty (or an old channel of ours) uses a non-segwit
script for their cooperative close payout, they may include an
output which is unbroadcastable due to not meeting the network dust
limit.
Here we check for this condition, force-closing the channel instead
if we find an output in the closing transaction which does not meet
the limit.
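Conceptually, with simplified types (the real check accounts for the
script type's specific dust threshold; this sketch uses a single
constant):

    struct TxOut { value_sats: u64 }
    struct Transaction { output: Vec<TxOut> }

    const NETWORK_DUST_LIMIT_SATS: u64 = 546;

    fn check_closing_tx(closing_tx: &Transaction) -> Result<(), &'static str> {
        for out in &closing_tx.output {
            if out.value_sats < NETWORK_DUST_LIMIT_SATS {
                // Unbroadcastable close: caller should force-close.
                return Err("closing transaction output below dust limit");
            }
        }
        Ok(())
    }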
There is little reason for users to be paying out to non-Segwit
scripts when closing channels at this point. Given we will soon, in
rare cases, force-close during shutdown when a counterparty closes
to a non-Segwit script, we should also require it of our own users.
546 sats is the current default dust limit on most
implementations, matching the network dust limit for P2PKH outputs.
Implementations don't currently appear to send any larger dust
limits, and allowing a larger dust limit implies higher payment
failure risk, so we'd like to be as tight as we can here.
Associated types in the C bindings are somewhat of a misnomer - we
concretize each trait to a single struct. Thus, different trait
implementations must still have the same type, which defeats the
point of associated types.
In this particular case, however, we can reasonably special-case
the `Infallible` type, as an instance of it existing implies
something has gone horribly wrong.
In order to help our bindings code figure out how to do so when
referencing a parent trait's associated type, we specify the
explicit type in the implementation method signature.
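A toy example of the pattern - writing the concrete `Infallible`
type in the impl's method signature, rather than
`Self::CustomMessage`, which Rust permits and which gives the
bindings generator a concrete type to key off of:

    use core::convert::Infallible;

    trait CustomMessageReader {
        type CustomMessage;
        fn read(&self) -> Option<Self::CustomMessage>;
    }

    struct IgnoringMessageHandler;

    impl CustomMessageReader for IgnoringMessageHandler {
        type CustomMessage = Infallible;
        // The explicit `Option<Infallible>` here is what the bindings
        // special-case: an `Infallible` can never actually exist.
        fn read(&self) -> Option<Infallible> {
            None
        }
    }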
When we landed custom messages, we used the empty tuple for the
custom message type for `IgnoringMessageHandler`. This was fine,
except that we also implemented `Writeable` to panic when writing
a `()`. Later, we added support for anchor output construction in
CommitmentTransaction, signified by setting a field to `Some(())`,
which is serialized as-is.
This causes us to panic when writing a `CommitmentTransaction`
with `opt_anchors` set. Note that we never set it inside of LDK,
but downstream users may.
Instead, we implement `Writeable` to write nothing for `()` and use
`core::convert::Infallible` for the default custom message type as
it is, appropriately, unconstructable.
This also makes it easier to implement various things in bindings,
as we can always assume `Infallible`-conversion logic is
unreachable.
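Sketching both pieces against a simplified `Writeable` trait (the
real LDK trait differs in its writer type):

    use core::convert::Infallible;
    use std::io::{self, Write};

    trait Writeable {
        fn write<W: Write>(&self, writer: &mut W) -> io::Result<()>;
    }

    impl Writeable for () {
        // Write nothing: `Some(())` markers like `opt_anchors`
        // serialize as zero bytes instead of panicking.
        fn write<W: Write>(&self, _writer: &mut W) -> io::Result<()> {
            Ok(())
        }
    }

    impl Writeable for Infallible {
        fn write<W: Write>(&self, _writer: &mut W) -> io::Result<()> {
            // No Infallible value can exist, so this can never run.
            match *self {}
        }
    }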