This removes one more place where we directly access the node_id
secret key in `ChannelManager`, slowly marching towards allowing
the node_id secret key to be offline in the signer.
More importantly, it allows more ChannelAnnouncement logic to move
into the `Channel` without having to pass the node secret key
around, avoiding the announcement logic being split across two
files.
A single PaymentSent event is generated when a payment is fulfilled.
This occurs when the preimage is revealed on the first claimed HTLC;
for subsequent HTLCs, the event is not generated.
In order to score channels involved with successful payments, the
scorer must be notified of each successful path involved in the payment.
Add a PaymentPathSuccessful event for this purpose. Generate it whenever
a part is removed from a pending outbound payment. This avoids duplicate
events when reconnecting to a peer.
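As a rough sketch of the flow described above, using simplified,
hypothetical event shapes (the real LDK events carry more fields): one
PaymentSent per payment, one PaymentPathSuccessful per path.

```rust
// Simplified, hypothetical event shapes -- not the actual LDK types.
enum Event {
    /// Generated once per payment, when the preimage is first revealed.
    PaymentSent { payment_hash: [u8; 32] },
    /// Generated once per path, as each part is removed from the
    /// pending outbound payment.
    PaymentPathSuccessful { payment_hash: [u8; 32], path: Vec<u64> },
}

fn handle_event(event: &Event) {
    match event {
        Event::PaymentSent { payment_hash } => {
            println!("payment {:02x?}... fulfilled", &payment_hash[..4]);
        }
        Event::PaymentPathSuccessful { path, .. } => {
            // Every hop (short channel id) on the successful path can
            // now be reported to the scorer.
            for short_channel_id in path {
                println!("channel {} succeeded", short_channel_id);
            }
        }
    }
}
```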
Previously, we would only reject inbound channels whose funder
couldn't meet our channel reserve on their first commitment
transaction if they had also failed to push enough to us for us to
meet their initial channel reserve.
There's not a lot of reason to care about us meeting their reserve,
however - it's largely expected that they may not push enough to us
in the initial open to meet it, and it's not actually our problem if
they don't.
Further, we used our own fee, instead of the channel's actual fee,
to calculate fee affordability of the initial commitment
transaction.
We resolve both issues here, rewriting the combined affordability
check conditionals in inbound channel open handling and adding a
fee affordability check for outbound channels as well.
The prior code may have allowed a counterparty to start the channel
with "no punishment" states - violating the reason for the reserve
threshold.
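A minimal sketch of the combined check, with illustrative names and a
simplified formula (not the actual handshake code):

```rust
/// Does the funder, at the channel's actual feerate, afford the first
/// commitment transaction's fee while still meeting the reserve we
/// selected? If not, it could start the channel with nothing at stake.
fn funder_affordable(
    funders_balance_msat: u64,
    commit_tx_fee_msat: u64, // computed at the channel's feerate, not ours
    holder_selected_reserve_msat: u64,
) -> bool {
    funders_balance_msat >= commit_tx_fee_msat + holder_selected_reserve_msat
}

fn main() {
    // Affordable: fee plus our selected reserve fits in the funder's balance.
    assert!(funder_affordable(1_000_000, 100_000, 500_000));
    // Not affordable: accepting this would allow "no punishment" states.
    assert!(!funder_affordable(1_000_000, 600_000, 500_000));
}
```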
In upcoming commits, we'll be making the payment secret and payment hash/preimage
derivable from info about the payment + a node secret. This means we don't
need to store any info about incoming payments and can eventually get rid of the
channelmanager::pending_inbound_payments map.
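One possible shape of such a scheme, sketched with the `hmac` and
`sha2` crates; the derivation LDK actually adopts may differ in detail.

```rust
use hmac::{Hmac, Mac};
use sha2::Sha256;

type HmacSha256 = Hmac<Sha256>;

/// Derive the payment secret from the payment hash and a node secret.
/// On receipt, re-derive and compare: a match proves we issued the
/// invoice, with no pending_inbound_payments entry required.
fn derive_payment_secret(node_secret: &[u8; 32], payment_hash: &[u8; 32]) -> [u8; 32] {
    let mut mac = HmacSha256::new_from_slice(node_secret)
        .expect("HMAC accepts keys of any length");
    mac.update(payment_hash);
    let mut secret = [0u8; 32];
    secret.copy_from_slice(mac.finalize().into_bytes().as_slice());
    secret
}
```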
Placing traits in top-level modules is somewhat confusing -
generally top-level modules are just organizational modules and
don't contain things themselves, instead placing traits and structs
in sub-modules. Further, it's incredibly awkward to have a `scorer`
sub-module containing only a single struct, with the relevant trait
for which it is the only implementation living somewhere else. Not
having `Score` in the `scorer` sub-module is further confusing
because `scorer` is the only module anywhere that references
scoring at all.
NetworkGraph is owned by NetGraphMsgHandler, but DefaultRouter requires
a reference to it. Introduce shared ownership to NetGraphMsgHandler so
that both can use the same NetworkGraph.
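Sketch of the shared-ownership shape, with stand-in types (field names
are illustrative):

```rust
use std::sync::Arc;

struct NetworkGraph; // stand-in for the real graph type

struct NetGraphMsgHandler { network_graph: Arc<NetworkGraph> }
struct DefaultRouter { network_graph: Arc<NetworkGraph> }

fn main() {
    let graph = Arc::new(NetworkGraph);
    // Both hold a clone of the Arc, so neither borrows from the other.
    let _handler = NetGraphMsgHandler { network_graph: Arc::clone(&graph) };
    let _router = DefaultRouter { network_graph: Arc::clone(&graph) };
}
```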
As payments fail, the channel responsible for the failure may be
penalized. Implement Scorer::payment_path_failed to penalize the failed
channel using a configured penalty. As time passes, the penalty is
reduced using exponential decay, though penalties will accumulate if the
channel continues to fail. The decay interval is also configurable.
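Illustrative decay math (the actual Scorer's parameters and formula may
differ): the accumulated penalty halves once per configured interval,
while fresh failures add the fixed penalty on top of whatever remains.

```rust
use std::time::Duration;

/// Halve the accumulated penalty once per elapsed decay interval.
fn decayed_penalty_msat(
    accumulated_msat: u64,
    elapsed: Duration,
    decay_interval: Duration,
) -> u64 {
    let half_lives = elapsed.as_secs() / decay_interval.as_secs().max(1);
    // checked_shr returns None once the shift exceeds the bit width,
    // at which point the penalty has fully decayed to zero.
    accumulated_msat.checked_shr(half_lives as u32).unwrap_or(0)
}

fn main() {
    let day = Duration::from_secs(86_400);
    // A 1024 msat penalty decays to 256 msat after two intervals.
    assert_eq!(decayed_penalty_msat(1024, 2 * day, day), 256);
}
```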
An upcoming Router interface will be used for finding a Route both when
initially sending a payment and also when retrying failed payment paths.
Unify the three varieties of get_route so the interface can consist of a
single method implemented by the new `find_route` method. Give get_route
pub(crate) visibility so it can still be used in tests.
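A sketch of what such an interface could look like; everything beyond
the `find_route` name is assumed.

```rust
// Stand-ins for the real route types.
struct Route;
struct RouteParameters;

/// The single-method interface that get_route's variants collapse into.
trait Router {
    fn find_route(
        &self,
        payer: &[u8; 33],
        params: &RouteParameters,
    ) -> Result<Route, &'static str>;
}
```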
A payee can be identified by a pubkey and optionally have an associated
set of invoice features and route hints. Use this in get_route instead
of three separate parameters. This may be included in PaymentPathFailed
later to use when finding a new route.
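Roughly, the described type bundles the three former parameters (field
names are illustrative):

```rust
struct InvoiceFeatures; // stand-ins for the real types
struct RouteHint;

struct Payee {
    /// The node id the payment is destined for.
    pubkey: [u8; 33],
    /// Features advertised in the payee's invoice, if any.
    features: Option<InvoiceFeatures>,
    /// Hints for routing through the payee's private channels.
    route_hints: Vec<RouteHint>,
}
```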
This resolves several user complaints (and issues in the sample
node) where startup is substantially delayed as we're always
waiting for the chain data to sync.
Further, in an upcoming PR, we'll be reloading pending payments
from ChannelMonitors on restart, at which point we'll need the
change here which avoids handling events until after the user
has confirmed the `ChannelMonitor` has been persisted to disk.
It will avoid a race where we
* send a payment/HTLC (persisting the monitor to disk with the
HTLC pending),
* force-close the channel, removing the channel entry from the
ChannelManager entirely,
* persist the ChannelManager,
* connect a block which contains a fulfill of the HTLC, generating
a claim event,
* handle the claim event while the `ChannelMonitor` is being
persisted,
* persist the ChannelManager (before the ChannelMonitor is
persisted fully),
* restart, reloading the HTLC as a pending payment in the
ChannelManager, which now has no references to it except from
the ChannelMonitor which still has the pending HTLC,
* replay the block connection, generating a duplicate PaymentSent
event.
In the next commit, we'll be originating monitor updates both from
the ChainMonitor and from the ChannelManager, making simple
sequential update IDs impossible.
Further, the existing async monitor update API was somewhat hard to
work with - instead of being able to generate monitor_updated
callbacks whenever a persistence process finishes, you had to
ensure you only did so at least once all previous updates had also
been persisted.
Here we eat the complexity for the user by moving to an opaque
type for monitor updates, tracking which updates are in-flight for
the user and only generating monitor-persisted events once all
pending updates have been committed.
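The bookkeeping amounts to a per-monitor set of in-flight handles; a
rough sketch follows (types and names are illustrative, not the actual
API):

```rust
use std::collections::{HashMap, HashSet};

/// Opaque handle for one in-flight persistence call (sketch only).
#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct MonitorUpdateId(u64);

/// Tracks outstanding updates per monitor; a "monitor persisted"
/// event fires only once the pending set drains.
#[derive(Default)]
struct InFlightUpdates {
    pending: HashMap<[u8; 32], HashSet<MonitorUpdateId>>,
}

impl InFlightUpdates {
    fn begin(&mut self, monitor: [u8; 32], id: MonitorUpdateId) {
        self.pending.entry(monitor).or_default().insert(id);
    }

    /// Returns true when this completion was the last one outstanding,
    /// i.e. when it is safe to surface a persistence-complete event.
    fn complete(&mut self, monitor: [u8; 32], id: MonitorUpdateId) -> bool {
        if let Some(set) = self.pending.get_mut(&monitor) {
            set.remove(&id);
            if set.is_empty() {
                self.pending.remove(&monitor);
                return true;
            }
        }
        false
    }
}
```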
In the next commit we'll need ChainMonitor to "see" when a monitor
persistence completes, which means `monitor_updated` needs to move
to `ChainMonitor`. The simplest way to then communicate that
information to `ChannelManager` is via `MonitorEvent`s, which seems
to line up ok, even if they're now constructed in multiple
different places.
Failed payments may be retried, but calling get_route may return a Route
with the same failing path. Add a routing::Score trait used to
parameterize get_route, which it calls to determine how much a channel
should be penalized, in terms of the msats one is willing to pay to
avoid the channel.
Also, add a Scorer struct that implements routing::Score with a
constant penalty. Subsequent changes will allow for more robust scoring
by feeding back payment path success and failure to the scorer via event
handling.
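A minimal sketch of the trait and the constant-penalty implementation
(the real signature differs and has since evolved):

```rust
/// Sketch of the described trait.
trait Score {
    /// Penalty, in msats one is willing to pay, to avoid routing
    /// through the given channel.
    fn channel_penalty_msat(&self, short_channel_id: u64) -> u64;
}

/// Constant-penalty implementation, as in the initial Scorer.
struct Scorer { base_penalty_msat: u64 }

impl Score for Scorer {
    fn channel_penalty_msat(&self, _short_channel_id: u64) -> u64 {
        self.base_penalty_msat
    }
}
```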
In order to avoid significant malloc traffic, messages previously
explicitly stated their serialized length allowing for Vec
preallocation during the message serialization pipeline. This added
some amount of complexity in the serialization code, but did avoid
some realloc() calls.
Instead, here, we drop all the complexity in favor of a fixed 2KiB
buffer for all message serialization. This should not only be
simpler with a similar reduction in realloc() traffic, but also
may reduce heap fragmentation by allocating identically-sized
buffers more often.
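Conceptually (a sketch, not the actual serialization pipeline):

```rust
/// Start every message serialization with the same 2 KiB capacity
/// instead of precomputing exact lengths per message type.
fn serialize_message(write: impl Fn(&mut Vec<u8>)) -> Vec<u8> {
    // Nearly all LN messages fit in 2 KiB, so this usually avoids any
    // realloc; identically-sized allocations may also fragment less.
    let mut buf = Vec::with_capacity(2048);
    write(&mut buf);
    buf
}

fn main() {
    let msg = serialize_message(|buf| buf.extend_from_slice(b"\x00\x12pong"));
    assert!(msg.capacity() >= 2048);
}
```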
330 sats, the current value, is not sufficient to ensure a
future segwit script longer than 32 bytes meets the dust limit if
used for a shutdown script. Thus, we can either check the value
on shutdown or we can simply require segwit outputs and require a
dust limit of no less than 354 sats.
We swap the minimum dust limit to 354 sats here, requiring
segwit scripts in a future commit.
See https://github.com/lightningnetwork/lightning-rfc/issues/905
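The arithmetic follows Bitcoin Core's dust rule: 3 sat/vbyte times the
output size plus the estimated size of a future input spending it
(67 vbytes for a segwit input).

```rust
fn main() {
    // P2WSH (32-byte program): output = 8 (amount) + 1 (script len)
    // + 34 (script) = 43 bytes -> (43 + 67) * 3 = 330 sats.
    assert_eq!((43 + 67) * 3, 330);
    // Longest future segwit program (40 bytes): output = 8 + 1 + 42
    // = 51 bytes -> (51 + 67) * 3 = 354 sats.
    assert_eq!((51 + 67) * 3, 354);
}
```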
PaymentFailed events contain an optional NetworkUpdate describing
changes to the NetworkGraph as conveyed by a node along a failed payment
path according to BOLT 4. An EventHandler should apply the update to the
graph so that future routing decisions can account for it.
Implement EventHandler for NetGraphMsgHandler to update NetworkGraph.
Previously, NetGraphMsgHandler::handle_htlc_fail_channel_update
implemented this behavior.
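Sketched with simplified shapes (the real NetworkUpdate has more
variants, and the real graph methods take richer arguments):

```rust
// Simplified stand-in; the real NetworkUpdate has more variants.
enum NetworkUpdate {
    ChannelFailure { short_channel_id: u64, is_permanent: bool },
}

struct NetworkGraph;
impl NetworkGraph {
    fn channel_failed(&self, scid: u64, permanent: bool) {
        println!("channel {} failed (permanent: {})", scid, permanent);
    }
}

/// Applying the update keeps future routing away from the failing hop.
fn handle_payment_failed(graph: &NetworkGraph, update: Option<NetworkUpdate>) {
    if let Some(NetworkUpdate::ChannelFailure { short_channel_id, is_permanent }) = update {
        graph.channel_failed(short_channel_id, is_permanent);
    }
}
```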
MessageSendEvent::PaymentFailureNetworkUpdate served as a hack to pass
an HTLCFailChannelUpdate from ChannelManager to NetGraphMsgHandler via
PeerManager. Instead, remove the event entirely and move the contained
data (renamed NetworkUpdate) to Event::PaymentFailed to be processed by
an event handler.
Now that NetworkGraph uses interior mutability, the RwLock used around
it in NetGraphMsgHandler is no longer needed. This allows for shared
ownership without a lock.
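The pattern, in miniature (field and method names are illustrative):

```rust
use std::collections::BTreeMap;
use std::sync::{Arc, RwLock};

/// The graph locks its own maps internally, so callers can share it
/// via Arc without an outer RwLock.
struct NetworkGraph {
    channels: RwLock<BTreeMap<u64, ()>>,
}

impl NetworkGraph {
    fn insert_channel(&self, scid: u64) {
        // &self suffices: mutation happens behind the interior lock.
        self.channels.write().unwrap().insert(scid, ());
    }
}

fn main() {
    let graph = Arc::new(NetworkGraph { channels: RwLock::new(BTreeMap::new()) });
    let shared = Arc::clone(&graph);
    shared.insert_channel(42);
}
```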