The network serialization format for all messages was changed some
time ago to include a TLV suffix on every message. However, we never
bothered to implement it, as there isn't much use in validating a
TLV stream we have nothing to do with. Messages are now increasingly
making use of the TLV suffix feature, and there are compatibility
concerns when messages written as part of other structs have their
format changed (see the previous commit).
Thus, here we go ahead and convert most message serialization to a
new macro which includes a TLV suffix after a series of fields,
simplifying several serialization implementations in the process.
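As a rough illustration of the wire shape the macro produces (the
struct, field names, and one-byte TLV type/length here are
simplifications of this sketch; real lightning TLV records use
BigSize varints):

```rust
use std::io::{self, Write};

// Simplified sketch, not the actual macro expansion: fixed fields
// are written in order, then optional fields go in a TLV suffix.
struct ExampleMsg {
    channel_id: [u8; 32],
    htlc_id: u64,
    // Hypothetical optional field carried in the TLV suffix.
    extra_data: Option<[u8; 4]>,
}

impl ExampleMsg {
    fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
        // Fixed fields, exactly as in the pre-TLV format.
        w.write_all(&self.channel_id)?;
        w.write_all(&self.htlc_id.to_be_bytes())?;
        // TLV suffix: (type, length, value) records in increasing
        // type order; new optional fields get appended here without
        // changing the fixed-field layout.
        if let Some(ref data) = self.extra_data {
            w.write_all(&[1u8])?;              // type
            w.write_all(&[data.len() as u8])?; // length
            w.write_all(data)?;                // value
        }
        Ok(())
    }
}
```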
Going forward, all lightning messages have a TLV stream suffix,
allowing new fields to be added as needed. In the P2P protocol,
messages have an explicit length, so the TLV stream needs no length
of its own; it simply ends where the message does. HTLCFailureMsg
enum variants, however, contain messages with no size prefix and no
explicit end. Thus, if an HTLCFailureMsg is read as part of a
ChannelManager with a TLV stream at the end, there is no way to
differentiate between the end of the message and the next field(s)
in the ChannelManager.
Here we add two new variant values for HTLCFailureMsg in the read
path. This lets us switch to the new values if/when we add TLV
fields to UpdateFailHTLC or UpdateFailMalformedHTLC, while versions
shipped from this point on remain able to read those new fields.
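A sketch of the read-path idea, with hypothetical variant byte
values and helper names standing in for the real ones:

```rust
use std::io::{self, Read};

struct UpdateFailHtlc;          // Stand-ins: the real structs carry
struct UpdateFailMalformedHtlc; // the failure data described above.

enum HtlcFailureMsg {
    Relay(UpdateFailHtlc),
    Malformed(UpdateFailMalformedHtlc),
}

// `with_tlv_suffix` selects whether a length-delimited TLV stream
// follows the fixed fields; body parsing is elided in this sketch.
fn read_fail<R: Read>(_r: &mut R, _with_tlv_suffix: bool)
        -> io::Result<UpdateFailHtlc> {
    Ok(UpdateFailHtlc)
}
fn read_fail_malformed<R: Read>(_r: &mut R, _with_tlv_suffix: bool)
        -> io::Result<UpdateFailMalformedHtlc> {
    Ok(UpdateFailMalformedHtlc)
}

fn read_htlc_failure_msg<R: Read>(r: &mut R) -> io::Result<HtlcFailureMsg> {
    let mut variant = [0u8; 1];
    r.read_exact(&mut variant)?;
    match variant[0] {
        // Legacy values: no TLV suffix, since the message's end cannot
        // be found inside a larger ChannelManager serialization.
        0 => Ok(HtlcFailureMsg::Relay(read_fail(r, false)?)),
        1 => Ok(HtlcFailureMsg::Malformed(read_fail_malformed(r, false)?)),
        // New values: understood on read today, but only written once
        // TLV fields exist, so releases from now on can read them.
        2 => Ok(HtlcFailureMsg::Relay(read_fail(r, true)?)),
        3 => Ok(HtlcFailureMsg::Malformed(read_fail_malformed(r, true)?)),
        _ => Err(io::Error::new(io::ErrorKind::InvalidData,
                                "unknown HTLCFailureMsg variant")),
    }
}
```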
In order to avoid significant malloc traffic, messages previously
stated their serialized length explicitly, allowing for Vec
preallocation during the message serialization pipeline. This added
some complexity to the serialization code, but did avoid some
realloc() calls.
Instead, we drop all of that complexity here in favor of a fixed
2 KiB buffer for all message serialization. This should not only be
simpler, with a similar reduction in realloc() traffic, but may also
reduce heap fragmentation by allocating identically-sized buffers
more often.
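The resulting encode path is roughly the following (constant and
trait names are illustrative, not the real ones):

```rust
use std::io::{self, Write};

// Every message encoding starts from the same fixed-size allocation.
const MSG_BUF_ALLOC_SIZE: usize = 2048;

trait MessageWrite {
    fn write<W: Write>(&self, w: &mut W) -> io::Result<()>;
}

fn encode_msg<M: MessageWrite>(msg: &M) -> Vec<u8> {
    // One identically-sized allocation per message; the Vec only
    // realloc()s for the rare message that exceeds 2 KiB.
    let mut buf = Vec::with_capacity(MSG_BUF_ALLOC_SIZE);
    msg.write(&mut buf).expect("in-memory writes never fail");
    buf
}
```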
330 satoshis, the current value, is not sufficient to ensure that a
future segwit script longer than 32 bytes meets the dust limit if
used for a shutdown script. Thus, we can either check the value on
shutdown or we can simply require segwit outputs and a dust limit of
no less than 354 satoshis.
We raise the minimum dust limit to 354 satoshis here, and will
require segwit scripts in a future commit.
See https://github.com/lightningnetwork/lightning-rfc/issues/905
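For reference, 354 falls out of Bitcoin Core's dust-relay
arithmetic, which treats an output as dust if its value is less than
the cost of creating and spending it at the 3 sat/vB dust-relay
feerate; a worked version of the computation from the linked issue:

```rust
fn main() {
    // Largest future segwit scriptPubKey: version opcode + push + a
    // 40-byte witness program (the longest allowed by BIP 141).
    let script_pubkey_len = 2 + 40;
    // Serialized output: 8-byte value + 1-byte script length + script.
    let output_vbytes = 8 + 1 + script_pubkey_len; // = 51
    // Core's estimated vbytes to later spend a segwit output:
    // 36 (outpoint) + 4 (sequence) + 1 (scriptSig len) + 107/4 (witness).
    let input_vbytes = 36 + 4 + 1 + 107 / 4; // = 67
    // Dust limit: total vbytes at the 3 sat/vB dust-relay feerate,
    // i.e. (51 + 67) * 3 = 354 sats. A 32-byte P2WSH program yields
    // the old 330 figure by the same arithmetic.
    let dust_limit_sats = (output_vbytes + input_vbytes) * 3;
    assert_eq!(dust_limit_sats, 354);
}
```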
The common user desire is to get the set of claimable balances for
all non-closed channels. In order to do so, they really want to
just ask their `ChainMonitor` for the set of balances, which they
can do here by passing the `ChannelManager::list_channels` output
to `ChainMonitor::get_claimable_balances`.
In general, we should always allow users to query, at any time, how
much is currently in-flight being claimed on-chain. We do so here by
examining the confirmed on-chain claims and breaking down what is
left to be claimed into a new `ClaimableBalance` enum.
Fixes #995.
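The intended call pattern is roughly as follows; `channel_manager`
and `chain_monitor` are the user's existing objects, and the exact
argument shape of `get_claimable_balances` is an assumption of this
sketch:

```rust
// Sketch only: the precise signature may differ.
let channels = channel_manager.list_channels();
let balances = chain_monitor.get_claimable_balances(&channels);
for balance in balances {
    // Each entry is a `ClaimableBalance` variant describing funds on
    // (or headed for) chain and how/when they become claimable.
    println!("{:?}", balance);
}
```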
This tracks how any HTLC outputs in broadcast commitment
transactions are resolved on-chain, storing the result of the HTLC
resolution persistently in the ChannelMonitor.
This can be used to determine which outputs may still be available
for claiming on-chain.
Decorate the user-supplied EventHandler with NetGraphMsgHandler in
the BackgroundProcessor. The resulting handler will intercept
PaymentFailed events in order to update the NetworkGraph in the
background before delegating to the user's event handler.
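The composition is the usual decorator pattern; a minimal
self-contained sketch, with stand-in types for the real LDK ones:

```rust
/// Stand-in for lightning::util::events::Event.
struct Event;

trait EventHandler {
    fn handle_event(&self, event: &Event);
}

struct DecoratingEventHandler<G: EventHandler, U: EventHandler> {
    graph_handler: G, // e.g. a NetGraphMsgHandler applying NetworkUpdates
    user_handler: U,
}

impl<G: EventHandler, U: EventHandler> EventHandler
        for DecoratingEventHandler<G, U> {
    fn handle_event(&self, event: &Event) {
        // Apply any NetworkGraph updates first...
        self.graph_handler.handle_event(event);
        // ...then let the user react to the event as usual.
        self.user_handler.handle_event(event);
    }
}
```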
PaymentFailed events contain an optional NetworkUpdate describing
changes to the NetworkGraph as conveyed by a node along a failed payment
path according to BOLT 4. An EventHandler should apply the update to the
graph so that future routing decisions can account for it.
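Concretely, a handler would do something like the following
fragment; `apply_network_update` is a hypothetical stand-in for
whatever mutates the graph (below, NetGraphMsgHandler's own
EventHandler impl fills this role):

```rust
// Sketch only: `event` and `network_graph` come from the caller.
match event {
    Event::PaymentFailed { network_update: Some(update), .. } => {
        // Penalize or remove the failing channel/node so the router
        // avoids it on future payment attempts.
        apply_network_update(&network_graph, update);
    }
    _ => {},
}
```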
Implement EventHandler for NetGraphMsgHandler to update NetworkGraph.
Previously, NetGraphMsgHandler::handle_htlc_fail_channel_update
implemented this behavior.
MessageSendEvent::PaymentFailureNetworkUpdate served as a hack to pass
an HTLCFailChannelUpdate from ChannelManager to NetGraphMsgHandler via
PeerManager. Instead, remove the event entirely and move the contained
data (renamed NetworkUpdate) to Event::PaymentFailed to be processed by
an event handler.
CounterpartyForwardingInfo is public (previously exposed with a
`pub use`) and used inside ChannelCounterparty in channelmanager.rs.
However, it is defined in channel.rs, away from where it is used.
This would be fine, except that the bindings generator is somewhat
confused by it: it does not currently support interpreting `pub use`
as a struct to expose, and instead ignores it.
Fixes https://github.com/lightningdevkit/ldk-garbagecollected/issues/44
Now that NetworkGraph uses interior mutability, the RwLock used around
it in NetGraphMsgHandler is no longer needed. This allows for shared
ownership without a lock.
In preparation for giving NetworkGraph shared ownership, wrap individual
fields in RwLock. This allows removing the outer RwLock used in
NetGraphMsgHandler.
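A sketch of the resulting shape (entry types abbreviated, and nodes
keyed here by raw pubkey bytes purely for brevity):

```rust
use std::collections::BTreeMap;
use std::sync::RwLock;

struct ChannelInfo; // Stand-ins for the real graph entries.
struct NodeInfo;

pub struct NetworkGraph {
    // Each field carries its own lock, so a NetworkGraph can be
    // shared (e.g. behind an Arc) with no enclosing RwLock.
    channels: RwLock<BTreeMap<u64, ChannelInfo>>,
    nodes: RwLock<BTreeMap<[u8; 33], NodeInfo>>,
}

impl NetworkGraph {
    fn update_channel(&self, short_channel_id: u64, info: ChannelInfo) {
        // Interior mutability: a write lock on one field, no &mut self.
        self.channels.write().unwrap().insert(short_channel_id, info);
    }
}
```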
When communicating the maximum fee we're willing to accept on a
cooperative closing transaction to our peer, we currently tell them
we'll accept `u64::max_value()` if they're the ones who have to pay
it. Spec-wise this is fine - they aren't allowed to try to claim
our balance, and we don't care how much of their own funds they
want to spend on transaction fees.
However, the Eclair folks prefer to check that all values on the
wire do not exceed 21 million BTC, which seems like generally good
practice for avoiding overflows and similar issues. Thus, our close
messages are rejected by Eclair.
Here we simply reduce our stated maximum to the real limit: our
counterparty's current balance in satoshis.
Fixes #1071.
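Illustratively, the bound becomes (function name hypothetical):

```rust
// Cap the closing fee we'll accept at the most the counterparty
// could actually pay, rather than u64::max_value().
fn max_acceptable_closing_fee_sats(channel_value_sats: u64,
                                   our_balance_sats: u64) -> u64 {
    // Everything that isn't ours is theirs to spend on fees.
    channel_value_sats.saturating_sub(our_balance_sats)
}
```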