This simplifies the TLV serialization read macro somewhat by
allowing callsites to simply read into an `Option<Vec>` instead of
needing to read into an `Option<VecReadWrapper>` when using
`vec_type`.
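As a rough illustration of the callsite shape this enables (the types and helper below are standalone stand-ins, not the actual LDK macros), the read macro can keep using a wrapper internally and map it away before assigning the field:
```
struct VecReadWrapper<T>(Vec<T>);

fn read_wrapped<T>(items: Vec<T>) -> VecReadWrapper<T> {
    // Stand-in for the wrapper-based TLV read the macro performs internally.
    VecReadWrapper(items)
}

// Callsites now hold a plain Option<Vec<T>>; the `.map(|w| w.0)` happens inside
// the macro rather than at every callsite.
fn read_vec_field<T>(items: Vec<T>) -> Option<Vec<T>> {
    Some(read_wrapped(items)).map(|w| w.0)
}

fn main() {
    let field: Option<Vec<u8>> = read_vec_field(vec![1u8, 2, 3]);
    assert_eq!(field, Some(vec![1, 2, 3]));
}
```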
Latest rustc nightly compiles are filled with warnings like the
following, which we fix here:
```
warning: trailing semicolon in macro used in expression position
--> lightning/src/util/macro_logger.rs:163:114
|
163 | $logger.log(&$crate::util::logger::Record::new($lvl, format_args!($($arg)+), module_path!(), file!(), line!()));
| ^
|
::: lightning/src/chain/chainmonitor.rs:165:9
|
165 | log_debug!(self.logger, "New best block {} at height {} provided via block_connected", header.block_hash(), height);
| -------------------------------------------------------------------------------------------------------------------- in this macro invocation
|
= note: `#[warn(semicolon_in_expressions_from_macros)]` on by default
= warning: this was previously accepted by the compiler but is being phased out; it will become a hard error in a future release!
= note: for more information, see issue #79813 <https://github.com/rust-lang/rust/issues/79813>
= note: this warning originates in the macro `log_internal` (in Nightly builds, run with -Z macro-backtrace for more info)
```
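For reference, a minimal standalone reproduction of the pattern and the usual fix, assuming the fix is simply wrapping the expansion in an extra block so it forms a well-formed expression (this is illustrative, not the LDK macro itself):
```
// A macro whose expansion used to end with a bare `println!(...);` statement
// triggers `semicolon_in_expressions_from_macros` when used in expression
// position. Wrapping the expansion in a block avoids the lint.
macro_rules! log_line {
    ($msg:expr) => {{
        println!("{}", $msg);
    }};
}

fn main() {
    // The macro can be used in expression position, e.g. as a match-arm body,
    // without the trailing-semicolon warning.
    match 1u8 {
        1 => log_line!("one"),
        _ => (),
    }
}
```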
This adds the new range-based closing_signed negotiation specified
in https://github.com/lightningnetwork/lightning-rfc/pull/847 as
well as cleans up the existing closing_signed negotiation to unify
the new codepaths and the old ones.
Note that the new range-based closing_signed negotiation allows
the channel fundee to ultimately select the fee out of a range
specified by the funder, from which we, of course, always select
the highest allowed amount. Thus, we've added an extra round of
closing_signed in the common case, as we will not simply accept
the first fee we see, always preferring to make the funder pay as
much as they're willing to.
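A minimal sketch of the fee-selection rule described above, assuming hypothetical parameter names and a simplified sanity check (this is not LDK's actual negotiation code):
```
// As the fundee, when the funder offers a [min, max] closing-fee range, pick the
// highest fee in the range that we're willing to let them pay (they pay it, not us).
fn select_closing_fee_sat(
    funder_min_sat: u64,
    funder_max_sat: u64,
    our_max_acceptable_sat: u64,
) -> Option<u64> {
    if funder_min_sat > funder_max_sat || funder_min_sat > our_max_acceptable_sat {
        // Range is nonsensical or entirely above what we'll accept; keep negotiating.
        return None;
    }
    Some(funder_max_sat.min(our_max_acceptable_sat))
}

fn main() {
    assert_eq!(select_closing_fee_sat(200, 1_000, 5_000), Some(1_000));
    assert_eq!(select_closing_fee_sat(200, 1_000, 600), Some(600));
    assert_eq!(select_closing_fee_sat(2_000, 1_000, 5_000), None);
}
```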
This adds a new TLV-based enum serialization macro entitled
`impl_writeable_tlv_based_enum_upgradable`. As the name implies,
the new macro allows us to ignore odd-numbered variant entries.
Because the new macro implements only `MaybeReadable` and not
`Readable`, it is not applicable in many contexts; here it is
only used for the two `OnchainEvent` structs.
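The following standalone sketch, with a simplified stand-in trait and a hypothetical enum, shows why `MaybeReadable` (whose read can yield `Ok(None)`) is needed to skip unknown odd-numbered variants, where `Readable` could only fail:
```
use std::io::{self, Cursor, Read};

// Simplified stand-in for the trait involved (not LDK's definition): a
// `MaybeReadable` read may return Ok(None) for an odd-numbered variant it does
// not understand, which a plain `Readable` (returning Self) could not express.
trait MaybeReadable: Sized {
    fn read<R: Read>(r: &mut R) -> io::Result<Option<Self>>;
}

enum OnchainEventLike {
    Known(u8),
}

impl MaybeReadable for OnchainEventLike {
    fn read<R: Read>(r: &mut R) -> io::Result<Option<Self>> {
        let mut tag = [0u8; 1];
        r.read_exact(&mut tag)?;
        match tag[0] {
            0 => {
                let mut v = [0u8; 1];
                r.read_exact(&mut v)?;
                Ok(Some(OnchainEventLike::Known(v[0])))
            }
            t if t % 2 == 1 => Ok(None), // unknown odd variant: ignore rather than fail
            _ => Err(io::Error::new(io::ErrorKind::InvalidData, "unknown even variant")),
        }
    }
}

fn main() -> io::Result<()> {
    let known = OnchainEventLike::read(&mut Cursor::new(vec![0u8, 42]))?;
    assert!(matches!(known, Some(OnchainEventLike::Known(42))));
    let unknown = OnchainEventLike::read(&mut Cursor::new(vec![3u8]))?;
    assert!(unknown.is_none());
    Ok(())
}
```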
This makes it much simpler to deal with `MaybeReadable` types in
`Vec`s in TLVs, as we can deal with them transparently as a `vec`,
with the wrapper doing the Right Thing.
This requires that we implement `MaybeReadable` for all `Readable`
types, which has some downstream implications, but nothing too bad.
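A sketch of the blanket impl this implies, using simplified trait shapes rather than LDK's exact definitions:
```
use std::io::{self, Cursor, Read};

trait Readable: Sized {
    fn read<R: Read>(r: &mut R) -> io::Result<Self>;
}

trait MaybeReadable: Sized {
    fn maybe_read<R: Read>(r: &mut R) -> io::Result<Option<Self>>;
}

// The blanket impl: everything Readable is trivially MaybeReadable, always
// yielding Some.
impl<T: Readable> MaybeReadable for T {
    fn maybe_read<R: Read>(r: &mut R) -> io::Result<Option<Self>> {
        Ok(Some(T::read(r)?))
    }
}

impl Readable for u8 {
    fn read<R: Read>(r: &mut R) -> io::Result<Self> {
        let mut b = [0u8; 1];
        r.read_exact(&mut b)?;
        Ok(b[0])
    }
}

fn main() -> io::Result<()> {
    // u8 only implements Readable here, but the blanket impl makes it usable
    // wherever a MaybeReadable is expected (e.g. inside a Vec-reading wrapper).
    assert_eq!(u8::maybe_read(&mut Cursor::new(vec![7u8]))?, Some(7));
    Ok(())
}
```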
This allows decode_tlv_stream!() to be called with either a mutable
reference to a stream or a stream itself and allows
encode_tlv_stream!() to be called with an excess `,` at the end of
the parameter list.
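A standalone macro-hygiene sketch of both conveniences (not the real `decode_tlv_stream!()`/`encode_tlv_stream!()` definitions): `$(,)?` tolerates an excess trailing comma, and taking `&mut $stream` inside the expansion accepts either a stream or an existing `&mut` to one, since std implements `Read` for `&mut R`:
```
macro_rules! read_all {
    ($stream: expr $(,)?) => {{
        let mut buf = Vec::new();
        std::io::Read::read_to_end(&mut $stream, &mut buf).map(|_| buf)
    }};
}

fn main() -> std::io::Result<()> {
    let mut owned = std::io::Cursor::new(vec![1u8, 2, 3]);
    let v1 = read_all!(owned)?; // stream passed directly
    let mut cursor = std::io::Cursor::new(vec![4u8, 5]);
    let mut by_ref = &mut cursor;
    let v2 = read_all!(by_ref,)?; // mutable reference, plus an excess trailing comma
    assert_eq!((v1, v2), (vec![1, 2, 3], vec![4, 5]));
    Ok(())
}
```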
Trimmed-to-dust HTLCs are at risk of being burnt as miner fees
at any time during their lifetime due to the broadcast of either
the holder's commitment transaction or the counterparty's.
To hedge against this risk, we introduce a new config setting,
`max_balance_dust_htlc_msat`, with an initial value of
5_000_000 msat.
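A hedged sketch of how such a knob might be enforced when accepting a new dust HTLC; only the field name and the 5_000_000 msat figure come from the text above, everything else is illustrative:
```
struct ChannelDustConfig {
    /// Upper bound, in millisatoshis, on the balance sitting in trimmed-to-dust
    /// HTLCs, i.e. value that would be burnt to miner fees if a commitment
    /// transaction hit the chain right now.
    max_balance_dust_htlc_msat: u64,
}

impl Default for ChannelDustConfig {
    fn default() -> Self {
        Self { max_balance_dust_htlc_msat: 5_000_000 }
    }
}

fn accept_new_dust_htlc(cfg: &ChannelDustConfig, current_dust_msat: u64, new_htlc_msat: u64) -> bool {
    current_dust_msat + new_htlc_msat <= cfg.max_balance_dust_htlc_msat
}

fn main() {
    let cfg = ChannelDustConfig::default();
    assert!(accept_new_dust_htlc(&cfg, 4_000_000, 900_000));
    assert!(!accept_new_dust_htlc(&cfg, 4_500_000, 600_000));
}
```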
When a shutdown script is omitted from open_channel or accept_channel,
it must be provided when sending shutdown. Generate the shutdown script
at channel closing time in this case rather than at channel opening.
This requires producing a ChannelMonitorUpdate with the shutdown script
since it is no longer known at ChannelMonitor creation.
KeysInterface::get_shutdown_pubkey is used to form P2WPKH shutdown
scripts. However, BOLT 2 allows for a wider variety of scripts. Refactor
KeysInterface to allow any supported script while still maintaining
serialization backwards compatibility with P2WPKH script pubkeys stored
simply as the PublicKey.
Add an optional TLV field to Channel and ChannelMonitor to support the
new format, but continue to serialize the legacy PublicKey format.
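A simplified sketch of the backwards-compatible shape described above (names and layout are illustrative, not LDK's actual types): the legacy PublicKey field can still be written whenever the shutdown script is P2WPKH, keeping old readers working, while other scripts can only be represented in the new optional TLV field:
```
enum ShutdownScriptInner {
    LegacyP2wpkh { compressed_pubkey: [u8; 33] },
    ArbitraryScript { script_bytes: Vec<u8> },
}

fn write_legacy_field(script: &ShutdownScriptInner) -> Option<[u8; 33]> {
    // Keep writing the old PublicKey-style field whenever the script is P2WPKH,
    // so pre-upgrade readers still understand the serialized channel.
    match script {
        ShutdownScriptInner::LegacyP2wpkh { compressed_pubkey } => Some(*compressed_pubkey),
        ShutdownScriptInner::ArbitraryScript { .. } => None,
    }
}

fn main() {
    let legacy = ShutdownScriptInner::LegacyP2wpkh { compressed_pubkey: [2u8; 33] };
    assert!(write_legacy_field(&legacy).is_some());

    let modern = ShutdownScriptInner::ArbitraryScript { script_bytes: vec![0x00, 0x14] };
    assert!(write_legacy_field(&modern).is_none());
    if let ShutdownScriptInner::ArbitraryScript { script_bytes } = &modern {
        // Such scripts can only be carried in the new optional TLV field.
        assert_eq!(script_bytes.len(), 2);
    }
}
```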
This allows TLV serialization macros to read non-Option-wrapped
types but allow them to be missing, filling them in with the
provided default value as needed.
It is useful for accounting and informational reasons for users to
be informed when a payment has been successfully forwarded. Thus,
when an HTLC which represents a forwarded leg is claimed, we
generate a new `PaymentForwarded` event.
This requires some additional plumbing to return HTLC values from
`OnchainEvent`s. Further, when we have to go on-chain to claim the
inbound side of the payment, we do not inform the user of the fee
reward, as we cannot calculate it until we see what is confirmed
on-chain.
Substantial code structure rewrites by:
Valentine Wallace <vwallace@protonmail.com>
This should provide some additional future extensibility by
allowing new informational events, which can be safely ignored,
to simply be skipped by older versions.
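A hedged sketch of the new `PaymentForwarded` event's shape as described above; the field names approximate what the text implies and may not match LDK's exact definition:
```
enum Event {
    PaymentForwarded {
        /// Fee (in msat) skimmed off this forward; None when the inbound side was
        /// claimed on-chain and the exact amount can't be computed yet.
        fee_earned_msat: Option<u64>,
        /// True if the inbound side was claimed via an on-chain transaction.
        claim_from_onchain_tx: bool,
    },
}

fn describe(ev: &Event) -> String {
    match ev {
        Event::PaymentForwarded { fee_earned_msat: Some(fee), .. } =>
            format!("forwarded a payment, earning {} msat", fee),
        Event::PaymentForwarded { fee_earned_msat: None, claim_from_onchain_tx } =>
            format!("forwarded a payment (on-chain claim: {}), fee not yet known", claim_from_onchain_tx),
    }
}

fn main() {
    println!("{}", describe(&Event::PaymentForwarded {
        fee_earned_msat: Some(1_000),
        claim_from_onchain_tx: false,
    }));
}
```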
Private nodes should never wish to forward HTLCs at all, which we
support here by disabling forwards out over private channels by
default. As private nodes should not have any public channels, this
suffices, without allowing users to disable forwarding over
channels announced in the routing graph already.
Closes #969
Currently the base fee we apply is always the expected cost to
claim an HTLC on-chain in case of closure. This results in
significantly higher than market rate fees [1], and doesn't really
match the actual forwarding trust model anyway - as long as
channel counterparties are honest, our HTLCs shouldn't end up
on-chain no matter what the HTLC sender/recipient do.
While some users may wish to use a feerate that implies they will
not lose funds even if they go to chain (assuming no flood-and-loot
style attacks), they should do so by calculating fees themselves;
since they're already charging well above market-rate,
over-estimating some won't have a large impact.
Worse, we currently re-calculate fees at forward-time, not based
on the fee we set in the channel_update. This means that the fees
others expect to pay us (and on which they base their route
calculations) are not what we actually want to charge, and that
any attempt to forward through us is inherently racy.
This commit adds a configuration knob to set the base fee
explicitly, defaulting to 1 sat, which appears to be market-rate
today.
[1] Note that due to an msat-vs-sat bug we currently actually
charge 1000x *less* than the calculated cost.
This was missed prior to 0.0.98, so requires a
backwards-compatibility wrapper inside the `Channel` serialization
logic, but it's not very complicated to do so.
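For context, the standard forwarding-fee calculation with a configurable base fee looks roughly like the following; the struct and field names are illustrative, with only the 1 sat (1,000 msat) default taken from the text above:
```
struct ForwardingFeeConfig {
    /// Flat fee charged on each forwarded HTLC, in millisatoshis.
    fee_base_msat: u32,
    /// Proportional fee, in millionths of the forwarded amount.
    fee_proportional_millionths: u32,
}

impl Default for ForwardingFeeConfig {
    fn default() -> Self {
        // 1 sat (1_000 msat) base fee, matching the new default described above.
        Self { fee_base_msat: 1_000, fee_proportional_millionths: 0 }
    }
}

fn forwarding_fee_msat(cfg: &ForwardingFeeConfig, amt_to_forward_msat: u64) -> u64 {
    cfg.fee_base_msat as u64
        + amt_to_forward_msat * cfg.fee_proportional_millionths as u64 / 1_000_000
}

fn main() {
    let cfg = ForwardingFeeConfig { fee_proportional_millionths: 100, ..Default::default() };
    // Forwarding 1_000_000 msat costs the 1_000 msat base plus 100 msat proportional.
    assert_eq!(forwarding_fee_msat(&cfg, 1_000_000), 1_100);
}
```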
If we are a public node and have a private channel, our
counterparty needs to know the fees which we will charge to forward
payments to them. Without sending them a channel_update, they have
no way to learn that information, resulting in the channel being
effectively useless for outbound-from-us payments.
This commit fixes our lack of channel_update messages to private
channel counterparties, ensuring we always send them a
channel_update after the channel funding is confirmed.
We changed the sort order of log levels to be more natural, but this
comparison wasn't updated accordingly. Likely the reason it was
left strange for so long is that it also had the comparison
argument ordering flipped.
For log entries which may have a variable level, we can't call an
arbitrary macro and need to be able to pass an explicit level. This
does so without breaking the compile-time disabling of certain log
levels.
Further, we "fix" the comparison order of log levels to make more
significant levels sort "higher", which implicitly makes more sense
than sorting "lower".
Finally, we remove the "Off" log level as no log entry should ever
be logged at the "Off" level - that would be nonsensical.
`Event::PaymentSent` serialization has a bug where we
double-serialize the payment_preimage field but do not attempt to
read it twice. This results in a failure to read `ChannelManager`s
from disk if we have an `Event::PaymentSent` pending
awaiting handling when we serialize.
Instead of attempting to read both versions, we opt to simply fix
the serialization, assuming it is exceedingly rare for such a
scenario to appear (and if it does, we can assist in manual
recovery).
The release notes have been updated to note this inconsistency.
Found by the chanmon_consistency fuzz target.
Previous to this PR, TLV serialization involved iterating from 0 to the highest
given TLV type. This worked until we decided to implement keysend, which has a
TLV type of ~5.48 billion.
So instead, we now specify the type of whatever is being (de)serialized, which
can be an Option, a Vec type, or a non-Option (specified in the serialization macros as "required").
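A rough sketch of the difference, with illustrative types and encodings (real TLV streams use BigSize varints; the ~5.48-billion figure is keysend's type from the note above): only the explicitly listed records are written, in ascending type order, rather than iterating every possible type:
```
fn encode_tlv_records(mut records: Vec<(u64, Vec<u8>)>) -> Vec<u8> {
    // TLV streams require ascending type order; sort the explicit list rather
    // than looping from 0 to the highest type.
    records.sort_by_key(|(t, _)| *t);
    let mut out = Vec::new();
    for (typ, value) in records {
        out.extend_from_slice(&typ.to_be_bytes());
        out.push(value.len() as u8);
        out.extend_from_slice(&value);
    }
    out
}

fn main() {
    let stream = encode_tlv_records(vec![
        (5_482_373_484, vec![0xAA; 4]), // keysend-sized type, per the note above
        (2, vec![0x01]),
    ]);
    // Records come out sorted by type: type 2 is written first.
    assert_eq!(&stream[..8], &2u64.to_be_bytes()[..]);
}
```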
Latest rustc warns that "this URL is not a hyperlink" and notes that
"bare URLs are not automatically turned into clickable links". This
resolves those warnings.
Thanks to Joshua Nelson for pointing out the correct syntax for
this.
VecReadWrapper is only used in TLVs so there is no need to prepend
a length before writing/reading the objects - we can instead simply
read until we reach the end of the TLV stream.
This substantially improves deserialization performance when LLVM
decides not to inline many short methods, e.g. when not building
with LTO/codegen-units=1.
Even with the default bench params of LTO/codegen-units=1, the
serialization benchmarks on an Intel 2687W v3 take:
test routing::network_graph::benches::read_network_graph ... bench: 1,955,616,225 ns/iter (+/- 4,135,777)
test routing::network_graph::benches::write_network_graph ... bench: 165,905,275 ns/iter (+/- 118,798)
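A simplified sketch of "read until the end of the record" (not LDK's reader types): because the enclosing TLV already carries the record's length, the wrapper can decode elements until the bytes run out instead of requiring a redundant count prefix:
```
fn read_u16s_to_end(mut record: &[u8]) -> Result<Vec<u16>, &'static str> {
    let mut out = Vec::new();
    while !record.is_empty() {
        if record.len() < 2 {
            return Err("trailing partial element");
        }
        out.push(u16::from_be_bytes([record[0], record[1]]));
        record = &record[2..];
    }
    Ok(out)
}

fn main() {
    assert_eq!(read_u16s_to_end(&[0, 1, 0, 2]), Ok(vec![1, 2]));
    assert!(read_u16s_to_end(&[0, 1, 9]).is_err());
}
```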
With the new `serialized_length()` method potentially being
significantly more efficient than `LengthCalculatingWriter`, this
commit ensures we call `serialized_length()` when calculating
the length of a larger struct.
Specifically, prior to this commit a call to
`serialized_length()` on a large object serialized with
`impl_writeable`, `impl_writeable_len_match`, or
`encode_varint_length_prefixed_tlv` (and
`impl_writeable_tlv_based`) would always serialize all inner fields
of that object using `LengthCalculatingWriter`. This would ignore
any `serialized_length()` overrides by inner fields. Instead, we
override `serialized_length()` on all of the above by calculating
the serialized size using calls to `serialized_length()` on inner
fields.
Further, writes to `LengthCalculatingWriter` should never fail as
its `write` method never returns an error. Thus, any write failures
indicate a bug in an object's write method or in our
object-creation sanity checking. We `.expect()` such write calls
here.
As of this commit, on an Intel 2687W v3, the serialization
benchmarks take:
test routing::network_graph::benches::read_network_graph ... bench: 2,039,451,296 ns/iter (+/- 4,329,821)
test routing::network_graph::benches::write_network_graph ... bench: 166,685,412 ns/iter (+/- 352,537)
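A standalone sketch of the override pattern, with a simplified `Writeable` trait rather than LDK's: the composite type sums its fields' `serialized_length()` results, so constant-size overrides on inner fields are respected instead of being bypassed by a `LengthCalculatingWriter`-style byte count:
```
trait Writeable {
    fn write(&self, out: &mut Vec<u8>);
    fn serialized_length(&self) -> usize {
        // Default: "write" into a counting buffer and report its length.
        let mut buf = Vec::new();
        self.write(&mut buf);
        buf.len()
    }
}

struct CompressedKeyLike([u8; 33]);

impl Writeable for CompressedKeyLike {
    fn write(&self, out: &mut Vec<u8>) { out.extend_from_slice(&self.0); }
    fn serialized_length(&self) -> usize { 33 } // constant-size override
}

struct TwoKeys { a: CompressedKeyLike, b: CompressedKeyLike }

impl Writeable for TwoKeys {
    fn write(&self, out: &mut Vec<u8>) { self.a.write(out); self.b.write(out); }
    // Override: sum the fields' own (possibly overridden) lengths rather than
    // re-serializing every field just to count bytes.
    fn serialized_length(&self) -> usize {
        self.a.serialized_length() + self.b.serialized_length()
    }
}

fn main() {
    let both = TwoKeys { a: CompressedKeyLike([2; 33]), b: CompressedKeyLike([3; 33]) };
    assert_eq!(both.serialized_length(), 66);
    let mut out = Vec::new();
    both.write(&mut out);
    assert_eq!(out.len(), both.serialized_length());
}
```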
When writing out libsecp256k1 objects during serialization in a
TLV, we potentially calculate the TLV length twice before
performing the actual serialization (once when calculating the
total TLV-stream length and once when calculating the length of the
secp256k1-object-containing TLV). Because the lengths of secp256k1
objects are constant, we'd ideally like LLVM to entirely optimize
out those calls and simply know the expected length. However,
without cross-language LTO, there is no way for LLVM to verify that
there are no side-effects of the calls to libsecp256k1, leaving
LLVM with no way to optimize them out.
This commit adds a new method to `Writeable` which returns the
length of an object once serialized. It is implemented by default
using `LengthCalculatingWriter` (which LLVM generally optimizes out
for Rust objects) and overrides it for libsecp256k1 objects.
As of this commit, on an Intel 2687W v3, the serialization
benchmarks take:
test routing::network_graph::benches::read_network_graph ... bench: 2,035,402,164 ns/iter (+/- 1,855,357)
test routing::network_graph::benches::write_network_graph ... bench: 308,235,267 ns/iter (+/- 140,202)
Now that our MSRV supports the native methods, we have no need
for the helpers anymore. Because LLVM was already matching our
byte_utils methods as byteswap functions, this should have no
impact on generated (optimized) code.
This removes most of the byte_utils usage, though some remains to
keep the patch size reasonable.
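An example of the replacement pattern; the old helper names in the comment are recalled from memory of the `byte_utils` module and may not be exact:
```
fn main() {
    let x: u64 = 0x0102_0304_0506_0708;
    // Previously: byte_utils::be64_to_array(x) / byte_utils::slice_to_be64(&bytes)
    // (helper names recalled from memory). The native methods do the same job and
    // compile to the byteswap/load patterns LLVM already recognized:
    let bytes = x.to_be_bytes();
    assert_eq!(bytes, [1, 2, 3, 4, 5, 6, 7, 8]);
    assert_eq!(u64::from_be_bytes(bytes), x);
}
```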
This makes it so that users cannot usefully implement their own
`EventsProvider`, as supporting that would require substantial new logic in the
bindings generator (for generic methods). In the case of
`EventsProvider`, because there are no Rust methods which accept an
`EventsProvider` as an argument, this is perfectly OK as the
generated code would be entirely unused anyway.