KeysInterface::get_shutdown_pubkey is used to form P2WPKH shutdown
scripts. However, BOLT 2 allows for a wider variety of scripts. Refactor
KeysInterface to allow any supported script while still maintaining
serialization backwards compatibility with P2WPKH script pubkeys stored
simply as the PublicKey.
Add an optional TLV field to Channel and ChannelMonitor to support the
new format, but continue to serialize the legacy PublicKey format.
This allows the TLV serialization macros to read non-Option-wrapped
types while still permitting them to be missing, filling them in with
the provided default value as needed.
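As a rough sketch of the idea (the struct and field below are hypothetical illustrations, not the actual macro syntax), a field stored as an optional TLV can still map back onto a plain, non-Option struct field by substituting a default when the record is absent:

```rust
// Hypothetical example: `counterparty_feature_flags` stands in for any
// non-Option field that older serialized data may simply not contain.
struct ChannelData {
    counterparty_feature_flags: u64,
}

fn deserialize_channel_data(read_flags: Option<u64>) -> ChannelData {
    ChannelData {
        // If the TLV was missing from the stream, fall back to the
        // default value provided at the macro call site (here, 0).
        counterparty_feature_flags: read_flags.unwrap_or(0),
    }
}
```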
It is useful for accounting and informational reasons for users to
be informed when a payment has been successfully forwarded. Thus,
when an HTLC which represents a forwarded leg is claimed, we
generate a new `PaymentForwarded` event.
This requires some additional plumbing to return HTLC values from
`OnchainEvent`s. Further, when we have to go on-chain to claim the
inbound side of the payment, we do not inform the user of the fee
reward, as we cannot calculate it until we see what is confirmed
on-chain.
Substantial code structure rewrites by:
Valentine Wallace <vwallace@protonmail.com>
This should provide some additional future extensibility, allowing
new informational events to be added later and safely ignored by
older versions.
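A sketch of consuming the new event from an event handler might look like the following (the variant's field names here are assumed from this description; check the actual `Event::PaymentForwarded` definition in your version):

```rust
use lightning::util::events::Event;

fn handle_event(event: &Event) {
    match event {
        // `fee_earned_msat` is an Option because, when we had to go on-chain
        // to claim the inbound side, the fee cannot be computed until we see
        // what actually confirms.
        Event::PaymentForwarded { fee_earned_msat, claim_from_onchain_tx } => {
            match fee_earned_msat {
                Some(fee) => println!("forwarded a payment, earned {} msat", fee),
                None => println!("forwarded a payment, fee unknown (on-chain claim: {})",
                    claim_from_onchain_tx),
            }
        },
        _ => {},
    }
}
```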
Private nodes should never wish to forward HTLCs at all, which we
support here by disabling forwards out over private channels by
default. As private nodes should not have any public channels, this
suffices without needing to let users disable forwarding over
channels already announced in the routing graph.
Closes #969
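For example, a public node that genuinely wants to keep forwarding payments out to private peers can opt back in via its config (a minimal sketch; the field name is assumed from this description):

```rust
use lightning::util::config::UserConfig;

fn make_config() -> UserConfig {
    let mut config = UserConfig::default();
    // Forwards out over unannounced (private) channels are now off by default;
    // flip this to keep relaying payments to private channel counterparties.
    config.accept_forwards_to_priv_channels = true;
    config
}
```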
Currently the base fee we apply is always the expected cost to
claim an HTLC on-chain in case of closure. This results in
significantly higher than market rate fees [1], and doesn't really
match the actual forwarding trust model anyway - as long as
channel counterparties are honest, our HTLCs shouldn't end up
on-chain no matter what the HTLC sender/recipient do.
While some users may wish to use a feerate that implies they will
not lose funds even if they go to chain (assuming no flood-and-loot
style attacks), they should do so by calculating fees themselves;
since they're already charging well above market rate,
over-estimating somewhat won't have a large impact.
Worse, we currently re-calculate fees at forward-time rather than
using the fee we set in the channel_update. This means that the fees
others expect to pay us (and on which they base their route
calculation) are not what we actually want to charge, and that any
attempt to forward through us is inherently racy.
This commit adds a configuration knob to set the base fee
explicitly, defaulting to 1 sat, which appears to be market-rate
today.
[1] Note that due to an msat-vs-sat bug we currently actually
charge 1000x *less* than the calculated cost.
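A minimal sketch of using the knob (the field names here are assumed from this description; the default of 1 sat corresponds to 1000 msat):

```rust
use lightning::util::config::UserConfig;

fn config_with_higher_base_fee() -> UserConfig {
    let mut config = UserConfig::default();
    // Charge a 5 sat base fee per forwarded HTLC instead of the
    // 1 sat (1000 msat) default.
    config.channel_options.forwarding_fee_base_msat = 5_000;
    config
}
```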
This was missed prior to 0.0.98, so it requires a
backwards-compatibility wrapper inside the `Channel` serialization
logic, but doing so is not very complicated.
If we are a public node and have a private channel, our
counterparty needs to know the fees which we will charge to forward
payments to them. Without sending them a channel_update, they have
no way to learn that information, resulting in the channel being
effectively useless for outbound-from-us payments.
This commit fixes our lack of channel_update messages to private
channel counterparties, ensuring we always send them a
channel_update after the channel funding is confirmed.
We changed the sort order of log levels to be more natural, but this
comparison wasn't updated accordingly. Likely the reason it went
unnoticed for so long is that it also had the comparison argument
ordering flipped.
For log entries which may have a variable level, we can't call an
arbitrary macro and need to be able to pass an explicit level. This
does so without breaking the compile-time disabling of certain log
levels.
Further, we "fix" the comparison order of log levels to make more
significant levels sort "higher", which implicitly makes more sense
than sorting "lower".
Finally, we remove the "Off" log level as no log entry should ever
be logged at the "Off" level - that would be nonsensical.
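An illustrative sketch of the ordering fix (this is not the exact lightning `Level` enum, just the pattern): deriving `Ord` on an enum listed from least to most significant makes more significant levels compare higher, and dropping an `Off` variant means every entry carries a real level:

```rust
#[derive(PartialEq, Eq, PartialOrd, Ord, Debug)]
enum Level {
    Trace,
    Debug,
    Info,
    Warn,
    Error,
}

fn main() {
    // More significant levels now sort "higher".
    assert!(Level::Error > Level::Warn);
    assert!(Level::Trace < Level::Info);
}
```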
`Event::PaymentSent` serialization has a bug where we
double-serialize the payment_preimage field but do not attempt to
read it twice. This results in a failure to read `ChannelManager`s
from disk if we have an `Event::PaymentSent` pending, awaiting
handling, when we serialize.
Instead of attempting to read both versions, we opt to simply fix
the serialization, assuming it is exceedingly rare for such a
scenario to appear (and if it does, we can assist in manual
recovery).
The release notes have been updated to note this inconsistency.
Found by the chanmon_consistency fuzz target.
Previous to this PR, TLV serialization involved iterating from 0 to the highest
given TLV type. This worked until we decided to implement keysend, which has a
TLV type of ~5.48 billion.
Instead, we now specify the type of whatever is being (de)serialized, which
can be an Option, a Vec type, or a non-Option (specified in the serialization macros as "required").
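A rough, generic illustration of the difference (stand-in code, not the actual macro expansion; the TLV type numbers are the BOLT payment_data and keysend values):

```rust
use std::collections::BTreeMap;

// Simplified TLV record writer; real TLV streams encode the type and length
// as BigSize varints rather than fixed-width integers.
fn write_tlv(out: &mut Vec<u8>, tlv_type: u64, value: &[u8]) {
    out.extend_from_slice(&tlv_type.to_be_bytes());
    out.extend_from_slice(&(value.len() as u64).to_be_bytes());
    out.extend_from_slice(value);
}

// Old approach: walk every candidate type number up to the largest in use,
// which becomes ~5.48 billion iterations once keysend's type exists.
fn write_tlvs_old_style(fields: &BTreeMap<u64, Vec<u8>>, out: &mut Vec<u8>) {
    let max_type = *fields.keys().max().unwrap_or(&0);
    for tlv_type in 0..=max_type {
        if let Some(value) = fields.get(&tlv_type) {
            write_tlv(out, tlv_type, value);
        }
    }
}

// New approach: each field declares how it is handled (required, Option,
// Vec, ...) and is written directly under its own type number.
fn write_tlvs_new_style(payment_data: &[u8], keysend_preimage: Option<&[u8; 32]>, out: &mut Vec<u8>) {
    write_tlv(out, 8, payment_data);                    // "required" field
    if let Some(preimage) = keysend_preimage {
        write_tlv(out, 5_482_373_484, &preimage[..]);   // optional keysend field
    }
}
```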
Latest rustc warns that "this URL is not a hyperlink" and notes that
"bare URLs are not automatically turned into clickable links". This
resolves those warnings.
Thanks to Joshua Nelson for pointing out the correct syntax for
this.
VecReadWrapper is only used in TLVs so there is no need to prepend
a length before writing/reading the objects - we can instead simply
read until we reach the end of the TLV stream.
This substantially improves deserialization performance when LLVM
decides not to inline many short methods, eg when not building
with LTO/codegen-units=1.
Even with the default bench params of LTO/codegen-units=1, the
serialization benchmarks on an Intel 2687W v3 take:
test routing::network_graph::benches::read_network_graph ... bench: 1,955,616,225 ns/iter (+/- 4,135,777)
test routing::network_graph::benches::write_network_graph ... bench: 165,905,275 ns/iter (+/- 118,798)
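The read side of the change amounts to something like the following sketch (generic code, not the actual wrapper): keep reading elements until the bytes backing the enclosing TLV record run out:

```rust
use std::io::{self, Read};

fn read_u32s_until_end<R: Read>(mut reader: R) -> io::Result<Vec<u32>> {
    let mut values = Vec::new();
    loop {
        let mut buf = [0u8; 4];
        match reader.read_exact(&mut buf) {
            Ok(()) => values.push(u32::from_be_bytes(buf)),
            // A clean end-of-stream at an element boundary terminates the Vec;
            // the enclosing TLV record's length bounds the reader.
            Err(e) if e.kind() == io::ErrorKind::UnexpectedEof => break,
            Err(e) => return Err(e),
        }
    }
    Ok(values)
}
```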
With the new `serialized_length()` method potentially being
significantly more efficient than `LengthCalculatingWriter`, this
commit ensures we call `serialized_length()` when calculating
the length of a larger struct.
Specifically, prior to this commit a call to
`serialized_length()` on a large object serialized with
`impl_writeable`, `impl_writeable_len_match`, or
`encode_varint_length_prefixed_tlv` (and
`impl_writeable_tlv_based`) would always serialize all inner fields
of that object using `LengthCalculatingWriter`. This would ignore
any `serialized_length()` overrides by inner fields. Instead, we
override `serialized_length()` on all of the above by calculating
the serialized size using calls to `serialized_length()` on inner
fields.
Further, writes to `LengthCalculatingWriter` should never fail as
its `write` method never returns an error. Thus, any write failures
indicate a bug in an object's write method or in our
object-creation sanity checking. We `.expect()` such write calls
here.
As of this commit, on an Intel 2687W v3, the serialization
benchmarks take:
test routing::network_graph::benches::read_network_graph ... bench: 2,039,451,296 ns/iter (+/- 4,329,821)
test routing::network_graph::benches::write_network_graph ... bench: 166,685,412 ns/iter (+/- 352,537)
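Schematically, the change gives composite types a `serialized_length()` that sums their fields' `serialized_length()` calls rather than re-serializing everything through a counting writer (simplified stand-in types, not the actual lightning trait):

```rust
// Minimal stand-in trait: only the piece relevant to this change.
trait SerializedLength {
    fn serialized_length(&self) -> usize;
}

// A fixed-size field can report its length as a constant...
struct PubKeyBytes([u8; 33]);
impl SerializedLength for PubKeyBytes {
    fn serialized_length(&self) -> usize { 33 }
}

// ...and a composite struct's override now sums its fields' lengths, so the
// per-field overrides are actually used instead of a full counting pass.
struct ChannelAnnouncementLike {
    node_id_1: PubKeyBytes,
    node_id_2: PubKeyBytes,
}
impl SerializedLength for ChannelAnnouncementLike {
    fn serialized_length(&self) -> usize {
        self.node_id_1.serialized_length() + self.node_id_2.serialized_length()
    }
}
```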
When writing out libsecp256k1 objects during serialization in a
TLV, we potentially calculate the TLV length twice before
performing the actual serialization (once when calculating the
total TLV-stream length and once when calculating the length of the
secp256k1-object-containing TLV). Because the lengths of secp256k1
objects are constant, we'd ideally like LLVM to entirely optimize
out those calls and simply know the expected length. However,
without cross-language LTO, there is no way for LLVM to verify that
there are no side-effects of the calls to libsecp256k1, leaving
LLVM with no way to optimize them out.
This commit adds a new method to `Writeable` which returns the
length of an object once serialized. It is implemented by default
using `LengthCalculatingWriter` (which LLVM generally optimizes out
for Rust objects) and overrides it for libsecp256k1 objects.
As of this commit, on an Intel 2687W v3, the serialization
benchmarks take:
test routing::network_graph::benches::read_network_graph ... bench: 2,035,402,164 ns/iter (+/- 1,855,357)
test routing::network_graph::benches::write_network_graph ... bench: 308,235,267 ns/iter (+/- 140,202)
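A simplified sketch of the mechanism (stand-in types; not the actual lightning trait definitions): the default `serialized_length()` "writes" into a byte counter, while a fixed-size secp256k1-style key overrides it with a constant:

```rust
use std::io::{self, Write};

// Counts bytes instead of storing them; writes to it cannot meaningfully fail.
struct LengthCalculatingWriter(usize);
impl Write for LengthCalculatingWriter {
    fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
        self.0 += buf.len();
        Ok(buf.len())
    }
    fn flush(&mut self) -> io::Result<()> { Ok(()) }
}

trait Writeable {
    fn write<W: Write>(&self, writer: &mut W) -> io::Result<()>;
    // Default: serialize into the counter and report how many bytes resulted.
    fn serialized_length(&self) -> usize {
        let mut counter = LengthCalculatingWriter(0);
        self.write(&mut counter).expect("counting writes never fail");
        counter.0
    }
}

// A compressed public key always serializes to exactly 33 bytes, so the
// counting pass (and the opaque call into libsecp256k1) can be skipped.
struct CompressedPubKey([u8; 33]);
impl Writeable for CompressedPubKey {
    fn write<W: Write>(&self, writer: &mut W) -> io::Result<()> {
        writer.write_all(&self.0)
    }
    fn serialized_length(&self) -> usize { 33 }
}
```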
Now that our MSRV supports the native methods, we have no need
for the helpers anymore. Because LLVM was already matching our
byte_utils methods as byteswap functions, this should have no
impact on generated (optimized) code.
This removes most of the byte_utils usage, though some remains to
keep the patch size reasonable.
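For illustration, conversions that previously went through byte_utils-style helpers now use the standard integer methods directly (the old helper names in the comments are approximate):

```rust
fn amount_to_bytes(amount_msat: u64) -> [u8; 8] {
    // Previously roughly: byte_utils::be64_to_array(amount_msat)
    amount_msat.to_be_bytes()
}

fn amount_from_bytes(bytes: [u8; 8]) -> u64 {
    // Previously roughly: byte_utils::slice_to_be64(&bytes)
    u64::from_be_bytes(bytes)
}
```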
This makes it so that users cannot usefully implement their own
`EventsProvider`, which would require substantial new logic in the
bindings generator (for generic methods). In the case of
`EventsProvider`, because there are no Rust methods which accept an
`EventsProvider` as an argument, this is perfectly OK as the
generated code would be entirely unused anyway.
This also includes a `VecWriteWrapper` and `VecReadWrapper` which
implement serialization for any `Readable`/`Writeable` type that is
in a Vec. We do this instead of implementing `Readable`/`Writeable`
directly as there isn't always a universally-defined way to serialize
a Vec, and this makes things more explicit.
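A rough sketch of the wrapper pattern (generic, simplified code): the encoding for a `Vec` lives on an explicit newtype wrapper, so callers opt into a particular encoding deliberately rather than getting a blanket impl on `Vec<T>` itself:

```rust
use std::io::{self, Write};

// Wraps a slice purely to give it a serialization method; no encoding is
// imposed on Vec<T> directly.
struct VecWriteWrapper<'a, T>(&'a [T]);

impl<'a, T: AsRef<[u8]>> VecWriteWrapper<'a, T> {
    fn write<W: Write>(&self, writer: &mut W) -> io::Result<()> {
        for item in self.0 {
            writer.write_all(item.as_ref())?;
        }
        Ok(())
    }
}
```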
Finally, this tweaks the existing macros (and the new ones) to
support a trailing `,` after a list, eg
`write_tlv_fields!(stream, {(0, a),}, {});` whereas previously the
trailing `,` after the `(0, a)` would be a compile-error.
Currently our serialization is very compact, and contains version
numbers to indicate which versions the code can read a given
serialized struct. However, if you want to add a new field without
needlessly breaking the ability of previous versions of the code to
read the struct, there is not a good way to do so.
This adds dummy, currently empty, TLVs to the major structs we
serialize out for users, providing an easy place to put new
optional fields without breaking previous versions.
We currently generate duplicative PaymentFailed/PaymentSent events
in two cases:
a) If we receive a update_fulfill_htlc message, followed by a
disconnect, then a resend of the same update_fulfill_htlc
message, we will generate a PaymentSent event for each message.
b) When a Channel is closed, any outbound HTLCs which were relayed
through it are simply dropped when the Channel is. From there,
the ChannelManager relies on the ChannelMonitor having a copy of
the relevant fail-/claim-back data and processes the HTLC
fail/claim when the ChannelMonitor tells it to.
If, due to an on-chain event, an HTLC is failed/claimed, and
then we serialize the ChannelManager, but do not re-serialize
the relevant ChannelMonitor, we may end up getting a duplicative
event.
In order to provide the expected consistency, we add explicit
tracking of pending outbound payments using their unique
session_priv field which is generated when the payment is sent.
Then, before generating PaymentFailed/PaymentSent events, we check
that the session_priv for the payment is still pending.
This fixes #209.
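Conceptually, the tracking works like the following sketch (illustrative names, not the actual `ChannelManager` fields): events are only generated while the payment's session_priv is still in the pending set, so replays and monitor-driven duplicates are dropped:

```rust
use std::collections::HashSet;

struct PendingOutboundPayments {
    // Keyed by the session_priv generated when the payment was sent.
    session_privs: HashSet<[u8; 32]>,
}

impl PendingOutboundPayments {
    // Returns true only the first time a given payment resolves; a replayed
    // update_fulfill_htlc or a duplicate ChannelMonitor-driven claim/fail
    // returns false and no PaymentSent/PaymentFailed event is generated.
    fn mark_resolved(&mut self, session_priv: &[u8; 32]) -> bool {
        self.session_privs.remove(session_priv)
    }
}
```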