To keep the logic in the `sign` module from becoming convoluted as we
add multiple signer types, we're splitting each signer type out into
its own submodule, following the `taproot.rs` example from a previous
commit.
ChannelSignerType is an enum that contains variants of all currently
supported signer types. Given that those signer types are enumerated
as associated types in multiple places, it is prudent to denote one
type as the authority on signer types.
SignerProvider seemed like the best option. Thus, instead of
ChannelSignerType declaring the associated types itself, it simply
uses their definitions from SignerProvider.
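A minimal sketch of that relationship, using simplified stand-ins rather than the real LDK definitions:

```rust
// Illustrative trait: the provider is the single authority on signer types.
trait SignerProvider {
    type EcdsaSigner;
    #[cfg(taproot)]
    type TaprootSigner;
}

// The enum reuses the provider's associated types instead of declaring its own.
enum ChannelSignerType<SP: SignerProvider> {
    Ecdsa(SP::EcdsaSigner),
    #[cfg(taproot)]
    Taproot(SP::TaprootSigner),
}
```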
Previously, SignerProvider was not laid out to support multiple signer
types. However, with the distinction between ECDSA and Taproot signers,
we now need to account for SignerProvider implementations supporting both.
This approach does mean that if we ever introduce another signer type
in the future, all implementers of SignerProvider would need to add it
as an associated type, and would also need to write a set of dummy
implementations for any Signer trait they do not wish to support.
For the time being, the TaprootSigner associated type is cfg-gated.
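Continuing the sketch above, the implementer-side consequence might look roughly like this (types here are illustrative): a provider that only supports ECDSA channels still has to name a Taproot signer type, pointing it at a dummy, though the `cfg`-gate keeps that out of default builds for now.

```rust
struct MyKeysManager;
struct MyEcdsaSigner;
#[cfg(taproot)]
struct MyDummyTaprootSigner; // would carry erroring/panicking trait impls

impl SignerProvider for MyKeysManager {
    type EcdsaSigner = MyEcdsaSigner;
    #[cfg(taproot)]
    type TaprootSigner = MyDummyTaprootSigner;
}
```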
For Taproot support, we need to define an alternative trait to
EcdsaChannelSigner. This trait will be implemented by all signers
that wish to support Taproot channels.
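A hedged sketch of the rough shape such a trait could take; the method name and signature below are illustrative, not the actual LDK API (in practice the trait would also need to cover MuSig2 partial signatures for the shared Taproot funding output):

```rust
use bitcoin::secp256k1::schnorr::Signature as SchnorrSignature;

pub trait TaprootChannelSigner {
    // Produce a Schnorr signature over a commitment transaction, where an
    // ECDSA signer would have produced an ECDSA signature.
    fn sign_commitment_keyspend(&self, commitment_tx_bytes: &[u8]) -> Result<SchnorrSignature, ()>;
}
```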
In 7f0fd868ad, `channel_keys_id` was
added as an argument to `SignerProvider::get_destination_script`,
allowing implementors to generate a new script for each channel.
This is great; however, users then have no way to re-derive the
corresponding private key when they ultimately receive a
`SpendableOutputDescriptor::StaticOutput`. Instead, they have to
track all the addresses as they derive them separately. In many
cases this is fine, but we should support both approaches, which
we do here by simply including the missing `channel_keys_id` for
the user.
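A rough sketch of what this enables on the user side; the struct and derivation below are hypothetical stand-ins, not LDK's actual types or key-derivation scheme:

```rust
struct StaticOutput {
    // ... outpoint and txout fields omitted ...
    channel_keys_id: [u8; 32],
}

// Hypothetical re-derivation: because the descriptor now carries `channel_keys_id`,
// the key behind the destination script can be recomputed from the node's seed on
// demand, instead of tracking every derived address at the time it was handed out.
fn rederive_destination_key(seed: &[u8; 32], output: &StaticOutput) -> [u8; 32] {
    let mut key = [0u8; 32];
    for i in 0..32 {
        // Stand-in mixing step; a real implementation would mirror whatever
        // derivation its `get_destination_script` used originally.
        key[i] = seed[i] ^ output.channel_keys_id[i];
    }
    key
}
```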
Currently all channel keys and their basepoints exist uniformly as the
`PublicKey` type, which not only makes it harder for a developer to
distinguish those entities, but also does not engage the language's
type system to check that the correct key is being used in any
particular function.
Having struct wrappers around keys also enables more nuanced
semantics, allowing us to express Lightning protocol rules in the
language itself. For example, the code allows deriving an `HtlcKey`
from an `HtlcBasepoint` but not from a `PaymentBasepoint`.
This change is transparent to channel monitors, which will use the
inner public key of each wrapper.
Payment, DelayedPayment, HTLC, and Revocation basepoints and their
derived keys are now wrapped in specific structs that the Rust type
system can distinguish. Functions that require a specific key or
basepoint should not take a generic `PublicKey`, but instead the
specific wrapper struct, engaging Rust's type checking and making it
clearer to developers which key is used.
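A condensed sketch of the wrapper pattern (simplified relative to the actual LDK types, which also take a secp context and perform the real tweak derivation):

```rust
use bitcoin::secp256k1::PublicKey;

pub struct HtlcBasepoint(pub PublicKey);
pub struct HtlcKey(pub PublicKey);
pub struct PaymentBasepoint(pub PublicKey);

impl HtlcKey {
    // Only an `HtlcBasepoint` can produce an `HtlcKey`; there is deliberately no
    // constructor taking a `PaymentBasepoint`, so mixing them up fails to compile.
    pub fn from_basepoint(basepoint: &HtlcBasepoint, per_commitment_point: &PublicKey) -> HtlcKey {
        // Real derivation (tweaking by SHA256(per_commitment_point || basepoint))
        // omitted; the point here is the type-level restriction.
        let _ = per_commitment_point;
        HtlcKey(basepoint.0)
    }
}
```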
Now that we use the `rust-bitcoin` `WitnessProgram` to check our
addresses, we can just rely on it, rather than checking the program
length and version.
- If a path within a route passes through the same channel ID twice,
the path contains a loop and will be rejected by forwarding nodes.
- Add a check to explicitly reject such payments before attempting to
send them (a sketch of the check follows below).
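A minimal sketch of the duplicate-channel check, assuming a simplified path represented as a list of short channel IDs (the real hop type carries more fields):

```rust
use std::collections::HashSet;

fn path_has_duplicate_channel(hops: &[u64]) -> bool {
    let mut seen = HashSet::new();
    // If any short channel ID appears twice, the path loops back through the same
    // channel and would be rejected by forwarding nodes, so refuse to send it.
    hops.iter().any(|scid| !seen.insert(*scid))
}
```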
In other languages (Java and C#, notably), overriding `Eq` without
overriding `Hash` can lead to surprising or broken behavior. Even
in Rust, it's usually the case that you actually want both. Here we
add missing `Hash` derivations for P2P messages, to at least
address the first pile of warnings the C# compiler dumps.
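Illustrative example of the change, using one small message type; the exact set of derives varies per message:

```rust
// `Hash` added alongside the existing equality derives so languages that couple
// equality and hashing (Java, C#) see consistent behavior in the bindings.
#[derive(Clone, Debug, PartialEq, Eq, Hash)]
pub struct Ping {
    pub ponglen: u16,
    pub byteslen: u16,
}
```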
Implementations of standard traits on arrays longer than 32 elements
shipped in rustc 1.47, which is below our MSRV of 1.48, so we can use
them to remove the unnecessary manual implementation of `PartialEq`
on `OnionPacket`.
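For illustration, this is the kind of simplification that change allows (field layout abbreviated relative to the real `OnionPacket`):

```rust
// With rustc >= 1.47, `PartialEq`/`Eq` can simply be derived even though the
// packet holds a 20 * 65 = 1300-byte array, so the manual impl can go away.
#[derive(Clone, PartialEq, Eq)]
pub struct OnionPacket {
    pub hop_data: [u8; 20 * 65],
}
```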
If we remove an HTLC (or fee update), commit, and receive our
counterparty's `revoke_and_ack`, we remove all knowledge of said
HTLC (or fee update). However, the latest local commitment
transaction that we can broadcast still contains the HTLC (or old
fee), thus we are not yet eligible to initiate the `closing_signed`
negotiation if we're shutting down, and should generally expect a
counterparty `commitment_signed` immediately.
Because we don't have any tracking of these updates in the `Channel`
(only the `ChannelMonitor` is aware of the HTLC being in our latest
local commitment transaction), we'd previously send a
`closing_signed` too early, causing LDK<->LDK channels with an HTLC
pending towards the channel initiator at the time of `shutdown` to
always fail to cooperatively close.
To fix this race, we add an additional unpersisted bool to
`Channel` and use that to gate sending the initial `closing_signed`.
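A rough sketch of the gating (field and method names here are hypothetical, not the exact `Channel` fields):

```rust
struct ShutdownGate {
    // Set while the counterparty still owes us a `commitment_signed` that removes
    // the HTLC / fee update from our latest local commitment; intentionally an
    // unpersisted flag, per the description above.
    expecting_peer_commitment_signed: bool,
    shutdown_negotiation_ready: bool,
}

impl ShutdownGate {
    fn can_send_initial_closing_signed(&self) -> bool {
        self.shutdown_negotiation_ready && !self.expecting_peer_commitment_signed
    }
}
```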
Quite a while ago we added checks for the total current dust
exposure on a channel to explicitly limit dust inflation attacks.
When we did this, we kept the existing upper bound on the channel's
feerate in place. However, these two things are redundant - the
point of the feerate upper bound is to prevent dust inflation, and
it does so in a crude way that can cause spurious force-closures.
Here we simply drop the upper bound entirely, relying instead on the
dust exposure limit to prevent dust inflation.
This breaks backwards compatibility with versions of LDK prior to
0.0.113 as they expect to always read signer data.
This also substantially reduces allocations during `ChannelManager`
serialization, as we currently don't pre-allocate the `Vec` that
the signer gets written in to. We could alternatively pre-allocate
that `Vec`, but we've been set up to skip the write entirely for a
while, and 0.0.113 was released nearly a year ago; downgrades to LDK
0.0.112 or earlier should not be expected at this point.
When we check gossip message signatures, there's no reason to
serialize out the full gossip message before hashing, and it
generates a lot of allocations during the initial startup when we
fetch the full gossip from peers.
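One way to get that effect, sketched under the assumption of a serialization API that writes into any `std::io::Write` sink (the protocol's actual digest is a double-SHA256; plain SHA256 is used below for brevity):

```rust
use bitcoin::hashes::{sha256, Hash, HashEngine};
use std::io::{self, Write};

// Wrap a hash engine so bytes "serialized" into it are hashed on the fly,
// with no intermediate Vec holding the full message.
struct HashWriter(sha256::HashEngine);

impl Write for HashWriter {
    fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
        self.0.input(buf);
        Ok(buf.len())
    }
    fn flush(&mut self) -> io::Result<()> { Ok(()) }
}

fn hash_message(serialize_into: impl FnOnce(&mut HashWriter)) -> sha256::Hash {
    let mut writer = HashWriter(sha256::Hash::engine());
    serialize_into(&mut writer); // the message serializes itself straight into the hash
    sha256::Hash::from_engine(writer.0)
}
```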
When we forward gossip messages, we store them in a separate buffer
before we encrypt them (and commit to the order in which they'll
appear on the wire). Rather than storing that buffer encoded with
no headroom, requiring re-allocating to add the message length and
two MAC blocks, we here add the headroom prior to pushing it into
the gossip buffer, avoiding an allocation.
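A sketch of encoding with the headroom reserved up front, using the BOLT 8 framing sizes (2-byte encrypted length plus two 16-byte Poly1305 tags):

```rust
const LEN_PREFIX: usize = 2 + 16; // encrypted length field plus its MAC
const TAG_SUFFIX: usize = 16;     // MAC over the encrypted message body

// Reserve space for the prefix and suffix when the message is first encoded, so
// the later encryption step can fill them in place instead of reallocating.
fn encode_with_headroom(msg: &[u8]) -> Vec<u8> {
    let mut buf = Vec::with_capacity(LEN_PREFIX + msg.len() + TAG_SUFFIX);
    buf.resize(LEN_PREFIX, 0);             // headroom for the length field + its MAC
    buf.extend_from_slice(msg);            // plaintext message body
    buf.resize(buf.len() + TAG_SUFFIX, 0); // headroom for the trailing MAC
    buf
}
```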
When buffering outbound messages for peers, `LinkedList` adds
rather substantial allocation overhead, which we avoid here by
swapping for a `VecDeque`.
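In sketch form the buffer type simply changes; a `VecDeque` amortizes its allocations across many messages, whereas a `LinkedList` allocates a fresh node per message:

```rust
use std::collections::VecDeque;

struct PeerOutbound {
    // Previously a LinkedList<Vec<u8>>; same push-back/pop-front usage pattern.
    pending_outbound_buffer: VecDeque<Vec<u8>>,
}
```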
When decrypting P2P messages, we already have a read buffer that we
read the message into. There's no reason to allocate a new `Vec` to
store the decrypted message when we can just overwrite the read
buffer and call it a day.
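A sketch of the in-place flow; `decrypt_in_place` below is a hypothetical stand-in for the AEAD call, which for ChaCha20-Poly1305 leaves the plaintext the same length as the ciphertext minus its 16-byte tag:

```rust
// Overwrite the ciphertext in the existing read buffer with its plaintext and
// return a slice into that same buffer, rather than allocating a new Vec.
fn decrypt_message(read_buffer: &mut [u8]) -> Result<&[u8], ()> {
    let msg_len = read_buffer.len().saturating_sub(16); // strip the 16-byte tag
    decrypt_in_place(read_buffer)?;
    Ok(&read_buffer[..msg_len])
}

// Hypothetical stand-in for the actual in-place AEAD decryption.
fn decrypt_in_place(_buf: &mut [u8]) -> Result<(), ()> {
    Ok(())
}
```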