Because `ChannelManager` doesn't have a corresponding `Channel`
once a channel is closed, we'd always used an `update_id` of
`u64::MAX` for any `ChannelMonitorUpdate`s we needed to build after
closure.
This completely breaks the `update_id` abstraction and leaks into
the persistence logic: because we might have more than one
`ChannelMonitorUpdate` with the same (`u64::MAX`) value, the
`MonitorUpdatingPersister` can no longer safely use `update_id`s as
unique IDs and instead has to have special logic to handle this.
Worse, because we don't have a unique ID with which to refer to
post-close `ChannelMonitorUpdate`s we cannot track when they
complete async persistence. This means we cannot properly support
async persist for forwarded payments where the inbound edge has hit
the chain prior to the preimage coming to us.
Here we rectify this by using consistent `update_id`s even after a
channel has closed. In order to do so we have to keep some state
for all channels for which the `ChannelMonitor` has not been
archived (after which point we can be confident we will not need to
update them). While this violates our long-standing policy of
having no state at all in `ChannelManager`s for closed channels,
it's only a `(ChannelId, u64)` pair per channel, so it shouldn't be
problematic for any of our users (as they already store a whole
honkin `ChannelMonitor` for these channels anyway).
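As a rough illustration only (the type and field names below are stand-ins, not LDK's actual definitions), the extra bookkeeping amounts to something like:

```rust
use std::collections::HashMap;

// Hypothetical stand-ins for the real LDK types, used only to illustrate the
// shape of the extra per-channel state described above.
type ChannelId = [u8; 32];

#[derive(Default)]
struct PeerState {
    // For each channel whose `ChannelMonitor` has not yet been archived, the
    // latest `update_id`, so post-close `ChannelMonitorUpdate`s keep counting
    // from it instead of reusing a magic `u64::MAX`.
    closed_channel_monitor_update_ids: HashMap<ChannelId, u64>,
}

impl PeerState {
    // Hand out the next update_id for a closed channel, mirroring how an open
    // channel's monitor update counter is incremented.
    fn next_closed_channel_update_id(&mut self, channel_id: &ChannelId) -> Option<u64> {
        self.closed_channel_monitor_update_ids.get_mut(channel_id).map(|id| {
            *id += 1;
            *id
        })
    }
}

fn main() {
    let mut peer = PeerState::default();
    let chan_id = [0u8; 32];
    // Seeded from the `ChannelMonitor`'s `latest_update_id` when the channel closes.
    peer.closed_channel_monitor_update_ids.insert(chan_id, 42);
    assert_eq!(peer.next_closed_channel_update_id(&chan_id), Some(43));
}
```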
While limited changes are made to the connection-count-limiting
logic, reviewers should carefully analyze the interactions the new
map created here has with that logic.
In the next commit we'll drop the magic `u64::MAX`
`ChannelMonitorUpdate::update_id` value used when we don't know the
`ChannelMonitor`'s `latest_update_id` (i.e. when the channel is
closed). In order to do so, we will store further information about
`ChannelMonitor`s in the per-peer structure, keyed by the
counterparty's node ID, which will be used when applying
`ChannelMonitorUpdate`s to closed channels.
By taking advantage of the change in the previous commit, that
information is now reliably available when we generate the
`ChannelMonitorUpdate` (when claiming HTLCs), but in order to
ensure it is available when applying the `ChannelMonitorUpdate` we
need to use `BackgroundEvent::MonitorUpdateRegeneratedOnStartup`
instead of
`BackgroundEvent::ClosedMonitorUpdateRegeneratedOnStartup` where
possible.
Here we do this, leaving `ClosedMonitorUpdateRegeneratedOnStartup`
only used to ensure very old channels (created in 0.0.118 or
earlier) which are not in the `ChannelManager` are force-closed on
startup.
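For illustration, the difference between the two variants boils down to whether the counterparty node id travels with the regenerated update. The shapes below are sketches only, not LDK's exact variant definitions:

```rust
// Illustrative shapes only, not LDK's exact definitions: the important
// difference is that `MonitorUpdateRegeneratedOnStartup` carries the
// counterparty node id, so the update can be routed through per-peer state
// even when the channel itself is gone.
enum BackgroundEvent {
    MonitorUpdateRegeneratedOnStartup {
        counterparty_node_id: [u8; 33],
        channel_id: [u8; 32],
        update_id: u64,
    },
    // Kept only to force-close very old channels (0.0.118 or earlier) which
    // are no longer in the `ChannelManager` at all.
    ClosedMonitorUpdateRegeneratedOnStartup {
        channel_id: [u8; 32],
        update_id: u64,
    },
}

fn main() {
    let _new = BackgroundEvent::MonitorUpdateRegeneratedOnStartup {
        counterparty_node_id: [2u8; 33],
        channel_id: [0u8; 32],
        update_id: 1,
    };
    let _old = BackgroundEvent::ClosedMonitorUpdateRegeneratedOnStartup {
        channel_id: [0u8; 32],
        update_id: u64::MAX,
    };
}
```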
Currently we store in-flight `ChannelMonitorUpdate`s in the
per-peer structure in `ChannelManager`. This is nice and simple as
we're generally updating it when we're updating other per-peer
data, so we already have the relevant lock(s) and map entries.
Sadly, when claiming an HTLC against a closed channel we may not
have the `counterparty_node_id` available, as it was only added in
0.0.124 and thus is only present for HTLCs forwarded on 0.0.124 or
later. This means we can't look up the per-peer structure when
claiming old HTLCs, making it difficult to track the new
`ChannelMonitorUpdate` as in-flight.
While we could transition the in-flight `ChannelMonitorUpdate`
tracking to a new global map indexed by `OutPoint`, doing so would
result in a major lock which would be highly contended across
channels with different peers.
Instead, as we move towards tracking in-flight
`ChannelMonitorUpdate`s for closed channels we'll keep our existing
storage, leaving only the `counterparty_node_id` issue to contend
with.
Here we simply accept the issue, requiring that
`counterparty_node_id` be available when claiming HTLCs against a
closed channel. On startup, we explicitly check for any forwarded
HTLCs which came from a closed channel where the forward happened
prior to 0.0.124, failing to deserialize if we find any, or only
logging a warning if the channel is still open (implying things may
work out, but panics may occur if the channel closes prior to HTLC
resolution).
While this is a somewhat disappointing resolution, LDK nodes which
forward HTLCs are generally fairly well-upgraded, so it is not
anticipated to be an issue in practice.
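A minimal sketch of the policy described above (this is not LDK's actual deserialization code; the function and parameter names are hypothetical):

```rust
// Forwarded HTLCs replayed on startup must carry a counterparty node id if
// their channel is already closed; otherwise we cannot track the claim.
fn check_replayed_htlc(
    counterparty_node_id: Option<[u8; 33]>,
    channel_still_open: bool,
) -> Result<(), &'static str> {
    match counterparty_node_id {
        Some(_) => Ok(()),
        // Forwards from before 0.0.124 lack the id. If the channel is still
        // open things may still work out, so only warn; if it is already
        // closed, refuse to deserialize.
        None if channel_still_open => {
            eprintln!("warning: pre-0.0.124 forwarded HTLC; may panic if the channel closes before resolution");
            Ok(())
        }
        None => Err("missing counterparty_node_id for HTLC from closed channel"),
    }
}

fn main() {
    assert!(check_replayed_htlc(Some([2u8; 33]), false).is_ok());
    assert!(check_replayed_htlc(None, true).is_ok());
    assert!(check_replayed_htlc(None, false).is_err());
}
```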
When a lightning node wishes to send payments to a BIP 353 human
readable name (using BOLT 12), it first has to resolve that name to
a DNS TXT record. bLIP 32 defines a way to do so over onion
messages, and this completes our implementation thereof by adding
the server side.
It operates by simply accepting new messages and spawning tokio
tasks to do DNS lookups using the `dnssec_prover` crate. It also
contains full end-to-end tests of the BIP 353 -> BOLT 12 -> payment
logic using the new server code to do the resolution.
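A rough sketch of that server pattern, with `build_txt_proof` standing in for the real `dnssec_prover` resolution rather than its actual API:

```rust
// Each incoming bLIP 32 query gets a spawned tokio task that does the slow,
// networked DNSSEC work off the onion message handling path.
use tokio::sync::mpsc;

async fn build_txt_proof(name: String) -> Vec<u8> {
    // Placeholder: the real server builds a DNSSEC proof chain for `name`'s
    // TXT record via recursive resolution.
    name.into_bytes()
}

#[tokio::main]
async fn main() {
    let (tx, mut rx) = mpsc::unbounded_channel::<String>();
    tx.send("alice.user._bitcoin-payment.example.com.".to_string()).unwrap();
    drop(tx);

    let mut tasks = Vec::new();
    while let Some(query) = rx.recv().await {
        tasks.push(tokio::spawn(async move {
            let proof = build_txt_proof(query).await;
            // In the real server the proof is sent back over the onion
            // message reply path; here we just report its size.
            println!("built proof of {} bytes", proof.len());
        }));
    }
    for t in tasks { t.await.unwrap(); }
}
```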
Note that because we now have a workspace crate which sets the
"lightning/dnssec" feature in its `dev-dependencies`, a naive
`cargo test` will test the "dnssec" feature.
Now that `ChannelManager` supports using bLIP 32 to resolve BIP 353
Human Readable Names, we should encourage users to use that feature
by making `ChannelManager` the "default" `DNSResolverMessageHandler`
(in various type aliases).
Now that we have the ability to resolve BIP 353 Human Readable
Names directly and have tracking for outbound payments waiting on
an offer resolution, we can implement full BIP 353 support in
`ChannelManager`.
Users will need one or more known nodes which offer DNS resolution
service over onion messages using bLIP 32, which they pass to
`ChannelManager::pay_for_offer_from_human_readable_name`, as well
as the `HumanReadableName` itself.
From there, `ChannelManager` asks the DNS resolver to provide a
DNSSEC proof, which it verifies, parses into an `Offer`, and then
pays.
For those who wish to support on-chain fallbacks, sadly, this will
not work, and they'll still have to use `OMNameResolver` directly
in order to use their existing `bitcoin:` URI parsing.
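A rough sketch of that flow, with hypothetical helpers standing in for the real APIs (the actual entry point is `ChannelManager::pay_for_offer_from_human_readable_name`; its exact signature is not reproduced here):

```rust
// Hypothetical helpers only; they stand in for the onion-message round trip,
// DNSSEC verification, and BOLT 12 parsing that LDK performs internally.
struct HumanReadableName { user: String, domain: String }
struct DnssecProof(Vec<u8>);
struct Offer(String);

fn request_proof_from_resolver(name: &HumanReadableName) -> DnssecProof {
    // bLIP 32: ask one of the user-supplied resolver nodes, over onion
    // messages, for a DNSSEC proof of the BIP 353 TXT record.
    DnssecProof(format!("{}.user._bitcoin-payment.{}.", name.user, name.domain).into_bytes())
}

fn verify_and_parse_offer(_proof: &DnssecProof) -> Result<Offer, &'static str> {
    // Verify the DNSSEC chain, then parse the record's offer into a BOLT 12 Offer.
    Ok(Offer("lno1...".to_string()))
}

fn main() {
    let name = HumanReadableName { user: "alice".into(), domain: "example.com".into() };
    let proof = request_proof_from_resolver(&name);
    match verify_and_parse_offer(&proof) {
        Ok(offer) => println!("paying offer {}", offer.0),
        Err(e) => eprintln!("resolution failed: {e}"),
    }
}
```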
When we receive a payment to an offer we issued which was resolved
via a human readable name, it may have been resolved using a
wildcard DNS entry which we want to map to a specific recipient
account locally. To do this, we need the human readable name from
the `InvoiceRequest` in the `PaymentClaim{able,ed}` events, which we
pipe through here using `InvoiceRequestFields`.
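A sketch of the idea (the field and function names below are illustrative, not the exact LDK definitions):

```rust
// Expose the HRN from the invoice_request in the payment event data so the
// recipient can map a wildcard-resolved payment to a specific local account.
struct InvoiceRequestFields {
    // `user` and `domain` of the BIP 353 name the payer resolved, if any.
    human_readable_name: Option<(String, String)>,
    // ...other invoice_request fields elided...
}

fn local_account_for(fields: &InvoiceRequestFields) -> Option<&str> {
    // With a wildcard DNS entry, the `user` part is the only way to tell which
    // account the payer thought they were paying.
    fields.human_readable_name.as_ref().map(|(user, _domain)| user.as_str())
}

fn main() {
    let fields = InvoiceRequestFields {
        human_readable_name: Some(("alice".to_string(), "example.com".to_string())),
    };
    assert_eq!(local_account_for(&fields), Some("alice"));
}
```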
When we resolve a Human Readable Name to a BOLT 12 `offer`, we may
end up resolving to a wildcard DNS name covering all possible
`user` parts. In that case, if we just blindly pay the `offer`, the
recipient would have no way to tell which `user` we paid.
Instead, BOLT 12 defines a field to include the HRN resolved in the
`invoice_request`, which we implement here.
We also take this opportunity to remove constant parameters from
the `outbound_payment.rs` interface to `channelmanager.rs`.
Domain names implicitly have a trailing `.`, which we require in
bLIP 32 but generally shouldn't be exposing to the user in
`HumanReadableName`s (after all, they're human-readable). Here we
make sure the trailing `.` is dropped in `HumanReadableName`s
before we re-add them when building the bLIP 32 messages.
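A minimal sketch of that normalization (not LDK's exact code): strip the trailing `.` for storage and display, and re-add it when constructing the BIP 353 / bLIP 32 DNS query name.

```rust
fn strip_trailing_dot(domain: &str) -> &str {
    domain.strip_suffix('.').unwrap_or(domain)
}

fn bip353_query_name(user: &str, domain: &str) -> String {
    // BIP 353 TXT records live at `<user>.user._bitcoin-payment.<domain>.`
    format!("{}.user._bitcoin-payment.{}.", user, strip_trailing_dot(domain))
}

fn main() {
    assert_eq!(strip_trailing_dot("example.com."), "example.com");
    assert_eq!(
        bip353_query_name("alice", "example.com."),
        "alice.user._bitcoin-payment.example.com."
    );
}
```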
When creating an InvoiceRequest, users may choose to either use a
transient signing pubkey generated by LDK or provide a static one.
Disallow the latter as it allows users to reuse the same pubkey, which
results in poor sender privacy.
If we're receiving a keysend to a blinded path, then we created the payment
secret within. Using our inbound_payment_key, we can decrypt the payment secret
bytes to get the payment's min_cltv_expiry_delta and min amount, to verify the
payment is valid. However, if we're receiving an MPP keysend *not* to a blinded
path, then we did not create the payment secret and shouldn't verify it since
it's only used to correlate MPP parts.
Therefore, store whether the payment secret is recipient-generated in our pending
inbound payment data so we know whether to verify it or not.
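Illustratively, the distinction we store looks something like the sketch below (names are hypothetical, not LDK's actual storage format):

```rust
// Record whether we generated the payment secret, so we know whether it can
// be decrypted and verified or must be treated as an opaque MPP-correlation
// value.
enum KeysendSecret {
    // Blinded-path keysend: we built the secret with our inbound_payment_key,
    // so we can decrypt it and check the encoded min_cltv_expiry_delta/amount.
    RecipientGenerated([u8; 32]),
    // Plain MPP keysend: the sender chose the secret purely to correlate MPP
    // parts, so there is nothing for us to verify.
    SenderGenerated([u8; 32]),
}

fn should_verify(secret: &KeysendSecret) -> bool {
    matches!(secret, KeysendSecret::RecipientGenerated(_))
}

fn main() {
    assert!(should_verify(&KeysendSecret::RecipientGenerated([0u8; 32])));
    assert!(!should_verify(&KeysendSecret::SenderGenerated([0u8; 32])));
}
```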
Bolt11InvoiceParameters allows setting currency and
duration_since_epoch. If currency is not set, test that the one
corresponding to ChannelManager's chain hash is used. If
duration_since_epoch is not set, then the highest seen timestamp is
used in no-std compilations.
ChannelManager::create_bolt11_invoice is a simpler and more flexible way
of creating a BOLT11 invoice, so deprecate the corresponding functions
in the invoice_utils module.
Now that the lightning crate depends on the lightning_invoice crate, the
utility functions previously living in the latter can be implemented on
ChannelManager. Additionally, the parameters are now moved to a struct
in order to remove the increasingly combinatorial blow-up of methods.
The new Bolt11InvoiceParameters is used to determine what values to set
in the invoice. Using None for any given parameter results in a
reasonable default or a behavior determined by the ChannelManager,
as detailed in the documentation.
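A self-contained sketch of this parameters-struct pattern (the real struct is LDK's `Bolt11InvoiceParameters`; the fields and function below are illustrative only):

```rust
// `None` means "let the ChannelManager pick a reasonable default".
#[derive(Default)]
struct InvoiceParams {
    amount_msats: Option<u64>,
    description: Option<String>,
    invoice_expiry_delta_secs: Option<u32>,
}

fn create_invoice(params: InvoiceParams) -> String {
    let amount = params
        .amount_msats
        .map_or("any".to_string(), |a| a.to_string());
    let expiry = params.invoice_expiry_delta_secs.unwrap_or(3600);
    format!(
        "invoice(amount_msats={amount}, desc={:?}, expiry={expiry}s)",
        params.description.unwrap_or_default()
    )
}

fn main() {
    // Callers set only what they care about; everything else falls back to
    // defaults, avoiding one method per parameter combination.
    let inv = create_invoice(InvoiceParams {
        amount_msats: Some(10_000),
        ..Default::default()
    });
    println!("{inv}");
}
```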
When creating an invoice using a ChannelManager, only payments for a
specific ChainHash / Network are valid. Use the one from the
ChannelManager instead of allowing arbitrary ones in the form of a
Currency.
Add a new payment type for this, because normally the payment hash is factored
into the payment secrets we create for invoices, but static invoices don't have
a payment hash since they are paid via keysend.
This key will be used in upcoming commits for encrypting metadata bytes for
spontaneous payments' payment secrets, to be included in the blinded paths of
static invoices for async payments. We need a new type of payment secret for
these payments because they don't have an a priori known payment
hash; see the next commit.
LDK versions prior to 0.0.104 had stateful inbound payments written in this
map. In 0.0.104, we added support for stateless inbound payments with
deterministically generated payment secrets, and maintained deprecated support
for stateful inbound payments until 0.0.116. After 0.0.116, no further inbound
payments could have been written into this map.
The upcoming ChannelManager::create_bolt11_invoice will not support
setting a specific creation time, so remove that functionality from the
invoice_utils functions. This avoids duplicating code when those
functions are deprecated.
`indexmap` 2.6.0 upgraded to `hashbrown` 0.15, which unfortunately
bumped their MSRV to rustc 1.65 with the 0.15.1 release. So we pin
`indexmap` to 2.5.0 to fix our MSRV CI.
Try to make it a bit more clear that there are downsides to solely
relying on `lightning-net-tokio`, and it's still recommended to
occasionally call this function in a separate loop.
Instead of first building a map from peers to a list of channels and
then pulling from that to build the `per_peer_state`, we build
`per_peer_state` immediately and store channels in it directly.
This not only avoids an unnecessary map indirection but also gives
us access to the new fields in `per_peer_state` when reading
`Channel`s, which we'll need in a coming commit.
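A minimal sketch of that restructuring (types and field names are illustrative, not LDK's deserialization code):

```rust
// Build `per_peer_state` up front and insert each Channel into it as it is
// read, instead of grouping channels by peer in a temporary map first.
use std::collections::HashMap;

type NodeId = [u8; 33];

struct Channel { counterparty_node_id: NodeId }

#[derive(Default)]
struct PeerState { channels: Vec<Channel> }

fn read_channels(channels: Vec<Channel>) -> HashMap<NodeId, PeerState> {
    let mut per_peer_state: HashMap<NodeId, PeerState> = HashMap::new();
    for chan in channels {
        // Inserting directly also gives us access to the other per-peer fields
        // while the Channel is being read.
        per_peer_state
            .entry(chan.counterparty_node_id)
            .or_default()
            .channels
            .push(chan);
    }
    per_peer_state
}

fn main() {
    let peers = read_channels(vec![Channel { counterparty_node_id: [2u8; 33] }]);
    assert_eq!(peers[&[2u8; 33]].channels.len(), 1);
}
```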