1. `InteractiveTxConstructorArgs` is introduced to act as a single, more
readable input to `InteractiveTxConstructor::new()` (see the sketch
after this list).
2. Various documentation updates.
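For illustration, a minimal sketch of the args-struct pattern (the
fields below are assumptions, not LDK's actual definition):

```rust
// Simplified stand-ins; the real InteractiveTxConstructorArgs in LDK
// carries many more (and different) fields.
pub struct ChannelId(pub [u8; 32]);

pub struct InteractiveTxConstructorArgs {
    pub channel_id: ChannelId,
    pub feerate_sat_per_kw: u32,
    pub is_initiator: bool,
}

pub struct InteractiveTxConstructor {
    args: InteractiveTxConstructorArgs,
}

impl InteractiveTxConstructor {
    // A single named-field struct replaces a long positional argument
    // list, keeping call sites readable as more parameters are added.
    pub fn new(args: InteractiveTxConstructorArgs) -> Self {
        Self { args }
    }
}
```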
For now this is unneeded, as we do not provide any inputs as the
channel acceptor and do not yet allow creating outbound channels. It
will be re-added when that functionality is introduced.
If a peer creates a channel with us which never reaches the funding
stage (or never gets any commitment updates after creation), we don't
insert the `update_id` into `closed_channel_monitor_update_ids` at
runtime, avoiding keeping a `PeerState` entry around for no reason.
However, on startup we
still create a `ChannelMonitorUpdate` with a `ChannelForceClosed`
update step to ensure the `ChannelMonitor` is locked and shut down.
This is pretty redundant, and results in a bunch of on-startup
`ChannelMonitorUpdate`s for any old but non-archived
`ChannelMonitor`s. Instead, here, we check if a `ChannelMonitor`
already saw a `ChannelForceClosed` update step before we generate
the on-startup `ChannelMonitorUpdate`.
This also allows us to skip the `closed_channel_monitor_update_ids`
insertion as we can be confident we'll never have a
`ChannelMonitorUpdate` for this channel at all.
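A rough sketch of that check, using simplified stand-ins for the
monitor types:

```rust
// Simplified stand-ins for LDK's ChannelMonitor and its update steps;
// the real check inspects the monitor's recorded update steps rather
// than a boolean flag.
struct MonitorState {
    saw_force_close: bool,
}

enum MonitorUpdateStep {
    ChannelForceClosed { should_broadcast: bool },
}

fn startup_update_for(monitor: &MonitorState) -> Option<MonitorUpdateStep> {
    if monitor.saw_force_close {
        // The monitor already processed a ChannelForceClosed step, so
        // both the redundant on-startup update and the corresponding
        // closed_channel_monitor_update_ids insertion can be skipped.
        None
    } else {
        Some(MonitorUpdateStep::ChannelForceClosed { should_broadcast: true })
    }
}
```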
Because `ChannelManager` doesn't have a corresponding `Channel` after
a channel is closed, we've always used an `update_id` of `u64::MAX`
for any `ChannelMonitorUpdate`s we need to build after that point.
This completely breaks the `update_id` abstraction and leaks into
persistence logic: because we might have more than one
`ChannelMonitorUpdate` with the same (`u64::MAX`) value, the
`MonitorUpdatingPersister` can no longer safely use `update_id`s as
unique IDs and needs special logic to handle this.
Worse, because we don't have a unique ID with which to refer to
post-close `ChannelMonitorUpdate`s we cannot track when they
complete async persistence. This means we cannot properly support
async persist for forwarded payments where the inbound edge has hit
the chain prior to the preimage coming to us.
Here we rectify this by using consistent `update_id`s even after a
channel has closed. In order to do so we have to keep some state
for all channels for which the `ChannelMonitor` has not been
archived (after which point we can be confident we will not need to
update them). While this violates our long-standing policy of
having no state at all in `ChannelManager`s for closed channels,
it's only a `(ChannelId, u64)` pair per channel, so it shouldn't be
problematic for any of our users (as they already store a whole
honking `ChannelMonitor` for these channels anyway).
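As a sketch, the retained state amounts to a small per-peer map (the
helper method below is hypothetical):

```rust
use std::collections::HashMap;

#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct ChannelId([u8; 32]);

#[derive(Default)]
struct PeerState {
    // One (ChannelId, u64) pair per closed-but-unarchived channel:
    // the latest update_id the ChannelMonitor has seen.
    closed_channel_monitor_update_ids: HashMap<ChannelId, u64>,
}

impl PeerState {
    // Hypothetical helper: hands out monotonic update_ids for
    // post-close ChannelMonitorUpdates instead of the old magic
    // u64::MAX value.
    fn next_closed_channel_update_id(&mut self, chan: ChannelId, latest_seen: u64) -> u64 {
        let id = self
            .closed_channel_monitor_update_ids
            .entry(chan)
            .or_insert(latest_seen);
        *id += 1;
        *id
    }
}
```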
While only limited changes are made to the connection-count-limiting
logic, reviewers should carefully analyze the interactions between
the new map created here and that logic.
In the next commit we'll drop the magic `u64::MAX`
`ChannelMonitorUpdate::update_id` value used when we don't know the
`ChannelMonitor`'s `latest_update_id` (i.e. when the channel is
closed). In order to do so, we will store further information about
`ChannelMonitor`s in the per-peer structure, keyed by the
counterparty's node ID, which will be used when applying
`ChannelMonitorUpdate`s to closed channels.
Thanks to the change in the previous commit, that information is now
reliably available when we generate the `ChannelMonitorUpdate` (i.e.
when claiming HTLCs), but to ensure it is also available when
applying the `ChannelMonitorUpdate` we need to use
`BackgroundEvent::MonitorUpdateRegeneratedOnStartup` instead of
`BackgroundEvent::ClosedMonitorUpdateRegeneratedOnStartup` where
possible.
Here we do this, leaving `ClosedMonitorUpdateRegeneratedOnStartup`
only used to ensure very old channels (created in 0.0.118 or
earlier) which are not in the `ChannelManager` are force-closed on
startup.
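Roughly, the two event paths look like this (variants simplified;
LDK's real definitions carry more data):

```rust
struct PublicKey([u8; 33]);
struct OutPoint { txid: [u8; 32], index: u16 }
struct ChannelMonitorUpdate { update_id: u64 }

enum BackgroundEvent {
    // Preferred: carries the counterparty node id, so the update can
    // flow through the per-peer state like any other monitor update.
    MonitorUpdateRegeneratedOnStartup {
        counterparty_node_id: PublicKey,
        funding_txo: OutPoint,
        update: ChannelMonitorUpdate,
    },
    // Legacy: retained only to force-close channels from 0.0.118 or
    // earlier which no longer appear in the ChannelManager.
    ClosedMonitorUpdateRegeneratedOnStartup(OutPoint),
}
```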
Currently we store in-flight `ChannelMonitorUpdate`s in the
per-peer structure in `ChannelManager`. This is nice and simple as
we're generally updating it when we're updating other per-peer
data, so we already have the relevant lock(s) and map entries.
Sadly, when claiming an HTLC against a closed channel, we didn't have
the `counterparty_node_id` available until it was added in 0.0.124
(and even now we only have it for HTLCs which were forwarded on
0.0.124 or later). This means we can't look up the per-peer structure
when claiming old HTLCs, making it difficult to track the new
`ChannelMonitorUpdate` as in-flight.
While we could transition the in-flight `ChannelMonitorUpdate`
tracking to a new global map indexed by `OutPoint`, doing so would
result in a major lock which would be highly contended across
channels with different peers.
Instead, as we move towards tracking in-flight
`ChannelMonitorUpdate`s for closed channels we'll keep our existing
storage, leaving only the `counterparty_node_id` issue to contend
with.
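Schematically, the trade-off between the two layouts (types are
simplified stand-ins):

```rust
use std::collections::HashMap;
use std::sync::Mutex;

#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct OutPoint { txid: [u8; 32], index: u16 }
struct ChannelMonitorUpdate;

struct PeerState {
    // Existing (kept) layout: in-flight updates live behind the
    // per-peer lock we usually already hold when generating them.
    in_flight_monitor_updates: HashMap<OutPoint, Vec<ChannelMonitorUpdate>>,
}

struct ChannelManager {
    per_peer_state: HashMap<[u8; 33], Mutex<PeerState>>,
    // Rejected alternative: a single global map would serialize
    // monitor updates across unrelated peers behind one contended
    // lock:
    // in_flight: Mutex<HashMap<OutPoint, Vec<ChannelMonitorUpdate>>>,
}
```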
Here we simply accept the issue, requiring that
`counterparty_node_id` be available when claiming HTLCs against a
closed channel. On startup, we explicitly check for any forwarded
HTLCs which came from a closed channel where the forward happened
prior to 0.0.124, failing to deserialize in that case, or logging a
warning if the channel is still open (implying things may work out,
though panics may occur if the channel closes prior to HTLC
resolution).
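A hedged sketch of that deserialization-time check (names and the
error type are stand-ins, not LDK's actual code):

```rust
struct PublicKey;
#[derive(Debug)]
enum DecodeError { InvalidValue }

fn check_forwarded_htlc(
    counterparty_node_id: Option<PublicKey>,
    inbound_channel_still_open: bool,
) -> Result<(), DecodeError> {
    match counterparty_node_id {
        Some(_) => Ok(()),
        // No counterparty_node_id means the forward predates 0.0.124.
        None if inbound_channel_still_open => {
            // Things may still work out, but if the channel closes
            // before the HTLC resolves we may panic when claiming
            // against it.
            eprintln!("WARN: forwarded HTLC is missing its counterparty_node_id");
            Ok(())
        }
        // Closed channel and no counterparty_node_id: we cannot track
        // the claim's ChannelMonitorUpdate, so refuse to deserialize.
        None => Err(DecodeError::InvalidValue),
    }
}
```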
While this is a somewhat disappointing resolution, LDK nodes which
forward HTLCs are generally fairly well-upgraded, so it is not
anticipated to be an issue in practice.
When a lightning node wishes to send payments to a BIP 353 human
readable name (using BOLT 12), it first has to resolve that name to
a DNS TXT record. bLIP 32 defines a way to do so over onion
messages, and this completes our implementation thereof by adding
the server side.
It operates by simply accepting new messages and spawning tokio
tasks to do DNS lookups using the `dnssec_prover` crate. It also
contains full end-to-end tests of the BIP 353 -> BOLT 12 -> payment
logic using the new server code to do the resolution.
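Schematically, the server loop might look as follows; `resolve_proof`
and `Responder` are stand-ins for the `dnssec_prover`-backed lookup
and LDK's onion-message responder, not real APIs:

```rust
use tokio::sync::mpsc::Receiver;

struct Responder;
impl Responder {
    fn respond(self, _proof: Vec<u8>) { /* send the onion message reply */ }
}

// Stand-in for the dnssec_prover-backed lookup of a TXT record proof.
async fn resolve_proof(_name: &str) -> Result<Vec<u8>, ()> {
    Ok(Vec::new())
}

async fn handle_dnssec_queries(mut queries: Receiver<(String, Responder)>) {
    while let Some((name, responder)) = queries.recv().await {
        // Each lookup may block on the network, so spawn a task per
        // query rather than stalling the onion-message handler.
        tokio::spawn(async move {
            if let Ok(proof) = resolve_proof(&name).await {
                responder.respond(proof);
            }
        });
    }
}
```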
Note that because we now have a workspace crate which sets the
"lightning/dnssec" feature in its `dev-dependencies`, a naive
`cargo test` will run with the `dnssec` feature enabled.
Now that `ChannelManager` supports using bLIP 32 to resolve BIP 353
Human Readable Names, we should encourage users to use that feature
by making the defaults (in various type aliases) use
`ChannelManager` as the `DNSResolverMessageHandler`.
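A minimal sketch of what such a defaulted alias looks like (names
are illustrative, not LDK's real aliases):

```rust
trait DNSResolverMessageHandler {}

struct ChannelManager;
impl DNSResolverMessageHandler for ChannelManager {}

struct OnionMessenger<DRH: DNSResolverMessageHandler> {
    dns_resolver_handler: DRH,
}

// The "default" alias now wires ChannelManager in as the resolver
// handler, so HRN resolution works out of the box for alias users.
type SimpleOnionMessenger = OnionMessenger<ChannelManager>;
```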
Now that we have the ability to resolve BIP 353 Human Readable
Names directly and have tracking for outbound payments waiting on
an offer resolution, we can implement full BIP 353 support in
`ChannelManager`.
Users will need one or more known nodes which offer DNS resolution
service over onion messages using bLIP 32; these are passed, along
with the `HumanReadableName` itself, to
`ChannelManager::pay_for_offer_from_human_readable_name`.
From there, `ChannelManager` asks the DNS resolver to provide a
DNSSEC proof, which it verifies, parses into an `Offer`, and then
pays.
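In rough usage terms (the signature below is a guess for
illustration; only the method name comes from the text above):

```rust
struct HumanReadableName { user: String, domain: String }
struct Destination; // stand-in for a resolver node's id or blinded path
struct ChannelManager;

impl ChannelManager {
    fn pay_for_offer_from_human_readable_name(
        &self,
        name: HumanReadableName,
        dns_resolvers: Vec<Destination>,
    ) {
        // 1) Query one of `dns_resolvers` over onion messages (bLIP 32),
        // 2) verify the returned DNSSEC proof,
        // 3) parse the proven TXT record into an Offer, and
        // 4) pay it like any other BOLT 12 offer.
        let _ = (name, dns_resolvers);
    }
}
```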
For those who wish to support on-chain fallbacks, sadly, this will
not work, and they'll still have to use `OMNameResolver` directly
in order to use their existing `bitcoin:` URI parsing.
When we receive a payment to an offer we issued that was resolved
via a human readable name, the name may have been resolved using a
wildcard DNS entry which we want to map to a specific recipient
account locally. To do this, we need the human readable name from
the `InvoiceRequest` in the `PaymentClaim{able,ed}` events, which we
pipe through here using `InvoiceRequestFields`.
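Roughly, with simplified stand-ins for the event fields:

```rust
struct HumanReadableName { user: String, domain: String }

struct InvoiceRequestFields {
    // The HRN the payer resolved, if any; LDK's real struct carries
    // additional fields from the invoice_request.
    human_readable_name: Option<HumanReadableName>,
}

// On PaymentClaimable/PaymentClaimed, a wildcard-DNS recipient can
// map the resolved `user` part back to a local account:
fn local_account_for(fields: &InvoiceRequestFields) -> Option<&str> {
    fields.human_readable_name.as_ref().map(|hrn| hrn.user.as_str())
}
```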
When we resolve a Human Readable Name to a BOLT 12 `offer`, we may
end up resolving to a wildcard DNS name covering all possible
`user` parts. In that case, if we just blindly pay the `offer`, the
recipient would have no way to tell which `user` we paid.
Instead, BOLT 12 defines a field to include the HRN resolved in the
`invoice_request`, which we implement here.
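Illustratively, on the sender side (builder and field names assumed):

```rust
struct HumanReadableName { user: String, domain: String }

#[derive(Default)]
struct InvoiceRequestBuilder {
    offer_from_hrn: Option<HumanReadableName>,
}

impl InvoiceRequestBuilder {
    // When the offer was found via HRN resolution, record the name so
    // a recipient behind a wildcard DNS entry can tell which `user`
    // is being paid.
    fn sourced_from_human_readable_name(mut self, hrn: HumanReadableName) -> Self {
        self.offer_from_hrn = Some(hrn);
        self
    }
}
```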
We also take this opportunity to remove constant parameters from
the `outbound_payment.rs` interface to `channelmanager.rs`
Domain names implicitly have a trailing `.`, which we require in
bLIP 32 but generally shouldn't be exposing to the user in
`HumanReadableName`s (after all, they're human-readable). Here we
make sure the trailing `.` is dropped from `HumanReadableName`s,
re-adding it when building the bLIP 32 messages.
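A minimal sketch of that normalization, assuming the BIP 353 record
format `<user>.user._bitcoin-payment.<domain>.` (real LDK validation
does more than this):

```rust
// Strip the trailing dot when accepting a HumanReadableName from the
// user.
fn strip_trailing_dot(domain: &str) -> &str {
    domain.strip_suffix('.').unwrap_or(domain)
}

// Re-add it when building the fully-qualified bLIP 32 query name.
fn blip32_query_name(user: &str, domain: &str) -> String {
    format!("{}.user._bitcoin-payment.{}.", user, strip_trailing_dot(domain))
}

fn main() {
    assert_eq!(
        blip32_query_name("alice", "example.com."),
        "alice.user._bitcoin-payment.example.com."
    );
}
```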