When receiving a payment over a BlindedPaymentPath, a PaymentContext is
included but was not authenticated. The previous commit adds an HMAC of
the payment::ReceiveTlvs (which contains the PaymentContext) and the
nonce used to create the HMAC. This commit verifies the authenticity
when parsing the InboundOnionPayload. This prevents a malicious actor
from forging it.
In order to authenticate a PaymentContext, an HMAC and Nonce must be
included along with it in payment::ReceiveTlvs. Compute the HMAC when
constructing a BlindedPaymentPath and include it in the recipient's
BlindedPaymentTlvs. Authentication will be added in an upcoming commit.
Now that NodeSigner::get_inbound_payment_key returns an ExpandedKey
instead of KeyMaterial, the latter is no longer needed. Remove
KeyMaterial and replace its uses with [u8; 32].
NodeSigner::get_inbound_payment_key_material returns KeyMaterial, which
is used for constructing an ExpandedKey. Change the trait to return an
ExpandedKey directly instead. This allows for direct access to the
ExpandedKey whenever a NodeSigner reference is available. Otherwise, it
would either need to be reconstructed or passed in separately.
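A minimal sketch of the shape of this trait change, with the surrounding types reduced to placeholders (the real LDK signatures carry more detail):

```rust
/// Placeholder for the expanded inbound-payment key; in LDK this is a concrete
/// type holding the derived HMAC keys.
pub struct ExpandedKey([u8; 32]);

pub trait NodeSigner {
    // Before: callers received raw key material and expanded it themselves.
    // fn get_inbound_payment_key_material(&self) -> KeyMaterial;

    /// After: return the ExpandedKey directly, so anything holding a NodeSigner
    /// reference can use it without reconstructing the key or threading it
    /// through separately.
    fn get_inbound_payment_key(&self) -> ExpandedKey;
}
```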
When receiving a PaymentContext from a blinded payment, the context must
be authenticated. Otherwise, the context can be forged and would appear
within a PaymentPurpose. Add functions for constructing and verifying an
HMAC for the ReceiveTlvs, which contains the PaymentContext.
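As a rough illustration, a construct/verify pair along these lines could look as follows, using the HMAC primitives from `bitcoin::hashes`; the key and nonce sizes and the exact bytes fed into the HMAC are assumptions here, not LDK's actual scheme:

```rust
use bitcoin::hashes::hmac::{Hmac, HmacEngine};
use bitcoin::hashes::sha256;
use bitcoin::hashes::{Hash, HashEngine};

// Compute an HMAC over the serialized PaymentContext plus the nonce, keyed by a
// key derived from the ExpandedKey (all inputs here are simplified placeholders).
fn hmac_for_payment_context(
    key: &[u8; 32], nonce: &[u8; 16], context_bytes: &[u8],
) -> Hmac<sha256::Hash> {
    let mut engine = HmacEngine::<sha256::Hash>::new(key);
    engine.input(nonce);
    engine.input(context_bytes);
    Hmac::from_engine(engine)
}

// Recompute and compare when the ReceiveTlvs are parsed back; a real
// implementation should use a constant-time comparison.
fn verify_payment_context(
    key: &[u8; 32], nonce: &[u8; 16], context_bytes: &[u8], hmac: &Hmac<sha256::Hash>,
) -> Result<(), ()> {
    if hmac_for_payment_context(key, nonce, context_bytes) == *hmac { Ok(()) } else { Err(()) }
}
```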
This commit adds counterparty node IDs to `PaymentForwarded`
to address the potential ambiguity of using `ChannelIds` alone,
especially in cases like v1 0conf opens where `ChannelIds`
may not be unique. Including the counterparty node IDs
provides better clarity and makes the information more useful.
A claim transaction with locktime T can only be mined at block heights
of T+1 or above, so it should only be broadcast at height T or above.
Due to an off-by-one bug, we were broadcasting some claim transactions
too early at T-1.
AFAICT, nothing bad resulted from this bug -- later rebroadcasts of the
transaction would eventually succeed once the correct height was
reached.
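A minimal sketch of the corrected comparison, using a hypothetical helper rather than the actual claim-broadcast code:

```rust
// A transaction with locktime T is only valid in blocks at height T+1 or above,
// so it becomes broadcastable once the chain tip reaches height T.
fn locktime_ready_for_broadcast(claim_locktime: u32, current_height: u32) -> bool {
    // The off-by-one bug was equivalent to comparing against `current_height + 1`,
    // releasing the claim one block early at T-1.
    current_height >= claim_locktime
}
```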
Following up on the previous commit, where we added debug_asserts
within `build_closing_transaction` to ensure neither
`value_to_holder` nor `value_to_counterparty` underflow, we now also
force-close the channel in the (presumably impossible) event that it
did happen.
When batch claiming was first added, it was only done for claims
which were not pinnable, i.e. those which can only be claimed by us.
This was the conservative choice - pinning of outputs claimed by a batch
would leave the entire batch unable to confirm on-chain. However, if
pinning is considered an attack that can be executed with a high
probability of success, then there is no reason not to batch claims of
pinnable outputs together, separate from unpinnable outputs.
Whether specific outputs are pinnable can change over time - those that
are not pinnable will eventually become pinnable at the height at which
our counterparty can spend them. Outputs are treated as pinnable if
they're within `COUNTERPARTY_CLAIMABLE_WITHIN_BLOCKS_PINNABLE` of that
height.
Aside from outputs being pinnable or not, locktimes are also a factor
for batching claims. HTLC-timeout claims have locktimes fixed by the
counterparty's signature and thus can only be aggregated with other
HTLCs of the same CLTV, which we have to check for.
The complexity required here is worth it - aggregation can save users a
significant amount of fees in the case of a force-closure, and directly
impacts the number of UTXOs needed as a reserve for anchors.
Co-authored-by: Matt Corallo <git@bluematt.me>
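As a rough illustration of the aggregation rule, the sketch below groups claims by pinnability class and locktime; the types and the grouping key are simplified stand-ins, not LDK's actual package logic:

```rust
use std::collections::HashMap;

// Outputs only we can spend vs. outputs the counterparty can (or will soon be
// able to) also spend; the two classes must never share a batch.
#[derive(Clone, Copy, Hash, PartialEq, Eq)]
enum ClaimClass {
    Unpinnable,
    Pinnable,
}

struct ClaimInput {
    class: ClaimClass,
    // For HTLC-timeout claims this is fixed by the counterparty's signature, so
    // only inputs sharing the same CLTV can go into one transaction.
    locktime: u32,
}

// Group claims so pinnable and unpinnable inputs never mix and every batch agrees
// on its locktime.
fn batch_claims(inputs: Vec<ClaimInput>) -> Vec<Vec<ClaimInput>> {
    let mut batches: HashMap<(ClaimClass, u32), Vec<ClaimInput>> = HashMap::new();
    for input in inputs {
        batches.entry((input.class, input.locktime)).or_default().push(input);
    }
    batches.into_values().collect()
}
```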
This allows us to make the PaymentSendFailure error type private, as well as
reduce the visibility of the vestigial send_payment_with_route method that was
already made test and fuzz-only in a previous commit.
Removes the final usage of PaymentSendFailure from the public API.
This (confusing) error type dates from prior versions of LDK where users had to
handle payment retries themselves. Since auto-retry was introduced, the only
non-deprecated use remaining was for probe send errors. Probes only have
one path, though, so refactor ProbeSendFailure to omit usage of
PaymentSendFailure.
We don't make this error private yet because it's still used by some fuzzing
code as well as internally to outbound_payments, but it isn't returned by any
public functions anymore.
This method has been deprecated for several versions in favor of
ChannelManager::send_payment, and we want to remove it from the public API
entirely prior to the 0.1 release. However, >150 tests use it, so we
put off removing the method entirely.
This moves panics to a higher level, allows failures to be handled
gracefully in some cases, and supports more explicit testing without
using `#[should_panic]`.
There are multiple factors affecting the locktime of a package:
- HTLC transactions rely on a fixed timelock due to the counterparty's
signature.
- HTLC timeout claims on the counterparty's commitment transaction
require satisfying a CLTV timelock.
- The locktime can be set to the latest height to avoid fee sniping.
These factors were combined in a single method, making the separate
factors less clear.
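As a rough illustration of how these factors combine, here is a hedged sketch with hypothetical parameters rather than the actual code:

```rust
// Combine the three factors: a signature-fixed locktime wins outright; otherwise
// satisfy any CLTV requirement and default to the current height to avoid fee sniping.
fn package_locktime(
    signed_locktime: Option<u32>,  // fixed by the counterparty's signature, if any
    cltv_locktime: Option<u32>,    // CLTV the claim must satisfy, if any
    current_height: u32,
) -> u32 {
    match signed_locktime {
        Some(locktime) => locktime,
        None => cltv_locktime.unwrap_or(0).max(current_height),
    }
}
```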
In `build_closing_transaction`, we check that `value_to_holder` and
`value_to_counterparty` are not lower than the dust limit. However,
these values are computed as signed integers converted from unsigned
amounts, which could result in an underflow going undetected. This
scenario should not be reachable, but here we add debug_asserts to
positively ensure it isn't hit.
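A hedged sketch of the kind of guard being added, using a hypothetical helper rather than the real `build_closing_transaction` internals:

```rust
// Compute the holder's closing output while making any underflow explicit instead
// of letting a wrapped value slip through the later signed dust comparison.
fn holder_closing_output_sat(to_holder_sat: u64, fee_sat: u64) -> u64 {
    let value = to_holder_sat.checked_sub(fee_sat);
    debug_assert!(value.is_some(), "value_to_holder must not underflow");
    value.unwrap_or(0)
}
```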
When a `ChannelMonitor` sees a payment preimage on chain for an
outbound HTLC, it creates a `MonitorEvent` containing the preimage
to pass to the inbound edge. The inclusion of the transaction
containing the payment preimage (plus six confirmations) also
results in the corresponding `Balance` being removed from the live
balance set, allowing the `ChannelMonitor` to be pruned (after a
further 4032 blocks).
While `MonitorEvent`s should always be processed in a timely
manner, if a node is suffering from a bug where they are not, it's
possible for 4038 blocks to pass with the preimage-containing
`MonitorEvent` still pending. If that happens, it's possible the
`ChannelMonitor` is archived even though the preimage in it is
needed in another channel (or `ChannelMonitor`), causing loss of
funds.
Luckily the fix is simple - check for pending events before
allowing a `ChannelMonitor` to be archived.
Fixes #2153
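A sketch of the archival guard, with placeholder trait and method names standing in for the real `ChannelMonitor` checks:

```rust
// Placeholder for the subset of monitor state the archival decision needs.
trait MonitorArchiveState {
    fn has_pending_monitor_events(&self) -> bool;
    fn blocks_since_balances_resolved(&self) -> u32;
}

fn should_archive<M: MonitorArchiveState>(monitor: &M) -> bool {
    // Even if every Balance resolved 4032+ blocks ago, an unprocessed MonitorEvent
    // may still carry a preimage another channel needs, so keep the monitor around.
    !monitor.has_pending_monitor_events() && monitor.blocks_since_balances_resolved() >= 4032
}
```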
At various points we've had issues where a user's `Event` processing
takes a long time, causing other things to stall. However, due to the
lack of logging during `Event` processing itself, this can be rather
difficult to debug. In
85eb8145fb we attempted to add
logging for this, but in doing so ended up with more verbose
logging than we were comfortable with.
Instead, here, we log only when we actually process an `Event`,
directly in the callsite passing the `Event` to the user.
Fixes #3331
When we handle a `ChannelMonitorUpdate` completion we always
complete everything that was waiting on any updates to the same
channel all at once. Thus, we need to skip all updates if there are
pending updates besides the one that was just completed.
We handled this correctly for open channels, but the shortcut for
closed channels ignored any other pending updates entirely.
Here we fix this, which is ultimately required for tests which are
added in a few commits to pass.
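Roughly, with simplified types, the condition being fixed for the closed-channel path can be pictured as:

```rust
// Only release everything blocked on this channel if no update other than the one
// that just completed is still in flight (this now also applies to closed channels).
fn channel_ready_to_unblock(in_flight_update_ids: &[u64], completed_update_id: u64) -> bool {
    in_flight_update_ids.iter().all(|&id| id == completed_update_id)
}
```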
c99d3d785d updated
`ChannelMonitorUpdate::update_id` to continue counting up even
after the channel is closed. It, however, accidentally updated the
`ChannelMonitorUpdate` application logic to skip testing that
`ChannelMonitorUpdate`s are well-ordered after the channel has been
closed (in an attempt to ensure other checks in the same
conditional block were applied).
This fixes that oversight.
Closing channels requires a two-step process - first
`update_maps_on_chan_removal` is called while holding the same
per-peer lock under which the channel reached the terminal state,
then after dropping the same lock(s), `finish_close_channel` is
called.
Because the channel is closed and thus no further
`ChannelMonitorUpdate`s are generated for the off-chain state, we'd
previously applied the `ChannelMonitorUpdate` in
`finish_close_channel`. This was tweaked somewhat in
c99d3d785d when we stopped using
`u64::MAX` for any updates after closure. However, we worked around
the races that this implied by setting the `update_id` only when we go
to apply the `ChannelMonitorUpdate`, rather than when we create it.
In a coming commit, we'll need to have an `update_id` immediately
upon creation (to track in-flight updates that haven't reached
application yet). This implies that we can no longer apply closure
`ChannelMonitorUpdate`s after dropping the per-peer lock(s), as the
updates must be well-ordered with any later updates to the same
channel, even after it has been closed.
Thus, here, we add `ChannelMonitorUpdate` handling to
`update_maps_on_chan_removal`, renaming it `locked_close_channel`
to better capture its new purpose.
In the coming commits we'll start handling `ChannelMonitorUpdate`s
during channel closure in-line rather than after dropping locks via
`finish_close_channel`. In order to make that easy, here we add a
new `REMAIN_LOCKED_UPDATE_ACTIONS_PROCESSED_LATER` variant to
`handle_new_monitor_update!` which can attempt to apply an update
without dropping the locks or immediately processing
`MonitorUpdateCompletionAction`s.
During block connection, we cannot apply `ChannelMonitorUpdate`s if
we're running during the startup sequence (i.e. before the user has
called any methods outside of block connection). We previously
handled this by simply always pushing any `ChannelMonitorUpdate`s
generated during block connection into the
`pending_background_events` queue.
However, this results in `ChannelMonitorUpdate`s going through the
queue when we could just push them immediately. Here we explicitly
check `background_events_processed_since_startup` and use that to
decide whether to push updates through the background queue
instead.
On startup, if we have a channel which was closed immediately
before shutdown such that the `ChannelMonitorUpdate` marking the
channel as closed is still in-flight, it doesn't make sense to
generate a fresh `ChannelMonitorUpdate` marking the channel as
closed immediately after the existing in-flight one.
Here we detect this case and drop the extra update. Though the
duplicate isn't all that harmful, dropping it does avoid some test
changes in the coming commits.
When deciding if we should remove a `PeerState` entry we want to
ensure we don't remove one if there are pending updates in
`in_flight_monitor_updates`. Previously this was done with a simple
`in_flight_monitor_updates.is_empty()` check; however, this can
prevent removal of `PeerState` entries if a channel had an update at
some point (leaving an entry in the map) but the update was
ultimately completed.
Instead, we need to iterate over the entries in
`in_flight_monitor_updates` and decline to remove `PeerState`s only
if there is an entry for a pending update still in-flight.
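A sketch of the revised check with simplified, generic types (not the actual `PeerState` fields):

```rust
use std::collections::HashMap;

// Keep the PeerState only if some channel still has an update actually in flight,
// rather than merely a leftover (now-empty) map entry from a completed update.
fn peer_has_in_flight_updates<ChannelKey, Update>(
    in_flight_monitor_updates: &HashMap<ChannelKey, Vec<Update>>,
) -> bool {
    in_flight_monitor_updates.values().any(|updates| !updates.is_empty())
}
```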
2e685ffb24 removed a field from UserConfig, which
broke this fuzz target. The hardcoded strings needed to be updated to remove
this field from them too.