When receiving a BOLT 12 invoice, whether it originates from an invoice
request or a refund, the invoice should only be paid once. To accomplish
this, require that the invoice include an encrypted payment id in the
payer metadata. This allows ChannelManager to track a payment from the
time the invoice is requested, before it has been received, and thus to
determine whether the invoice has already been paid.
Metadata such as the PaymentId should be encrypted when included in an
InvoiceRequest or a Refund, as it is user data and is exposed to the
payment recipient. Add an encryption key to ExpandedKey for this purpose
instead of reusing offers_base_key.
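A minimal sketch of the idea, with hypothetical names (ExpandedKeySketch,
metadata_key, and encrypt_payment_id are illustrative, not LDK's actual
API) and a XOR mask standing in for a real cipher such as
ChaCha20Poly1305:

```rust
/// Illustrative only: `ExpandedKey` gains a key dedicated to encrypting
/// metadata rather than reusing `offers_base_key`.
struct ExpandedKeySketch {
    /// Existing key material, used for authenticating offers data.
    offers_base_key: [u8; 32],
    /// New: used only to encrypt user data placed in payer metadata.
    metadata_key: [u8; 32],
}

impl ExpandedKeySketch {
    /// Masks a PaymentId before it is embedded in InvoiceRequest/Refund
    /// payer metadata, so the recipient cannot read it. The XOR below is a
    /// placeholder; a real implementation must use an actual cipher.
    fn encrypt_payment_id(&self, payment_id: [u8; 32], nonce: [u8; 32]) -> [u8; 32] {
        let mut out = [0u8; 32];
        for i in 0..32 {
            out[i] = payment_id[i] ^ self.metadata_key[i] ^ nonce[i];
        }
        out
    }
}
```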
InvoiceRequest::verify_and_respond_using_derived_keys takes a payment
hash. To avoid generating one for invoice requests that ultimately
cannot be verified, split the method into one for verifying and another
for responding.
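A sketch of the resulting two-step shape (signatures heavily simplified;
the real methods take key material and return richer builder types):

```rust
struct InvoiceRequestSketch;
struct PaymentId([u8; 32]);
struct PaymentHash([u8; 32]);
struct InvoiceSketch;

impl InvoiceRequestSketch {
    /// Step 1: verification alone; no payment hash is needed yet.
    fn verify(&self) -> Result<PaymentId, ()> {
        // ... check the payer metadata against our derived keys ...
        Ok(PaymentId([0; 32]))
    }

    /// Step 2: a payment hash is generated only for requests that verified.
    fn respond_with(&self, _payment_hash: PaymentHash) -> Result<InvoiceSketch, ()> {
        Ok(InvoiceSketch)
    }
}
```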
We use `tokio`'s `io-util` feature to provide the
`Async{Read,Write}Ext` traits, which allow us to simply launch a
read future or call `poll_write` directly, as well as `split` the
`TcpStream` into read and write halves. However, these traits aren't
actually doing much for us - they are really just wrapping the
`readable` future (which we can trivially use ourselves), and
`poll_write` isn't doing anything for us that `poll_write_ready`
can't.
Similarly, the split logic is really just `Arc`ing the
`TcpStream` and busy-waiting when an operation is already in progress
to prevent concurrent reads/writes. However, there's no reason to
prevent concurrent access at the stream level - we aren't ever
concurrently writing or reading (though we may concurrently read and
write, which is fine).
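For illustration, here is roughly what plain `tokio::net::TcpStream`
already gives us (a sketch of the pattern, not the exact code in the
crate). Both methods take `&self`, so concurrent read-and-write needs no
`split` at all:

```rust
use std::io;
use std::task::{Context, Poll};
use tokio::net::TcpStream;

// Reading without `AsyncReadExt`: wait for readability, then `try_read`,
// retrying on the spurious-wakeup case.
async fn read_some(stream: &TcpStream, buf: &mut [u8]) -> io::Result<usize> {
    loop {
        stream.readable().await?;
        match stream.try_read(buf) {
            Ok(n) => return Ok(n),
            // `readable` may fire spuriously; just wait again.
            Err(e) if e.kind() == io::ErrorKind::WouldBlock => continue,
            Err(e) => return Err(e),
        }
    }
}

// Writing without `AsyncWriteExt`: `poll_write_ready` plus `try_write`
// covers everything `poll_write` was doing for us.
fn poll_write_some(
    stream: &TcpStream, cx: &mut Context<'_>, buf: &[u8],
) -> Poll<io::Result<usize>> {
    loop {
        match stream.poll_write_ready(cx) {
            Poll::Ready(Ok(())) => match stream.try_write(buf) {
                Ok(n) => return Poll::Ready(Ok(n)),
                // `try_write` clears readiness on `WouldBlock`; poll again
                // so the waker is re-registered.
                Err(e) if e.kind() == io::ErrorKind::WouldBlock => continue,
                Err(e) => return Poll::Ready(Err(e)),
            },
            Poll::Ready(Err(e)) => return Poll::Ready(Err(e)),
            Poll::Pending => return Poll::Pending,
        }
    }
}
```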
Worse, the `io-util` feature broke MSRV (though they're likely to
fix this upstream) and carries two additional dependencies (only
one on the latest upstream tokio).
Thus, we simply drop the dependency here.
Fixes #2527.
Releasing write locks in between monitor updates
requires storing a set of cloned keys to iterate
over. For efficiency, that set of keys is an
actual set, as opposed to an array, which means
that the iteration order may not be consistent.
The test was relying on an event array index to
access the revocation transaction. We change it
to access a hash map keyed by the txid, fixing
the test.
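The fix amounts to something like the following sketch (assuming a
rust-bitcoin version where `Transaction::txid()` is available):

```rust
use std::collections::HashMap;
use bitcoin::{Transaction, Txid};

// Key broadcast transactions by txid so lookups no longer depend on the
// order in which they were emitted.
fn index_by_txid(txn: Vec<Transaction>) -> HashMap<Txid, Transaction> {
    txn.into_iter().map(|tx| (tx.txid(), tx)).collect()
}
```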
Previously, updating block data on a chain monitor
would acquire a write lock on all of its associated
channel monitors and not release it until the loop
completed.
Now, we instead acquire it on each iteration,
fixing #2470.
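The pattern, with illustrative types rather than LDK's actual ones:

```rust
use std::collections::HashMap;
use std::sync::RwLock;

// Before: one write lock held across the entire loop. After: re-acquire
// per iteration, which is why a cloned set of keys must be collected up
// front.
fn update_monitors(monitors: &RwLock<HashMap<u64, Vec<u32>>>, keys: Vec<u64>, height: u32) {
    for key in keys {
        // The guard drops at the end of each iteration, letting readers
        // (and other writers) interleave between monitor updates.
        let mut map = monitors.write().unwrap();
        if let Some(monitor) = map.get_mut(&key) {
            monitor.push(height);
        }
    }
}
```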
- Split Score from LockableScore into ScoreLookUp to handle read
operations and ScoreUpdate to handle write operations (see the sketch
after this list)
- Change all structs that implemented Score to implement ScoreLookUp
and/or ScoreUpdate
- Change Mutexes to RwLocks to allow multiple concurrent readers
- Change LockableScore to Deref in ScorerAccountingForInFlightHtlcs,
as we only need to read
- Add ScoreLookUp and ScoreUpdate docs
- Remove the reference (&'a) and Sized bound from Score in
ScorerAccountingForInFlightHtlcs, as Score implements Deref
- Split MultiThreadedScoreLock into MultiThreadedScoreLockWrite and MultiThreadedScoreLockRead.
After splitting LockableScore, we split MultiThreadedScoreLock in the
same way, into two structs, one for reads and the other for writes.
MultiThreadedScoreLock is used in c_bindings.
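A simplified sketch of the split (LDK's real traits carry more parameters
and richer types):

```rust
use std::sync::RwLock;

// Read and write operations live in separate traits, so a scorer behind a
// RwLock can serve many concurrent readers.
trait ScoreLookUp {
    fn channel_penalty_msat(&self, short_channel_id: u64) -> u64;
}
trait ScoreUpdate {
    fn payment_path_failed(&mut self, short_channel_id: u64);
}

struct FixedPenaltyScorer { penalty_msat: u64 }
impl ScoreLookUp for FixedPenaltyScorer {
    fn channel_penalty_msat(&self, _scid: u64) -> u64 { self.penalty_msat }
}
impl ScoreUpdate for FixedPenaltyScorer {
    fn payment_path_failed(&mut self, _scid: u64) {}
}

fn lookup(scorer: &RwLock<FixedPenaltyScorer>) -> u64 {
    // Many threads may hold read locks at once; only ScoreUpdate
    // operations need the exclusive write lock.
    scorer.read().unwrap().channel_penalty_msat(42)
}
```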
This previously led to a debug panic in the router because we wouldn't account
for the blinded path fee when calculating the first_hop<>intro_node hop's
available liquidity, and would construct an invalid path that forwarded more
over said hop than was actually available.
This also led to us hitting unreachable code; see the
direct_to_matching_intro_nodes test description.
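The corrected check amounts to the following (names hypothetical,
simplified from the router's actual accounting):

```rust
// The amount evaluated against the first_hop<>intro_node hop must include
// the blinded path's aggregated fee, or we may build a path that forwards
// more over that hop than is actually available.
fn hop_can_carry(
    final_value_msat: u64, blinded_path_fee_msat: u64, available_liquidity_msat: u64,
) -> bool {
    final_value_msat.saturating_add(blinded_path_fee_msat) <= available_liquidity_msat
}
```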
The BOLT spec mandates that channels not be announced until they
have at least six confirmations. This is important to enforce not
because we particularly care about any specific DoS concerns, but
because if we do not we may have to handle reorgs of channel
funding transactions which change their SCID or have conflicting
SCIDs.
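A minimal sketch of the gating check this implies:

```rust
// Per BOLT 7, wait for at least six confirmations before announcing, after
// which an SCID-changing reorg of the funding transaction is implausible.
const ANNOUNCEMENT_MIN_CONFIRMATIONS: u32 = 6;

fn may_announce(funding_height: u32, best_height: u32) -> bool {
    best_height >= funding_height
        && best_height - funding_height + 1 >= ANNOUNCEMENT_MIN_CONFIRMATIONS
}
```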
Because a `UtxoLookup` implementation is likely to need a reference
to the `PeerManager` which contains a reference to the
`P2PGossipSync`, it is likely to be impossible to get a mutable
reference to the `P2PGossipSync` by the time we want to add a
`UtxoLookup` without a ton of boilerplate and trait wrapping.
Instead, we simply place the `UtxoLookup` in a `RwLock`, allowing
us to modify it without a mutable self reference.
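The pattern, with types simplified for illustration:

```rust
use std::sync::RwLock;

trait UtxoLookup {
    fn is_valid_channel(&self, short_channel_id: u64) -> bool;
}

struct GossipSyncSketch<U: UtxoLookup> {
    utxo_lookup: RwLock<Option<U>>,
}

impl<U: UtxoLookup> GossipSyncSketch<U> {
    fn new() -> Self {
        Self { utxo_lookup: RwLock::new(None) }
    }
    // Note &self, not &mut self: the lookup can be installed after the
    // gossip sync is already shared with the PeerManager.
    fn add_utxo_lookup(&self, lookup: U) {
        *self.utxo_lookup.write().unwrap() = Some(lookup);
    }
}
```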
The lifetime-bound updates required in tests in this commit are
entirely unclear to me, but they do allow the tests to continue
building, so somehow they make rustc happier.
In LDK, we expect users operating nodes on the public network to
implement the `UtxoSource` interface in order to validate the
gossip they receive from the network.
Sadly, because the DoS attack of flooding a node's gossip store
isn't a common issue, and because we do not provide an
off-the-shelf implementation to make doing so easy, many of our
downstream users do not have a `UtxoSource` implementation.
In order to change that, here we implement an async `UtxoSource`
in the `lightning-block-sync` crate, providing one for users who
sync the chain from Bitcoin Core's RPC or REST interfaces.
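Whatever the exact trait shape (the one in `lightning-block-sync` differs
from this sketch), the core job is fixed by BOLT 7's SCID encoding:
decompose the SCID, fetch the block at that height, and check the
referenced output:

```rust
// BOLT 7 packs an SCID as: 3 bytes block height | 3 bytes tx index |
// 2 bytes output index.
fn decompose_scid(short_channel_id: u64) -> (u32, u32, u16) {
    let block_height = (short_channel_id >> 40) as u32;
    let tx_index = ((short_channel_id >> 16) & 0xFF_FFFF) as u32;
    let output_index = (short_channel_id & 0xFFFF) as u16;
    (block_height, tx_index, output_index)
}
```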
In 3f32f60ae7 we exposed the
historical success probability buckets directly, with a long method
doc explaining how to use them. While this is great for logging
exactly what the internal model thinks, it's also helpful to let
users know the success probability the internal model assigns
directly, allowing them to compare route success probabilities.
Here we do so, but only for the historical tracking buckets.
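A toy illustration of why a direct probability is more useful than raw
buckets: per-hop estimates multiply, making candidate routes directly
comparable.

```rust
// Overall route estimate as the product of per-hop success probabilities.
fn route_success_probability(hop_success_probs: &[f64]) -> f64 {
    hop_success_probs.iter().product()
}
```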
When we attempt to score a channel which has a very low success
probability, we may have a log well above our cut-off of two. For the
liquidity penalties this works great: we bound it by
`NEGATIVE_LOG10_UPPER_BOUND` and `min` the two scores. For the
amount liquidity penalty we didn't do any `min`ing at all.
The fix is to `min` the log itself first and then reuse the `min`'d
log in both calculations.
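A sketch of the fix (constants and multipliers illustrative, not LDK's
exact formula):

```rust
const NEGATIVE_LOG10_UPPER_BOUND: u64 = 2;

// Clamp the negative-log score once, then reuse the clamped value in both
// penalty terms. Previously only the liquidity term was bounded, so the
// amount term could blow past the intended cap.
fn combined_penalty_msat(negative_log10_times_2048: u64, liq_mult_msat: u64, amt_mult_msat: u64) -> u64 {
    let log = negative_log10_times_2048.min(NEGATIVE_LOG10_UPPER_BOUND * 2048);
    (log.saturating_mul(liq_mult_msat) / 2048)
        .saturating_add(log.saturating_mul(amt_mult_msat) / 2048)
}
```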
Currently we let an `htlc_amount >= channel_capacity` pass through
from `penalty_msat` to
`calculate_success_probability_times_billion`, but only if it is
marginally bigger (less than 65/64ths of the capacity). This is fine,
as `calculate_success_probability_times_billion` handles bogus values
gracefully (it will always return a zero probability in such cases).
However, this is risky, and in fact breaks in the coming commits,
so instead we check it before ever calling through to the historical
bucket probability calculations.
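The guard, sketched with illustrative names and a toy stand-in for the
bucket math:

```rust
fn historical_success_probability(amount_msat: u64, capacity_msat: u64) -> f64 {
    // Certain failure: never consult the historical buckets for amounts at
    // or above the channel's capacity.
    if amount_msat >= capacity_msat {
        return 0.0;
    }
    // Toy stand-in for the bucket-based estimate, now only ever reached
    // with sane inputs.
    1.0 - amount_msat as f64 / capacity_msat as f64
}
```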
Here we implement `WatchtowerPersister`, a test-only sample
implementation of `Persist`, similar to how we might imagine a user
building watchtower-like functionality in the persistence pipeline.
We test that the `WatchtowerPersister` is able to successfully build and
sign a valid justice transaction that sweeps a counterparty's funds if
they broadcast an old commitment.
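Conceptually (all types here are hypothetical, not LDK's API), the
persister caches a pre-signed justice transaction per counterparty
commitment it sees, ready to broadcast on a breach:

```rust
use std::collections::HashMap;

#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct Txid([u8; 32]);
#[derive(Clone)]
struct SignedJusticeTx(Vec<u8>);

#[derive(Default)]
struct WatchtowerSketch {
    justice_txs: HashMap<Txid, SignedJusticeTx>,
}

impl WatchtowerSketch {
    // Called from the persistence pipeline whenever a monitor update
    // reveals a new counterparty commitment transaction.
    fn on_counterparty_commitment(&mut self, commitment_txid: Txid, justice: SignedJusticeTx) {
        self.justice_txs.insert(commitment_txid, justice);
    }
    // Called when a confirmed transaction matches a revoked commitment;
    // the cached transaction sweeps the counterparty's funds.
    fn respond_to_breach(&self, seen_txid: &Txid) -> Option<&SignedJusticeTx> {
        self.justice_txs.get(seen_txid)
    }
}
```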