Using the channel's full capacity in scoring gives more accurate
success probabilities when the maximum HTLC value is less than the
channel capacity. Change `EffectiveCapacity` to prefer the channel's
capacity over its maximum HTLC limit, but still use the latter for
route finding.
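As a rough illustration of that split (the enum variants and helper
methods below are a simplified sketch, not the exact LDK API):

    /// Sketch: prefer capacity for scoring, keep the HTLC maximum
    /// around for capping route amounts.
    enum EffectiveCapacitySketch {
        Total { capacity_msat: u64, htlc_maximum_msat: Option<u64> },
        MaximumHTLC { amount_msat: u64 },
    }

    impl EffectiveCapacitySketch {
        /// Value fed into the scorer's success-probability estimate.
        fn scoring_msat(&self) -> u64 {
            match self {
                Self::Total { capacity_msat, .. } => *capacity_msat,
                Self::MaximumHTLC { amount_msat } => *amount_msat,
            }
        }

        /// Value used to cap the amount routed over the channel.
        fn routing_msat(&self) -> u64 {
            match self {
                Self::Total { capacity_msat, htlc_maximum_msat } =>
                    htlc_maximum_msat.unwrap_or(*capacity_msat).min(*capacity_msat),
                Self::MaximumHTLC { amount_msat } => *amount_msat,
            }
        }
    }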
In fc77c57c3c we stopped using the
`FinalOnionHopData` in `OnionPayload::Invoice` directly and intend
to remove it eventually. However, in the next few commits we need
access to the payment secret when claiming a payment, as we create
a new `PaymentPurpose` during the claim process for a new event.
In order to get access to a `PaymentPurpose` without having access
to the `FinalOnionHopData`, here we change the storage of
`claimable_htlcs` to store a single `PaymentPurpose` explicitly
with each set of claimable HTLCs.
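A rough sketch of the storage change (type and field names here are
simplified placeholders, not necessarily the exact ones used in
`ChannelManager`):

    use std::collections::HashMap;

    struct ClaimableHTLC; // placeholder for the real per-HTLC state

    enum PaymentPurpose {
        InvoicePayment { payment_secret: [u8; 32] },
        SpontaneousPayment,
    }

    // Before: only the HTLCs were stored, so the purpose had to be
    // rebuilt from each HTLC's onion data at claim time.
    type ClaimableHtlcsOld = HashMap<[u8; 32], Vec<ClaimableHTLC>>;

    // After: one PaymentPurpose is stored explicitly alongside each
    // set of claimable HTLCs, keyed by payment hash.
    type ClaimableHtlcsNew =
        HashMap<[u8; 32], (PaymentPurpose, Vec<ClaimableHTLC>)>;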
In fc77c57c3c we stopped using the
`FinalOnionHopData` in `OnionPayload::Invoice` directly and renamed
it `_legacy_hop_data` with the intent of removing it in a few
versions. However, we continue to check that it was included in the
serialized data, meaning we would not be able to remove it without
breaking the ability to serialize full `ChannelManager`s.
This fixes that by making `_legacy_hop_data` an `Option` which we
will happily handle just fine if it's `None`.
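Conceptually (the types below are simplified and the serialization
macros are omitted):

    struct FinalOnionHopData { payment_secret: [u8; 32], total_msat: u64 }

    enum OnionPayload {
        Invoice {
            // Was required; now optional so that old serializations
            // still read fine and the field can be dropped from new
            // writes in a later version.
            _legacy_hop_data: Option<FinalOnionHopData>,
        },
        Spontaneous,
    }

    fn read_invoice_payload(hop_data: Option<FinalOnionHopData>) -> OnionPayload {
        // A missing field simply becomes `None` rather than a
        // deserialization failure.
        OnionPayload::Invoice { _legacy_hop_data: hop_data }
    }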
We have a bunch of fancy infrastructure to ensure we can connect
blocks using all our different connection interfaces, but we only
bother to use it in a few select tests.
This expands our use of `ConnectStyle` to most of our tests by
simply randomizing the style in each test. This makes our tests
non-deterministic, but we print the connection style at the start of
each test so that it's easy to reproduce a failure deterministically.
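The randomization itself can be as simple as picking a variant at
test start and logging it (a sketch; the real `ConnectStyle` has more
variants and the selection lives in the test utilities):

    #[derive(Clone, Copy, Debug)]
    enum ConnectStyle {
        BestBlockFirst,
        TransactionsFirst,
        FullBlockViaListen,
    }

    fn random_connect_style() -> ConnectStyle {
        use std::time::{SystemTime, UNIX_EPOCH};
        // Cheap randomness source so the tests stay dependency-free.
        let seed = SystemTime::now()
            .duration_since(UNIX_EPOCH).unwrap().subsec_nanos();
        let style = match seed % 3 {
            0 => ConnectStyle::BestBlockFirst,
            1 => ConnectStyle::TransactionsFirst,
            _ => ConnectStyle::FullBlockViaListen,
        };
        // Printed so a failing run can be reproduced deterministically.
        println!("Using connect style: {:?}", style);
        style
    }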
In the next commit we'll randomize the `ConnectStyle` used in each
test. However, some tests are slightly too prescriptive, which we
address here in a few places.
This update also includes a minor refactor: the return type of
`pending_monitor_events` is now a `Vec` of tuples, each pairing an
`OutPoint` with a `Vec` of `MonitorEvent`s, associating each set of
events with its funding outpoint.
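Roughly, the signature change looks like this (the surrounding trait
and types are simplified placeholders):

    struct OutPoint { txid: [u8; 32], index: u16 }
    enum MonitorEvent { /* ... */ }

    trait Watch {
        // Before:
        // fn pending_monitor_events(&self) -> Vec<MonitorEvent>;

        // After: each batch of events is tied to the funding outpoint
        // of the monitor that produced it.
        fn pending_monitor_events(&self) -> Vec<(OutPoint, Vec<MonitorEvent>)>;
    }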
We've also renamed `source/sink_channel_id` to `prev/next_channel_id`
for clarity.
As the `counterparty_node_id` is now required to be passed back to the
`ChannelManager` to accept or reject an inbound channel request, the
documentation is updated to reflect that.
Instead of including a `Secp256k1` context per
`PeerChannelEncryptor`, which is relatively expensive memory-wise
and nontrivial CPU-wise to construct, we should keep one for all
peers and simply reuse it.
This is relatively trivial so we do so in this commit.
Since it's trivial to do so, we also take this opportunity to
randomize the new `PeerManager` context.
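A minimal sketch of the idea, assuming the rust-secp256k1 crate's
`Secp256k1::signing_only()` and `seeded_randomize()` (the surrounding
struct is a placeholder):

    use secp256k1::{Secp256k1, SignOnly};

    struct PeerManagerSketch {
        // One context shared by every PeerChannelEncryptor instead of
        // constructing one per peer.
        secp_ctx: Secp256k1<SignOnly>,
    }

    impl PeerManagerSketch {
        fn new(ephemeral_random_data: &[u8; 32]) -> Self {
            let mut secp_ctx = Secp256k1::signing_only();
            // Randomize the shared context once at construction to
            // blind it against side-channel attacks.
            secp_ctx.seeded_randomize(ephemeral_random_data);
            Self { secp_ctx }
        }
    }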
Because we handle messages (which can take some time, persisting
things to disk or validating cryptographic signatures) with the
top-level read lock, but require the top-level write lock to
connect new peers or handle disconnection, we are particularly
sensitive to writer starvation issues.
Rust's libstd RwLock does not provide any fairness guarantees,
using whatever the OS provides as-is. On Linux, pthreads defaults
to starving writers, which Rust's RwLock exposes to us (without
any configurability).
Here we work around that issue by blocking readers if there are
pending writers, optimizing for readable code over
perfectly-optimized blocking.
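The workaround can be expressed as a small writer-preferring wrapper
around the std RwLock (a simplified sketch of the idea; names are
illustrative):

    use std::sync::atomic::{AtomicUsize, Ordering};
    use std::sync::{RwLock, RwLockReadGuard, RwLockWriteGuard};

    struct FairRwLock<T> {
        lock: RwLock<T>,
        waiting_writers: AtomicUsize,
    }

    impl<T> FairRwLock<T> {
        fn new(t: T) -> Self {
            Self { lock: RwLock::new(t), waiting_writers: AtomicUsize::new(0) }
        }

        fn write(&self) -> RwLockWriteGuard<'_, T> {
            // Advertise the pending writer so that new readers hold off.
            self.waiting_writers.fetch_add(1, Ordering::AcqRel);
            let guard = self.lock.write().unwrap();
            self.waiting_writers.fetch_sub(1, Ordering::AcqRel);
            guard
        }

        fn read(&self) -> RwLockReadGuard<'_, T> {
            // If a writer is waiting, briefly take and drop the write
            // lock ourselves: this queues us behind the writer rather
            // than letting readers starve it indefinitely.
            if self.waiting_writers.load(Ordering::Acquire) != 0 {
                let _queue_behind_writer = self.lock.write().unwrap();
            }
            self.lock.read().unwrap()
        }
    }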
Only one instance of PeerManager::process_events can run at a time,
and each run always finishes all available work before returning.
Thus, having several threads blocked on the process_events lock
doesn't accomplish anything but blocking more threads.
Here we limit the number of concurrent calls to process_events to
two - one actually processing events and one blocked at the top,
which will process all available events after the first completes.
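One way to bound this, loosely modeled on the approach described
above (names and details are illustrative):

    use std::sync::atomic::{AtomicBool, Ordering};
    use std::sync::Mutex;

    struct EventProcessorSketch {
        event_processing_lock: Mutex<()>,
        blocked_event_processors: AtomicBool,
    }

    impl EventProcessorSketch {
        fn process_events(&self) {
            let mut guard = self.event_processing_lock.try_lock();
            if guard.is_err() {
                // Someone is already processing. Allow exactly one
                // additional caller to block behind them; everyone else
                // returns immediately, as the blocked caller will pick
                // up their events anyway.
                if self.blocked_event_processors
                    .compare_exchange(false, true, Ordering::AcqRel, Ordering::Acquire)
                    .is_err()
                {
                    return;
                }
                guard = Ok(self.event_processing_lock.lock().unwrap());
                self.blocked_event_processors.store(false, Ordering::Release);
            }
            let _guard = guard.unwrap();
            // ... drain and process all available events here ...
        }
    }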
Because the peers write lock "blocks the world", and happens after
each read event, always taking the write lock has pretty severe
impacts on parallelism. Instead, here, we only take the global
write lock if we have to disconnect a peer.
Users are required to only ever call `read_event` serially
per-peer, thus we actually don't need any locks while we're
processing messages - we can only be processing messages in one
thread per-peer.
That said, we do need to ensure that another thread doesn't
disconnect the peer we're processing messages for, as that could
result in a peer_disconnected call while we're processing a
message for the same peer - somewhat nonsensical.
This significantly improves parallelism especially during gossip
processing as it avoids waiting on the entire set of individual
peer locks to forward a gossip message while several other threads
are validating gossip messages with their individual peer locks
held.
This adds the required locking to process messages from different
peers simultaneously in `PeerManager`. Note that channel messages
are still processed under a global lock in `ChannelManager`, and
most work is still processed under a global lock in gossip message
handling, but parallelizing message deserialization and message
decryption is somewhat helpful.
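In sketch form, the locking structure looks roughly like this (types
and names are simplified; the real code keys peers by descriptor and
uses the writer-preferring lock described earlier):

    use std::collections::HashMap;
    use std::sync::{Mutex, RwLock};

    struct Peer; // per-peer read buffer, noise state, pending writes, ...

    struct PeersSketch {
        // Read-locked while handling messages; write-locked only to
        // add or remove (disconnect) peers.
        peers: RwLock<HashMap<u64, Mutex<Peer>>>,
    }

    impl PeersSketch {
        fn read_event(&self, peer_id: u64, data: &[u8]) {
            let peers = self.peers.read().unwrap();
            if let Some(peer_mutex) = peers.get(&peer_id) {
                // Holding the per-peer Mutex keeps a concurrent
                // disconnect from tearing this peer down mid-message,
                // while messages for other peers decrypt and
                // deserialize in parallel.
                let _peer = peer_mutex.lock().unwrap();
                let _ = data; // ... decrypt and handle the message ...
            }
        }
    }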
Add test that ensures that channels are closed with
`ClosureReason::DisconnectedPeer` if the peer disconnects before the
funding transaction has been broadcast.
During a `channel_reestablish` we now send a warning message when we
receive an old commitment transaction from the peer.
In addition, this commit updates the functional tests to make sure
that the receiver generates warning messages.
Signed-off-by: Vincenzo Palazzo <vincenzopalazzodev@gmail.com>