The current `cmp::max` doesn't align with the function comment: it
compares 2530 and `feerate_plus_quarter` instead of `feerate_per_kw
+ 2530` and `feerate_plus_quarter`. This commit fixes that.
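As a rough illustration, a minimal sketch of the intended computation
(not the actual LDK function body; it assumes the surrounding function
buffers the current feerate by the higher of 2530 sat/kWU or 25%):

```rust
use std::cmp;

// Minimal sketch only: buffer the feerate by the larger of a flat 2530 sat/kWU
// bump or a 25% bump, as the function comment describes.
fn dust_buffer_feerate(feerate_per_kw: u32) -> u32 {
    let feerate_plus_quarter = feerate_per_kw.checked_mul(1250).map(|v| v / 1000);
    // Previously: cmp::max(2530, feerate_plus_quarter.unwrap_or(u32::MAX)),
    // which drops the current feerate from the flat-bump side of the comparison.
    cmp::max(
        feerate_per_kw.saturating_add(2530),
        feerate_plus_quarter.unwrap_or(u32::MAX),
    )
}
```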
The initial noise handshake on connection establishment must complete
within a single timer tick. This timeout is enforced via
`awaiting_pong_timer_tick_intervals` whenever a timer tick fires while
our handshake has yet to complete. Currently, on an inbound connection,
if a timer tick fires after we've sent act two of the noise handshake
along with our init message and before receiving the counterparty's init
message, we begin enforcing this timeout. Even if we immediately
continue to process the counterparty's init message to complete the
handshake, the timeout enforcement is not cleared. With the handshake
complete, `awaiting_pong_timer_tick_intervals` is now tracked to enforce
a pong timeout, except a ping was never actually sent. If a single timer
tick fires again without having received a message from the peer, or
enough timer ticks fire to trigger the
`MAX_BUFFER_DRAIN_TICK_INTERVALS_PER_PEER` logic, we'll end up
disconnecting the peer due to a timeout for a pong we'll never receive.
We fix this by always resetting `awaiting_pong_timer_tick_intervals`
upon processing our counterparty's init message.
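A minimal sketch of the fix, assuming a peer struct with the counter
described above (field and method names here are illustrative rather
than the exact LDK ones):

```rust
struct Peer {
    awaiting_pong_timer_tick_intervals: i64,
    received_message_since_timer_tick: bool,
}

impl Peer {
    // Sketch: called once the counterparty's init message has been processed
    // and the handshake is fully complete.
    fn on_init_message_processed(&mut self) {
        // Any timeout that started ticking while we were still waiting for the
        // init message must not be re-interpreted as a pending pong timeout,
        // since no ping has been sent yet.
        self.awaiting_pong_timer_tick_intervals = 0;
        self.received_message_since_timer_tick = true;
    }
}
```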
Blinded paths use a pubkey to identify the introduction node, but they
will soon allow using a directed short channel id instead. Add an
introduction_node_id method to BlindedPath to facilitate lookup in the
latter case.
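A rough sketch of the direction this enables, using simplified stand-in
types rather than the real `BlindedPath`/`NetworkGraph` definitions
(the lookup closure here is purely hypothetical):

```rust
// Stand-in for a compressed public key.
type NodeId = [u8; 33];

// Sketch: the introduction node may eventually be identified either directly
// or by a directed short channel id.
enum IntroductionNode {
    NodeId(NodeId),
    // The bool picks which end of the channel is the introduction node.
    DirectedShortChannelId(bool, u64),
}

struct BlindedPath {
    introduction_node: IntroductionNode,
}

impl BlindedPath {
    // Sketch of an `introduction_node_id` accessor: trivial in the pubkey case,
    // and requiring a (hypothetical) scid-to-node lookup in the other.
    fn introduction_node_id(
        &self, resolve_scid: impl Fn(bool, u64) -> Option<NodeId>,
    ) -> Option<NodeId> {
        match &self.introduction_node {
            IntroductionNode::NodeId(id) => Some(*id),
            IntroductionNode::DirectedShortChannelId(dir, scid) => resolve_scid(*dir, *scid),
        }
    }
}
```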
Allow using either a node id or a short channel id when forwarding an
onion message. This allows for a more compact representation of blinded
paths, which is advantageous for reducing offer QR code size.
Follow-up commits will implement handling the short channel id case as
it requires looking up the destination node id.
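Illustrative sketch only (a stand-in for the forwarding type described
above, not the exact LDK definition); a short channel id is 8 bytes
versus 33 bytes for a node id, which is where the size saving comes
from:

```rust
// Sketch: the next hop in an onion message's forwarding instructions can now
// be named either way.
enum NextMessageHop {
    // Forward to the node with this id.
    NodeId([u8; 33]),
    // Forward over the channel with this short channel id; resolving it to the
    // counterparty's node id is left to the follow-up commits mentioned above.
    ShortChannelId(u64),
}
```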
cc78b77c71 fixed an important
downgrade bug, but neglected to add test coverage. Here we rectify
that by adding a few simple tests of common cases.
Tests heavily tweaked by Matt Corallo <git@bluematt.me>.
If we are reading an object that is `MaybeReadable` in a TLV stream
using `upgradable_required`, it may return early with `Ok(None)`.
In this case, it will not read any further TLVs from the TLV
stream. This is fine, except that we generally expect
`MaybeReadable` to always consume the correct number of bytes for the
full object, even if it doesn't understand it.
This could pose a problem, for example, in cases where we're
reading a TLV-stream `MaybeReadable` object inside another
TLV-stream object. In that case, the `MaybeReadable` object may
return `Ok(None)` and not consume all the available bytes, causing
the outer TLV read to fail as the TLV length does not match.
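A hedged sketch of the expected contract (simplified stand-ins, not the
real LDK traits): even when the implementation doesn't understand the
object, it should consume exactly the bytes belonging to it before
returning `Ok(None)`, so the outer length check still matches.

```rust
use std::io::{self, Read};

struct UnknownObject;

// Sketch: `len` is the full serialized length of the object within the outer
// TLV stream.
fn maybe_read_unknown<R: Read>(reader: &mut R, len: u64) -> io::Result<Option<UnknownObject>> {
    // Discard exactly `len` bytes instead of returning early with bytes left
    // unread, which would desynchronize the outer TLV read.
    io::copy(&mut reader.take(len), &mut io::sink())?;
    Ok(None)
}
```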
`impl_writeable_tlv_based_enum_upgradable` professed to support
upgrades by returning `None` from `MaybeReadable` when unknown
variants written by newer versions of LDK were read. However, it
generally didn't support this, as it didn't discard the bytes of
unknown variants, resulting in corrupt reads.
This is fixed here for enum variants written as a TLV stream.
However, we don't have a length prefix for tuple enum variants, so
the documentation on the macro is updated to mention that
downgrades are not supported for tuple variants.
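A simplified sketch of the read path this describes (not the actual
macro expansion; the length prefix is shown as a fixed-size big-endian
u64 purely for illustration):

```rust
use std::io::{self, Read};

enum MyEnum {
    Known { value: u64 },
}

fn read_my_enum<R: Read>(reader: &mut R) -> io::Result<Option<MyEnum>> {
    let mut variant_id = [0u8; 1];
    reader.read_exact(&mut variant_id)?;
    match variant_id[0] {
        0 => {
            // Known struct-like variant: parse its length-prefixed TLV stream
            // (elided in this sketch).
            Ok(Some(MyEnum::Known { value: 0 }))
        },
        id if id % 2 == 1 => {
            // Unknown odd variant written by a newer version: read the length
            // prefix and discard exactly that many bytes so the outer read
            // stays in sync, then report the variant as not understood.
            let mut len_bytes = [0u8; 8];
            reader.read_exact(&mut len_bytes)?;
            let len = u64::from_be_bytes(len_bytes);
            io::copy(&mut reader.take(len), &mut io::sink())?;
            Ok(None)
        },
        // Tuple variants carry no length prefix, so an unknown required (even)
        // variant cannot be skipped and must be treated as an error.
        _ => Err(io::Error::new(io::ErrorKind::InvalidData, "unknown required variant")),
    }
}
```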
New rustc beta now warns on duplicate imports when one of the
imports is from a wildcard import or the default prelude. Thus, to
avoid this, here we prefer to always use `crate::prelude::*` and let
it decide if we actually need to import anything.
New rustc beta now warns on duplicate imports when one of the
imports is from a wildcard import or the default prelude. Thus, for
simplicity, we need to make our `crate::prelude` mostly identical
to the `std` one, allowing us to always simply use
`crate::prelude::*` and let it decide if we need to import anything.
New rustc now warns on duplicate imports when one of the imports
is from a wildcard import or the default prelude. Thus, because we
often don't actually use the imports from our prelude (as they
exist to duplicate the `std` default prelude), we have to mark most
of our `crate::prelude` imports with `#[allow(unused_imports)]`,
which we do here.
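Concretely, the prelude imports now generally look like this
(illustrative module context):

```rust
// May duplicate the default `std` prelude on some configurations, so silence
// the new duplicate/unused-import warning rather than trimming per-module.
#[allow(unused_imports)]
use crate::prelude::*;
```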
Previously, `handle_message` was a single large method consisting of two
logical parts: one modifying the peer state hence requiring us to hold
the `peer_lock` `MutexGuard`, and, after calling `mem::drop(peer_lock)`,
the remainder, which not only does *not* require holding the
`MutexGuard`, but relies on it being dropped to avoid double-locking.
However, the `mem::drop` was easily overlooked, making reasoning about
lock orders etc. a headache. Here, we therefore have
`handle_message` call two sub-methods reflecting the two logical parts,
allowing us to avoid the explicit `mem::drop` while making the code
less error-prone, as the locking requirement is now reflected in the
two methods' signatures.
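A rough sketch of the resulting shape (types, names, and signatures are
simplified here, not the exact LDK ones):

```rust
use std::sync::{Mutex, MutexGuard};

struct PeerState { /* fields guarded by the per-peer lock */ }
struct Message;
enum PostLockAction { Forward(Message), Nothing }

struct PeerManager { peer: Mutex<PeerState> }

impl PeerManager {
    fn handle_message(&self, message: Message) {
        let peer_lock = self.peer.lock().unwrap();
        // Part one takes the guard by value, so the lock is released as soon
        // as it returns; there is no `mem::drop` to forget.
        let action = self.handle_message_holding_peer_lock(peer_lock, message);
        // Part two cannot accidentally be handed the guard, so it is free to
        // take other locks without risking lock-order issues.
        self.handle_message_without_peer_lock(action);
    }

    fn handle_message_holding_peer_lock(
        &self, _peer_lock: MutexGuard<'_, PeerState>, message: Message,
    ) -> PostLockAction {
        // ... mutate the peer state here ...
        PostLockAction::Forward(message)
    }

    fn handle_message_without_peer_lock(&self, action: PostLockAction) {
        // ... do the work that relies on the peer lock being dropped ...
        let _ = action;
    }
}
```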
- We might generate channel updates for broadcast at a time when
we are not connected to any peers to broadcast them to.
- This PR ensures such updates are cached and broadcast only once
we are connected to some peers (see the sketch after the list below).
Other Changes:
1. Introduce a test.
2. Update the existing tests affected by this change.
3. Fix a typo.
4. Introduce two functions in functional_utils that optionally
connect and disconnect a dummy node during broadcast testing.
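A minimal sketch of the caching idea mentioned in the first bullet
(field and type names are illustrative, not the exact LDK ones):

```rust
use std::sync::Mutex;

struct BroadcastMessage; // stand-in for e.g. a channel_update

struct Manager {
    // Gossip messages we want to broadcast but haven't had a peer to send to.
    pending_broadcast_messages: Mutex<Vec<BroadcastMessage>>,
}

impl Manager {
    // Sketch: always queue; actual broadcasting happens once peers exist.
    fn queue_broadcast(&self, msg: BroadcastMessage) {
        self.pending_broadcast_messages.lock().unwrap().push(msg);
    }

    // Sketch: called periodically by the message handler; returns the messages
    // to broadcast now, or keeps them cached if no peer is connected yet.
    fn take_pending_broadcasts(&self, connected_peers: usize) -> Vec<BroadcastMessage> {
        if connected_peers == 0 {
            return Vec::new();
        }
        std::mem::take(&mut *self.pending_broadcast_messages.lock().unwrap())
    }
}
```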