This allows users who don't wish to block a full thread to receive
persistence events.
The `Future` added here is really just a trivial list of callbacks,
but from that we can build a (somewhat inefficient)
`std::future::Future` implementation and can (at least once a mapping
for `Box<dyn Trait>` is added) include the future in no-std bindings
as well.
Fixes #1595
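As a rough sketch of the idea (names and details here are illustrative, not the exact API), a callback-list future that also implements `std::future::Future` might look like:

```rust
use std::pin::Pin;
use std::sync::{Arc, Mutex};
use std::task::{Context, Poll, Waker};

// Shared state behind the future: a completion flag plus the registered
// callbacks and any wakers handed to us via `poll`.
struct FutureState {
    complete: bool,
    callbacks: Vec<Box<dyn FnOnce() + Send>>,
    wakers: Vec<Waker>,
}

#[derive(Clone)]
pub struct Future {
    state: Arc<Mutex<FutureState>>,
}

impl Future {
    /// Registers a callback to be called when the event fires
    /// (immediately, if it already has).
    pub fn register_callback(&self, cb: Box<dyn FnOnce() + Send>) {
        let mut state = self.state.lock().unwrap();
        if state.complete { cb(); } else { state.callbacks.push(cb); }
    }

    /// Marks the future complete, firing all callbacks and wakers.
    pub fn complete(&self) {
        let mut state = self.state.lock().unwrap();
        state.complete = true;
        for cb in state.callbacks.drain(..) { cb(); }
        for waker in state.wakers.drain(..) { waker.wake(); }
    }
}

impl std::future::Future for Future {
    type Output = ();
    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> {
        let mut state = self.state.lock().unwrap();
        if state.complete {
            Poll::Ready(())
        } else {
            // Somewhat inefficient: we push a fresh waker on every poll
            // rather than replacing a previously stored one.
            state.wakers.push(cx.waker().clone());
            Poll::Pending
        }
    }
}
```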
We've had some users complain that `duration_since` is panicking
for them. This is possible if the machine being run on is buggy and
the "monotonic clock" goes backwards, which sadly some ancient
systems can do.
Rust addressed this issue in 1.60 by forcing
`Instant::duration_since` not to panic if the machine is buggy
(and time goes backwards), but for users on older Rust versions we
do the same by hand here.
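As a sketch (not necessarily the exact approach taken in the crate), one can clamp to zero via `Instant::checked_duration_since`, which has been stable since Rust 1.39:

```rust
use std::time::{Duration, Instant};

// Returns the elapsed time between two instants, clamping to zero instead
// of panicking when a buggy "monotonic" clock has gone backwards.
fn saturating_duration_since(later: Instant, earlier: Instant) -> Duration {
    later.checked_duration_since(earlier).unwrap_or(Duration::from_secs(0))
}
```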
Made sure that every hop has a unique recipient. When we simulate
calling `channel_penalty_msat` in `TestRouter`'s `find_route`, use the
actual previous node ids instead of just using the payer's.
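Conceptually (a simplified sketch with node ids as plain `u64`s, not the real test types), the simulated scoring now walks the path like this:

```rust
// Score each hop against its actual previous node instead of treating
// the payer as every hop's source.
fn simulate_scoring<F: FnMut(u64, u64, u64)>(
    payer_id: u64,
    path: &[(u64, u64)], // (hop_node_id, short_channel_id)
    mut channel_penalty_msat: F, // (scid, from_node, to_node)
) {
    let mut prev_node_id = payer_id;
    for &(hop_node_id, short_channel_id) in path {
        channel_penalty_msat(short_channel_id, prev_node_id, hop_node_id);
        prev_node_id = hop_node_id;
    }
}
```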
Added two methods, `process_path_inflight_htlcs` and
`remove_path_inflight_htlcs`, that update the `payment_cache` map with
path information for payments that may have failed, succeeded, or been
given up on.
Introduced `AccountForInflightHtlcs`, which will wrap our user-provided
scorer. We move the `S: Score` type parameterization from `Router` to
`find_route`, so we can use our newly introduced
`AccountForInflightHtlcs`.
`AccountForInflightHtlcs` keeps track of a map of inflight HTLCs keyed
by short channel id and direction, giving us the value each inflight
HTLC is currently using up.
This map will in turn be populated prior to calling `find_route`, where
we'll use `create_inflight_map` to generate a current map of all
inflight HTLCs based on what was stored in `payment_cache`.
Introduces a new `PaymentInfo` struct that contains both the previously
tracked `attempts` count and the paths that are currently inflight.
In this commit, we check if a peer's outbound buffer has room for onion
messages and, if so, pull them from an implementer of a new trait,
`OnionMessageProvider`.
Makes sure channel messages are prioritized over OMs, and OMs are prioritized
over gossip.
The onion_message module remains private until further rate limiting is added.
Adds the boilerplate needed for PeerManager and OnionMessenger to work
together, with some corresponding docs and misc updates mostly due to the
PeerManager public API changing.
This allows us to better prioritize channel messages over gossip broadcasts and
lays groundwork for rate limiting onion messages more simply, since they won't
be competing with gossip broadcasts for space in the main message queue.
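The selection order, roughly (simplified stand-in types; the real logic lives in `PeerManager`'s write handling, and the buffer threshold here is an assumption):

```rust
use std::collections::VecDeque;

// Simplified stand-ins for PeerManager internals.
enum Message { Channel(Vec<u8>), Onion(Vec<u8>), Gossip(Vec<u8>) }

struct Peer {
    channel_messages: VecDeque<Vec<u8>>,
    gossip_broadcasts: VecDeque<Vec<u8>>,
    outbound_buffer_len: usize,
}

// Assumed threshold: stop pulling OMs once the outbound buffer backs up.
const OM_BUFFER_LIMIT: usize = 8;

fn next_message(
    peer: &mut Peer, next_onion_message: impl FnOnce() -> Option<Vec<u8>>,
) -> Option<Message> {
    // Channel messages always go first.
    if let Some(msg) = peer.channel_messages.pop_front() {
        return Some(Message::Channel(msg));
    }
    // Onion messages next, but only while the buffer has room, which
    // keeps them rate-limitable without starving channel traffic.
    if peer.outbound_buffer_len < OM_BUFFER_LIMIT {
        if let Some(om) = next_onion_message() {
            return Some(Message::Onion(om));
        }
    }
    // Gossip broadcasts fill any remaining space.
    peer.gossip_broadcasts.pop_front().map(Message::Gossip)
}
```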
If we don't currently have the preimage for an inbound HTLC, that
does not guarantee we can never claim it, only that we cannot claim
it unless we receive the preimage from the channel we forwarded the
HTLC out on.
Thus, we cannot consider a channel to have no claimable balances if
the only remaining output on the commitment transaction is an
inbound HTLC for which we do not have the preimage, as we may be
able to claim it in the future.
This commit addresses this issue by adding a new `Balance` variant
- `MaybePreimageClaimableHTLCAwaitingTimeout`, which is generated
until the HTLC output is spent.
Fixes #1620
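Roughly (a sketch; the field names are assumed, not the actual API), the new variant looks like:

```rust
enum Balance {
    // ... existing variants elided ...

    // An inbound HTLC we could still claim if we learn the preimage
    // before the counterparty claims it via the timeout path.
    MaybePreimageClaimableHTLCAwaitingTimeout {
        /// The amount potentially available to claim.
        claimable_amount_satoshis: u64,
        /// The height at which the HTLC expires and the counterparty may
        /// be able to claim it via the timeout path.
        expiry_height: u32,
    },
}
```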
Previously, only the `log_error` and `log_trace` macros were exported.
This change exports the macros for all log levels, which enables them
to be used downstream.
Previously, we were decoding payload lengths as a VarInt. Per the spec, this is
wrong -- it should be decoded as a BigSize. This bug also exists in our
payment payload decoding, to be fixed separately.
Upcoming reply path tests caught this bug because we hadn't encoded a payload
of 253 bytes or more before, so we hadn't hit the problem that VarInts are
encoded as little-endian whereas BigSizes are encoded as big-endian.
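To make the difference concrete (a standalone sketch, not the crate's actual codec): for a length of 256, both formats use the 0xfd-prefixed three-byte form, but with opposite byte orders.

```rust
// CompactSize ("VarInt", used by Bitcoin) writes the u16 little-endian
// (for lengths in 253..=0xffff)...
fn encode_varint_u16(n: u16) -> [u8; 3] {
    let le = n.to_le_bytes();
    [0xfd, le[0], le[1]] // 256 -> [0xfd, 0x00, 0x01]
}

// ...while the lightning BigSize format writes it big-endian.
fn encode_bigsize_u16(n: u16) -> [u8; 3] {
    let be = n.to_be_bytes();
    [0xfd, be[0], be[1]] // 256 -> [0xfd, 0x01, 0x00]
}

// Lengths of 252 or less are a single identical byte in both formats,
// which is why smaller payloads never exposed the bug.
```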
If we receive a ChannelAnnouncement message but we already have the
channel, there's no reason to do a chain lookup. Instead of
immediately calling the user-provided `chain::Access` when handling
a ChannelAnnouncement, we first check if we have the corresponding
channel in the graph.
Note that if we do have the corresponding channel but it was not
previously checked against the blockchain, we should still check
with the `chain::Access` and update if necessary.
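In pseudocode-ish Rust (simplified stand-in types, not the real graph structures):

```rust
use std::collections::HashMap;

// Simplified stand-ins for the network graph types.
struct ChannelInfo { verified_on_chain: bool }
struct NetworkGraph { channels: HashMap<u64, ChannelInfo> }

// Decide whether handling a ChannelAnnouncement needs a `chain::Access`
// UTXO lookup at all.
fn needs_chain_lookup(graph: &NetworkGraph, short_channel_id: u64) -> bool {
    match graph.channels.get(&short_channel_id) {
        // Unknown channel: do the (potentially slow) chain lookup.
        None => true,
        // Known but never verified on-chain: still look it up so the
        // entry can be checked and updated if necessary.
        Some(info) => !info.verified_on_chain,
    }
}
```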
Users need to make decisions about storage sizing, and we need to
have advice on the maximum size of the various things users need to
store. ChannelMonitorUpdates are likely the worst case of this:
they're usually at most a few KB, but can reach a few hundred KB for
commitment transactions that have 400+ HTLCs pending.
We had one user report an update (likely) going over 400 KiB. It
isn't immediately obvious to me that this is practical, but it's
within a few multiples of trivially-reachable sizes, so it likely did
occur. To be on the safe side, we simply recommend users ensure they
can support "upwards of 1 MiB" here.
The `bitcoin_key_1` and `bitcoin_key_2` fields in
`channel_announcement` messages are sorted according to node_ids
rather than the keys themselves; however, the on-chain funding
script is sorted according to the bitcoin keys themselves. Thus,
with some probability, we end up checking that the on-chain script
matches the wrong script and rejecting the channel announcement.
The correct solution is to use our existing channel funding script
generation function, which ensures we always match what we generate.
This was found in testing the Java bindings, where a test checks
that returning the generated funding script in `chain::Access`
results in the constructed channel ending up in our network graph.
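The underlying pitfall, sketched with hand-rolled script bytes for illustration (the real code calls our existing funding-script helper): the 2-of-2 script must order the keys lexicographically, which is not the `bitcoin_key_1`/`bitcoin_key_2` order.

```rust
// Builds the witness script for a channel's 2-of-2 funding output:
// OP_2 <key_a> <key_b> OP_2 OP_CHECKMULTISIG, with the keys sorted
// lexicographically -- regardless of the node_id-based order the
// channel_announcement presented them in.
fn funding_redeemscript(key_1: &[u8; 33], key_2: &[u8; 33]) -> Vec<u8> {
    let (first, second) =
        if key_1 <= key_2 { (key_1, key_2) } else { (key_2, key_1) };
    let mut script = Vec::with_capacity(71);
    script.push(0x52); // OP_2
    script.push(33);   // push 33 bytes
    script.extend_from_slice(first);
    script.push(33);   // push 33 bytes
    script.extend_from_slice(second);
    script.push(0x52); // OP_2
    script.push(0xae); // OP_CHECKMULTISIG
    script
}
```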