In 2826af75a5 we fixed a fuzz crash
in which the total reserve values in a channel were greater than
the funding amount, adding a check when an incoming channel is
accepted. This, however, did not fix the same issue for outbound
channels, where a peer can accept a channel with a nonsense reserve
value in the `accept_channel` message. The `full_stack_target`
fuzzer eventually found its way to the same issue, which this
commit resolves.
Thanks (again) to Chaincode Labs for providing the fuzzing
resources which found this bug!
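A minimal sketch of the kind of check this implies, with hypothetical names rather than LDK's actual API:

```rust
// Hypothetical sketch: sanity-check the reserve values in `accept_channel`
// against the funding amount before proceeding with an outbound open,
// mirroring the check already done for inbound channels.
fn check_accept_channel_reserve(
    funding_satoshis: u64,
    our_reserve_satoshis: u64,
    their_reserve_satoshis: u64,
) -> Result<(), &'static str> {
    // If the two reserves together exceed the funding amount, neither side
    // could ever meet its reserve, so reject the channel up front.
    if our_reserve_satoshis.saturating_add(their_reserve_satoshis) > funding_satoshis {
        return Err("channel reserves exceed the funding amount");
    }
    Ok(())
}
```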
The `full_stack_target` fuzzer managed to find a subtraction
underflow in the new `Channel::get_htlc_maximum` function where we
subtract both sides' reserve values from the channel funding. Such
a channel is obviously completely useless, so we should reject it
during opening instead of integer-underflowing later.
Thanks to Chaincode Labs for providing the fuzzing resources which
found this bug!
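For illustration, a defensive way to write that arithmetic (a sketch with made-up names, not the real `get_htlc_maximum` body) avoids the underflow even if such a channel slips through:

```rust
// Hypothetical sketch: subtract both sides' reserves from the channel value
// without risking integer underflow when the reserves exceed the funding.
fn htlc_maximum_msat(
    channel_value_msat: u64,
    our_reserve_msat: u64,
    their_reserve_msat: u64,
) -> u64 {
    channel_value_msat
        .saturating_sub(our_reserve_msat)
        .saturating_sub(their_reserve_msat)
}
```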
When we receive multiple HTLCs which claim to be part of the same
MPP but which are inconsistent for some reason, we should fail the
inconsistent HTLCs but keep the HTLCs received prior to the first
inconsistency.
This works, but it turns out there was no test coverage, so we add
some here.
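As a rough illustration of the intended behavior (hypothetical types, not the ChannelManager's internals), the received parts can be split at the first one that disagrees with the first-received part:

```rust
// Hypothetical sketch: keep the consistent prefix of received MPP parts and
// fail everything from the first inconsistent part onward.
struct HtlcPart {
    total_msat: u64,
    payment_secret: [u8; 32],
}

fn split_at_first_inconsistency(parts: &[HtlcPart]) -> (&[HtlcPart], &[HtlcPart]) {
    let first_bad = match parts.first() {
        Some(first) => parts
            .iter()
            .position(|p| p.total_msat != first.total_msat || p.payment_secret != first.payment_secret)
            .unwrap_or(parts.len()),
        None => 0,
    };
    // Everything before `first_bad` is kept; everything from it onward is failed back.
    parts.split_at(first_bad)
}
```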
The `chain::Listen` interface provides a block-connection-based
alternative to the `chain::Confirm` interface, which supports
providing transaction data at a time separate from the block
connection time.
For users who are downloading the full headers tree (e.g. from a
node over the Bitcoin P2P protocol) but who are not downloading
full blocks (e.g. because they're using BIP 157/158 filtering),
there is no API that exactly matches their event stream -
`chain::Listen` requires a full block for each connected block,
while `chain::Confirm` requires breaking each connection event into
two calls.
Given it's incredibly trivial to take a `TransactionData` in
addition to a `Block` in `chain::Listen`, we do so here, adding a
default-implementation `block_connected` which simply creates the
`TransactionData` - which is what all of the `chain::Listen`
implementations currently do anyway.
Closes #1128.
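A simplified sketch of the resulting shape (the type alias and trait below only mirror the idea; see the actual `chain::Listen` definition for the real signatures):

```rust
use bitcoin::blockdata::block::{Block, BlockHeader};
use bitcoin::blockdata::transaction::Transaction;

/// (index-in-block, transaction) pairs, in the spirit of
/// lightning's `chain::transaction::TransactionData`.
pub type TransactionData<'a> = [(usize, &'a Transaction)];

pub trait Listen {
    /// Called with only the transactions the user actually has.
    fn filtered_block_connected(&self, header: &BlockHeader, txdata: &TransactionData, height: u32);

    /// Default implementation for users who still have full blocks: derive the
    /// transaction data and forward to `filtered_block_connected`.
    fn block_connected(&self, block: &Block, height: u32) {
        let txdata: Vec<_> = block.txdata.iter().enumerate().collect();
        self.filtered_block_connected(&block.header, &txdata, height);
    }

    fn block_disconnected(&self, header: &BlockHeader, height: u32);
}
```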
The main loop of the background processor has this line:
`peer_manager.process_events(); // Note that this may block on ChannelManager's locking`
which does, indeed, sometimes block waiting on the `ChannelManager`
to finish whatever it's doing. Specifically, it's the only place in
the background processor loop that we block waiting on the
`ChannelManager`, so if the `ChannelManager` is relatively busy, we
may end up being blocked there most of the time.
This should be fine, except today we had a user whose node was
particularly slow in processing some channel updates, resulting in
the background processor being blocked there (as expected). Then,
when the channel updates were completed (and persisted), the next
thing the background processor did was hand the user events to
process, creating yet more channel updates. Ultimately, the user's
node crashed before finishing the event processing. This left us
with an updated monitor on disk and an outdated manager, and they
lost the channel on startup.
Here we simply move the above quoted line to after the normal event
processing, ensuring the next thing we do after blocking on
`ChannelManager` locks is persist the manager, prior to event
handling.
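For context, a hypothetical, self-contained sketch of the resulting loop ordering; the stub types below only mimic the shapes involved and are not the real BackgroundProcessor code:

```rust
use std::time::Duration;

struct ChannelManager;
struct ChainMonitor;
struct PeerManager;
struct Persister;

impl ChannelManager {
    fn process_pending_events(&self, _handler: &dyn Fn()) {}
    fn await_persistable_update_timeout(&self, _timeout: Duration) -> bool { true }
}
impl ChainMonitor { fn process_pending_events(&self, _handler: &dyn Fn()) {} }
impl PeerManager { fn process_events(&self) {} }
impl Persister { fn persist_manager(&self, _manager: &ChannelManager) {} }

fn background_loop(
    channel_manager: &ChannelManager,
    chain_monitor: &ChainMonitor,
    peer_manager: &PeerManager,
    persister: &Persister,
    event_handler: &dyn Fn(),
) {
    loop {
        // Handle any pending events first...
        channel_manager.process_pending_events(event_handler);
        chain_monitor.process_pending_events(event_handler);
        // ...then let this call block on the ChannelManager's locks if it must...
        peer_manager.process_events();
        // ...so that the very next step afterwards is persisting the manager,
        // before further event handling can generate new updates.
        if channel_manager.await_persistable_update_timeout(Duration::from_millis(100)) {
            persister.persist_manager(channel_manager);
        }
    }
}
```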
MAX_FUNDING_SATOSHIS will no longer be accurately named once wumbo is merged.
Also, we'll want to check that wumbo channels don't exceed the total bitcoin supply.
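For example (illustrative constant and helper only, not the eventual names):

```rust
// Illustrative only: a wumbo funding amount must still be below the total
// number of satoshis that can ever exist (21 million BTC).
const TOTAL_BITCOIN_SUPPLY_SATOSHIS: u64 = 21_000_000 * 100_000_000;

fn funding_amount_sane(funding_satoshis: u64) -> bool {
    funding_satoshis < TOTAL_BITCOIN_SUPPLY_SATOSHIS
}
```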
`ChannelDetails::outbound_capacity_msat` describes the total amount
available for sending across several HTLCs, basically just our
balance minus the reserve value maintained by our counterparty.
However, when routing we use it to guess the maximum amount we can
send in a single additional HTLC, which it is not.
There are numerous reasons why our balance may not match the amount
we can send in a single HTLC, whether it's the HTLC in-flight limit,
the channel's HTLC maximum, or our feerate buffer.
This commit splits the `outbound_capacity_msat` field into two -
`outbound_capacity_msat` and `outbound_htlc_limit_msat`, setting us
up for correctly handling our next-HTLC-limit in the future.
This also addresses the first of the reasons why the values may
not match - the max-in-flight limit. The inaccuracy is ultimately
tracked as #1126.
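To illustrate the distinction (field and method names here are made up, not `ChannelDetails`'s actual layout):

```rust
// Hypothetical sketch: the total outbound capacity is our balance minus the
// counterparty's reserve, while the amount a single additional HTLC can carry
// is further capped by limits such as the counterparty's
// max-HTLC-value-in-flight.
struct ChannelLimits {
    balance_msat: u64,
    counterparty_reserve_msat: u64,
    counterparty_max_htlc_value_in_flight_msat: u64,
    pending_outbound_htlcs_msat: u64,
}

impl ChannelLimits {
    fn outbound_capacity_msat(&self) -> u64 {
        self.balance_msat.saturating_sub(self.counterparty_reserve_msat)
    }

    // The next single HTLC cannot exceed the remaining in-flight allowance.
    fn outbound_htlc_limit_msat(&self) -> u64 {
        let in_flight_room = self
            .counterparty_max_htlc_value_in_flight_msat
            .saturating_sub(self.pending_outbound_htlcs_msat);
        self.outbound_capacity_msat().min(in_flight_room)
    }
}
```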
Default to creating TLV onions for nodes for which we haven't received
any features through node announcements or which aren't in the
`network_graph`, and where no other features are known, such as invoice
features or features in the init msg for nodes we have channels to.
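One possible shape for that fallback (a hypothetical helper with an illustrative ordering, not the router's actual code):

```rust
// Hypothetical sketch: consult whatever feature sources we have for a node
// and, when none of them tell us anything, assume TLV onion support.
struct KnownFeatures {
    supports_variable_length_onion: bool,
}

fn use_tlv_onion(
    invoice_features: Option<&KnownFeatures>,
    node_announcement_features: Option<&KnownFeatures>,
    init_features: Option<&KnownFeatures>,
) -> bool {
    invoice_features
        .or(node_announcement_features)
        .or(init_features)
        .map(|features| features.supports_variable_length_onion)
        // No information about the node at all: default to a TLV onion.
        .unwrap_or(true)
}
```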