The instructions for `ChannelManagerReadArgs` indicate that you need
to connect blocks on a newly-deserialized `ChannelManager` in a
separate pass from the newly-deserialized `ChannelMonitors`, as the
`ChannelManager` assumes the ability to update the monitors during
block [dis]connected events, saying that users need to:
```
4) Reconnect blocks on your ChannelMonitors
5) Move the ChannelMonitors into your local chain::Watch.
6) Disconnect/connect blocks on the ChannelManager.
```
This is fine for `ChannelManager`'s purpose, but is very awkward
for users. Notably, our new `lightning-block-sync` implemented
on-load reconnection in the most obvious (and performant) way -
connecting the blocks all at once, violating the
`ChannelManagerReadArgs` API.
Luckily, the events in question really don't need to be processed
with the same urgency as most channel monitor updates. The only two
monitor updates which can occur in block_[dis]connected are either
a) in block_connected, we identify a now-confirmed commitment
transaction, closing one of our channels, or
b) in block_disconnected, the funding transaction is reorganized
out of the chain, making our channel no longer funded.
In the case of (a), sending a monitor update which broadcasts a
conflicting holder commitment transaction is far from
time-critical, though we should still ensure we do it. In the case
of (b), we should try to broadcast our holder commitment transaction
when we can, but within a few minutes is fine on the scale of
block mining anyway.
Note that in both cases we cannot simply move the logic to
ChannelMonitor::block_[dis]connected, as this could result in us
broadcasting a commitment transaction from ChannelMonitor, then
revoking the now-broadcasted state, and only then receiving the
block_[dis]connected event in the ChannelManager.
Thus, we move both events into an internal event queue and process
them in timer_chan_freshness_every_min().
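A minimal sketch of the queue-and-drain pattern, with hypothetical
names (the actual ChannelManager internals differ):
```
use std::sync::Mutex;

enum BackgroundEvent {
    // A monitor update that may be applied lazily, e.g. broadcasting a
    // holder commitment transaction after a funding reorg.
    PendingMonitorUpdate(u64),
}

struct Manager {
    pending_background_events: Mutex<Vec<BackgroundEvent>>,
}

impl Manager {
    // Called from block_[dis]connected: queue instead of updating inline.
    fn queue_event(&self, ev: BackgroundEvent) {
        self.pending_background_events.lock().unwrap().push(ev);
    }

    // Called from the once-per-minute timer: drain and process the queue.
    fn timer_chan_freshness_every_min(&self) {
        let events = std::mem::take(&mut *self.pending_background_events.lock().unwrap());
        for ev in events {
            match ev {
                BackgroundEvent::PendingMonitorUpdate(channel_id) => {
                    let _ = channel_id; // apply the deferred monitor update here
                }
            }
        }
    }
}
```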
Channel::force_shutdown previously always returned a
`ChannelMonitorUpdate`, but expected it to only be applied in the
case that it *also* returned a `Some` for the funding transaction
output.
This is confusing; instead we move the `ChannelMonitorUpdate`
inside the `Option`, making it hold a tuple instead.
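As a sketch of the change (the surrounding parameter lists and exact
types here are assumptions, not copied from the code):
```
// Before: an update was always returned, but was only meant to be applied
// when the funding outpoint was also Some:
//   fn force_shutdown(&mut self)
//       -> (ChannelMonitorUpdate, Option<OutPoint>, Vec<(HTLCSource, PaymentHash)>);
//
// After: the update only exists alongside a funding outpoint to apply it
// against, so the two travel together inside the Option:
//   fn force_shutdown(&mut self)
//       -> (Option<(OutPoint, ChannelMonitorUpdate)>, Vec<(HTLCSource, PaymentHash)>);
```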
functional_tests.rs is huge, so anything we can do to split it up
is helpful. This also exposes a somewhat glaring lack of reorg
coverage in our existing tests.
We could have constructed a slightly inconsistent path: since we
reduce the value being transferred as we go, we could have violated
htlc_minimum_msat on some channels we already passed (assuming the
dest->source direction). Here, we recompute the fees again, so that
if that's the case, we match the currently underpaid
htlc_minimum_msat with fees.
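A hypothetical illustration of that recomputation (the function and
its names are invented for this sketch, not the router's actual
code): when the reduced value falls below a hop's htlc_minimum_msat,
the shortfall is made up as extra fee.
```
fn pad_hop_to_minimum(amount_msat: u64, htlc_minimum_msat: u64, fee_msat: u64) -> (u64, u64) {
    if amount_msat < htlc_minimum_msat {
        // Forward the channel's minimum instead, counting the overpayment
        // towards the fee so the path stays consistent.
        let shortfall = htlc_minimum_msat - amount_msat;
        (htlc_minimum_msat, fee_msat + shortfall)
    } else {
        (amount_msat, fee_msat)
    }
}
```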
If the ChannelManager never receives any blocks, it'll return a default blockhash
on deserialization. It's preferable for this to be an Option instead.
Refactor synchronize_listeners by pulling out a function returning the
validated block header of a BlockSource's best chain tip. This is needed
when a node is started from scratch and has no listeners to sync.
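A conceptual sketch of the extracted helper, with simplified stand-in
types (the crate's real BlockSource methods are async and perform
actual header validation):
```
struct BlockHash([u8; 32]);
struct ValidatedBlockHeader { hash: BlockHash, height: u32 }

trait BlockSource {
    fn get_best_block(&mut self) -> Result<(BlockHash, u32), ()>;
    // Assumed to validate the header against its hash before returning it.
    fn get_header(&mut self, hash: &BlockHash, height_hint: u32) -> Result<ValidatedBlockHeader, ()>;
}

// Fetch and validate the best chain tip's header, usable even when there
// are no listeners to synchronize.
fn validate_best_block_header<B: BlockSource>(source: &mut B) -> Result<ValidatedBlockHeader, ()> {
    let (best_hash, height) = source.get_best_block()?;
    source.get_header(&best_hash, height)
}
```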
The implementation of chain::Listen for ChannelMonitor required using a
RefCell since its block_connected method took a mutable borrow. Since
ChannelMonitor now uses interior mutability via a Mutex, the RefCell is
no longer needed.
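The pattern, in miniature (illustrative types, not the real
ChannelMonitor):
```
use std::sync::Mutex;

struct MonitorState { last_height: u32 }
struct Monitor { inner: Mutex<MonitorState> }

impl Monitor {
    // &self suffices: the internal Mutex provides the mutable access that
    // previously forced a RefCell at the call site.
    fn block_connected(&self, height: u32) {
        self.inner.lock().unwrap().last_height = height;
    }
}
```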
Now that ChannelMonitor uses an internal Mutex to support interior
mutability, ChainMonitor can use a RwLock to manage its ChannelMonitor
map. This allows parallelization of update_channel operations since an
exclusive lock only needs to be held when adding to the map in
watch_channel.
ChainMonitor accesses a set of ChannelMonitors behind a single Mutex.
As a result, update_channel operations cannot be parallelized. It also
requires using a RefCell around a ChannelMonitor when implementing
chain::Listen.
Moving the Mutex into ChannelMonitor avoids these problems and aligns it
better with other interfaces. Note, however, that get_funding_txo and
get_outputs_to_watch now clone the underlying data rather than returning
references.
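Continuing the illustrative Monitor sketch above, the locking scheme
looks roughly like this: a shared read lock suffices for updates
because each monitor synchronizes internally, while the write lock is
held only to insert.
```
use std::collections::HashMap;
use std::sync::RwLock;

struct ChainMonitorLike {
    monitors: RwLock<HashMap<u64, Monitor>>,
}

impl ChainMonitorLike {
    fn watch_channel(&self, channel_id: u64, monitor: Monitor) {
        // Exclusive lock, but only while adding to the map.
        self.monitors.write().unwrap().insert(channel_id, monitor);
    }

    fn update_channel(&self, channel_id: u64, height: u32) {
        // Shared lock: updates to different channels proceed in parallel.
        if let Some(monitor) = self.monitors.read().unwrap().get(&channel_id) {
            monitor.block_connected(height);
        }
    }
}
```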
We currently "support" not having a router or channel in memory by
forcing users to implement the same, but it's trivial to provide our
own dummy implementations.
This will switch us to the clang/C WASM ABI instead of the
wasm_bindgen WASM ABI as of rustc 1.51 (or nightly since [1]),
allowing us to link C and Rust code in a single wasm binary.
[1] https://github.com/rust-lang/rust/pull/79998
For ChannelManager, at least, we have two separate functions called
block_connected: one in the Listen trait and one on the struct itself.
We need to be explicit about which one we're trying to call.
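A self-contained illustration of the ambiguity (toy types): Rust
resolves method-call syntax to the inherent method first, so the
trait method must be named explicitly.
```
trait Listen {
    fn block_connected(&self, height: u32);
}

struct Manager;

impl Manager {
    // Inherent method sharing the trait method's name.
    fn block_connected(&self, height: u32, connected_txn: usize) {
        let _ = (height, connected_txn);
    }
}

impl Listen for Manager {
    fn block_connected(&self, height: u32) {
        let _ = height;
    }
}

fn main() {
    let manager = Manager;
    manager.block_connected(1, 0);        // resolves to the inherent method
    Listen::block_connected(&manager, 1); // explicitly the trait method
}
```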
Add a utility for syncing a set of chain listeners to a common chain
tip. This is required before creating an SpvClient when the chain
listener used with the client is actually a set of listeners, each of
which may have left off at a different block. This would occur when
the listeners have been persisted individually at different frequencies
(e.g., a ChainMonitor's individual ChannelMonitors).
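Conceptually (this is a simplified model, not the crate's actual
signature), the utility replays blocks to each listener from its own
persisted height up to the shared tip:
```
struct Block { height: u32 }

trait Listen {
    fn block_connected(&self, block: &Block);
}

fn synchronize_listeners(chain: &[Block], listeners: &mut [(u32, &dyn Listen)]) {
    for (synced_height, listener) in listeners.iter_mut() {
        let start = *synced_height;
        // Replay only the blocks this listener has not yet seen.
        for block in chain.iter().filter(|b| b.height > start) {
            listener.block_connected(block);
            *synced_height = block.height;
        }
    }
}
```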
Adds a lightweight client for polling one or more block sources for the
best chain tip. Notifies listeners of blocks connected or disconnected
since the last poll. Useful for keeping a Lightning node in sync with
the chain.
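A hedged usage sketch; the construction arguments and method names
below are assumptions about the API rather than verbatim from the
crate:
```
// let mut client = SpvClient::new(chain_tip, poller, &mut header_cache, &listener);
// loop {
//     // Polls the block source(s) and notifies the listener of any blocks
//     // connected or disconnected since the previous iteration.
//     client.poll_best_tip().await?;
//     tokio::time::sleep(std::time::Duration::from_secs(1)).await;
// }
```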
Add an interface for being notified of block connected and disconnected
events, along with a notifier for generating such events. Used while
polling block sources for a new tip in order to feed these events into
ChannelManager and ChainMonitor.
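The rough shape of the interface (simplified here; the actual trait in
the crate may differ in detail):
```
use bitcoin::blockdata::block::{Block, BlockHeader};

trait Listen {
    // Called when a block has been connected to the best chain.
    fn block_connected(&self, block: &Block, height: u32);
    // Called when a block has been disconnected during a reorg.
    fn block_disconnected(&self, header: &BlockHeader, height: u32);
}
```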
ChainPoller defines a strategy for polling a single BlockSource. It
handles validating chain data returned from the BlockSource. Thus, other
implementations of Poll must be defined in terms of ChainPoller.
SPV clients need to poll one or more block sources for the best chain
tip and to retrieve related chain data. The Poll trait serves as an
adaptor interface for BlockSource. Implementations may define an
appropriate polling strategy.
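A simplified sketch of the adaptor idea (names modeled on the crate
but with stand-in types and synchronous signatures):
```
struct ValidatedBlockHeader;
struct Block;

// Whether the polled tip is the same as, better than, or worse than the
// best tip known so far.
enum ChainTip {
    Common,
    Better(ValidatedBlockHeader),
    Worse(ValidatedBlockHeader),
}

trait Poll {
    // Poll for the best chain tip relative to the best known tip.
    fn poll_chain_tip(&mut self, best_known: ValidatedBlockHeader) -> Result<ChainTip, ()>;
    // Fetch the full block for a previously validated header.
    fn fetch_block(&mut self, header: &ValidatedBlockHeader) -> Result<Block, ()>;
}
```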