We currently only use it to override the graph-specific features
returned in the route, though we should also use it to enable or
disable MPP.
Note that tests which relied on MPP behavior have had all of their
get_route calls upgraded to provide the MPP flag.
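As a sketch, an upgraded call site might look as follows (parameter
list abbreviated and names placeholder, not the exact signature):
```
// Passing the payee's features from the invoice lets the router decide
// whether it may return multiple paths; None keeps single-path routing.
let route = get_route(
    &our_node_id, &network_graph, &payee_node_id,
    Some(InvoiceFeatures::known()), // the MPP flag lives in here
    Some(&first_hops), &last_hops, 100_000, final_cltv, &logger,
);
```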
In the past we skipped doing this since invoice parsing occurs in a
different crate. However, we need to accept InvoiceFeatures in routing
now that we support MPP route collection, to detect if we can select
multiple paths or not. Further, we should probably take
rust-lightning-invoice as either a module or a subcrate in this repo.
`get_outputs_to_watch` returned a reference to an existing
`HashMap`, avoiding extra clones, but there isn't a huge reason to
do so now that we have to clone to copy it out of the
`ChannelMonitor` mutex. Instead, return a `Vec`, since it may use
less memory and it allows us to have a C bindings mapping for the
function.
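As a sketch of the shape change (element types illustrative,
surrounding generics elided):
```
use std::collections::HashMap;
use bitcoin::hash_types::Txid;
use bitcoin::blockdata::script::Script;

// Shapes only; these traits stand in for ChannelMonitor's inherent impl.
trait Before { fn get_outputs_to_watch(&self) -> &HashMap<Txid, Vec<Script>>; }
trait After { fn get_outputs_to_watch(&self) -> Vec<(Txid, Vec<Script>)>; }
```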
Co-authored-by: Jeffrey Czyz <jkczyz@gmail.com>
Co-authored-by: Matt Corallo <git@bluematt.me>
This is largely useful for bindings, and the off-github discussion
around #814 concluded these should be pub, but the PR was not
updated to capture this. Now that the bindings support generating
these structs, expose them.
When the (somewhat anti-pattern)
`impl Deref for X { type Target = X; .. }` is used, this avoids an
infinite dereference loop when trying to figure out what type to
resolve `is_null` against.
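For reference, the pattern in question looks like this (type name
illustrative):
```
use core::ops::Deref;

struct Handle;

// A type that dereferences to itself: naively chasing `Deref::Target`
// to decide which type's `is_null` to resolve would never terminate,
// so the resolver must notice `Target = Self` and stop.
impl Deref for Handle {
    type Target = Handle;
    fn deref(&self) -> &Self { self }
}
```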
`wait` doesn't capture enough of what's going on, but also Java
doesn't accept methods named just `wait`, as it conflicts with
existing sync primitives on all Objects.
The deserialization process for `ChannelManager`/`ChannelMonitor`
data includes reloading any relevant `chain::Filter` with data
provided from the `ChannelMonitor`, but it's nicer if we adapt the
data into `chain::Filter` calls for users.
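A minimal sketch of the adaptation, assuming a helper (the name and
the exact `chain::Filter` method signatures here are assumptions) that
re-registers a reloaded monitor's data with the user's filter:
```
// Hypothetical helper: after deserializing a ChannelMonitor, hand its
// relevant transactions/outputs to the user's chain::Filter so their
// block source resumes filtering for the right data (generics elided).
fn load_outputs_to_watch<F: chain::Filter>(monitor: &ChannelMonitor, filter: &F) {
    let (funding_outpoint, funding_script) = monitor.get_funding_txo();
    filter.register_tx(&funding_outpoint.txid, &funding_script);
    for (txid, scripts) in monitor.get_outputs_to_watch() {
        for script in scripts {
            filter.register_output(&txid, &script);
        }
    }
}
```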
When a block is disconnected, the hash of the disconnected block was
used to update the last connected block. However, this amounts to a
no-op because these hashes should be equal. Successive disconnections
would update the hash but leave it one block off.
Normally, this is not a problem because the last block_disconnected should
be followed by block_connected since the former is triggered by a chain
re-org. However, this assumes the user calls the API correctly and that
no failure occurs that would prevent block_connected from being called
(e.g., if fetching the connected block fails).
Instead, update the last block hash with the disconnected block's
previous block hash.
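A sketch of the fix (struct and field names illustrative):
```
use std::sync::RwLock;
use bitcoin::blockdata::block::BlockHeader;
use bitcoin::hash_types::BlockHash;

struct Manager { last_block_hash: RwLock<BlockHash> }

impl Manager {
    fn block_disconnected(&self, header: &BlockHeader) {
        // Before: ... = header.block_hash(); // a no-op on the first
        // disconnect, then one block off on successive ones.
        *self.last_block_hash.write().unwrap() = header.prev_blockhash;
    }
}
```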
ChannelManager reads channel_state and last_block_hash while processing
funding_created and funding_signed messages. It writes these while
processing block_connected and block_disconnected events. To avoid any
potential deadlocks, have each site hold these locks independently of
one another and in a consistent order.
Additionally, use a RwLock instead of Mutex for last_block_hash since
exclusive access is not needed in funding_created / funding_signed and
cannot be guaranteed in block_connected / block_disconnected because of
the reads in the former.
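Sketched, the convention looks like this (struct and method names
illustrative):
```
use std::sync::{Mutex, RwLock};
use bitcoin::hash_types::BlockHash;

struct ChannelState; // stand-in for the real channel map

struct Manager {
    channel_state: Mutex<ChannelState>,
    last_block_hash: RwLock<BlockHash>,
}

impl Manager {
    // Take the locks in separate scopes, always in this order, so
    // neither lock is held while acquiring the other.
    fn on_funding_created(&self) {
        {
            let _channel_state = self.channel_state.lock().unwrap();
            // ... validate and update the pending channel ...
        } // released before touching last_block_hash
        let _last_block_hash = self.last_block_hash.read().unwrap();
        // ... read the hash for the new ChannelMonitor ...
    }
}
```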
ChannelMonitor keeps track of the last block connected. However, it is
initialized with the default block hash, which is a problem if the
ChannelMonitor is serialized before a block is connected. Instead, pass
ChannelManager's last_block_hash, which is initialized with a "birthday"
hash, when creating a new ChannelMonitor.
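Sketched, the handoff at monitor-creation time (surrounding code
elided, names illustrative):
```
// In funding_created handling: seed the new monitor with the manager's
// current view of the chain tip instead of the default block hash.
let last_block_hash = *self.last_block_hash.read().unwrap();
let monitor = ChannelMonitor::new(/* keys, channel params, ..., */ last_block_hash);
```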
Tracking the last block was only used to de-duplicate block_connected
calls, but this is no longer required as of the previous commit.
Further, the ChannelManager can pass the latest block hash when needing
to create a ChannelMonitor rather than have each Channel maintain an
up-to-date copy. This is implemented in the next commit.
Calling block_connected for the same block was necessary when clients
were expected to re-fetch filtered blocks with relevant transactions. At
the time, all listeners were notified of the connected block again for
re-scanning. Thus, de-duplication logic was put in place.
However, this requirement is no longer applicable for ChannelManager as
of commit bd39b20f64, so remove this
logic.
When ChannelMonitors are persisted, they need to store the most recent
block hash seen. However, for newly created channels the default block
hash is used. If persisted before a block is connected, the funding
output may be missed when syncing after a restart. Instead, initialize
ChannelManager with a "birthday" hash so it can be used later when
creating channels.
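Sketched (constructor shape illustrative; the real one takes many more
arguments):
```
use std::sync::RwLock;
use bitcoin::hash_types::BlockHash;

struct ChannelManager { last_block_hash: RwLock<BlockHash> } // fields elided

impl ChannelManager {
    // A fresh manager records its "birthday" immediately, so a monitor
    // created before the first block_connected call can serialize a
    // usable sync point rather than the default (all-zeros) hash.
    fn new(birthday_hash: BlockHash /* , fees, keys, config, ... */) -> Self {
        ChannelManager { last_block_hash: RwLock::new(birthday_hash) }
    }
}
```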
Initial implementation of handling query_channel_range messages in
NetGraphMsgHandler. Enqueues a sequence of reply messages in the
pending message events buffer.
Creates a MessageSendEvent for sending a reply_channel_range message.
This event will be fired when handling inbound query_channel_range
messages in the NetGraphMsgHandler.
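The new variant follows the existing `MessageSendEvent` convention of a
`node_id` plus the message to send:
```
use bitcoin::secp256k1::PublicKey;
use lightning::ln::msgs;

// Excerpt-style sketch of the enum addition.
pub enum MessageSendEvent {
    // ... existing variants ...
    /// Used to indicate that a reply_channel_range message should be
    /// sent to the node with the given node_id.
    SendReplyChannelRange {
        /// The node_id of the node which should receive this message
        node_id: PublicKey,
        /// The reply_channel_range message which should be sent
        msg: msgs::ReplyChannelRange,
    },
}
```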
The instructions for `ChannelManagerReadArgs` indicate that you need
to connect blocks on a newly-deserialized `ChannelManager` in a
separate pass from the newly-deserialized `ChannelMonitors` as the
`ChannelManager` assumes the ability to update the monitors during
block [dis]connected events, saying that users need to:
```
4) Reconnect blocks on your ChannelMonitors
5) Move the ChannelMonitors into your local chain::Watch.
6) Disconnect/connect blocks on the ChannelManager.
```
This is fine for `ChannelManager`'s purpose, but is very awkward
for users. Notably, our new `lightning-block-sync` implemented
on-load reconnection in the most obvious (and performant) way -
connecting the blocks all at once, violating the
`ChannelManagerReadArgs` API.
Luckily, the events in question really don't need to be processed
with the same urgency as most channel monitor updates. The only two
monitor updates which can occur in block_[dis]connected are either
a) in block_connected, we identify a now-confirmed commitment
transaction, closing one of our channels, or
b) in block_disconnected, the funding transaction is reorganized
out of the chain, making our channel no longer funded.
In the case of (a), sending a monitor update which broadcasts a
conflicting holder commitment transaction is far from
time-critical, though we should still ensure we do it. In the case
of (b), we should try to broadcast our holder commitment transaction
when we can, but within a few minutes is fine on the scale of
block mining anyway.
Note that in both cases we cannot simply move the logic to
ChannelMonitor::block_[dis]connected, as this could result in us
broadcasting a commitment transaction from ChannelMonitor, then
revoking the now-broadcasted state, and only then receiving the
block_[dis]connected event in the ChannelManager.
Thus, we move both events into an internal event queue and process
them in timer_chan_freshness_every_min().
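Sketched, the queue and its processing (type and method names are
illustrative of the approach, not necessarily the final ones):
```
use std::sync::Mutex;

struct OutPoint; // stand-ins for the real types
struct ChannelMonitorUpdate;

// Deferred monitor updates generated during block_[dis]connected.
enum BackgroundEvent {
    ClosingMonitorUpdate((OutPoint, ChannelMonitorUpdate)),
}

struct Manager { pending_background_events: Mutex<Vec<BackgroundEvent>> }

impl Manager {
    // Called from timer_chan_freshness_every_min(): drain and apply the
    // queued updates. Processing here, rather than in ChannelMonitor's
    // own block handlers, ensures we never broadcast a commitment state
    // and then revoke it before the manager sees the block event.
    fn process_background_events(&self) {
        let events = {
            let mut pending = self.pending_background_events.lock().unwrap();
            std::mem::take(&mut *pending)
        };
        for event in events {
            match event {
                BackgroundEvent::ClosingMonitorUpdate((_funding_txo, _update)) => {
                    // best-effort apply via chain::Watch; the channel is
                    // already closed, so urgency is low
                }
            }
        }
    }
}
```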
Channel::force_shutdown previously always returned a
`ChannelMonitorUpdate`, but expected it to be applied only in the
case that it *also* returned a Some for the funding transaction
output.
This is confusing; instead, we move the `ChannelMonitorUpdate`
inside the Option, making it hold a tuple.
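As a sketch of the signature change (other return values and
parameters elided; stand-in types):
```
struct OutPoint;
struct ChannelMonitorUpdate;

trait Before {
    // The update was always returned but only meaningful alongside Some.
    fn force_shutdown(&mut self) -> (ChannelMonitorUpdate, Option<OutPoint>);
}
trait After {
    // The update now exists exactly when the funding output does.
    fn force_shutdown(&mut self) -> Option<(OutPoint, ChannelMonitorUpdate)>;
}
```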