Without this, the following could happen:
* InboundPeerConnected is called while we already have an inbound
connection with the peer. This calls removePeer which calls Disconnect.
* If the peer is starting up in Start, it may be sending messages
synchronously via SendMessage(true, ...). This eventually calls the
writeMessage function which will exit if disconnect is set to 1.
* Since Disconnect was called, disconnect will be 1 and writeMessage
will exit, causing writeHandler to exit.
* If there is more than 1 message being sent, later messages will
queue in queueHandler but be unable to get into sendQueue as the
writeHandler goroutine has exited.
* The synchronous sends will be waiting on the errChan indefinitely
and startReady will never get closed, meaning Disconnect will never
proceed.
The end result is that the server's mutex will be held until shutdown.
Avoid this by using writeMessage to bypass the writeHandler goroutine.
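The core of the change, as a minimal self-contained sketch (the `peer` stand-in, its fields, and the `startup` helper are illustrative, not the real lnd types):

```go
package main

import (
	"errors"
	"fmt"
	"sync/atomic"
)

// peer is a stand-in for the real peer type; only the pieces needed to
// illustrate the direct-write path are included.
type peer struct {
	disconnect int32 // set to 1 by Disconnect.
}

// writeMessage mirrors the behaviour described above: it refuses to
// write once disconnect is set and otherwise writes straight to the
// wire, without involving queueHandler/writeHandler.
func (p *peer) writeMessage(msg string) error {
	if atomic.LoadInt32(&p.disconnect) == 1 {
		return errors.New("peer disconnected")
	}
	fmt.Println("wrote:", msg)
	return nil
}

// startup sends the start-up messages synchronously via writeMessage.
// A concurrent Disconnect now simply makes the write return an error
// instead of leaving the send blocked on an errChan that the exited
// writeHandler will never service.
func (p *peer) startup(msgs ...string) error {
	for _, m := range msgs {
		if err := p.writeMessage(m); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	p := &peer{}
	if err := p.startup("init", "channel_reestablish"); err != nil {
		fmt.Println("startup aborted:", err)
	}
}
```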
Finally, the LightningNode object is removed from ChannelEdgePolicy.
This is a step towards letting ChannelEdgePolicy reflect exactly the
schema that is on disk.
This is also nice because the `Node` object is not always required when
the ChannelEdgePolicy is loaded from the DB, so now it only gets loaded
when needed.
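Roughly, the shape of the change, with illustrative field and type names rather than the exact on-disk schema:

```go
package main

// LightningNode is a stand-in for the full node record stored in the graph.
type LightningNode struct {
	PubKeyBytes [33]byte
	Alias       string
}

// channelEdgePolicyBefore sketches the old shape: the policy carried the
// whole LightningNode, which itself needed a db reference.
type channelEdgePolicyBefore struct {
	ChannelID uint64
	Node      *LightningNode
}

// channelEdgePolicyAfter sketches the new shape: only the counterparty's
// compressed public key is kept, matching what is serialised on disk; the
// LightningNode is fetched from the ChannelGraph only when needed.
type channelEdgePolicyAfter struct {
	ChannelID uint64
	ToNode    [33]byte
}

func main() {}
```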
In preparation for the next commit which will remove the
`*LightningNode` from the `ChannelEdgePolicy` struct,
`FetchLightningNode` is modified to take in an optional transaction so
that it can be utilised in places where a transaction exists.
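A small sketch of the optional-transaction pattern, using hypothetical stand-in types (`readTx`, `graph`) in place of the real kvdb transaction and ChannelGraph:

```go
package main

import "fmt"

// readTx is a stand-in for the kvdb read transaction used by the real
// graph database; the concrete type doesn't matter for the pattern.
type readTx struct{ id int }

// graph is a stand-in for ChannelGraph with a toy node store.
type graph struct{ nodes map[string]string }

// fetchLightningNode is the tx-aware inner lookup.
func (g *graph) fetchLightningNode(tx *readTx, pub string) (string, error) {
	fmt.Printf("looking up %s inside tx %d\n", pub, tx.id)
	return g.nodes[pub], nil
}

// FetchLightningNode now takes an optional transaction: callers that
// already hold one pass it in and the lookup reuses it; a nil tx makes
// the method open its own (simulated here), so existing call sites keep
// working unchanged.
func (g *graph) FetchLightningNode(tx *readTx, pub string) (string, error) {
	if tx != nil {
		return g.fetchLightningNode(tx, pub)
	}

	// No caller-provided tx: open a short-lived one of our own.
	return g.fetchLightningNode(&readTx{id: 99}, pub)
}

func main() {
	g := &graph{nodes: map[string]string{"02aa": "alice"}}
	g.FetchLightningNode(nil, "02aa")            // opens its own tx
	g.FetchLightningNode(&readTx{id: 1}, "02aa") // reuses the caller's tx
}
```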
To prepare for the `kvdb.Backend` member of `ChannelEdgeInfo` being
removed, the `FetchOtherNode` method is moved from the `ChannelEdgeInfo`
to the `ChannelGraph` struct.
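A compact sketch of the relocated lookup, again with stand-in types rather than the real channeldb structs and kvdb store:

```go
package main

import (
	"errors"
	"fmt"
)

// Stand-in types: the real code operates on the channeldb structs and a
// kvdb-backed node bucket; this only shows why the lookup belongs on the
// graph, which owns the store, rather than on the edge info.
type LightningNode struct{ Alias string }

type ChannelEdgeInfo struct{ NodeKey1, NodeKey2 string }

type ChannelGraph struct{ nodes map[string]*LightningNode }

// FetchOtherNode returns the channel participant that isn't thisNodeKey,
// looked up via the graph's own store.
func (g *ChannelGraph) FetchOtherNode(edge *ChannelEdgeInfo,
	thisNodeKey string) (*LightningNode, error) {

	other := edge.NodeKey1
	if thisNodeKey == edge.NodeKey1 {
		other = edge.NodeKey2
	}

	node, ok := g.nodes[other]
	if !ok {
		return nil, errors.New("unknown node")
	}
	return node, nil
}

func main() {
	g := &ChannelGraph{nodes: map[string]*LightningNode{"bob": {Alias: "bob"}}}
	edge := &ChannelEdgeInfo{NodeKey1: "alice", NodeKey2: "bob"}
	node, _ := g.FetchOtherNode(edge, "alice")
	fmt.Println(node.Alias)
}
```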
Having a `ForEachChannel` method on the `LightningNode` struct itself
results in a `kvdb.Backend` object needing to be stored within the
LightningNode struct. In this commit, this method is replaced with a
`ForEachNodeChannel` method on the `ChannelGraph` struct that performs
the same function without needing the db pointer to be stored within the
LightningNode. With this change, the LightningNode struct more closely
represents the schema on disk.
The existing `ForEachNodeChannel` method on `ChannelGraph` is renamed to
`ForEachNodeDirectedChannel`. It performs a slightly different function
since its callback operates on cached policies.
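A rough sketch of the new iteration pattern with stand-in types; the real callback receives the channeldb edge and policy records, and `ForEachNodeDirectedChannel` has the same shape but hands the callback cached policy data instead:

```go
package main

import "fmt"

// LightningNode now only carries the on-disk node data; it no longer
// embeds a db handle, so it can't iterate its own channels.
type LightningNode struct{ PubKey string }

// channelEdge is a stand-in for the edge info/policy data handed to the
// callback by the real method.
type channelEdge struct {
	node1, node2 string
	capacity     int64
}

// ChannelGraph owns the backing store, so channel iteration lives here.
type ChannelGraph struct{ edges []channelEdge }

// ForEachNodeChannel replaces the old LightningNode.ForEachChannel: the
// caller passes the node's key and the graph does the db traversal. The
// body is an illustrative stand-in for the real kvdb walk.
func (g *ChannelGraph) ForEachNodeChannel(nodePub string,
	cb func(channelEdge) error) error {

	for _, e := range g.edges {
		if e.node1 == nodePub || e.node2 == nodePub {
			if err := cb(e); err != nil {
				return err
			}
		}
	}
	return nil
}

func main() {
	alice := LightningNode{PubKey: "alice"}
	g := &ChannelGraph{edges: []channelEdge{{"alice", "bob", 1_000_000}}}
	_ = g.ForEachNodeChannel(alice.PubKey, func(e channelEdge) error {
		fmt.Println("channel with capacity", e.capacity)
		return nil
	})
}
```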
We've only ever created macaroons with version 2, so we should
explicitly reject those that aren't actually v2. We add a basic test
along the way, and also add a similar check for the version encoded in
the macaroon ID.
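A minimal sketch of the kind of check added, using the gopkg.in/macaroon.v2 package; the helper name and the surrounding validation (including the ID-version check) are simplified here:

```go
package main

import (
	"fmt"

	macaroon "gopkg.in/macaroon.v2"
)

// checkVersion rejects anything that is not a v2 macaroon, in the spirit
// of the check described above; the real code also validates the version
// encoded in the macaroon ID.
func checkVersion(mac *macaroon.Macaroon) error {
	if mac.Version() != macaroon.V2 {
		return fmt.Errorf("unsupported macaroon version %d", mac.Version())
	}
	return nil
}

func main() {
	mac, _ := macaroon.New([]byte("root-key"), []byte("id"), "lnd", macaroon.V2)
	fmt.Println(checkVersion(mac)) // <nil>
}
```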
Prior to this commit, taproot channels had a bug:
- If a disconnect happened before peer.AddNewChannel was called,
then the subsequent reconnect would call peer.AddNewChannel and
attempt the ChannelReestablish dance.
- peer.AddNewChannel would call NewLightningChannel with
populated nonce ChannelOpts. This in turn would call
InitRemoteMusigNonces which would create a new musig pair session
and set the channel's pendingVerificationNonce to nil.
- During the reestablish dance, ProcessChanSyncMsg would be called.
This would also call InitRemoteMusigNonces, except it would fail
since pendingVerificationNonce was set to nil in the previous
invocation.
To fix this, we add a new functional option to signal to the init logic
that it doesn't need to call InitRemoteMusigNonces in
ProcessChanSyncMsg.
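A minimal sketch of the functional-option pattern described above; the option and field names are illustrative rather than the exact ones added:

```go
package main

import "fmt"

// channelOpts collects optional channel configuration; skipNonceInit is
// the new flag described above (the real option name may differ).
type channelOpts struct {
	skipNonceInit bool
}

type ChannelOpt func(*channelOpts)

// WithSkipNonceInit signals that the remote nonces were already provided
// when the channel was constructed (e.g. via peer.AddNewChannel), so the
// reestablish path must not initialise them a second time.
func WithSkipNonceInit() ChannelOpt {
	return func(o *channelOpts) { o.skipNonceInit = true }
}

type channel struct {
	opts channelOpts
}

func newChannel(opts ...ChannelOpt) *channel {
	var o channelOpts
	for _, opt := range opts {
		opt(&o)
	}
	return &channel{opts: o}
}

// processChanSyncMsg sketches the guarded call: the nonce session is only
// (re)initialised when the option was not supplied.
func (c *channel) processChanSyncMsg() {
	if !c.opts.skipNonceInit {
		fmt.Println("InitRemoteMusigNonces")
		return
	}
	fmt.Println("nonces already initialised, skipping")
}

func main() {
	newChannel(WithSkipNonceInit()).processChanSyncMsg()
}
```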