Nudging test authors towards not mining too many blocks makes sense,
especially in lnd where we have a lot of integration tests.
But the lntest package is also used in other projects where this
restriction might lead to large refactors.
To be able to stage those refactors, we also want to allow this limit to
be configurable when lntest is used as a library.
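A minimal sketch of what such a knob could look like, assuming a package-level limit with a setter; the names `maxBlocksMinedPerTest` and `SetMaxBlocksMinedPerTest` (and the default value) are hypothetical and not necessarily what lntest exposes:

```go
package lntest

import "sync/atomic"

// maxBlocksMinedPerTest is the ceiling enforced on block mining within a
// single test. lnd's own itests keep the default; the exact number here is
// only a placeholder.
var maxBlocksMinedPerTest atomic.Uint32

func init() {
	maxBlocksMinedPerTest.Store(50)
}

// SetMaxBlocksMinedPerTest lets a downstream project that imports lntest as
// a library raise the limit while it stages the refactors needed to mine
// fewer blocks per test.
func SetMaxBlocksMinedPerTest(limit uint32) {
	maxBlocksMinedPerTest.Store(limit)
}

// withinMinedBlockLimit would be consulted by the mining helpers before
// failing a test for mining too many blocks.
func withinMinedBlockLimit(mined uint32) bool {
	return mined <= maxBlocksMinedPerTest.Load()
}
```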
Before this commit, it was possible for a request to be sent on the
`chanWatchRequests` channel in `WaitForChannelPolicyUpdate` and then for
the `ticker.C` case to fire _before_ the `eventChan` case, which is
triggered when the `topologyWatcher` closes `eventChan` in its call to
`handlePolicyUpdateWatchRequest`. This could lead to a "close of a
closed channel" panic.
To fix this, this commit ensures that we only move on to the next
iteration of the select statement in `WaitForChannelPolicyUpdate` once
the request sent on `chanWatchRequests` has been fully handled.
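Roughly, the guarantee reads like the following sketch; this is not lntest's actual code, and the `done` field, the `watcher` type and the poll interval are made up for illustration:

```go
package lntest

import (
	"fmt"
	"time"
)

// chanWatchRequest carries the channel the topology watcher closes to signal
// the policy update, plus a done channel that is closed once the request has
// been fully handled (the hypothetical part of this sketch).
type chanWatchRequest struct {
	eventChan chan struct{}
	done      chan struct{}
}

type watcher struct {
	chanWatchRequests chan *chanWatchRequest
}

func (w *watcher) waitForPolicyUpdate(timeout time.Duration) error {
	ticker := time.NewTicker(200 * time.Millisecond)
	defer ticker.Stop()

	deadline := time.After(timeout)

	for {
		req := &chanWatchRequest{
			eventChan: make(chan struct{}),
			done:      make(chan struct{}),
		}

		// Send the request and block until it has been fully handled.
		// Only then do we enter the select below, so the ticker case
		// can no longer race with the watcher closing eventChan.
		w.chanWatchRequests <- req
		<-req.done

		select {
		case <-req.eventChan:
			return nil

		case <-ticker.C:
			// A fresh request (and eventChan) is created on the
			// next iteration, so a late close cannot hit an
			// already-closed channel.
			continue

		case <-deadline:
			return fmt.Errorf("timed out waiting for policy update")
		}
	}
}
```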
In this commit we document an unexpected behavior found when connecting
a bitcoind node to a btcd node. We mitigate this in our test by
reconnecting the nodes when the connection is broken. We also limit the
connection made from `bitcoind` to be v1 only.
We change how the `bitcoind` node connects to the miner: we now create a
temporary RPC client and use it to issue the connection request. In
addition, we assert that the connection is actually established.
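Sketch of what that temporary client could look like using btcd's `rpcclient` package; the host, credentials and the polling strategy are assumptions rather than the harness's actual code, and the v1-only transport restriction mentioned above is left to bitcoind's own configuration:

```go
package lntest

import (
	"encoding/json"
	"fmt"
	"time"

	"github.com/btcsuite/btcd/rpcclient"
)

// connectBitcoindToMiner spins up a throwaway RPC client against the bitcoind
// node, asks it to dial the miner once via `addnode onetry`, and then polls
// `getpeerinfo` until the connection actually shows up.
func connectBitcoindToMiner(rpcHost, user, pass, minerP2PAddr string) error {
	client, err := rpcclient.New(&rpcclient.ConnConfig{
		Host:         rpcHost,
		User:         user,
		Pass:         pass,
		HTTPPostMode: true,
		DisableTLS:   true,
	}, nil)
	if err != nil {
		return err
	}
	defer client.Shutdown()

	// Marshalling plain strings cannot fail, so the errors are ignored.
	addr, _ := json.Marshal(minerP2PAddr)
	cmd, _ := json.Marshal("onetry")
	if _, err := client.RawRequest(
		"addnode", []json.RawMessage{addr, cmd},
	); err != nil {
		return err
	}

	// Assert the connection is established before returning.
	for i := 0; i < 50; i++ {
		peers, err := client.GetPeerInfo()
		if err != nil {
			return err
		}
		for _, p := range peers {
			if p.Addr == minerP2PAddr {
				return nil
			}
		}
		time.Sleep(100 * time.Millisecond)
	}

	return fmt.Errorf("bitcoind never connected to miner %s", minerP2PAddr)
}
```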
This commit adds a script to hunt flakes for a specific unit test with
trace logs enabled. It also renames the make commands to make it clearer
whether a target runs a unit test, an itest, or a parallelized itest.
The exposed AddNode, AddEdge and UpdateEdge methods of the Builder are
currently synchronous: even though they pass messages to the network
handler, which spins off the handling in a goroutine, the public methods
still wait for a response from that handling before returning. The only
part that is actually done asynchronously is the topology notifications.
We previously tried to simplify things in [this
commit](d757b3bcfc)
but we soon realised that there was a reason for sending the messages to
the central/synchronous network handler first: it ensured consistency
for topology clients. That is, the registration and cancellation of
topology clients needs to be ordered consistently and handled
synchronously with respect to new network updates. For example, if a new
update comes in right after a topology client cancels its subscription,
then it should _not_ be notified. Similarly for new subscriptions. So
that commit was reverted soon after.
We can, however, still simplify things as is done in this commit by
noting that _only topology subscriptions and notifications_ need to be
handled separately. The actual network updates do not need to. So that
is what is done here.
This refactor will make moving the topology subscription logic to a new
subsystem later on much easier.
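The shape of the split, very roughly; this is an illustrative sketch rather than the Builder's real code, and the channel and type names are invented for it:

```go
package graph

// topologyEvent describes a finished graph change that should be fanned out
// to topology subscribers.
type topologyEvent struct {
	update any
}

// Builder is a stand-in for the real Builder: only the subscription
// plumbing is shown here.
type Builder struct {
	subscribe   chan chan topologyEvent
	unsubscribe chan chan topologyEvent
	notify      chan topologyEvent
	quit        chan struct{}
}

// handleTopologySubscriptions is the single goroutine that owns the client
// set, so "subscribe then update" and "cancel then update" are ordered
// deterministically with respect to notifications.
func (b *Builder) handleTopologySubscriptions() {
	clients := make(map[chan topologyEvent]struct{})

	for {
		select {
		case c := <-b.subscribe:
			clients[c] = struct{}{}

		case c := <-b.unsubscribe:
			delete(clients, c)

		case ev := <-b.notify:
			for c := range clients {
				select {
				case c <- ev:
				default: // Drop the event for slow clients.
				}
			}

		case <-b.quit:
			return
		}
	}
}

// AddNode validates and stores the update directly in the caller's
// goroutine; only the resulting notification is handed to the goroutine
// above.
func (b *Builder) AddNode(node any) error {
	// ... validate and persist the node announcement synchronously ...

	b.notify <- topologyEvent{update: node}
	return nil
}
```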
It turns out that actions/setup-go, starting with @v4, also adds
caching. With that, our cache size on disk has almost doubled, leading
to the GitHub runner running out of space in certain situations.
We fix that by disabling the automated caching since we already have our
own, custom-tailored version.
For the lncli cmd we now always initiate the coop close even if
there are active HTLCs on the channel. If HTLCs are present on the
channel when the coop close is initiated, lnd handles the closing
flow in the background and the lncli cmd blocks until the
transaction is broadcast to the mempool. In the background lnd
disallows any new HTLCs and waits until all active HTLCs are resolved
before kicking off the negotiation process.
Moreover, if active HTLCs are present and the no_wait param is not
set, the error message now highlights this so the user can react
accordingly.
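A simplified, hedged sketch of that background flow (not lnd's implementation; the `channelLink` interface and helper names are invented here): new HTLCs are disallowed, the closer waits for the active set to drain, and only then do negotiation and broadcast happen, which is also the point at which a blocking lncli call returns:

```go
package coopclose

import (
	"context"
	"time"
)

// channelLink is a stand-in for the part of the switch that forwards HTLCs
// on the channel being closed.
type channelLink interface {
	// DisableAdds stops the link from accepting any new HTLCs.
	DisableAdds()

	// ActiveHTLCs reports how many HTLCs are still unresolved.
	ActiveHTLCs() int
}

// initiateCoopClose mirrors the described behaviour: disallow new HTLCs,
// wait for the existing ones to resolve, then negotiate and broadcast. The
// returned string is the closing txid once it has hit the mempool.
func initiateCoopClose(ctx context.Context, link channelLink,
	negotiateAndBroadcast func(context.Context) (string, error)) (string,
	error) {

	link.DisableAdds()

	for link.ActiveHTLCs() > 0 {
		select {
		case <-ctx.Done():
			return "", ctx.Err()
		case <-time.After(time.Second):
		}
	}

	return negotiateAndBroadcast(ctx)
}
```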