In this commit, we attempt to fix a race condition that may occur in the
current AMP and MPP tests.
It appears the following scenario is possible:
* The `mppTestContext` [is used to create 6 channels back to
back](https://github.com/lightningnetwork/lnd/blob/master/lntest/itest/lnd_amp_test.go#L43)
* The method used to create the channel ends up calling
[`openChannelAndAssert`](https://github.com/lightningnetwork/lnd/blob/edd4152682/lntest/itest/lnd_mpp_test.go#L300),
which'll open the channel, mine 6 blocks, [then ensure that the
channel gets
advertised](https://github.com/lightningnetwork/lnd/blob/edd4152682/lntest/itest/assertions.go#L78)
* Later on, [we wait for all nodes to hear about all channels on the
network
level](https://github.com/lightningnetwork/lnd/blob/master/lntest/itest/lnd_amp_test.go#L62)
I think the issue here is that we'll potentially already have mined 30
or so blocks before getting to the final nodes, and those nodes may
already have heard about the channel. This may then cause their
[`lightningNetworkWatcher`](https://github.com/lightningnetwork/lnd/blob/edd4152682/lntest/node.go#L1213)
goroutine to never dispatch the notification, since the method assumes
the channel hasn't yet been announced when it's called.
One solution here is to check whether the channel is already in the
node's graph when we go to register the notification. If we do this in
the same state machine as the watcher, then we ensure that if the
channel is already known, the client is notified immediately. One thing
that can help us debug this in the future is adding additional logging
in some of these helper goroutines so we can more easily track the
control flow.
This commit implements this solution by checking to ensure that the
channel isn't already known in our channel graph before attempting to
wait for its notification, as we may already have missed the
notification before this registration request came in.
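To make the fix concrete, here's a minimal, self-contained sketch of the idea (not lnd's actual implementation; all identifiers are illustrative): registration first consults the set of channels already seen, so a watcher that registers after the announcement is notified immediately instead of blocking forever.

```go
package main

import (
	"fmt"
	"sync"
)

// chanWatcher sketches the fix: notification registration first checks a
// set of already-known channels so late registrations aren't lost.
type chanWatcher struct {
	mu       sync.Mutex
	known    map[string]struct{}        // channels already seen in the graph
	watchers map[string][]chan struct{} // pending open-channel watchers
}

func newChanWatcher() *chanWatcher {
	return &chanWatcher{
		known:    make(map[string]struct{}),
		watchers: make(map[string][]chan struct{}),
	}
}

// channelAnnounced is called when the graph subscription reports a channel.
func (w *chanWatcher) channelAnnounced(chanPoint string) {
	w.mu.Lock()
	defer w.mu.Unlock()

	w.known[chanPoint] = struct{}{}
	for _, c := range w.watchers[chanPoint] {
		close(c) // wake anyone already waiting
	}
	delete(w.watchers, chanPoint)
}

// waitForChannelOpen returns a channel that is closed once chanPoint is
// known. If the announcement was already processed, the caller is
// notified immediately rather than waiting forever -- this is the race fix.
func (w *chanWatcher) waitForChannelOpen(chanPoint string) <-chan struct{} {
	done := make(chan struct{})

	w.mu.Lock()
	defer w.mu.Unlock()

	if _, ok := w.known[chanPoint]; ok {
		close(done) // already in the graph: fire right away
		return done
	}
	w.watchers[chanPoint] = append(w.watchers[chanPoint], done)
	return done
}

func main() {
	w := newChanWatcher()

	// The announcement arrives before anyone registers interest...
	w.channelAnnounced("txid:0")

	// ...yet the late registration still resolves immediately.
	<-w.waitForChannelOpen("txid:0")
	fmt.Println("notified without missing the announcement")
}
```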
This commit creates the file lnd_misc_test.go to hold all the
miscellaneous tests previously kept in lnd_test.go. From now on,
lnd_test.go will only be responsible for "top" level functionality such
as splitting test cases and running them. Newly created test cases
should find their place in the related test files, or in a new file
when needed.
This commit refactors the function assertNumConnections to use
wait.NoError. Prior to this commit, `make lint` would fail on this
function. While fixing it, we noticed that wait.NoError suits the case,
so the function was refactored to use it.
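A sketch of what such a refactor can look like, assuming lnd's `lntest/wait` package (whose `NoError` polls a closure until it returns nil or the timeout expires); the `numPeers` helper and `defaultTimeout` constant are stand-ins for the real peer-count logic:

```go
package itest

import (
	"fmt"
	"testing"
	"time"

	"github.com/lightningnetwork/lnd/lntest"
	"github.com/lightningnetwork/lnd/lntest/wait"
)

const defaultTimeout = 30 * time.Second

// assertNumConnections polls until both nodes report the expected peer
// count, failing the test if the condition never holds within the
// timeout. numPeers is a hypothetical helper wrapping the ListPeers RPC.
func assertNumConnections(t *testing.T, alice, bob *lntest.HarnessNode,
	expected int) {

	t.Helper()

	err := wait.NoError(func() error {
		if n := numPeers(alice); n != expected {
			return fmt.Errorf("alice has %d peers, want %d", n, expected)
		}
		if n := numPeers(bob); n != expected {
			return fmt.Errorf("bob has %d peers, want %d", n, expected)
		}
		return nil
	}, defaultTimeout)
	if err != nil {
		t.Fatalf("connection assertion failed: %v", err)
	}
}
```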
In #5364 we added a new error path in the `StopDaemon` method to return
an error if shutdown was attempted while a rescan/recovery instance was
in progress. Since the wallet actually won't fully stop (atm)
mid-recovery, the call effectively didn't do anything in that scenario,
so we started to return an error to properly reflect that. However,
this causes certain itests to fail: during recovery, the stop attempt
fails, which in turn fails the test itself.
In this commit, we wrap the calls to stop a running daemon within a
`wait.NoError` call so we'll continually try to shut down the daemon
rather than quitting on the first try.
Fixes #5423.
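A hedged sketch of the retry wrapper (the harness method name shown is illustrative):

```go
// Keep retrying the shutdown until the wallet can actually stop, rather
// than failing the test on the first StopDaemon error during recovery.
err := wait.NoError(func() error {
	return net.ShutdownNode(node)
}, defaultTimeout)
require.NoError(t, err, "unable to shut down node")
```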
We do this to trigger a bug that will be resolved in a follow-up commit.
This bug prevents inbound legacy channels from being rejected by the
recipient if the recipient already has at least one anchor channel open
but no on-chain balance.
This commit refactors the function NewNode to take a *testing.T so that
unexpected errors are checked inside it. Callers are now freed from
checking the error themselves.
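A sketch of the before/after shape of the call sites (argument lists simplified):

```go
// Before: every caller had to check the constructor's error.
node, err := net.NewNode("Alice", nil)
require.NoError(t, err, "unable to create node")

// After: the constructor takes *testing.T and fails the test itself on
// an unexpected error, so call sites shrink to a single line.
node := net.NewNode(t, "Alice", nil)
```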
This permits an AMP invoice to be "pseudo-reusable", where the invoice
parameters can be used multiple times so long as a new payment address
is supplied. This avoids additional round trips between payer and payee
to obtain a new invoice, even though the payments/invoices won't be
logically associated via the RPC interface like they will be once fully
reusable invoices are deployed.
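Conceptually, each repeated payment just swaps in a fresh random 32-byte payment address; a minimal sketch (where exactly the address is carried in lnd's actual RPC messages may differ):

```go
import "crypto/rand"

// newPaymentAddr draws a fresh random 32-byte payment address. Paying
// the same AMP invoice again with a new address lets the recipient
// treat it as a distinct payment without a new invoice round trip.
func newPaymentAddr() ([32]byte, error) {
	var addr [32]byte
	if _, err := rand.Read(addr[:]); err != nil {
		return addr, err
	}
	return addr, nil
}
```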
In some rare instances it can happen that the nodes don't find each
other again after one of them has been re-created and the other one has
been restarted in the SCB tests. By making sure the re-created node uses
the same P2P port as before, we make sure they can connect to each
other again successfully for executing DLP.
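A hedged sketch of the port pinning, assuming the harness exposes the old node's P2P port and accepts extra lnd arguments (the exact field and method names are illustrative):

```go
// Capture the P2P port of the node we're about to re-create...
oldPort := carol.Cfg.P2PPort

// ...and pin the re-created node to that port via lnd's --listen flag
// so its previously advertised address remains reachable.
args := []string{fmt.Sprintf("--listen=127.0.0.1:%d", oldPort)}
carolRestored := net.NewNode(t, "Carol", args)
```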
Since there is a lot of connecting and disconnecting between nodes in
the channel backup tests, we try to speed up that process by lowering
the minimum backoff from 1 second to 50 milliseconds. We also make sure
we never wait more than 1 second if it does take multiple attempts.
These savings should add up and hopefully speed up our tests a bit.
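Assuming the backoff bounds are passed to the test nodes via lnd's minbackoff/maxbackoff options, the change boils down to something like:

```go
// Reconnect quickly after a disconnect, but cap the exponential backoff
// at one second even across repeated attempts.
args := []string{
	"--minbackoff=50ms",
	"--maxbackoff=1s",
}
carol := net.NewNode(t, "Carol", args)
```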