This happens under CI, but it's not informative. Sleep and retry.
Also, "except (OSError, Exception)" does not seem to do what you'd think:
this clause never gets run.
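A minimal sketch of the sleep-and-retry approach (the helper name, attempt
count and delay are illustrative, not the actual code):

    import time

    def retry_transient(fn, attempts=5, delay=1.0):
        """Call fn(); on a transient OSError, sleep and try again,
        re-raising only after the last attempt (hypothetical helper)."""
        for i in range(attempts):
            try:
                return fn()
            except OSError:
                if i == attempts - 1:
                    raise
                time.sleep(delay)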
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This prepares for when they start being u64, not strings with msat appended.
This has a strange side effect on our schema: despite the name,
decodepay's `fee_base_msat` is actually a u64, which we now convert to
msat on decode.
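For illustration only, assuming pyln.client's `Millisatoshi` type, which
accepts both representations:

    from pyln.client import Millisatoshi

    # A u64 value and the "msat"-suffixed string denote the same amount;
    # the schema now performs this conversion on decode for `fee_base_msat`.
    assert Millisatoshi(1000) == Millisatoshi("1000msat")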
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
These are useful for requests (examples sketched below):
1. "outpoint": <txid>:<outnum>
2. "feerate": a string or a number
3. "outputdesc": bitcoin-style addresses-as-keys
4. "msat_or_all": amount or "all"
5. "msat_or_any": amount or "any"
6. "short_channel_id_dir": scid with /0 or /1.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This adds our first (basic) schema, and sews support into pyln-testing
so it will load the schema for any method from doc/schemas/{method}.schema.json.
All JSON responses in a test run are checked against the schema (if any).
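Conceptually the check amounts to something like this (a sketch assuming the
jsonschema package; not the actual pyln-testing hook):

    import json
    import os

    from jsonschema import validate  # assumption: validation via jsonschema

    def check_response(method, response, schema_dir="doc/schemas"):
        """Validate a JSON-RPC response against doc/schemas/{method}.schema.json,
        if such a schema exists; otherwise do nothing."""
        path = os.path.join(schema_dir, "{}.schema.json".format(method))
        if not os.path.exists(path):
            return
        with open(path) as f:
            validate(instance=response, schema=json.load(f))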
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Since we turned many errors into warnings, we want our tests to fail
when they happen unexpectedly. We make WARNING clear in the strings
we print, too, to help out.
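The check we want boils down to something like this (a sketch; the names and
the allow-list mechanism are illustrative):

    import re

    def assert_no_unexpected_warnings(log_lines, allowed=()):
        """Fail the test if any captured line mentions WARNING and is not
        covered by an explicitly allowed pattern."""
        bad = [line for line in log_lines
               if 'WARNING' in line and not any(re.search(p, line) for p in allowed)]
        assert not bad, "Unexpected warnings:\n" + "\n".join(bad)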
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We were getting a couple of starvations, so we need a fair filelock. I
also wasn't too happy with the lock as-is, so I hand-coded a replacement quickly.
It should be correct, but the overall timeout will tell us how well we
are doing on CI.
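For reference, a fair lock of that shape can be sketched as a ticket lock over
a small state file. This is only an illustration of the idea, not the
hand-coded lock itself:

    import fcntl
    import os
    import struct
    import time

    class TicketFileLock:
        """FIFO ("fair") file lock sketch: the lock file holds two counters,
        next-ticket and now-serving; each waiter takes a ticket and polls
        until its number comes up."""
        FMT = "II"

        def __init__(self, path):
            self.path = path
            self.ticket = None

        def _open_locked(self):
            # flock() is released automatically when the file is closed.
            fd = os.open(self.path, os.O_RDWR | os.O_CREAT, 0o644)
            f = os.fdopen(fd, "r+b")
            fcntl.flock(f, fcntl.LOCK_EX)
            return f

        def _read(self, f):
            f.seek(0)
            size = struct.calcsize(self.FMT)
            data = f.read(size)
            return struct.unpack(self.FMT, data) if len(data) == size else (0, 0)

        def _write(self, f, nxt, serving):
            f.seek(0)
            f.write(struct.pack(self.FMT, nxt, serving))

        def acquire(self, poll_interval=0.1):
            with self._open_locked() as f:
                nxt, serving = self._read(f)
                self.ticket = nxt
                self._write(f, nxt + 1, serving)
            while True:
                with self._open_locked() as f:
                    _, serving = self._read(f)
                if serving == self.ticket:
                    return
                time.sleep(poll_interval)

        def release(self):
            with self._open_locked() as f:
                nxt, serving = self._read(f)
                self._write(f, nxt, serving + 1)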
Both my machine and apparently the CI tester machines regularly run
into issues with load on the system, causing timeouts (and
unresponsiveness). The throttler throttles the speed with which new
instances of c-lightning get started to avoid overloading. Since the
plugin used for parallelism when testing spawns multiple processes, we
need to lock on the filesystem. Since we have that file open already, we'll
also write a couple of performance metrics to it.
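The pacing itself is conceptually something like this (a sketch with made-up
names and interval; the real throttler also records the performance metrics
mentioned above in the same file):

    import fcntl
    import os
    import time

    def wait_for_start_slot(state_path, min_interval=5.0):
        """Sleep until at least min_interval seconds have passed since the
        last recorded start, then record our own start time.  The state file
        is shared between the pytest worker processes via flock()."""
        fd = os.open(state_path, os.O_RDWR | os.O_CREAT, 0o644)
        with os.fdopen(fd, "r+") as f:
            fcntl.flock(f, fcntl.LOCK_EX)
            content = f.read()
            last = float(content) if content else 0.0
            delay = max(0.0, last + min_interval - time.time())
            if delay > 0:
                time.sleep(delay)
            f.seek(0)
            f.truncate()
            f.write(str(time.time()))
        # Lock released on close; the caller may now start the node.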
For performance reasons we were starting one for each session, which caused
the same postgres DB to be re-used for multiple tests (all tests run in the
same worker process), but this could lead to interactions if there is a
timeout or a test happens to touch the `db_provider`. It turns out that we
were only saving about 15 seconds on a 1250 second run anyway, which is a
small cost for increased test isolation.
We were not removing the base test directory if we had other files in there,
which was the case for postgres runs. This now explicitly checks for `test_*`
directories, which are an indicator of a failed test.
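In effect, the cleanup does something like this (sketch, names illustrative):

    import os
    import shutil

    def maybe_remove_base_dir(base_dir):
        """Remove the base test directory only if no `test_*` leftovers (the
        tell-tale sign of a failed test) remain inside it."""
        leftovers = [entry for entry in os.listdir(base_dir)
                     if entry.startswith("test_")]
        if not leftovers:
            shutil.rmtree(base_dir)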
And when it's set, and we're SLOW_MACHINE, simply disable valgrind.
Since Travis (SLOW_MACHINE=1) only does VALGRIND=1 DEVELOPER=1 tests,
and VALGRIND=0 DEVELOPER=0 tests, it was missing tests which needed
DEVELOPER and !VALGRIND.
Instead, this demotes them to non-valgrind tests for SLOW_MACHINEs.
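Conceptually the demotion is just the following (an illustrative sketch; the
names and the exact flag being checked are assumptions):

    import os

    SLOW_MACHINE = os.getenv("SLOW_MACHINE", "0") == "1"

    def use_valgrind(test_wants_valgrind):
        """On a SLOW_MACHINE, demote rather than skip: run the test, but
        without valgrind."""
        return test_wants_valgrind and not SLOW_MACHINE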
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
pytest captures the output by monkey-patching out `sys.stdout`. This may
conflict with our use of `sys.stdout` when configuring logging, resulting in
the "Write to closed file" issue that is spamming the logs. By making the
logging configuration a fixture, we hopefully always use the correct
stdout (after pytest has monkey-patched it).
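A sketch of what a fixture-based setup looks like (the handler wiring is
illustrative, not the exact pyln-testing code):

    import logging
    import sys

    import pytest

    @pytest.fixture(autouse=True)
    def setup_logging():
        # By the time fixtures run, pytest has already swapped in its
        # capturing sys.stdout, so the handler writes to the stream pytest
        # actually manages.
        handler = logging.StreamHandler(sys.stdout)
        root = logging.getLogger()
        root.addHandler(handler)
        yield
        root.removeHandler(handler)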
For some reason we fail to remove the test directory in some cases. My
hypothesis is that it is a daemon that is not completely shut down yet, and
still writes to the directory. This commit intercepts the error, prints any
files in the directory, and re-raises the error. This should allow us to debug
the issue when it reappears.
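The interception is roughly this (sketch):

    import os
    import shutil

    def remove_test_directory(directory):
        """Try to remove the test directory; if that fails, print whatever is
        still in it to aid debugging, then re-raise."""
        try:
            shutil.rmtree(directory)
        except OSError:
            for root, _, files in os.walk(directory):
                for name in files:
                    print(os.path.join(root, name))
            raise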
Some tests may not spawn a node at all, so make sure the fixture cleanup's
assumption that the directory exists actually holds, by creating the
directory up front.
We were hardcoding the chainparams->chain_hash, which caused the query to
return an empty result. By parametrizing the test we can make it work on
elements.
In the c-lightning tests we have `tests/conftest.py`, which annotates each test
function with its outcome. If we use pyln-testing outside of the c-lightning
tree we cannot rely on that annotation being there, so we assume it passed.
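The defensive lookup is roughly this (a sketch; `rep_call` assumes the usual
pytest makereport-hook pattern used by tests/conftest.py):

    def outcome_passed(request):
        """Return the outcome annotated by tests/conftest.py, defaulting to
        True (passed) when the annotation is absent, e.g. outside the
        c-lightning tree."""
        report = getattr(request.node, "rep_call", None)
        return True if report is None else report.passed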
Quite a few of the things in the LightningNode class are tailored to their use
in the c-lightning tests, so I decided to split those customizations out into
a sub-class, and to add one more fixture that just serves the class. This
allows us to override the LightningNode implementation in our own tests, while
still having sane defaults for other users.
We'll rewrite the tests to use this infrastructure in the next commit.
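For example, an external project's conftest.py could then do something along
these lines (the fixture and class names are assumptions based on the
description above):

    import pytest

    from pyln.testing.fixtures import *  # noqa: F401,F403 - stock fixtures
    from pyln.testing.utils import LightningNode

    class MyLightningNode(LightningNode):
        """Project-specific helpers and overrides live here."""

    @pytest.fixture
    def node_cls():
        # The class-serving fixture: node_factory instantiates this class.
        return MyLightningNode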
Changelog-Added: The new `pyln-testing` package now contains the testing infrastructure so it can be reused to test against c-lightning in external projects