Since we currently only (ab)use it to send everything, we need a way to
generate boutique queries for testing.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
The gossip-query spec enhancements say not to forward a channel_announcement until
you have received a channel_update. This test fails for now.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
When opening a number of channels from a single node we could end up not waiting
for the funding tx to make it into the mempool, instead triggering on a previous
`sendrawtransaction` or `CHANNEL_NORMAL` in the logs. This now checks that the
actual funding transaction makes it into the mempool and that we wait for the
depth change for that specific channel.
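A minimal sketch of the new wait, in the style of the test suite (`wait_for`
and the `bitcoind` fixture are the suite's helpers; the exact call sites here
are illustrative):

    # Wait for the *specific* funding txid, not just any prior
    # sendrawtransaction or CHANNEL_NORMAL log line.
    txid = l1.rpc.fundchannel(l2.info['id'], 10**6)['txid']
    wait_for(lambda: txid in bitcoind.rpc.getrawmempool())
    bitcoind.generate_block(1)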
Signed-off-by: Christian Decker <decker.christian@gmail.com>
We don't have any connection yet, so how could they be active? Disable both
sides to avoid trying to route through them or telling others to use them as
`contact_points` in invoices.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
Mostly `lightningd` complaining about not being able to estimate fees. Saves us
a lot of log space when some tests time out, and saves us a few context switches
between log appender and log watchers.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
We were sort of assuming that one side telling us it's OK would automagically make
the other side succeed as well.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
While 1 second is very generous, resolving the HTLCs may take longer, so we
loop over this check instead of making it one-shot.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
The RPC calls aren't really free, so let's wait for absolute times, computed
from the `start_time` instead.
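Roughly, this replaces repeated relative sleeps with one absolute deadline
(a sketch; the 5-second offset is arbitrary):

    import time

    start_time = time.time()
    # ... RPC calls of unknown duration happen here ...
    # Sleep until an absolute point computed from start_time, rather than
    # a fixed relative delay that ignores how long the RPCs took.
    time.sleep(max(0.0, (start_time + 5) - time.time()))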
Signed-off-by: Christian Decker <decker.christian@gmail.com>
We can restore them once we get parallel testing on Travis, but meanwhile
we time out because of the 30-second bitcoind poll.
Running on my laptop with --duration=5:
=========================== slowest 5 test durations ===========================
184.07s call tests/test_lightningd.py::LightningDTests::test_multiple_channels
156.66s call tests/test_lightningd.py::LightningDTests::test_forward
155.77s call tests/test_lightningd.py::LightningDTests::test_closing
126.83s call tests/test_lightningd.py::LightningDTests::test_waitinvoice
126.11s call tests/test_lightningd.py::LightningDTests::test_waitanyinvoice
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Because we have too many which are never used and I don't want to document
them.
1. Remove unused anchor_onchain_wait. When implemented, it should be
hardcoded to 100 or more.
2. Remove anchor_confirms_max. 10 is always reasonable, and we can re-add
an override option should someone need it.
3. max_htlc_expiry should be the same as locktime_max (which increases
from 3 to 5 days by default): they're both a limit on how long
funds can be locked up.
4. channel_update_interval should always be a dev option.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Make --override-fee-rates a dev option. We use default-fee-rate in
its place, which (since bitcoind won't give fee estimates in regtest
mode for short chains) gives an effective feerate of 15000/7500/3750.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
@cdecker points out that in test_forward, where we manually create a route,
we get an error back which contains an update for an unknown channel.
We should still note this, but it's not an error for testing.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This is something which generally shouldn't happen, but we didn't
notice it previously.
We ignore this warning in the case where a channel was deleted: this
happens because one side can send an update while the other notices
that the channel is closed.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
It gets confused when flaky re-runs it, since:
1. Using the same HSM, it generates the same spend and sees a previous one.
2. The block height numbers are off.
Fixes: #1479
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We have a 'bitcoind' global: getting it from inside one of the daemons
was a mistake I've copied widely.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
It simply wasn't waiting for us to register the peer before attempting to open a
connection. Moved into a separate test to be able to rerun it multiple times.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
Someone could try to announce an internal address, and we might probe
it.
This breaks tests, so we add '--dev-allow-localhost' for our tests, so
that localhost doesn't get eliminated. Of course, now we need to skip some more
tests in non-developer mode.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
If we're given a wildcard address, we can't announce it like that: we need
to try to turn it into a real address (using guess_address). Then we
use that address. As a side-effect of this cleanup, we only announce
*any* '--addr' if it's routable.
This fix means that our tests have to force '--announce-addr' because
otherwise localhost isn't routable.
This means that gossipd really controls the addresses now, and breaks
them into two arrays: what we bind to, and what we announce. That is
now what we return to the master for json_getinfo(), which prints them
as 'bindings' and 'addresses' respectively.
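A hedged example of what a test can now assert (the field names follow the
text above; the precise JSON schema is an assumption):

    info = l1.rpc.getinfo()
    assert len(info['bindings']) > 0           # everything we bound to
    # 'addresses' may be empty when nothing routable was configured.
    assert isinstance(info['addresses'], list)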
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
1. Add a special option where an empty host means 'wildcard for IPv4 and/or IPv6',
which means ':1234' can be used to set only the portnum (sketched below).
2. Only add this protocol wildcard if --autolisten=1 (default)
and no other addresses are specified.
3. Pass it down to gossipd, so it can handle errors correctly: in most cases,
it's fatal not to be able to bind to a port, but for this case, it's OK
if we can only bind to one of IPv4/v6 (fatal iff neither).
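A rough Python illustration of the parsing rule from item 1 (a sketch, not the
actual C parser; bracketed IPv6 is ignored for brevity):

    def parse_addr(s, default_port=9735):
        """':1234' means 'wildcard for IPv4 and/or IPv6, port 1234'."""
        if s.startswith(':'):
            return ('wildcard', int(s[1:]))
        host, sep, port = s.rpartition(':')
        if not sep:
            return (s, default_port)   # bare host: use the default port
        return (host, int(port))

    # parse_addr(':1234') == ('wildcard', 1234)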
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
It's become clear that our network options are insufficient, with the coming
addition of Tor and unix domain support.
Currently:
1. We always bind to local IPv4 and IPv6 sockets, unless --port=0, --offline,
or any address is specified explicitly. If they're routable, we announce.
2. --addr is used to announce, but not to control binding.
After this change:
1. --port is deprecated.
2. --addr controls what we bind to and announce.
3. --bind-addr/--announce-addr can be used to control one and not the other.
4. Unless --autolisten=0, we add local IPv4 & IPv6 port 9735 (and announce if they are routable).
5. --offline still overrides listening (though announcing is still the same).
This means we can bind to as many ports/interfaces as we want, and for
special effects we can announce different things (eg. we're sitting
behind a port forward or a proxy).
What remains to implement is semi-automatic binding: we should be able
to say '--addr=0.0.0.0:9999' and have the address resolve at bind
time, or even '--addr=0.0.0.0:0' and have the port autoresolve too
(you could determine what it was from 'lightning-cli getinfo').
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We're about to change the JSONRPC, so let's put an explicit 'port' into
our node class.
We initialize it at startup time: in future I hope to use ephemeral ports
to make our tests more easily parallelizable.
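The ephemeral-port idea can be done portably by binding to port 0 (a sketch;
note the small race between close() and the node's own bind):

    import socket

    def reserve_port():
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.bind(('127.0.0.1', 0))     # port 0: the OS assigns a free port
        port = s.getsockname()[1]
        s.close()                    # racy, but fine for test startup
        return port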
Suggested-by: @cdecker
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This used to be the port, but since we no longer have fixed ports, and we start
them in random order we can't easily distinguish them by the port anymore. Just
use a numeric ID that matches their lightning-dirs.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
We add an attempt number to the test directory to improve test isolation and
allow for multiple reruns of the same test, without re-using any of the
lightning-dirs or bitcoin-datadirs.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
This is the first example of the py.test style fixtures which should allow us to
write much cleaner and nicer tests.
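A minimal sketch of the fixture style (this `NodeFactory` is a stand-in for the
suite's real factory):

    import pytest

    class NodeFactory:
        '''Hypothetical stand-in; the real one spawns lightningd nodes.'''
        def __init__(self):
            self.nodes = []
        def get_node(self):
            self.nodes.append(object())
            return self.nodes[-1]
        def killall(self):
            self.nodes.clear()

    @pytest.fixture
    def node_factory():
        factory = NodeFactory()
        yield factory
        factory.killall()    # teardown runs even if the test failed

    def test_example(node_factory):
        assert node_factory.get_node() is not None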
Signed-off-by: Christian Decker <decker.christian@gmail.com>
We never really look at the output, and it's rather noisy, so we just stop
writing to the log.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
Especially with valgrind this allows us to save quite some time on multi-core
machines, since startup is about the most expensive operation in the tests. In a
simple test spinning up 3 nodes this gave me about a 25%-30% test time
reduction. The effect will be smaller on single-core machines; hopefully Travis
handles these gracefully.
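Expressed as a thread pool, the idea is roughly (a sketch; `start_node` stands
in for the expensive lightningd/valgrind launch):

    from concurrent.futures import ThreadPoolExecutor

    def start_node(i):
        return 'node-%d' % i    # placeholder for the real startup work

    # Start all three nodes concurrently instead of one after the other.
    with ThreadPoolExecutor(max_workers=3) as ex:
        l1, l2, l3 = ex.map(start_node, range(3))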
Signed-off-by: Christian Decker <decker.christian@gmail.com>
We can create the hsm file from python directly; that works even if we
don't have DEVELOPER set, and is simpler.
We add a test that the aliases are correct.
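Creating the secret from Python might look like this (a sketch: the
`hsm_secret` filename and 32-byte size follow lightningd's HSM file; the helper
and seed value are illustrative):

    import os

    def write_hsm_secret(lightning_dir, node_seed):
        # A deterministic 32-byte secret makes node keys (and thus the
        # default aliases derived from them) predictable across runs.
        path = os.path.join(lightning_dir, 'hsm_secret')
        with open(path, 'wb') as f:
            f.write(node_seed.to_bytes(32, 'big'))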
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We're getting spurious closures, even on mainnet. Using --ignore-fee-limits
is dangerous; it's slightly less so to lower the minimum (which is the
usual cause of problems).
So let's halve it, but beware the floor.
This is a workaround, until we get independent feerates in the spec.
Fixes: #613
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
At least say whether we failed to connect at all, failed the cryptographic
handshake, or failed reading/writing the init messages.
The errno can be "Operation now in progress" if the other end closes the
socket on us: this happens when we handshake with the wrong key and it
hangs up on us. Fixing this would require work on ccan/io though.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
When we get a reconnection, kill the current remote peer, and wait for the
master to tell us it's dead. Then we hand it the new peer.
Previously, we would end up with gossipd holding multiple peers, and
the logging was really hard to interpret; I'm not completely convinced
that we did the right thing when one terminated, either.
Note that this now means we can have peers with neither ->local nor ->remote
populated, so we check that more carefully.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
1. The lifetime of 'struct reaching' is now only while we're actively connecting.
2. Always free after a single attempt: if it's an important peer, retry
on a timer.
3. Have a single response message to master, rather than relying on
peer_connected on success and other msgs on failure.
4. If we are actively connecting and we get another command for the same
id, just increment the counter.
The result is much simpler in the master daemon, and much nicer for
reconnection: if they say to connect they get an immediate response,
rather than waiting for 10 retries. Even if it's an important peer,
it fires off another reconnect attempt, unless it's actively
connecting now.
This removes exponential backoff: that's restored in next patch. It
also doesn't handle multiple addresses for a single peer.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Got some intermittent failures, mainly caused by the tests being slow
enough that the peer reconnected. We should always suppress
reconnection if we can, and not stress too much in the !DEVELOPER case
where we can't.
We should turn off dev-no-reconnect *always* unless told we will
reconnect, and since we can't if !DEVELOPER, don't do the connection
check there.
Instead of adding an option to line_graph, we remove it in favor
of connect (since we only use it with n=2 anyway).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This shaves off about 15% of our integration testing suite on my machine. It
assumes we never reorg below the first block the node starts with, which is true
for all tests, so it's safe.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
Simplification of the offset calculation to use the rescan parameter, and rename
of `wallet_first_blocknum`. We now use either relative rescan from our last
known location, or absolute if a negative rescan was given. It's all handled in
a single location (except the case in which the blockcount is below our
precomputed offset), so this should reduce surprises.
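In pseudocode, the rule is roughly (a sketch of the semantics above, not the
actual C):

    def first_blocknum(last_processed, rescan, blockcount):
        if rescan < 0:
            start = -rescan                    # negative: absolute height
        else:
            start = last_processed - rescan    # positive: relative to last known
        # Clamp for the case where blockcount is below our offset.
        return max(0, min(start, blockcount))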
Signed-off-by: Christian Decker <decker.christian@gmail.com>
Careful log examination revealed that we were generating a block before one
of the mutual close txs had entered the mempool. This is rare because it
means that both peers have to be too slow.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
I had a weird failure which was caused by an unexpected disconnect and
reconnect. Since we are persistent and recover from these, they can
slip through our tests; most tests don't involve reconnection, so we
need to catch this explicitly.
For the connect() helper, we always suppress reconnection; tests which
want it all want other options so don't use this helper anyway. (Actually,
after I said that, test_closing_while_disconnected was added when I
rebased, which did require it, so I had to open-code that one).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Also report tx and txid, and whether we closed unilaterally or
bilaterally, if we could close the channel.
Also make a manpage.
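A test can now check the richer result (field names follow the text above; the
exact schema is an assumption):

    result = l1.rpc.close(l2.info['id'])
    assert 'tx' in result and 'txid' in result
    # Whether we closed unilaterally or bilaterally.
    assert result['type'] in ('mutual', 'unilateral')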
Fixes: #1207
Fixes: #714
Fixes: #622
Mixing the test into the existing test allows us to reduce the execution time, but
adds a bit of complexity, so I added two small helpers.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
We can have more than one; eg we might offer both bech32 and a p2sh
address, and in future we might offer v1 segwit, etc.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This allows us to have some default options that can then be overridden easily
on a per-test basis.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
In the next commit we remove channels whose outpoint was spent from our network
view, so checking for it will not work anymore.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
This fixes the root cause of https://github.com/ElementsProject/lightning/issues/1212
where we deleted the payment because we wanted to retry, then the retry failed,
so we had an (old) HTLC without a matching payment. We then fed that
HTLC to onchaind, which tells us it's missing, and we try to fail the
payment and deref a NULL pointer.
Fixes: #1212
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We say "in N blocks" but we actually mean "N blocks after this tx" which is
actually N-1 or less. Change wording and tighten tests which misunderstood
this.
Also, the 'assert not l1.daemon.is_in_log('onchaind complete, forgetting peer')'
checks are unlikely to work until the daemon has actually seen the block, so add
sync_blockheight before all of those.
These changes reveal some sloppy testing, which we fix.
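The tightened pattern, using the suite's sync_blockheight helper (the
surrounding calls are illustrative):

    bitcoind.generate_block(100)
    sync_blockheight(bitcoind, [l1])   # ensure l1 has actually seen the block
    # Only now is it meaningful to assert on the *absence* of a log line.
    assert not l1.daemon.is_in_log('onchaind complete, forgetting peer')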
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
With the following patch applied, we could clearly see onchaind try to
broadcast the timeout tx one block too early:
sendrawtx exit 26, gave error code: -26, error message: non-final (code 64)
This is because of an out-by-one error in calculating the relative
depth required, since the out->tx_blockheight is already 1 before the
current block.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This was revealed in #1114; onchaind isn't actually completely idempotent
due to fee changes (and the now-fixed change in keys used).
This triggers the bug by restarting with different fees, resulting in
onchaind not recognizing its own proposal:
2018-03-05T09:38:15.550Z lightningd(23076): lightning_onchaind-022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59 chan #1: STATUS_FAIL_INTERNAL_ERROR: THEIR_UNILATERAL/OUR_HTLC spent with weird witness 3
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This is twice the 'update_channel_interval' we get handed.
We delete the non-existent channel_add_connection and delete_connection
declarations from the header too.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We currently give them a free pass. The simplest fix is to give them
an old timestamp on initialization.
We still skip unannounced channels, on the assumption that they're
ours. And we set the last_update_timestamp to -1 when we convert to
gossip_getchannels_entry to indicate no update.
This breaks the DEVELOPER=1 pruning test, since we hardcode the 1-week
timeout. That's fixed in the next patch.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
I initially disabled this until 0.16 because the withdraw command
was modified to require 'bcrt1' addresses for regtest.
But commit bd07a9 allows a regular testnet address to work
just as well, so re-enable this check.
FWIW, the tests without valgrind took 662 seconds before we reduced
the number of blocks, and take only 648 seconds now.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
0.16.0 is required since we rely on it for some tests and the block
reduction allows us to waste less time during setup. 121 blocks were
chosen so that we have at least one mature output to spend.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
If bitcoind is still fetching blocks, we might accidentally inject
the failure between getblockhash and getblock. That's OK, but
it's not the failure we test for.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Looks like rebasing the flake8 branch caused breakage, as new violations
had occurred since that check was written.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This test generates a pre-bip173 testnet P2PKH address with bitcoin 0.15.1
which fails under the new verification checks for the bip173 network name.
This should be re-enabled with bitcoin 0.16, which can generate a bip173 address
for regtest with the bcrt prefix.
* Modifies invoice command to have the following format
invoice <msatoshi> <label> <desc> <?expiry> <?fallbackaddr>
* Adds support for Segwit bcrt1 addresses for withdraw
* Adds a test case for fallback address in invoice creation
* Creates a common json_tok_address_scriptpubkey to be used
by invoice and withdraw commands.
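For example, the new invoice format above might be exercised like this (a
sketch; rpc.call and newaddr follow the test suite's RPC helpers, and the
argument order follows the format above):

    addr = l1.rpc.newaddr()['address']
    # msatoshi, label, description, expiry, fallback address.
    inv = l1.rpc.call('invoice', [10**5, 'lbl', 'desc', 3600, addr])
    assert 'bolt11' in inv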
We can do similar tricks to test other things, even running without a
real bitcoind for faster testing, but for now we simply exit if a magic
file says so.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
The billboard is now far more useful to tell what's going on, and this
gets us closer to a state == owner mapping.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We also fold opening_got_hsm_funding_sig() into the caller; it was
previously a callback before we decided to always use the HSM
synchronously.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>