We were actually waiting for l3 to disconnect, not l2.

But in general we should be checking status rather than grovelling
in the logs where possible, so I changed all of those.
```
# l2 leases a channel from l3
l2.rpc.connect(l3.info['id'], 'localhost', l3.port)
rates = l2.rpc.dev_queryrates(l3.info['id'], amount, amount)
l3.daemon.wait_for_log('disconnect')
l2.rpc.connect(l3.info['id'], 'localhost', l3.port)
l2.rpc.fundchannel(l3.info['id'], amount, request_amt=amount,
                   feerate='{}perkw'.format(feerate), minconf=0,
>                  compact_lease=rates['compact_lease'])
...
>       raise RpcError(method, payload, resp['error'])
E       pyln.client.lightning.RpcError: RPC call failed: method: fundchannel, payload: {'id': '035d2b1192dfba134e10e540875d366ebc8bc353d5aa766b80c090b39c3a5d885d', 'amount': 500000, 'feerate': '2000perkw', 'announce': True, 'minconf': 0, 'request_amt': 500000, 'compact_lease': '029a00640064000000644c4b40'}, error: {'code': 400, 'message': 'Unable to connect, no address known for peer', 'data': {'id': '035d2b1192dfba134e10e540875d366ebc8bc353d5aa766b80c090b39c3a5d885d', 'method': 'connect'}}
```
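The status-based replacement looks something like this (a sketch using the pyln.testing `wait_for` helper; the exact predicate in the test may differ):

```
from pyln.testing.utils import wait_for

# Poll l3's view of the peer until the connection is actually gone,
# instead of grovelling in l3's log for 'disconnect'.
wait_for(lambda: l3.rpc.listpeers(l2.info['id'])['peers'] == []
         or not l3.rpc.listpeers(l2.info['id'])['peers'][0]['connected'])
```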
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We fail waiting for 'Resolved OUR_UNILATERAL/DELAYED_OUTPUT_TO_US by our proposal OUR_DELAYED_RETURN_TO_WALLET'
because we close *two* channels, but only wait for one transaction before mining a block.
This means the mempool might contain only one of the close transactions when we mine,
and the next wait_for_mempool=1 is then satisfied by the other close transaction,
so the OUR_DELAYED_RETURN_TO_WALLET tx isn't mined.
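The shape of the fix, as a sketch (pyln.testing's `bitcoind` fixture assumed; the CSV-delay blocks in between are elided):

```
# Make sure *both* unilateral-close txs are in the mempool before mining;
# otherwise the later wait_for_mempool=1 can be satisfied by the
# straggling close tx instead of the delayed-to-wallet sweep.
bitcoind.generate_block(1, wait_for_mempool=2)
# ... mine through the CSV delay here ...
bitcoind.generate_block(1, wait_for_mempool=1)  # the sweep tx itself
```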
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
If l3 is too slow, it can receive the channel_announcement after the channel
is closed, so it gets upset at the node_announcement which follows:
```
022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59-gossipd: Updated pending announce with update 103x1x1/1
022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59-gossipd: channel_announcement: no unspent txout 103x1x1
022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59-gossipd: Bad gossip order: WIRE_NODE_ANNOUNCEMENT before announcement 0266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c03518
```
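One way to sidestep the race in a test, sketched here with assumed fixture names (the actual fix may differ): make sure l3 has fully digested the channel gossip before anything closes.

```
# Wait until l3 sees the channel as active in both directions before
# the close, so the node_announcement can't arrive after the channel
# is already gone.
wait_for(lambda: [c['active'] for c in l3.rpc.listchannels()['channels']]
         == [True, True])
```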
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This actually caused the flake in test_funding_reorg_private, where
l1 and l2 might not mark the original channel disabled. In fact, they
should *remove* it as it gets reorged out.
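Sketched as a test assertion (node handles assumed):

```
# After the funding tx is reorged out, the channel should vanish from
# gossip entirely, not merely show active == False.
wait_for(lambda: l1.rpc.listchannels()['channels'] == [])
wait_for(lambda: l2.rpc.listchannels()['channels'] == [])
```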
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Return false if the timelock hasn't matured yet, not the other way
around.

Also, the check shouldn't be strict: if the CSV is 1, the output is
valid at utxo->blockheight + 1.
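In Python terms, the corrected logic (a sketch of the behaviour described above, not the actual C code):

```
def timelock_matured(utxo_blockheight, csv, current_height):
    # Non-strict comparison: with a CSV of 1 the output is already
    # spendable at utxo_blockheight + 1.
    return current_height >= utxo_blockheight + csv
```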
Signed-off-by: Antoine Poinsot <darosior@protonmail.com>
dualopend doesn't always listen to lightningd messages, so it would
sometimes hang at the end of tests.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
It subsumes `decodepay`, and it's nicer if people can just assume it's
available at all times.
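For example, via pyln.client (socket path and invoice string are placeholders):

```
from pyln.client import LightningRpc

rpc = LightningRpc("/path/to/lightning-rpc")  # placeholder path
# `decode` handles bolt11 strings too, so callers no longer need
# `decodepay` or the experimental-offers flag.
decoded = rpc.decode("lnbc1...")  # placeholder bolt11 string
print(decoded["type"])            # e.g. "bolt11 invoice"
```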
Changelog-Added: JSON-RPC: `decode` now available without `experimental-offers`
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
I spent an hour thinking this code had a bug (see the test vector fix);
we *do* overallocate the tree, but that's deliberate: we fill with NULLs
and ignore them on recursion.

The Merkle recursion comment had an off-by-one, and the NULL-pad
technique used was uncommented.
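For reference, the NULL-pad technique in miniature (a Python sketch of the idea, not the actual C implementation):

```
from hashlib import sha256

def merkle_root(leaves):
    # Overallocate: pad the leaf array with None up to a power of two.
    n = 1
    while n < len(leaves):
        n *= 2
    nodes = list(leaves) + [None] * (n - len(leaves))
    # Combine pairwise; a None sibling is ignored, i.e. the other child
    # simply propagates up unchanged.
    while len(nodes) > 1:
        nodes = [a if b is None else sha256(a + b).digest()
                 for a, b in zip(nodes[0::2], nodes[1::2])]
    return nodes[0]
```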
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We were printing out the final merkle root before calculating it,
so the last root printed was the same as the previous one.
Reported-by: Aditya Sharma
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
The issue is that the new keyword `force_lease_closed` was being set
even if the user didn't specify it, which results in breakage if this
new pyln version talks to older c-lightning nodes that don't have this
keyword yet. By setting the default to `None` it gets filtered out if
the user has not explicitly set it, but still retains the `True` /
`False` values if they did.
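The pattern in miniature (a simplified sketch, not the actual pyln method body):

```
def close(self, peer_id, force_lease_closed=None):
    payload = {
        "id": peer_id,
        "force_lease_closed": force_lease_closed,
    }
    # Keywords left at their None default are filtered out, so an older
    # c-lightning node never sees a keyword it doesn't understand;
    # explicit True/False values pass through untouched.
    payload = {k: v for k, v in payload.items() if v is not None}
    return self.call("close", payload)
```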
Changelog-None: issue is not present in released versions
Pointed out by @fiatjaf, and indeed it happened to me as well; a peer with
a high feerate reconnects and sends a similar (but now ludicrous) feerate,
and we get upset:
```
$ lightning-cli listpeers 039c73f53daad1050a6a72afb5353a2152f3152ee17168cd0ab28c2cb3e0050e36
{
   "peers": [
      {
         "id": "039c73f53daad1050a6a72afb5353a2152f3152ee17168cd0ab28c2cb3e0050e36",
         "connected": false,
         "channels": [
            {
               "state": "CHANNELD_NORMAL",
               "scratch_txid": "d796aa9c44920cc7169cdb61e36437bf180cedaec44103a69591ce2baac9b1d9",
               "last_tx_fee": "14329000msat",
               "last_tx_fee_msat": "14329000msat",
               "feerate": {
                  "perkw": 19791,
                  "perkb": 79164
               },
```
Then in the logs:
```
2021-07-23T19:34:56.227Z DEBUG 039c73f53daad1050a6a72afb5353a2152f3152ee17168cd0ab28c2cb3e0050e36-channeld-chan#39381: billboard perm: update_fee 17055 outside range 253-7210
```
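The check that fires is a simple range test; in miniature (a sketch matching the billboard message, not channeld's actual code):

```
def feerate_acceptable(update_fee_perkw, feerate_min, feerate_max):
    # On reconnect the peer replays its old (now ludicrous) feerate,
    # which lands outside the range we currently consider sane.
    return feerate_min <= update_fee_perkw <= feerate_max

assert not feerate_acceptable(17055, 253, 7210)
```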
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>