If you miss a wait event, you can catch up by doing listinvoices and
getting the max of these fields. It's also a good debugging clue.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Now we have defined ordering, we can add a start param.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Added: JSON-RPC: `listinvoices` has `index` and `start` parameters for listing control.
We tried to fix this flake before, but now it actually happened again it shows that
b5845afd43 wasn't correct.
```
# If this happens fast enough, connect fails with "disconnected
# during connection"
try:
l1.rpc.connect(l2.info['id'], 'localhost', l2.port)
except RpcError as err:
> assert "disconnected during connection" in err.error
E assert 'disconnected during connection' in {'code': 402, 'message': 'disconnected during connection'}
E + where {'code': 402, 'message': 'disconnected during connection'} = RpcError("RPC call failed: method: connect, payload: {'id': '022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59', 'host': 'localhost', 'port': 41849}, error: {'code': 402, 'message': 'disconnected during connection'}").error
tests/test_misc.py:2728: AssertionError
```
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
l1 shuts down too fast, channeld doesn't get to tell gossipd about the channel,
and l2 sends (private) update_channel and we complain:
```
lightningd-2 2023-07-20T03:42:37.744Z DEBUG gossipd: received private channel announcement from channeld for 103x1x0
lightningd-2 2023-07-20T03:42:37.791Z DEBUG 022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59-hsmd: Got WIRE_HSMD_CUPDATE_SIG_REQ
lightningd-2 2023-07-20T03:42:37.796Z DEBUG hsmd: Client: Received message 3 from client
lightningd-1 2023-07-20T03:42:37.857Z DEBUG 022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59-gossipd: Bad gossip order: WIRE_CHANNEL_UPDATE before announcement 103x1x0/0
lightningd-1 2023-07-20T03:42:37.864Z DEBUG 022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59-gossipd: Bad gossip order: WIRE_CHANNEL_UPDATE before announcement 103x1x0/0
```
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Writing to the gossip_store file is not explicitly synchronized, so it
seems that connectd has not caught up with the dying flags we've set.
Simply wait for a second. Hacky, but should work.
```
def test_gossip_not_dying(node_factory, bitcoind):
l1 = node_factory.get_node()
l2, l3 = node_factory.line_graph(2, wait_for_announce=True)
l1.rpc.connect(l2.info['id'], 'localhost', l2.port)
# Wait until it sees all the updates, node announcments.
wait_for(lambda: len([n for n in l1.rpc.listnodes()['nodes'] if 'alias' in n])
+ len(l1.rpc.listchannels()['channels']) == 4)
def get_gossip(node):
out = subprocess.run(['devtools/gossipwith',
'--initial-sync',
'--timeout-after=2',
'{}@localhost:{}'.format(node.info['id'], node.port)],
check=True,
timeout=TIMEOUT, stdout=subprocess.PIPE).stdout
msgs = []
while len(out):
l, t = struct.unpack('>HH', out[0:4])
msg = out[2:2 + l]
out = out[2 + l:]
# Ignore pings, timestamp_filter
if t == 265 or t == 18:
continue
# channel_announcement node_announcement or channel_update
assert t == 256 or t == 257 or t == 258
msgs.append(msg)
return msgs
assert len(get_gossip(l1)) == 5
# Close l2->l3, mine block.
l2.rpc.close(l3.info['id'])
bitcoind.generate_block(1, wait_for_mempool=1)
l1.daemon.wait_for_log("closing soon due to the funding outpoint being spent")
# We won't gossip the dead channel any more (but we still propagate node_announcement)
> assert len(get_gossip(l1)) == 2
E assert 4 == 2
E + where 4 = len([b'\x01\x01L\xc2\xbe\x08\xbb\xa8~\x8f\x80R\x9e`J\x1cS\x18|\x12\n\xe5_6\xb0\xa6S\x9fU\xae\x19\x9c\x1fXB\xab\x81N\x13\xdc\x8e}\xb9\xb0\xb6\xe6\x14h\xd4:\x90\xce\xc3\xad\x9ezR`~\xba@\xc9\x91e\x89\xab\x00\x07\x88\xa0\x00\n\x02i\xa2d\xb8\xa9`\x02-"6 \xa3Y\xa4\x7f\xf7\xf7\xacD|\x85\xc4l\x92=\xa53\x89"\x1a\x00T\xc1\x1c\x1e<\xa3\x1dY\x02-"SILENTARTIST-27fc801-modded\x00\x00\x00\x00\x00\x00\x00', b'\x01\x01M\x00\x86\x8e4\xc8\x90p\n\x98\xf7\xce4\x1e\xd9\xd6-6\xfb(\xf0\xe4\xb7\x90\x7f\x89\xb9\xfa\x00\x82\x1b\xeb\x1fY\x93\x1e\xe0c\xb2\x0e<\xe6\x06x\xb7\xe54};\xfbd\xa0\x01S\xcf\xe8{\xf8\x8f/\xa7\xc0\xe2h\x00\x07\x88\xa0\x00\n\x02i\xa2d\xb8\xa9`\x03]+\x11\x92\xdf\xba\x13N\x10\xe5@\x87]6n\xbc\x8b\xc3S\xd5\xaavk\x80\xc0\x90\xb3\x9c:]\x88]\x03]+HOPPINGFIRE-27fc801-modded\x00\x00\x00\x00\x00\x00\x00\x00', b'\x01\x02~\xe0\x13\xb4\x84Gz\xcf(\xd4w\xa7\x9bZ\x1a\xe82\xd1\xe1\x1bLm\xc8\n\xcd\xd4\xfb\x88\xf8\xc6\xdbt\\v\x89~\xd1.e\xc8\xa8o\x9c`\xd5\xa8\x97\x11l\xf2g\xcb\xa8\xcf\r\x869\xd3\xb5\xd5\x9a\xa0my\x9f\x87\xebX\x0b\x9e_\x11\xdc!\x1e\x9f\xb6j\xbb6\x99\x99\x90D\xf8\xfe\x14h\x01\x16#\x936B\x86\xc6\x00\x00g\x00\x00\x01\x00\x00d\xb8\xa9d\x01\x02\x00\x06\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\n\x00\x00\x00\x00;\x023\x80', b"\x01\x0284\xf1a\x86z\x8e\xf2\xe5'\xf7\xfe1\x8d\x96R\x0c\xe7\x1fj#\xaf\xbd/\xba\x10e\xd1\xccQ-\xcf/>\xa5g\xc6\xd8\x9cO \xe7~\xb3\xda\xe0\\vg\xfb\x02&T\x93\xa0\xd4\x95\x8e\xd5L\x12\x9a\xf7\xe6\x9f\x87\xebX\x0b\x9e_\x11\xdc!\x1e\x9f\xb6j\xbb6\x99\x99\x90D\xf8\xfe\x14h\x01\x16#\x936B\x86\xc6\x00\x00g\x00\x00\x01\x00\x00d\xb8\xa9d\x01\x03\x00\x06\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\n\x00\x00\x00\x00;\x023\x80"])
```
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
These tests make assumptions about the presplitter behavior which
we'll remove in the next commit. We remove them here so we don't cause
temporary breaks in the git history.
While one side was not produced by us, we have a vested interest in propagating it.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Added: Protocol: When we send our own gossip when a peer connects, also send any incoming channel_updates.
@endothermicdev and I found this while investigating a "nobody sees my node_announcement" bug report.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Fixes: #6410
Reported-by: benjaminchodroff on discord
Changelog-Fixed: Protocol: When we send our own gossip when a peer connects, send our node_announcement too (regression in v23.05)
Fixes: #6368
Changelog-Fixed: Protocol: we no longer gossip about recently-closed channels (Eclair gets upset with this).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Without this, only per-peer daemons were filtered correctly. For generic
daemons, we need to filter with the actual nodeid they use (if any).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Fixed: config: `log-level` filters now apply correctly to messages from `connectd`.
Rather than initializating the "print_level" field on first use, we can
do it in logging_options_parsed(), now we have a linked list of them.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
```
2023-07-14T05:32:54.3688763Z # Set ECONOMICAL/6 feerate, for unilateral_close and htlc_resolution
2023-07-14T05:32:54.3689123Z l1.set_feerates((15000, 11000, 0, 0), True)
2023-07-14T05:32:54.3689484Z feerates = l1.rpc.feerates('perkw')
2023-07-14T05:32:54.3689919Z > assert feerates['perkw']['unilateral_close'] == 11000
2023-07-14T05:32:54.3690226Z E assert 15000 == 11000
2023-07-14T05:32:54.3690391Z
2023-07-14T05:32:54.3690514Z tests/test_misc.py:1572: AssertionError
```
The rough checks in set_feerates don't actually ensure that we've digested
the changes, so copy the check from elsewhere that makes sure
feerates['estimates'] has indeed been updated.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We assume that RBFs will happen in order (txid1, txid2) but that doesn't always happen.
```
for depth in range(2, 10):
bitcoind.generate_block(1)
# l2 should RBF, twice even, one for the l1 main output,
# one for the l1 HTLC output.
# Don't assume a specific order!
start = l2.daemon.logsearch_start
> txid1 = get_rbf_txid(l2, txid1)
tests/test_closing.py:1671:
...
print("({} was previously in logs!)".format(r))
> raise TimeoutError('Unable to find "{}" in logs.'.format(exs))
E TimeoutError: Unable to find "[re.compile('RBF onchain .*1fe38fe22852baaedccc3a9fd9d897e46bae5b7ca31daf23e0aa456fb235475e')]" in logs.
contrib/pyln-testing/pyln/testing/utils.py:328: TimeoutError
```
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Previously, our code checked for the presence of the `lightning:`
prefix while decoding a bolt11 string. Although this prefix is valid
and accepted by the core lightning pay command, it was causing issues
with how we managed invoices. Specifically, we were skipping the prefix
when creating a copy of the invoice string and storing the raw invoice
(including the prefix) in the database, which caused inconsistencies
in the user experience.
To address this issue, we need to strip the `lightning:` prefix before
calling each core lightning command. In addition, we should
modify the invstring inside the db with the canonical one.
This commit fixes the issue by stripping the `lightning:` prefix
from the `listsendpays` function, which will improve the
user experience and ensure consistency in our invoice management (see
next commit).
Reported-by: @johngribbin
Link: ElementsProject#6207
Fixes: debbdc0
Changelog-Fixes: trim the `lightning:` prefix from invoice everywhere.
Signed-off-by: Vincenzo Palazzo <vincenzopalazzodev@gmail.com>
We fixed the others. There are no fields, but this keeps it consistent.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Added: JSON-RPC: `shutdown` notification contains `shutdown` object (notification consistency)
This is just housekeeping that allows up
to do not spam the logs of people with not
useful information.
Signed-off-by: Vincenzo Palazzo <vincenzopalazzodev@gmail.com>
Turns out we resubmit two txs (the commitment tx, and the anchor spend), but only wait
for one of them: if we mine a block before the anchor spend, it doesn't go in:
```
@unittest.skipIf(TEST_NETWORK != 'regtest', 'elementsd anchors unsupported')
@pytest.mark.developer("needs dev_disconnect")
def test_closing_anchorspend_htlc_tx_rbf(node_factory, bitcoind):
...
l1.daemon.wait_for_log('Peer permanent failure in CHANNELD_NORMAL: Offered HTLC 0 SENT_ADD_ACK_REVOCATION cltv 116 hit deadline')
l1.daemon.wait_for_log('Creating anchor spend for CPFP')
wait_for(lambda: len(bitcoind.rpc.getrawmempool()) == 2)
# But we don't mine it! And fees go up again!
l1.set_feerates((3000, 3000, 3000, 3000))
bitcoind.generate_block(1, needfeerate=5000)
l1.daemon.wait_for_log('RBF anchor spend')
l1.daemon.wait_for_log('sendrawtx exit 0')
# And now we'll get it in (there's some rounding, so feerate a bit lower!)
bitcoind.generate_block(1, needfeerate=2990)
> wait_for(lambda: 'ONCHAIN:Tracking our own unilateral close' in only_one(l1.rpc.listpeerchannels()['channels'])['status'])
```
If we mine too fast, gossip can reach a node which considers it too far in the guture. Break it up.
```
@pytest.mark.developer("Too slow without --dev-fast-gossip")
def test_routing_gossip(node_factory, bitcoind):
nodes = node_factory.get_nodes(5)
for i in range(len(nodes) - 1):
src, dst = nodes[i], nodes[i + 1]
src.rpc.connect(dst.info['id'], 'localhost', dst.port)
src.openchannel(dst, CHANNEL_SIZE, confirm=False, wait_for_announce=False)
# openchannel calls fundwallet which mines a block; so first channel
# is 4 deep, last is unconfirmed.
# Allow announce messages.
mine_funding_to_announce(bitcoind, nodes, num_blocks=6, wait_for_mempool=1)
# Deep check that all channels are in there
comb = []
for i in range(len(nodes) - 1):
comb.append((nodes[i].info['id'], nodes[i + 1].info['id']))
comb.append((nodes[i + 1].info['id'], nodes[i].info['id']))
def check_gossip(n):
seen = []
channels = n.rpc.listchannels()['channels']
for c in channels:
seen.append((c['source'], c['destination']))
missing = set(comb) - set(seen)
logging.debug("Node {id} is missing channels {chans}".format(
id=n.info['id'],
chans=missing)
)
return len(missing) == 0
for n in nodes:
> wait_for(lambda: check_gossip(n))
tests/test_gossip.py:721:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
success = <function test_routing_gossip.<locals>.<lambda> at 0x7f3200534ef0>
timeout = 180
def wait_for(success, timeout=TIMEOUT):
start_time = time.time()
interval = 0.25
while not success():
time_left = start_time + timeout - time.time()
if time_left <= 0:
> raise ValueError("Timeout while waiting for {}".format(success))
E ValueError: Timeout while waiting for <function test_routing_gossip.<locals>.<lambda> at 0x7f3200534ef0>
Having the grpc bindings in the pyln-testing package was always a bit
strange, however it came with additional issues. Due to the way that
protos are handled by protobuf, any name clash, independently of where
we import the protos, would cause an import error. This usually
happens in diamond-pattern dependencies, and so pull out the generated
files into their own package that everyone else can rely on.
Changelog-None
Greg Sanders helped debug this:
```
# Payment should succeed.
> l1.bitcoin.generate_block(1, wait_for_mempool=txid1)
tests/test_closing.py:2145:
...
> raise ValueError("Timeout while waiting for {}".format(success))
E ValueError: Timeout while waiting for <function BitcoinD.generate_block.<locals>.<lambda> at 0x7f7cd7271560>
```
The lgos show the HTLC tx doesn't go through because it double-spent an input but didn't spend enough:
```
2023-07-06T03:05:54.3424456Z lightningd-2 2023-07-06T02:57:37.490Z DEBUG plugin-bcli: sendrawtx exit 26 (bitcoin-cli -regtest -datadir=/tmp/ltests-yihsd7f4/test_onchain_middleman_simple_1/lightning-2/ -rpcport=39033 -rpcuser=... -stdinrpcpass sendrawtransaction 0200000000010220f632933db174def3b4bd879ad892e28f4422ee6e844bda27c4e5bdbc754fda030000000001000000975e48aeaced953f7c2b85f20e85b5c241258cb9dbd2ba13de3038daba6316330100000000fdffffff02a1860100000000002200202df545ea882889846c52fc5e111ac07ce07e0c09418ac15743a6f6284c2a4fa739391e000000000016001461ed06c0632af284ec9e192e97758fc04692c8290500473044022064446978d7f15d923237d44d7701e4a09a2d03ebb0a7e2c42e22c67435ad2fcc02201c51002fb72920978c79872e427acd90a13423e841b4198717c1771e7355ba4683473044022059ed9e2c536a71fac3cd63c7349c64a4445f0f936270295518a8aa03607d1e9d022064d318669a7602f585c84fe80aa95487816920c2a8ac26836601fbb369068ff38320478171dbde0e1c243e5c1ae23bcce446ab361197ca10d57c1d8f030cc9ff52158c76a914f5fb7361abefe39af0c3ab31d5be71096deeb4198763ac6721034bbdd4e5a933a3a83f6c3d22714cf23c452cb0c5ac8e429eea14992262787e687c8201208763a9147a3d3592e3d93a525beed57acce3eb70ba1b514288527c21039a62150fd6808d6c68aabd1e9d144c93e84a8f54f4e0f9ec5b3c37eee0a051c752ae6775017eb175ac6851b275680247304402200fde6a943a2f7f0287af2f5c5872326a4b8ad7020a098c4e05c28d2c2a0f2018022053a1263f982b98bb8dfcbb3a66641fe9f298e7ab432a78d21a2e79190d74753f012102397b0449a60d9d35634401bceaf3beb6118fc229b8552bd7122f735808b28aee00000000) error code: -26\nerror message:\ninsufficient fee, rejecting replacement 76f438f176d8f9beabb286f53c81aa7dcb4948d12f034f51753f4dd9071d6a74; new feerate 0.00029576 BTC/kvB <= old feerate 0.00054659 BTC/kvB
```
This is because sometimes we reuse the same UTXO for the anchor push spend as we do for the HTLC. That would be fine, except that we can have bitcoind mine the commitment tx and not the anchor push, and then we fail to replace it.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This was added to fundpsbt/utxopsbt in v0.10, but the spender plugin
didn't take advantage of it, instead calculating its own change amount
and output.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We use parameterization here. The old `anchor_expected()` was for
non-zero-fee anchors, and have bitrotted so there are some other
changes as well.
Unfortunately, all the anchor accounting seems to be broken, but I
cannot understand these tests at all. I had to simply disable them
for now.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>