Commit graph

3384 commits

Author SHA1 Message Date
ShahanaFarooqui
3649a54f04 docs: Lock example values to avoid unnecessary updates in schemas
Changelog-None.
2025-02-19 15:51:44 -06:00
Alex Myers
26359e4c8c pytest: address flakes from blinded paths in bolt12 offers
Following 6e4ff6a, nodes now check for their public address and
generate a blinded path for invoices if the address is not advertized.
This breaks several of the tests where the blinded path is expected
to have the entrypoint be the node itself.

Changelog-None
2025-02-19 10:25:51 -06:00
Rusty Russell
8fedb26e6e pytest: fix test_xpay_bolt12_no_mpp
When we merged blinded paths for nodes with no address
(6e4ff6a7d2), this test
broke.  We need to prevent that, otherwise:

```
>       assert ret['successful_parts'] == 2
E       assert 1 == 2

tests/test_xpay.py:677: AssertionError
```

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2025-02-17 17:12:19 +10:30
daywalker90
5e0a25bca9 cln-grpc, clnrest: workaround for logging before shutdown
Changelog-Fixed: cln-grpc and clnrest errors that cause them to stop will now log
2025-02-14 14:58:00 -06:00
Rusty Russell
4bb7b49f0a pytest: test that we indeed do MPP even if not advertized.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2025-02-14 22:17:21 +10:30
Rusty Russell
4ed709f9ec xpay: avoid MPP if invoice does not allow it.
This is deeply annoying, and we may have to support this properly
(using a separate algorithm entirely) if other implementations don't
fix their crap.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Fixed: Plugins: xpay: suppress multi-part payment if invoice doesn't allow it (please, fix your nodes!)
2025-02-14 22:17:21 +10:30
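The suppression above hinges on the invoice's feature bits: per BOLT 9, `basic_mpp` is feature bits 16/17, and an invoice that sets neither cannot be paid in multiple parts. A minimal sketch of such a check (the helper name and hex-bitmap input are assumptions for illustration, not xpay's actual code):

```python
def invoice_allows_mpp(features_hex: str) -> bool:
    """True if the invoice feature bitmap sets basic_mpp
    (bit 16 compulsory or bit 17 optional, per BOLT 9)."""
    bits = int(features_hex or "0", 16)
    return bool(bits & ((1 << 16) | (1 << 17)))

# bit 17 set (0x020000): optional basic_mpp -> splitting allowed
assert invoice_allows_mpp("020000")
# bit 9 only (0x0200): no basic_mpp -> must send a single part
assert not invoice_allows_mpp("0200")
```

An empty feature field is treated as "no features", i.e. no MPP.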
Rusty Russell
461eb2eddb askrene: remove all small channels if there's no MPP support.
This is an inefficient hack.  Can you tell I really didn't want to
implement this?  MPP was finalized in 2018 FFS.

We do this by adding another "auto" layer, which removes all too-small
channels, and then makes our MPP node pile all the funds into the largest
channel it chooses.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2025-02-14 22:17:21 +10:30
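The "auto" layer described above amounts to a simple pre-filter: with MPP unavailable the payment cannot split, so any channel that cannot carry the whole amount is useless for routing. A rough sketch under assumed field names (not askrene's actual layer code):

```python
def drop_small_channels(channels, amount_msat):
    """With MPP unavailable the flow cannot be split, so a channel
    smaller than the full amount can never appear in a route."""
    return [c for c in channels if c["capacity_msat"] >= amount_msat]

channels = [
    {"scid": "1x1x1", "capacity_msat": 50_000},
    {"scid": "2x2x2", "capacity_msat": 200_000},
]
usable = drop_small_channels(channels, 100_000)
assert [c["scid"] for c in usable] == ["2x2x2"]
```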
Rusty Russell
5b74e98201 pytest: test that xpay doesn't use MPP if invoice doesn't permit it.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2025-02-14 22:17:21 +10:30
Rusty Russell
91d7394175 askrene: don't include impossible channels in the MCF graph.
In particular, if the total amount we're sending is less than the minimum amount
the channel insists on, we can eliminate it.

This fixes the problem that we're really trying to send a de-minimis
amount (rather than the more obscure case where we divide the amount
and then it is below the minimum).


After trying several other approaches, this was by far the cleanest!

Reported-by: https://github.com/JssDWt
Fixes: https://github.com/ElementsProject/lightning/issues/8045
Changelog-Fixed: xpay: don't simply give up if our total amount is less than htlc_minimum_msat on some channel.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2025-02-14 15:18:48 +10:30
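The fix above is effectively a pruning pass before building the min-cost-flow graph: if a channel's htlc_minimum_msat exceeds the total being sent, no split of the payment (not even the whole thing) can traverse it. A hedged sketch with field names assumed from the listchannels-style schema:

```python
def prune_impossible_channels(channels, total_msat):
    """Drop channels whose htlc_minimum_msat exceeds the whole
    payment: even an unsplit payment is below their minimum."""
    return [c for c in channels
            if c.get("htlc_minimum_msat", 0) <= total_msat]

channels = [
    {"scid": "1x1x1", "htlc_minimum_msat": 1_000},
    {"scid": "2x2x2", "htlc_minimum_msat": 500_000},
]
graph = prune_impossible_channels(channels, 10_000)
assert [c["scid"] for c in graph] == ["1x1x1"]
```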
Rusty Russell
38b9df44b2 pytest: test that askrene doesn't simply give up if we are below htlc_minimum.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2025-02-14 15:18:48 +10:30
Rusty Russell
6e4ff6a7d2 offers: add a blinded path if we have no advertized address.
Suggested-by: Matt Corallo
Fixes: https://github.com/ElementsProject/lightning/issues/7806
Changelog-Changed: Offers: we will use a blinded path if we have no advertized address (so payers wouldn't be able to connect directly).
2025-02-14 14:18:29 +10:30
Rusty Russell
e3c36988ea pytest: add override options to prepare for default removal of rest-port/rest-host options.
Otherwise this test breaks when we change CLN_NEXT_VERSION.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2025-02-13 21:15:45 -06:00
Rusty Russell
2b3b7fde2c pytest: expect complaints when we use deprecated commando-runes commands.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2025-02-13 21:15:45 -06:00
Rusty Russell
d18f564324 pytest: stop using deprecated commando_rune commands.
These are about to start logging warnings, so use modern versions for tests
which aren't explicitly about testing obsolete ones.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2025-02-13 21:15:45 -06:00
Rusty Russell
e81a50e551 pytest: fix change in error msg from latest bitcoind master.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2025-02-13 21:15:45 -06:00
Rusty Russell
df24568143 pytest: remove test_slow_startup_many_addresses
As I suspected, benchmark tests like this are too unreliable under CI.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2025-02-13 15:42:51 -06:00
Rusty Russell
645b202222 wallet: use htable in wallet_add_onchaind_utxo.
On lookup, we update the htable if any new addresses have been added.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Fixed: lightningd: startup time vastly improved for large nodes with pending closes and many bitcoin addresses.
2025-02-13 15:42:51 -06:00
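The speedup comes from replacing a per-UTXO linear scan (re-deriving every bip32 key each time wallet_can_spend runs) with a lazily extended hash table keyed by scriptpubkey. A toy Python analogue of that htable idea (all names hypothetical):

```python
class ScriptIndex:
    """Map scriptpubkey -> key index, derived once and cached.

    lookup() extends the table only for indices added since the
    previous call, mirroring "update the htable if any new
    addresses have been added"."""

    def __init__(self, derive):
        self._derive = derive   # derive(i) -> scriptpubkey (bytes)
        self._index = {}
        self._next = 0

    def lookup(self, script, max_index):
        while self._next <= max_index:
            self._index[self._derive(self._next)] = self._next
            self._next += 1
        return self._index.get(script)

idx = ScriptIndex(lambda i: b"script-%d" % i)
assert idx.lookup(b"script-3", 5) == 3   # derives indices 0..5 once
assert idx.lookup(b"script-7", 9) == 7   # only derives 6..9 this time
assert idx.lookup(b"unknown", 9) is None
```

Each scriptpubkey is derived exactly once instead of once per lookup, which is where the startup-time win for large bip32_max_index comes from.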
Rusty Russell
c13756d2c5 pytest: test for slow start when closing tx and large number of addresses.
From Whitslack's "startup takes 15 minutes" bug report, we can see
that wallet_can_spend is extremely slow.  We exacerbate this by setting a large
bip32_max_index:

```
    91.29%     0.02%  lightningd       lightningd                 [.] wallet_can_spend
            |          
             --91.27%--wallet_can_spend
                       |          
                       |--47.81%--scriptpubkey_p2tr_derkey
                       |          |          
                       |          |--42.80%--scriptpubkey_p2tr
...
                       |--42.16%--bip32_key_from_parent
```

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2025-02-13 15:42:51 -06:00
Lagrang3
37150c1f49 renepay: add test_unannounced
Changelog-None.

Signed-off-by: Lagrang3 <lagrang3@protonmail.com>
2025-02-12 13:36:06 +10:30
Lagrang3
158fb91062 renepay: resolve self payments with fake node
Always use a fake destination node; this way self-payments are no
longer a corner case for the routing problem. It is also OK for
get_routes to return routes with zero length.

Changelog-None.

Signed-off-by: Lagrang3 <lagrang3@protonmail.com>
2025-02-12 13:36:06 +10:30
Lagrang3
d7fba2d75b renepay: add test for simple offer
Changelog-None.

Signed-off-by: Lagrang3 <lagrang3@protonmail.com>
2025-02-12 13:36:06 +10:30
Lagrang3
90789287ff renepay: skip test_previous_sendpays
The sendonion RPC does not allow setting the total amount in
lightningd's wallet, therefore mixing sendpay and sendonion payment
parts would not work. That means for the time being we cannot
complete a payment initialized with sendpay until we add a
total_amount parameter to sendonion.

Changelog-None.

Signed-off-by: Lagrang3 <lagrang3@protonmail.com>
2025-02-12 13:36:06 +10:30
Lagrang3
47061f15ce renepay: fix error handling
Fix error handling since we moved from sendpay to sendonion rpc.
With sendonion, once a route fails we don't get the scid and node_id that
failed along the route, so we have to deduce those from our own internal
data.

Changelog-None

Signed-off-by: Lagrang3 <lagrang3@protonmail.com>
2025-02-12 13:36:06 +10:30
Rusty Russell
c3362b057c BOLT12: remove -offers from bolt12 quotes, update them.
Typo fixes and wording changes.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2025-02-11 20:19:01 -06:00
Rusty Russell
c22cc11af5 fetchinvoice: allow user to specify bip353 name corresponding to how they got offer.
Changelog-Added: JSON-RPC: `fetchinvoice` BIP 353 name support (`bip353`).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2025-02-11 20:19:01 -06:00
Alex Myers
39753f1fd6 pytest: Address race condition in test_gossip_jsonrpc
In the time it takes connectd to flush the log message, gossipd can
already have sent the announcement.

lightningd-1 2025-02-11T15:26:06.745Z DEBUG 022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59-channeld-chan#1: peer_in WIRE_ANNOUNCEMENT_SIGNATURES
lightningd-2 2025-02-11T15:26:06.887Z DEBUG 0266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c03518-channeld-chan#1: peer_in WIRE_ANNOUNCEMENT_SIGNATURES
lightningd-1 2025-02-11T15:26:06.897Z TRACE gossipd: Received node_announcement for node 0266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c03518
lightningd-1 2025-02-11T15:26:06.915Z DEBUG 022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59-connectd: peer_out WIRE_ANNOUNCEMENT_SIGNATURES

Changelog-None
2025-02-11 19:16:16 -06:00
Rusty Russell
1df1300cc9 gossip_store: don't need to check for truncated amounts.
That's actually caught by the gossmap load now.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2025-02-11 15:11:47 -06:00
Rusty Russell
4f2a7039c6 gossipd: put gossip_store pointer inside gossmap_manage.
It's actually the only one that uses it.  We also tweak the way
gossip_store handles failure: gossmap_manage now tells it when to
reset the corrupted store.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2025-02-11 15:11:47 -06:00
Rusty Russell
fdfc7ce62f gossmap: add (and use) logging hook.
Default goes to stderr for LOG_UNUSUAL and higher.

We have to whitelist more cases in map_catchup so we don't spam the logs
with perfectly-expected (but ignored) messages though.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2025-02-11 15:11:47 -06:00
Alex Myers
41b05e14b7 pytest: update test_plugin_slowinit to match longer timeout 2025-02-11 08:40:29 -06:00
Rusty Russell
3b16637c27 pytest: fix race in test_autoclean
We can actually delete it before counters are updated:

```
        wait_for(lambda: len(l3.rpc.listinvoices()['invoices']) == 2)
>       assert l3.rpc.autoclean_status()['autoclean']['expiredinvoices']['cleaned'] == 3
E       assert 1 == 3

tests/test_plugin.py:3266: AssertionError
```

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2025-02-11 08:40:29 -06:00
niftynei
fc08340d74 bkpr: fixup htlc penalty test with anchors
Test flake where the balance for lightning-2 went negative

```
>       assert account_balance(l2, channel_id) == 0

tests/test_closing.py:1314:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tests/utils.py:183: in account_balance
    m_sum -= Millisatoshi(m['debit_msat'])
contrib/pyln-client/pyln/client/lightning.py:193: in __sub__
    return Millisatoshi(int(self) - int(other))
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = -10000msat, v = -10000
```

Led me to look into this test. lightning-2 should go negative since we
roll back the amounts it's received by going to a prior database state.

Rather than trying to do the right thing with obviously broken node
records, we just stop trying to account for them correctly
(impossible).

I also noticed that the anchor tests were failing the utxo output
matchup, which we should be asserting on. The HTLC RBF that our
anchor code creates was causing an issue by creating another wallet
deposit utxo under the HTLC output. We now optionally add this utxo
when anchors are turned on.

Changelog-None: Fix test flake
2025-02-07 14:38:54 -06:00
niftynei
473aefd9cb bkpr tests: return actual error data, not just a blank assert failure
It's really hard to tell what on earth went wrong when a coin movement
check fails, since we don't return good error info.

Here we replace almost every `assert` with a proper check + error with
message to help make debugging easier.

cc @rustyrussell

Changelog-None: improve failure messages
2025-02-07 14:38:54 -06:00
Dusty Daemon
755c807477 splice: Test for missing user signatures 2025-02-07 12:49:20 -06:00
Lagrang3
3f2b49022e xpay-handle-pay: test null params
Changelog-None: xpay-handle-pay: test null params in json array.

Signed-off-by: Lagrang3 <lagrang3@protonmail.com>
2025-02-06 12:46:38 -06:00
Rusty Russell
b78b3232e3 lightningd: implement unblacklist.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Added: JSON-RPC: `blacklistrune` now has an option to un-blacklist (`relist`).
2025-02-05 16:43:09 -06:00
Rusty Russell
7b1e36d922 wallet: remove migration code for old commando runes.
This migration was introduced in dccbccf8f2 (pre 23.08), so the only way they
would need this is if they migrated straight from 23.05 to 25.02.  And then
the solution is to migrate to a prior version first, but I'll bet good money
we never, ever see this message:

   Commando runes still present?  Migration removed in v25.02: call Rusty!

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2025-02-05 16:43:09 -06:00
Rusty Russell
fc9f062124 gossipd: extra debugging when inject fails.
test_closing_different_fees fails:

```
2024-10-14T08:43:30.2733614Z
2024-10-14T08:43:30.2734133Z         # Now wait for them all to hit normal state, do payments
2024-10-14T08:43:30.2735205Z >       l1.daemon.wait_for_logs(['update for channel .* now ACTIVE'] * num_peers
2024-10-14T08:43:30.2736233Z                                 + ['to CHANNELD_NORMAL'] * num_peers)
2024-10-14T08:43:30.2736725Z
2024-10-14T08:43:30.2736903Z tests/test_closing.py:230:
...
2024-10-14T08:43:30.2761325Z E                   TimeoutError: Unable to find "[re.compile('update for channel .* now ACTIVE')]" in logs.
```

For some reason one of the channel_update injections does *not* evoke this message
from gossipd...

Changelog-None: debug!
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2025-02-04 13:18:00 -06:00
ShahanaFarooqui
54555ce5e7 doc: Add new createrune example
- Moved the `Usage` section further down in `createrune` and `commando-rune` for improved UX.
- Added a new example for creating a rune with `read-only` restrictions, extending it to allow only payments of `less than 100,000 sats per day` using the `pay` or `xpay` methods.
- Adjusted formatting by appending an extra space after the `dependentUpon` condition, fixing `[*start* [*end*]][*relist*]` to `[*start* [*end*]] [*relist*]`.
- Relocated `Examples` from the expandable section to a standard heading, as examples are now already placed at the end of the page.

Changelog-None.
2025-02-04 20:12:59 +10:30
daywalker90
cac9315c00 clnrest: replace with rust plugin
Changelog-Changed: clnrest is now a rust plugin
2025-01-29 18:32:23 -08:00
Rusty Russell
01bce4e7b7 askrene: don't crash if asked to route 0 msat.
But don't allow it either.

```
Jan 19 13:18:52 boltz lightningd[2259911]: cln-askrene: plugins/askrene/algorithm.c:274: simple_feasibleflow: Assertion `amount > 0' failed.
Jan 19 13:18:52 boltz lightningd[2259911]: cln-askrene: FATAL SIGNAL 6 (version v24.11.1-1-ge9dbdeb)
Jan 19 13:18:52 boltz lightningd[2259911]: 0x5576e212b407 send_backtrace
Jan 19 13:18:52 boltz lightningd[2259911]:         common/daemon.c:33
Jan 19 13:18:52 boltz lightningd[2259911]: 0x5576e212b49e crashdump
Jan 19 13:18:52 boltz lightningd[2259911]:         common/daemon.c:75
Jan 19 13:18:52 boltz lightningd[2259911]: 0x7f964ba9251f ???
Jan 19 13:18:52 boltz lightningd[2259911]:         ???:0
Jan 19 13:18:52 boltz lightningd[2259911]: 0x7f964bae69fc ???
Jan 19 13:18:52 boltz lightningd[2259911]:         ???:0
Jan 19 13:18:52 boltz lightningd[2259911]: 0x7f964ba92475 ???
Jan 19 13:18:52 boltz lightningd[2259911]:         ???:0
Jan 19 13:18:52 boltz lightningd[2259911]: 0x7f964ba787f2 ???
Jan 19 13:18:52 boltz lightningd[2259911]:         ???:0
Jan 19 13:18:52 boltz lightningd[2259911]: 0x7f964ba7871a ???
Jan 19 13:18:52 boltz lightningd[2259911]:         ???:0
Jan 19 13:18:52 boltz lightningd[2259911]: 0x7f964ba89e95 ???
Jan 19 13:18:52 boltz lightningd[2259911]:         ???:0
Jan 19 13:18:52 boltz lightningd[2259911]: 0x5576e211695e simple_feasibleflow
Jan 19 13:18:52 boltz lightningd[2259911]:         plugins/askrene/algorithm.c:274
Jan 19 13:18:52 boltz lightningd[2259911]: 0x5576e2111495 minflow
Jan 19 13:18:52 boltz lightningd[2259911]:         plugins/askrene/mcf.c:1014
Jan 19 13:18:52 boltz lightningd[2259911]: 0x5576e210bc74 get_routes
Jan 19 13:18:52 boltz lightningd[2259911]:         plugins/askrene/askrene.c:414
Jan 19 13:18:52 boltz lightningd[2259911]: 0x5576e210c610 do_getroutes
Jan 19 13:18:52 boltz lightningd[2259911]:         plugins/askrene/askrene.c:615
Jan 19 13:18:52 boltz lightningd[2259911]: 0x5576e210cad8 listpeerchannels_done
Jan 19 13:18:52 boltz lightningd[2259911]:         plugins/askrene/askrene.c:741
Jan 19 13:18:52 boltz lightningd[2259911]: 0x5576e211b35a handle_rpc_reply
Jan 19 13:18:52 boltz lightningd[2259911]:         plugins/libplugin.c:1084
Jan 19 13:18:52 boltz lightningd[2259911]: 0x5576e211b54c rpc_read_response_one
Jan 19 13:18:52 boltz lightningd[2259911]:         plugins/libplugin.c:1388
Jan 19 13:18:52 boltz lightningd[2259911]: 0x5576e211b5fd rpc_conn_read_response
Jan 19 13:18:52 boltz lightningd[2259911]:         plugins/libplugin.c:1412
Jan 19 13:18:52 boltz lightningd[2259911]: 0x5576e214fe8f next_plan
Jan 19 13:18:52 boltz lightningd[2259911]:         ccan/ccan/io/io.c:60
Jan 19 13:18:52 boltz lightningd[2259911]: 0x5576e215036e do_plan
Jan 19 13:18:52 boltz lightningd[2259911]:         ccan/ccan/io/io.c:422
Jan 19 13:18:52 boltz lightningd[2259911]: 0x5576e215042b io_ready
Jan 19 13:18:52 boltz lightningd[2259911]:         ccan/ccan/io/io.c:439
Jan 19 13:18:52 boltz lightningd[2259911]: 0x5576e2151e2e io_loop
Jan 19 13:18:52 boltz lightningd[2259911]:         ccan/ccan/io/poll.c:455
Jan 19 13:18:52 boltz lightningd[2259911]: 0x5576e211cc29 plugin_main
Jan 19 13:18:52 boltz lightningd[2259911]:         plugins/libplugin.c:2488
Jan 19 13:18:52 boltz lightningd[2259911]: 0x5576e210cb38 main
Jan 19 13:18:52 boltz lightningd[2259911]:         plugins/askrene/askrene.c:1262
Jan 19 13:18:52 boltz lightningd[2259911]: 0x7f964ba79d8f ???
```

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Fixed: JSON-RPC: `getroutes` will refuse, not crash, if asked to find a route for 0msat.
2025-01-28 10:53:22 +10:30
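The fix boils down to validating the amount before the flow computation is entered, so the `assert amount > 0` in simple_feasibleflow can never fire; the caller gets a clean RPC error instead of FATAL SIGNAL 6. A sketch of such a guard (hypothetical names, not the actual askrene C code):

```python
def validate_getroutes_amount(amount_msat: int) -> int:
    """Reject zero (or negative) amounts up front so the flow
    solver's internal assertion can never be reached."""
    if amount_msat <= 0:
        raise ValueError("getroutes: amount must be greater than 0msat")
    return amount_msat

assert validate_getroutes_amount(1000) == 1000
try:
    validate_getroutes_amount(0)
except ValueError:
    pass
else:
    raise AssertionError("0msat should have been rejected")
```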
Rusty Russell
4e887c7c67 lightningd: add option to crash itself after some time.
We have CI runs which timeout (after 2 hours).  It's not clear why,
but we can at least eliminate CLN lockups as the answer.

Since pytest disables the --timeout option on test shutdown, we could be
seeing an issue where stopping takes a long time?

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2025-01-27 11:07:04 +10:30
Rusty Russell
9565b3a4a3 pytest: fix flake in test_onchain_slow_anchor.
If we get "lucky", the commit tx will have a short sig, costing one less weight unit (a 1 in 256 chance):

```
    @unittest.skipIf(TEST_NETWORK != 'regtest', 'elementsd anchors not supportd')
    def test_onchain_slow_anchor(node_factory, bitcoind):
        """We still use anchors for non-critical closes"""
        l1, l2 = node_factory.line_graph(2)
    
        # Don't let l1 succeed in sending commit tx
        def censoring_sendrawtx(r):
            return {'id': r['id'], 'result': {}}
    
        l1.daemon.rpcproxy.mock_rpc('sendrawtransaction', censoring_sendrawtx)
    
        close_start_depth = bitcoind.rpc.getblockchaininfo()['blocks']
    
        # Make l1 close unilaterally.
        l1.rpc.disconnect(l2.info['id'], force=True)
        l1.rpc.close(l2.info['id'], unilateraltimeout=1)
    
        # We will have a super-low-prio anchor spend.
        l1.daemon.wait_for_log(r"Low-priority anchorspend aiming for block {} \(feerate 253\)".format(close_start_depth + 2016))
    
        # Restart with reduced block time.
        l1.stop()
        l1.daemon.opts['dev-low-prio-anchor-blocks'] = 20
        l1.start()
    
        l1.daemon.wait_for_log("Low-priority anchorspend aiming for block {}".format(close_start_depth + 20))
        l1.daemon.wait_for_log("Anchorspend for local commit tx")
    
        # Won't go under 12 blocks though.
    
        # Make sure it sees all these blocks at once, to avoid test flakes!
        l1.stop()
        bitcoind.generate_block(7)
        l1.start()
    
        height = bitcoind.rpc.getblockchaininfo()['blocks']
        l1.daemon.wait_for_log(r"Low-priority anchorspend aiming for block {} \(feerate 7458\)".format(height + 13))
>       l1.daemon.wait_for_log(r"Anchorspend for local commit tx fee 12335sat \(w=714\), commit_tx fee 4545sat \(w=768\): package feerate 11390 perkw")
```

Here's the log we *did* get:

```
2025-01-25T08:46:40.9399213Z lightningd-1 2025-01-25T08:40:06.312Z DEBUG   022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59-chan#1: Anchorspend for local commit tx fee 12328sat (w=714), commit_tx fee 4545sat (w=767): package feerate 11392 perkw
```

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2025-01-27 11:07:04 +10:30
Rusty Russell
e1e476fd39 pytest: mark test_rbf_reconnect_tx_construct flaky:
Doesn't always die messily, it seems?


```
    @unittest.skipIf(TEST_NETWORK != 'regtest', 'elementsd doesnt yet support PSBT features we need')
    @pytest.mark.openchannel('v2')
    def test_rbf_reconnect_tx_construct(node_factory, bitcoind, chainparams):
        disconnects = ['=WIRE_TX_ADD_INPUT',  # Initial funding succeeds
                       '-WIRE_TX_ADD_INPUT',
                       '+WIRE_TX_ADD_INPUT',
                       '-WIRE_TX_ADD_OUTPUT',
                       '+WIRE_TX_ADD_OUTPUT',
                       '-WIRE_TX_COMPLETE',
                       '+WIRE_TX_COMPLETE',
                       '-WIRE_COMMITMENT_SIGNED',
                       '+WIRE_COMMITMENT_SIGNED']
    
        l1, l2 = node_factory.get_nodes(2,
                                        opts=[{'disconnect': disconnects,
                                               'may_reconnect': True,
                                               'dev-no-reconnect': None},
                                              {'may_reconnect': True,
                                               'dev-no-reconnect': None,
                                               'broken_log': 'dualopend daemon died before signed PSBT returned'}])
    
        l1.rpc.connect(l2.info['id'], 'localhost', l2.port)
        amount = 2**24
        chan_amount = 100000
        bitcoind.rpc.sendtoaddress(l1.rpc.newaddr()['bech32'], amount / 10**8 + 0.01)
        bitcoind.generate_block(1)
        # Wait for it to arrive.
        wait_for(lambda: len(l1.rpc.listfunds()['outputs']) > 0)
    
        res = l1.rpc.fundchannel(l2.info['id'], chan_amount)
        chan_id = res['channel_id']
        vins = bitcoind.rpc.decoderawtransaction(res['tx'])['vin']
        assert(only_one(vins))
        prev_utxos = ["{}:{}".format(vins[0]['txid'], vins[0]['vout'])]
    
        # Check that we're waiting for lockin
        l1.daemon.wait_for_log(' to DUALOPEND_AWAITING_LOCKIN')
    
        # rbf the lease with a higher amount
        rate = int(find_next_feerate(l1, l2)[:-5])
        # We 4x the feerate to beat the min-relay fee
        next_feerate = '{}perkw'.format(rate * 4)
    
        # Initiate an RBF
        startweight = 42 + 172  # base weight, funding output
        initpsbt = l1.rpc.utxopsbt(chan_amount, next_feerate, startweight,
                                   prev_utxos, reservedok=True,
                                   excess_as_change=True)
    
        # Run through TX_ADD wires
        for d in disconnects[1:-4]:
            l1.rpc.connect(l2.info['id'], 'localhost', l2.port)
            with pytest.raises(RpcError):
                l1.rpc.openchannel_bump(chan_id, chan_amount, initpsbt['psbt'])
            wait_for(lambda: l1.rpc.getpeer(l2.info['id'])['connected'] is False)
    
        # The first TX_COMPLETE breaks
        l1.rpc.connect(l2.info['id'], 'localhost', l2.port)
        bump = l1.rpc.openchannel_bump(chan_id, chan_amount, initpsbt['psbt'])
        with pytest.raises(RpcError):
            update = l1.rpc.openchannel_update(chan_id, bump['psbt'])
        wait_for(lambda: l1.rpc.getpeer(l2.info['id'])['connected'] is False)
        l1.rpc.connect(l2.info['id'], 'localhost', l2.port)
        # l1 should remember, l2 has forgotten
        # l2 should send tx-abort, to reset
        l2.daemon.wait_for_log(r'tx-abort: Sent next_funding_txid .* doesn\'t match ours .*')
        l1.daemon.wait_for_log(r'Cleaned up incomplete inflight')
        # abort doesn't cause a disconnect
        assert l1.rpc.getpeer(l2.info['id'])['connected']
    
        # The next TX_COMPLETE break (both remember) + they break on the
        # COMMITMENT_SIGNED during the reconnect
        bump = l1.rpc.openchannel_bump(chan_id, chan_amount, initpsbt['psbt'])
        with pytest.raises(RpcError):
            update = l1.rpc.openchannel_update(chan_id, bump['psbt'])
        wait_for(lambda: l1.rpc.getpeer(l2.info['id'])['connected'] is False)
        l1.rpc.connect(l2.info['id'], 'localhost', l2.port)
        l2.daemon.wait_for_logs([r'Got dualopend reestablish',
                                 r'No commitment, not sending our sigs'])
        l1.daemon.wait_for_logs([r'Got dualopend reestablish',
                                 r'No commitment, not sending our sigs',
                                 r'dev_disconnect: -WIRE_COMMITMENT_SIGNED',
                                 'peer_disconnect_done'])
        assert not l1.rpc.getpeer(l2.info['id'])['connected']
        l1.rpc.connect(l2.info['id'], 'localhost', l2.port)
    
        # COMMITMENT_SIGNED disconnects *during* the reconnect
        # We can't bump because the last negotiation is in the wrong state
        with pytest.raises(RpcError, match=r'Funding sigs for this channel not secured'):
            l1.rpc.openchannel_bump(chan_id, chan_amount, initpsbt['psbt'])
        # l2 reconnects, but doesn't have l1's commitment
>       l2.daemon.wait_for_logs([r'Got dualopend reestablish',
                                 r'No commitment, not sending our sigs',
                                 # This is a BROKEN log, it's expected!
                                 r'dualopend daemon died before signed PSBT returned'])

tests/test_opening.py:944: 
...
>                   raise TimeoutError('Unable to find "{}" in logs.'.format(exs))
E                   TimeoutError: Unable to find "[re.compile('dualopend daemon died before signed PSBT returned')]" in logs.
```

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2025-01-27 11:07:04 +10:30
Rusty Russell
21cd11825a pytest: fix flake in test_onchain_reestablish_reply
We can ask for the state too fast, before WIRE_ERROR is processed:

```
2025-01-24T14:17:57.4799255Z ________________________ test_onchain_reestablish_reply ________________________
2025-01-24T14:17:57.4800383Z [gw2] linux -- Python 3.10.16 /home/runner/.cache/pypoetry/virtualenvs/cln-meta-project-AqJ9wMix-py3.10/bin/python
2025-01-24T14:17:57.4801246Z 
2025-01-24T14:17:57.4802068Z node_factory = <pyln.testing.utils.NodeFactory object at 0x7f66ec144400>
2025-01-24T14:17:57.4802645Z bitcoind = <pyln.testing.utils.BitcoinD object at 0x7f66ec145ab0>
2025-01-24T14:17:57.4803483Z executor = <concurrent.futures.thread.ThreadPoolExecutor object at 0x7f66ec146da0>
2025-01-24T14:17:57.4804029Z 
2025-01-24T14:17:57.4804340Z     def test_onchain_reestablish_reply(node_factory, bitcoind, executor):
2025-01-24T14:17:57.4805013Z         l1, l2, l3 = node_factory.line_graph(3, opts={'may_reconnect': True,
2025-01-24T14:17:57.4805572Z                                                       'dev-no-reconnect': None})
2025-01-24T14:17:57.4805990Z     
2025-01-24T14:17:57.4806262Z         # Make l1 close unilaterally.
2025-01-24T14:17:57.4806697Z         l1.rpc.disconnect(l2.info['id'], force=True)
2025-01-24T14:17:57.4807200Z         l1.rpc.close(l2.info['id'], unilateraltimeout=1)
2025-01-24T14:17:57.4807622Z     
2025-01-24T14:17:57.4808025Z         # l2 doesn't know, reconnection should get REESTABLISH *then* error.
2025-01-24T14:17:57.4808637Z         l2.rpc.connect(l1.info['id'], 'localhost', l1.port)
2025-01-24T14:17:57.4809066Z     
2025-01-24T14:17:57.4809334Z         # We should exchange messages
2025-01-24T14:17:57.4809835Z         l2.daemon.wait_for_logs(["peer_out WIRE_CHANNEL_REESTABLISH",
2025-01-24T14:17:57.4810399Z                                  "peer_in WIRE_CHANNEL_REESTABLISH"])
2025-01-24T14:17:57.4811197Z         # It should be OK
2025-01-24T14:17:57.4811642Z         l2.daemon.wait_for_log("Reconnected, and reestablished.")
2025-01-24T14:17:57.4812116Z     
2025-01-24T14:17:57.4812378Z         # Then we get the error, close.
2025-01-24T14:17:57.4812824Z         l2.daemon.wait_for_log("peer_in WIRE_ERROR")
2025-01-24T14:17:57.4813576Z >       assert only_one(l2.rpc.listpeerchannels(l1.info['id'])['channels'])['state'] == 'AWAITING_UNILATERAL'
2025-01-24T14:17:57.4814446Z E       AssertionError: assert 'CHANNELD_NORMAL' == 'AWAITING_UNILATERAL'
2025-01-24T14:17:57.4815014Z E         - AWAITING_UNILATERAL
2025-01-24T14:17:57.4815362Z E         + CHANNELD_NORMAL
```

```
2025-01-24T14:17:57.5760435Z lightningd-2 2025-01-24T14:04:28.398Z DEBUG   0266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c03518-channeld-chan#1: peer_in WIRE_ERROR
2025-01-24T14:17:57.5760674Z lightningd-2 2025-01-24T14:04:28.401Z TRACE   lightningd: Calling rpc_command hook of plugin cln-xpay
2025-01-24T14:17:57.5760935Z lightningd-2 2025-01-24T14:04:28.403Z TRACE   lightningd: Plugin cln-xpay returned from rpc_command hook call
2025-01-24T14:17:57.5761176Z lightningd-2 2025-01-24T14:04:28.408Z TRACE   lightningd: Calling rpc_command hook of plugin cln-xpay
2025-01-24T14:17:57.5761430Z lightningd-2 2025-01-24T14:04:28.414Z TRACE   lightningd: Plugin cln-xpay returned from rpc_command hook call
2025-01-24T14:17:57.5762598Z {'github_repository': 'ElementsProject/lightning', 'github_sha': '390503b31e1949cb84e5a5257bf29a1b80d5cd20', 'github_ref': 'refs/pull/8027/merge', 'github_ref_name': 'HEAD', 'github_run_id': 12950414032, 'github_head_ref': 'guilt/fix-flakes12', 'github_run_number': 12051, 'github_base_ref': 'master', 'github_run_attempt': '1', 'testname': 'test_onchain_reestablish_reply', 'start_time': 1737727408, 'end_time': 1737727468, 'outcome': 'fail'}
2025-01-24T14:17:57.5763430Z lightningd-2 2025-01-24T14:04:28.426Z DEBUG   0266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c03518-channeld-chan#1: billboard perm: Received ERROR channel 69d744488de7ec7f4c018da98a8a9d726ee3b47a97809608e4ce6f17747b5f09: Forcibly closed by `close` command timeout
2025-01-24T14:17:57.5764512Z lightningd-2 2025-01-24T14:04:28.433Z UNUSUAL 0266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c03518-chan#1: Peer permanent failure in CHANNELD_NORMAL: channeld: received ERROR channel 69d744488de7ec7f4c018da98a8a9d726ee3b47a97809608e4ce6f17747b5f09: Forcibly closed by `close` command timeout (reason=protocol)
2025-01-24T14:17:57.5764997Z lightningd-2 2025-01-24T14:04:28.439Z INFO    0266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c03518-chan#1: State changed from CHANNELD_NORMAL to AWAITING_UNILATERAL
```

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2025-01-27 11:07:04 +10:30
Rusty Russell
f68700908e pytest: make test_gossip_throttle more reliable.
By having gossipwith filter out messages we don't want, we can get the counts of
expected messages correct, and not hit errors like this:

```
    def test_gossip_throttle(node_factory, bitcoind, chainparams):
        """Make some gossip, test it gets throttled"""
        l1, l2, l3, l4 = node_factory.line_graph(4, wait_for_announce=True,
                                                 opts=[{}, {}, {}, {'dev-throttle-gossip': None}])
    
        # We expect: self-advertizement (3 messages for l1 and l4) plus
        # 4 node announcements, 3 channel announcements and 6 channel updates.
        # We also expect it to send a timestamp filter message.
        # (We won't take long enough to get a ping!)
        expected = 4 + 4 + 3 + 6 + 1
    
        # l1 is unlimited
        start_fast = time.time()
        out1 = subprocess.run(['devtools/gossipwith',
                               '--all-gossip',
                               '--hex',
                               '--network={}'.format(TEST_NETWORK),
                               '--max-messages={}'.format(expected),
                               '{}@localhost:{}'.format(l1.info['id'], l1.port)],
                              check=True,
                              timeout=TIMEOUT, stdout=subprocess.PIPE).stdout.split()
        time_fast = time.time() - start_fast
        assert time_fast < 2
        # Remove timestamp filter, since timestamp will change!
        out1 = [m for m in out1 if not m.startswith(b'0109')]
    
        # l4 is throttled
        start_slow = time.time()
        out2 = subprocess.run(['devtools/gossipwith',
                               '--all-gossip',
                               '--hex',
                               '--network={}'.format(TEST_NETWORK),
                               '--max-messages={}'.format(expected),
                               '{}@localhost:{}'.format(l4.info['id'], l4.port)],
                              check=True,
                              timeout=TIMEOUT, stdout=subprocess.PIPE).stdout.split()
        time_slow = time.time() - start_slow
        assert time_slow > 3
    
        # Remove timestamp filter, since timestamp will change!
        out2 = [m for m in out2 if not m.startswith(b'0109')]
    
        # Contents should be identical (once uniquified, since each
        # doubles-up on its own gossip)
>       assert set(out1) == set(out2)
E       AssertionError: assert {b'010054b1907bdf639c9060e0fa4bca02419c46f75a99f0908b87a2e09711d5d031ba76b8fd07acc8be1b2fac9e31efb808e5d362c32ef4665...
E         Extra items in the left set:
E         b'01010ad5be8b9ba029245c2ae2d667af7ead7c0129c479c7fd7145a9b65931e90222082e6e4ab37ef60ebd10f1493d73e8bf7a40c4ae5f7d87cc...8488830b60f7e744ed9235eb0b1ba93283b315c035180266e44a554e494f524245414d2d333930353033622d6d6f64646564000000000000000000'
E         Extra items in the right set:
E         b'01079f87eb580b9e5f11dc211e9fb66abb3699999044f8fe146801162393364286c6000000010000006c010101'
```
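Not part of the commit, but a minimal sketch of the filtering idea the message describes: gossip wire messages are hex-encoded with a two-byte BOLT 7 type prefix, so the test drops `gossip_timestamp_filter` (type `0x0109`) messages before comparing the two sets. The `classify` helper below is hypothetical, for illustration only.

```python
# BOLT 7 gossip message types by their two-byte hex prefix.
GOSSIP_TYPES = {
    b'0100': 'channel_announcement',
    b'0101': 'node_announcement',
    b'0102': 'channel_update',
    b'0109': 'gossip_timestamp_filter',
}


def classify(msgs):
    """Group hex-encoded wire messages by their type prefix (hypothetical helper)."""
    out = {}
    for m in msgs:
        name = GOSSIP_TYPES.get(m[:4], 'other')
        out.setdefault(name, []).append(m)
    return out


msgs = [b'0109aabb', b'0100cc', b'0101dd', b'0102ee']
grouped = classify(msgs)
# Drop timestamp filters before comparing, since their timestamps differ per run:
stable = [m for m in msgs if not m.startswith(b'0109')]
```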

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2025-01-27 11:07:04 +10:30
Rusty Russell
123f8ba9d6 pytest: fix race in test_reconnect_remote_sends_no_sigs
We can be a bit early in our assertion:

```
    @unittest.skipIf(os.getenv('TEST_DB_PROVIDER', 'sqlite3') != 'sqlite3', "sqlite3-specific DB manip")
    def test_reconnect_remote_sends_no_sigs(node_factory):
        """We re-announce, even when remote node doesn't send its announcement_signatures on reconnect.
        """
        l1, l2 = node_factory.line_graph(2, wait_for_announce=True, opts={'may_reconnect': True,
                                                                          'dev-no-reconnect': None})
    
        # Wipe l2's gossip_store
        l2.stop()
        gs_path = os.path.join(l2.daemon.lightning_dir, TEST_NETWORK, 'gossip_store')
        os.unlink(gs_path)
        l2.start()
    
    # l2 will now use (REMOTE's) announcement_signatures it has stored
        wait_for(lambda: l2.rpc.listchannels()['channels'] != [])
    
        # Remove remote signatures from l1 so it asks for them (and delete gossip store)
        l1.db_manip("UPDATE channels SET remote_ann_node_sig=NULL, remote_ann_bitcoin_sig=NULL")
        gs_path = os.path.join(l1.daemon.lightning_dir, TEST_NETWORK, 'gossip_store')
        os.unlink(gs_path)
        l1.restart()
    
        l1.connect(l2)
        l1needle = l1.daemon.logsearch_start
        l2needle = l2.daemon.logsearch_start
    
        # l1 asks once, l2 replies once.
        # Make sure we get all the msgs!
        time.sleep(5)
    
        l1.daemon.wait_for_log('peer_out WIRE_ANNOUNCEMENT_SIGNATURES')
        l2.daemon.wait_for_log('peer_out WIRE_ANNOUNCEMENT_SIGNATURES')
    
        l1msgs = [l.split()[4] for l in l1.daemon.logs[l1needle:] if 'WIRE_ANNOUNCEMENT_SIGNATURES' in l]
>       assert l1msgs == ['peer_out', 'peer_in']
E       AssertionError: assert ['peer_out'] == ['peer_out', 'peer_in']
E         Right contains one more item: 'peer_in'
E         Full diff:
E         - ['peer_out', 'peer_in']
E         + ['peer_out']
```
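As a sketch (not from the commit) of the general fix pattern for this kind of race: instead of asserting immediately after `connect`, poll until both sides have logged their message before inspecting ordering. The `wait_until` helper below is hypothetical, in the style of pyln-testing's `wait_for`.

```python
import time


def wait_until(pred, timeout=30, interval=0.25):
    """Poll pred() until it returns True or the timeout expires.

    Hypothetical helper illustrating the fix: wait for both peers'
    WIRE_ANNOUNCEMENT_SIGNATURES log lines before asserting on order.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        if pred():
            return True
        time.sleep(interval)
    raise TimeoutError('condition not met within {}s'.format(timeout))


# Toy stand-in for the log lines the test scans for:
events = ['peer_out', 'peer_in']
wait_until(lambda: 'peer_in' in events, timeout=1)
assert events == ['peer_out', 'peer_in']
```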

```
lightningd-2 2025-01-24T05:53:22.862Z DEBUG   0266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c03518-connectd: peer_out WIRE_ANNOUNCEMENT_SIGNATURES
lightningd-1 2025-01-24T05:53:22.864Z DEBUG   022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59-channeld-chan#1: peer_in WIRE_ANNOUNCEMENT_SIGNATURES
lightningd-1 2025-01-24T05:53:22.885Z DEBUG   022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59-chan#1: channel_gossip: received announcement sigs for 103x1x0 (we have 103x1x0)
{'github_repository': 'ElementsProject/lightning', 'github_sha': 'e9d36f2b8ecd45882753cbe062c355e40bc7109c', 'github_ref': 'refs/pull/8027/merge', 'github_ref_name': 'HEAD', 'github_run_id': 12943530601, 'github_head_ref':
```

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2025-01-27 11:07:04 +10:30
Rusty Russell
7ab1037646 pytest: fix flake in test_connect_ratelimit.
Sometimes they connect too fast, so we don't get a chance to ratelimit all of them:

```
    def test_connect_ratelimit(node_factory, bitcoind):
        """l1 has 5 peers, restarts, make sure we limit"""
        nodes = node_factory.get_nodes(6,
                                       opts=[{'dev-limit-connections-inflight': None, 'may_reconnect': True}] + [{'may_reconnect': True}] * 5)
    
        l1 = nodes[0]
        nodes = nodes[1:]
    
        addr = l1.rpc.newaddr()['bech32']
        for n in nodes:
            bitcoind.rpc.sendtoaddress(addr, (FUNDAMOUNT + 1000000) / 10**8)
        bitcoind.generate_block(1, wait_for_mempool=len(nodes))
        sync_blockheight(bitcoind, [l1])
    
        for n in nodes:
            l1.rpc.connect(n.info['id'], 'localhost', n.port)
            l1.rpc.fundchannel(n.info['id'], FUNDAMOUNT)
    
        # Make sure all channels are established and announced.
        bitcoind.generate_block(6, wait_for_mempool=len(nodes))
        wait_for(lambda: len(l1.rpc.listchannels()['channels']) == len(nodes) * 2)
    
        assert not l1.daemon.is_in_log('Unblocking for')
    
        l1.restart()
    
        # The first will be ok, but others should block and be unblocked.
>       l1.daemon.wait_for_logs((['Unblocking for ']
                                 + ['Too many connections, waiting'])
                                * (len(nodes) - 1))

tests/test_connection.py:4721: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <pyln.testing.utils.LightningD object at 0x7f6e288a3a60>
regexs = ['Unblocking for ', 'Too many connections, waiting', 'Unblocking for ', 'Too many connections, waiting', 'Unblocking for ', 'Too many connections, waiting', ...]
timeout = 180

    def wait_for_logs(self, regexs, timeout=TIMEOUT):
        """Look for `regexs` in the logs.
    
        The logs contain tailed stdout of the process. We look for each regex
        in `regexs`, starting from `logsearch_start` which normally is the
        position of the last found entry of a previous wait-for logs call.
        The ordering inside `regexs` doesn't matter.
    
        We fail if the timeout is exceeded or if the underlying process
        exits before all the `regexs` were found.
    
        If timeout is None, no time-out is applied.
        """
        logging.debug("Waiting for {} in the logs".format(regexs))
        exs = [re.compile(r) for r in regexs]
        start_time = time.time()
        while True:
            if self.logsearch_start >= len(self.logs):
                if not self.logs_catchup():
                    time.sleep(0.25)
    
                if timeout is not None and time.time() > start_time + timeout:
                    print("Time-out: can't find {} in logs".format(exs))
                    for r in exs:
                        if self.is_in_log(r):
                            print("({} was previously in logs!)".format(r))
>                   raise TimeoutError('Unable to find "{}" in logs.'.format(exs))
E                   TimeoutError: Unable to find "[re.compile('Unblocking for '), re.compile('Too many connections, waiting')]" in logs.
```

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2025-01-27 11:07:04 +10:30
Rusty Russell
1fffe6aa51 pytest: fix test_splice_disconnect_commit
This test seems confused: l2 won't be able to reconnect to l1 (l1
connected to l2 in the first place, so it's the only one which can
reconnect).  Also, there's a commitment_signed early on when we have
dual funding, so this doesn't actually test the splicing disconnect in
that case?

The race seems to happen when l1 reconnects and l2 hasn't yet registered the disconnect.

```
________________________ test_splice_disconnect_commit _________________________
[gw9] linux -- Python 3.10.16 /home/runner/.cache/pypoetry/virtualenvs/cln-meta-project-AqJ9wMix-py3.10/bin/python

node_factory = <pyln.testing.utils.NodeFactory object at 0x7f53ebc1a950>
bitcoind = <pyln.testing.utils.BitcoinD object at 0x7f53ebc1b760>
executor = <concurrent.futures.thread.ThreadPoolExecutor object at 0x7f53ebc1a170>

    @pytest.mark.openchannel('v1')
    @pytest.mark.openchannel('v2')
    @unittest.skipIf(TEST_NETWORK != 'regtest', 'elementsd doesnt yet support PSBT features we need')
    def test_splice_disconnect_commit(node_factory, bitcoind, executor):
        l1 = node_factory.get_node(options={'experimental-splicing': None}, may_reconnect=True)
        l2 = node_factory.get_node(disconnect=['+WIRE_COMMITMENT_SIGNED'],
                                   options={'experimental-splicing': None, 'dev-no-reconnect': None},
                                   may_reconnect=True)
>       l1.openchannel(l2, 1000000)

tests/test_splicing_disconnect.py:77: 

...
        elif "error" in resp:
>           raise RpcError(method, payload, resp['error'])
E           pyln.client.lightning.RpcError: RPC call failed: method: fundchannel, payload: {'id': '022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59', 'amount': 1000000, 'announce': True}, error: {'code': -1, 'message': 'Disconnected', 'data': {'id': '022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59', 'method': 'openchannel_update'}}
```

From the logs:

```
2025-01-23T23:50:25.4098040Z lightningd-2 2025-01-23T23:47:12.443Z DEBUG   0266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c03518-connectd: dev_disconnect: +WIRE_COMMITMENT_SIGNED (WIRE_COMMITMENT_SIGNED)
...
2025-01-23T23:50:25.4107026Z lightningd-2 2025-01-23T23:47:12.444Z DEBUG   0266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c03518-dualopend-chan#1: peer_out WIRE_COMMITMENT_SIGNED
2025-01-23T23:50:25.4108070Z lightningd-2 2025-01-23T23:47:12.444Z INFO    0266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c03518-dualopend-chan#1: Peer connection lost
2025-01-23T23:50:25.4109375Z lightningd-2 2025-01-23T23:47:12.445Z INFO    0266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c03518-chan#1: Peer transient failure in DUALOPEND_OPEN_COMMIT_READY: dualopend: Owning subdaemon dualopend died (62208)
...
2025-01-23T23:50:25.4111195Z lightningd-2 2025-01-23T23:47:12.445Z DEBUG   plugin-cln-grpc: Received a message: CustomNotification(Object {\"jsonrpc\": String(\"2.0\"), \"method\": String(\"channel_open_failed\"), \"params\": Object {\"channel_open_failed\": Object {\"channel_id\": String(\"252d1b0a1e57895e84137f28cf19ab2c35847e284c112fefdecc7afeaa5c1de7\")}}})
2025-01-23T23:50:25.4113039Z lightningd-2 2025-01-23T23:47:12.445Z DEBUG   plugin-cln-grpc: Dispatching custom notification Object {\"jsonrpc\": String(\"2.0\"), \"method\": String(\"channel_open_failed\"), \"params\": Object {\"channel_open_failed\": Object {\"channel_id\": String(\"252d1b0a1e57895e84137f28cf19ab2c35847e284c112fefdecc7afeaa5c1de7\")}}}
2025-01-23T23:50:25.4114525Z lightningd-2 2025-01-23T23:47:12.446Z DEBUG   plugin-funder: Cleaning up inflights for peer id 0266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c03518
2025-01-23T23:50:25.4115550Z lightningd-2 2025-01-23T23:47:12.446Z DEBUG   plugin-funder: Cleaning up inflight for channel_id 252d1b0a1e57895e84137f28cf19ab2c35847e284c112fefdecc7afeaa5c1de7
2025-01-23T23:50:25.4116406Z lightningd-2 2025-01-23T23:47:12.446Z TRACE   lightningd: Calling rpc_command hook of plugin cln-xpay
2025-01-23T23:50:25.4117134Z lightningd-2 2025-01-23T23:47:12.447Z TRACE   lightningd: Plugin cln-xpay returned from rpc_command hook call
2025-01-23T23:50:25.4117728Z lightningd-1 2025-01-23T23:47:12.448Z DEBUG   connectd: drain_peer
2025-01-23T23:50:25.4118229Z lightningd-1 2025-01-23T23:47:12.448Z DEBUG   connectd: drain_peer draining subd!
2025-01-23T23:50:25.4119066Z lightningd-1 2025-01-23T23:47:12.448Z DEBUG   022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59-lightningd: peer_disconnect_done
2025-01-23T23:50:25.4119792Z lightningd-1 2025-01-23T23:47:12.449Z DEBUG   connectd: maybe_free_peer freeing peer!
...
2025-01-23T23:50:25.4135647Z lightningd-2 2025-01-23T23:47:12.455Z DEBUG   0266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c03518-connectd: Connect IN
2025-01-23T23:50:25.4136554Z lightningd-1 2025-01-23T23:47:12.455Z DEBUG   022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59-connectd: Connected out, starting crypto
2025-01-23T23:50:25.4137502Z lightningd-1 2025-01-23T23:47:12.456Z DEBUG   022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59-connectd: Connect OUT
2025-01-23T23:50:25.4138483Z lightningd-1 2025-01-23T23:47:12.456Z DEBUG   022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59-connectd: peer_out WIRE_INIT
2025-01-23T23:50:25.4139407Z lightningd-1 2025-01-23T23:47:12.456Z DEBUG   022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59-connectd: peer_in WIRE_INIT
2025-01-23T23:50:25.4140714Z lightningd-1 2025-01-23T23:47:12.456Z INFO    022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59-chan#1: Peer transient failure in DUALOPEND_OPEN_COMMIT_READY: Disconnected
```

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2025-01-27 11:07:04 +10:30
Rusty Russell
54a5eddee7 pytest: mark xpay real gossip test as slow
```
____________________ ERROR at teardown of test_xpay_maxfee _____________________
...
            # Format a nice list of everything that went wrong and raise an exception
            request.node.has_errors = True
>           raise ValueError(str(errors))
E           ValueError: 
E           Node errors:
E            - lightningd-1: Node exited with return code 1
E           Global errors:
```

And:

```
    @unittest.skipIf(TEST_NETWORK != 'regtest', 'too dusty on elements')
    def test_xpay_maxfee(node_factory, bitcoind, chainparams):
        """Test which shows that we don't excees maxfee"""
        outfile = tempfile.NamedTemporaryFile(prefix='gossip-store-')
        subprocess.check_output(['devtools/gossmap-compress',
                                 'decompress',
                                 '--node-map=3301=022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59',
                                 'tests/data/gossip-store-2024-09-22.compressed',
                                 outfile.name]).decode('utf-8').splitlines()
        AMOUNT = 100_000_000
    
        # l2 will warn l1 about its invalid gossip: ignore.
        # We throttle l1's gossip to avoid massive log spam.
>       l1, l2 = node_factory.line_graph(2,
                                         # This is in sats, so 1000x amount we send.
                                         fundamount=AMOUNT,
                                         opts=[{'gossip_store_file': outfile.name,
                                                'subdaemon': 'channeld:../tests/plugins/channeld_fakenet',
                                                'allow_warning': True,
                                                'dev-throttle-gossip': None},
                                               {'allow_bad_gossip': True}])

tests/test_xpay.py:509: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
contrib/pyln-testing/pyln/testing/utils.py:1720: in line_graph
    nodes = self.get_nodes(num_nodes, opts=opts)
contrib/pyln-testing/pyln/testing/utils.py:1602: in get_nodes
    return [j.result() for j in jobs]
contrib/pyln-testing/pyln/testing/utils.py:1602: in <listcomp>
    return [j.result() for j in jobs]
/opt/hostedtoolcache/Python/3.10.16/x64/lib/python3.10/concurrent/futures/_base.py:458: in result
    return self.__get_result()
/opt/hostedtoolcache/Python/3.10.16/x64/lib/python3.10/concurrent/futures/_base.py:403: in __get_result
    raise self._exception
/opt/hostedtoolcache/Python/3.10.16/x64/lib/python3.10/concurrent/futures/thread.py:58: in run
    result = self.fn(*self.args, **self.kwargs)
contrib/pyln-testing/pyln/testing/utils.py:1653: in get_node
    node.start(wait_for_bitcoind_sync)
contrib/pyln-testing/pyln/testing/utils.py:1015: in start
    self.daemon.start(stderr_redir=stderr_redir)
contrib/pyln-testing/pyln/testing/utils.py:671: in start
    self.wait_for_log("Server started with public key")
contrib/pyln-testing/pyln/testing/utils.py:355: in wait_for_log
    return self.wait_for_logs([regex], timeout)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <pyln.testing.utils.LightningD object at 0x7f27ab586c20>
regexs = ['Server started with public key'], timeout = 180

    def wait_for_logs(self, regexs, timeout=TIMEOUT):
        """Look for `regexs` in the logs.
    
        The logs contain tailed stdout of the process. We look for each regex
        in `regexs`, starting from `logsearch_start` which normally is the
        position of the last found entry of a previous wait-for logs call.
        The ordering inside `regexs` doesn't matter.
    
        We fail if the timeout is exceeded or if the underlying process
        exits before all the `regexs` were found.
    
        If timeout is None, no time-out is applied.
        """
        logging.debug("Waiting for {} in the logs".format(regexs))
        exs = [re.compile(r) for r in regexs]
        start_time = time.time()
        while True:
            if self.logsearch_start >= len(self.logs):
                if not self.logs_catchup():
                    time.sleep(0.25)
    
                if timeout is not None and time.time() > start_time + timeout:
                    print("Time-out: can't find {} in logs".format(exs))
                    for r in exs:
                        if self.is_in_log(r):
                            print("({} was previously in logs!)".format(r))
>                   raise TimeoutError('Unable to find "{}" in logs.'.format(exs))
E                   TimeoutError: Unable to find "[re.compile('Server started with public key')]" in logs.
```

gossipd (and other plugins) simply take too long to digest the gossmap under valgrind.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2025-01-27 11:07:04 +10:30