Commit graph

3312 commits

Author SHA1 Message Date
Rusty Russell
d196b9bb53 doc: document (and test) the injectonionmessage API.
It's actually tested by fetchinvoice, but doing an explicit test in Python
allows for schema checking!

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Changed: JSON-RPC: `injectonionmessage` API simplified and documented.
2024-12-05 17:38:16 +10:30
Rusty Russell
b520543867 gossipd: log at trace, not debug for regular messages.
See: https://github.com/ElementsProject/lightning/issues/7899

A node with 23 connections gets far too many debug messages.

Changelog-Fixed: `gossipd` now does logging at trace, not debug level.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2024-12-05 11:43:50 +10:30
Rusty Russell
113156858b xpay: don't exceed maxfee *overall*.
We were handing "maxfee" to every getroutes call, even if we had already
used some of the fees.

Reported-by: @daywalker90
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-None: xpay is new this release.
2024-12-02 14:31:11 +10:30
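A minimal sketch of the fix described in the commit above, assuming a hypothetical `get_routes` callable and route objects with `fee_msat`/`amount_msat` fields (this is not the xpay source): each routing call must see the *remaining* fee budget, not the original `maxfee`.

```
def pay_in_parts(amount_msat, maxfee_msat, get_routes):
    """Split a payment over several routing attempts without exceeding
    maxfee *overall* (illustrative sketch, not the xpay implementation)."""
    remaining_budget = maxfee_msat
    remaining_amount = amount_msat
    routes = []
    while remaining_amount > 0:
        # The bug: passing maxfee_msat here every time let each call
        # spend the full budget.  Pass what is actually left instead.
        route = get_routes(remaining_amount, maxfee=remaining_budget)
        remaining_budget -= route.fee_msat
        assert remaining_budget >= 0, "maxfee exceeded overall"
        remaining_amount -= route.amount_msat
        routes.append(route)
    return routes
```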
Rusty Russell
d0b470618e pytest: test for maxfee compliance.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2024-12-02 14:31:11 +10:30
Rusty Russell
b8e5b122d2 decode: don't fail to decode just because a bolt12 invoice has expired.
In fact, there are several places where we try to decode old invoices,
and they should all work.  The only place we should enforce expiration is
when we're going to pay.

This also revealed that xpay wasn't checking bolt11 expiries!

Reported-by: hMsats
Fixes: https://github.com/ElementsProject/lightning/issues/7869
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Fixed: JSON-RPC: `decode` refused to decode expired bolt12 invoices.
2024-11-30 13:17:55 +01:00
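A sketch of the policy in the commit above, assuming a hypothetical `parse_bolt12` helper (bolt12's default `relative_expiry` is 7200 seconds): `decode` always succeeds and merely reports expiry; only the pay path enforces it.

```
import time

def decode(invstring):
    """Decoding works even for expired invoices (sketch)."""
    fields = parse_bolt12(invstring)   # hypothetical parser
    expires_at = fields['created_at'] + fields.get('relative_expiry', 7200)
    fields['expired'] = time.time() > expires_at
    return fields                      # never refuse just because it expired

def pay(invstring):
    fields = decode(invstring)
    if fields['expired']:              # enforcement belongs here only
        raise ValueError('invoice expired')
    ...
```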
Rusty Russell
14cb0574f7 pytest: test (fails) for decoding expired bolt12 invoices.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2024-11-30 13:17:55 +01:00
ShahanaFarooqui
a3a33fe3be doc: Add GENERATE_EXAMPLES env
- Run with environment variable `GENERATE_EXAMPLES`
- Update cln version in getinfo example on `make update-versions`
- Added two `dev` configs, dev-no-plugin-checksum and dev-no-version-checks, to match CI listconfigs
- Changed commando rpc example from `getinfo` to `newaddr` to avoid unnecessary file updates for future builds
- Stabilized `bkpr-editdescriptionbyoutpoint`, `listclosedchannels` and `listaddresses` examples
2024-11-28 15:56:16 +10:30
ShahanaFarooqui
e568d69867 doc: Lock askrene example values 2024-11-26 21:45:19 +10:30
ShahanaFarooqui
9592facf83 doc: Lock example values
Changelog-Added: Test script generates all RPC documentation examples now.
2024-11-26 21:45:19 +10:30
Rusty Russell
20257c3308 lightningd: --dev-low-prio-anchor-blocks and test for low-priority anchors.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2024-11-26 14:49:36 +10:30
Rusty Russell
de30f9c4b2 anchors: create low priority anchor to spend commit tx within a week.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Changed: Protocol: we now create a low-priority (2016 down to 12 blocks fee target) anchor for low-fee unilateral closes even if there's no urgency.
2024-11-26 14:49:36 +10:30
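The changelog entry above suggests a deadline-driven confirmation target. A sketch of that idea, assuming a linear decay (the actual lightningd curve may differ):

```
def anchor_fee_target(blocks_until_deadline):
    """Pick a confirmation target for the anchor spend: relaxed (2016
    blocks) when the deadline is far away, urgent (12 blocks) when it is
    close.  Sketch only; assumes a linear interpolation."""
    slack, urgent = 2016, 12
    frac = max(0.0, min(1.0, blocks_until_deadline / slack))
    return max(urgent, int(urgent + frac * (slack - urgent)))

assert anchor_fee_target(2016) == 2016
assert anchor_fee_target(0) == 12
```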
niftynei
46fde419b1 pytest: fix up coin_move tests now anchors don't get redundantly spent. 2024-11-25 20:23:21 +10:30
Rusty Russell
5701123209 pytest: fix flake in test_gossip_force_broadcast_channel_msgs
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2024-11-25 15:39:13 +10:30
Rusty Russell
7cdf45bb00 pytest: fix flake in test_ping_timeout
The seeker can send a full gossip query, which means the ping doesn't happen
(it needs 14-45 seconds of quiet!).

We disable the gossip_queries feature, so it doesn't ask.

```
    def test_ping_timeout(node_factory):
        # Disconnects after this, but doesn't know it.
        l1_disconnects = ['xWIRE_PING']
    
        l1, l2 = node_factory.get_nodes(2, opts=[{'dev-no-reconnect': None,
                                                  'disconnect': l1_disconnects},
                                                 {'dev-no-ping-timer': None}])
        l1.rpc.connect(l2.info['id'], 'localhost', l2.port)
    
        # This can take 10 seconds (dev-fast-gossip means timer fires every 5 seconds)
        l1.daemon.wait_for_log('seeker: startup peer finished', timeout=15)
        # Ping timers runs at 15-45 seconds, *but* only fires if also 60 seconds
        # after previous traffic.
>       l1.daemon.wait_for_log('dev_disconnect: xWIRE_PING', timeout=60 + 45 + 5)

tests/test_connection.py:4194: 
...
>                   raise TimeoutError('Unable to find "{}" in logs.'.format(exs))
E                   TimeoutError: Unable to find "[re.compile('dev_disconnect: xWIRE_PING')]" in logs.
```

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2024-11-25 15:39:13 +10:30
Rusty Russell
faf7ae6ad4 pytest: add test for connection ratelimiting.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2024-11-25 15:39:13 +10:30
Rusty Russell
73b9812178 pytest: restore test_sendpay_grouping test.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2024-11-25 15:39:13 +10:30
Rusty Russell
15950bb7d4 connectd: reconnect for non-transient connections.
Rather than have lightningd call us repeatedly to try to connect, have
it tell us which peers are transient and which aren't, and connectd will
automatically try to maintain that connection.

There's a new "downgrade_peer" message to tell it a peer is now
transient: to make it non-transient we simply tell connectd to
connect as a non-transient.

The first time, I missed that dual_open_control does its own state
transitions :(

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Changed: `connectd` now handles maintaining/reconnecting to important peers, and we remember the last successful address we connected to.
2024-11-25 15:39:13 +10:30
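A toy model of the new responsibility split described above (Python, not the C implementation; method bodies are stubs): lightningd marks peers transient or important, and connectd itself keeps the important ones connected, remembering the last address that worked.

```
import random

class Connectd:
    """Sketch of connectd's new ownership of reconnection."""
    def __init__(self):
        self.important = set()      # peers we must keep connected
        self.last_good_addr = {}    # last address that worked, per peer

    def connect(self, peer_id, transient):
        if not transient:
            self.important.add(peer_id)
        self.try_connect(peer_id)

    def downgrade_peer(self, peer_id):
        """lightningd says this peer is now transient: stop maintaining it."""
        self.important.discard(peer_id)

    def on_disconnect(self, peer_id):
        # Previously lightningd had to notice and call us again; now we
        # reconnect on our own, preferring the last successful address.
        if peer_id in self.important:
            self.schedule_reconnect(peer_id, delay=random.uniform(1, 60))

    def try_connect(self, peer_id): ...                # stub
    def schedule_reconnect(self, peer_id, delay): ...  # stub
```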
Rusty Russell
23dc10cf81 connectd: get our own addresses to contact node from node_announcements.
Let lightningd feed us hints to try first, but we can extract the
addresses from node_announcement messages ourselves.

(Lightningd used to ask gossipd on our behalf: this is far simpler!)

One side effect of this is that we don't hand back address hints given to us
by lightningd: it would use these again for reconnecting.  This breaks
test_sendpay_grouping, so we disable it temporarily.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2024-11-25 15:39:13 +10:30
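A sketch of the address-selection order the commit above implies (names are illustrative): hints from lightningd come first, then the addresses connectd extracted from `node_announcement` itself.

```
def addresses_to_try(lightningd_hints, node_announcement_addrs):
    """Order candidate addresses for a connection attempt (sketch)."""
    seen, ordered = set(), []
    # Hints first, then what we parsed from gossip ourselves; note that
    # hints are *not* handed back to lightningd afterwards.
    for addr in list(lightningd_hints) + list(node_announcement_addrs):
        if addr not in seen:        # de-duplicate, keeping hint priority
            seen.add(addr)
            ordered.append(addr)
    return ordered
```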
Alex Myers
11580dfd43 pyln-testing: disable seeker autoconnect by default
This avoids test flakes, but can be explicitly set if needed.

Changelog-None
2024-11-24 12:03:16 +10:30
Rusty Russell
dba9746d21 pytest: fix flake in test_gossip_pruning.
If the first one doesn't use the entire timeout, the second might need longer
(I used TIMEOUT=10 normally):

```
FAILED tests/test_gossip.py::test_gossip_pruning - TimeoutError: Unable to find "[re.compile('Pruning channel 103x1x0 from network view')]" in logs.
```

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2024-11-23 10:20:30 +10:30
Alex Myers
363b721cd3 gossipd: use autoconnect-seeker-peers setting 2024-11-22 15:21:45 +10:30
Alex Myers
f2243e6013 pytest: Add seeker autoconnect test 2024-11-22 15:21:45 +10:30
Rusty Russell
8566370087 pytest: fix flake in test_gossip_force_broadcast_channel_msgs
We can get more gossip_filter messages now.  And we can also go over max-messages,
so increase that too.

```
        del tally['query_short_channel_ids']
        del tally['query_channel_range']
        del tally['ping']
>       assert tally == {'channel_announce': 1,
                         'channel_update': 3,
                         'node_announce': 1,
                         'gossip_filter': 1}
E       AssertionError: assert {'channel_ann..._announce': 1} == {'channel_ann..._announce': 1}
E         Omitting 2 identical items, use -vv to show
E         Differing items:
E         {'gossip_filter': 2} != {'gossip_filter': 1}
E         {'channel_update': 2} != {'channel_update': 3}
E         Full diff:
E           {
E            'channel_announce': 1,...
E         
E         ...Full output truncated (10 lines hidden), use '-vv' to show

tests/test_gossip.py:2326: AssertionError
```

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2024-11-22 14:01:44 +10:30
Rusty Russell
a295099ace pytest: fix flake in test_onchaind_replay.
We actually mine *300* blocks, not 200, and if the timing is right, l1
can have mined the txid before mine_txid_or_rbf() checks the mempool:

```
    def test_onchaind_replay(node_factory, bitcoind):
        disconnects = ['+WIRE_REVOKE_AND_ACK', 'permfail']
        # Feerates identical so we don't get gratuitous commit to update them
        l1, l2 = node_factory.line_graph(2, opts=[{'watchtime-blocks': 201, 'cltv-delta': 101,
                                                   'disconnect': disconnects,
                                                   'feerates': (7500, 7500, 7500, 7500)},
                                                  {'watchtime-blocks': 201, 'cltv-delta': 101}],
                                         wait_for_announce=True)
    
        inv = l2.rpc.invoice(10**8, 'onchaind_replay', 'desc')
        rhash = inv['payment_hash']
        routestep = {
            'amount_msat': 10**8 - 1,
            'id': l2.info['id'],
            'delay': 101,
            'channel': first_scid(l1, l2)
        }
        l1.rpc.sendpay([routestep], rhash, payment_secret=inv['payment_secret'])
        l1.daemon.wait_for_log('sendrawtx exit 0')
        bitcoind.generate_block(1, wait_for_mempool=1)
    
        # Wait for nodes to notice the failure, this seach needle is after the
        # DB commit so we're sure the tx entries in onchaindtxs have been added
        l1.daemon.wait_for_log("Deleting channel .* due to the funding outpoint being spent")
        l2.daemon.wait_for_log("Deleting channel .* due to the funding outpoint being spent")
    
        # We should at least have the init tx now
        assert len(l1.db_query("SELECT * FROM channeltxs;")) > 0
        assert len(l2.db_query("SELECT * FROM channeltxs;")) > 0
    
        # Generate some blocks so we restart the onchaind from DB (we rescan
        # last_height - 100)
        bitcoind.generate_block(100)
        sync_blockheight(bitcoind, [l1, l2])
    
        # l1 should still have a running onchaind
        assert len(l1.db_query("SELECT * FROM channeltxs;")) > 0
    
        l2.rpc.stop()
        l1.restart()
    
        # Can't wait for it, it's after the "Server started" wait in restart()
        assert l1.daemon.is_in_log(r'Restarting onchaind \(ONCHAIN\): closed in block 109')
    
        # l1 should still notice that the funding was spent and that we should react to it
        _, txid, blocks = l1.wait_for_onchaind_tx('OUR_DELAYED_RETURN_TO_WALLET',
                                                  'OUR_UNILATERAL/DELAYED_OUTPUT_TO_US')
        assert blocks == 200
        bitcoind.generate_block(200)
        # Could be RBF!
>       l1.mine_txid_or_rbf(txid)

tests/test_closing.py:1864: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
contrib/pyln-testing/pyln/testing/utils.py:1375: in mine_txid_or_rbf
    wait_for(lambda: rbf_or_txid_broadcast(txids))
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

success = <function LightningNode.mine_txid_or_rbf.<locals>.<lambda> at 0x7f9b129c4550>
timeout = 180

    def wait_for(success, timeout=TIMEOUT):
        start_time = time.time()
        interval = 0.25
        while not success():
            time_left = start_time + timeout - time.time()
            if time_left <= 0:
>               raise ValueError("Timeout while waiting for {}".format(success))
E               ValueError: Timeout while waiting for <function LightningNode.mine_txid_or_rbf.<locals>.<lambda> at 0x7f9b129c4550>
```

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2024-11-22 14:01:44 +10:30
Jesse de Wit
a90d9c9f4f tests: add pay test over unannounced channels
This test fails with cln v24.08.2.  Add this test so the regression
doesn't happen again.

Changelog-None
2024-11-21 11:22:26 +01:00
Rusty Russell
2c9023ee25 pytest: reenable askrene bias test.
We can fix the median calc by removing the (unused) reverse edges.

Also analyze the failure case in test_real_data: it's a real edge case, so
hardcode that one as "ok".

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2024-11-21 16:17:52 +10:30
Lagrang3
05514b46e3 askrene: change median factor to 1.
Overall, the ratio of the median fee cost to the median probability
cost is not a bad factor for combining these two features.  This is
what test_real_data shows.

Changelog-None

Signed-off-by: Lagrang3 <lagrang3@protonmail.com>
2024-11-21 16:17:52 +10:30
Lagrang3
2b3fd67dfb askrene: don't skip fee_fallback test
The fee_fallback test would fail after fixing the computation of the
median.  We can restore it by making the probability cost factor 1000x
higher than the ratio of the median.  This shows how hard it is to
combine fee and probability costs, and why the current approach is so
fragile.

Changelog-None

Signed-off-by: Lagrang3 <lagrang3@protonmail.com>
2024-11-21 16:17:52 +10:30
Lagrang3
4dc1a44cd9 askrene: fix the median
The calculation of the median values of the probability and fee costs
in the linear approximation had a bug: it counted non-existent arcs.

Changelog-None

Signed-off-by: Lagrang3 <lagrang3@protonmail.com>
2024-11-21 16:17:52 +10:30
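A minimal illustration of the bug class named in the commit above (Python, not the askrene C code): including non-existent (e.g. unused reverse) arcs in the median pulls it toward their placeholder cost.

```
from statistics import median

def median_cost(arcs):
    """Median over *existing* arcs only (sketch).  Counting placeholder
    reverse arcs with zero cost was the bug: it dragged the median down."""
    existing = [a['cost'] for a in arcs if a['exists']]
    return median(existing) if existing else 0

arcs = [{'cost': 10, 'exists': True},
        {'cost': 0, 'exists': False},   # non-existent reverse arc
        {'cost': 30, 'exists': True}]
assert median_cost(arcs) == 20          # the buggy version would give 10
```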
Alex Myers
ead5dbf6a2 pytest: allow additional gossip filters
in test_gossip_force_broadcast_channel_msgs now that the seeker
is asking for periodic full gossip syncs.
2024-11-21 14:23:57 +10:30
Dusty Daemon
d04e64670d splice: tx_abort no longer reestablishes
As per the eclair implementation, we skip `channel_reestablish` and go straight into the channel for `tx_abort` events.

Changelog-None
2024-11-21 14:15:36 +10:30
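A sketch of the reconnection branch described above (state and method names are illustrative, not lightningd's): after a `tx_abort`, the peers skip `channel_reestablish` and go straight back into the channel.

```
def on_reconnect(channel):
    """Sketch of post-tx_abort reconnection handling."""
    if channel.last_negotiation_aborted:
        # Matching eclair: no channel_reestablish exchange needed,
        # resume the channel directly.
        channel.enter_normal_operation()
    else:
        channel.send_channel_reestablish()
```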
Dusty Daemon
6d63e68e99 splice: Update messages to spec
Changelog-Changed: Splicing moved from test numbers to spec numbers.
2024-11-21 14:15:36 +10:30
Dusty Daemon
2b3cb8b8a8 splice: Update splice signature msg type
Update to use spec signature type.

Changelog-None
2024-11-21 14:15:36 +10:30
Dusty Daemon
d077fd59c9 splice: Remove blockhash from peer msg
This is no longer used.

Changelog-None
2024-11-21 14:15:36 +10:30
Dusty Daemon
2e92fdc507 interactivetx: Add support for shared prevtx
It is possible for the prevtx to be larger than the maximum packet size, so for shared outputs (currently only the funding tx) we add support for sending only the `txid` across the wire and filling in the prevtx locally.

Changelog-None
2024-11-21 14:15:36 +10:30
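A sketch of that decision, with hypothetical helpers: for a shared input, just the `txid` crosses the wire and each side fills in the prevtx from local storage, since a full prevtx can exceed the maximum packet size.

```
MAX_MSG_BYTES = 65535   # BOLT 8 limits a message payload to 65535 bytes

def serialize_input(inp):
    """Sketch: send txid only for shared prevtxs, the full prevtx otherwise."""
    if inp['is_shared']:                       # e.g. the funding tx we both know
        return {'prevtx_txid': inp['txid']}    # receiver resolves it locally
    assert len(inp['prevtx']) <= MAX_MSG_BYTES
    return {'prevtx': inp['prevtx']}

def deserialize_input(msg, lookup_local_tx):
    if 'prevtx_txid' in msg:
        return lookup_local_tx(msg['prevtx_txid'])  # fill in from our own DB
    return msg['prevtx']
```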
Rusty Russell
2bf1053cdb offers: update block height correctly.
As we can see from the previous test, l3 tells us why it rejected the payment:

```
lightningd-3 2024-11-19T03:56:27.151Z DEBUG   022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59-chan#1: Failing HTLC because of an invalid payload (TLV 10 pos 104): cltv_expiry 136 > payment_constraint 130
```

It turns out, we were not updating the block height in the plugin!

Without this, when we create a (non-dummy) blinded path we set a
too-low CLTV restriction, and it doesn't work after a few blocks!

Note we were actually triggering this error in the xpay tests!

Reported-by: Vincenzo Palazzo
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Fixed: Offers: Receiving bolt12 payments where we have no public channels would fail a few blocks after startup.
2024-11-20 12:29:27 +01:00
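A sketch of the failure mode, using the numbers from the quoted log (the `cltv_delta` value here is an assumption chosen to reproduce them): the blinded path's `payment_constraint` is derived from the plugin's view of the block height, so a stale height yields a limit that real HTLC expiries soon exceed.

```
def payment_constraint(plugin_height, cltv_delta):
    """Sketch: max cltv_expiry accepted at the blinded-path entry.  If
    plugin_height is stale, this is too low and payments start failing
    a few blocks after startup."""
    return plugin_height + cltv_delta

stale = payment_constraint(plugin_height=112, cltv_delta=18)   # -> 130
assert 136 > stale   # "cltv_expiry 136 > payment_constraint 130"
```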
Vincenzo Palazzo
6f4f33d06b test: reproduce WIRE_INVALID_ONION_PAYLOAD
```
>           raise RpcError(method, payload, resp['error'])
E           pyln.client.lightning.RpcError: RPC call failed: method: pay, payload: {'bolt11': 'lni1qqgvsykv6pslpmzq73597z0ws2qv5q3qqc3xu3s3rg94nj40zfsy866mhu5vxne6tcej5878k2mneuvgjy8ssqgzpg48getnw30k7enxv4e97amfw3597urjd9mxzar9ta3ksctwdejkcu6ld46kcaredphhqvssacp462c3jt0m5y6wzrj5pp6axehtez7r20265antsrqfpvuu8fwcshgzqjushll8xx9x356tn40gk9mxzkyua9ajtrdpyhm3uaj9nvj0fm9qyqnjp20gp6gr2qsmfas7j086jvkmszqgyys3uht6jq7g4p6vsg7fyyqrx76aulp40m9uxejn57eyczy6v6hqmxr8xx273l480kd5zcl0g9hqp3d9qnsrj40gmeshx0w7fu6j9cfthksz2xv78wxr4ae4wrc3lht8lryc2kxxdpxvs3tpdepm0asuvp0l25fqqvjumjneecjg9etepcu426t2ueu6p3escfrxl9ggnkh5k2vm929tnt26dx66nt67kfy5lgx99py2jhqalaqkyypjeu2artvufgydym4tryv0wvkca78ac64mjeqt70d3wsmjcjgnqnjsyqrzymjxzydqkkw24ufxqslttwlj3s608f0rx2slc7etw0833zgs7kppqd350d9wur2l6mkanmpsswv4xrc49kaq6ey9sfn3rg3z8afgng8fdg8aqr7sxhftzxfdlwsnfcgw2sy8t5mxa0ytcdfat2nkdwqvpy9nnsa9mzzaqtyu8wy0yul9hk026znqy6pn6xd2fpxva7jjcpmvqugeewk7emufyqsru03er082j6ya0p694k6qu858hl0g9rt7g2y042ppzyhdv4qdv99qqs3scf49sff7vg27zlmx6n3kgywrh3s82rwgpza2s8jmqqx72ah0kurp94rj7dlxq438nnm34w78kq7hwu53chx0aqh824eqcsgmq9j2fyvsqttg4yksstnk7h7ga5as69fhemltg0m9hqnn2yxr0lxv70293l7ryqpjfamk2k4mzgax8txef7zcxdzjn0wg7te2cx98ft9cyhk0hquzypasww8m40kyzyqvahtzamflylsygnny5gwqqqqqqyqqqqq2qq9sqqqqqqqqqqqqqqqqqqp6tqqqqqqq5szxdnxnuk5zpctsclfcs4yx0kcvz8nckast2ueswxftuelh49su6sqs2vlzs8ha4gqs9vppqvk0zhg6m8z2prfxa2cerrmn9k803lwx4wukgzlnmvt5xukyjycyauzqwvm6pxlfpfffgktvj3wkurcwrqcp0537hnkd8pnm7tsa0zcklua9zv338cjuphz38wml6tlr8xgdzxdsqh0ks2pns2zkn3c52crfcfs'}, error: {'code': 203, 'message': 'failed: WIRE_INVALID_ONION_PAYLOAD (reply from remote)', 'id': 1, 'failcode': 16406, 'failcodename': 'WIRE_INVALID_ONION_PAYLOAD', 'bolt12': 'lni1qqgvsykv6pslpmzq73597z0ws2qv5q3qqc3xu3s3rg94nj40zfsy866mhu5vxne6tcej5878k2mneuvgjy8ssqgzpg48getnw30k7enxv4e97amfw3597urjd9mxzar9ta3ksctwdejkcu6ld46kcaredphhqvssacp462c3jt0m5y6wzrj5pp6axehtez7r20265antsrqfpvuu8fwcshgzqjushll8xx9x356tn40gk9mxzkyua9ajtrdpyhm3uaj9nvj0fm9qyqnjp20gp6gr2qsmfas7j086jvkmszqgyys3uht6jq7g4p6vsg7fyyqrx76aulp40m9uxejn57eyczy6v6hqmxr8xx273l480kd5zcl0g9hqp3d9qnsrj40gmeshx0w7fu6j9cfthksz2xv78wxr4ae4wrc3lht8lryc2kxxdpxvs3tpdepm0asuvp0l25fqqvjumjneecjg9etepcu426t2ueu6p3escfrxl9ggnkh5k2vm929tnt26dx66nt67kfy5lgx99py2jhqalaqkyypjeu2artvufgydym4tryv0wvkca78ac64mjeqt70d3wsmjcjgnqnjsyqrzymjxzydqkkw24ufxqslttwlj3s608f0rx2slc7etw0833zgs7kppqd350d9wur2l6mkanmpsswv4xrc49kaq6ey9sfn3rg3z8afgng8fdg8aqr7sxhftzxfdlwsnfcgw2sy8t5mxa0ytcdfat2nkdwqvpy9nnsa9mzzaqtyu8wy0yul9hk026znqy6pn6xd2fpxva7jjcpmvqugeewk7emufyqsru03er082j6ya0p694k6qu858hl0g9rt7g2y042ppzyhdv4qdv99qqs3scf49sff7vg27zlmx6n3kgywrh3s82rwgpza2s8jmqqx72ah0kurp94rj7dlxq438nnm34w78kq7hwu53chx0aqh824eqcsgmq9j2fyvsqttg4yksstnk7h7ga5as69fhemltg0m9hqnn2yxr0lxv70293l7ryqpjfamk2k4mzgax8txef7zcxdzjn0wg7te2cx98ft9cyhk0hquzypasww8m40kyzyqvahtzamflylsygnny5gwqqqqqqyqqqqq2qq9sqqqqqqqqqqqqqqqqqqp6tqqqqqqq5szxdnxnuk5zpctsclfcs4yx0kcvz8nckast2ueswxftuelh49su6sqs2vlzs8ha4gqs9vppqvk0zhg6m8z2prfxa2cerrmn9k803lwx4wukgzlnmvt5xukyjycyauzqwvm6pxlfpfffgktvj3wkurcwrqcp0537hnkd8pnm7tsa0zcklua9zv338cjuphz38wml6tlr8xgdzxdsqh0ks2pns2zkn3c52crfcfs', 'raw_message': '40160a0068', 'created_at': 1724699621, 'destination': '032cf15d1ad9c4a08d26eab1918f732d8ef8fdc6abb9640bf3db174372c491304e', 'payment_hash': 'e170c7d38854867db0c11e78b760b573307192be67f7a961cd4010533e281efd', 'status': 'failed', 'amount_msat': 3, 'amount_sent_msat': 0, 'erring_index': 2, 'erring_node': '035d2b1192dfba134e10e540875d366ebc8bc353d5aa766b80c090b39c3a5d885d'}

contrib/pyln-client/pyln/client/lightning.py:416: RpcError
```

Signed-off-by: Vincenzo Palazzo <vincenzopalazzodev@gmail.com>
2024-11-20 12:29:27 +01:00
Rusty Russell
1160e35669 lightningd: send errors inside blinded paths correctly.
Don't reply with update_fail_malformed_htlc, even though WIRE_INVALID_ONION_BLINDING
has BADONION set.  Fail it with a normal error message.

This fixes a known FIXME.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Fixed: Protocol: entry to blinded paths return more useful errors (e.g if it's the final node, you get a real error, otherwise you get invalid_onion_blinding).
2024-11-20 12:29:27 +01:00
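A sketch of the rule the commit above implements (hypothetical function; the failcode construction follows BOLT 4): failures inside a blinded path are wrapped in a normal `update_fail_htlc`, never `update_fail_malformed_htlc`, and only the final node returns the real error.

```
BADONION, PERM = 0x8000, 0x4000
WIRE_INVALID_ONION_BLINDING = BADONION | PERM | 24

def fail_blinded_htlc(is_final_node, real_error):
    """Sketch: never update_fail_malformed_htlc here, even though the
    failcode has BADONION set; use a normal update_fail_htlc instead."""
    if is_final_node:
        reason = real_error    # the payer gets a useful error
    else:
        # Intermediate hops reveal nothing about the blinded path.
        reason = WIRE_INVALID_ONION_BLINDING.to_bytes(2, 'big')
    return {'msg': 'update_fail_htlc', 'reason': reason}
```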
Rusty Russell
e38f5d8c27 common: provide readable explanation when onion payload is invalid.
I had to use fprintf, which is terrible.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2024-11-20 12:29:27 +01:00
Rusty Russell
9ed7260328 pytest: fix test_pay tests now we've deprecated experimental-offers.
The test was merged after it was deprecated, and autogenerate-rpc-examples isn't
run by CI.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2024-11-20 17:34:24 +10:30
Vincenzo Palazzo
e76a334609 tests: add the test for fetching invoice with metadata
Signed-off-by: Vincenzo Palazzo <vincenzopalazzodev@gmail.com>
2024-11-19 22:54:22 +01:00
Rusty Russell
6c347a4050 pytest: fix flake in test_gossip_pruning
We actually pruned before we got all the channels.  Extend the pruning time,
which unfortunately makes the test slower.

```
2024-11-18T02:13:11.7013278Z node_factory = <pyln.testing.utils.NodeFactory object at 0x7ff72969e820>
2024-11-18T02:13:11.7014386Z bitcoind = <pyln.testing.utils.BitcoinD object at 0x7ff72968fe20>
2024-11-18T02:13:11.7014996Z 
2024-11-18T02:13:11.7015271Z     def test_gossip_pruning(node_factory, bitcoind):
2024-11-18T02:13:11.7016222Z         """ Create channel and see it being updated in time before pruning
2024-11-18T02:13:11.7017037Z         """
2024-11-18T02:13:11.7017871Z         l1, l2, l3 = node_factory.get_nodes(3, opts={'dev-fast-gossip-prune': None,
2024-11-18T02:13:11.7018971Z                                                      'allow_bad_gossip': True})
2024-11-18T02:13:11.7019634Z     
2024-11-18T02:13:11.7020236Z         l1.rpc.connect(l2.info['id'], 'localhost', l2.port)
2024-11-18T02:13:11.7021153Z         l2.rpc.connect(l3.info['id'], 'localhost', l3.port)
2024-11-18T02:13:11.7021806Z     
2024-11-18T02:13:11.7022226Z         scid1, _ = l1.fundchannel(l2, 10**6)
2024-11-18T02:13:11.7022886Z         scid2, _ = l2.fundchannel(l3, 10**6)
2024-11-18T02:13:11.7023458Z     
2024-11-18T02:13:11.7023907Z         mine_funding_to_announce(bitcoind, [l1, l2, l3])
2024-11-18T02:13:11.7025183Z         l1_initial_cupdate_timestamp = only_one(l1.rpc.listchannels(source=l1.info['id'])['channels'])['last_update']
2024-11-18T02:13:11.7026179Z     
2024-11-18T02:13:11.7027358Z         # Get timestamps of initial updates, so we can ensure they change.
2024-11-18T02:13:11.7028171Z         # Channels should be activated locally
2024-11-18T02:13:11.7029326Z >       wait_for(lambda: [c['active'] for c in l1.rpc.listchannels()['channels']] == [True] * 4)
```

We can see in the logs that it actually started pruning already:

```
2024-11-18T02:13:11.9622477Z lightningd-1 2024-11-18T01:52:03.570Z DEBUG   gossipd: Pruning channel 105x1x0 from network view (ages 1731894723 and 0)
```

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2024-11-19 17:51:18 +10:30
Rusty Russell
79d39e0296 pytest: fix flake in test_onionmessage_ratelimit
Sometimes l1 ratelimits before l2, and l2 receives the warning message, not l1:

```
>       assert l1.daemon.is_in_log('WARNING: Ratelimited onion_message: exceeded one per 250msec')
E       AssertionError: assert None
E        +  where None = <bound method TailableProc.is_in_log of <pyln.testing.utils.LightningD object at 0x7f13435f45b0>>('WARNING: Ratelimited onion_message: exceeded one per 250msec')
E        +    where <bound method TailableProc.is_in_log of <pyln.testing.utils.LightningD object at 0x7f13435f45b0>> = <pyln.testing.utils.LightningD object at 0x7f13435f45b0>.is_in_log
E        +      where <pyln.testing.utils.LightningD object at 0x7f13435f45b0> = <fixtures.LightningNode object at 0x7f13435cbb80>.daemon
...
lightningd-1 2024-11-19T00:45:43.721Z DEBUG   022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59-connectd: peer_in WIRE_ONION_MESSAGE
lightningd-1 2024-11-19T00:45:43.721Z DEBUG   022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59-connectd: peer_out WIRE_WARNING
lightningd-2 2024-11-19T00:45:43.722Z DEBUG   0266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c03518-connectd: peer_out WIRE_ONION_MESSAGE
lightningd-2 2024-11-19T00:45:43.722Z DEBUG   connectd: REPLY WIRE_CONNECTD_INJECT_ONIONMSG_REPLY with 0 fds
lightningd-2 2024-11-19T00:45:43.722Z DEBUG   0266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c03518-connectd: peer_in WIRE_WARNING
lightningd-2 2024-11-19T00:45:43.722Z INFO    0266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c03518-connectd: Received WIRE_WARNING: WARNING: Ratelimited onion_message: exceeded one per 250msec
```

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2024-11-19 17:51:18 +10:30
Rusty Russell
c4f8716a55 pytest: fix race in test_autoclean.
We can see the log message about cleanup just after the test ends.

```
>       assert l3.rpc.autoclean_status()['autoclean']['expiredinvoices']['cleaned'] == 1
E       assert 0 == 1

tests/test_plugin.py:3232: AssertionError
```
...
```

lightningd-3 2024-11-18T07:52:55.402Z INFO    lightningd: setconfig: autoclean-cycle 10 (updated /tmp/ltests-ao0p8pem/test_autoclean_1/lightning-3/regtest/config:4)
...
lightningd-3 2024-11-18T07:52:59.747Z DEBUG   022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59-connectd: peer_out WIRE_QUERY_CHANNEL_RANGE
lightningd-3 2024-11-18T07:52:59.747Z DEBUG   022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59-gossipd: reply_channel_range 0+109 (of 0+109) 2 scids
lightningd-3 2024-11-18T07:52:59.747Z DEBUG   gossipd: seeker: state = NORMAL No unannounced nodes
{'github_repository': 'ElementsProject/lightning', 'github_sha': '0729de783e95c5208b1706f7d27b23904596bb71', 'github_ref': 'refs/pull/7835/merge', 'github_ref_name': 'HEAD', 'github_run_id': 11887300979, 'github_head_ref': 'guilt/fix-flakes8', 'github_run_number': 11566, 'github_base_ref': 'master', 'github_run_attempt': '1', 'testname': 'test_autoclean', 'start_time': 1731916359, 'end_time': 1731916385, 'outcome': 'fail'}
--------------------------- Captured stdout teardown ---------------------------
lightningd-3 2024-11-18T07:53:05.503Z DEBUG   plugin-autoclean: cleaned 1 from expiredinvoices
lightningd-3 2024-11-18T07:53:05.503Z DEBUG   plugin-autoclean: setting next timer
```

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2024-11-19 17:51:18 +10:30
Rusty Russell
1a5916cb81 pytest: try to fix flake in test_lightningd_still_loading
I can't reproduce this, but CI did (with Elements):

```
[gw3] linux -- Python 3.8.18 /home/runner/.cache/pypoetry/virtualenvs/cln-meta-project-AqJ9wMix-py3.8/bin/python

node_factory = <pyln.testing.utils.NodeFactory object at 0x7fd0e20f57f0>
bitcoind = <pyln.testing.utils.ElementsD object at 0x7fd0e307dbe0>
executor = <concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd0e307da30>

    @pytest.mark.openchannel('v1')
    @pytest.mark.openchannel('v2')
    def test_lightningd_still_loading(node_factory, bitcoind, executor):
        """Test that we recognize we haven't got all blocks from bitcoind"""
    
        mock_release = Event()
    
        # This is slow enough that we're going to notice.
        def mock_getblock(r):
            conf_file = os.path.join(bitcoind.bitcoin_dir, 'bitcoin.conf')
            brpc = RawProxy(btc_conf_file=conf_file)
            if r['params'][0] == slow_blockid:
                mock_release.wait(TIMEOUT)
            return {
                "result": brpc._call(r['method'], *r['params']),
                "error": None,
                "id": r['id']
            }
    
        # Start it, establish channel, get extra funds.
        l1, l2, l3 = node_factory.get_nodes(3, opts=[{'may_reconnect': True,
                                                      'wait_for_bitcoind_sync': False},
                                                     {'may_reconnect': True,
                                                      'wait_for_bitcoind_sync': False},
                                                     {}])
        node_factory.join_nodes([l1, l2])
    
        # Balance l1<->l2 channel
        l1.pay(l2, 10**9 // 2)
    
        l1.stop()
    
        # Now make sure l2 is behind.
        bitcoind.generate_block(2)
        # Make sure l2/l3 are synced
        sync_blockheight(bitcoind, [l2, l3])
    
        # Make it slow grabbing the final block.
        slow_blockid = bitcoind.rpc.getblockhash(bitcoind.rpc.getblockcount())
        l1.daemon.rpcproxy.mock_rpc('getblock', mock_getblock)
    
        l1.start(wait_for_bitcoind_sync=False)
    
        # It will warn about being out-of-sync.
        assert 'warning_bitcoind_sync' not in l1.rpc.getinfo()
        assert 'warning_lightningd_sync' in l1.rpc.getinfo()
    
        # Make sure it's connected to l2 (otherwise we get TEMPORARY_CHANNEL_FAILURE)
        wait_for(lambda: only_one(l1.rpc.listpeers(l2.info['id'])['peers'])['connected'])
    
        # Payments will succced.
        l1.pay(l2, 1000)
>       assert l1.daemon.is_in_log(r"Sending HTLC while still syncing with bitcoin network \(104 vs 105\)")
E       AssertionError: assert None
E        +  where None = <bound method TailableProc.is_in_log of <pyln.testing.utils.LightningD object at 0x7fd0e20f9fa0>>('Sending HTLC while still syncing with bitcoin network \\(104 vs 105\\)')
E        +    where <bound method TailableProc.is_in_log of <pyln.testing.utils.LightningD object at 0x7fd0e20f9fa0>> = <pyln.testing.utils.LightningD object at 0x7fd0e20f9fa0>.is_in_log
E        +      where <pyln.testing.utils.LightningD object at 0x7fd0e20f9fa0> = <fixtures.LightningNode object at 0x7fd0e20f59d0>.daemon
```

What was in the logs was:

```
lightningd-1 2024-11-18T05:33:50.634Z DEBUG   022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59-chan#1: Sending HTLC while still syncing with bitcoin network (103 vs 105)
```

Implying that l1 was an extra block behind.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2024-11-19 17:51:18 +10:30
Rusty Russell
576d003cf0 pytest: fix flake in test_wumbo_channels
We need to wait for *l2* to see the channel in CHANNELD_NORMAL,
otherwise the array here is empty:

```
	chan = only_one([c for c in l1.rpc.listpeerchannels(l2.info['id'])['channels'] if c['state'] == 'CHANNELD_NORMAL'])
        amount = chan['funding']['local_funds_msat']
        assert amount > Millisatoshi(str((1 << 24) - 1) + "sat")
    
        # We should know we can spend that much!
        spendable = chan['spendable_msat']
        assert spendable > Millisatoshi(str((1 << 24) - 1) + "sat")
    
        # So should peer.
>       chan = only_one([c for c in l2.rpc.listpeerchannels(l1.info['id'])['channels'] if c['state'] == 'CHANNELD_NORMAL'])

tests/test_connection.py:3552: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

arr = []

    def only_one(arr):
        """Many JSON RPC calls return an array; often we only expect a single entry
        """
>       assert len(arr) == 1
E       AssertionError
```
2024-11-19 17:51:18 +10:30
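A sketch of the fix in the pyln-testing idiom used throughout this log: wait until l2 reports the channel in CHANNELD_NORMAL before the `only_one()` call, mirroring what was already done for l1.

```
# Sketch of the fix: don't assume l2 has caught up; wait for it first.
wait_for(lambda: [c['state'] for c in
                  l2.rpc.listpeerchannels(l1.info['id'])['channels']]
                 == ['CHANNELD_NORMAL'])
chan = only_one([c for c in l2.rpc.listpeerchannels(l1.info['id'])['channels']
                 if c['state'] == 'CHANNELD_NORMAL'])
```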
Michael Schmoock
e79a275209 pytest: fix a test that broke because of docstring usage 2024-11-19 11:50:42 +10:30
ShahanaFarooqui
2e52df41dd test: Fixed test plugin source paths for reckless 2024-11-19 09:06:28 +10:30
ShahanaFarooqui
e352a72e5e shellcheck: shellcheck fixes 2024-11-19 09:05:55 +10:30
Rusty Russell
0cc52bc281 pytest: don't set experimental-offers in tests: it's the default now.
And about to be deprecated.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2024-11-18 10:42:54 +01:00