Commit graph

3378 commits

ShahanaFarooqui
219623c8d7 tests: Reckless test fix for project must contain name
Fix for `The Poetry configuration is invalid:  - project must contain ['name'] properties`
2025-01-09 11:15:05 +01:00
Rusty Russell
98679aa6cf pytest: fix flake in test_restorefrompeer.
Just because we've seen the block doesn't mean onchaind has finished
starting up.

```
 _____________________________ test_restorefrompeer _____________________________
[gw0] linux -- Python 3.10.15 /home/runner/.cache/pypoetry/virtualenvs/cln-meta-project-AqJ9wMix-py3.10/bin/python

node_factory = <pyln.testing.utils.NodeFactory object at 0x7fb8f3887f70>
bitcoind = <pyln.testing.utils.BitcoinD object at 0x7fb8f3886f50>

    @unittest.skipIf(os.getenv('TEST_DB_PROVIDER', 'sqlite3') != 'sqlite3', "deletes database, which is assumed sqlite3")
    def test_restorefrompeer(node_factory, bitcoind):
        """
        Test restorefrompeer
        """
        l1, l2 = node_factory.get_nodes(2, [{'broken_log': 'ERROR: Unknown commitment #.*, recovering our funds!',
                                             'experimental-peer-storage': None,
                                             'may_reconnect': True,
                                             'allow_bad_gossip': True},
                                            {'experimental-peer-storage': None,
                                             'may_reconnect': True}])
    
        l1.rpc.connect(l2.info['id'], 'localhost', l2.port)
    
        c12, _ = l1.fundchannel(l2, 10**5)
        assert l1.daemon.is_in_log('Peer storage sent!')
        assert l2.daemon.is_in_log('Peer storage sent!')
    
        l1.stop()
        os.unlink(os.path.join(l1.daemon.lightning_dir, TEST_NETWORK, "lightningd.sqlite3"))
    
        l1.start()
        assert l1.daemon.is_in_log('Server started with public key')
    
        # If this happens fast enough, connect fails with "disconnected
        # during connection"
        try:
            l1.rpc.connect(l2.info['id'], 'localhost', l2.port)
        except RpcError as err:
            assert "disconnected during connection" in err.error['message']
    
        l1.daemon.wait_for_log('peer_in WIRE_YOUR_PEER_STORAGE')
    
        assert l1.rpc.restorefrompeer()['stubs'][0] == _['channel_id']
    
        l1.daemon.wait_for_log('peer_out WIRE_ERROR')
        l2.daemon.wait_for_log('State changed from CHANNELD_NORMAL to AWAITING_UNILATERAL')
    
        bitcoind.generate_block(5, wait_for_mempool=1)
        sync_blockheight(bitcoind, [l1, l2])
    
        l1.daemon.wait_for_log(r'All outputs resolved.*')
        wait_for(lambda: l1.rpc.listfunds()["channels"][0]["state"] == "ONCHAIN")
    
        # Check if funds are recovered.
        assert l1.rpc.listfunds()["channels"][0]["state"] == "ONCHAIN"
>       assert l2.rpc.listfunds()["channels"][0]["state"] == "ONCHAIN"
E       AssertionError: assert 'FUNDING_SPEND_SEEN' == 'ONCHAIN'
E         - ONCHAIN
E         + FUNDING_SPEND_SEEN

tests/test_misc.py:3044: AssertionError
```
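
A minimal sketch of the shape of the fix (an assumption, using the same `wait_for` helper the test already uses for l1): poll l2's state rather than asserting it immediately, since its onchaind may still be starting up.

```
# Hedged sketch: don't assert l2's final state directly; poll until
# onchaind has caught up.
wait_for(lambda: l2.rpc.listfunds()["channels"][0]["state"] == "ONCHAIN")
```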

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2024-12-20 15:36:07 +10:30
Rusty Russell
9758b05bd0 pytest: fix flake in test_penalty_htlc_tx_fulfill
Make sure the balancing payment is fully cleared before trying to get a route.

```
    def test_penalty_htlc_tx_fulfill(node_factory, bitcoind, chainparams, anchors):

        # now we send one 'sticky' htlc: l4->l1
        amt = 10**8 // 2
        sticky_inv = l1.rpc.invoice(amt, '2', 'sticky')
>       route = l4.rpc.getroute(l1.info['id'], amt, 1)['route']

tests/test_closing.py:1232:

>           raise RpcError(method, payload, resp['error'])
E           pyln.client.lightning.RpcError: RPC call failed: method: getroute, payload: {'id': '0266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c03518', 'amount_msat': 50000000, 'riskfactor': 1, 'cltv': 9}, error: {'code': 205, 'message': 'Could not find a route'}
```
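
One plausible shape of the fix (a sketch, not the verified diff), using pyln-testing's `wait_for` and the `listsendpays` RPC:

```
# Hedged sketch: ensure the balancing payment has fully settled before
# asking for a route.
wait_for(lambda: all(p['status'] == 'complete'
                     for p in l4.rpc.listsendpays()['payments']))
route = l4.rpc.getroute(l1.info['id'], amt, 1)['route']
```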
2024-12-20 15:36:07 +10:30
Rusty Russell
69bfa6f5b1 channeld_fakenet: don't be as brute-force trying to derive keys.
Keep a proper cache of all possible ones.  I think this may be the
timeout problem: according to the logs, channeld_fakenet stops responding
and thus HTLCs eventually time out.

```
2024-12-16T23:16:16.4874420Z lightningd-1 2024-12-16T22:45:14.068Z UNUSUAL 022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59-channeld-chan#1: Adding HTLC 18446744073709551615 too slow: killing connection
```
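
The idea, sketched in Python for illustration (the real change is in C's channeld_fakenet; `derive_key` is a hypothetical stand-in):

```
from functools import lru_cache

# Hedged sketch: derive each key at most once and keep it cached, instead
# of brute-forcing the derivation on every HTLC.
@lru_cache(maxsize=None)
def derive_key(index: int) -> bytes:
    return index.to_bytes(32, 'big')  # placeholder for the real derivation
```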

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2024-12-20 15:36:07 +10:30
Rusty Russell
4b283eb96e pytest: fix flake in test_gossip_throttle
We can get the reply_short_channel_ids_end in the messages when
we make a query:

```
2024-11-29T07:39:28.8550652Z         time_fast = time.time() - start_fast
2024-11-29T07:39:28.8551067Z         assert time_fast < 2
2024-11-29T07:39:28.8551487Z         out3 = [m for m in out3 if not m.startswith(b'0109')]
2024-11-29T07:39:28.8552158Z >       assert set(out1) == set(out3)
...
2024-11-29T07:39:28.8675516Z E         Extra items in the right set:
2024-11-29T07:39:28.8675887Z E         b'010606226e46111a0b59caaf126043eb5bbf28c34f3a5e332a1fc7b2b73cf188910f01'
```
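
A sketch of the corresponding filter (an assumed shape of the fix: type `0106` is `reply_short_channel_ids_end`, mirroring the existing `0109` gossip_timestamp_filter strip in the test):

```
# Hedged sketch: also drop reply_short_channel_ids_end replies before
# comparing, since making a query can elicit them.
out1 = [m for m in out1 if not m.startswith(b'0106')]
out3 = [m for m in out3 if not m.startswith(b'0106')]
assert set(out1) == set(out3)
```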

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2024-12-20 15:36:07 +10:30
Rusty Russell
3a0e3a1591 pytest: fix test in test_gossip_pruning
It's possible that listchannels doesn't show the channel yet:

```
    
        l1.rpc.connect(l2.info['id'], 'localhost', l2.port)
        l2.rpc.connect(l3.info['id'], 'localhost', l3.port)
    
        scid1, _ = l1.fundchannel(l2, 10**6)
        scid2, _ = l2.fundchannel(l3, 10**6)
    
        mine_funding_to_announce(bitcoind, [l1, l2, l3])
>       l1_initial_cupdate_timestamp = only_one(l1.rpc.listchannels(source=l1.info['id'])['channels'])['last_update']

tests/test_gossip.py:43: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

arr = []

    def only_one(arr):
        """Many JSON RPC calls return an array; often we only expect a single entry
        """
>       assert len(arr) == 1
E       AssertionError
```
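
A sketch of the fix, assuming the usual pyln-testing pattern of waiting for the update to appear before reading it:

```
# Hedged sketch: listchannels may not show our channel yet, so poll first.
wait_for(lambda: len(l1.rpc.listchannels(source=l1.info['id'])['channels']) == 1)
l1_initial_cupdate_timestamp = only_one(l1.rpc.listchannels(source=l1.info['id'])['channels'])['last_update']
```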

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2024-12-20 15:36:07 +10:30
Rusty Russell
2c8d9d0deb pytest: actually test xpay/pay return similarity.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2024-12-18 14:04:14 +10:30
Rusty Russell
84f30b12f7 pytest: bonus test to make sure xpay uses zeroconf channels correctly.
It needs to use the channel alias here, and it does.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2024-12-17 15:59:30 +10:30
Rusty Russell
a2f58a28ba lightningd: injectpaymentonion can use scids of unannounced channels.
Cut & paste from the forwarding code, where we don't let onions use the
unannounced scids.  Obviously local commands can use them.
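
The rule, as an illustrative sketch (the real check is C in lightningd; the names here are hypothetical):

```
# Hedged sketch: forwarded onions must not reference unannounced scids,
# but locally injected onions (injectpaymentonion) may.
def scid_allowed(is_announced: bool, is_local_injection: bool) -> bool:
    return is_announced or is_local_injection
```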

Reported-by: @michael1011
Changelog-Fixed: JSON-RPC: xpay now works through unannounced channels.
2024-12-17 15:59:30 +10:30
Rusty Russell
92c45712d2 pytest: test xpay using unannounced channels.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2024-12-17 15:59:30 +10:30
Rusty Russell
d26ea6673d xpay: more accurately reflect pay when xpay-handle-pay is set.
Note that the slight code reorder changes the JSON order, which is generally
undefined, but our doc checker is very strict!

Changelog-Changed: `xpay` now gives the same JSON success return as documented by `pay` when `xpay-handle-pay` is set.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Fixes: https://github.com/ElementsProject/lightning/issues/7923
2024-12-17 15:49:03 +10:30
Rusty Russell
428c76068c xpay: emulate maxfeepercent and exemptfee when xpay-handle-pay used
maxfeepercent is used by Zeus, so let's make that work.

maxfee is more precise, so it's the only xpay option (maxfee was added
to pay later).

[ Fix to ppm logic by Lagrang3, thanks! --RR ]
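
A sketch of the emulation, assuming pay's documented semantics (the fee cap is `maxfeepercent` of the amount, with `exemptfee` as a floor for small payments); the real code is C in the xpay plugin:

```
# Hedged sketch: the maxfee bound that emulates maxfeepercent/exemptfee.
def emulated_maxfee_msat(amount_msat: int, maxfeepercent: float,
                         exemptfee_msat: int) -> int:
    return max(int(amount_msat * maxfeepercent / 100), exemptfee_msat)
```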

Fixes: https://github.com/ElementsProject/lightning/issues/7926
Changelog-Changed: JSON-RPC: With `xpay-handle-pay` set, xpay will now be used even if `pay` uses maxfeepercent or exemptfee parameters (e.g. Zeus).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2024-12-17 10:54:31 +10:30
Rusty Russell
cf22762c8f xpay: tell injectpaymentonion what the amount being delivered to destination is.
This means that it gets shown in listsendpays: omitting this broke spark, apparently!

Changelog-Changed: `xpay` now populates more fields, so `listsendpays` and `listpays` show `destination` and `amount_msat` fields for xpay payments.
Fixes: https://github.com/ElementsProject/lightning/issues/7881
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2024-12-17 08:14:45 +10:30
Rusty Russell
8202929a00 lightningd: populate listsendpays destination from injectpaymentonion if we can.
If they give us the invstring, we can at least set who signed the invoice.  Of course,
it might not be a real node_id (with blinded paths).

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2024-12-17 08:14:45 +10:30
Rusty Russell
80c43ec97d injectpaymentonion: allow specification of the actual amount which reaches the destination.
This appears in listsendpays / listpays, and is useful information (if we know!).

This doesn't fix old payments, but means that xpay can use this for new payments.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2024-12-17 08:14:45 +10:30
Rusty Russell
f0c5ea2e1e doc: document and test the onionmessage_forward_fail notification.
Doing exactly what we expect to do: initiate a connection and then
forward the message.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2024-12-05 17:38:16 +10:30
Rusty Russell
d196b9bb53 doc: document (and test) the injectonionmessage API.
It's actually tested by fetchinvoice, but doing an explicit test in Python
allows for schema checking!

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Changed: JSON-RPC: `injectonionmessage` API simplified and documented.
2024-12-05 17:38:16 +10:30
Rusty Russell
b520543867 gossipd: log at trace, not debug for regular messages.
See: https://github.com/ElementsProject/lightning/issues/7899

A node with 23 connections gets far too many debug messages.

Changelog-Fixed: `gossipd` now does logging at trace, not debug level.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2024-12-05 11:43:50 +10:30
Rusty Russell
113156858b xpay: don't exceed maxfee *overall*.
We were handing "maxfee" to every getroutes call, even if we had already
used some of the fees.
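
The shape of the fix, sketched illustratively (the real code is in the xpay plugin):

```
# Hedged sketch: budget the *remaining* fee for the next getroutes call,
# rather than re-granting the full maxfee on every attempt.
def remaining_maxfee_msat(maxfee_msat: int, fees_committed_msat: int) -> int:
    assert fees_committed_msat <= maxfee_msat
    return maxfee_msat - fees_committed_msat
```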

Reported-by: @daywalker90
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-None: xpay is new this release.
2024-12-02 14:31:11 +10:30
Rusty Russell
d0b470618e pytest: test for maxfee compliance.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2024-12-02 14:31:11 +10:30
Rusty Russell
b8e5b122d2 decode: don't fail to decode just because a bolt12 invoice has expired.
In fact, there are several places where we try to decode old invoices,
and they should all work.  The only place we should enforce expiration is
when we're going to pay.

This also revealed that xpay wasn't checking bolt11 expiries!
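
The policy, as an illustrative sketch (the field names only approximate `decode` output; this is not the actual C code):

```
import time

# Hedged sketch: decode always succeeds; only the pay path rejects an
# expired invoice.
def check_still_payable(created_at: int, relative_expiry: int) -> None:
    if time.time() > created_at + relative_expiry:
        raise ValueError("invoice expired")
```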

Reported-by: hMsats
Fixes: https://github.com/ElementsProject/lightning/issues/7869
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Fixed: JSON-RPC: `decode` refused to decode expired bolt12 invoices.
2024-11-30 13:17:55 +01:00
Rusty Russell
14cb0574f7 pytest: test (fails) for decoding expired bolt12 invoices.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2024-11-30 13:17:55 +01:00
ShahanaFarooqui
a3a33fe3be doc: Add GENERATE_EXAMPLES env
- Run with environment variable `GENERATE_EXAMPLES`
- Update cln version in getinfo example on `make update-versions`
- Added two `dev` configs, dev-no-plugin-checksum and dev-no-version-checks, to match CI listconfigs
- Changed commando rpc example from `getinfo` to `newaddr` to avoid unnecessary file updates for future builds
- Stabilized `bkpr-editdescriptionbyoutpoint`, `listclosedchannels` and `listaddresses` examples
2024-11-28 15:56:16 +10:30
ShahanaFarooqui
e568d69867 doc: Lock askrene example values 2024-11-26 21:45:19 +10:30
ShahanaFarooqui
9592facf83 doc: Lock example values
Changelog-Added: Test script generates all RPC documentation examples now.
2024-11-26 21:45:19 +10:30
Rusty Russell
20257c3308 lightningd: --dev-low-prio-anchor-blocks and test for low-priority anchors.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2024-11-26 14:49:36 +10:30
Rusty Russell
de30f9c4b2 anchors: create low priority anchor to spend commit tx within a week.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Changed: Protocol: we now create a low-priority (2016 down to 12 blocks fee target) anchor for low-fee unilateral closes even if there's no urgency.
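
The fee-target schedule implied by the changelog line, as an illustrative sketch:

```
# Hedged sketch: relaxed week-long target (2016 blocks) that tightens to
# 12 blocks as the deadline approaches.
def anchor_fee_target_blocks(blocks_until_deadline: int) -> int:
    return max(12, min(2016, blocks_until_deadline))
```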
2024-11-26 14:49:36 +10:30
niftynei
46fde419b1 pytest: fix up coin_move tests now that anchors don't get redundantly spent. 2024-11-25 20:23:21 +10:30
Rusty Russell
5701123209 pytest: fix flake in test_gossip_force_broadcast_channel_msgs
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2024-11-25 15:39:13 +10:30
Rusty Russell
7cdf45bb00 pytest: fix flake in test_ping_timeout
The seeker can send a full gossip query, which means the ping doesn't happen
(it needs 14-45 seconds of quiet!).

We disable the gossip_queries feature, so it doesn't ask.

```
    def test_ping_timeout(node_factory):
        # Disconnects after this, but doesn't know it.
        l1_disconnects = ['xWIRE_PING']
    
        l1, l2 = node_factory.get_nodes(2, opts=[{'dev-no-reconnect': None,
                                                  'disconnect': l1_disconnects},
                                                 {'dev-no-ping-timer': None}])
        l1.rpc.connect(l2.info['id'], 'localhost', l2.port)
    
        # This can take 10 seconds (dev-fast-gossip means timer fires every 5 seconds)
        l1.daemon.wait_for_log('seeker: startup peer finished', timeout=15)
        # Ping timers runs at 15-45 seconds, *but* only fires if also 60 seconds
        # after previous traffic.
>       l1.daemon.wait_for_log('dev_disconnect: xWIRE_PING', timeout=60 + 45 + 5)

tests/test_connection.py:4194: 
...
>                   raise TimeoutError('Unable to find "{}" in logs.'.format(exs))
E                   TimeoutError: Unable to find "[re.compile('dev_disconnect: xWIRE_PING')]" in logs.
```

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2024-11-25 15:39:13 +10:30
Rusty Russell
faf7ae6ad4 pytest: add test for connection ratelimiting.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2024-11-25 15:39:13 +10:30
Rusty Russell
73b9812178 pytest: restore test_sendpay_grouping test.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2024-11-25 15:39:13 +10:30
Rusty Russell
15950bb7d4 connectd: reconnect for non-transient connections.
Rather than have lightningd call us repeatedly to try to connect, have
it tell us what peers are transient and aren't, and connectd will
automatically try to maintain that connection.

There's a new "downgrade_peer" message to tell it a peer is now
transient: to make it non-transient we simply tell connectd to
connect as a non-transient.
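
An illustrative state sketch of the new arrangement (the real messages are C wire messages; everything beyond the `downgrade_peer` name is hypothetical):

```
# Hedged sketch: connectd maintains connections to non-transient peers;
# downgrade_peer marks a peer transient again, stopping auto-reconnect.
important_peers: set = set()

def connect(peer_id: str, transient: bool) -> None:
    if not transient:
        important_peers.add(peer_id)

def downgrade_peer(peer_id: str) -> None:
    important_peers.discard(peer_id)
```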

The first time, I missed that dual_open_control does its own state
transitions :(

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Changed: `connectd` now handles maintaining/reconnecting to important peers, and we remember the last successful address we connected to.
2024-11-25 15:39:13 +10:30
Rusty Russell
23dc10cf81 connectd: get our own addresses to contact node from node_announcements.
Let lightningd feed us hints to try first, but we can extract the
addresses from node_announcement messages ourselves.

(Lightningd used to ask gossipd on our behalf: this is far simpler!)

One side effect of this is that we don't hand back address hints given to us
by lightningd: it would use these again for reconnecting. This breaks
test_sendpay_grouping, so we disable it temporarily.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2024-11-25 15:39:13 +10:30
Alex Myers
11580dfd43 pyln-testing: disable seeker autoconnect by default
This avoids test flakes, but can be explicitly set if needed.

Changelog-None
2024-11-24 12:03:16 +10:30
Rusty Russell
dba9746d21 pytest: fix flake in test_gossip_pruning.
If the first one doesn't use the entire timeout, the second might need longer
(I used TIMEOUT=10 normally):

```
FAILED tests/test_gossip.py::test_gossip_pruning - TimeoutError: Unable to find "[re.compile('Pruning channel 103x1x0 from network view')]" in logs.
```
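
One plausible shape of the fix (an assumption, not the verified diff): give the second wait its own full window rather than inheriting whatever the first one left over.

```
# Hedged sketch: the second prune can need the whole timeout again.
l1.daemon.wait_for_log(r'Pruning channel 103x1x0 from network view',
                       timeout=TIMEOUT * 2)
```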

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2024-11-23 10:20:30 +10:30
Alex Myers
363b721cd3 gossipd: use autoconnect-seeker-peers setting 2024-11-22 15:21:45 +10:30
Alex Myers
f2243e6013 pytest: Add seeker autoconnect test 2024-11-22 15:21:45 +10:30
Rusty Russell
8566370087 pytest: fix flake in test_gossip_force_broadcast_channel_msgs
We can get more gossip_filter messages now.  And we can also go over max-messages,
so increase that too.

```
        del tally['query_short_channel_ids']
        del tally['query_channel_range']
        del tally['ping']
>       assert tally == {'channel_announce': 1,
                         'channel_update': 3,
                         'node_announce': 1,
                         'gossip_filter': 1}
E       AssertionError: assert {'channel_ann..._announce': 1} == {'channel_ann..._announce': 1}
E         Omitting 2 identical items, use -vv to show
E         Differing items:
E         {'gossip_filter': 2} != {'gossip_filter': 1}
E         {'channel_update': 2} != {'channel_update': 3}
E         Full diff:
E           {
E            'channel_announce': 1,...
E         
E         ...Full output truncated (10 lines hidden), use '-vv' to show

tests/test_gossip.py:2326: AssertionError
```
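
A sketch of the relaxed assertion (assumed shape; the commit only says more `gossip_filter` messages are now possible):

```
# Hedged sketch: tolerate repeated gossip_filter messages from periodic
# seeker re-syncs instead of expecting exactly one.
assert tally.pop('gossip_filter') >= 1
```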

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2024-11-22 14:01:44 +10:30
Rusty Russell
a295099ace pytest: fix flake in test_onchaind_replay.
We actually mine *300* blocks, not 200, and if timing is right l1
can have mined the txid before mine_txid_or_rbf() checks the mempool:

```
    def test_onchaind_replay(node_factory, bitcoind):
        disconnects = ['+WIRE_REVOKE_AND_ACK', 'permfail']
        # Feerates identical so we don't get gratuitous commit to update them
        l1, l2 = node_factory.line_graph(2, opts=[{'watchtime-blocks': 201, 'cltv-delta': 101,
                                                   'disconnect': disconnects,
                                                   'feerates': (7500, 7500, 7500, 7500)},
                                                  {'watchtime-blocks': 201, 'cltv-delta': 101}],
                                         wait_for_announce=True)
    
        inv = l2.rpc.invoice(10**8, 'onchaind_replay', 'desc')
        rhash = inv['payment_hash']
        routestep = {
            'amount_msat': 10**8 - 1,
            'id': l2.info['id'],
            'delay': 101,
            'channel': first_scid(l1, l2)
        }
        l1.rpc.sendpay([routestep], rhash, payment_secret=inv['payment_secret'])
        l1.daemon.wait_for_log('sendrawtx exit 0')
        bitcoind.generate_block(1, wait_for_mempool=1)
    
        # Wait for nodes to notice the failure, this seach needle is after the
        # DB commit so we're sure the tx entries in onchaindtxs have been added
        l1.daemon.wait_for_log("Deleting channel .* due to the funding outpoint being spent")
        l2.daemon.wait_for_log("Deleting channel .* due to the funding outpoint being spent")
    
        # We should at least have the init tx now
        assert len(l1.db_query("SELECT * FROM channeltxs;")) > 0
        assert len(l2.db_query("SELECT * FROM channeltxs;")) > 0
    
        # Generate some blocks so we restart the onchaind from DB (we rescan
        # last_height - 100)
        bitcoind.generate_block(100)
        sync_blockheight(bitcoind, [l1, l2])
    
        # l1 should still have a running onchaind
        assert len(l1.db_query("SELECT * FROM channeltxs;")) > 0
    
        l2.rpc.stop()
        l1.restart()
    
        # Can't wait for it, it's after the "Server started" wait in restart()
        assert l1.daemon.is_in_log(r'Restarting onchaind \(ONCHAIN\): closed in block 109')
    
        # l1 should still notice that the funding was spent and that we should react to it
        _, txid, blocks = l1.wait_for_onchaind_tx('OUR_DELAYED_RETURN_TO_WALLET',
                                                  'OUR_UNILATERAL/DELAYED_OUTPUT_TO_US')
        assert blocks == 200
        bitcoind.generate_block(200)
        # Could be RBF!
>       l1.mine_txid_or_rbf(txid)

tests/test_closing.py:1864: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
contrib/pyln-testing/pyln/testing/utils.py:1375: in mine_txid_or_rbf
    wait_for(lambda: rbf_or_txid_broadcast(txids))
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

success = <function LightningNode.mine_txid_or_rbf.<locals>.<lambda> at 0x7f9b129c4550>
timeout = 180

    def wait_for(success, timeout=TIMEOUT):
        start_time = time.time()
        interval = 0.25
        while not success():
            time_left = start_time + timeout - time.time()
            if time_left <= 0:
>               raise ValueError("Timeout while waiting for {}".format(success))
E               ValueError: Timeout while waiting for <function LightningNode.mine_txid_or_rbf.<locals>.<lambda> at 0x7f9b129c4550>
```

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2024-11-22 14:01:44 +10:30
Jesse de Wit
a90d9c9f4f tests: add pay test over unannounced channels
This test fails with cln v24.08.2. Add this test so the regression doesn't
happen again.

Changelog-None
2024-11-21 11:22:26 +01:00
Rusty Russell
2c9023ee25 pytest: reenable askrene bias test.
We can fix the median calc by removing the (unused) reverse edges.

Also analyze the failure case in test_real_data: it's a real edge case, so
hardcode that one as "ok".

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2024-11-21 16:17:52 +10:30
Lagrang3
05514b46e3 Askrene: change median factor to 1.
The ratio of the medians of the fee and probability costs is, overall, not
a bad factor for combining these two features. This is what
test_real_data shows.

Changelog-None

Signed-off-by: Lagrang3 <lagrang3@protonmail.com>
2024-11-21 16:17:52 +10:30
Lagrang3
2b3fd67dfb askrene: don't skip fee_fallback test
The fee_fallback test would fail after fixing the computation of the
median. Now we can restore it by making the probability cost factor
1000x higher than the ratio of the median. This shows how hard it is to
combine fee and probability costs, and why the current approach is so
fragile.

Changelog-None

Signed-off-by: Lagrang3 <lagrang3@protonmail.com>
2024-11-21 16:17:52 +10:30
Lagrang3
4dc1a44cd9 askrene: fix the median
The calculation of the median values of probability and fee cost in the
linear approximation had a bug by counting on non-existing arcs.
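
The bug, sketched for illustration (the data layout here is hypothetical):

```
import statistics

# Hedged sketch: take the median over arcs that actually exist, not the
# unused reverse arcs.
def median_cost(costs, arc_exists):
    return statistics.median(c for c, e in zip(costs, arc_exists) if e)
```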

Changelog-None: askrene: fix the median

Signed-off-by: Lagrang3 <lagrang3@protonmail.com>
2024-11-21 16:17:52 +10:30
Alex Myers
ead5dbf6a2 pytest: allow additional gossip filters
in test_gossip_force_broadcast_channel_msgs now that the seeker
is asking for periodic full gossip syncs.
2024-11-21 14:23:57 +10:30
Dusty Daemon
d04e64670d splice: tx_abort no longer reestablishes
As per the eclair implementation, we skip `channel_reestablish` and go straight into the channel for `tx_abort` events.

Changelog-None
2024-11-21 14:15:36 +10:30
Dusty Daemon
6d63e68e99 splice: Update messages to spec
Changelog-Changed: Splicing moved from test numbers to spec numbers.
2024-11-21 14:15:36 +10:30
Dusty Daemon
2b3cb8b8a8 splice: Update splice signature msg type
Update to use spec signature type.

Changelog-None
2024-11-21 14:15:36 +10:30
Dusty Daemon
d077fd59c9 splice: Remove blockhash from peer msg
This is no longer used.

Changelog-None
2024-11-21 14:15:36 +10:30