While we have (I hope!) fixed the underlying sync problem, this can still happen with
older gossip. So now we tell it the channel is dead, so it won't happen more than once.
Fixes: #7703
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This can help us backfill any missing gossip if our current
peers haven't been the most reliable.
Changelog-Changed: Gossipd requests a full sync from a random peer every hour.
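For illustration, the behaviour in the changelog line amounts to something like the sketch below (Python pseudocode, not gossipd's actual C implementation; `connected_peers` and `send_gossip_timestamp_filter` are stand-ins, though requesting all timestamps via `gossip_timestamp_filter` is the standard BOLT 7 way to ask a peer for its full gossip store):
```python
import random
import time

def hourly_full_sync(connected_peers, send_gossip_timestamp_filter):
    """Conceptual sketch only: once an hour, ask one random peer for everything."""
    while True:
        time.sleep(3600)
        if connected_peers:
            peer = random.choice(connected_peers)
            # first_timestamp=0 with the maximum range means "send me all gossip".
            send_gossip_timestamp_filter(peer, first_timestamp=0,
                                         timestamp_range=0xFFFFFFFF)
```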
Based on gossip sync data from random network peers, listening to only 5
peers will not reliably catch all gossip traffic. For now, add extra
redundancy.
Allows our peer to change their funding pubkey during a splice.
Changelog-Changed: Support added for peers that wish to rotate their funding pubkey during a splice.
Set the remote funding pubkey on both lightningd and channeld when mutual splice lock is achieved.
This will be needed once rotating funding keys is enabled during splicing.
Changelog-None.
As per the eclair spec proposal:
1) Rename to `funding_txid`.
2) Add `batch_size` to indicate how many `commitment_signed` messages are expected.
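To illustrate the `batch_size` semantics (a sketch based on the splice proposal, not CLN source): each in-flight splice adds another commitment transaction that must be signed, so a signing round becomes a batch of `commitment_signed` messages, one per active funding tx.
```python
def expected_commitment_sigs(num_inflight_splices: int) -> int:
    """Sketch only: each commitment_signed in the batch carries this batch_size."""
    # One commitment tx for the original funding output, plus one per
    # in-flight (unconfirmed) splice candidate.
    return 1 + num_inflight_splices

assert expected_commitment_sigs(0) == 1   # no splice pending: a single commitment_signed
assert expected_commitment_sigs(1) == 2   # one splice pending: a batch of two
```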
Changelog-None
In anticipation of adding support for rotating funding pubkeys during a splice, `channel_txs` is updated to support specifying these manually instead of using the channel’s funding pubkeys.
Changelog-None
Channeld stores its own cache of `inflight` and that needs to have a copy of `remote_funding` as well.
Since copying a secp256k1 pubkey isn't documented and `copy_inflight` isn't used anyway, we drop `copy_inflight`.
Changelog-None
It is possible for prevtx to be larger than the maximum packet size, so for shared outputs (currently only the funding tx) we add support for sending only the `txid` across the wire and filling in the prevtx locally.
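Rough arithmetic for why this matters (ballpark figures, not from the spec text): a single lightning message is capped at 65535 bytes, so a previous transaction with enough inputs cannot be shipped whole, while its 32-byte txid always fits.
```python
MAX_WIRE_MSG = 65535   # maximum size of one lightning message, in bytes
TXID_LEN = 32          # a txid always fits

# A hypothetical funding tx with ~700 legacy-sized inputs (≈148 bytes each)
# plus a couple of outputs already blows past the limit:
approx_prevtx_size = 10 + 700 * 148 + 2 * 34
assert approx_prevtx_size > MAX_WIRE_MSG
assert TXID_LEN < MAX_WIRE_MSG
```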
Changelog-None
Enable storing the remote funding pubkey in the DB if the channel peer decides to change it during splicing. It needs to be in the DB in case of restarts mid-splice.
Changelog-None
Added and updated error messages when splicing to make it clearer to the user why a splice is failing.
Changelog-Changed: Improved error messaging for splice commands.
As we can see from the previous test, l3 tells us why it rejected the payment:
```
lightningd-3 2024-11-19T03:56:27.151Z DEBUG 022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59-chan#1: Failing HTLC because of an invalid payload (TLV 10 pos 104): cltv_expiry 136 > payment_constraint 130
```
It turns out we were not updating the block height in the plugin!
Without this, when we create a (non-dummy) blinded path we set a
too-low CLTV restriction, and it doesn't work after a few blocks!
Note we were actually triggering this error in the xpay tests!
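A worked sketch of the failure mode (the 136 > 130 comparison is from the log above; the heights and delta are made-up numbers purely for illustration):
```python
stale_height = 112      # block height the plugin cached at startup (hypothetical)
current_height = 118    # real chain tip a few blocks later (hypothetical)
cltv_delta = 18         # delta added on top of the height (hypothetical)

# Constraint baked into the blinded path from the *stale* height:
max_cltv_expiry = stale_height + cltv_delta      # 130

# What the payer actually sets, based on the *current* chain tip:
cltv_expiry = current_height + cltv_delta        # 136

# l3 then fails the HTLC: "cltv_expiry 136 > payment_constraint 130"
assert cltv_expiry > max_cltv_expiry
```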
Reported-by: Vincenzo Palazzo
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Fixed: Offers: Receiving bolt12 payments where we have no public channels would fail a few blocks after startup.
Don't reply with update_fail_malformed_htlc, even though WIRE_INVALID_ONION_BLINDING
has BADONION set. Fail it with a normal error message.
This fixes a known FIXME.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Fixed: Protocol: entry to blinded paths returns more useful errors (e.g. if it's the final node, you get a real error, otherwise you get invalid_onion_blinding).
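In other words (an illustrative sketch of the decision described in the changelog line, not the channeld code):
```python
def blinded_path_failure_code(we_are_final_node: bool, real_failure_code: str) -> str:
    """Illustrative only: which failure the sender ends up seeing."""
    if we_are_final_node:
        # The final node can return the real failure; it reveals nothing
        # about the rest of the path.
        return real_failure_code
    # Any earlier hop in the blinded path must not leak where or why it failed.
    return "invalid_onion_blinding"
```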
Payer metadata is a field that controls the payer ID
provided during the fetchinvoice process.
There are use cases where this is highly useful, such as
proving that the payer has paid for the correct item.
Imagine visiting a merchant's website to pay for multiple offers, where
one of these offers is a default offer (with no description and no set amount).
In this scenario, the merchant could claim not to have received
payment for a specific item. Since the same offer may be used to
fetch invoices for different products, there needs to be a way to
identify which product the invoice corresponds to.
With this commit, it will be possible to inject payer metadata,
which helps solve the issue described above.
For example, possible payer metadata could be `to_hex(b"{payer_node_id}.{product_id}.{created_at}")`.
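A minimal usage sketch (assuming a pyln-client `LightningRpc` handle and placeholder identifiers; the raw `call` is used so it works whether or not the Python binding exposes the new keyword directly):
```python
from pyln.client import LightningRpc

rpc = LightningRpc("/path/to/lightning-rpc")   # placeholder socket path

payer_node_id = rpc.getinfo()["id"]
product_id = "sku-1234"                        # hypothetical merchant item id
created_at = 1732000000                        # hypothetical unix timestamp

# Hex-encode "{payer_node_id}.{product_id}.{created_at}" as suggested above.
payer_metadata = f"{payer_node_id}.{product_id}.{created_at}".encode().hex()

invoice = rpc.call("fetchinvoice", {"offer": "lno1...",   # the merchant's offer
                                    "payer_metadata": payer_metadata})
```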
Changelog-Added: JSON-RPC: `fetchinvoice` allows setting `invreq_metadata` via the `payer_metadata` parameter.
Signed-off-by: Vincenzo Palazzo <vincenzopalazzodev@gmail.com>
We actually pruned before we got all the channels. Extend the pruning time,
which unfortunately makes the test slower.
```
2024-11-18T02:13:11.7013278Z node_factory = <pyln.testing.utils.NodeFactory object at 0x7ff72969e820>
2024-11-18T02:13:11.7014386Z bitcoind = <pyln.testing.utils.BitcoinD object at 0x7ff72968fe20>
2024-11-18T02:13:11.7014996Z
2024-11-18T02:13:11.7015271Z     def test_gossip_pruning(node_factory, bitcoind):
2024-11-18T02:13:11.7016222Z         """ Create channel and see it being updated in time before pruning
2024-11-18T02:13:11.7017037Z         """
2024-11-18T02:13:11.7017871Z         l1, l2, l3 = node_factory.get_nodes(3, opts={'dev-fast-gossip-prune': None,
2024-11-18T02:13:11.7018971Z                                                      'allow_bad_gossip': True})
2024-11-18T02:13:11.7019634Z 
2024-11-18T02:13:11.7020236Z         l1.rpc.connect(l2.info['id'], 'localhost', l2.port)
2024-11-18T02:13:11.7021153Z         l2.rpc.connect(l3.info['id'], 'localhost', l3.port)
2024-11-18T02:13:11.7021806Z 
2024-11-18T02:13:11.7022226Z         scid1, _ = l1.fundchannel(l2, 10**6)
2024-11-18T02:13:11.7022886Z         scid2, _ = l2.fundchannel(l3, 10**6)
2024-11-18T02:13:11.7023458Z 
2024-11-18T02:13:11.7023907Z         mine_funding_to_announce(bitcoind, [l1, l2, l3])
2024-11-18T02:13:11.7025183Z         l1_initial_cupdate_timestamp = only_one(l1.rpc.listchannels(source=l1.info['id'])['channels'])['last_update']
2024-11-18T02:13:11.7026179Z 
2024-11-18T02:13:11.7027358Z         # Get timestamps of initial updates, so we can ensure they change.
2024-11-18T02:13:11.7028171Z         # Channels should be activated locally
2024-11-18T02:13:11.7029326Z >       wait_for(lambda: [c['active'] for c in l1.rpc.listchannels()['channels']] == [True] * 4)
```
We can see in the logs that it had already started pruning:
```
2024-11-18T02:13:11.9622477Z lightningd-1 2024-11-18T01:52:03.570Z DEBUG gossipd: Pruning channel 105x1x0 from network view (ages 1731894723 and 0)
```
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Sometimes l1 ratelimits before l2, and l2 receives the warning message, not l1:
```
> assert l1.daemon.is_in_log('WARNING: Ratelimited onion_message: exceeded one per 250msec')
E AssertionError: assert None
E + where None = <bound method TailableProc.is_in_log of <pyln.testing.utils.LightningD object at 0x7f13435f45b0>>('WARNING: Ratelimited onion_message: exceeded one per 250msec')
E + where <bound method TailableProc.is_in_log of <pyln.testing.utils.LightningD object at 0x7f13435f45b0>> = <pyln.testing.utils.LightningD object at 0x7f13435f45b0>.is_in_log
E + where <pyln.testing.utils.LightningD object at 0x7f13435f45b0> = <fixtures.LightningNode object at 0x7f13435cbb80>.daemon
...
lightningd-1 2024-11-19T00:45:43.721Z DEBUG 022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59-connectd: peer_in WIRE_ONION_MESSAGE
lightningd-1 2024-11-19T00:45:43.721Z DEBUG 022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59-connectd: peer_out WIRE_WARNING
lightningd-2 2024-11-19T00:45:43.722Z DEBUG 0266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c03518-connectd: peer_out WIRE_ONION_MESSAGE
lightningd-2 2024-11-19T00:45:43.722Z DEBUG connectd: REPLY WIRE_CONNECTD_INJECT_ONIONMSG_REPLY with 0 fds
lightningd-2 2024-11-19T00:45:43.722Z DEBUG 0266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c03518-connectd: peer_in WIRE_WARNING
lightningd-2 2024-11-19T00:45:43.722Z INFO 0266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c03518-connectd: Received WIRE_WARNING: WARNING: Ratelimited onion_message: exceeded one per 250msec
```
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
I can't reproduce this, but CI did (with Elements):
```
[gw3] linux -- Python 3.8.18 /home/runner/.cache/pypoetry/virtualenvs/cln-meta-project-AqJ9wMix-py3.8/bin/python
node_factory = <pyln.testing.utils.NodeFactory object at 0x7fd0e20f57f0>
bitcoind = <pyln.testing.utils.ElementsD object at 0x7fd0e307dbe0>
executor = <concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd0e307da30>
    @pytest.mark.openchannel('v1')
    @pytest.mark.openchannel('v2')
    def test_lightningd_still_loading(node_factory, bitcoind, executor):
        """Test that we recognize we haven't got all blocks from bitcoind"""
        mock_release = Event()
        # This is slow enough that we're going to notice.
        def mock_getblock(r):
            conf_file = os.path.join(bitcoind.bitcoin_dir, 'bitcoin.conf')
            brpc = RawProxy(btc_conf_file=conf_file)
            if r['params'][0] == slow_blockid:
                mock_release.wait(TIMEOUT)
            return {
                "result": brpc._call(r['method'], *r['params']),
                "error": None,
                "id": r['id']
            }
        # Start it, establish channel, get extra funds.
        l1, l2, l3 = node_factory.get_nodes(3, opts=[{'may_reconnect': True,
                                                      'wait_for_bitcoind_sync': False},
                                                     {'may_reconnect': True,
                                                      'wait_for_bitcoind_sync': False},
                                                     {}])
        node_factory.join_nodes([l1, l2])
        # Balance l1<->l2 channel
        l1.pay(l2, 10**9 // 2)
        l1.stop()
        # Now make sure l2 is behind.
        bitcoind.generate_block(2)
        # Make sure l2/l3 are synced
        sync_blockheight(bitcoind, [l2, l3])
        # Make it slow grabbing the final block.
        slow_blockid = bitcoind.rpc.getblockhash(bitcoind.rpc.getblockcount())
        l1.daemon.rpcproxy.mock_rpc('getblock', mock_getblock)
        l1.start(wait_for_bitcoind_sync=False)
        # It will warn about being out-of-sync.
        assert 'warning_bitcoind_sync' not in l1.rpc.getinfo()
        assert 'warning_lightningd_sync' in l1.rpc.getinfo()
        # Make sure it's connected to l2 (otherwise we get TEMPORARY_CHANNEL_FAILURE)
        wait_for(lambda: only_one(l1.rpc.listpeers(l2.info['id'])['peers'])['connected'])
        # Payments will succced.
        l1.pay(l2, 1000)
>       assert l1.daemon.is_in_log(r"Sending HTLC while still syncing with bitcoin network \(104 vs 105\)")
E       AssertionError: assert None
E        +  where None = <bound method TailableProc.is_in_log of <pyln.testing.utils.LightningD object at 0x7fd0e20f9fa0>>('Sending HTLC while still syncing with bitcoin network \\(104 vs 105\\)')
E        +  where <bound method TailableProc.is_in_log of <pyln.testing.utils.LightningD object at 0x7fd0e20f9fa0>> = <pyln.testing.utils.LightningD object at 0x7fd0e20f9fa0>.is_in_log
E        +  where <pyln.testing.utils.LightningD object at 0x7fd0e20f9fa0> = <fixtures.LightningNode object at 0x7fd0e20f59d0>.daemon
```
What was in the logs was:
```
lightningd-1 2024-11-18T05:33:50.634Z DEBUG 022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59-chan#1: Sending HTLC while still syncing with bitcoin network (103 vs 105)
```
Implying that l1 was an extra block behind.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We need to wait for *l2* to see the channel in CHANNELD_NORMAL,
otherwise the array in the traceback below is empty.
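A minimal sketch of the kind of guard that fixes the race (identifiers as in the quoted test; `wait_for` is a pyln test helper):
```python
# Don't just check l1: make sure l2 also reports the channel as CHANNELD_NORMAL
# before asserting on its listpeerchannels output.
wait_for(lambda: [c['state'] for c in l2.rpc.listpeerchannels(l1.info['id'])['channels']]
         == ['CHANNELD_NORMAL'])
```
The failure this avoids: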
```
        chan = only_one([c for c in l1.rpc.listpeerchannels(l2.info['id'])['channels'] if c['state'] == 'CHANNELD_NORMAL'])
        amount = chan['funding']['local_funds_msat']
        assert amount > Millisatoshi(str((1 << 24) - 1) + "sat")
        # We should know we can spend that much!
        spendable = chan['spendable_msat']
        assert spendable > Millisatoshi(str((1 << 24) - 1) + "sat")
        # So should peer.
>       chan = only_one([c for c in l2.rpc.listpeerchannels(l1.info['id'])['channels'] if c['state'] == 'CHANNELD_NORMAL'])
tests/test_connection.py:3552:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
arr = []
    def only_one(arr):
        """Many JSON RPC calls return an array; often we only expect a single entry
        """
>       assert len(arr) == 1
E       AssertionError
```