Sometimes l1 ratelimits before l2 does, so it is l2 that receives the warning message, not l1:
```
>       assert l1.daemon.is_in_log('WARNING: Ratelimited onion_message: exceeded one per 250msec')
E       AssertionError: assert None
E        +  where None = <bound method TailableProc.is_in_log of <pyln.testing.utils.LightningD object at 0x7f13435f45b0>>('WARNING: Ratelimited onion_message: exceeded one per 250msec')
E        +  where <bound method TailableProc.is_in_log of <pyln.testing.utils.LightningD object at 0x7f13435f45b0>> = <pyln.testing.utils.LightningD object at 0x7f13435f45b0>.is_in_log
E        +  where <pyln.testing.utils.LightningD object at 0x7f13435f45b0> = <fixtures.LightningNode object at 0x7f13435cbb80>.daemon
...
lightningd-1 2024-11-19T00:45:43.721Z DEBUG 022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59-connectd: peer_in WIRE_ONION_MESSAGE
lightningd-1 2024-11-19T00:45:43.721Z DEBUG 022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59-connectd: peer_out WIRE_WARNING
lightningd-2 2024-11-19T00:45:43.722Z DEBUG 0266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c03518-connectd: peer_out WIRE_ONION_MESSAGE
lightningd-2 2024-11-19T00:45:43.722Z DEBUG connectd: REPLY WIRE_CONNECTD_INJECT_ONIONMSG_REPLY with 0 fds
lightningd-2 2024-11-19T00:45:43.722Z DEBUG 0266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c03518-connectd: peer_in WIRE_WARNING
lightningd-2 2024-11-19T00:45:43.722Z INFO 0266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c03518-connectd: Received WIRE_WARNING: WARNING: Ratelimited onion_message: exceeded one per 250msec
```
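One way to make the test robust against this race (a sketch, not necessarily the actual fix) is to accept the warning in either node's log:
```
# Sketch only: either side may be the one that ratelimits first, so check
# both nodes' logs for the warning.
warning = 'WARNING: Ratelimited onion_message: exceeded one per 250msec'
assert l1.daemon.is_in_log(warning) or l2.daemon.is_in_log(warning)
```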
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
I can't reproduce this, but CI did (with Elements):
```
[gw3] linux -- Python 3.8.18 /home/runner/.cache/pypoetry/virtualenvs/cln-meta-project-AqJ9wMix-py3.8/bin/python

node_factory = <pyln.testing.utils.NodeFactory object at 0x7fd0e20f57f0>
bitcoind = <pyln.testing.utils.ElementsD object at 0x7fd0e307dbe0>
executor = <concurrent.futures.thread.ThreadPoolExecutor object at 0x7fd0e307da30>

    @pytest.mark.openchannel('v1')
    @pytest.mark.openchannel('v2')
    def test_lightningd_still_loading(node_factory, bitcoind, executor):
        """Test that we recognize we haven't got all blocks from bitcoind"""
        mock_release = Event()

        # This is slow enough that we're going to notice.
        def mock_getblock(r):
            conf_file = os.path.join(bitcoind.bitcoin_dir, 'bitcoin.conf')
            brpc = RawProxy(btc_conf_file=conf_file)
            if r['params'][0] == slow_blockid:
                mock_release.wait(TIMEOUT)
            return {
                "result": brpc._call(r['method'], *r['params']),
                "error": None,
                "id": r['id']
            }

        # Start it, establish channel, get extra funds.
        l1, l2, l3 = node_factory.get_nodes(3, opts=[{'may_reconnect': True,
                                                      'wait_for_bitcoind_sync': False},
                                                     {'may_reconnect': True,
                                                      'wait_for_bitcoind_sync': False},
                                                     {}])
        node_factory.join_nodes([l1, l2])

        # Balance l1<->l2 channel
        l1.pay(l2, 10**9 // 2)

        l1.stop()

        # Now make sure l2 is behind.
        bitcoind.generate_block(2)

        # Make sure l2/l3 are synced
        sync_blockheight(bitcoind, [l2, l3])

        # Make it slow grabbing the final block.
        slow_blockid = bitcoind.rpc.getblockhash(bitcoind.rpc.getblockcount())
        l1.daemon.rpcproxy.mock_rpc('getblock', mock_getblock)

        l1.start(wait_for_bitcoind_sync=False)

        # It will warn about being out-of-sync.
        assert 'warning_bitcoind_sync' not in l1.rpc.getinfo()
        assert 'warning_lightningd_sync' in l1.rpc.getinfo()

        # Make sure it's connected to l2 (otherwise we get TEMPORARY_CHANNEL_FAILURE)
        wait_for(lambda: only_one(l1.rpc.listpeers(l2.info['id'])['peers'])['connected'])

        # Payments will succced.
        l1.pay(l2, 1000)
>       assert l1.daemon.is_in_log(r"Sending HTLC while still syncing with bitcoin network \(104 vs 105\)")
E       AssertionError: assert None
E        +  where None = <bound method TailableProc.is_in_log of <pyln.testing.utils.LightningD object at 0x7fd0e20f9fa0>>('Sending HTLC while still syncing with bitcoin network \\(104 vs 105\\)')
E        +  where <bound method TailableProc.is_in_log of <pyln.testing.utils.LightningD object at 0x7fd0e20f9fa0>> = <pyln.testing.utils.LightningD object at 0x7fd0e20f9fa0>.is_in_log
E        +  where <pyln.testing.utils.LightningD object at 0x7fd0e20f9fa0> = <fixtures.LightningNode object at 0x7fd0e20f59d0>.daemon
```
What was in the logs was:
```
lightningd-1 2024-11-18T05:33:50.634Z DEBUG 022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59-chan#1: Sending HTLC while still syncing with bitcoin network (103 vs 105)
```
This implies that l1 was an extra block behind.
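A sketch of one way to make the assertion tolerant of this (assuming the exact starting height is not the point of the test) is to loosen the regex:
```
# Sketch only: accept any starting height, since l1 may be one or more
# blocks further behind than the hardcoded 104.
assert l1.daemon.is_in_log(r"Sending HTLC while still syncing with bitcoin "
                           r"network \(\d+ vs 105\)")
```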
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We need to wait for *l2* to see the channel in CHANNELD_NORMAL,
otherwise the array here is empty:
```
    chan = only_one([c for c in l1.rpc.listpeerchannels(l2.info['id'])['channels'] if c['state'] == 'CHANNELD_NORMAL'])
    amount = chan['funding']['local_funds_msat']
    assert amount > Millisatoshi(str((1 << 24) - 1) + "sat")

    # We should know we can spend that much!
    spendable = chan['spendable_msat']
    assert spendable > Millisatoshi(str((1 << 24) - 1) + "sat")

    # So should peer.
>   chan = only_one([c for c in l2.rpc.listpeerchannels(l1.info['id'])['channels'] if c['state'] == 'CHANNELD_NORMAL'])

tests/test_connection.py:3552:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

arr = []

    def only_one(arr):
        """Many JSON RPC calls return an array; often we only expect a single entry
        """
>       assert len(arr) == 1
E       AssertionError
```
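A sketch of the kind of wait that avoids the empty array, using the `wait_for` and `only_one` helpers these tests already use:
```
# Sketch only: make sure l2 also sees the channel in CHANNELD_NORMAL before
# asserting on it, since l2 can lag l1 here.
wait_for(lambda: [c for c in l2.rpc.listpeerchannels(l1.info['id'])['channels']
                  if c['state'] == 'CHANNELD_NORMAL'] != [])
chan = only_one([c for c in l2.rpc.listpeerchannels(l1.info['id'])['channels']
                 if c['state'] == 'CHANNELD_NORMAL'])
```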
The old `long_description` was deprecated and removed a while ago
without adding a proper replacement for plugin developers.
The `getmanifest` JSON that was supposed to be used for this only
knows `name` and `usage`.
This PR adds an optional `description` parameter that will be filled
with the method's docstring (`__doc__`), if set.
Example:
```
from pyln.client import Plugin

p = Plugin()

@p.method("example")
def some_method(plugin):
    """some description"""
    ...
```
Changelog-Added: optional `description` parameter to Plugin.Method
Commit 531845971c broke the build without SQLite, because this code:
```
new = tal_fmt(NULL, template,
              IF_SQLITE3(sqlite3_libversion_number()));
```
preprocesses into:
```
new = tal_fmt(NULL, template,
              );
```
which has a syntax error. Fix it by moving the comma into the macro
argument.
Fixes: 531845971c
Changelog-None
This is required for VLS, which wants to know about (and potentially decline)
the invoices we're trying to pay.
As a nice side effect, our "check" command for xpay now does much more thorough
checking of arguments.
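A sketch of what that enables (the `invstring` parameter name is an assumption for illustration):
```
# Sketch only: `check` validates xpay's arguments without making a payment.
# The `invstring` parameter name is assumed here for illustration.
l1.rpc.check("xpay", invstring=invoice)
```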
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
As the first user of a persistent layer, this tripped tests which
assumed the datastore would be empty!
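A sketch of the kind of test assumption that broke, using the `listdatastore` RPC:
```
# Sketch only: tests assumed a fresh node's datastore is empty, which no
# longer holds once a persistent layer has been stored there.
assert l1.rpc.listdatastore()['datastore'] == []
```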
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
These are automatically marked "important", in the sense that we won't start up
if they are not working, but this wasn't meant to disallow stopping them.
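For example (a sketch; the built-in plugin name is just an illustration):
```
# Sketch only: built-in plugins can now be stopped like any other plugin.
# "offers" as the plugin name is an assumption for illustration.
l1.rpc.plugin_stop("offers")
```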
Changelog-Changed: JSON-RPC: built-in plugins can now be stopped using "plugin stop".
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Note: this won't work with grpc (or probably other tools), since the output
is different. But it is good for testing.
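A sketch of how this might look in a pyln test (the option name comes from the changelog entry below; the value and usage are assumptions):
```
# Sketch only: start a node with xpay handling `pay`, then call pay as usual.
l1 = node_factory.get_node(options={'xpay-handle-pay': True})
l1.rpc.pay(invoice)  # handled by xpay; the output differs from `pay`!
```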
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Added: Config: option `xpay-handle-pay` can be used to call xpay when pay is used in many cases (but output is different from pay!)
Because we initialized plugin->io_rpc_conn *after* calling plugin->init,
send_outreq would do a (harmless, in our case) wakeup on an uninitialized address:
```
==1164079== Conditional jump or move depends on uninitialised value(s)
==1164079== at 0x1628FC: backend_wake (poll.c:227)
==1164079== by 0x160B98: io_wake (io.c:384)
==1164079== by 0x1160A8: ld_rpc_send (libplugin.c:255)
==1164079== by 0x1187E0: send_outreq (libplugin.c:1099)
==1164079== by 0x115041: init (xpay.c:1620)
```
The solution is simple: set plugin->io_rpc_conn to NULL first, and don't wake it in this case.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
fail->msg can be NULL for local failures (in that case the error message itself
is more informative), so use the generic "something went wrong" message.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This applies both to HTLC txs and to the to-self outputs.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Changed: Wallet: Taproot addresses are used for unilateral-close change addresses.