This is required for VLS, which wants to know about (and potentially decline) invoices
we're trying to pay.
As a nice side effect, our "check" command for xpay now does much more thorough
checking of arguments.
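A sketch of what that enables (pyln-client style; xpay's invoice parameter name is assumed): a caller like VLS can dry-run the arguments, including full invoice parsing, without attempting payment.

```python
# Hypothetical sketch: validate xpay arguments via the generic "check"
# command; nothing is paid, but bad or missing arguments are rejected.
from pyln.client import LightningRpc

rpc = LightningRpc("/path/to/lightning-rpc")
bolt11 = "lnbc1..."  # the invoice we are considering paying
rpc.call("check", {"command_to_check": "xpay", "invstring": bolt11})
```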
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This is the first user of a persistent layer, so it tripped tests which
assumed the datastore would be empty!
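A sketch of the kind of test fix this needs (pyln-testing style; the key name is illustrative): filter on the keys the test itself wrote rather than asserting the whole datastore is empty.

```python
# The plugin now keeps its own persistent entries, so an empty-datastore
# assertion no longer holds; only check the keys this test created.
entries = l1.rpc.listdatastore()['datastore']
assert [e for e in entries if e['key'][0] == 'test'] == []
```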
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
These are automatically marked "important", in the sense that we won't start up
if they are not working, but this wasn't meant to disallow stopping them.
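For example (the plugin name is illustrative), this now works instead of being rejected:

```python
# Stop a built-in ("important") plugin at runtime; startup still requires
# it to be working, but stopping it afterwards is now allowed.
l1.rpc.call("plugin", {"subcommand": "stop", "plugin": "cln-xpay"})
```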
Changelog-Changed: JSON-RPC: built-in plugins can now be stopped using "plugin stop".
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Note: won't work with grpc (or probably other tools), since the output
is different. But good for testing.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Added: Config: option `xpay-handle-pay` can be used to call xpay when pay is used in many cases (but output is different from pay!)
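A sketch of exercising it from a test (pyln-testing style; the option's value form is assumed):

```python
# With xpay-handle-pay set, a plain "pay" call is serviced by xpay.
# Beware: the result fields differ from classic pay output.
l1, l2 = node_factory.get_nodes(2, opts=[{'xpay-handle-pay': True}, {}])
l1.rpc.connect(l2.info['id'], 'localhost', l2.port)
l1.fundchannel(l2, 10**6)
inv = l2.rpc.invoice(50000, 'test', 'desc')['bolt11']
l1.rpc.pay(inv)   # actually executed by xpay
```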
Because we initialized plugin->io_rpc_conn *after* calling plugin->init,
send_outreq would do a (harmless, in our case) wakeup on an uninitialized address:
```
==1164079== Conditional jump or move depends on uninitialised value(s)
==1164079== at 0x1628FC: backend_wake (poll.c:227)
==1164079== by 0x160B98: io_wake (io.c:384)
==1164079== by 0x1160A8: ld_rpc_send (libplugin.c:255)
==1164079== by 0x1187E0: send_outreq (libplugin.c:1099)
==1164079== by 0x115041: init (xpay.c:1620)
```
Solution is simple: set plugin->io_rpc_conn to NULL, and don't wake it in this case.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
fail->msg can be NULL for local failures (in that case the error message itself
is more informative), so use the generic "something went wrong" message.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Both for HTLC txs and the to-self outputs.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Changed: Wallet: Taproot addresses are used for unilateral-close change addresses.
It only works on BOLT11, and has long been replaced by the more
generic "decode".
Removing it will stop the confusion!
(Note: documentation claims it was introduced in 23.08, but that was
wrong, as it's been in CLN since the beginning).
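Migration is mechanical; a sketch (field names assumed from decode's output):

```python
# decodepay only understands BOLT11; decode handles BOLT11 and BOLT12.
old = l1.rpc.decodepay(bolt11)                     # deprecated
new = l1.rpc.call('decode', {'string': bolt11})    # replacement
assert new['valid'] and new['type'] == 'bolt11 invoice'
```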
[ Fixup from: niftynei <niftynei@gmail.com> ]
Fixes: https://github.com/ElementsProject/lightning/issues/6419
Changelog-Deprecated: JSON-RPC: `decodepay`: use `decode`.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Fails when l3 doesn't know an address for l1, so it can't connect to it:
```
2024-11-16T04:45:42.2243366Z lightningd-3 2024-11-16T04:35:10.582Z DEBUG 022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59-connectd: peer_in WIRE_ONION_MESSAGE
2024-11-16T04:45:42.2244342Z lightningd-3 2024-11-16T04:35:10.582Z DEBUG lightningd: Got onionmsg reply_path
2024-11-16T04:45:42.2245398Z lightningd-3 2024-11-16T04:35:10.582Z DEBUG plugin-offers: Note: disallowing deprecated onion_message_recv.blinding
2024-11-16T04:45:42.2246408Z lightningd-3 2024-11-16T04:35:10.586Z UNUSUAL plugin-offers: No incoming channel for 5msat, so no blinded path
2024-11-16T04:45:42.2247289Z lightningd-3 2024-11-16T04:35:10.605Z DEBUG hsmd: Client: Received message 25 from client
2024-11-16T04:45:42.2248372Z lightningd-3 2024-11-16T04:35:10.606Z DEBUG plugin-offers: connecting directly to 0266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c03518
2024-11-16T04:45:42.2249451Z lightningd-3 2024-11-16T04:35:10.606Z DEBUG gossipd: REPLY WIRE_GOSSIPD_GET_ADDRS_REPLY with 0 fds
2024-11-16T04:45:42.2250743Z lightningd-3 2024-11-16T04:35:10.607Z DEBUG 0266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c03518-connectd: Failed connected out: Unable to connect, no address known for peer
```
This is because the test which was supposed to wait for addresses is
wrong: it passes when l3 knows nothing! (`all([])` == `True`)
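A sketch of a stricter wait (pyln-testing helpers; the pattern is illustrative): also require a non-empty result, so the vacuous `all([])` case can't pass.

```python
# Buggy pattern: all() over an empty list is True, so this "succeeds"
# before l3 has learned anything about l1 at all.
wait_for(lambda: all(n.get('addresses')
                     for n in l3.rpc.listnodes(l1.info['id'])['nodes']))

# Stricter: l1 must actually be listed *with* at least one address.
wait_for(lambda: [n for n in l3.rpc.listnodes(l1.info['id'])['nodes']
                  if n.get('addresses')] != [])
```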
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Couldn't figure out why hsmtool.proc.wait(WAIT_TIMEOUT) returned 1:
hsmtool doesn't ever seem to exit with status 1!
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Looking at the logs (and comparing with a successful run), it seems the connect happens before
the connect_stream is ready, so we miss it:
```
________________________ test_grpc_connect_notification ________________________
[gw7] linux -- Python 3.8.18 /home/runner/.cache/pypoetry/virtualenvs/cln-meta-project-AqJ9wMix-py3.8/bin/python
node_factory = <pyln.testing.utils.NodeFactory object at 0x7fb08bb969d0>
def test_grpc_connect_notification(node_factory):
l1, l2 = node_factory.get_nodes(2)
# Test the connect notification
connect_stream = l1.grpc.SubscribeConnect(clnpb.StreamConnectRequest())
l2.connect(l1)
> for connect_event in connect_stream:
tests/test_cln_rs.py:425:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../../.cache/pypoetry/virtualenvs/cln-meta-project-AqJ9wMix-py3.8/lib/python3.8/site-packages/grpc/_channel.py:543: in __next__
return self._next()
../../../.cache/pypoetry/virtualenvs/cln-meta-project-AqJ9wMix-py3.8/lib/python3.8/site-packages/grpc/_channel.py:960: in _next
_common.wait(self._state.condition.wait, _response_ready)
../../../.cache/pypoetry/virtualenvs/cln-meta-project-AqJ9wMix-py3.8/lib/python3.8/site-packages/grpc/_common.py:156: in wait
_wait_once(wait_fn, MAXIMUM_WAIT_TIMEOUT, spin_cb)
../../../.cache/pypoetry/virtualenvs/cln-meta-project-AqJ9wMix-py3.8/lib/python3.8/site-packages/grpc/_common.py:116: in _wait_once
wait_fn(timeout=timeout)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <Condition(<unlocked _thread.RLock object owner=0 count=0 at 0x7fb089730f00>, 0)>
timeout = 0.1
def wait(self, timeout=None):
"""Wait until notified or until a timeout occurs.
If the calling thread has not acquired the lock when this method is
called, a RuntimeError is raised.
This method releases the underlying lock, and then blocks until it is
awakened by a notify() or notify_all() call for the same condition
variable in another thread, or until the optional timeout occurs. Once
awakened or timed out, it re-acquires the lock and returns.
When the timeout argument is present and not None, it should be a
floating point number specifying a timeout for the operation in seconds
(or fractions thereof).
When the underlying lock is an RLock, it is not released using its
release() method, since this may not actually unlock the lock when it
was acquired multiple times recursively. Instead, an internal interface
of the RLock class is used, which really unlocks it even when it has
been recursively acquired several times. Another internal interface is
then used to restore the recursion level when the lock is reacquired.
"""
if not self._is_owned():
raise RuntimeError("cannot wait on un-acquired lock")
waiter = _allocate_lock()
waiter.acquire()
self._waiters.append(waiter)
saved_state = self._release_save()
gotit = False
try: # restore state no matter what (e.g., KeyboardInterrupt)
if timeout is None:
waiter.acquire()
gotit = True
else:
if timeout > 0:
> gotit = waiter.acquire(True, timeout)
E Failed: Timeout >1200.0s
```
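One way to sidestep the race (a crude sketch, not necessarily the fix taken here): make sure the subscription is attached before triggering the event.

```python
import time

# Hypothetical: give the gRPC server a moment to register the subscriber
# before l2 connects, so the notification isn't emitted with nobody
# listening.  A real fix would synchronize rather than sleep.
connect_stream = l1.grpc.SubscribeConnect(clnpb.StreamConnectRequest())
time.sleep(1)
l2.connect(l1)
for connect_event in connect_stream:
    assert connect_event.id.hex() == l2.info['id']
    break
```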
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Since wait_for_onchaind_tx doesn't actually wait for the call to bitcoind to
return, we have a race in checking if the txid is in the mempool. Fix this
by making wait_for_onchaind_tx actually wait for the response (except for delayed txs!).
```
2024-11-15T07:15:22.0836959Z def test_htlc_in_timeout(node_factory, bitcoind, executor):
2024-11-15T07:15:22.0837722Z """Test that we drop onchain if the peer doesn't accept fulfilled HTLC"""
2024-11-15T07:15:22.0838208Z
2024-11-15T07:15:22.0838585Z # HTLC 1->2, 1 fails after 2 has sent committed the fulfill
2024-11-15T07:15:22.0839137Z disconnects = ['-WIRE_REVOKE_AND_ACK*2']
2024-11-15T07:15:22.0839741Z # Feerates identical so we don't get gratuitous commit to update them
2024-11-15T07:15:22.0840304Z l1 = node_factory.get_node(disconnect=disconnects,
2024-11-15T07:15:22.0840839Z options={'dev-no-reconnect': None},
2024-11-15T07:15:22.0841285Z feerates=(7500, 7500, 7500, 7500))
2024-11-15T07:15:22.0841673Z l2 = node_factory.get_node()
2024-11-15T07:15:22.0842278Z # Give it some sats for anchor spend!
2024-11-15T07:15:22.0842679Z l2.fundwallet(25000, mine_block=False)
2024-11-15T07:15:22.0843013Z
2024-11-15T07:15:22.0843342Z l1.rpc.connect(l2.info['id'], 'localhost', l2.port)
2024-11-15T07:15:22.0843753Z chanid, _ = l1.fundchannel(l2, 10**6)
2024-11-15T07:15:22.0844058Z
2024-11-15T07:15:22.0844291Z sync_blockheight(bitcoind, [l1, l2])
2024-11-15T07:15:22.0844606Z
2024-11-15T07:15:22.0844958Z amt = 200000000
2024-11-15T07:15:22.0845713Z inv = l2.rpc.invoice(amt, 'test_htlc_in_timeout', 'desc')['bolt11']
2024-11-15T07:15:22.0846612Z assert only_one(l2.rpc.listinvoices('test_htlc_in_timeout')['invoices'])['status'] == 'unpaid'
2024-11-15T07:15:22.0847141Z
2024-11-15T07:15:22.0847430Z executor.submit(l1.dev_pay, inv, dev_use_shadow=False)
2024-11-15T07:15:22.0847805Z
2024-11-15T07:15:22.0848041Z # l1 will disconnect and not reconnect.
2024-11-15T07:15:22.0848660Z l1.daemon.wait_for_log('dev_disconnect: -WIRE_REVOKE_AND_ACK')
2024-11-15T07:15:22.0850393Z
2024-11-15T07:15:22.0851297Z # Deadline HTLC expiry minus 1/2 cltv-expiry delta (rounded up) (== cltv - 3). cltv is 5+1.
2024-11-15T07:15:22.0852146Z # shadow route can add extra blocks!
2024-11-15T07:15:22.0852622Z status = only_one(l1.rpc.call('paystatus')['pay'])
2024-11-15T07:15:22.0853044Z if 'shadow' in status:
2024-11-15T07:15:22.0853861Z shadowlen = 6 * status['shadow'].count('Added 6 cltv delay for shadow')
2024-11-15T07:15:22.0854325Z else:
2024-11-15T07:15:22.0854547Z shadowlen = 0
2024-11-15T07:15:22.0854845Z bitcoind.generate_block(2 + shadowlen)
2024-11-15T07:15:22.0855292Z assert not l2.daemon.is_in_log('hit deadline')
2024-11-15T07:15:22.0855669Z bitcoind.generate_block(1)
2024-11-15T07:15:22.0855950Z
2024-11-15T07:15:22.0856406Z l2.daemon.wait_for_log('Fulfilled HTLC 0 SENT_REMOVE_COMMIT cltv .* hit deadline')
2024-11-15T07:15:22.0856997Z l2.daemon.wait_for_log('sendrawtx exit 0')
2024-11-15T07:15:22.0857360Z l2.bitcoin.generate_block(1)
2024-11-15T07:15:22.0857741Z l2.daemon.wait_for_log(' to ONCHAIN')
2024-11-15T07:15:22.0858137Z l1.daemon.wait_for_log(' to ONCHAIN')
2024-11-15T07:15:22.0858644Z
2024-11-15T07:15:22.0859068Z # L2 will collect HTLC (iff no shadow route)
2024-11-15T07:15:22.0859741Z _, txid, blocks = l2.wait_for_onchaind_tx('OUR_HTLC_SUCCESS_TX',
2024-11-15T07:15:22.0860287Z 'OUR_UNILATERAL/THEIR_HTLC')
2024-11-15T07:15:22.0860662Z assert blocks == 0
2024-11-15T07:15:22.0860908Z
2024-11-15T07:15:22.0861262Z # If we try to reuse the same output as we used for the anchor spend, then
2024-11-15T07:15:22.0861951Z # bitcoind can reject it. In that case we'll try again after we get change
2024-11-15T07:15:22.0862433Z # from anchor spend.
2024-11-15T07:15:22.0862768Z if txid not in bitcoind.rpc.getrawmempool():
2024-11-15T07:15:22.0863354Z bitcoind.generate_block(1)
2024-11-15T07:15:22.0863735Z > bitcoind.generate_block(1, wait_for_mempool=1)
2024-11-15T07:15:22.0864019Z
```
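The change is in wait_for_onchaind_tx itself; as an illustration of the ordering it now enforces at this call site (log text taken from the excerpt above, not necessarily the exact code):

```python
# Don't inspect the mempool until lightningd has logged the result of the
# broadcast, so we aren't racing the sendrawtransaction call to bitcoind.
_, txid, blocks = l2.wait_for_onchaind_tx('OUR_HTLC_SUCCESS_TX',
                                          'OUR_UNILATERAL/THEIR_HTLC')
assert blocks == 0
l2.daemon.wait_for_log(r'sendrawtx exit')
if txid not in bitcoind.rpc.getrawmempool():
    bitcoind.generate_block(1)
    bitcoind.generate_block(1, wait_for_mempool=1)
```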
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>