Both for HTLC txs and the to-self outputs.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Changed: Wallet: Taproot addresses are used for unilateral-close change addresses.
It only works on BOLT11, and has long been replaced by the more
generic "decode".
Removing it will stop the confusion!
(Note: the documentation claims it was introduced in 23.08, but that is
wrong: it has been in CLN since the beginning.)
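For anyone migrating scripts, here is a minimal pyln-client sketch (the socket path and invoice string are placeholders; output fields shown are illustrative):
```python
from pyln.client import LightningRpc

rpc = LightningRpc("/run/lightning/lightning-rpc")  # placeholder path
bolt11 = "lnbc1..."                                 # placeholder invoice

# Deprecated: decodepay only understands BOLT11.
# decoded = rpc.call("decodepay", {"bolt11": bolt11})

# Preferred: decode handles BOLT11 and BOLT12 strings alike.
decoded = rpc.call("decode", {"string": bolt11})
assert decoded["valid"]
print(decoded["type"])  # e.g. "bolt11 invoice"
```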
[ Fixup from: niftynei <niftynei@gmail.com> ]
Fixes: https://github.com/ElementsProject/lightning/issues/6419
Changelog-Deprecated: JSON-RPC: `decodepay`: use `decode`.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This fails when l3 doesn't know an address for l1, so it cannot connect to it:
```
lightningd-3 2024-11-16T04:35:10.582Z DEBUG 022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59-connectd: peer_in WIRE_ONION_MESSAGE
lightningd-3 2024-11-16T04:35:10.582Z DEBUG lightningd: Got onionmsg reply_path
lightningd-3 2024-11-16T04:35:10.582Z DEBUG plugin-offers: Note: disallowing deprecated onion_message_recv.blinding
lightningd-3 2024-11-16T04:35:10.586Z UNUSUAL plugin-offers: No incoming channel for 5msat, so no blinded path
lightningd-3 2024-11-16T04:35:10.605Z DEBUG hsmd: Client: Received message 25 from client
lightningd-3 2024-11-16T04:35:10.606Z DEBUG plugin-offers: connecting directly to 0266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c03518
lightningd-3 2024-11-16T04:35:10.606Z DEBUG gossipd: REPLY WIRE_GOSSIPD_GET_ADDRS_REPLY with 0 fds
lightningd-3 2024-11-16T04:35:10.607Z DEBUG 0266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c03518-connectd: Failed connected out: Unable to connect, no address known for peer
```
This is because the test which was supposed to wait for addresses is
wrong: it passes when l3 knows nothing! (`all([])` == `True`)
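The failure mode in miniature, plus one way to tighten the predicate (a sketch using pyln.testing's `wait_for` and the test's `l1`/`l3` fixtures; the real test's predicate differs):
```python
# all() over an empty iterable is vacuously True, so a wait like this
# "succeeds" immediately even when l3 has learned no nodes at all:
assert all('addresses' in n for n in [])  # passes!

# Sketch of a stricter check: the entry must exist AND have addresses.
wait_for(lambda: [n for n in l3.rpc.listnodes()['nodes']
                  if n['nodeid'] == l1.info['id'] and 'addresses' in n])
```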
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Couldn't figure out why hsmtool.proc.wait(WAIT_TIMEOUT) returns 1:
hsmtool never seems to exit with status 1!
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Looking at the logs (and comparing a successful run), it seems the connect happens before
the connect_stream is ready, so we miss it:
```
________________________ test_grpc_connect_notification ________________________
[gw7] linux -- Python 3.8.18 /home/runner/.cache/pypoetry/virtualenvs/cln-meta-project-AqJ9wMix-py3.8/bin/python
node_factory = <pyln.testing.utils.NodeFactory object at 0x7fb08bb969d0>
    def test_grpc_connect_notification(node_factory):
        l1, l2 = node_factory.get_nodes(2)

        # Test the connect notification
        connect_stream = l1.grpc.SubscribeConnect(clnpb.StreamConnectRequest())
        l2.connect(l1)
>       for connect_event in connect_stream:
tests/test_cln_rs.py:425:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../../.cache/pypoetry/virtualenvs/cln-meta-project-AqJ9wMix-py3.8/lib/python3.8/site-packages/grpc/_channel.py:543: in __next__
    return self._next()
../../../.cache/pypoetry/virtualenvs/cln-meta-project-AqJ9wMix-py3.8/lib/python3.8/site-packages/grpc/_channel.py:960: in _next
    _common.wait(self._state.condition.wait, _response_ready)
../../../.cache/pypoetry/virtualenvs/cln-meta-project-AqJ9wMix-py3.8/lib/python3.8/site-packages/grpc/_common.py:156: in wait
    _wait_once(wait_fn, MAXIMUM_WAIT_TIMEOUT, spin_cb)
../../../.cache/pypoetry/virtualenvs/cln-meta-project-AqJ9wMix-py3.8/lib/python3.8/site-packages/grpc/_common.py:116: in _wait_once
    wait_fn(timeout=timeout)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <Condition(<unlocked _thread.RLock object owner=0 count=0 at 0x7fb089730f00>, 0)>
timeout = 0.1
    def wait(self, timeout=None):
        """Wait until notified or until a timeout occurs.
        If the calling thread has not acquired the lock when this method is
        called, a RuntimeError is raised.
        This method releases the underlying lock, and then blocks until it is
        awakened by a notify() or notify_all() call for the same condition
        variable in another thread, or until the optional timeout occurs. Once
        awakened or timed out, it re-acquires the lock and returns.
        When the timeout argument is present and not None, it should be a
        floating point number specifying a timeout for the operation in seconds
        (or fractions thereof).
        When the underlying lock is an RLock, it is not released using its
        release() method, since this may not actually unlock the lock when it
        was acquired multiple times recursively. Instead, an internal interface
        of the RLock class is used, which really unlocks it even when it has
        been recursively acquired several times. Another internal interface is
        then used to restore the recursion level when the lock is reacquired.
        """
        if not self._is_owned():
            raise RuntimeError("cannot wait on un-acquired lock")
        waiter = _allocate_lock()
        waiter.acquire()
        self._waiters.append(waiter)
        saved_state = self._release_save()
        gotit = False
        try:    # restore state no matter what (e.g., KeyboardInterrupt)
            if timeout is None:
                waiter.acquire()
                gotit = True
            else:
                if timeout > 0:
>                   gotit = waiter.acquire(True, timeout)
E                   Failed: Timeout >1200.0s
```
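One possible shape for a robust version (a sketch, not necessarily the committed fix; the `connect_event.id` field name is assumed from the notification proto):
```python
import time

# Subscribe first, then give the server a moment to actually register
# the subscription before triggering the event it should observe.
connect_stream = l1.grpc.SubscribeConnect(clnpb.StreamConnectRequest())
time.sleep(1)  # crude grace period; a retry loop would be less racy

l2.connect(l1)
for connect_event in connect_stream:
    assert connect_event.id.hex() == l2.info['id']  # field name assumed
    break
```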
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Since wait_for_onchaind_tx doesn't actually wait for the call to bitcoind to
return, we have a race in checking if the txid is in the mempool. Fix this
by making wait_for_onchaind_tx actually wait for the response (except for delayed txs!).
```
    def test_htlc_in_timeout(node_factory, bitcoind, executor):
        """Test that we drop onchain if the peer doesn't accept fulfilled HTLC"""

        # HTLC 1->2, 1 fails after 2 has sent committed the fulfill
        disconnects = ['-WIRE_REVOKE_AND_ACK*2']
        # Feerates identical so we don't get gratuitous commit to update them
        l1 = node_factory.get_node(disconnect=disconnects,
                                   options={'dev-no-reconnect': None},
                                   feerates=(7500, 7500, 7500, 7500))
        l2 = node_factory.get_node()
        # Give it some sats for anchor spend!
        l2.fundwallet(25000, mine_block=False)

        l1.rpc.connect(l2.info['id'], 'localhost', l2.port)
        chanid, _ = l1.fundchannel(l2, 10**6)

        sync_blockheight(bitcoind, [l1, l2])

        amt = 200000000
        inv = l2.rpc.invoice(amt, 'test_htlc_in_timeout', 'desc')['bolt11']
        assert only_one(l2.rpc.listinvoices('test_htlc_in_timeout')['invoices'])['status'] == 'unpaid'

        executor.submit(l1.dev_pay, inv, dev_use_shadow=False)

        # l1 will disconnect and not reconnect.
        l1.daemon.wait_for_log('dev_disconnect: -WIRE_REVOKE_AND_ACK')

        # Deadline HTLC expiry minus 1/2 cltv-expiry delta (rounded up) (== cltv - 3). cltv is 5+1.
        # shadow route can add extra blocks!
        status = only_one(l1.rpc.call('paystatus')['pay'])
        if 'shadow' in status:
            shadowlen = 6 * status['shadow'].count('Added 6 cltv delay for shadow')
        else:
            shadowlen = 0
        bitcoind.generate_block(2 + shadowlen)
        assert not l2.daemon.is_in_log('hit deadline')
        bitcoind.generate_block(1)

        l2.daemon.wait_for_log('Fulfilled HTLC 0 SENT_REMOVE_COMMIT cltv .* hit deadline')
        l2.daemon.wait_for_log('sendrawtx exit 0')
        l2.bitcoin.generate_block(1)
        l2.daemon.wait_for_log(' to ONCHAIN')
        l1.daemon.wait_for_log(' to ONCHAIN')

        # L2 will collect HTLC (iff no shadow route)
        _, txid, blocks = l2.wait_for_onchaind_tx('OUR_HTLC_SUCCESS_TX',
                                                  'OUR_UNILATERAL/THEIR_HTLC')
        assert blocks == 0

        # If we try to reuse the same output as we used for the anchor spend, then
        # bitcoind can reject it. In that case we'll try again after we get change
        # from anchor spend.
        if txid not in bitcoind.rpc.getrawmempool():
            bitcoind.generate_block(1)
>           bitcoind.generate_block(1, wait_for_mempool=1)
```
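The shape of the fix inside the helper, roughly (log patterns and the parsing helper are illustrative, not the real code):
```python
def wait_for_onchaind_tx(self, name, resolve):
    # Wait for onchaind to hand the tx to lightningd...
    line = self.daemon.wait_for_log(
        f'Telling lightningd about {name} to resolve {resolve}')
    tx, txid, blocks = parse_onchaind_line(line)  # hypothetical parser

    # ...and, unless broadcast is deliberately delayed by some blocks,
    # don't return until the sendrawtx call has actually completed.
    if blocks == 0:
        self.daemon.wait_for_log('sendrawtx exit 0')
    return tx, txid, blocks
```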
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Because tests/plugins/channeld_fakenet.o wasn't in ALL_OBJS, it could be compiled before the external headers (e.g. secp256k1.h) were in place. Copy the Makefile pattern!
```
Submodule 'src/secp256k1' (https://github.com/ElementsProject/secp256k1-zkp.git) registered for path 'external/libwally-core/src/secp256k1'
Cloning into '/home/runner/work/lightning/lightning/external/libwally-core/src/secp256k1'...
cc tests/plugins/channeld_fakenet.c
In file included from ./bitcoin/script.h:4,
from tests/plugins/channeld_fakenet.c:14:
./bitcoin/signature.h:6:10: fatal error: secp256k1.h: No such file or directory
6 | #include <secp256k1.h>
| ^~~~~~~~~~~~~
compilation terminated.
make: *** [Makefile:301: tests/plugins/channeld_fakenet.o] Error 1
make: *** Waiting for unfinished jobs....
Submodule path 'external/libwally-core/src/secp256k1': checked out
```
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
There's a known issue with hsm passwords and valgrind:
```
        write_all(master_fd, (password + '\n').encode("utf-8"))
>       l1.daemon.wait_for_log("Server started with public key")

tests/test_plugin.py:4526:
...
        if self.is_in_log(r):
            print("({} was previously in logs!)".format(r))
>       raise TimeoutError('Unable to find "{}" in logs.'.format(exs))
E       TimeoutError: Unable to find "[re.compile('Server started with public key')]" in logs.
```
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Based on the patch by bstin <barry.github@capsmx.com>, which added a separate command,
this simply extends "generatehsm" to allow more options.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Added: hsmtool: generatehsm can run non-interactively, taking options on the cmdline.
The `send_outreq` function is a good place to suspend and resume
traces, since these are usually the places where we hand control
back to the `io_loop`. This assumes that we do not continue doing
heavy lifting after we have queued an `outreq` call, but that is most
likely the case anyway. This frees us from having to track suspensions
whenever we call the RPC from a plugin.
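In miniature, the idea looks like this (a self-contained Python model with hypothetical names; the real change is in the C plugin library around `send_outreq`):
```python
import time

class Span:
    """Toy trace span that accumulates only on-CPU time."""
    def __init__(self):
        self.elapsed = 0.0
        self._start = time.monotonic()

    def suspend(self):
        self.elapsed += time.monotonic() - self._start

    def resume(self):
        self._start = time.monotonic()

def send_outreq(io_loop, span, request, callback):
    # Hand-off point: stop the clock while we sit in the io_loop, and
    # resume the span only when the reply callback fires.
    span.suspend()

    def on_reply(result):
        span.resume()
        return callback(result)

    io_loop.queue(request, on_reply)  # io_loop/queue are stand-ins
```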
Christian noted that if we don't do this, we could flood onchaind with messages:
particularly in Greenlight, where the (remote) HSM may delay indefinitely, so
onchaind doesn't process its messages.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This means it always tells us explicitly whether to keep watching or not,
and we know it has processed it.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This may help the cases we see where gossipd doesn't realize channels
are closed (because we shut down before it processed the closing).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Fixed: `gossipd` will no longer miss some channel closes on restart.
And we hook in the replay watch code.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Fixed: `onchaind` could miss the conclusion of final txs in some cases; it will now replay them independently.
We start by telling onchaind about the funding spend, and anything
which spends it, and it tells us the txids it *doesn't* want to watch
any more. We're going to use a separate set of watches for the replay
case: this implements that code.
Once we're caught up, we convert any remaining watches to normal ones
to follow future blocks.
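Abstractly, the scheme looks like this (a Python model with hypothetical names, not the C implementation):
```python
def replay_onchaind(onchaind, funding_spend, old_blocks, watch_txid):
    """Toy model: replay historical spends, then promote leftovers."""
    watches = {funding_spend.txid}      # replay-only watch set
    onchaind.tell(funding_spend)

    for block in old_blocks:            # historical blocks, in order
        for tx in block.txs:
            if any(i.prev_txid in watches for i in tx.inputs):
                watches.add(tx.txid)
                onchaind.tell(tx)
        # onchaind answers with the txids it no longer wants watched.
        watches -= set(onchaind.done_txids())

    # Caught up: convert the survivors to normal, future-block watches.
    for txid in watches:
        watch_txid(txid)
```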
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>