We close the channel because the payment fulfilment is not totally done, then
generate 6 blocks, causing the HTLC to hit its CLTV deadline.
```
# send some payments, mine a block or two
inv = l2.rpc.invoice(10**4, '1', 'no_1')
l1.rpc.pay(inv['bolt11'])
# l2 attempts to close a channel that it leased, should fail
with pytest.raises(RpcError, match=r'Peer leased this channel from us'):
l2.rpc.close(l1.get_channel_scid(l2))
bitcoind.generate_block(6)
sync_blockheight(bitcoind, [l1, l2])
# make sure we're at the right place for the csv lock
> l2.daemon.wait_for_log('Blockheight: SENT_ADD_ACK_COMMIT->RCVD_ADD_ACK_REVOCATION LOCAL now 115')
tests/test_closing.py:823:
...
lightningd-2 2022-07-17T13:15:34.242Z DEBUG lightningd: Adding block 115: 39d95061935e9fc42b04c86ae60d0cf157765aff4c040f3a8d0b7888db19e015
lightningd-2 2022-07-17T13:15:34.244Z UNUSUAL 0266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c03518-chan#2: Peer permanent failure in CHANNELD_NORMAL: Fulfilled HTLC 0 SENT_REMOVE_COMMIT cltv 115 hit deadline
```
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Dualopend is not listening to the peer fd when it hangs up, so it doesn't
notice the peer is gone. We don't clean up the channel until it's done (usually
a good thing: it could be about to lock it in), but that harms us
here.
Fix the test failure and add a comment.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
The only places which should call try_reconnect now are the "connect"
command, and the disconnect path when it decides there's still an
active channel.
This introduces one subtlety: if we disconnect when there's no active
channel, but then the subd makes one, we have to catch that case!
This temporarily reverts "slow" reconnections to fast ones: see next
patch.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Connectd already does this when we *receive* an error or warning, but
now we do it on send too. This causes a slight behaviour change: we don't
disconnect when we close a channel, for example (our behaviour here
has been inconsistent across versions, depending on the code).
When connectd is told to disconnect, it now does so immediately, and
doesn't wait for subds to drain etc. That simplifies the manual
disconnect case, which now cleans up as it would from any other
disconnection when connectd says it's disconnected.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
In various places, we assumed that when `connected` is false,
everything is finished. This is not true: we should wait for the
state we expect.
In addition, various places allowed reconnections, which interfered
with the logic; suppress them.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Disconnecting a peer after openingd fails is not instantaneous:
we abort the open, so openingd sends out a WIRE_ERROR which makes
connectd close the connection.
As a result, this test fails often. The simplest fix is to wait a
second in multifundchannel before retrying, which is also robust
against behaviour changes if we decide *not* to disconnect in future.
Also make sure that addrhint ownership is correct, since getting it
wrong can lead to a use-after-free when we filter dests.
```
tests/test_connection.py::test_multifunding_best_effort FAILED [100%]
======================================================= FAILURES ========================================================
_____________________________________________ test_multifunding_best_effort _____________________________________________
node_factory = <pyln.testing.utils.NodeFactory object at 0x7f6c0c95c1c0>
bitcoind = <pyln.testing.utils.BitcoinD object at 0x7f6c0c92a880>
    @pytest.mark.openchannel('v1')
    @pytest.mark.developer("disconnect=... needs DEVELOPER=1")
    def test_multifunding_best_effort(node_factory, bitcoind):
        '''
        Check that best_effort flag works.
        '''
        disconnects = ["-WIRE_INIT",
                       "-WIRE_ACCEPT_CHANNEL",
                       "-WIRE_FUNDING_SIGNED"]
        l1 = node_factory.get_node()
        l2 = node_factory.get_node()
        l3 = node_factory.get_node(disconnect=disconnects)
        l4 = node_factory.get_node()
        l1.fundwallet(2000000)
        destinations = [{"id": '{}@localhost:{}'.format(l2.info['id'], l2.port),
                         "amount": 50000},
                        {"id": '{}@localhost:{}'.format(l3.info['id'], l3.port),
                         "amount": 50000},
                        {"id": '{}@localhost:{}'.format(l4.info['id'], l4.port),
                         "amount": 50000}]
        for i, d in enumerate(disconnects):
            # Should succeed due to best-effort flag.
>           l1.rpc.multifundchannel(destinations, minchannels=2)
tests/test_connection.py:2070:
...
> raise RpcError(method, payload, resp['error'])
E pyln.client.lightning.RpcError: RPC call failed: method: multifundchannel, payload: {'destinations': [{'id': '022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59@localhost:41023', 'amount': 50000}, {'id': '035d2b1192dfba134e10e540875d366ebc8bc353d5aa766b80c090b39c3a5d885d@localhost:41977', 'amount': 50000}, {'id': '0382ce59ebf18be7d84677c2e35f23294b9992ceca95491fcf8a56c6cb2d9de199@localhost:34943', 'amount': 50000}], 'minchannels': 2}, error: {'code': 305, 'message': 'Peer not connected at start', 'data': {'id': '0382ce59ebf18be7d84677c2e35f23294b9992ceca95491fcf8a56c6cb2d9de199', 'method': 'fundchannel_start'}}
```
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We want to avoid lost messages in the common cases.
This generalizes our drain code: each subd gets 5 seconds to close
itself, but we continue to allow it to send us traffic (if the
peer is still connected) and continue to send it traffic.
We continue to send traffic *out* to the peer (if it's still
connected) until all subds are gone. We still have a 5-second timer
to close the connection to the peer.
On reconnect, we skip this drain period: we kill immediately.
We fix up one test which was looking for the "disconnect" message
explicitly.
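A minimal sketch of the drain period described above (illustrative names only, using plain time() rather than connectd's actual timer machinery):
```c
/* Illustrative sketch, not the connectd code: each subd gets a deadline;
 * we keep the peer connection open and keep relaying traffic until all
 * subds are gone or their deadlines pass. */
#include <stdbool.h>
#include <stddef.h>
#include <time.h>

#define SUBD_DRAIN_SECONDS 5

struct draining_subd {
	time_t deadline;	/* when we give up and close it anyway */
	bool gone;		/* subd closed itself */
};

/* Start the 5-second grace period for one subd. */
static void subd_start_drain(struct draining_subd *subd)
{
	subd->deadline = time(NULL) + SUBD_DRAIN_SECONDS;
	subd->gone = false;
}

/* Keep the peer connection open (and keep relaying traffic both ways)
 * while any subd is still draining and within its deadline. */
static bool still_draining(const struct draining_subd *subds, size_t num)
{
	time_t now = time(NULL);

	for (size_t i = 0; i < num; i++) {
		if (!subds[i].gone && now < subds[i].deadline)
			return true;
	}
	return false;
}
```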
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Give them time to process any final messages! If there's a reconnect,
then we need to clean them up immediately of course.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
A subtle case I hadn't come across before: if a child calls tal_resize()
on its parent while the parent is being deleted, tal gets confused.
The subd destructor does this using tal_arr_remove() on peer->subds,
which is currently being freed:
```
==61056== Invalid read of size 8
==61056== at 0x185632: del_tree (tal.c:417)
==61056== by 0x18560D: del_tree (tal.c:412)
==61056== by 0x185957: tal_free (tal.c:486)
==61056== by 0x1183BC: peer_discard (connectd.c:1861)
==61056== by 0x11869E: recv_req (connectd.c:1942)
==61056== by 0x12774B: handle_read (daemon_conn.c:35)
==61056== by 0x173453: next_plan (io.c:59)
==61056== by 0x17405B: do_plan (io.c:407)
==61056== by 0x17409D: io_ready (io.c:417)
==61056== by 0x176390: io_loop (poll.c:453)
==61056== by 0x118A68: main (connectd.c:2082)
==61056== Address 0x4bd8850 is 16 bytes inside a block of size 48 free'd
==61056== at 0x483DFAF: realloc (in /usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so)
==61056== by 0x1860E6: tal_resize_ (tal.c:699)
==61056== by 0x1373DD: tal_arr_remove_ (utils.c:184)
==61056== by 0x11D508: destroy_subd (multiplex.c:930)
==61056== by 0x1850A4: notify (tal.c:240)
==61056== by 0x1855BB: del_tree (tal.c:402)
==61056== by 0x18560D: del_tree (tal.c:412)
==61056== by 0x18560D: del_tree (tal.c:412)
==61056== by 0x185957: tal_free (tal.c:486)
==61056== by 0x1183BC: peer_discard (connectd.c:1861)
==61056== by 0x11869E: recv_req (connectd.c:1942)
==61056== by 0x12774B: handle_read (daemon_conn.c:35)
```
So simply make the subds children of `peer`, not of the `peer->subds`
array. The only effect is that drain_peer() can't simply free the
subds array, but must free the subds one at a time.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This allows us to detect when lightningd hasn't seen our latest
disconnect/reconnect; in particular, we would hit the following pattern:
1. lightningd says to connect a subd.
2. connectd disconnects and reconnects.
3. connectd reads message, connects subd.
4. lightningd reads disconnect and reconnect, sends msg to connect to subd again.
5. connectd asserts because subd is already connected.
This way connectd can tell if lightningd is talking about the previous
connection, and ignore it.
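A sketch of the idea, with illustrative field and function names (not the actual wire messages):
```c
/* Illustrative only: connectd bumps a counter on every (re)connection and
 * includes it in what it tells lightningd; lightningd echoes it back, so
 * connectd can tell when a request refers to a connection that is already
 * gone, and ignore it instead of asserting. */
#include <stdbool.h>
#include <stdint.h>

struct peer_conn {
	uint64_t connection_counter;	/* bumped on each new connection */
};

/* Called whenever a new connection for this peer is established. */
static void note_new_connection(struct peer_conn *peer)
{
	peer->connection_counter++;
}

/* lightningd's request carries the counter it last saw: stale requests
 * (about a previous connection) are simply ignored. */
static bool request_is_current(const struct peer_conn *peer,
			       uint64_t counter_in_request)
{
	return counter_in_request == peer->connection_counter;
}
```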
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Before this patch:
1. connectd says it's connected (peer_connected)
2. we tell connectd we want to talk about each channel (peer_make_active)
3. connectd gives us an fd for each channel, and we connect it to a subd (peer_active)
4. OR, connectd says it sent something about a channel we didn't tell it about, with an fd (peer_active)
Now:
1. connectd says it's connected (peer_connected)
2. we start all appropriate subds and tell connectd which channels/fds to use (peer_connect_subd).
3. if connectd says it sent something about a channel we didn't tell it about, we either tell
it to hang up (peer_final_msg), or connect a new opening daemon (peer_connect_subd).
This is the minimal-size patch, which is why we create socket pairs in
so many places to use the existing functions. Many cleanups are
possible, since the new flow is so simple.
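The socket-pair trick is just the standard POSIX one; a hedged sketch, leaving out the existing fd-passing helpers (illustrative names, not the lightningd code):
```c
/* Illustrative only: create a connected pair of fds, keep one end for the
 * freshly-started subdaemon and hand the other to connectd so it can talk
 * to that subd for this channel. */
#include <stdio.h>
#include <sys/socket.h>

/* Returns the fd to hand to connectd, or -1 on failure; *subd_fd is the
 * end the subdaemon will use. */
static int make_channel_fds(int *subd_fd)
{
	int fds[2];

	if (socketpair(AF_LOCAL, SOCK_STREAM, 0, fds) != 0) {
		perror("socketpair");
		return -1;
	}
	*subd_fd = fds[0];
	return fds[1];
}
```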
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
First, connectd tells us the peer has connected, and we call the connected hook,
and if it says it's fine, we are actually connected and we fire off notifications.
Of course, we could be disconnected while in the connected hook, and that would
mean we tell people about a connection which is no longer current.
Make this clear with a tristate: if we're not marked disconnected by
the time the hooks finish, we're good. It also gives us a cleaner
"connect" command return when we connected but got disconnected before
processing finished.
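A sketch of the tristate (illustrative names only):
```c
/* Illustrative only: the state a peer's connection is in while the
 * connected hooks run. */
#include <stdbool.h>

enum peer_connected_state {
	PEER_CONNECTING,	/* connected hook(s) still running */
	PEER_CONNECTED,		/* hooks finished and we're still connected */
	PEER_DISCONNECTED,	/* connectd said it went away in the meantime */
};

/* Called when the last connected hook finishes: only fire notifications
 * (and give "connect" a success return) if we weren't marked disconnected
 * while the hooks were running. */
static bool hooks_done_still_connected(enum peer_connected_state *state)
{
	if (*state == PEER_DISCONNECTED)
		return false;
	*state = PEER_CONNECTED;
	return true;
}
```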
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We used to not return from "connect" until we had connected all the subds,
which introduced more races if something went wrong.
Remove this workaround, since we're going to rework this logic entirely.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
It's caused by a reconnection race: we hold the new incoming connection while we
ask lightningd to kill the old connection. But under some circumstances we leave
the new incoming connection hanging (with, in this case, old reestablish messages
unread!) and another connection comes in.
Then, later, we service the long-gone "incoming" connection, and channeld
reads the ancient reestablish message and gets upset.
This test used to hang, but now that we've fixed the reconnection races it is fine.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We no longer have to put aside a peer which is reconnecting and wait for
lightningd to remove the old peer: we can simply free the old one
and add the new.
Fixes: #5240
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Sending any pending messages to the peer before hanging up is a courtesy:
give it 5 seconds before simply closing.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Now that we have separate peer-draining logic, we can simply use it when
connectd tells us to release the peer, without waiting. (We could
simply free the peer, but that's a bit rude, as messages can get
lost.)
This removes various complex flags and logic we had before.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Fixed: `connectd`: various crashes and issues fixed by simplification and rewrite.
We allow connectd to tell us a peer has gone away, but now we need
to make sure we don't double-spiderman and tell it to disconnect the peer in return.
This is particularly harmful on reconnect: it (will soon) tell us the
old connection is gone, ready to tell us the new peer has connected.
We would tell it to disconnect the peer, which throws away the new
connection!
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This is redundant now, since connectd only sends us this once we tell
it it's OK, but that's changing, so clean up now. This means that
connectd will be able to make *unsolicited* closes, if it needs to.
We share logic with peer_please_disconnect.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This removes it from the hashtable, and forces it to do nothing but
send out any remaining packets, then close.
It is, in effect, reduced to a stub, with no further interactions
with the rest of the system (all subds are freed already).
It also removes the need for an explicit "final_msg".
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This came out in a later patch: freeing the peer->subds doesn't actually
free the subds, because they're reparented onto subd->conn, which is
a child of peer itself.
This breaks because when the peer is finally freed, destroy_subd is
called, and expects to find itself in peer->subds (but we made that
NULL when we manually freed it!).
Fix this, and make it obvious that we tal_steal it.
```
lightning_connectd: FATAL SIGNAL 11 (version v0.11.0.1-25-gbf025aa-modded)
0x55de2a1b8b94 send_backtrace
common/daemon.c:33
0x55de2a1b8c3e crashdump
common/daemon.c:46
0x7fe2be2fc08f ???
/build/glibc-SzIz7B/glibc-2.31/signal/../sysdeps/unix/sysv/linux/x86_64/sigaction.c:0
0x55de2a1af41e destroy_subd
connectd/multiplex.c:1119
0x55de2a217686 notify
ccan/ccan/tal/tal.c:240
0x55de2a217b9d del_tree
ccan/ccan/tal/tal.c:402
0x55de2a217bef del_tree
ccan/ccan/tal/tal.c:412
0x55de2a217bef del_tree
ccan/ccan/tal/tal.c:412
0x55de2a217f39 tal_free
ccan/ccan/tal/tal.c:486
0x55de2a1aa116 peer_discard
connectd/connectd.c:1834
0x55de2a1aa38d recv_req
connectd/connectd.c:1903
0x55de2a1b9121 handle_read
common/daemon_conn.c:31
0x55de2a205a35 next_plan
ccan/ccan/io/io.c:59
0x55de2a20663d do_plan
ccan/ccan/io/io.c:407
0x55de2a20667f io_ready
ccan/ccan/io/io.c:417
0x55de2a208972 io_loop
ccan/ccan/io/poll.c:453
0x55de2a1aa736 main
connectd/connectd.c:2042
0x7fe2be2dd082 __libc_start_main
../csu/libc-start.c:308
0x55de2a1a085d ???
???:0
0xffffffffffffffff ???
???:0
```
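A hedged sketch of the shape of the fix, assuming ccan/tal (simplified, illustrative names; not the connectd source):
```c
/* Illustrative only: the reparenting is made explicit with tal_steal(),
 * and the subd destructor copes with peer->subds already being gone. */
#include <ccan/tal/tal.h>

struct peer;

struct subd {
	struct peer *peer;
	void *conn;		/* the subd's connection, a child of peer */
};

struct peer {
	struct subd **subds;	/* set to NULL once manually freed */
};

static void destroy_subd(struct subd *subd)
{
	/* The array may already be gone if the peer is being torn down. */
	if (!subd->peer->subds)
		return;
	/* ...otherwise remove ourselves from subd->peer->subds as usual... */
}

static void subd_attach_conn(struct subd *subd, void *conn)
{
	subd->conn = conn;
	/* Make the lifetime obvious: the subd lives as long as its conn,
	 * which is itself a tal child of the peer. */
	tal_steal(conn, subd);
}
```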
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
It's directly a product of "does it have a current owner subdaemon"
and "does that subdaemon talk to peers", so create a helper function
which just evaluates that instead.
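A sketch of what such a helper looks like (hypothetical names):
```c
/* Illustrative only: the old boolean is just the product of "has an
 * owning subdaemon" and "that subdaemon talks to the peer", so compute
 * it on demand instead of storing it. */
#include <stdbool.h>

struct subd {
	bool talks_to_peer;	/* e.g. channeld yes, onchaind no */
};

struct channel {
	struct subd *owner;	/* NULL if no subdaemon currently owns it */
};

static bool channel_is_connected(const struct channel *channel)
{
	return channel->owner && channel->owner->talks_to_peer;
}
```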
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
It's available in listpeers() if you want to see it; otherwise it's not
really something users want to see in the normal course of operation.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This adds an X-Fail test case demonstrating that the port of a DNS
announcement is currently not set to the corresponding
network port (in this case regtest's), but is instead set to 0.
Changelog-None
This is a bit weird since it lives in the offers plugin, but it works
well. This should make runes much more approachable for people!
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
I'm assuming that nobody wants a rate slower than 1 per minute; we can
introduce 'drate' if we want a per-day kind of limit.
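For context, a per-minute check can be as simple as counting recent uses in a trailing 60-second window; a hedged sketch (illustrative, not the plugin's implementation):
```c
/* Illustrative only: allow at most `rate` uses within any trailing
 * 60-second window, given the timestamps of recent uses. */
#include <stdbool.h>
#include <stddef.h>
#include <time.h>

static bool rate_limit_ok(const time_t *recent_uses, size_t num_uses,
			  unsigned int rate)
{
	time_t now = time(NULL);
	size_t in_window = 0;

	for (size_t i = 0; i < num_uses; i++) {
		if (now - recent_uses[i] < 60)
			in_window++;
	}
	return in_window < rate;
}
```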
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>