Give them time to process any final messages! If there's a reconnect,
then we need to clean them up immediately, of course.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
A subtle case I hadn't come across before: if a child tal_resizes()
its parent while the parent is being deleted, tal gets confused.
The subd destructor does this using tal_arr_remove() on peer->subds,
which is currently being freed:
```
==61056== Invalid read of size 8
==61056== at 0x185632: del_tree (tal.c:417)
==61056== by 0x18560D: del_tree (tal.c:412)
==61056== by 0x185957: tal_free (tal.c:486)
==61056== by 0x1183BC: peer_discard (connectd.c:1861)
==61056== by 0x11869E: recv_req (connectd.c:1942)
==61056== by 0x12774B: handle_read (daemon_conn.c:35)
==61056== by 0x173453: next_plan (io.c:59)
==61056== by 0x17405B: do_plan (io.c:407)
==61056== by 0x17409D: io_ready (io.c:417)
==61056== by 0x176390: io_loop (poll.c:453)
==61056== by 0x118A68: main (connectd.c:2082)
==61056== Address 0x4bd8850 is 16 bytes inside a block of size 48 free'd
==61056== at 0x483DFAF: realloc (in /usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so)
==61056== by 0x1860E6: tal_resize_ (tal.c:699)
==61056== by 0x1373DD: tal_arr_remove_ (utils.c:184)
==61056== by 0x11D508: destroy_subd (multiplex.c:930)
==61056== by 0x1850A4: notify (tal.c:240)
==61056== by 0x1855BB: del_tree (tal.c:402)
==61056== by 0x18560D: del_tree (tal.c:412)
==61056== by 0x18560D: del_tree (tal.c:412)
==61056== by 0x185957: tal_free (tal.c:486)
==61056== by 0x1183BC: peer_discard (connectd.c:1861)
==61056== by 0x11869E: recv_req (connectd.c:1942)
==61056== by 0x12774B: handle_read (daemon_conn.c:35)
```
So simply make the subds children of `peer`, not of the `peer->subds`
array. The only effect is that drain_peer() can't simply free the
subds array, but must free the subds one at a time.
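A minimal sketch of the idea (struct fields simplified, and assuming the usual tal_arr_expand()/tal_count() helpers; this is not the patch itself):
```
#include <ccan/tal/tal.h>
#include <common/utils.h>	/* tal_arr_expand(), assumed available */

struct peer;
struct subd {
	struct peer *peer;
};
struct peer {
	struct subd **subds;	/* tal_arr of pointers; entries are children of peer */
};

/* Allocate the subd off the peer itself, not off peer->subds: its
 * destructor can then tal_arr_remove() itself without resizing an
 * array that is in the middle of being freed. */
static struct subd *new_subd(struct peer *peer)
{
	struct subd *subd = tal(peer, struct subd);
	subd->peer = peer;
	tal_arr_expand(&peer->subds, subd);
	return subd;
}

/* drain_peer() can no longer just tal_free(peer->subds): free the
 * subds one at a time (each destructor removes itself from the array). */
static void free_subds(struct peer *peer)
{
	while (tal_count(peer->subds) != 0)
		tal_free(peer->subds[0]);
}
```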
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This allows us to detect when lightningd hasn't seen our latest
disconnect/reconnect; in particular, we would hit the following pattern:
1. lightningd says to connect a subd.
2. connectd disconnects and reconnects.
3. connectd reads message, connects subd.
4. lightningd reads disconnect and reconnect, sends msg to connect to subd again.
5. connectd asserts because subd is already connected.
This way connectd can tell if lightningd is talking about the previous
connection, and ignore it.
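One way to picture the fix (names hypothetical, message plumbing omitted): stamp each connection with a counter, echo it in the messages, and have connectd ignore requests that carry a stale counter.
```
#include <ccan/short_types/short_types.h>
#include <stdbool.h>

/* Hypothetical sketch: each (re)connection bumps a counter, and every
 * message about a peer carries the counter of the connection it refers to. */
struct peer {
	u64 connection_counter;
};

/* If lightningd is still talking about the connection we already threw
 * away (step 4 in the pattern above), ignore it instead of asserting. */
static bool msg_is_for_current_connection(const struct peer *peer,
					  u64 msg_counter)
{
	return msg_counter == peer->connection_counter;
}
```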
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Before this patch:
1. connectd says it's connected (peer_connected)
2. we tell connectd we want to talk about each channel (peer_make_active)
3. connectd gives us an fd for each channel, and we connect it to a subd (peer_active)
4. OR, connectd says it sent something about a channel we didn't tell it about, with an fd (peer_active)
Now:
1. connectd says it's connected (peer_connected)
2. we start all appropriate subds and tell connectd which channels/fds to use (peer_connect_subd).
3. if connectd says it sent something about a channel we didn't tell it about, we either tell
it to hang up (peer_final_msg), or connect a new opening daemon (peer_connect_subd).
This is the minimal-size patch, which is why we create socket pairs in
so many places to use the existing functions. Many cleanups are
possible, since the new flow is so simple.
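A sketch of the socketpair trick mentioned above (helper name hypothetical): lightningd keeps one end for the subd it spawns and passes the other to connectd along with peer_connect_subd.
```
#include <sys/socket.h>
#include <stdbool.h>

/* Hypothetical helper: create the pair of fds used to wire a freshly
 * started subd to connectd.  The caller sends *connectd_fd with the
 * peer_connect_subd message and hands *subd_fd to the subdaemon. */
static bool make_subd_fds(int *subd_fd, int *connectd_fd)
{
	int fds[2];

	if (socketpair(AF_LOCAL, SOCK_STREAM, 0, fds) != 0)
		return false;
	*subd_fd = fds[0];
	*connectd_fd = fds[1];
	return true;
}
```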
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
First, connectd tells us the peer has connected and we call the connected hook;
if it says it's fine, we are actually connected and we fire off notifications.
Of course, we could be disconnected while in the connected hook, and that would
mean we tell people about a connection which is no longer current.
Make this clear with a tristate: if we're not marked disconnected by
the time the hooks finish, we're good. It also gives us a cleaner
"connect" command return when we connected but disconnected before
processing.
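A minimal sketch of the tristate (enum and helper names are illustrative, not lightningd's actual ones):
```
#include <stdbool.h>

/* The peer sits in CONNECTING while the peer_connected hook chain runs.
 * If connectd reports a disconnect in the meantime we flip to
 * DISCONNECTED; otherwise, when the hooks finish, we move to CONNECTED
 * and fire the notifications (and "connect" can return cleanly). */
enum peer_hook_state {
	PEER_CONNECTING,
	PEER_CONNECTED,
	PEER_DISCONNECTED,
};

static bool still_connected_after_hooks(enum peer_hook_state state)
{
	return state != PEER_DISCONNECTED;
}
```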
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We used to not return from "connect" until we had connected all the subds,
which introduced more races if something went wrong.
Remove this workaround, since we're going to rework this logic entirely.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
It's caused by a reconnection race: we hold the new incoming connection while we
ask lightningd to kill the old connection. But under some circumstances we leave
the new incoming connection hanging (with, in this case, old reestablish messages unread!)
and another connection comes in.
Then, later, we service the long-gone "incoming" connection; channeld
reads the ancient reestablish message and gets upset.
This test used to hang, but now we've fixed reconnection races it is fine.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We don't have to put aside a peer which is reconnecting and wait for
lightningd to remove the old peer: we can now simply free the old one
and add the new.
Fixes: #5240
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Sending any pending messages to peer before hanging up is a courtesy:
give it 5 seconds before simply closing.
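A sketch of the courtesy timeout, assuming ccan-style timer helpers (new_reltimer(), time_from_sec()) and a timers field on the daemon:
```
#include <ccan/tal/tal.h>
#include <ccan/time/time.h>
#include <ccan/timer/timer.h>
#include <common/timeout.h>	/* new_reltimer(): assumed available here */

struct daemon {
	struct timers timers;
};
struct peer;

static void drain_timed_out(struct peer *peer)
{
	/* 5 seconds was enough courtesy: just close. */
	tal_free(peer);
}

static void start_drain_timer(struct daemon *daemon, struct peer *peer)
{
	/* Parent the timer on the peer: if it finishes draining and
	 * frees itself earlier, the timer silently disappears too. */
	new_reltimer(&daemon->timers, peer, time_from_sec(5),
		     drain_timed_out, peer);
}
```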
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Now that we have separate peer draining logic, we can simply use it when
connectd tells us to release the peer, without waiting. (We could
simply free the peer, but that's a bit rude, as messages can get
lost).
This removes various complex flags and logic we had before.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Fixed: `connectd`: various crashes and issues fixed by simplification and rewrite.
We allow connectd to tell us a peer has gone away, but now we need
to make sure we don't double-spiderman and tell it to disconnect the peer.
This is particularly harmful on reconnect: it (will soon) tell us the
old connection is gone, ready to tell us the new peer has connected.
We would tell it to disconnect the peer, which throws away the new
connection!
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This is redundant now, since connectd only sends us this once we tell
it it's OK, but that's changing, so clean up now. This means that
connectd will be able to make *unsolicited* closes, if it needs to.
We share logic with peer_please_disconnect.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This removes it from the hashtable, and forces it to do nothing but
send out any remaining packets, then close.
It is, in effect, reduced to a stub, with no further interactions
with the rest of the system (all subds are freed already).
Also removes the need for an explicit "final_msg" too.
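Roughly the shape of the stub's write loop (the queue helper is assumed; the ccan/io calls are real): flush whatever remains, then close.
```
#include <ccan/io/io.h>
#include <ccan/short_types/short_types.h>
#include <ccan/tal/tal.h>

struct peer;

/* Assumed queue helper: returns the next queued message, or NULL. */
const u8 *next_queued_msg(struct peer *peer);

/* Once drained, the peer does nothing but flush its output queue and
 * then close; no subds remain and nothing else talks to it, so a
 * separate "final_msg" mechanism is unnecessary. */
static struct io_plan *drain_write(struct io_conn *conn, struct peer *peer)
{
	const u8 *msg = next_queued_msg(peer);
	if (!msg)
		return io_close(conn);
	return io_write(conn, msg, tal_bytelen(msg), drain_write, peer);
}
```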
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This came out in a later patch: freeing the peer->subds doesn't actually
free the subds, because they're reparented onto subd->conn, which is
a child of peer itself.
This breaks because when the peer is finally freed, destroy_subd is
called, and expects to find itself in peer->subds (but we made that
NULL when we manually freed it!).
Fix this, and make it obvious that we tal_steal it.
```
ightning_connectd: FATAL SIGNAL 11 (version v0.11.0.1-25-gbf025aa-modded)
0x55de2a1b8b94 send_backtrace
common/daemon.c:33
0x55de2a1b8c3e crashdump
common/daemon.c:46
0x7fe2be2fc08f ???
/build/glibc-SzIz7B/glibc-2.31/signal/../sysdeps/unix/sysv/linux/x86_64/sigaction.c:0
0x55de2a1af41e destroy_subd
connectd/multiplex.c:1119
0x55de2a217686 notify
ccan/ccan/tal/tal.c:240
0x55de2a217b9d del_tree
ccan/ccan/tal/tal.c:402
0x55de2a217bef del_tree
ccan/ccan/tal/tal.c:412
0x55de2a217bef del_tree
ccan/ccan/tal/tal.c:412
0x55de2a217f39 tal_free
ccan/ccan/tal/tal.c:486
0x55de2a1aa116 peer_discard
connectd/connectd.c:1834
0x55de2a1aa38d recv_req
connectd/connectd.c:1903
0x55de2a1b9121 handle_read
common/daemon_conn.c:31
0x55de2a205a35 next_plan
ccan/ccan/io/io.c:59
0x55de2a20663d do_plan
ccan/ccan/io/io.c:407
0x55de2a20667f io_ready
ccan/ccan/io/io.c:417
0x55de2a208972 io_loop
ccan/ccan/io/poll.c:453
0x55de2a1aa736 main
connectd/connectd.c:2042
0x7fe2be2dd082 __libc_start_main
../csu/libc-start.c:308
0x55de2a1a085d ???
???:0
0xffffffffffffffff ???
???:0
```
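A sketch of making the reparenting explicit (conn setup simplified; subd_conn_init is a stand-in name):
```
#include <ccan/io/io.h>
#include <ccan/tal/tal.h>

struct peer;
struct subd {
	struct io_conn *conn;
};

struct io_plan *subd_conn_init(struct io_conn *conn, struct subd *subd);

static void subd_attach_conn(struct peer *peer, struct subd *subd, int fd)
{
	subd->conn = io_new_conn(peer, fd, subd_conn_init, subd);
	/* Make the ownership obvious: the conn (a child of peer, not of
	 * peer->subds) keeps the subd alive, so freeing peer->subds alone
	 * frees nothing, and destroy_subd must cope with peer->subds
	 * having been cleared first. */
	tal_steal(subd->conn, subd);
}
```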
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
It's directly a product of "does it have a current owner subdaemon"
and "does that subdaemon talk to peers", so create a helper function
which just evaluates that instead.
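The helper amounts to something like this (field names assumed from the description):
```
#include <stdbool.h>

struct subd {
	bool talks_to_peer;
};
struct channel {
	struct subd *owner;	/* NULL if no subdaemon currently runs it */
};

/* "Connected" is not state we need to track separately: it is simply
 * whether the channel has an owner subdaemon that talks to the peer. */
static bool channel_is_connected(const struct channel *channel)
{
	return channel->owner && channel->owner->talks_to_peer;
}
```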
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
It's available in listpeers() if you want to see it, otherwise it's not
really something users want to see in the normal course of operation.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This adds an X-Fail testcase demonstrating that the port of a DNS
announcement is currently not set to the corresponding network port
(in this case regtest's), but is instead set to 0.
Changelog-None
This is a bit weird since it lives in the offers plugin, but it works
well. This should make runes much more approachable for people!
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
I'm assuming that nobody wants a rate slower than 1 per minute; we can
introduce 'drate' if we want a per-day kind of limit.
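Not the plugin's actual bookkeeping, just a sketch of what a per-minute rate limit means: count uses in the trailing 60 seconds and refuse once `rate` is reached.
```
#include <stdbool.h>
#include <stddef.h>
#include <time.h>

#define MAX_TRACKED 64		/* sketch assumes rate <= 64 */

struct usage_window {
	time_t stamps[MAX_TRACKED];
	size_t num;
};

/* Returns true (and records the use) if another use is allowed now. */
static bool rate_limit_ok(struct usage_window *w, size_t rate, time_t now)
{
	size_t kept = 0;

	/* Forget uses more than a minute old. */
	for (size_t i = 0; i < w->num; i++)
		if (now - w->stamps[i] < 60)
			w->stamps[kept++] = w->stamps[i];
	w->num = kept;

	if (rate > MAX_TRACKED)
		rate = MAX_TRACKED;
	if (w->num >= rate)
		return false;
	w->stamps[w->num++] = now;
	return true;
}
```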
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We support the old commando.py plugin, which stores a random secret,
as well as a more modern approach which uses makesecret.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This is needed for invoice, which can be asked to commit to giant descriptions
(though that's antisocial!).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Added: Plugins: `commando` a new builtin plugin to send/recv peer commands over the lightning network, using runes.
Plugins are supposed to store their data in the datastore, and commando does so:
let's make it easier for them by providing convenience APIs.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>