Not just when it's a private channel. This is useful for listpeerchannels in the next patch.
Most of this is renaming.
It also means that `source` can be NULL, so move it out of the struct and put it in the message,
where it logically belongs, and make it an optional field.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
and stash in the database.
Rusty: I added the bad gossip message so we would see unknown updates in CI, and made sure we don't send our own generated updates to lightningd.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
If we ever re-enabled a channel too fast and considered the update spam,
it wouldn't propagate. For the moment, consider our own updates never
to be spam.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
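A minimal sketch of the rule, with illustrative names (`half_chan_sketch` and `update_is_spam` are stand-ins, not the real gossipd structures):
```c
#include <stdbool.h>

/* Hypothetical stand-in for gossipd's per-direction channel state. */
struct half_chan_sketch {
	int tokens;	/* remaining rate-limit budget */
	bool ours;	/* did we generate this side's updates? */
};

/* Sketch: our own channel_updates are never classified as spam, so a
 * quick disable/re-enable of a local channel still propagates. */
static bool update_is_spam(const struct half_chan_sketch *hc)
{
	if (hc->ours)
		return false;		/* never squelch our own updates */
	return hc->tokens <= 0;		/* normal rate limiting otherwise */
}
```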
This should only have an effect if someone queries, but in practice, peers actually
see the disabling of channels.
This is a workaround until we rework the code so gossipd doesn't generate these at all
any more.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Now that we've asserted that channeld would tell lightningd the same thing it
would do anyway, we can simply have channeld say "enable=True|False" and
have lightningd fill in the other fields.
This means there's a pile of things channeld doesn't need to know any more!
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
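Roughly, the interface narrows to something like the sketch below (names are illustrative, not the actual wire messages): channeld reports only the enable flag, and lightningd supplies everything else from its own configuration.
```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative only: the single decision channeld still makes. */
struct channeld_says_sketch {
	bool enable;
};

/* Illustrative only: fields lightningd already knows itself. */
struct update_fields_sketch {
	uint32_t fee_base_msat, fee_proportional_millionths;
	uint16_t cltv_expiry_delta;
	uint64_t htlc_minimum_msat, htlc_maximum_msat;
	bool enable;
};

/* lightningd combines channeld's flag with its own knowledge. */
static struct update_fields_sketch
make_update_sketch(const struct update_fields_sketch *config, bool enable)
{
	struct update_fields_sketch u = *config;
	u.enable = enable;	/* the only bit channeld contributed */
	return u;
}
```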
This makes it easier to use outside simple subdaemons, and lightningd can
now simply dump to the log rather than returning JSON.
JSON formatting was a lot of work, and we only did it for lightningd, not for
subdaemons. It's easier to use the logs in all cases.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This also requires us to expose memleak when !DEVELOPER; however, we only
ever used the memleak tracking when the LIGHTNINGD_DEV_MEMLEAK
environment variable was set, so keep that requirement.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
dump_our_gossip() is mainly useful for propagating our gossip when we
are poorly connected, not when we have many peers. @whitslack
reported excessive memory use queueing these messages on a large node,
so beyond the first 5 peers we limit it to 5 channels each.
This assumes we have roughly the same number of peers as channels, which
is probably reasonable.
In the long term, we should move this to connectd, which is properly
equipped to trickle out these messages.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Fixes: #6540
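A sketch of the limiting policy described above (the constants match the commit description; the helper name is illustrative):
```c
#include <stddef.h>
#include <stdint.h>

#define FULL_GOSSIP_PEERS 5		/* first peers get everything */
#define LIMITED_CHANNELS_PER_PEER 5	/* later peers get a capped dump */

/* Illustrative helper: how many of our channels to dump to this peer. */
static size_t our_gossip_limit_sketch(size_t peers_already_served)
{
	if (peers_already_served < FULL_GOSSIP_PEERS)
		return SIZE_MAX;	/* poorly connected: push it all */
	return LIMITED_CHANNELS_PER_PEER; /* bound queued memory on big nodes */
}
```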
Update the lightningd <-> channeld interface with lots of new commands needed to facilitate splicing.
Implement the channeld splicing protocol leveraging the interactivetx protocol.
Implement lightningd’s channel_control to support channeld in its splicing efforts.
Changelog-Added: Added the features to enable splicing & resizing of active channels.
Update gossip routines and various other checks on the channel state to consider AWAITING_SPLICE routable and treat it similarly to CHANNELD_NORMAL.
Small updates to psbt interface
Changelog-None
Also, since update_own_node_announcement is called nearly continuously
under normal operation by maybe_send_own_node_announce, the timer should
not be freed continuously; better to free it only before actually
refreshing.
When an outdated own node announcement is present, it fails the
nannounce_different test and also fails to kick off the forced regen
timer.
Changelog-Fixed: Node announcements are refreshed more reliably.
This will at least *help* the case where these were not populated, causing us
to send errors without a channel_update appended.
It's not perfect: we can still send such errors if the gossip store is
corrupted, and we still send them for private channels, but it should
help.
(The much better fix is far more invasive, so slips to next release!)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This looked like a test flake, but was real:
```
l1.daemon.wait_for_log("closing soon due to the funding outpoint being spent")
# We won't gossip the dead channel any more (but we still propagate node_announcement). But connectd is not explicitly synced, so wait for "a bit".
time.sleep(1)
> assert len(get_gossip(l1)) == 2
E assert 4 == 2
```
We can see that two channel_updates come in *after* we mark it dying:
```
gossipd: channel 103x1x0 closing soon due to the funding outpoint being spent
gossipd: REPLY WIRE_GOSSIPD_NEW_BLOCKHEIGHT_REPLY with 0 fds
022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59-gossipd: Received channel_update for channel 103x1x0/0 now DISABLED
022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59-gossipd: Received channel_update for channel 103x1x0/1 now DISABLED
```
We should keep marking channel_updates the same way.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Even though one side was not produced by us, we have a vested interest in propagating it.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Added: Protocol: When we send our own gossip when a peer connects, also send any incoming channel_updates.
@endothermicdev and I found this while investigating a "nobody sees my node_announcement" bug report.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Fixes: #6410
Reported-by: benjaminchodroff on discord
Changelog-Fixed: Protocol: When we send our own gossip when a peer connects, send our node_announcement too (regression in v23.05)
Alex and I were reading it and I got confused: it's really a simpler loop
than it seems, with all those redundant `continue` statements.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We don't actually delete them for 12 blocks, but we can avoid
propagating them. We don't mark node_announcements, which is a bit
weird, but avoids us tracking logic to un-dying them if a channel is
opened.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We keep several peer pointers, but we just add a hook to NULL them
manually when a peer dies, rather than using voodoo.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
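The pattern is the usual ccan/tal destructor trick (sketch below; `struct thing_sketch` and the helper names are illustrative): when the peer object is freed, a destructor NULLs the stored back-pointer.
```c
#include <ccan/tal/tal.h>

struct peer;

struct thing_sketch {
	struct peer *peer;	/* NULLed when the peer dies */
};

static void forget_peer_sketch(struct peer *peer, struct thing_sketch *t)
{
	t->peer = NULL;		/* explicit and manual: no voodoo */
}

static void remember_peer_sketch(struct thing_sketch *t, struct peer *peer)
{
	t->peer = peer;
	/* Runs forget_peer_sketch(peer, t) when peer is tal_free'd. */
	tal_add_destructor2(peer, forget_peer_sketch, t);
}
```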
We use a "softref" which is a magic pointer which gets NULL'ed when
the object is freed. But it's heavy, and a bit tricky to use, and we
only use it in gossipd.
Instead, keep the nodeid, and do a lookup (which is now fast) if we want
to credit the sender for valid gossip.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
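In sketch form, with illustrative stand-in types (the real code uses gossipd's own node id and peer structures):
```c
#include <stdint.h>
#include <string.h>

/* Illustrative stand-ins for gossipd's real types. */
struct node_id_sketch { uint8_t k[33]; };
struct peer_sketch { uint64_t gossip_counter; struct node_id_sketch id; };
struct daemon_sketch { struct peer_sketch *peers; size_t num_peers; };

/* Store the reporter's id by value, so nothing can dangle. */
struct pending_credit_sketch { struct node_id_sketch reporter; };

static struct peer_sketch *find_peer_sketch(struct daemon_sketch *d,
					    const struct node_id_sketch *id)
{
	for (size_t i = 0; i < d->num_peers; i++)
		if (memcmp(d->peers[i].id.k, id->k, sizeof(id->k)) == 0)
			return &d->peers[i];
	return NULL;
}

/* Credit the sender for valid gossip, if it's still connected. */
static void credit_reporter_sketch(struct daemon_sketch *d,
				   const struct pending_credit_sketch *pc)
{
	struct peer_sketch *peer = find_peer_sketch(d, &pc->reporter);
	if (peer)	/* the peer may be gone; that's now fine */
		peer->gossip_counter++;
}
```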
Reported in #6270: an attempt to delete gossip overran the end of the
gossip_store. This logs the type of gossip it attempted to delete and
avoids an immediate crash (tombstones, at least, would be fine to skip
over).
Changelog-None
The push bit was convenient for connectd to send our own gossip
to peers upon connecting by naively traversing the gossip_store
and sending anything flagged `push`. This function is now
performed by gossipd leaving no use for the push bit.
Changelog-Changed: `gossipd`: gossip_store PUSH bit is no longer set.
This was previously the role of connectd, but it's actually more
efficient for us to do it: connectd has to sweep through the entire
gossip_store, but we already have data structures for this.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This reverts us to the v22.11 behaviour, pending a revisit for the
next release.
Changelog-Changed: gossipd: revert zombification change, keep all gossip for now.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Loading the gossip_store would not create a pending node_announcement
when the node already had a zombie channel. The node_announcement would
then attempt to load, but fail because the node had no broadcastable
channels. Accepting a pending node_announcement here, just as when
loading a channel normally, corrects this.
Having `node_has_public_channels` take zombie channels into account
enables this behavior.
Separately, node_announcements were still being flagged as zombies
in the gossip store despite that feature being removed.
Changelog-None
Without inheriting zombie status, gossipd would allow regular channel updates
into the store until the pruning cycle hits (3.5 days) and the channel is
properly flagged. Applying zombie status when reading channel updates
from the store prevents this.
Changelog-None
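In sketch form, with illustrative structures: an update read back from the store immediately inherits the channel's zombie flag, rather than waiting up to 3.5 days for pruning to re-flag it.
```c
#include <stdbool.h>

/* Illustrative channel state: a zombie flag per channel and per side. */
struct half_sketch { bool zombie; };
struct chan_load_sketch { bool zombie; struct half_sketch half[2]; };

/* On loading a channel_update from the store, inherit the status. */
static void load_update_sketch(struct chan_load_sketch *chan, int direction)
{
	chan->half[direction].zombie = chan->zombie;
}
```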
remove_chan_from_node already corrects the ordering if a node_announcement
is left ahead of the next oldest channel_announcement, but zombifying should
do that check (and reorder if necessary) too.
Changelog-None
Closing channels would previously sometimes require moving node
announcements in the gossip store. They incorrectly lost their spam flag
during this process (and would no longer be squelched).
Changelog-None
A zombie channel is not considered broadcastable, so if all channels
are zombies (i.e. is_node_zombie() is true), then
node_has_broadcastable_channels() is false.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
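A minimal sketch of that invariant (the struct layout is illustrative):
```c
#include <stdbool.h>
#include <stddef.h>

struct chan_flag_sketch {
	bool zombie;	/* zombie channels aren't broadcastable */
};

static bool node_has_broadcastable_channels_sketch(
	const struct chan_flag_sketch *chans, size_t num)
{
	for (size_t i = 0; i < num; i++)
		if (!chans[i].zombie)
			return true;	/* at least one live channel */
	/* is_node_zombie(): every channel is a zombie. */
	return false;
}
```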
This simplifies things (we'll get node_announcement if they ever
rebroadcast), since we clearly have an issue with node_announcement for
zombie nodes.
Changes:
1. Remove now-unused gossip_store_mark_nannounce_zombie and resurrect_nannouncements.
2. Don't consider zombie channels to count when deciding whether to move node_announcement
(node_announcement must be preceded by at least one broadcastable channel_announcement).
3. Treat incoming node_announcement where we have all-zombie channels the same as if
we had no channels.
4. Remove node_announcement whenever we have no announceable channels (not just zombies).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This could always happen if we armed the timer when we did have public
channels and, by the time we sent our node_announcement, no longer did;
but it gets triggered in our tests when we remove (our own!) zombied
node_announcement in the next patch.