- Removes CI ValueError for broken logs
- Fixes CI errors due to deprecated listconfigs 'important-plugins'
- Removes deprecated listchannels local test
Changelog-None
When a peer connects, we always send all our own gossip (even if they
set timestamp filters that would exclude it). But we weren't pushing
it out to them when it changed, so in practice this only helped
unstable or frequently-restarting nodes.
So now, we tell all the peers whenever we tell gossipd about our new
gossip.
Fixes: #7276
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Changed: Protocol: We now send current peers our changed gossip (even if they set timestamp_filter otherwise), not just on reconnect.
We didn't actually check that we *send* the refreshed gossip, just
that we print the message saying we're going to.
So check that everyone actually receives the updated gossip when this
happens.
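A minimal sketch of such a check, in pyln-testing style; the node names, the `setchannel` trigger and the exact assertion are illustrative, not the actual test:
```python
from pyln.testing.utils import wait_for


def test_gossip_pushed_on_change(node_factory):
    # l1 <-> l2 share an announced channel, so each has the other's gossip.
    l1, l2 = node_factory.line_graph(2, wait_for_announce=True)

    # Changing the fee makes l1 emit a fresh channel_update...
    l1.rpc.setchannel(l2.info['id'], feebase=1234)

    # ...which should now be pushed to the connected peer right away,
    # not merely logged and deferred until the next reconnect.
    wait_for(lambda: any(c['base_fee_millisatoshi'] == 1234
                         for c in l2.rpc.listchannels()['channels']))
```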
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This helps code using generate_gossip_store() too, since callers can
map their own identifiers to the node ids which were actually used.
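Illustrative use of that mapping; the exact shape of `generate_gossip_store()`'s return value is an assumption here:
```python
# Hypothetical sketch: describe channels between abstract identifiers
# 0, 1 and 2, then translate those identifiers via the returned map.
gsfile, nodemap = generate_gossip_store([GenChannel(0, 1),
                                         GenChannel(1, 2)])

# nodemap is assumed to map each of our identifiers (0, 1, 2) to the
# real nodeid in the store, so tests can refer to nodes by number.
assert all(i in nodemap for i in (0, 1, 2))
```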
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This is a hack, but we've had nodes missing gossip. LDK does this,
so until we get a better workaround, use this one.
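In BOLT 7 terms, asking a peer for everything is just a `gossip_timestamp_filter` (type 265) covering all time; a rough sketch of the wire message, not the actual connectd code:
```python
import struct

def full_gossip_filter(chain_hash: bytes) -> bytes:
    # gossip_timestamp_filter (type 265): first_timestamp=0 with maximal
    # timestamp_range asks the peer to dump all the gossip it has.
    return struct.pack('>H32sII', 265, chain_hash, 0, 0xFFFFFFFF)
```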
Changelog-Changed: Protocol: we now always ask the first peer for all its gossip.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
A bit tricky, since we get more than one message at a time. However,
this just means we go over quota for a bit, and that gets caught up
when those messages are sent (we already do this for a single message,
so it's not much worse).
Note: this not only limits sending, it limits the actual query
processing, which is nice.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We currently stream gossip as fast as we can, even if they start at
timestamp 0. Instead, use a simple token bucket filter and only let
them have 1MB per second (500 bytes per second for testing).
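A minimal sketch of the token-bucket idea in Python (the real implementation is in connectd's C code; the names and burst size here are assumptions). Because whole messages are deducted even when they overdraw the bucket, a batch can briefly exceed quota and gets caught up on later sends, as the previous commit notes:
```python
import time

class GossipTokenBucket:
    def __init__(self, rate=1_000_000, burst=1_000_000):
        self.rate = rate          # bytes per second (500 under testing)
        self.capacity = burst
        self.tokens = burst
        self.last = time.monotonic()

    def consume(self, nbytes):
        """Deduct one message; tokens may go negative (brief overshoot)."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        self.tokens -= nbytes
        # False means: stop streaming until the bucket refills.
        return self.tokens >= 0
```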
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Protocol: connectd: we now throttle outgoing gossip at 1MB/second per peer.
This means we do have to set the network correctly, though, and we can
also get query messages from lightningd, which we have to filter.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Rather than `allow_broken_log`, we have `broken_log`, a regex
indicating which broken log lines are expected. This tightens our
tests significantly, as it will catch *unexpected* brokenness.
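For instance (the regex is illustrative; the point is that only matching broken lines are tolerated):
```python
# Before: any broken log line was waved through.
#   l1 = node_factory.get_node(allow_broken_log=True)
# Now: declare exactly the brokenness we expect; anything else fails.
l1 = node_factory.get_node(broken_log='gossipd: Bad gossip order')
```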
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This avoids us gossiping about nodes which don't have live channels.
Interestingly, we previously tested that we *did* gossip such node
announcements, and now we fix that test.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Instead of "new" and "load": we don't really need to "load" anything,
so do everything in gossip_store_new. Have it do the
compaction/rewrite, and collect the dying records.
The gossip_store_load is now basically a noop, since gossmap
does that.
gossipd removes a pile of routines dealing with messages,
in favor of just handing them to gossmap_manage.
The stub gossmap_manage constructor is removed entirely.
We simplified behaviour around channel_announcements with
no channel_update: we now add them to the store, and go
back to fix the timestamp later. This changes a test
which explicitly tested the old behaviour.
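A rough illustration of that simplified behaviour (pure pseudocode for the idea; `store.append` and `store.set_timestamp` are hypothetical, not gossipd's API):
```python
# Hypothetical sketch: a channel_announcement with no channel_update yet
# is appended immediately, with a placeholder timestamp...
offset = store.append(channel_announcement, timestamp=0)

# ...and when the first channel_update arrives, we patch the stored
# timestamp in place instead of dropping and re-adding the record.
def on_first_channel_update(update):
    store.set_timestamp(offset, update.timestamp)
```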
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We weakened this progressively over time, and gossip v1.5 makes spam
impossible by protocol, so we can wait until then.
Removing this code simplifies things a great deal!
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Removed: Protocol: we no longer ratelimit gossip messages by channel, making our code far simpler.
We never enabled it, because we seemed to be eliminating valid
channels. We discard zombie-marked records on loading.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
It was an obscure dev command, as it never worked reliably.
It would be much easier to re-implement once this is done.
This turned out to reveal a tiny leak in
tests/test_gossip.py::test_gossip_store_load_amount_truncated, where we
didn't immediately free chan_ann if it was dangling.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This caused a flake in test_gossip_lease_rates, since the peer would ignore
the node_announcement due to duplicate timestamps!
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We mine blocks too fast, and l3 discards the channel_announcement as too far in the future:
```
lightningd-3 2023-12-14T06:40:04.744Z DEBUG 0266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c03518-gossipd: Ignoring future channel_announcment for 103x1x1 (current block 102)
```
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This commit is a bit messy, but it tries to do the minimal switchover.
Some tests change, so those are included here.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This was triggering on private updates after gossip changes.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This alters a few remaining tests, as well.
[ Includes even more test fixes from Alex Myers <alex@endothermic.dev>! ]
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Deprecated: JSON-RPC: `listchannels` listing private channels: use listpeerchannels
Writing to the gossip_store file is not explicitly synchronized, so it
seems that connectd has not caught up with the dying flags we've set.
Simply wait for a second: hacky, but should work (see the sketch after
the log below).
```
    def test_gossip_not_dying(node_factory, bitcoind):
        l1 = node_factory.get_node()
        l2, l3 = node_factory.line_graph(2, wait_for_announce=True)

        l1.rpc.connect(l2.info['id'], 'localhost', l2.port)
        # Wait until it sees all the updates, node announcments.
        wait_for(lambda: len([n for n in l1.rpc.listnodes()['nodes'] if 'alias' in n])
                 + len(l1.rpc.listchannels()['channels']) == 4)

        def get_gossip(node):
            out = subprocess.run(['devtools/gossipwith',
                                  '--initial-sync',
                                  '--timeout-after=2',
                                  '{}@localhost:{}'.format(node.info['id'], node.port)],
                                 check=True,
                                 timeout=TIMEOUT, stdout=subprocess.PIPE).stdout

            msgs = []
            while len(out):
                l, t = struct.unpack('>HH', out[0:4])
                msg = out[2:2 + l]
                out = out[2 + l:]

                # Ignore pings, timestamp_filter
                if t == 265 or t == 18:
                    continue
                # channel_announcement node_announcement or channel_update
                assert t == 256 or t == 257 or t == 258
                msgs.append(msg)

            return msgs

        assert len(get_gossip(l1)) == 5

        # Close l2->l3, mine block.
        l2.rpc.close(l3.info['id'])
        bitcoind.generate_block(1, wait_for_mempool=1)

        l1.daemon.wait_for_log("closing soon due to the funding outpoint being spent")

        # We won't gossip the dead channel any more (but we still propagate node_announcement)
>       assert len(get_gossip(l1)) == 2
E       assert 4 == 2
E        +  where 4 = len([b'\x01\x01L\xc2\xbe\x08\xbb\xa8~\x8f\x80R\x9e`J\x1cS\x18|\x12\n\xe5_6\xb0\xa6S\x9fU\xae\x19\x9c\x1fXB\xab\x81N\x13\xdc\x8e}\xb9\xb0\xb6\xe6\x14h\xd4:\x90\xce\xc3\xad\x9ezR`~\xba@\xc9\x91e\x89\xab\x00\x07\x88\xa0\x00\n\x02i\xa2d\xb8\xa9`\x02-"6 \xa3Y\xa4\x7f\xf7\xf7\xacD|\x85\xc4l\x92=\xa53\x89"\x1a\x00T\xc1\x1c\x1e<\xa3\x1dY\x02-"SILENTARTIST-27fc801-modded\x00\x00\x00\x00\x00\x00\x00', b'\x01\x01M\x00\x86\x8e4\xc8\x90p\n\x98\xf7\xce4\x1e\xd9\xd6-6\xfb(\xf0\xe4\xb7\x90\x7f\x89\xb9\xfa\x00\x82\x1b\xeb\x1fY\x93\x1e\xe0c\xb2\x0e<\xe6\x06x\xb7\xe54};\xfbd\xa0\x01S\xcf\xe8{\xf8\x8f/\xa7\xc0\xe2h\x00\x07\x88\xa0\x00\n\x02i\xa2d\xb8\xa9`\x03]+\x11\x92\xdf\xba\x13N\x10\xe5@\x87]6n\xbc\x8b\xc3S\xd5\xaavk\x80\xc0\x90\xb3\x9c:]\x88]\x03]+HOPPINGFIRE-27fc801-modded\x00\x00\x00\x00\x00\x00\x00\x00', b'\x01\x02~\xe0\x13\xb4\x84Gz\xcf(\xd4w\xa7\x9bZ\x1a\xe82\xd1\xe1\x1bLm\xc8\n\xcd\xd4\xfb\x88\xf8\xc6\xdbt\\v\x89~\xd1.e\xc8\xa8o\x9c`\xd5\xa8\x97\x11l\xf2g\xcb\xa8\xcf\r\x869\xd3\xb5\xd5\x9a\xa0my\x9f\x87\xebX\x0b\x9e_\x11\xdc!\x1e\x9f\xb6j\xbb6\x99\x99\x90D\xf8\xfe\x14h\x01\x16#\x936B\x86\xc6\x00\x00g\x00\x00\x01\x00\x00d\xb8\xa9d\x01\x02\x00\x06\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\n\x00\x00\x00\x00;\x023\x80', b"\x01\x0284\xf1a\x86z\x8e\xf2\xe5'\xf7\xfe1\x8d\x96R\x0c\xe7\x1fj#\xaf\xbd/\xba\x10e\xd1\xccQ-\xcf/>\xa5g\xc6\xd8\x9cO \xe7~\xb3\xda\xe0\x05vg\xfb\x02&T\x93\xa0\xd4\x95\x8e\xd5L\x12\x9a\xf7\xe6\x9f\x87\xebX\x0b\x9e_\x11\xdc!\x1e\x9f\xb6j\xbb6\x99\x99\x90D\xf8\xfe\x14h\x01\x16#\x936B\x86\xc6\x00\x00g\x00\x00\x01\x00\x00d\xb8\xa9d\x01\x03\x00\x06\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\n\x00\x00\x00\x00;\x023\x80"])
```
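The fix, roughly (the sleep placement is a sketch of the commit's intent, inserted before the second gossip query):
```python
# Hacky: gossip_store writes aren't synchronized with connectd, so give
# it a second to notice the dying flag before querying gossip again.
time.sleep(1)
assert len(get_gossip(l1)) == 2
```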
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
While one side's channel_update was not produced by us, we have a vested interest in propagating it.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Added: Protocol: When we send our own gossip when a peer connects, also send any incoming channel_updates.
@endothermicdev and I found this while investigating a "nobody sees my node_announcement" bug report.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Fixes: #6410
Reported-by: benjaminchodroff on discord
Changelog-Fixed: Protocol: When we send our own gossip when a peer connects, send our node_announcement too (regression in v23.05)
Fixes: #6368
Changelog-Fixed: Protocol: we no longer gossip about recently-closed channels (Eclair gets upset with this).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
If we mine too fast, gossip can reach a node which considers it too far in the future. Break it up (see the sketch after the log).
```
    @pytest.mark.developer("Too slow without --dev-fast-gossip")
    def test_routing_gossip(node_factory, bitcoind):
        nodes = node_factory.get_nodes(5)

        for i in range(len(nodes) - 1):
            src, dst = nodes[i], nodes[i + 1]
            src.rpc.connect(dst.info['id'], 'localhost', dst.port)
            src.openchannel(dst, CHANNEL_SIZE, confirm=False, wait_for_announce=False)

        # openchannel calls fundwallet which mines a block; so first channel
        # is 4 deep, last is unconfirmed.

        # Allow announce messages.
        mine_funding_to_announce(bitcoind, nodes, num_blocks=6, wait_for_mempool=1)

        # Deep check that all channels are in there
        comb = []
        for i in range(len(nodes) - 1):
            comb.append((nodes[i].info['id'], nodes[i + 1].info['id']))
            comb.append((nodes[i + 1].info['id'], nodes[i].info['id']))

        def check_gossip(n):
            seen = []
            channels = n.rpc.listchannels()['channels']

            for c in channels:
                seen.append((c['source'], c['destination']))

            missing = set(comb) - set(seen)
            logging.debug("Node {id} is missing channels {chans}".format(
                id=n.info['id'],
                chans=missing)
            )
            return len(missing) == 0

        for n in nodes:
>           wait_for(lambda: check_gossip(n))

tests/test_gossip.py:721:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

success = <function test_routing_gossip.<locals>.<lambda> at 0x7f3200534ef0>
timeout = 180

    def wait_for(success, timeout=TIMEOUT):
        start_time = time.time()
        interval = 0.25
        while not success():
            time_left = start_time + timeout - time.time()
            if time_left <= 0:
>               raise ValueError("Timeout while waiting for {}".format(success))
E               ValueError: Timeout while waiting for <function test_routing_gossip.<locals>.<lambda> at 0x7f3200534ef0>
```
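What "break it up" amounts to, sketched as a helper in the spirit of the test suite's `mine_funding_to_announce` (the exact block counts and body are assumptions):
```python
from pyln.testing.utils import sync_blockheight

def mine_funding_to_announce(bitcoind, nodes, num_blocks=5, wait_for_mempool=0):
    """Mine enough blocks to announce, but never leave a node so far
    behind that it treats the channel_announcement as from the future."""
    # Mine all but one block, then make sure everyone has caught up...
    bitcoind.generate_block(num_blocks - 1, wait_for_mempool=wait_for_mempool)
    sync_blockheight(bitcoind, nodes)
    # ...before the final block that makes announcements valid.
    bitcoind.generate_block(1)
```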
This obsoletes the use of `--announce-addr-dns`, which I know Michael
didn't really like either.
Changelog-Deprecated: Config: `announce-addr-dns`; use `--bind-addr=dns:ADDR` for finer control.
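In config-file terms, the move is roughly (hostname illustrative):
```
# Deprecated, applies to DNS announcement globally:
announce-addr-dns=true
# Finer control: give the DNS address explicitly.
bind-addr=dns:ln.example.com:9735
```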
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This was previously the role of connectd, but it's actually more
efficient for us to do it: connectd has to sweep through the entire
gossip_store, but we already have data structures for this.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We were in fact feeding l1 its own gossip, which it doesn't ratelimit
(this was a bit fuzzy before, but is definitely the case now!).
So make this node actually l3, so we test what we intended to test.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>