This does not validate a node announcement or address: it simply
selects a node at random from the gossmap and asks lightningd
to attempt a connection to it.
Gossipd uses this to ask lightningd (via connectd) to initiate
a connection to a new gossip peer. This can be used when there
are too few already-connected peers to gossip with.
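For illustration, here is a rough Python sketch of the same idea using pyln-client's JSON-RPC (the helper name connect_random_gossip_peer is invented; gossipd does this internally in C against its gossmap):
```
import random
from pyln.client import LightningRpc

def connect_random_gossip_peer(rpc: LightningRpc):
    # No announcement/address validation here: just pick any known node
    # and let lightningd/connectd do the real work (it may simply fail).
    nodes = rpc.listnodes()['nodes']
    if not nodes:
        return None
    return rpc.connect(random.choice(nodes)['nodeid'])
```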
Changelog-Changed: Gossipd can now request connections to additional nodes for improved gossip sync
We can get more gossip_filter messages now, and we can also go over
max-messages, so increase that limit too:
```
del tally['query_short_channel_ids']
del tally['query_channel_range']
del tally['ping']
> assert tally == {'channel_announce': 1,
'channel_update': 3,
'node_announce': 1,
'gossip_filter': 1}
E AssertionError: assert {'channel_ann..._announce': 1} == {'channel_ann..._announce': 1}
E Omitting 2 identical items, use -vv to show
E Differing items:
E {'gossip_filter': 2} != {'gossip_filter': 1}
E {'channel_update': 2} != {'channel_update': 3}
E Full diff:
E {
E 'channel_announce': 1,...
E
E ...Full output truncated (10 lines hidden), use '-vv' to show
tests/test_gossip.py:2326: AssertionError
```
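The test presumably needs to tolerate the extra messages; a hedged sketch of such an adjustment (the real fix may differ):
```
# Hypothetical loosening of the failing assertion above: gossip_filter
# can now arrive more than once, and channel_update counts can vary,
# so check bounds instead of exact equality.
assert tally['channel_announce'] == 1
assert tally['node_announce'] == 1
assert tally['gossip_filter'] >= 1
assert 1 <= tally['channel_update'] <= 3
```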
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We actually mine *300* blocks, not 200, and if the timing is right l1
can have mined the txid before mine_txid_or_rbf() checks the mempool:
```
def test_onchaind_replay(node_factory, bitcoind):
disconnects = ['+WIRE_REVOKE_AND_ACK', 'permfail']
# Feerates identical so we don't get gratuitous commit to update them
l1, l2 = node_factory.line_graph(2, opts=[{'watchtime-blocks': 201, 'cltv-delta': 101,
'disconnect': disconnects,
'feerates': (7500, 7500, 7500, 7500)},
{'watchtime-blocks': 201, 'cltv-delta': 101}],
wait_for_announce=True)
inv = l2.rpc.invoice(10**8, 'onchaind_replay', 'desc')
rhash = inv['payment_hash']
routestep = {
'amount_msat': 10**8 - 1,
'id': l2.info['id'],
'delay': 101,
'channel': first_scid(l1, l2)
}
l1.rpc.sendpay([routestep], rhash, payment_secret=inv['payment_secret'])
l1.daemon.wait_for_log('sendrawtx exit 0')
bitcoind.generate_block(1, wait_for_mempool=1)
# Wait for nodes to notice the failure; this search needle is after the
# DB commit so we're sure the tx entries in onchaindtxs have been added
l1.daemon.wait_for_log("Deleting channel .* due to the funding outpoint being spent")
l2.daemon.wait_for_log("Deleting channel .* due to the funding outpoint being spent")
# We should at least have the init tx now
assert len(l1.db_query("SELECT * FROM channeltxs;")) > 0
assert len(l2.db_query("SELECT * FROM channeltxs;")) > 0
# Generate some blocks so we restart the onchaind from DB (we rescan
# last_height - 100)
bitcoind.generate_block(100)
sync_blockheight(bitcoind, [l1, l2])
# l1 should still have a running onchaind
assert len(l1.db_query("SELECT * FROM channeltxs;")) > 0
l2.rpc.stop()
l1.restart()
# Can't wait for it, it's after the "Server started" wait in restart()
assert l1.daemon.is_in_log(r'Restarting onchaind \(ONCHAIN\): closed in block 109')
# l1 should still notice that the funding was spent and that we should react to it
_, txid, blocks = l1.wait_for_onchaind_tx('OUR_DELAYED_RETURN_TO_WALLET',
'OUR_UNILATERAL/DELAYED_OUTPUT_TO_US')
assert blocks == 200
bitcoind.generate_block(200)
# Could be RBF!
> l1.mine_txid_or_rbf(txid)
tests/test_closing.py:1864:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
contrib/pyln-testing/pyln/testing/utils.py:1375: in mine_txid_or_rbf
wait_for(lambda: rbf_or_txid_broadcast(txids))
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
success = <function LightningNode.mine_txid_or_rbf.<locals>.<lambda> at 0x7f9b129c4550>
timeout = 180
def wait_for(success, timeout=TIMEOUT):
start_time = time.time()
interval = 0.25
while not success():
time_left = start_time + timeout - time.time()
if time_left <= 0:
> raise ValueError("Timeout while waiting for {}".format(success))
E ValueError: Timeout while waiting for <function LightningNode.mine_txid_or_rbf.<locals>.<lambda> at 0x7f9b129c4550>
```
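Conceptually, the fix must accept that the txid may already be confirmed by the time we look; a sketch of such a tolerant predicate (helper name invented, bitcoind RPC calls are standard):
```
def txid_mined_or_pending(bitcoind, txid):
    # Succeed whether the tx is still in the mempool or was already
    # mined (the race described above).
    if txid in bitcoind.rpc.getrawmempool():
        return True
    try:
        # Sketch assumes -txindex so confirmed txs are queryable.
        tx = bitcoind.rpc.getrawtransaction(txid, True)
        return tx.get('confirmations', 0) > 0
    except Exception:
        return False
```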
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
The test program has a leak, so the address sanitizer complains and makes it
"fail" the zlib detection test!
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We can fix the median calculation by removing the (unused) reverse edges.
Also analyze the failure case in test_real_data: it's a genuine edge case, so
hardcode that one as "ok".
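To see why the reverse edges matter, a small numeric sketch (assuming the unused reverse edges carry zero cost, which drags the median down):
```
from statistics import median

fees = [10, 12, 15, 100]               # costs of the real edges
with_reverse = fees + [0] * len(fees)  # buggy sample includes reverse edges

assert median(fees) == 13.5            # the median we actually want
assert median(with_reverse) == 5.0     # skewed by zero-cost reverse edges
```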
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
The ratio of the medians of the fee and probability costs is, overall, not
a bad factor for combining these two features. This is what
test_real_data shows.
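Schematically, the combination looks like this (a sketch, not askrene's actual code):
```
from statistics import median

def combined_costs(fee_costs, prob_costs):
    # Scale probability costs so a typical (median) probability cost
    # weighs about as much as a typical fee cost.
    k = median(fee_costs) / median(prob_costs)
    return [f + k * p for f, p in zip(fee_costs, prob_costs)]
```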
Changelog-None
Signed-off-by: Lagrang3 <lagrang3@protonmail.com>
The fee_fallback test would fail after fixing the computation of the
median. We can now restore it by making the probability cost factor
1000x higher than the ratio of the medians. This shows how hard it is to
combine fee and probability costs, and why the current approach is so
fragile.
Changelog-None
Signed-off-by: Lagrang3 <lagrang3@protonmail.com>
Rusty: "We don't generally use NDEBUG in our code"
Instead use a compile time flag ASKRENE_UNITTEST to make checks on unit
tests that we don't normally need on release code.
Changelog-None
Signed-off-by: Lagrang3 <lagrang3@protonmail.com>
- use graph_max_num_arcs/nodes instead of tal_count in bound checks,
- don't use ccan/lqueue; use a minimalistic array-based queue
implementation instead (see the sketch below),
- add missing const qualifiers to temporary tal allocators,
- check preconditions with assert,
- remove the inline specifier from static functions,
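The queue idea, sketched in Python (the real code is C over a preallocated array, with the bound coming from graph_max_num_arcs/nodes):
```
class ArrayQueue:
    # Minimal FIFO on a fixed-size ring buffer, replacing ccan/lqueue.
    def __init__(self, maxlen):
        self.buf = [None] * maxlen
        self.head = 0          # index of the next element to pop
        self.size = 0

    def push(self, x):
        assert self.size < len(self.buf)   # precondition, as an assert
        self.buf[(self.head + self.size) % len(self.buf)] = x
        self.size += 1

    def pop(self):
        assert self.size > 0
        x = self.buf[self.head]
        self.head = (self.head + 1) % len(self.buf)
        self.size -= 1
        return x
```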
Changelog-None
Signed-off-by: Lagrang3 <lagrang3@protonmail.com>
The calculation of the median values of the probability and fee costs in the
linear approximation had a bug: it counted non-existing arcs.
Changelog-None: askrene: fix the median
Signed-off-by: Lagrang3 <lagrang3@protonmail.com>
We use an arc "array" in the graph structure, but not all arc indexes
correspond to real topological arcs. We must be careful when iterating
through all arcs, and check that they are enabled before operating
on them.
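The resulting iteration pattern, sketched in Python (field names such as `enabled` are illustrative of the C structure):
```
def for_each_real_arc(graph, fn):
    # Indexes run over the full array bound (graph_max_num_arcs), but
    # only some of them correspond to real topological arcs.
    for idx in range(graph.max_num_arcs):
        arc = graph.arcs[idx]
        if arc is None or not arc.enabled:
            continue                 # hole: no real arc at this index
        fn(arc)
```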
Changelog-None: askrene: fix bug, not all arcs exist
Signed-off-by: Lagrang3 <lagrang3@protonmail.com>
Add a new function to compute an MCF using a more general description of
the problem. I call it mcf_refinement because it can start from a
feasible flow (though this is not necessary) and adapt it to achieve
optimality.
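One classic way to read "start from a feasible flow and adapt it" is cycle canceling on the residual graph; here is an illustrative Python sketch of that idea (not necessarily askrene's exact algorithm):
```
def refine(nodes, arcs, flow):
    # arcs: list of (u, v, capacity, cost); flow: dict arc index -> units.
    # Repeatedly cancel negative-cost residual cycles; feasibility is
    # preserved and the flow cost strictly decreases until optimal.
    def residual():
        for i, (u, v, cap, cost) in enumerate(arcs):
            if flow[i] < cap:
                yield u, v, cost, i, +1    # room to push forward
            if flow[i] > 0:
                yield v, u, -cost, i, -1   # room to push back

    while True:
        # Bellman-Ford from a virtual source attached to every node.
        dist = {n: 0 for n in nodes}
        pred, relaxed = {}, None
        for _ in range(len(nodes)):
            relaxed = None
            for u, v, cost, i, sign in residual():
                if dist[u] + cost < dist[v]:
                    dist[v] = dist[u] + cost
                    pred[v] = (u, i, sign)
                    relaxed = v
        if relaxed is None:
            return flow                    # no negative cycle: optimal
        for _ in range(len(nodes)):        # step back onto the cycle
            relaxed = pred[relaxed][0]
        cycle, v = [], relaxed
        while True:
            u, i, sign = pred[v]
            cycle.append((i, sign))
            v = u
            if v == relaxed:
                break
        delta = min(arcs[i][2] - flow[i] if s > 0 else flow[i]
                    for i, s in cycle)
        for i, s in cycle:
            flow[i] += s * delta
```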
Changelog-None: askrene: add an MCF refinement
Signed-off-by: Lagrang3 <lagrang3@protonmail.com>
It is just a copy-paste of "dijkstra", but the new name says what it
actually is: not an implementation of Dijkstra's minimum-cost-path
algorithm, but a helper data structure.
I keep the old "dijkstra.h/c" files for the moment to avoid breaking the
current code.
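Its role, sketched in Python on top of heapq, using lazy deletion in place of decrease-key (this mirrors the helper's purpose, not its C implementation):
```
import heapq

class PriorityQueue:
    # Min-priority queue with update, as Dijkstra-style loops need.
    def __init__(self):
        self.heap = []
        self.best = {}                     # live priority per key

    def update(self, key, priority):
        # Lazy decrease-key: push a fresh entry; stale entries are
        # skipped on pop.
        if priority < self.best.get(key, float('inf')):
            self.best[key] = priority
            heapq.heappush(self.heap, (priority, key))

    def pop(self):
        while self.heap:
            priority, key = heapq.heappop(self.heap)
            if self.best.get(key) == priority:
                del self.best[key]
                return key, priority
        raise IndexError('pop from empty priority queue')
```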
Changelog-EXPERIMENTAL: askrene: add priorityqueue
Signed-off-by: Lagrang3 <lagrang3@protonmail.com>
While we have (I hope!) fixed the underlying sync problem, this can still happen with
older gossip. So now we tell it the channel is dead, so it won't happen more than once.
Fixes: #7703
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This can help us backfill any missing gossip if our current
peers haven't been the most reliable.
Changelog-Changed: Gossipd requests a full sync from a random peer every hour.
Based on gossip sync data from random network peers, listening to only 5
peers will not reliably catch all gossip traffic. For now, add extra
redundancy.
Allow our peer to change their funding pubkey during a splice.
Changelog-Changed: Support added for peers that wish to rotate their funding pubkey during a splice.
Set the remote funding pubkey on both lightningd and channeld when mutual splice lock is achieved.
This will be needed once rotating funding keys is enabled during splicing
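A hypothetical sketch of the bookkeeping (all names invented for illustration; the real change lives in lightningd's and channeld's C channel state):
```
from dataclasses import dataclass

@dataclass
class Channel:
    local_funding_pubkey: str
    remote_funding_pubkey: str

def on_mutual_splice_lock(chan, new_remote_funding_pubkey):
    # Once both sides exchange splice_locked, the spliced funding output
    # (and any rotated remote key) becomes the operative one.
    chan.remote_funding_pubkey = new_remote_funding_pubkey
```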
Changelog-None