We need to store the channel capacity for channel_announcement: hand it
in directly rather than having the gossip_store code do a lookup.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Save some overhead, plus gets us ready for giving subdaemons direct
store access. This is the first time we *upgrade* the gossip_store,
rather than just discarding.
The downside is that we need to add an extra message after each
channel_announcement, containing the channel capacity.
After:
store_load_msec:28337-30288(28975+/-7.4e+02)
vsz_kb:582304-582316(582306+/-4.8)
store_rewrite_sec:11.240000-11.800000(11.55+/-0.21)
listnodes_sec:1.800000-1.880000(1.84+/-0.028)
listchannels_sec:22.690000-26.260000(23.878+/-1.3)
routing_sec:2.280000-9.570000(6.842+/-2.8)
peer_write_all_sec:48.160000-51.480000(49.608+/-1.1)
Differences:
-vsz_kb:582320
+vsz_kb:582316
-listnodes_sec:2.100000-2.170000(2.118+/-0.026)
+listnodes_sec:1.800000-1.880000(1.84+/-0.028)
-peer_write_all_sec:51.600000-52.550000(52.188+/-0.34)
+peer_write_all_sec:48.160000-51.480000(49.608+/-1.1)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We know exactly how many there will be, so allocate an entire array up-front.
-listnodes_sec:2.540000-2.610000(2.584+/-0.029)
+listnodes_sec:2.100000-2.170000(2.118+/-0.026)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Entries we've already loaded expect to exist in the store. We could go
back and remove them all, but instead just truncate at the known-good
point.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Some tests require dev support, but the rest can run. We simplify
the gossip_store output so it's the same in non-dev mode too.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Now we can benchmark, and remove 500 bytes per node.
MCP results from 5 runs, min-max(mean +/- stddev):
store_load_msec:35093-37907(36146+/-1.1e+03)
vsz_kb:555168
store_rewrite_sec:12.120000-13.750000(12.7+/-0.6)
listnodes_sec:1.270000-1.370000(1.322+/-0.039)
listchannels_sec:29.770000-31.600000(30.82+/-0.64)
routing_sec:0.00
peer_write_all_sec:63.630000-67.850000(65.432+/-1.7)
MCP notable changes from pre-Dijkstra (>1 stddev):
-vsz_kb:577456
+vsz_kb:555168
-routing_sec:60.70
+routing_sec:12.04
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Do it inside the can_reach() function, which is less optimal for BFG
which does 20 ops on the same channel, but fine for Dijkstra.
This does have a measurable cost, so we might want to use
non-cryptographic fuzz in future:
$ gossipd/test/run-bench-find_route 100000 100:
Before:
100 (100 succeeded) routes in 100000 nodes in 97346 msec (973461784 nanoseconds per route)
After:
100 (100 succeeded) routes in 100000 nodes in 113381 msec (1133813412 nanoseconds per route)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
If a route is too long, we try to bias Dijkstra towards choosing a
shorter route by adding a per-hop cost. We do a naive "shortest path"
pass, then using that cost as a ceiling on per-hop cost, we do a
binary search.
There are some subtleties: we use risk rather than total as our
counter field (we normally bias this by 1 anyway, so it's easy to make
that a variable), and we set riskfactor to a minimal value once we're
iterating. It's good enough to get a solution, we don't need to do a
2-dimensional search on riskfactor and riskbias.
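For illustration, a minimal sketch of that binary search; `find_route_biased()`, `MAX_HOPS`, `struct route` and friends are hypothetical stand-ins for the real routing.c pieces:
```
#include <ccan/short_types/short_types.h>
#include <ccan/tal/tal.h>

struct routing_state;			/* gossipd's routing state */
struct route_hop;			/* one hop of a found route */
struct route { struct route_hop **hops; };	/* hops is a tal array */

#define MAX_HOPS 20			/* illustrative hop limit */

/* Hypothetical: run Dijkstra with `bias` added to every hop's cost. */
struct route *find_route_biased(const tal_t *ctx,
				struct routing_state *rstate, u64 bias);

/* Binary-search the smallest per-hop bias that still yields a
 * short-enough route.  The naive shortest-path pass told us that
 * bias == ceiling works, so the answer lies in [0, ceiling]. */
static struct route *bias_for_length(const tal_t *ctx,
				     struct routing_state *rstate,
				     u64 ceiling)
{
	u64 lo = 0, hi = ceiling;

	while (lo < hi) {
		u64 bias = lo + (hi - lo) / 2;
		struct route *r = find_route_biased(ctx, rstate, bias);

		if (r && tal_count(r->hops) <= MAX_HOPS)
			hi = bias;	/* short enough: try a cheaper bias */
		else
			lo = bias + 1;	/* too long: needs more per-hop cost */
		tal_free(r);
	}
	return find_route_biased(ctx, rstate, hi);
}
```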
Of course, this is extremely slow if we hit it on our benchmark,
though it doesn't happen in a more realistic network:
$ gossipd/test/run-bench-find_route 100000 100:
Before:
100 (79 succeeded) routes in 100000 nodes in 25341 msec (253412314 nanoseconds per route)
After:
100 (100 succeeded) routes in 100000 nodes in 97346 msec (973461784 nanoseconds per route)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Our uintmap can be a little slow with all the reallocation, so leave
NULL entries and walk to find the first one. Since we don't clean
them up, keep a cache of where the min non-all-NULL value is in the
heap.
The benefit is clearer on really large tests, so here's 1M nodes:
Comparison using gossipd/test/run-bench-find_route 1000000 10:
Before:
10 (10 succeeded) routes in 1000000 nodes in 91995 msec (9199532898 nanoseconds per route)
After:
10 (10 succeeded) routes in 1000000 nodes in 20605 msec (2060539287 nanoseconds per route)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Use a uintmap as our minheap.
Note that Dijkstra can give overlength routes, so some checks are disabled.
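A rough sketch of the idea; the `UINTMAP`/`uintmap_*` calls follow my recollection of ccan/intmap and should be treated as assumptions, and `struct node`/`heap_entry` are illustrative:
```
#include <ccan/intmap/intmap.h>
#include <ccan/short_types/short_types.h>

struct node;				/* gossipd node; details elided */

/* Key = path cost so far; value = chain of entries with that cost.
 * Popping the minimum is just "take the first key in the map". */
struct heap_entry {
	struct node *node;
	struct heap_entry *next;	/* others with exactly the same cost */
};

struct unvisited {
	UINTMAP(struct heap_entry *) costmap;
};

static void heap_push(struct unvisited *u, u64 cost, struct heap_entry *e)
{
	e->next = uintmap_get(&u->costmap, cost);
	if (e->next)
		uintmap_del(&u->costmap, cost);
	uintmap_add(&u->costmap, cost, e);
}

static struct heap_entry *heap_pop_min(struct unvisited *u)
{
	u64 cost;
	struct heap_entry *e = uintmap_first(&u->costmap, &cost);

	if (!e)
		return NULL;
	uintmap_del(&u->costmap, cost);
	if (e->next)
		uintmap_add(&u->costmap, cost, e->next);
	return e;
}
```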
Comparison using gossipd/test/run-bench-find_route 100000 10:
Before:
10 (10 succeeded) routes in 100000 nodes in 120087 msec (12008708402 nanoseconds per route)
After:
10 (10 succeeded) routes in 100000 nodes in 2269 msec (226925462 nanoseconds per route)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This is a weakness with Dijkstra, so write an explicit unit test that
we can find a short enough (but more expensive) route.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
When we compact the store, we need to adjust the broadcast index for
peers so they know where they're up to.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This requires some trickiness when we want to re-add unannounced channels
to the store after compaction, so we extract a common "copy_message" to
transfer from old store to new.
MCP results from 5 runs, min-max(mean +/- stddev):
store_load_msec:36034-37853(37109.8+/-5.9e+02)
vsz_kb:577456
store_rewrite_sec:12.490000-13.250000(12.862+/-0.27)
listnodes_sec:1.250000-1.480000(1.364+/-0.09)
listchannels_sec:30.820000-31.480000(31.068+/-0.24)
routing_sec:26.940000-27.990000(27.616+/-0.39)
peer_write_all_sec:65.690000-68.600000(66.698+/-0.99)
MCP notable changes from previous patch (>1 stddev):
-vsz_kb:1202316
+vsz_kb:577456
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
The next patch causes us to access the store while loading (we read
channel_updates for local peers), which messes up loading due to the
lseek involved.
Using pread() is atomic with seek & read, and also a bit more
efficient. Make the header contiguous too, while we're here.
We don't need pwrite: we always open with O_APPEND which means the
seek-to-end is implicit.
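Purely as an illustration (plain POSIX, not the gossip_store code itself): pread() reads at an absolute offset without touching the fd's shared offset, so the loader and the O_APPEND writer can't disturb each other:
```
#include <stdbool.h>
#include <unistd.h>

/* Read exactly `len` bytes at `off`; the fd's own file offset is left
 * untouched, unlike an lseek()+read() pair.  (Short reads are treated
 * as failure for brevity.) */
static bool read_at(int fd, void *buf, size_t len, off_t off)
{
	return pread(fd, buf, len, off) == (ssize_t)len;
}
```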
MCP results from 5 runs, min-max(mean +/- stddev):
store_load_msec:36771-38289(37529.6+/-5.3e+02)
vsz_kb:1202316
store_rewrite_sec:12.460000-13.280000(12.784+/-0.29)
listnodes_sec:1.240000-1.410000(1.34+/-0.058)
listchannels_sec:29.850000-31.840000(30.908+/-0.69)
routing_sec:27.800000-31.790000(28.822+/-1.5)
peer_write_all_sec:66.200000-68.720000(67.44+/-0.84)
MCP notable changes from previous patch (>1 stddev):
-store_load_msec:39207-45089(41374.6+/-2.2e+03)
+store_load_msec:36771-38289(37529.6+/-5.3e+02)
-store_rewrite_sec:15.090000-16.790000(15.654+/-0.63)
+store_rewrite_sec:12.460000-13.280000(12.784+/-0.29)
-peer_write_all_sec:66.830000-76.850000(71.976+/-3.6)
+peer_write_all_sec:66.200000-68.720000(67.44+/-0.84)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
When we no longer keep channel_updates in memory, there's a path where
we access them on load: when we promote a local channel to an
announced channel.
This breaks at the moment, since gs->fd == -1; change it to a writable
flag instead.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
The txout_script field is unused; the local_disable only applies to
the handful of local channels, so move that into a hash table.
MCP results from 5 runs, min-max(mean +/- stddev):
store_load_msec:39207-45089(41374.6+/-2.2e+03)
vsz_kb:1202316
store_rewrite_sec:15.090000-16.790000(15.654+/-0.63)
listnodes_sec:1.290000-3.790000(1.938+/-0.93)
listchannels_sec:30.190000-32.120000(31.31+/-0.69)
routing_sec:28.220000-31.340000(29.314+/-1.2)
peer_write_all_sec:66.830000-76.850000(71.976+/-3.6)
MCP notable changes from previous patch (>1 stddev):
-store_load_msec:35107-37944(36686+/-1e+03)
+store_load_msec:39207-45089(41374.6+/-2.2e+03)
-vsz_kb:1218036
+vsz_kb:1202316
-listchannels_sec:28.510000-30.270000(29.6+/-0.6)
+listchannels_sec:30.190000-32.120000(31.31+/-0.69)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We used to have a `struct chan` while we were waiting for an update; now we
keep that internally. So a `struct chan` without a channel_announcement
in the store is private, and one with a channel_announcement is public.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Reload them from disk if they do listnodes.
MCP results from 5 runs, min-max(mean +/- stddev):
store_load_msec:35390-38659(37336.4+/-1.3e+03)
vsz_kb:1780516
store_rewrite_sec:13.800000-16.800000(15.02+/-0.98)
listnodes_sec:1.280000-1.530000(1.382+/-0.096)
listchannels_sec:28.700000-30.440000(29.34+/-0.68)
routing_sec:30.120000-31.080000(30.526+/-0.35)
peer_write_all_sec:65.910000-76.850000(69.462+/-4.1)
MCP notable changes from previous patch (>1 stddev):
-vsz_kb:1792996
+vsz_kb:1780516
-listnodes_sec:1.030000-1.120000(1.068+/-0.032)
+listnodes_sec:1.280000-1.530000(1.382+/-0.096)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We currently create a struct chan when we receive a `channel_announcement`,
but we can only broadcast once we have a `channel_update` (since that
provides the timestamp).
This means a `struct chan` can be in a weird state where it exists,
but is unusable (can't use without an update), and also means we need to
keep the channel_announcement message around until an update arrives, so
we can put it in the gossip_store.
Instead, keep track of these "unupdated" channels separately, and check
for them in all the places we search for a specific channel to update.
MCP results from 5 runs, min-max(mean +/- stddev):
store_load_msec:30640-33236(32202+/-8.7e+02)
vsz_kb:1812956
store_rewrite_sec:13.410000-16.970000(14.438+/-1.3)
listnodes_sec:0.590000-0.660000(0.62+/-0.033)
listchannels_sec:28.140000-29.560000(28.816+/-0.56)
routing_sec:29.530000-32.590000(30.352+/-1.1)
peer_write_all_sec:60.380000-61.320000(60.836+/-0.37)
MCP notable changes from previous patch (>1 stddev):
-vsz_kb:1812904
+vsz_kb:1812956
-store_rewrite_sec:21.390000-27.070000(23.596+/-2.4)
+store_rewrite_sec:13.410000-16.970000(14.438+/-1.3)
-listnodes_sec:1.120000-1.230000(1.176+/-0.044)
+listnodes_sec:0.590000-0.660000(0.62+/-0.033)
-listchannels_sec:38.900000-50.580000(44.716+/-3.9)
+listchannels_sec:28.140000-29.560000(28.816+/-0.56)
-routing_sec:45.080000-48.160000(46.814+/-1.1)
+routing_sec:29.530000-32.590000(30.352+/-1.1)
-peer_write_all_sec:58.780000-87.150000(72.278+/-9.7)
+peer_write_all_sec:60.380000-61.320000(60.836+/-0.37)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
If we have a channel_announcement, we catch any node_announcement for
either end while we validate the channel_announcement. But if we have
multiple channel_announcements and the first one failed to verify, it
would remove this catch, meaning we'd discard following node_announcements
even though there was a pending channel_announcement.
The answer is to use a simple reference count, and as a further
optimization, only place the `pending_node_announce` if there's no
node already.
We also move the process_pending_node_announcement() calls lower down,
so *any* new channel creation checks it. This is more robust, and
will prove useful for the next patch, where we can use the same
mechanism to handle node_announcements on channel_announcements which
are verified, but don't yet have a channel_update.
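A sketch of the reference-counting idea with hypothetical names (the real gossipd structures differ):
```
#include <ccan/short_types/short_types.h>
#include <ccan/tal/tal.h>

/* Hypothetical stand-in for the real gossipd structure. */
struct pending_node_announce {
	u8 *node_announcement;	/* deferred msg, if we've already seen one */
	size_t refcount;	/* pending channel_announcements that mention us */
};

static void pna_ref(struct pending_node_announce *pna)
{
	pna->refcount++;
}

static void pna_unref(struct pending_node_announce *pna)
{
	/* Only drop the catch once *no* pending channel_announcement
	 * still mentions this node; the real code would also remove it
	 * from its lookup map here. */
	if (--pna->refcount == 0)
		tal_free(pna);
}
```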
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This is currently done higher up, in handle_channel_update(), but
that's one reason why handle_channel_update() has to do a channel
lookup. Moving the check down means handle_channel_update() can do a
minimal "get node id for this channel" so it can check the signature.
This helps, because the chan lookup semantics are changing in the next
few patches.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
If we need the payload, pull it from the gossip store.
MCP results from 5 runs, min-max(mean +/- stddev):
store_load_msec:30189-52561(39416.4+/-8.8e+03)
vsz_kb:1812904
store_rewrite_sec:21.390000-27.070000(23.596+/-2.4)
listnodes_sec:1.120000-1.230000(1.176+/-0.044)
listchannels_sec:38.900000-50.580000(44.716+/-3.9)
routing_sec:45.080000-48.160000(46.814+/-1.1)
peer_write_all_sec:58.780000-87.150000(72.278+/-9.7)
MCP notable changes from previous patch (>1 stddev):
-vsz_kb:2288784
+vsz_kb:1812904
-store_rewrite_sec:38.060000-39.130000(38.426+/-0.39)
+store_rewrite_sec:21.390000-27.070000(23.596+/-2.4)
-listnodes_sec:0.750000-0.850000(0.794+/-0.042)
+listnodes_sec:1.120000-1.230000(1.176+/-0.044)
-listchannels_sec:30.740000-31.760000(31.096+/-0.35)
+listchannels_sec:38.900000-50.580000(44.716+/-3.9)
-routing_sec:29.600000-33.560000(30.472+/-1.5)
+routing_sec:45.080000-48.160000(46.814+/-1.1)
-peer_write_all_sec:49.220000-52.690000(50.892+/-1.3)
+peer_write_all_sec:58.780000-87.150000(72.278+/-9.7)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Instead of an arbitrary counter, we can use the file offset for our
partial ordering, removing a field. It takes some care when we compact
the store, however, as this field changes.
MCP results from 5 runs, min-max(mean +/- stddev):
store_load_msec:34271-35283(34789.6+/-3.3e+02)
vsz_kb:2288784
store_rewrite_sec:38.060000-39.130000(38.426+/-0.39)
listnodes_sec:0.750000-0.850000(0.794+/-0.042)
listchannels_sec:30.740000-31.760000(31.096+/-0.35)
routing_sec:29.600000-33.560000(30.472+/-1.5)
peer_write_all_sec:49.220000-52.690000(50.892+/-1.3)
MCP notable changes from previous patch (>1 stddev):
-store_load_msec:35685-38538(37090.4+/-9.1e+02)
+store_load_msec:34271-35283(34789.6+/-3.3e+02)
-vsz_kb:2288768
+vsz_kb:2288784
-peer_write_all_sec:51.140000-58.350000(55.69+/-2.4)
+peer_write_all_sec:49.220000-52.690000(50.892+/-1.3)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This is more compact, but also required once we replace the arbitrary
"index" with an actual offset into the gossip store. That will let us
remove the in-memory variants entirely.
MCP results from 5 runs, min-max(mean +/- stddev):
store_load_msec:35685-38538(37090.4+/-9.1e+02)
vsz_kb:2288768
store_rewrite_sec:35.530000-41.230000(37.904+/-2.3)
listnodes_sec:0.720000-0.810000(0.762+/-0.041)
listchannels_sec:30.750000-35.990000(32.704+/-2)
routing_sec:29.570000-34.010000(31.374+/-1.8)
peer_write_all_sec:51.140000-58.350000(55.69+/-2.4)
MCP notable changes from previous patch (>1 stddev):
-vsz_kb:2621808
+vsz_kb:2288768
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We used an s64 so we could use -1 and save a check, but that's just
silly as we have adjacent non-u64 fields: wastes 7 bytes per node
and 16 per channel.
Interestingly, this seemed to make us a little slower for some reason.
MCP results from 5 runs, min-max(mean +/- stddev):
store_load_msec:35569-38776(37169.8+/-1.2e+03)
vsz_kb:2621808
store_rewrite_sec:35.870000-40.290000(38.14+/-1.6)
listnodes_sec:0.740000-0.800000(0.768+/-0.023)
listchannels_sec:29.820000-32.730000(30.972+/-0.99)
routing_sec:30.110000-30.590000(30.346+/-0.18)
peer_write_all_sec:52.420000-59.160000(54.692+/-2.5)
MCP notable changes from previous patch (>1 stddev):
-store_load_msec:32825-36365(34615.6+/-1.1e+03)
+store_load_msec:35569-38776(37169.8+/-1.2e+03)
-vsz_kb:2637488
+vsz_kb:2621808
-store_rewrite_sec:35.150000-36.200000(35.59+/-0.4)
+store_rewrite_sec:35.870000-40.290000(38.14+/-1.6)
-listnodes_sec:0.590000-0.710000(0.682+/-0.046)
+listnodes_sec:0.740000-0.800000(0.768+/-0.023)
-peer_write_all_sec:49.020000-52.890000(50.376+/-1.5)
+peer_write_all_sec:52.420000-59.160000(54.692+/-2.5)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We can save significant space by combining both sides: so much that we
can reduce the WIRE_LEN_LIMIT to something sane again.
MCP results from 5 runs, min-max(mean +/- stddev):
store_load_msec:34467-36764(35517.8+/-7.7e+02)
vsz_kb:2637488
store_rewrite_sec:35.310000-36.580000(35.816+/-0.44)
listnodes_sec:1.140000-2.780000(1.596+/-0.6)
listchannels_sec:55.390000-58.110000(56.998+/-0.99)
routing_sec:30.330000-30.920000(30.642+/-0.19)
peer_write_all_sec:50.640000-53.360000(51.822+/-0.91)
MCP notable changes from previous patch (>1 stddev):
-store_rewrite_sec:34.720000-35.130000(34.94+/-0.14)
+store_rewrite_sec:35.310000-36.580000(35.816+/-0.44)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Don't turn them to/from pubkeys implicitly. This means nodeids in the store
don't get converted, but bitcoin keys still do.
MCP results from 5 runs, min-max(mean +/- stddev):
store_load_msec:33934-35251(34531.4+/-5e+02)
vsz_kb:2637488
store_rewrite_sec:34.720000-35.130000(34.94+/-0.14)
listnodes_sec:1.020000-1.290000(1.146+/-0.086)
listchannels_sec:51.110000-58.240000(54.826+/-2.5)
routing_sec:30.000000-33.320000(30.726+/-1.3)
peer_write_all_sec:50.370000-52.970000(51.646+/-1.1)
MCP notable changes from previous patch (>1 stddev):
-store_load_msec:46184-47474(46673.4+/-4.5e+02)
+store_load_msec:33934-35251(34531.4+/-5e+02)
-vsz_kb:2638880
+vsz_kb:2637488
-store_rewrite_sec:46.750000-48.280000(47.512+/-0.51)
+store_rewrite_sec:34.720000-35.130000(34.94+/-0.14)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
I tried to just do gossipd, but it was uncontainable, so this ended up being
a complete sweep.
We didn't get much space saving in gossipd, even though we should save
24 bytes per node.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Allocating a htable is overkill for most nodes; we can fit 11 pointers
in the same space (10, since we use 1 to indicate we're using an array).
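An illustrative sketch of the layout trick; the names, sizes and discriminator choice here are mine, not the real routing.h:
```
#include <stdbool.h>

struct chan;				/* gossipd channel; details elided */

#define NUM_IMMEDIATE_CHANS 10
#define ARR_MODE_SENTINEL ((struct chan *)1)	/* illustrative choice */

/* Stand-in: the real chan_map is an htable of struct chan pointers,
 * assumed to occupy roughly 11 pointers' worth of space. */
struct chan_map { void *htable_internals[NUM_IMMEDIATE_CHANS + 1]; };

struct node_chans {
	union {
		struct chan_map map;				/* big nodes */
		struct chan *arr[NUM_IMMEDIATE_CHANS + 1];	/* small nodes */
	} u;
};

/* The spare slot is the discriminator: 10 usable pointers, plus 1 to
 * say "this node uses the inline array". */
static bool node_uses_arr(const struct node_chans *c)
{
	return c->u.arr[NUM_IMMEDIATE_CHANS] == ARR_MODE_SENTINEL;
}
```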
MCP results from 5 runs, min-max(mean +/- stddev):
store_load_msec:45947-47016(46683.4+/-4e+02)
vsz_kb:2639240
store_rewrite_sec:46.950000-49.830000(48.048+/-0.95)
listnodes_sec:1.090000-1.350000(1.196+/-0.095)
listchannels_sec:48.960000-57.640000(53.358+/-2.8)
routing_sec:29.990000-33.880000(31.088+/-1.4)
peer_write_all_sec:49.360000-53.210000(51.338+/-1.4)
MCP notable changes from previous patch (>1 stddev):
-vsz_kb:2641316
+vsz_kb:2639240
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Makes the next step easier.
MCP results from 5 runs, min-max(mean +/- stddev):
store_load_msec:45791-46917(46330.4+/-3.6e+02)
vsz_kb:2641316
store_rewrite_sec:47.040000-48.720000(47.684+/-0.57)
listnodes_sec:1.140000-1.340000(1.2+/-0.072)
listchannels_sec:50.970000-54.250000(52.698+/-1.3)
routing_sec:29.950000-31.010000(30.332+/-0.37)
peer_write_all_sec:51.570000-52.970000(52.1+/-0.54)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Outputs CSV. We add some stats for load times in developer mode, so we can
easily read them out.
peer_read_all_sec doesn't work, since we seem to reject about half the
updates for having bad signatures. It's also very slow...
Routing fails, for unknown reasons, so that failure is ignored in routing_sec.
Results from 5 runs, min-max(mean +/- stddev):
store_load_msec,vsz_kb,store_rewrite_sec,listnodes_sec,listchannels_sec,routing_sec,peer_write_all_sec
39275-44779(40466.8+/-2.2e+03),2899248,41.010000-44.970000(41.972+/-1.5),2.280000-2.350000(2.304+/-0.025),49.770000-63.390000(59.178+/-5),33.310000-34.260000(33.62+/-0.35),42.100000-44.080000(43.082+/-0.67)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Header from folded patch 'fixup!_tools-bench-gossipd.sh__rough_benchmark_for_gossipd_and_the_million_channels_project-2.patch':
fixup! tools/bench-gossipd.sh: rough benchmark for gossipd and the million channels project
Suggested-by: @niftynei
Header from folded patch 'fixup!_tools-bench-gossipd.sh__rough_benchmark_for_gossipd_and_the_million_channels_project-1.patch':
fixup! tools/bench-gossipd.sh: rough benchmark for gossipd and the million channels project
MCP filename change.
Header from folded patch 'tools-bench-gossipd.sh__dont_print_csv_by_default.patch':
tools/bench-gossipd.sh: don't print CSV by default.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Header from folded patch 'fixup!_tools-bench-gossipd.sh__rough_benchmark_for_gossipd_and_the_million_channels_project.patch':
fixup! tools/bench-gossipd.sh: rough benchmark for gossipd and the million channels project
Make shellcheck happy.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Either private or simply not enough confirms. They would have been added
on reconnect, but that's not ideal.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This lets us benchmark without a valid blockchain.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Header from folded patch 'fixup!_gossipd__dev_option_to_allow_unknown_channels.patch':
fixup! gossipd: dev option to allow unknown channels.
Suggested-by: @cdecker
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
For giant nodes, it seems we spend a lot of time memmoving this array.
Normally we'd go for a linked list, but that's actually hard: each
channel has two nodes, so needs two embedded list pointers, and when
iterating there's no good way to figure out which embedded pointer
we'd be using.
So we (ab)use htable; we don't really need an index, but it's good for
cache-friendly iteration (our main operation). We can actually change
to a hybrid later to avoid the extra allocation for small nodes.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
If we asked `bitcoind` for a txout and it failed we were not storing that
information anywhere, meaning that when we see the channel announcement the
next time we'd be reaching out to `lightningd` and `bitcoind` again, just to
see it fail again. This adds an in-memory cache for these failures so we can
just ignore these the next time around.
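A minimal sketch of such a cache, assuming hypothetical helpers (the actual structure and lifetime in the real code may differ):
```
#include <stdbool.h>
#include <ccan/short_types/short_types.h>
#include <ccan/tal/tal.h>

/* Remember short_channel_ids whose txout lookup already failed, so we
 * don't go back to lightningd/bitcoind for them again.
 * failed_scids must be initialised as tal_arr(ctx, u64, 0). */
struct txout_failure_cache {
	u64 *failed_scids;
};

static bool txout_known_bad(const struct txout_failure_cache *c, u64 scid)
{
	for (size_t i = 0; i < tal_count(c->failed_scids); i++)
		if (c->failed_scids[i] == scid)
			return true;
	return false;
}

static void txout_remember_bad(struct txout_failure_cache *c, u64 scid)
{
	if (txout_known_bad(c, scid))
		return;
	size_t n = tal_count(c->failed_scids);
	if (!tal_resize(&c->failed_scids, n + 1))
		return;	/* allocation failure: just forget it */
	c->failed_scids[n] = scid;
}
```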
Fixes #2503
Signed-off-by: Christian Decker <decker.christian@gmail.com>
We need to do it in various places, but we shouldn't do it lightly:
the primitives are there to help us get overflow handling correct.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Basically we tell it that every field ending in '_msat' is a struct
amount_msat, and 'satoshis' is an amount_sat. The exceptions are
channel_update's fee_base_msat which is a u32, and
final_incorrect_htlc_amount's incoming_htlc_amt which is also a
'struct amount_msat'.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
As a side-effect of using amount_msat in gossipd/routing.c, we explicitly
handle overflows and don't need to pre-prune ridiculous-fee channels.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
They're generally used pass-by-copy (unusual for C structs, but
convenient since they're basically u64) and all possibly problematic
operations return WARN_UNUSED_RESULT bool to make you handle the
over/underflow cases.
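The intended usage pattern looks something like this (helper names per my memory of common/amount.h; treat the exact signatures as assumptions):
```
#include <stdbool.h>
#include <common/amount.h>

/* Add a base fee and a proportional fee; on overflow the caller must
 * reject the route/HTLC rather than let the value wrap. */
static bool total_fee(struct amount_msat *total,
		      struct amount_msat base_fee,
		      struct amount_msat proportional_fee)
{
	if (!amount_msat_add(total, base_fee, proportional_fee)) {
		/* Overflow: this route/HTLC must be rejected. */
		return false;
	}
	return true;
}
```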
The new #include in json.h means bolt11.c sees the amount.h definition
of MSAT_PER_BTC, so delete its local version.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We still need to accept it when parsing the database, but this flag
should allow upgrade testing for devs building on top of it.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Up until now, riskfactor was useless due to implementation bugs, and
the default setting was wrong (too low to have an effect on
reasonable payment scenarios).
Let's simplify the definition (by assuming that P(failure) of a node
is 1), to make it a simple percentage. I examined the current network
fees to see what would work, and under this definition, a default of
10 seems reasonable (equivalent to 1000 under the old definition).
It is *this* change which finally fixes our test case! The risk fee
is now 40msat (1500000 * 14 * 10 / 5259600 = 39.9), comparable with
the worst-case fuzz of 50msat (1001 * 0.05 = 50).
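Spelling out the simplified definition behind that arithmetic (52596 ten-minute blocks per year, riskfactor as a percentage); this is an illustration, not the exact routing.c code:
```
#include <stdint.h>

#define BLOCKS_PER_YEAR 52596

/* riskfactor is the annualised cost (as a percentage) of having the
 * amount locked up, pro-rated by the CLTV delay in blocks.
 * 1500000 msat, delay 14, riskfactor 10  =>  ~40 msat. */
static uint64_t risk_fee_msat(uint64_t amount_msat, uint32_t cltv_delay,
			      uint32_t riskfactor)
{
	/* The real code must also guard against overflow here. */
	return amount_msat * cltv_delay * riskfactor
		/ (BLOCKS_PER_YEAR * 100);
}
```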
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We were only comparing by total msatoshis.
Note, this *still* isn't sufficient to fix our indirect problem, as
our risk values are all 1 (the minimum):
lightning_gossipd(25480): 2 hop solution: 1501990 + 2
lightning_gossipd(25480): 3 hop solution: 1501971 + 3
...
lightning_gossipd(25480): => chose 3 hop solution
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We used a u16, and a 1000 multiplier, which meant we wrapped at
riskfactor 66 (66 * 1000 doesn't fit in 16 bits). We also never undid
the multiplier, so we ended up applying 1000x the riskfactor they specified.
This changes us to pass the riskfactor with a 1M multiplier. The next
patch changes the definition of riskfactor to be more useful.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We have a seed, which is for (future!) unit testing consistency. This
makes it change every time, so our pay_direct_test is more useful.
I tried restarting the node around the loop, but it tended to fail
rebinding to the same port for some reason?
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
As a general rule, lightningd shouldn't parse user packets. We move the
parsing into gossipd, and have it respond only to permanent failures.
Note that we should *not* unconditionally remove a channel on
WIRE_INVALID_ONION_HMAC, as this can be triggered (and we do!) by
feeding sendpay a route with an incorrect pubkey.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We had a bug (0ba547ee10) caused by
short_channel_id overflow. If we'd caught this, we'd have terminated
the peer instead of crashing, so add appropriate checks.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Don't do this:
(gdb) bt
#0 0x00007f37ae667c40 in ?? () from /lib/x86_64-linux-gnu/libz.so.1
#1 0x00007f37ae668b38 in ?? () from /lib/x86_64-linux-gnu/libz.so.1
#2 0x00007f37ae669907 in deflate () from /lib/x86_64-linux-gnu/libz.so.1
#3 0x00007f37ae674c65 in compress2 () from /lib/x86_64-linux-gnu/libz.so.1
#4 0x000000000040cfe3 in zencode_scids (ctx=0xc1f118, scids=0x2599bc49 "\a\325{", len=176320) at gossipd/gossipd.c:218
#5 0x000000000040d0b3 in encode_short_channel_ids_end (encoded=0x7fff8f98d9f0, max_bytes=65490) at gossipd/gossipd.c:236
#6 0x000000000040dd28 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=17290511, number_of_blocks=8) at gossipd/gossipd.c:576
#7 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=17290511, number_of_blocks=16) at gossipd/gossipd.c:595
#8 0x000000000040ddee in queue_channel_ranges (peer=0x3868fc8, first_blocknum=17290495, number_of_blocks=32) at gossipd/gossipd.c:596
#9 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=17290495, number_of_blocks=64) at gossipd/gossipd.c:595
#10 0x000000000040ddee in queue_channel_ranges (peer=0x3868fc8, first_blocknum=17290431, number_of_blocks=128) at gossipd/gossipd.c:596
#11 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=17290431, number_of_blocks=256) at gossipd/gossipd.c:595
#12 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=17290431, number_of_blocks=512) at gossipd/gossipd.c:595
#13 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=17290431, number_of_blocks=1024) at gossipd/gossipd.c:595
#14 0x000000000040ddee in queue_channel_ranges (peer=0x3868fc8, first_blocknum=17289408, number_of_blocks=2047) at gossipd/gossipd.c:596
#15 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=17289408, number_of_blocks=4095) at gossipd/gossipd.c:595
#16 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=17289408, number_of_blocks=8191) at gossipd/gossipd.c:595
#17 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=17289408, number_of_blocks=16382) at gossipd/gossipd.c:595
#18 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=17289408, number_of_blocks=32764) at gossipd/gossipd.c:595
#19 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=17289408, number_of_blocks=65528) at gossipd/gossipd.c:595
#20 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=17289408, number_of_blocks=131056) at gossipd/gossipd.c:595
#21 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=17289408, number_of_blocks=262112) at gossipd/gossipd.c:595
#22 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=17289408, number_of_blocks=524225) at gossipd/gossipd.c:595
#23 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=17289408, number_of_blocks=1048450) at gossipd/gossipd.c:595
#24 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=17289408, number_of_blocks=2096900) at gossipd/gossipd.c:595
#25 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=17289408, number_of_blocks=4193801) at gossipd/gossipd.c:595
#26 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=17289408, number_of_blocks=8387603) at gossipd/gossipd.c:595
#27 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=17289408, number_of_blocks=16775207) at gossipd/gossipd.c:595
#28 0x000000000040ddee in queue_channel_ranges (peer=0x3868fc8, first_blocknum=514201, number_of_blocks=33550414) at gossipd/gossipd.c:596
#29 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=514201, number_of_blocks=67100829) at gossipd/gossipd.c:595
#30 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=514201, number_of_blocks=134201659) at gossipd/gossipd.c:595
#31 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=514201, number_of_blocks=268403318) at gossipd/gossipd.c:595
#32 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=514201, number_of_blocks=536806636) at gossipd/gossipd.c:595
#33 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=514201, number_of_blocks=1073613273) at gossipd/gossipd.c:595
#34 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=514201, number_of_blocks=2147226547) at gossipd/gossipd.c:595
#35 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=514201, number_of_blocks=4294453094) at gossipd/gossipd.c:595
#36 0x000000000040df26 in handle_query_channel_range (peer=0x3868fc8, msg=0x37e0678 "\001\ao\342\214\n\266\361\263r\301\246\242F\256c\367O\223\036\203e\341Z\b\234h\326\031") at gossipd/gossipd.c:625
The cause was that converting a block number to an scid truncates it
at 24 bits. When we look through the index from (truncated number) to
(real end number) we get every channel, which is too large to encode,
so we iterate again.
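For reference, the BOLT #7 packing where the truncation happens (illustrative helper):
```
#include <stdint.h>

/* short_channel_id: 3 bytes block height | 3 bytes tx index | 2 bytes output.
 * A block number above 2^24 - 1 is silently truncated here. */
static uint64_t mk_scid(uint32_t blocknum, uint32_t txnum, uint16_t outnum)
{
	return ((uint64_t)(blocknum & 0xFFFFFF) << 40)
		| ((uint64_t)(txnum & 0xFFFFFF) << 16)
		| outnum;
}
```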
This fixes both that problem, and also the issue that we'd end up
dividing into many empty sections until we get to the highest block
number. Instead, we just tack the empty blocks onto the end of the
final query.
(My initial version requested 0xFFFFFFFE blocks, but the dev code
which records what blocks were returned can't make a bitmap that big
on 32-bit systems).
Reported-by: George Vaccaro
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Currently only used by gossipd for channel elimination.
Also print them in canonical form (/[01]), so tests need to be
changed.
Suggested-by: @cdecker
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We didn't populate the channels properly so it always failed.
Additionally, somewhere along the line we kept using the single scid
so we only created one channel.
Also, the next patch will start comparing the pubkeys, so make valid
ones: use an array so we don't affect the benchmark too much.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
seed isn't very useful at this level: I've left it in routing.c
because it might be useful for detailed testing. I'm pretty sure it's
unused, so I simply removed it here.
The fuzzpercent is documented to default to 5%, but was actually 75%.
Fix that too.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Christian and I both unwittingly used it in the form:
*tal_arr_expand(&x) = tal(x, ...)
Since '=' isn't a sequence point, the compiler can (and does!) cache
the value of x, handing it to tal *after* tal_arr_expand() moves it
due to tal_resize().
The new version is somewhat less convenient to use, but doesn't have
this problem, since the assignment is always evaluated after the
resize.
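A sketch of what such a statement-form macro can look like (my reconstruction, not necessarily the exact definition that landed):
```
#include <ccan/tal/tal.h>

/* The value expression is only evaluated after tal_resize() has
 * already updated *(p), so 'x' inside it can never be stale. */
#define tal_arr_expand(p, s)					\
	do {							\
		size_t n_ = tal_count(*(p));			\
		tal_resize((p), n_ + 1);			\
		(*(p))[n_] = (s);				\
	} while (0)

/* Old, broken:  *tal_arr_expand(&x) = tal(x, struct foo);
 * New, safe:    tal_arr_expand(&x, tal(x, struct foo));      */
```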
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This is mainly just copying over the copy-editing from the
lightning-rfc repository.
[ Split to just perform changes after the UNKNOWN_PAYMENT_HASH change --RR ]
Signed-off-by: Christian Decker <decker.christian@gmail.com>
Reported-by: Rusty Russell <@rustyrussell>
This is mainly just copying over the copy-editing from the
lightning-rfc repository.
[ Split to just perform changes prior to the UNKNOWN_PAYMENT_HASH change --RR ]
Signed-off-by: Christian Decker <decker.christian@gmail.com>
Reported-by: Rusty Russell <@rustyrussell>
We keep a chain_hash in struct daemon, because otherwise we end up with
`&peer->daemon->rstate->chainparams->genesis_blockhash` which is a bit
ridiculous.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This avoids some very ugly switch() statements which mixed the two,
but we also take the chance to rename 'towire_gossip_' to
'towire_gossipd_' for those inter-daemon messages; they're messages to
gossipd, not gossip messages.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We had at least one bug caused by it not returning true when it had
queued something. Instead, just re-check the queue after it's called.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We shouldn't insist on an exact response match: they can batch it and send
a whole batch, as long as it overlaps what we ask.
We also change to a bitmap to save some memory.
This isn't noted in the CHANGELOG since we don't actually send gossip
range queries except for testing.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Messages from a peer may be invalid in many ways: we send an error
packet in that case. Rather than internally calling peer_error,
however, we make it explicit by having the handle_ functions return
NULL or an error packet.
Messages from the daemon itself should not be invalid: we log an error
and close the fd to them if it is. Previously we logged an error but
didn't kill them.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
The idea is that `plugin` is an early arg that is parsed (from command
line or the config file). We can then start the plugins and have them
tell us about the options they'd like to add to the mix, before we
actually parse them.
Signed-off-by: Christian Decker <@cdecker>
It means an extra allocation at startup, but lets us hide the definition
and use standard patterns (new_daemon_conn and typesafe callbacks).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We spend quite a bit of time in libsecp256k1 moving them to and from
DER encoding. With a bit of care, we can transfer the raw bytes from
gossipd and manually decode them so a malformed one can't make us
abort().
Before:
real 0m0.629000-0.695000(0.64985+/-0.019)s
After:
real 0m0.359000-0.433000(0.37645+/-0.023)s
At this point, the main issues are 11% of time spent in ccan/io's
backend_wake (I tried using a hash table there, but that actually makes
the small-number-of-fds case slower), and 65% of gossipd's time is
in marshalling the response (all those tal_resize add up!).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Usually Travis triggers corner cases because it's so slow, but this
time the moons aligned, and it managed to fail test_node_reannounce
because it generated the updated node_announcement with the same
timestamp as the old one.
This is because we only updated "last_announce_timestamp" when
we generated the announcement, not when we got it off the wire or
loaded it from the gossip store.
The fix is to ask the routing code what the latest timestamp is;
we could still generate a clashing timestamp if (1) the gossip store
is lost, and (2) we restart within one second. Hard to care.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Have c-lightning nodes send out the largest value for
`htlc_maximum_msat` that makes sense, ie the lesser of
the peer's max_inflight_htlc value or the total channel
capacity minus the total channel reserve.
We initialize it to 30 seconds, but it's *always* overridden by the
gossip_init message (and usually to 60 seconds, so it's doubly
misleading).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Gossipd provided a generic "get endpoints of this scid" and we only
use it in one place: to look up htlc forwards. But lightningd just
assumed that one would be us.
Instead, provide a simpler API which only returns the peer node
if any, and now we handle it much more gracefully.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We don't create unannounceable channels, but other implementations can.
Not only is it rude to expose these via invoices, they're probably not
usable anyway.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
If another channel has set the optional `htlc_maximum_msat` field,
we should correctly parse that field and respect it when drawing up
routes for payments.
globalfeatures should not be accessed if we haven't received a
channel_update. Treat it like the other fields which are only
initialized and marshalled/unmarshalled if the timestamp is positive.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
And use ARRAY_SIZE() everywhere, which will break the compile if it's not a
literal array, plus assertions that it's the same length.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
For routeboost, we want to select from all our enabled channels with
sufficient incoming capacity. Gossipd knows which are enabled (ie. we
have received a `channel_update` from the peer), but doesn't know the
current incoming capacity.
So we get gossipd to give us all the candidates, and lightningd
selects from those.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Otherwise, if we don't announce the last node, we'll not flush this
out; it will be delayed until the next time we send gossip!
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This is consistent: we don't broadcast a channel_announce until we've seen
a channel_update, so we probably shouldn't advertise it here.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We do this a lot, and had boutique helpers in various places. So add
a more generic one; for convenience it returns a pointer to the new
end element.
I prefer the name tal_arr_expand to tal_arr_append, since it's up to
the caller to populate the new array entry.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We have a lot of infrastructure to delay local channel_updates to
avoid spamming on each peer reconnect; we had to keep track of
pending ones though, in case we needed the very latest for sending an
error when failing an HTLC.
Instead, it's far simpler to set the local_disabled flag on a channel
when we disconnect, but only send a disabling channel_update if we
actually fail an HTLC.
Note: handle_channel_update() TAKES update (due to tal_arr_dup), but we
didn't use that before. Now we do, so add the annotation.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We exchange channel_updates before a channel_announce makes the channel
public, and currently we forget them when we finally get the
channel_announce. We should instead apply them, and not rely on
retransmission (which we remove in the next patch!).
This earlier channel_update means test_gossip_jsonrpc triggers too
early, so have that wait for node_announcement.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
It's simpler and more robust to just check that it's not yet announced
(the broadcast index will be 0).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Incrementing version number means stores which were prior to the previous
commit will be removed, and refreshed. The simplest fix, if not the most
efficient.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
BOLT 7's been updated to split the flags field in `channel_update`
into two: `channel_flags` and `message_flags`. This changeset does the
minimal necessary to get to building with the new flags.
That matches the other CSV names (HSM was the first, so it was written
before the pattern emerged).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We would never complete further ping commands if we had fewer responses
than pings. Oops.
Fixes: #1928
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
If we receive a channel_announce but not a channel_update, we store the announce
but don't put it in the broadcast map.
When we delete a channel, we check if the node_announcement broadcast
now precedes all channel_announcements, and if so, we move it to the
end of the map. However, with a channel_announcement at index '0',
this test fails.
This is at least one potential cause of the node map getting out of order.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
These happen after we compact the store; every log I've seen of a
restart on a real node has a message about truncating the store,
because node_announcements predate channel_announcements.
I extracted one such case from testnet, and reduced it to test here.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
@renepickhardt: why is it actually lightningd.c with a d but hsm.c without d ?
And delete unused gossipd/gossip.h.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
1. Move the list to the start of `struct peer`: memleak walks the
list correctly this way.
2. Don't create tal parent loop daemon->conn->daemon.
The second one is silly anyway: we exit via master_gone when the master
conn is closed.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Lightning charge tests stopped working without a timeout, being unable
to find a route. The 15 second delay doesn't matter in real life, but
in these scenarios it does. This fixes it by making sure the channel
is usable immediately.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
As pointed out by @rustyrussell the capacity is now always defined, so we can
fold that into the construction of the channel itself.
Reported-by: Rusty Russell <@rustyrussell>
Signed-off-by: Christian Decker <@cdecker>
The `htlc_minimum_msat` parameter was ignored so far, and we'd be attempting to
pay and hitting a brick wall by doing so. This patch just skips channels that
are not eligible anyway.
We know the total channel capacity after checking for its existence on-chain, so
we can actually make use of that information to discard channels that don't have
a sufficient capacity anyway, reducing the number of failed attempts.
We were adding channels without their capacity, and eventually annotated them
when we exchanged `channel_update`s. This worked as long as we weren't
considering the channel capacity, but would result in local-only channels being
unusable once we start checking.
'cursor < ser + max' isn't valid because we reduce 'max' as we go! Effectively
we'll stop once we're past halfway, which can only happen with ipv6 + a torv2
address.
The fix is one line, but we rename 'max' to 'len', which makes its purpose
clearer.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
tal_count() is used where there's a type, even if it's char or u8, and
tal_bytelen() is going to replace tal_len() for clarity: it's only needed
where a pointer is void.
We shim tal_bytelen() for now.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We delay internally to reduce broadcast route flap, but errors are
a special case: we want to send the latest, otherwise we might send an
old (non-disabled) update.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We used to just manually set ROUTING_FLAGS_DISABLED, but that means we
then suppressed the real channel_update because we thought it was a
duplicate!
So use a local flag: set it for the channel when the peer disconnects,
and clear it when channeld sends a local update.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
gossip_getnodes_entry was used by gossipd for reporting nodes, and for
reporting peers. But the local_features field is only available for peers,
and most other fields are only available from node_announcement.
Note that the connectd change actually means we get less information
about peers: gossipd used to do the node lookup for peers and include the
node_announcement information if it had it.
Since generate_wire.py can't create arrays-of-arrays, we add a 'struct
peer_features' to encapsulate the two feature arrays for each peer, and
for convenience we add it to lightningd/gossip_msg.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This patch guts gossipd of all peer-related functionality, and hands
all the peer-related requests to channeld instead.
gossipd now gets the final announcable addresses in its init msg, since
it doesn't handle socket binding any more.
lightningd now actually starts connectd, and activates it. The init
messages for both gossipd and connectd still contain redundant fields
which need cleaning up.
There are shims to handle the fact that connectd's wire messages are
still (mostly) gossipd messages.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
connectd has a dedicated fd to gossipd, so it can ask for a new gossip_fd
for a peer.
gossipd has a standalone routine to create a remote peer (this will
eventually be the only way gossipd creates a new peer).
For now lightningd creates a socketpair but doesn't run connectd, so
gossipd never sees any requests on this fd.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Store the two we care about as booleans. Once connectd is complete we won't
even have the feature bitmaps for peers.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We weren't waiting for gossipd to actually process the
dev_set_max_scids_encode_size message, so under Travis it sometimes
split the reply before processing that.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Note that we mark both directions of the channel disabled immediately,
it's just the broadcast of the update which is delayed, just like the
ones generated when channeld tells us to.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We disable the channel every time the peer disconnects; if it reconnects
we get two updates.
The simplest solution: delay all updates by 15 seconds. Replace any
pending delayed update. If update is redundant after 15 seconds,
discard.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This doesn't do anything for us now, since we actually tend to produce
DISABLE/ENABLE update pairs. But the infrastructure is useful for the
next patch.
We also add more details to the trace message in the core update code.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
json_listpeers returns an array of peers, and an array of nodes: the latter
is a subset of the former, and is used for printing alias/color information.
This changes it so there is a 1:1 correspondence between the peer information
and nodes, meaning no more O(n^2) search.
If there is no node_announce for a peer, we use a negative timestamp
(already used to indicate that the rest of the gossip_getnodes_entry
is not valid).
Other fixes:
1. Use get_node instead of iterating through the node map.
2. A node without addresses is perfectly valid: we have to use the timestamp
to see if the alias/color are set. Previously we wouldn't print that
if it didn't also advertise an address.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
structeq() is too dangerous: if a structure has padding, it can fail
silently.
The new ccan/structeq instead provides a macro to define foo_eq(),
which does the right thing in case of padding (which none of our
structures currently have anyway).
Upgrade ccan, and use it everywhere. Except run-peer-wire.c, which
is only testing code and can use raw memcmp(): valgrind will tell us
if padding exists.
Interestingly, we still declared short_channel_id_eq, even though
we didn't define it any more!
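For example, the per-type definition with the new macro looks something like this (the exact STRUCTEQ_DEF arguments are from memory; treat them as an assumption):
```
#include <ccan/short_types/short_types.h>
#include <ccan/structeq/structeq.h>

/* Stand-in mirroring the real struct. */
struct short_channel_id {
	u64 u64;
};

/* Defines short_channel_id_eq(a, b), comparing the listed members
 * rather than memcmp'ing the whole struct, so padding can't bite. */
STRUCTEQ_DEF(short_channel_id, 0, u64);
```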
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We wrap it in 'struct pubkey' for typesafety and consistency, and the
next patch takes advantage of that when we move to pubkey_eq.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Only --addr implies announce-if-public: --bind-addr does not.
It's also possible to have --bind-addr to an automatic Tor address:
you'd have to dig the onion address out of the logs or getinfo to use
it, but it's possible.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This is a best effort attempt to skip connection attempts if we detect a broken
ISP resolver. A broken ISP resolver is a resolver that will replace NXDOMAIN
replies with a dummy response. This is best effort in that it'll only detect a
single fixed dummy reply, it'll check only on startup, and will not detect if we
switched networks. It should be good enough for most cases, and in the worst
case it will result in a connection attempt that does not complete.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
Reported-by: Glenn Willen <@gwillen>
Cut & paste means we sometimes sent NULL:
```
2018-06-15T00:13:51.908Z lightningd(23653): lightning_closingd-03864ef025fde8fb587d989186ce6a4a186895ee44a926bfc370e2c366597a3f8f chan #436: Gossipd gave us bad send_gossip message 0bc80000
```
Fixes: #1581
Reported-by: @Xian001
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
In this case, local and remote are *both* NULL; so if someone tries to
send a packet with take(), we need to free it.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
I think this is what is causing #1536: getting disconnected causes gossipd to
attempt to reach the peer again, unconditionally setting the flag to tell the
master. At the same time the master also issues a reaching command (which is
allowed since it is its first), but then it clashes on the already set
flag. Setting this flag only when the master actually needs to be told should
fix this.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
A failed compaction shouldn't be deadly, but we should also not attempt to do
one on every gossip message after the first one fails.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
`gossip_store_add` is the entry point for messages from the network, so it
should do the bookkeeping and disable on failures. `gossip_store_append` is the
shared function that wraps messages and writes it to the given file. This is
shared between the from network path and the compaction path, so we don't
directly use the `gossip_store` instance, but `fd`s.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
We write both when coming from outside, as well as when compacting, so we
extract the write functionality to use it in both cases.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
This makes the exposed interface much smaller, cleaner and will allow us to just
replay gossip messages from the broadcast.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
Two cases:
1. Node no longer has any public channels: remove node_announcement.
2. Node's node_announcement now precedes all the channel_announcements:
move node_announcement to the end.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This lets us detect if a node announce precedes a channel announce once we
delete the node announcement.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We *accept* a node_announce if we have a channel_announce, but we
can't queue it until we queue the channel_announce, which we only do
once we have received a channel_update.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Since we currently only (ab)use it to send everything, we need a way to
generate boutique queries for testing.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We have a function called 'wake_pkt_out' which is really 'start
gossiping', so rename it to 'wake_gossip_out'.
In addition, it's fired both on a timer, and in response to our first
gossip_timestamp_filter, which leads to very confusing (though,
technically, not incorrect) behavior.
Keep a single timer at all times, which now doubles as the flag
indicating we're syncing right now. Set it once we're done syncing
gossip.
Technically this means we go from once-every-60-seconds to
quiet-for-60-seconds-between-gossip, but that's OK.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
And initialize filter (to "never") when we negotiated LOCAL_GOSSIP_QUERIES,
and send initial filter message.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This is kind of orthogonal to the other changes, but makes sense: if we
would instantly or never prune the message, don't accept it.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We use the same system as for gossip: we trickle out replies when we're
otherwise idle.
As we trickle out replies to query_short_channel_ids, we remember the
pubkeys of nodes we mention. At the end, we sort and uniquify, and
then send any node_announcements we have for those.
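An illustrative sketch of the sort-and-uniquify step (not the gossipd code; `struct pubkey` here is a stand-in):
```
#include <stdlib.h>
#include <string.h>

/* Collapse the collected keys so each node's node_announcement is
 * sent at most once. */
struct pubkey { unsigned char data[33]; };	/* stand-in for the real type */

static int cmp_pubkey(const void *a, const void *b)
{
	return memcmp(a, b, sizeof(struct pubkey));
}

static size_t sort_and_uniquify(struct pubkey *keys, size_t n)
{
	size_t out = 0;

	qsort(keys, n, sizeof(*keys), cmp_pubkey);
	for (size_t i = 0; i < n; i++)
		if (i == 0 || cmp_pubkey(&keys[i], &keys[i - 1]) != 0)
			keys[out++] = keys[i];
	return out;	/* number of unique keys, now at the front */
}
```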
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We use the same system as for gossip: we trickle out replies when we're
otherwise idle.
This is minimal infrastructure: we don't actually process the
query_short_channel_ids message yet, nor do we append node
announcements.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
In general, we need to only publish node announcements after
publishing channel announcements, though we can accept node
announcements as soon as we see channel announcements. So we keep a
flag for those node_announcement which haven't been broadcast yet.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
handle_pending_cannouncement might not actually add the announcement,
as it could be waiting for a channel_update. We need to wait for
the actual announcement before considering announcing our node.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We generate new ones anyway; removing this code means the fixes coming
up only need to change one place.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We don't have any connection yet, so how could they be active? Disable both
sides to avoid trying to route through them or telling others to use them as
`contact_points` in invoices.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
We're telling gossipd about disconnections anyway, so let's just use that signal
to disable both sides of the channel.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
This was failing some of our integration tests, i.e., the ones closing a channel
and not waiting for sigexchange. The remote node would often not be quick enough
to send us its disabling channel_update, and hence we'd still remember the
incoming direction. That could then be sent out as part of an invoice, and fail
subsequently. So just set both directions to be disabled and let the onchain
spend clean up once it happens.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
This resolves the problem where both channeld and gossipd can generate
updates, and they can have the same timestamp. gossipd is always able
to generate them, so can ensure timestamp moves forward.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We erroneously create updates with the same timestamps when tests run
quickly, and the second one is ignored.
We've already noted that this should be fixed: gossipd should generate
all the updates, as it already has to do the case where channeld
crashed, for example. But that's a bigger change.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
@cdecker points out that in test_forward, where we manually create a route,
we get an error back which contains an update for an unknown channel.
We should still note this, but it's not an error for testing.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This is something which generally shouldn't happen, but we didn't
notice it previously.
We ignore this warning in the case where a channel was deleted: this
happens because one side can send an update while the other notices
that the channel is closed.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Note: this will break the gossip_store if they have current channels,
but it will fail to parse and be discarded.
Have local_add_channel do just that: the update is logically separate
and can be sent separately.
This removes the ugly 'bool add_to_store' flag.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Tor wasn't actually working for me to connect to anything, but it worked
for 'ssh -D' testing.
Note that the resulting 'netaddr' is a bit weird, but I guess it's honest.
$ ./cli/lightning-cli connect 021f2cbffc4045ca2d70678ecf8ed75e488290874c9da38074f6d378248337062b
{
  "id": "021f2cbffc4045ca2d70678ecf8ed75e488290874c9da38074f6d378248337062b"
}
$ ./cli/lightning-cli listpeers
{
  "peers": [
    {
      "state": "GOSSIPING",
      "id": "021f2cbffc4045ca2d70678ecf8ed75e488290874c9da38074f6d378248337062b",
      "netaddr": [
        "ln1qg0je0lugpzu5ttsv78vlrkhteyg9yy8fjw68qr57mfhsfyrxurzkq522ah.lseed.bitcoinstats.com:9735"
      ],
      "connected": true,
      "owner": "lightning_gossipd"
    }
  ]
}
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Good for debugging (you have to send SIGUSR1 to lightning_gossipd to turn
it on though, and --log-level=io on the lightningd cmdline to have it
output IO messages by default).
I also noticed that io_tor_connect_after_req_host() does a useless
test on reach->buffer[0] after it's *written*: remove it.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Use a wireaddr_internal directly (which is what we want).
Also, don't hardcode 9735, use DEFAULT_PORT internally in
seed_resolve_addr().
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Previously it converted the wireaddr to a string internally: to support
unresolved names we need that done externally.
We actually tell the SOCKS5 proxy to do a domain lookup already, even
though we give it an IP/IPv6 address, so this change is sufficient to
support connect-by-name.
Note replacement of assert() with an explicit case statement, which
has the benefit that the compiler complains when we add new
ADDR_INTERNAL types.
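As a rough illustration (none of this is the actual source; the enum values
and helper are made up), a switch with no default over the internal address
type is what lets the compiler flag unhandled cases:
    #include <stddef.h>
    /* Hypothetical enum and helper, for illustration only. */
    enum addr_internal_sketch {
            ADDR_INTERNAL_SOCKNAME,
            ADDR_INTERNAL_WIREADDR,
            ADDR_INTERNAL_FORPROXY,
    };
    static const char *describe_addr(enum addr_internal_sketch t)
    {
            switch (t) {
            case ADDR_INTERNAL_SOCKNAME:
                    return "local socket";
            case ADDR_INTERNAL_WIREADDR:
                    return "resolved wire address";
            case ADDR_INTERNAL_FORPROXY:
                    return "unresolved name, for the proxy";
            }
            /* No default: -Wswitch warns when a new enum value is added. */
            return NULL;
    }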
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This is useful for the next patch, where we want to hand the unresolved
name through to the proxy.
This also addresses @Saibato's worry that we still called getaddrinfo()
(with the AI_NUMERICHOST option) even if we didn't want a lookup.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Not all of them, but it's really about using the SOCKS proxy rather than
really using Tor at this level.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We assert() that it's set by one of the branches (it should be!) but
if we don't hit one it's uninitialized, not NULL.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
1. If we have a channel_announcement, the channel is public, otherwise
it's not. Not all channels are public, as they can be local: those
have a NULL channel_announcement.
2. If we don't have a channel_update, we know nothing about that half
of the channel, and no other fields are valid.
3. We can tell if a half channel is disabled by the flags field directly.
Note that we never send halfchannels without an update over
gossip_getchannels_reply so that marshalling/unmarshalling can be
vastly simplified.
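A sketch of those three rules (struct layout and helper names are
illustrative, not the real ones):
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>
    struct half_chan_sketch {
            const uint8_t *channel_update; /* NULL: we know nothing about this half */
            uint16_t flags;                /* only meaningful if channel_update set */
    };
    struct chan_sketch {
            const uint8_t *channel_announcement; /* NULL: local, unannounced channel */
            struct half_chan_sketch half[2];
    };
    static bool is_chan_public(const struct chan_sketch *c)
    {
            return c->channel_announcement != NULL;
    }
    static bool is_halfchan_defined(const struct half_chan_sketch *hc)
    {
            return hc->channel_update != NULL;
    }
    static bool is_halfchan_enabled(const struct half_chan_sketch *hc)
    {
            /* BOLT #7: bit 1 of the channel_update flags is "disable". */
            return is_halfchan_defined(hc) && !(hc->flags & 2);
    }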
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Make the update/announce messages own the element in the broadcast map
not the other way around.
Then we keep a pointer to the message, and when we free it
(eg. channel closed, update replaces it), it gets freed from the
broadcast map automatically.
The result is much nicer!
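Roughly the shape of it, assuming ccan/tal is available and msg is
tal-allocated (the map here is a toy array, not the real broadcast map):
    #include <ccan/tal/tal.h>
    #include <stddef.h>
    /* Toy stand-in for the broadcast map: index -> message. */
    #define NUM_SLOTS 64
    static const void *bcast_slot[NUM_SLOTS];
    struct bcast_entry {
            size_t index;
    };
    static void destroy_bcast_entry(struct bcast_entry *e)
    {
            bcast_slot[e->index] = NULL; /* runs whenever the message is freed */
    }
    /* The entry is a tal-child of the message, so freeing the message
     * (channel closed, update replaced) drops it from the map automatically. */
    static void add_to_broadcast(const void *msg, size_t index)
    {
            struct bcast_entry *e = tal(msg, struct bcast_entry);
            e->index = index;
            bcast_slot[index] = msg;
            tal_add_destructor(e, destroy_bcast_entry);
    }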
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Basically, if we don't have an announcement for the channel, stash it,
and once we get an announcement, replay if necessary.
Fixes: #1485
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This means it will affect connect commands too (though it's too
late to stop DNS lookups caused by commandline options).
We also warn that this is one case where we allow forcing through Tor
without a proxy set: it just means all connections will fail.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This takes the Tor service address in the same option, rather than using
a separate one. Gossipd now digests this like any other type.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
For the moment, this is a straight handing of current parameters through
from master to the gossip daemon. Next we'll change that.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Risks leakage. We could do lookup via the proxy, but that's a TODO.
There's only one occurrence of getaddrinfo (and no gethostbyname), so
we add a flag to the callers.
Note: the use of --always-use-proxy suppresses *all* DNS lookups, even
those from connect commands and the command line.
FIXME: An implicit setting of use_proxy_always is done in gossipd if it
determines that we are announcing nothing but Tor addresses, but that
does *not* suppress 'connect'.
This is fixed in a later patch.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
1. Only force proxy use if we don't announce any non-TOR address.
There's no option to turn it off, so this makes more sense.
2. Don't assume we want an IPv4 socket to reach proxy, use the family
from the struct addrinfo.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Instead of storing a wireaddr and converting to an addrinfo every
time, just convert once (which also avoids the memory leak in the
current code).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Rename tor_proxyaddrs and tor_serviceaddrs to tor_proxyaddr and tor_serviceaddr:
the 's' at the end suggests that there can be more than one.
Make them NULL or non-NULL, rather than using all-zero if unset.
Hand them to gossipd the same way; it's a bit of a hack since we don't
have optional fields, so we use a counter which is always 0 or 1.
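For illustration only (the struct and helper are invented, endianness is
ignored, and the real marshalling goes through the generated wire helpers),
the counter trick is just:
    #include <stdint.h>
    #include <string.h>
    /* Invented struct, for illustration. */
    struct proxyaddr_sketch {
            uint8_t ip[16];
            uint16_t port;
    };
    /* No optional fields in the wire format, so write a count that is
     * always 0 or 1, followed by that many entries. */
    static size_t put_optional_proxyaddr(uint8_t *out,
                                         const struct proxyaddr_sketch *addr)
    {
            size_t len = 0;
            uint16_t count = addr ? 1 : 0;
            memcpy(out + len, &count, sizeof(count));
            len += sizeof(count);
            if (addr) {
                    memcpy(out + len, addr->ip, sizeof(addr->ip));
                    len += sizeof(addr->ip);
                    memcpy(out + len, &addr->port, sizeof(addr->port));
                    len += sizeof(addr->port);
            }
            return len;
    }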
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
All gossipd needs from common/tor is do_we_use_tor_addr(), so move
that and the rest of the tor-specific handshake code into gossip/tor.c
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This is a rebased and combined patch for Tor support. It is extensively
reworked in the following patches, but the basis remains Saibato's work,
so it seemed fairest to begin with this.
Minor changes:
1. Use --announce-addr instead of --tor-external.
2. I also reverted some whitespace and unrelated changes from the patch.
3. Removed unnecessary ';' after } in functions.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This is a leftover from before splitting the `gossip_store` injection path from
the handling of gossip messages.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
Someone could try to announce an internal address, and we might probe
it.
This breaks tests, so we add '--dev-allow-localhost' for our tests, so
we don't eliminate that one. Of course, now we need to skip some more
tests in non-developer mode.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
If we're given a wildcard address, we can't announce it like that: we need
to try to turn it into a real address (using guess_address). Then we
use that address. As a side-effect of this cleanup, we only announce
*any* '--addr' if it's routable.
This fix means that our tests have to force '--announce-addr' because
otherwise localhost isn't routable.
This means that gossipd really controls the addresses now, and breaks
them into two arrays: what we bind to, and what we announce. That is
now what we return to the master for json_getinfo(), which prints them
as 'bindings' and 'addresses' respectively.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
1. Add special option where an empty host means 'wildcard for IPv4 and/or IPv6'
which means ':1234' can be used to set only the portnum.
2. Only add this protocol wildcard if --autolisten=1 (default)
and no other addresses specified.
3. Pass it down to gossipd, so it can handle errors correctly: in most cases,
it's fatal not to be able to bind to a port, but for this case, it's OK
if we can only bind to one of IPv4/v6 (fatal iff neither).
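A sketch of the option parse (the helper is hypothetical, and IPv6 literals
are ignored for brevity): an empty host before the colon means "wildcard,
port only".
    #include <stdbool.h>
    #include <stdlib.h>
    #include <string.h>
    static bool parse_addr_arg(const char *arg, char *host, size_t hostlen,
                               int *port, bool *wildcard)
    {
            const char *colon = strrchr(arg, ':');
            if (!colon)
                    return false;
            *wildcard = (colon == arg);
            if (*wildcard) {
                    host[0] = '\0';
            } else {
                    size_t n = colon - arg;
                    if (n >= hostlen)
                            return false;
                    memcpy(host, arg, n);
                    host[n] = '\0';
            }
            *port = atoi(colon + 1);
            return *port > 0 && *port <= 65535;
    }
For example, parse_addr_arg(":1234", host, sizeof(host), &port, &wildcard)
leaves host empty, sets port to 1234 and flags the protocol wildcard.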
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This replacement is a little menial, but it explicitly catches all
the places where we allow a local socket. The actual implementation of
opening an AF_UNIX socket is almost hidden in the patch.
The detection of "valid address" is now more complex:
p->addr.itype != ADDR_INTERNAL_WIREADDR || p->addr.u.wireaddr.type != ADDR_TYPE_PADDING
But most places we do this, we should audit: I'm pretty sure we can't
get an invalid address any more from gossipd (they may be in db, but
we should fix that too).
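The AF_UNIX part itself is plain POSIX; a minimal sketch of opening such a
listener (not the patch's code) looks like:
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>
    static int open_unix_listener(const char *path)
    {
            struct sockaddr_un addr;
            int fd = socket(AF_UNIX, SOCK_STREAM, 0);
            if (fd < 0)
                    return -1;
            if (strlen(path) + 1 > sizeof(addr.sun_path)) {
                    close(fd);
                    return -1;
            }
            memset(&addr, 0, sizeof(addr));
            addr.sun_family = AF_UNIX;
            strcpy(addr.sun_path, path);
            unlink(path); /* remove a stale socket file, if any */
            if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0 ||
                listen(fd, 1) != 0) {
                    close(fd);
                    return -1;
            }
            return fd;
    }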
Closes: #1323
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
It does all the other address handling, do this too. It also proves useful
as we clean up wildcard address handling.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
It's become clear that our network options are insufficient, with the coming
addition of Tor and unix domain support.
Currently:
1. We always bind to local IPv4 and IPv6 sockets, unless --port=0, --offline,
or any address is specified explicitly. If they're routable, we announce.
2. --addr is used to announce, but not to control binding.
After this change:
1. --port is deprecated.
2. --addr controls what we bind to and announce.
3. --bind-addr/--announce-addr can be used to control one and not the other.
4. Unless --autolisten=0, we add local IPv4 & IPv6 port 9735 (and announce if they are routable).
5. --offline still overrides listening (though announcing is still the same).
This means we can bind to as many ports/interfaces as we want, and for
special effects we can announce different things (eg. we're sitting
behind a port forward or a proxy).
What remains to implement is semi-automatic binding: we should be able
to say '--addr=0.0.0.0:9999' and have the address resolve at bind
time, or even '--addr=0.0.0.0:0' and have the port autoresolve too
(you could determine what it was from 'lightning-cli getinfo').
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We set no_reconnect with --offline, but that doesn't work if !DEVELOPER.
Make the flag positive, and support it in non-DEVELOPER mode in gossipd.
We also no longer override portnum with --offline, but use an explicit
'listen' flag instead.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This means gossipd is live and we can tell it things, but it won't
receive incoming connections. The split also means that the main daemon
continues (eg. loading peers from db) while gossipd is loading from the store,
potentially speeding startup.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
If channeld dies for some reason (eg, reconnect) and we didn't yet announce
the channel, we can miss doing so. This is unusual, because if lightningd
restarts it rearms the callback which gives us funding_locked, so it only
happens if just channeld dies before sending the announcement message.
This problem applies to both temporary announcement (for gossipd) and
the real one. For the temporary one, simply re-send on startup, and
remove the error msg gossipd gives if it sees a second one. For the
real one, we need a flag to tell us the depth is sufficient; the peer
will ignore re-sends anyway.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
When a connect fails, if it's an important peer, we set a timer. If
we have a manual connect command, this means we do this again, leading
to another timer.
For a manual command, free any existing timer; the normal fail logic
will start another if necessary.
Reported-by: @ZmnSCPxj
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
At least say whether we failed to connect at all, or failed cryptographic
handshake, or failed reading/writing init messages.
The errno can be "Operation now in progress" if the other end closes the
socket on us: this happens when we handshake with the wrong key and it
hangs up on us. Fixing this would require work on ccan/io though.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
When we get a reconnection, kill the current remote peer, and wait for the
master to tell us it's dead. Then we hand it the new peer.
Previously, we would end up with gossipd holding multiple peers, and
the logging was really hard to interpret; I'm not completely convinced
that we did the right thing when one terminated, either.
Note that this now means we can have peers with neither ->local nor ->remote
populated, so we check that more carefully.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Currently we intuit it from the fd being closed, but that may happen out
of order with when the master thinks it's dead.
So now if the gossip fd closes we just ignore it, and we'll get a
notification from the master when the peer is disconnected.
The notification is slightly ugly in that we have to disable it for
a channel when we manually hand the channel back to gossipd.
Note: as stands, this is racy with reconnects. See the next patch.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
(This was sitting in my gossip-enhancement patch queue, but it simplifies
this set too, so I moved it here).
In 94711969f we added an explicit gossip_index so when gossipd gets
peers back from other daemons, it knows what gossip it has sent (since
gossipd can send gossip after the other daemon is already complete).
This solution is insufficient for the more general case where gossipd
wants to send other messages reliably, so replace it with the other
solution: have gossipd drain the "gossip fd" which the daemon returns.
This turns out to be quite simple, and is probably how I should have
done it originally :(
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
1. Lifetime of 'struct reaching' now only while we're actively doing connect.
2. Always free after a single attempt: if it's an important peer, retry
on a timer.
3. Have a single response message to master, rather than relying on
peer_connected on success and other msgs on failure.
4. If we are actively connecting and we get another command for the same
id, just increment the counter.
The result is much simpler in the master daemon, and much nicer for
reconnection: if they say to connect they get an immediate response,
rather than waiting for 10 retries. Even if it's an important peer,
it fires off another reconnect attempt, unless it's actively
connecting now.
This removes exponential backoff: that's restored in next patch. It
also doesn't handle multiple addresses for a single peer.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Rather than using a flag in reaching/peer, we make it self-contained,
as the next patch puts it straight into a timer callback.
Also remove unused 'succeeded' field from struct peer.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
And on channel_fail_permanent and closing (the two places we drop to
chain), we tell gossipd it's no longer important.
Fixes: #1316
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
These don't have a maximum number of reconnect attempts, and ensure
that we try to reconnect when the peer dies.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
These were so far only used for bolt11 construction, but we'll need them for the
DNS seed as well, so here we just pull them out into their own unit and prefix
them.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
We're about to remove automatic retrying of connect, and that uncovered
that we actually print out our "Server started" message before we create
the listening socket.
Move the init higher (outside the db transaction) and make it a
request/response, then loop until it's done.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Christian points out that we don't get spend notifications for old
channels if we truncate the store. We'd need more work to do this,
either validating the channels are still unspent, or replaying old
blocks from the truncation point.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Since we open with O_APPEND, any write() will append as we want it to.
But we want to distinguish a new store creation from a truncation due
to bad version.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
If something goes (fatally) wrong, we won't add it to the store.
This reveals a latent bug in routing_add_channel_announcement() and
friends, which did a take() on msg they don't own. TAKES means
that it will take ownership IF the caller requests, not an unconditional
ownership transfer (which is an antipattern).
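A hedged sketch of the convention (assuming ccan/take and ccan/tal; the
function here is made up): callers opt in with take(), and the callee checks
taken() before assuming ownership.
    #include <ccan/take/take.h>
    #include <ccan/tal/str/str.h>
    #include <ccan/tal/tal.h>
    /* Made-up example: only steal 'msg' if the caller passed take(msg);
     * otherwise copy, leaving the caller's object alone. */
    static const char *remember_msg(const tal_t *ctx, const char *msg)
    {
            if (taken(msg))
                    return tal_steal(ctx, msg);
            return tal_strdup(ctx, msg);
    }
So remember_msg(ctx, take(tmp)) hands tmp over, while remember_msg(ctx, tmp)
leaves tmp with the caller.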
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We enter nodes in the map when we create channels, but those channels
could be local and unannounced. This triggered a failure in
test_gossip_persistence since the store truncated when it saw the
first thing was a node_announce.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Internally both payment and routing use 64-bit, but the interface
between them used 32-bit.
Since both components already support 64-bit we should use that.
In particular, the main daemon and subdaemons share the backtrace code,
with hooks for logging.
The daemon hook inserts the io_poll override, which means we no longer
need io_debug.[ch]. Though most daemons don't need it, they still link
against ccan/io, so it's harmless (suggested by @ZmnSCPxj).
This was tested manually to make sure we get backtraces still.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
If we only remember the actions that added channels then we'd restore them when
re-reading the gossip_store, so put a tombstone in there to remember to delete
it. These will be cleared upon re-writing the store since the announcements won't
be written anymore.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
This was a tricky one to find: it turns out that some nodes send
node_announcements even if they don't have a channel announced yet. If they are
a peer and the channel is currently being verified, then we'll have a local
channel in the network view and hence accept the node_announcement; but when
replaying, the node_announcement will be replayed and we won't have a channel
yet. This just skips node_announcements, which is always safe.
Reported-by: @laszlohanyecz
Signed-off-by: Christian Decker <decker.christian@gmail.com>
This now works because we no longer call out to masterd or bitcoind to verify
the channels. It's also rather quick and silent so we can just process all
stored messages until we're done.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
Messages from peers and messages from the gossip_store now have completely
different entrypoints, so we don't need to trace their origin around the message
handling code any longer.
This stores and reads the channel_announcements in the wrapping message which
allows us to store associated data with the raw channel_announcements.
The gossip_store applies the channel_announcement directly, but it also returns
it, and it gets discarded as a duplicate. In the next commit we'll have
gossip_store apply all changes, bypassing verification, so the duplication is
only temporary.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
Since we now store additional data along with the original messages they exceed
the length of the peer wire protocol messages.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
If we have a non-empty file and the version doesn't match, then we truncate and
write our own version. If the file is empty we write our version and the
truncate becomes a no-op.
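Under the assumption of a one-byte version at the start of the file (the real
header may differ), the open logic is roughly:
    #include <fcntl.h>
    #include <unistd.h>
    #define STORE_VERSION_SKETCH 1 /* illustrative version number */
    static int gossip_store_open_sketch(const char *filename)
    {
            unsigned char v = 0;
            int fd = open(filename, O_RDWR | O_APPEND | O_CREAT, 0600);
            if (fd < 0)
                    return -1;
            if (read(fd, &v, 1) == 1 && v == STORE_VERSION_SKETCH)
                    return fd; /* existing store with a version we know */
            /* Bad version or empty file: truncate (a no-op when empty)
             * and write our own version; O_APPEND writes land at the end. */
            v = STORE_VERSION_SKETCH;
            if (ftruncate(fd, 0) != 0 || write(fd, &v, 1) != 1) {
                    close(fd);
                    return -1;
            }
            return fd;
    }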
Signed-off-by: Christian Decker <decker.christian@gmail.com>
Since we may want to extend the on-disk format by adding custom information we
may as well just go the extra mile and reuse the serialization primitives we
already have.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
Moves any modifications based on an incoming gossip message into its own
function separate from the message verification. This allows us to skip
verification when reading messages from a trusted source, e.g., the
gossip_store, speeding up the gossip replay.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
When we read from the gossip_store we set store=false so that we don't duplicate
messages in the store.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
As proposed by @rustyrussell this makes it a bit easier to truncate and sync on
read errors.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
We will be replaying gossip messages from the gossip_store soon. This means that
not all messages originate from a peer, so we move the queuing of error messages
up into the peer message handler.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
If we're going to simply take() a pointer, don't allocate it off a random
object. Using NULL makes our intent clear, particularly with allocating
packets we're going to take() onto a queue.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Now it just returns true if it queued something. This allows it
to queue multiple packets, and lets it share code paths with other code
in future patches.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
As we add more features, the current code is insufficient.
1. Keep an array of single feature bits, for easy switching on and off.
2. Create feature_offered() which checks for both compulsory and optional
variants.
3. Invert requires_unsupported_features() and unsupported_features()
which tend to be double-negative, all_supported_features() and
features_supported().
4. Move single feature definition from wire/peer_wire.h to common/features.h.
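A sketch of point 2 (BOLT-style bitfield where bit 0 is the least-significant
bit of the last byte; the helper names are not the real ones):
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>
    static bool feature_bit_set(const uint8_t *features, size_t len, size_t bit)
    {
            if (bit / 8 >= len)
                    return false;
            /* Bit 0 is the least-significant bit of the last byte. */
            return (features[len - 1 - bit / 8] >> (bit % 8)) & 1;
    }
    /* "Offered" means either the compulsory (even) bit or the optional
     * (even + 1) bit is set. */
    static bool feature_offered_sketch(const uint8_t *features, size_t len,
                                       size_t compulsory_bit)
    {
            return feature_bit_set(features, len, compulsory_bit)
                    || feature_bit_set(features, len, compulsory_bit + 1);
    }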
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We currently keep two copies; one in the broadcast structure to send
in order, and one in the routing information. Since we already keep
the broadcast index in the routing information, use that.
Conveniently, a zero index is the same as the old NULL test.
Rename struct node's announcement_idx to node_announce_msgidx to
make it match the other users.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We tal_dup_arr() it, which does take(). Make it const in the structure;
the tal_dup_arr() removes the const, so it compiles without it, but it's
misleading.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We only access via index. We do, however, want to clean up when we
delete nodes and channels, so we tie lifetimes to that. This leads
us to put the index into 'struct queued_message'.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
1. Make queue_peer_msg() use both if branches, as both are equally likely.
2. Remove redundant *scid = NULL in handle_channel_announcement.
3. Log failing pending channel_updates.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
As per BOLT #7.
We don't do this for channel_updates which are queued because the
channel_announcement is pending, though.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
If the channel is pending, we queue the node_announcement, and if the channel
is OK we re-call process_node_announcement. Make sure that second call
won't fail if the first succeeded.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We already have 'struct node', so rename 'struct routing_channel' to
'struct chan', and 'struct node_connection' to 'struct half_chan'.
Other minor changes:
1. rstate->channels -> rstate->chanmap.
2. 'connections' -> 'half'.
3. connection_to -> half_chan_to
4. connection_from -> half_chan_from
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
The containing `struct routing_channel` already has src and dst, so
remove them. However, the channel_update msgidx does belong in
`struct node_connection` along with the channel_update.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Returning the separate first routing_channel was a weird API: just
return the entire array. Sure, we have to treat the first node a bit
differently (because we don't charge ourselves fees), but it's still
simpler.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
To remove the redundant fields in `struct node_connection` (ie. 'src'
and 'dst' pointers) we need to deal with `struct routing_channel`.
This means we get a series of channels, from which the direction is
implied, so it's a bit more complex to decode. We add a helper
`other_node` to help with this, and since we're the only user of
`connection_to` we change that function to return the index.
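With illustrative types, the helper is just an index trick on the key-ordered
node pair:
    #include <assert.h>
    struct rnode_sketch;
    struct rchan_sketch {
            struct rnode_sketch *nodes[2]; /* kept in node-key order */
    };
    /* Given one end of the channel, return the other. */
    static struct rnode_sketch *other_node(const struct rchan_sketch *chan,
                                           const struct rnode_sketch *us)
    {
            assert(chan->nodes[0] == us || chan->nodes[1] == us);
            return chan->nodes[chan->nodes[0] == us ? 1 : 0];
    }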
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Failure and pruning were the two places where a node_connection could
be freed; now they both deal with entire channels, we can remove the
NULL checks, and the destructor.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We discarded it; we should populate it. The comment is wrong, since
local_add_channel() doesn't add public channels, and we test that above.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This is twice the 'update_channel_interval' we get handed.
We delete the non-existent channel_add_connection and delete_connection
declarations from the header too.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We currently give them a free pass. The simplest fix is to give them
an old timestamp on initialization.
We still skip unannounced channels, on the assumption that they're
ours. And we set the last_update_timestamp to -1 when we convert to
gossip_getchannels_entry to indicate no update.
This breaks the DEVELOPER=1 pruning test, since we hardcode the 1
week timeout. That's fixed in the next patch.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We make new_routing_channel() populate both connections
(active=false), so local_add_channel becomes simpler. We also
suppress listchannels output of active=false unannounced channels, to
avoid breaking tests (also, these are unusable, so it makes sense to
omit them).
It also seems the logic in add_channel_direction is legacy: a
channel_announce cannot replace the scid (that would be a different
channel), we don't allow duplicate announcements, and the announcement
is never NULL.
And since we disallow repeated channel_announce already, I believe
'forward' is always true, greatly simplifying the logic in
handle_pending_cannouncement.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This makes 'routing_channel' the primary object in the system; it can have
one or two 'node_connection's attached, and points to two nodes.
The nodes are freed when no more routing_channel refer to them. The
routing_channel are freed when they contain no more 'node_connection'.
This fixes #1072, which I surmise was caused by a dangling
routing_channel after pruning.
Each node contains a single array of 'routing_channel's, not one for
each direction. The 'routing_channel' itself orders nodes in key
order (conveniently the index is equal to the direction flag we use),
and 'node_connection' with source in the same order.
There are helpers to assist with common questions like "which
'node_connection' leads out of this node?".
There are now two ways to find a channel:
1. Direct scid lookup via rstate->channels map.
2. Node key lookup, followed by channel traversal.
Several FIXMEs are inserted for where we can now do things more optimally.
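The shape described above, with made-up field names (the real structs differ):
    #include <stdbool.h>
    struct sketch_chan;
    struct sketch_node {
            unsigned char id[33];          /* node pubkey (DER) */
            struct sketch_chan **channels; /* every channel touching this node */
    };
    struct sketch_half {
            bool active;
    };
    struct sketch_chan {
            struct sketch_node *nodes[2];  /* nodes[0] has the lesser key */
            struct sketch_half half[2];    /* half[i] flows out of nodes[i] */
    };
    /* The index of 'n' in the channel doubles as the direction flag. */
    static int direction_sketch(const struct sketch_chan *c,
                                const struct sketch_node *n)
    {
            return c->nodes[1] == n; /* 0 or 1 */
    }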
Fixes: #1072
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We're going to make it a first-class citizen, and pending routing_channel
are not real ones (in particular, we don't want to create pending nodes).
We had a linked list called rstate->pending_cannouncement which we didn't
actually use, so put that back for now and add a FIXME to use a faster
data structure.
We need to check that list now in handle_channel_update, but we never
have a real routing_channel and a pending, unless the routing_channel
isn't public.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This hook is called when the queue is empty; we should only send gossip
according to the gossip timer. We're currently dribbling it out after
every message, in violation of the spec.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Now that we have them, let's use them. I missed one case deliberately, since
that causes merge conflicts when I replace it in a following patch.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
I'm not completely convinced that we can't end up removing pending things,
so change asserts to simple returns.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
If we make destroy_node() remove itself from the map, then we simply
need to free it.
We can batch the frees (as we need) simply by reparenting all the pruned
nodes onto a single temporary parent, then freeing it, relying on tal's
internal datastructures.
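Assuming ccan/tal, the batching trick amounts to (node type elided; any tal
objects with destructors work):
    #include <ccan/tal/tal.h>
    #include <stddef.h>
    /* Reparent each pruned node onto a throwaway context, then free that
     * context: every node's destructor (which removes it from the map)
     * runs as part of the single tal_free(). */
    static void free_pruned_nodes(void **pruned, size_t n)
    {
            void *tmp = tal(NULL, char);
            for (size_t i = 0; i < n; i++)
                    tal_steal(tmp, pruned[i]);
            tal_free(tmp);
    }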
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
get_connection_by_scid() and update_to_pending() both do the same
lookup which we did in handle_channel_update().
Do the lookup once, and simplify the others.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>