We used to have a `struct chan` while we were waiting for an update; now we
keep that internally. So a `struct chan` without a channel_announcement
in the store is private, and one with an announcement is public.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Reload them from disk if the user runs `listnodes`.
MCP results from 5 runs, min-max(mean +/- stddev):
store_load_msec:35390-38659(37336.4+/-1.3e+03)
vsz_kb:1780516
store_rewrite_sec:13.800000-16.800000(15.02+/-0.98)
listnodes_sec:1.280000-1.530000(1.382+/-0.096)
listchannels_sec:28.700000-30.440000(29.34+/-0.68)
routing_sec:30.120000-31.080000(30.526+/-0.35)
peer_write_all_sec:65.910000-76.850000(69.462+/-4.1)
MCP notable changes from previous patch (>1 stddev):
-vsz_kb:1792996
+vsz_kb:1780516
-listnodes_sec:1.030000-1.120000(1.068+/-0.032)
+listnodes_sec:1.280000-1.530000(1.382+/-0.096)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We currently create a struct chan when we receive a `channel_announcement`,
but we can only broadcast once we have a `channel_update` (since that
provides the timestamp).
This means a `struct chan` can be in a weird state where it exists but
is unusable, and it also means we need to keep the channel_announcement
message around until an update arrives, so we can put it in the
gossip_store.
Instead, keep track of these "unupdated" channels separately, and check
for them in all the places we search for a specific channel to update.
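
A minimal sketch of that shape, with entirely hypothetical names (the
real structures live in gossipd): unupdated channels sit in a small
side table keyed by short_channel_id, and get promoted to a real
`struct chan` when their first channel_update arrives.

    #include <stddef.h>
    #include <stdint.h>

    struct unupdated_channel {
            uint64_t scid;                  /* short_channel_id */
            uint8_t *announcement;          /* raw channel_announcement */
            struct unupdated_channel *next; /* chained bucket */
    };

    #define UC_BUCKETS 128
    static struct unupdated_channel *uc_tbl[UC_BUCKETS];

    static struct unupdated_channel **uc_slot(uint64_t scid)
    {
            struct unupdated_channel **p = &uc_tbl[scid % UC_BUCKETS];
            while (*p && (*p)->scid != scid)
                    p = &(*p)->next;
            return p;
    }

    /* Called when a channel_update arrives: if the channel was pending,
     * unlink it so the caller can create the real struct chan and write
     * both messages to the gossip_store. */
    static struct unupdated_channel *uc_pop(uint64_t scid)
    {
            struct unupdated_channel **p = uc_slot(scid), *uc = *p;
            if (uc)
                    *p = uc->next;
            return uc;
    }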
MCP results from 5 runs, min-max(mean +/- stddev):
store_load_msec:30640-33236(32202+/-8.7e+02)
vsz_kb:1812956
store_rewrite_sec:13.410000-16.970000(14.438+/-1.3)
listnodes_sec:0.590000-0.660000(0.62+/-0.033)
listchannels_sec:28.140000-29.560000(28.816+/-0.56)
routing_sec:29.530000-32.590000(30.352+/-1.1)
peer_write_all_sec:60.380000-61.320000(60.836+/-0.37)
MCP notable changes from previous patch (>1 stddev):
-vsz_kb:1812904
+vsz_kb:1812956
-store_rewrite_sec:21.390000-27.070000(23.596+/-2.4)
+store_rewrite_sec:13.410000-16.970000(14.438+/-1.3)
-listnodes_sec:1.120000-1.230000(1.176+/-0.044)
+listnodes_sec:0.590000-0.660000(0.62+/-0.033)
-listchannels_sec:38.900000-50.580000(44.716+/-3.9)
+listchannels_sec:28.140000-29.560000(28.816+/-0.56)
-routing_sec:45.080000-48.160000(46.814+/-1.1)
+routing_sec:29.530000-32.590000(30.352+/-1.1)
-peer_write_all_sec:58.780000-87.150000(72.278+/-9.7)
+peer_write_all_sec:60.380000-61.320000(60.836+/-0.37)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Instead of an arbitrary counter, we can use the file offset for our
partial ordering, removing a field. It takes some care when we compact
the store, however, as this field changes.
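
A rough sketch of the idea (names hypothetical, not the real code): the
ordering key is simply the message's byte offset in gossip_store, so the
separate counter field disappears. The catch mentioned above is that
compaction rewrites the file, so every key changes and the ordering map
must be rebuilt from the rewritten store.

    #include <stdint.h>

    struct broadcastable {
            /* Byte offset of this message in gossip_store; doubles as
             * the partial-ordering index.  0 == not in the store yet. */
            uint64_t store_offset;
    };

    static int broadcast_cmp(const struct broadcastable *a,
                             const struct broadcastable *b)
    {
            /* Written earlier (lower offset) == broadcast earlier. */
            return (a->store_offset > b->store_offset)
                 - (a->store_offset < b->store_offset);
    }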
MCP results from 5 runs, min-max(mean +/- stddev):
store_load_msec:34271-35283(34789.6+/-3.3e+02)
vsz_kb:2288784
store_rewrite_sec:38.060000-39.130000(38.426+/-0.39)
listnodes_sec:0.750000-0.850000(0.794+/-0.042)
listchannels_sec:30.740000-31.760000(31.096+/-0.35)
routing_sec:29.600000-33.560000(30.472+/-1.5)
peer_write_all_sec:49.220000-52.690000(50.892+/-1.3)
MCP notable changes from previous patch (>1 stddev):
-store_load_msec:35685-38538(37090.4+/-9.1e+02)
+store_load_msec:34271-35283(34789.6+/-3.3e+02)
-vsz_kb:2288768
+vsz_kb:2288784
-peer_write_all_sec:51.140000-58.350000(55.69+/-2.4)
+peer_write_all_sec:49.220000-52.690000(50.892+/-1.3)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This is more compact, but also required once we replace the arbitrary
"index" with an actual offset into the gossip store. That will let us
remove the in-memory variants entirely.
MCP results from 5 runs, min-max(mean +/- stddev):
store_load_msec:35685-38538(37090.4+/-9.1e+02)
vsz_kb:2288768
store_rewrite_sec:35.530000-41.230000(37.904+/-2.3)
listnodes_sec:0.720000-0.810000(0.762+/-0.041)
listchannels_sec:30.750000-35.990000(32.704+/-2)
routing_sec:29.570000-34.010000(31.374+/-1.8)
peer_write_all_sec:51.140000-58.350000(55.69+/-2.4)
MCP notable changes from previous patch (>1 stddev):
-vsz_kb:2621808
+vsz_kb:2288768
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We used an s64 so we could use -1 and save a check, but that's just
silly, as we have adjacent non-u64 fields: it wastes 7 bytes per node
and 16 per channel.
Interestingly, this seemed to make us a little slower for some reason.
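
To illustrate the alignment cost with toy structs (field names are
hypothetical, and the exact savings depend on the real layouts):

    #include <stdint.h>

    struct node_before {
            int64_t  bcast_index;   /* s64 so -1 can mean "none" */
            uint8_t  flags;         /* 7 bytes of tail padding   */
    };                              /* sizeof == 16 on 64-bit    */

    struct node_after {
            uint32_t bcast_index;   /* 0 means "none" instead    */
            uint8_t  flags;         /* 3 bytes of tail padding   */
    };                              /* sizeof == 8 on 64-bit     */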
MCP results from 5 runs, min-max(mean +/- stddev):
store_load_msec:35569-38776(37169.8+/-1.2e+03)
vsz_kb:2621808
store_rewrite_sec:35.870000-40.290000(38.14+/-1.6)
listnodes_sec:0.740000-0.800000(0.768+/-0.023)
listchannels_sec:29.820000-32.730000(30.972+/-0.99)
routing_sec:30.110000-30.590000(30.346+/-0.18)
peer_write_all_sec:52.420000-59.160000(54.692+/-2.5)
MCP notable changes from previous patch (>1 stddev):
-store_load_msec:32825-36365(34615.6+/-1.1e+03)
+store_load_msec:35569-38776(37169.8+/-1.2e+03)
-vsz_kb:2637488
+vsz_kb:2621808
-store_rewrite_sec:35.150000-36.200000(35.59+/-0.4)
+store_rewrite_sec:35.870000-40.290000(38.14+/-1.6)
-listnodes_sec:0.590000-0.710000(0.682+/-0.046)
+listnodes_sec:0.740000-0.800000(0.768+/-0.023)
-peer_write_all_sec:49.020000-52.890000(50.376+/-1.5)
+peer_write_all_sec:52.420000-59.160000(54.692+/-2.5)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
I tried to just do gossipd, but it was uncontainable, so this ended up being
a complete sweep.
We didn't get much space saving in gossipd, even though we should save
24 bytes per node.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Allocating a htable is overkill for most nodes; we can fit 11 pointers
in the same space (10, since we use 1 to indicate we're using an array).
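
A sketch of the space argument, assuming a CCAN-style htable of roughly
eleven machine words (all names here are stand-ins):

    #include <stddef.h>

    struct chan;
    struct htable_standin { size_t words[11]; }; /* ~htable size */

    #define NUM_INLINE 11

    union channel_set {
            struct chan *arr[NUM_INLINE];   /* small node: inline    */
            struct htable_standin ht;       /* big node: real htable */
    };
    /* One of the 11 slots is reserved as a tag saying which form is
     * in use, which is why only 10 channels fit inline. */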
MCP results from 5 runs, min-max(mean +/- stddev):
store_load_msec:45947-47016(46683.4+/-4e+02)
vsz_kb:2639240
store_rewrite_sec:46.950000-49.830000(48.048+/-0.95)
listnodes_sec:1.090000-1.350000(1.196+/-0.095)
listchannels_sec:48.960000-57.640000(53.358+/-2.8)
routing_sec:29.990000-33.880000(31.088+/-1.4)
peer_write_all_sec:49.360000-53.210000(51.338+/-1.4)
MCP notable changes from previous patch (>1 stddev):
-vsz_kb:2641316
+vsz_kb:2639240
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Makes the next step easier.
MCP results from 5 runs, min-max(mean +/- stddev):
store_load_msec:45791-46917(46330.4+/-3.6e+02)
vsz_kb:2641316
store_rewrite_sec:47.040000-48.720000(47.684+/-0.57)
listnodes_sec:1.140000-1.340000(1.2+/-0.072)
listchannels_sec:50.970000-54.250000(52.698+/-1.3)
routing_sec:29.950000-31.010000(30.332+/-0.37)
peer_write_all_sec:51.570000-52.970000(52.1+/-0.54)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This lets us benchmark without a valid blockchain.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
fixup! gossipd: dev option to allow unknown channels.
Suggested-by: @cdecker
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
For giant nodes, it seems we spend a lot of time memmoving this array.
Normally we'd go for a linked list, but that's actually hard: each
channel has two nodes, so needs two embedded list pointers, and when
iterating there's no good way to figure out which embedded pointer
we'd be using.
So we (ab)use htable; we don't really need an index, but it's good for
cache-friendly iteration (our main operation). We can actually change
to a hybrid later to avoid the extra allocation for small nodes.
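
A toy stand-in for what htable buys us here (not the real code; CCAN's
htable is more sophisticated): entries live in one flat allocation, so
iteration is a linear, cache-friendly scan, and adding never memmoves
existing entries.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    struct ptrset {
            size_t slots;           /* power of two */
            const void **tbl;       /* NULL == empty slot */
    };

    static bool ptrset_add(struct ptrset *s, const void *p)
    {
            size_t mask = s->slots - 1;
            size_t i = (((uintptr_t)p) >> 4) & mask; /* crude hash */

            for (size_t n = 0; n < s->slots; n++, i = (i + 1) & mask) {
                    if (!s->tbl[i]) {
                            s->tbl[i] = p;
                            return true;
                    }
            }
            return false;           /* full: real code would resize */
    }

    /* Iteration is just a scan of the flat array. */
    #define ptrset_for_each(s, idx) \
            for ((idx) = 0; (idx) < (s)->slots; (idx)++) \
                    if ((s)->tbl[(idx)])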
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
If we asked `bitcoind` for a txout and it failed, we were not storing
that information anywhere, meaning that the next time we saw the channel
announcement we'd reach out to `lightningd` and `bitcoind` again, just
to see it fail again. This adds an in-memory cache for these failures so
we can just ignore them the next time around.
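
A sketch of such a negative cache, with hypothetical names: since a
false miss merely costs one redundant lookup, even a tiny direct-mapped
table suffices, and collisions can simply evict.

    #include <stdbool.h>
    #include <stdint.h>

    #define TXF_SLOTS 4096

    /* Direct-mapped cache of failed short_channel_ids (this sketch
     * assumes scids are never 0, so 0 can mean "empty"). */
    static uint64_t txout_failures[TXF_SLOTS];

    static void txout_failure_remember(uint64_t scid)
    {
            txout_failures[scid % TXF_SLOTS] = scid;
    }

    static bool txout_failure_known(uint64_t scid)
    {
            return txout_failures[scid % TXF_SLOTS] == scid;
    }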
Fixes #2503
Signed-off-by: Christian Decker <decker.christian@gmail.com>
As a side-effect of using amount_msat in gossipd/routing.c, we explicitly
handle overflows and don't need to pre-prune ridiculous-fee channels.
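
In the style of common/amount.h (this standalone version is a sketch;
the real helpers live there), addition reports overflow instead of
wrapping:

    #include <stdbool.h>
    #include <stdint.h>

    struct amount_msat { uint64_t millisatoshis; };

    static bool amount_msat_add(struct amount_msat *val,
                                struct amount_msat a, struct amount_msat b)
    {
            if (a.millisatoshis > UINT64_MAX - b.millisatoshis)
                    return false;           /* would overflow */
            val->millisatoshis = a.millisatoshis + b.millisatoshis;
            return true;
    }

A channel whose fee arithmetic overflows is then simply skipped during
routing, rather than pre-pruned.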
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We have a seed, which is for (future!) unit testing consistency. This
makes it change every time, so our pay_direct_test is more useful.
I tried restarting the node around the loop, but it tended to fail to
rebind to the same port for some reason.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
As a general rule, lightningd shouldn't parse user packets. We move the
parsing into gossipd, and have it respond only to permanent failures.
Note that we should *not* unconditionally remove a channel on
WIRE_INVALID_ONION_HMAC, as this can be triggered (and we do!) by
feeding sendpay a route with an incorrect pubkey.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Currently only used by gossipd for channel elimination.
Also print them in canonical form (/[01]), so tests need to be
changed.
Suggested-by: @cdecker
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We keep a chain_hash in struct daemon, because otherwise we end up with
`&peer->daemon->rstate->chainparams->genesis_blockhash` which is a bit
ridiculous.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This avoids some very ugly switch() statements which mixed the two,
but we also take the chance to rename 'towire_gossip_' to
'towire_gossipd_' for those inter-daemon messages; they're messages to
gossipd, not gossip messages.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Messages from a peer may be invalid in many ways: we send an error
packet in that case. Rather than internally calling peer_error,
however, we make it explicit by having the handle_ functions return
NULL or an error packet.
Messages from the daemon itself should not be invalid: we log an error
and close the fd to them if it is. Previously we logged an error but
didn't kill them.
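
The convention, as a compilable sketch with stub declarations (these
are stand-ins, not the real wire helpers):

    #include <stddef.h>

    typedef unsigned char u8;
    struct peer;

    /* Stand-ins for the real parsing/error helpers. */
    extern int parse_channel_update(const u8 *msg);
    extern u8 *make_error_pkt(struct peer *peer, const char *why);

    /* Returns NULL on success, or an error packet for the caller to
     * send to the peer. */
    static u8 *handle_channel_update(struct peer *peer, const u8 *msg)
    {
            if (!parse_channel_update(msg))
                    return make_error_pkt(peer, "Malformed channel_update");
            /* ...apply the update... */
            return NULL;
    }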
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
If another channel has set the optional `htlc_maximum_msat` field,
we should correctly parse that field and respect it when drawing up
routes for payments.
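
The routing-side check amounts to something like this (field names are
illustrative; per BOLT 7, message_flags bit 0 signals the field's
presence):

    #include <stdbool.h>
    #include <stdint.h>

    struct half_chan {
            bool htlc_maximum_set;      /* message_flags bit 0 */
            uint64_t htlc_maximum_msat;
    };

    static bool can_carry(const struct half_chan *hc, uint64_t amt_msat)
    {
            /* Respect the advertised per-HTLC ceiling, if any. */
            return !hc->htlc_maximum_set
                    || amt_msat <= hc->htlc_maximum_msat;
    }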
Also use ARRAY_SIZE() everywhere, which breaks the compile if it's not
a literal array, plus assertions that the lengths match.
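
For reference, this is the classic compile-time-checked form (as in
CCAN's array_size module, relying on GCC/Clang extensions): passing a
pointer instead of a real array fails to build.

    #define BUILD_ASSERT_OR_ZERO(cond) \
            (sizeof(char [1 - 2*!(cond)]) - 1)

    /* 0 if arr is a real array; a build error if it's a pointer. */
    #define _array_size_chk(arr) \
            BUILD_ASSERT_OR_ZERO( \
                    !__builtin_types_compatible_p(typeof(arr), \
                                                  typeof(&(arr)[0])))

    #define ARRAY_SIZE(arr) \
            (sizeof(arr) / sizeof((arr)[0]) + _array_size_chk(arr))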
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We have a lot of infrastructure to delay local channel_updates to
avoid spamming on each peer reconnect; we had to keep track of
pending ones, though, in case we needed the very latest for sending an
error when failing an HTLC.
Instead, it's far simpler to set the local_disabled flag on a channel
when we disconnect, but only send a disabling channel_update if we
actually fail an HTLC.
Note: handle_channel_update() TAKES update (due to tal_arr_dup), but we
didn't use that before. Now we do, so add the annotation.
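
The lazy-disable flow, sketched with hypothetical names:

    #include <stdbool.h>

    struct chan;
    extern void queue_disabling_update(struct chan *chan);

    struct local_chan_state {
            struct chan *chan;
            bool local_disabled;        /* set on peer disconnect */
            bool disable_update_sent;
    };

    static void peer_disconnected(struct local_chan_state *s)
    {
            s->local_disabled = true;   /* no channel_update yet */
    }

    static void htlc_failed(struct local_chan_state *s)
    {
            /* Only now is a disabling channel_update worth sending. */
            if (s->local_disabled && !s->disable_update_sent) {
                    queue_disabling_update(s->chan);
                    s->disable_update_sent = true;
            }
    }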
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
BOLT 7's been updated to split the flags field in `channel_update`
into two: `channel_flags` and `message_flags`. This changeset does the
minimal necessary to get to building with the new flags.
If we receive a channel_announce but not a channel_update, we store the announce
but don't put it in the broadcast map.
When we delete a channel, we check if the node_announcement broadcast
now precedes all channel_announcements, and if so, we move it to the
end of the map. However, with a channel_announcement at index '0',
this test fails.
This is at least one potential cause of the node map getting out of order.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
These happen after we compact the store; every log I've seen of a
restart on a real node has a message about truncating the store,
because node_announcements predate channel_announcements.
I extracted one such case from testnet, and reduced it to test here.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
As pointed out by @rustyrussell the capacity is now always defined, so we can
fold that into the construction of the channel itself.
Reported-by: Rusty Russell <@rustyrussell>
Signed-off-by: Christian Decker <@cdecker>
We used to just manually set ROUTING_FLAGS_DISABLED, but that means we
then suppressed the real channel_update because we thought it was a
duplicate!
So use a local flag: set it for the channel when the peer disconnects,
and clear it when channeld sends a local update.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We wrap it in 'struct pubkey' for typesafety and consistency, and the
next patch takes advantage of that when we move to pubkey_eq.
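
The pattern, reduced to a sketch (the real struct pubkey wraps
libsecp256k1's type; raw bytes here keep it self-contained): a
one-member struct can't silently be mixed up with any other byte blob,
and comparisons go through a typed helper.

    #include <stdbool.h>
    #include <string.h>

    struct pubkey {
            unsigned char data[33];     /* compressed encoding */
    };

    static bool pubkey_eq(const struct pubkey *a, const struct pubkey *b)
    {
            return memcmp(a->data, b->data, sizeof(a->data)) == 0;
    }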
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This lets us detect if a node announce precedes a channel announce once
we delete the node announcement.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
In general, we need to only publish node announcements after
publishing channel announcements, though we can accept node
announcements as soon as we see channel announcements. So we keep a
flag for those node_announcement which haven't been broadcast yet.
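
Roughly, with hypothetical field names:

    #include <stdbool.h>

    struct node {
            unsigned char *node_announcement;  /* accepted, maybe parked */
            bool node_announcement_public;     /* in broadcast map?      */
    };

    /* Once this node's first channel_announcement is broadcast, the
     * parked node_announcement can finally be published too. */
    static void channel_announcement_broadcast(struct node *node)
    {
            if (node->node_announcement
                && !node->node_announcement_public) {
                    /* ...add node_announcement to the broadcast map... */
                    node->node_announcement_public = true;
            }
    }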
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
handle_pending_cannouncement might not actually add the announcement,
as it could be waiting for a channel_update. We need to wait for
the actual announcement before considering announcing our node.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
@cdecker points out that in test_forward, where we manually create a route,
we get an error back which contains an update for an unknown channel.
We should still note this, but it's not an error for testing.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>