Tor v2 hidden services have been deprecated for a while:
https://blog.torproject.org/v2-deprecation-timeline .
This prevents users from setting them in the configuration and from
connecting to them, while still letting us parse them for gossip.
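For illustration only (the helper name is made up; the real parsing
lives elsewhere), a v2 onion address can be told apart from a v3 one
purely by the length of its label:

    #include <stdbool.h>
    #include <string.h>

    /* v2 onion labels are 16 base32 chars; v3 labels are 56. */
    static bool is_deprecated_tor_v2(const char *addr)
    {
            const char *suffix = ".onion";
            size_t len = strlen(addr), slen = strlen(suffix);

            if (len <= slen || strcmp(addr + len - slen, suffix) != 0)
                    return false;
            return len - slen == 16;
    }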
Changelog-Deprecated: lightningd: v2 Tor addresses. Use v3. See https://blog.torproject.org/v2-deprecation-timeline.
Signed-off-by: Antoine Poinsot <darosior@protonmail.com>
Otherwise, we might find an address other than the one given, and the
user might wrongly think the address they gave was the one that worked.
Fixes: #4185
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Added: JSON-RPC: `connect` returns `address` it actually connected to
We were getting bad gossip because some nodes discarded the channel
announcement for being in the future. This is because the node was, at
that time, still below the confirmation height. It would then discard the
follow-up messages because they were not preceded by an announcement, and
complain about that.
We used to create the entire reply, then if it was too big, split it in
half and retry.
Now that the main network is larger, this always happens with a full
request, which is inefficient.
Instead, produce a reply assuming no compression, then compress as a
bonus. This is simpler and more efficient, at the cost of sending more
packets.
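A sketch of the "compress as a bonus" step, assuming zlib (the helper
name is made up): build the reply sized for the uncompressed SCID
encoding, then keep a compressed copy only if it actually comes out
smaller.

    #include <stdlib.h>
    #include <zlib.h>

    static unsigned char *maybe_compress(const unsigned char *raw,
                                         size_t rawlen, size_t *outlen)
    {
            uLongf clen = compressBound(rawlen);
            unsigned char *c = malloc(clen);

            if (c && compress(c, &clen, raw, rawlen) == Z_OK && clen < rawlen) {
                    *outlen = clen;
                    return c;       /* compressed encoding is the bonus */
            }
            free(c);
            *outlen = rawlen;
            return NULL;            /* caller sends the uncompressed encoding */
    }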
I also renamed an internal dev var to make it clearer.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Instead of a boutique message, use a "real" channel_announcement for
private channels (with fake sigs and pubkeys). This makes it far
easier for gossmap to handle local channels.
Backwards compatible update, since we update old stores.
We also fix devtools/dump-gossipstore to know about the tombstone markers.
Since we increment our channel_announce count for local channels now,
the stats in the tests changed too.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We really are just interested in their on-chain footprint, so actually
starting the nodes is pointless overhead, and caused a lot of flakiness due to
the output ordering sometimes not matching up. We now just use the `bitcoind`
API to fund, sign and send a raw transaction that matches the stashed gossip
messages.
With a feerate of 7500perkw and subtracting 660 sats for anchors, a
20,000 sat channel has a capacity of about 9,800 sat, below our default minimum:
You gave bad parameters: channel capacity with funding 20000sat, reserves 546sat/546sat, max_htlc_value_in_flight_msat is 18446744073709551615msat, channel capacity is 9818sat, which is below 10000000msat
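For the record, the 9818 figure falls out like this (assuming the
1124-weight anchor-style commitment transaction):

       20000 sat   funding
     -  1092 sat   reserves (546 + 546)
     -   660 sat   anchor outputs (2 x 330)
     -  8430 sat   commitment fee (1124 weight * 7500 perkw / 1000)
     =  9818 sat   capacity, below the 10000 sat default minimum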
So bump channel amounts.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
I started replacing all get_node() calls, but got bored, so then just did the
tests which call get_node() 3 times or more.
Ends up not making a measurable speed difference, but it does make some
things neater and more standard.
Times with SLOW_MACHINE=1 (given that's how Travis tests):
Time before (non-valgrind):
393 sec (had 3 failures?)
Time after (non-valgrind):
410 sec
Time before (valgrind):
890 seconds (had 2 failures)
Time after (valgrind):
892 sec
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
The previous patch changed the gossip_store, but in a trivial way.
The next patch will implement upgrading, so this is the test.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
The new `keysend` plugin modifies the node features that we send to
peers. This commit breaks out the 'expected_features' we use for tests
so they can account for this difference.
This restriction was removed from the spec as of
86c2ebcc5973a4133d3ce4d80ae1c203061a1646.
We also fix up some strange formatting in that part of the documentation.
Changelog-Changed: We now announce multiple addresses of the same type, if given.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Thanks to @t-bast, who made this possible by interop testing with Eclair!
Changelog-Added: Protocol: can now send and receive TLV-style onion messages.
Changelog-Added: Protocol: can now send and receive BOLT11 payment_secrets.
Changelog-Added: Protocol: can now receive basic multi-part payments.
Changelog-Added: RPC: low-level commands sendpay and waitsendpay can now be used to manually send multi-part payments.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We were hardcoding the chainparams->chain_hash, which caused the query to
return an empty result. By parametrizing the test we can make it work on
elements.
Changelog-Changed: .lightningd plugins and files moved into <network>/ subdir
Changelog-Changed: WARNING: If you don't have a config file, you may now need to specify the network to lightning-cli
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Spaces just make life a little harder for everyone.
(Plus, fix documentation: it's 'jsonrpc' not 'json' subsystem).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This simplifies our tests, too, since we don't need a magic option to
enable io logging in subdaemons.
Note that test_bad_onion still takes too long, due to a separate minor
bug, so that's marked and left dev-only for now.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
1. Printed form is always "[<nodeid>-]<prefix>: <string>"
2. "jcon fd %i" becomes "jsonrpc #%i".
3. "jsonrpc" log is only used once, and is removed.
4. "database" log prefix is use for db accesses.
5. "lightningd(%i)" becomes simply "lightningd" without the pid.
6. The "lightningd_" prefix is stripped from subd log prefixes, and pid removed.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Changed: Logging: formatting made uniform: [NODEID-]SUBSYSTEM: MESSAGE
Changelog-Removed: `lightning_` prefixes removed from subdaemon names, including in listpeers `owner` field.
A log can have a default node_id, which can be overridden on a per-entry
basis. This changes the format of logging, so some tests need rework.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
It currently works because we inject it so fast that it's still doing the
txout lookup, but that's about to change.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We're about to change it so we always send our local messages, which
breaks this test. Add a new node which doesn't have any local
messages, so the test works correctly.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This is mainly an internal-only change, especially since we don't
offer any globalfeatures.
However, LND (as of its next release) will offer global features, and also
expects option_static_remotekey to be a *global* feature. So we send
our (merged) feature bitset as both global and local in init, and fold
those bitsets together when we get an init msg.
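Roughly what the fold might look like (hypothetical helper name, using
ccan's tal; feature bitfields are big-endian, so the OR is aligned from
the end of each array):

    #include <ccan/short_types/short_types.h>
    #include <ccan/tal/tal.h>

    static u8 *fold_features(const tal_t *ctx, const u8 *gf, const u8 *lf)
    {
            size_t glen = tal_count(gf), llen = tal_count(lf);
            size_t len = glen > llen ? glen : llen;
            u8 *merged = tal_arrz(ctx, u8, len);

            for (size_t i = 0; i < glen; i++)
                    merged[len - glen + i] |= gf[i];
            for (size_t i = 0; i < llen; i++)
                    merged[len - llen + i] |= lf[i];
            return merged;
    }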
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We completely rework test_node_reannounce: it assumes we always ask for
all gossip, and that assumption will be broken by future patches too.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
It relies on the fact that nodes don't do their own gossip queries.
Use devtools instead.
This revealed that the entire logic was broken! It just happened to work.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
I was seeing some accidental pruning under load / Travis, and in
particular we stopped accepting channel_updates because they were 103
seconds old. But making the timeout too long makes the prune test
untenable, so restore a separate flag that this test can use.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
It's generally clearer to have simple hardcoded numbers with an
#if DEVELOPER around them than apparent variables which aren't really
variables at all.
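For example (the names and the DEVELOPER override value here are
illustrative; two weeks is the usual gossip prune horizon):

    #if DEVELOPER
    #define GOSSIP_PRUNE_SECONDS(dev_fast_gossip) \
            ((dev_fast_gossip) ? 60 : 1209600 /* two weeks */)
    #else
    #define GOSSIP_PRUNE_SECONDS(dev_fast_gossip) 1209600 /* two weeks */
    #endif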
Interestingly, our pruning test was always kinda broken: we have to pass
two cycles, since l2 will refresh the channel once to avoid pruning.
Do the more obvious thing: cut the network in half and check that
l1 and l3 time out.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
If you send a message which simply changes timestamp and signature, we
drop it. You shouldn't be doing that, and the door to ignoring them
was opened by option_gossip_query_ex, which would allow clients to
ignore updates with the same checksum.
This is more aggressive at reducing spam messages, but we allow refreshes
(to be conservative, we allow them even when only halfway through the
refresh period).
I dropped the now-unnecessary sleep from test_gossip_pruning, too.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Make update_local_channel use a timer if it's too soon to make another
update.
1. Implement cupdate_different() which compares two updates (there is a
rough sketch after this list).
2. Make update_local_channel() take a single arg for timer usage.
3. Set timestamp of non-disable update back 5 minutes, so we can
always generate a disable update if we need to.
4. Make update_local_channel() itself do the "unchanged update" suppression.
5. Keep pointer to the current timer so we override any old updates with
a new one, to avoid a race.
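Roughly what "different" means for that suppression (a sketch only: the
offsets assume a type-prefixed BOLT7 channel_update, and the real helper
takes gossipd structures rather than raw buffers): two updates are
treated as the same if nothing but the signature and timestamp changed.

    #include <stdbool.h>
    #include <string.h>

    /* Offsets into a type-prefixed channel_update:
     * 0: type (2), 2: signature (64), 66: chain_hash (32),
     * 98: short_channel_id (8), 106: timestamp (4), 110: the rest. */
    static bool cupdate_different_sketch(const unsigned char *a, size_t alen,
                                         const unsigned char *b, size_t blen)
    {
            if (alen != blen || alen < 110)
                    return true;
            /* Type, chain_hash and short_channel_id must match... */
            if (memcmp(a, b, 2) != 0 || memcmp(a + 66, b + 66, 32 + 8) != 0)
                    return true;
            /* ...and so must everything after the timestamp
             * (flags, cltv delta, fees, htlc limits). */
            return memcmp(a + 110, b + 110, alen - 110) != 0;
    }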
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
During sync it is highly likely that we can coalesce multiple calls and share
results among them. We also report back failures for non-existing blocks early
on, so we don't run into issues with blocks that our bitcoind doesn't have
yet.
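A rough sketch of the coalescing idea (hypothetical structures, not the
real lightningd ones): keep one in-flight request per height, and let
later callers simply join it so they all share the same result.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdlib.h>

    struct filteredblock;                   /* opaque result type */

    struct fb_waiter {
            struct fb_waiter *next;
            void (*cb)(const struct filteredblock *, void *arg);
            void *arg;
    };

    /* One in-flight request per block height, plus everyone waiting on it. */
    struct fb_call {
            struct fb_call *next;
            uint32_t height;
            struct fb_waiter *waiters;
    };

    /* If a request for this height is already in flight, just join it;
     * otherwise the caller must kick off a new bitcoind lookup. */
    static bool join_inflight(struct fb_call *pending, uint32_t height,
                              void (*cb)(const struct filteredblock *, void *),
                              void *arg)
    {
            for (struct fb_call *c = pending; c; c = c->next) {
                    struct fb_waiter *w;
                    if (c->height != height)
                            continue;
                    w = malloc(sizeof(*w));
                    if (!w)
                            return false;
                    w->cb = cb;
                    w->arg = arg;
                    w->next = c->waiters;
                    c->waiters = w;
                    return true;
            }
            return false;
    }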
Signed-off-by: Christian Decker <decker.christian@gmail.com>
This was caused by us not checking against the max_blockheight, but rather the
min_blockheight, which can be negative with a newly created node. This is still
safe since we check for duplicates anyway in `wallet_filteredblock_add`.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
I don't remember ever seeing a bug which only showed up in VALGRIND=1 with developer
mode disabled, so don't test that, and spread out the other tests more evenly.
In addition, disable the worst-performing tests in DEVELOPER=0 mode.
Here are timings from my build machine: the worst 6 with DEVELOPER=0 VALGRIND=0
(the '-' lines), compared with the same tests with DEVELOPER=1 VALGRIND=1 (the '+' lines):
-452.42s call tests/test_pay.py::test_channel_spendable
+87.69s call tests/test_pay.py::test_channel_spendable
-335.66s call tests/test_gossip.py::test_gossip_store_compact_on_load
+47.41s call tests/test_gossip.py::test_gossip_store_compact_on_load
-332.07s call tests/test_connection.py::test_opening_tiny_channel
+89.71s call tests/test_connection.py::test_opening_tiny_channel
-331.97s call tests/test_pay.py::test_channel_spendable_large
+56.23s call tests/test_pay.py::test_channel_spendable_large
-305.28s call tests/test_invoices.py::test_invoice_routeboost
+37.57s call tests/test_invoices.py::test_invoice_routeboost
-284.28s call tests/test_plugin.py::test_htlc_accepted_hook_forward_restart
+49.12s call tests/test_plugin.py::test_htlc_accepted_hook_forward_restart
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
And clean up some dev ones which actually happen (mainly by calling
channel_fail_permanent which logs UNUSUAL, rather than
channel_internal_error which logs BROKEN).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Rewriting the gossip_store is much simpler when we don't have
any pointers into it, so add some simple offline compaction code
and disable the automatic compaction code.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
The crashes in #2750 are mostly caused by us trying to partially truncate
the store. The simplest fix for release is to discard the whole thing if
we detect a problem.
This is a workaround: it'd be far nicer to try to recover.
Fixes: #2750
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
It wasn't invalid due to a missing channel_update; it was in fact a
bad checksum due to a cut & paste bug. Fix that, and assert it's not
actually truncating.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
If something went wrong and there was an old one, we were
appending to it!
Reported-by: @SimonVrouwe
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We might have channel_announcements which have no channel_update: normally
these don't get written into the store until there is one, but if the
store was truncated it can happen. We then get upset on compaction, since
we don't have an in-memory representation of the channel_announcement.
Similarly, we leave the node_announcement pending until after that
channel_announcement, leading to a similar case.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
I decided to try a faster implementation, only to find our crc32c was
not correct! Ouch.
I removed the crc32c functions from ccan/crc, and added a new crc32c
module which has the Mark Adler x86-64-optimized variants.
We bump gossip_store version again, since csums have changed.
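A sanity check of the kind that would have caught this, assuming the
new module keeps the ccan-style crc32c(start, buf, size) signature: the
standard CRC-32C check value for "123456789" is 0xe3069283.

    #include <assert.h>
    #include <string.h>
    #include <ccan/crc32c/crc32c.h>

    int main(void)
    {
            const char *vec = "123456789";
            assert(crc32c(0, vec, strlen(vec)) == 0xe3069283);
            return 0;
    }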
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This is where payment tests should go. Also mark it xfail for the moment,
and remove the developer-only tag (propagating gossip only takes 60 seconds, which
is OK).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
There were several gossip breakages in master; bumping version means
upgrades get a clean store (not just those upgrading from the stable version).
Fixes: #2719
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We didn't count some records before, so we could compare the two counters.
This is much simpler, and avoids reliance on the broadcast state (bs).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This means we intercept the peer's gossip_timestamp_filter request
in the per-peer subdaemon itself. The rest of the semantics are fairly
simple, however.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
(We don't increment the gossip_store version, since there are only a
few commits since the last time we did this).
This lets the reader simply filter messages; this is especially nice since
the channel_announcement timestamp is *derived*, not in the actual message.
This also creates a 'struct gossip_hdr' which makes the code a bit
clearer.
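Roughly what the header looks like (field order and crc coverage are
assumed here, not quoted from the code; beint32_t is ccan/endian's
big-endian type):

    #include <ccan/endian/endian.h>

    struct gossip_hdr {
            beint32_t len;        /* high bit flags a deleted record */
            beint32_t crc;        /* crc32c, assumed to cover timestamp + message */
            beint32_t timestamp;  /* derived for channel_announcement */
    };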
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Keeping the uintmap ordering all the broadcastable messages is expensive:
130MB for the million-channels project. But now that we delete obsolete entries
from the store, we can have the per-peer daemons simply read it sequentially
and stream the gossip themselves.
This is the most primitive version, where all gossip is streamed;
successive patches will bring back proper handling of timestamp filtering
and initial_routing_sync.
We add a gossip_state field to track what's happening with our gossip
streaming: it's initialized in gossipd, and currently always set, but
once we handle timestamps the per-peer daemon may do it when the first
filter is sent.
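A very simplified sketch of the per-peer read loop (names are made up,
and the record layout assumed is the one at this point in the series: a
big-endian length word whose high bit marks deletion, then a checksum,
then the raw message):

    #include <endian.h>
    #include <stdint.h>
    #include <unistd.h>

    #define DELETED_BIT 0x80000000U

    static void stream_store(int store_fd,
                             void (*send_to_peer)(const uint8_t *, uint32_t))
    {
            uint32_t hdr[2];        /* length (with deleted bit), checksum */

            while (read(store_fd, hdr, sizeof(hdr)) == sizeof(hdr)) {
                    uint32_t len = be32toh(hdr[0]);
                    uint32_t msglen = len & ~DELETED_BIT;
                    uint8_t msg[65535];

                    if (msglen > sizeof(msg)
                        || read(store_fd, msg, msglen) != (ssize_t)msglen)
                            break;          /* corrupt or truncated store */
                    if (len & DELETED_BIT)
                            continue;       /* deleted entries stay until compaction */
                    send_to_peer(msg, msglen);
            }
    }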
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We use the high bit of the length field: this way we can still check
that the checksums are valid on deleted entries.
Once this is done, serially reading the gossip_store file will result
in a complete, ordered, minimal gossip broadcast. Also, the horrible
corner case where we might try to delete things from the store during
load time is completely gone: we only load non-deleted things.
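In other words (the flag name here just follows the obvious convention;
the on-disk length stays big-endian and is converted before use):

    #include <stdbool.h>
    #include <stdint.h>

    #define GOSSIP_STORE_LEN_DELETED_BIT 0x80000000U

    /* The remaining bits still give the true length, so a deleted
     * record's checksum can be verified and the record skipped. */
    static inline bool gossip_store_len_deleted(uint32_t len_field)
    {
            return (len_field & GOSSIP_STORE_LEN_DELETED_BIT) != 0;
    }

    static inline uint32_t gossip_store_len(uint32_t len_field)
    {
            return len_field & ~GOSSIP_STORE_LEN_DELETED_BIT;
    }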
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We're about to bump version again, and the code to upgrade it was
quite hairy (and buggy!). It's not worthwhile for such a
poorly-tested path: I will just add code to limit how much incoming
gossip we get to avoid flooding when we upgrade, however.
I also use a modern gossip_store version in our test_gossip_store_load
test, instead of relying on the upgrade path.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
When we first receive a channel_update, we write both the
channel_announcement and that channel_update to the store: we need
that first update so we can set the channel_announcement timestamp.
However, the channel_update can be replaced later. This means we can
have a channel_announcement, a node_announcement which relies on it, then
the channel_update later.
So move the "this applies to a pending announcement" check lower, where
gossip_store can use it too. This has the nice side-effect of avoiding
one lookup of the node id.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
First, we should have a channel_update so we actually do some compaction!
(Reported-by @SimonVrouwe). But we should also handle the cases where:
1. A channel_announcement is *not* directly followed by a
channel_update (happens when the channel_update is replaced).
2. A node_announcement predates a channel_update for the peer
(again, can happen once a channel_update is replaced).
3. A local/private channel_creation is not directly followed by an
update.
In addition, we might as well check that we can *load* such a store,
before compaction.
This checks the corner cases which occur in real gossip stores.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
It's possible that it hasn't got the node_announcement messages;
it will still list the nodes, however (the channel_announcement tells
it the nodes exist). Check for the alias field instead.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
My Raspberry Pi node hung up on my other node:
lightning_openingd-... chan #1: Got bad message from gossipd: 0db1
This is because we didn't handle that message in one path.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Save some overhead, plus gets us ready for giving subdaemons direct
store access. This is the first time we *upgrade* the gossip_store,
rather than just discarding.
The downside is that we need to add an extra message after each
channel_announcement, containing the channel capacity.
After:
store_load_msec:28337-30288(28975+/-7.4e+02)
vsz_kb:582304-582316(582306+/-4.8)
store_rewrite_sec:11.240000-11.800000(11.55+/-0.21)
listnodes_sec:1.800000-1.880000(1.84+/-0.028)
listchannels_sec:22.690000-26.260000(23.878+/-1.3)
routing_sec:2.280000-9.570000(6.842+/-2.8)
peer_write_all_sec:48.160000-51.480000(49.608+/-1.1)
Differences:
-vsz_kb:582320
+vsz_kb:582316
-listnodes_sec:2.100000-2.170000(2.118+/-0.026)
+listnodes_sec:1.800000-1.880000(1.84+/-0.028)
-peer_write_all_sec:51.600000-52.550000(52.188+/-0.34)
+peer_write_all_sec:48.160000-51.480000(49.608+/-1.1)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>