Asking for the last few blocks was logical, but in practice my node
was missing most gossip.
For the moment, simply ask a peer for every channel it knows, once
we're started up.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
When probing, there's no point probing for blocks from before lightning
became cool. The current logic means we often probe below block 500,000
and think things are OK because there are no short_channel_ids there.
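A minimal sketch of the shape of the fix, clamping the probe's starting
block (the constant and helper name are illustrative assumptions, not the
actual source):

    #include <stdint.h>

    /* Illustrative only: never probe below the first blocks that could
     * plausibly contain lightning channels. */
    #define FIRST_LIGHTNING_BLOCK 500000

    static uint32_t clamp_probe_start(uint32_t blockheight, uint32_t range)
    {
            uint32_t start = blockheight > range ? blockheight - range : 0;
            if (start < FIRST_LIGHTNING_BLOCK)
                    start = FIRST_LIGHTNING_BLOCK;
            return start;
    }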
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We were using `i` as the index variable in two nested loops. This works as
long as the DNS seed resolves to a single address, but it crashes if the
node has both an A as well as an AAAA entry, at which point we try to
index the hostname array without a matching entry.
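The shape of the bug, sketched with illustrative names (the real loop
bodies differ):

    #include <stddef.h>
    #include <stdio.h>

    static void announce(const char **hostnames, size_t num_hosts,
                         const char **addrs, size_t num_addrs)
    {
            size_t i, j;

            for (i = 0; i < num_hosts; i++) {
                    /* The bug reused `i` for this inner loop, so
                     * hostnames[i] was effectively indexed by the
                     * *address* counter: fine while a seed resolved to a
                     * single address, out of bounds once a node had both
                     * an A and an AAAA record.  The fix is simply a
                     * distinct inner index `j`. */
                    for (j = 0; j < num_addrs; j++)
                            printf("%s -> %s\n", hostnames[i], addrs[j]);
            }
    }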
Signed-off-by: Christian Decker <@cdecker>
We were implicitly relying on sqlite3's behavior of returning the zero
value when accessing NULLed fields. This adds the same behavior explicitly
to the DB abstraction, in order to reduce `db_column_is_null` checks in
the logic while still making it evident what is happening here.
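A minimal sketch of the idea against the raw sqlite3 API (the wrapper
name is ours, not the real abstraction):

    #include <sqlite3.h>
    #include <stdint.h>

    /* Sketch only: make zero-for-NULL explicit at the accessor level,
     * instead of relying on sqlite3 silently coercing NULL to 0. */
    static int64_t db_column_int64_or_zero(sqlite3_stmt *stmt, int col)
    {
            if (sqlite3_column_type(stmt, col) == SQLITE_NULL)
                    return 0;
            return sqlite3_column_int64(stmt, col);
    }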
Fixes https://github.com/fiatjaf/mcldsp/issues/1
Signed-off-by: Christian Decker <@cdecker>
Check behavior for user-supplied upfront_shutdown_script via close_to
Header from folded patch 'fix__return__not__iff_well_close_to_the_provided_addr.patch':
fix: return not iff we'll close to the provided addr
Takes advantage of upfront-shutdown-script to permit users to
specify the close-to address for a channel at open, by adding
a `close_to` field to `fundchannel_start`.
Note that this only takes effect if `fundchannel_start` returns with
`close_to` set -- otherwise the peer doesn't support
`option_upfront_shutdown_script`.
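For example (a hypothetical invocation; every parameter name other than
`close_to` is an assumption):

    lightning-cli fundchannel_start -k id=<node_id> amount=100000sat \
        close_to=bc1q...
    # If (and only if) the response echoes back a "close_to" field, the
    # peer advertised option_upfront_shutdown_script and the address
    # will be used at close.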
We always ended up sending an empty query when we had stale scids!
And it turns out we consider such a query invalid:

    Bad query_short_channel_ids query_flags 010506226e46111a0b59caaf126043eb5bbf28c34f3a5e332a1fc7b2b73cf188910f000100010100
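A sketch of the guard (all names hypothetical), since the fix amounts to
never emitting a query for zero scids:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Sketch only: an empty query_short_channel_ids is invalid, so
     * send nothing at all when the list is empty. */
    static bool maybe_send_scid_query(const uint64_t *scids, size_t num)
    {
            if (num == 0)
                    return false;   /* nothing stale/unknown: no query */
            printf("querying %zu short_channel_ids\n", num);
            /* ...build and enqueue the real query here... */
            return true;
    }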
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We only chose 3 peers to gossip with us (down from 8 last release).
There's no justification for this number, nor any reason to believe
that it is sufficient to keep us in sync.
Be more conservative for now; we can always decrease it later once
we have more data.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Polling the last 32 is fairly useless in practice, since they tend to
be recent nodes; it won't detect long-forgotten ones.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
If we got a channel_update while we were still verifying the
channel_announcement, we didn't set the peer pointer, so the peer didn't
get credit. As a result, the seeker tended to think we were done
gossiping sooner than we actually were.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
They're a little subtle, so let's spell out exactly what checkmessage
means. In particular, we avoid discussing key derivation from a
signature, as it tends to get people (like me!) into trouble: we
describe it instead as iterating through every known node.
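A conceptual sketch of those semantics (the types and the verification
helper are hypothetical):

    #include <stdbool.h>
    #include <stddef.h>

    struct node_id { unsigned char k[33]; };

    /* Hypothetical primitive: does `sig` over `msg` verify against
     * this node's public key? */
    extern bool sig_valid_for(const char *msg, const unsigned char sig[64],
                              const struct node_id *node);

    /* checkmessage, described without mentioning key recovery: try
     * every node we know of, and report which one (if any) signed. */
    static bool check_message(const char *msg, const unsigned char sig[64],
                              const struct node_id *known, size_t num_known,
                              const struct node_id **who)
    {
            for (size_t i = 0; i < num_known; i++) {
                    if (sig_valid_for(msg, sig, &known[i])) {
                            *who = &known[i];
                            return true;
                    }
            }
            return false;
    }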
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
*If* we know the key has signed something else (as is the case for
channel_announcement) then we can effectively trust the key derivation.
This matches how LND's VerifyMessage works, though in the next patch
we will document it exactly.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Thanks Twitter helpers @duck1123 and @jochemin for tests!
And @bitconner for the initial test vector.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
I wanted to call it verifymessage, but then I read the LND API for that
and wanted nothing to do with it!
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
I had a report of a 0.7.2 user whose node hadn't appeared on 1ml. Their
node_announcement wasn't visible to my node, either.
I suspect this is a consequence of recent versions reducing the amount of
gossip they send, as well as large nodes increasingly turning off gossip
altogether from some peers (as we do). We should ignore timestamp filters
for our own channels: the easiest way to do this is to push them out
directly from gossipd (other messages are sent via the store).
We change channeld to wrap the local channel_announcements: previously we
just handed them to gossipd as we would any other gossip message received
from our peer. Now gossipd knows to push them out, as they're local.
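A sketch of the push-vs-filter distinction (all names hypothetical):

    #include <stdbool.h>
    #include <stdint.h>

    struct peer;

    /* Hypothetical helpers standing in for the real plumbing. */
    extern void send_to_peer(struct peer *peer, const uint8_t *msg);
    extern bool timestamp_in_filter(const struct peer *peer, uint32_t ts);

    /* Local gossip bypasses the peer's timestamp filter (and the
     * store) entirely; everything else is filtered as usual. */
    static void queue_gossip(struct peer *peer, const uint8_t *msg,
                             bool is_local, uint32_t timestamp)
    {
            if (is_local || timestamp_in_filter(peer, timestamp))
                    send_to_peer(peer, msg);
    }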
This interferes with the logic in tests/test_misc.py::test_htlc_send_timeout
which expects the node_announcement message last, so we generalize
that too.
[ Thanks to @trueptolmy for the bugfix! ]
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This is mainly an internal-only change, especially since we don't
offer any globalfeatures.
However, LND (as of its next release) will offer global features, and
will also expect option_static_remotekey to be a *global* feature. So we
send our (merged) feature bitset as both global and local in init, and
fold those bitsets together when we get an init msg.
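The fold itself is just a bytewise OR of two big-endian bitmaps; a
minimal sketch (the helper name is ours):

    #include <stdint.h>
    #include <stdlib.h>

    /* Sketch only: feature bitmaps are byte arrays, so merging is a
     * bytewise OR, padded to the longer of the two. */
    static uint8_t *fold_features(const uint8_t *global, size_t global_len,
                                  const uint8_t *local, size_t local_len,
                                  size_t *merged_len)
    {
            size_t len = global_len > local_len ? global_len : local_len;
            uint8_t *merged = calloc(1, len ? len : 1);

            /* Feature bitmaps are big-endian: bit 0 lives in the *last*
             * byte, so align the arrays at their ends before ORing. */
            for (size_t i = 0; i < global_len; i++)
                    merged[len - global_len + i] |= global[i];
            for (size_t i = 0; i < local_len; i++)
                    merged[len - local_len + i] |= local[i];

            *merged_len = len;
            return merged;
    }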
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
I thought LND had a bug, but it turns out it just doesn't like
out-of-order short_channel_ids: in fact, the spec says they have to be
in order!
This means we use a uintmap instead of an htable for unknown_scids and
stale_scids, so they're nicely ordered.
But our nodes-missing-announcements probe is harder, since those can also
contain duplicates: we switch that to iterate through channels rather
than nodes.
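The real change keeps the scids in an ordered map so iteration is
naturally sorted; the requirement itself can be sketched with a plain
sort before encoding the query:

    #include <stdint.h>
    #include <stdlib.h>

    /* Spec requirement: short_channel_ids in a query must ascend. */
    static int cmp_scid(const void *a, const void *b)
    {
            uint64_t x = *(const uint64_t *)a, y = *(const uint64_t *)b;
            return (x > y) - (x < y);
    }

    static void sort_scids(uint64_t *scids, size_t num)
    {
            qsort(scids, num, sizeof(*scids), cmp_scid);
    }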
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We weren't supposed to do any gossiping until we were synced (and thus
knew the blockheight), but our seeker_check() didn't wait for that! Move
all the waiting into seeker.c, so it can handle it consistently.
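The guard amounts to an early return; a sketch with a hypothetical field
name:

    #include <stdbool.h>

    struct seeker {
            bool blockheight_known;   /* hypothetical field */
            /* ... */
    };

    static void seeker_check(struct seeker *seeker)
    {
            if (!seeker->blockheight_known)
                    return;   /* not synced yet: no gossiping at all */
            /* ...normal probing and peer-selection logic... */
    }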
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
On testing, I found a node which would hang up every time we asked it
for query_short_channel_ids (despite it offering features 0x81, meaning
it should handle this message).
Then it would reconnect, and we'd choose it again as our
PROBING_NANNOUNCES peer!
Instead, leave finding another peer to the once-a-minute
seeker_check() function.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
By combining set_state() with selected_peer() we can give a single log
line describing what we're asking for, and from whom.
We also add more verbosity in a few key areas, such as gossip rotation
and when gossipd tells a peer to send an error. And we move a comment
which was above the wrong function (due to a rebase?).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>