1. Create a plugin: ./lightning/tests/plugins/pretend_badlog.py
This plugin subscribes to the 'warning' notification and logs the payload of
each 'warning';
2. Add a new test: tests/test_plugin.py::test_warning_notification
This test runs the pretend_badlog.py plugin and checks that the 'warning'
notification can be triggered and subscribed to normally (the plugin is sketched below).
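A minimal sketch of the plugin, in the pylightning plugin style (the exact shape
of the notification payload is an assumption here):

    #!/usr/bin/env python3
    from lightning import Plugin

    plugin = Plugin()

    @plugin.subscribe("warning")
    def on_warning(plugin, warning, **kwargs):
        # Log the payload so the test can wait_for_log() it.
        plugin.log("Received warning: {}".format(warning))

    plugin.run()

The test then starts a node with this plugin, provokes a warning-level event,
and asserts via l1.daemon.wait_for_log() that the payload was logged.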
We reserve inputs when we're going to send a transaction, but we don't
unreserve them if we crash. This is most graphically demonstrated by
the txprepare case, which makes it easier to trigger.
Instead, we should query bitcoind to see whether the tx made it out or
not, as we would do manually with dev-rescan-outputs.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We fail this at the moment, since we rely on shutdown to do the cleanups
for us.
(Also had to fix the unclean shutdown path: the caller checks the rc unless
mayfail is set, and of course it's not zero since we just SIGTERM'd it).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
It would always return bech32; fix that, and don't bother printing
it if they use the (new) 'all' parameter.
This API was introduced in 3e67c09d5e,
which means it wasn't in a release so no CHANGELOG entry necessary.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We didn't count some records before, so we couldn't compare the two counters.
This is much simpler, and avoids reliance on bs.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This means we intercept the peer's gossip_timestamp_filter request
in the per-peer subdaemon itself. The rest of the semantics are fairly
simple however.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
(We don't increment the gossip_store version, since there are only a
few commits since the last time we did this).
This lets the reader simply filter messages; this is especially nice since
the channel_announcement timestamp is *derived*, not in the actual message.
This also creates a 'struct gossip_hdr' which makes the code a bit
clearer.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Keeping the uintmap ordering all the broadcastable messages is expensive:
130MB for the million-channels project. But now that we delete obsolete entries
from the store, we can have the per-peer daemons simply read it sequentially
and stream the gossip themselves.
This is the most primitive version, where all gossip is streamed;
successive patches will bring back proper handling of timestamp filtering
and initial_routing_sync.
We add a gossip_state field to track what's happening with our gossip
streaming: it's initialized in gossipd, and currently always set, but
once we handle timestamps the per-peer daemon may do it when the first
filter is sent.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We use the high bit of the length field: this way we can still check
that the checksums are valid on deleted fields.
Once this is done, serially reading the gossip_store file will result
in a complete, ordered, minimal gossip broadcast. Also, the horrible
corner case where we might try to delete things from the store during
load time is completely gone: we only load non-deleted things.
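Concretely, a reader splits the length word like this (a Python sketch; the
flag is assumed to be the top bit of a 32-bit length word, see gossip_store.h
for the real constant):

    DELETED_BIT = 0x80000000                 # high bit of the length field

    def parse_len_field(raw_len):
        """Return (msg_len, deleted) from the on-disk length word."""
        return raw_len & ~DELETED_BIT, bool(raw_len & DELETED_BIT)

    # The checksum still covers the message bytes, so a deleted record can
    # still be verified before being skipped during load.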
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We're about to bump version again, and the code to upgrade it was
quite hairy (and buggy!). It's not worthwhile for such a
poorly-tested path: I will just add code to limit how much incoming
gossip we get to avoid flooding when we upgrade, however.
I also use a modern gossip_store version in our test_gossip_store_load
test, instead of relying on the upgrade path.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
When we first receive a channel_update, we write both the
channel_announcement and that channel_update to the store: we need
that first update so we can set the channel_announcement timestamp.
However, the channel_update can be replaced later. This means we can
have a channel_announcement, a node_announcement which relies on it, then
the channel_update later.
So move the "this applies to a pending announcement" check lower, where
gossip_store can use it too. Has a nice side-effect of avoiding
one lookup of the node id.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
First, we should have a channel_update so we actually do some compaction!
(Reported-by @SimonVrouwe). But we should also handle the cases where:
1. A channel_announcement is *not* directly followed by a
channel_update (happens when the channel_update is replaced).
2. A node_announcement predates a channel_update for the peer
(again, can happen once a channel_update is replaced).
3. A local/private channel_creation is not directly followed by an
update.
In addition, we might as well check that we can *load* such a store,
before compaction.
This checks the corner cases which occur in real gossip stores.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
It's possible that it hasn't got the node_announcement messages;
it will still list the nodes, however (the channel_announcement tells
it the nodes exist). Check for the alias field instead.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Basically, any "Bad" message from gossipd is something we should look
at. This covers failures loading the gossip_store, too!
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Reorg changes the short_channel_id after lockin of a private channel, while
one node restarts.
Test that:
- peer->depth_togo in the billboard decrements
- the reorg and scid change are detected by the running node and the restarting node
- both `old` and `new` scids are in rtable
Also added a comment to test_blockchaintrack to clarify.
Now without bitcoind restart.
bitcoin-cli `prioritisetransaction` came to the rescue!
Its argument `fee_delta` (apparently) lowers the tx's _effective_ feerate
so low that bitcoind won't mine it ... until we raise it when we want
it to be mined.
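Roughly, in the test this looks like (a sketch assuming the fixture's
bitcoind.rpc proxy; the delta value is illustrative):

    txid = l1.rpc.fundchannel(l2.info['id'], 10**6)['txid']
    # Push the effective feerate so low that bitcoind won't mine it...
    bitcoind.rpc.prioritisetransaction(txid, None, -10**7)
    bitcoind.generate_block(1)        # blocks come and go, tx stays unconfirmed
    # ...until we actually want it mined:
    bitcoind.rpc.prioritisetransaction(txid, None, 10**7)
    bitcoind.generate_block(1)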
Because the call (wallet_extract_owned_outputs) that prints that line can happen
_before_ or _after_ confirmation in a block, add `CONFIRMED` in the latter case.
This brings up an interesting quirk though, in that we report "3
attempts", where we really should have done one.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
My raspberry pi node hung up on my other node:
lightning_openingd-... chan #1: Got bad message from gossipd: 0db1
This is because we didn't handle that message in one path.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We try to look up the funding tx, but it has already been spent to fund
the channel, so we need txindex if this test is to work reliably.
It's not clear to me why this *ever* worked, but it fails on my new
ThreadRipper build machine with valgrind:
> wallettx = l1.bitcoin.rpc.getrawtransaction(wallettxid, True)
...
E bitcoin.rpc.InvalidAddressOrKeyError: {'code': -5, 'message': 'No such mempool transaction. Use -txindex to enable blockchain transaction queries. Use gettransaction for wallet transactions.'}
/usr/lib/python3/dist-packages/bitcoin/rpc.py:231: InvalidAddressOrKeyError
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Save some overhead, plus gets us ready for giving subdaemons direct
store access. This is the first time we *upgrade* the gossip_store,
rather than just discarding it.
The downside is that we need to add an extra message after each
channel_announcement, containing the channel capacity.
After:
store_load_msec:28337-30288(28975+/-7.4e+02)
vsz_kb:582304-582316(582306+/-4.8)
store_rewrite_sec:11.240000-11.800000(11.55+/-0.21)
listnodes_sec:1.800000-1.880000(1.84+/-0.028)
listchannels_sec:22.690000-26.260000(23.878+/-1.3)
routing_sec:2.280000-9.570000(6.842+/-2.8)
peer_write_all_sec:48.160000-51.480000(49.608+/-1.1)
Differences:
-vsz_kb:582320
+vsz_kb:582316
-listnodes_sec:2.100000-2.170000(2.118+/-0.026)
+listnodes_sec:1.800000-1.880000(1.84+/-0.028)
-peer_write_all_sec:51.600000-52.550000(52.188+/-0.34)
+peer_write_all_sec:48.160000-51.480000(49.608+/-1.1)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
For performance reasons we start the lightningd instances in
parallel. However, if we only assign the numeric ids (used for log-prefixes
and home directories) when we are already running in parallel, we are not
guaranteed to get the numeric ids matching the return value of `get_nodes` or
`line_graph`. With this patch we now select numeric ids before parallelizing
the start.
Signed-off-by: Christian Decker <@cdecker>
This commit adds the tests for these 5 local_failed cases.
There are 5 cases for FORWARD_LOCAL_FAILED status:
1. When Master resolves the reply about the next peer info (sent by Gossipd), it needs to handle the unknown-next-peer failure in channel_resolve_reply();
2. When Master handles the forward process with the htlc_in and the id of the next hop, it tries to drive a new htlc_out but fails in forward_htlc();
3. When we send the htlc_out, Master asks Channeld to add a new htlc into the outgoing channel but Channeld fails. Master needs to handle and store this failure in rcvd_htlc_reply();
4. When Channeld receives a new revoke message, if the state of the corresponding htlc is RCVD_ADD_ACK_REVOCATION, Master tries to resolve the onionpacket and handles the failure before resolving the next hop in peer_got_revoke();
5. When Onchaind finds the htlc timed out or missing, Master needs to handle these failures as FORWARD_LOCAL_FAILED if it is a forwarded payment.
(The first case is sketched in the test below.)
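A sketch of how such a test might provoke that first case (pytest style; route
amounts and helper names follow the test framework's utils, and are illustrative):

    import pytest
    from lightning import RpcError

    def test_forward_local_failed(node_factory):
        # l2 has no channel to l3, so the forward fails locally at l2
        # ("unknown next peer", case 1 above).
        l1, l2 = node_factory.line_graph(2, wait_for_announce=True)
        l3 = node_factory.get_node()
        route = [{'id': l2.info['id'], 'channel': l1.get_channel_scid(l2),
                  'msatoshi': 1011, 'delay': 20},
                 {'id': l3.info['id'], 'channel': '1x1x1',
                  'msatoshi': 1000, 'delay': 10}]
        inv = l3.rpc.invoice(1000, 'label', 'desc')
        l1.rpc.sendpay(route, inv['payment_hash'])
        with pytest.raises(RpcError):
            l1.rpc.waitsendpay(inv['payment_hash'])
        assert l2.rpc.listforwards()['forwards'][0]['status'] == 'local_failed'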
We were triggering a second exception in the directory cleanup step by
attempting to access a field that'd only be set upon entering the test code
itself. That error did not contribute to the problem resolution, so now we
check whether that field is set before accessing it.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
Fixes #2518
Signed-off-by: Christian Decker <decker.christian@gmail.com>
Changelog-fixed: `minconf` no longer gets wrapped around for large values, which was causing funds with insufficient confirmations to be selected.
I misunderstood the API, this ended up nesting a result inside the JSON-RPC
result.
No concerns about backwards compatibility since this is so new.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We generated blocks to announce the channel, but it can also expire
the HTLC if the timing is wrong. We don't need to anyway, since we
fixed the FIXME; we store local unannounced channels for restoration.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
The old value of 1000 sat was too small to cover the dust reserves.
This led to the situation where, when trying to open a channel with a minimal
amount, the channel got refused because it was not able to cover the
commitment fees.
For this reason the minimal capacity should be increased to e.g. 10k
satoshi, as the technical minimum that also accounts for fees and
reserves is somewhere around 6k sat.
Refactored the weighted-reservoir-sampling algo to make it more straightforward.
It now uses the excess as a fraction of capacity as the weight. This favors channels
that are more _relatively_ unbalanced for incoming payments.
Now passes test_invoice_routeboost_private() when using max fundamount=16777215.
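The idea, as a Python sketch (the real implementation is C; the field names are
illustrative):

    import random

    def pick_incoming_channel(channels):
        """Weighted reservoir sampling with weight = excess / capacity,
        so relatively unbalanced channels are favored for receiving."""
        chosen, total_weight = None, 0.0
        for c in channels:
            weight = c['excess_msat'] / c['capacity_msat']
            total_weight += weight
            # Keep this candidate with probability weight / total_weight.
            if weight > 0 and random.uniform(0, total_weight) < weight:
                chosen = c
        return chosen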
Make it a bit easier to track mutual channel closures by
adding the broadcast txid to the listpeers billboard.
Since lightningd manages the 'identity' of the closing tx, we need
to send it back to closingd so it can update the billboard
appropriately.
We're going to make it async, so start by moving the core code into
invoice.c and having that directly call fail/success functions for the
htlc.
We add an extra check in fulfill_htlc() that the HTLC state is correct:
that can't happen now, but may once we're async.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This is currently done higher up, in handle_channel_update(), but
that's one reason why handle_channel_update() has to do a channel
lookup. Moving the check down means handle_channel_update() can do a
minimal "get node id for this channel" so it can check the signature.
This helps, because the chan lookup semantics are changing in the next
few patches.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
The user can explicitly create such things (within [] or ") as we paste
those cases literally, but not for the simple cases.
Fixes: #2550
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Plugins don't do it right anyway, and we're about to remove it from
lightningd. Produces same format as json_pp.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
They don't clean up after themselves, so best we do it here (by this
point we've already done the pid check to make sure we're the only
lightningd here anyway).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
In particular, the assert when `--addr=/sockname` is used, and that it
doesn't clean up on restart, requiring manual deletion of the socket.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
gossipd in l1 might not have registered l2 reconnecting, thus considering
the channel local-disabled.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
lightning_connectd(19780): STATUS_FAIL_INTERNAL_ERROR: Failed to bind on 2 socket: Address family not supported by protocol
"Untested code is buggy code"
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
1. We need to read in as a byte string, then decode into utf8 once we
have a marker. Otherwise we seem to mangle it horribly, and we
might have a bad utf8 string anyway.
2. We need to suppress the JSON \u escapes on output.
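In Python terms the two fixes look roughly like this (a simplified sketch of the
read/write paths; the delimiter handling is condensed):

    import json

    def read_response(sock):
        # 1. Accumulate raw bytes until the message delimiter, *then* decode:
        buf = b""
        while not buf.endswith(b"\n\n"):
            buf += sock.recv(1024)
        return json.loads(buf.decode("UTF-8"))

    def send_request(sock, request):
        # 2. Don't let json.dumps() turn non-ASCII into \uXXXX escapes:
        sock.sendall(json.dumps(request, ensure_ascii=False).encode("UTF-8"))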
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We should be able to pass UTF-8 strings to and from plugins without
python turning them into JSON-\u escapes.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Rather than using LightningJSONDecoder's implicit "field name and
value ends in msat, try converting to Millisatoshi", we do it to
parameters using type annotations.
If you had a parameter which was an array or dict itself, we don't
delve into that, but that's probably OK.
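For example, a plugin method can now declare the type and receive a Millisatoshi
directly (a sketch; the method name is made up):

    from lightning import Plugin, Millisatoshi

    plugin = Plugin()

    @plugin.method("setreserve")
    def setreserve(plugin, amount: Millisatoshi):
        # 'amount' has already been converted from "10sat", "0.01btc",
        # "12345msat" or a plain int into a Millisatoshi instance.
        return {"reserve_msat": int(amount)}

    plugin.run()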
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We don't, but we should, like we do for normal RPC. However, I chose
to use function annotations, rather than names-ending-in-'msat'
because it's more Pythony.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
New name is less confusing, and most people should be transitioning to
listpays rather than this anyway.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This is the same deprecation, but one level up. For the moment, we
still support invoices with a `h` field (where description will be
necessary) but that will be removed once this option is removed.
Note that I just changed pylightning without backwards compatibility,
since the field was unlikely to be used, but we could do something
more complex here?
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
In particular this matches the case of `their_unilateral/to_us` outputs, which
were missing their addresses so far.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
We want to disallow using unconfirmed outputs by default, so making the
default 1 confirmation seems a good idea. This also matches `bitcoind`'s
minimum confirmation requirement.
This, however, breaks some of our tests, so I used `minconf=0` for the
breaking tests and added a new test specifically for the `minconf` parameter
of `fundchannel`.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
An uncommitted channel should not keep the peer in the db, since the
uncommitted channel isn't in the db itself.
Fixes: #2367
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This is particularly interesting because we handle overflow during route
calculation now; this could happen in theory once we wumbo.
It fixes a thinko when we print out routehints, too: we want to print
them out literally, not print out the effect they have on fees (which
is in the route, which we also print).
This ABI change doesn't need a CHANGELOG, since paystatus is new since
release.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
There's little point having users handle the postfixes manually: this
translates them, and also allows Millisatoshi to be used wherever an
'int' would previously have been used.
There are also helpers to create the formatting in a way c-lightning's
JSONRPC will accept.
All standard arithmetic operations with integers work.
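For example (a quick sketch of the intended usage):

    from lightning import Millisatoshi

    amount = Millisatoshi("0.1btc")     # also accepts "10sat", "123msat" or int msat
    fee = Millisatoshi("10sat")
    total = amount + fee                # ordinary arithmetic with ints also works
    print(total)                        # "10000010000msat", ready for the JSON-RPC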
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We had occasional failures, because the fuzz could overwhelm the difference
in routes. Increasing the amount to 2,000,000 millisatoshis makes the
riskfactor 53msat (2000000 * 14 * 10 / 5259600) which is always greater
than the worst-case fuzz of 5% on the fee of 1002msat.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
I got a spurious failure because the final node gave a CLTV error and
so it decided to use a different channel. It should probably handle
this corner case better, but meanwhile make the test robust.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
It was waiting for a remote channel, but not for all the interesting
channels we want to check. It can sometimes happen that further away
channels are added before closer ones are added, depending on
propagation path, flush timers and bitcoind poll timers. This now just
checks for all channels, which also reduces the ambiguity of whether
we selected a path solely because we were lacking alternatives.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
Travis timed out.
Waiting for three fundchannel commands depends on the bitcoind polling
interval (30 seconds), and then waiting for gossip propagation
requires two propagation intervals (120 seconds).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Up until now, riskfactor was useless due to implementation bugs, and
also the default setting is wrong (too low to have an effect on
reasonable payment scenarios).
Let's simplify the definition (by assuming that P(failure) of a node
is 1), to make it a simple percentage. I examined the current network
fees to see what would work, and under this definition, a default of
10 seems reasonable (equivalent to 1000 under the old definition).
It is *this* change which finally fixes our test case! The riskfactor
is now 40msat (1500000 * 14 * 10 / 5259600 = 39.9), comparable with
the worst-case fuzz of 50msat (1001 * 0.05 = 50).
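As a sanity check of those numbers (a sketch of the penalty formula as defined
above):

    BLOCKS_PER_YEAR = 52596

    def risk_penalty(amount_msat, cltv_delay, riskfactor):
        # riskfactor is now a simple percentage, hence the extra /100
        return amount_msat * cltv_delay * riskfactor / (BLOCKS_PER_YEAR * 100)

    risk_penalty(1500000, 14, 10)       # ~39.9 msat
    1001 * 0.05                         # ~50 msat worst-case fuzz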
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
The test sometimes passes: our routing logic always chooses the shorter
of two equal-cost routes (because we compare best with <
not <=).
By adding another hop, we add more noise, and by making the alternate
route fee 0 we provide the worst case.
But to be fair, we make the amount of the payment ~50c (15,000,000
msat), and increase our cltv-delay to 14 and fee-base 1000 to match
mainnet. The final patch shows the effect of this choice.
Otherwise our risk penalty is completely in the noise on
mainnet which has the vast majority of fees set at 1000msat + 1ppm.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This is the direct cause of the failure of the original
test_pay_direct test and it makes sense: invoice routehints may not be
necessary, so try without them *first* rather than last.
We didn't mention the use of routehints in CHANGELOG at all yet, so
do that now.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
- the result of the fundchannel command now depends on the successful or failed broadcast of the funding tx
- failure returns error code FUNDING_BROADCAST_FAIL
- don't fail the channel when the broadcast fails, but keep it in CHANNELD_AWAITING_LOCKIN
- after fixing the initial broadcast failure, the user could manually rebroadcast the tx and
keep the channel
openingd/opening_funder_finished:
- the broadcast_tx callback function now handles both success and failure
jsonrpc: added error code FUNDING_BROADCAST_FAIL
manpage: added the error code returned by the fundchannel command
This makes the user more aware of broadcast failure, so they hopefully don't
try to broadcast new txs that depend on its change_outputs. Some users have reported (see
issue #2171) a whole sequence of fundings failing, because each funding was using the change
output of the previous one, which would not confirm.
We actually produce an invalid JSON error at the moment: bitcoin-cli
complains "JSON value is not an integer as expected" rather than returning
the given error. Make our error a valid JSON RPC error.
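i.e. the error should be a proper JSON-RPC 2.0 error object with an integer
code, roughly (a sketch with illustrative values):

    error_response = {
        "jsonrpc": "2.0",
        "id": 1,
        "error": {
            "code": -1,  # must be an integer, not a string or object
            "message": "Error broadcasting transaction: rejected by network",
        },
    }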
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
It is suddenly timing out a lot and is breaking master, so we
temporarily disable it until it is fixed.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
We were restarting the proxies with the nodes before, which was causing some
port contention. This is more natural since `bitcoind` will take care
of terminating all proxies it returned.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
Internally libplugin turns ' into ", which causes these messages to produce
bad JSON.
The real fix is to remove the '->" convenience substitution and port the
JSON creation APIs into common/ from lightningd/
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Spurious errors were occurring around checking the provided
current commitment point from the peer on reconnect when
option_data_loss_protect is enabled. The problem was that
we were using an inaccurate measure to screen for which
commitment point to compare the peer's provided one to.
This fixes the problem with screening, plus makes our
data_loss test a teensy bit more robust.
So add a new 'strategy' field. This makes it clearer what is going
on, currently one of:
* "Initial attempt"
* "Excluded channel <scid>"
* "Removed route hint"
* "Excluded expensive channel <scid>"
* "Excluded delaying channel <scid>"
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We sanitize the route hints: firstly, since we assume appending, we eliminate the
first hop if the route points straight to us. Secondly, we eliminate empty
hints. Thirdly, we trim overlong hints.
Then we just use the first route hint.
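Roughly (a Python sketch; the real logic lives in the C pay code, and the field
names follow decodepay's 'routes'):

    MAX_HINT_HOPS = 3      # illustrative cap on hint length

    def sanitize_routehints(hints, our_id):
        usable = []
        for hint in hints:
            # 1. We append hints to our own route, so drop a first hop that
            #    points straight back at us.
            if hint and hint[0]['pubkey'] == our_id:
                hint = hint[1:]
            # 2. Eliminate hints that are (now) empty.
            if not hint:
                continue
            # 3. Trim overlong hints.
            usable.append(hint[:MAX_HINT_HOPS])
        return usable

    # Then we just use the first surviving hint: usable[0]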
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We use the 'exclude' option to getroute for successive attempts. This
is more robust than having gossipd disable for some limited time.
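e.g. each retry just grows the exclusion list passed to getroute (a sketch,
inside a test with nodes l1..l3; the scid/direction string is illustrative):

    dest, amount_msat = l3.info['id'], 10000
    excludes = []
    route = l1.rpc.getroute(dest, amount_msat, riskfactor=10,
                            exclude=excludes)['route']
    # payment failed at some channel: exclude that direction and retry
    excludes.append('103x1x0/1')
    route = l1.rpc.getroute(dest, amount_msat, riskfactor=10,
                            exclude=excludes)['route']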
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
I wrote this sync first, then rewrote it async, then developed libplugin.
But committing all that just wastes reviewer time, so I present it as
if it was always async and using the library helper.
Currently the command it registers is 'pay2', but when it's complete
we'll remove the internal 'pay' and rename it. This does a single
'getroute/sendpay' call. No retries, no options.
Shockingly, this by itself is almost sufficient to pass our current test
suite with `pay`->`pay2`.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Don't do this:
(gdb) bt
#0 0x00007f37ae667c40 in ?? () from /lib/x86_64-linux-gnu/libz.so.1
#1 0x00007f37ae668b38 in ?? () from /lib/x86_64-linux-gnu/libz.so.1
#2 0x00007f37ae669907 in deflate () from /lib/x86_64-linux-gnu/libz.so.1
#3 0x00007f37ae674c65 in compress2 () from /lib/x86_64-linux-gnu/libz.so.1
#4 0x000000000040cfe3 in zencode_scids (ctx=0xc1f118, scids=0x2599bc49 "\a\325{", len=176320) at gossipd/gossipd.c:218
#5 0x000000000040d0b3 in encode_short_channel_ids_end (encoded=0x7fff8f98d9f0, max_bytes=65490) at gossipd/gossipd.c:236
#6 0x000000000040dd28 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=17290511, number_of_blocks=8) at gossipd/gossipd.c:576
#7 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=17290511, number_of_blocks=16) at gossipd/gossipd.c:595
#8 0x000000000040ddee in queue_channel_ranges (peer=0x3868fc8, first_blocknum=17290495, number_of_blocks=32) at gossipd/gossipd.c:596
#9 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=17290495, number_of_blocks=64) at gossipd/gossipd.c:595
#10 0x000000000040ddee in queue_channel_ranges (peer=0x3868fc8, first_blocknum=17290431, number_of_blocks=128) at gossipd/gossipd.c:596
#11 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=17290431, number_of_blocks=256) at gossipd/gossipd.c:595
#12 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=17290431, number_of_blocks=512) at gossipd/gossipd.c:595
#13 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=17290431, number_of_blocks=1024) at gossipd/gossipd.c:595
#14 0x000000000040ddee in queue_channel_ranges (peer=0x3868fc8, first_blocknum=17289408, number_of_blocks=2047) at gossipd/gossipd.c:596
#15 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=17289408, number_of_blocks=4095) at gossipd/gossipd.c:595
#16 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=17289408, number_of_blocks=8191) at gossipd/gossipd.c:595
#17 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=17289408, number_of_blocks=16382) at gossipd/gossipd.c:595
#18 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=17289408, number_of_blocks=32764) at gossipd/gossipd.c:595
#19 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=17289408, number_of_blocks=65528) at gossipd/gossipd.c:595
#20 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=17289408, number_of_blocks=131056) at gossipd/gossipd.c:595
#21 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=17289408, number_of_blocks=262112) at gossipd/gossipd.c:595
#22 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=17289408, number_of_blocks=524225) at gossipd/gossipd.c:595
#23 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=17289408, number_of_blocks=1048450) at gossipd/gossipd.c:595
#24 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=17289408, number_of_blocks=2096900) at gossipd/gossipd.c:595
#25 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=17289408, number_of_blocks=4193801) at gossipd/gossipd.c:595
#26 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=17289408, number_of_blocks=8387603) at gossipd/gossipd.c:595
#27 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=17289408, number_of_blocks=16775207) at gossipd/gossipd.c:595
#28 0x000000000040ddee in queue_channel_ranges (peer=0x3868fc8, first_blocknum=514201, number_of_blocks=33550414) at gossipd/gossipd.c:596
#29 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=514201, number_of_blocks=67100829) at gossipd/gossipd.c:595
#30 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=514201, number_of_blocks=134201659) at gossipd/gossipd.c:595
#31 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=514201, number_of_blocks=268403318) at gossipd/gossipd.c:595
#32 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=514201, number_of_blocks=536806636) at gossipd/gossipd.c:595
#33 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=514201, number_of_blocks=1073613273) at gossipd/gossipd.c:595
#34 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=514201, number_of_blocks=2147226547) at gossipd/gossipd.c:595
#35 0x000000000040ddc6 in queue_channel_ranges (peer=0x3868fc8, first_blocknum=514201, number_of_blocks=4294453094) at gossipd/gossipd.c:595
#36 0x000000000040df26 in handle_query_channel_range (peer=0x3868fc8, msg=0x37e0678 "\001\ao\342\214\n\266\361\263r\301\246\242F\256c\367O\223\036\203e\341Z\b\234h\326\031") at gossipd/gossipd.c:625
The cause was that converting a block number to an scid truncates it
at 24 bits. When we look through the index from (truncated number) to
(real end number) we get every channel, which is too large to encode,
so we iterate again.
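In Python terms the bug looks like this (mk_scid is illustrative; the packing
is the standard 24 bits of block height, 24 of tx index, 16 of output):

    def mk_scid(block, txnum=0, outnum=0):
        # Only the low 24 bits of the block height fit into a short_channel_id.
        return ((block & 0xFFFFFF) << 40) | ((txnum & 0xFFFFFF) << 16) | (outnum & 0xFFFF)

    def scid_block(scid):
        return scid >> 40

    first_blocknum = 17290511                    # from the backtrace above
    print(scid_block(mk_scid(first_blocknum)))   # 513295: wrapped past 2**24

    # Scanning the index from block 513295 to the real end block then returns
    # every channel, which is too large to encode, so we recurse again.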
This fixes both that problem, and also the issue that we'd end up
dividing into many empty sections until we get to the highest block
number. Instead, we just tack the empty blocks on to then end of the
final query.
(My initial version requested 0xFFFFFFFE blocks, but the dev code
which records what blocks were returned can't make a bitmap that big
on 32 bit).
Reported-by: George Vaccaro
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Make the two channels adjacent, and specify exactly the number of
divide-and-conquer steps there are.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We used to have a bug where decoderawtransaction would fail, fixed in
fedcfd661 (pytest: hand 'True' to decoderawtransaction so it doesn't
get confused.).
So we can remove the fallback decode, and might as well extract the
ugliness into a helper function.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Currently only used by gossipd for channel elimination.
Also print them in canonical form (/[01]), so tests need to be
changed.
Suggested-by: @cdecker
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>