GitHub API calls are now rate-limited to 60 per hour. Searching through
the subdirectory contents of lightningd/plugins will hit this limit after
two searches or installs. The simple answer is to look for a
suitably-named directory, rather than verifying that it contains a valid
entrypoint, prior to cloning the repository.
Also corrects a bug that was flagging submodules as files while
populating directory contents.
We were extracting the output script for all outputs, and discarding
them immediately again if they were not P2WSH outputs, which are the
ones of interest to us. This patch moves the extraction to after we
have determined it is useful, so we should save a couple thousand
`tal()` and `tal_free()` calls.
Changelog-Changed: lightningd: Speed up blocksync by not parsing unused parts of the transactions
Processing blocks is rather slow at the moment, but one thing we can
do is avoid copying all output scripts, when really all we are
interested in are the couple of outputs that are P2WSH.
This builds the foundation for that by adding a method to introspect
the script without having to clone it first, saving us some
allocations and deallocations.
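As a minimal sketch of the technique (not the actual CLN helper, whose
name and signature differ): a P2WSH scriptPubKey is exactly 34 bytes,
OP_0 followed by a 32-byte push, so the output type can be recognized
by peeking at the serialized bytes in place:
```
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Sketch only: true if `script` is a P2WSH output script, i.e. OP_0
 * (0x00) followed by a 0x20 push of the 32-byte witness script hash.
 * The bytes are inspected in place, so no tal() copy is needed. */
static bool scriptpubkey_is_p2wsh(const uint8_t *script, size_t len)
{
	return len == 34 && script[0] == 0x00 && script[1] == 0x20;
}
```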
Changelog-Changed: core: Processing blocks should now be faster
You otherwise get thousands of these zero-count splice resume checks
spamming your logs, when most of the time they are only informing you
that nothing is happening.
Reduces log noise.
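A hedged sketch of the change, with hypothetical names (the real
counter and log call differ): only emit the debug line when there is
actually something to resume:
```
#include <stddef.h>
#include <stdio.h>

/* Sketch with hypothetical names: only log the splice resume check
 * when the inflight count is non-zero, so idle nodes stay quiet. */
static void log_splice_resume_check(size_t num_inflight)
{
	if (num_inflight == 0)
		return;
	fprintf(stderr, "Splice resume check: %zu inflight splices\n",
		num_inflight);
}
```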
Signed-off-by: Warren Togami <wtogami@gmail.com>
If we delete it the first time a channel is spent (before it is
closed), we will crash when we try to move it again:
```
$ lightning_gossipd: gossip_store: get delete entry offset 47411992/51608943 (version v23.11-378-gac2a386-modded)
0x1002544b send_backtrace
common/daemon.c:33
0x1003415f status_failed
common/status.c:221
0x10016ef3 gossip_store_get_with_hdr
gossipd/gossip_store.c:466
0x10016faf check_msg_type
gossipd/gossip_store.c:491
0x1001722b gossip_store_set_flag
gossipd/gossip_store.c:509
0x1001752b gossip_store_del
gossipd/gossip_store.c:561
0x10017f5b remove_channel
gossipd/gossmap_manage.c:299
0x100181cf kill_spent_channel
gossipd/gossmap_manage.c:1144
0x1001a7df gossmap_manage_new_block
gossipd/gossmap_manage.c:1183
0x10014673 new_blockheight
gossipd/gossipd.c:483
0x10014d73 recv_req
gossipd/gossipd.c:594
```
Reported-by: @microsatosi on Discord
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Watchtowers changed the code so that we *always* have a channel->shutdown_scriptpubkey[LOCAL]
(see new_channel()). The previous code had several problems:
1. It tested this for NULL, unnecessarily.
2. It allowed overriding if it was a default, *even* if we were already using it.
3. If the peer opened without option_shutdown_anysegwit, but upgraded before we closed,
we would not recognize the default.
4. It set the final scriptpubkey (and other things!) even if the command failed.
Changelog-Fixed: JSON-RPC: `close` with `destination` works even if prior `destination` was rejected.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Whenever there is a payment failure that requires a gossip update, for
example a change to the fee rates of remote channels, we call addgossip.
For renepay to consider these changes in the coming payment attempts, it
must update the gossmap.
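A hedged sketch of the flow, assuming a `gossmap_refresh()`-style
helper as in CLN's `common/gossmap.h` (the exact signature has varied
across versions): after feeding the failing `channel_update` to
`addgossip`, re-read the gossip store before computing the next routes:
```
#include <stdbool.h>
#include <stdlib.h>

struct gossmap;

/* Assumed prototype, after common/gossmap.h (the exact signature has
 * varied across CLN versions): re-reads new gossip_store records. */
bool gossmap_refresh(struct gossmap *map, bool *updated);

/* Sketch: after lightningd accepts the channel_update we fed to
 * `addgossip`, refresh the in-memory map so the next payment attempt
 * routes with the updated fee rates. */
static void refresh_before_next_attempt(struct gossmap *gossmap)
{
	bool updated;

	if (!gossmap_refresh(gossmap, &updated))
		abort(); /* Sketch only: real code would log and recover. */
}
```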
- previous pending sendpays must add up, so that the plugin tries to pay
the rest of the amount,
- avoid groupid/partid collisions,
- add shadow fees if the option is set and the payment amount minus the
total delivering is 0,
- add a test,
- also fix a buggy shadow routing test
If we accept a channel_update in state "NEED_SIGS" we should not set the
refresh timer: we're simply holding it for the moment we get to that state
(which will happen as we mine the block).
```
0x7fd1cce39205 __assert_fail
./assert/assert.c:101
0x55c103cc6ee9 check_channel_gossip
lightningd/channel_gossip.c:128
0x55c103cc8a13 channel_gossip_update_from_gossipd
lightningd/channel_gossip.c:821
0x55c103cd752d handle_init_cupdate
lightningd/gossip_control.c:138
0x55c103cd79a3 gossip_msg
lightningd/gossip_control.c:190
```
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This code was trying to check that the address type is not one of the ADDR_TYPE_TOR*
types, but the is_toraddr() function checks a domain name! The cast should have been
a clue that this was wrong!
Anyway, wireaddr_to_addrinfo() aborts on these cases already, so the asserts here are
superfluous.
Found in an unrelated CI run:
```
Valgrind error file: valgrind-errors.20610
==20610== Conditional jump or move depends on uninitialised value(s)
==20610== at 0x484ED28: strlen (in /usr/libexec/valgrind/vgpreload_memcheck-amd64-linux.so)
==20610== by 0x138FA3: is_toraddr (wireaddr.c:344)
==20610== by 0x11499B: conn_init (connectd.c:729)
==20610== by 0x28FD73: next_plan (io.c:59)
==20610== by 0x28FF94: io_new_conn_ (io.c:116)
==20610== by 0x11531B: try_connect_one_addr (connectd.c:927)
==20610== by 0x1182A8: try_connect_peer (connectd.c:1781)
==20610== by 0x11834E: connect_to_peer (connectd.c:1797)
==20610== by 0x119241: recv_req (connectd.c:2074)
==20610== by 0x12836F: handle_read (daemon_conn.c:35)
==20610== by 0x28FD73: next_plan (io.c:59)
==20610== by 0x2909A8: do_plan (io.c:407)
==20610==
```
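For illustration, a simplified stand-in for the check (the real
`is_toraddr()` lives in `common/wireaddr.c`; the details here are
assumptions): it expects a NUL-terminated domain name, so passing a
cast `struct wireaddr *` reads arbitrary, possibly uninitialized
bytes, exactly as valgrind reported:
```
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Simplified sketch: a Tor address here is just a domain name ending
 * in ".onion".  Note the argument is a string, not a struct. */
static bool is_toraddr_sketch(const char *fqdn)
{
	size_t len = strlen(fqdn);

	if (len < strlen(".onion"))
		return false;
	return strcmp(fqdn + len - strlen(".onion"), ".onion") == 0;
}

int main(void)
{
	assert(is_toraddr_sketch("3g2upl4pq6kufc4m.onion"));
	assert(!is_toraddr_sketch("example.com"));
	return 0;
}
```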
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This was triggered by the recover plugin tests (not yet merged!) and causes a crash
because we don't have signatures yet. It can only happen if we lost our database,
but at least don't crash!
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We would sometimes propose fees which we couldn't afford, and thus the
peer would get upset: if we didn't recover, it could cause force closes.
This was because we didn't take into account that pending HTLCs reduce
our total available funds.
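A rough sketch of the corrected arithmetic, using hypothetical names
rather than the actual CLN code: the affordable feerate must be derived
from the funder's balance *after* subtracting in-flight HTLCs and the
channel reserve:
```
#include <stddef.h>
#include <stdint.h>

/* Sketch: highest feerate (sat per kiloweight) the funder can afford
 * for a commitment transaction of `commit_weight` weight units.  The
 * bug was omitting the pending-HTLC subtraction below. */
static uint64_t max_affordable_feerate(uint64_t funder_msat,
				       uint64_t pending_htlc_msat,
				       uint64_t reserve_msat,
				       size_t commit_weight)
{
	uint64_t avail_sat;

	if (funder_msat < pending_htlc_msat + reserve_msat)
		return 0;
	avail_sat = (funder_msat - pending_htlc_msat - reserve_msat) / 1000;

	/* fee_sat = feerate_per_kw * weight / 1000, so invert it. */
	return avail_sat * 1000 / commit_weight;
}
```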
Changelog-Fixed: Protocol: Don't upset peers by sending `update_fee` with fees we cannot afford in the case where HTLCs are large.
This happens if:
1. The peer sets a timestamp filter to non-zero, and
2. We have a channel_announcement without a channel_update.
The timestamp is 0 as a placeholder, as part of the recent gossip rework
(we used to hold these channel_announcements in memory, which was complex).
But this means we won't send it in this case, and if we later send the
channel_update, CI will complain about 'Bad gossip order'.
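For illustration, a simplified version of the BOLT 7
`gossip_timestamp_filter` test (not the gossipd code itself): a
placeholder timestamp of 0 can never fall inside a filter whose
`first_timestamp` is non-zero, so the channel_announcement is skipped:
```
#include <stdbool.h>
#include <stdint.h>

/* Simplified: an entry is relayed iff its timestamp lies in
 * [first_timestamp, first_timestamp + timestamp_range). */
static bool in_timestamp_filter(uint32_t first_timestamp,
				uint32_t timestamp_range,
				uint32_t timestamp)
{
	return timestamp >= first_timestamp
		&& timestamp - first_timestamp < timestamp_range;
}

/* With any first_timestamp > 0, in_timestamp_filter(..., 0) is always
 * false: the placeholder channel_announcement is never sent, though
 * its channel_update will be later. */
```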
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Normally, channels are marked dying, then 12 blocks later, removed.
But for local channels, we can access any spliced channel already, so
we remove them immediately from our local gossip. This left a hole in
our logic if that channel was the last one keeping a
node_announcement alive.
The solution is to unify this with the "moved node_announcement" path.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We make sure a node_announcement is preceded by at least one channel_announcement,
but dying ones don't count (as they are not broadcast!).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We accept node_announcements on dying channels, but make sure we
set the dying flag if the node's channels are all dying.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We can update dying channels, though it seems weird! We accept gossip about them,
we just don't propagate it.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This avoids us gossiping about nodes which don't have live channels.
Interestingly, we previously tested that we *did* gossip such node
announcements; now we fix that test.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
It prints a message to stderr, but actually it's fine with this version:
```
dump-gossipstore: UNKNOWN GOSSIP minor VERSION 14 (expected 12)
```
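A hedged sketch of that tolerance (the real constants and mask are in
gossipd's gossip_store headers; the exact bit split is an assumption):
the version byte packs a major and a minor part, and the dump tool
should only hard-fail on an unknown *major* version:
```
#include <stdint.h>
#include <stdio.h>

/* Assumed split: top 3 bits are the major version, low 5 bits the
 * minor (minors 12 and 14 both fit this scheme). */
#define GOSSIP_STORE_MAJOR(v)	((uint8_t)(v) >> 5)
#define GOSSIP_STORE_MINOR(v)	((v) & 0x1F)

/* Returns -1 only on a major mismatch; a minor mismatch just warns,
 * matching the stderr message quoted above. */
static int check_version(uint8_t version, uint8_t expected)
{
	if (GOSSIP_STORE_MAJOR(version) != GOSSIP_STORE_MAJOR(expected))
		return -1;
	if (GOSSIP_STORE_MINOR(version) != GOSSIP_STORE_MINOR(expected))
		fprintf(stderr,
			"dump-gossipstore: UNKNOWN GOSSIP minor VERSION %u (expected %u)\n",
			GOSSIP_STORE_MINOR(version),
			GOSSIP_STORE_MINOR(expected));
	return 0;
}
```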
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>