1. Don't refer to obsolete send_invoice flag.
2. Don't refer to obsolete quantity_min field.
3. Don't refer to unsigned vs signed offers: they're all unsigned.
4. Add references to invoicerequest(7).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
As reported on Discord, these are undocumented. And thus, um, hard to find!
Reported-by: Aaron Barnard
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Seems like LND is hanging up on receiving these messages, even though
they're odd :(
So, when a peer connects, check if it supplies or wants peer backup
(even if it doesn't support both, it shouldn't hang up, and I didn't
want to separate the two paths).
And when we go to send our own, updated backup, check features before
sending.
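As a rough sketch of that feature check (the feature constants and the sending helper below are assumptions, not the exact CLN identifiers; `feature_offered()` is the usual feature-bit test):
```
/* Only hand our backup blob to peers that advertised one of the
 * peer-storage feature bits: other implementations may hang up on
 * messages they don't like, even odd ones. */
static bool peer_wants_storage(const u8 *their_features)
{
	/* OPT_PROVIDE_PEER_BACKUP_STORAGE / OPT_WANT_PEER_BACKUP_STORAGE
	 * are assumed names for the experimental feature bits. */
	return feature_offered(their_features, OPT_PROVIDE_PEER_BACKUP_STORAGE)
	       || feature_offered(their_features, OPT_WANT_PEER_BACKUP_STORAGE);
}

static void maybe_send_backup(struct peer *peer, const u8 *backup_blob)
{
	if (!peer_wants_storage(peer->their_features))
		return;
	/* send_peer_storage_msg() is a hypothetical sender for this sketch. */
	send_peer_storage_msg(peer, backup_blob);
}
```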
Fixes: #6065
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-EXPERIMENTAL: `experimental-peer-storage` caused LND to hang up on us, so only send to peers which support it.
e778ebb9af ("wallet: only log broken if we
have duplicate scids in channels.") downgraded the fatal() to a broken
log message, but the user reports it still won't start up.
Perhaps they're hitting the fatal() outside the loop? (And we're
not getting that output).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
ccan/io stores the context pointer for io_new_conn, but we were using
`daemon->listeners`, which we reallocate, so ccan/io can end up using a stale pointer.
```
0x3e1700 call_error
ccan/ccan/tal/tal.c:93
0x3e1700 check_bounds
ccan/ccan/tal/tal.c:165
0x3e1700 to_tal_hdr
ccan/ccan/tal/tal.c:174
0x3e1211 to_tal_hdr_or_null
ccan/ccan/tal/tal.c:186
0x3e1211 tal_alloc_
ccan/ccan/tal/tal.c:426
0x3db8f4 io_new_conn_
ccan/ccan/io/io.c:91
0x3dd2e1 accept_conn
ccan/ccan/io/poll.c:277
0x3dd2e1 io_loop
ccan/ccan/io/poll.c:444
0x3419fa main
connectd/connectd.c:2081
```
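A simplified sketch of the hazard (not the exact connectd diff; `connection_in` is assumed to be the accept callback here):
```
/* accept_conn (in the backtrace above) reuses the ctx pointer that was
 * stored when the listener was created.  If that ctx is a tal array we
 * later tal_resize()/tal_arr_expand(), the stored pointer goes stale. */

/* Risky: daemon->listeners is a tal array we grow later. */
listener = io_new_listener(daemon->listeners, fd, connection_in, daemon);

/* Safer: parent the listener on an object whose address never changes. */
listener = io_new_listener(daemon, fd, connection_in, daemon);
```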
Fixes: #6060
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This reverts us to the v22.11 behaviour, pending a revisit for the
next release.
Changelog-Changed: gossipd: revert zombification change, keep all gossip for now.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Without this patch, we only ever loaded the "nodes" table once, then
didn't see updates.
How this ever got past CI is a mystery; perhaps valgrind was so slow that
the updated node_announcement hit the gossmap before we even asked sql
on l3 about the nodes table?
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Fixed: Plugins: `sql` nodes table now gets refreshed when gossip changes.
Loading the gossip_store would not create a pending node announcement
when the node already had a zombie channel. The node announcement would
then be loaded, but rejected because the node had no broadcastable
channels. Accepting a pending node announcement here, as we do when
loading channels normally, corrects this.
Having `node_has_public_channels` take zombie channels into account
enables this behavior.
Separately, node_announcements were still being flagged as zombies
in the gossip store despite that feature being removed.
Changelog-None
Without inheriting zombie status, gossipd would allow regular channel updates
into the store until the pruning cycle (3.5 days) hits and the channel is
properly flagged. Applying zombie status when reading channel updates from
the store prevents this.
Changelog-None
remove_chan_from_node already corrects the ordering if a node_announcement
is left ahead of the next oldest channel_announcement, but zombifying should
do that check (and reorder if necessary) too.
Changelog-None
Since it is an optional field in the `listconfigs` output, we can't use
the `rpc_scan` mechanism (it doesn't handle optional fields yet). We'll use
that list of accepted types later to avoid stripping them.
Commit a418615b7f ("rpc: adds
num_channels to listpeers") broke the sql tests. Turns out, no
openchannel v2 tests are run in CI!
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This commit disables the peerstorage plugins
when the feature is not enabled.
I found this issue with lnprototest; I guess
we did not find it in a normal run because
unknown odd messages are otherwise ignored?
Changelog-Fixed: Disable the protocol messages when peerstorage is disabled.
Signed-off-by: Vincenzo Palazzo <vincenzopalazzodev@gmail.com>
It works for the trivial case, where groupid and partid are the same,
but silently deletes nothing in the other cases (or worse, deletes the
wrong entry!).
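A sketch of the shape of the fix (not the actual wallet code; the helper, table and column names are assumptions based on lightningd's v2 db helpers): the DELETE must key on the specific (groupid, partid) pair as well as the payment hash.
```
static void payment_delete(struct db *db, const struct sha256 *payment_hash,
			   u64 groupid, u64 partid)
{
	struct db_stmt *stmt;

	/* Keying on payment_hash alone can match nothing, or a different
	 * attempt of the same payment. */
	stmt = db_prepare_v2(db, SQL("DELETE FROM payments"
				     " WHERE payment_hash = ?"
				     " AND groupid = ?"
				     " AND partid = ?"));
	db_bind_sha256(stmt, 0, payment_hash);
	db_bind_u64(stmt, 1, groupid);
	db_bind_u64(stmt, 2, partid);
	db_exec_prepared_v2(take(stmt));
}
```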
See: #5835
Changelog-Fixed: `delpay`: actually delete the specified payment (mainly found by `autoclean`).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
If we had two things to clean, we fired off two requests (e.g.
listforwards and listinvoices) and both marked the timer as finished,
triggering an assert.
We already have a refcount for outstanding requests to avoid this
for e.g. outstanding del commands, so use it here too!
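A sketch of the refcount guard (struct and field names are assumptions; `clean_finished_one`/`clean_finished` are the functions visible in the backtrace below):
```
#include <assert.h>
#include <stddef.h>

struct clean_info {
	/* Bumped once per outstanding cleanup request. */
	size_t cleanups_remaining;
};

/* Defined elsewhere in the plugin; marks the timer cycle done. */
void clean_finished(struct clean_info *cinfo);

static void clean_finished_one(struct clean_info *cinfo)
{
	assert(cinfo->cleanups_remaining > 0);
	if (--cinfo->cleanups_remaining > 0)
		return;

	/* Only the final outstanding request gets here, so the timer is
	 * marked complete exactly once. */
	clean_finished(cinfo);
}
```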
```
Jan 19 19:20:00 lightningd[748044]: autoclean: plugins/libplugin.c:445: timer_complete: Assertion `p->in_timer > 0' failed.
Jan 19 19:20:00 lightningd[748044]: autoclean: FATAL SIGNAL 6 (version v22.11.1)
Jan 19 19:20:00 lightningd[748044]: 0x562c388136e4 send_backtrace
Jan 19 19:20:00 lightningd[748044]: common/daemon.c:33
Jan 19 19:20:00 lightningd[748044]: 0x562c3881376c crashdump
Jan 19 19:20:00 lightningd[748044]: common/daemon.c:46
Jan 19 19:20:00 lightningd[748044]: 0x7f26d0898d5f ???
Jan 19 19:20:00 lightningd[748044]: ./signal/../sysdeps/unix/sysv/linux/x86_64/sigaction.c:0
Jan 19 19:20:00 lightningd[748044]: 0x7f26d0898ce1 __GI_raise
Jan 19 19:20:00 lightningd[748044]: ../sysdeps/unix/sysv/linux/raise.c:51
Jan 19 19:20:00 lightningd[748044]: 0x7f26d0882536 __GI_abort
Jan 19 19:20:00 lightningd[748044]: ./stdlib/abort.c:79
Jan 19 19:20:00 lightningd[748044]: 0x7f26d088240e __assert_fail_base
Jan 19 19:20:00 lightningd[748044]: ./assert/assert.c:92
Jan 19 19:20:00 lightningd[748044]: 0x7f26d0891661 __GI___assert_fail
Jan 19 19:20:00 lightningd[748044]: ./assert/assert.c:101
Jan 19 19:20:00 lightningd[748044]: 0x562c3880355d timer_complete
Jan 19 19:20:00 lightningd[748044]: plugins/libplugin.c:445
Jan 19 19:20:00 lightningd[748044]: 0x562c38800b54 clean_finished
Jan 19 19:20:00 lightningd[748044]: plugins/autoclean.c:122
Jan 19 19:20:00 lightningd[748044]: 0x562c388010ed clean_finished_one
Jan 19 19:20:00 lightningd[748044]: plugins/autoclean.c:132
Jan 19 19:20:00 lightningd[748044]: 0x562c388011b6 del_done
Jan 19 19:20:00 lightningd[748044]: plugins/autoclean.c:149
Jan 19 19:20:00 lightningd[748044]: 0x562c388058b5 handle_rpc_reply
Jan 19 19:20:00 lightningd[748044]: plugins/libplugin.c:768
Jan 19 19:20:00 lightningd[748044]: 0x562c38805a39 rpc_read_response_one
Jan 19 19:20:00 lightningd[748044]: plugins/libplugin.c:944
Jan 19 19:20:00 lightningd[748044]: 0x562c38805ad7 rpc_conn_read_response
Jan 19 19:20:00 lightningd[748044]: plugins/libplugin.c:968
Jan 19 19:20:00 lightningd[748044]: 0x562c38876b60 next_plan
Jan 19 19:20:00 lightningd[748044]: ccan/ccan/io/io.c:59
Jan 19 19:20:00 lightningd[748044]: 0x562c38876fe7 do_plan
Jan 19 19:20:00 lightningd[748044]: ccan/ccan/io/io.c:407
Jan 19 19:20:00 lightningd[748044]: 0x562c38877080 io_ready
Jan 19 19:20:00 lightningd[748044]: ccan/ccan/io/io.c:417
Jan 19 19:20:00 lightningd[748044]: 0x562c3887823c io_loop
Jan 19 19:20:00 lightningd[748044]: ccan/ccan/io/poll.c:453
Jan 19 19:20:00 lightningd[748044]: 0x562c38805d11 plugin_main
Jan 19 19:20:00 lightningd[748044]: plugins/libplugin.c:1801
Jan 19 19:20:00 lightningd[748044]: 0x562c38801c7a main
Jan 19 19:20:00 lightningd[748044]: plugins/autoclean.c:613
```
Fixes: #5912
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Missed a DEFAULT in the db clause.
```
Feb 15 16:02:12 citrine lightningd[902093]: Accessing a null column lease_satoshi/15 in query SELECT funding_tx_id, funding_tx_outnum, funding_feerate, funding_satoshi, our_funding_satoshi, funding_psbt, last_tx, last_sig, funding_tx_remote_sigs_received, lease_expiry, lease_commit_sig, lease_chan_max_msat, lease_chan_max_ppt, lease_blockheight_start, lease_fee, lease_satoshi FROM channel_funding_inflights WHERE channel_id = ? ORDER BY funding_feerate
```
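A sketch of the kind of migration-table entry implied (the exact statement is an assumption; `lease_satoshi` is the column named in the failing query above): without a DEFAULT, rows that predate the column end up NULL.
```
/* Sketch of a wallet migration entry; not the actual patch. */
{SQL("ALTER TABLE channel_funding_inflights"
     " ADD lease_satoshi BIGINT DEFAULT 0;"), NULL},
```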
Fixes: #6016
Closing channels would previously require moving the node announcements
in the gossip store on occasion. They incorrectly lost their spam flag
during this process (they would no longer be squelched).
Changelog-None
A zombie channel is not considered broadcastable, so if all channels
are zombies (i.e. is_node_zombie() is true), then
node_has_broadcastable_channels() is false.
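A sketch of that relationship in code (iteration follows gossipd's usual `first_chan`/`next_chan` pattern; `is_chan_zombie()` is an assumed helper, and this is not the real implementation):
```
static bool node_has_broadcastable_channels(const struct node *node)
{
	struct chan_map_iter i;
	struct chan *c;

	for (c = first_chan(node, &i); c; c = next_chan(node, &i)) {
		/* Zombie channels don't count as broadcastable... */
		if (is_chan_zombie(c))
			continue;
		if (is_chan_public(c))
			return true;
	}
	/* ...so an all-zombie node reports none. */
	return false;
}
```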
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This simplifies things (we'll get node_announcement if they ever
rebroadcast), since we clearly have an issue with node_announcement for
zombie nodes.
Changes:
1. Remove now-unused gossip_store_mark_nannounce_zombie and resurrect_nannouncements.
2. Don't consider zombie channels to count when deciding whether to move node_announcement
(node_announcement must be preceded by at least one broadcastable channel_announcement).
3. Treat incoming node_announcement where we have all-zombie channels the same as if
we had no channels.
4. Remove node_announcement whenever we have no announceable channels (not just zombies).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This could always happen if we armed the timer when we did have public
channels, and by the time we sent our node_announcement we no longer
did; but it gets triggered in our tests when we remove (our own!)
zombied node_announcement in the next patch.
This fixes a crash that I hit on armv7. Looking inside the coredump
with gdb (after adding an assert that `n` must not be NULL),
I got the following stacktrace:
```
(gdb) bt
#0 0x00000000 in ?? ()
#1 0x0043a038 in send_backtrace (why=0xbe9e3600 "FATAL SIGNAL 11") at common/daemon.c:36
#2 0x0043a0ec in crashdump (sig=11) at common/daemon.c:46
#3 <signal handler called>
#4 0x00406d04 in node_announcement (map=0x938ecc, nann_off=495146) at common/gossmap.c:586
#5 0x00406fec in map_catchup (map=0x938ecc, num_rejected=0xbe9e3a40) at common/gossmap.c:643
#6 0x004073a4 in load_gossip_store (map=0x938ecc, num_rejected=0xbe9e3a40) at common/gossmap.c:697
#7 0x00408244 in gossmap_load (ctx=0x0, filename=0x4e16b8 "gossip_store", num_channel_updates_rejected=0xbe9e3a40) at common/gossmap.c:976
#8 0x0041a548 in init (p=0x93831c, buf=0x9399d4 "\n\n{\"jsonrpc\":\"2.0\",\"id\":\"cln:init#25\",\"method\":\"init\",\"params\":{\"options\":{},\"configuration\":{\"lightning-dir\":\"/home/vincent/.lightning/testnet\",\"rpc-file\":\"lightning-rpc\",\"startup\":true,\"network\":\"te"..., config=0x939cdc) at plugins/topology.c:622
#9 0x0041e5d0 in handle_init (cmd=0x938934, buf=0x9399d4 "\n\n{\"jsonrpc\":\"2.0\",\"id\":\"cln:init#25\",\"method\":\"init\",\"params\":{\"options\":{},\"configuration\":{\"lightning-dir\":\"/home/vincent/.lightning/testnet\",\"rpc-file\":\"lightning-rpc\",\"startup\":true,\"network\":\"te"..., params=0x939c8c)
at plugins/libplugin.c:1208
#10 0x0041fc04 in ld_command_handle (plugin=0x93831c, toks=0x939bec) at plugins/libplugin.c:1572
#11 0x00420050 in ld_read_json_one (plugin=0x93831c) at plugins/libplugin.c:1667
#12 0x004201bc in ld_read_json (conn=0x9391c4, plugin=0x93831c) at plugins/libplugin.c:1687
#13 0x004cb82c in next_plan (conn=0x9391c4, plan=0x9391d8) at ccan/ccan/io/io.c:59
#14 0x004cc67c in do_plan (conn=0x9391c4, plan=0x9391d8, idle_on_epipe=false) at ccan/ccan/io/io.c:407
#15 0x004cc6dc in io_ready (conn=0x9391c4, pollflags=1) at ccan/ccan/io/io.c:417
#16 0x004cf8cc in io_loop (timers=0x9383c4, expired=0xbe9e3ce4) at ccan/ccan/io/poll.c:453
#17 0x00420af4 in plugin_main (argv=0xbe9e3eb4, init=0x41a46c <init>, restartability=PLUGIN_STATIC, init_rpc=true, features=0x0, commands=0x6167e8 <commands>, num_commands=4, notif_subs=0x0, num_notif_subs=0, hook_subs=0x0, num_hook_subs=0, notif_topics=0x0, num_notif_topics=0) at plugins/libplugin.c:1891
#18 0x0041a6f8 in main (argc=1, argv=0xbe9e3eb4) at plugins/topology.c:679
```
I do not know if this is the right solution, because I do not know
when we can parse a node announcement for a node that
is no longer in the gossip map.
So, I hope this is at least useful for @rustyrussell
Changelog-Fixed: fixes `FATAL SIGNAL 11` on gossmap node announcement parsing.
Signed-off-by: Vincenzo Palazzo <vincenzopalazzodev@gmail.com>
When doing some plugin-related work, I discovered that the datastore API
has two issues:
- Error messages on startup of a plugin's init method when the datastore is
still completely empty: "Parsing '{datastore:[0:': token has no index 0: []"
- Data is escaped but not unwrapped again when sending to and getting from
the API.
[ Removed xfail, it now passes! --RR ]
Closes: #5990
We were feeding in the raw JSON, which escapes \". Then we were
escaping *again* to return it.
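A tiny standalone demo of the failure mode (plain C, not lightningd's code): escaping a raw JSON token, which already contains `\"`, escapes the backslashes a second time.
```
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Naive JSON-string escaper, just for the demo. */
static char *escape(const char *in)
{
	char *out = malloc(strlen(in) * 2 + 1);
	char *p = out;

	for (; *in; in++) {
		if (*in == '"' || *in == '\\')
			*p++ = '\\';
		*p++ = *in;
	}
	*p = '\0';
	return out;
}

int main(void)
{
	const char *user_string = "say \"hi\"";    /* what the user stored */
	const char *json_token = "say \\\"hi\\\""; /* raw JSON: already escaped */

	char *good = escape(user_string);
	char *bad = escape(json_token);

	printf("escaped once:  %s\n", good); /* say \"hi\"     */
	printf("escaped twice: %s\n", bad);  /* say \\\"hi\\\" */
	free(good);
	free(bad);
	return 0;
}
```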
Reported-by: @m-schmook
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Fixed: JSON-RPC: `datastore` handles escapes in `string` parameter correctly.
This fixes the following compilation error
and allows rebuilding again on 32-bit platforms.
```
lightningd/dual_open_control.c: In function 'validate_input_unspent':
lightningd/dual_open_control.c:2627:43: error: format '%llu' expects argument of type 'long long unsigned int', but argument 4 has type 'size_t' {aka 'unsigned int'} [-Werror=format=]
2627 | err = tal_fmt(pv, "PSBT input at index %"PRIu64
| ^~~~~~~~~~~~~~~~~~~~~~~
2628 | " missing serial id", i);
| ~
| |
| size_t {aka unsigned int}
ccan/ccan/tal/str/str.h:43:46: note: in definition of macro 'tal_fmt'
43 | tal_fmt_(ctx, TAL_LABEL(char, "[]"), __VA_ARGS__)
| ^~~~~~~~~~~
```
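One portable way to address it (whether the actual patch casts to a 64-bit type or switches the format specifier is not shown here) is to print a `size_t` with `%zu`:
```
#include <stdio.h>

int main(void)
{
	size_t i = 0;

	/* %zu always matches size_t; "%"PRIu64 only does on 64-bit. */
	printf("PSBT input at index %zu missing serial id\n", i);
	return 0;
}
```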
PS: apparently I'm the only person left running cln on an old raspberry pi 2?
Changelog-None
Signed-off-by: Vincenzo Palazzo <vincenzopalazzodev@gmail.com>
People are upgrading to 22.11.1 now, and in some configurations, like the one
mentioned in the issue, we should
put some information in the log when we are not able to upgrade.
Signed-off-by: Vincenzo Palazzo <vincenzopalazzodev@gmail.com>