In this case, the user's default was info, but they specifically asked for debug
from one plugin. Since there were no per-file filters, we fell back to the
default level, info, and rejected the entry. Since it had been explicitly
filtered in, we need to pass it at this point.
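For illustration, a minimal sketch of the intended precedence, with hypothetical names (the real lightningd filter code is structured differently):
```
/* Hypothetical sketch only: illustrates the precedence described above,
 * not the actual lightningd filtering code. */
enum example_level { LVL_IO, LVL_DEBUG, LVL_INFO, LVL_UNUSUAL, LVL_BROKEN };

struct example_filter {
	bool set;
	enum example_level level;
};

/* Returns true if the log entry should be kept. */
static bool keep_entry(enum example_level entry,
		       const struct example_filter *per_file,
		       const struct example_filter *per_node,
		       enum example_level default_level)
{
	if (per_file->set)
		return entry >= per_file->level;
	/* The fix: an explicit per-node (e.g. per-plugin) override must win
	 * before we fall back to the global default. */
	if (per_node->set)
		return entry >= per_node->level;
	return entry >= default_level;
}
```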
Reported-by: @wtogami
Fixes: #6503
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This was strongly recommended by Russell O'Connor: the "ms" implies that
it's a BIP-32 master secret, and this is CLN specific.
If we changed the hrp to "cln" it would be better, but apparently that
means we no longer fit in a "standard billfold metal wallet" (and
our code assumes a 2-byte prefix anyway).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Thread the signed tx through so close's JSON return contains that,
rather than the unsigned channel->last_tx.
We have to split the "get cmd_id" from "resolve the close commands" though;
and of course, as before, we don't actually print the txids of multiple
transactions, even though we may have multiple in flight due to splicing!
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Fixed: JSON-RPC: `close` returns a `tx` field with witness data populated (i.e. signed).
Fixes: #6440
Update the lightningd <-> channeld interface with lots of new commands needed to facilitate splicing.
Implement the channeld splicing protocol leveraging the interactivetx protocol.
Implement lightningd’s channel_control to support channeld in its splicing efforts.
Changelog-Added: Added the features to enable splicing & resizing of active channels.
Update gossip routines and various other checks on the channel state to consider AWAITING_SPLICE routable and treat it similarly to CHANNELD_NORMAL.
Small updates to psbt interface
Changelog-None
Firstly, I wanted the results easier to use:
1. Make them always lower case, even if the string was UPPER.
2. Decode the payload for them.
3. Don't give the user any fields they don't need, and make
the field sizes explicit.
Secondly, I wanted to avoid the pattern of "check in one place, assume
in another", in favour of "check on use".
So, I changed the code to lower-case the string at the start if needed,
and then changed the pull functions so we always use them to get data:
this way we fail clearly and gracefully if we don't have enough data.
I made all the checks explicit, where we assign the fields.
I also addressed the FIXME: I think the array is *often* one shorter,
but not always, so I trim the last byte at the end if needed.
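For illustration, here is a minimal sketch of the "check on use" pull pattern, with a hypothetical helper (the real pull functions in the decoding code look different):
```
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical pull helper: copy n bytes from the cursor, or fail cleanly. */
static bool pull_bytes(const unsigned char **cursor, size_t *len,
		       void *dst, size_t n)
{
	if (*len < n) {
		/* Not enough data: invalidate the cursor so every later
		 * pull also fails, and the caller sees one clear error. */
		*cursor = NULL;
		*len = 0;
		return false;
	}
	memcpy(dst, *cursor, n);
	*cursor += n;
	*len -= n;
	return true;
}
```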
[ Aditya modified the tests to work ]
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Clean restart of the daemon after a tx-abort is a nice way to work around
the 'persistent' disconnect that @t-bast noticed.
Changelog-Fixed: `dualopend`: Fix behavior for tx-aborts. No longer hangs, appropriately continues re-init of RBF requests without reconnection msg exchange.
This will at least *help* the case where these were not populated, causing us
to send errors without channel_update appended.
It's not perfect: we can still send such errors if the gossip store is
corrupted, and we still send them for private channels, but it should
help.
(The much better fix is far more invasive, so slips to next release!)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This cause of cascading failure was pointed out by @t-bast: if fees spike and
you don't timeout an outgoing onchain HTLC, you should nonetheless fail the incoming HTLC
because otherwise the incoming peer will close on you.
Of course, there's a risk of losing funds, but that only happens if you weren't going to get the HTLC spend confirmed in time anyway. It also catches any other case where the downstream onchain resolution goes wrong, containing the damage.
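As an illustration of the idea (not the actual lightningd code; names and structures are made up):
```
/* Illustrative sketch only: the real logic lives in lightningd's HTLC/onchain
 * handling and uses different structures and thresholds. */
struct example_htlc_pair {
	unsigned int incoming_cltv_expiry;   /* height at which the incoming peer
					      * will force-close on us */
	bool outgoing_resolved_onchain;      /* has our onchain HTLC spend confirmed? */
};

/* If the outgoing HTLC is still unresolved onchain and we're within
 * 'deadline_blocks' of the incoming expiry, fail the incoming HTLC now
 * rather than risk the incoming peer closing on us. */
static bool should_fail_incoming(const struct example_htlc_pair *p,
				 unsigned int blockheight,
				 unsigned int deadline_blocks)
{
	if (p->outgoing_resolved_onchain)
		return false;
	return blockheight + deadline_blocks >= p->incoming_cltv_expiry;
}
```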
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Reported-by: @t-bast
Changelog-Fixed: Protocol: We will close incoming HTLCs early if the outgoing HTLC is stuck onchain long enough, to avoid cascading failure.
This is the simplest solution, not the best, but there's significant risk in trying to remove the "we have a path" assumption in the pay code.
Includes removing a `tal_steal` which was incorrect: the buffer has the same lifetime as the plugin, so if we steal it then things get messy when we free the struct payment.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Added: Plugins: `pay` will now pay your own invoices if you try.
Previously, the "payment" and "invoice" paths were completely separate, but this now calls both. It bypasses htlc_sets (and thus, cannot do MPP), and bypasses the hook too: the former is tied closely to HTLCs, and the hook is also very htlc-centric.
Includes finishing an unfinished sentence in the sendpay man page, as a bonus.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Added: Plugins: `sendpay` now allows self-payment of invoices, by specifying an empty route.
Clean these up: they were debug logs, but we want to pass this information
back for self-payments.
Also fixes an "Attept" typo, which altered tests!
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
If they have invalid runes, we bail, but if they have runes which used
a different master secret (old commando.py allowed you to override
secret), we just complain and delete them.
Note that this requires more mocks in wallet/test/run-db.c...
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
The wallet_datastore_first() SELECT statement only iterates from the
given key (if any), relying on the caller to notice when the key no
longer applies (e.g. startkey = ["foo", "bar"] will return key
["foo", "bar"], then ["foo", "bar", "child"], then ["foo", "baz"]).
The only caller (listdatastore) would notice the key change and stop
looping, but really wallet_datastore_next() should do this. When I
tried to use it for migrations, I got very confused!
Also, several places want a simple "wallet_datastore_get()" function,
so provide that.
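For illustration, the missing check amounts to something like this (hypothetical helper, not the actual wallet code):
```
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical helper: true if 'key' is startkey itself or one of its
 * descendants, i.e. iteration should keep going. wallet_datastore_next()
 * can stop as soon as this returns false. */
static bool key_in_subtree(const char **startkey, size_t start_n,
			   const char **key, size_t key_n)
{
	if (key_n < start_n)
		return false;
	for (size_t i = 0; i < start_n; i++) {
		if (strcmp(startkey[i], key[i]) != 0)
			return false;
	}
	return true;
}
```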
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We want to access this in db migrations, which happen very early, but
runes_init needs the db, creating a circular dependency which must be
split.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
The uid isn't enough: it could be someone else's rune. This is tested
in the command rune list tests.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
At cold start, if your node is behind the blocktip and you've sent your
peer a blockheight counter from the future, we shouldn't confuse ourselves
with our rollback/replay.
Should fix flakes in CI that were spotting BROKEN blockheight updates.
Logs below are from a previous CI failure (edited for relative clarity).
The one that says `{ SENT_ADD_ACK_REVOCATION:111 }, our current 108` is
the tell; the last line is the node finally catching up to the tip.
In the test we get into this state by stopping and restarting the node.
```
2023-07-22T11:24:28.2754533Z lightningd-1 2023-07-22T11:19:34.188Z DEBUG 022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59-chan#2: Already have funding locked in
2023-07-22T11:24:28.2755486Z lightningd-1 2023-07-22T11:19:34.188Z DEBUG 022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59-chan#2: attempting update blockheight a5b23dff5177badd6df725cefeb83ceccbfc52dc64a16b38894a41f0ad8fa181
2023-07-22T11:24:28.2755778Z lightningd-1 2023-07-22T11:19:34.188Z DEBUG lightningd: update_blockheight: height = 108
2023-07-22T11:24:28.2766210Z lightningd-1 2023-07-22T11:19:34.210Z DEBUG 022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59-channeld-chan#2: init LOCAL: remote_per_commit = 029563e7c8985d8b95bdfe19e47e494bb8ec8d53ff4edb93f156be57667bfee8c9, old_remote_per_commit = 02bf3117c149d324361f0b418db8984b1e29af70c773eb2865a41ff7f583c7c9ed next_idx_local = 3 next_idx_remote = 3 revocations_received = 2 feerates { SENT_ADD_ACK_REVOCATION:3750 } range 253-150000 blockheights { SENT_ADD_ACK_REVOCATION:111 }, our current 108
2023-07-22T11:24:28.2768866Z lightningd-1 2023-07-22T11:19:34.211Z DEBUG 022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59-channeld-chan#2: peer_out WIRE_CHANNEL_REESTABLISH
2023-07-22T11:24:28.2769416Z lightningd-1 2023-07-22T11:19:34.211Z DEBUG 022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59-channeld-chan#2: billboard: Sent reestablish, waiting for theirs
2023-07-22T11:24:28.2771115Z lightningd-1 2023-07-22T11:19:34.212Z DEBUG 022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59-channeld-chan#2: peer_in WIRE_CHANNEL_REESTABLISH
2023-07-22T11:24:28.2774150Z lightningd-1 2023-07-22T11:19:34.212Z DEBUG 022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59-channeld-chan#2: Got reestablish commit=3 revoke=2
2023-07-22T11:24:28.2776056Z lightningd-1 2023-07-22T11:19:34.212Z DEBUG 022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59-channeld-chan#2: next_revocation_number = 2
2023-07-22T11:24:28.2805639Z lightningd-1 2023-07-22T11:19:34.239Z **BROKEN** 022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59-channeld-chan#2: current blockheight 108 less than last 111
2023-07-22T11:24:28.2823960Z lightningd-1 2023-07-22T11:19:34.240Z DEBUG lightningd: Adding block 109: 5f67b6e110eb3c3457bea4fcf0d04ce9be90efeee5df8e083ed4266074ca911f
2023-07-22T11:24:28.2833154Z lightningd-1 2023-07-22T11:19:34.251Z **BROKEN** 022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59-channeld-chan#2: current blockheight 108 less than last 111
2023-07-22T11:24:28.2833630Z lightningd-1 2023-07-22T11:19:34.252Z DEBUG 022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59-channeld-chan#2: Trying commit
2023-07-22T11:24:28.2834165Z lightningd-1 2023-07-22T11:19:34.252Z **BROKEN** 022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59-channeld-chan#2: current blockheight 108 less than last 111
2023-07-22T11:24:28.2835070Z lightningd-1 2023-07-22T11:19:34.252Z DEBUG 022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59-channeld-chan#2: Can't send commit: nothing to send, feechange not wanted ({ SENT_ADD_ACK_REVOCATION:3750 }) blockheight not wanted ({ SENT_ADD_ACK_REVOCATION:111 })
2023-07-22T11:24:28.2835516Z lightningd-1 2023-07-22T11:19:34.350Z DEBUG lightningd: Adding block 110: 5f43f3ac9d808e3a309720d1b0727a00d5a3d3ddca71d97401e233637e87639c
2023-07-22T11:24:28.2835962Z lightningd-1 2023-07-22T11:19:34.476Z DEBUG lightningd: Adding block 111: 55b0d1e0a08ff6233e186e6735cb1cbec33e2b0a6e7d08f2622e8c1db30b54b9
```
We get intermittent reports of subd->conn being leaked, but I could never find it.
That's because it's actually subd which is not referenced any more: subd->conn
gets reported because it's subd's tal_parent (and, except for the reference in
subd, not referenced either).
The real issue is that the channel->owner is reassigned to the new subdaemon,
and the old one is still exiting. During that time, we can see a "leak".
```
- Node /tmp/ltests-hkr089bp/test_sql_1/lightning-3/ has memory leaks: [
{
"backtrace": [
"ccan/ccan/tal/tal.c:477 (tal_alloc_)",
"ccan/ccan/io/io.c:91 (io_new_conn_)",
"lightningd/subd.c:774 (new_subd)",
"lightningd/subd.c:828 (new_channel_subd_)",
"lightningd/dual_open_control.c:3662 (peer_restart_dualopend)",
"lightningd/peer_control.c:1161 (connect_activate_subd)",
"lightningd/peer_control.c:1273 (peer_connected_hook_final)",
"lightningd/plugin_hook.c:213 (plugin_hook_callback)",
"lightningd/plugin.c:591 (plugin_response_handle)",
"lightningd/plugin.c:702 (plugin_read_json_one)",
"lightningd/plugin.c:747 (plugin_read_json)",
"ccan/ccan/io/io.c:59 (next_plan)",
"ccan/ccan/io/io.c:407 (do_plan)",
"ccan/ccan/io/io.c:417 (io_ready)",
"ccan/ccan/io/poll.c:453 (io_loop)",
"lightningd/io_loop_with_timers.c:22 (io_loop_with_timers)",
"lightningd/lightningd.c:1249 (main)"
],
"label": "ccan/ccan/io/io.c:91:struct io_conn",
"parents": [
"lightningd/lightningd.c:107:struct lightningd"
],
"value": "0x556c63c859f8"
}
```
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
The channel selection did not consider the amounts that we are trying
to transfer, which in a multiplexed channel world could end up always
selecting a channel that is too small for the payment. We also log
which channel was selected based on the selector that is passed in,
allowing us to better follow the decisions.
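Roughly, the selection now looks like this (illustrative sketch with made-up types, not the actual pay code):
```
#include <stdbool.h>
#include <stddef.h>

/* Illustrative sketch only: real selection happens in lightningd's pay path. */
struct example_chan {
	unsigned long long spendable_msat;  /* what we can actually send through it */
	bool usable;                        /* connected, normal state, etc. */
};

/* Pick a usable channel to the peer that can carry amount_msat, instead of
 * just the first usable one. Returns -1 if none is big enough. */
static int select_channel(const struct example_chan *chans, size_t n,
			  unsigned long long amount_msat)
{
	for (size_t i = 0; i < n; i++) {
		if (!chans[i].usable)
			continue;
		if (chans[i].spendable_msat >= amount_msat)
			return (int)i;
	}
	return -1;
}
```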
Changelog-Fixed: pay: `sendonion` and `sendpay` will now consider the amounts involved when picking one channel for a peer
If you miss a wait event, you can catch up by doing listinvoices and
getting the max of these fields. It's also a good debugging clue.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Now that we have a defined ordering, we can add a start param.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Added: JSON-RPC: `listinvoices` has `index` and `start` parameters for listing control.
This will initially be for listinvoices, but can be expanded to other
list commands.
It's documented, but it makes promises which aren't yet kept:
* listinvoices does not support `index` or `start` yet.
* It doesn't actually fire when invoices change yet.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Added: JSON-RPC: `wait`: new generic command to wait for events.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Added: JSON-RPC: `setchannel` adds a new `ignorefeelimits` parameter to allow a peer to set arbitrary commitment transaction fees on a per-channel basis.