If we had two things to clean, we fired off two requests (e.g.
listforwards and listinvoices) and both marked the timer as finished,
triggering an assert.
We already have a refcount for outstanding requests to avoid this
for e.g. outstanding del commands, so use it here too!
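As a rough illustration of the refcount pattern (a sketch, not the actual autoclean code):
```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Only the request that finishes last may mark the timer complete. */
struct cleanup {
	size_t outstanding;	/* in-flight list/del requests */
};

static void request_sent(struct cleanup *c)
{
	c->outstanding++;
}

static bool request_finished(struct cleanup *c)
{
	assert(c->outstanding > 0);
	return --c->outstanding == 0;	/* true => safe to finish the timer */
}
```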
```
Jan 19 19:20:00 lightningd[748044]: autoclean: plugins/libplugin.c:445: timer_complete: Assertion `p->in_timer > 0' failed.
Jan 19 19:20:00 lightningd[748044]: autoclean: FATAL SIGNAL 6 (version v22.11.1)
Jan 19 19:20:00 lightningd[748044]: 0x562c388136e4 send_backtrace
Jan 19 19:20:00 lightningd[748044]: common/daemon.c:33
Jan 19 19:20:00 lightningd[748044]: 0x562c3881376c crashdump
Jan 19 19:20:00 lightningd[748044]: common/daemon.c:46
Jan 19 19:20:00 lightningd[748044]: 0x7f26d0898d5f ???
Jan 19 19:20:00 lightningd[748044]: ./signal/../sysdeps/unix/sysv/linux/x86_64/sigaction.c:0
Jan 19 19:20:00 lightningd[748044]: 0x7f26d0898ce1 __GI_raise
Jan 19 19:20:00 lightningd[748044]: ../sysdeps/unix/sysv/linux/raise.c:51
Jan 19 19:20:00 lightningd[748044]: 0x7f26d0882536 __GI_abort
Jan 19 19:20:00 lightningd[748044]: ./stdlib/abort.c:79
Jan 19 19:20:00 lightningd[748044]: 0x7f26d088240e __assert_fail_base
Jan 19 19:20:00 lightningd[748044]: ./assert/assert.c:92
Jan 19 19:20:00 lightningd[748044]: 0x7f26d0891661 __GI___assert_fail
Jan 19 19:20:00 lightningd[748044]: ./assert/assert.c:101
Jan 19 19:20:00 lightningd[748044]: 0x562c3880355d timer_complete
Jan 19 19:20:00 lightningd[748044]: plugins/libplugin.c:445
Jan 19 19:20:00 lightningd[748044]: 0x562c38800b54 clean_finished
Jan 19 19:20:00 lightningd[748044]: plugins/autoclean.c:122
Jan 19 19:20:00 lightningd[748044]: 0x562c388010ed clean_finished_one
Jan 19 19:20:00 lightningd[748044]: plugins/autoclean.c:132
Jan 19 19:20:00 lightningd[748044]: 0x562c388011b6 del_done
Jan 19 19:20:00 lightningd[748044]: plugins/autoclean.c:149
Jan 19 19:20:00 lightningd[748044]: 0x562c388058b5 handle_rpc_reply
Jan 19 19:20:00 lightningd[748044]: plugins/libplugin.c:768
Jan 19 19:20:00 lightningd[748044]: 0x562c38805a39 rpc_read_response_one
Jan 19 19:20:00 lightningd[748044]: plugins/libplugin.c:944
Jan 19 19:20:00 lightningd[748044]: 0x562c38805ad7 rpc_conn_read_response
Jan 19 19:20:00 lightningd[748044]: plugins/libplugin.c:968
Jan 19 19:20:00 lightningd[748044]: 0x562c38876b60 next_plan
Jan 19 19:20:00 lightningd[748044]: ccan/ccan/io/io.c:59
Jan 19 19:20:00 lightningd[748044]: 0x562c38876fe7 do_plan
Jan 19 19:20:00 lightningd[748044]: ccan/ccan/io/io.c:407
Jan 19 19:20:00 lightningd[748044]: 0x562c38877080 io_ready
Jan 19 19:20:00 lightningd[748044]: ccan/ccan/io/io.c:417
Jan 19 19:20:00 lightningd[748044]: 0x562c3887823c io_loop
Jan 19 19:20:00 lightningd[748044]: ccan/ccan/io/poll.c:453
Jan 19 19:20:00 lightningd[748044]: 0x562c38805d11 plugin_main
Jan 19 19:20:00 lightningd[748044]: plugins/libplugin.c:1801
Jan 19 19:20:00 lightningd[748044]: 0x562c38801c7a main
Jan 19 19:20:00 lightningd[748044]: plugins/autoclean.c:613
```
Fixes: #5912
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Since it's not spec-final yet (hell, it's not even properly specified
yet!) we need to put it behind an experimental flag.
Unfortunately, we don't have support for doing this in a plugin; a
plugin must present features before parsing options. So we need to do
it in core.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
When you return an allocated pointer, you should always hand in the
context you want it allocated from. This is more explicit, because it may
really matter to the caller!
This also folds some simple operations, and avoids doing too much
variable assignment in the declarations themselves: some coding styles
prohibit such initializers, but that's a bit extreme.
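A minimal sketch of the convention (assuming ccan/tal-style allocators; names are illustrative):
```c
#include <ccan/tal/str/str.h>

/* The caller names the parent context, so the lifetime of the returned
 * string is explicit at every call site. */
static char *make_label(const tal_t *ctx, unsigned int id)
{
	return tal_fmt(ctx, "label-%u", id);
}
```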
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
And we should always represent them as is, not as optional: it's
possible in future we could *require* "WANT_PEER_BACKUP_STORAGE".
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
v2 opens require you to use native segwit inputs
Changelog-Added: JSONRPC: `upgradewallet` command, sweeps all p2sh-wrapped outputs to a native segwit output
The htlc_budget only exists iff the hint is a 'local' one; we were
failing to write to the htlc_budget field for non-local cases.
To avoid this, we make `local` into a struct that contains the fields
that pertain to local-only payments (in this case, `htlc_budget`).
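Roughly the shape of the fix (struct and field names here are illustrative, not the exact libplugin-pay definitions):
```c
#include <stdbool.h>
#include <stdint.h>

/* Local-only fields live behind a pointer, so non-local code paths can
 * never read them half-initialized. */
struct local_hint {
	uint32_t htlc_budget;
};

struct channel_hint {
	bool enabled;
	struct local_hint *local;	/* NULL unless this hint is local */
};
```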
```
Valgrind error file: valgrind-errors.1813487
==1813487== Conditional jump or move depends on uninitialised value(s)
==1813487== at 0x4A9C958: __vfprintf_internal (vfprintf-internal.c:1687)
==1813487== by 0x4AB0F99: __vsnprintf_internal (vsnprintf.c:114)
==1813487== by 0x1D2EF9: do_vfmt (str.c:66)
==1813487== by 0x1D3006: tal_vfmt_ (str.c:92)
==1813487== by 0x11A60A: paymod_log (libplugin-pay.c:167)
==1813487== by 0x11B749: payment_chanhints_apply_route (libplugin-pay.c:534)
==1813487== by 0x11EB36: payment_compute_onion_payloads (libplugin-pay.c:1707)
==1813487== by 0x12000F: payment_continue (libplugin-pay.c:2135)
==1813487== by 0x1245B9: adaptive_splitter_cb (libplugin-pay.c:3800)
==1813487== by 0x11FFB6: payment_continue (libplugin-pay.c:2123)
==1813487== by 0x1206BC: retry_step_cb (libplugin-pay.c:2301)
==1813487== by 0x11FFB6: payment_continue (libplugin-pay.c:2123)
==1813487==
{
<insert_a_suppression_name_here>
Memcheck:Cond
fun:__vfprintf_internal
fun:__vsnprintf_internal
fun:do_vfmt
fun:tal_vfmt_
fun:paymod_log
fun:payment_chanhints_apply_route
fun:payment_compute_onion_payloads
fun:payment_continue
fun:adaptive_splitter_cb
fun:payment_continue
fun:retry_step_cb
}
```
Suggested-By: @nothingmuch
If we only specify the node_id, we get the "first" channel.
Closes: #5803
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Fixed: Plugins: `pay` uses the correct local channel for payments when there are multiple available (not just always the first!)
I noticed that our subtables were not being cleaned, despite being "ON
DELETE CASCADE". This is because foreign keys were not enabled, but
then I got foreign key errors: rowid cannot be a foreign key anyway!
So create a real "rowid" column. We want "ON DELETE CASCADE" for
nodes and channels (and other tables in future) where we update
partially.
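For context, SQLite only enforces foreign-key constraints (and thus `ON DELETE CASCADE`) when the pragma is enabled on each connection; a minimal sketch:
```c
#include <sqlite3.h>

/* SQLite does not enforce foreign keys (and thus ON DELETE CASCADE)
 * unless this pragma is turned on for the connection. */
static void enable_foreign_keys(sqlite3 *db)
{
	sqlite3_exec(db, "PRAGMA foreign_keys = ON;", NULL, NULL, NULL);
}
```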
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Valgrind correctly reports it as uninitialized for this log message, and
the only way this can happen is channel_hints_update() when we receive a
temporary_channel_failure. Put a dummy value here in this case.
```
Valgrind error file: valgrind-errors.23404
==23404== Conditional jump or move depends on uninitialised value(s)
==23404== at 0x49E4B56: __vfprintf_internal (vfprintf-internal.c:1516)
==23404== by 0x49F6519: __vsnprintf_internal (vsnprintf.c:114)
==23404== by 0x1EBCEB: do_vfmt (str.c:66)
==23404== by 0x1EBDF8: tal_vfmt_ (str.c:92)
==23404== by 0x11A336: paymod_log (libplugin-pay.c:167)
==23404== by 0x11B4B2: payment_chanhints_apply_route (libplugin-pay.c:534)
==23404== by 0x11E999: payment_compute_onion_payloads (libplugin-pay.c:1707)
==23404== by 0x11FF4C: payment_continue (libplugin-pay.c:2135)
==23404== by 0x1245C0: adaptive_splitter_cb (libplugin-pay.c:3800)
==23404== by 0x11FEF3: payment_continue (libplugin-pay.c:2123)
==23404== by 0x1205FE: retry_step_cb (libplugin-pay.c:2301)
==23404== by 0x11FEF3: payment_continue (libplugin-pay.c:2123)
==23404==
```
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This *would* be a 1-line change (add it to Makefile) except that we
previously assumed a "list" prefix on commands.
These use the default refreshing, but they could be done better using
the time-range parameters.
Suggested-by: @niftynei
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
In particular, we generate the schema part from the plugin itself.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Added: Plugins: `sql` plugin command to perform server-side complex queries.
We now add tables to the strmap as we allocate them, since we don't
want to call "finish_td" when we're merely invoked for the
documentation, and don't need a database.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
For now, we ignore every deprecated field, but put in the logic so
that future deprecations will work as expected.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This requires us to rename "index" fields, rename fields if we have a
sub-object, create sub-tables if we have an array, and handle the
fact that some listX commands don't contain array X (listsendpays
contains "payments").
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Rather than two arrays "columns" (for names) and "fieldtypes" (for
types), use a struct. This makes additions easier for successive
patches.
Also pull process_json_obj() out of the loop in list_done().
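Roughly the shape of the refactor (a sketch; not the exact sql-plugin definitions):
```c
/* One struct per column instead of parallel name/type arrays, so later
 * patches can add per-column attributes in a single place. */
enum fieldtype {
	FIELD_STRING,
	FIELD_INTEGER,	/* placeholder types for the sketch */
};

struct column {
	const char *name;	/* JSON field name */
	enum fieldtype ftype;	/* how the value maps into the table */
};
```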
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
It's actually two separate u16 fields, so treat it as
such!
Cleans up zombie handling code a bit too.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
It's a core concept in the spec which isn't directly exposed.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Added: JSON-RPC: `listchannels` added a `direction` field (0 or 1) as per gossip specification.
Our pay code handles this correctly, but decode was still using an old model
where there was a payinfo per hop, not per path.
Reported-by: @t-bast
See: #5823
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
```
make check-source-bolt CHECK_BOLT_PREFIX="--prefix=BOLT-offers" BOLTVERSION=guilt/offers
```
In this case, only trivial mods.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
```
make check-source-bolt CHECK_BOLT_PREFIX="--prefix=BOLT-route-blinding" BOLTVERSION=guilt/offers
```
Other than textual changes, this does:
1. Ensures we put total_amount_msat in onion final hop (reported by @t-bast).
2. Require that they put total_amount_msat in onion final hop.
3. Return `invalid_onion_blinding` exactly as defined by the spec (i.e. less
aggressive when we're the final hop) (also reported by @t-bast, but I already
knew).
See: #5823
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-EXPERIMENTAL: `offers` breaking blinded payments change (total_amount_sat required, Eclair compat)
Also, we don't need to pass the total length to the field parsers,
just the length for this field (confusingly, this was called
"data_length").
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Don't parse the listpeers.channels output ourselves: with two extra fields
we can simply reuse json_to_listpeers_channels().
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Instead of returning a peers -> channels hierarchy, return (as callers
want!) a flat array of channels.
This is actually most of the transition work to make them work with
listpeerchannels.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
1. When we receive a commando command from a remote using the `filter`
field, use it.
2. Add a `filter` parameter to `commando` to send it: this is usually
more efficient than using filtering locally.
Of course, older remote nodes will ignore the filter, but that's
harmless.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Added: Plugins: `commando` now supports `filter` as a parameter (for send and receive).
This was reported a while ago: now do it properly.
Fixes: #5637
Changelog-Fixed: Plugins: `commando` now responds to remote JSON calls with the correct JSON `id` field.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We change the libplugin API so commando can provide its own ID base.
This id chaining enables much nicer diagnostics!
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We don't do this yet, so we add deprecated to those tests (until next
patch!).
Changelog-Deprecated: plugins: `commando` JSON commands without an `id` (see doc/lightningd-rpc.7.md for how to construct a good id field).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
When called with `"id": 1` we replied with `"id": "1"`. lightningd doesn't
actually care, but it's weird.
Copy the entire token: this way we don't have to special case anything.
Also, remove the doubled test in json_add_jsonstr.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
local_connected allocates a struct node_map off of the struct
listchannels_opts but fails to ever release the internal table, so those
table allocations hang around forever. Add node_map_clear as a
destructor for the struct node_map to ensure that the internal table is
freed when the enclosing struct is freed.
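A sketch of the pattern, assuming the htable-generated node_map_init/node_map_clear and a tal-allocated map (the actual allocation site in topology may differ):
```c
/* node_map_clear() releases the htable's internal table; registering it
 * as a tal destructor ties that release to the map's own lifetime. */
static struct node_map *new_node_map(const tal_t *ctx)
{
	struct node_map *map = tal(ctx, struct node_map);

	node_map_init(map);
	tal_add_destructor(map, node_map_clear);
	return map;
}
```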
Changelog-Fixed: topology: Fixed memleak in `listchannels`
Signed-off-by: Matt Whitlock <c-lightning@mattwhitlock.name>
There are several cases where you want to access to the configuration,
and given the nature of the rust API, there is no way to access to
the `configuration` field at any point after the configuration logic.
Suggested-by: Sergi Delgado Segura <@sr-gi>
Signed-off-by: Vincenzo Palazzo <vincenzopalazzodev@gmail.com>
This commit updates outdated dependencies and hangs all
bitcoin-related dependencies off of the `bitcoin` crate, using its
re-exports. This means that as long as the bitcoin crate matches, all
of its dependents will also match.
Changelog-None
Deprecated v0.11.0.
Changelog-Removed: JSON-RPC: `pay` for a bolt11 which uses a `description_hash`, without setting `description` (deprecated v0.11.0).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We only incremented this if it was the wrong state.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Fixed: JSON-RPC: `autoclean-once` response `uncleaned` count is now correct.
The autoclean plugin would assume we have a `resolved_time` which may
not be true for oldish nodes that predate our annotations.
Changelog-None: unreleased change
Reported-by: <@devastgh>
This is how we put new invoice_requests into the db; this will be used
by a new "invoicerequest" command which replaces "offerout".
The API is now the same as the offers api.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We no longer use offers for "I want to send you money", but we'll use
invoice_requests directly. Create a new table for them, and
associated functions.
The "localofferid" for "pay" and "sendpay" is now "localinvreqid".
This is an experimental-only option, so document the change under
experimental only.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-EXPERIMENTAL: JSON-RPC: `pay` and `sendpay` `localofferid` is now `localinvreqid`.
We no longer have to refer back to the offer for which we're making
the invoice_request, or to the invoice_request we made for an invoice,
as they are all mirrored (and we check!).
It's clearer to simply look at the object directly.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
I know this is an unforgivably large diff, but the spec has changed so
much that most of this amounts to a rewrite.
Some points:
* We no longer have "offer_id" fields, we generate that locally, as all
offer fields are mirrored into invoice_request and then invoice.
* Because of that mirroring, field names all have explicit offer/invreq/invoice
prefixes.
* The `refund_for` fields have been removed from spec: will re-add locally later.
* quantity_min was removed, max == 0 now means "must specify a quantity".
* I have put recurrence fields back in locally.
This brings us to 655df03d8729c0918bdacac99eb13fdb0ee93345 ("BOLT 12:
add explicit invoice_node_id.")
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
It's just to a direct peer, and we only create one, but this is
enough to test, and make payments to non-public nodes work.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This is needed for offers to generate blinded paths.
No documentation changes since listincoming is an undocumented
internal hack interface which topology presents for production
of routehints.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We had a scheme where lightningd itself would put a per-node secret in
the blinded path, then we'd tell the caller when it was used. Then it
simply checks the alias to determine if the correct path was used.
But this doesn't work when we start to offer multiple blinded paths.
So go for a far simpler scheme, where the secret is generated (and
stored) by the caller, and hand it back to them.
We keep the split "with secret" or "without secret" API, since I'm
sure callers who don't care about the secret won't check that it
doesn't exist! And without that, someone can use a blinded path for a
different message and get a response which may reveal the node.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We actually want lightningd to create these, since it wants to put the
path_id secret in the last element. So best API is actually a generic
one, rather than separate APIs to create first and last ones.
And really, the more explicit initialization makes the users clearer.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We simply take the first one, and route to the start of that. Then we
append the blinded path to the onion construction.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Since the "struct command" is different from plugins and lightningd, we
need an accessor for this to work (the plugin one is a dummy for now!).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Mainly, field name changes.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-EXPERIMENTAL: Protocol: Support for forwarding blinded payments (as per latest draft)
We need to print out first_node_id, and "node_id" is now called
"blinded_node_id" in the spec.
And the schema didn't include the payment fields in the blinded path
for invoices (which broke as soon as we actually tested one!).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
It's 2b7ad577d7a790b302bd1aa044b22c809c76e49d, which reverts the
point32 changes.
It also restores send_invoice in `invoice`, which we had removed
from spec and put into the recurrence patch.
I originally had implemented compatibility, but other changes
which followed this are far too widespread.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-EXPERIMENTAL: offers: complete rework of spec from other teams (yay!) breaks previous compatibility (boo!)
We use the saved previous outputs (plus maybe some new ones?) to build a
psbt for an RBF request.
RBF utxo reuse is now working so we can unfail the test (and update
it to reflect that the lease sticks around through an RBF cycle).
Changelog-Fixed: Plugins: `funder` now honors lease requests across RBFs
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Fixed: Plugins: `keysend` now removes unknown even (technically illegal!) fields, to try to accept more payments.
We filter based on the environment variable `CLN_PLUGIN_LOG`,
defaulting to `info` as that is not as noisy as `debug` or `trace`; at
least libraries will not spam us too heavily.
Changelog-Added: cln-plugin: The log level of cln-plugins can be configured by the `CLN_PLUGIN_LOG` environment variable.
We had a bit of a chicken-and-egg problem, where we instantiated the
`state` to be managed by the `Plugin` during the very first step when
creating the `Builder`, but then the state might depend on the
configuration we only get later. This would force developers to add
placeholders in the form of `Option` into the state, when really
they'd never be none after configuring.
This defers the binding until after we get the configuration and
cleans up the semantics:
- `Builder`: declare options, hooks, etc
- `ConfiguredPlugin`: we have exchanged the handshake with
`lightningd`, now we can construct the `state` accordingly
- `Plugin`: Running instance of the plugin
Changelog-Changed: cln-plugin: Moved the state binding to the plugin until after the configuration step
We will now simply reject old-style ones as invalid. Turns out the
only trace we could find is a channel between two nodes unconnected to
the rest of the network.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Changed: Protocol: We now require all channel_update messages include htlc_maximum_msat (as per latest BOLTs)
Many changes to gossmap (including the pending ones!) don't actually
concern readers, as long as they obey certain rules:
1. Ignore unknown messages.
2. Treat all 16 upper bits of length as flags, ignore unknown ones.
So now we split the version byte into MAJOR and MINOR, and you can
ignore MINOR changes.
We don't expose the internal version (for creating the map)
programmatically: you should really hardcode what major version you
understand!
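A reader-side sketch of rule 2 (the exact record layout lives in the gossmap/gossip_store headers; this just shows the masking idea):
```c
#include <stdint.h>

/* The upper 16 bits of a record's length word are flags: mask them off
 * for the payload length, and ignore any flag bits you don't recognize. */
static uint16_t record_flags(uint32_t lenword)
{
	return lenword >> 16;
}

static uint16_t record_len(uint32_t lenword)
{
	return lenword & 0xFFFF;
}
```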
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This is actually what the autoclean plugin wants, especially since
you can't otherwise delete a payment which has failed then succeeded.
But insist on neither or both being specified, at least for now.
Changelog-Added: JSON-RPC: `delpay` takes optional `groupid` and `partid` parameters to specify exactly what payment to delete.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
The previous commit was a hack which *always* batched where possible; this
is a more sophisticated opt-in variant, with a timeout sanity check.
Final performance for cleaning up 1M pays/forwards/invoices:
```
$ time l1-cli autoclean-once succeededpays 1
{
"autoclean": {
"succeededpays": {
"cleaned": 1000000,
"uncleaned": 26895
}
}
}
real 6m9.828s
user 0m0.003s
sys 0m0.001s
$ time l2-cli autoclean-once succeededforwards 1
{
"autoclean": {
"succeededforwards": {
"cleaned": 1000000,
"uncleaned": 40
}
}
}
real 3m20.789s
user 0m0.004s
sys 0m0.001s
$ time l3-cli autoclean-once paidinvoices 1
{
"autoclean": {
"paidinvoices": {
"cleaned": 1000000,
"uncleaned": 0
}
}
}
real 6m47.941s
user 0m0.001s
sys 0m0.000s
```
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Added: JSON-RPC: `batching` command to allow database transactions to cross multiple back-to-back JSON commands.
autoclean was spending 98% of its time in memmove; we should simply keep
an offset, and memmove when it's empty. And also, only memmove the
used region, not the entire buffer!
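Roughly the idea, as a sketch (not the actual libplugin buffer code):
```c
#include <stddef.h>
#include <string.h>

struct rbuf {
	char *data;
	size_t used;	/* bytes of valid data in the buffer */
	size_t off;	/* bytes already consumed by the parser */
};

/* Consuming only advances the offset; once everything is consumed we
 * reset without copying anything. */
static void rbuf_consume(struct rbuf *b, size_t n)
{
	b->off += n;
	if (b->off == b->used)
		b->off = b->used = 0;
}

/* When we do need room, move only the still-live region, not the whole
 * buffer. */
static void rbuf_compact(struct rbuf *b)
{
	memmove(b->data, b->data + b->off, b->used - b->off);
	b->used -= b->off;
	b->off = 0;
}
```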
Running on product of giantnodes.py:
```
$ time l1-cli autoclean-once failedpays 1
{
"autoclean": {
"failedpays": {
"cleaned": 26895,
"uncleaned": 1000000
}
}
}
Before:
real 20m46.579s
user 0m0.000s
sys 0m0.001s
After:
real 2m10.568s
user 0m0.001s
sys 0m0.000s
```
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
It's more natural: we will eventually support dynamic config variables,
so this will be quite nice.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Added: Plugins: `autoclean` can now delete old forwards, payments, and invoices automatically.
And take the opportunity to rename l0 and l1 in the tests to the
more natural l1 l2 (since we add l3).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This can happen, and in fact does below in our test_autoclean_once
test where we update the datastore, and return from the cmd.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
"Who needs specs?" FFS...
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Added: Protocol: `keysend` will now attach the longest valid text field in the onion to the invoice (so you can have Sphinx.chat users spam you!)
It means we consume the channel completely to the best of our
knowledge, so let that through.
Changelog-Fixed: pay: Squeezed out the last `msat` from our local view of the network
In the case of the local channel we set the estimation to the exact
value spendable, which is important when we want to drain a channel,
because there we actually want to get the last msat.
Changelog-Added: JSON-RPC: `fundchannel`, `multifundchannel` and `fundchannel_start` now accept a `reserve` parameter to indicate the absolute reserve to impose on the peer.
This is what we do in lightningd, which makes memleak much more forgiving:
you can hang temporaries off cmd without getting reports of leaks (also
when send_outreq() is called).
We remove all the notleak() calls in plugins which worked around this!
And avoid multiple notleak labels, since both send_outreq() and
command_still_pending() can be called multiple times.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Add memleak_ignore_children() so callers can do exclusions themselves.
Having two exclusions was always such a hack!
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This allows GDB to print values, but also allows us to use them in
'case' statements. This wasn't allowed before because they're not
constant terms.
This also made it clear there's a clash between two error codes,
so move one.
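To illustrate the difference (names and values here are made up): enum members are integer constant expressions, so unlike `static const` values they can appear in `case` labels, and gdb knows their names.
```c
enum bcli_errcode {
	BCLI_BROKEN = 500,	/* names and values invented for the sketch */
	BCLI_NO_UTXOS = 501,
};

static const char *errname(int code)
{
	switch (code) {
	case BCLI_BROKEN:	return "broken";
	case BCLI_NO_UTXOS:	return "no utxos";
	default:		return "unknown";
	}
}
```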
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Changed: JSON-RPC: Error code from bcli plugin changed from 400 to 500.
This is usually fine, but without this, commando (another branch!) has
a race:
1. A command has multiple parts.
2. We start sending them out.
3. We get a response, which completes the cmd.
4. We go to send the next one out, but it's been freed.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
And fold start_json_request() and start_json_rpc() into the core
function since it's the only caller.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Build them from the command which caused them, and take plugin name
as basename with extension stripped.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This avoids having to escape | or &, though we still allow that for
the deprecation period.
To detect deprecated usage, we insist that alternatives are *always*
an array (which could be loosened later), but that also means that
restrictions must *always* be an array for now.
Before:
```
# invoice, description either A or B
lightning-cli commando-rune '["method=invoice","pnamedescription=A|pnamedescription=B"]'
# invoice, description literally 'A|B'
lightning-cli commando-rune '["method=invoice","pnamedescription=A\\|B"]'
```
After:
```
# invoice, description either A or B
lightning-cli commando-rune '[["method=invoice"],["pnamedescription=A", "pnamedescription=B"]]'
# invoice, description literally 'A|B'
lightning-cli commando-rune '[["method=invoice"],["pnamedescription=A|B"]]'
```
Changelog-Deprecated: JSON-RPC: `commando-rune` restrictions is always an array, each element an array of alternatives. Replaces a string with `|`-separators, so no escaping necessary except for `\\`.
JSON needs to escape \, since it can't be in front of anything unexpected,
so no \|. So we need to turn \\ back into \, and in theory handle \n etc.
Changelog-Fixed: JSON-RPC: `commando-rune` now handles \\ escapes properly.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
The completion of a successful payment is defined as the first
completion of a successful part, hence we just collect the minimum
completion time of successes.
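As a tiny sketch of that rule:
```c
#include <stddef.h>
#include <stdint.h>

/* A payment completes when its first part succeeds, so its completion
 * time is the minimum completed_at over the successful parts. */
static uint64_t payment_completed_at(const uint64_t *part_completed_at,
				     size_t num_parts)
{
	uint64_t min = UINT64_MAX;

	for (size_t i = 0; i < num_parts; i++)
		if (part_completed_at[i] < min)
			min = part_completed_at[i];
	return min;
}
```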
Changelog-Added: JSON-RPC: `pay` and `listpays` now lists the completion time.
It was deprecated in v0.10.1, but only one channel on the network
doesn't set it now anyway, and we'll be ignoring that soon.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
1. If the tool changes, you need to regenerate since the output may
change.
2. This didn't just filter that out, ignored all but the first
dependency, which made bisecting the bookkeeper plugin a nightmare:
it didn't regenerate the .po file, causing random crashes.
If we want this, try $(filter-out tools/fromschema.py) instead. But I
don't think we want that.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We weren't making records for 'missed' accounts that had a zero balance
at snapshot time (e.g. if a peer opens a channel and it goes unused).
Fixes: #5502
Reported-By: https://github.com/niftynei/cln-logmaid
Track rebalances, and report income events for them.
Previously `listincome` would report:
- invoice event, debit, outgoing channel
- invoice_fee event, debit, outgoing channel
- invoice event, credit, inbound channel
Now reports:
- rebalance_fee, debit, outgoing channel
(same value as invoice_fee above)
Note: this only applies to channel events; if a rebalance falls to chain
we'll use the older style of accounting.
Changelog-None
When traversing the list we call `command_finished` which modifies the
list we are traversing. This ensures we don't end up advancing in the
list iteration.
Reported-by: Rusty Russell <@rustyrussell>
Technically this is a use-after-free since `command_finished` frees
the `cmd` which is also the parent of `p`, so reset it early. All
paths lead to `command_finished` so setting it early is ok.
Reported-by: Rusty Russell <@rustyrussell>
Suspended payments now have the same groupid as the actual attempt,
this allows us to identify pay calls that were suspended due to us and
terminate only those.
Changelog-Fixed: pay: Fixed a crash when `pay` was called multiple times while an attempt was already in progress.
This should prevent us from accidentally completing a payment twice,
when replaying the result of an actual attempt against pay call that
was suspended due to it still being pending.
We rely on sqlite3 being present to run some bookkeeper unit tests;
here we effectively disable these tests if it is not available.
Fixes report in #4928
Reported-By: @whitslack
The initial snapshots on an already-running lightningd are expected to
be unbalanced, but this shouldn't cause users to long for the green,
green grass of home.
This controls the Art of Noise.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
First, how we record "our_funds" and then apply pushes vs lease_fees
(for liquidity ad buys/sales) was exactly opposite.
For pushes we were reporting the total funded into the channel, with the
push representing how much we'd later moved to the peer.
For lease_fees we were reporting the total in the channel, with the
push representing how much was already moved to the peer.
We fix this (from a view perspective) by re-adding lease fees to what's
reported in the channel funding totals. Since this is now new behavior
(for leased channel values), we added new fields so we can take the old
field names thru a deprecation cycle.
We also make it possible to differentiate between a push and a lease_fee
(before they were all the same), by adding two new fields to `listpeers`:
`fee_paid_msat` and `fee_rcvd_msat`.
This allows us to avoid math in the bookkeeper, instead we just pick
the numbers out directly and record them.
Fixes: #5472
Changelog-Added: JSON-RPC: `listpeers` now has a few new fields for `funding` (`remote_funds_msat`, `local_funds_msat`, `fee_paid_msat`, `fee_rcvd_msat`).
Changelog-Deprecated: JSON-RPC: `listpeers`.`funded` fields `local_msat` and `remote_msat` are now deprecated.
Keep the accounts as an 'append only' log; instead we move the marker
for the 'channel_open' forward when a 'channel_open' comes out.
We also neatly hide the 'channel_proposed' events in 'inspect' if
there's a 'channel_open' for that same event.
If you call inspect before the 'channel_open' is confirmed, you'll see
the tag as 'channel_proposed', afterwards it shows up as
'channel_open'. However the event log rolls forward -- listaccountevents
will show the correct history of the proposal then open confirming (plus
any routing that happened before the channel confirmed).
We need a record of the channel account before you start sending
payments through it. Normally we don't start allowing payments to be
sent until after the channel has locked in but zeroconf does away with
this assumption.
Instead we push out a "channel_proposed" event, which should only show
up for zeroconfs.
This was originally done as part of the bookkeeper introduction, but it deserves its
own patch (and that one didn't remove the bitcoin/chainparams.o from the individual
requirements lines).
Suggested-by: @niftynei
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
`lightningd` has an option --wallet that lets you supply a database dsn
string to connect to a sqlite3/postgres database that's hosted/stored
elsewhere.
This adds the `--bookkeeper-db` option which does the same, except for
the bookkeeping data for a node!
Note that the default is to go in the `lightning-dir` in a database
called `accounts.sqlite3`.
First off, when we pull data out of JSON, unescape it so we don't end up
with extraneous escapes in our bookkeeping data. I promise, it's worth
it.
Then, when we print descriptions out to the csvs, we gotta wrap
everything in quotes... but also we have to change all the double-quotes
to singles so that adding the quotes doesn't do anything untoward.
We also just pass it thru json_escape to get rid of linebreaks etc.
Note that in the tests we do a byte comparison instead of converting the
CSV dumps to strings because python will escape the strings on
conversion...
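A sketch of the quoting described above (assuming tal-style string helpers; not the literal bookkeeper code):
```c
#include <ccan/tal/str/str.h>

/* Wrap the description in double quotes, after downgrading embedded
 * double quotes to single quotes so the wrapping stays unambiguous. */
static char *csv_safe_desc(const tal_t *ctx, const char *desc)
{
	char *copy = tal_strdup(ctx, desc);

	for (char *p = copy; *p; p++)
		if (*p == '"')
			*p = '\'';
	return tal_fmt(ctx, "\"%s\"", copy);
}
```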
So we print out invoice fees on the same line for those CSVs! This means
we have to do a little bit of gymnastics (but not too bad):
- we save the fee amount onto the income event now so we can use
it later
- we ignore every "invoice_fee" event for the koinly/cointracker
Note that since we're not skipping income events in the loops we also
move the newline character to the start of every `_entry` function so
skipped records don't incur empty lines.
Changelog-Added: bkpr: print out invoice fees on the same line for `koinly` and `cointracker` csv types
It's possible we'll get an "external" event for an account/channel and
then try to do close checks on it and it'll bomb out because we didn't
go look up the originating account to make sure that it had open info.
Now, we make sure to do that when we hear about an originating account.
If two events for the same (unlogged) account come in and get run
through the "lookup peer" code, we should anticipate that.
We do two things here:
- one, if it's a duplicate "event" to create the channel open
we check for that and just exit early
- two, we were using a copy of the account that was
fetched/pulled from before the RPC ran, so instead we
just re-pull the most up to date account info for the
close checks.
This fixes the crash I was getting when re-running these things.
If we expect further events for an onchain output (because we can steal
it away from the 'external'/rightful owner), we mark them.
This prevents us from marking a channel as 'onchain-resolved' before
all events that we're interested in have actually hit the chain.
Case that this matters:
Peer publishes a (cheating) unilateral close and a timeout htlc (which
we can steal).
We then steal the timeout htlc.
W/o the stealable flag, we'd have marked the channel as resolved when
the peer published the timeout htlc, which is incorrect as we're still
waiting for the resolution of that timeout htlc (b/c we *can* steal it).
We were failing to mark channels as resolved b/c we weren't treating
later events on external (but originating from this account) outputs
as signals to run the channel resolution check.
This fixes that, and adds a test.
Adds schema definitions and manpages for bkpr- commands; also renames
the commands to all start with 'bkpr-', so they're easier to identify/
make runes about.
It's nice to know which node your channel was opened with. In theory we
could use listpeers to merge the data after the fact, except that
channels disappear after they've been closed for a bit. It's better to
just save the info.
We print it out in `listbalances`, as that's a great place for
account-level information.
Anchor outputs are ignored by the clightning wallet, but we keep track
of them in the bookkeeper. This causes problems when we do the balance
checks on restart w/ the balance_snapshot -- it results in us printing
out a journal_entry to 'get rid of' the anchors that the clightning node
doesn't know about.
Instead, we mark some outputs as 'ignored' and exclude these from our
account balance sums when we're comparing to the clightning snapshot.
Our 'consolidate fees' logic had a crash bug (and was pretty convoluted). This
makes it less convoluted and resolves the crash.
The only kinda meh thing is that we have to look up the most recent
timestamp data for the onchain fee entry separately, because of the way
SQL sums work.
For income events, break out the amount paid in routing fees vs the
total amount of the *invoice* that is paid.
Also printout these fees, when available, on listaccountevents
Add ability to filter results by timestamp.
Note that "start_time" is everything *after* that timestamp; and
"end_time" is everything *up to and including* that timestamp.
Prints out the `listincome` events as a CSV formatted file
Current csv_format options:
- koinly
- cointracker
- harmony (https://github.com/harmony-csv/harmony)
- quickbooks*
*Quickbooks expects values in 'USD', whereas we print values out
in <currency> (will be noted in the Description field). This won't work
how you'd expect -> you might need to do some conversion etc before
uploading it.
All amounts emitted as 'btc' w/ *11* decimals.
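For reference, 1 BTC is 10^11 msat, hence the 11 decimal places; a formatting sketch:
```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define MSAT_PER_BTC UINT64_C(100000000000)	/* 1 BTC = 10^11 msat */

/* Print an msat amount as BTC with 11 decimal places. */
static void print_btc(uint64_t msat)
{
	printf("%" PRIu64 ".%011" PRIu64,
	       msat / MSAT_PER_BTC, msat % MSAT_PER_BTC);
}
```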
This is a rare case where we RBF the output of a penalty until it no
longer has an output value we can reclaim. We ignore the txid for these
events when closing a channel.
We issue events for external deposits (withdrawals) before the tx is
confirmed in a block. To avoid double counting these, we don't count
them as confirmed/included until after they're confirmed. We do this
by keeping the blockheight as zero until the withdraw for the input for
them comes through.
Note that since we don't have any way to note when RBF'd withdraws
aren't eligible for block inclusion anymore, we don't really have a good
heuristic to trim them. Which is fine, they *will* show up in account
events however.
onchain fees are weird at channel close because:
- you may be missing a trimmed htlc (which went to fees)
- the balance from close may have been rounded (msats can't land on
chain)
- the close might have been a past state and you've actually
ended up with more money onchain than you had in the channel. wut
This commit accounts for all of this appropriately, with some tests.
channel_close.debit should equal onchain_fee.credit (for that txid)
plus sum(chain_event.credit [wallet/channel_acct]).
In the penalty case, channel_close.debit becomes
channel_close.debit + penalty_adj.debit, i.e.

    channel_close.debit + penalty_adj.debit =
        onchain_fee.credit
        + sum(chain_event.credit [wallet/channel_acct])
Due to the way that onchain channel closes work, there is often a delay
between when the funding output is spent and the channel is considered
'closed'.
Once *every* downstream utxo of a channel has landed on chain, we
annotate the account with the resolving blockheight.
This gives us some insight into whether or not the chain fees etc of a
channel are going to update further and allows for a natural marker to
prune data (at a later date)
Pass in an account id, get out a utxo chain of the channel open and
close (and any other related htlc txs etc).
Note that this prints all wallet deposits that occurred in any of the
tx's that touched this channel.
This is fine and expected for any tx that's not the open; when
considering the tx open event, the wallet deposit that's present is
typically the change. If there were other channels opened in the same tx
then the change won't match up exactly...
Also note that we might have ignored/missed fees for the channel
close's spending txid, so we attempt to update those as well.
Backfilling for missed events is a beast.
We need the total output_value, and we can figure this out if we look at
the remote amount also.
We also need to account for the pushed/leased amount, as for leased
channels this really messes with onchain fee calculations.
We copy basically the events that lightningd emits for leased channels:
an open event with the 'lease fee' (pushed?) amount credited to the side
that made the lease/push; then an in-channel event to effectively push
the pushed/leased amount over to the peer it was paid to.
We run the journal entry info after this, so the journal snapshot will
take the pushed amount into account when figuring out what the further
missed in-channel payments were (if any)
Because we update the onchain_fee table every time a new event comes in,
it's possible (and in fact happens) that we get a wallet
withdraw/deposit event and then the channel open output event.
What we'd expect in this case is that the fees for the tx were credited
to the channel's account, not the wallet. But since we got the two
in/out events first, the fees were accumulated there first.
Our existing logic will add the channel's fees correctly, but we weren't
zeroing out the wallet's balance once it'd been determined that they
were 'ineligible', so to speak, for being included in the fees that round.
We now add chain events for starting channel balances, so print these
out with chain events first, then channel events.
Makes it less confusing for channel lease fee events.
Prints all the events for the requested account. If no account
is requested, prints out all the events. Ordered by timestamp.
Changelog-Added: bookkeeper: new command `listaccountevents`
There are two situations where we're missing info.
One is we get a 'channel_closed' event (but there's no 'channel_open')
The other is a balance_snapshot arrives with information about accounts
that doesn't match what's already on disk. (For some of these cases, we
may be missing 'channel_open' events..)
In the easy case (no channel_open missing), we just figure out what the
When we print events out, we need to know the account name. This makes
our lookup a lot easier, since we just pull it out from the database
every time we query for these.