onchain fees are weird at channel close because:
- you may be missing a trimmed HTLC (which went to fees)
- the balance from the close may have been rounded down (msats can't land
  on chain)
- the close might have been of a past state, and you've actually
  ended up with more money onchain than you had in the channel. wut
This commit accounts for all of this appropriately, with some tests.
channel_close.debit should equal onchain_fee.credit (for that txid)
plus sum(chain_event.credit [wallet/channel_acct]).
In the penalty case, a penalty_adj.debit is added to the channel_close.debit
side, i.e.

    channel_close.debit + penalty_adj.debit =
        onchain_fee.credit
        + sum(chain_event.credit [wallet/channel_acct])
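The reconciliation above can also be read as a single check. A minimal sketch, assuming illustrative names and that all amounts are msat totals already pulled from the respective tables (this is not the plugin's actual code):

```
# Hedged sketch: everything debited from the channel at close must be
# accounted for by the onchain fee plus whatever landed back in accounts
# we track (wallet or the channel's own account) for that closing txid.
def close_reconciles(channel_close_debit, penalty_adj_debit,
                     onchain_fee_credit, chain_event_credits):
    lhs = channel_close_debit + penalty_adj_debit  # penalty_adj is 0 normally
    rhs = onchain_fee_credit + sum(chain_event_credits)
    return lhs == rhs
```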
Due to the way that onchain channel closes work, there is often a delay
between when the funding output is spent and the channel is considered
'closed'.
Once *every* downstream utxo of a channel has landed on chain, we
annotate the account with the resolving blockheight.
This gives us some insight into whether or not the chain fees etc. of a
channel are going to update further, and provides a natural marker for
pruning data (at a later date).
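The rule itself is simple; here's a sketch of the idea, with invented field names rather than the real schema:

```
# Hedged sketch: once every downstream utxo of the channel has landed on
# chain (i.e. has a confirmation height), stamp the account with the height
# of the last one as its resolving blockheight.
def maybe_mark_resolved(account: dict, downstream_utxos: list):
    heights = [u.get("blockheight") for u in downstream_utxos]
    if heights and all(h is not None for h in heights):
        account["resolved_at_blockheight"] = max(heights)
```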
Pass in an account id, get out a utxo chain of the channel open and
close (and any other related htlc txs etc).
Note that this prints all wallet deposits that occurred in any of the
txs that touched this channel.
This is fine and expected for any tx that's not the open; when
considering the open tx, the wallet deposit that's present is
typically the change. If there were other channels opened in the same tx
then the change won't match up exactly...
Also note that we might have ignored/missed fees for the channel
close's spending txid, so we attempt to update those as well.
Backfilling for missed events is a beast.
We need the total output_value, and we can figure this out if we look at
the remote amount also.
We also need to account for the pushed/leased amount, as for leased
channels this really messes with onchain fee calculations.
We basically copy the events that lightningd emits for leased channels:
an open event with the 'lease fee' (pushed?) amount credited to the side
that made the lease/push; then an in-channel event to effectively push
the pushed/leased amount over to the peer it was paid to.
We run the journal entry step after this, so the journal snapshot will
take the pushed amount into account when figuring out what the further
missed in-channel payments were (if any).
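Roughly, the backfill synthesizes two events per leased/pushed channel. A sketch with invented event shapes (the real event structs differ):

```
# Hedged sketch: credit the lease-fee/pushed amount to the side that paid
# it as part of the open event, then immediately move it across with an
# in-channel event, mirroring what lightningd emits for leased channels.
def synthesize_lease_events(acct: str, lease_fee_msat: int):
    return [
        {"account": acct, "tag": "channel_open", "credit_msat": lease_fee_msat},
        {"account": acct, "tag": "lease_fee", "debit_msat": lease_fee_msat},
    ]
```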
Because we update the onchain_fee table every time a new event comes in,
it's possible (and in fact happens) that we get a wallet
withdraw/deposit event and then the channel open output event.
What we'd expect in this case is that the fees for the tx were credited
to the channel's account, not the wallet. But since we got the two
in/out events first, the fees were accumulated there first.
Our existing logic will add the channel's fees correctly, but we weren't
zeroing out the wallet's balance once it'd been determined that it
was 'ineligible', so to speak, for being included in the fees that round.
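The correction boils down to moving the provisionally-attributed fee from the wallet account to the channel account. A sketch under assumed row types (not the actual onchain_fee table code):

```
from dataclasses import dataclass

@dataclass
class FeeRow:
    txid: str
    account: str
    credit_msat: int = 0

# Hedged sketch: when a later event reveals the tx actually opened a channel,
# zero out the fees we had attributed to the wallet for that txid and credit
# them to the channel account instead.
def reattribute_fees(fee_rows, txid, channel_acct, wallet_acct):
    moved = 0
    for r in fee_rows:
        if r.txid == txid and r.account == wallet_acct:
            moved += r.credit_msat
            r.credit_msat = 0          # wallet is ineligible for these fees
    if moved:
        fee_rows.append(FeeRow(txid, channel_acct, credit_msat=moved))
```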
We now add chain events for starting channel balances, so print these
out with chain events first, then channel events.
This makes the channel lease fee events less confusing.
Prints all the events for the requested account. If no account
requested, prints out all the events. Ordered by timestamp.
Changelog-Added: bookkeeper: new command `listaccountevents`
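For reference, a hedged usage sketch over the RPC socket with pyln-client; the parameter name and output fields shown are assumptions for illustration, not guaranteed by this commit:

```
from pyln.client import LightningRpc

rpc = LightningRpc("/path/to/lightning-rpc")
# Pass an account to filter; omit the payload to get every event.
resp = rpc.call("listaccountevents", {"account": "wallet"})
for ev in resp.get("events", []):
    print(ev.get("type"), ev.get("tag"), ev.get("credit_msat"))
```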
There are two situations where we're missing info.
One is that we get a 'channel_closed' event but there's no 'channel_open'.
The other is that a balance_snapshot arrives with information about accounts
that doesn't match what's already on disk. (For some of these cases, we
may be missing 'channel_open' events...)
In the easy case (no channel_open missing), we just figure out what the
missing amount is and record it.
When we print events out, we need to know the account name. This makes
our lookup a lot easier, since we just pull it out from the database
every time we query for these.
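One way to read this: the account name comes back with every query, so callers never need a second per-event lookup. A sketch under that assumption, with invented schema names:

```
import sqlite3

# Hedged sketch (invented table/column names): join the account name onto
# each event row at query time, ordered by timestamp for printing.
def events_with_names(db: sqlite3.Connection):
    return db.execute(
        "SELECT e.tag, e.credit, e.debit, e.timestamp, a.name"
        "  FROM channel_events e"
        "  JOIN accounts a ON a.id = e.account_id"
        " ORDER BY e.timestamp").fetchall()
```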
One really rough thing about how we did onchain fees is that the records update
every time a new event comes in.
The better way to do this is to create new entries for every adjustment,
so that reconciliation between printouts isn't a misery.
We add a timestamp and `update_count` to these records, so you can
roughly order them now (and have a good idea of the last time an event
that updated an onchain_fee occurred).
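The effect is an append-only ledger of fee adjustments. A minimal sketch with illustrative field names (not the real onchain_fee layout):

```
import time

# Hedged sketch: instead of mutating the existing fee record, append an
# adjustment row carrying a timestamp and a per-(txid, account) update_count,
# so two printouts taken at different times reconcile row by row.
def record_fee_adjustment(fee_rows, txid, account, delta_msat):
    prior = [r for r in fee_rows
             if r["txid"] == txid and r["account"] == account]
    fee_rows.append({
        "txid": txid,
        "account": account,
        "credit_msat": max(delta_msat, 0),
        "debit_msat": max(-delta_msat, 0),
        "timestamp": int(time.time()),
        "update_count": len(prior) + 1,
    })
```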
When the node starts up, it records missing/updated account balances
to the 'channel' events... which is kinda fucked for wallet + external
events now that I think about it, but these are all treated the same
anyway so it's fine.
This is the magic piece that lets your bookkeeping data startup ok on an
already running/established node.
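A sketch of that startup reconciliation, assuming the difference is simply booked as a balance-adjusting event (names invented for illustration):

```
# Hedged sketch: compare each balance reported in the balance_snapshot
# against what the events on disk sum to, and book the difference so an
# already-established node starts out consistent.
def reconcile_snapshot(snapshot_balances, disk_balances, journal):
    for acct, snap_msat in snapshot_balances.items():
        delta = snap_msat - disk_balances.get(acct, 0)
        if delta != 0:
            journal.append({"account": acct,
                            "tag": "journal_entry",
                            "credit_msat": max(delta, 0),
                            "debit_msat": max(-delta, 0)})
```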
clightning doesn't give us any info about onchain fees (how could it?
it only knows about things at the utxo level, and doesn't keep track of
how/when those are all related).
Instead, we keep running totals of the onchain fees for utxos. This
implements the master method for accounting for them, plus includes
tests to account for channel opens (across two accounts) as well as a
htlc-tx channel close.
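The running-total idea boils down to: for a given txid, whatever our accounts put into the tx minus whatever came back out to accounts we track must be the fee, and the total shrinks toward the true fee as more events for that tx arrive. A minimal sketch with assumed field names:

```
# Hedged sketch: debits are outputs we spent into the tx, credits are
# outputs of the tx that landed back in accounts we track; the gap is the
# current best estimate of the onchain fee for that txid.
def onchain_fee_total(chain_events, txid):
    debits = sum(e["debit_msat"] for e in chain_events
                 if e.get("spending_txid") == txid)
    credits = sum(e["credit_msat"] for e in chain_events
                  if e.get("txid") == txid)
    return max(debits - credits, 0)
```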
Missing: we don't currently emit an event from cln for `withdraw`
initiated removal of funds, so the accounting for wallet -> external
funds is a bit janky. We don't account for the fees on these
transactions since we don't have the resulting 'external' event to
register them against!
Originally I (incorrectly?) assumed that since TX_COMMITMENT_SIGNED
always came before TX_SIGNATURES, we would always receive a response
from openchannel_update (w/ commitment_secured = true) before getting
notification of receipt of the peer's signatures.
But it's observable in the logs of hung tests that this is in fact a
wrong assumption -- the notification for the tx_sigs arrives at our
spender plugin before the callback from our openchannel_update RPC.
This mis-ordering causes a hang.
Luckily we're pretty much set up to handle this race already w/ states
etc, minus actually calling the method to advance the plot in case we're
ready.
```
2022-07-26T05:37:59.4529095Z lightningd-1 2022-07-26T05:10:07.395Z DEBUG 035d2b1192dfba134e10e540875d366ebc8bc353d5aa766b80c090b39c3a5d885d-dualopend-chan#2: peer_in WIRE_COMMITMENT_SIGNED
2022-07-26T05:37:59.4530452Z lightningd-1 2022-07-26T05:10:07.396Z DEBUG 035d2b1192dfba134e10e540875d366ebc8bc353d5aa766b80c090b39c3a5d885d-hsmd: Got WIRE_HSMD_VALIDATE_COMMITMENT_TX
2022-07-26T05:37:59.4530719Z lightningd-1 2022-07-26T05:10:07.396Z DEBUG hsmd: Client: Received message 35 from client
2022-07-26T05:37:59.4531386Z lightningd-1 2022-07-26T05:10:07.396Z DEBUG 035d2b1192dfba134e10e540875d366ebc8bc353d5aa766b80c090b39c3a5d885d-dualopend-chan#2: billboard: channel open: commitment received, sending to lightningd to save
2022-07-26T05:37:59.4531856Z lightningd-1 2022-07-26T05:10:07.398Z DEBUG 035d2b1192dfba134e10e540875d366ebc8bc353d5aa766b80c090b39c3a5d885d-dualopend-chan#2: peer_in WIRE_TX_SIGNATURES
>>> 2022-07-26T05:37:59.4532553Z lightningd-1 2022-07-26T05:10:07.400Z DEBUG plugin-spenderp: mfc 60:`openchannel_peer_sigs` notice received for channel 9d145e763f08ee6f715ba7677f869cbb9580c7406f4d0b0ff3a0987efe501e13 <<<< THIS ONE WAS ASSUMED TO COME AFTER openchannel_update (next line)
2022-07-26T05:37:59.4533048Z lightningd-1 2022-07-26T05:10:07.400Z DEBUG plugin-spenderp: mfc 60, dest 0: openchannel_update 035d2b1192dfba134e10e540875d366ebc8bc353d5aa766b80c090b39c3a5d885d returned.
2022-07-26T05:37:59.4554292Z lightningd-1 2022-07-26T05:10:07.400Z DEBUG plugin-spenderp: mfc 60: parallel `openchannel_update`.
2022-07-26T05:37:59.4555485Z lightningd-1 2022-07-26T05:10:07.400Z DEBUG plugin-spenderp: mfc 60: funding tx 50425e20dbf0ca6fe112a8811b8048edb5bfa8d2922079668c5f353b859b45cb
2022-07-26T05:37:59.4557934Z lightningd-1 2022-07-26T05:10:07.508Z DEBUG 0266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c03518-hsmd: Got WIRE_HSMD_CUPDATE_SIG_REQ
2022-07-26T05:37:59.4558244Z lightningd-1 2022-07-26T05:10:07.508Z DEBUG hsmd: Client: Received message 3 from client
2022-07-26T05:37:59.4558738Z lightningd-3 2022-07-26T05:11:03.234Z DEBUG 0266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c03518-gossipd: seeker: startup peer finished
2022-07-26T05:37:59.4559209Z lightningd-3 2022-07-26T05:11:03.234Z DEBUG 0266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c03518-gossipd: seeker: state = PROBING_SCIDS Seeking scids 1 - 105
```
(The last 2 log messages (from a different node) are >1min after the last
log line from lightningd-1, because lightningd-1 hung)
Hacked lightningd up to test this (such that the notification is always
sent before the RPC response); works as intended with the patch.
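The fix amounts to the usual pattern for this kind of race: each side records its half and then tries to advance, so whichever arrives second moves things forward. A sketch of that pattern (names are illustrative, not the spender plugin's actual functions):

```
# Hedged sketch: the peer-sigs notification and the openchannel_update
# callback can arrive in either order, so each handler just records its
# part and then calls the same "advance if ready" check.
class DestState:
    def __init__(self):
        self.got_peer_sigs = False
        self.commitment_secured = False

    def maybe_advance(self):
        if self.got_peer_sigs and self.commitment_secured:
            self.send_our_sigs()           # both prerequisites met

    def on_openchannel_peer_sigs(self):    # notification handler
        self.got_peer_sigs = True
        self.maybe_advance()

    def on_openchannel_update_done(self):  # RPC callback
        self.commitment_secured = True
        self.maybe_advance()

    def send_our_sigs(self):
        pass  # placeholder for the next step of the open flow
```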
The cmd was getting free'd, but we were still holding a reference to it,
which was causing problems.
```
==523280== Invalid read of size 8
==523280== at 0x1B3E14: del_notifier_property (tal.c:326)
==523280== by 0x1B3E14: tal_del_notifier_ (tal.c:569)
==523280== by 0x1123E7: handle_rpc_reply (libplugin.c:671)
==523280== by 0x1123E7: rpc_read_response_one (libplugin.c:866)
==523280== by 0x1123E7: rpc_conn_read_response (libplugin.c:886)
==523280== by 0x1A7B53: next_plan (io.c:59)
==523280== by 0x1A7B53: do_plan (io.c:407)
==523280== by 0x1A7B53: io_ready (io.c:417)
==523280== by 0x1A9BDB: io_loop (poll.c:453)
==523280== by 0x1141D0: plugin_main (libplugin.c:1708)
==523280== by 0x10D7E4: main (commando.c:937)
==523280== Address 0x52de928 is 8 bytes inside a block of size 40 free'd
==523280== at 0x483F0C3: free (vg_replace_malloc.c:872)
==523280== by 0x1B2CDD: del_tree (tal.c:419)
==523280== by 0x1B37BB: tal_free (tal.c:486)
==523280== by 0x1B37BB: tal_free (tal.c:474)
==523280== by 0x110CB2: command_complete (libplugin.c:255)
==523280== by 0x110CB2: command_done_err (libplugin.c:390)
==523280== by 0x10F511: handle_reply (commando.c:560)
==523280== by 0x10F511: handle_custommsg (commando.c:609)
==523280== by 0x113877: ld_command_handle (libplugin.c:1441)
==523280== by 0x113877: ld_read_json_one (libplugin.c:1491)
==523280== by 0x113877: ld_read_json (libplugin.c:1511)
==523280== by 0x1A7B53: next_plan (io.c:59)
==523280== by 0x1A7B53: do_plan (io.c:407)
==523280== by 0x1A7B53: io_ready (io.c:417)
==523280== by 0x1A9BDB: io_loop (poll.c:453)
==523280== by 0x1141D0: plugin_main (libplugin.c:1708)
==523280== by 0x10D7E4: main (commando.c:937)
==523280== Block was alloc'd at
==523280== at 0x483C855: malloc (vg_replace_malloc.c:381)
==523280== by 0x1B3BBD: allocate (tal.c:250)
==523280== by 0x1B3BBD: add_notifier_property (tal.c:303)
==523280== by 0x1B3BBD: tal_add_destructor2_ (tal.c:529)
==523280== by 0x110725: jsonrpc_request_start_ (libplugin.c:181)
==523280== by 0x10E0EA: send_more_cmd (commando.c:643)
==523280== by 0x11243C: handle_rpc_reply (libplugin.c:696)
==523280== by 0x11243C: rpc_read_response_one (libplugin.c:866)
==523280== by 0x11243C: rpc_conn_read_response (libplugin.c:886)
==523280== by 0x1A7B53: next_plan (io.c:59)
==523280== by 0x1A7B53: do_plan (io.c:407)
==523280== by 0x1A7B53: io_ready (io.c:417)
==523280== by 0x1A9BDB: io_loop (poll.c:453)
==523280== by 0x1141D0: plugin_main (libplugin.c:1708)
==523280== by 0x10D7E4: main (commando.c:937)
==523280==
```
If a rune contains invalid UTF-8, offers (which implements `decode`) would
produce JSON with invalid UTF-8, which causes lightningd to complain and
kill it, then die itself because it's an important plugin.
So don't decode invalid UTF-8!
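The guard is just a validity check before treating the bytes as a string. A sketch in Python for illustration (the actual plugin is C and does its own UTF-8 validation):

```
# Hedged sketch: only emit the rune payload as a JSON string if it is valid
# UTF-8; otherwise fall back to hex so lightningd never sees bad JSON.
def rune_field_for_json(raw: bytes):
    try:
        return {"string": raw.decode("utf-8")}
    except UnicodeDecodeError:
        return {"hex": raw.hex()}
```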
Reported-by: @jb55
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
The error routine returns a string literal in this case, which we can't take().
Reported-by: @jb55
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This is usually a signal that lightningd is shutting down, so notify
any instance that is waiting on `plugin.join()`.
Changelog-Fixed: cln-plugin: Fixed an issue where plugins would hang indefinitely despite `lightningd` closing the connection
Connectd already does this when we *receive* an error or warning; now
we do it on send too. This causes a slight behavior change: we don't
disconnect when we close a channel, for example (our behaviour here
has been inconsistent across versions, depending on the code).
When connectd is told to disconnect, it now does so immediately, and
doesn't wait for subds to drain etc. That simplifies the manual
disconnect case, which now cleans up as it would from any other
disconnection when connectd says it's disconnected.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Disconnecting a peer after openingd fails is not instantaneous:
we abort the open, so openingd sends out a WIRE_ERROR which makes
connectd close the connection.
As a result this test fails often. The simplest fix is to wait for a
second in multifundchannel before retrying, which is also robust
against behaviour changes if we decide *not* to disconnect in future.
Also make sure that addrhint ownership is correct, since this can
lead to a use-after-free if we filter dests.
```
tests/test_connection.py::test_multifunding_best_effort FAILED [100%]
======================================================= FAILURES ========================================================
_____________________________________________ test_multifunding_best_effort _____________________________________________
node_factory = <pyln.testing.utils.NodeFactory object at 0x7f6c0c95c1c0>
bitcoind = <pyln.testing.utils.BitcoinD object at 0x7f6c0c92a880>
@pytest.mark.openchannel('v1')
@pytest.mark.developer("disconnect=... needs DEVELOPER=1")
def test_multifunding_best_effort(node_factory, bitcoind):
    '''
    Check that best_effort flag works.
    '''
    disconnects = ["-WIRE_INIT",
                   "-WIRE_ACCEPT_CHANNEL",
                   "-WIRE_FUNDING_SIGNED"]
    l1 = node_factory.get_node()
    l2 = node_factory.get_node()
    l3 = node_factory.get_node(disconnect=disconnects)
    l4 = node_factory.get_node()
    l1.fundwallet(2000000)
    destinations = [{"id": '{}@localhost:{}'.format(l2.info['id'], l2.port),
                     "amount": 50000},
                    {"id": '{}@localhost:{}'.format(l3.info['id'], l3.port),
                     "amount": 50000},
                    {"id": '{}@localhost:{}'.format(l4.info['id'], l4.port),
                     "amount": 50000}]
    for i, d in enumerate(disconnects):
        # Should succeed due to best-effort flag.
>       l1.rpc.multifundchannel(destinations, minchannels=2)
tests/test_connection.py:2070:
...
> raise RpcError(method, payload, resp['error'])
E pyln.client.lightning.RpcError: RPC call failed: method: multifundchannel, payload: {'destinations': [{'id': '022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59@localhost:41023', 'amount': 50000}, {'id': '035d2b1192dfba134e10e540875d366ebc8bc353d5aa766b80c090b39c3a5d885d@localhost:41977', 'amount': 50000}, {'id': '0382ce59ebf18be7d84677c2e35f23294b9992ceca95491fcf8a56c6cb2d9de199@localhost:34943', 'amount': 50000}], 'minchannels': 2}, error: {'code': 305, 'message': 'Peer not connected at start', 'data': {'id': '0382ce59ebf18be7d84677c2e35f23294b9992ceca95491fcf8a56c6cb2d9de199', 'method': 'fundchannel_start'}}
```
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This is a bit weird since it lives in the offers plugin, but it works
well. This should make runes much more approachable for people!
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>