Commit Graph

niftynei
595c52f611 bkpr: add timestamp filters to event lists 2022-07-28 12:08:18 +09:30
niftynei
462fa20c17 bkpr: move json_to functions to respective type files 2022-07-28 12:08:18 +09:30
niftynei
5f41d9247e bkpr: properly account for onchain fees for channel closes
onchain fees are weird at channel close because:

- you may be missing a trimmed htlc (which went to fees)
- the balance from close may have been rounded (msats can't land on
chain)
- the close might have been a past state and you've actually
  ended up with more money onchain than you had in the channel. wut

This commit accounts for all of this appropriately, with some tests.

channel_close.debit should equal onchain_fee.credit (for that txid)
plus sum(chain_event.credit [wallet/channel_acct]).

In the penalty case, channel_close.debit becomes channel_close.debit +
penalty_adj.debit, i.e.

	channel_close.debit + (penalty_adj.debit) =
		onchain_fee.credit
	      + sum(chain_event.credit [wallet/channel_acct])
2022-07-28 12:08:18 +09:30
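
A minimal sketch of the identity above, with msat amounts as plain Python integers (the names mirror the commit text, not bookkeeper's actual schema):

```python
# channel_close.debit (+ penalty_adj.debit) should equal
# onchain_fee.credit plus the wallet/channel chain-event credits.
def close_balances_ok(channel_close_debit, penalty_adj_debit,
                      onchain_fee_credit, chain_event_credits):
    lhs = channel_close_debit + penalty_adj_debit
    rhs = onchain_fee_credit + sum(chain_event_credits)
    return lhs == rhs

# e.g. a 1,000,000msat channel: 1,000msat went to chain fees,
# the rest landed back in the wallet account.
assert close_balances_ok(1_000_000, 0, 1_000, [999_000])
```
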
niftynei
8f869ade3c bkpr: use chain_closed count to mark account closed
More robust than inspecting the returned utxo treeset -- if the tree
appears complete but is missing outputs, we'll get the wrong result
2022-07-28 12:08:18 +09:30
niftynei
7d5a0988db bkpr: save closed_count for account, when known
When the closing channel event tells you how many outputs to expect,
save it to the account so we can verify the account is fully closed/onchain.
2022-07-28 12:08:18 +09:30
niftynei
7b6956e4f9 bkpr: annotate an account with the block at which it's been resolved
Due to the way that onchain channel closes work, there is often a delay
between when the funding output is spent and the channel is considered
'closed'.

Once *every* downstream utxo of a channel has landed on chain, we
annotate the account with the resolving blockheight.

This gives us some insight into whether or not the chain fees etc. of a
channel are going to update further, and allows for a natural marker to
prune data (at a later date).
2022-07-28 12:08:18 +09:30
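
A hedged sketch of the resolution check described above; the utxo dicts and field names here are illustrative, not bookkeeper's schema:

```python
def resolving_blockheight(downstream_utxos):
    """Return the blockheight at which the account resolved, or None if
    any downstream utxo of the close has not yet landed on chain."""
    heights = [u.get('blockheight') for u in downstream_utxos]
    if any(h is None for h in heights):
        return None  # still unresolved: chain fees may yet change
    return max(heights)
```
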
niftynei
b33bd05524 bkpr: add an 'inspect' command to the bookkeeper
Pass in an account id, get out a utxo chain of the channel open and
close (and any other related htlc txs etc).

Note that this prints all wallet deposits that occurred in any of the
tx's that touched this channel.

This is fine and expected for any tx that's not the open; when
considering the tx open event, the wallet deposit that's present is
typically the change. If there were other channels opened in the same tx
then the change won't match up exactly...
2022-07-28 12:08:18 +09:30
niftynei
fea33221d4 bkpr: once we get channel_opens, we might need to update the fee records
Also note that we might have ignored/missed fees for the channel
close's spending txid, so we attempt to update those as well.

Backfilling for missed events is a beast.
2022-07-28 12:08:18 +09:30
niftynei
8827423556 bkpr: account for pushed amounts and record the output_value
We need the total output_value, and we can figure this out if we look at
the remote amount also.

We also need to account for the pushed/leased amount, as for leased
channels this really messes with onchain fee calculations.

We copy basically the events that lightningd emits for leased channels:
an open event with the 'lease fee' (pushed?) amount credited to the side
that made the lease/push; then an in-channel event to effectively push
the pushed/leased amount over to the peer it was paid to.

We run the journal entry logic after this, so the journal snapshot will
take the pushed amount into account when figuring out which in-channel
payments were missed (if any).
2022-07-28 12:08:18 +09:30
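
Roughly, the synthesized event pair looks like this (a sketch; the field names are hypothetical, not lightningd's exact event schema):

```python
def lease_fee_events(opener_acct, peer_acct, lease_fee_msat):
    # 1. credit the lease fee to the side that made the lease/push ...
    open_event = {'account': opener_acct, 'tag': 'lease_fee',
                  'credit': lease_fee_msat, 'debit': 0}
    # 2. ... then an in-channel event moving that amount to the peer
    push_out = {'account': opener_acct, 'tag': 'pushed',
                'credit': 0, 'debit': lease_fee_msat}
    push_in = {'account': peer_acct, 'tag': 'pushed',
               'credit': lease_fee_msat, 'debit': 0}
    return [open_event, push_out, push_in]
```
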
niftynei
2385d8d613 bkpr: listbalances, rename 'account_id' => 'account' 2022-07-28 12:08:18 +09:30
niftynei
f767e41755 bkpr: listbalances, skip the external account
it's essentially meaningless
2022-07-28 12:08:18 +09:30
niftynei
791c1a7526 bkpr: cleanup wallet fee entries if we decide they belong to a channel
Because we update the onchain_fee table every time a new event comes in,
it's possible (and in fact happens) that we get a wallet
withdraw/deposit event and then the channel open output event.

What we'd expect in this case is that the fees for the tx were credited
to the channel's account, not the wallet. But since we got the two
in/out events first, the fees were accumulated there first.

Our existing logic will add the channel's fees correctly, but we weren't
zeroing out the wallet's balance once it'd been determined that they
were 'ineligible', so to speak, for being included in the fees that round.
2022-07-28 12:08:18 +09:30
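
A sketch of the cleanup, under an assumed flat record shape: once the tx is known to be a channel open, the wallet's provisional fee credits for that txid are offset so they net to zero:

```python
def zero_wallet_fees(fee_records, txid):
    """Offset wallet-account fee credits for txid; the channel's own
    fee entries are added separately by the existing logic."""
    for rec in fee_records:
        if rec['txid'] == txid and rec['account'] == 'wallet':
            rec['debit'] = rec['credit']  # nets the wallet's share to zero
```
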
niftynei
08d8de8e45 bkpr: don't try to add fees if this tx didn't touch any accts
Dividing by zero causes problems.
2022-07-28 12:08:18 +09:30
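
The guard is trivial but worth spelling out: fees for a tx are split across the accounts it touched, so an empty account set must short-circuit (illustrative sketch):

```python
def apportion_fees(total_fee_msat, accounts):
    if not accounts:
        return {}  # tx touched no accounts: no fee rows, no divide-by-zero
    share = total_fee_msat // len(accounts)
    return {acct: share for acct in accounts}
```
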
niftynei
8c3347d129 bkpr: don't count fees for channel closes if we're not the opener
The opener pays fees on close, so if we're not the opener we should
ignore any fees on the closing tx.
2022-07-28 12:08:18 +09:30
niftynei
ee8c04a63a bkpr: update tests for wallet/external onchain fees
Working as intended; had a bool switched etc
2022-07-28 12:08:18 +09:30
niftynei
2a3875204a bkpr: parse the 'originating_account' field, save to event
It's useful to know which account an 'external' event impacted, so we
save this data now
2022-07-28 12:08:18 +09:30
niftynei
307ea93592 bkpr: invert channel + chain event printouts
We now add chain events for starting channel balances, so print these
out with chain first then channel events.

Makes it less confusing for channel lease fee events.
2022-07-28 12:08:18 +09:30
niftynei
8089f246c1 bkpr: use tags not str for tag originations 2022-07-28 12:08:18 +09:30
niftynei
a1d72cef06 bkpr: add a new command listaccountevents
Prints all the events for the requested account. If no account
requested, prints out all the events. Ordered by timestamp.

Changelog-Added: bookkeeper: new command `listaccountevents`
2022-07-28 12:08:18 +09:30
niftynei
8039fde5ab bkpr: if we're missing info about an account, add in journal entry
There are two situations where we're missing info.

One is we get a 'channel_closed' event (but there's no 'channel_open')

The other is when a balance_snapshot arrives with information about
accounts that doesn't match what's already on disk. (For some of these
cases, we may be missing 'channel_open' events..)

In the easy case (no channel_open missing), we just figure out what the
2022-07-28 12:08:18 +09:30
niftynei
ccffac8208 bkpr: put the account name on the event
When we print events out, we need to know the account name. This makes
our lookup a lot easier, since we just pull it out from the database
every time we query for these.
2022-07-28 12:08:18 +09:30
niftynei
d943e5e85c bkpr: use pointer for payment_id for channel events
sometimes these are null!
2022-07-28 12:08:18 +09:30
niftynei
8ec35b7eb1 bkpr: turns out these fields are optional
pushes and channel leases don't provide fees or payment_id info
2022-07-28 12:08:18 +09:30
niftynei
899d54edd0 bkpr: have onchain_fee records be write-only, don't update in place
One really rough thing about how we did onchain fees is that the records update
every time a new event comes in.

The better way to do this is to create new entries for every adjustment,
so that reconciliation between printouts isn't a misery.

We add a timestamp and `update_count` to these records, so you can
roughly order them now (and have a good idea of the last time an event
that updated an onchain_fee occurred).
2022-07-28 12:08:18 +09:30
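
A sketch of the append-only scheme, with a hypothetical in-memory record list standing in for the onchain_fee table:

```python
import time

def record_fee_adjustment(records, txid, account, new_total_msat):
    """Append a delta row instead of updating the fee row in place."""
    booked = sum(r['credit'] - r['debit'] for r in records
                 if r['txid'] == txid and r['account'] == account)
    delta = new_total_msat - booked
    if delta == 0:
        return
    count = sum(1 for r in records
                if r['txid'] == txid and r['account'] == account)
    records.append({'txid': txid, 'account': account,
                    'credit': max(delta, 0), 'debit': max(-delta, 0),
                    'timestamp': int(time.time()),
                    'update_count': count + 1})
```
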
niftynei
29c6884468 bkpr: add journal entry for offset account balances; report listbalances
When the node starts up, it records missing/updated account balances
to the 'channel' events... which is kinda fucked for wallet + external
events now that I think about it, but these are all treated the same
anyway so it's fine.

This is the magic piece that lets your bookkeeping data startup ok on an
already running/established node.
2022-07-28 12:08:18 +09:30
niftynei
b7d85f1d0b bkpr: wire up our chain fee accting to chain event reception
When we get a chain event, check to see if this updates any onchain fee
records that we have.
2022-07-28 12:08:18 +09:30
Rusty Russell
721ceb7519 patch db-fatal-plugin_err.patch 2022-07-28 12:08:18 +09:30
niftynei
351dc17e46 bkpr: add bookkeeper to PLUGINS list
This makes it start up automatically
2022-07-28 12:08:18 +09:30
niftynei
dc113d0a3f bkpr: create onchain fee records for events
clightning doesn't give us any info about onchain fees (how could it?
it only knows about utxo object levels, and doesn't keep track of
how/when those are all related)

Instead, we keep running totals of the onchain fees for utxos. This
implements the master method for accounting for them, plus includes
tests to account for channel opens (across two accounts) as well as a
htlc-tx channel close.

Missing: we don't currently emit an event from cln for `withdraw`-initiated
removal of funds, so the accounting for wallet -> external
funds is a bit janky. We don't account for the fees on these
transactions since we don't have the resulting 'external' event to
register them against!
2022-07-28 12:08:18 +09:30
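
The "running total" derivation is essentially input value minus output value, split across the accounts the tx touched (a sketch with msat integers, not the plugin's actual code):

```python
def tx_fee_msat(input_msats, output_msats):
    # whatever input value the outputs don't account for went to fees
    return sum(input_msats) - sum(output_msats)

# 1BTC in, a 0.6BTC output plus change: the missing 100,000msat is the fee.
assert tx_fee_msat([100_000_000_000],
                   [60_000_000_000, 39_999_900_000]) == 100_000
```
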
niftynei
c12cd99039 bkpr: tests for db crud 2022-07-28 12:08:18 +09:30
niftynei
b08ccfec1e bookkeeper: initial crud (no tests) 2022-07-28 12:08:18 +09:30
niftynei
cd95d91ed5 bkpr-tests: first test of plugin bkpr database
First test of bookkeeper database, for just the migrations.
2022-07-28 12:08:18 +09:30
niftynei
fb951dbbd6 bkpr: first attempt at database code for accounting
A database scheme and first attempt at drivers for the bookkeeper
database.

Also moves bookkeeper plugin into its own subdirectory
2022-07-28 12:08:18 +09:30
niftynei
1a3bfc479f bookkeep: first commit, stub of new plugin
Will manage bookkeeping for cln
2022-07-28 12:08:18 +09:30
niftynei
967c56859f sql: use last " as " to find name token for column
We were using the first ' as ', which causes problems for the following
types of lines

	CAST(SUM(of.debit) AS BIGINT) as debit
2022-07-28 12:08:18 +09:30
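
The fix itself lives in the C code, but the idea fits in a line of Python: split on the *last* " as " so the CAST inside the expression can't be mistaken for the alias (assuming, as in the generated queries, that the alias keyword is lowercase):

```python
line = "CAST(SUM(of.debit) AS BIGINT) as debit"
expr, name = line.rsplit(" as ", 1)  # last occurrence wins
assert name == "debit"
```
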
Rusty Russell
75c89f0b8e doc: fix bolt 12 link (it's not in master), update bolt 11 to new "bolts" repo.
It's weird that readthedocs.io drops the ".md" suffix though;
let's see if this fixes it.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2022-07-27 14:51:27 -07:00
Rusty Russell
1c26ebdb31 pytest: fix flake in test_wumbo_channels
We might only have seen one side of the channel, as shown below.  Wait
for both:

```
_____________________________ test_wumbo_channels ______________________________
[gw2] linux -- Python 3.7.13 /opt/hostedtoolcache/Python/3.7.13/x64/bin/python3

node_factory = <pyln.testing.utils.NodeFactory object at 0x7f5d51743b10>
bitcoind = <pyln.testing.utils.BitcoinD object at 0x7f5d51699d10>

    @pytest.mark.openchannel('v1')
    @pytest.mark.openchannel('v2')
    def test_wumbo_channels(node_factory, bitcoind):
        l1, l2, l3 = node_factory.get_nodes(3,
                                            opts=[{'large-channels': None},
                                                  {'large-channels': None},
                                                  {}])
        conn = l1.rpc.connect(l2.info['id'], 'localhost', port=l2.port)
    
        expected_features = expected_peer_features(wumbo_channels=True)
        if l1.config('experimental-dual-fund'):
            expected_features = expected_peer_features(wumbo_channels=True,
                                                       extra=[21, 29])
    
        assert conn['features'] == expected_features
        assert only_one(l1.rpc.listpeers(l2.info['id'])['peers'])['features'] == expected_features
    
        # Now, can we open a giant channel?
        l1.fundwallet(1 << 26)
        l1.rpc.fundchannel(l2.info['id'], 1 << 24)
    
        # Get that mined, and announced.
        bitcoind.generate_block(6, wait_for_mempool=1)
    
        # Connect l3, get gossip.
        l3.rpc.connect(l1.info['id'], 'localhost', port=l1.port)
        wait_for(lambda: len(l3.rpc.listnodes(l1.info['id'])['nodes']) == 1)
        wait_for(lambda: 'features' in only_one(l3.rpc.listnodes(l1.info['id'])['nodes']))
    
        # Make sure channel capacity is what we expected.
>       assert ([c['amount_msat'] for c in l3.rpc.listchannels()['channels']]
                == [Millisatoshi(str(1 << 24) + "sat")] * 2)
E       assert [16777216000msat] == [16777216000m...777216000msat]
E         Right contains one more item: 16777216000msat
E         Full diff:
E         - [16777216000msat, 16777216000msat]
E         + [16777216000msat]
```

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2022-07-27 19:31:04 +09:30
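
The fix, in sketch form (reusing the fixtures from the excerpt above), is to wait until gossip shows both directions before asserting:

```python
# wait until l3 has seen both channel_update directions, then assert
wait_for(lambda: len(l3.rpc.listchannels()['channels']) == 2)
assert ([c['amount_msat'] for c in l3.rpc.listchannels()['channels']]
        == [Millisatoshi(str(1 << 24) + "sat")] * 2)
```
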
niftynei
79a76a96f7 v2open: don't rely on ordering of interprocess messages
Originally I (incorrectly?) assumed that since TX_COMMITMENT_SIGNED
always came before TX_SIGNATURES, we would always receive a response
from openchannel_update (w/ commitment_secured = true) before getting
notification of receipt of the peer's signatures.

But it's observable in the logs of hung tests that this in fact is a
wrong assumption -- the notification for the tx_sigs arrives at our
spender plugin before the callback from our openchannel_update RPC.

This mis-ordering causes a hang.

Luckily we're pretty much set up to handle this race already w/ states
etc, minus actually calling the method to advance the plot in case we're
ready.

2022-07-26T05:37:59.4529095Z lightningd-1 2022-07-26T05:10:07.395Z DEBUG   035d2b1192dfba134e10e540875d366ebc8bc353d5aa766b80c090b39c3a5d885d-dualopend-chan#2: peer_in WIRE_COMMITMENT_SIGNED
2022-07-26T05:37:59.4530452Z lightningd-1 2022-07-26T05:10:07.396Z DEBUG   035d2b1192dfba134e10e540875d366ebc8bc353d5aa766b80c090b39c3a5d885d-hsmd: Got WIRE_HSMD_VALIDATE_COMMITMENT_TX
2022-07-26T05:37:59.4530719Z lightningd-1 2022-07-26T05:10:07.396Z DEBUG   hsmd: Client: Received message 35 from client
2022-07-26T05:37:59.4531386Z lightningd-1 2022-07-26T05:10:07.396Z DEBUG   035d2b1192dfba134e10e540875d366ebc8bc353d5aa766b80c090b39c3a5d885d-dualopend-chan#2: billboard: channel open: commitment received, sending to lightningd to save
2022-07-26T05:37:59.4531856Z lightningd-1 2022-07-26T05:10:07.398Z DEBUG   035d2b1192dfba134e10e540875d366ebc8bc353d5aa766b80c090b39c3a5d885d-dualopend-chan#2: peer_in WIRE_TX_SIGNATURES
>>> 2022-07-26T05:37:59.4532553Z lightningd-1 2022-07-26T05:10:07.400Z DEBUG   plugin-spenderp: mfc 60:`openchannel_peer_sigs` notice received for channel 9d145e763f08ee6f715ba7677f869cbb9580c7406f4d0b0ff3a0987efe501e13 <<<< THIS ONE WAS ASSUMED TO COME AFTER openchannel_update (next line)
2022-07-26T05:37:59.4533048Z lightningd-1 2022-07-26T05:10:07.400Z DEBUG   plugin-spenderp: mfc 60, dest 0: openchannel_update 035d2b1192dfba134e10e540875d366ebc8bc353d5aa766b80c090b39c3a5d885d returned.
2022-07-26T05:37:59.4554292Z lightningd-1 2022-07-26T05:10:07.400Z DEBUG   plugin-spenderp: mfc 60: parallel `openchannel_update`.
2022-07-26T05:37:59.4555485Z lightningd-1 2022-07-26T05:10:07.400Z DEBUG   plugin-spenderp: mfc 60: funding tx 50425e20dbf0ca6fe112a8811b8048edb5bfa8d2922079668c5f353b859b45cb
2022-07-26T05:37:59.4557934Z lightningd-1 2022-07-26T05:10:07.508Z DEBUG   0266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c03518-hsmd: Got WIRE_HSMD_CUPDATE_SIG_REQ
2022-07-26T05:37:59.4558244Z lightningd-1 2022-07-26T05:10:07.508Z DEBUG   hsmd: Client: Received message 3 from client
2022-07-26T05:37:59.4558738Z lightningd-3 2022-07-26T05:11:03.234Z DEBUG   0266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c03518-gossipd: seeker: startup peer finished
2022-07-26T05:37:59.4559209Z lightningd-3 2022-07-26T05:11:03.234Z DEBUG   0266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c03518-gossipd: seeker: state = PROBING_SCIDS Seeking scids 1 - 105

(The last 2 log messages (from a different node) are >1min after the last
log line from lightning-1, because lightning-1 hung)

Hacked lightningd up to test this (such that the notification is always
sent before the RPC response; works as intended w/ the patch)
2022-07-27 19:31:04 +09:30
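
In sketch form, the ordering-agnostic handling amounts to advancing whenever both facts are in hand, regardless of which arrived last (names here are illustrative, not spenderp's actual state machine):

```python
def on_commitment_secured(state):
    state['commitment_secured'] = True
    maybe_advance(state)

def on_peer_sigs(state):
    state['peer_sigs_received'] = True
    maybe_advance(state)

def maybe_advance(state):
    # whichever event lands second triggers the next step
    if state['commitment_secured'] and state['peer_sigs_received']:
        state['next_step'](state)
```
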
niftynei
2c2bcc8eb4 flake: permit test_v2_open_sigs_restart_while_dead to succeed/fail
There's a race between disconnecting and returning a successful RPC call for
openchannel_signed; if we disconnect quickly, we get an RPC error back.

If we're too slow, it returns without an error.

This needs to be cleaned up on a whole, work that I'm planning to get
into as part of a funder re-write. For now, let's just let the test
continue whether this call succeeds or fails.
2022-07-27 19:31:04 +09:30
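
A sketch of the test-side tolerance, assuming the usual pyln fixtures (`chan_id` and `signed_psbt` are hypothetical stand-ins):

```python
from pyln.client import RpcError

try:
    l1.rpc.openchannel_signed(chan_id, signed_psbt)
except RpcError:
    pass  # disconnected before the RPC returned; also acceptable
```
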
niftynei
d3ba017672 valgrind: rm ref to cmd when cmd is free'd
The cmd was getting free'd, but holding on to a reference to it
was causing problems.

==523280== Invalid read of size 8
==523280==    at 0x1B3E14: del_notifier_property (tal.c:326)
==523280==    by 0x1B3E14: tal_del_notifier_ (tal.c:569)
==523280==    by 0x1123E7: handle_rpc_reply (libplugin.c:671)
==523280==    by 0x1123E7: rpc_read_response_one (libplugin.c:866)
==523280==    by 0x1123E7: rpc_conn_read_response (libplugin.c:886)
==523280==    by 0x1A7B53: next_plan (io.c:59)
==523280==    by 0x1A7B53: do_plan (io.c:407)
==523280==    by 0x1A7B53: io_ready (io.c:417)
==523280==    by 0x1A9BDB: io_loop (poll.c:453)
==523280==    by 0x1141D0: plugin_main (libplugin.c:1708)
==523280==    by 0x10D7E4: main (commando.c:937)
==523280==  Address 0x52de928 is 8 bytes inside a block of size 40 free'd
==523280==    at 0x483F0C3: free (vg_replace_malloc.c:872)
==523280==    by 0x1B2CDD: del_tree (tal.c:419)
==523280==    by 0x1B37BB: tal_free (tal.c:486)
==523280==    by 0x1B37BB: tal_free (tal.c:474)
==523280==    by 0x110CB2: command_complete (libplugin.c:255)
==523280==    by 0x110CB2: command_done_err (libplugin.c:390)
==523280==    by 0x10F511: handle_reply (commando.c:560)
==523280==    by 0x10F511: handle_custommsg (commando.c:609)
==523280==    by 0x113877: ld_command_handle (libplugin.c:1441)
==523280==    by 0x113877: ld_read_json_one (libplugin.c:1491)
==523280==    by 0x113877: ld_read_json (libplugin.c:1511)
==523280==    by 0x1A7B53: next_plan (io.c:59)
==523280==    by 0x1A7B53: do_plan (io.c:407)
==523280==    by 0x1A7B53: io_ready (io.c:417)
==523280==    by 0x1A9BDB: io_loop (poll.c:453)
==523280==    by 0x1141D0: plugin_main (libplugin.c:1708)
==523280==    by 0x10D7E4: main (commando.c:937)
==523280==  Block was alloc'd at
==523280==    at 0x483C855: malloc (vg_replace_malloc.c:381)
==523280==    by 0x1B3BBD: allocate (tal.c:250)
==523280==    by 0x1B3BBD: add_notifier_property (tal.c:303)
==523280==    by 0x1B3BBD: tal_add_destructor2_ (tal.c:529)
==523280==    by 0x110725: jsonrpc_request_start_ (libplugin.c:181)
==523280==    by 0x10E0EA: send_more_cmd (commando.c:643)
==523280==    by 0x11243C: handle_rpc_reply (libplugin.c:696)
==523280==    by 0x11243C: rpc_read_response_one (libplugin.c:866)
==523280==    by 0x11243C: rpc_conn_read_response (libplugin.c:886)
==523280==    by 0x1A7B53: next_plan (io.c:59)
==523280==    by 0x1A7B53: do_plan (io.c:407)
==523280==    by 0x1A7B53: io_ready (io.c:417)
==523280==    by 0x1A9BDB: io_loop (poll.c:453)
==523280==    by 0x1141D0: plugin_main (libplugin.c:1708)
==523280==    by 0x10D7E4: main (commando.c:937)
==523280==
{
   <insert_a_suppression_name_here>
2022-07-26 15:11:30 -07:00
niftynei
282ab72e2d tests: valgrind barfing on uninitialized value
------------------------------- Valgrind errors --------------------------------
Valgrind error file: valgrind-errors.493330
==493330== Conditional jump or move depends on uninitialised value(s)
==493330==    at 0x154051: opt_add_addr_withtype (options.c:275)
==493330==    by 0x154406: opt_add_announce_addr (options.c:302)
==493330==    by 0x2696E6: parse_one (parse.c:121)
==493330==    by 0x25CFB5: opt_parse (opt.c:228)
==493330==    by 0x155DB6: handle_opts (options.c:1413)
==493330==    by 0x127317: main (lightningd.c:994)
==493330==
{
   <insert_a_suppression_name_here>
   Memcheck:Cond
   fun:opt_add_addr_withtype
   fun:opt_add_announce_addr
   fun:parse_one
   fun:opt_parse
   fun:handle_opts
   fun:main
}
--------------------------------------------------------------------------------
Leaving base_dir /tmp/ltests-iyf2dw3n intact, it still has test sub-directories with failure details: ['test_announce_dns_without_port_1']
====================================== short test summary info ======================================
ERROR tests/test_gossip.py::test_announce_dns_without_port - ValueError:
2022-07-26 15:11:30 -07:00
niftynei
f4abc3a661 tests: local flake fix; l1 was waiting too long to reconnect
Impacts local tests when TIMEOUT is set low...
2022-07-26 15:11:30 -07:00
Christian Decker
b55df5c626 msggen: Use tempfile + rename to make changes to .msggen.json atomic
This was causing issues when multiple instances of msggen were running
in parallel on CI.

Changelog-None
2022-07-26 09:49:20 -07:00
Rusty Russell
17b9bd5ca3 pytest: fix test_commando_rune flake.
We reset counters every minute, so ratelimit tests can flake since we
might hit that boundary.

Instead, wait for the reset then test explicitly, assuming that takes
less than 60 seconds.

```
        for rune, cmd, params in failures:
            print("{} {}".format(cmd, params))
            with pytest.raises(RpcError, match='Not authorized:') as exc_info:
                l2.rpc.call(method='commando',
                            payload={'peer_id': l1.info['id'],
                                     'rune': rune['rune'],
                                     'method': cmd,
>                                    'params': params})
E               Failed: DID NOT RAISE <class 'pyln.client.lightning.RpcError'>
...
DEBUG:root:Calling commando with payload {'peer_id': '0266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c03518', 'rune': 'O8Zr-ULTBKO3_pKYz0QKE9xYl1vQ4Xx9PtlHuist9Rk9NCZwbnVtPTAmcmF0ZT0zJnJhdGU9MQ==', 'method': 'getinfo', 'params': {}}
```

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2022-07-26 09:48:56 -07:00
Rusty Russell
85180dbfee pytest: fix flake in test_feerates
As the comment in set_feerates says: "Technically, this waits until
it's called, not until it's processed."

And the wait_for() line doesn't work, since that condition is already true.

```
    @unittest.skipIf(TEST_NETWORK == 'liquid-regtest', "Fees on elements are different")
    @unittest.skipIf(
        not DEVELOPER or DEPRECATED_APIS, "Without DEVELOPER=1 we snap to "
        "FEERATE_FLOOR on testnets, and we test the new API."
    )
    def test_feerates(node_factory):
        l1 = node_factory.get_node(options={'log-level': 'io',
                                            'dev-no-fake-fees': True}, start=False)
        l1.daemon.rpcproxy.mock_rpc('estimatesmartfee', {
            'error': {"errors": ["Insufficient data or no feerate found"], "blocks": 0}
        })
        l1.start()
    
        # All estimation types
        types = ["opening", "mutual_close", "unilateral_close", "delayed_to_us",
                 "htlc_resolution", "penalty"]
    
        # Try parsing the feerates, won't work because can't estimate
        for t in types:
            with pytest.raises(RpcError, match=r'Cannot estimate fees'):
                feerate = l1.rpc.parsefeerate(t)
    
        # Query feerates (shouldn't give any!)
        wait_for(lambda: len(l1.rpc.feerates('perkw')['perkw']) == 2)
        feerates = l1.rpc.feerates('perkw')
        assert feerates['warning_missing_feerates'] == 'Some fee estimates unavailable: bitcoind startup?'
        assert 'perkb' not in feerates
        assert feerates['perkw']['max_acceptable'] == 2**32 - 1
        assert feerates['perkw']['min_acceptable'] == 253
        for t in types:
            assert t not in feerates['perkw']
    
        wait_for(lambda: len(l1.rpc.feerates('perkb')['perkb']) == 2)
        feerates = l1.rpc.feerates('perkb')
        assert feerates['warning_missing_feerates'] == 'Some fee estimates unavailable: bitcoind startup?'
        assert 'perkw' not in feerates
        assert feerates['perkb']['max_acceptable'] == (2**32 - 1)
        assert feerates['perkb']['min_acceptable'] == 253 * 4
        for t in types:
            assert t not in feerates['perkb']
    
        # Now try setting them, one at a time.
        # Set CONSERVATIVE/2 feerate, for max
        l1.set_feerates((15000, 0, 0, 0), True)
        wait_for(lambda: len(l1.rpc.feerates('perkw')['perkw']) == 2)
        feerates = l1.rpc.feerates('perkw')
        assert feerates['warning_missing_feerates'] == 'Some fee estimates unavailable: bitcoind startup?'
        assert 'perkb' not in feerates
>       assert feerates['perkw']['max_acceptable'] == 15000 * 10
E       assert 4294967295 == (15000 * 10)

tests/test_misc.py:1392: AssertionError
```

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2022-07-26 09:48:56 -07:00
Rusty Russell
0a9a87ec10 pytest: fix test_commando_stress
On fast machines, we sometimes don't get failures on commando commands.

(*But* we still got "New cmd replacing old" messages, which is how I
realized we weren't freeing them promptly, hence the previous fix).

```
        # Should have exactly one discard msg from each discard
>       nodes[0].daemon.wait_for_logs([r"New cmd from .*, replacing old"] * discards)

tests/test_plugin.py:2839: 
...
>                   raise TimeoutError('Unable to find "{}" in logs.'.format(exs))
E                   TimeoutError: Unable to find "[]" in logs.

```

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2022-07-26 09:33:40 -07:00
Rusty Russell
9aa9a8236f commando: free incmd as soon as we use it.
Otherwise we left it in the cache, causing "New cmd replacing old"
messages.
2022-07-26 09:33:40 -07:00
Rusty Russell
8da361b49b pytest: fix flake in test_channel_persistence w/ TEST_CHECK_DBSTMTS
This was weird.  Here is the message (with \n turned into real new lines):

```
2022-07-24T07:20:08.9144998Z Plugin '/home/runner/work/lightning/lightning/tests/plugins/dblog.py' returned an invalid response to the db_write hook: {"jsonrpc": "2.0", "id": 40, "error": {"code": -32600, "message": "Error while processing db_write: UNIQUE constraint failed: shachain_known.shachain_id, shachain_known.pos", "traceback": "Traceback (most recent call last):
  File \"/home/runner/work/lightning/lightning/contrib/pyln-client/pyln/client/plugin.py\", line 631, in _dispatch_request
    result = self._exec_func(method.func, request)
  File \"/home/runner/work/lightning/lightning/contrib/pyln-client/pyln/client/plugin.py\", line 616, in _exec_func
    return func(*ba.args, **ba.kwargs)
  File \"/home/runner/work/lightning/lightning/tests/plugins/dblog.py\", line 45, in db_write
    plugin.conn.execute(c)
sqlite3.IntegrityError: UNIQUE constraint failed: shachain_known.shachain_id, shachain_known.pos
"}}
```

Finally, I realized that we *kill* l2: this means it has updated the
plugin db but not the real db.  This is expected: a real backup plugin
would handle this case.

Simply disable the test for this case.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2022-07-26 09:33:40 -07:00
Rusty Russell
008a59b004 lightningd: ignore default if it's a literal 'null' JSON token.
I wondered how `tests/plugins/dblog.py` worked, since it is supposed to fail
unless the `dblog-file` arg is set:

```
@plugin.init()
def init(configuration, options, plugin):
    if not plugin.get_option('dblog-file'):
        raise RpcError("No dblog-file specified")
```

But it was set to "null".  That's because 'None' in Python is turned into a literal
JSON "null", and we take that as the default value.

We also cleanup the popt->description double-assignment (a leftover
from when this was optional).

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Fixed: plugins: setting the default value of a parameter to `null` is the same as not setting it (pyln plugins did this!).
2022-07-26 09:33:40 -07:00
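
The root cause is easy to demonstrate: Python's `None` serializes to a JSON `null`, which lightningd previously took as a real default value:

```python
import json
# a pyln plugin option declared with default=None reached lightningd as:
assert json.dumps({"default": None}) == '{"default": null}'
```
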
Rusty Russell
1480257644 pytest: set dblog-file when adding the dblog plugin (TEST_CHECK_DBSTMTS=1)
As we'll see in the next patch, this wasn't *supposed* to work without dblog-file,
but it did, creating a dblog called "null".

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
2022-07-26 09:33:40 -07:00