First, how we record "our_funds" and then apply pushes vs lease_fees
(for liquidity ad buys/sales) was exactly opposite.
For pushes we were reporting the total funded into the channel, with the
push representing how much we'd later move to the peer.
For lease_fees we were reporting the total in the channel, with the
push representing how much was already moved to the peer.
We fix this (from a view perspective) by re-adding lease fees to what's
reported in the channel funding totals. Since this is now new behavior
(for leased channel values), we added new fields so we can take the old
field names through a deprecation cycle.
We also make it possible to differentiate between a push and a lease_fee
(before they were all the same), by adding two new fields to `listpeers`:
`fee_paid_msat` and `fee_rcvd_msat`.
This allows us to avoid math in the bookkeeper; instead we just pick
the numbers out directly and record them.
Fixes #5472
Changelog-Added: JSON-RPC: `listpeers` now has a few new fields for `funding` (`remote_funds_msat`, `local_funds_msat`, `fee_paid_msat`, `fee_rcvd_msat`).
Changelog-Deprecated: JSON-RPC: `listpeers`.`funding` fields `local_msat` and `remote_msat` are now deprecated.
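For illustration, here's a rough sketch (not from the commit) of reading the
new breakdown via pyln-client; the field names come from the changelog lines
above, while the exact nesting under each channel's `funding` object is an
assumption:
```
from pyln.client import LightningRpc

rpc = LightningRpc("lightning-rpc")   # hypothetical socket path
for peer in rpc.listpeers()["peers"]:
    for chan in peer.get("channels", []):
        funding = chan.get("funding", {})
        print("ours:", funding.get("local_funds_msat"),
              "theirs:", funding.get("remote_funds_msat"),
              "lease fee paid:", funding.get("fee_paid_msat"),
              "lease fee received:", funding.get("fee_rcvd_msat"))
```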
```
        # 0x0100 = channel_announcement
        # 0x0102 = channel_update
        # (Node announcement may have any timestamp)
        types = Counter([m[0:4] for m in msgs])
        assert types['0100'] == 1
>       assert types['0102'] == 2
E       assert 1 == 2
tests/test_gossip.py:324: AssertionError
```
Examining the logs shows that we ask l4 for timestamps
"first_timestamp=1658892115 timestamp_range=13", and the timestamp on
the missing update is exactly `1658892128`.
So round the end time up by 1 for filtering.
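A minimal sketch of the boundary using the numbers from that log, assuming
the filter is applied as a half-open range:
```
first_timestamp, timestamp_range = 1658892115, 13
end = first_timestamp + timestamp_range      # == 1658892128
missing_update_ts = 1658892128

# With an exclusive end, the update sitting exactly on the boundary is
# dropped; rounding the end time up by one keeps it.
assert not (first_timestamp <= missing_update_ts < end)
assert first_timestamp <= missing_update_ts < end + 1
```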
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We used to tell connectd to remember our connect delay, and hand it
back (increased if necessary).
Instead, simply record when we last tried to connect. If it was less
than 10 minutes ago, double delay (up to 5 minutes max), otherwise
reset delay to 1 second.
This covers all scenarios: whether we reconnect then immediately
disconnect, or never successfully connect, it doesn't matter.
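A sketch of that logic (the real code is in the daemon; the names here are
just for illustration):
```
import time

RESET_AFTER = 10 * 60      # last attempt 10+ minutes ago: start fresh
INITIAL_DELAY = 1          # seconds
MAX_DELAY = 5 * 60         # never back off more than 5 minutes

def next_connect_delay(last_attempt, current_delay, now=None):
    now = time.time() if now is None else now
    if now - last_attempt >= RESET_AFTER:
        return INITIAL_DELAY
    return min(current_delay * 2, MAX_DELAY)
```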
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Fixes: #5453
Keep the accounts as an 'append only' log; instead, we move the marker
for the 'channel_open' forward when a 'channel_open' event comes out.
We also neatly hide the 'channel_proposed' events in 'inspect' if
there's a 'channel_open' for that same event.
If you call inspect before the 'channel_open' is confirmed, you'll see
the tag as 'channel_proposed', afterwards it shows up as
'channel_open'. However, the event log rolls forward -- listaccountevents
will show the correct history of the proposal then open confirming (plus
any routing that happened before the channel confirmed).
We need a record of the channel account before you start sending
payments through it. Normally we don't start allowing payments to be
sent until after the channel has locked in, but zeroconf does away with
this assumption.
Instead we push out a "channel_proposed" event, which should only show
up for zeroconfs.
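A hedged sketch of what a test might check (the account name, the `call`
access, and the exact event shape are assumptions; the tag names come from
the description above):
```
# After the zeroconf open finally confirms, the append-only log keeps both
# entries, while bkpr-inspect only surfaces the confirmed 'channel_open'.
events = l1.rpc.call("bkpr-listaccountevents")["events"]
tags = [e["tag"] for e in events if e.get("account") == chan_acct]
assert tags[:2] == ["channel_proposed", "channel_open"]
```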
First off, when we pull data out of JSON, unescape it so we don't end up
with extraneous escapes in our bookkeeping data. I promise, it's worth
it.
Then, when we print descriptions out to the csvs, we gotta wrap
everything in quotes... but also we have to change all the double-quotes
to singles so that adding the quotes doesn't do anything untoward.
We also just pass it through json_escape to get rid of linebreaks etc.
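A rough sketch of that escaping (the helper name is invented):
```
import json

def csv_safe_description(desc: str) -> str:
    # Swap double quotes for singles so wrapping the field in quotes stays
    # unambiguous, escape linebreaks etc. (much like json_escape), then quote.
    desc = desc.replace('"', "'")
    escaped = json.dumps(desc)[1:-1]    # keep the escapes, drop json's quotes
    return '"' + escaped + '"'

print(csv_safe_description('paid "rent"\nfor July'))
# "paid 'rent'\nfor July"
```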
Note that in the tests we do a byte comparison instead of converting the
CSV dumps to strings because python will escape the strings on
conversion...
So we print out invoice fees on the same line for those CSVs! This means
we have to do a little bit of gymnastics (but not too bad):
- we save the fee amount onto the income event now so we can use
it later
- we ignore every "invoice_fee" event for the koinly/cointracker CSVs
Note that since we're not skipping income events in the loops, we also
move the newline character to the start of every `_entry` function so
skipped records don't incur empty lines.
Changelog-Added: bkpr: print out invoice fees on the same line for `koinly` and `cointracker` csv types
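A hedged sketch of the resulting entry logic (function and field names are
invented; the real change lives in the bookkeeper plugin):
```
def koinly_entry(ev):
    if ev["tag"] == "invoice_fee":
        return ""                       # folded into the invoice's row instead
    fee = ev.get("fee_msat", 0)         # fee we saved onto the income event
    # newline first, so skipped records don't leave empty lines behind
    return "\n{},{},{}".format(ev["timestamp"], ev["credit_msat"], fee)
```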
If we expect further events for an onchain output (because we can steal
it away from the 'external'/rightful owner), we mark it.
This prevents us from marking a channel as 'onchain-resolved' before
all events that we're interested in have actually hit the chain.
Case that this matters:
Peer publishes a (cheating) unilateral close and a timeout htlc (which
we can steal).
We then steal the timeout htlc.
Without the stealable flag, we'd have marked the channel as resolved when
the peer published the timeout htlc, which is incorrect as we're still
waiting for the resolution of that timeout htlc (because we *can* steal it).
We were failing to mark channels as resolved because we weren't using
later events on 'external' accounts (that originated from this account)
as signals to run the channel resolution check.
This fixes that, and adds a test.
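Roughly, the resolution check described above looks like this (a sketch;
names are invented):
```
def channel_fully_resolved(outputs):
    for out in outputs:
        if out.get("stealable"):
            # We may still sweep this one (e.g. a cheater's timeout htlc),
            # so don't call the channel resolved until that lands on chain.
            return False
        if not out.get("spent"):
            return False
    return True
```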
Adds schema definitions and manpages for bkpr- commands; also renames
the commands to all start with 'bkpr-', so they're easier to identify/
make runes about.
Anchor outputs are ignored by the clightning wallet, but we keep track
of them in the bookkeeper. This causes problems when we do the balance
checks on restart with the balance_snapshot -- it results in us printing
out a journal_entry to 'get rid of' the anchors that the clightning node
doesn't know about.
Instead, we mark some outputs as 'ignored' and exclude these from our
account balance sums when we're comparing to the clightning snapshot.
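A sketch of the adjusted balance check (field names assumed):
```
def account_balance_msat(events):
    total = 0
    for ev in events:
        if ev.get("ignored"):           # e.g. anchor outputs
            continue
        total += ev.get("credit_msat", 0) - ev.get("debit_msat", 0)
    return total
```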
For income events, break out the amount paid in routing fees vs the
total amount of the *invoice* that is paid.
Also print out these fees, when available, in `listaccountevents`.
This is a rare case where we RBF the output of a penalty until it no
longer has an output value we can reclaim. We ignore the txid for these
events when closing a channel.
We issue events for external deposits (withdrawals) before the tx is
confirmed in a block. To avoid double counting these, we don't count
them as confirmed/included until after they're confirmed. We do this
by keeping the blockheight at zero until the withdrawal for their input
comes through.
Note that since we don't have any way to note when RBF'd withdrawals
are no longer eligible for block inclusion, we don't really have a good
heuristic for trimming them. Which is fine; they *will* still show up in
account events.
Onchain fees are weird at channel close because:
- you may be missing a trimmed htlc (which went to fees)
- the balance from close may have been rounded (msats can't land on
chain)
- the close might have been a past state and you've actually
ended up with more money onchain than you had in the channel. wut
This commit accounts for all of this appropriately, with some tests.
channel_close.debit should equal onchain_fee.credit (for that txid)
plus sum(chain_event.credit [wallet/channel_acct]).
In the penalty case, channel_close.debit becomes channel_close.debit +
penalty_adj.debit, i.e.
  channel_close.debit + penalty_adj.debit =
      onchain_fee.credit
      + sum(chain_event.credit [wallet/channel_acct])
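A worked example of that identity, with made-up msat amounts:
```
channel_close_debit = 1_000_000_000   # what the channel said we had
penalty_adj_debit   =    50_000_000   # extra reclaimed via the penalty
onchain_fee_credit  =     2_000_000   # fees, incl. trimmed/rounded bits
chain_credits       = 1_048_000_000   # outputs landing in wallet/channel accts

assert (channel_close_debit + penalty_adj_debit
        == onchain_fee_credit + chain_credits)
```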
We might only have seen one side of the channel, as shown below. Wait
for both:
```
_____________________________ test_wumbo_channels ______________________________
[gw2] linux -- Python 3.7.13 /opt/hostedtoolcache/Python/3.7.13/x64/bin/python3
node_factory = <pyln.testing.utils.NodeFactory object at 0x7f5d51743b10>
bitcoind = <pyln.testing.utils.BitcoinD object at 0x7f5d51699d10>
    @pytest.mark.openchannel('v1')
    @pytest.mark.openchannel('v2')
    def test_wumbo_channels(node_factory, bitcoind):
        l1, l2, l3 = node_factory.get_nodes(3,
                                            opts=[{'large-channels': None},
                                                  {'large-channels': None},
                                                  {}])
        conn = l1.rpc.connect(l2.info['id'], 'localhost', port=l2.port)
        expected_features = expected_peer_features(wumbo_channels=True)
        if l1.config('experimental-dual-fund'):
            expected_features = expected_peer_features(wumbo_channels=True,
                                                       extra=[21, 29])
        assert conn['features'] == expected_features
        assert only_one(l1.rpc.listpeers(l2.info['id'])['peers'])['features'] == expected_features
        # Now, can we open a giant channel?
        l1.fundwallet(1 << 26)
        l1.rpc.fundchannel(l2.info['id'], 1 << 24)
        # Get that mined, and announced.
        bitcoind.generate_block(6, wait_for_mempool=1)
        # Connect l3, get gossip.
        l3.rpc.connect(l1.info['id'], 'localhost', port=l1.port)
        wait_for(lambda: len(l3.rpc.listnodes(l1.info['id'])['nodes']) == 1)
        wait_for(lambda: 'features' in only_one(l3.rpc.listnodes(l1.info['id'])['nodes']))
        # Make sure channel capacity is what we expected.
>       assert ([c['amount_msat'] for c in l3.rpc.listchannels()['channels']]
                == [Millisatoshi(str(1 << 24) + "sat")] * 2)
E       assert [16777216000msat] == [16777216000m...777216000msat]
E         Right contains one more item: 16777216000msat
E         Full diff:
E         - [16777216000msat, 16777216000msat]
E         + [16777216000msat]
```
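The fix is roughly to wait until gossip for both sides has arrived before
checking capacities, e.g.:
```
wait_for(lambda: len(l3.rpc.listchannels()['channels']) == 2)
assert ([c['amount_msat'] for c in l3.rpc.listchannels()['channels']]
        == [Millisatoshi(str(1 << 24) + "sat")] * 2)
```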
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
There's a race between disconnecting and returning a successful RPC call
for openchannel_signed; if we disconnect quickly, we get an RPC error
back. Too slow, and it returns without an error.
This needs to be cleaned up as a whole, work that I'm planning to get
into as part of a funder re-write. For now, let's just let the test
continue whether this call succeeds or fails.
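In the test, that amounts to tolerating either outcome for now (names like
`chan_id` and `signed_psbt` are placeholders):
```
try:
    l1.rpc.openchannel_signed(chan_id, signed_psbt)
except RpcError:
    # Disconnected before the RPC returned; fine for this test.
    pass
```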
We reset counters every minute, so ratelimit tests can flake since we
might hit that boundary.
Instead, wait for the reset then test explicitly, assuming that takes
less than 60 seconds.
```
        for rune, cmd, params in failures:
            print("{} {}".format(cmd, params))
            with pytest.raises(RpcError, match='Not authorized:') as exc_info:
                l2.rpc.call(method='commando',
                            payload={'peer_id': l1.info['id'],
                                     'rune': rune['rune'],
                                     'method': cmd,
>                                    'params': params})
E               Failed: DID NOT RAISE <class 'pyln.client.lightning.RpcError'>
...
DEBUG:root:Calling commando with payload {'peer_id': '0266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c03518', 'rune': 'O8Zr-ULTBKO3_pKYz0QKE9xYl1vQ4Xx9PtlHuist9Rk9NCZwbnVtPTAmcmF0ZT0zJnJhdGU9MQ==', 'method': 'getinfo', 'params': {}}
```
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
As the comment in set_feerates says: "Technically, this waits until
it's called, not until it's processed."
And the wait_for() line doesn't work, since that condition is already true.
```
    @unittest.skipIf(TEST_NETWORK == 'liquid-regtest', "Fees on elements are different")
    @unittest.skipIf(
        not DEVELOPER or DEPRECATED_APIS, "Without DEVELOPER=1 we snap to "
        "FEERATE_FLOOR on testnets, and we test the new API."
    )
    def test_feerates(node_factory):
        l1 = node_factory.get_node(options={'log-level': 'io',
                                            'dev-no-fake-fees': True}, start=False)
        l1.daemon.rpcproxy.mock_rpc('estimatesmartfee', {
            'error': {"errors": ["Insufficient data or no feerate found"], "blocks": 0}
        })
        l1.start()
        # All estimation types
        types = ["opening", "mutual_close", "unilateral_close", "delayed_to_us",
                 "htlc_resolution", "penalty"]
        # Try parsing the feerates, won't work because can't estimate
        for t in types:
            with pytest.raises(RpcError, match=r'Cannot estimate fees'):
                feerate = l1.rpc.parsefeerate(t)
        # Query feerates (shouldn't give any!)
        wait_for(lambda: len(l1.rpc.feerates('perkw')['perkw']) == 2)
        feerates = l1.rpc.feerates('perkw')
        assert feerates['warning_missing_feerates'] == 'Some fee estimates unavailable: bitcoind startup?'
        assert 'perkb' not in feerates
        assert feerates['perkw']['max_acceptable'] == 2**32 - 1
        assert feerates['perkw']['min_acceptable'] == 253
        for t in types:
            assert t not in feerates['perkw']
        wait_for(lambda: len(l1.rpc.feerates('perkb')['perkb']) == 2)
        feerates = l1.rpc.feerates('perkb')
        assert feerates['warning_missing_feerates'] == 'Some fee estimates unavailable: bitcoind startup?'
        assert 'perkw' not in feerates
        assert feerates['perkb']['max_acceptable'] == (2**32 - 1)
        assert feerates['perkb']['min_acceptable'] == 253 * 4
        for t in types:
            assert t not in feerates['perkb']
        # Now try setting them, one at a time.
        # Set CONSERVATIVE/2 feerate, for max
        l1.set_feerates((15000, 0, 0, 0), True)
        wait_for(lambda: len(l1.rpc.feerates('perkw')['perkw']) == 2)
        feerates = l1.rpc.feerates('perkw')
        assert feerates['warning_missing_feerates'] == 'Some fee estimates unavailable: bitcoind startup?'
        assert 'perkb' not in feerates
>       assert feerates['perkw']['max_acceptable'] == 15000 * 10
E       assert 4294967295 == (15000 * 10)
tests/test_misc.py:1392: AssertionError
```
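A sketch of the fix: wait on something that actually changes once the new
estimate is processed (e.g. the max itself), instead of a length that is
already 2:
```
l1.set_feerates((15000, 0, 0, 0), True)
wait_for(lambda: l1.rpc.feerates('perkw')['perkw']['max_acceptable'] == 15000 * 10)
```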
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
On fast machines, we sometimes don't get failures on commando commands.
(*But* we still got "New cmd replacing old" messages, which is how I
realized we weren't freeing them promptly, hence the previous fix).
```
# Should have exactly one discard msg from each discard
> nodes[0].daemon.wait_for_logs([r"New cmd from .*, replacing old"] * discards)
tests/test_plugin.py:2839:
...
> raise TimeoutError('Unable to find "{}" in logs.'.format(exs))
E TimeoutError: Unable to find "[]" in logs.
```
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This was weird. Here is the message (with \n turned into real new lines):
```
2022-07-24T07:20:08.9144998Z Plugin '/home/runner/work/lightning/lightning/tests/plugins/dblog.py' returned an invalid response to the db_write hook: {"jsonrpc": "2.0", "id": 40, "error": {"code": -32600, "message": "Error while processing db_write: UNIQUE constraint failed: shachain_known.shachain_id, shachain_known.pos", "traceback": "Traceback (most recent call last):
File \"/home/runner/work/lightning/lightning/contrib/pyln-client/pyln/client/plugin.py\", line 631, in _dispatch_request
result = self._exec_func(method.func, request)
File \"/home/runner/work/lightning/lightning/contrib/pyln-client/pyln/client/plugin.py\", line 616, in _exec_func
return func(*ba.args, **ba.kwargs)
File \"/home/runner/work/lightning/lightning/tests/plugins/dblog.py\", line 45, in db_write
plugin.conn.execute(c)
sqlite3.IntegrityError: UNIQUE constraint failed: shachain_known.shachain_id, shachain_known.pos
"}}
```
Finally, I realized that we *kill* l2: this means it has updated the
plugin db but not the real db. This is expected: a real backup plugin
would handle this case.
Simply disable the test for this case.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
As we'll see in the next patch, this wasn't *supposed* to work without
dblog-file, but it did, creating a dblog called "null".
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
If a rune contains invalid UTF-8, offers (which implements decode) would
produce JSON with invalid UTF-8, which causes lightningd to complain
and kill it, and then die because it's an important plugin.
So don't decode invalid UTF-8!
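A hedged sketch of the guard (the real fix is in the C plugin):
```
def decodable_as_utf8(raw: bytes) -> bool:
    try:
        raw.decode('utf-8')
        return True
    except UnicodeDecodeError:
        return False
```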
Reported-by: @jb55
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We were waiting too long for the reconnect to happen (60s default),
which caused this test to time out.
When testing, let's speed up the reconnect.
L2 tried to reconnect but didn't have connection information in its
gossip -- is there a way to ask/save connection data from a node you're
making a channel with that doesn't rely on their node_announcement?
This gives a nice way to ensure your secret is the correct one.
Also, we don't need to suppress VALGRIND for this test, now that the
output races are fixed.
Changelog-Added: `hsmtool`: new command `checkhsm` to check BIP39 passphrase against hsm_secret.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Some tests need to inspect it, but most don't, and I suspect I'm missing some
error messages due to this.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
The error routine returns a string literal in this case, which we can't take().
Reported-by: @jb55
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
These may eventually end up in pyln-client, as they allow talking to
the GRPC interface exposed by cln-grpc, however for now they are used
for testing only. Once we have sufficient API and test coverage we can
move them and leave imports in their place.
If you get the wrong hsm_secret, your node_id will change, and
peers won't know who you are, bitcoind will reject your transaction
signatures, and other madness.
Catch this as soon as it happens, by storing our node_id in the db.
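A hedged sketch of the startup check (the real check lives in lightningd's
C code; names here are illustrative only):
```
def check_node_id(db, derived_node_id: str) -> None:
    stored = db.get("node_id")
    if stored is None:
        db.set("node_id", derived_node_id)   # first run: remember who we are
    elif stored != derived_node_id:
        raise SystemExit("hsm_secret changed: derived node_id {} != stored {}"
                         .format(derived_node_id, stored))
```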
Suggested-by: @cdecker, @fiatjaf
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Changed: Config: `lightningd` will refuse to start with the wrong node_id (i.e. hsm_secret changes).