Return a warning message when the pubkey is not specified and there is no node in the graph.
This helps people who use Core Lightning as a signer and nothing else.
Changelog-Deprecated: rpc: `checkmessage` now returns an error when the pubkey is not specified and it is unknown in the network graph.
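A pytest-style sketch of the two cases (the node fixtures `l1`/`l2` and the error behaviour follow the changelog text above; this is an illustration, not a test from the tree):
```
sig = l2.rpc.signmessage('hello')['zbase']

# With an explicit pubkey, verification never needs the network graph.
assert l1.rpc.checkmessage('hello', sig, pubkey=l2.info['id'])['verified']

# Without a pubkey, l1 must find the signer in its graph; a signer-only
# node with no gossip cannot, and per the changelog this is now an error.
with pytest.raises(RpcError):
    l1.rpc.checkmessage('hello', sig)
```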
We close the channel because the payment fulfilment is not completely done, and
we generate 6 blocks, causing it to hit the CLTV deadline.
```
# send some payments, mine a block or two
inv = l2.rpc.invoice(10**4, '1', 'no_1')
l1.rpc.pay(inv['bolt11'])
# l2 attempts to close a channel that it leased, should fail
with pytest.raises(RpcError, match=r'Peer leased this channel from us'):
l2.rpc.close(l1.get_channel_scid(l2))
bitcoind.generate_block(6)
sync_blockheight(bitcoind, [l1, l2])
# make sure we're at the right place for the csv lock
> l2.daemon.wait_for_log('Blockheight: SENT_ADD_ACK_COMMIT->RCVD_ADD_ACK_REVOCATION LOCAL now 115')
tests/test_closing.py:823:
...
lightningd-2 2022-07-17T13:15:34.242Z DEBUG lightningd: Adding block 115: 39d95061935e9fc42b04c86ae60d0cf157765aff4c040f3a8d0b7888db19e015
lightningd-2 2022-07-17T13:15:34.244Z UNUSUAL 0266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c03518-chan#2: Peer permanent failure in CHANNELD_NORMAL: Fulfilled HTLC 0 SENT_REMOVE_COMMIT cltv 115 hit deadline
```
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Dualopend is not listening to the peer fd when it hangs up, so it doesn't
notice it's gone. We don't clean up the channel until it's done (usually
a good thing: it could be about to lock it in), but this harms us
here.
Fix the test failure and add a comment.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
The only places which should call try_reconnect now are the "connect"
command, and the disconnect path when it decides there's still an
active channel.
This introduces one subtlety: if we disconnect when there's no active
channel, but then the subd makes one, we have to catch that case!
This temporarily reverts "slow" reconnections to fast ones: see next
patch.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Connectd already does this when we *receive* an error or warning; now we
do it on send as well. This causes a slight behaviour change: we don't
disconnect when we close a channel, for example (our behaviour here
has been inconsistent across versions, depending on the code).
When connectd is told to disconnect, it now does so immediately, and
doesn't wait for subds to drain etc. That simplifies the manual
disconnect case, which now cleans up as it would from any other
disconnection when connectd says it's disconnected.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
In various places, we assumed that when `connected` is false,
everything is finished. This is not true: we should wait for the
state we expect.
In addition, various places allow reconnections, which interferes
with the logic; suppress them.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We want to avoid lost messages in the common cases.
This generalizes our drain code: we give each subd 5 seconds to
close itself, while still allowing it to send us traffic (if the
peer is still connected) and still sending it traffic.
We continue to send traffic *out* to the peer (if it's still
connected) until all subds are gone, and we still have a 5-second
timer to close the connection to the peer.
We don't do this "drain period" on reconnects: we kill
immediately.
We fix up one test which was looking for the "disconnect" message
explicitly.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
It's caused by a reconnection race: we hold the new incoming connection while we
ask lightningd to kill the old connection, but under some circumstances we leave
the new incoming connection hanging (with, in this case, old reestablish messages
unread!) and yet another connection comes in.
Later, we service the long-gone "incoming" connection: channeld
reads the ancient reestablish message and gets upset.
This test used to hang, but now that we've fixed the reconnection races it is fine.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This adds an X-Fail testcase demonstrating that, currently, the port
of a DNS announcement is not set to the corresponding network port
(in this case regtest) but is instead set to 0.
Changelog-None
This is a bit weird since it lives in the offers plugin, but it works
well. This should make runes much more approachable for people!
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
I'm assuming that nobody wants a rate slower than 1 per minute; we can
introduce 'drate' if we want a per-day kind of limit.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
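A hedged sketch of what this might look like when minting a rune (the `rate=` restriction string and the parameter shape here are assumptions based on the description above, not verbatim from the docs):
```
# Assumed syntax: a rune usable at most once per minute.
rune = l1.rpc.call('commando-rune', {'restrictions': ['rate=1']})['rune']
```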
This is needed for invoice, which can be asked to commit to giant descriptions
(though that's antisocial!).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Added: Plugins: `commando`: a new builtin plugin to send/recv peer commands over the lightning network, using runes.
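A hedged sketch of the flow this enables (parameter names follow the plugin's `commando` and `commando-rune` commands, but treat the exact shapes as assumptions): l1 mints a rune, hands it to the operator of l2, and l2 can then run a command on l1 over their peer connection.
```
# On l1: mint a read-only rune ("readonly" restriction name assumed).
rune = l1.rpc.call('commando-rune', {'restrictions': 'readonly'})['rune']

# On l2: run `getinfo` *on l1*, authenticated by the rune.
info = l2.rpc.call('commando', {'peer_id': l1.info['id'],
                                'method': 'getinfo',
                                'rune': rune})
assert info['id'] == l1.info['id']
```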
We have our JSON routines split over common/param.c, common/json.c,
common/json_helpers.c, common/json_tok.c and common/json_stream.c.
Change that to:
* common/json_parse (all the json_to_xxx routines)
* common/json_parse_simple (the simplest JSON parsing routines, for the CLI too)
* common/json_stream (all the json_add_xxx routines)
* common/json_param (all the param and param_xxx routines)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This was introduced to allow creating a shared secret, but it's better to use
makesecret, which creates unique secrets. getsharedsecret, being a generic ECDH
function, allows the caller to initiate conversations as if it were us; this
is generally OK, since we don't allow untrusted API access, but the commando
plugin had to blacklist it for read-only runes explicitly.
Since @ZmnSCPxj never ended up using this after introducing it, simply
remove it.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Removed: JSONRPC: `getsharedsecret` API: use `makesecret`
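A hedged sketch contrasting the two (the `hex` parameter matches `makesecret`'s documentation; the derived value is deterministic per node and per label, but cannot be used to impersonate the node in an ECDH exchange):
```
# Derive an application-specific secret from the node's HSM secret.
label = '6d792d617070'                      # "my-app" as hex
s1 = l1.rpc.makesecret(hex=label)['secret']
s2 = l1.rpc.makesecret(hex=label)['secret']
assert s1 == s2                             # stable for a given label...
assert s1 != l1.rpc.makesecret(hex='00')['secret']  # ...and unique per label
```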
We do smoothing, so waiting for this to hit the target (50k+) was taking
longer than my TIMEOUT=15; here we increase the speed at which it hits
exit velocity, so to speak.
We clean up our output tracking for timeout txs when the peer's
htlc_timeout self-expiry is hit; we'd also log its spend if we happened
to see it get spent.
This is a bit of a race, as they can't spend it until the locktime is
reached; hence the flakiness in tests that expected the `htlc_timeout`
to *not* be spent.
Instead, we only log an external's `htlc_timeout` spend in the case
where we also immediately register another output to track for it
(which only happens when said htlc is stealable).
Fixes #5405
In-Collab-With: @ddustin
Previously we wouldn't notify when a channel moved into state
"CHANNELD_AWAITING_LOCKIN", as this is the original state (so there's
no movement between states). This meant it was impossible to track when a
channel's commitment txs had been exchanged and we were waiting for
onchain confirmation.
It's useful to have notice of this initialization, though, so that the
`channel_state_changed` notification can track the entire lifecycle of
a channel in one place, from inception to close.
Note that for v2 "dual-funded" channels, we already notify at the same place, at
"DUALOPEND_AWAITING_LOCKIN" (the initial state for a dualopend channel
is "DUALOPEND_OPEN_INIT" -- this is the only state we don't get notified
at now...)
Changelog-Added: Plugins: `channel_state_changed` now triggers for a v1 channel's initial "CHANNELD_AWAITING_LOCKIN" state transition (from prior state "unknown")
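A hedged pyln-client sketch of a plugin consuming this notification (the payload field names follow the `channel_state_changed` documentation; treat the exact payload as an assumption):
```
from pyln.client import Plugin

plugin = Plugin()


@plugin.subscribe("channel_state_changed")
def on_state_change(plugin, channel_state_changed, **kwargs):
    ev = channel_state_changed
    # With this change, v1 channels also fire here for the initial
    # unknown -> CHANNELD_AWAITING_LOCKIN transition.
    plugin.log("channel {} moved {} -> {} (cause: {})".format(
        ev["channel_id"], ev["old_state"], ev["new_state"], ev["cause"]))


plugin.run()
```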
Fixes #5271
In-Collaboration-With: Base58 'n Coding Seminar Participants
Changelog-Changed: `fundchannel` now errors if you try to buy a liquidity ad but don't have `experimental-dual-fund` enabled
Since it's a deprecation, we simply ignore one, rather than properly
checking they match etc.
Fixes: #5386
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We deprecated msatoshi, but getroute() still returns both (unless
--allow-deprecated-apis=False), so we need to accept both.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
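A hedged illustration of the compatibility rule (a hypothetical helper, not code from the tree): prefer the new field, but fall back to the deprecated one while it is still emitted.
```
def route_amount(hop):
    # `getroute` hops carry `amount_msat` and, unless deprecated APIs are
    # disabled, the old `msatoshi` field too; accept whichever is present.
    return hop.get('amount_msat', hop.get('msatoshi'))
```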
This is a side-effect of fixing aging: sometimes we age our
rcvd_filter cache too fast, and thus re-xmit. This breaks
our test, since it used dev-disconnect on the channel_announce,
but that closes the connection to l3, not l1!
```
> assert l1.rpc.listchannels()['channels'] == []
E AssertionError: assert [{'active': T...ags': 1, ...}] == []
E Left contains 2 more items, first extra item: {'active': True, 'amount_msat': 100000000msat, 'base_fee_millisatoshi': 1, 'channel_flags': 0, ...}
```
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Fixes: #5403
There's one case where we can present more information, so do that, and
fix the documentation.
Changelog-Added: JSON-RPC: `listforwards` now shows `out_channel` in more cases: even if it couldn't actually send to it.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Fixes: #5329
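A hedged sketch of what this looks like from the JSON-RPC side (node fixture and field names as in the other snippets here; `out_channel` may still be absent for forwards we never got as far as routing):
```
for fwd in l2.rpc.listforwards()['forwards']:
    # With this change, failed forwards also report the channel we
    # *tried* to send out on, not just the successful ones.
    print(fwd['in_channel'], fwd.get('out_channel'), fwd['status'])
```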
Fixes 25699994e5 (pytest: fix flake in test_node_reannounce).
Converting ->set->list does not give a deterministic order, so we can
end up with the two lists differing only due to order:
```
# May send its own announcement *twice*, since it always spams us.
msgs2 = list(set(msgs2))
> assert msgs == msgs2
E AssertionError: assert ['01012ff5580...000000000000'] == ['01014973d81...000000000000']
E At index 0 diff: '01012ff55800f5b9492021372d74df4d6547bb0d32aec8d4c932a8c3b044e4bd983c429154e73091b0a2aff1cf9bbf16b37e6e9dd10ce4c2d949217366472acd341b0007800000080269a262bbd1750266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c035180266e453454e494f524245414d000000000000000000000000000000000000000000000000' != '01014973d8160dd8fc28e8fb25c40b9d5c68aed8dfb36af9fc13e4d2040fb3718553051a188ce98239c0bed138e1f8713a64acc7de98c183c9597fa58bf37f0b89bb0007800000080269a262bbd16c022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59022d2253494c454e544152544953542d336333626132392d6d6f6464656400000000000000'
E Full diff:
E [
E + '01012ff55800f5b9492021372d74df4d6547bb0d32aec8d4c932a8c3b044e4bd983c429154e73091b0a2aff1cf9bbf16b37e6e9dd10ce4c2d949217366472acd341b0007800000080269a262bbd1750266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c035180266e453454e494f524245414d000000000000000000000000000000000000000000000000',
E '01014973d8160dd8fc28e8fb25c40b9d5c68aed8dfb36af9fc13e4d2040fb3718553051a188ce98239c0bed138e1f8713a64acc7de98c183c9597fa58bf37f0b89bb0007800000080269a262bbd16c022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59022d2253494c454e544152544953542d336333626132392d6d6f6464656400000000000000',
E - '01012ff55800f5b9492021372d74df4d6547bb0d32aec8d4c932a8c3b044e4bd983c429154e73091b0a2aff1cf9bbf16b37e6e9dd10ce4c2d949217366472acd341b0007800000080269a262bbd1750266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c035180266e453454e494f524245414d000000000000000000000000000000000000000000000000',
E ]
```
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Got complaints about us hanging up on some nodes because they don't respond
to pings in a timely manner (e.g. ACINQ?), but that turned out to be something
else.
Nonetheless, we've had reports in the past of LND badly prioritizing gossip
traffic, and thus important messages can get queued behind gossip dumps!
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Changed: connectd: give busy peers more time to respond to pings.
Because we expire the cache on every flush, in DEVELOPER mode that can happen
in just over a second. If gossipd takes a while to process the gossip,
this can mean we actually forget we received it from the peer.
The easiest fix is to run this test in non-DEVELOPER mode.
```
# With DEVELOPER, this is long enough for gossip flush.
time.sleep(2)
> assert not l3.daemon.is_in_log(r'\[OUT\] 0100')
E AssertionError: assert not '2022-06-30T06:00:31.031Z 022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59-connectd: [OUT] 0100294834e94aa2407ace65373c84c2a8056d67a6778c22bd43f03a9000d8a01f075c3996ab6c88951bd9805adbfde78292022d7514ffcec37d5e466ef5ebd235c87ffa8a8567904502cd65ae913dbb8014509f442a6e37b00fd59fa3a06811a4965f45df1612d3a32b2566001430cfec4346cc021bd0a1bf6a87417dbc29e0715f16c4f9dc5a01bff07368ba8f54cf9813c3fc8f9f8b8c8005da2a18021f322a9a168af8f82d9deba49ced51bb26a9abc001bbad314a15baccc87c685e9302533c2e86a7b9dfe90bd4cb6be5c2da375be8c4a535a63cacff0544957a34ca865fdb61f9198d19dfd990a0fcbf8de39fa2c0fc54435c16e74df2b49fe3b6e905d599000006226e46111a0b59caaf126043eb5bbf28c34f3a5e332a1fc7b2b73cf188910f0000670000010001022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d590266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c0351802e3bd38009866c9da8ec4aa99cc4ea9c6c0dd46df15c61ef0ce1f271291714e5702324266de8403b3ab157a09f1f784d587af61831c998c151bcc21bb74c2b2314b'
```
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
`log-prefix` actually only sets the prefix for the lightningd core log messages;
the other logs have their own prefixes.
Make it a real, process-wide prefix which goes in front of the timestamp.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Fixed: options: `log-prefix` now correctly prefixes *all* log messages.
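A hedged pytest-style sketch of what the fix means in practice (the `log-prefix` option is the real one from the changelog; the fixture usage mirrors the other test snippets here):
```
l1 = node_factory.get_node(options={'log-prefix': 'node-one '})
# After this fix every line in l1's log carries the prefix, ahead of the
# timestamp, including subdaemon output (connectd, channeld, ...), not
# just lightningd's own messages.
l1.daemon.wait_for_log('Server started with public key')
```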
Looking at the CI logs, it seems like it took over 5 seconds, so
the unilateral close occurred instead of the expected rejection
of the WIRE_SHUTDOWN reply. Make it bulletproof.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>