It only had an effect if the peer didn't support option_gossip_queries, but
still, we don't want a gossip blast any more.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This avoids having to correlate with listpeers for the most pertinent
information.
This API predates plugins, otherwise we'd have listutxos and listpeers
and this would simply combine them appropriately. Still, it exists so
there's little reason not to make it more friendly.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Allow a user to select the utxo set that will be added to a
transaction, via the optional `utxos` parameter.
The format for `utxos` should be of the form ["txid:vout","..."].
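For illustration, here's a minimal sketch of parsing one such entry in C
(parse_utxo_str is a hypothetical helper, not the actual parameter code):

```
#include <stdbool.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical helper for illustration: split "txid:vout" into its
 * two parts.  A txid is 64 hex characters. */
static bool parse_utxo_str(const char *s, char txid[65], unsigned long *vout)
{
	const char *colon = strchr(s, ':');

	if (!colon || colon - s != 64)
		return false;
	memcpy(txid, s, 64);
	txid[64] = '\0';
	*vout = strtoul(colon + 1, NULL, 10);
	return true;
}
```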
As per BOLT02 #message-retransmission:
if `next_commitment_number` is 1 in both the `channel_reestablish` it sent and received:
- MUST retransmit `funding_locked`
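In code terms, the reconnect path needs a check along these lines (a
simplified sketch with illustrative names, not channeld's actual logic):

```
#include <stdbool.h>
#include <stdint.h>

/* If both sides' channel_reestablish carried next_commitment_number
 * == 1, no commitment has happened since funding_locked, so the peer
 * may never have seen ours: we must resend it. */
static bool must_retransmit_funding_locked(uint64_t next_commitment_sent,
					   uint64_t next_commitment_rcvd)
{
	return next_commitment_sent == 1 && next_commitment_rcvd == 1;
}
```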
This is probably worth preventing.
1. Our depth estimate would be inaccurate, possibly leading to us
timing out too early.
2. If we're not up-to-date, our onchain funds are unknown.
3. We wouldn't be able to send or receive HTLCs until we're synced anyway.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
`close` takes two optional arguments: `force` and `timeout`.
`timeout` doesn't time out the close (there's no way to do that), just
the JSON call. `force` (default `false`), if set, means we unilaterally
close at the timeout instead of just failing.
Timing out JSON calls is generally deprecated: that's the job of the
client. And the semantics of this are confusing, even to me! A
better API is a timeout which, if non-zero, is the time at which we
give up and unilaterally close.
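Roughly, the new shape is this (a sketch with hypothetical names, not
the actual close path):

```
struct channel;

/* Hypothetical helpers, for illustration only. */
void begin_mutual_close(struct channel *channel);
void unilateral_close(struct channel *channel);
void start_timer(unsigned int seconds,
		 void (*cb)(struct channel *), struct channel *channel);

/* A non-zero timeout is the point at which we stop negotiating and
 * unilaterally close; 0 means wait forever. */
static void handle_close(struct channel *channel, unsigned int timeout)
{
	begin_mutual_close(channel);
	if (timeout != 0)
		start_timer(timeout, unilateral_close, channel);
}
```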
The transition code is awkward, but we'll manage for the three
releases until we can remove it.
The new defaults are to unilaterally close after 48 hours.
Fixes: #2791
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
The helpme plugin is more comprehensive, but this at least connects to a
few random nodes, and doesn't require python libraries in the path or anything.
I selected the nodes from helpme.py, eliminating ones I couldn't reach.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Note that we move adding the plugin to the plugins list to the end;
otherwise the logging hook can examine the (uninitialized) plugin.
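Schematically, the fix is just an ordering change (names are illustrative,
not the actual lightningd code):

```
struct plugins;
struct plugin;

/* Hypothetical helpers, for illustration only. */
void plugin_init(struct plugin *p);
void plugin_append(struct plugins *plugins, struct plugin *p);

/* Initialize fully first, append last: a log hook firing during init
 * can then never find a half-constructed plugin in the list. */
static void register_plugin(struct plugins *plugins, struct plugin *p)
{
	plugin_init(p);			/* may emit logs -> hooks */
	plugin_append(plugins, p);	/* only now visible to hooks */
}
```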
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This is easy since we did the option parsing cleanup, but it has the
effect that plugins are launched from the lightning-dir. Now that
we have dynamic plugins, this means startup and post-startup plugins
experience the same environment.
This is absolutely a desirable thing: they can just drop files in
their cwd rather than having to move them (including, I might note,
core files!).
We also highlight the change in various places (and a drive-by update
of PLUGINS.md, which says you have to use --plugin).
The next patch adds a backwards compatibility wedge for old users of
relative plugin paths.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
The reason lnd was sending a sync error was that we were taking more than
30 seconds to send the channel_reestablish after connect. That's
understandable on my test node under valgrind, but shouldn't happen normally.
However, it seems it has happened at least once
(see https://github.com/ElementsProject/lightning/issues/2847):
space out startup so it's less likely to happen.
Suggested-by: @cfromknecht
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Updates the bolt version to 6639cef095a2ecc7b8f0c48c6e7f2f906fbfbc58.
This requires us to use the new bolt parser at generate-bolt.py,
and updates all of the type specifications (i.e. from u8 -> byte).
Due to API instability we are disabling the RPC method for this release, but
will re-enable it after the release.
Signed-off-by: Christian Decker <@cdecker>
This is a pain point with testing: there's a noticeable delay between
"Shutting down" from lightning-cli and being able to restart lightningd.
This fixes that by creating a canned response for this case, which is
simply written out immediately before exit. At this point, the pidfile
has been deleted, the sockets have been closed, and the database
has been closed.
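The mechanism is roughly this (a simplified sketch with hypothetical
names, not the actual shutdown code):

```
#include <string.h>
#include <unistd.h>

/* The reply is formatted while the JSON-RPC machinery is still alive,
 * then written out raw as the very last step before exit. */
static void send_canned_reply_and_exit(int conn_fd, const char *canned)
{
	ssize_t ignored = write(conn_fd, canned, strlen(canned));
	(void)ignored;	/* too late to report errors now */
	close(conn_fd);
	_exit(0);
}
```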
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
The next commit breaks it: `if b' }\n' not in buff:` is always true, since
we're about to clean up our JSON so there won't be a space. I could have
hacked the space into our JSON, but 6 months is long enough anyway.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
@pm47 gave a great bug report showing c-lightning sending the same
UPDATE_FEE over and over, with the final surprise result being that we
blamed the peer for sending us multiple empty commits!
The spam is caused by us checking "are we at the desired feerate?" but
then, if we can't afford the desired feerate, setting the feerate we
can afford, even though it's a duplicate. Doing the feerate cap before
we test whether it's what we already have eliminates this.
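In other words, the send-side check becomes something like this (a sketch
with hypothetical helpers, not the actual channeld code):

```
#include <stdbool.h>

typedef unsigned int u32;
struct peer;

/* Hypothetical helpers, for illustration only. */
u32 max_affordable_feerate(const struct peer *peer);
u32 current_feerate(const struct peer *peer);

/* Cap to what we can afford *first*, then compare with the current
 * rate: an unaffordable target can no longer produce a stream of
 * duplicate update_fees. */
static bool want_fee_update(const struct peer *peer, u32 desired, u32 *rate)
{
	u32 capped = desired;

	if (capped > max_affordable_feerate(peer))
		capped = max_affordable_feerate(peer);
	*rate = capped;
	return capped != current_feerate(peer);
}
```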
But the empty commits was harder to find: it's caused by a heuristic in
channel_rcvd_revoke_and_ack:
```
/* For funder, ack also means time to apply new feerate locally. */
if (channel->funder == LOCAL &&
    (channel->view[LOCAL].feerate_per_kw
     != channel->view[REMOTE].feerate_per_kw)) {
	status_trace("Applying feerate %u to LOCAL (was %u)",
		     channel->view[REMOTE].feerate_per_kw,
		     channel->view[LOCAL].feerate_per_kw);
	channel->view[LOCAL].feerate_per_kw
		= channel->view[REMOTE].feerate_per_kw;
	channel->changes_pending[LOCAL] = true;
}
```
We assume we never send duplicates, so we detect an otherwise-empty
change using the difference in feerates. If we don't set this flag,
we will get upset if we receive a commitment_signed since we consider
there to be no changes to commit.
This is actually hard to test: the previous commit adds a test which
spams update_fee and doesn't trigger this bug, because both sides
use the same "there's nothing outstanding" logic.
Fixes: #2701
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This allows you to prepare a tx, then release or discard it later.
It shares almost all the code with json_withdraw (which is now technically
superfluous).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
1. Add remote_ann_node_sigs and remote_bitcoin_sigs fields to the channel_init message;
2. The master adds the announcement signatures to the channel_init message and sends it to channeld.
Channeld will initialize the channel with these signatures when it re-enables the channel.
Unfortunately we get spurious uninitialized variable warnings with
anything but -O3 or no optimization, so set default CWARNFLAGS
appropriately.
MCP bench results without optimization:
store_load_msec:28509-31001(29206.6+/-9.4e+02)
vsz_kb:580004-580016(580006+/-4.8)
store_rewrite_sec:11.640000-12.730000(11.908+/-0.41)
listnodes_sec:1.790000-1.880000(1.83+/-0.032)
listchannels_sec:21.180000-21.950000(21.476+/-0.27)
routing_sec:2.210000-11.160000(7.126+/-3.1)
peer_write_all_sec:36.270000-41.200000(38.168+/-1.9)
MCP bench with -Og: 22% speedup vs no optimization
store_load_msec:21963-23645(22841+/-6.6e+02)
vsz_kb:579916
store_rewrite_sec:10.080000-10.960000(10.456+/-0.3)
listnodes_sec:1.280000-1.390000(1.338+/-0.047)
listchannels_sec:14.770000-16.080000(15.518+/-0.46)
routing_sec:0.990000-6.660000(3.958+/-2.2)
peer_write_all_sec:29.950000-32.950000(31.138+/-1)
MCP bench with -O2: 31% speedup vs no optimization
store_load_msec:20713-22088(21505.6+/-4.8e+02)
vsz_kb:579928
store_rewrite_sec:9.570000-11.200000(10.192+/-0.54)
listnodes_sec:0.960000-1.090000(1.028+/-0.045)
listchannels_sec:10.400000-11.770000(11.012+/-0.48)
routing_sec:0.300000-3.140000(1.978+/-1.1)
peer_write_all_sec:28.980000-30.310000(29.572+/-0.44)
MCP bench with -O3 -flto: 36% speedup vs no optimization
store_load_msec:19616-20191(19862.6+/-1.9e+02)
vsz_kb:578452
store_rewrite_sec:8.980000-9.960000(9.55+/-0.32)
listnodes_sec:0.920000-1.910000(1.18+/-0.38)
listchannels_sec:8.960000-9.450000(9.206+/-0.16)
routing_sec:0.730000-1.850000(1.438+/-0.42)
peer_write_all_sec:28.090000-29.410000(28.772+/-0.42)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Without this, the connect command hangs in one of my branches. This logic
is from the old days when gossipd handled connections, and we wanted
to make sure it didn't hang up on this client due to the error.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
68fe5eacde introduced a skip in the iteration over the available funds,
which means utxos[i] may be off the end of utxos.
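The bug class in miniature (hypothetical code, not the actual wallet
loop): advancing the index inside the body bypasses the bounds check.

```
/* Buggy shape: i++ inside the body skips the i < num_utxos check,
 * so utxos[i] can be read past the end of the array. */
for (size_t i = 0; i < num_utxos; i++) {
	if (!usable(utxos[i]))
		i++;		/* BUG: next read may be out of bounds */
	total += amount(utxos[i]);
}

/* Fixed shape: every advance goes back through the bounds check. */
for (size_t i = 0; i < num_utxos; i++) {
	if (!usable(utxos[i]))
		continue;
	total += amount(utxos[i]);
}
```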
Reported-and-debugged-by: @nitramiz
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
The spec says not to send a commitment_signed without any changes, but LND
does this. To understand why, you have to understand how LND works. I
haven't read the code, but I'm pretty sure it works like this:
1. lnd slows down to do garbage collection, because it's in Go.
2. When an alert timer goes off, noticing it's not making progress, it
sends a twitter message to @roasbeef.
3. @roasbeef sshes into the user's machine and binary patches lnd to send
a commitment_signed message.
4. Unfortunately he works so fast that various laws of causality are broken,
meaning sometimes the commitment_signed is sent before any of these
other things happen.
I'm fairly sure that this will stop as @roasbeef ages, or lnd introduces
some kind of causality enforcement fix.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This also allows plugins to do "hold invoices" à la LND, useful for
just-in-time inventory handling.
We're careful to handle the invoice getting paid behind our backs, and
the incoming HTLC going away.
Once @cdecker's sphinx rework is in, we can also hand the raw payload
to the invoice_payment_hook, for special effects.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
For online services, shorter may be fine, but for casual use I'm usually
in a different timezone than the payer, so it needs to be at least 1 day.
Certainly 1 hour is short if they have to open a channel.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
The user can explicitly create such things (within [] or "), since we paste
those cases literally, but not in the simple cases.
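A sketch of the idea (simplified, with hypothetical code, not
lightning-cli's actual logic):

```
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Pass through anything that already looks like a JSON token, and
 * wrap the simple bare-word cases in quotes ourselves.  (A real
 * version must also leave numbers, {...}, true/false etc. alone.) */
static char *maybe_quote(const char *arg)
{
	char *out;

	if (arg[0] == '[' || arg[0] == '"')
		return strdup(arg);	/* pasted literally, as before */
	out = malloc(strlen(arg) + 3);
	sprintf(out, "\"%s\"", arg);
	return out;
}
```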
Fixes: #2550
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Travis caught an error where this happened: when closingd reconnected, it
was sending the reestablish message without the option_dataloss_protect
fields. That causes the peer to fail the channel!
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
1. Rename channel_funding_locked to channel_funding_depth in
channeld/channel_wire.csv.
2. Add minimum_depth in struct channel in common/initial_channel.h and
change corresponding init function: new_initial_channel().
3. Add confirmation_needed in struct peer in channeld/channeld.c.
4. Rename channel_tell_funding_locked to channel_tell_depth.
5. Call channel_tell_depth even if depth < minimum_depth, and still call
lockin_complete in channel_tell_depth, iff depth > minimum_depth.
6. channeld ignores the channel_funding_depth unless it's >
minimum_depth (except to update the billboard, and to set
peer->confirmation_needed = minimum_depth - depth).
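Putting points 5 and 6 together, the resulting flow is roughly this
(simplified sketch, not the literal code):

```
typedef unsigned int u32;
struct channel { u32 minimum_depth; };
struct peer { struct channel *channel; u32 confirmation_needed; };

/* Hypothetical helpers, for illustration only. */
void update_billboard(struct peer *peer, u32 depth);
void lockin_complete(struct peer *peer);

static void handle_funding_depth(struct peer *peer, u32 depth)
{
	update_billboard(peer, depth);
	if (depth < peer->channel->minimum_depth) {
		/* Below minimum: just record how much further to go. */
		peer->confirmation_needed =
			peer->channel->minimum_depth - depth;
		return;
	}
	lockin_complete(peer);
}
```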