If we ever do this, we'd end up with an unspendable commitment tx anyway.
It could happen if we have htlcs added from the non-fee-paying
party while the fees are increased, though. But it's better to close the
channel and get a report about it if that happens.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We used to produce these, but they're invalid. When we switched to
libwally, it (correctly) refuses to get a txid for them.
Fixes: #2772
Fixes: #2759
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
It needs this in compat mode to detect old (pre-0.6.3) end of JSON.
But it always does the first command in compat mode.
This was never really reliable, since the first command could be to
a plugin, for which we simply pass through the JSON (though carefully
appending the expected '\n\n' if it's not already there).
Reported-by: @laanwj
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
That was changed to start the response object, which broke the openingd
code once we merged.
Of course, I should have *renamed it* when I changed the semantics!
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Compile broke because we were using low-level JSON primitives here
(which, incidentally, would produce bad JSON now, since we can't just
put a raw string inside an object!).
Use json_add_string, which also has the benefit of escaping JSON
for us.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Big wiring re-org for funding-continue
In openingd, we move the 'persistent' state (their basepoints,
pubkey, and the minimum_depth requirement for the opening tx) into
the state object. We also look to keep code-reuse between
'continue' and normal 'fundchannel' as high as possible. Both
of these call the same 'fundchannel_reply' at the end.
In opening_control.c, we remap fundchannel_reply such that it is
now aware of the difference between an externally and internally funded
channel open. It's the same return path, with the difference that
one finishes making and broadcasting the funding transaction; the
other skips this.
Add an RPC method (not working at the moment) called
`fundchannel_continue` that takes as its parameters a
node_id and a txid for a transaction (that ostensibly has an output
for a channel).
Some channels won't be opened with a wtx struct, so keep
the total funding amount separate from it; that way we can still
show some stats for listpeers.
Note that we're going to need to update/confirm this once
the transaction gets confirmed.
For `fundchannel_cancel` we're going to want
to 'successfully' fail a channel funding operation. This allows
us to report the failure back as an RPC success, instead of
automatically failing the RPC request.
We're going to need this for P2WSH scripts. Pull it out into
a common file and adapt the sanity checks so that they allow
either P2WSH or P2WPKH (previously it only encoded P2WPKH scripts).
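To make the shared check concrete, here's a minimal sketch assuming version-0 segwit scripts (illustrative only, not the project's actual helper): a valid script here is OP_0 followed by a single push of 20 bytes (P2WPKH) or 32 bytes (P2WSH).

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define OP_0 0x00

/* Accept either a P2WPKH (22-byte) or P2WSH (34-byte) witness script. */
static bool is_p2wpkh_or_p2wsh(const uint8_t *script, size_t len)
{
	if (len != 2 + 20 && len != 2 + 32)
		return false;
	if (script[0] != OP_0)
		return false;
	/* Second byte is the push length of the witness program. */
	return script[1] == len - 2;
}
```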
This is an old bug, where a plugin can get called while we're shutting
down (and have freed plugins), but it's triggered more reliably by the
new warning notification hook.
For good measure, we also make a plugin remove itself from the list when it is freed.
Valgrind error file: valgrind-errors.16763
==16886== Invalid read of size 8
==16886== at 0x422919: plugins_notify (plugin.c:1096)
==16886== by 0x413919: notify_warning (notification.c:61)
==16886== by 0x412BDE: logv (log.c:251)
==16886== by 0x412A98: log_ (log.c:311)
==16886== by 0x4044BE: bcli_finished (bitcoind.c:178)
==16886== by 0x459480: destroy_conn (poll.c:244)
==16886== by 0x459499: destroy_conn_close_fd (poll.c:250)
==16886== by 0x4619E1: notify (tal.c:235)
==16886== by 0x461A7E: del_tree (tal.c:397)
==16886== by 0x461AB5: del_tree (tal.c:407)
==16886== by 0x461AB5: del_tree (tal.c:407)
==16886== by 0x461AB5: del_tree (tal.c:407)
==16886== Address 0x634a578 is 40 bytes inside a block of size 352 free'd
==16886== at 0x4C2EDEB: free (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==16886== by 0x461AFD: del_tree (tal.c:416)
==16886== by 0x461FB7: tal_free (tal.c:481)
==16886== by 0x411E0A: main (lightningd.c:841)
==16886== Block was alloc'd at
==16886== at 0x4C2DB8F: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==16886== by 0x4617CE: allocate (tal.c:245)
==16886== by 0x461E4C: tal_alloc_ (tal.c:423)
==16886== by 0x42255E: plugins_new (plugin.c:106)
==16886== by 0x41133D: new_lightningd (lightningd.c:218)
==16886== by 0x411AD4: main (lightningd.c:649)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This is a pain point when testing: there's a noticeable delay between
"Shutting down" from lightning-cli and being able to restart lightningd.
This fixes that by creating a canned response for this case, which is
simply written out immediately before exit. At this point, the pidfile
has been deleted, the sockets have been closed, and the database
has been closed.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Move it closer to ccan/json_out, in preparation for using that as a
replacement.
In particular:
1. Add a 'quote' field in json_add_member.
2. json_add_member now always escapes if 'quote' is true.
3. json_member_direct is exposed so callers can avoid escaping (sketched below).
4. json_add_hex can use this, so no longer needs to be in json_stream.c.
5. We don't make JSON manually, but always use helpers.
6. We now flush the stream (wake the reader) only when we close it, or mark
the command as pending.
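As an illustration of points 1-3 (the real primitives live in lightningd's json_stream code and have different names and signatures), a quoting member writer escapes its value, while a direct writer emits bytes verbatim, e.g. for hex or pre-formatted numbers:

```c
#include <stdbool.h>
#include <stdio.h>

/* Write the member value verbatim (caller guarantees it is valid JSON). */
static void member_direct(FILE *out, const char *fieldname, const char *raw)
{
	fprintf(out, "\"%s\":%s", fieldname, raw);
}

/* Write the member, escaping and quoting the value when 'quote' is true. */
static void member_add(FILE *out, const char *fieldname,
		       bool quote, const char *value)
{
	if (!quote) {
		member_direct(out, fieldname, value);
		return;
	}
	fprintf(out, "\"%s\":\"", fieldname);
	for (const char *p = value; *p; p++) {
		/* Minimal escaping, just enough to keep the JSON well-formed. */
		if (*p == '"' || *p == '\\')
			fputc('\\', out);
		fputc(*p, out);
	}
	fputc('"', out);
}
```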
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
"result" should always be an object (so that we can add new fields),
so make that implicit in json_stream_success.
This makes our primitives well-formed: we previously used NULL as our
fieldname when calling the first json_object_start, which is a hack
since we're actually in an object and the fieldname is 'result' (which
was already written by json_object_start).
There were only two cases which didn't do this:
1. dev-memdump returned an array. No API guarantees on this.
2. shutdown returned a string.
I temporarily made shutdown return an empty object, which shouldn't
break anything, but I want to fix that later anyway.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
These are generalized from our internal implementations.
The main difference is that 'struct json_escaped' is now 'struct
json_escape', so we replace that immediately.
The difference between lightningd's json-writing ringbuffer and the
more generic ccan/json_out is that the latter has a better API and
handles escaping transparently if something slips through (though
it does offer direct accessors so you can mess things up yourself!).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This new parameter takes a list of outpoints (as txid:vout) and funds a channel from the corresponding utxos.
Example: fundchannel <id> 10000 normal 1 [10767f0db0e568127fffd7f70a154d4599f42d62babf63230a7c3378bfce3cb0:0, c9e040e0b5fc8c59d5e7834108fbc5583001f414dd83faf0a05cff9d1a92d32c:0]
This means there's now a semantic difference between the default `fromid`
and setting `fromid` explicitly to our own node_id. In the default case,
it means we don't charge ourselves fees on the route.
This means we can spend the full channel balance.
We still want to consider the pricing of local channels, however:
there's a *reason* to discount one over another, and that is to bias
things. So we add the first-hop fee to the *risk* value instead.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Take into account the fee we'd have to pay if we're the funder, and
also drop to 0 if the amount is less than the smallest HTLC the peer
will accept.
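A hedged sketch of that calculation (field and function names are illustrative, not lightningd's actual ones):

```c
#include <stdbool.h>
#include <stdint.h>

/* Amount we can actually send: subtract the commitment fee if we're the
 * funder, and collapse to zero below the peer's htlc_minimum. */
static uint64_t spendable_msat(uint64_t our_msat,
			       uint64_t commit_fee_msat,
			       uint64_t htlc_minimum_msat,
			       bool we_are_funder)
{
	uint64_t avail = our_msat;

	if (we_are_funder) {
		if (avail < commit_fee_msat)
			return 0;
		avail -= commit_fee_msat;
	}

	/* The peer won't accept an HTLC below its minimum. */
	if (avail < htlc_minimum_msat)
		return 0;

	return avail;
}
```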
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This was incorrectly handled before, hence the wrapper which checks
correctness of the arguments.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
This is important for things we automatically watched because they spend a
watched txo, but only onchaind knows the details of what the tx really is and
how it should be handled.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
This takes the guesswork out of `drop_to_chain` and allows us to annotate the
last_tx consistently.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
Fixes a corner case when reconnecting (which restarts channeld) at depth=6
where we didn't correctly send/respond with announce_signatures.
NOTE: A complete restart of the node may initialize channeld with an outdated height
because of an unfinished rescan. But when the rescan finishes, the funding tx watch is
fired (at least once), which then tells channeld the latest depth.
- Related changes for the `warning` notification
Add a `bool` parameter to `log_()` and `logv()`; this flag
indicates whether we should call the `warning` notifier.
1) Copying every peer's `log_book` into the `log_book` of
`ld` usually goes through `log_()` and `logv()`, and it may lead to
repeated `warning` notifications. So a `bool` which explicitly indicates
whether the `warning` notification is disabled during this call is
necessary.
2) The `LOG_INFO` and `LOG_DEBUG` levels don't need to call
the warning, so set that `bool` parameter to `FALSE` for those log levels and
only set it to `TRUE` for `LOG_UNUSUAL`/`LOG_BROKEN` (see the sketch below). As for `LOG_IO`,
it uses `log_io()` to log, so we needn't think about the notifier for it.
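A sketch of the rule in point 2 (level names mirror the commit text; this is not the real log.c):

```c
#include <stdbool.h>

enum log_level { LOG_IO, LOG_DEBUG, LOG_INFO, LOG_UNUSUAL, LOG_BROKEN };

/* Only UNUSUAL/BROKEN entries should trigger the `warning` notifier. */
static bool should_call_warning_notifier(enum log_level level)
{
	return level == LOG_UNUSUAL || level == LOG_BROKEN;
}
```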
This notification is based on the `LOG_BROKEN` and `LOG_UNUSUAL` log levels.
--Introduction
A notification for topic `warning` is sent every time a new `BROKEN`/
`UNUSUAL` level log entry (in plugins, we use `error`/`warn`) is generated, which
means something unusual/broken has happened, such as a channel failure or a
message-resolution failure.
```json
{
    "warning": {
        "level": "warn",
        "time": "1559743608.565342521",
        "source": "lightningd(17652): 0821f80652fb840239df8dc99205792bba2e559a05469915804c08420230e23c7c chan #7854:",
        "log": "Peer permanent failure in CHANNELD_NORMAL: lightning_channeld: sent ERROR bad reestablish dataloss msg"
    }
}
```
1. `level` is `warn` or `error`:
`warn` means something bad seems to have happened but it's under control;
still, we'd better check it;
`error` means something extremely bad is out of control, and it may lead
to a crash;
2. `time` is the number of seconds since the epoch;
3. `source`, in fact, is the `prefix` of the log entry. It indicates where
the event happened, and may have the following forms:
`<node_id> chan #<db_id_of_channel>:`, `lightningd(<lightningd_pid>):`,
`plugin-<plugin_name>:`, `<daemon_name>(<daemon_pid>):`, `jsonrpc:`,
`jcon fd <error_fd_to_jsonrpc>:`, `plugin-manager`;
4. `log` is the content of the original log entry.
--Note:
1. The main code uses `UNUSUAL`/`BROKEN`, while the plugin module uses `warn`
/`error`; for consistency with plugins, the warning notification uses `warn`
/`error`. But users who run c-lightning with plugins may want to call
`getlog` with a specific level when they receive a warning. It's the duty of the
plugin dev to turn `warn`/`error` into `UNUSUAL`/`BROKEN` and present it
to the users, or pass it directly to `getlog`;
2. About time: `json_log()` in the `log` module uses relative time, measured
from when the `log_book` was initialized to when this event happened.
But `UNUSUAL`/`BROKEN` events are rare and very likely to happen after
running for a long time, so users will pay more attention to absolute time.
-- Related changes
1. Move the definitions of `log`, `log_book`, and `log_entry` from `log.c`
to `log.h`, so they can be used in the warning declaration and definition.
2. Move `void json_add_time(struct json_stream *result, const char
*fieldname, struct timespec ts)` from `log.c` to `json.c`, and add the
related declaration to `json.h`. Now the notification function in
`notification.c` can call it.
3. Add a pointer to `struct lightningd` in `struct log_book`. This may
affect the independence of the `log` module, but storing a pointer to
`ld` is more direct.
We reserve inputs when we're going to send a transaction, but we don't
unreserve them if we crash. This is most graphically demonstrated by
the txprepare case, which makes it easier to trigger.
Instead, we should query bitcoind to see whether the tx made it out or
not, as we would do manually with dev-rescan-outputs.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We currently allocate utxos off cmd, but the next commit will persist a
wtx beyond the command which created it, breaking that assumption.
In general, a struct member should be owned by the struct itself, and
a tal context should be an explicit arg, not implicit.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Keeping the uintmap ordering all the broadcastable messages is expensive:
130MB for the million-channels project. But now that we delete obsolete entries
from the store, we can have the per-peer daemons simply read it sequentially
and stream the gossip themselves.
This is the most primitive version, where all gossip is streamed;
successive patches will bring back proper handling of timestamp filtering
and initial_routing_sync.
We add a gossip_state field to track what's happening with our gossip
streaming: it's initialized in gossipd, and currently always set, but
once we handle timestamps the per-peer daemon may do it when the first
filter is sent.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Encapsulating the peer state was a win for lightningd; not surprisingly,
it's even more of a win for the other daemons, especially as we want
to add a little gossip information.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Since we have more or less given up on the separation between response
callback and deserialization we can also just have the individual parts
returned.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
Suggested-by: Rusty Russell <@rustyrussell>
It disables the error when attempting to do a state transition from
`RCVD_ADD_ACK_REVOCATION` to `RCVD_ADD_ACK_REVOCATION`, which was already done before
getting to this point.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
Since we might soon be changing the payload it is a good idea to not just
expose the v0 payload, but also the raw payload for the plugin to
interpret. This might also include payloads that `lightningd` itself cannot
understand, but the plugin might.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
Suggested-by: Corné Plooy <@bitonic-cjp>
This is a rather simple hook that allows a plugin to take control over
HTLCs that were accepted, but weren't resolved as part of an invoice
or forwarded to the next hop yet.
The goal is to allow plugins to terminate a route early, perform
intermediate checks before the payment is accepted (check inventory or
service delivery before accepting in order to avoid a refund for
example) or handle an onion differently if it has a different
realm (cross-chain atomic swaps).
This doesn't implement serializing the payload or deserializing it,
instead just passes the full context along. The details for
serializing and deserializing will be implemented in a future commit.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
A new string field is added to the command structure and is specified at the creation of each native command, and in the JSON created by 'json_add_help_command()'.
Before:
Plugin for invoice_payment returned non-result response
"subscriptions": [], "hooks": ["invoice_payment"]}}
�V
After:
Plugin for invoice_payment returned non-result response {"jsonrpc": "2.0", "id": 6, "error": "Error while processing invoice_payment: ValueError(\"invalid literal for int() with base 10: '5.0'\")"}
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
1. Add remote_ann_node_sigs and remote_bitcoin_sigs fields to the channel_init message;
2. The master adds announcement signatures to the channel_init message and sends this message to channeld.
Channeld will initialize the channel with these signatures when it re-enables the channel.
Channeld sends announcement signatures to the master via this message.
When channeld receives a new channel announcement msg (after channel locking), it sends announcement signatures to the master via this message.
Keep watching and updating the scid until ANNOUNCE_MIN_DEPTH, even when the channel is private.
When the scid changes, we fail channeld so it will restart and initialize with the updated
scid and add it to the rtable. Reorgs can change the funding tx's height/index after lockin,
which could happen with a small minimum_depth=1.
This has two effects: most importantly, it avoids the problem where
lightningd creates an 800MB JSON blob in response to listchannels,
which causes OOM on the Raspberry Pi (our previous max allocation was
832MB). This is because lightning-cli can start draining the JSON
while we're filling the buffer, so we end up with a max allocation of
68MB.
But despite being less efficient (multiple queries to gossipd), it
actually speeds things up due to the parallelism:
MCP with -O3 -flto before vs after:
-listchannels_sec:8.980000-9.330000(9.206+/-0.14)
+listchannels_sec:7.500000-7.830000(7.656+/-0.11)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This happened with the 800M JSON for the MCP listchannels on the raspberry
pi, and tal calls abort() by default.
We switch to raw malloc here; we could override the error hook for
tal, but this is neater since we're doing low-level things anyway.
I tested it manually with this patch:
diff --git a/lightningd/json_stream.c b/lightningd/json_stream.c
index cec9f5771..206ba37c0 100644
--- a/lightningd/json_stream.c
+++ b/lightningd/json_stream.c
@@ -43,6 +43,14 @@ static void free_json_stream_membuf(struct json_stream *js)
free(membuf_cleanup(&js->outbuf));
}
+static void *membuf_realloc_hack(struct membuf *mb, void *rawelems,
+ size_t newsize)
+{
+ if (newsize > 1000000000)
+ return NULL;
+ return realloc(rawelems, newsize);
+}
+
struct json_stream *new_json_stream(const tal_t *ctx,
struct command *writer,
struct log *log)
@@ -53,7 +61,7 @@ struct json_stream *new_json_stream(const tal_t *ctx,
js->reader = NULL;
/* We don't use tal here, because we handle failure externally (tal
* helpfully aborts with a msg, which is usually right) */
- membuf_init(&js->outbuf, malloc(64), 64, membuf_realloc);
+ membuf_init(&js->outbuf, malloc(64), 64, membuf_realloc_hack);
tal_add_destructor(js, free_json_stream_membuf);
#if DEVELOPER
js->wrapping = tal_arr(js, jsmntype_t, 0);
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We now have a test blockchain for MCP which has the correct channels,
so this is not needed.
Also fix a benchmark script bug where 'mv "$DIR"/log
"$DIR"/log.old.$$' would fail if the log didn't exist from a previous run.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This was deeply surprising to me; there's a difference between a value not being
specified, and it being specified as "".
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
New fields don't have to be spelled out twice.
The raw versions are called _only, so we don't miss a call
accidentally. We can rename them when we finally deprecate the old
fields.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Instead of lightningd telling us when it's ready, we ask it.
This also provides an opportunity to have a plugin hook at this point.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
The original idea was to tal the channel off "ctx" (in fact, we'd like to set ctx to "ld").
But we already tal the channel off "ld" in new_channel(), so "ctx" is unused.
These are always handed to subdaemons as a set, so group them. This makes
it easier to add an fd (in the next patch).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We were checking against a hard-coded list, now we return a valid address only
if the hrp matches the chainparams.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
The chainparams are needed to know the prefixes, so instead of passing down
whether we are on testnet, we pass the entire params struct.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
We were deciding whether an address is a testnet address or not in the parser,
and then checking whether it matches our expectation outside as well. This
just returns the address version instead, and still checks it against our
expectation, but without having the parser need to know about address types.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
* remove libbase58, use base58 from libwally
This removes libbase58 and uses libwally instead.
It allocates and then frees some memory; we may want to
add a function in wally that doesn't, or override
wally_operations to use tal.
Signed-off-by: Lawrence Nahum lawrence@greenaddress.it
Without this, the connect command hangs in one of my branches. This logic
is from the old days when gossipd handled connections, and we wanted
to make sure it didn't hang up on this client due to the error.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
I misunderstood the API, this ended up nesting a result inside the JSON-RPC
result.
No concerns about backwards compatibility since this is so new.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This seems like overkill, at least for now. Handling the JSON
inline is clearer, for the existing examples at least.
We also remove the dummy hook, rather than fix it.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
For me this happened only under valgrind with
test_option_upfront_shutdown_script (in a pending branch):
==5063== by 0x51FC076: raise (raise.c:48)
==5063== by 0x51DD534: abort (abort.c:79)
==5063== by 0x1292D2: fatal (log.c:647)
==5063== by 0x116570: channel_set_state (channel.c:340)
==5063== by 0x116E04: lockin_complete (channel_control.c:73)
==5063== by 0x116F15: peer_got_funding_locked (channel_control.c:108)
==5063== by 0x117354: channel_msg (channel_control.c:208)
No CHANGELOG: this was introduced in a recent refactor.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
The old value of 1000 sat was too small to cover the dust reserves.
This led to the situation where, when trying to open a channel with a minimal
amount, the channel got refused because it was not able to cover the
commitment fees.
For this reason the minimal capacity should be increased to e.g. 10k
satoshi, as the technical minimum that also accounts for fees and
reserves is somewhere around 6k sat.
Refactored the weighted-reservoir-sampling algo to make it more straightforward.
It now uses the excess as a fraction of capacity as the weight. This favors channels that
are more _relatively_ unbalanced to be used for incoming payments.
Now passes test_invoice_routeboost_private() when using max fundamount=16777215.
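For reference, single-item weighted reservoir sampling with that weight looks roughly like this (struct and function names are made up, not the ones in the invoice code):

```c
#include <stdint.h>
#include <stdlib.h>

struct route_candidate {
	uint64_t excess_msat;    /* how much more this channel could receive */
	uint64_t capacity_msat;  /* total channel capacity */
};

/* Pick one candidate with probability proportional to excess/capacity.
 * Returns the chosen index, or -1 if every weight is zero. */
static int weighted_reservoir_pick(const struct route_candidate *c, size_t n)
{
	double total = 0.0;
	int chosen = -1;

	for (size_t i = 0; i < n; i++) {
		double w = c[i].capacity_msat
			? (double)c[i].excess_msat / (double)c[i].capacity_msat
			: 0.0;
		if (w == 0.0)
			continue;
		total += w;
		/* Standard reservoir step: keep i with probability w/total. */
		if (drand48() * total < w)
			chosen = (int)i;
	}
	return chosen;
}
```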
Make it a bit easier to track mutual channel closes by
adding the broadcast txid to the listpeers billboard.
Since lightningd manages the 'identity' of the closing tx, we need
to send it back to closingd so it can update the billboard
appropriately.
This also allows plugins to do "hold invoices" a-la LND, useful for
just-in-time inventory handling.
We're careful to handle the invoice getting paid behind our backs, and
the incoming HTLC going away.
Once @cdecker's sphinx rework is in, we can also hand the raw payload
to the invoice_payment_hook, for special effects.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We're going to make it async, so start by moving the core code into
invoice.c and having that directly call fail/success functions for the
htlc.
We add an extra check in fulfill_htlc() that the HTLC state is correct:
that can't happen now, but may once we're async.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
For online services, shorter may be fine, but for casual use I'm usually
in a different timezone than the payer, so it needs to be at least 1 day.
Certainly 1 hour is too short if they have to open a channel.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We'd like to display the receive and resolution times in the forwardings
table. In order to remember the receive time we need to store it in the DB
along with the other information.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
The timestamps are UNIX timestamps with 3 decimal places, even though we have
the timestamp with nanosecond granularity. This is a deliberate choice, so as
not to overload the users :-)
Signed-off-by: Christian Decker <decker.christian@gmail.com>
Travis caught an error where this happened: when closingd reconnected it
was sending the reestablish message without the option_dataloss_protect
fields. That causes the peer to fail the channel!
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Make json_start_member allocate extra space, which the caller can directly
print into, and also make the caller call js_written_some() itself.
MCP results from 5 runs, min-max(mean +/- stddev):
store_load_msec:35071-36817(35617.2+/-7e+02)
vsz_kb:2637488
store_rewrite_sec:35.790000-37.500000(36.6375+/-0.63)
listnodes_sec:0.690000-0.780000(0.72+/-0.035)
listchannels_sec:34.600000-36.340000(35.36+/-0.77)
routing_sec:30.310000-30.730000(30.445+/-0.17)
peer_write_all_sec:50.830000-52.750000(51.82+/-0.89)
MCP notable changes from previous patch (>1 stddev):
-listnodes_sec:0.720000-0.950000(0.86+/-0.077)
+listnodes_sec:0.690000-0.780000(0.72+/-0.035)
-listchannels_sec:40.300000-41.080000(40.668+/-0.29)
+listchannels_sec:34.600000-36.340000(35.36+/-0.77)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This doesn't result in a speedup for our benchmark, since we use the
cli tool which does the formatting.
MCP results from 5 runs, min-max(mean +/- stddev):
store_load_msec:33422-36830(35196.2+/-1.2e+03)
vsz_kb:2637488
store_rewrite_sec:36.030000-37.630000(36.794+/-0.52)
listnodes_sec:0.720000-0.950000(0.86+/-0.077)
listchannels_sec:40.300000-41.080000(40.668+/-0.29)
routing_sec:30.440000-31.030000(30.69+/-0.2)
peer_write_all_sec:50.060000-52.800000(51.416+/-0.91)
MCP notable changes from 2 patches ago (>1 stddev):
-listchannels_sec:48.560000-55.680000(52.642+/-2.7)
+listchannels_sec:40.300000-41.080000(40.668+/-0.29)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This is one of the more significant fields we print, but there's no
need to allocate a temp buffer or escape the resulting string.
MCP results from 5 runs, min-max(mean +/- stddev):
store_load_msec:34048-36002(35070.4+/-8.5e+02)
vsz_kb:2637488
store_rewrite_sec:35.110000-38.120000(36.604+/-1.2)
listnodes_sec:0.830000-1.020000(0.95+/-0.065)
listchannels_sec:48.560000-55.680000(52.642+/-2.7)
routing_sec:29.800000-33.170000(30.536+/-1.3)
peer_write_all_sec:49.260000-52.490000(50.316+/-1.1)
MCP notable changes from previous patch (>1 stddev):
-listchannels_sec:55.390000-58.110000(56.998+/-0.99)
+listchannels_sec:48.560000-55.680000(52.642+/-2.7)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We can save significant space by combining both sides: so much that we
can reduce the WIRE_LEN_LIMIT to something sane again.
MCP results from 5 runs, min-max(mean +/- stddev):
store_load_msec:34467-36764(35517.8+/-7.7e+02)
vsz_kb:2637488
store_rewrite_sec:35.310000-36.580000(35.816+/-0.44)
listnodes_sec:1.140000-2.780000(1.596+/-0.6)
listchannels_sec:55.390000-58.110000(56.998+/-0.99)
routing_sec:30.330000-30.920000(30.642+/-0.19)
peer_write_all_sec:50.640000-53.360000(51.822+/-0.91)
MCP notable changes from previous patch (>1 stddev):
-store_rewrite_sec:34.720000-35.130000(34.94+/-0.14)
+store_rewrite_sec:35.310000-36.580000(35.816+/-0.44)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
I tried to just do gossipd, but it was uncontainable, so this ended up being
a complete sweep.
We didn't get much space saving in gossipd, even though we should save
24 bytes per node.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Node ids are pubkeys, but we only use them as pubkeys for routing and checking
gossip messages. So we're packing and unpacking them constantly, and wasting
some space and time.
This introduces a new type, explicitly the SEC1 compressed encoding
(33 bytes). We ensure its validity when we load from the db, or get it
from JSON. We still use 'struct pubkey' for peer messages, which checks
validity.
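The new type is essentially just the raw compressed encoding (see common/node_id.h for the actual definition; this is an approximation):

```c
#include <stdint.h>

struct node_id {
	/* SEC1 compressed encoding: 0x02/0x03 parity byte + 32-byte X coord. */
	uint8_t k[33];
};
```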
Results from 5 runs, min-max(mean +/- stddev):
store_load_msec,vsz_kb,store_rewrite_sec,listnodes_sec,listchannels_sec,routing_sec,peer_write_all_sec
39475-39572(39518+/-36),2880732,41.150000-41.390000(41.298+/-0.085),2.260000-2.550000(2.336+/-0.11),44.390000-65.150000(58.648+/-7.5),32.740000-33.020000(32.89+/-0.093),44.130000-45.090000(44.566+/-0.32)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Pubkeys are not actually DER encoded; Pieter Wuille corrected
me: it's the encoding documented in SEC 1.
Results from 5 runs, min-max(mean +/- stddev):
store_load_msec,vsz_kb,store_rewrite_sec,listnodes_sec,listchannels_sec,routing_sec,peer_write_all_sec
38922-39297(39180.6+/-1.3e+02),2880728,41.040000-41.160000(41.106+/-0.05),2.270000-2.530000(2.338+/-0.097),44.570000-53.980000(49.696+/-3),32.840000-33.080000(32.95+/-0.095),43.060000-44.950000(43.696+/-0.72)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
- add config value min_capacity_sat that replaces the magic value
min_effective_htlc_capacity = AMOUNT_MSAT(1000000)
- add config switch min_capacity_sat
Adding a giant IO message simply causes it to be pruned immediately,
so truncate it if it's more than 1/64 the max size.
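Roughly the rule being applied (constants and names here are illustrative, not the real log code):

```c
#include <stddef.h>

/* Cap an IO dump at 1/64 of the log buffer's maximum size; anything larger
 * would just be pruned immediately anyway. */
static size_t cap_io_log_len(size_t len, size_t max_log_bytes)
{
	size_t cap = max_log_bytes / 64;
	return len > cap ? cap : len;
}
```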
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This causes natural batching, rather than flushing on every little addition of
JSON formatting.
Before, listchannels on 100,000 channels took 82.48 seconds; after,
6.82 seconds.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This lets us benchmark without a valid blockchain.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
fixup! gossipd: dev option to allow unknown channels.
Suggested-by: @cdecker
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
I started by trying to change the current infrastructure, but this is
really the only completely sync hook which makes sense; it needs to avoid
doing the db transaction, as well as waiting, and using a callback is just
overkill.
So with some regret, I open-coded it.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
1. Rename channel_funding_locked to channel_funding_depth in
channeld/channel_wire.csv.
2. Add minimum_depth in struct channel in common/initial_channel.h and
change corresponding init function: new_initial_channel().
3. Add confirmation_needed in struct peer in channeld/channeld.c.
4. Rename channel_tell_funding_locked to channel_tell_depth.
5. Call channel_tell_depth even if depth < minimum, and still call
lockin_complete in channel_tell_depth, iff depth > minimum_depth (see the sketch below).
6. channeld ignores the channel_funding_depth unless it's >
minimum_depth (except to update the billboard, and to set
peer->confirmation_needed = minimum_depth - depth).
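A self-contained sketch of points 4-6 (the function names follow the commit text, the types are simplified stand-ins, and the exact depth comparison is illustrative):

```c
#include <stdbool.h>
#include <stdint.h>

struct chan {
	uint32_t minimum_depth;
	uint32_t confirmation_needed;
	bool lockin_done;
};

static void lockin_complete(struct chan *c)
{
	c->lockin_done = true;
}

/* Always called with the latest depth; below minimum_depth we only remember
 * how many confirmations are still needed (billboard material), otherwise we
 * complete lockin. */
static void channel_tell_depth(struct chan *c, uint32_t depth)
{
	if (depth < c->minimum_depth) {
		c->confirmation_needed = c->minimum_depth - depth;
		return;
	}
	lockin_complete(c);
}
```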
- This will loop over channels when the 'id' argument is "all".
- The response will now contain an array of objects (peer_id, scid, ...).
- It will skip channels in invalid states.
- Moves the iffy channel/peer param handling into param_channel_or_all.
We set the version BIP32_VER_TEST_PRIVATE for testnet/regtest
BIP32 privkey generation with libwally-core, and set
BIP32_VER_MAIN_PRIVATE for mainnet.
For litecoin, we also set it the same way as for bitcoin.
1. amount operations should force you to check validity, rather than
needing a separate call, so make amount_msat_to_u32 return bool,
and WARN_UNUSED_RESULT it (sketched below).
2. Create a special parsing function for this; not only does this mean
we now only need that one amount call, but also 'check' will correctly
fail with invalid amounts (it only does the parsing step).
3. If we create a primitive which we immediately take(), we allocate it
off NULL to make it clear we expect its lifetime to end here.
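A sketch of the pattern in point 1 (the real helpers live in common/amount.h and differ in detail):

```c
#include <stdbool.h>
#include <stdint.h>

#define WARN_UNUSED_RESULT __attribute__((warn_unused_result))

struct amount_msat { uint64_t millisatoshis; };

/* Conversion can overflow, so it returns bool and the compiler warns if the
 * caller ignores the result. */
static WARN_UNUSED_RESULT bool amount_msat_to_u32(struct amount_msat msat,
						  uint32_t *millisatoshis)
{
	if (msat.millisatoshis > UINT32_MAX)
		return false;
	*millisatoshis = (uint32_t)msat.millisatoshis;
	return true;
}
```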
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>