This has two effects: most importantly, it avoids the problem where
lightningd creates an 800MB JSON blob in response to listchannels,
which causes OOM on the Raspberry Pi (our previous max allocation was
832MB). This is because lightning-cli can start draining the JSON
while we're filling the buffer, so we end up with a max allocation of
68MB.
But despite being less efficient (multiple queries to gossipd), it
actually speeds things up due to the parallelism:
MCP with -O3 -flto before vs after:
-listchannels_sec:8.980000-9.330000(9.206+/-0.14)
+listchannels_sec:7.500000-7.830000(7.656+/-0.11)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This happened with the 800MB JSON for the MCP listchannels on the Raspberry
Pi, and tal calls abort() by default.
We switch to raw malloc here; we could override the error hook for
tal, but this is neater since we're doing low-level things anyway.
I tested it manually with this patch:
diff --git a/lightningd/json_stream.c b/lightningd/json_stream.c
index cec9f5771..206ba37c0 100644
--- a/lightningd/json_stream.c
+++ b/lightningd/json_stream.c
@@ -43,6 +43,14 @@ static void free_json_stream_membuf(struct json_stream *js)
free(membuf_cleanup(&js->outbuf));
}
+static void *membuf_realloc_hack(struct membuf *mb, void *rawelems,
+ size_t newsize)
+{
+ if (newsize > 1000000000)
+ return NULL;
+ return realloc(rawelems, newsize);
+}
+
struct json_stream *new_json_stream(const tal_t *ctx,
struct command *writer,
struct log *log)
@@ -53,7 +61,7 @@ struct json_stream *new_json_stream(const tal_t *ctx,
js->reader = NULL;
/* We don't use tal here, because we handle failure externally (tal
* helpfully aborts with a msg, which is usually right) */
- membuf_init(&js->outbuf, malloc(64), 64, membuf_realloc);
+ membuf_init(&js->outbuf, malloc(64), 64, membuf_realloc_hack);
tal_add_destructor(js, free_json_stream_membuf);
#if DEVELOPER
js->wrapping = tal_arr(js, jsmntype_t, 0);
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We now have a test blockchain for MCP which has the correct channels,
so this is not needed.
Also fix a benchmark script bug where 'mv "$DIR"/log
"$DIR"/log.old.$$' would fail if the log didn't exist from a previous run.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This was deeply surprising to me; there's a difference between a value not being
specified, and it being specified as "".
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
New fields don't have to be spelled out twice.
The raw versions are called _only, so we don't miss a call
accidentally. We can rename them when we finally deprecate the old
fields.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Instead of lightningd telling us when it's ready, we ask it.
This also provides an opportunity to have a plugin hook at this point.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
The original idea was to tal the channel off "ctx" (in fact, we'd like ctx to be "ld").
But we already tal the channel off "ld" in new_channel(), so "ctx" is unused.
These are always handed to subdaemons as a set, so group them. This makes
it easier to add an fd (in the next patch).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We were checking against a hard-coded list, now we return a valid address only
if the hrp matches the chainparams.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
The chainparams are needed to know the prefixes, so instead of passing down
just a testnet flag, we pass the entire chainparams struct.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
We were deciding whether an address is a testnet address or not in the parser,
and then checking whether it matches our expectation outside as well. This
just returns the address version instead, and still checks it against our
expectation, but without having the parser need to know about address types.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
* remove libbase58, use base58 from libwally
This removes libbase58 and uses libwally instead.
It allocates and then frees some memory; we may want to
add a function in wally that doesn't, or override
wally_operations to use tal.
Signed-off-by: Lawrence Nahum <lawrence@greenaddress.it>
Without this, the connect command hangs in one of my branches. This logic
is from the old days when gossipd handled connections, and we wanted
to make sure it didn't hang up on this client due to the error.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
I misunderstood the API, this ended up nesting a result inside the JSON-RPC
result.
No concerns about backwards compatibility since this is so new.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This seems like overkill, at least for now. Handling the JSON
inline is clearer, for the existing examples at least.
We also remove the dummy hook, rather than fix it.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
For me this happened only under valgrind with
test_option_upfront_shutdown_script (in a pending branch):
==5063== by 0x51FC076: raise (raise.c:48)
==5063== by 0x51DD534: abort (abort.c:79)
==5063== by 0x1292D2: fatal (log.c:647)
==5063== by 0x116570: channel_set_state (channel.c:340)
==5063== by 0x116E04: lockin_complete (channel_control.c:73)
==5063== by 0x116F15: peer_got_funding_locked (channel_control.c:108)
==5063== by 0x117354: channel_msg (channel_control.c:208)
No CHANGELOG: this was introduced in a recent refactor.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
The old value of 1000 sat was too small to cover the dust reserves.
This led to a situation where channels opened with a minimal
amount got refused, because they were not able to cover the
commitment fees.
For this reason the minimal capacity should be increased to e.g. 10k
satoshi, as the technical minimum that also accounts for fees and
reserves is somewhere around 6k sat.
Refactored the weighted-reservoir-sampling algo to make it more straightforward.
It now uses the excess as a fraction of capacity as the weight. This favors
channels that are more _relatively_ unbalanced for incoming payments.
Now passes test_invoice_routeboost_private() when using max fundamount=16777215.
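Roughly, the selection works like single-item weighted reservoir sampling. A minimal
sketch under that assumption (names like struct candidate and pick_routehint are
illustrative, not the actual routehint code):

#include <stdint.h>
#include <stdlib.h>

/* Illustrative only: one-pass weighted reservoir sampling where the
 * weight is excess as a fraction of capacity, so relatively unbalanced
 * channels are favored for incoming payments. */
struct candidate {
	uint64_t capacity_msat;
	uint64_t excess_msat;	/* room left for an incoming payment */
};

static int pick_routehint(const struct candidate *c, size_t n)
{
	double total = 0.0;
	int chosen = -1;

	for (size_t i = 0; i < n; i++) {
		double w = (double)c[i].excess_msat / (double)c[i].capacity_msat;
		if (w <= 0.0)
			continue;
		total += w;
		/* Replace the current pick with probability w/total:
		 * each candidate ends up chosen proportionally to w. */
		if ((double)rand() / RAND_MAX * total <= w)
			chosen = (int)i;
	}
	return chosen;
}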
Make it a bit easier to track mutual channel closures by
adding the broadcast txid to the listpeers billboard.
Since lightningd manages the 'identity' of the closing tx, we need
to send it back to closingd so it can update the billboard
appropriately.
This also allows plugins to do "hold invoices" à la LND, useful for
just-in-time inventory handling.
We're careful to handle the invoice getting paid behind our backs, and
the incoming HTLC going away.
Once @cdecker's sphinx rework is in, we can also hand the raw payload
to the invoice_payment_hook, for special effects.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We're going to make it async, so start by moving the core code into
invoice.c and having that directly call fail/success functions for the
htlc.
We add an extra check in fulfill_htlc() that the HTLC state is correct:
a wrong state can't happen now, but it may once we're async.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
For online services, shorter may be fine, but for casual use I'm usually
in a different timezone than the payer, so it needs to be at least 1 day.
Certainly 1 hour is too short if they have to open a channel.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We'd like to display the receive and resolution times in the forwardings
table. In order to remember the receive time we need to store it in the DB
along with the other information.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
The timestamps are UNIX timestamps with 3 decimal places, even though we have
the timestamp with nanosecond granularity. This is a deliberate choice not to
overload the users :-)
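For illustration, dropping from nanoseconds to 3 decimal places is just a
formatting step; a hedged sketch (the helper name is made up):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative: print a nanosecond UNIX timestamp as seconds with
 * millisecond precision, e.g. 1550000000123456789 -> "1550000000.123". */
static void format_ts_ms(char *buf, size_t len, uint64_t ts_ns)
{
	snprintf(buf, len, "%"PRIu64".%03"PRIu64,
		 ts_ns / 1000000000, (ts_ns % 1000000000) / 1000000);
}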
Signed-off-by: Christian Decker <decker.christian@gmail.com>
Travis caught an error where this happened: when closingd reconnected, it
sent the reestablish message without the option_dataloss_protect
fields. That causes the peer to fail the channel!
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Make json_start_member allocate extra space, which the caller can directly
print into, and also make the caller call js_written_some() itself.
MCP results from 5 runs, min-max(mean +/- stddev):
store_load_msec:35071-36817(35617.2+/-7e+02)
vsz_kb:2637488
store_rewrite_sec:35.790000-37.500000(36.6375+/-0.63)
listnodes_sec:0.690000-0.780000(0.72+/-0.035)
listchannels_sec:34.600000-36.340000(35.36+/-0.77)
routing_sec:30.310000-30.730000(30.445+/-0.17)
peer_write_all_sec:50.830000-52.750000(51.82+/-0.89)
MCP notable changes from previous patch (>1 stddev):
-listnodes_sec:0.720000-0.950000(0.86+/-0.077)
+listnodes_sec:0.690000-0.780000(0.72+/-0.035)
-listchannels_sec:40.300000-41.080000(40.668+/-0.29)
+listchannels_sec:34.600000-36.340000(35.36+/-0.77)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This doesn't result in a speedup for our benchmark, since we use the
cli tool which does the formatting.
MCP results from 5 runs, min-max(mean +/- stddev):
store_load_msec:33422-36830(35196.2+/-1.2e+03)
vsz_kb:2637488
store_rewrite_sec:36.030000-37.630000(36.794+/-0.52)
listnodes_sec:0.720000-0.950000(0.86+/-0.077)
listchannels_sec:40.300000-41.080000(40.668+/-0.29)
routing_sec:30.440000-31.030000(30.69+/-0.2)
peer_write_all_sec:50.060000-52.800000(51.416+/-0.91)
MCP notable changes from 2 patches ago (>1 stddev):
-listchannels_sec:48.560000-55.680000(52.642+/-2.7)
+listchannels_sec:40.300000-41.080000(40.668+/-0.29)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This is one of the more significant fields we print, but there's no
need to allocate a temp buffer or escape the resulting string.
MCP results from 5 runs, min-max(mean +/- stddev):
store_load_msec:34048-36002(35070.4+/-8.5e+02)
vsz_kb:2637488
store_rewrite_sec:35.110000-38.120000(36.604+/-1.2)
listnodes_sec:0.830000-1.020000(0.95+/-0.065)
listchannels_sec:48.560000-55.680000(52.642+/-2.7)
routing_sec:29.800000-33.170000(30.536+/-1.3)
peer_write_all_sec:49.260000-52.490000(50.316+/-1.1)
MCP notable changes from previous patch (>1 stddev):
-listchannels_sec:55.390000-58.110000(56.998+/-0.99)
+listchannels_sec:48.560000-55.680000(52.642+/-2.7)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We can save significant space by combining both sides: so much that we
can reduce the WIRE_LEN_LIMIT to something sane again.
MCP results from 5 runs, min-max(mean +/- stddev):
store_load_msec:34467-36764(35517.8+/-7.7e+02)
vsz_kb:2637488
store_rewrite_sec:35.310000-36.580000(35.816+/-0.44)
listnodes_sec:1.140000-2.780000(1.596+/-0.6)
listchannels_sec:55.390000-58.110000(56.998+/-0.99)
routing_sec:30.330000-30.920000(30.642+/-0.19)
peer_write_all_sec:50.640000-53.360000(51.822+/-0.91)
MCP notable changes from previous patch (>1 stddev):
-store_rewrite_sec:34.720000-35.130000(34.94+/-0.14)
+store_rewrite_sec:35.310000-36.580000(35.816+/-0.44)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
I tried to just do gossipd, but it was uncontainable, so this ended up being
a complete sweep.
We didn't get much space saving in gossipd, even though we should save
24 bytes per node.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Node ids are pubkeys, but we only use them as pubkeys for routing and checking
gossip messages. So we're packing and unpacking them constantly, and wasting
some space and time.
This introduces a new type, explicitly the SEC1 compressed encoding
(33 bytes). We ensure its validity when we load from the db, or get it
from JSON. We still use 'struct pubkey' for peer messages, which checks
validity.
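A sketch of the shape of such a type (field and helper names here are
assumptions, not necessarily the ones in the tree):

#include <secp256k1.h>
#include <stdbool.h>

/* Illustrative: a node id is just the 33-byte SEC1 compressed encoding,
 * so gossipd can copy and compare it without parsing an EC point. */
struct node_id {
	unsigned char k[33];
};

/* Validity is only checked where it matters, e.g. on load from the db
 * or when the id arrives via JSON. */
static bool node_id_valid(const secp256k1_context *ctx,
			  const struct node_id *id)
{
	secp256k1_pubkey pk;
	return secp256k1_ec_pubkey_parse(ctx, &pk, id->k, sizeof(id->k)) == 1;
}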
Results from 5 runs, min-max(mean +/- stddev):
store_load_msec,vsz_kb,store_rewrite_sec,listnodes_sec,listchannels_sec,routing_sec,peer_write_all_sec
39475-39572(39518+/-36),2880732,41.150000-41.390000(41.298+/-0.085),2.260000-2.550000(2.336+/-0.11),44.390000-65.150000(58.648+/-7.5),32.740000-33.020000(32.89+/-0.093),44.130000-45.090000(44.566+/-0.32)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Pubkeys are not actually DER encoded; Pieter Wuille corrected
me: it's the SEC 1 documented encoding.
Results from 5 runs, min-max(mean +/- stddev):
store_load_msec,vsz_kb,store_rewrite_sec,listnodes_sec,listchannels_sec,routing_sec,peer_write_all_sec
38922-39297(39180.6+/-1.3e+02),2880728,41.040000-41.160000(41.106+/-0.05),2.270000-2.530000(2.338+/-0.097),44.570000-53.980000(49.696+/-3),32.840000-33.080000(32.95+/-0.095),43.060000-44.950000(43.696+/-0.72)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
- add config value min_capacity_sat that replaces the magic value
min_effective_htlc_capacity = AMOUNT_MSAT(1000000)
- add config switch min_capacity_sat
Adding a giant IO message simply causes it to be pruned immediately,
so truncate it if it's more than 1/64 the max size.
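A minimal sketch of that cap (names are illustrative):

#include <stddef.h>

/* Illustrative: cap a logged IO dump at 1/64 of the maximum log memory,
 * since a bigger entry would just force immediate pruning anyway. */
static size_t cap_io_len(size_t io_len, size_t max_log_mem)
{
	if (io_len > max_log_mem / 64)
		return max_log_mem / 64;
	return io_len;
}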
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This causes natural batching, rather than flushing on every little addition of
JSON formatting.
Before, listchannels on 100,000 channels took 82.48 seconds; after,
6.82 seconds.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This lets us benchmark without a valid blockchain.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Header from folded patch 'fixup!_gossipd__dev_option_to_allow_unknown_channels.patch':
fixup! gossipd: dev option to allow unknown channels.
Suggested-by: @cdecker
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
I started by trying to change the current infrastructure, but this is
really the only completely sync hook which makes sense; it needs to avoid
doing the db transaction, as well as waiting, and using a callback is just
overkill.
So, with some regret, I open-coded it.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
1. Rename channel_funding_locked to channel_funding_depth in
channeld/channel_wire.csv.
2. Add minimum_depth in struct channel in common/initial_channel.h and
change corresponding init function: new_initial_channel().
3. Add confirmation_needed in struct peer in channeld/channeld.c.
4. Rename channel_tell_funding_locked to channel_tell_depth.
5. Call channel_tell_depth even if depth < minimum_depth, and still call
lockin_complete in channel_tell_depth, iff depth > minimum_depth.
6. channeld ignores the channel_funding_depth unless it's >
minimum_depth (except to update the billboard, and to set
peer->confirmation_needed = minimum_depth - depth).
- This will loop over channels when the 'id' argument is "all".
- The response will now contain an array of objects (peer_id, scid, ...)
- It will skip channels in invalid states.
- Moves iffy channel/peer param stuff to param_channel_or_all
We set the version BIP32_VER_TEST_PRIVATE for testnet/regtest
BIP32 privkey generation with libwally-core, and set
BIP32_VER_MAIN_PRIVATE for mainnet.
For litecoin, we set it the same way as for bitcoin.
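For reference, a hedged sketch of how the version word feeds into libwally's
master-key derivation (bip32_key_from_seed is libwally's public API; the
surrounding helper is illustrative):

#include <stdbool.h>
#include <stdint.h>
#include <wally_bip32.h>

/* Illustrative: pick the BIP32 version word from the network, then let
 * libwally derive the master extended key from the seed. */
static int make_master_key(const unsigned char *seed, size_t seed_len,
			   bool testnet, struct ext_key *master)
{
	uint32_t version = testnet ? BIP32_VER_TEST_PRIVATE
				   : BIP32_VER_MAIN_PRIVATE;

	return bip32_key_from_seed(seed, seed_len, version, 0, master);
}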
1. amount operations should force you to check validity, rather than
needing a separate call, so make amount_msat_to_u32 return bool,
and WARN_UNUSED_RESULT it (a sketch of this pattern follows this list).
2. Create a special parsing function for this; not only does this mean
we now only need that one amount call, but also 'check' will correctly
fail with invalid amounts (it only does the parsing step).
3. If we create a primitive which we immediately take(), we allocate it
off NULL to make it clear we expect its lifetime to end here.
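A hedged sketch of the pattern in point 1 (the struct layout and macro are
simplified for illustration):

#include <stdbool.h>
#include <stdint.h>

#define WARN_UNUSED_RESULT __attribute__((warn_unused_result))

struct amount_msat { uint64_t millisatoshis; };

/* Illustrative: the conversion itself reports whether the value fits,
 * so callers can't silently ignore an out-of-range amount. */
static WARN_UNUSED_RESULT bool amount_msat_to_u32(struct amount_msat msat,
						  uint32_t *millisatoshis)
{
	if (msat.millisatoshis > UINT32_MAX)
		return false;
	*millisatoshis = (uint32_t)msat.millisatoshis;
	return true;
}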
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
- Introduce DB update `channel` values: `feerate_base` and `feerate_ppm`
- Make first use of the now context-related DB migration
- Add `struct channel` members of the same name
- Use struct values instead of config when committing new channels
The new name is less confusing, and most people should be transitioning to
listpays rather than this anyway.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This field was used by `pay` to hold the bolt11 description if the bolt11
string used `h` to hash the description (which nobody ever did). If the
`h` field wasn't present, it could contain anything, as it wasn't checked.
It's really useful to have a label for payments (e.g. '1 Cuban'), but adding
yet another option would be painful, so we simply rename 'description'
to 'label' except inside the db.
This means we need to do some tricky parameter parsing to handle array
and keyword JSON arguments, but only until we remove the old name.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Without this, there's no proof of payment, since it is the signed invoice
that makes the receipt valid.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
onchaind is in the correct position to tell us about them, so have it pass
them up as well.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
In order to avoid having to ask the HSM for the public keys for
their_unilateral/to_us outputs, we just store the `scriptPubkey` with the UTXO,
which can then be converted to the P2WPKH address.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
We want to disallow using unconfirmed outputs by default, so making the
default 1 confirmation seems a good idea. This also matches `bitcoind`'s
minimum confirmation requirement.
Arming this, however, breaks some of our tests, so I used `minconf=0` for the
breaking tests and added a new test specifically for the `minconf` parameter
for `fundchannel`.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
This allows us to specify that an output must have been confirmed before the
given maximum height, which in turn lets us require a minimum number of
confirmations for an output to be selected.
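As a worked example (illustrative helper, not the actual wallet code): with the
chain tip at height 600000 and minconf=6, only outputs confirmed at height
<= 599995 qualify, since confirmations = tip - confirmation_height + 1.

/* Illustrative: translate a minimum-confirmations requirement into the
 * maximum block height at which a selectable output may have confirmed. */
static unsigned int maxheight_from_minconf(unsigned int tip_height,
					   unsigned int minconf)
{
	if (minconf == 0)
		return 0;	/* in this sketch, 0 means "no restriction" */
	return tip_height - minconf + 1;
}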
Signed-off-by: Christian Decker <decker.christian@gmail.com>
This will make the logger write 4 newlines to re-attached logfiles.
The newlines won't appear in logfiles that are just created.
Additionally the server prints 50 '-' dashes before printing its
startup message, which also helps increase readability of the logfile.
This was inspired by the way Bitcoin Core handles logfiles.
An uncommitted channel should not keep the peer in the db, since the
uncommitted channel isn't in the db itself.
Fixes: #2367
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We need to do it in various places, but we shouldn't do it lightly:
the primitives are there to help us get overflow handling correct.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
As a side-effect of using amount_msat in gossipd/routing.c, we explicitly
handle overflows and don't need to pre-prune ridiculous-fee channels.
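A hedged sketch of what "explicitly handle overflows" means here (the helper
mirrors the style of the amount_msat primitives; the exact names differ):

#include <stdbool.h>
#include <stdint.h>

struct amount_msat { uint64_t millisatoshis; };

/* Illustrative: overflow-checked addition that reports failure instead
 * of wrapping; in routing, a failed addition just means "don't use this
 * channel", so ridiculous fees need no separate pre-pruning. */
static bool add_msat_checked(struct amount_msat *sum,
			     struct amount_msat a, struct amount_msat b)
{
	if (a.millisatoshis > UINT64_MAX - b.millisatoshis)
		return false;
	sum->millisatoshis = a.millisatoshis + b.millisatoshis;
	return true;
}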
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>