We now have a much stronger consistency check from the combination of
transaction wrapping and tal memory-leak detection. Transaction wrapping
ensures that each statement is executed before the transaction is committed.
The commit is also driven by the `io_loop`, which means that it is no longer
possible for us to have statements outside of transactions, and transactions
are guaranteed to commit at the round's end.
By adding the tal-awareness we can also get a much better indication as to
whether we have un-freed statements flying around, which we can test at the
end of the round as well.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
We will soon generalize the DB, so directly reaching into the `struct db`
instance to talk to the sqlite3 connection is bad anyway. This increases
flexibility and allows us to tailor the actual implementation to the
underlying DB.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
Dumb programs which have a --daemon option call fork() early. This is
terrible UX since startup errors get lost: the program exits with
"success" immediately then you discover via the logs that it didn't
start at all.
However, forking late introduced a heap of problems with changing
pids. Instead, fork early but keep stderr and the parent around: if
we fail early on, the parent fails with us. We release our parent
with an explicit action just before the main loop.
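To make that handshake concrete, here is a minimal self-contained sketch of
the pattern, assuming a simple pipe between child and parent (none of this
is the project's actual code):
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    static int parent_fd = -1;

    /* Fork early: the parent lingers, waiting for one status byte from the
     * child.  If the child hits a startup error (or dies), the parent exits
     * non-zero, so "daemon --bad-option" still fails visibly. */
    static void daemonize_but_wait(void)
    {
        int fds[2];
        pid_t pid;
        char status;

        if (pipe(fds) != 0) {
            perror("pipe");
            exit(1);
        }
        pid = fork();
        if (pid < 0) {
            perror("fork");
            exit(1);
        }
        if (pid > 0) {
            /* Parent: block until the child reports, then exit with its status. */
            close(fds[1]);
            if (read(fds[0], &status, 1) != 1)
                exit(1);        /* child died before becoming ready */
            exit(status);
        }
        /* Child: keep the write end (and stderr) until we know we're up. */
        close(fds[0]);
        parent_fd = fds[1];
    }

    /* The explicit "release our parent" action, just before the main loop. */
    static void release_parent(void)
    {
        char ok = 0;

        if (write(parent_fd, &ok, 1) != 1)
            exit(1);
        close(parent_fd);
        parent_fd = -1;
    }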
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
No code changes, just move.
Put all the dev options into the one function, and register (and
comment on) the early args first.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
I noticed that --network=regtest didn't override 'network=bitcoin' in
the config file.
Normally we parse the config file first, then the commandline (so the cmdline
wins). But for early options, we do cmdline first so we can find the config
file. That was fine when the only early option was the location of the
config file, but now it includes plugins and the network setting.
So do a boutique cmdline parse *just* to find the config file, then parse
the config file early options, then the cmdline early options.
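Roughly, the order becomes (function names here are purely illustrative):
    /* Pass 1: a boutique scan of argv just to locate the config file. */
    config_filename = find_config_file_from_cmdline(argc, argv);
    /* Pass 2: early options from the config file. */
    parse_early_options_from_config(config_filename);
    /* Pass 3: early options from the command line, so the cmdline wins. */
    parse_early_options_from_cmdline(argc, argv);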
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
The spec says to close the channel if they send us an error, but we
need to be more lenient to preserve channels with other
implementations.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Our previous param support was a bit limited in this case.
We create a dev- command multiplexer, so we can exercise it.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Since we now have a couple of long-lived dependents, it is time we stop
removing channels from the table once they are fully closed, and instead just
mark them as closed. This allows us to keep the forwards and transactions
foreign keys intact, and it may help us debug things after the fact.
Fixes #2028
Signed-off-by: Christian Decker <decker.christian@gmail.com>
I was working on rewriting our (somewhat chaotic) tx watching code
for 0.7.2, when I found this bug: we don't always notice the funding
tx in corner cases where more than one block is detected at
once.
This is just the one commit needed to fix the problem: it has some
unnecessary changes, but I'd prefer not to diverge too far from my
cleanup-txwatch branch.
Fixes: #2352
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Move it closer to ccan/json_out, in preparation for using that as a
replacement.
In particular (see the usage sketch after this list):
1. Add a 'quote' field in json_add_member.
2. json_add_member now always escapes if 'quote' is true.
3. json_member_direct is exposed to allow avoiding of escaping.
4. json_add_hex can use this, so no longer needs to be in json_stream.c.
5. We don't make JSON manually, but always use helpers.
6. We now flush the stream (wake reader) only when we close it, or mark
command as pending.
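A short usage sketch of the distinction (treat the exact signatures as
approximate):
    /* 'quote' true: the value is quoted and escaped by json_add_member. */
    json_add_member(js, "label", true, "%s", user_supplied_label);
    /* 'quote' false: raw unquoted number, no escaping. */
    json_add_member(js, "msatoshi", false, "%"PRIu64, msat);
    /* json_member_direct reserves space and bypasses escaping entirely,
     * which is what json_add_hex now builds on. */
    memcpy(json_member_direct(js, "tx", hexlen), hexstr, hexlen);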
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
"result" should always be an object (so that we can add new fields),
so make that implicit in json_stream_success.
This makes our primitives well-formed: we previously used NULL as our
fieldname when calling the first json_object_start, which is a hack
since we're actually in an object and the fieldname is 'result' (which
was already written by json_object_start).
There were only two cases which didn't do this:
1. dev-memdump returned an array. No API guarantees on this.
2. shutdown returned a string.
I temporarily made shutdown return an empty object, which shouldn't
break anything, but I want to fix that later anyway.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
These are generalized from our internal implementations.
The main difference is that 'struct json_escaped' is now 'struct
json_escape', so we replace that immediately.
The difference between lightningd's json-writing ringbuffer and the
more generic ccan/json_out is that the latter has a better API and
handles escaping transparently if something slips through (though
it does offer direct accessors so you can mess things up yourself!).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Take into account the fee we'd have to pay if we're the funder, and
also drop to 0 if the amount is less than the smallest HTLC the peer
will accept.
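Back-of-the-envelope sketch of the adjustment (variable names are
illustrative, not the real calculation):
    spendable_msat = our_msat - their_channel_reserve_msat;
    if (we_are_funder)
        spendable_msat -= commit_tx_fee_msat;   /* funder pays the fee */
    if (spendable_msat < their_htlc_minimum_msat)
        spendable_msat = 0;                     /* peer won't accept anything smaller */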
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
- Related changes for the `warning` notification
Add a `bool` parameter to `log_()` and `lov()`; this `bool` flag
indicates whether we should call the `warning` notifier.
1) The process of copying the `log_book` of every peer to the `log_book` of
`ld` is usually included in `log_()` and `lov()`, and it may lead to
repeated `warning` notifications. So a `bool` which explicitly indicates
whether the `warning` notification is disabled during this call is
necessary.
2) The `LOG_INFO` and `LOG_DEBUG` levels don't need to call
`warning`, so set that `bool` parameter to `FALSE` for these log levels and
only set it to `TRUE` for `LOG_UNUSUAL`/`LOG_BROKEN`. As for `LOG_IO`,
it uses `log_io()` to log, so we needn't think about the notifier for it.
We reserve inputs when we're going to send a transaction, but we don't
unreserve them if we crash. This is most graphically demonstrated by
the txprepare case, which makes it easier to trigger.
Instead, we should query bitcoind to see whether the tx made it out or
not, as we would do manually with dev-rescan-outputs.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Keeping the uintmap ordering all the broadcastable messages is expensive:
130MB for the million-channels project. But now that we delete obsolete
entries from the store, we can have the per-peer daemons simply read it
sequentially and stream the gossip themselves.
This is the most primitive version, where all gossip is streamed;
successive patches will bring back proper handling of timestamp filtering
and initial_routing_sync.
We add a gossip_state field to track what's happening with our gossip
streaming: it's initialized in gossipd, and currently always set, but
once we handle timestamps the per-peer daemon may do it when the first
filter is sent.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Encapsulating the peer state was a win for lightningd; not surprisingly,
it's even more of a win for the other daemons, especially as we want
to add a little gossip information.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This happened with the 800M JSON for the MCP listchannels on the raspberry
pi, and tal calls abort() by default.
We switch to raw malloc here; we could override the error hook for
tal, but this is neater since we're doing low-level things anyway.
I tested it manually with this patch:
diff --git a/lightningd/json_stream.c b/lightningd/json_stream.c
index cec9f5771..206ba37c0 100644
--- a/lightningd/json_stream.c
+++ b/lightningd/json_stream.c
@@ -43,6 +43,14 @@ static void free_json_stream_membuf(struct json_stream *js)
free(membuf_cleanup(&js->outbuf));
}
+static void *membuf_realloc_hack(struct membuf *mb, void *rawelems,
+ size_t newsize)
+{
+ if (newsize > 1000000000)
+ return NULL;
+ return realloc(rawelems, newsize);
+}
+
struct json_stream *new_json_stream(const tal_t *ctx,
struct command *writer,
struct log *log)
@@ -53,7 +61,7 @@ struct json_stream *new_json_stream(const tal_t *ctx,
js->reader = NULL;
/* We don't use tal here, because we handle failure externally (tal
* helpfully aborts with a msg, which is usually right) */
- membuf_init(&js->outbuf, malloc(64), 64, membuf_realloc);
+ membuf_init(&js->outbuf, malloc(64), 64, membuf_realloc_hack);
tal_add_destructor(js, free_json_stream_membuf);
#if DEVELOPER
js->wrapping = tal_arr(js, jsmntype_t, 0);
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
New fields don't have to be spelled out twice.
The raw versions are called _only, so we don't miss a call
accidentally. We can rename them when we finally deprecate the old
fields.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Instead of lightningd telling us when it's ready, we ask it.
This also provides an opportunity to have a plugin hook at this point.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
The original idea was to "tal" the channel off "ctx" (in fact, we'd like to set ctx to "ld").
But we already tal the channel off "ld" in new_channel(), so "ctx" is unused.
These are always handed to subdaemons as a set, so group them. This makes
it easier to add an fd (in the next patch).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
* remove libbase58, use base58 from libwally
This removes libbase58 and uses libwally instead.
It allocates and then frees some memory; we may want to
add a function in wally that doesn't, or override
wally_operations to use tal.
Signed-off-by: Lawrence Nahum lawrence@greenaddress.it
Refactored the weighted-reservoir-sampling algorithm to make it more straightforward.
It now uses the excess as a fraction of capacity as the weight. This favors channels
that are more _relatively_ unbalanced to be used for incoming payments.
Now passes test_invoice_routeboost_private() when using max fundamount=16777215.
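A self-contained sketch of the idea (illustrative names, not the project's
code): single-item weighted reservoir sampling, where a channel's weight is
its excess as a fraction of its capacity.
    #include <stdlib.h>

    struct candidate {
        double excess;    /* spare receive capacity */
        double capacity;  /* total channel capacity */
    };

    /* Pick one index with probability proportional to excess/capacity,
     * in a single pass; returns -1 if nothing has any excess. */
    static int weighted_reservoir_pick(const struct candidate *c, size_t n)
    {
        double total = 0.0;
        int chosen = -1;

        for (size_t i = 0; i < n; i++) {
            double w = c[i].capacity > 0 ? c[i].excess / c[i].capacity : 0.0;
            if (w <= 0.0)
                continue;
            total += w;
            /* Replace the current choice with probability w/total. */
            if ((double)rand() / RAND_MAX * total <= w)
                chosen = (int)i;
        }
        return chosen;
    }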
We're going to make it async, so start by moving the core code into
invoice.c and having that directly call fail/success functions for the
htlc.
We add an extra check in fulfill_htlc() that the HTLC state is correct:
that can't happen now, but may once we're async.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This doesn't result in a speedup for our benchmark, since we use the
cli tool which does the formatting.
MCP results from 5 runs, min-max(mean +/- stddev):
store_load_msec:33422-36830(35196.2+/-1.2e+03)
vsz_kb:2637488
store_rewrite_sec:36.030000-37.630000(36.794+/-0.52)
listnodes_sec:0.720000-0.950000(0.86+/-0.077)
listchannels_sec:40.300000-41.080000(40.668+/-0.29)
routing_sec:30.440000-31.030000(30.69+/-0.2)
peer_write_all_sec:50.060000-52.800000(51.416+/-0.91)
MCP notable changes from 2 patches ago (>1 stddev):
-listchannels_sec:48.560000-55.680000(52.642+/-2.7)
+listchannels_sec:40.300000-41.080000(40.668+/-0.29)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
I tried to just do gossipd, but it was uncontainable, so this ended up being
a complete sweep.
We didn't get much space saving in gossipd, even though we should save
24 bytes per node.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Node ids are pubkeys, but we only use them as pubkeys for routing and checking
gossip messages. So we're packing and unpacking them constantly, and wasting
some space and time.
This introduces a new type, explicitly the SEC1 compressed encoding
(33 bytes). We ensure its validity when we load from the db, or get it
from JSON. We still use 'struct pubkey' for peer messages, which checks
validity.
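An illustrative sketch of the shape of the new type (field and helper names
are assumptions, not the real API):
    #include <string.h>

    struct node_id {
        unsigned char k[33];    /* SEC1 compressed: 0x02/0x03 tag + 32-byte X */
    };

    /* Routing tables can compare and hash ids as raw bytes; no EC
     * unpacking is needed until we actually verify a signature. */
    static int node_id_cmp(const struct node_id *a, const struct node_id *b)
    {
        return memcmp(a->k, b->k, sizeof(a->k));
    }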
Results from 5 runs, min-max(mean +/- stddev):
store_load_msec,vsz_kb,store_rewrite_sec,listnodes_sec,listchannels_sec,routing_sec,peer_write_all_sec
39475-39572(39518+/-36),2880732,41.150000-41.390000(41.298+/-0.085),2.260000-2.550000(2.336+/-0.11),44.390000-65.150000(58.648+/-7.5),32.740000-33.020000(32.89+/-0.093),44.130000-45.090000(44.566+/-0.32)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Pubkeys are not actually DER encoded, as Pieter Wuille corrected
me: it's the SEC 1 documented encoding.
Results from 5 runs, min-max(mean +/- stddev):
store_load_msec,vsz_kb,store_rewrite_sec,listnodes_sec,listchannels_sec,routing_sec,peer_write_all_sec
38922-39297(39180.6+/-1.3e+02),2880728,41.040000-41.160000(41.106+/-0.05),2.270000-2.530000(2.338+/-0.097),44.570000-53.980000(49.696+/-3),32.840000-33.080000(32.95+/-0.095),43.060000-44.950000(43.696+/-0.72)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Adding a giant IO message simply causes it to be pruned immediately,
so truncate it if it's more than 1/64 the max size.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
1. Rename channel_funding_locked to channel_funding_depth in
channeld/channel_wire.csv.
2. Add minimum_depth in struct channel in common/initial_channel.h and
change corresponding init function: new_initial_channel().
3. Add confirmation_needed in struct peer in channeld/channeld.c.
4. Rename channel_tell_funding_locked to channel_tell_depth.
5. Call channel_tell_depth even if depth < minimum, and still call
lockin_complete in channel_tell_depth, iff depth > minimum_depth.
6. channeld ignores the channel_funding_depth unless it's >
minimum_depth (except to update the billboard, and to set
peer->confirmation_needed = minimum_depth - depth); see the sketch below.
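A hedged sketch of what item 6 amounts to on the channeld side (names and
structure are illustrative):
    if (depth < peer->channel->minimum_depth) {
        /* Not deep enough: remember the shortfall and only update
         * the billboard, otherwise ignore the message. */
        peer->confirmation_needed = peer->channel->minimum_depth - depth;
        billboard_update(peer);
        return;
    }
    /* Deep enough: proceed with funding lock-in as before. */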
As a side-effect of using amount_msat in gossipd/routing.c, we explicitly
handle overflows and don't need to pre-prune ridiculous-fee channels.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
These create two fields: one old field which is purely numeric,
and a modern one with a suffix, eg "msatoshi" and "amount_msat".
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
The current param_sat accepts "any": rename and move that to invoice.c
where it's called. We rename it to param_msat_or_any and invoice.c
is our first (trivial) amount_msat user.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
They're generally used pass-by-copy (unusual for C structs, but
convenient since they're basically u64), and all possibly problematic
operations return WARN_UNUSED_RESULT bool to make you handle the
over/underflow cases.
The new #include in json.h means bolt11.c sees the amount.h definition
of MSAT_PER_BTC, so delete its local version.
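For example, an addition ends up looking roughly like this (amount_msat_add
is shown as an illustration of the pattern; treat its exact spelling as
approximate):
    struct amount_msat total;

    /* The WARN_UNUSED_RESULT bool makes silently ignoring overflow a
     * compiler warning, so every call site handles it explicitly. */
    if (!amount_msat_add(&total, fee_msat, payment_msat))
        return false;   /* overflow: caller decides how to fail */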
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This hook is used to let a plugin decide whether we should continue
with the connection normally, or whether we should be dropping the
connection. Possible use-cases include policy enforcement (dropping
connections based on previous interactions), draining a node by
allowing only peers with an active channel to reconnect, or
temporarily preventing a channel from making progress.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
We need to still accept it when parsing the database, but this flag
should allow upgrade testing for devs building on top.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We store it in a strmap. This means we call the jsonrpc handler earlier,
so all callers need to call param() before they do anything else; only
json_listaddrs and json_help needed fixing.
Plugins still use '[usage]' for now.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Next patch will call commands to get usage inside jsonrpc_new(): to do
this it will need access to ld->jsonrpc, so we can't use the current
pattern.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Christian and I both unwittingly used it in form:
*tal_arr_expand(&x) = tal(x, ...)
Since '=' isn't a sequence point, the compiler can (and does!) cache
the value of x, handing it to tal *after* tal_arr_expand() moves it
due to tal_resize().
The new version is somewhat less convenient to use, but doesn't have
this problem, since the assignment is always evaluated after the
resize.
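To illustrate the hazard and a safe ordering (make_elem() is a made-up
helper, and this is not necessarily the new API's exact shape):
    /* UNSAFE with the old form: tal_arr_expand() may tal_resize() and
     * move 'x', yet the right-hand side can be evaluated against the
     * old 'x', since '=' imposes no ordering between its operands. */
    *tal_arr_expand(&x) = make_elem(x);

    /* Safe: resize first, assign afterwards, so 'x' is re-read. */
    size_t n = tal_count(x);
    tal_resize(&x, n + 1);
    x[n] = make_elem(x);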
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Obviously the Facebook relationship status joke was a bit subtle, but I've
continued it anyway because I'm especially susceptible to Dad jokes.
Suggested-by: @niftynei
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Usually, this means returning 'command_param_failed()' if param()
fails, and changing 'command_success(); return;' to 'return
command_success()'.
Occasionally, it's more complex: there's a command_its_complicated()
for the case where we can't exactly determine what the status is,
but it should be considered a last resort.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Handlers of a specific form are both designed to be used as callbacks
for param(), and also dispose of the command if something goes wrong.
Make them return the 'struct command_result *' from command_failed(),
or NULL.
Renaming them just makes sense: json_tok_XXX is used for non-command-freeing
parsers too.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
These routines free the 'struct command': a common coding error is not
to return immediately.
To catch this, we make them return a non-NULL 'struct command_result
*', and we're going to make the command handlers return the same (to
encourage 'return command_fail(...)'-style usage).
We also provide two sources for external use:
1. command_param_failed() when param() fails.
2. command_its_complicated() for some complex cases.
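A hedged sketch of the resulting handler shape (signatures approximate; only
the return discipline is the point):
    static struct command_result *json_example(struct command *cmd,
                                               const char *buffer,
                                               const jsmntok_t *obj,
                                               const jsmntok_t *params)
    {
        if (!param(cmd, buffer, params, NULL))
            return command_param_failed();  /* cmd already freed: return now */

        /* ... do the actual work, building a response ... */
        return command_success(cmd, json_stream_success(cmd));
    }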
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
json_escaped.[ch], param.[ch] and jsonrpc_errors.h move from lightningd/
to common/. Tests moved too.
We add a new 'common/json_tok.[ch]' for the common parameter parsing
routines which a plugin might want, taking them out of
lightningd/json.c (which now only contains the lightningd-specific
ones).
The rest is mainly fixing up includes.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
I want to use param functions in plugins, and they don't have struct
command.
I had to use a special arg to param() for check to flag it as allowing
extra parameters, rather than adding a one-use accessor.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This (will) avoid the plugin having to walk back from the params object
as it currently does.
No code changes; I removed UNUSED and UNNEEDED labels from the other
parameters though (as *every* json_rpc callback needs to call param()
these days, they're *always* used).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This currently just invokes GDB, but we could generalize it (though
pdb doesn't allow attaching to a running process, other python
debuggers seem to).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This tells the plugin both the `lightning-dir` as well as the
`rpc-filename` to use to talk to `lightningd`. Prior to this they
had to guess.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
This is prep work for when we sign htlc txs with
SIGHASH_SINGLE|SIGHASH_ANYONECANPAY.
We still deal with raw signatures for the htlc txs at the moment, since
we send them like that across the wire, and changing that was simply too
painful (for the moment?).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This is needed in order to be able to add methods while initializing
the plugins, but before actually moving to the config dir and starting
to listen.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
For onchaind we need to remove globals from memleak consideration;
we also change the htlc pointer to an htlc copy, which simplifies
things as well.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We promote 'struct json_stream' to contain the membuf; we only attach
the json_stream to the command when we actually call
json_stream_success / json_stream_fail.
This means we are closer to 'struct json_stream' being an independent
layer; the tests are already modified to use it directly to create
JSON.
This is also the first step toward re-enabling non-serial command
execution.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This is the final step to get the plugins working. After parsing the
early options (including `--plugin`), then starting and asking the
plugins for options, and finally reading in the options we just
registered, we just need to assemble the options and send them over.
Signed-off-by: Christian Decker <@cdecker>
Also includes some sanity checks for the results returned by the
plugin, such as ensuring that the ID is as expected and that we have
either an error or a real result.
The idea is that `plugin` is an early arg that is parsed (from command
line or the config file). We can then start the plugins and have them
tell us about the options they'd like to add to the mix, before we
actually parse them.
Signed-off-by: Christian Decker <@cdecker>