This moves field initialization into plugins_new(), and
adds a memleak helper to search the request map:
=================================== ERRORS ====================================
___________________ ERROR at teardown of test_plugin_command ___________________
[gw0] linux -- Python 3.7.1 /opt/python/3.7.1/bin/python3.7
> lambda: ihook(item=item, **kwds),
when=when,
)
../../../.local/lib/python3.7/site-packages/flaky/flaky_pytest_plugin.py:306:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tests/fixtures.py:112: in node_factory
ok = nf.killall([not n.may_fail for n in nf.nodes])
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <utils.NodeFactory object at 0x7f873b245278>, expected_successes = [True]
def killall(self, expected_successes):
"""Returns true if every node we expected to succeed actually succeeded""
unexpected_fail = False
for i in range(len(self.nodes)):
leaks = None
# leak detection upsets VALGRIND by reading uninitialized mem.
# If it's dead, we'll catch it below.
if not VALGRIND:
try:
# This also puts leaks in log.
leaks = self.nodes[i].rpc.dev_memleak()['leaks']
except Exception:
pass
try:
self.nodes[i].stop()
except Exception:
if expected_successes[i]:
unexpected_fail = True
if leaks is not None and len(leaks) != 0:
raise Exception("Node {} has memory leaks: {}".format(
self.nodes[i].daemon.lightning_dir,
> json.dumps(leaks, sort_keys=True, indent=4)
))
E Exception: Node /tmp/ltests-qm87my20/test_plugin_command_1/lightning-1/ has memory leaks: [
E {
E "backtrace": [
E "ccan/ccan/tal/tal.c:437 (tal_alloc_)",
E "lightningd/jsonrpc.c:1112 (jsonrpc_request_start_)",
E "lightningd/plugin.c:1041 (plugin_config)",
E "lightningd/plugin.c:1072 (plugins_config)",
E "lightningd/plugin.c:846 (plugin_manifest_cb)",
E "lightningd/plugin.c:252 (plugin_response_handle)",
E "lightningd/plugin.c:342 (plugin_read_json_one)",
E "lightningd/plugin.c:367 (plugin_read_json)",
E "ccan/ccan/io/io.c:59 (next_plan)",
E "ccan/ccan/io/io.c:407 (do_plan)",
E "ccan/ccan/io/io.c:417 (io_ready)",
E "ccan/ccan/io/poll.c:445 (io_loop)",
E "lightningd/io_loop_with_timers.c:24 (io_loop_with_tiers)",
E "lightningd/lightningd.c:840 (main)"
E ],
E "label": "lightningd/jsonrpc.c:1112:struct jsonrpc_reques",
E "parents": [
E "lightningd/plugin.c:66:struct plugin",
E "lightningd/lightningd.c:103:struct lightningd"
E ],
E "value": "0x55d6385e4088"
E },
E {
E "backtrace": [
E "ccan/ccan/tal/tal.c:437 (tal_alloc_)",
E "lightningd/jsonrpc.c:1112 (jsonrpc_request_start_)",
E "lightningd/plugin.c:1041 (plugin_config)",
E "lightningd/plugin.c:1072 (plugins_config)",
E "lightningd/plugin.c:846 (plugin_manifest_cb)",
E "lightningd/plugin.c:252 (plugin_response_handle)",
E "lightningd/plugin.c:342 (plugin_read_json_one)",
E "lightningd/plugin.c:367 (plugin_read_json)",
E "ccan/ccan/io/io.c:59 (next_plan)",
E "ccan/ccan/io/io.c:407 (do_plan)",
E "ccan/ccan/io/io.c:417 (io_ready)",
E "ccan/ccan/io/poll.c:445 (io_loop)",
E "lightningd/io_loop_with_timers.c:24 (io_loop_with_tiers)",
E "lightningd/lightningd.c:840 (main)"
E ],
E "label": "lightningd/jsonrpc.c:1112:struct jsonrpc_reques",
E "parents": [
E "lightningd/plugin.c:66:struct plugin",
E "lightningd/lightningd.c:103:struct lightningd"
E ],
E "value": "0x55d6386529d8"
E }
E ]
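The leaked allocations above are pending `struct jsonrpc_request`s that are only
reachable through the plugin's request map, so the leak detector cannot see
them. A minimal sketch of the kind of helper this commit adds, with stand-in
types and names rather than the real lightningd map and memleak API:

    #include <stddef.h>

    struct jsonrpc_request;                 /* opaque pending request */
    struct memtable;                        /* allocations still considered leaked */

    /* Assumed primitive: mark one allocation as reachable so it isn't reported. */
    void memtable_mark_referenced(struct memtable *mt, const void *p);

    /* Toy request map; the real code keys outstanding requests by id. */
    struct request_map {
        struct jsonrpc_request **reqs;
        size_t num;
    };

    /* The helper: walk the map and mark every outstanding request as referenced,
     * so requests that are merely awaiting a plugin's response aren't flagged. */
    void memleak_help_pending_requests(struct memtable *mt,
                                       const struct request_map *map)
    {
        for (size_t i = 0; i < map->num; i++)
            memtable_mark_referenced(mt, map->reqs[i]);
    }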
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Rather than reaching into data structures, let them register their own
callbacks. This avoids us having to expose "memleak_remove_xxx"
functions, and call them manually.
Under the hood, this is done by having a specially-named tal child of
the thing we want to assist, containing the callback.
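A rough sketch of the registration side of that mechanism, using ccan/tal; the
struct, the function name and the tal name here are assumptions for
illustration, not the exact lightningd internals:

    #include <ccan/tal/tal.h>

    struct memtable;

    typedef void (*memleak_cb)(struct memtable *mt, const void *owner);

    struct memleak_helper {
        memleak_cb cb;
    };

    /* Attach a helper to @owner: the callback lives in a tal child with a
     * well-known name, so the leak scanner can spot it while walking the
     * allocation tree and call cb(mt, owner), instead of us exposing and
     * manually calling "memleak_remove_xxx" functions. */
    void memleak_add_helper_cb(const tal_t *owner, memleak_cb cb)
    {
        struct memleak_helper *hlp = tal(owner, struct memleak_helper);
        hlp->cb = cb;
        tal_set_name(hlp, "memleak_helper");
    }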
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We now have a much stronger consistency check from the combination of
transaction wrapping and tal memory leak detection. Transaction wrapping
ensures that each statement is executed before the transaction is committed.
The commit is also driven by the `io_loop`, which means that it is no longer
possible for us to have statements outside of transactions, and transactions
are guaranteed to commit at the end of the round.
By adding tal-awareness we also get a much better indication of whether we
have un-freed statements flying around, which we can check at the end of the
round as well.
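A hedged sketch of what those checks amount to; the function names and fields
are placeholders, not the actual wallet/db API:

    #include <assert.h>
    #include <stdbool.h>
    #include <stddef.h>

    struct db {
        bool in_transaction;   /* every statement must run inside this */
        size_t pending_stmts;  /* statements created but not yet freed */
    };

    void db_begin_transaction(struct db *db)
    {
        assert(!db->in_transaction);
        db->in_transaction = true;
    }

    void db_commit_transaction(struct db *db)
    {
        /* A statement outside a transaction, or one that was never freed,
         * is now a hard error instead of a silent inconsistency. */
        assert(db->in_transaction);
        assert(db->pending_stmts == 0);
        db->in_transaction = false;
    }

    /* Driven from the io_loop: one transaction per round, committed when the
     * round ends, so nothing can run outside a transaction. */
    void db_round(struct db *db, void (*run_round)(struct db *))
    {
        db_begin_transaction(db);
        run_round(db);
        db_commit_transaction(db);
    }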
Signed-off-by: Christian Decker <decker.christian@gmail.com>
We will soon generalize the DB, so directly reaching into the `struct db`
instance to talk to the sqlite3 connection is bad anyway. This increases
flexibility and allows us to tailor the actual implementation to the
underlying DB.
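One way to read "tailor the actual implementation to the underlying DB" is an
operations table the rest of the code calls through, rather than touching the
sqlite3 handle directly. A sketch with illustrative names:

    #include <stdbool.h>

    struct db;
    struct db_stmt;

    /* Per-backend operations; the generic wallet code only ever calls these. */
    struct db_config {
        const char *name;                       /* e.g. "sqlite3" */
        bool (*exec)(struct db_stmt *stmt);     /* run a statement, no results */
        bool (*query)(struct db_stmt *stmt);    /* run a statement, keep results */
        bool (*begin_tx)(struct db *db);
        bool (*commit_tx)(struct db *db);
    };

    struct db {
        const struct db_config *config;  /* selected backend */
        void *conn;                      /* backend connection, opaque to callers */
    };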
Signed-off-by: Christian Decker <decker.christian@gmail.com>
This removes WIRE_FINAL_EXPIRY_TOO_SOON, which leaked too much info,
and adds the blockheight to WIRE_INCORRECT_OR_UNKNOWN_PAYMENT_DETAILS.
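For illustration, a self-contained sketch of what the new failure payload
carries (a 2-byte failure code followed by the HTLC amount and the current
blockheight, per BOLT #4 at the time of this change); the helpers here are toy
byte-writers, not the generated wire code:

    #include <stddef.h>
    #include <stdint.h>

    #define PERM 0x4000
    #define WIRE_INCORRECT_OR_UNKNOWN_PAYMENT_DETAILS (PERM | 15)

    /* Toy big-endian writers. */
    static size_t put_be16(uint8_t *p, uint16_t v) { p[0] = v >> 8; p[1] = v & 0xff; return 2; }
    static size_t put_be32(uint8_t *p, uint32_t v) { for (int i = 0; i < 4; i++) p[i] = v >> (24 - 8 * i); return 4; }
    static size_t put_be64(uint8_t *p, uint64_t v) { for (int i = 0; i < 8; i++) p[i] = v >> (56 - 8 * i); return 8; }

    /* Failure data is [u64 htlc_msat][u32 height], preceded by the failure code. */
    size_t make_unknown_payment_failmsg(uint8_t buf[14], uint64_t htlc_msat,
                                        uint32_t height)
    {
        size_t n = 0;
        n += put_be16(buf + n, WIRE_INCORRECT_OR_UNKNOWN_PAYMENT_DETAILS);
        n += put_be64(buf + n, htlc_msat);
        n += put_be32(buf + n, height);
        return n;
    }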
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We currently end up sleeping for 1 second for channeld and gossipd:
better to use a normal blocking waitpid and an alarm to wake us in
case they don't exit.
This speeds up `lightning-cli stop` on my machine from 2.008s to 0.008s:
roughly a 250 times speedup!
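A standalone illustration of the pattern (not lightningd's actual shutdown
code): block in waitpid() and let alarm() interrupt it if the child never
exits.

    #include <signal.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static void alarm_expired(int sig)
    {
        (void)sig;  /* only here to interrupt the blocking waitpid() */
    }

    /* Wait up to timeout_sec for pid to exit; true if it did.  (The tiny race
     * between alarm() and waitpid() is ignored in this sketch.) */
    static bool wait_for_child(pid_t pid, unsigned timeout_sec)
    {
        struct sigaction sa;
        int status;

        memset(&sa, 0, sizeof(sa));
        sa.sa_handler = alarm_expired;
        sigemptyset(&sa.sa_mask);
        sigaction(SIGALRM, &sa, NULL);   /* no SA_RESTART: waitpid must see EINTR */

        alarm(timeout_sec);
        bool exited = waitpid(pid, &status, 0) == pid;
        alarm(0);                        /* cancel any pending alarm */
        return exited;
    }

    int main(void)
    {
        pid_t pid = fork();
        if (pid == 0) {                  /* child: a subdaemon that exits promptly */
            usleep(10000);
            _exit(0);
        }
        printf("child exited in time: %s\n", wait_for_child(pid, 5) ? "yes" : "no");
        return 0;
    }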
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
During sync it is highly likely that we can coalesce multiple calls and share
results among them. We also report back failures for non-existing blocks early
on, so we don't run into issues with blocks that our bitcoind doesn't have
yet.
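A hedged sketch of the coalescing idea; the names and the fixed-size table are
purely illustrative, not the actual lightningd structures:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    struct filteredblock;                       /* result of one bitcoind lookup */
    typedef void (*block_cb)(const struct filteredblock *fb, void *arg);

    struct block_call { uint32_t height; block_cb cb; void *arg; };

    struct pending_calls {
        struct block_call calls[64];
        size_t num;
    };

    /* Ask for a filtered block.  Blocks bitcoind doesn't have yet are failed
     * immediately (cb with NULL).  Otherwise, if a lookup for the same height
     * is already in flight we just queue the callback on it, so during sync
     * many callers share a single bitcoind round trip.  Returns true if a new
     * lookup must actually be started. */
    bool getfilteredblock_request(struct pending_calls *pending,
                                  uint32_t bitcoind_height, uint32_t height,
                                  block_cb cb, void *arg)
    {
        if (height > bitcoind_height) {
            cb(NULL, arg);                      /* early failure: block not there yet */
            return false;
        }

        bool in_flight = false;
        for (size_t i = 0; i < pending->num; i++)
            if (pending->calls[i].height == height)
                in_flight = true;

        if (pending->num < 64)
            pending->calls[pending->num++] = (struct block_call){ height, cb, arg };

        return !in_flight;
    }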
Signed-off-by: Christian Decker <decker.christian@gmail.com>
This was caused by us not checking against the max_blockheight, but rather the
min_blockheight which can be negative with a newly created node. This is still
safe since we check for duplicates anyway in `wallet_filteredblock_add`.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
This is probably worth preventing.
1. Our depth estimate would be inaccurate possibly leading to us
timing out too early.
2. If we're not up-to-date our onchain funds are unknown.
3. We wouldn't be able to send or receive HTLCs until we're synced anyway.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We want to still allow incoming connections, and reestablishment of
channels, but if one tries to give us an HTLC, stall until we're
synced.
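A minimal sketch of that behaviour with placeholder names: the HTLC is parked
on a list instead of being handled, and the list is replayed once we're synced.

    #include <stdbool.h>
    #include <stddef.h>

    struct htlc_in;                              /* an incoming HTLC */

    void handle_htlc_now(struct htlc_in *hin);   /* normal accept/forward path */

    struct held_htlcs {
        struct htlc_in *htlcs[128];
        size_t num;
    };

    /* Incoming HTLC while not synced: we can't judge expiries against a
     * blockheight we don't trust, so stall it rather than act on it. */
    void peer_got_htlc(struct held_htlcs *held, bool synced, struct htlc_in *hin)
    {
        if (!synced) {
            if (held->num < 128)
                held->htlcs[held->num++] = hin;
            return;
        }
        handle_htlc_now(hin);
    }

    /* Once sync completes, release everything we stalled. */
    void replay_held_htlcs(struct held_htlcs *held)
    {
        for (size_t i = 0; i < held->num; i++)
            handle_htlc_now(held->htlcs[i]);
        held->num = 0;
    }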
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
If we don't know block height, we shouldn't be sending HTLCs. This
stops us forwarding HTLCs as well as new payments.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
I suspect multiple plugins trying to connect at the same
time are overrunning the 1-deep listen queue:
From man listen(2):
The backlog argument defines the maximum length to which the queue of
pending connections for sockfd may grow. If a connection request arrives
when the queue is full, the client may receive an error with an indication
of ECONNREFUSED.
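A standalone illustration of the fix's core, the listen(2) backlog argument
(this is not the actual plugin-socket code): a deeper queue absorbs several
plugins connecting at once instead of refusing them.

    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    int main(void)
    {
        struct sockaddr_un addr;
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);

        memset(&addr, 0, sizeof(addr));
        addr.sun_family = AF_UNIX;
        strncpy(addr.sun_path, "/tmp/example-rpc.sock", sizeof(addr.sun_path) - 1);
        unlink(addr.sun_path);

        if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
            perror("bind");
            return 1;
        }
        /* listen(fd, 1) leaves room for a single pending connection; anything
         * beyond that may be refused.  A larger backlog absorbs the burst of
         * plugins connecting at the same time. */
        if (listen(fd, 128) != 0) {
            perror("listen");
            return 1;
        }
        printf("listening with backlog 128\n");
        close(fd);
        return 0;
    }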
Fixes: #2922
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
`close` takes two optional arguments: `force` and `timeout`.
`timeout` doesn't time out the close (there's no way to do that), just
the JSON call. `force` (default `false`), if set, means we unilaterally
close at the timeout instead of just failing.
Timing out JSON calls is generally deprecated: that's the job of the
client. And the semantics of this are confusing, even to me! A
better API is a timeout which, if non-zero, is the time at which we
give up and unilaterally close.
The transition code is awkward, but we'll manage for the three
releases until we can remove it.
The new defaults are to unilaterally close after 48 hours.
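A sketch of the new semantics with illustrative names (not the real
implementation): a single timeout after which we go unilateral rather than
failing the call, with zero meaning we wait indefinitely.

    #include <stdint.h>

    #define CLOSE_DEFAULT_TIMEOUT_SEC (48 * 3600)   /* new default: 48 hours */

    enum close_action {
        CLOSE_WAIT_MUTUAL,    /* keep negotiating a cooperative close */
        CLOSE_UNILATERAL,     /* give up and broadcast our commitment tx */
    };

    /* Old API: `timeout` only timed out the JSON call and `force` decided what
     * the timeout meant.  New API: one timeout, after which we simply close
     * unilaterally; zero means "wait forever". */
    enum close_action close_decision(uint32_t timeout_sec, uint32_t waited_sec)
    {
        if (timeout_sec != 0 && waited_sec >= timeout_sec)
            return CLOSE_UNILATERAL;
        return CLOSE_WAIT_MUTUAL;
    }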
Fixes: #2791
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>