(The json when sendpay succeeds is too different from when sendpay fails, so
divide the sendpay result into two notifications: `sendpay_success` and
`sendpay_failure`)
`sendpay_failure`
A notification for topic `sendpay_failure` is sent every time a sendpay
fails (i.e. ends with `failed` status). The json is the same as the return value of
the `sendpay`/`waitsendpay` commands when they fail.
```json
{
  "sendpay_failure": {
    "code": 204,
    "message": "failed: WIRE_UNKNOWN_NEXT_PEER (reply from remote)",
    "data": {
      "id": 2,
      "payment_hash": "9036e3bdbd2515f1e653cb9f22f8e4c49b73aa2c36e937c926f43e33b8db8851",
      "destination": "035d2b1192dfba134e10e540875d366ebc8bc353d5aa766b80c090b39c3a5d885d",
      "msatoshi": 100000000,
      "amount_msat": "100000000msat",
      "msatoshi_sent": 100001001,
      "amount_sent_msat": "100001001msat",
      "created_at": 1561395134,
      "status": "failed",
      "erring_index": 1,
      "failcode": 16394,
      "failcodename": "WIRE_UNKNOWN_NEXT_PEER",
      "erring_node": "022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59",
      "erring_channel": "103x2x1",
      "erring_direction": 0
    }
  }
}
```
`sendpay` doesn't wait for the result of the payment, and `waitsendpay`
returns the result within the specified time or times out, but
`sendpay_failure` will always be sent when a sendpay fails, as long as
the topic was subscribed to.
The payment field includes the basic information of the payment, so the return values of `sendpay_success()` and `sendpay_fail()` should include this field.
Note that an "immediate_routing_failure" happens before the payment is created, so in that case the return value won't include the payment fields.
`sendpay_success`
A notification for topic `sendpay_success` is sent every time a sendpay
succeeds (i.e. ends with `complete` status). The json is the same as the return value of
the `sendpay`/`waitsendpay` commands when they succeed.
```json
{
  "sendpay_success": {
    "id": 1,
    "payment_hash": "5c85bf402b87d4860f4a728e2e58a2418bda92cd7aea0ce494f11670cfbfb206",
    "destination": "035d2b1192dfba134e10e540875d366ebc8bc353d5aa766b80c090b39c3a5d885d",
    "msatoshi": 100000000,
    "amount_msat": "100000000msat",
    "msatoshi_sent": 100001001,
    "amount_sent_msat": "100001001msat",
    "created_at": 1561390572,
    "status": "complete",
    "payment_preimage": "9540d98095fd7f37687ebb7759e733934234d4f934e34433d4998a37de3733ee"
  }
}
```
`sendpay` doesn't wait for the result of the payment, and `waitsendpay`
returns the result within the specified time or times out, but
`sendpay_success` will always be sent when a sendpay succeeds, as long as
the topic was subscribed to.
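A sketch of that non-blocking flow, under the same pyln-client assumptions as the previous example: call `sendpay` without following it with `waitsendpay`, and let the subscription record the preimage once the payment completes. The names here are illustrative, not part of the API.
```python
#!/usr/bin/env python3
from pyln.client import Plugin

plugin = Plugin()
preimages = {}  # payment_hash -> payment_preimage

@plugin.subscribe("sendpay_success")
def on_sendpay_success(plugin, sendpay_success, **kwargs):
    # The payload mirrors the successful `sendpay`/`waitsendpay` return above.
    preimages[sendpay_success["payment_hash"]] = sendpay_success["payment_preimage"]
    plugin.log("payment {} complete".format(sendpay_success["payment_hash"]))

plugin.run()
```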
531c8d7d9b
In this one, we always send my_current_per_commitment_point, though it's
ignored. And we have our official feature numbers.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
/bin/sh: 1: ccan/ccan/cdump/tools/cdump-enumstr: Text file busy
make[1]: *** [common/Makefile:81: common/gen_htlc_state_names.h] Error 2
make[1]: *** Waiting for unfinished jobs....
The fix is to make sure all generated headers are made first, and
thus cdump-enumstr.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
The largest change is inside hsmd: it hands a null per-commitment key
to the wallet to tell it to spend the to_remote output.
It can also now resolve unknown commitments, even if it doesn't have a
possible_remote_per_commitment_point from the peer.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Somehow this change got lost, but it's needed for option_static_remotekey,
to quote gen_peer_wire_csv:
msgtype,channel_reestablish,136
msgdata,channel_reestablish,channel_id,channel_id,
msgdata,channel_reestablish,next_commitment_number,u64,
msgdata,channel_reestablish,next_revocation_number,u64,
msgdata,channel_reestablish,your_last_per_commitment_secret,byte,32,option_data_loss_protect,option_static_remotekey
msgdata,channel_reestablish,my_current_per_commitment_point,point,,option_data_loss_protect
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
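To make the field layout above concrete, here is a hedged sketch (not c-lightning code) that packs a `channel_reestablish` following the quoted CSV. The big-endian 2-byte type prefix, the big-endian u64 counters, and the 33-byte compressed point are standard BOLT wire-format assumptions; the optional trailing fields (gated on option_data_loss_protect/option_static_remotekey) are included unconditionally for simplicity.
```python
import struct

def encode_channel_reestablish(channel_id: bytes,
                               next_commitment_number: int,
                               next_revocation_number: int,
                               your_last_per_commitment_secret: bytes,
                               my_current_per_commitment_point: bytes) -> bytes:
    assert len(channel_id) == 32                        # channel_id,channel_id
    assert len(your_last_per_commitment_secret) == 32   # byte,32
    assert len(my_current_per_commitment_point) == 33   # compressed point
    msg = struct.pack(">H", 136)                        # msgtype,channel_reestablish,136
    msg += channel_id
    msg += struct.pack(">Q", next_commitment_number)    # u64
    msg += struct.pack(">Q", next_revocation_number)    # u64
    msg += your_last_per_commitment_secret
    msg += my_current_per_commitment_point
    return msg
```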
As per BOLT02 #message-retransmission:
if `next_commitment_number` is 1 in both the `channel_reestablish` it sent and received:
- MUST retransmit `funding_locked`
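The rule reduces to a simple check on the two reestablish messages; a minimal illustration (a hypothetical helper, not the daemon's code):
```python
def must_retransmit_funding_locked(sent_reestablish: dict,
                                   received_reestablish: dict) -> bool:
    # Retransmit funding_locked when both sides report next_commitment_number == 1.
    return (sent_reestablish["next_commitment_number"] == 1
            and received_reestablish["next_commitment_number"] == 1)
```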
This moves field initialization into plugins_new(), and
adds a memleak helper to search the request map:
=================================== ERRORS ====================================
___________________ ERROR at teardown of test_plugin_command ___________________
[gw0] linux -- Python 3.7.1 /opt/python/3.7.1/bin/python3.7
>           lambda: ihook(item=item, **kwds),
            when=when,
        )
../../../.local/lib/python3.7/site-packages/flaky/flaky_pytest_plugin.py:306:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tests/fixtures.py:112: in node_factory
    ok = nf.killall([not n.may_fail for n in nf.nodes])
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <utils.NodeFactory object at 0x7f873b245278>, expected_successes = [True]
    def killall(self, expected_successes):
        """Returns true if every node we expected to succeed actually succeeded"""
        unexpected_fail = False
        for i in range(len(self.nodes)):
            leaks = None
            # leak detection upsets VALGRIND by reading uninitialized mem.
            # If it's dead, we'll catch it below.
            if not VALGRIND:
                try:
                    # This also puts leaks in log.
                    leaks = self.nodes[i].rpc.dev_memleak()['leaks']
                except Exception:
                    pass
            try:
                self.nodes[i].stop()
            except Exception:
                if expected_successes[i]:
                    unexpected_fail = True
            if leaks is not None and len(leaks) != 0:
                raise Exception("Node {} has memory leaks: {}".format(
                    self.nodes[i].daemon.lightning_dir,
>                   json.dumps(leaks, sort_keys=True, indent=4)
                ))
E       Exception: Node /tmp/ltests-qm87my20/test_plugin_command_1/lightning-1/ has memory leaks: [
E           {
E               "backtrace": [
E                   "ccan/ccan/tal/tal.c:437 (tal_alloc_)",
E                   "lightningd/jsonrpc.c:1112 (jsonrpc_request_start_)",
E                   "lightningd/plugin.c:1041 (plugin_config)",
E                   "lightningd/plugin.c:1072 (plugins_config)",
E                   "lightningd/plugin.c:846 (plugin_manifest_cb)",
E                   "lightningd/plugin.c:252 (plugin_response_handle)",
E                   "lightningd/plugin.c:342 (plugin_read_json_one)",
E                   "lightningd/plugin.c:367 (plugin_read_json)",
E                   "ccan/ccan/io/io.c:59 (next_plan)",
E                   "ccan/ccan/io/io.c:407 (do_plan)",
E                   "ccan/ccan/io/io.c:417 (io_ready)",
E                   "ccan/ccan/io/poll.c:445 (io_loop)",
E                   "lightningd/io_loop_with_timers.c:24 (io_loop_with_timers)",
E                   "lightningd/lightningd.c:840 (main)"
E               ],
E               "label": "lightningd/jsonrpc.c:1112:struct jsonrpc_reques",
E               "parents": [
E                   "lightningd/plugin.c:66:struct plugin",
E                   "lightningd/lightningd.c:103:struct lightningd"
E               ],
E               "value": "0x55d6385e4088"
E           },
E           {
E               "backtrace": [
E                   "ccan/ccan/tal/tal.c:437 (tal_alloc_)",
E                   "lightningd/jsonrpc.c:1112 (jsonrpc_request_start_)",
E                   "lightningd/plugin.c:1041 (plugin_config)",
E                   "lightningd/plugin.c:1072 (plugins_config)",
E                   "lightningd/plugin.c:846 (plugin_manifest_cb)",
E                   "lightningd/plugin.c:252 (plugin_response_handle)",
E                   "lightningd/plugin.c:342 (plugin_read_json_one)",
E                   "lightningd/plugin.c:367 (plugin_read_json)",
E                   "ccan/ccan/io/io.c:59 (next_plan)",
E                   "ccan/ccan/io/io.c:407 (do_plan)",
E                   "ccan/ccan/io/io.c:417 (io_ready)",
E                   "ccan/ccan/io/poll.c:445 (io_loop)",
E                   "lightningd/io_loop_with_timers.c:24 (io_loop_with_timers)",
E                   "lightningd/lightningd.c:840 (main)"
E               ],
E               "label": "lightningd/jsonrpc.c:1112:struct jsonrpc_reques",
E               "parents": [
E                   "lightningd/plugin.c:66:struct plugin",
E                   "lightningd/lightningd.c:103:struct lightningd"
E               ],
E               "value": "0x55d6386529d8"
E           }
E       ]
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
It seems we spend a lot of time waiting for `bitcoind` and `lightningd` to
talk to disks. This adds the `TEST_DIR` environment variable, allowing, for
example, `/dev/shm` or a faster disk than the one `/tmp` is on to be used as
the root directory for all test-related files.
Testing this on one of our builder machines cut the time to run the entire
suite under valgrind roughly in half (180-200 seconds vs 440-490 seconds).
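For example, running the suite with `TEST_DIR=/dev/shm` puts all per-test directories there. A hedged sketch of how a test helper might resolve the root (the `test_root` helper is hypothetical, not part of the test suite):
```python
import os
import tempfile

def test_root():
    # Honour TEST_DIR when set (e.g. /dev/shm); otherwise fall back to the
    # system temp dir, which is usually /tmp.
    return os.environ.get("TEST_DIR", tempfile.gettempdir())
```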
My machine would accumulate a number of zombie lightningd and bitcoind
processes over time while testing. Investigating this showed that if a fixture
raised an exception during fixture teardown then other fixtures that have not
been torn down would linger around. The issue is that pytest treats exceptions
in fixtures as non-recoverable and therefore will not catch them and call the
remaining ones.
This commit adds a new fixture that is there just to collect errors from
other fixtures and to ensure that anything that needs cleaning up, e.g.,
processes started by a fixture, is cleaned up before we raise an eventual
exception. This is achieved by making any fixture that needs cleaning up
depend on the teardown_checks fixture, which also serves as the central
point to collect errors and to print them at the end.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
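A minimal sketch of that pattern, assuming pytest fixtures; `teardown_checks` matches the name used above, while `TeardownErrors` and the `daemon` fixture are purely illustrative stand-ins for fixtures that spawn lightningd/bitcoind.
```python
import subprocess
import pytest

class TeardownErrors:
    def __init__(self):
        self.errors = []

    def add_error(self, msg):
        self.errors.append(msg)

@pytest.fixture
def teardown_checks():
    errs = TeardownErrors()
    yield errs
    # Runs only after every dependent fixture has already been torn down.
    if errs.errors:
        raise ValueError("\n".join(errs.errors))

@pytest.fixture
def daemon(teardown_checks):
    # Stand-in for a fixture that starts an external process.
    proc = subprocess.Popen(["sleep", "1000"])
    yield proc
    try:
        proc.terminate()
        proc.wait(timeout=5)
    except Exception as e:
        # Report instead of raising, so other fixtures still get cleaned up.
        teardown_checks.add_error("daemon did not shut down cleanly: {}".format(e))
```
Because `daemon` depends on `teardown_checks`, pytest tears it down first; errors are collected rather than raised, and the central fixture raises once everything else has been cleaned up.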
If they don't send us a gossip timestamp filter, we won't be sending
them any gossip, thus won't be aging the gossip_rcvd_filter. So
restrict it to 10,000 elements just to be sure.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
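Purely as an illustration of the idea (the real filter lives in gossipd's C code, and the eviction policy here is invented): a received-gossip filter with a hard cap of 10,000 entries, so it stays bounded even when it is never aged out.
```python
class GossipRcvdFilter:
    """Remembers gossip we have already received, capped at max_entries."""

    def __init__(self, max_entries=10000):
        self.max_entries = max_entries
        self.seen = set()

    def add(self, msg_id):
        if len(self.seen) >= self.max_entries:
            # Crude bound: drop everything once the cap is reached.
            self.seen.clear()
        self.seen.add(msg_id)

    def __contains__(self, msg_id):
        return msg_id in self.seen
```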
Use the same "child of tal object" trick to mark things "notleak".
That simplifies things and means we don't have to track them being
reallocated.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>