1. We asserted that there wouldn't be a raw failcode.
2. We didn't pass the failure information via JSON in this case.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We could use sendonion to do this, but it actually takes a different path through
pay, and I wanted to test all of it, so I made a new dev flag.
We currently get upset with the response:
lightningd/pay.c:556: payment_failed: Assertion `!hout->failcode' failed.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This completes the custommsg epic: we are finally back where we began all
that time ago (about 4 hours, really), in a plugin that implements some
custom logic.
This solves a couple of issues around the need to synchronously drop the
connection if we were required to understand what the peer was talking
about, while still letting users experiment without killing connections.
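As a rough illustration of the kind of plugin this enables (the hook name,
payload fields and return value here are assumptions based on the
description above, not a guaranteed interface):

    # Sketch only: log whatever custom message the peer sent and keep the
    # connection alive by telling lightningd to continue.
    from pyln.client import Plugin

    plugin = Plugin()

    @plugin.hook("custommsg")
    def on_custommsg(plugin, **kwargs):
        plugin.log("custom message hook payload: {}".format(kwargs))
        return {"result": "continue"}

    plugin.run()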
This is needed to fully implement handling of blockheight disagreements
between us and the payee.
If the payee believes the blockheight is higher than ours, then `pay`
should wait for our node to reach that blockheight.
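As a sketch of the intended use from the payer side (assuming pyln-client's
`LightningRpc`; the parameter name is an assumption, not the final
interface):

    from pyln.client import LightningRpc

    rpc = LightningRpc("/tmp/l1/lightning-rpc")  # hypothetical socket path
    # Returns once our node has processed a block at (or above) this height.
    rpc.call("waitblockheight", {"blockheight": 600000})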
Changelog-Added: Implement `waitblockheight` to wait for a specific blockheight.
Some Linux OSs impose a length limit on the path a Unix socket may have. This
is not an issue in `lightningd`, since we `chdir()` into that directory before
opening the socket; in pyln, however, this became a problem for some tests,
since we use absolute paths in the testing framework. It's also a rather
strange quirk to expose to users.
This patch introduces a `UnixSocket` abstraction that attempts to work around
these limitations by aliasing the directory containing the socket into
`/proc/self/fd` and then connecting using that alias.
It was inspired by the Open vSwitch code here: https://github.com/openvswitch/ovs/blob/master/python/ovs/socket_util.py
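A minimal sketch of the trick (not the actual pyln.testing class), assuming
a Linux `/proc` filesystem and the usual 108-byte `sun_path` limit:

    import os
    import socket

    def connect_unix(path, max_len=108):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        if len(path) < max_len:
            sock.connect(path)
            return sock
        # Path too long: alias its directory through /proc/self/fd and
        # connect via the short alias instead.
        dirfd = os.open(os.path.dirname(path), os.O_DIRECTORY | os.O_RDONLY)
        try:
            sock.connect("/proc/self/fd/{}/{}".format(dirfd, os.path.basename(path)))
        finally:
            os.close(dirfd)
        return sock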
Signed-off-by: Christian Decker <@cdecker>
Thanks to @t-bast, who made this possible by interop testing with Eclair!
Changelog-Added: Protocol: can now send and receive TLV-style onion messages.
Changelog-Added: Protocol: can now send and receive BOLT11 payment_secrets.
Changelog-Added: Protocol: can now receive basic multi-part payments.
Changelog-Added: RPC: low-level commands sendpay and waitsendpay can now be used to manually send multi-part payments.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Addresses issue #2753.
Formatting the JSON with the default parameters will escape the unicode
symbols in a way that c-lightning won't allow, leading to an exception.
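A small sketch of the difference in question (illustrative only; the exact
change in pylightning may differ):

    import json

    payload = {"description": "Grüße ☕"}
    json.dumps(payload)                      # {"description": "Gr\u00fc\u00dfe \u2615"}
    json.dumps(payload, ensure_ascii=False)  # {"description": "Grüße ☕"}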
Changelog-Fixed: `pylightning` now handles unicode characters in JSON-RPC requests and responses correctly.
This will change the command `listconfigs` output in several ways:
- Deprecates the duplicated "plugin" JSON output, replacing it with a
  "plugins" array containing, for each plugin, a substructure with its
  path, name and options (see the sketch below).
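A rough sketch of consuming the new structure (field names follow the
description above and are not a guaranteed schema):

    from pyln.client import LightningRpc

    rpc = LightningRpc("/tmp/l1/lightning-rpc")  # hypothetical socket path
    for p in rpc.listconfigs().get("plugins", []):
        print(p["path"], p["name"], p.get("options", {}))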
Changelog-Changed: JSON-RPC: `listconfigs` now structures plugins and includes their options
Changelog-Deprecated: JSON-RPC: `listconfigs` duplicated "plugin" paths
Do the same thing '--help' does with them; append `...`.
Valgrind noticed that we weren't NUL-terminating if the answer was over
78 characters.
Changelog-Fixed: JSON-RPC: `listconfigs` appends '...' to truncated config options.
Changelog-Changed: .lightningd plugins and files moved into <network>/ subdir
Changelog-Changed: WARNING: If you don't have a config file, you may now need to specify the network to lightning-cli
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This lets you have a default, but also a network-specific config.
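For example (assuming the default `~/.lightning` base directory), a shared
`~/.lightning/config` can hold common options while
`~/.lightning/testnet/config` overrides only the network-specific ones.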
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Changed: Options: `config` and `<network>/config` are read by default.
This leads to all sorts of problems; in particular it's incredibly
slow (days, weeks!) if bitcoind is a long way back. This also changes
the behaviour of a rescan argument referring to a future block: we will
also refuse to start in that case, which I think is the correct behaviour.
We already ignore bitcoind if it goes backwards while we're running.
Also cover a false positive memleak.
Changelog-Fixed: If bitcoind goes backwards (e.g. reindex) refuse to start (unless forced with --rescan).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Spaces just make life a little harder for everyone.
(Plus, fix the documentation: it's the 'jsonrpc' subsystem, not 'json'.)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We can definitely get a pong from l1 (should the test be slow enough):
it's l3 we are concerned about.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This simplifies our tests, too, since we don't need a magic option to
enable io logging in subdaemons.
Note that test_bad_onion still takes too long, due to a separate minor
bug, so that's marked and left dev-only for now.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
1. Printed form is always "[<nodeid>-]<prefix>: <string>"
2. "jcon fd %i" becomes "jsonrpc #%i".
3. "jsonrpc" log is only used once, and is removed.
4. "database" log prefix is use for db accesses.
5. "lightningd(%i)" becomes simply "lightningd" without the pid.
6. The "lightningd_" prefix is stripped from subd log prefixes, and pid removed.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Changed: Logging: formatting made uniform: [NODEID-]SUBSYSTEM: MESSAGE
Changelog-Removed: `lightning_` prefixes removed from subdaemon names, including in listpeers `owner` field.
A log can have a default node_id, which can be overridden on a per-entry
basis. This changes the format of logging, so some tests need rework.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
*If* we know the key has signed something else (as is the case for
channel_announcement) then we can effectively trust the key derivation.
This matches how LND's VerifyMessage works, though in the next patch
we will document it exactly.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Thanks Twitter helpers @duck1123 and @jochemin for tests!
And @bitconner for the initial test vector.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
I had a report of a 0.7.2 user whose node hadn't appeared on 1ml. Their
node_announcement wasn't visible to my node, either.
I suspect this is a consequence of recent versions reducing the amount of
gossip they send, as well as large nodes increasingly turning off gossip
altogether from some peers (as we do). We should ignore timestamp filters
for our own channels: the easiest way to do this is to push them out
directly from gossipd (other messages are sent via the store).
We change channeld to wrap the local channel_announcements: previously
we just handed them to gossipd as we would any other gossip message
received from our peer. Now gossipd knows to push them out, as they are local.
This interferes with the logic in tests/test_misc.py::test_htlc_send_timeout
which expects the node_announcement message last, so we generalize
that too.
[ Thanks to @trueptolmy for the bugfix! ]
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Since elements addresses look quite different from the bitcoin mainnet
addresses, I just added a sample to the chainparams fixture. In addition, I
extracted some of the fixed strings to reference chainparams instead.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
We were checking against hardcoded hrp and prefixes. Now we parametrize via
the chainparams.
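A sketch of what such a parametrized check looks like (the fixture key and
RPC field names are illustrative assumptions):

    def test_addr_prefix(node_factory, chainparams):
        l1 = node_factory.get_node()
        addr = l1.rpc.newaddr()['bech32']
        assert addr.startswith(chainparams['bip173_prefix'])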
Signed-off-by: Christian Decker <decker.christian@gmail.com>
We are checking against chain-dependent constants, so let's make sure we are
using the ones for the correct chain.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
It only had an effect if the peer didn't support option_gossip_queries, but
still, we don't want a gossip blast any more.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We will soon have a postgres backend as well, so we need a way to control the
postgres process and to provision DBs to the nodes. The two interfaces are the
DSN that we pass to the node, and the Python query interface needed to query
the DB from the tests (see the sketch below).
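A sketch of the kind of provider interface this implies (names and paths are
illustrative assumptions, not the exact pyln-testing API):

    import os
    import sqlite3

    class SqliteDbProvider:
        """Baseline provider: sqlite3 needs no external process."""

        def get_dsn(self, node_dir):
            # Nothing to pass: lightningd falls back to its default
            # sqlite3 file in the node directory.
            return None

        def query(self, node_dir, sql):
            path = os.path.join(node_dir, "lightningd.sqlite3")  # assumed filename
            db = sqlite3.connect(path)
            db.row_factory = sqlite3.Row
            try:
                return [dict(r) for r in db.execute(sql).fetchall()]
            finally:
                db.close()

    # A PostgresDbProvider would additionally start and stop the postgres
    # process and return a postgres:// DSN from get_dsn().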
Signed-off-by: Christian Decker <decker.christian@gmail.com>
It's generally clearer to have simple hardcoded numbers with an
#if DEVELOPER around them than apparent variables which aren't really variable.
Interestingly, our pruning test was always kinda broken: we have to pass
two cycles, since l2 will refresh the channel once to avoid pruning.
Do the more obvious thing, and cut the network in half and check that
l1 and l3 time out.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>