And use wallet_forward_status_in_db() everywhere in db code.
And clean up the extra CHANGELOG.md entry (looks like a rebase error?)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
The left join should make sure we still get the results, but
referencing the fields and/or attempting to write them to the JSON-RPC
result will cause unforeseen problems. So just omit them if we forgot
something.
Adapts the `test_forward_stats` test to include checks for the
`forwarded_payments` table. Will add checks for the `listforwardings`
RPC call next.
Signed-off-by: Christian Decker <@cdecker>
When the wrong key is used, the remote end simply hangs up.
We used to get a random errno, which tends to be "Operation now in progress."
Now that it's defined to be 0, detect it and provide a better error.
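Roughly what the check looks like (a standalone sketch; the real check sits in the connect/handshake error path):

    #include <errno.h>
    #include <string.h>

    /* Sketch: map the now well-defined errno of 0 to a more helpful
     * message than strerror(0) would give. */
    static const char *connect_failure_msg(int saved_errno)
    {
        /* errno 0 means the peer simply hung up on us, which is
         * almost always a sign the wrong key was used. */
        if (saved_errno == 0)
            return "peer closed connection (wrong key?)";
        return strerror(saved_errno);
    }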
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This was from a different series, so I just cherry-picked it.
It adds ccan/membuf as a dependency of ccan/rbuf, though we don't use
it directly yet.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Give a clear error at the beginning if it's not a bolt11 payment,
rather than falling foul of other checks.
This will work at least until some altcoin adopts the 'ln' prefix :)
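Something like this early check (a hypothetical helper, not the actual bolt11 decoder):

    #include <stdbool.h>
    #include <string.h>

    /* Every bolt11 invoice string starts with "ln" (lnbc, lntb,
     * lnbcrt, ...), so reject anything else up front with a clear
     * error instead of letting later checks fail confusingly. */
    static bool looks_like_bolt11(const char *str)
    {
        return strncmp(str, "ln", 2) == 0;
    }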
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This covers both cbde3e20f7, which added
the parameter names, and d23a0e8adc, which
added a fallback for missing man page entries.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This was fixed in d5c3626fa73967ff59d60b36c8ec21394f357f9d; we're still
getting used to CHANGELOG.md.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Usually Travis triggers corner cases because it's so slow, but this
time the moons aligned, and it managed to fail test_node_reannounce
because it generated the updated node_announcement with the same
timestamp as the old one.
This is because we only updated "last_announce_timestamp" when
we generated the announcement, not when we got it off the wire or
loaded it from the gossip store.
The fix is to ask the routing code what the latest timestamp is;
we could still generate a clashing timestamp if (1) the gossip store
is lost, and (2) we restart within one second. Hard to care.
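The idea, as a standalone sketch (not the actual gossipd code):

    #include <stdint.h>
    #include <time.h>

    /* Never reuse the latest timestamp the routing code knows about,
     * or peers will treat the new node_announcement as stale. */
    static uint32_t next_announce_timestamp(uint32_t latest_known)
    {
        uint32_t now = time(NULL);
        return now > latest_known ? now : latest_known + 1;
    }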
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Have c-lightning nodes send out the largest value for
`htlc_maximum_msat` that makes sense, i.e. the lesser of
the peer's max_inflight_htlc value and the total channel
capacity minus the total channel reserve.
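Roughly, the computation (a sketch; the field names are illustrative, not the real channel struct):

    #include <stdint.h>

    /* Advertise the lesser of what the peer allows in flight and what
     * the channel can actually carry once the reserves are subtracted. */
    static uint64_t advertised_htlc_maximum_msat(uint64_t peer_max_inflight_msat,
                                                 uint64_t capacity_msat,
                                                 uint64_t total_reserve_msat)
    {
        uint64_t usable = capacity_msat - total_reserve_msat;
        return peer_max_inflight_msat < usable ? peer_max_inflight_msat : usable;
    }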
While not strictly necessary, it's certainly a good idea to test
against the latest one and not encourage users to use old versions.
Reported-by: Jonas Nick <@jonasnick>
Signed-off-by: Christian Decker <@cdecker>
We initialize it to 30 seconds, but it's *always* overridden by the
gossip_init message (and usually to 60 seconds, so it's doubly
misleading).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Gossipd provided a generic "get endpoints of this scid", and we only
use it in one place: to look up HTLC forwards. But lightningd just
assumed that one end would be us.
Instead, provide a simpler API which only returns the peer node,
if any, and now we handle it much more gracefully.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We don't create unannounceable channels, but other implementations can.
Not only is it rude to expose these via invoices, they're probably not
usable anyway.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
There is an issue with the way we retrieve the channel accounting info
that will result in always showing 0 for all stats. This tests for it
and the next commit will fix it.
LND does this, and we get upset with it. I had assumed we would only
do this after funding_locked (since we don't consider the channel
shortid stable until that point), but TBH 6 confirms is probably
enough.
Fixes: #1985
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Right now, the `config` file is read *after* the software has moved into the configuration working directory. However, one configuration option, `lightning-dir`, which can be set in the `config` file, determines this working directory. As the directory is already opened (defaulting to `$HOME/.lightning`) before the configuration is read, the configured directory will not be used.
This patch parses the configuration file before opening the working directory, fixing this bug.
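In rough terms the ordering becomes (a standalone sketch; the hypothetical dir_from_config() stands in for lightningd's real option parsing):

    #include <err.h>
    #include <stdlib.h>
    #include <unistd.h>

    /* Hypothetical helper: returns the lightning-dir named in the
     * config file, or the default if the option isn't set. */
    static const char *dir_from_config(const char *default_dir)
    {
        /* Placeholder for reading the `config` file. */
        const char *configured = getenv("EXAMPLE_LIGHTNING_DIR");
        return configured ? configured : default_dir;
    }

    int main(void)
    {
        /* Read the config first, so a `lightning-dir` option there... */
        const char *dir = dir_from_config("/tmp/lightning-example");

        /* ...still takes effect before we move into the directory. */
        if (chdir(dir) != 0)
            err(1, "Could not move into '%s'", dir);
        return 0;
    }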
[ Update CHANGELOG.md and man pages -- RR ]
It went something like:
niftynei: Hey, cppcheck complains this might be NULL, so I put in a check.
rusty: cppcheck is dumb. Make it an assert("Rusty always right!").
niftynei: You seem certain of this so I shall do that.
https://github.com/ElementsProject/lightning/pull/1994
...
renepickhardt: I asked fiatjaf to run
`lightning-cli sendpay "[{'id':'02db8f487fcc0a'}]" 4efe0ba89b`
and his node crashed!
rusty: grep Assertion logs/*
lightningd/jsonrpc.c:326: connection_complete_error: Assertion `Rusty is always right!' failed.
It turns out that in the 'can't parse' error case, we hand a NULL cmd to
connection_complete_error.
Next time, less asserting, more grepping!
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Totally forgot to add this test. It just shows how a writer can take
exclusive access of a socket over multiple `io_write` calls, queuing
all others behind it.
This is a bit of overkill now that we simply accumulate the entire
JSON response in the buffer before flushing, but when we move to
streamed responses it allows us to have a single command that has
exclusive access to the out direction of the JSON-RPC connection.
We've done this a number of times already, either getting exclusive
access to the out direction of a connection, or trying to lock out the
read side while we are responding to a previous request. These are
usually really cumbersome because we reach around to the other
direction to stop it from proceeding, or we flag our exclusive access
somewhere, and we always need to know whom to notify.
PR ElementsProject/lightning#1970 adds two new instances of this:
- Streaming a JSON response requires that nothing else should write
while the stream is active.
- We also want to stop reading new requests while we are responding
to one.
To remove the complexity of having to know whom to stop and notify
when we're done, this adds a simple `io_lock` primitive that can be
used to get exclusive access to a connection. This inverts the
requirement for notifications, since everybody registers interest in
the lock and they get notified if the lock holder releases it.
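Conceptually the primitive is tiny; here is a standalone sketch of the idea (names and structure are illustrative, not the actual c-lightning API):

    #include <stdbool.h>
    #include <stdlib.h>

    /* A waiter registers interest in the lock and gets woken when the
     * holder releases it; the first callback to re-acquire wins. */
    struct waiter {
        struct waiter *next;
        void (*wake)(void *arg);
        void *arg;
    };

    struct io_lock_sketch {
        bool held;
        struct waiter *waiters;
    };

    static bool lock_try_acquire(struct io_lock_sketch *l)
    {
        if (l->held)
            return false;
        l->held = true;
        return true;
    }

    static void lock_wait(struct io_lock_sketch *l,
                          void (*wake)(void *), void *arg)
    {
        struct waiter *w = malloc(sizeof(*w));
        if (!w)
            abort();
        w->wake = wake;
        w->arg = arg;
        w->next = l->waiters;
        l->waiters = w;
    }

    static void lock_release(struct io_lock_sketch *l)
    {
        /* Detach the list first: a woken waiter may re-register. */
        struct waiter *pending = l->waiters;
        l->held = false;
        l->waiters = NULL;

        while (pending) {
            struct waiter *w = pending;
            pending = w->next;
            w->wake(w->arg);
            free(w);
        }
    }

This keeps the holder from needing to know whom to notify: waiters queue themselves, and the release simply wakes whoever registered interest.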
This is the source of failure in the test_restart_many_payments stress
test: we don't commit the outgoing HTLC immediately, instead waiting for
gossip to tell us the peer for the outgoing channel, then waiting for
that channeld to tell us it's committed. The result was incoming HTLCs
with no outgoing.
I initially pushed the HTLCs through that same path, but of course
(since peers are not connected yet!) the only result was that we failed
these HTLCs immediately. So I chose the far simpler course of just
failing them directly.
To reproduce this, I had to increase the test_restart_many_payments
num to 10, and run it with nice -20 taskset -c 0.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
During tests, this is half our log! And Travis truncates it if we get
a failure in test_restart_many_payments.
Interestingly, test_logging had a bug which relied on this spam :)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Now that we're returning all the help data, we need to update the human_help
formatter to handle the extra data.
Signed-off-by: William Casarin <jb55@jb55.com>
Instead of exiting when we can't find a manpage, set the command and continue so
that we can try the json rpc for help.
Signed-off-by: William Casarin <jb55@jb55.com>
Instead of two code paths that return different help objects, simplify things by
always returning the full help object. This not only includes the
description and the command name, but the verbose description as well.
Signed-off-by: William Casarin <jb55@jb55.com>
If another channel has set the optional `htlc_maximum_msat` field,
we should correctly parse that field and respect it when drawing up
routes for payments.
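The routing-side check then looks something like this (a sketch; the field names are illustrative):

    #include <stdbool.h>
    #include <stdint.h>

    struct chan_sketch {
        bool htlc_maximum_set;       /* was the optional field present? */
        uint64_t htlc_maximum_msat;  /* only meaningful if set */
        uint64_t htlc_minimum_msat;
    };

    /* A channel only qualifies for a hop if the amount fits within the
     * limits its channel_update advertised. */
    static bool channel_can_carry(const struct chan_sketch *c,
                                  uint64_t amount_msat)
    {
        if (amount_msat < c->htlc_minimum_msat)
            return false;
        if (c->htlc_maximum_set && amount_msat > c->htlc_maximum_msat)
            return false;
        return true;
    }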