We have an incompatibility with lnd it seems: I've lost channels on
reconnect with 'sync error'. Since I never got this code to be reliable,
disable it for the next release since I suspect it's our fault :(
And reenable the check which didn't work, for others to untangle.
I couldn't get option_data_loss_protect to be reliable, and I disabled
the check. This was a mistake: I should have spent even more time
trying to get to the bottom of this (especially writing test vectors
for the spec and testing against other implementations).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Both of these plugins will fail in interesting ways, and we should
still handle them correctly.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
It wasn't JSON formatted either, so there was no nice way to
pretty-print it. This jsonifies and pretty-prints it.
Signed-off-by: Christian Decker <@cdecker>
We no longer use `dev-override-fees` to set the fees, rather we
instrument the `bitcoind` proxy to return the desired feerates.
Signed-off-by: Christian Decker <@cdecker>
pytest was only an indirect dependency so far; this makes it explicit.
The timeout plugin should allow us to kill a stuck test before Travis
kills it, and thus let us see where it got stuck.
Signed-off-by: Christian Decker <@cdecker>
Because gossip in this case takes up to a minute, this test took 10
minutes. The workaround is to do the waiting-for-gossip all at once.
Now it takes 362 seconds.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Unlike other daemons, closingd doesn't listen to the master, but simply
runs to its own beat. So instead of responding to the JSON dev_memleak
command, we always check for memory leaks, and make sure that the
python tests fail if they see MEMLEAK in the logs.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We now keep multiple commands for a json_connection, and an array of
json_streams.
When a command wants to write something, we allocate a new json_stream
at the end of the array.
We always output from the first available json_stream; once that
command has finished, we free that and move to the next. Once all are
done, we wake the reader.
This means we won't read a new command if output is still pending, but
as most commands don't start writing until they're ready to write
everything, we still get command parallelism.
In particular, you can now 'waitinvoice' and 'delinvoice' and it will
work even though the 'waitinvoice' blocks.
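For illustration, here's a minimal sketch of that queueing discipline.
The types and names (json_stream, json_connection, new_stream,
flush_output) are simplified stand-ins, not lightningd's actual code:

    /* Sketch of the per-connection output queue described above. */
    #include <stdbool.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    struct json_stream {
        char *buf;     /* pending output for one command */
        size_t len;
        bool done;     /* has the command finished writing? */
    };

    struct json_connection {
        struct json_stream **streams;
        size_t num_streams;
    };

    /* A command that wants to write gets a new stream at the end. */
    static struct json_stream *new_stream(struct json_connection *conn)
    {
        conn->streams = realloc(conn->streams,
                                (conn->num_streams + 1)
                                * sizeof(*conn->streams));
        struct json_stream *js = calloc(1, sizeof(*js));
        conn->streams[conn->num_streams++] = js;
        return js;
    }

    /* Always output from the first stream; free it once its command is
     * done.  Returns true when all are drained: wake the reader. */
    static bool flush_output(struct json_connection *conn)
    {
        while (conn->num_streams > 0) {
            struct json_stream *js = conn->streams[0];
            if (js->len > 0) {
                fwrite(js->buf, 1, js->len, stdout); /* stand-in socket */
                js->len = 0;
            }
            if (!js->done)
                return false;  /* first command still running: wait */
            free(js->buf);
            free(js);
            conn->num_streams--;
            memmove(conn->streams, conn->streams + 1,
                    conn->num_streams * sizeof(*conn->streams));
        }
        return true;  /* all flushed: safe to read the next command */
    }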
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We need to keep the remaining buffer, and we need to try to parse it
before we read the next. I first tried keeping it in the object, but
its lifetime is that of the *socket*, which we actually reopen for
every command.
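The resulting read loop looks something like this sketch, where the
leftover buffer lives in a long-lived struct rather than the per-command
socket wrapper (parse_one() is a hypothetical stand-in for the JSON
tokenizer):

    #include <string.h>
    #include <unistd.h>

    struct jbuf {
        char data[65536];
        size_t used;
    };

    /* Hypothetical: returns bytes consumed by one complete command,
     * or 0 if we need more input. */
    extern size_t parse_one(const char *data, size_t len);

    static void read_loop(int fd, struct jbuf *b)
    {
        for (;;) {
            size_t consumed;
            /* Parse what's already buffered *before* reading again. */
            while ((consumed = parse_one(b->data, b->used)) != 0) {
                memmove(b->data, b->data + consumed, b->used - consumed);
                b->used -= consumed;
            }
            ssize_t r = read(fd, b->data + b->used,
                             sizeof(b->data) - b->used);
            if (r <= 0)
                return;
            b->used += r;
        }
    }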
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This was hanging sometimes in travis, but actually checking the result
of the commands makes it *always* hang. We remove the waitinvoice,
which will not return.
ZmnSCPxj points out that this behavior, introduced in
ce0bd7abd3, is a regression: it would be
nice to be able to cancel a waitinvoice. But that fix is more complex,
and will have to be another PR.
This test will now hang, but it's OK: we're about to fix it!
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
When developing in regtest or testnet it is really inconvenient to
have to fake traffic and generate blocks just to get estimatesmartfee
to return a valid estimate. This just sets the minfee if bitcoind
doesn't return a valid estimate.
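Roughly, the fallback looks like this sketch (the 253 sat/kw floor is
illustrative; the actual constant and plumbing may differ):

    #include <stdbool.h>

    #define FEERATE_FLOOR 253  /* sat/kw; illustrative minimum */

    /* If estimatesmartfee returned nothing usable, fall back to a
     * minimum feerate rather than failing. */
    static unsigned int feerate_or_floor(bool estimate_ok,
                                         unsigned int estimate)
    {
        if (!estimate_ok || estimate < FEERATE_FLOOR)
            return FEERATE_FLOOR;
        return estimate;
    }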
Reported-by: Rene Pickhardt <@renepickhardt>
Signed-off-by: Christian Decker <@cdecker>
In one case we can reduce the work; the others we eliminate if VALGRIND.
Here are the ten slowest tests on my laptop:
469.75s call tests/test_closing.py::test_closing_torture
243.61s call tests/test_closing.py::test_onchain_multihtlc_our_unilateral
222.73s call tests/test_closing.py::test_onchain_multihtlc_their_unilateral
217.80s call tests/test_closing.py::test_closing_different_fees
146.14s call tests/test_connection.py::test_dataloss_protection
138.93s call tests/test_connection.py::test_restart_many_payments
129.66s call tests/test_gossip.py::test_gossip_persistence
128.73s call tests/test_connection.py::test_no_fee_estimate
122.46s call tests/test_misc.py::test_htlc_send_timeout
118.79s call tests/test_closing.py::test_onchain_dust_out
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
generate was deprecated some time ago, so we added the generate_block()
helper. But many calls crept back in, and bitcoind's git master refuses it.
(test_blockchaintrack relied on the return value, so make generate_block
return the list of blocks).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
After Ubuntu 18.10 upgrade, lots of new flake8 warnings.
$ flake8 --version:
3.5.0 (mccabe: 0.6.1, pycodestyle: 2.4.0, pyflakes: 1.6.0) CPython 3.6.7rc1 on Linux
Note it seems that W503 warns about line breaks before binary
operators, and W504 complains about them after. I prefer W504, so
disable W503.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Occasional failure in test_fulfill_incoming_first, where the channel
closed before the final message from dev_disconnect was read. The cause
was the peer writing a gossip msg and failing due to ECONNRESET before
it read the final message.
(Managed to reproduce under strace -f, FTW).
This is really a symptom of the fact that line_graph's announce=True
didn't wait for node announcements. Let's do that.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
When we have multiple HTLCs with the same preimage and the same CLTV,
it doesn't matter in what order we treat them (they're literally
identical). But when we offer HTLCs with the same preimage but
different CLTVs, the commitment tx outputs look identical, but the
HTLC txs are different: if we simply take the first HTLC which matches
(and that's not the right one), the HTLC signature we got from them
won't match. As we rely on the signature matching to detect the fee
paid, we get:
onchaind: STATUS_FAIL_INTERNAL_ERROR: grind_fee failed
So we alter match_htlc_output() to return an array of all matching
HTLC indices, which can have more than one entry for offered HTLCs.
If it's our commitment, we loop through until one of the HTLC
signatures matches. If it's their commitment, we choose the HTLC with
the largest CLTV: we're going to ignore it once that hits anyway, so
this is the most conservative approach. If it's a penalty, it doesn't
matter, since we steal all HTLC outputs the same way, independent of CLTV.
For accepted HTLCs, the CLTV value is encoded in the witness script,
so this confusion isn't possible. We nonetheless assert that the
CLTVs all match in that case.
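A sketch of that logic, with simplified types (the real onchaind code
differs; htlc_matches_output() and htlc_sig_checks_out() are
hypothetical stand-ins):

    #include <stdbool.h>
    #include <stdlib.h>

    struct htlc { unsigned int cltv_expiry; };

    extern bool htlc_matches_output(const struct htlc *h, const void *out);
    extern bool htlc_sig_checks_out(const struct htlc *h, const void *out);

    /* Collect *all* HTLC indices matching this output; sets *num. */
    static size_t *matching_htlcs(const struct htlc *htlcs, size_t n,
                                  const void *out, size_t *num)
    {
        size_t *ix = malloc(n * sizeof(*ix));
        *num = 0;
        for (size_t i = 0; i < n; i++)
            if (htlc_matches_output(&htlcs[i], out))
                ix[(*num)++] = i;
        return ix;
    }

    /* Our commitment: loop until one of their HTLC signatures matches. */
    static size_t pick_ours(const struct htlc *htlcs, const size_t *ix,
                            size_t num, const void *out)
    {
        for (size_t i = 0; i < num; i++)
            if (htlc_sig_checks_out(&htlcs[ix[i]], out))
                return ix[i];
        abort();  /* none matched: the grind_fee failure above */
    }

    /* Their commitment: largest CLTV is the conservative choice, since
     * we ignore the HTLC once that height is reached anyway. */
    static size_t pick_theirs(const struct htlc *htlcs, const size_t *ix,
                              size_t num)
    {
        size_t best = ix[0];
        for (size_t i = 1; i < num; i++)
            if (htlcs[ix[i]].cltv_expiry > htlcs[best].cltv_expiry)
                best = ix[i];
        return best;
    }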
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We set up HTLCs with the same preimage and both different and same
CLTVs in both directions, then make sure that onchaind is OK and that
the HTLCs are failed without causing downstream failure.
We do this for both our-unilateral and their-unilateral cases.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Create a second HTLC with a different CLTV but the same preimage; onchaind
uses the wrong signature and fails to grind it.
Reported-by: molz (#c-lightning)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This was suggested by Pierre-Marie as the solution to the 'same HTLC,
different CLTV' signature mismatch.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
My test case is a mainnet gossip store with 22107 channels, and the
time to run `lightning-cli listchannels`:
Before: `lightning-cli listchannels` DEVELOPER=0
real 0m1.303000-1.324000(1.3114+/-0.0091)s
After:
real 0m0.629000-0.695000(0.64985+/-0.019)s
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This also highlights the danger of searching the logs: that error
appeared previously in the logs, so we didn't notice that the actual
withdraw call gave a different error.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
And use wallet_forward_status_in_db() everywhere in db code.
And clean up extra CHANGELOG.md entry (looks like rebase error?)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Adapts the `test_forward_stats` test to include checks for the
`forwarded_payments` table. Will add checks for the `listforwardings`
RPC call next.
Signed-off-by: Christian Decker <@cdecker>
When the wrong key is used, the remote end simply hangs up.
We used to get a random errno, which tends to be "Operation now in progress."
Now it's defined to be 0; detect that and provide a better error.
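The detection is then trivial; something like this sketch (the message
wording is illustrative):

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    /* errno == 0 after a failed handshake read now means the peer hung
     * up cleanly, which is what a wrong key looks like. */
    static void report_handshake_failure(void)
    {
        if (errno == 0)
            fprintf(stderr, "peer closed connection (wrong key?)\n");
        else
            fprintf(stderr, "handshake failed: %s\n", strerror(errno));
    }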
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Have c-lightning nodes send out the largest value for
`htlc_maximum_msat` that makes sense, ie the lesser of
the peer's max_inflight_htlc value or the total channel
capacity minus the total channel reserve.
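In other words, something like this sketch (all values in
millisatoshi; names are illustrative):

    #include <stdint.h>

    static uint64_t htlc_maximum_msat(uint64_t peer_max_inflight_msat,
                                      uint64_t capacity_msat,
                                      uint64_t total_reserve_msat)
    {
        uint64_t spendable = capacity_msat > total_reserve_msat
                             ? capacity_msat - total_reserve_msat : 0;
        return peer_max_inflight_msat < spendable
               ? peer_max_inflight_msat : spendable;
    }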
We don't create unannounceable channels, but other implementations can.
Not only is it rude to expose these via invoices, they're probably not
usable anyway.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
There is an issue with the way we retrieve the channel accounting info
that will result in always showing 0 for all stats. This tests for it
and the next commit will fix it.
During tests, this is half our log! And Travis truncates it if we get
a failure in test_restart_many_payments.
Interestingly, test_logging had a bug which relied on this spam :)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We don't save them to the database, so fix things up as we load them.
Next patch will actually save them into the db, and this will become
COMPAT code.
Also: call htlc_in_check() with NULL on db load, as otherwise it aborts
internally.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Usually, we only close an incoming HTLC once the outgoing HTLC is completely
resolved. However, we short-cut this in the FULFILL case: we have the
preimage, so might as well use it immediately (in fact, we wait for it to
be committed, but we don't need to in theory).
As a side-effect of this, our assumption that every outgoing HTLC has
a corresponding incoming HTLC is incorrect, and this test (xfail) tickles
that corner case.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We split json_invoice(), as it now needs to round-trip to the gossipd,
and uniqueness checks need to happen *after* gossipd replies to avoid
a race.
For every candidate channel gossipd gives us, we check that it's in
state NORMAL (not shutting down, not still waiting for lockin), that
it's connected, and that it has capacity. We then choose one with
probability weighted by excess capacity, so larger channels are more
likely.
As a side effect of this, we can tell if an invoice is unpayable (no
channels have sufficient incoming capacity) or difficult (no *online*
channels have sufficient capacity), so we add those warnings.
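A sketch of the selection, with simplified candidate data and rand()
standing in for a real RNG (the actual invoice code differs):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdlib.h>

    struct candidate {
        bool normal;            /* state NORMAL: not lockin, not shutdown */
        bool connected;
        uint64_t capacity_msat; /* incoming capacity */
    };

    static bool usable(const struct candidate *c, uint64_t amount_msat)
    {
        return c->normal && c->connected
               && c->capacity_msat >= amount_msat;
    }

    /* Returns chosen index, or -1 if no channel can carry the amount. */
    static int choose_routehint(const struct candidate *c, size_t n,
                                uint64_t amount_msat)
    {
        uint64_t total = 0;
        for (size_t i = 0; i < n; i++)
            if (usable(&c[i], amount_msat))
                total += c[i].capacity_msat - amount_msat + 1;
        if (total == 0)
            return -1;  /* unpayable (or only offline channels suffice) */

        uint64_t r = (uint64_t)rand() % total;
        for (size_t i = 0; i < n; i++) {
            if (!usable(&c[i], amount_msat))
                continue;
            /* +1 keeps zero-excess channels selectable; larger excess
             * capacity means proportionally higher odds. */
            uint64_t w = c[i].capacity_msat - amount_msat + 1;
            if (r < w)
                return (int)i;
            r -= w;
        }
        return -1;  /* unreachable */
    }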
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We're about to slow down the invoice rpc (esp. under valgrind), which
breaks the delicate timing of the autocleaninvoice test.
Change that so the autocleaner (and timestamp) starts after the invoices
are added.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Travis failures:
valgrind: m_scheduler/sema.c:104 (vgModuleLocal_sema_down): Assertion 'sema->owner_lwpid != lwpid' failed.
host stacktrace:
==1296== at 0x38083F48: ??? (in /usr/lib/valgrind/memcheck-amd64-linux)
==1296== by 0x38084064: ??? (in /usr/lib/valgrind/memcheck-amd64-linux)
==1296== by 0x380841F1: ??? (in /usr/lib/valgrind/memcheck-amd64-linux)
==1296== by 0x38135DAE: ??? (in /usr/lib/valgrind/memcheck-amd64-linux)
==1296== by 0x380D328D: ??? (in /usr/lib/valgrind/memcheck-amd64-linux)
==1296== by 0x3809A4AC: ??? (in /usr/lib/valgrind/memcheck-amd64-linux)
==1296== by 0x3809AE43: ??? (in /usr/lib/valgrind/memcheck-amd64-linux)
==1296== by 0x380988CF: ??? (in /usr/lib/valgrind/memcheck-amd64-linux)
sched status:
running_tid=0
Thread 1: status = VgTs_WaitSys (lwpid 1296)
==1296== at 0x5729730: __poll_nocancel (syscall-template.S:84)
==1296== by 0x4348DF: daemon_poll (daemon.c:78)
==1296== by 0x4169E7: io_poll_lightningd (lightningd.c:543)
==1296== by 0x471ECD: io_loop (poll.c:282)
==1296== by 0x416E06: main (lightningd.c:744)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We have a lot of infrastructure to delay local channel_updates to
avoid spamming on each peer reconnect; we had to keep track of
pending ones though, in case we needed the very latest for sending an
error when failing an HTLC.
Instead, it's far simpler to set the local_disabled flag on a channel
when we disconnect, but only send a disabling channel_update if we
actually fail an HTLC.
Note: handle_channel_update() TAKES update (due to tal_arr_dup), but we
didn't use that before. Now we do, so add the annotation.
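The scheme reduces to something like this sketch (names are
illustrative, not gossipd's):

    #include <stdbool.h>

    struct chan {
        bool local_disabled;  /* set on peer disconnect */
    };

    extern void queue_disabling_channel_update(struct chan *c);

    static void peer_disconnected(struct chan *c)
    {
        c->local_disabled = true;  /* nothing sent to the network yet */
    }

    static void fail_htlc(struct chan *c)
    {
        /* Only now do we tell the world the channel is disabled. */
        if (c->local_disabled)
            queue_disabling_channel_update(c);
    }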
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We exchange channel_updates before channel_announce makes the channel
public, and currently forget them when we finally get the
channel_announce. We should instead apply them, and not rely on
retransmission (which we remove in the next patch!).
This earlier channel_update means test_gossip_jsonrpc triggers too
early, so have that wait for node_announcement.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
The help command now adds command usage to its output by calling each
command handler in CMD_USAGE mode.
Instead of seeing, for example:

    decodepay
    Decode {bolt11}, using {description} if necessary

we see:

    decodepay bolt11 [description]
    Decode {bolt11}, using {description} if necessary
Signed-off-by: Mark Beckwith <wythe@intrig.com>
Incrementing the version number means stores from before the previous
commit will be removed and refreshed. The simplest fix, if not the most
efficient.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
There's no reason for the db to ever return non-NULL if it's spent. And there's
only one caller, for which that is definitely true.
Suggested-by: @cdecker
Fixes: #1934
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We extract the tx from the logs, and then we wait until that hits
the mempool. This is more reliable than 'sendrawtx' in the logs,
which might catch a previous sendrawtx; it's also more explicit
that we expect that tx exactly.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
BOLT 7's been updated to split the flags field in `channel_update`
into two: `channel_flags` and `message_flags`. This changeset does the
minimal necessary to get to building with the new flags.
We currently just ignore them. This is one reason the hsm (in some places)
explicitly calls log_broken, so we get some idea.
This was the only subdaemon which had a NULL msgcb and msgname, so eliminate
those checks in subd.c.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Got a spurious failure in test_no_fee_estimate; we fired too soon based
on the logs (presumably we raced in on the first response, but
estimatesmartfee gets called 3 times).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This is a simple reverse proxy that `bitcoin-cli` can talk to when invoked by
`lightningd`. It allows us to trace `bitcoin-cli` calls, and intercept calls to
mock the replies, better than the current bash-script based method.
It's an array: we were only saving the single element; if there was more than
one changed HTLC we'd get a bad signature!
The report in #1907 is probably caused by the other side re-requesting
something we considered already finalized; to avoid this particular error,
we should set the field to NULL if there's no last_sent_commit.
I'm increasingly of the opinion we want to just save all the update
packets to the db and blast them out, instead of doing this
second-guessing dance.
Fixes: #1907
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
These happen after we compact the store; every log I've seen of a
restart on a real node has a message about truncating the store,
because node_announcements predate channel_announcements.
I extracted one such case from testnet, and reduced it to test here.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
1. Wait for a 'sendrawtransaction' *after* the dev-fail message; don't be
fooled by a previous one.
2. Turning on estimate fee sets fees exactly; just wait for it to be processed.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
It's probably unnecessary to have this weird way of injecting results
now that we have explicit feerate args.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
And, reluctantly, default to bitcoind style.
"It's wrong to be right too soon."
Suggested-by: @cdecker
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We could refine this later (based on the existing wallet, for example), but
this gives some estimate.
[ Rename onchain_estimates -> onchain_fee_estimates Suggested-by: @SimonVrouwe ]
[ Factor of 1000 fix Reported-by: @SimonVrouwe ]
Suggested-by: @molxyz
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We don't know what our peer is doing, but if we see those values, maybe
they did too, and for longer. And add the min/max acceptable values
into our JSON API.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This is useful mainly in the case where bitcoind is not giving estimates,
but can also be used to bias results if you want.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
And no more filtering out messages, as we should no longer spam the
logs with them (the 'Connected json input' one was removed some time
ago).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
The comment was wrong: the channel being locked in was triggering
the fee update and hence the disconnect. But that can actually
happen before fund_channel returns, as that waits for the gossipd
to see the channel active.
Best to do the fee update manually, so it's exactly what we want.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
If feerates change, L2 sends L3 a commit for that, which causes us to
fail the assert (which says we won't send a commitment_signed).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We use feerate in several places, and each one really should react
differently when it's not available (such as when bitcoind is still
catching up); see the sketch after this list:
1. For general fee-enforcement, we use the broadest possible limits.
2. For closingd, we use it as our opening negotiation point: just use half
the last tx feerate.
3. For onchaind, we can use the last tx feerate as a guide for our own txs;
it might be too high, but at least we know it was sufficient to be mined.
4. For withdraw and fund_channel, we can simply refuse.
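Roughly (constants and names are illustrative; the real code spreads
these decisions across the daemons):

    #include <stdbool.h>
    #include <stdint.h>

    #define FEERATE_ENFORCE_MIN 253u        /* broadest plausible floor */
    #define FEERATE_ENFORCE_MAX UINT32_MAX  /* broadest plausible ceiling */

    /* 1. General fee enforcement: broadest possible limits. */
    static void enforcement_limits_unknown(uint32_t *min, uint32_t *max)
    {
        *min = FEERATE_ENFORCE_MIN;
        *max = FEERATE_ENFORCE_MAX;
    }

    /* 2. closingd: start negotiating at half the last tx's feerate. */
    static uint32_t closing_start_point(uint32_t last_tx_feerate)
    {
        return last_tx_feerate / 2;
    }

    /* 3. onchaind: the last tx's feerate was at least enough to be
     *    mined, so use it as a guide for our own txs. */
    static uint32_t onchain_guide(uint32_t last_tx_feerate)
    {
        return last_tx_feerate;
    }

    /* 4. withdraw/fund_channel: simply refuse. */
    static bool can_build_new_tx(bool have_feerate)
    {
        return have_feerate;
    }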
Fixes: #1836
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Manipulate fees via fake-bitcoin-cli. It's not quite the same, as
these are pre-smoothing, so we need a restart to override that where
we really need an exact change. Or we can wait until it reaches a
certain value, in cases where we don't care about exact amounts.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We don't respond to fee changes until we're locked in: make sure we catch
up at that point.
Note that we use NORMAL fees during opening, but IMMEDIATE after, so
this often sends a fee update. For the tests which break, we set those
feerates to be equal.
This (sometimes) changes the behavior of test_permfail, as we now
get an immediate commit; that is fixed too, so we always wait for
it to complete.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This is a noop if we're opening a new channel (channel_fees_can_change(channel)
is false until funding locked in), but important if we're restarting.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We don't update a channel's feerate on reestablishment: we insert a restart
in test_onchain_different_fees() (which we'll need soon anyway) to show it.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
When in this state, we send a canned error "Awaiting unilateral close".
We enter this both when we drop to chain, and when we're trying to get
them to drop to chain due to option_data_loss_protect.
As this state (unlike channel errors) is saved to the database, it means
we will *never* talk to a peer again in this state, so they can't
confuse us.
Since we set this state in channel_fail_permanent() (which is the only
place we call drop_to_chain for a unilateral close), we don't need to
save to the db: channel_set_state() does that for us.
This state change has a subtle effect: we return WIRE_UNKNOWN_NEXT_PEER
instead of WIRE_TEMPORARY_CHANNEL_FAILURE as soon as we get a failure
with a peer. To provoke a temporary failure in test_pay_disconnect we
take the node offline.
Reported-by: Christian Decker @cdecker
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This means we don't try to unilaterally close after a restart, *and*
we can tell onchaind to try to use the point to recover funds when the
peer unilaterally closes.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Firstly, if they claim to know a future value, we ask the HSM; if
they're right, we tell the master the per-commitment secret they gave
us (we have no way to validate this, though) and it will not broadcast
a unilateral (knowing it will cause them to use a penalty tx!).
Otherwise, we check the results they sent were valid. The spec says
to do this (and close the channel if it's wrong!), because otherwise they
could continually lie and give us a bad per-commitment-secret when we
actually need it.
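Roughly, the reestablish handling becomes (heavily simplified; the HSM
round-trip and helpers here are hypothetical stand-ins):

    #include <stdbool.h>

    extern bool hsm_says_secret_is_ours(const unsigned char secret[32]);
    extern bool secret_matches_what_we_sent(const unsigned char secret[32]);
    extern void tell_master_peer_is_ahead(const unsigned char secret[32]);
    extern void fail_channel(const char *why);

    static void check_reestablish(bool they_claim_future_state,
                                  const unsigned char their_secret[32])
    {
        if (they_claim_future_state) {
            if (hsm_says_secret_is_ours(their_secret))
                /* Don't broadcast our stale commitment: they'd penalize
                 * us.  Pass the secret up and wait for them to close. */
                tell_master_peer_is_ahead(their_secret);
            else
                fail_channel("bad future per-commitment secret");
        } else if (!secret_matches_what_we_sent(their_secret)) {
            /* Check even the normal case, else they could lie freely
             * and only feed us garbage when we actually need it. */
            fail_channel("bad per-commitment secret");
        }
    }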
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Tests were failing when in the same thread after a test which set
log_all_io=True, because SIGUSR1 seemed to be turning logging *off*.
This is due to Python using references not copies for assignment.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This is required for the next test, which has to log messages from channeld
as soon as it starts (so might be too late if it sends SIGUSR1).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We ignore incoming for now, but this means we advertise the option and
we send the required fields.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
peer features are only kept for connected peers (as they can change),
but we didn't update them on reconnect. The main effect was that
after a restart we displayed the features as empty, even after
reconnect.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
1. l1 update_fee -> l2
2. l1 commitment_signed -> l2 (using new feerate)
3. l1 <- revoke_and_ack l2
4. l1 <- commitment_signed l2 (using new feerate)
5. l1 -> revoke_and_ack l2
When we break the connection after #3, the reconnection causes #4 to
be retransmitted, but it turns out l1 wasn't telling the master to set
the local feerate until it received the commitment_signed, so on
reconnect it uses the old feerate, with predictable results (bad
signature).
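One plausible shape of the fix, inferred from the description above
(names are illustrative): record the new local feerate as soon as the
fee update is irrevocably committed, not when commitment_signed finally
arrives, so a retransmitted #4 is checked against the right feerate.

    #include <stdint.h>

    extern void tell_master_local_feerate(uint32_t feerate);

    /* e.g. on receiving the revoke_and_ack at step 3. */
    static void on_revoke_and_ack(uint32_t pending_feerate)
    {
        /* Previously this happened only on receiving commitment_signed
         * (step 4), so a reconnect between 3 and 4 used the old rate. */
        tell_master_local_feerate(pending_feerate);
    }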
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Useful if we want to intercept bitcoin-cli first.
We move the getinfo() caching into start(), as that's when we can actually
use RPC.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We're going to use it to override specific commands. It's non-valgrinded
already since we use '--trace-children-skip=*bitcoin-cli*' so the overhead
should be minimal.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We got an index error, because status had only one field (onchaind not
started yet).
> wait_for(lambda: only_one(p.rpc.listpeers(l1.info['id'])['peers'][0]['channels'])['status'][1] == 'ONCHAIN:Tracking mutual close transaction')
E IndexError: list index out of range
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Normal wallet txs get reconfirmed as blocks come in, but ones which need
closeinfo are more fragile, so we do it manually using txwatch for them.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
In one case, the channel_update which we expected to activate the channel
from l2 was suppressed as redundant. This is certainly valid, so just
check the results.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>