We now have a much stronger consistency check from the combination of
transaction wrapping and tal memory leak detection. Transaction wrapping
ensures that each statement is executed before the transaction is committed.
The commit is also driven by the `io_loop`, which means that it is no longer
possible for us to have statements outside of transactions, and transactions
are guaranteed to commit at the end of each round.
By adding tal-awareness we also get a much better indication of whether we
have any un-freed statements flying around, which we can check at the end of
the round as well.
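As a rough illustration (the struct and helper names below are hypothetical,
not the actual lightningd code), the end-of-round check amounts to walking the
statements still attached to the db context and aborting if any were never
executed or freed:

    #include <assert.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <ccan/list/list.h>

    /* Hypothetical statement wrapper; only illustrates the check. */
    struct db_stmt {
        struct list_node list;
        const char *query;
    };

    struct db {
        bool in_transaction;
        struct list_head pending;   /* statements not yet executed/freed */
    };

    /* Called once per io_loop round, right before committing. */
    static void db_assert_clean(struct db *db)
    {
        struct db_stmt *stmt;

        list_for_each(&db->pending, stmt, list) {
            fprintf(stderr, "Unfinalized statement: %s\n", stmt->query);
            abort();
        }
        /* Every round runs inside a transaction, which commits right after. */
        assert(db->in_transaction);
    }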
Signed-off-by: Christian Decker <decker.christian@gmail.com>
We will soon generalize the DB, so directly reaching into the `struct db`
instance to talk to the sqlite3 connection is bad anyway. This increases
flexibility and allows us to tailor the actual implementation to the
underlying DB.
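A minimal sketch of the direction (names are illustrative, not the final
interface): callers go through a small vtable instead of touching the sqlite3
handle, so a different backend can be slotted in later:

    #include <stdbool.h>

    /* Illustrative backend vtable; the real interface is richer. */
    struct db;

    struct db_backend {
        const char *name;   /* e.g. "sqlite3" */
        bool (*begin_tx)(struct db *db);
        bool (*commit_tx)(struct db *db);
        bool (*exec)(struct db *db, const char *query);
    };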
Signed-off-by: Christian Decker <decker.christian@gmail.com>
This removes WIRE_FINAL_EXPIRY_TOO_SOON, which leaked too much information,
and adds the blockheight to WIRE_INCORRECT_OR_UNKNOWN_PAYMENT_DETAILS.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We currently end up sleeping for 1 second for `channeld` and `gossipd`:
better to use a normal blocking `waitpid` and an alarm to wake us in
case they don't exit.
This speeds up `lightning-cli stop` on my machine from 2.008s to 0.008s:
a 286 times speedup!
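The pattern is roughly the following (a sketch, not the exact lightningd
code): block in `waitpid`, but arm an alarm so a subdaemon that never exits
cannot hang the shutdown:

    #include <signal.h>
    #include <string.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static void alarm_handler(int sig)
    {
        /* Without SA_RESTART this simply interrupts waitpid() with EINTR. */
        (void)sig;
    }

    static void wait_for_subdaemon(pid_t pid, unsigned int timeout_sec)
    {
        struct sigaction sa;

        memset(&sa, 0, sizeof(sa));
        sa.sa_handler = alarm_handler;
        sigemptyset(&sa.sa_mask);
        sigaction(SIGALRM, &sa, NULL);

        alarm(timeout_sec);
        waitpid(pid, NULL, 0);  /* returns as soon as the child exits */
        alarm(0);
    }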
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
During sync it is highly likely that we can coalesce multiple calls and share
results among them. We also report failures for non-existent blocks early on,
so we don't run into issues with blocks that our bitcoind doesn't have yet.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
This was caused by us checking against the `min_blockheight`, which can be
negative on a newly created node, rather than the `max_blockheight`. This is
still safe since we check for duplicates anyway in `wallet_filteredblock_add`.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
This is probably worth preventing.
1. Our depth estimate would be inaccurate, possibly leading to us
timing out too early.
2. If we're not up-to-date our onchain funds are unknown.
3. We wouldn't be able to send or receive HTLCs until we're synced anyway.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We still want to allow incoming connections and the reestablishment of
channels, but if a peer tries to give us an HTLC, we stall until we're
synced.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
If we don't know the current block height, we shouldn't be sending HTLCs.
This stops us forwarding HTLCs as well as sending new payments.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
I suspect multiple plugins trying to connect at the same
time are overrunning the 1-deep listen queue.
From man listen(2):
    The backlog argument defines the maximum length to which the queue of
    pending connections for sockfd may grow. If a connection request
    arrives when the queue is full, the client may receive an error with
    an indication of ECONNREFUSED.
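The fix is simply to listen with a larger backlog so simultaneous plugin
connections get queued instead of refused; something along these lines (the
exact value used may differ):

    #include <sys/socket.h>

    static int rpc_listen(int fd)
    {
        /* Previously this was effectively a 1-deep queue. */
        return listen(fd, SOMAXCONN);
    }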
Fixes: #2922
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
`close` takes two optional arguments: `force` and `timeout`.
`timeout` doesn't time out the close (there's no way to do that), just
the JSON call. If `force` (default `false`) is set, we unilaterally
close at the timeout instead of just failing.
Timing out JSON calls is generally deprecated: that's the job of the
client. And the semantics of this are confusing, even to me! A
better API is a timeout which, if non-zero, is the time at which we
give up and unilaterally close.
The transition code is awkward, but we'll manage for the three
releases until we can remove it.
The new defaults are to unilaterally close after 48 hours.
Fixes: #2791
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
If we were to just insert filtered blocks in the range that we will scan
later, we'd hit the uniqueness constraints during that later scan.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
Instead of allowing all calls to `getfilteredblock` to be scheduled on the
`bitcoind` queue right away, we add them to a separate queue and process a
single call at a time. This limits concurrency and avoids thrashing
`bitcoind`. At the same time we dispatch incoming results back to all calls
that were queued for that particular blockheight, reducing the overall number
of calls and increasing the overall speed.
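Roughly, the queueing looks like this (a sketch with illustrative names, not
the actual implementation): only the head of the queue is in flight against
`bitcoind`, and a completed lookup is fanned out to every queued call for the
same height:

    #include <ccan/list/list.h>

    struct filteredblock;   /* result of a single getfilteredblock lookup */

    struct filteredblock_call {
        struct list_node list;
        unsigned int height;
        void (*cb)(const struct filteredblock *fb, void *arg);
        void *arg;
    };

    /* All calls wait here; only the first entry is being processed. */
    static LIST_HEAD(pending_calls);

    static void filteredblock_result(const struct filteredblock *fb,
                                     unsigned int height)
    {
        struct filteredblock_call *c, *next;

        /* Share the single bitcoind result with every matching caller. */
        list_for_each_safe(&pending_calls, c, next, list) {
            if (c->height != height)
                continue;
            list_del_from(&pending_calls, &c->list);
            c->cb(fb, c->arg);
        }
        /* The new head of the queue (if any) would be started here. */
    }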
Signed-off-by: Christian Decker <decker.christian@gmail.com>
We will be calling the callbacks out of order once we fan out the results of
a single lookup to multiple calls, so making sure that everything is allocated
ahead of time is necessary.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
Since we now check all P2WSH outputs in a block, this is becoming quite a
common occurrence, so logging it just produces lots of noise.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
This will eventually replace the multi-step `getblockhash` + `getblock` +
`gettxout` mechanism, and return entire filtered blocks which can be added to
the DB, and represent the full set of P2WSH UTXOs.
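The filtering criterion itself is small: keep only outputs whose scriptPubKey
is P2WSH, i.e. OP_0 followed by a 32-byte push. A standalone sketch (the real
code uses the existing script helpers):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* A P2WSH scriptPubKey is exactly 34 bytes: OP_0, then a 32-byte push. */
    static bool scriptpubkey_is_p2wsh(const uint8_t *script, size_t len)
    {
        return len == 34 && script[0] == 0x00 && script[1] == 0x20;
    }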
Signed-off-by: Christian Decker <decker.christian@gmail.com>
This was causing `--help` to fail if we already had a `lightningd` running
with the same `--lightning-dir`.
Signed-off-by: Christian Decker <decker.christian@gmail.com>