With enable-autotor-v2 defined on the cmdline, the default behavior of creating
v3 onions with the tor service call is changed to create v2 onions instead.
Signed-off-by: Saibato <saibato.naga@pm.me>
The DB field type has to match the size of the accessor type, and we had to
split the `REPLACE INTO` and `INSERT OR IGNORE INTO` queries into two
queries (update, and insert if nothing was updated) since there is no portable
UPSERT operation, but the impact should be minimal.
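As a rough sketch of the resulting pattern (function, table and column names
here are assumptions for illustration, not the exact code):

	stmt = db_prepare_v2(db, SQL("UPDATE vars SET intval = ? WHERE name = ?;"));
	db_bind_int(stmt, 0, val);
	db_bind_text(stmt, 1, name);
	db_exec_prepared_v2(stmt);
	if (db_count_changes(stmt) == 0) {
		/* Nothing was updated, so insert instead. */
		tal_free(stmt);
		stmt = db_prepare_v2(db, SQL("INSERT INTO vars (name, intval) VALUES (?, ?);"));
		db_bind_text(stmt, 0, name);
		db_bind_int(stmt, 1, val);
		db_exec_prepared_v2(stmt);
	}
	tal_free(stmt);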
Signed-off-by: Christian Decker <decker.christian@gmail.com>
sqlite3 was forgiving, postgres isn't, so let's make sure we use the strictest
field type possible, relaxing when rewriting.
The commit consists of just the following mapping:
- INTEGER -> BIGSERIAL if it is the primary key
- INTEGER -> BIGINT if it is an amount or a reference to a primary key
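For example (a made-up column, purely illustrative): a sqlite3 schema fragment like

	id INTEGER PRIMARY KEY, msatoshi INTEGER

would be rewritten for postgres as

	id BIGSERIAL PRIMARY KEY, msatoshi BIGINT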
Signed-off-by: Christian Decker <decker.christian@gmail.com>
This was already done in `db_step` but `db_count_changes` and
`db_last_insert_id` also rely on the statement being executed. Furthermore we
now check that the statement was executed before freeing it, so it can't
happen that we dispose of a statement we meant to execute but forgot.
The combination of these could be used to replace the pending_statement
tracking based on lists, since we now make sure to execute all statements and
we use the memleak checker to make sure we don't keep a statement in memory.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
sqlite3 will just report 0 for anything that it thinks should be numeric, or
is accessed using a numeric accessor. Postgres does not, so we need to check
for is_null before trying to read it.
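A minimal sketch of the access pattern this implies (accessor names assumed):

	if (db_column_is_null(stmt, col))
		amount = 0;
	else
		amount = db_column_u64(stmt, col);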
Signed-off-by: Christian Decker <decker.christian@gmail.com>
sqlite3 doesn't really do any validation whatsoever, and there is no
difference between 64-bit and 32-bit numbers. Postgres on the other hand gets
very upset if the size doesn't match.
This commit swaps out handwavy types with the ones that should be there :-)
Signed-off-by: Christian Decker <decker.christian@gmail.com>
This was weird right from the start, so we just split the table into integers
and blobs, so each column has a well-defined format. It is also required for
postgres not to cry about explicit casts in the `paramTypes` array.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
The first ever query to check if the version DB exists may fail. We allow
this, but we need to restart the DB transaction since postgres fails the
current transaction and rolls back any changes.
This just commits (and fails) and starts a new transaction so the rest of the
migration can continue.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
Needed to change a couple of migrations. The changes are mostly innocuous:
- changing BLOB to TEXT for short_channel_ids, which is the correct type
anyway (and sqlite3 treats them the same).
- Use `int` for version since the byte representation is checked by postgres.
- Change anything that is INT but will be bound to a u64 to BIGINT (again,
postgres checks these more carefully than sqlite3).
Two migrations were replaced with dummy values, since they are buried deep
enough, and I found no portable way of expressing `strftime()` and `INSERT OR
IGNORE`.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
Using a generated identifier with filename and line proved to be brittle since
compilers assign the __LINE__ macro differently on multi-line macro
invocations.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
This is dangerous but needed since postgres is not as forgiving about
unsatisfied foreign key constraints even while in a DB transaction.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
We used to do some of the setup work in db.c, which is now free of any
sqlite3-specific code. In addition we also switch over to fully qualified DSNs
to specify the location of the wallet.
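For example (paths illustrative; the option name assumed here is `--wallet`):

	lightningd --wallet=sqlite3:///home/user/.lightning/lightningd.sqlite3

A postgres backend would then be selected the same way, by giving a DSN with a
`postgres://` scheme instead.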
Signed-off-by: Christian Decker <decker.christian@gmail.com>
This avoids having to correlate with listpeers for the most pertinent
information.
This API predates plugins, otherwise we'd have listutxos and listpeers
and this would simply combine them appropriately. Still, it exists so
there's little reason not to make it more friendly.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Allow a user to select the utxo set that will be added to a
transaction, via the `utxos` parameter. Optional.
Format for utxos should be of the form ["txid:vout","..."]
Rather than reaching into data structures, let them register their own
callbacks. This avoids us having to expose "memleak_remove_xxx"
functions, and call them manually.
Under the hood, this is done by having a specially-named tal child of
the thing we want to assist, containing the callback.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We're about to remove them.
Includes fix to sqlite3_bind_short_channel_id to not assume `id` is a
tal object.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Now that all the users are migrated to the abstraction layer we can remove the
legacy implementation.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
We are about to delete all the `sqlite3`-specific code from `db.c` and this is
one of the last uses of the old interface.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
It's better to let the driver decide when and how to expand. It can then
report the expanded statement back to the dispatch through the
`db_changes_add` function.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
We now have a much stronger consistency check from the combination of
transaction wrapping and tal memory leak detection. Transaction wrapping ensures
that each statement is executed before the transaction is committed. The
commit is also driven by the `io_loop`, which means that it is no longer
possible for us to have statements outside of transactions, and transactions
are guaranteed to commit at the round's end.
By adding the tal-awareness we can also get a much better indication as to
whether we have un-freed statements flying around, which we can test at the
end of the round as well.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
This is likely the last part we need to completely encapsulate the part of the
sqlite3 API that we were using. Like the `db_count_changes` call, I decided to
pass in the `struct db_stmt` since really they refer to the statement that was
executed and not the db.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
These are based on top of the basic column access functions, and act as a
small type-safe wrapper that also does a bit of validation.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
This has a slight side-effect of removing the actual begin and commit
statements from the `db_write` hooks, but they are mostly redundant anyway (no
harm in grouping pre-init statements into one transaction, and we know that
each post-init call is supposed to be wrapped anyway).
Signed-off-by: Christian Decker <decker.christian@gmail.com>
These are used to do one-time initializations and wait for pending statements
before closing.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
I was hoping to get rid of these by using "ON CONFLICT" upserts, however
sqlite3 only started supporting them in version 3.24.0 which is newer than
some of our deployment targets.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
This is the first step towards being able to extract information from query
rows. Only the most basic types are exposed, the others will be built on top
of these primitives.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
For some of the query methods in the next step we need to have an idea of
whether the stmt was executed (db_step function) so let's track that
explicitly.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
These do not require the ability to iterate over the result, hence they can be
migrated already.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
These functions implement the lookup of the query, and the dispatch to the
DB-specific functions that do the actual heavy lifting.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
This gets rid of the two parallel execution paths for read-only and write
queries: by explicitly stating with each query whether it is a read-only
query, we only need to remember the ones marked as write queries.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
All drivers will have to reach into it, so put it in a place that is reachable
from the drivers, along with all other definitions.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
This is the counterpart of the annotations we did in the last few commits. It
extracts queries, passes them through a driver-specific query rewriter and
dumps them into a driver-specific query-list, along with some metadata to
facilitate processing later on. The generated query list is then registered as
a `db_config` and will be loaded by the driver upon instantiation.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
We will soon generalize the DB, so directly reaching into the `struct db`
instance to talk to the sqlite3 connection is bad anyway. This increases
flexibility and allows us to tailor the actual implementation to the
underlying DB.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
These two simple macros have a twofold use:
1) They serve as annotations for the query extraction tool to find them when
extracting queries from the C source code.
2) They replace the actual queries with names that can be used to look up the
queries in a table again, once they have been rewritten into the target SQL dialect.
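For illustration, an annotated call might look like this (the statement itself
is made up):

	stmt = db_prepare_v2(db, SQL("SELECT blobval FROM vars WHERE name = ?;"));

The extraction tool picks up the string literal at build time, while at run
time the macro result is used to look the rewritten query up in the
per-driver query list.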
Signed-off-by: Christian Decker <decker.christian@gmail.com>
`db_select_prepare` was prepending the "SELECT" part in an attempt to limit
its use to read-only statements. This leads to the queries in the code not
actually being well-formed, which we'll need in a later commit, and was also
resulting in extra allocations. This switches the behavior to just enforce a
"SELECT" prefix being present which allows us to have well-formed queries in
the code again and avoids the extra allocation.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
We need to have full DB queries that can be extracted at compile time later in
order to be able to rewrite them in other SQL dialects. In addition we had a
bit of unnecessary code-duplication in db_select and db_select_prepare. Now
the former uses the latter internally.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
These will interfere with our query extraction process later on, and they were
really separating definition from use anyway, so let's expand these field lists.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
We want to still allow incoming connections, and reestablishment of
channels, but if one tries to give us an HTLC, stall until we're
synced.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
`close` takes two optional arguments: `force` and `timeout`.
`timeout` doesn't timeout the close (there's no way to do that), just
the JSON call. `force` (default `false`) if set, means we unilaterally
close at the timeout, instead of just failing.
Timing out JSON calls is generally deprecated: that's the job of the
client. And the semantics of this are confusing, even to me! A
better API is a timeout which, if non-zero, is the time at which we
give up and unilaterally close.
The transition code is awkward, but we'll manage for the three
releases until we can remove it.
The new defaults are to unilaterally close after 48 hours.
Fixes: #2791
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Dumb programs which have a --daemon option call fork() early. This is
terrible UX since startup errors get lost: the program exits with
"success" immediately then you discover via the logs that it didn't
start at all.
However, forking late introduced a heap of problems with changing
pids. Instead, fork early but keep stderr and the parent around: if
we fail early on, the parent fails with us. We release our parent
with an explicit action just before the main loop.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
The withdraw_tx function shouldn't use it, but GCC is right it's uninitialized:
wallet/walletrpc.c: In function ‘json_prepare_tx’:
wallet/walletrpc.c:202:15: error: ‘changekey’ may be used uninitialized in this function [-Werror=maybe-uninitialized]
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This is the other origin, besides `bitcoin_tx`, where we create `bitcoin_tx`
instances, so add the context as soon as possible. Sadly I can't weave the
chainparams into the deserialization code since that'd need to change all the
generated wire code as well.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
The way we build transactions, serialize them, and compute fees depends on the
chain we are working on, so let's add some context to the transactions.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
Since we now have a couple of long-lived dependents it is time we stop
removing channels from the table once they are fully closed, and instead just
mark them as closed. This allows us to keep forwards and transactions foreign
keys intact, and it may help us debug things after the fact.
Fixes #2028
Signed-off-by: Christian Decker <decker.christian@gmail.com>
Instead of deleting the channels we will simply mark them as `CLOSED` from now
on. This is needed for some of the other tables not to end up with dangling
references that would otherwise survive the channel lifetime, e.g., forwards
and transactions.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
Direct leak of 32 byte(s) in 1 object(s) allocated from:
#0 0x7f7678ee863e in calloc (/lib/x86_64-linux-gnu/libasan.so.5+0x10c63e)
#1 0x55f8c7b0fce5 in htable_default_alloc ccan/ccan/htable/htable.c:19
#2 0x55f8c7b1064f in double_table ccan/ccan/htable/htable.c:226
#3 0x55f8c7b10b19 in htable_add_ ccan/ccan/htable/htable.c:331
#4 0x55f8c7afac63 in scriptpubkeyset_add wallet/txfilter.c:30
#5 0x55f8c7afafce in txfilter_add_scriptpubkey wallet/txfilter.c:77
#6 0x55f8c7afb05f in txfilter_add_derkey wallet/txfilter.c:91
#7 0x55f8c7aa4d67 in init_txfilter lightningd/lightningd.c:482
#8 0x55f8c7aa52d8 in main lightningd/lightningd.c:721
#9 0x7f767889ab6a in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x26b6a)
Direct leak of 16 byte(s) in 1 object(s) allocated from:
#0 0x7f05f389563e in calloc (/lib/x86_64-linux-gnu/libasan.so.5+0x10c63e)
#1 0x55cac1e6bc99 in htable_default_alloc ccan/ccan/htable/htable.c:19
#2 0x55cac1e6c603 in double_table ccan/ccan/htable/htable.c:226
#3 0x55cac1e6cacd in htable_add_ ccan/ccan/htable/htable.c:331
#4 0x55cac1e56e48 in outpointset_add wallet/txfilter.c:61
#5 0x55cac1e57162 in outpointfilter_add wallet/txfilter.c:116
#6 0x55cac1e5ea3a in wallet_utxoset_add wallet/wallet.c:2365
#7 0x55cac1deddc2 in topo_add_utxos lightningd/chaintopology.c:603
#8 0x55cac1dedeac in add_tip lightningd/chaintopology.c:620
#9 0x55cac1dee2de in have_new_block lightningd/chaintopology.c:694
#10 0x55cac1deaab0 in process_rawblock lightningd/bitcoind.c:466
#11 0x55cac1de9fb4 in bcli_finished lightningd/bitcoind.c:214
#12 0x55cac1e6f5be in destroy_conn ccan/ccan/io/poll.c:244
#13 0x55cac1e6f5de in destroy_conn_close_fd ccan/ccan/io/poll.c:250
#14 0x55cac1e7baf5 in notify ccan/ccan/tal/tal.c:235
#15 0x55cac1e7bfe4 in del_tree ccan/ccan/tal/tal.c:397
#16 0x55cac1e7c370 in tal_free ccan/ccan/tal/tal.c:481
#17 0x55cac1e6dddd in io_close ccan/ccan/io/io.c:450
#18 0x55cac1e6fcf9 in io_loop ccan/ccan/io/poll.c:449
#19 0x55cac1dfac66 in io_loop_with_timers lightningd/io_loop_with_timers.c:24
#20 0x55cac1e0156b in main lightningd/lightningd.c:822
#21 0x7f05f3247b6a in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x26b6a)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
I was working on rewriting our (somewhat chaotic) tx watching code
for 0.7.2, when I found this bug: we don't always notice the funding
tx in corner cases where more than one block is detected at
once.
This is just the one commit needed to fix the problem: it has some
unnecessary changes, but I'd prefer not to diverge too far from my
cleanup-txwatch branch.
Fixes: #2352
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Due to API instability we are disabling the RPC method for this release, but
will re-enable it after the release again.
Signed-off-by: Christian Decker <@cdecker>
We're going to need this for P2WSH scripts. Pull it out into
a common file, and adapt the sanity checks so that it allows for
either P2WSH or P2WPKH (previously it only encoded P2WPKH scripts).
Move it closer to ccan/json_out, in preparation for using that as a
replacement.
In particular:
1. Add a 'quote' field in json_add_member.
2. json_add_member now always escapes if 'quote' is true.
3. json_member_direct is exposed to allow avoiding of escaping.
4. json_add_hex can use this, so no longer needs to be in json_stream.c.
5. We don't make JSON manually, but always use helpers.
6. We now flush the stream (wake reader) only when we close it, or mark
command as pending.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
"result" should always be an object (so that we can add new fields),
so make that implicit in json_stream_success.
This makes our primitives well-formed: we previously used NULL as our
fieldname when calling the first json_object_start, which is a hack
since we're actually in an object and the fieldname is 'result' (which
was already written by json_object_start).
There were only two cases which didn't do this:
1. dev-memdump returned an array. No API guarantees on this.
2. shutdown returned a string.
I temporarily made shutdown return an empty object, which shouldn't
break anything, but I want to fix that later anyway.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
These are generalized from our internal implementations.
The main difference is that 'struct json_escaped' is now 'struct
json_escape', so we replace that immediately.
The difference between lightningd's json-writing ringbuffer and the
more generic ccan/json_out is that the latter has a better API and
handles escaping transparently if something slips through (though
it does offer direct accessors so you can mess things up yourself!).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This new parameter takes a list of outpoints (as txid:vout) and funds a channel from the corresponding utxos.
Example: fundchannel <id> 10000 normal 1 [10767f0db0e568127fffd7f70a154d4599f42d62babf63230a7c3378bfce3cb0:0, c9e040e0b5fc8c59d5e7834108fbc5583001f414dd83faf0a05cff9d1a92d32c:0]
Take into account the fee we'd have to pay if we're the funder, and
also drop to 0 if the amount is less than the smallest HTLC the peer
will accept.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This was incorrectly handled before, hence the wrapper which checks
correctness of the arguments.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
Since we add the transactions while processing the blockchain, and before we
have enough context to annotate them correctly, i.e., in the txwatches, we add
them first and then annotate them a posteriori.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
Mainly used to differentiate channel-related transactions from on-chain wallet
transactions. Will be used to filter `listtransactions` results and bundle
transactions that belong to the same channel.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
I was bumping against some blocksync performance issues with 12k+ keys, 24k+
scriptpubkeys being checked against, and migrating that list to a hashset is
an easy fix to shave off 99% of the time to process a block.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
- Related changes for the `warning` notification
Add a `bool` parameter to `log_()` and `logv()`; this flag
indicates whether we should call the `warning` notifier.
1) The process of copying the `log_book` of every peer to the `log_book` of
`ld` usually goes through `log_()` and `logv()`, and it may lead to
repeated `warning` notifications. So a `bool` that explicitly indicates
whether the `warning` notification is disabled during this call is
necessary.
2) The `LOG_INFO` and `LOG_DEBUG` levels don't need to call the
warning notifier, so set that `bool` parameter to `false` for these log levels and
only set it to `true` for `LOG_UNUSUAL`/`LOG_BROKEN`. As for `LOG_IO`,
it uses `log_io()` to log, so we needn't think about the notifier for it.
We reserve inputs when we're going to send a transaction, but we don't
unreserve them if we crash. This is most graphically demonstrated by
the txprepare case, which makes it easier to trigger.
Instead, we should query bitcoind to see whether the tx made it out or
not, as we would do manually with dev-rescan-outputs.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This allows you to prepare a tx, then release or discard it later.
Shares almost all the code with json_withdraw (which is now technically
superfluous).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We currently allocate utxos off cmd, but the next commit will persist a
wtx beyond the command which created it, breaking that assumption.
In general, a struct member should be owned by the struct itself, and
a tal context should be an explicit arg, not implicit.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We generally want to do as much validation as possible inside
parameter parsing, as that means the 'check' command detects more
erroneous uses. In this case, we can try to interpret the destination
address as soon as we encounter it.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
As a side-effect, we now only add txfilters for addresses we actually
expose, rather than always filtering for both p2sh and native segwit.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
It would always return bech32; fix that, and don't bother printing
it if they use the (new) 'all' parameter.
This API was introduced in 3e67c09d5e,
which means it wasn't in a release so no CHANGELOG entry necessary.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Keeping the uintmap ordering all the broadcastable messages is expensive:
130MB for the million-channels project. But now that we delete obsolete entries
from the store, we can have the per-peer daemons simply read that sequentially
and stream the gossip itself.
This is the most primitive version, where all gossip is streamed;
successive patches will bring back proper handling of timestamp filtering
and initial_routing_sync.
We add a gossip_state field to track what's happening with our gossip
streaming: it's initialized in gossipd, and currently always set, but
once we handle timestamps the per-peer daemon may do it when the first
filter is sent.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Encapsulating the peer state was a win for lightningd; not surprisingly,
it's even more of a win for the other daemons, especially as we want
to add a little gossip information.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
A new string field is added to the command structure and is specified at the creation of each native command, and in the JSON created by 'json_add_help_command()'.
New fields don't have to be spelled out twice.
The raw versions are called _only, so we don't miss a call
accidentally. We can rename them when we finally deprecate the old
fields.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Instead of lightningd telling us when it's ready, we ask it.
This also provides an opportunity to have a plugin hook at this point.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
The original idea is to `tal` the channel off the `ctx` (in fact, we'd like to set `ctx` to `ld`).
But we already tal the channel off `ld` in new_channel(), so `ctx` is unused.
These are always handed to subdaemons as a set, so group them. This makes
it easier to add an fd (in the next patch).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
The chainparams are needed to know the prefixes, so instead of passing down
the testnet, we pass the entire params struct.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
68fe5eacde introduced a skip in the iteration
of the available funds, which means utxos[i] may be off the end of utxos.
Reported-and-debugged-by: @nitramiz
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
The old value of 1000 sat was too small to cover the dust reserves.
This led to the situation where, when trying to open a channel with a minimal
amount, the channels got refused because they were not able to cover the
commitment fees.
For this reason the minimal capacity should be increased to e.g. 10k
satoshi, as the technical minimum that also accounts for fees and
reserves is somewhere around 6k sat.
This fixes block parsing on testnet; specifically, non-standard tx versions.
We hit a type bug in libwally (wally_get_secp_context()) which I had to
work around for the moment, and the updated libsecp adds an optional hash
function arg to the ECDH function.
Fixes: #2563
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We're going to make it async, so start by moving the core code into
invoice.c and having that directly call fail/success functions for the
htlc.
We add an extra check in fulfill_htlc() that the HTLC state is correct:
that can't happen now, but may once we're async.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We'd like to display the receive and resolution times in the forwardings
table. In order to remember the receive time we need to store it in the DB
along with the other information.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
I tried to just do gossipd, but it was uncontainable, so this ended up being
a complete sweep.
We didn't get much space saving in gossipd, even though we should save
24 bytes per node.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Node ids are pubkeys, but we only use them as pubkeys for routing and checking
gossip messages. So we're packing and unpacking them constantly, and wasting
some space and time.
This introduces a new type, explicitly the SEC1 compressed encoding
(33 bytes). We ensure its validity when we load from the db, or get it
from JSON. We still use 'struct pubkey' for peer messages, which checks
validity.
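A sketch of what the new type boils down to (field name assumed):

	/* SEC1 compressed encoding of a pubkey, kept as raw bytes. */
	struct node_id {
		u8 k[33];
	};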
Results from 5 runs, min-max(mean +/- stddev):
store_load_msec,vsz_kb,store_rewrite_sec,listnodes_sec,listchannels_sec,routing_sec,peer_write_all_sec
39475-39572(39518+/-36),2880732,41.150000-41.390000(41.298+/-0.085),2.260000-2.550000(2.336+/-0.11),44.390000-65.150000(58.648+/-7.5),32.740000-33.020000(32.89+/-0.093),44.130000-45.090000(44.566+/-0.32)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Pubkeys are not actually DER encoded, as Pieter Wuille corrected
me: it's the encoding documented in SEC 1.
Results from 5 runs, min-max(mean +/- stddev):
store_load_msec,vsz_kb,store_rewrite_sec,listnodes_sec,listchannels_sec,routing_sec,peer_write_all_sec
38922-39297(39180.6+/-1.3e+02),2880728,41.040000-41.160000(41.106+/-0.05),2.270000-2.530000(2.338+/-0.097),44.570000-53.980000(49.696+/-3),32.840000-33.080000(32.95+/-0.095),43.060000-44.950000(43.696+/-0.72)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
- add config value min_capacity_sat that replaces the magic value
min_effective_htlc_capacity = AMOUNT_MSAT(1000000)
- add config switch min_capacity_sat
wallet/test/run-wallet was failing the valgrind check; turns out
`sqlite3_expanded_sql` expects you to manage the memory of strings
it returns. from `sqlite3.h`:
** ^The string returned by sqlite3_expanded_sql(P), on the other hand,
** is obtained from [sqlite3_malloc()] and must be freed by the application
** by passing it to [sqlite3_free()].
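So the fix is essentially (variable names illustrative):

	char *expanded = sqlite3_expanded_sql(stmt);
	/* ...log or otherwise use the expanded statement... */
	sqlite3_free(expanded);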
Adding a giant IO message simply causes it to be pruned immediately,
so truncate it if it's more than 1/64 the max size.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
I was tempted to create a new db_select_stmt wrapper type, but that means
a lot of boilerplate around binding, which expects to work with db_prepare
*and* db_select_prepare.
This lets us clearly differentiate between db queries (which don't need to
go to a plugin) and db changes (which do).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
1. Rename channel_funding_locked to channel_funding_depth in
channeld/channel_wire.csv.
2. Add minimum_depth in struct channel in common/initial_channel.h and
change corresponding init function: new_initial_channel().
3. Add confirmation_needed in struct peer in channeld/channeld.c.
4. Rename channel_tell_funding_locked to channel_tell_depth.
5. Call channel_tell_depth even if depth < minimum, and still call
lockin_complete in channel_tell_depth, iff depth > minimum_depth.
6. channeld ignores channel_funding_depth unless it's >
minimum_depth (except to update the billboard, and to set
peer->confirmation_needed = minimum_depth - depth).
- Introduce DB update of `channel` values: `feerate_base` and `feerate_ppm`
- Make first use of the new context-related DB migration
- Add `struct channel` members of the same name
- Use struct values instead of config when committing new channels
This will add the testnet config (copy'n'pasted) to test_utils.c,
so that the test stubs can set these.
Alternatively, to reduce code duplication, we can move the
testnet_config and mainnet_config from options.c to options.h.
Allow a function as well as (or instead of!) an sql statement. That
will let us do things like set per-channel values to the global
defaults, for example.
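A rough sketch of what this enables (struct layout and names are assumptions):

	struct migration {
		const char *sql;	/* may be NULL */
		void (*func)(struct lightningd *ld, struct db *db); /* may be NULL */
	};

	static const struct migration migrations[] = {
		{"ALTER TABLE channels ADD feerate_base INTEGER;", NULL},
		{NULL, migrate_fill_in_feerate_defaults},	/* hypothetical callback */
	};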
Since we remove the NULL termination, the final entry is ARRAY_SIZE()-1
not ARRAY_SIZE()-2.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
More efficient to measure the ARRAY_SIZE(), which is a runtime
constant. We move it into the unit test.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Below this code appears:
	if (current != orig)
		db_exec(__func__, db,
			"INSERT INTO db_upgrades VALUES (%i, '%s');",
			orig, version());
But since the loop pre-increments current, this is always true. I wondered
why there were so many duplicates in my db_upgrades table!
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This field was used by `pay` to hold the bolt11 description if the bolt11
string used `h` to hash the description (which nobody ever did). If the
`h` field wasn't present, it could contain anything, as it wasn't checked.
It's really useful to have a label for payments (eg. '1 Cuban'), but adding
yet-another option would be painful, so we simply rename 'description'
to 'label' except inside the db.
This means we need to do some tricky parameter parsing to handle array
and keyword JSON arguments, but only until we remove the old name.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Without this, there's no proof of payment, since it is the signed invoice
that makes the receipt valid.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
In particular this matches the case of `their_unilateral/to_us` outputs, which
were missing their addresses so far.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
In order to avoid having to ask the HSM for public keys to
their_unilateral/to_us outputs we just store the `scriptPubkey` with the UTXO,
which can then be converted to the P2WPKH address.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
We want to disallow using unconfirmed outputs by default, so making the
default 1 confirmation seems a good idea. This also matches `bitcoind`'s
minimum confirmation requirement.
Doing so however breaks some of our tests, so I used `minconf=0` for the
breaking tests and added a new test specifically for the `minconf` parameter
for `fundchannel`.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
This allows us to specify that an output must have been confirmed before the
given maximum height, which in turn lets us require a minimum number of
confirmations for an output to be selected.
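The relation assumed here: an output confirmed at height h has (tip - h + 1)
confirmations, so requiring at least minconf confirmations is the same as
requiring h <= tip - minconf + 1, i.e. maxheight = tip - minconf + 1.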
Signed-off-by: Christian Decker <decker.christian@gmail.com>
We need to do it in various places, but we shouldn't do it lightly:
the primitives are there to help us get overflow handling correct.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
As a side-effect of using amount_msat in gossipd/routing.c, we explicitly
handle overflows and don't need to pre-prune ridiculous-fee channels.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Using param_tok is generally deprecated, as it doesn't give any sanity checking
for the JSON 'check' command. So make param_wtx usable directly, and
also make it have a struct amount_sat.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
They're generally used pass-by-copy (unusual for C structs, but
convenient since they're basically u64s), and all possibly problematic
operations return WARN_UNUSED_RESULT bool to make you handle the
over/underflow cases.
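A small sketch of the style this enforces (values illustrative, inside a
function returning bool):

	struct amount_msat payment = AMOUNT_MSAT(1000), fee = AMOUNT_MSAT(2), total;
	if (!amount_msat_add(&total, payment, fee))
		return false;	/* overflow is an explicit, checked case */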
The new #include in json.h means bolt11.c sees the amount.h definition
of MSAT_PER_BTC, so delete its local version.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Final step for the `peer_connected` hook, we parse the result and act
accordingly. Currently we just close the underlying connection, but we
may want to clean up peers that did not end up with a channel.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
This hook is used to let a plugin decide whether we should continue
with the connection normally, or whether we should be dropping the
connection. Possible use-cases include policy enforcement (dropping
connections based on previous interactions), draining a node by
allowing only peers with an active channel to reconnect, or
temporarily preventing a channel from making progress.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
We were not correctly allocating the `db->filename`, failing to copy the
null-terminator. This was causing an error when reopening the database after
the call to `fork()`.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
Reported-by: Sean McNally <@sfmcnally>
Changelog-fixed: Fixed a crash when running in daemon-mode due to db filename overrun
We need to still accept it when parsing the database, but this flag
should allow upgrade testing for devs building on top
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We store it in a strmap. This means we call the jsonrpc handler earlier,
so all callers need to call param() before they do anything else; only
json_listaddrs and json_help needed fixing.
Plugins still use '[usage]' for now.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Add the change output to owned_txfilter so its entry in the db will
get a confirmation_height when detected in a block by filter_block_txs.
Before this commit, after a 'withdraw' command, 'listfunds' would
not show our change outputs as confirmed.
Modified the log message in wallet_extract_owned_outputs to
append 'CONFIRMED' when it is called with a blockheight arg,
to make the distinction between the 1st call (when adding an owned output
to the db) and the 2nd call (when it is confirmed in a block).
We had a bug 0ba547ee10 caused by
short_channel_id overflow. If we'd caught this, we'd have terminated
the peer instead of crashing, so add appropriate checks.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
I upgraded my node with --disable-compat, and a heap of channels closed like:
CHANNELD_NORMAL:We disagree on short_channel_ids: I have 557653x0x1351, you say 557653x2373x1",
This is because the scids are strings in the databases, and it failed to parse
them properly.
Now we'll not start if that happens.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Christian and I both unwittingly used it in form:
*tal_arr_expand(&x) = tal(x, ...)
Since '=' isn't a sequence point, the compiler can (and does!) cache
the value of x, handing it to tal *after* tal_arr_expand() moves it
due to tal_resize().
The new version is somewhat less convenient to use, but doesn't have
this problem, since the assignment is always evaluated after the
resize.
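Roughly (illustrative only; the exact new signature may differ):

	/* Old form: '=' is not a sequence point, so the compiler may read x for
	 * the tal() call before tal_arr_expand() resizes (and moves) it. */
	*tal_arr_expand(&x) = tal(x, struct foo);

	/* New form: the value expression is evaluated after the resize. */
	tal_arr_expand(&x, tal(x, struct foo));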
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We currently use 'all-zeroes' as 'unknown', but NULL is more natural
even if we have to send it as all-zeroes over the wire due to
expressiveness limitations in our generation code.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Obviously the Facebook relationship status joke was a bit subtle, but I've
continued it anyway because I'm especially susceptible to Dad jokes.
Suggested-by: @niftynei
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This causes a compiler warning if we don't do something with the
result (hopefully return immediately!).
We use was_pending() to ignore the result in the case where we
complete a command in a callback (thus really do want to ignore
the result).
This actually fixes one bug: we didn't return after command_fail
in json_getroute with a bad seed value.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Usually, this means they return 'command_param_failed()' if param()
fails, and changing 'command_success(); return;' to 'return
command_success()'.
Occasionally, it's more complex: there's a command_its_complicated()
for the case where we can't exactly determine what the status is,
but it should be considered a last resort.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Handlers of a specific form are both designed to be used as callbacks
for param(), and also dispose of the command if something goes wrong.
Make them return the 'struct command_result *' from command_failed(),
or NULL.
Renaming them just makes sense: json_tok_XXX is used for non-command-freeing
parsers too.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
These routines free the 'struct command': a common coding error is not
to return immediately.
To catch this, we make them return a non-NULL 'struct command_result
*', and we're going to make the command handlers return the same (to
encourage 'return command_fail(...)'-style usage).
We also provide two sources for external use:
1. command_param_failed() when param() fails.
2. command_its_complicated() for some complex cases.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
json_escaped.[ch], param.[ch] and jsonrpc_errors.h move from lightningd/
to common/. Tests moved too.
We add a new 'common/json_tok.[ch]' for the common parameter parsing
routines which a plugin might want, taking them out of
lightningd/json.c (which now only contains the lightningd-specific
ones).
The rest is mainly fixing up includes.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This (will) avoid the plugin having to walk back from the params object
as it currently does.
No code changes; I removed UNUSED and UNNEEDED labels from the other
parameters though (as *every* json_rpc callback needs to call param()
these days, they're *always* used).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This is prep work for when we sign htlc txs with
SIGHASH_SINGLE|SIGHASH_ANYONECANPAY.
We still deal with raw signatures for the htlc txs at the moment, since
we send them like that across the wire, and changing that was simply too
painful (for the moment?).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
For onchaind we need to remove globals from memleak consideration;
we also change the htlc pointer to an htlc copy, which simplifies
things as well.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We probably also want to call secp_randomise/wally_secp_randomize here
too, and since these calls all call setup_tmpctx, it probably makes
sense to have a helper function to do all that. Until thats done, I
modified the tests so grepping will show the places where the sequence
of calls is repeated.
Signed-off-by: Jon Griffiths <jon_p_griffiths@yahoo.com>
We promote 'struct json_stream' to contain the membuf; we only attach
the json_stream to the command when we actually call
json_stream_success / json_stream_fail.
This means we are closer to 'struct json_stream' being an independent
layer; the tests are already modified to use it directly to create
JSON.
This is also the first step toward re-enabling non-serial command
execution.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We always have an addr entry in the db (though it may be an ephemeral socket
the peer connected in from). We don't have to handle a NULL address.
While we're there, simplify new_peer not to take the features args;
the caller who cares immediately updates the features anyway.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
If there are two HTLCs with the same preimage, lightningd would always
find the first one. By including the id in the `struct htlc_stub`
it's both faster (normal HTLC lookup) and allows lightningd to detect
that onchaind wants to fail both of them.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Such an API is required for when we stream it directly. Almost all our
handlers fit this pattern already, or nearly do.
We remove new_json_result() in favor of explicit json_stream_success()
and json_stream_fail(), but still allowing command_fail() if you just
want a simple all-in-one fail wrapper.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
It's the only user of them, and it's going to get optimized.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
And use wallet_forward_status_in_db() everywhere in db code.
And clean up extra CHANGELOG.md entry (looks like rebase error?)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
The left join should make sure we still get the results, but referencing the
fields and/or attempting to write them to the JSON-RPC result will cause
unforeseen problems. So just omit them if we forgot something.
Gossipd provided a generic "get endpoints of this scid" and we only
use it in one place: to look up htlc forwards. But lightningd just
assumed that one would be us.
Instead, provide a simpler API which only returns the peer node
if any, and now we handle it much more gracefully.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
If another channel has set the optional `htlc_maximum_msat` field,
we should correctly parse that field and respect it when drawing up
routes for payments.
Noted by @cdecker, the term 'local' is grossly overused, and the hout
preimage is basically only used as a sanity check (though I've just put
a FIXME there for now).
Also eliminated spurious blank line which crept into wallet.c.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
failoutchannel tells us which channel to send an update for (specifically
for temporary_channel_failure); but we don't save it into the db. It's
not even clear we should, since it's a corner case and the channel might
not even exist when we come back.
So on db restore, change such errors to WIRE_TEMPORARY_NODE_FAILURE
which doesn't need an update.
We also don't memset it to 0 in the normal case (we only access it if
failcode has the UPDATE bit set) so valgrind will trigger if we're
wrong.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We don't save them to the database, so fix things up as we load them.
Next patch will actually save them into the db, and this will become
COMPAT code.
Also: call htlc_in_check() with NULL on db load, as otherwise it aborts
internally.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We split json_invoice(), as it now needs to round-trip to the gossipd,
and uniqueness checks need to happen *after* gossipd replies to avoid
a race.
For every candidate channel gossipd gives us, we check that it's in
state NORMAL (not shutting down, not still waiting for lockin), that
it's connected, and that it has capacity. We then choose one with
probability weighted by excess capacity, so larger channels are more
likely.
As a side effect of this, we can tell if an invoice is unpayable (no
channels have sufficient incoming capacity) or difficult (no *online*
channels have sufficient capacity), so we add those warnings.
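A sketch of the capacity-weighted choice described above (helper names are
assumptions, not the actual code):

	static size_t weighted_pick(const u64 *excess_capacity, size_t n)
	{
		u64 total = 0, r;

		for (size_t i = 0; i < n; i++)
			total += excess_capacity[i];
		if (total == 0)
			return 0;	/* callers only pass channels with capacity */
		r = pseudorand_u64() % total;	/* assumed RNG helper */
		for (size_t i = 0; i < n; i++) {
			if (r < excess_capacity[i])
				return i;
			r -= excess_capacity[i];
		}
		return n - 1;
	}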
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We do this a lot, and had boutique helpers in various places. So add
a more generic one; for convenience it returns a pointer to the new
end element.
I prefer the name tal_arr_expand to tal_arr_append, since it's up to
the caller to populate the new array entry.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>