v2 channel open uses a different method to derive the channel_id, so now
we save it to the database so that we don't have to remember how to
derive it for each one.
Includes a migration for existing channels.
Note that other directories were explicitly depending on the generated
file, instead of relying on their (already existing) dependency on
$(LIGHTNINGD_HSM_CLIENT_OBJS), so we remove that.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Otherwise valgrind gets upset when we *run* the statements: better
to get a backtrace when we bind, so we can tell which field it is!
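A rough sketch of the idea (helper name is illustrative, not the actual
wallet/db.c binding function): assert at bind time so the backtrace points at
the offending field.

    #include <assert.h>
    #include <stddef.h>
    #include <sqlite3.h>

    /* Illustrative only: fail loudly the moment a NULL blob is bound,
     * rather than when valgrind trips over it at execution time. */
    static void bind_blob_checked(sqlite3_stmt *stmt, int pos,
                                  const void *data, size_t len)
    {
        assert(data != NULL || len == 0);
        sqlite3_bind_blob(stmt, pos, data, (int)len, SQLITE_TRANSIENT);
    }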
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We've never hit this: we already check them on insert, and it's slowing
down some operations unnecessarily.
$ time lightning-cli -R --network=regtest --lightning-dir /tmp/ltests-k8jhvtty/test_pay_stress_1/lightning-1/ listpays > /dev/null
Before:
real 0m1.781s
user 0m0.127s
sys 0m0.013s
After:
real 0m1.545s
user 0m0.124s
sys 0m0.024s
Also, the raw listsendpays drops from 0.983s to 0.676s.
(With -O3 -flto, listsendpays is 0.416s, listpays 0.971s).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We need to remember this in the db (it's a P2WSH for option_anchor_outputs),
and we need to set nSequence to 1 to spend it.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This is the same way we handle option_static_remotekey, which
is also sticky (if negotiated at opening time, it always applies).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This is what txsend does, except we have a PSBT, so we have
to change the db interface to take a wally_tx.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Includes:
psbt: Use renamed functions for new wally version
psbt: Set the transaction directly to avoid script workarounds
psbt: Use low-S grinding when computing signatures
tx: Use wally_tx_clone from libwally now that it's exported
Signed-off-by: Jon Griffiths <jon_p_griffiths@yahoo.com>
The way we use PSBTs to sign things requires that we have the
scriptpubkey available on the UTXO so we can populate the witness-utxo
field with it.
This causes problems if we don't already have the scriptpubkey cached in
the database, as in *some* cases we require a round trip to the HSM to
populate them.
To get over this hump, we backfill any and all missing scriptpubkey
information for the UTXOs that we hold in our wallet.
This will allow us to clean up the NULL handling of missing
scriptpubkeys.
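Roughly, for each wallet UTXO with a NULL scriptpubkey the backfill has to
re-derive the key at the stored keyindex and rebuild the script. A sketch for
the native P2WPKH case (helper name and surrounding plumbing are illustrative;
only the libwally calls and the script layout are real):

    #include <stdint.h>
    #include <wally_core.h>
    #include <wally_bip32.h>
    #include <wally_crypto.h>

    /* Build the 22-byte P2WPKH scriptpubkey (OP_0 <20-byte key hash>) for
     * the key at this UTXO's recorded keyindex. */
    static int scriptpubkey_for_keyindex(const struct ext_key *bip32_base,
                                         uint32_t keyindex,
                                         unsigned char script[22])
    {
        struct ext_key key;
        if (bip32_key_from_parent(bip32_base, keyindex,
                                  BIP32_FLAG_KEY_PUBLIC, &key) != WALLY_OK)
            return 0;
        script[0] = 0x00; /* OP_0: witness version 0 */
        script[1] = 0x14; /* push 20 bytes */
        return wally_hash160(key.pub_key, sizeof(key.pub_key),
                             script + 2, 20) == WALLY_OK;
    }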
We're about to add a migration that requires access to the bip32_key
in order to calculate missing scriptpubkeys.
Prior to this patch, we don't have access to the bip32 key in the db
migrations, as it's set on the wallet only after the db migrations are
run.
Here we patch it through so that every migration can access it.
PSETs have slightly different requirements. The witness_utxo needs
the asset tag + values, and these should also be added to the PSET
struct separately as well. To do this, we create a new 'init' method for
elements inputs, which takes care of the elements specific things.
We erase peer data after the last channel close transaction for that
peer is 100 blocks deep. We were failing to finish the migration because
the peer_id lookup on these was failing.
Now we ignore any channel with a null peer_id.
Fixes #3768
We update the `last_tx` in `channels` to be psbt format, instead
of a linearized transaction.
We need the amount of the input populated, which we have since
this is the 'funding' amount. Ideally we'd also populate the funding
scriptPubkey, but to do that we'd need to access the HSM module to fetch
our local funding pubkey, which isn't initialized at the time that the
database migrations are run.
Since the only field the HSM uses currently when signing these is the
amount field, it's ok to just leave it out.
needs a test!
The current plan for coin movements involves tagging
origination/destination HTLCs with a separate tag from 'routed' HTLCs
(which pass through our node). In order to do this, we need a persistent
flag on incoming HTLCs as to whether or not we are the final destination.
We're going to change our internal structure next, so this is preparation.
We populate existing errors with temporary node failures, for simplicity.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
I really want a type which means "I am a wrapped onion reply" as separate
from "I am a normal wire msg". Currently both user u8 *, and I got very
confused trying to figure out where each one was an unwrapped error msg,
or where it still needed (un)wrapping.
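Something along these lines (the actual definition may differ):

    #include <ccan/short_types/short_types.h>   /* for u8 */

    /* A distinct type, so the compiler keeps wrapped onion replies apart
     * from plain wire messages. */
    struct onionreply {
        u8 *contents;
    };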
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
The optimistic lock prevents multiple instances of c-lightning making
concurrent modifications to the database. That would be unsafe as it messes up
the state in the DB. The optimistic lock is implemented by checking whether a
gated update on the previous value of the `data_version` actually results in
an update. If that's not the case the DB has been changed under our feet.
The lock provides linearizability of DB modifications: if a database is
changed under the feet of a running process, that process will `abort()`, which
from a global point of view is as if it had crashed right after the last
successful commit. Any process that also changed the DB must've started
between the last successful commit and the unsuccessful one since otherwise
its counters would not have matched (which would also have aborted that
transaction). So this reduces all the possible timelines to an equivalent
where the first process died, and the second process recovered from the DB.
This is not that interesting for `sqlite3` where we are also protected via the
PID file, but when running on multiple hosts against the same DB, e.g., with
`postgres`, this protection becomes important.
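In sqlite3 terms the check boils down to something like the following (table
and helper names are illustrative; the real code goes through the db
abstraction):

    #include <sqlite3.h>
    #include <stdlib.h>

    static void data_version_incr(sqlite3 *db, int *data_version)
    {
        sqlite3_stmt *stmt;
        sqlite3_prepare_v2(db,
            "UPDATE vars SET intval = intval + 1"
            " WHERE name = 'data_version' AND intval = ?;",
            -1, &stmt, NULL);
        sqlite3_bind_int(stmt, 1, *data_version);
        sqlite3_step(stmt);
        sqlite3_finalize(stmt);
        /* Zero rows updated: someone else modified the DB since our last
         * commit, so abort rather than clobber their state. */
        if (sqlite3_changes(db) != 1)
            abort();
        (*data_version)++;  /* keep the in-memory counter in sync */
    }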
Changelog-Added: DB: Optimistic locking prevents instances from running concurrently against the same database, providing linear consistency to changes.
This increments the `data_version` upon committing dirty transactions, reads
the last data_version upon startup, and tracks the number in memory in
parallel to the DB (see next commit for rationale).
Changelog-Changed: JSON-RPC: Added a `data_version` field to the `db_write` hook which returns a numeric transaction counter.
The upgrade here is a bit tricky: we map the two values into the
feerate_state. This is trivial if they're both the same, but if
they're different we don't know exactly what state they're in (this
being the source of the bug!).
So, we assume that they have received the update and not acked it,
as that would be the normal case.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This is in preparation for partial payments. For existing payments,
partid is 0 (to match the corresponding payment).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This is in preparation for partial payments. For existing payments,
partid is 0 (arbitrarily) and total_msat is msatoshi.
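The migration itself is essentially just the following (column names taken
from this text; the real statements and table may differ):

    static const char *partial_payment_migrations[] = {
        "ALTER TABLE payments ADD partid BIGINT;",
        "UPDATE payments SET partid = 0;",
        "ALTER TABLE payments ADD total_msat BIGINT;",
        "UPDATE payments SET total_msat = msatoshi;",
    };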
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
In a future version, we will use features to insist that payers
provide the secret. In transition, we may have old invoices which
didn't insist on that, so we need to know this on a per-invoice basis.
Not sure if I got the right syntax for adding an empty blob though!
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
1. Printed form is always "[<nodeid>-]<prefix>: <string>"
2. "jcon fd %i" becomes "jsonrpc #%i".
3. "jsonrpc" log is only used once, and is removed.
4. "database" log prefix is use for db accesses.
5. "lightningd(%i)" becomes simply "lightningd" without the pid.
6. The "lightningd_" prefix is stripped from subd log prefixes, and pid removed.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Changed: Logging: formatting made uniform: [NODEID-]SUBSYSTEM: MESSAGE
Changelog-Removed: `lightning_` prefixes removed from subdaemon names, including in listpeers `owner` field.
The case where this is needed is when the wallet had a forwarded payment
somewhere between commits 66a47d2 (which started tracking forwardings) and
d901304 (which added the `received_time` column). This just emulates the
behavior of sqlite3 for postgres as well.
Signed-off-by: Christian Decker <@cdecker>
Checking whether we access a null field is ok, but should we crash right
away? Probably not. This demotes the access to a warning on sqlite3 and lets
it continue. We can look for occurrences and fix them as they come up, and
then re-arm the asserts once we've addressed all cases.
We were implicitly relying on sqlite3 behavior that returns the zero-value for
nulled fields when accessing them. This adds the same behavior explicitly to
the DB abstraction in order to reduce `db_column_is_null` checks in the logic,
but still make it evident what is happening here.
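Conceptually (the real accessors are the db_column_* family on struct db_stmt;
plain sqlite3 is used here only to keep the sketch short):

    #include <sqlite3.h>
    #include <stdint.h>

    static uint64_t column_u64_or_zero(sqlite3_stmt *stmt, int col)
    {
        /* What sqlite3 did implicitly for NULL fields, now spelled out. */
        if (sqlite3_column_type(stmt, col) == SQLITE_NULL)
            return 0;
        return (uint64_t)sqlite3_column_int64(stmt, col);
    }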
Fixes https://github.com/fiatjaf/mcldsp/issues/1
Signed-off-by: Christian Decker <@cdecker>
`shutdown_scriptpubkey[REMOTE]` is the original remote_shutdown_scriptpubkey;
`shutdown_scriptpubkey[LOCAL]` is the script used for the "to-local" output on `close`. By default it is generated from `final_key_idx`;
Store `shutdown_scriptpubkey[LOCAL]` in the wallet.
sqlite3 was forgiving, postgres isn't, so let's make sure we use the strictest
field type possible, relaxing when rewriting.
The commit consists just of the following mapping
- INTEGER -> BIGSERIAL if it is the primary key
- INTEGER -> BIGINT if it is an amount or a reference to a primary key
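For a hypothetical table this looks like:

    /* Before: primary key and amount were both plain INTEGER. */
    static const char *before =
        "CREATE TABLE outputs (id INTEGER PRIMARY KEY, value INTEGER);";
    /* After: primary keys become BIGSERIAL, amounts and references BIGINT. */
    static const char *after =
        "CREATE TABLE outputs (id BIGSERIAL PRIMARY KEY, value BIGINT);";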
Signed-off-by: Christian Decker <decker.christian@gmail.com>
This was already done in `db_step` but `db_count_changes` and
`db_last_insert_id` also rely on the statement being executed. Furthermore we
now check that the statement was executed before freeing it, so it can't
happen that we dispose of a statement we meant to execute but forgot.
The combination of these could be used to replace the pending_statement
tracking based on lists, since we now make sure to execute all statements and
we use the memleak checker to make sure we don't keep a statement in memory.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
sqlite3 doesn't really do any validation whatsoever, and there is no
difference between 64bit and 32bit numbers. Postgres on the other hand gets
very upset if the size doesn't match.
This commit swaps out handwavy types with the ones that should be there :-)
Signed-off-by: Christian Decker <decker.christian@gmail.com>
This was weird right from the start, so we just split the table into integers
and blobs, so each column has a well-defined format. It is also required for
postgres not to cry about explicit casts in the `paramTypes` array.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
The first ever query, which checks the DB version, may fail. We allow
this, but we need to restart the DB transaction since postgres fails the
current transaction and rolls back any changes.
This just commits (and fails) and starts a new transaction so the rest of the
migration can continue.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
Needed to change a couple of migrations. The changes are mostly innocuous:
- changing BLOB to TEXT for short_channel_ids, which is the correct type
anyway; sqlite3 treats them the same.
- Use `int` for version since the byte representation is checked by postgres.
- Change anything that is INT, but will be bound to u64 to BIGINT (again
postgres checks these more carefully than sqlite3).
Two migrations were replaced with dummy values, since they are buried deep
enough, and I found no portable way of expressing `strftime()` and `INSERT OR
IGNORE`.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
Using a generated identifier with filename and line proved to be brittle since
compilers assign the __LINE__ macro differently on multi-line macro
invocations.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
This is dangerous but needed since postgres is not as forgiving about
unsatisfied foreign key constraints even while in a DB transaction.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
We used to do some of the setup work in db.c, which is now free of any
sqlite3-specific code. In addition we also switch over to fully qualified DSNs
to specify the location of the wallet.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
We're about to remove them.
Includes fix to sqlite3_bind_short_channel_id to not assume `id` is a
tal object.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Now that all the users are migrated to the abstraction layer we can remove the
legacy implementation.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
It's better to let the driver decide when and how to expand. It can then
report the expanded statement back to the dispatch through the
`db_changes_add` function.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
We now have a much stronger consistency check from the combination of
transaction wrapping and tal memory leak detection. Transaction wrapping ensures
that each statement is executed before the transaction is committed. The
commit is also driven by the `io_loop`, which means that it is no longer
possible for us to have statements outside of transactions and transactions
are guaranteed to commit at the round's end.
By adding the tal-awareness we can also get a much better indication as to
whether we have un-freed statements flying around, which we can test at the
end of the round as well.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
This is likely the last part we need to completely encapsulate the part of the
sqlite3 API that we were using. Like the `db_count_changes` call I decided to
pass in the `struct db_stmt` since really they refer to the statement that was
executed and not the db.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
These are based on top of the basic column access functions, and act as a
small type-safe wrapper that also does a bit of validation.
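For example (a sketch: the real wrappers sit on struct db_stmt and the
project's amount types; plain sqlite3 keeps the example self-contained):

    #include <assert.h>
    #include <sqlite3.h>
    #include <stdint.h>

    struct amount_msat { uint64_t millisatoshis; };

    static struct amount_msat column_amount_msat(sqlite3_stmt *stmt, int col)
    {
        struct amount_msat msat;
        /* Validate that we are reading the type we expect. */
        assert(sqlite3_column_type(stmt, col) == SQLITE_INTEGER);
        msat.millisatoshis = (uint64_t)sqlite3_column_int64(stmt, col);
        return msat;
    }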
Signed-off-by: Christian Decker <decker.christian@gmail.com>
This has a slight side-effect of removing the actual begin and commit
statements from the `db_write` hooks, but they are mostly redundant anyway (no
harm in grouping pre-init statements into one transaction, and we know that
each post-init call is supposed to be wrapped anyway).
Signed-off-by: Christian Decker <decker.christian@gmail.com>
These are used to do one-time initializations and wait for pending statements
before closing.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
I was hoping to get rid of these by using "ON CONFLICT" upserts, however
sqlite3 only started supporting them in version 3.24.0 which is newer than
some of our deployment targets.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
This is the first step towards being able to extract information from query
rows. Only the most basic types are exposed, the others will be built on top
of these primitives.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
For some of the query methods in the next step we need to have an idea of
whether the stmt was executed (db_step function) so let's track that
explicitly.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
These do not require the ability to iterate over the result, hence they can be
migrated already.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
These functions implement the lookup of the query, and the dispatch to the
DB-specific functions that do the actual heavy lifting.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
All drivers will have to reach into it, so put it in a place that is reachable
from the drivers, along with all other definitions.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
This is the counterpart of the annotations we did in the last few commits. It
extracts queries, passes them through a driver-specific query rewriter and
dumps them into a driver-specific query-list, along with some metadata to
facilitate processing later on. The generated query list is then registered as
a `db_config` and will be loaded by the driver upon instantiation.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
We will soon generalize the DB, so directly reaching into the `struct db`
instance to talk to the sqlite3 connection is bad anyway. This increases
flexibility and allows us to tailor the actual implementation to the
underlying DB.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
`db_select_prepare` was prepending the "SELECT" part in an attempt to limit
its use to read-only statements. This led to the queries in the code not
actually being well-formed, which we'll need in a later commit, and also
resulted in extra allocations. This switches the behavior to just enforce a
"SELECT" prefix being present, which allows us to have well-formed queries in
the code again and avoids the extra allocation.
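i.e. something like (illustrative):

    #include <assert.h>
    #include <string.h>

    /* Insist the caller wrote out the full, well-formed query. */
    static void check_is_select(const char *query)
    {
        assert(strncmp(query, "SELECT", strlen("SELECT")) == 0);
    }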
Signed-off-by: Christian Decker <decker.christian@gmail.com>
We need to have full DB queries that can be extracted at compile time later in
order to be able to rewrite them in other SQL dialects. In addition we had a
bit of unnecessary code-duplication in db_select and db_select_prepare. Now
the former uses the latter internally.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
Dumb programs which have a --daemon option call fork() early. This is
terrible UX since startup errors get lost: the program exits with
"success" immediately then you discover via the logs that it didn't
start at all.
However, forking late introduced a heap of problems with changing
pids. Instead, fork early but keep stderr and the parent around: if
we fail early on, the parent fails with us. We release our parent
with an explicit action just before the main loop.
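A compressed sketch of the scheme (names are illustrative; the real code also
keeps stderr open until the parent is released):

    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/wait.h>

    static int parent_fd = -1;

    static void daemonize_but_wait(void)
    {
        int fds[2];
        pid_t child;

        if (pipe(fds) != 0)
            exit(1);
        child = fork();
        if (child > 0) {
            char ready;
            close(fds[1]);
            /* Parent: wait for the child to say "ready", or to die. */
            if (read(fds[0], &ready, 1) == 1)
                _exit(0);           /* child reached its main loop */
            waitpid(child, NULL, 0);
            _exit(1);               /* child failed during startup */
        }
        close(fds[0]);
        parent_fd = fds[1];         /* released just before the main loop */
    }

    static void release_parent(void)
    {
        char ready = 1;
        if (parent_fd != -1) {
            (void)write(parent_fd, &ready, 1);
            close(parent_fd);
            parent_fd = -1;
        }
    }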
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
These are generalized from our internal implementations.
The main difference is that 'struct json_escaped' is now 'struct
json_escape', so we replace that immediately.
The difference between lightningd's json-writing ringbuffer and the
more generic ccan/json_out is that the latter has a better API and
handles escaping transparently if something slips through (though
it does offer direct accessors so you can mess things up yourself!).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Mainly used to differentiate channel-related transactions from on-chain wallet
transactions. Will be used to filter `listtransactions` results and bundle
transactions that belong to the same channel.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
We'd like to display the receive and resolution times in the forwardings
table. In order to remember the receive time we need to store it in the DB
along with the other information.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
Node ids are pubkeys, but we only use them as pubkeys for routing and checking
gossip messages. So we're packing and unpacking them constantly, and wasting
some space and time.
This introduces a new type, explicitly the SEC1 compressed encoding
(33 bytes). We ensure its validity when we load from the db, or get it
from JSON. We still use 'struct pubkey' for peer messages, which checks
validity.
Results from 5 runs, min-max(mean +/- stddev):
store_load_msec,vsz_kb,store_rewrite_sec,listnodes_sec,listchannels_sec,routing_sec,peer_write_all_sec
39475-39572(39518+/-36),2880732,41.150000-41.390000(41.298+/-0.085),2.260000-2.550000(2.336+/-0.11),44.390000-65.150000(58.648+/-7.5),32.740000-33.020000(32.89+/-0.093),44.130000-45.090000(44.566+/-0.32)
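The new type is essentially just the raw 33 bytes (sketch; the real definition
lives in common/node_id.h):

    #include <stdint.h>

    /* SEC1 compressed encoding, kept as-is instead of a parsed pubkey. */
    struct node_id {
        uint8_t k[33];
    };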
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Pubkeys are not actually DER encoded; Pieter Wuille corrected
me: it's the encoding documented in SEC 1.
Results from 5 runs, min-max(mean +/- stddev):
store_load_msec,vsz_kb,store_rewrite_sec,listnodes_sec,listchannels_sec,routing_sec,peer_write_all_sec
38922-39297(39180.6+/-1.3e+02),2880728,41.040000-41.160000(41.106+/-0.05),2.270000-2.530000(2.338+/-0.097),44.570000-53.980000(49.696+/-3),32.840000-33.080000(32.95+/-0.095),43.060000-44.950000(43.696+/-0.72)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>