This simplifies things, and means it's always in the database. Our
previous approach of creating it on the fly had holes when it was
created for onchaind, causing us to use a new one every time we
restarted.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Our testing also reveals a bug: we start lightningd and shut it down
before fully processing the blockchain, so we don't set
last_processed_block. Fix that by setting it immediately once we have
a block: worst case it goes backwards a little.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
When we already knew about an output we would stop scanning the remaining
outputs. Known outputs happen whenever we extract from our own transactions
and then extract again from blocks. We also would not update if the first
update failed.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
Will be used later to filter for outputs we are interested in, and
trigger db updates with them.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
Each state (effectively, each daemon) has two slots: a permanent slot,
used if something permanent happens (usually a failure), and a transient
slot which summarizes what's happening right now.
Uncommitted channels only have a transient slot, by their very nature.
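As a rough illustration only (the name and types here are invented for
the sketch, not the actual lightningd structures), the two slots per
state could look something like this:

    struct status_slots_sketch {
        /* Set once when something permanent (usually a failure) happens. */
        const char *permanent;
        /* Summarizes what is happening right now; updated freely. */
        const char *transient;
    };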
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
And now we can finally do the db upgrade to remove any OPENINGD
channels once, since we never put them back.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
It's giant, but it's encapsulating at least. It is called from the wallet
code when loading channels, or from the opening code when converting
an uncommitted_channel.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Now any struct channel is a genuine channel, so the following fields are
always valid:
1. funding_txid: doesn't need to be a pointer.
2. our_msatoshi: doesn't need to be a pointer.
3. last_sig: doesn't need to be a pointer.
4. channel_info: doesn't need to be a pointer.
In addition, 'last_tx' is always valid.
The main effect is to remove a whole heap of branches from the wallet code.
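To illustrate the shape of the change (stand-in types, not the real
definitions), the wallet code goes from NULL-checked pointers to plain
embedded fields:

    #include <stdint.h>

    /* Stand-in types for the sketch; the real ones live elsewhere. */
    struct bitcoin_txid_sketch { uint8_t sha[32]; };
    struct channel_info_sketch { int dummy; };

    /* Before: pointers meant NULL checks scattered through wallet code. */
    struct channel_before {
        struct bitcoin_txid_sketch *funding_txid;
        uint64_t *our_msatoshi;
        struct channel_info_sketch *channel_info;
    };

    /* After: every struct channel is a genuine channel, so embed the fields. */
    struct channel_after {
        struct bitcoin_txid_sketch funding_txid;
        uint64_t our_msatoshi;
        struct channel_info_sketch channel_info;
    };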
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Since we create new entries from wallet_channel_insert(), there's no
need for the branches. And indeed, many wallet functions can be
static.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We derive the seed from this, so it needs to be unique, but using
rowid forced us to put the channel into the db early, before it
was ready.
Instead, use a counter to ensure uniqueness, initialized when we load
existing peers. This doesn't need to touch the database at all.
As we now have only two places where the channel is committed (the
funder and fundee paths), we create a new explicit
'wallet_channel_insert()' function: 'wallet_channel_save()' now just
updates.
Note that this also fixes some weirdness in
wallet_channels_load_active: we strangely avoided loading channels in
CLOSINGD_COMPLETE (which fortunately was a transient state, so it's
unlikely anyone hit this). Note that since the lines above already
delete all the OPENINGD channels, we now simply load them all.
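A minimal sketch of the resulting split, assuming an in-memory counter
seeded from the highest existing id when loading peers (names are
illustrative, not the real wallet functions):

    #include <assert.h>
    #include <stdint.h>

    /* Initialized from the largest id seen at load time, never reused. */
    static uint64_t next_channel_dbid;

    struct channel_sketch { uint64_t dbid; };

    /* Called from exactly two places: the funder and fundee commit paths. */
    static void wallet_channel_insert_sketch(struct channel_sketch *chan)
    {
        assert(chan->dbid == 0);
        chan->dbid = ++next_channel_dbid;
        /* ... INSERT INTO channels ... */
    }

    /* Everything else just updates the already-inserted row. */
    static void wallet_channel_save_sketch(struct channel_sketch *chan)
    {
        assert(chan->dbid != 0);
        /* ... UPDATE channels SET ... WHERE id = chan->dbid ... */
    }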
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Adds a simple check that compares the genesis blockhash from the
chainparams against the blockhash that the wallet was created
with. The wallet is network-specific, so mixing is always a bad idea.
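The check itself is just a hash comparison; a hedged sketch with
illustrative names:

    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    struct sha256_double_sketch { uint8_t sha[32]; };

    /* Refuse to open a wallet whose stored genesis hash does not match the
     * genesis hash of the network we were started for. */
    static bool wallet_network_matches_sketch(
        const struct sha256_double_sketch *wallet_genesis,
        const struct sha256_double_sketch *chainparams_genesis)
    {
        return memcmp(wallet_genesis->sha, chainparams_genesis->sha,
                      sizeof(wallet_genesis->sha)) == 0;
    }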
Signed-off-by: Christian Decker <decker.christian@gmail.com>
ZmnSCPxj queried the unilateral close case, so make that clearer.
Christian raised concerns about existing channels, so make it clear
what we're doing there too.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We only need to do that if it's possible there's something to find:
either we have an unspent output from a unilateral close, or we've
ever handed out an address.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
With a fallback depending on chainparams: this means the first upgrade
will be slow, but after that it'll be fast.
Fixes: #990
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We usually did this, but sometimes they were named after what they did,
rather than what they cleaned up.
There are still a few exceptions:
1. I didn't bother creating destroy_xxx wrappers for htable routines
which already existed.
2. Sometimes destructors really are used for side-effects (eg. to simply
mark that something was freed): these are clearer with boutique names.
3. Generally destructors are static, but they don't need to be: in some
cases we attach a destructor then remove it later, or only attach
to *some* cases. These are best with qualifiers in the destroy_<type>
name.
Suggested-by: @ZmnSCPxj
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This provides a sanity check that we are in sync, and also keeps the
logic in the program and out of the SQL.
Since the destructor now doesn't clean up the peer, there are some
wider changes to be made when cleaning up. Most notably we create
lots of channels in run-wallet.c and they previously freed the peer:
now we need to free the peer explicitly, so we need to free the channels first.
Suggested-by: @cdecker
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Much like the database: peer contains id and address, channel contains
per-channel information. Where we create a channel, we always create
the peer too.
For the moment, peer->log and channel->log coexist side-by-side, to
reduce some of the churn.
Note that this changes the API to dev-forget-channel: if we have more
than one channel, we insist they specify the short-channel-id.
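Schematically (simplified, illustrative fields only), the in-memory
split mirrors the peers/channels tables:

    #include <stdint.h>

    struct peer_sketch {
        uint64_t dbid;
        /* node id, wire address, ... */
    };

    struct channel_sketch {
        uint64_t dbid;
        struct peer_sketch *peer;  /* creating a channel always creates its peer */
        /* funding txid, state, last_tx, ... */
    };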
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Both when we forget about an opening peer, and at startup. We're
going to be relying on this, and the next patch, as we refactor
peer/channel handling to mirror the db.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We still write the channel, since some of the channel setup parameters
depend on `chan->id` being set; if we set `chan->id` later, signatures
fail. This prevents OPENINGD channels from showing up after
restarting.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
We were sideloading it, which is awkward; now it's a field that we can
actually use in the code.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
Maintaining it was always fraught, since the command could go away
if the JSON RPC died. Most recently, it was broken again on shutdown
(see below).
In future we may allow pay commands to block on previous payments, so
it won't even be a 1:1 mapping. Generalize it: keep commands in a
simple list and do a lookup when a payment fails/succeeds.
Valgrind error file: valgrind-errors.5732
==5732== Invalid read of size 8
==5732== at 0x4149FD: remove_cmd_from_hout (pay.c:292)
==5732== by 0x468BAB: notify (tal.c:237)
==5732== by 0x469077: del_tree (tal.c:400)
==5732== by 0x4690C7: del_tree (tal.c:410)
==5732== by 0x46948A: tal_free (tal.c:509)
==5732== by 0x40F1EA: main (lightningd.c:362)
==5732== Address 0x69df148 is 1,512 bytes inside a block of size 1,544 free'd
==5732== at 0x4C2EDEB: free (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==5732== by 0x469150: del_tree (tal.c:421)
==5732== by 0x46948A: tal_free (tal.c:509)
==5732== by 0x4198F2: free_htlcs (peer_control.c:1281)
==5732== by 0x40EBA9: shutdown_subdaemons (lightningd.c:209)
==5732== by 0x40F1DE: main (lightningd.c:360)
==5732== Block was alloc'd at
==5732== at 0x4C2DB8F: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==5732== by 0x468C30: allocate (tal.c:250)
==5732== by 0x4691F7: tal_alloc_ (tal.c:448)
==5732== by 0x40A279: new_htlc_out (htlc_end.c:143)
==5732== by 0x41FD64: send_htlc_out (peer_htlcs.c:397)
==5732== by 0x41511C: send_payment (pay.c:388)
==5732== by 0x41589E: json_sendpay (pay.c:513)
==5732== by 0x40D9B1: parse_request (jsonrpc.c:600)
==5732== by 0x40DCAC: read_json (jsonrpc.c:667)
==5732== by 0x45C706: next_plan (io.c:59)
==5732== by 0x45D1DD: do_plan (io.c:387)
==5732== by 0x45D21B: io_ready (io.c:397)
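A minimal sketch of the list-plus-lookup approach described above
(illustrative names, using a plain linked list rather than the real
lightningd structures):

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    struct sha256_sketch { uint8_t u8[32]; };

    struct pay_command_sketch {
        struct pay_command_sketch *next;
        struct sha256_sketch payment_hash;
        /* json command, ... */
    };

    static struct pay_command_sketch *pay_commands;

    /* Looked up only when a payment fails/succeeds; a missing entry just
     * means the command already went away (e.g. the JSON RPC died). */
    static struct pay_command_sketch *
    find_pay_command_sketch(const struct sha256_sketch *hash)
    {
        for (struct pay_command_sketch *pc = pay_commands; pc; pc = pc->next)
            if (memcmp(&pc->payment_hash, hash, sizeof(*hash)) == 0)
                return pc;
        return NULL;
    }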
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
The previous tests didn't make sense anyway, but I think they were trying
to exclude onchain channels.
We delete completely forgotten channels anyway now, so we don't need
such testing.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
From test_reconnect_sender_add1:
lightningd(13643):BROKEN: backtrace: wallet/wallet.c:1537 (wallet_payment_set_status) 0x561c91b03080
lightningd(13643):BROKEN: backtrace: lightningd/pay.c:67 (payment_failed) 0x561c91ac4f99
lightningd(13643):BROKEN: backtrace: lightningd/peer_htlcs.c:132 (fail_out_htlc) 0x561c91acf627
lightningd(13643):BROKEN: backtrace: lightningd/peer_htlcs.c:321 (hout_subd_died) 0x561c91acfb62
When a payment fails, we call wallet_payment_set_status; this is perfectly
possible before it's been committed.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
For performance, we delay entering the 'wallet_payment' into the db
until we actually commit to the HTLC (when we have to touch the DB
anyway).
This opens a race where we can try to pay twice, and since it's not in
the database yet, we don't notice the duplicate.
So remove the temporary payment field from htlc_out, which was always
an uncomfortable hack, and make the wallet code abstract over the
deferred entry a little by maintaining an 'unstored_payments' list
and incorporating that in results.
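Sketch of the deferred-store shape (names are illustrative): payments
live on a list until the HTLC commit flushes them, and lookups consult
the list as well as the db, which is what closes the duplicate-pay race.

    #include <stdbool.h>
    #include <string.h>

    struct wallet_payment_sketch {
        struct wallet_payment_sketch *next;  /* on the unstored list until flushed */
        unsigned char payment_hash[32];
        /* msatoshi, destination, status, ... */
    };

    struct wallet_sketch {
        struct wallet_payment_sketch *unstored_payments;
        /* sqlite3 handle, ... */
    };

    /* Duplicate detection consults the unstored list first, then the db. */
    static bool wallet_payment_exists_sketch(const struct wallet_sketch *w,
                                             const unsigned char hash[32])
    {
        for (const struct wallet_payment_sketch *p = w->unstored_payments;
             p; p = p->next)
            if (memcmp(p->payment_hash, hash, 32) == 0)
                return true;
        /* ... otherwise fall back to SELECT ... FROM payments WHERE hash=? ... */
        return false;
    }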
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We need these to decode any returned errors.
We remove it from struct pay_command too, and load directly from the db
when we need it.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We should be saving this, as it's our proof of payment. Also, we return
it if they try to pay again.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Paid invoices need to know how much was actually paid: both for the case
where no 'msatoshi' amount was specified, and for the normal case, where
clients are permitted to overpay in order to help them disguise their
payments.
While we migrate the db, we leave this field as 0 for old paid
invoices. This is unhelpful for accounting, but at least clearly
indicates what happened if we find this in the wild.
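The migration itself would be along these lines (table, column and
state names here are assumptions for the sketch, not necessarily the
real schema):

    /* Sketch only: add the column, and mark invoices paid before the
     * upgrade with 0 so it's obvious the received amount was never
     * recorded. */
    static const char *invoice_migration_sketch[] = {
        "ALTER TABLE invoices ADD msatoshi_received INTEGER;",
        "UPDATE invoices SET msatoshi_received = 0 WHERE state = 1;", /* 1 == PAID, say */
    };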
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This reuses the same code internally, and also now means that we deal
correctly with "any" msatoshi invoices: the old code would return a
'msatoshi' of 0 in that case.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We were using int32 for msatoshi values for outputs, which would
overflow for values larger than 2^31.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
This is necessary to grab the their_unilateral/to-us outputs, since
they aren't being harvested by `onchaind`.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
This gives us a lower bound on where the funding tx could be.
In theory, it could be lower than this if we get a reorganization, but
in practice this is already a 1-block buffer (since we can't get into
current block, only the next one).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
It's only used for tests, but it's better to use wallet_channels_load_active
like the real code does.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
It's just a sha256_double, but importantly when we convert it to a
string (in type_to_string, which is used in logging) we use
bitcoin_txid_to_hex() so it's reversed as people expect.
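For example (stand-in types), the only real content is the hash, plus
the convention that it prints byte-reversed, the way bitcoind and block
explorers display txids:

    #include <stdint.h>
    #include <stdio.h>

    struct sha256_double_sketch { uint8_t sha[32]; };
    struct bitcoin_txid_sketch { struct sha256_double_sketch shad; };

    /* Writes 64 hex chars plus NUL: dest must hold at least 65 bytes. */
    static void bitcoin_txid_to_hex_sketch(const struct bitcoin_txid_sketch *txid,
                                           char *dest)
    {
        for (int i = 0; i < 32; i++)
            sprintf(dest + i * 2, "%02x", txid->shad.sha[31 - i]);
    }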
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
It looks like we were missing the address when inserting into the peers table.
This will insert the address formatted by fmt_wireaddr, which happens to
include the IP and port.
This is fine, since parse_wireaddr has been updated to parse ports in IP
address strings.
Signed-off-by: William Casarin <jb55@jb55.com>
Accuracy improvements:
1. We assumed the output was a p2wpkh, but it can be user-supplied now.
2. We assumed we always had change; remove this for wallet_select_all.
Calculation out-by-one fixes:
1. We need to add 1 byte (4 sipa) for the input count.
2. We need to add 1 byte (4 sipa) for the output count.
3. We need to add 1 byte (4 sipa) for the output script length for each output.
4. We need to add 1 byte (4 sipa) for the input script length for each input.
5. We need to add 1 byte (4 sipa) for the PUSH opcode for each P2SH input.
The results are now a slight overestimate (due to guessing 73 bytes
for signature, whereas they're 71 or 72 in practice).
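Putting the pieces together, a hedged sketch of the estimate in weight
units ("sipa": non-witness bytes count 4 each). The constants are
approximations for illustration only; it assumes plain p2wpkh inputs
and a single output script length, unlike the real wallet code:

    #include <stddef.h>

    static size_t estimate_p2wpkh_spend_weight_sketch(size_t num_inputs,
                                                      size_t num_outputs,
                                                      size_t output_script_len)
    {
        size_t weight = 4 * (4 + 4);   /* version + locktime */
        weight += 2;                   /* segwit marker + flag, 1 weight each */
        weight += 4 * (1 + 1);         /* input count + output count (1 byte each) */
        /* Per input: txid(32) + index(4) + script length(1) + empty script + sequence(4). */
        weight += num_inputs * 4 * (32 + 4 + 1 + 4);
        /* Per output: amount(8) + script length(1) + the script itself. */
        weight += num_outputs * 4 * (8 + 1 + output_script_len);
        /* Per input witness: item count(1) + sig len(1) + ~73-byte sig
         * (a slight overestimate) + key len(1) + 33-byte pubkey. */
        weight += num_inputs * (1 + 1 + 73 + 1 + 33);
        return weight;
    }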
Fixes: #458
Reported-by: Jonas Nick @jonasnick
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We weren't incrementing the `col` for the `local_shutdown_idx` field,
which meant that all following fields were incorrect. I removed the
`col` computation and opted for absolute indices instead, since they
are way less brittle. Just remember to add new fields to the query at
the end so we don't have to shift too often :-)
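In practice that means spelling out the positions when extracting,
e.g. (the column layout here is purely illustrative):

    #include <sqlite3.h>
    #include <stdint.h>

    /* A running `col++` silently breaks every later field if one increment
     * is missed, so spell the positions out instead. */
    static void load_channel_fields_sketch(sqlite3_stmt *stmt)
    {
        int64_t id                 = sqlite3_column_int64(stmt, 0);
        int64_t state              = sqlite3_column_int64(stmt, 1);
        int64_t local_shutdown_idx = sqlite3_column_int64(stmt, 2);
        int64_t next_htlc_id       = sqlite3_column_int64(stmt, 3);
        (void)id; (void)state; (void)local_shutdown_idx; (void)next_htlc_id;
        /* New fields get appended to the SELECT so existing indices stay valid. */
    }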
Reported-by: William Casarin @jb55
Signed-off-by: Christian Decker <decker.christian@gmail.com>
The wire protocol uses this, on the assumption that we'll never see feerates
in excess of 4294967 satoshi per kiloweight.
So let's use that consistently internally as well.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Used by the JSON-RPC for the listtransfers call. Currently does not
support any form of paging.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
We save the location where the transaction was started, in case we try to nest.
There's now no error case; db_exec_mayfail() is the only one that can fail.
This means the tests need to override fatal() if they want to intercept
these errors.
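One way to capture the call site is a macro wrapper, roughly like this
(a sketch, not necessarily the exact form used in the patch):

    struct db;
    void db_begin_transaction_(struct db *db, const char *location);

    /* Record where the transaction was opened, so that if someone tries
     * to nest we can call fatal() and point at the original call site. */
    #define db_stringify_1(x) #x
    #define db_stringify(x) db_stringify_1(x)
    #define db_begin_transaction(db) \
        db_begin_transaction_((db), __FILE__ ":" db_stringify(__LINE__))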
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We're going to be always in a transaction soon.
Note the rollback we used to do was an optimization: the utxo destructors
would already clean up the new UTXOs in the database.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This is the only case where we actually rely on the db to ensure we don't
do something twice: don't error out if it fails.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Nesting is provided by only actually performing the outermost
transaction and simulating the nested ones. This still allows us to
ensure on lower levels that we are in the context of a transaction
without having to resort to explicitly keeping track of it in the
calling code.
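A minimal sketch of simulated nesting with a depth counter
(illustrative names; the real code also carries error handling):

    #include <assert.h>
    #include <stdio.h>

    struct db_sketch {
        int in_transaction;  /* nesting depth */
    };

    static void db_exec_sketch(struct db_sketch *db, const char *sql)
    {
        /* Stand-in for the real statement execution. */
        (void)db;
        printf("exec: %s\n", sql);
    }

    /* Only the outermost begin/commit touches the database; inner calls
     * just track depth, so lower layers can still assert they're inside
     * a transaction. */
    static void db_begin_transaction_sketch(struct db_sketch *db)
    {
        if (db->in_transaction++ == 0)
            db_exec_sketch(db, "BEGIN TRANSACTION;");
    }

    static void db_commit_transaction_sketch(struct db_sketch *db)
    {
        assert(db->in_transaction > 0);
        if (--db->in_transaction == 0)
            db_exec_sketch(db, "COMMIT;");
    }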
Signed-off-by: Christian Decker <decker.christian@gmail.com>
In addition, we set some of the test values to a pattern instead
of just `memset`ting them to 0, which may hide some crossed lines.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
We use these quite often and it is cumbersome having to do these
simple conversions inline, so just expose pseudo-sqlite3 methods to
bind and extract from/to a stmt.
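For instance, wrappers of roughly this shape (names invented for the
sketch) hide the casts and memcpy noise:

    #include <sqlite3.h>
    #include <stdint.h>
    #include <string.h>

    struct sha256_double_sketch { uint8_t sha[32]; };

    static void sqlite3_bind_sha256_double_sketch(sqlite3_stmt *stmt, int col,
                                                  const struct sha256_double_sketch *h)
    {
        sqlite3_bind_blob(stmt, col, h->sha, sizeof(h->sha), SQLITE_TRANSIENT);
    }

    /* Assumes the column is a non-NULL 32-byte blob. */
    static void sqlite3_column_sha256_double_sketch(sqlite3_stmt *stmt, int col,
                                                    struct sha256_double_sketch *h)
    {
        memcpy(h->sha, sqlite3_column_blob(stmt, col), sizeof(h->sha));
    }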
Signed-off-by: Christian Decker <decker.christian@gmail.com>
"near \"AND\": syntax error"
This was caught by the "always keep errors for db_commit_transaction".
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We'd like to not keep them in memory, and instead retrieve them on-demand when
`onchaind` is launched. This uses the `channel_htlcs` table as backing
but only fetches the minimal necessary information.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
This is a necessary evil since at the time we load a `struct htlc_out`
associated with a channel we might not have loaded the `struct
htlc_in` that it depends on, so we defer the rewiring until we have
loaded all HTLCs for all channels. At that point rewiring MUST work,
otherwise we report a failure.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
While loading HTLCs from the database we might not yet have all the
incoming HTLCs loaded when loading a dependent htlc_out. So we defer
the wiring of the HTLCs until we are sure we have them loaded.
This is also the first step towards keeping that association only in
the database, since otherwise we cannot selectively load channels from
the DB.
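Schematically, the deferred wiring is a second pass over everything
loaded (illustrative names and plain lists, not the real lightningd
data structures):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    struct htlc_in_sketch {
        struct htlc_in_sketch *next;
        uint64_t dbid;
    };

    struct htlc_out_sketch {
        struct htlc_out_sketch *next;
        uint64_t origin_htlc_dbid;   /* from the channel_htlcs table */
        struct htlc_in_sketch *in;   /* filled in by the rewiring pass */
    };

    /* After all channels' HTLCs are loaded, rewiring MUST succeed. */
    static bool rewire_htlcs_sketch(struct htlc_in_sketch *ins,
                                    struct htlc_out_sketch *outs)
    {
        for (struct htlc_out_sketch *o = outs; o; o = o->next) {
            if (!o->origin_htlc_dbid)
                continue;  /* locally-originated payment: nothing to wire up */
            o->in = NULL;
            for (struct htlc_in_sketch *i = ins; i; i = i->next) {
                if (i->dbid == o->origin_htlc_dbid) {
                    o->in = i;
                    break;
                }
            }
            if (!o->in)
                return false;  /* missing incoming HTLC: report failure */
        }
        return true;
    }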
Signed-off-by: Christian Decker <decker.christian@gmail.com>
This was causing some compilation trouble on 32bit systems, see #256.
Reported-by: @shsmith
Signed-off-by: Christian Decker <decker.christian@gmail.com>
Also, we split out the more sophisticated json_add helpers to avoid pulling
everything into lightning-cli, and unify the routines that print struct
short_channel_id (it's ':', not '/', too).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
The peer->seed needs to be unique for each channel, since bitcoin
pubkeys and the shachain are generated from it. However we also need
to guarantee that the same seed is generated for a given channel every
time, e.g., upon a restart. The DB channel ID is guaranteed to be
unique, and will not change throughout the lifetime of a channel, so
we simply mix it in, instead of a separate increasing counter.
We also needed to make sure to store in the DB before deriving the
seed, in order to get an ID assigned by the DB.
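A hedged sketch of the property being described (the real derivation
goes through the node's key-derivation machinery; names here are
illustrative): the per-channel db id is part of the input, so the same
channel always gets the same seed across restarts, while different
channels get different seeds.

    #include <stdint.h>
    #include <string.h>

    struct secret_sketch { uint8_t data[32]; };

    /* Stand-in for whatever KDF is actually used (e.g. hkdf-sha256). */
    void kdf_sketch(const void *input, size_t inputlen, struct secret_sketch *out);

    static void derive_channel_seed_sketch(const struct secret_sketch *base_seed,
                                           const uint8_t peer_nodeid[33],
                                           uint64_t channel_dbid,
                                           struct secret_sketch *seed)
    {
        uint8_t input[32 + 33 + sizeof(uint64_t)];

        memcpy(input, base_seed->data, 32);
        memcpy(input + 32, peer_nodeid, 33);
        memcpy(input + 32 + 33, &channel_dbid, sizeof(channel_dbid));
        kdf_sketch(input, sizeof(input), seed);
    }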
Signed-off-by: Christian Decker <decker.christian@gmail.com>
This is the big one, and it's completely anticlimactic: it loads into
memory all channels that have reached opening and are not marked as
closingd_complete; that's it.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
They happen to advance at the same pace, but mixing them may have
unforeseen consequences, and I have done so a few times already, so
this explicitly separates them.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
This was supposed to be a temporary solution anyway, and I had a
rather annoying mixup between peer_id and unique_id, the latter of
which is actually a connection identifier.
If we killed the daemon without performing any commits we ended up with
a 0 instead of the expected UINT48_MAX.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
And store it in peer->last_tx/peer->last_sig like all other places;
that way we broadcast it if we need to.
Note: the removal of tmpctx in funder_channel() is needed because we
use txs[0], which was allocated off tmpctx.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Definitely not as nice as it could be, but it works for now. This is
primarily intended as a simple dump method that just saves everything
to the database. We will later use smaller incremental updates to
update specific things. wallet_channel_save serves both to insert and
to update.
This needed a rather annoying hack since sqlite3 can only store
integers up to 2^63, so I just squash it down/invert it, and hope that
we never ever have more than 2^63 updates.
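One way to express the trick (a sketch; the exact squashing in the
patch may differ): reinterpret the u64 as sqlite3's signed 64-bit
integer on the way in and undo it on the way out, which round-trips
correctly as long as the counter stays below 2^63.

    #include <sqlite3.h>
    #include <stdint.h>

    static sqlite3_int64 u64_to_db_sketch(uint64_t v)
    {
        return (sqlite3_int64)v;
    }

    static uint64_t u64_from_db_sketch(sqlite3_int64 v)
    {
        return (uint64_t)v;
    }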
Split the detection of outputs that we own into a separate
`wallet_extract_owned_outputs` function, and use it when the broadcast
succeeds to re-add the change output to the database.
The wallet should really be the container for anything bip32-related, so
I'd like to slowly wean off `ld->bip32_base` in favor of
`ld->wallet->bip32_base`.
We'll re-use them a few times, so having them in a central location is
nice. We also fix a bug that was unreserving UTXO entries upon free,
instead of promoting them to being spent.
Since we have a simple way to query the database for UTXOs, we can
simplify some of the coin selection logic. That gets rid of the
in-memory list of UTXOs.