This was always the intent, but now we have to reconstruct it from the
disparate fields.
This means `option_anchor_outputs` is now redundant.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
I couldn't figure out why my new SQL query was returning 0 rows,
and it was because we were ignoring errors.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Fun story. We're changing onchaind to hand txs to us, and we will
construct them and do the broadcast for it. lightningd tells onchaind
the witness it used (with flags to indicate which fields were
signatures and so should be ignored) so onchaind can recognize the tx
when/if it is mined.
And when onchaind was waiting for a CLTV delay, it wouldn't tell
lightningd yet, but would wait until the parent was sufficiently deep.
But this caused bugs!
In particular, on replay, onchaind would see transactions which it
hadn't sent yet. This was not a problem before, as onchaind had
created the tx, even if it hadn't told lightningd to broadcast it, so
it recognized the variant when it came in. When we're relying on
lightningd to tell us what the tx will look like, this doesn't work
any more.
The cause of this is that we fire off txowatches ("this output was
spent!") while we process blocks, and only fire off txwatches ("this
tx increased depth") once all the current blocks are processed. Often
this didn't matter, since we replay messages to onchaind from the
database, *but* we trim the last few blocks on restart (or, if there's
a small reorg while we're stopped), and we can hit this misordering.
Changing our topology code to only ever process one block at a time
would be a solution, but slows down catchup (and tests, where we often
mine a run of blocks).
So, this seems like a premature optimization, but it's really
required! And in future, lightningd can use this knowledge of pending
transactions to combine them in more clever ways.
Note that if a tx is valid at block N, we broadcast it once we see
block N-1, to get it in the mempool for block N.
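A minimal sketch of that height check, with hypothetical names (this is
not the actual onchaind/lightningd code):
```
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Assumed/illustrative helper: a tx that only becomes valid at block N
 * is broadcast as soon as block N-1 is seen, so it can land in block N. */
static bool ready_to_broadcast(uint32_t current_height, uint32_t valid_at_height)
{
        return current_height + 1 >= valid_at_height;
}

int main(void)
{
        printf("%d\n", ready_to_broadcast(99, 100));  /* 1: broadcast now */
        printf("%d\n", ready_to_broadcast(98, 100));  /* 0: too early */
        return 0;
}
```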
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
There are cases (difficult to reproduce with a test) where
a payment will fail one time and succeed later.
As far as I understand, in this case the groupid field of the payment
is the same, and the only thing that changes is the status, so
our logic inside delpay is ambiguous and it is not
possible to delete the payment, as described in https://github.com/ElementsProject/lightning/issues/6114
A sequence of commands that demonstrates the problem:
```
$ lc -k listpays payment_hash=H
{
   "pays": [
      {
         "bolt11": "I",
         "destination": "redacted",
         "payment_hash": "H",
         "status": "complete",
         "created_at": redacted,
         "completed_at": redacted,
         "preimage": "P",
         "amount_msat": "redacted",
         "amount_sent_msat": "redacted"
      }
   ]
}
$ lc delpay H complete
{
   "code": 211,
   "message": "Payment with hash H has failed status but it should be complete"
}
```
In this case, delpay is not able to delete the payment because
listpays returns only the succeeded attempt. Running
listsendpays shows the following result, where our delpay logic
gets stuck, because it insists that all the payments stored
in the database have the status specified by the user:
```
➜ VincentSSD clightning --testnet listsendpays -k payment_hash=7fc74bedbb78f2f3330155d919a54e730cf19c11bc73e96c027f5cd4a34e53f4
{
   "payments": [
      {
         "id": 322,
         "payment_hash": "7fc74bedbb78f2f3330155d919a54e730cf19c11bc73e96c027f5cd4a34e53f4",
         "groupid": 1,
         "partid": 1,
         "destination": "030b686a163aa2bba03cebb8bab7778fac251536498141df0a436d688352d426f6",
         "amount_msat": 300,
         "amount_sent_msat": 1664,
         "created_at": 1679510203,
         "completed_at": 1679510205,
         "status": "failed",
         "bolt11": "lntb1pjpkj4xsp52trda39rfpe7qtqahx8jjplhnj3tatxy8rh6sc6afgvmdz7n0llspp50lr5hmdm0re0xvcp2hv3nf2wwvx0r8q3h3e7jmqz0awdfg6w206qdp0w3jhxarfdenjqargv5sxgetvwpshjgrzw4njqun9wphhyaqxqyjw5qcqp2rzjqtp28uqy77te96ylt7ek703h4ayldljsf8rnlztgf3p8mg7pd0qzwf8a3yqqpdqqqyqqqqt2qqqqqqgqqc9qxpqysgqgeya2lguaj6sflc4hx2d89jvah8mw9uax4j77d8rzkut3rkm0554x37fc7gy92ws9l76yprdva2lalrs7fqjp9lcx40zuty8gca0g5spme3dup"
      },
      {
         "id": 323,
         "payment_hash": "7fc74bedbb78f2f3330155d919a54e730cf19c11bc73e96c027f5cd4a34e53f4",
         "groupid": 1,
         "partid": 2,
         "destination": "030b686a163aa2bba03cebb8bab7778fac251536498141df0a436d688352d426f6",
         "amount_msat": 300,
         "amount_sent_msat": 3663,
         "created_at": 1679510205,
         "completed_at": 1679510207,
         "status": "failed"
      },
      {
         "id": 324,
         "payment_hash": "7fc74bedbb78f2f3330155d919a54e730cf19c11bc73e96c027f5cd4a34e53f4",
         "groupid": 1,
         "partid": 3,
         "destination": "030b686a163aa2bba03cebb8bab7778fac251536498141df0a436d688352d426f6",
         "amount_msat": 300,
         "amount_sent_msat": 3663,
         "created_at": 1679510207,
         "completed_at": 1679510209,
         "status": "failed"
      },
      {
         "id": 325,
         "payment_hash": "7fc74bedbb78f2f3330155d919a54e730cf19c11bc73e96c027f5cd4a34e53f4",
         "groupid": 1,
         "partid": 4,
         "destination": "030b686a163aa2bba03cebb8bab7778fac251536498141df0a436d688352d426f6",
         "amount_msat": 300,
         "amount_sent_msat": 4663,
         "created_at": 1679510209,
         "completed_at": 1679510221,
         "status": "complete",
         "payment_preimage": "43f746f2d28d4902489cbde9b3b8f3d04db5db7e973f8a55b7229ce774bf33a7"
      }
   ]
}
```
This commit solves the problem by forcing the delete query in the
database to specify the status too, working around this kind of
ambiguous case.
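A minimal sketch of the shape of that query, using sqlite3 directly; the
table and column names are illustrative, not the actual wallet schema:
```
#include <sqlite3.h>

/* Delete only the attempts whose status matches what the user asked
 * for, so a "complete" delete no longer trips over leftover "failed"
 * parts of the same payment (illustrative schema). */
static int delete_payment(sqlite3 *db, const char *payment_hash, const char *status)
{
        sqlite3_stmt *stmt;
        int ret;

        ret = sqlite3_prepare_v2(db,
                                 "DELETE FROM payments"
                                 " WHERE payment_hash = ? AND status = ?;",
                                 -1, &stmt, NULL);
        if (ret != SQLITE_OK)
                return ret;
        sqlite3_bind_text(stmt, 1, payment_hash, -1, SQLITE_STATIC);
        sqlite3_bind_text(stmt, 2, status, -1, SQLITE_STATIC);
        ret = sqlite3_step(stmt);
        sqlite3_finalize(stmt);
        return ret == SQLITE_DONE ? SQLITE_OK : ret;
}
```
The point is simply that the status becomes part of the WHERE clause, so
only attempts with the matching status are removed.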
Fixes: f52ff07558 (lightningd: allow delpay to delete a specific payment.)
Reported-by: Antoine Poinsot <darosior@protonmail.com>
Link: https://github.com/ElementsProject/lightning/issues/6114
Signed-off-by: Vincenzo Palazzo <vincenzopalazzodev@gmail.com>
Co-Developed-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Fixed: delpay: be more pedantic about the delete logic by
deleting payments by status directly in the database.
We don't cover three common patterns:
1. Optional integers (db_col_u64 has different form from structs)
2. Optional strings.
3. Optional array fields.
But it does neaten things and reduce the scope for cut&paste errors in
the common "if not-NULL, tal and assign" case.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We actually have an assertion that there are no channels remaining when
we delete peers, so this is confusing!
Actually removing the constraint is db-specific and deeply non-trivial.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
PSBTv2 support is quite low in the ecosystem, so having a call to convert
should be useful when dealing with log messages and the like, since they'll often be in v2.
Changelog-Added: Added setpsbtversion RPC to aid debugging and compatibility
Libwally update breaks compatibility, so
we do this in one large step.
Changelog-Changed: JSON-RPC: elements network PSET now only supports PSETv2.
Changelog-Added: JSON-RPC: PSBTv2 supported for fundchannel_complete, openchannel_update, reserveinputs, sendpsbt, signpsbt, withdraw and unreserveinputs parameter psbt, openchannel_init and openchannel_bump parameter initialpsbt, openchannel_signed parameter signed_psbt and utxopsbt parameter utxopsbt
`struct lightningd` is not completely initialized, so we added a
"migration_context" which only had some of the fields. But we ended
up handing in `struct lightningd` anyway, so I don't think this
complexity is worthwhile.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
At the moment only lightningd needs it, and this avoids missing any
places where we do bip32 derivation.
This uses an hsm capability, which means we're backwards compatible with older
hsmds.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Added: Protocol: we now always double-check bitcoin addresses are correct (no memory errors!) before issuing them.
It's needed as the db and wallet are being set up (db migrations), so
it's simpler this way to always use ld->bip32_base for the next patch.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Importantly, adds the version number at the *front* to help future
parsing.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Header from folded patch 'fix-hsm-check-pubkey.patch':
fixup! hsmd: capability addition: ability to check pubkeys.
e778ebb9af ("wallet: only log broken if we
have duplicate scids in channels.") downgraded the fatal() to a broken
log message, but the user reports it still won't start up.
Perhaps they're hitting the fatal() outside the loop? (And we're
not getting that output).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Missed a DEFAULT in the db clause.
Feb 15 16:02:12 citrine lightningd[902093]: Accessing a null column lease_satoshi/15 in query SELECT funding_tx_id, funding_tx_outnum, funding_feerate, funding_satoshi, our_funding_satoshi, funding_psbt, last_tx, last_sig, funding_tx_remote_sigs_received, lease_expiry, lease_commit_sig, lease_chan_max_msat, lease_chan_max_ppt, lease_blockheight_start, lease_fee, lease_satoshi FROM channel_funding_inflights WHERE channel_id = ? ORDER BY funding_feerate
Fixes #6016
People are upgrading to 22.11.1 now, and in some configurations, like the one
mentioned in the issue, we should
put some information in the log when we are not able to upgrade.
Signed-off-by: Vincenzo Palazzo <vincenzopalazzodev@gmail.com>
It's not likely but possible that the node's settings will shift between a
start and an RBF; we persist the setting to the database so we don't
lose it.
Right now holding onto it forever is kind of extra but maybe we'll
reuse the setting for splices? idk.
Should this be a channel type??
Technically we don't need this info after the channel opens, but for any
subsequent RBF (and maybe splice?) we need to remember what the
open/accept peer signaled.
We need to be able to only use non-wrapped inputs for v2/interactive tx
protocol.
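A minimal sketch of the kind of filter this enables; the script layout is
standard Bitcoin, but the function and its use here are illustrative, not
the actual wallet selection code:
```
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define OP_HASH160 0xa9
#define OP_EQUAL   0x87

/* A P2SH scriptPubKey is exactly: OP_HASH160 <20-byte hash> OP_EQUAL.
 * With the (assumed) nonwrapped filter set, any UTXO paying to such a
 * script (i.e. a p2sh-wrapped segwit output of ours) would be skipped. */
static bool is_p2sh(const uint8_t *script, size_t len)
{
        return len == 23
               && script[0] == OP_HASH160
               && script[1] == 0x14     /* push 20 bytes */
               && script[22] == OP_EQUAL;
}
```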
Changelog-Added: JSONRPC: `fundpsbt` option `nonwrapped` filters out p2sh wrapped inputs
We didn't actually populate them properly, and the real annotations
are on inputs and outputs.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-EXPERIMENTAL: JSON-RPC: `listtransactions` `channel` and `type` field removed at top level.
We only ever use this table for output and input transactions: indeed, my node
doesn't have any annotations of type 0.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
After connecting 100,000 peers with one channel each (not all at
once!), we see various places where we exhibit O(N^2) behaviour.
Fix these by keeping a map of id->peer instead of a simple
linked-list.
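A minimal sketch of the idea, assuming a simple chained hash table (this
is not the actual implementation, and the names are illustrative):
```
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define NUM_BUCKETS 1024

struct peer {
        uint8_t id[33];         /* node pubkey */
        struct peer *next;      /* chain within one bucket */
};

static struct peer *peer_buckets[NUM_BUCKETS];

static size_t bucket_of(const uint8_t id[33])
{
        size_t h;
        /* The pubkey bytes are already well distributed. */
        memcpy(&h, id + 1, sizeof(h));
        return h % NUM_BUCKETS;
}

/* O(1) average instead of O(N): walking 100,000 peers on every lookup
 * is what made the old list-based code quadratic overall. */
static struct peer *peer_by_id(const uint8_t id[33])
{
        for (struct peer *p = peer_buckets[bucket_of(id)]; p; p = p->next)
                if (memcmp(p->id, id, sizeof(p->id)) == 0)
                        return p;
        return NULL;
}
```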
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Rework the logic of the version check used in the
database migration, and make sure
that it is fully functional, to avoid confusion
at release time.
Changelog-Fixed: database: Correctly identify official release versions for database upgrade.
Reported-by: @urza
Signed-off-by: Vincenzo Palazzo <vincenzopalazzodev@gmail.com>
We're soon going to call json_add_unsaved_channel and
json_add_uncommitted_channel from a new place, where we want the peer
state directly included.
Based on patch by @vincenzopalazzo.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
I discovered this while reworking the CI workflow: the HTLC queries
do not specify an order, while the tests assume a specific one.
The assumed order matches sqlite3, which without an explicit ORDER BY
clause returns rows in insertion order, while postgres does not keep
things in insertion order, thus breaking the assumption. Ordering by
`id` re-establishes that implicit assumption.
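A minimal sketch of the shape of the fix, using sqlite3 directly with an
illustrative table and columns; the real query differs, the point is only
the explicit ORDER BY:
```
#include <sqlite3.h>

/* Without ORDER BY, row order is backend-dependent; ordering by the
 * autoincrement id makes the listing deterministic everywhere
 * (illustrative table/columns). */
static int prepare_htlc_query(sqlite3 *db, sqlite3_stmt **stmt)
{
        return sqlite3_prepare_v2(db,
                                  "SELECT id, channel_id, amount_msat, state"
                                  "  FROM channel_htlcs ORDER BY id;",
                                  -1, stmt, NULL);
}
```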
Changelog-Changed: postgres: HTLCs in `listhtlcs` are now ordered by time of creation.
Changelog-Removed: JSON-RPC: `fundpsbt`/`utxopsbt` `reserve` must be a number, not bool (deprecated v0.11.0)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This was reported, but the channel was closed. So however we ended
up with a duplicate, we're no *worse* off than we were before migration?
Fixes: #5760
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>