This is the case where they closed the channel: we can simply make our
own tx to expire the HTLC. (The other case is where we closed the
channel, and we have a special htlc_timeout tx for which we have their
signature.)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This breaks tests/test_closing.py::test_onchain_all_dust's accounting
checks.
That test doesn't really test what it claims to test; sure, onchaind
*says* it's going to ignore the output due to high fees, but the tx
still gets mined.
I cannot figure out what the test is supposed to look like, so I
simply disabled the accounting checks :(
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We'll want this, as lightningd will want to produce htlc txs based on
what it's told by onchaind, so we need a lower-level accessor.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We can no longer grab the tx in one line as we did with
wait_for_onchaind_broadcast; we need to track the broadcast from
lightningd instead.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Since we now do both our own internal handling and hand it to
lightningd, we extend `proposed_resolution` to handle the lightningd
case.
Note, in particular, that we fix the blockheight calculation: it's out
by one, in that if we see a tx and our CSV lock is 5, we only need to
wait 4 more blocks, not 5. This will matter as we start using it, and
convert the tests.
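As a rough sketch of the corrected counting (variable names are
illustrative, not the actual onchaind ones):
```
/* Parent tx seen in block tx_height; a CSV of csv_delay makes the
 * spend valid in block tx_height + csv_delay.  We want to broadcast
 * once we see the block before that, i.e. csv_delay - 1 blocks after
 * the one containing the parent, not csv_delay. */
u32 spend_valid_at = tx_height + csv_delay;
u32 blocks_to_wait = (spend_valid_at - 1) - tx_height;   /* == csv_delay - 1 */
```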
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We add code for the case of spending a (timelocked) to-us HTLC output,
so lightningd can do it (rather than onchaind doing all the work
itself).
onchaind still needs to know whether we bothered to create the tx
(fees might have caused it to evaporate, so it should consider it
immediately resolved rather than waiting for it), what the witnesses
were, and which parts of the witnesses were signatures (as those parts
might change, with RBF or, in future, by combining with other txs).
The inputs (known to onchaind) and the witnesses (told by lightningd)
uniquely identify the spend for the purposes of onchaind. In
particular, they definitely distinguish HTLC-timeout and HTLC-success
cases.
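As a rough sketch of what gets reported back per witness stack element
(names are illustrative, not necessarily the actual struct used):
```
/* One entry per witness element: onchaind matches a mined tx by the
 * non-signature elements and ignores the signature ones, since those
 * may legitimately differ (RBF, or future tx combining). */
struct witness_element {
	bool is_signature;   /* skip when comparing against a mined tx */
	u8 *data;            /* raw witness stack element */
};
```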
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We previously used WIRE_HSMD_SIGN_DELAYED_PAYMENT_TO_US,
WIRE_HSMD_SIGN_REMOTE_HTLC_TO_US, WIRE_HSMD_SIGN_PENALTY_TO_US and
WIRE_HSMD_SIGN_LOCAL_HTLC_TX which allow onchaind to sign txs,
but only for its specific channel.
We now want lightningd to sign these, but it's not bound to a specific
channel. So let's add variants that don't require that.
We are also now explicit about *what input* to sign. It's always zero
for now, but future combinations may change that.
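Purely as an illustration of the shape (the actual hsmd wire
definitions, field names, and how the channel is identified may all
differ), such a request could carry:
```
/* Hypothetical sketch, not the real wire format: the channel whose keys
 * to use is named explicitly (lightningd isn't bound to one channel the
 * way onchaind is), and so is the input to sign. */
struct sign_any_penalty_to_us_request {
	struct node_id peer_id;          /* assumption: channel named explicitly */
	u64 channel_dbid;                /* assumption */
	struct secret revocation_secret;
	struct bitcoin_tx *tx;
	u32 input;                       /* which input to sign: always 0 for now */
	u8 *wscript;
};
```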
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Importantly, the code in jsonrpc.c which actually does the io_break:
```
/* Once the stop_conn conn is drained, we can shut down. */
if (jcon->ld->stop_conn == conn && jcon->ld->state == LD_STATE_RUNNING) {
/* Return us to toplevel lightningd.c */
log_debug(jcon->ld->log, "io_break: %s", __func__);
io_break(jcon->ld);
```
By having the state not set until later, we avoid running this. Of course,
we need to avoid calling the main loop when we get there if we've already
been told to shut down.
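A minimal sketch of that second point (the actual guard in lightningd.c
may look different):
```
/* Only enter the main io_loop if a `stop` hasn't already been
 * processed; otherwise fall straight through to shutdown. */
if (ld->state == LD_STATE_RUNNING)
	ret = io_loop_with_timers(ld);
```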
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
In CI we see crashes in this case:
```
lightningd: lightningd/connect_control.c:734: void connectd_activate(struct lightningd *): Assertion `ret == ld->connectd' failed.
```
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This is a visitor that ensures every new field has at least an `added`
annotation, and that we don't change the `added` or `deprecated`
annotations after the fact.
Changelog-Changed: msggen: The generated interfaces `cln-rpc` and `cln-grpc` can now work with a range of versions rather than having to match the CLN version
This patch annotates the fields with a new `optional` attribute which
determines whether the field should be considered an inferred optional
due to being added or deprecated.
The patching system allows us to enrich the raw schema with some
additional information. In this specific case we want to backfill the
`added` and `deprecated` fields for the multiversion support.
I couldn't figure out why my new SQL query was returning 0 rows,
and it was because we were ignoring errors.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
CI seems to block; Christian suggests the throttler may be to blame somehow?
Since trying to fix it made it worse, let's just remove it.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Firstly, amount should not be `static`, so use a separate line to
declare it (fee is static, as it's cached across calls).
Secondly, new_tracked_output doesn't take(); it copies.
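In other words (a sketch, not the literal diff):
```
/* fee really is cached across calls, so it stays static; amount is
 * recomputed every call, so it must not share that declaration. */
static struct amount_sat fee;
struct amount_sat amount;
```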
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We only do this in one place now, but we're going to add another.
Also, make queued messages const.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Fun story. We're changing onchaind to hand txs to us, and we will
construct them and do the broadcast for it. lightningd tells onchaind
the witness it used (with flags to indicate which fields were
signatures and so should be ignored) so onchaind can recognize the tx
when/if it is mined.
And when onchaind was waiting for a CLTV delay, it wouldn't tell
lightningd yet, but would wait until the parent was sufficiently deep.
But this caused bugs!
In particular, on replay, onchaind would see transactions which it
hadn't sent yet. This was not a problem before, as onchaind had
created the tx, even if it hadn't told lightningd to broadcast it, so
it recognized the variant when it came in. When we're relying on
lightningd to tell us what the tx will look like, this doesn't work
any more.
The cause of this is that we fire off txowatches ("this output was
spent!") while we process blocks, and only fire off txwatches ("this
tx increased depth") once all the current blocks are processed. Often
this didn't matter, since we replay messages to onchaind from the
database, *but* we trim the last few blocks on restart (or if there's
a small reorg while we're stopped), and then we can hit this misordering.
Changing our topology code to only ever process one block at a time
would be a solution, but slows down catchup (and tests, where we often
mine a run of blocks).
So, this seems like a premature optimization, but it's really
required! And in future, lightningd can use this knowledge of pending
transactions to combine them in more clever ways.
Note that if a tx is valid at block N, we broadcast it once we see
block N-1, to get it in the mempool for block N.
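Expressed as a condition (names illustrative, helper hypothetical):
```
/* A tx only valid at block N goes to bitcoind as soon as block N-1 has
 * been processed, so it is already in the mempool when block N is mined. */
if (blockheight >= valid_at_block - 1)
	broadcast_deferred_tx(ld, tx);   /* hypothetical helper */
```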
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
It was once only called on failure; now it's always called (if set).
It was called different things in different places, so unify it.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We always call channel_fail_transient() on all channels when a peer
connects, to clean up any previous connections. However, at startup
the channel doesn't have an owner yet, resulting in a fairly weird
INFO-level message.
Reported-by: Michael Schmook @mschmook
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Fixed: `lightningd`: don't log gratuitous "Peer transient failure" message on first connection after restart.
There are cases (difficult to reproduce with a test) where
a payment will fail one time and succeed later.
As far as I understand, in this case the groupid field of the payment
stays the same and the only thing that changes is the status, so
our delpay logic becomes ambiguous and it is not possible to delete
the payment, as described in https://github.com/ElementsProject/lightning/issues/6114
A sequence of commands that demonstrates the problem:
```
$ lc -k listpays payment_hash=H
{
   "pays": [
      {
         "bolt11": "I",
         "destination": "redacted",
         "payment_hash": "H",
         "status": "complete",
         "created_at": redacted,
         "completed_at": redacted,
         "preimage": "P",
         "amount_msat": "redacted",
         "amount_sent_msat": "redacted"
      }
   ]
}
$ lc delpay H complete
{
   "code": 211,
   "message": "Payment with hash H has failed status but it should be complete"
}
```
In this case, delpay is not able to delete the payment because
listpays returns only the succeeded attempt, while listsendpays shows
the following result. Our delpay logic then gets stuck, because it
insists that all the payments stored in the database have the status
specified by the user:
```
➜ VincentSSD clightning --testnet listsendpays -k payment_hash=7fc74bedbb78f2f3330155d919a54e730cf19c11bc73e96c027f5cd4a34e53f4
{
   "payments": [
      {
         "id": 322,
         "payment_hash": "7fc74bedbb78f2f3330155d919a54e730cf19c11bc73e96c027f5cd4a34e53f4",
         "groupid": 1,
         "partid": 1,
         "destination": "030b686a163aa2bba03cebb8bab7778fac251536498141df0a436d688352d426f6",
         "amount_msat": 300,
         "amount_sent_msat": 1664,
         "created_at": 1679510203,
         "completed_at": 1679510205,
         "status": "failed",
         "bolt11": "lntb1pjpkj4xsp52trda39rfpe7qtqahx8jjplhnj3tatxy8rh6sc6afgvmdz7n0llspp50lr5hmdm0re0xvcp2hv3nf2wwvx0r8q3h3e7jmqz0awdfg6w206qdp0w3jhxarfdenjqargv5sxgetvwpshjgrzw4njqun9wphhyaqxqyjw5qcqp2rzjqtp28uqy77te96ylt7ek703h4ayldljsf8rnlztgf3p8mg7pd0qzwf8a3yqqpdqqqyqqqqt2qqqqqqgqqc9qxpqysgqgeya2lguaj6sflc4hx2d89jvah8mw9uax4j77d8rzkut3rkm0554x37fc7gy92ws9l76yprdva2lalrs7fqjp9lcx40zuty8gca0g5spme3dup"
      },
      {
         "id": 323,
         "payment_hash": "7fc74bedbb78f2f3330155d919a54e730cf19c11bc73e96c027f5cd4a34e53f4",
         "groupid": 1,
         "partid": 2,
         "destination": "030b686a163aa2bba03cebb8bab7778fac251536498141df0a436d688352d426f6",
         "amount_msat": 300,
         "amount_sent_msat": 3663,
         "created_at": 1679510205,
         "completed_at": 1679510207,
         "status": "failed"
      },
      {
         "id": 324,
         "payment_hash": "7fc74bedbb78f2f3330155d919a54e730cf19c11bc73e96c027f5cd4a34e53f4",
         "groupid": 1,
         "partid": 3,
         "destination": "030b686a163aa2bba03cebb8bab7778fac251536498141df0a436d688352d426f6",
         "amount_msat": 300,
         "amount_sent_msat": 3663,
         "created_at": 1679510207,
         "completed_at": 1679510209,
         "status": "failed"
      },
      {
         "id": 325,
         "payment_hash": "7fc74bedbb78f2f3330155d919a54e730cf19c11bc73e96c027f5cd4a34e53f4",
         "groupid": 1,
         "partid": 4,
         "destination": "030b686a163aa2bba03cebb8bab7778fac251536498141df0a436d688352d426f6",
         "amount_msat": 300,
         "amount_sent_msat": 4663,
         "created_at": 1679510209,
         "completed_at": 1679510221,
         "status": "complete",
         "payment_preimage": "43f746f2d28d4902489cbde9b3b8f3d04db5db7e973f8a55b7229ce774bf33a7"
      }
   ]
}
```
This commit solves the problem by forcing the delete query in the
database to specify the status too, working around this kind of
ambiguous case.
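A rough sketch of the resulting query (the real wallet-db code and
schema details may differ):
```
/* Hypothetical: make status part of the WHERE clause, so a request to
 * delete "failed" attempts cannot touch the "complete" row of the same
 * payment_hash/groupid, and vice versa. */
stmt = db_prepare_v2(wallet->db,
		     SQL("DELETE FROM payments"
			 " WHERE payment_hash = ? AND status = ?"));
```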
Fixes: f52ff07558 (lightningd: allow delpay to delete a specific payment.)
Reported-by: Antoine Poinsot <darosior@protonmail.com>
Link: https://github.com/ElementsProject/lightning/issues/6114
Signed-off-by: Vincenzo Palazzo <vincenzopalazzodev@gmail.com>
Co-Developed-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Fixed: `delpay`: be more pedantic about the delete logic by
allowing payments to be deleted by status directly in the database.