This adapts our fee estimation requests to the Bitcoin backend to the
new semantics, and batches the requests.
This makes our requests for fees much simpler, and leaves more
flexibility for a plugin to do something smart, since we no longer
explicitly request an estimation for a specific mode and target. (A
plugin could lie before too, but at least now it's explicit.)
Changelog-Changed: We now batch the requests for fee estimation to our Bitcoin backend.
Changelog-Changed: We now get more fine-grained fee estimation from our Bitcoin backend.
We kept track of an URGENT, a NORMAL, and a SLOW feerate. They were used
for opening (NORMAL), mutual close (NORMAL), and unilateral close
(URGENT) transactions, as well as for minimum and maximum estimations
and onchain resolution.
We now keep track of more fine-grained feerates:
- `opening` used for funding and also misc transactions
- `mutual_close` used for the mutual close transaction
- `unilateral_close` used for unilateral close (commitment transactions)
- `delayed_to_us` used for resolving our output from our unilateral close
- `htlc_resolution` used for resolving onchain HTLCs
- `penalty` used for resolving revoked transactions
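As a sketch, these could be tracked in a simple enum (illustrative only; names taken from the list above):
```c
/* Illustrative sketch: one slot per fine-grained feerate named above. */
enum feerate {
	FEERATE_OPENING,
	FEERATE_MUTUAL_CLOSE,
	FEERATE_UNILATERAL_CLOSE,
	FEERATE_DELAYED_TO_US,
	FEERATE_HTLC_RESOLUTION,
	FEERATE_PENALTY,
};
```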
We don't modify our requests to our Bitcoin backend yet, as the next
commit will batch them!
Changelog-Deprecated: The "urgent", "slow", and "normal" fields of the `feerates` command are now deprecated.
Changelog-Added: The fields "opening", "mutual_close", "unilateral_close", "delayed_to_us", "htlc_resolution" and "penalty" have been added to the `feerates` command.
This allows us to set a more fine-grained feerate for each type of
onchain resolution. We still pass the same feerate for all types, but
this will change as we move feerates to bcli.
My node crashed as follows:
    lightningd: lightningd/peer_control.c:957: peer_connected: Assertion `!peer->uncommitted_channel' failed.
In the logs I found:
    Running lightning_openingd: Cannot allocate memory
This reveals that we're not freeing `uc` in that path!
Changelog-Fixed: Fix assertion on reconnect if we fail to run openingd.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
For messages we reuse the onion, but there payload lengths 0 and 1
aren't special. Create a flag to disable that special-casing, as
sketched below.
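A minimal sketch of what the flag changes (hypothetical names; in payment onions a leading length byte of 0 marks a legacy payload and 1 is reserved):
```c
#include <stdbool.h>
#include <stdint.h>

/* Sketch: when lengths_special is false (onion messages), every
 * length byte introduces an ordinary TLV payload. */
static bool payload_is_tlv(uint8_t len_byte, bool lengths_special)
{
	if (lengths_special && (len_byte == 0 || len_byte == 1))
		return false; /* legacy (0) or reserved (1) payload */
	return true;
}
```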
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Also: convert the stored int value from 'int' to 's64'.
atoi fails silently, returning zero; instead we use the more robust
strtoll, which allows us to fail with an error.
We also make the parsing of bools stricter, only allowing plausibly
boolean values to parse.
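A standalone sketch of the stricter parsing (not the exact helpers in the tree; the accepted boolean spellings are an assumption):
```c
#include <errno.h>
#include <stdbool.h>
#include <stdlib.h>
#include <string.h>
#include <strings.h>

/* Unlike atoi(), this distinguishes a real 0 from garbage, and
 * reports overflow and trailing junk as errors. */
static bool parse_s64(const char *str, long long *val)
{
	char *end;

	errno = 0;
	*val = strtoll(str, &end, 10);
	if (errno == ERANGE)
		return false;	/* out of range */
	if (end == str || *end != '\0')
		return false;	/* empty, or trailing junk */
	return true;
}

/* Only accept plausibly boolean values. */
static bool parse_bool(const char *str, bool *val)
{
	if (!strcasecmp(str, "true") || !strcmp(str, "1")) {
		*val = true;
		return true;
	}
	if (!strcasecmp(str, "false") || !strcmp(str, "0")) {
		*val = false;
		return true;
	}
	return false;
}
```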
We were nesting like the following:
```json
{"params": {
"rpc_command": {
"rpc_command": {
}
}
}
```
This is really excessive, so we unwrap once, and now have the following:
```json
{"params": {
"rpc_command": {
}
}
```
Still more wrapping than necessary (the method is repeated in the `params`
object), but it's getting closer.
Changelog-Deprecated: JSON-RPC: Removed double wrapping of `rpc_command` payload in `rpc_command` JSON field.
Suggested-by: @fiatjaf
Signed-off-by: Christian Decker <@cdecker>
Before this patch we would only update `channel->last_tx` with the newly
proposed closing tx from the peer if the fee of the new one was lower.
In negotiations where we are at the higher end and the peer starts
lower, all of the peer's subsequent proposals will be higher than their
initial proposal; in that case we would never update `channel->last_tx`,
and would wrongly broadcast their initial proposal at the end of the
negotiation.
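In pseudo-C, the fix boils down to dropping the fee comparison (hypothetical helper name; `channel->last_tx` as above):
```c
/* Sketch: every proposal we accept during negotiation replaces
 * last_tx, so the tx we finally broadcast is always the most
 * recent one both sides agreed on. */
static void remember_close_proposal(struct channel *channel,
				    struct bitcoin_tx *their_tx)
{
	tal_free(channel->last_tx);
	channel->last_tx = tal_steal(channel, their_tx);
}
```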
Fixes https://github.com/ElementsProject/lightning/issues/3549
Changelog-Fixed: Always broadcast the latest close transaction at the end of the close fee negotiation, instead of sometimes broadcasting the peer's initial closing proposal.
Does the allocation and copying; this is useful because we can
avoid being fooled into doing giant allocations.
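Roughly (a simplified sketch of the wire helper; the bounds check is the point):
```c
/* Sketch: never allocate more than the bytes actually remaining in
 * the message, so a forged length field can't trigger a giant
 * allocation. */
u8 *fromwire_tal_arrn(const tal_t *ctx,
		      const u8 **cursor, size_t *max, size_t num)
{
	u8 *arr;

	if (num > *max)
		return fromwire_fail(cursor, max);
	arr = tal_arr(ctx, u8, num);
	fromwire_u8_array(cursor, max, arr, num);
	return arr;
}
```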
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Added: New `getsharedsecret` command, which lets you compute a shared secret with this node knowing only a public point. This implements the BOLT standard of hashing the ECDH point, and is incompatible with ECIES.
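For reference, "hashing the ECDH point" here means SHA256 over the compressed shared point, which happens to be libsecp256k1's default ECDH hash; a minimal sketch:
```c
#include <secp256k1.h>
#include <secp256k1_ecdh.h>

/* Sketch: BOLT-style shared secret = SHA256(compressed ECDH point).
 * Passing NULL as the hash function selects exactly that default. */
static int bolt_ecdh(const secp256k1_context *ctx,
		     unsigned char secret[32],
		     const secp256k1_pubkey *their_point,
		     const unsigned char our_privkey[32])
{
	return secp256k1_ecdh(ctx, secret, their_point, our_privkey,
			      NULL, NULL);
}
```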
Even without optimization, it's faster to walk all the channels than
ping another daemon and wait for the response.
Changelog-Changed: Forwarding messages is now much faster (less inter-daemon traffic)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Instead of saving a `stripped_update`, we use the new
`local_fail_in_htlc_needs_update`.
One minor change: we return the more correct
`towire_temporary_channel_failure` when the node is still syncing.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
The idea is that gossipd can give us the cupdate we need for an error, and
we wire things up so that we ask for it (async) just before we send the
error to the subdaemon.
I tried many other things, but they were all too high-risk.
1. We need to ask gossipd every time, since it produces these lazily
(in particular, it doesn't actually generate an offline update unless
the channel is used).
2. We can't do async calls in random places, since we'll end up with
an HTLC in limbo. What if another path tries to fail it at the same time?
3. This allows us to use a `temporary_node_failure` error and upgrade it
when gossipd replies (see the sketch below). This doesn't change any existing assumptions.
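A sketch of the flow in point 3 (hypothetical names throughout):
```c
/* Sketch: fail with a placeholder immediately so the HTLC never sits
 * in limbo, then upgrade the error once gossipd answers. */
static void fail_htlc_with_cupdate(struct htlc_in *hin)
{
	/* Safe fallback even if gossipd never replies. */
	hin->failmsg = towire_temporary_node_failure(hin);
	/* Async: rewrite the error with the real channel_update
	 * just before handing it to the subdaemon. */
	ask_gossipd_for_cupdate(hin, upgrade_failmsg_cb);
}
```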
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This is a common thing to do, so create a macro.
Unfortunately, it still needs the type arg, because the parameter may
be const while the return cannot be, and C doesn't have a general
"(-const)" cast.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
`common` should not include specific per-daemon files. It turns out
this was exposing a lot of indirect includes.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
It was a pointer into the list of plugins for the hook, but it was rather
unstable: if a plugin exits after handling the event we could end up skipping
a later plugin. We now rely on the much more stable `call_chain` list, so we
can clean up that useless field.
We attach the destructor to notify us when the plugin exits, but we
also need to clear it once the request has been handled correctly, so we
don't call the destructor when the plugin exits later.
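With tal destructors, that pairing looks roughly like this (hypothetical handler name):
```c
/* On sending the request: notice if the plugin dies mid-call. */
tal_add_destructor2(plugin, plugin_hook_killed, call);

/* On a successful response: detach, so a later, unrelated plugin
 * exit doesn't fire the now-stale destructor. */
tal_del_destructor2(plugin, plugin_hook_killed, call);
```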
We make the current state of `lightningd` explicit so we don't have to
identify a shutdown by its side-effects. We then use this to prevent
the killing and freeing of plugins from continuing down the chain of
registered plugins.
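Making the state explicit can be as small as an enum on the main daemon struct (illustrative):
```c
/* Illustrative: replaces "infer shutdown from side-effects". */
enum lightningd_state {
	LD_STATE_RUNNING,
	LD_STATE_SHUTDOWN,
};
```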
We were waiting for both the plugin's stdin and stdout to close; however,
that resulted in us deferring cleanup indefinitely, since we only poll the
plugin's stdin for writability when we have something to write. On the
other hand we are almost always polling the plugin's stdout, so that
notifies us as soon as the plugin stops.
Changelog-Fixed: plugin: Plugins no longer linger indefinitely if their process terminates
Before this patch we used to send `double`s over the wire by just
copying them. This is not portable, because the internal representation
of a `double` is implementation-specific.
Instead, we multiply any floating-point numbers that come from the
outside (e.g. JSON) by 1 million and round them to integers when
handling them.
* Introduce a new param_millionths() that expects a floating-point
number and returns it multiplied by 1000000 as an integer (see the
sketch below).
* Replace param_double() and param_percent() with param_millionths().
* Previously the riskfactor was allowed to be negative, which must
have been unintentional. This patch changes that to require a
non-negative number.
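A standalone sketch of the conversion (the in-tree helper differs in detail):
```c
#include <errno.h>
#include <math.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>

/* Turn external decimal input (e.g. "0.25") into integer millionths
 * (250000), so no raw double ever crosses the wire. */
static bool parse_millionths(const char *s, uint64_t *millionths)
{
	char *end;
	double d;

	errno = 0;
	d = strtod(s, &end);
	if (errno || end == s || *end != '\0')
		return false;	/* not a clean number */
	if (d < 0)
		return false;	/* e.g. riskfactor must be non-negative */
	*millionths = (uint64_t)llround(d * 1000000.0);
	return true;
}
```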
Changelog-None
This completes the conversion; any in-flight HTLC failures get turned into temporary_node_failures.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This cleans up the "local failure" callers for incoming HTLCs so they
hand us an onionreply, instead of making us generate one from a failure
code inside `make_failmsg`.
(The db path still needs `make_failmsg`, so that's next.)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Deprecated: Plugins: htlc_accepted_hook "failure_code" only handles simple cases now, use "failure_message".
Unfortunately the invoice_payment_hook can give us a failcode, so I simply
restrict it to the two sensible ones.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Deprecated: Plugins: invoice_payment_hook "failure_code" only handles simple cases now, use "failure_message".
We tell channeld that an htlc is bad by sending it a 'struct
failed_htlc'. This usually contains an onionreply to forward, but for
the case where the onion itself was bad, it contains a failure code
instead.
This makes "send a failed_htlc for a bad onion" a completely
separate code path, so we can then work on removing failcodes from the
other path.
In several places 'failcode' is now changed to 'badonion' to reflect
that it can only be a BADONION failcode.
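The shape described is roughly (field names illustrative):
```c
/* Either `onion` is set (an already-wrapped reply to forward), or the
 * onion itself was unparsable and `badonion` holds a BADONION failure
 * code; never both. */
struct failed_htlc {
	u64 id;
	enum onion_type badonion;	/* zero unless onion == NULL */
	const struct onionreply *onion;
};
```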
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
At the moment, we store e.g. WIRE_TEMPORARY_CHANNEL_FAILURE, and then
lightningd has a large demux function which turns that into the correct
error message.
Such an enum demuxer is an anti-pattern.
Instead, store the message directly for output HTLCs; channeld now
sends us an error message rather than an error code.
For input HTLCs we will still need the failure code if the onion was
bad (since we need to prompt channeld to send a completely different
message than normal), though we can (and will!) eliminate its use in
non-BADONION failure cases.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We're going to change our internal structure next, so this is preparation.
We populate existing errors with temporary node failures, for simplicity.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Instead of making it ourselves, lightningd does it. Now we only have
two cases of failed htlcs: completely malformed (BADONION), and with
an already-wrapped onion reply to send.
This makes channeld's job much simpler.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
I hadn't realized that lightningd asks gossipd every time we forward
a payment. But I'm going to abuse it here to get the latest channel_update,
otherwise (as lightningd takes over error message generation) lightningd
needs to do an async request at various painful points.
So have gossipd tell us the latest update (stripped, so it's compatible
with the strange in-onion-error format).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Turn it into a temporary node failure: this only happens if we restart
with a failed incoming HTLC, but it's clearer and more robust to handle
it generically.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>