We add an actor that waits for a given peer to be connected and ready to
process payments. This is useful in the context of async payments for
the receiver's LSP.
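A minimal sketch of what such a waiter could look like, using Akka Typed with hypothetical actor and message names (the actual actor in eclair may differ):

```scala
import akka.actor.typed.scaladsl.Behaviors
import akka.actor.typed.{ActorRef, Behavior}
import scala.concurrent.duration.DurationInt

// Hypothetical actor and message names, for illustration only.
object PeerReadyWaiter {
  sealed trait Command
  case class PeerConnected(peerId: String) extends Command
  case class PeerReadyToProcessPayments(peerId: String) extends Command
  private case object Timeout extends Command

  sealed trait Result
  case class PeerReady(peerId: String) extends Result
  case object PeerUnavailable extends Result

  def apply(peerId: String, replyTo: ActorRef[Result]): Behavior[Command] =
    Behaviors.withTimers { timers =>
      // Don't wait forever: async payments should eventually time out.
      timers.startSingleTimer(Timeout, 1.minute)
      Behaviors.receiveMessage {
        case PeerConnected(`peerId`) =>
          // Connected, but not yet ready to process payments: keep waiting.
          Behaviors.same
        case PeerReadyToProcessPayments(`peerId`) =>
          replyTo ! PeerReady(peerId)
          Behaviors.stopped
        case Timeout =>
          replyTo ! PeerUnavailable
          Behaviors.stopped
        case _ =>
          Behaviors.same
      }
    }
}
```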
When sending an outgoing multi-part payment, we forward the preimage back
to the sender as soon as we receive the first `update_fulfill_htlc`.
This is particularly useful when relaying trampoline payments, to be able
to propagate the fulfill upstream as early as possible.
However, this meant that callers of the HTTP API would receive this
preimage event instead of the final payment result, which was confusing.
We now disable this first event when the `--blocking` argument is used,
which ensures that the API always returns the payment result.
Fixes #2389
* Remove unused code
* Remove `WaitingForRevocation.reSignAsap`
It was hacky and completely unneeded: we just need to check pending changes when we receive a revocation. It was probably a leftover from when we batch-signed.
The codec change is trickier than it may look:
- we keep writing 8 bits in order not to have to introduce a new version
- older codecs use 1 or 8 bits depending on version
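For illustration, a hedged scodec sketch of such a versioned boolean field (the codec names are made up, not the actual eclair codecs):

```scala
import scodec.Codec
import scodec.codecs._

object FlagCodecSketch {
  // Older codecs stored the flag on a single bit; newer ones write 8 bits so that
  // we don't have to introduce a new codec version.
  val boolOn1Bit: Codec[Boolean] = bool
  val boolOn8Bits: Codec[Boolean] = uint8.xmap[Boolean](_ != 0, b => if (b) 1 else 0)

  def flagCodec(legacyVersion: Boolean): Codec[Boolean] =
    if (legacyVersion) boolOn1Bit else boolOn8Bits
}
```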
* Make totalAmount required in blinded final payloads
And update the reference test vector for blinded payments.
* Handle failures inside blinded routes
When a failure occurs inside a blinded route, we must avoid leaking any
information to upstream nodes.
We do that by returning `update_fail_malformed_htlc` with the
`invalid_onion_blinding` code whenever we are inside the blinded route,
and `update_fail_htlc` with the `invalid_onion_blinding` code when we are
the introduction node (and we add a delay).
When we are using only dummy hops or not using any blinded hop, we can
return normal errors.
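A hedged sketch of that decision, with simplified types (the real eclair commands and failure messages carry more data):

```scala
object BlindedFailureSketch {
  // Simplified, illustrative types.
  sealed trait HtlcFailure
  case class FailMalformed(failureCode: Int) extends HtlcFailure                    // update_fail_malformed_htlc
  case class FailEncrypted(failure: String, withDelay: Boolean) extends HtlcFailure // update_fail_htlc

  val InvalidOnionBlinding: Int = 0x8000 | 0x4000 | 24 // BADONION | PERM | 24

  def failureFor(insideBlindedRoute: Boolean, isIntroductionNode: Boolean, downstreamFailure: String): HtlcFailure =
    if (insideBlindedRoute && !isIntroductionNode) {
      // Inside the blinded route: never leak information to upstream nodes.
      FailMalformed(InvalidOnionBlinding)
    } else if (insideBlindedRoute) {
      // Introduction node: use a normal update_fail_htlc, still with invalid_onion_blinding, and add a delay.
      FailEncrypted("invalid_onion_blinding", withDelay = true)
    } else {
      // Only dummy hops, or no blinded hop at all: normal errors are fine.
      FailEncrypted(downstreamFailure, withDelay = false)
    }
}
```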
We also fix an issue we had with `update_fail_malformed_htlc`: when we
received that kind of error from the downstream node, we also returned
an `update_fail_malformed_htlc` error upstream, whereas the specification
says we must convert it to an `update_fail_htlc`.
We also add many e2e tests for blinded payments.
* Remove hard-coded minimum witness weight
The spec provided a value of 107 WU for the minimum witness weight, but it
is incorrect: p2tr inputs have a smaller witness than that, and segwit
allows arbitrarily low witnesses.
We remove that hard-coded value altogether and consider unsigned
transactions, as we simply cannot do better.
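For reference, a rough back-of-the-envelope comparison (usual size estimates, not spec values):

```scala
object WitnessWeightSketch {
  // Usual estimates in witness units (WU), for illustration only.
  // p2wpkh: item count (1) + sig length (1) + ECDSA sig (~72) + pubkey length (1) + pubkey (33)
  val p2wpkhWitnessWeight: Int = 1 + 1 + 72 + 1 + 33 // ~108 WU (107 with a 71-byte signature)
  // p2tr key path: item count (1) + sig length (1) + Schnorr sig (64)
  val p2trKeyPathWitnessWeight: Int = 1 + 1 + 64 // 66 WU, well below the old 107 WU floor
}
```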
* Allow non-segwit outputs
While we restrict inputs to be segwit-only, we can allow any type of
output in a dual-funded transaction.
Since blinded routes have to be used from start to end and are somewhat
similar to Bolt 11 routing hints, we model them as an abstract single hop
during path-finding. This makes it trivial to reuse existing algorithms
without any modifications.
We then add support for paying blinded routes. We introduce a new type
of recipient for those payments, that uses blinded hops and creates
onion payloads accordingly. There is a subtlety in the case where we're
the introduction node of the blinded route: when that happens we need to
decrypt the first payload to figure out where to send the payment.
When we receive a failure from a blinded route, we simply ignore it in
retries: we don't know what caused the issue so we assume it's permanent,
which makes sense in most cases since we cannot change the relaying
parameters (fees and expiry delta are chosen by the recipient).
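A hedged sketch of how a blinded route can be modeled as a single abstract hop during path-finding (types are simplified and do not match the actual eclair router):

```scala
object BlindedHopSketch {
  // Simplified, illustrative type: the recipient provides aggregated relay parameters
  // for the whole blinded route (e.g. in the invoice).
  case class BlindedHop(introductionNodeId: String,
                        feeBaseMsat: Long,
                        feeProportionalMillionths: Long,
                        cltvExpiryDelta: Int)

  // During path-finding we only need a route to the introduction node: the blinded part
  // behaves like one hop with fixed fees and expiry delta, so existing algorithms apply unchanged.
  def fee(hop: BlindedHop, amountMsat: Long): Long =
    hop.feeBaseMsat + (amountMsat * hop.feeProportionalMillionths) / 1000000L
}
```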
* mempooltxmonitor: no need to unsubscribe from event stream
Adapters are child actors that will die along with their parents, and
the eventstream watches all subscribers for termination:
a8ee6fa42b/akka-actor/src/main/scala/akka/event/EventStream.scala (L50).
* mempooltxmonitor: send block count to actor
This solves a race condition with event stream subscription.
* Fix failing NormalStateSpec test
The `recv CurrentBlockCount (fulfilled signed htlc ignored by upstream peer)`
test was sometimes failing because of an event stream concurrency
issue on busy/slow machines:
- the previous test emits a `LocalChannelDown` event
- the next test starts before this event was emitted and registers to the
event stream
- then it receives the previous `LocalChannelDown` event before the
`LocalChannelUpdate`s we expect
* Improve MinimalNodeFixture watcher autopilot
We create an alternative ZmqWatcher that waits for the channel funder
to publish funding transactions before triggering watches.
* Fix ZeroConfAliasIntegrationSpec
A test was added to verify that when building a route to self, we use the
correct scid-alias when available (remote alias instead of local).
This test sometimes failed because Carol hadn't received Alice's channel
update and couldn't build the Alice->Bob hop. The only thing that we meant
to test was the Bob->Carol hop, so we can simplify that and remove the
flakiness.
* Fix MempoolTxMonitorSpec
We use context.pipeToSelf when publishing the transaction, which creates
a race condition between the time when the tx is added to the mempool and
the time when we're ready to process `WrappedCurrentBlockHeight`.
If `WrappedCurrentBlockHeight` arrives before `PublishOk`, the test will
fail waiting for `TxInMempool`.
* Fix TxTimeLocksMonitorSpec
The usual race condition for registering to the event stream applies,
so we directly send the `WrappedCurrentBlockHeight` event.
* Fix ZmqWatcherSpec
Some tests are randomly failing because watches can be triggered several
times and new blocks from previous tests may trigger events in subsequent
tests. We fish for the specific messages we're interested in and ignore
others.
We were previously directly creating onion payloads inside the various
payment state machines and manipulating tlv fields. This was a layering
violation that was somewhat ok because in most cases we only needed to
create the onion payload for the recipient at the beginning of the payment
flow and didn't need to modify it, except for a small change in the MPP
case.
This forced us to handle trampoline onions directly in the payment
initiator and will not work for blinded payments, where we can only build
the onion payload for the recipient after we've chosen the routes and how
to split the amount.
We clean this up by introducing payment recipients that abstract away the
creation of onion payloads. This makes it much easier to integrate blinded
payments. It also allows us to clean up the way we do trampoline payments
and potentially support splitting across multiple trampoline routes (not
included in this PR as this change isn't immediately needed).
It also lets us simplify the MultiPartPaymentLifecycle FSM, by moving the
logic of computing how much remains to be sent and what fee can be used
to the route calculation component.
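A hedged sketch of what such a recipient abstraction can look like (names and signatures are illustrative, not the actual eclair API):

```scala
object RecipientSketch {
  // Simplified, illustrative types.
  case class Route(hops: Seq[String], amountMsat: Long)
  case class OnionPayload(amountMsat: Long, cltvExpiry: Long, extra: Map[String, Vector[Byte]])

  trait Recipient {
    def totalAmountMsat: Long
    // Payloads are built once routes and amounts have been chosen, instead of upfront
    // in the payment initiator.
    def buildFinalPayload(route: Route): OnionPayload
  }

  // Clear (non-blinded) recipient: the final payload simply carries amount and expiry.
  class ClearRecipient(val totalAmountMsat: Long, finalExpiry: Long) extends Recipient {
    def buildFinalPayload(route: Route): OnionPayload =
      OnionPayload(route.amountMsat, finalExpiry, Map.empty)
  }

  // Blinded recipient: the payload can only be built after the blinded route has been
  // selected, because it must include that route's encrypted recipient data.
  class BlindedRecipient(val totalAmountMsat: Long, finalExpiry: Long, encryptedData: Vector[Byte]) extends Recipient {
    def buildFinalPayload(route: Route): OnionPayload =
      OnionPayload(route.amountMsat, finalExpiry, Map("encrypted_recipient_data" -> encryptedData))
  }
}
```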
When wallet transactions are evicted from the mempool (because their fee
is too low), we have a strong dependency on bitcoind's behavior: it must
not double-spend the inputs of the evicted transaction, otherwise it
could break 0-conf channels.
We verify that behavior in a unit test where we set the mempool to
5MB (the minimum authorized mempool size) and simulate this eviction
scenario.
* Improve htlc_maximum_msat in channel updates
We previously set the `htlc_maximum_msat` inside `channel_update` to the
channel's capacity, but that didn't make any sense: we will reject htlcs
that are above the local or remote `max_htlc_value_in_flight_msat`.
We now set this value to match the lowest `max_htlc_value_in_flight_msat`
of the channel, and properly type our local value to be a millisatoshi
amount instead of a more generic UInt64.
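In code, the advertised value boils down to something like this (hedged sketch, names approximate; eclair uses a proper MilliSatoshi type):

```scala
object HtlcMaximumSketch {
  def htlcMaximumMsat(localMaxInFlightMsat: Long, remoteMaxInFlightMsat: Long): Long =
    // Advertise the smallest in-flight limit of the two sides: any HTLC above it
    // would be rejected anyway.
    localMaxInFlightMsat.min(remoteMaxInFlightMsat)
}
```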
* Set max-htlc-in-flight based on channel capacity
We introduce a new parameter to set `max-htlc-value-in-flight` based on
the channel capacity, when it provides a lower value than the existing
`max-htlc-value-in-flight-msat` static value.
* Allow disabling max-htlc-value-in-flight-msat
When opening a channel to a mobile wallet user, we may want to set our
`max-htlc-value-in-flight-msat` to something greater than the funding
amount to allow the wallet user to empty their channels using a single
HTLC per channel.
We weren't correctly closing SQL statements when comparing DBs and
migrating them. The DB handlers used for normal operation (read/write
channels, payments, etc.) already close their statements correctly after execution.
Fixes #2425
We used to store UNIX timestamps in the waitingSince field before moving
to block count. In order to ensure backward compatibility, we converted
from timestamps to blockheight based on the value.
This code shipped more than a year ago, so we can safely remove that
compatibility code since it only applies during the channel open or close
period, which cannot last long anyway.
Fixes #2125
Starting with bitcoind 23.0, a new `changetype` parameter was introduced.
If not specified, bitcoind will generate a change output with a type that
matches the main output to make it harder for chain analysis to detect
which output is the change.
The issue is that lightning requires segwit utxos: if an on-chain payment
is sent to a non-segwit address, we still want our change output to use
segwit, otherwise we won't be able to use it. We thus must set
`addresstype` and `changetype` in `bitcoin.conf` to ensure we never
generate legacy change addresses.
The introduction of `scid-alias` broke the ability to create a valid route
to our own node using `FinalizeRoute` for private or unconfirmed channels
because we use our local alias as the shortChannelId of the graph edge in
both directions.
Using the same identifier for both directions makes sense, because it's a
stable identifier whereas the remote alias could be updated by our peer.
It's generally not an issue, except when we are building a route, because
our peer will not know how to route if we give them our local alias in a
payment onion (they expect their own local alias, which from our point of
view is the remote alias).
The only way to build routes to ourselves is by using `FinalizeRoute`, so
we fix the issue only in this handler by looking at our private channels
when we notice that we are the destination.
* Implement correct ordering of `tx_signatures`
According to the dual funding specification, the peer that contributed
the lower total `tx_add_input` value must sign first. We incorrectly used
the contributions to the funding output instead of the sum of the input
values.
This requirement guarantees that there can be no deadlocks when nodes
are batching multiple interactive-tx sessions.
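A hedged sketch of the corrected rule, with simplified types (the tie-break when both totals are equal is left to the spec):

```scala
object SigningOrderSketch {
  // Simplified, illustrative input type.
  case class Input(amountSat: Long)

  // The peer whose total tx_add_input value is strictly lower sends tx_signatures first.
  // The buggy version compared contributions to the funding output instead.
  def weSignFirst(localInputs: Seq[Input], remoteInputs: Seq[Input]): Boolean =
    localInputs.map(_.amountSat).sum < remoteInputs.map(_.amountSat).sum
}
```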
* Send error instead of tx_abort in interactive-tx
When building the first version of a dual-funded transaction, we should
use `error` instead of `tx_abort` if we encounter a failure. It makes it
more explicit to our peer that the channel should be closed at that point,
whereas `tx_abort` can be used in RBF attempts to abort a secondary
interactive-tx session without closing the channel.
This is a follow-up to #2360, which was actually buggy: the channel actor
doesn't stop itself immediately after going into the CLOSED state, so it
received a `WatchFundingTxSpent` event that was handled in the global
`whenUnhandled` block, logged a warning and went back to the CLOSING
state.
We now go through the normal steps to close those channels, by waiting
for the commit tx to be confirmed before actually going to CLOSED.
When using `static_remotekey`, we ask `bitcoind` to generate a receiving
address and send the corresponding public key to our peer. We previously
assumed that `bitcoind` used `bech32`, which may change since #2466.
We need to explicitly tell `bitcoind` that we want to use a p2wpkh address
in that case, otherwise it may generate a `p2tr` address: we would use that
public key to get our funds back, but `bitcoind` would not detect that it
can spend this output (because it only expects this public key to be used
for a taproot address).
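A hedged sketch of the corresponding RPC call, using a hypothetical minimal JSON-RPC client (the actual eclair wallet code differs):

```scala
import scala.concurrent.Future

object StaticRemoteKeySketch {
  // Hypothetical minimal RPC client interface, for illustration only.
  trait BitcoinRpcClient {
    def invoke(method: String, params: Any*): Future[String]
  }

  // Explicitly request a bech32 (p2wpkh) address: without the address type argument,
  // bitcoind may return a p2tr address whose public key we cannot safely reuse here.
  def getStaticRemoteKeyAddress(rpc: BitcoinRpcClient): Future[String] =
    rpc.invoke("getnewaddress", "" /* label */, "bech32" /* address_type */)
}
```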
The default values were generated once at node startup instead of being
recomputed for every request. Fixes #2475.
We also add pagination to the `listPendingInvoice` API as a follow-up for
#2474.
This release of bitcoind contains several bug fixes that let us simplify
our fee bumping logic:
- fixed a bug where bitcoind dropped non-wallet signatures
- added an option to fund transactions containing non-wallet inputs
It also has support for Taproot, which we want in eclair.
We previously only supported `scid_alias` and `zero_conf` when used in
combination with `anchors_zero_fee_htlc_tx`, but we have no way of
advertising that to our peers, which leads to confusing failures when
opening channels.
Some nodes that don't have good access to a utxo pool may not migrate to
anchor outputs, but may still want to use `scid_alias` (and potentially
`zero_conf` as well).
Fixes #2394
When nodes receive HTLCs, they verify that the contents of those HTLCs
match the instructions that the sender provided in the onion. It is
important to ensure that intermediate nodes and final nodes have similar
requirements, otherwise a malicious intermediate node could easily probe
whether the next node is the final recipient or not.
Unfortunately, the requirements for intermediate nodes were more lenient
than the requirements for final nodes. Intermediate nodes allowed overpaying
and increasing the CLTV expiry, whereas final nodes required a perfect
equality between the HTLC values and the onion values.
This provided a trivial way of probing: when relaying an HTLC, nodes could
relay 1 msat more than what the onion instructed (or increase the outgoing
expiry by 1). If the next node was an intermediate node, they would accept
this HTLC, but if the next node was the recipient, they would reject it.
We update those requirements to fix this probing attack vector.
See https://github.com/lightning/bolts/pull/1032
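A hedged sketch of the relaxed final-node checks, with simplified types (failure names approximate):

```scala
object FinalPayloadChecksSketch {
  // Simplified, illustrative types.
  case class IncomingHtlc(amountMsat: Long, cltvExpiry: Long)
  case class FinalPayload(amountMsat: Long, outgoingCltv: Long)

  sealed trait CheckResult
  case object Accept extends CheckResult
  case class Reject(reason: String) extends CheckResult

  // Before: strict equality was required, which let intermediate nodes probe the
  // recipient by overpaying by 1 msat or adding 1 block of expiry.
  // After: the final node accepts overpayment and extra expiry, like intermediate nodes do.
  def validateFinalHtlc(htlc: IncomingHtlc, payload: FinalPayload): CheckResult =
    if (htlc.amountMsat < payload.amountMsat) Reject("final_incorrect_htlc_amount")
    else if (htlc.cltvExpiry < payload.outgoingCltv) Reject("final_incorrect_cltv_expiry")
    else Accept
}
```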
When sending outgoing payments, using an expiry that is very close to
the current block height makes it obvious to the next-to-last node
who the recipient is.
We add a random number of blocks to the expiry we use, to make it
plausible that the route contains more hops.
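A hedged sketch of the idea (parameter names are illustrative; eclair uses proper CLTV expiry types):

```scala
import scala.util.Random

object FinalExpirySketch {
  def computeFinalExpiry(currentBlockHeight: Long, minFinalExpiryDelta: Int, maxRandomDelta: Int): Long = {
    // Adding a few random blocks makes the expiry consistent with a longer route,
    // so the next-to-last node cannot tell that the next hop is the recipient.
    val randomDelta = Random.nextInt(maxRandomDelta + 1)
    currentBlockHeight + minFinalExpiryDelta + randomDelta
  }
}
```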
Add a message flag to `channel_update` to specify that an update is private
and should not be rebroadcast to other nodes.
We log an error if a private channel update gets into the rebroadcast list.
See https://github.com/lightning/bolts/pull/999
The specification recommends using a length of 256 for onion errors, but
it doesn't say that we should reject errors that use a different length.
We may want to start creating errors with a bigger length than 256 if we
need to transmit more data to the sender. In order to prepare for this,
we keep creating 256-bytes onion errors, but allow receiving errors of
arbitrary length.
See the specification here: https://github.com/lightning/bolts/pull/1021
Fixes #2438
We previously had a few error messages where we would send the complete
offending transaction instead of just its txid. This can be dangerous if
we accidentally send a signed transaction that our counterparty may
broadcast.
We are currently careful and avoid sending signed transactions when it may
be dangerous for us, but there's a risk that a refactoring changes that.
It's safer to only send the txid.
* Explain how and why we use bitcoin core
We explain why we chose to delegate most on-chain tasks to bitcoin core (including on-chain wallet management), the additional requirements this creates, and the benefits in terms of security.
The `tx_signatures` message uses `tx_hash` instead of `txid`.
Thanks @niftynei for pointing this out.
Reject `tx_add_input` with an `nSequence` set to `0xfffffffe` or
`0xffffffff`. In theory we only need one of the inputs to have an
`nSequence` below `0xfffffffe`, but it's simpler to check all of them
to be able to fail early in the protocol.
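A hedged sketch of the check, with a simplified message type:

```scala
object TxAddInputChecksSketch {
  // Simplified, illustrative message type.
  case class TxAddInput(serialId: Long, outPoint: String, sequence: Long)

  // Reject nSequence values of 0xfffffffe and 0xffffffff: checking every input lets
  // us fail early, even though the protocol only needs one input below 0xfffffffe.
  def validSequence(input: TxAddInput): Boolean =
    input.sequence < 0xfffffffeL
}
```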
When a channel was pruned and we receive a valid channel update for it,
we need to wait until we're sure both sides of the channel are back
online: a channel that has only one side available will most likely not
be able to relay payments.
When both sides have created fresh, valid channel updates, we restore the
channel to our channels map and to the graph.
We remove the custom `pruned` table and instead directly use the data
from the channels table and cached data from the router.
Fixes #2388
Add a tlv to require confirmed inputs in dual funding: when that tlv is specified,
the peer must use confirmed inputs, otherwise the funding attempt will be
rejected. This ensures that we won't pay the fees for a low-feerate ancestor.
This can be configured at two levels:
- globally using `eclair.conf`
- on a per-channel basis by overriding the default in `Peer.OpenChannel`
We used to drop onion messages above a certain size, but the onion message packet is already limited to 65536 bytes, so we keep only that larger limit instead.
If we have activated 0-conf support for a given peer, we send our
`channel_ready` early regardless of whether our peer has activated support
for 0-conf. If they also immediately send their `channel_ready` it's great,
if they don't it's ok, we'll just wait for confirmations, but it was worth
trying.
When `feeRatePerKw` is high and liquidity is low on the initiator side, the initiator can't send anything, and the test would fail because we try to create an HTLC for an amount of 0.