We used to store UNIX timestamps in the `waitingSince` field before moving
to block heights. To ensure backward compatibility, we converted from
timestamps to block heights based on the stored value.
This code shipped more than a year ago, so we can safely remove the
compatibility code: it only applies during the channel open or close
period, which cannot last long anyway.
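For context, here is a rough sketch of the kind of compatibility shim being removed (illustrative only, not the actual eclair code): any plausible UNIX timestamp is orders of magnitude larger than any plausible block height, so the stored value itself tells us how to interpret it.

```scala
// Hypothetical sketch: interpret the persisted `waitingSince` value as either
// a UNIX timestamp (legacy data) or a block height (current format).
def waitingSinceBlocks(storedValue: Long, currentBlockHeight: Long): Long = {
  if (storedValue > 1500000000L) {
    // looks like a UNIX timestamp: estimate how many blocks ago that was,
    // assuming ~10 minutes per block
    val secondsAgo = System.currentTimeMillis() / 1000 - storedValue
    math.max(0L, currentBlockHeight - secondsAgo / 600)
  } else {
    // already a block height
    storedValue
  }
}
```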
Fixes #2125
Starting with bitcoind 23.0, a new `changetype` parameter was introduced.
If not specified, bitcoind will generate a change output with a type that
matches the main output to make it harder for chain analysis to detect
which output is the change.
The issue is that lightning requires segwit utxos: if an on-chain payment
is sent to a non-segwit address, we still want our change output to use
segwit, otherwise we won't be able to use it. We thus must set
`addresstype` and `changetype` in `bitcoin.conf` to ensure we never
generate legacy change addresses.
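For example, the following `bitcoin.conf` excerpt forces segwit for both receiving and change addresses (shown as an illustration):

```conf
# always generate segwit addresses, for both main and change outputs
addresstype=bech32
changetype=bech32
```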
The introduction of `scid-alias` broke the ability to create a valid route
to our own node using `FinalizeRoute` for private or unconfirmed channels
because we use our local alias as the shortChannelId of the graph edge in
both directions.
Using the same identifier for both directions makes sense, because it's a
stable identifier whereas the remote alias could be updated by our peer.
It's generally not an issue, except when we are building a route, because
our peer will not know how to route if we give them our local alias in a
payment onion (they expect their own local alias, which from our point of
view is the remote alias).
The only way to build routes to ourselves is by using `FinalizeRoute`, so
we fix the issue only in this handler by looking at our private channels
when we notice that we are the destination.
* Implement correct ordering of `tx_signatures`
According to the dual funding specification, the peer that contributed the
smaller total input amount via `tx_add_input` must sign first. We incorrectly
compared the contributions to the funding output instead of the sum of the
input values.
This requirement guarantees that there can be no deadlocks when nodes
are batching multiple interactive-tx sessions.
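Here is a minimal sketch of the corrected comparison, assuming plain satoshi amounts rather than eclair's actual types (the tie-breaking rule for equal totals is defined by the spec and omitted):

```scala
// The side whose tx_add_input contributions sum to the smaller amount sends
// tx_signatures first (not the side contributing less to the funding output).
def localSignsFirst(localInputsSat: Seq[Long], remoteInputsSat: Seq[Long]): Boolean =
  localInputsSat.sum < remoteInputsSat.sum
```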
* Send error instead of tx_abort in interactive-tx
When building the first version of a dual-funded transaction, we should
use `error` instead of `tx_abort` if we encounter a failure. It makes it
more explicit to our peer that the channel should be closed at that point,
whereas `tx_abort` can be used in RBF attempts to abort a secondary
interactive-tx session without closing the channel.
This is a follow-up to #2360, which was actually buggy: the channel actor
doesn't stop itself immediately after going into the CLOSED state, so it
received a `WatchFundingTxSpent` event that was handled in the global
`whenUnhandled` block, logged a warning and went back to the CLOSING
state.
We now go through the normal steps to close those channels, by waiting
for the commit tx to be confirmed before actually going to CLOSED.
When using `static_remotekey`, we ask `bitcoind` to generate a receiving
address and send the corresponding public key to our peer. We previously
assumed that `bitcoind` would generate a `bech32` address, which may no longer
be the case since #2466.
We need to explicitly tell `bitcoind` that we want a p2wpkh address in that
case. Otherwise it may generate a `p2tr` address: we would still use that
public key to get our funds back, but `bitcoind` would not detect that it
can spend this output (because it only expects this public key to be used
for a taproot address).
The default values were generated once at node startup instead of being
recomputed for every request. Fixes #2475.
We also add pagination to the `listPendingInvoice` API as a follow-up to
#2474.
This release of bitcoind contains several bug fixes that let us simplify
our fee bumping logic:
- fixed a bug where bitcoind dropped non-wallet signatures
- added an option to fund transactions containing non-wallet inputs
It also has support for Taproot, which we want in eclair.
We previously only supported `scid_alias` and `zero_conf` when used in
combination with `anchors_zero_fee_htlc_tx`, but we have no way of
advertising that to our peers, which leads to confusing failures when
opening channels.
Some nodes that don't have good access to a utxo pool may not migrate to
anchor outputs, but may still want to use `scid_alias` (and potentially
`zero_conf` as well).
Fixes #2394
When nodes receive HTLCs, they verify that the contents of those HTLCs
match the instructions that the sender provided in the onion. It is
important to ensure that intermediate nodes and final nodes have similar
requirements, otherwise a malicious intermediate node could easily probe
whether the next node is the final recipient or not.
Unfortunately, the requirements for intermediate nodes were more lenient
than the requirements for final nodes. Intermediate nodes allowed overpaying
and increasing the CLTV expiry, whereas final nodes required a perfect
equality between the HTLC values and the onion values.
This provided a trivial way of probing: when relaying an HTLC, nodes could
relay 1 msat more than what the onion instructed (or increase the outgoing
expiry by 1). If the next node was an intermediate node, they would accept
this HTLC, but if the next node was the recipient, they would reject it.
We update those requirements to fix this probing attack vector.
See https://github.com/lightning/bolts/pull/1032
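As a simplified sketch (illustrative only; see the spec PR above for the exact requirements and upper bounds), intermediate and final nodes now apply the same kind of check, so slightly overpaying or over-extending the expiry no longer distinguishes them:

```scala
// Illustrative check shared by intermediate and final nodes: accept the HTLC
// as long as it is not underpaid and does not expire earlier than instructed.
final case class OnionValues(amountMsat: Long, cltvExpiry: Long)

def htlcMatchesOnion(htlcAmountMsat: Long, htlcCltvExpiry: Long, onion: OnionValues): Boolean =
  htlcAmountMsat >= onion.amountMsat && htlcCltvExpiry >= onion.cltvExpiry
```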
When sending outgoing payments, using an expiry that is very close to
the current block height makes it obvious to the next-to-last node
who the recipient is.
We add a random number of blocks to the expiry we use to make it plausible
that the route contains more hops.
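As a hypothetical sketch (the names and the padding range are illustrative, not eclair's actual values):

```scala
import scala.util.Random

// Pad the final expiry with a random number of blocks so the next-to-last
// node cannot tell that the next hop is the recipient.
def randomizedFinalExpiry(currentBlockHeight: Long, minFinalExpiryDelta: Int, maxRandomBlocks: Int = 144): Long =
  currentBlockHeight + minFinalExpiryDelta + Random.nextInt(maxRandomBlocks + 1)
```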
Add a message flag to `channel_update` to specify that an update is private
and should not be rebroadcast to other nodes.
We log an error if a private channel update gets into the rebroadcast list.
See https://github.com/lightning/bolts/pull/999
The specification recommends using a length of 256 for onion errors, but
it doesn't say that we should reject errors that use a different length.
We may want to start creating errors with a bigger length than 256 if we
need to transmit more data to the sender. In order to prepare for this,
we keep creating 256-byte onion errors, but allow receiving errors of
arbitrary length.
See the specification here: https://github.com/lightning/bolts/pull/1021
Fixes #2438
We previously had a few error messages where we would send the complete
offending transaction instead of just its txid. This can be dangerous if
we accidentally send a signed transaction that our counterparty may
broadcast.
We are currently careful and avoid sending signed transactions when it may
be dangerous for us, but there's a risk that a refactoring changes that.
It's safer to only send the txid.
* Explain how and why we use Bitcoin Core
Explain why we chose to delegate most on-chain tasks to Bitcoin Core (including on-chain wallet management), the additional requirements this creates, and the benefits in terms of security.
The `tx_signatures` message uses `tx_hash` instead of `txid`.
Thanks @niftynei for pointing this out.
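The practical difference is byte order: the txid displayed by most bitcoin tooling is the byte-reversed form of the raw transaction hash, so converting between the two is just a reversal (sketch):

```scala
import scodec.bits.ByteVector

// The human-readable txid is the byte-reversed raw transaction hash.
def txIdToTxHash(txid: ByteVector): ByteVector = txid.reverse
```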
Reject `tx_add_input` with an `nSequence` set to `0xfffffffe` or
`0xffffffff`. In theory we only need one of the inputs to have an
`nSequence` below `0xfffffffe`, but it's simpler to check all of them
to be able to fail early in the protocol.
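A minimal sketch of that validation:

```scala
// Reject any tx_add_input whose nSequence does not signal RBF
// (i.e. is 0xfffffffe or 0xffffffff).
def isValidInputSequence(nSequence: Long): Boolean = nSequence < 0xfffffffeL
```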
When a channel was pruned and we receive a valid channel update for it,
we need to wait until we're sure both sides of the channel are back
online: a channel that has only one side available will most likely not
be able to relay payments.
When both sides have created fresh, valid channel updates, we restore the
channel to our channels map and to the graph.
We remove the custom `pruned` table and instead directly use the data
from the channels table and cached data from the router.
Fixes #2388
Add a tlv to require confirmed inputs in dual funding: when that tlv is specified,
the peer must use confirmed inputs, otherwise the funding attempt will be
rejected. This ensures that we won't pay the fees for a low-feerate ancestor.
This can be configured at two levels:
- globally using `eclair.conf`
- on a per-channel basis by overriding the default in `Peer.OpenChannel`
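For illustration, the global setting could look like the following `eclair.conf` excerpt (the key name here is hypothetical; check the reference configuration for the actual one):

```conf
# hypothetical key name: require our peer to use confirmed inputs for dual funding
eclair.channel.require-confirmed-inputs-for-dual-funding = true
```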
We used to drop onion messages above a certain size, but the onion message packet is already limited to 65536 bytes, so we now only keep that larger limit.
If we have activated 0-conf support for a given peer, we send our
`channel_ready` early regardless of whether our peer has activated support
for 0-conf. If they also immediately send their `channel_ready`, great; if
they don't, that's ok, we'll just wait for confirmations, but it was worth
trying.
In case feeRatePerKw is high and liquidity is low on the initiator side, the initiator can't send anything and the test would fail because we try to create an HTLC for an amount of 0.
We currently accept some malformed TLVs with additional data that we ignore. This means that decoding and reencoding may give a different result.
With this change, we now reject such TLVs.
Also add the `.as[]` part of the codec inside `tlvField` so we can remove the redundant type annotations.
Currently, for an incoming payment we check that the CLTV delta is larger than the minFinalExpiryDelta from the invoice. However, with BOLT 12, invoices no longer contain a minFinalExpiryDelta (not yet visible in eclair code, BOLT 12 moves fast!). I suggest using the minFinalExpiryDelta value from the node params instead.
Since we use this value for all the invoices we create, it doesn't change much. The only case where it would have an impact would be if we create an invoice, then shut down, change the conf, restart, and only then someone tries to pay the invoice; in that case we would probably want to enforce the new conf anyway.
We previously duplicated `variableSizeBytesLong(varintoverflow, ...)`
whenever we wanted to work with a tlv field.
This was confusing and error-prone, so it's now factored into a specific
helper codec. We also remove the length-prefixed truncated int codecs,
as they are always used in tlvs and should simply use this new tlv field
codec instead.
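As a simplified sketch of such a helper, assuming scodec and an existing `varintoverflow` codec (not eclair's exact signature):

```scala
import scodec.Codec
import scodec.codecs.variableSizeBytesLong

// A tlv field is a varint length prefix followed by the encoded value, so the
// repeated pattern collapses into a single combinator.
def tlvField[A](valueCodec: Codec[A], varintoverflow: Codec[Long]): Codec[A] =
  variableSizeBytesLong(varintoverflow, valueCodec)
```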
These two fuzz tests set up a random set of HTLCs and then try to send or
receive the maximum available amount. The initial HTLC setup may fail if
the initial balances are too low.
It is hard to set those initial balances to ensure this will always work,
but since this only rarely happens at random, we should simply ignore it
(instead of failing the test).
When using a custom logger for log capture in tests (with `akka.loggers=["fr.acinq.eclair.testutils.MySlf4jLogger"]`), we need to explicitly disable the "hardcoded" slf4j logger for akka typed, otherwise we will end up with duplicate slf4j logging (one through our custom logger, the other one through the default slf4j logger).
See the rationale for this hardcoded slf4j logger here: https://doc.akka.io/docs/akka/current/typed/logging.html#event-bus.
When using dual funding, both sides may push an initial amount to the remote
side. This is done with an experimental tlv that can be added to `open_channel2`
and `accept_channel2`.
With the requirements added by #2430, we can get rid of the superfluous degrees of freedom around channel reserve, while still leaving the model untouched.
In dual funded channels the reserve is computed automatically, but our model allows setting a reserve even for dual funded channels.
Changing the model is too much work; instead this PR proposes to:
- add `require`s in the `Commitments` class to verify at runtime that we are consistent (it would cause the channel to fail before it is created); see the sketch below
- pass the `dualFunded` status to `Peer.makeChannelParams` so we only build valid parameters.
We could also alter the handlers for `SpawnChannelInitiator`/`SpawnChannelNonInitiator` and mutate the `LocalParams` depending on the value of `dualFunded`. It is less intrusive but arguably more hacky.
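For example, the runtime check from the first bullet could look roughly like this (field names are hypothetical):

```scala
// Hypothetical sketch: dual funded channels must not carry an explicitly
// configured reserve, since the reserve is computed automatically.
final case class ChannelParamsCheck(dualFunded: Boolean, requestedChannelReserveSat: Option[Long]) {
  require(!dualFunded || requestedChannelReserveSat.isEmpty,
    "channel reserve must not be set manually for dual funded channels")
}
```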
When creating a blinded route, we expose the last blinding point (that the
last node will receive). This lets the recipient derive the corresponding
blinded private key, which they may use to sign an invoice.
We add support for generating Bolt 12 invoices and storing them in our
payments DB to then receive blinded payments.
We implement the receiving part once a blinded payment has been decrypted.
This uses the same payment flow as non-blinded payments, with slightly
different validation steps.
Note that we put a random secret in the blinded paths' path_id field
to verify that an incoming payment uses a valid blinded route generated
by us. We store that as an arbitrary byte vector to allow future changes
to this strategy.
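As a simplified sketch of that idea (illustrative, not eclair's actual code): generate a random secret when building the blinded route, store it alongside the invoice, and compare it against the `path_id` carried by incoming payments.

```scala
import java.security.SecureRandom
import scodec.bits.ByteVector

// Fresh random path_id generated when creating a blinded route for one of our
// invoices; the same value is stored with the invoice.
def newPathId(): ByteVector = {
  val bytes = new Array[Byte](32)
  new SecureRandom().nextBytes(bytes)
  ByteVector.view(bytes)
}

// When a blinded payment comes in, check that the path_id it carries matches
// the secret we stored (a constant-time comparison in practice).
def isKnownPathId(received: ByteVector, stored: ByteVector): Boolean = received == stored
```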
Add InvalidOnionBlinded error and translate downstream errors when
we're inside a blinded route, with a random delay when we're the
introduction point.
Add more restrictions to the tlvs that can be used inside blinded payloads.
Add route blinding feature bit and reject blinded payments when
the feature is disabled.
* Separate tlv decoding from content validation
When decoding a tlv stream, we previously also validated the
stream's content at decoding time. This was a layer violation,
as checking that specific tlvs are present in a stream is not
an encoding concern.
This was somewhat fine when we only had very basic validation
(presence or absence of specific tlvs), but blinded paths
substantially changed that because one of the tlvs must be
decrypted to yield another tlv stream that also needs to have
its content validated.
This forced us to have an overly complex trait hierarchy in
PaymentOnion.scala and expose a blinding key in classes that
shouldn't care about whether blinding is used or not.
We now decouple that into two distinct steps:
* codecs simply return tlv streams and verify that tlvs are
correctly encoded
* business logic case classes (such as ChannelRelayPayload)
should be instantiated with a new `validate` method that
takes tlv streams and verifies mandatory/forbidden tlvs
This lets us greatly simplify the trait hierarchy and deal
with case classes that only contain fully decrypted and valid
data.
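A condensed sketch of the two-step approach (names and types are illustrative, not eclair's actual ones):

```scala
// Step 1: codecs only produce a tlv stream (pure encoding concern).
sealed trait Tlv
case class AmountToForward(amountMsat: Long) extends Tlv
case class OutgoingCltv(cltv: Long) extends Tlv
case class TlvStream(records: Seq[Tlv])

// Step 2: business logic case classes are built through a `validate` method
// that checks mandatory/forbidden tlvs (content validation concern).
case class ChannelRelayPayload(amountToForward: Long, outgoingCltv: Long)

object ChannelRelayPayload {
  def validate(tlvs: TlvStream): Either[String, ChannelRelayPayload] =
    for {
      amount <- tlvs.records.collectFirst { case AmountToForward(a) => a }.toRight("missing amt_to_forward")
      cltv <- tlvs.records.collectFirst { case OutgoingCltv(c) => c }.toRight("missing outgoing_cltv_value")
    } yield ChannelRelayPayload(amount, cltv)
}
```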
* Improve tests
There was redundancy in the wrong places: route blinding codec tests were
testing route blinding decryption and were missing content validation.
We also change the behavior of the route blinding decode method to return
the blinding override when present, instead of letting higher level
components duplicate that logic.
* Use hierarchical namespaces
As suggested by @pm47
* Small PR comments
* Remove confusing comment