When using `static_remotekey`, we ask `bitcoind` to generate a receiving
address and send the corresponding public key to our peer. We previously
assumed that `bitcoind` would use `bech32`, but that may change since #2466.
We now need to explicitly tell `bitcoind` that we want a p2wpkh address,
otherwise it may generate a `p2tr` address. We would still use that
public key to get our funds back, but `bitcoind` would not detect that it
can spend this output (because it only expects this public key to be used
for a taproot address).
The default values were previously generated once at node startup instead
of being recomputed for every request. Fixes #2475.
We also add pagination to the `listPendingInvoice` API as a follow-up for
#2474.
This release of bitcoind contains several bug fixes that let us simplify
our fee bumping logic:
- fixed a bug where bitcoind dropped non-wallet signatures
- added an option to fund transactions containing non-wallet inputs
It also has support for Taproot, which we want in eclair.
We previously only supported `scid_alias` and `zero_conf` when used in
combination with `anchors_zero_fee_htlc_tx`, but we have no way of
advertising that to our peers, which leads to confusing failures when
opening channels.
Some nodes that don't have good access to a utxo pool may not migrate to
anchor outputs, but may still want to use `scid_alias` (and potentially
`zero_conf` as well).
Fixes #2394
When nodes receive HTLCs, they verify that the contents of those HTLCs
match the instructions that the sender provided in the onion. It is
important to ensure that intermediate nodes and final nodes have similar
requirements, otherwise a malicious intermediate node could easily probe
whether the next node is the final recipient or not.
Unfortunately, the requirements for intermediate nodes were more lenient
than the requirements for final nodes. Intermediate nodes allowed overpaying
and increasing the CLTV expiry, whereas final nodes required a perfect
equality between the HTLC values and the onion values.
This provided a trivial way of probing: when relaying an HTLC, nodes could
relay 1 msat more than what the onion instructed (or increase the outgoing
expiry by 1). If the next node was an intermediate node, they would accept
this HTLC, but if the next node was the recipient, they would reject it.
We update those requirements to fix this probing attack vector.
See https://github.com/lightning/bolts/pull/1032
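The updated check can be sketched as follows (a minimal illustration with hypothetical types, not eclair's actual API): both intermediate and final nodes now accept an HTLC that overpays the onion amount or uses a larger expiry, so the two cases are indistinguishable to a probing peer.

```scala
// Hypothetical minimal types for illustration.
case class Htlc(amountMsat: Long, cltvExpiry: Long)
case class OnionPayload(amountMsat: Long, cltvExpiry: Long)

// Same lenient check for intermediate and final nodes: the HTLC may
// overpay or use a larger expiry than what the onion instructed.
def acceptHtlc(htlc: Htlc, onion: OnionPayload): Boolean =
  htlc.amountMsat >= onion.amountMsat && htlc.cltvExpiry >= onion.cltvExpiry
```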
When sending outgoing payments, using an expiry that is very close to
the current block height makes it obvious to the next-to-last node
who the recipient is.
We add a random number of blocks to the expiry we use to make it plausible
that the route contains more hops.
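As a rough sketch (helper name and padding range are assumptions for illustration), the final expiry is padded with a random delta so it looks like the route could continue past the recipient:

```scala
import scala.util.Random

// Pad the final expiry with a random number of extra blocks so the
// next-to-last node cannot infer the recipient from the expiry alone.
// The padding range (0 to 499 blocks) is an illustrative value.
def computeFinalExpiry(currentBlockHeight: Long, minFinalExpiryDelta: Int): Long = {
  val randomPadding = Random.nextInt(500)
  currentBlockHeight + minFinalExpiryDelta + randomPadding
}
```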
Add a message flag to channel update to specify that an update is private
and should not be rebroadcast to other nodes.
We log an error if a private channel update gets into the rebroadcast list.
See https://github.com/lightning/bolts/pull/999
The specification recommends using a length of 256 for onion errors, but
it doesn't say that we should reject errors that use a different length.
We may want to start creating errors with a bigger length than 256 if we
need to transmit more data to the sender. In order to prepare for this,
we keep creating 256-bytes onion errors, but allow receiving errors of
arbitrary length.
See the specification here: https://github.com/lightning/bolts/pull/1021
Fixes #2438
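A simplified sketch of the asymmetry (helper names are hypothetical): we keep producing fixed-size 256-byte error payloads, but no longer enforce any length when decoding.

```scala
val FixedErrorLength = 256

// Sending side: always pad our error payloads to the fixed 256-byte length.
def padErrorPayload(message: Array[Byte]): Array[Byte] = {
  require(message.length <= FixedErrorLength, "error message too large")
  message ++ Array.fill[Byte](FixedErrorLength - message.length)(0)
}

// Receiving side: accept payloads of arbitrary length.
def decodeErrorPayload(payload: Array[Byte]): Array[Byte] =
  payload // no length check
```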
We previously had a few error messages where we would send the complete
offending transaction instead of just its txid. This can be dangerous if
we accidentally send a signed transaction that our counterparty may
broadcast.
We are currently careful and avoid sending signed transactions when it may
be dangerous for us, but there's a risk that a refactoring changes that.
It's safer to only send the txid.
* Explain how and why we use bitcoin core
Explain why we chose to delegate most on-chain tasks to bitcoin core (including on-chain wallet management), the additional requirements that it creates and also the benefits in terms of security.
The `tx_signatures` message uses `tx_hash` instead of `txid`.
Thanks @niftynei for pointing this out.
Reject `tx_add_input` with an `nSequence` set to `0xfffffffe` or
`0xffffffff`. In theory we only need one of the inputs to have an
`nSequence` below `0xfffffffe`, but it's simpler to check all of them
to be able to fail early in the protocol.
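The check itself is trivial (sketched here with a hypothetical minimal input type, the sequence stored as a `Long` to stay unsigned):

```scala
// Minimal stand-in for the tx_add_input message.
case class TxAddInput(serialId: Long, sequence: Long)

// Reject inputs whose nSequence is 0xfffffffe or 0xffffffff: such inputs
// would disable RBF signaling for the shared transaction.
def validateSequence(input: TxAddInput): Boolean =
  input.sequence < 0xfffffffeL
```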
When a channel was pruned and we receive a valid channel update for it,
we need to wait until we're sure both sides of the channel are back
online: a channel that has only one side available will most likely not
be able to relay payments.
When both sides have created fresh, valid channel updates, we restore the
channel to our channels map and to the graph.
We remove the custom `pruned` table and instead directly use the data
from the channels table and cached data from the router.
Fixes #2388
Add tlv to require confirmed inputs in dual funding: when that tlv is specified,
the peer must use confirmed inputs, otherwise the funding attempt will be
rejected. This ensures that we won't pay the fees for a low-feerate ancestor.
This can be configured at two levels:
- globally using `eclair.conf`
- on a per-channel basis by overriding the default in `Peer.OpenChannel`
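The global default could look like this in `eclair.conf` (the key name below is hypothetical, for illustration only; check the reference configuration for the actual key):

```
// hypothetical key name, shown for illustration
eclair.channel.require-confirmed-inputs = true
```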
We used to drop onion messages above a certain size, but the onion message packet is already limited to 65536 bytes, so we now only enforce that larger limit.
If we have activated 0-conf support for a given peer, we send our
`channel_ready` early regardless of whether our peer has activated support
for 0-conf. If they also immediately send their `channel_ready` it's great,
if they don't it's ok, we'll just wait for confirmations, but it was worth
trying.
In case feeRatePerKw is high and liquidity is low on the initiator side, the initiator can't send anything and the test would fail because we try to create an HTLC for an amount of 0.
We currently accept some malformed TLVs with additional data that we ignore. This means that decoding and reencoding may give a different result.
With this change, we now reject such TLVs.
Also add the `.as[]` part of the codec inside `tlvField` so we can remove redundant type annotations.
Currently, for an incoming payment we check that the CLTV delta is larger than the minFinalExpiryDelta from the invoice. However with BOLT 12, invoices no longer contain a minFinalExpiryDelta (not yet visible in eclair code, BOLT 12 moves fast!). I suggest using the minFinalExpiryDelta value from the node params instead.
Since we use this value for all the invoices we create, it doesn't change much. The only case where it would have an impact would be if we create an invoice, then shutdown, change the conf, restart, and only then someone tries to pay the invoice; in that case we would probably want to enforce the new conf anyway.
We previously duplicated `variableSizeBytesLong(varintoverflow, ...)`
whenever we wanted to work with a tlv field.
This was confusing and error-prone, so it's now factored into a specific
helper codec. We also remove the length-prefixed truncated int codecs,
as they are always used in tlvs and should simply use this new tlv field
codec instead.
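As a simplified illustration of what the helper factors out (this is plain Scala, not eclair's actual scodec-based `tlvField`): a tlv field is just the encoded value prefixed with a BigSize varint length.

```scala
// BigSize varint encoding, restricted to small lengths for this sketch
// (the 0xfe and 0xff cases are omitted).
def bigSize(n: Long): Array[Byte] = {
  require(n >= 0 && n <= 0xffffL, "only small lengths in this sketch")
  if (n < 0xfd) Array(n.toByte)
  else Array(0xfd.toByte, ((n >> 8) & 0xff).toByte, (n & 0xff).toByte)
}

// A tlv field: varint length prefix followed by the encoded value.
def tlvField(value: Array[Byte]): Array[Byte] =
  bigSize(value.length.toLong) ++ value
```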
These two fuzz tests set up a random set of HTLCs and then try to send or
receive the maximum available amount. The initial HTLC setup may fail
if the initial balances are too low.
It is hard to choose initial balances that guarantee this will always work,
but since failures only happen rarely and randomly, we should simply ignore
them (instead of failing the test).
When using a custom logger for log capture in tests (with `akka.loggers=["fr.acinq.eclair.testutils.MySlf4jLogger"]`), we need to explicitly disable the "hardcoded" slf4j logger for akka typed, otherwise we will end up with duplicate slf4j logging (one through our custom logger, the other one through the default slf4j logger).
See the rationale for this hardcoded slf4j logger here: https://doc.akka.io/docs/akka/current/typed/logging.html#event-bus.
When using dual funding, both sides may push an initial amount to the remote
side. This is done with an experimental tlv that can be added to `open_channel2`
and `accept_channel2`.
With the requirements added by #2430, we can get rid of the superfluous degrees of freedom around channel reserve, while still leaving the model untouched.
In dual funded channels the reserve is computed automatically. But our model allows setting a reserve even for dual funded channels.
Changing the model is too much work, instead this PR proposes to:
- add `require`s in the `Commitments` class to verify at runtime that we are consistent (it would cause the channel to fail before it is created)
- pass the `dualFunded` status to `Peer.makeChannelParams` so we only build valid parameters.
We could also alter the handlers for `SpawnChannelInitiator`/`SpawnChannelNonInitiator` and mutate the `LocalParams` depending on the value of `dualFunded`. It is less intrusive but arguably more hacky.
When creating a blinded route, we expose the last blinding point (that the
last node will receive). This lets the recipient derive the corresponding
blinded private key, which they may use to sign an invoice.
We add support for generating Bolt 12 invoices and storing them in our
payments DB to then receive blinded payments.
We implement the receiving part once a blinded payment has been decrypted.
This uses the same payment flow as non-blinded payments, with slightly
different validation steps.
Note that we put a random secret in the blinded paths' path_id field
to verify that an incoming payment uses a valid blinded route generated
by us. We store that as an arbitrary byte vector to allow future changes
to this strategy.
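A minimal sketch of that strategy (helper names are illustrative, not eclair's API): generate a random 32-byte secret as the path_id and compare it on receipt.

```scala
import java.security.SecureRandom

val secureRandom = new SecureRandom()

// Random 32-byte secret stored in the blinded path's path_id field,
// kept as an arbitrary byte vector so the scheme can evolve.
def newPathId(): Vector[Byte] = {
  val bytes = new Array[Byte](32)
  secureRandom.nextBytes(bytes)
  bytes.toVector
}

// Check that an incoming payment used a blinded route we generated.
// A constant-time comparison would be preferable in production code.
def isValidPathId(stored: Vector[Byte], received: Vector[Byte]): Boolean =
  stored == received
```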
Add InvalidOnionBlinded error and translate downstream errors when
we're inside a blinded route, with a random delay when we're the
introduction point.
Add more restrictions to the tlvs that can be used inside blinded payloads.
Add route blinding feature bit and reject blinded payments when
the feature is disabled.
* Separate tlv decoding from content validation
When decoding a tlv stream, we previously also validated the
stream's content at decoding time. This was a layer violation,
as checking that specific tlvs are present in a stream is not
an encoding concern.
This was somewhat fine when we only had very basic validation
(presence or absence of specific tlvs), but blinded paths
substantially changed that because one of the tlvs must be
decrypted to yield another tlv stream that also needs to have
its content validated.
This forced us to have an overly complex trait hierarchy in
PaymentOnion.scala and expose a blinding key in classes that
shouldn't care about whether blinding is used or not.
We now decouple that into two distinct steps:
* codecs simply return tlv streams and verify that tlvs are
correctly encoded
* business logic case classes (such as ChannelRelayPayload)
should be instantiated with a new `validate` method that
takes tlv streams and verifies mandatory/forbidden tlvs
This lets us greatly simplify the trait hierarchy and deal
with case classes that only contain fully decrypted and valid
data.
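The decode/validate split can be illustrated like this (a toy sketch: a `Map` from tlv type to value stands in for eclair's decoded tlv stream, and the names are hypothetical):

```scala
// Business-logic case class, only instantiated from a validated stream.
case class ChannelRelayData(amountMsat: Long)

// Validation is separate from decoding: the codec only produces the tlv
// stream, and this method checks mandatory tlvs are present.
def validateChannelRelay(tlvs: Map[Long, Long]): Either[String, ChannelRelayData] =
  tlvs.get(2L) match { // type 2 = amt_to_forward, mandatory here
    case Some(amount) => Right(ChannelRelayData(amount))
    case None => Left("missing mandatory amt_to_forward tlv")
  }
```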
* Improve tests
There was redundancy in the wrong places: route blinding codec tests were
testing route blinding decryption and were missing content validation.
We also change the behavior of the route blinding decode method to return
the blinding override when present, instead of letting higher level
components duplicate that logic.
* Use hierarchical namespaces
As suggested by @pm47
* Small PR comments
* Remove confusing comment
The bug was due to mistakenly using `^` as a power operator, while it is actually a `xor`. As a result, the available space for local aliases was tiny, resulting in collisions. This in turn caused the `relayer` to forward `UpdateAddHtlc` to the wrong node, which resulted in `UpdateFailMalformed` errors because peers were unable to decrypt an onion that wasn't meant for them.
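To illustrate the confusion:

```scala
// In Scala (as in Java), `^` is bitwise xor, not exponentiation.
val wrong = 1L ^ 48L  // xor: a tiny value, not 2^48
val right = 1L << 48  // shift: the intended 2^48 alias space
```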
We previously generated random values, but the randomness doesn't protect
against anything and adds a risk of re-using the same serial ID twice.
It's a better idea to just increment serial IDs (while respecting the
parity spec requirement).
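A minimal sketch of such a generator (class name is illustrative): the spec requires even serial IDs for one side and odd for the other, so we start from a parity-correct value and increment by 2.

```scala
// Monotonically increasing serial IDs that preserve the parity required
// by the interactive-tx spec (one parity per side).
class SerialIdGenerator(isInitiator: Boolean) {
  private var current: Long = if (isInitiator) 0L else 1L
  def next(): Long = {
    val id = current
    current += 2
    id
  }
}
```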
If your peer uses unconfirmed inputs in the interactive-tx protocol, you
will end up paying for the fees of their unconfirmed previous transactions.
This may be undesirable in some cases, so we allow requiring confirmed
inputs only. This is currently set to false, but can be tweaked based on
custom tlvs or values in the open/accept messages.
When fee-bumping an interactive-tx, we want to be more lenient and accept
transactions that improve the feerate, even if peers didn't contribute
equally to the feerate increase.
This is particularly useful for scenarios where the non-initiator dedicated
a full utxo for the channel and doesn't want to expose new utxos when
bumping the fees (or doesn't have more utxos to allocate).
While it makes sense to assume that relayed payments will fail in the context of balance computation (meaning that we don't earn a fee), the opposite is true for payments sent from the local node, which will cause the full htlc amount to be deducted from the balance.
This way we are consistent: the balance computation is pessimistic and always assumes the lowest outcome.
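In other words (a simplified sketch, not eclair's actual balance code): the full amount of every locally-sent outgoing HTLC is deducted, as if it will succeed.

```scala
// Pessimistic balance: assume relayed htlcs fail (no fee earned) and
// locally-sent htlcs succeed (full amount deducted from our balance).
def pessimisticBalance(toLocalMsat: Long, outgoingLocalHtlcsMsat: Seq[Long]): Long =
  toLocalMsat - outgoingLocalHtlcsMsat.sum
```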
Co-authored-by: Bastien Teinturier <31281497+t-bast@users.noreply.github.com>
This PR enables capturing and printing logs for tests that failed, and is compatible with parallel testing. The core idea is to use a different `LoggerContext` for each test (see [logback's doc on context selection](https://logback.qos.ch/manual/contextSelector.html)).
Actual capture and printing of logs is realized through the same technique as Akka's builtin `LogCapture` helpers, that is:
- a custom appender accumulates log events in memory
- a dedicated logger (defined in logback-test.xml and disabled by default) is manually called by the custom appender when logs need to be printed
I unfortunately had to introduce boilerplate classes `MyContextSelector`, `MySlf4jLogger` and `MyCapturingAppender`, the last two being tweaked versions of Akka's existing classes.
Note that the log capture is only enabled for tests that use `FixtureSpec`. The `ActorSystem` needs to be configured to log to `MySlf4jLogger`.
Advantages over existing technique:
- compatible with parallel testing
- no funny business with reflection in FixtureSpec.scala
- use configurable logback formatting instead of raw println
- allows logging from lightning-kmp (depends on https://github.com/ACINQ/lightning-kmp/pull/355)
Co-authored-by: Bastien Teinturier <31281497+t-bast@users.noreply.github.com>
We previously supported a 65-bytes fixed-size sphinx payload, which has
been deprecated in favor of variable length payloads containing a tlv
stream (see https://github.com/lightning/bolts/pull/619).
It looks like the whole network now supports the variable-length format,
so we can stop accepting the old one. It is also being removed from the
spec (see https://github.com/lightning/bolts/pull/962).
The test `ChannelCodecsSpec.backward compatibility older codecs (integrity)` uses reference json stored as text files. Depending on git settings and OS, the test may fail due to line ending differences.
We need to validate early that a `tx_add_input` message can be converted
into an `OutPoint` without raising an out of bounds exception.
We also fix a flaky test on slow machines.
Add support for bumping the fees of a dual funding transaction.
We spawn a transient dedicated actor: if the RBF attempt fails, or if we
are disconnected before completing the protocol, we should forget it.
Add more tests for scenarios where an unconfirmed channel is force-closed,
where the funding transaction that confirms may not be the last one.
When an alternative funding transaction confirms, we need to unlock other
candidates: we may not have published them yet if for example we didn't
receive remote signatures.