We currently accept some malformed TLVs with additional data that we ignore. This means that decoding and reencoding may give a different result.
With this change, we now reject such TLVs.
Also add the `.as[]` part of the codec inside `tlvField` so we can remove the redundant type annotations.
Currently, for an incoming payment we check that the CLTV delta is larger than the minFinalExpiryDelta from the invoice. However, with BOLT 12, invoices no longer contain a minFinalExpiryDelta (not yet visible in eclair code, BOLT 12 moves fast!). I suggest using the minFinalExpiryDelta value from the node params instead.
Since we use this value for all the invoices we create, it doesn't change much. The only case where it would have an impact would be if we create an invoice, then shut down, change the conf, restart, and only then someone tries to pay the invoice; in that case we would probably want to enforce the new conf anyway.
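A rough sketch of the check this amounts to, with approximate field names (the exact location of `minFinalExpiryDelta` in the node configuration is an assumption):

```scala
// sketch only, field names approximate: validate the incoming htlc expiry against our
// node-level configuration instead of a value carried by the invoice
val minFinalExpiry = nodeParams.minFinalExpiryDelta.toCltvExpiry(nodeParams.currentBlockHeight)
if (add.cltvExpiry < minFinalExpiry) {
  // reject the payment: the expiry is too close to the current block height
}
```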
We previously duplicated `variableSizeBytesLong(varintoverflow, ...)`
whenever we wanted to work with a tlv field.
This was confusing and error-prone, so it's now factored into a specific
helper codec. We also remove the length-prefixed truncated int codecs,
as they are always used in tlvs and should simply use this new tlv field
codec instead.
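For reference, a minimal sketch of what such a helper can look like, assuming eclair's `Tlv` trait and `varintoverflow` codec are in scope and using scodec's `variableSizeBytesLong`; the exact signature eclair uses may differ:

```scala
import scodec.{Codec, Transformer}
import scodec.codecs.variableSizeBytesLong

// sketch: length-prefix a tlv value codec with the varint length used in tlv streams,
// and perform the `.as[...]` conversion to the tlv case class inside the helper
def tlvField[A, T <: Tlv](codec: Codec[A])(implicit t: Transformer[A, T]): Codec[T] =
  variableSizeBytesLong(varintoverflow, codec.as[T])
```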
These two fuzz tests set up a random set of HTLCs and then try to send or
receive the maximum available amount. The initial HTLC setup may fail if
the initial balances are too low.
It is hard to choose initial balances that guarantee this always works,
but since this only happens rarely (and randomly), we should simply
ignore it (instead of failing the test).
When using a custom logger for log capture in tests (with `akka.loggers=["fr.acinq.eclair.testutils.MySlf4jLogger"]`), we need to explicitly disable the "hardcoded" slf4j logger for akka typed, otherwise we will end up with duplicate slf4j logging (one through our custom logger, the other one through the default slf4j logger).
See the rationale for this hardcoded slf4j logger here: https://doc.akka.io/docs/akka/current/typed/logging.html#event-bus.
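In config terms this presumably boils down to something like the following test settings (the `akka.use-slf4j` key is believed to be Akka's switch for its built-in direct-to-slf4j logging in akka typed; treat it as an assumption here):

```hocon
akka {
  loggers = ["fr.acinq.eclair.testutils.MySlf4jLogger"]
  # disable akka typed's direct slf4j logging to avoid duplicate log lines
  use-slf4j = off
}
```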
When using dual funding, both sides may push an initial amount to the remote
side. This is done with an experimental tlv that can be added to `open_channel2`
and `accept_channel2`.
With the requirements added by #2430, we can get rid of the superfluous degrees of freedom around channel reserve, while still leaving the model untouched.
In dual funded channels the reserve is computed automatically. But our model allows setting a reserve even for dual funded channels.
Changing the model is too much work, instead this PR proposes to:
- add `require`s in the `Commitments` class to verify at runtime that we are consistent (it would cause the channel to fail before it is created)
- pass the `dualFunded` status to `Peer.makeChannelParams` so we only build valid parameters.
We could also alter the handlers for `SpawnChannelInitiator`/`SpawnChannelNonInitiator` and mutate the `LocalParams` depending on the value of `dualFunded`. It is less intrusive but arguably more hacky.
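As a rough illustration of the first bullet above (names are hypothetical, not eclair's exact model):

```scala
import fr.acinq.bitcoin.scalacompat.Satoshi

// sketch only: the kind of consistency check added to Commitments; dual-funded
// channels must not carry an explicit reserve since it is computed automatically
def checkReserveConsistency(dualFunded: Boolean, requestedChannelReserve_opt: Option[Satoshi]): Unit =
  require(!dualFunded || requestedChannelReserve_opt.isEmpty,
    "channel reserve must not be set explicitly for dual-funded channels")
```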
When creating a blinded route, we expose the last blinding point (that the
last node will receive). This lets the recipient derive the corresponding
blinded private key, which they may use to sign an invoice.
We add support for generating Bolt 12 invoices and storing them in our
payments DB to then receive blinded payments.
We implement the receiving part once a blinded payment has been decrypted.
This uses the same payment flow as non-blinded payments, with slightly
different validation steps.
Note that we put a random secret in the blinded paths' path_id field
to verify that an incoming payment uses a valid blinded route generated
by us. We store that as an arbitrary byte vector to allow future changes
to this strategy.
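To make the last paragraph concrete, a simplified sketch (names hypothetical, not eclair's actual types):

```scala
import scodec.bits.ByteVector

// sketch: a fresh random secret is stored alongside the Bolt 12 invoice and embedded
// in the blinded paths' path_id field when we build the route
val pathId: ByteVector = randomBytes32().bytes // eclair's randomBytes32() helper

// when a blinded payment is decrypted, it is only accepted if the path_id it carries
// matches the secret we stored for one of our invoices
def isPaymentForOurInvoice(decryptedPathId: ByteVector, storedPathId: ByteVector): Boolean =
  decryptedPathId == storedPathId
```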
Add InvalidOnionBlinded error and translate downstream errors when
we're inside a blinded route, with a random delay when we're the
introduction point.
Add more restrictions to the tlvs that can be used inside blinded payloads.
Add route blinding feature bit and reject blinded payments when
the feature is disabled.
* Separate tlv decoding from content validation
When decoding a tlv stream, we previously also validated the
stream's content at decoding time. This was a layer violation,
as checking that specific tlvs are present in a stream is not
an encoding concern.
This was somewhat fine when we only had very basic validation
(presence or absence of specific tlvs), but blinded paths
substantially changed that because one of the tlvs must be
decrypted to yield another tlv stream that also needs to have
its content validated.
This forced us to have an overly complex trait hierarchy in
PaymentOnion.scala and expose a blinding key in classes that
shouldn't care about whether blinding is used or not.
We now decouple that into two distinct steps:
* codecs simply return tlv streams and verify that tlvs are
correctly encoded
* business logic case classes (such as ChannelRelayPayload)
should be instantiated with a new `validate` method that
takes tlv streams and verifies mandatory/forbidden tlvs
This lets us greatly simplify the trait hierarchy and deal
with case classes that only contain fully decrypted and valid
data.
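A schematic example of the resulting pattern (simplified; the tlv and error types below follow eclair's naming, but the exact shape is illustrative):

```scala
// sketch: the codec produces a raw TlvStream; the business-level payload is built
// through an explicit validate step that checks mandatory tlvs
case class ChannelRelayPayload(amountToForward: MilliSatoshi, outgoingCltv: CltvExpiry, outgoingChannelId: ShortChannelId)

object ChannelRelayPayload {
  def validate(tlvs: TlvStream[OnionPaymentPayloadTlv]): Either[InvalidOnionPayload, ChannelRelayPayload] = for {
    amount <- tlvs.get[OnionPaymentPayloadTlv.AmountToForward].toRight(InvalidOnionPayload(UInt64(2), 0))
    expiry <- tlvs.get[OnionPaymentPayloadTlv.OutgoingCltv].toRight(InvalidOnionPayload(UInt64(4), 0))
    scid <- tlvs.get[OnionPaymentPayloadTlv.OutgoingChannelId].toRight(InvalidOnionPayload(UInt64(6), 0))
  } yield ChannelRelayPayload(amount.amount, expiry.cltv, scid.shortChannelId)
}
```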
* Improve tests
There was redundancy in the wrong places: route blinding codec tests were
testing route blinding decryption and were missing content validation.
We also change the behavior of the route blinding decode method to return
the blinding override when present, instead of letting higher level
components duplicate that logic.
* Use hierarchical namespaces
As suggested by @pm47
* Small PR comments
* Remove confusing comment
The bug is due to mistakenly using `^` as a power operator, whereas it is actually a xor. As a result, the available space for local aliases was tiny, resulting in collisions. This in turn causes the `relayer` to forward `UpdateAddHtlc` to the wrong node, which results in `UpdateFailMalformed` errors because the peers are unable to decrypt onions that weren't meant for them.
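For illustration, the difference in Scala (the exponent 48 is just an example, not necessarily the constant involved):

```scala
val wrong = 1 ^ 48   // bitwise xor: 49, leaving a tiny alias space and frequent collisions
val right = 1L << 48 // what was intended: 2^48 = 281474976710656 possible values
```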
We previously generated random values, but the randomness doesn't protect
against anything and adds a risk of re-using the same serial ID twice.
It's a better idea to just increment serial IDs (while respecting the
parity spec requirement).
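A minimal sketch of the idea (not eclair's actual generator); the spec requires each side to stick to its assigned parity, which stepping by 2 preserves:

```scala
// sketch: a monotonically increasing serial id generator that never changes parity
class SerialIdGenerator(startingParity: Long) {
  require(startingParity == 0 || startingParity == 1)
  private var next: Long = startingParity
  def nextSerialId(): Long = {
    val id = next
    next += 2 // keeps the parity fixed while guaranteeing uniqueness
    id
  }
}
```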
If your peer uses unconfirmed inputs in the interactive-tx protocol, you
will end up paying for the fees of their unconfirmed previous transactions.
This may be undesirable in some cases, so we allow requiring confirmed
inputs only. This is currently set to false, but can be tweaked based on
custom tlvs or values in the open/accept messages.
When fee-bumping an interactive-tx, we want to be more lenient and accept
transactions that improve the feerate, even if peers didn't contribute
equally to the feerate increase.
This is particularly useful for scenarios where the non-initiator dedicated
a full utxo for the channel and doesn't want to expose new utxos when
bumping the fees (or doesn't have more utxos to allocate).
While it makes sense to assume that relayed payments will fail for the purpose of balance computation (meaning that we don't earn a fee), the opposite assumption should be made for payments sent from the local node: we assume they will succeed, causing the full htlc amount to be deducted from the balance.
This way we are consistent: the balance computation is pessimistic and always assumes the lowest outcome.
Co-authored-by: Bastien Teinturier <31281497+t-bast@users.noreply.github.com>
This PR enables capturing and printing logs for tests that failed, and is compatible with parallel testing. The core idea is to use a different `LoggerContext` for each test (see [logback's doc on context selection](https://logback.qos.ch/manual/contextSelector.html)).
Actual capture and printing of logs is realized through the same technique as Akka's builtin `LogCapture` helpers, that is:
- a custom appender accumulates log events in memory
- a dedicated logger (defined in logback-test.xml and disabled by default) is manually called by the custom appender when logs need to be printed
I unfortunately had to introduce boilerplate classes `MyContextSelector`, `MySlf4jLogger` and `MyCapturingAppender`, the last two being tweaked versions of Akka's existing classes.
Note that the log capture is only enabled for tests that use `FixtureSpec`. The `ActorSystem` needs to be configured to log to `MySlf4jLogger`.
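For reference, logback selects the context selector through a JVM system property; something along these lines is presumably how `MyContextSelector` gets wired in (the package name is assumed to match `MySlf4jLogger`, and the exact setup in eclair is not shown here):

```
-Dlogback.ContextSelector=fr.acinq.eclair.testutils.MyContextSelector
```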
Advantages over existing technique:
- compatible with parallel testing
- no funny business with reflection in FixtureSpec.scala
- uses configurable logback formatting instead of raw println
- allows logging from lightning-kmp (depends on https://github.com/ACINQ/lightning-kmp/pull/355)
Co-authored-by: Bastien Teinturier <31281497+t-bast@users.noreply.github.com>
We previously supported a 65-byte fixed-size sphinx payload, which has
been deprecated in favor of variable length payloads containing a tlv
stream (see https://github.com/lightning/bolts/pull/619).
It looks like the whole network now supports the variable-length format,
so we can stop accepting the old one. It is also being removed from the
spec (see https://github.com/lightning/bolts/pull/962).
The test `ChannelCodecsSpec.backward compatibility older codecs (integrity)` uses reference json stored as text files. Depending on git settings and OS, the test may fail due to line ending differences.
We need to validate early that a `tx_add_input` message can be converted
into an `OutPoint` without raising an out of bounds exception.
We also fix a flaky test on slow machines.
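A rough sketch of the early check this refers to (method and constructor details approximate):

```scala
import fr.acinq.bitcoin.scalacompat.{OutPoint, Transaction}

// sketch: reject the tx_add_input if its declared output index is out of bounds for
// the previous transaction, instead of letting the OutPoint conversion throw later
def toOutPoint(previousTx: Transaction, previousTxOutput: Long): Either[String, OutPoint] =
  if (previousTxOutput < 0 || previousTxOutput >= previousTx.txOut.length) {
    Left(s"invalid previousTxOutput=$previousTxOutput (previous tx has ${previousTx.txOut.length} outputs)")
  } else {
    Right(OutPoint(previousTx, previousTxOutput.toInt))
  }
```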
Add support for bumping the fees of a dual funding transaction.
We spawn a transient dedicated actor: if the RBF attempt fails, or if we
are disconnected before completing the protocol, we should forget it.
Add more tests for scenarios where an unconfirmed channel is force-closed,
where the funding transaction that confirms may not be the last one.
When an alternative funding transaction confirms, we need to unlock other
candidates: we may not have published them yet if for example we didn't
receive remote signatures.
Once we've exchanged signatures for the funding tx, we wait for it to
confirm.
Note that we don't allow mutually closing an unconfirmed channel, but that
is also the case for single-funded channels. We can improve that in the
future if necessary, but it is more efficient to double-spend an unconfirmed
channel than to mutually close it.
To match the latest changes in https://github.com/lightning/bolts/pull/765
at commit aed5518a80aade56218da87f92e0a39963b660cf
The main change was the introduction of the `payment_relay`,
`payment_constraints` and `allowed_features` tlvs, with small
additional codec updates.
Apply @rustyrussell's neat truncating integer arithmetic formula to
calculate the amount that should be forwarded by blinded path nodes
instead of our previous approximation.
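For reference, the formula in question computes the outgoing amount as the ceiling of `(amt_msat - fee_base_msat) * 1000000 / (1000000 + fee_proportional_millionths)` using only integer arithmetic; a sketch of that computation, not eclair's exact code:

```scala
// sketch: truncating-integer ceiling division, avoiding floating point entirely
def amountToForward(amtMsat: Long, feeBaseMsat: Long, feeProportionalMillionths: Long): Long =
  ((amtMsat - feeBaseMsat) * 1000000 + 1000000 + feeProportionalMillionths - 1) / (1000000 + feeProportionalMillionths)
```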
It is now possible to specify a DNS host name as one of your
`server.public-ips` addresses.
DNS host names will not be resolved until eclair attempts to
connect to the peer.
See https://github.com/lightning/bolts/pull/911
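Hypothetical `eclair.conf` snippet (the value format is an assumption; only the `server.public-ips` key comes from the change above):

```hocon
eclair {
  server {
    # a mix of an IP address and a DNS host name (hypothetical values)
    public-ips = ["203.0.113.7", "mynode.example.com"]
  }
}
```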
After exchanging `open_channel2` and `accept_channel2`, we start building
the funding transaction.
We stop once we've generated our signatures for the funding transaction,
at which point we should store the channel in the DB (which will be done in
future commits).
Before eclair v0.6.0, we didn't store a mapping between htlc_id and htlc
txs, which made it tricky to correctly remove identical htlcs that used
MPP during force-close, when an htlc tx was confirmed.
We have added that mapping since then and released it more than one year
ago, so we can now safely remove that code.
We previously pruned a channel only once both of its channel updates were stale.
This was incorrect: we must prune channels as soon as one side becomes stale.
There are ~100 channels on the network today that have one inactive side,
while the other side regularly refreshes their channel update, but those
channels won't be usable for routing. They should eventually be closed,
but the active side is probably hoping for the inactive side to come back
online to get the opportunity to do a mutual close.
Following #2361, we reject channel updates that don't contain the
`htlc_maximum_msat` field. However, the network DB may contain such
channel updates, that we need to remove when starting up.
This is a follow up of #2264 where we refactor handling of channel updates
in failures coming from routing hints.
For failures in one of the routing hints, we use the node_id pair (source,
destination) instead of the short_channel_id to identify the edge.
We implement the first step of the dual funding protocol: exchanging
`open_channel2` and `accept_channel2`.
We currently stop after exchanging these two messages. Future commits will
add the interactive-tx protocol used to build the funding transaction.
Note on log capture:
- akka version (`... with LogCapturing`): only works when tests are run sequentially (that is: useless on CI), otherwise the outputs of all tests are mixed together
- my version: logs are separated between tests, but the formatting is ugly, because akka's `StdOutLogger` doesn't use logback settings.
The specification is removing support for old channel updates that didn't
include an `htlc_maximum_msat` (https://github.com/lightning/bolts/pull/996).
Every implementation has been generating updates containing this field for
years, so we can safely reject updates that don't contain it.
When we have nothing at stake (channel was never used and we don't have
funds to claim), we previously directly went to the CLOSED state without
publishing our commitment. This can be an issue for our peer if they have
lost data or had a hard time getting a funding tx confirmed.
We now publish our commitment once to help them get their funds back in
all cases, and to spare them the CSV delays they would otherwise face.
Fixes #1730
We introduced a way to get a "light" payments view in #1225.
This was a performance improvement for mobile wallets embedding eclair.
Mobile wallets should now use lightning-kmp instead of eclair, so we can
get rid of that unused code.