It makes sense to allow node operators to configure the value they want
to use as a maximum threshold for the anchor outputs commitment tx feerate.
This lets them raise that value when mempools start getting full, in
anticipation of a potential rise of the min-relay-fee.
This value can also be overridden for specific nodes.
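A minimal sketch of the intended lookup, with ad-hoc types (the field and
parameter names are assumptions for illustration, not eclair's actual
configuration code):

```scala
// Hypothetical sketch: a global cap on the anchor outputs commit tx feerate,
// with optional per-node overrides taking precedence over the default.
case class FeeratePerByte(feerate: Long)
case class FeerateTolerance(anchorOutputMaxCommitFeerate: FeeratePerByte)

def maxCommitFeerate(defaults: FeerateTolerance,
                     overrides: Map[String, FeerateTolerance], // keyed by remote nodeId
                     remoteNodeId: String): FeeratePerByte =
  overrides.getOrElse(remoteNodeId, defaults).anchorOutputMaxCommitFeerate
```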
We previously used a Set, which meant a feature could theoretically be
activated as both `optional` and `mandatory`.
We change that to a Map `feature -> support`.
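For illustration, a minimal sketch (with simplified stand-in types, not
eclair's actual `Feature`/`FeatureSupport`) of why the Map shape rules out
the ambiguous state:

```scala
// Simplified stand-in types for illustration.
sealed trait FeatureSupport
case object Optional extends FeatureSupport
case object Mandatory extends FeatureSupport

sealed trait Feature
case object Wumbo extends Feature
case object AnchorOutputs extends Feature

// Before: a Set of (feature, support) pairs could contain both
// (Wumbo, Optional) and (Wumbo, Mandatory) at the same time.
// After: a Map keyed by feature can only hold one support level per feature.
val activated: Map[Feature, FeatureSupport] = Map(
  Wumbo -> Optional,
  AnchorOutputs -> Mandatory
)
```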
* add tests on funding mindepth
We verify that when using wumbo channels:
- if we are the funder, we keep our regular min_depth
- if we are the fundee, we use a greater min_depth (a rough sketch of the
  scaling idea follows below)
* use lenses to simplify tags handling
Co-authored-by: Bastien Teinturier <31281497+t-bast@users.noreply.github.com>
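A rough sketch of the scaling idea behind these tests (the constants and the
formula are assumptions for illustration, not eclair's actual code):

```scala
// Hypothetical sketch: as fundee, wait for more confirmations on very large
// (wumbo) channels by scaling the depth with the funding amount; the funder
// keeps its regular min_depth.
def minDepthForFunding(defaultMinDepth: Int, fundingSatoshis: Long): Int = {
  val maxStandardFundingSat = 16777216L  // 2^24 sat, the pre-wumbo limit
  if (fundingSatoshis <= maxStandardFundingSat) defaultMinDepth
  else {
    val blockRewardSat = 625000000L      // assumption: 6.25 BTC block reward
    val scalingFactor = 15               // assumption: tolerated multiple of the block reward
    val scaled = (fundingSatoshis / (scalingFactor * blockRewardSat)).toInt + 1
    math.max(defaultMinDepth, scaled)
  }
}
```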
When using anchor outputs, the commitment feerate is kept low (<10 sat/byte).
When we need to force-close a channel, we must ensure the commit tx and htlc
txs confirm before a given deadline, so we need to increase their feerates.
This is currently done only once, at broadcast time.
We use CPFP for the commit tx and RBF for the htlc txs.
If publishing fails because we don't have enough utxos available, it will
be retried after the next block is confirmed.
Note that activating anchor outputs is still not recommended: more work is
needed on this fee bumping logic and on utxo management.
If a channel closes when we have received an `UpdateFailHtlc` and signed it,
but have not yet received our peer's revocation, we need to fail the htlc
upstream. That specific scenario was not correctly handled, resulting in
upstream htlcs that were not failed, which would force our upstream peer to
close the channel.
Since we almost always know which transactions will spend the utxos that we are watching, we can optimize the watcher to look for those instead of starting from scratch.
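As a simplified illustration of the idea (ad-hoc types, not the watcher's
actual API):

```scala
// Simplified sketch: check the transactions we already know about (e.g. our
// own commit/htlc txs) before falling back to a full scan for spends of the
// watched outpoint.
case class OutPoint(txid: String, outputIndex: Int)
case class Transaction(txid: String, inputs: Seq[OutPoint])

def findKnownSpender(watched: OutPoint, knownSpenders: Seq[Transaction]): Option[Transaction] =
  knownSpenders.find(_.inputs.contains(watched))
```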
Our previous timeout was based on timestamps, mostly because blockCount
could be 0 on mobile using Electrum until a new block was received.
Now that we're diverging from the mobile wallet codebase, we can use block
heights instead, which is more accurate.
See lightningnetwork/lightning-rfc#839
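A minimal sketch of the block-height-based check (ad-hoc types; the timeout
value is an assumption for illustration):

```scala
// Minimal sketch: express the funding timeout as a number of blocks past the
// height at which we started waiting, instead of comparing wall-clock
// timestamps.
case class BlockHeight(underlying: Long) extends AnyVal

val fundingTimeoutBlocks = 2016L // assumption: roughly two weeks worth of blocks

def fundingTimedOut(currentHeight: BlockHeight, waitingSince: BlockHeight): Boolean =
  currentHeight.underlying >= waitingSince.underlying + fundingTimeoutBlocks
```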
In some places of the codebase we relied on the fact that lightning transactions
had a single input. That was correct with the standard commitments format,
but will not be the case with anchor outputs: 2nd-stage txs (htlc-txs) and
3rd-stage txs (claim-htlc-txs) can be RBF-ed and have any number of inputs
and outputs.
With anchor outputs, the actual feerate for the commit tx can be decided
when broadcasting the tx by using CPFP on the anchor.
That means we don't need to constantly keep the channel feerate close to
what's happening on-chain. We just need a feerate that's good enough to get
the tx to propagate through the bitcoin network.
We set the upper threshold to 10 sat/byte, which is what lnd does as well.
We let the feerate be lower than that when possible, but do note that,
depending on your configured `feerate-tolerance`, you can still experience
force-close events because of feerate mismatches.
Fix anchor outputs closing fee requirements: when using anchor outputs,
the mutual close fee is allowed to be greater than the commit tx fee,
because we're targeting a specific confirmation window.
Fix fee mismatch without htlc: we allow disagreeing on fees while the channel
doesn't contain any htlc, because no funds can be at risk in that case.
But we used the latest signed fee when adding a new HTLC, whereas we must
also take into account the latest proposed (unsigned) fee.
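A simplified sketch of the intended behaviour (ad-hoc fields, not eclair's
actual `Commitments`):

```scala
// Simplified sketch: when checking whether a new HTLC fits, use the most
// recent update_fee that was proposed (even if not yet signed), falling back
// to the feerate of the latest signed commitment.
case class FeeState(latestSignedFeeratePerKw: Long, latestProposedFeeratePerKw: Option[Long]) {
  def effectiveFeeratePerKw: Long = latestProposedFeeratePerKw.getOrElse(latestSignedFeeratePerKw)
}
```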
Added an additional method to `Eclair`, similar to `findRoute` but allowing
2 nodeIds. Also added a new endpoint to the HTTP API, `findroutebetweennodes`,
which takes sourceNode and targetNode as params.
Fixes #1068
When our mempool is full, its min-relay-fee may be constantly changing.
To ensure our txs can be published, we need to check the min-relay-fee when
we fund the transaction, and raise our feerate if necessary.
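A minimal sketch of the check, with ad-hoc types:

```scala
// Minimal sketch: never fund a transaction below the node's current minimum
// relay feerate, otherwise the fully signed transaction may be rejected by
// our own mempool.
case class FeeratePerKw(feerate: Long) extends AnyVal

def fundingFeerate(target: FeeratePerKw, mempoolMinFee: FeeratePerKw): FeeratePerKw =
  if (target.feerate >= mempoolMinFee.feerate) target else mempoolMinFee
```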
We are restoring the previous behavior of using the `sync_complete` field
to signal the end of a `channel_range_query` sync.
The first step is to correctly set that field, before we can read it and
interpret it to mark the end of sync.
See https://github.com/lightningnetwork/lightning-rfc/pull/826
When a peer is disconnected, the register will return a forward failure.
This can happen if the peer is connected when we start the payment FSM and
then disconnects before we send them an HTLC.
Obviously the route amount must be strictly positive.
We don't control htlcMinimumMsat (it is set by our peer) and for backwards
compatibility reasons we allow it to be 0 msat (even though it doesn't make
much sense), so we need to enrich our condition to detect empty channels.
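A simplified sketch of the enriched condition (ad-hoc types):

```scala
// Simplified sketch: an edge cannot relay a payment if the amount is not
// strictly positive or is below the channel's htlc_minimum_msat; since peers
// may legitimately set htlc_minimum_msat to 0 msat, the explicit amount > 0
// check is what filters out empty amounts.
case class Edge(htlcMinimumMsat: Long)

def canRelay(amountMsat: Long, edge: Edge): Boolean =
  amountMsat > 0 && amountMsat >= edge.htlcMinimumMsat
```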
We keep the GetRoutingState API available in the router as it's useful to
query network information locally (or between actors), but we stop sending
that data to remote nodes.
It's useful to separate channel state test methods into a dedicated trait
instead of always bundling them with `FixtureTestSuite`.
In particular, it was previously impossible to use both `BitcoindService`
and `StateTestsHelperMethods` because `BitcoindService` doesn't work with
fixtures (it leverages `beforeAll` and `afterAll` instead, because launching
one bitcoind instance per test would be too expensive and unnecessary).
The front-end can produce a huge volume of logs, with significant
duplication. In order to reduce the log volume, we truncate `nodeId` and
`channelId` in the MDC to only keep the first 8 hexadecimal characters.
We also override a few `toString` methods, because some channel-queries-related
case classes produce huge strings.
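A minimal sketch of the truncation (a hypothetical helper, not eclair's
actual logging code):

```scala
// Minimal sketch: keep only the first 8 hexadecimal characters (4 bytes) of
// an id when populating the logging MDC; that is enough to correlate log
// lines without bloating each entry.
def mdcShortId(hexId: String): String = hexId.take(8)

// e.g. mdcShortId("0a1b2c3d4e5f60718293a4b5c6d7e8f9") == "0a1b2c3d"
```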
This reduces the bandwidth used: it doesn't make sense to sync with every
node that connects to us.
We also better track sync requests, to reject unsolicited sync responses.
To ensure that nodes don't need to explicitly reconnect after creating
their first channel in order to get the routing table, we add a mechanism
to trigger a sync when the first channel is created.
The routing hints we get in a Bolt 11 invoice may be obsolete when we attempt
the payment: one of the nodes in the route may have updated its relay fees.
Since this affects a private channel that is not kept in the routing graph,
we need to update the routing hints before injecting them into the router.
This was already done in `PaymentLifecycle` with automatic retries, but when
using MPP we retry in the `MultiPartPaymentLifecycle` instead of inside
the `PaymentLifecycle`, so we need to handle routing hints updates there.
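A simplified sketch of the hint refresh (ad-hoc case classes, not eclair's
actual types):

```scala
// Simplified sketch: if a relaying node returned a more recent channel_update
// for a hinted private channel, refresh the hint's fee and expiry fields
// before handing the updated hints back to the router for the retry.
case class Hint(shortChannelId: Long, feeBaseMsat: Long, feeProportionalMillionths: Long, cltvExpiryDelta: Int)
case class ChannelUpdate(shortChannelId: Long, feeBaseMsat: Long, feeProportionalMillionths: Long, cltvExpiryDelta: Int)

def refreshHint(hint: Hint, latest: Option[ChannelUpdate]): Hint =
  latest.filter(_.shortChannelId == hint.shortChannelId)
    .map(u => hint.copy(
      feeBaseMsat = u.feeBaseMsat,
      feeProportionalMillionths = u.feeProportionalMillionths,
      cltvExpiryDelta = u.cltvExpiryDelta))
    .getOrElse(hint)
```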
Clarify why we don't actively update channel relay fees when the default
values change in eclair.conf: doing so would override manual changes made via
the API, which is undesirable.
We were extracting F's commit tx from its internal state right after receiving
the `PaymentSent` event. The issue is that this could happen before the fulfill
was completely signed on both sides, so the commit tx we obtained would still
contain the HTLC and would be different from the one F would publish when
closing.
Actor names cannot conflict.
Even though blockchain watchdog actors stop themselves after fetching block
data, when blocks are found in a short interval we may end up with multiple
actors of the same type simultaneously alive, so we need them to have
unique names.
The block count isn't sufficient to make their names unique, because forks
can happen.
Fixes #1665
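A minimal sketch of one way to guarantee uniqueness (the naming scheme is an
assumption for illustration, not eclair's actual one):

```scala
// Hypothetical sketch: suffix the watchdog actor name with a random component
// so that two instances spawned around the same block height (e.g. during a
// fork) cannot collide.
import java.util.UUID

def watchdogActorName(blockCount: Long): String =
  s"blockchain-watchdog-$blockCount-${UUID.randomUUID().toString.take(8)}"
```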
We had a delay mechanism before re-enabling reconnected channels
to avoid creating frequent channel updates on flappy connections and flooding
the network with unnecessary gossip.
We don't need this protection for private channels since they're not gossiped
to the rest of the network. And in the case of private channels to mobile
wallets, we don't want to add any delay, otherwise the reconnected channel
will not be in the router's graph and we'll have issues routing payments to
that wallet (especially if they quickly disconnect, before our 10-seconds
delay).
When a channel goes to the CLOSED state, the actor will stop itself.
We were previously sending messages to the actor asking for its state, which
fails once the actor is stopped. We can simply listen to state events to
safely get the same result.
Electrum can return unconfirmed txs in an address's history. When that
happens, we should not try to fetch their confirmation position, as that
would return an error.
We simply need to ignore these events and wait for the txs to confirm.
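A minimal sketch of the filter, based on the Electrum protocol convention for
history heights (ad-hoc types):

```scala
// Minimal sketch: in an Electrum address history, height 0 means unconfirmed
// and -1 means unconfirmed with unconfirmed parents; only fetch the
// confirmation position for items with a strictly positive height, and wait
// for the rest to confirm.
case class HistoryItem(txid: String, height: Int)

def itemsToVerify(history: Seq[HistoryItem]): Seq[HistoryItem] =
  history.filter(_.height > 0)
```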