This is almost a drop-in replacement. I had to relax compiler
parameters to allow deprecated features though.
Main changes:
- relaxed compiler parameters to minimize impact (e.g. allow
deprecated features)
- `scala.collection.JavaConverters` -> `scala.jdk.CollectionConverters`
- `MultiMap` -> `MultiDict`
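For example, the converters change is usually just an import swap (a minimal sketch, not actual eclair code):

```scala
// Scala 2.12: import scala.collection.JavaConverters._ (deprecated in 2.13)
// Scala 2.13:
import scala.jdk.CollectionConverters._

import java.util.{List => JList}

object ConvertersExample {
  // .asScala / .asJava keep working unchanged, only the import moves.
  def sizes(xs: JList[String]): Map[String, Int] =
    xs.asScala.map(s => s -> s.length).toMap
}
```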
Compilation is 25% faster on my machine, and the compiler is a bit
stricter (it found an "invalid comparison" bug).
Make all the changes that will be required and are already possible, in
order to minimize the diff:
- update dependencies
- `'something` -> `Symbol("something")`
- `BigDecimal.xValue()` -> `BigDecimal.xValue`
- `Map.filterKeys` -> `Map.filterKeys.toMap` (same for `Map.mapValues`)
- `def myMethod(...)` -> `def myMethod(...): Unit`
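For illustration, the kind of call-site changes this covers looks roughly like this (a sketch, not actual eclair code):

```scala
object MigrationPrepExamples {
  // symbol literals are deprecated in 2.13: use the explicit factory
  val tag: Symbol = Symbol("something")

  // longValue()/intValue()/... lose their parentheses
  val asLong: Long = BigDecimal(42).longValue

  // filterKeys/mapValues become lazy views in 2.13: force them with .toMap
  // so the result stays a strict Map on both versions
  def withoutEmptyKeys(m: Map[String, Int]): Map[String, Int] =
    m.filterKeys(_.nonEmpty).toMap

  // procedure syntax is deprecated: always spell out `: Unit =`
  def logSomething(msg: String): Unit = println(msg)
}
```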
Router uses channel events (LocalChannelUpdate and AvailableBalanceChanged)
to track the balance of local channels.
This information will be used to improve path-finding, especially in
the MPP case.
The information is added to the graph structures, but it's not yet used in
path-finding. Some A/B testing will be needed before we can safely use it
in the path-finding algorithm.
When switching to a new connection while already connected, the peer
immediately kills the current connection and sends the
`PeerConnection.ConnectionReady` back to itself. Since #1379, the sender of
this message is assumed to be the `PeerConnection` actor. If the peer
doesn't preserve the sender by using a `forward` instead of a `tell`, it
will assume that it is itself the `PeerConnection`, which will break
everything.
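A minimal Akka sketch of the difference (illustrative, not the actual Peer code): `forward` keeps the original sender, while `tell`/`!` replaces it with the current actor.

```scala
import akka.actor.{Actor, ActorRef}

class RelayExample(next: ActorRef) extends Actor {
  def receive: Receive = {
    case msg: String =>
      // next ! msg       // sender() seen by `next` would be this actor
      next forward msg    // sender() seen by `next` is the original sender (e.g. the PeerConnection)
  }
}
```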
Transaction generation functions used to throw exceptions.
We have a good TxGenerationSkipped type to express potential errors,
so these functions should return an Either to make the contract explicit.
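A minimal sketch of the new contract (the error cases shown are illustrative, not the full TxGenerationSkipped hierarchy):

```scala
sealed trait TxGenerationSkipped
case object AmountBelowDustLimit extends TxGenerationSkipped
case object OutputNotFound extends TxGenerationSkipped

case class Tx(amountSat: Long)

// Instead of throwing, the function states in its signature that generation may be skipped.
def makeClaimTx(amountSat: Long, dustLimitSat: Long): Either[TxGenerationSkipped, Tx] =
  if (amountSat < dustLimitSat) Left(AmountBelowDustLimit) else Right(Tx(amountSat))
```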
We update transaction fees at every block (i.e. every 10 minutes). While this
works well when the remote peer is a node that's online for more than 10 minutes,
it's an issue for mobile wallets that usually come online for a few minutes
and then disconnect.
We want to make sure we send these wallet peers an update_fee when one
is needed, so we now check for feerate updates on reconnection.
Fixes #1381.
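A hedged sketch of the reconnection check (names and the tolerance value are illustrative):

```scala
case class UpdateFee(channelId: String, feeratePerKw: Long)

// On reconnection, compare the feerate in the current commitment with the feerate
// we would use now, and send an update_fee immediately if the gap is too large.
def updateFeeOnReconnect(channelId: String,
                         commitFeeratePerKw: Long,
                         currentFeeratePerKw: Long,
                         toleranceRatio: Double = 0.1): Option[UpdateFee] = {
  val mismatch = math.abs(currentFeeratePerKw.toDouble / commitFeeratePerKw - 1) > toleranceRatio
  if (mismatch) Some(UpdateFee(channelId, currentFeeratePerKw)) else None
}
```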
In case a channel has been pruned, and we receive a recent update, we
"unprune" it and immediately request the channel announcement again
(which will cause us to revalidate it). We also discard the update,
assuming that we will receive it again with the channel announcement.
We were using a `GossipDecision.Duplicate` rejection for the channel
update, which is inaccurate. This PR introduces a new
`GossipDecision.RelatedChannelPruned`.
This commit reverts #1278 where connecting to an Electrum server
would disable the SSL check. The correct way to handle that is to
allow users to choose their SSL behavior in the frontend applications.
If our latest successful connection attempt was less
than 30 seconds ago, we pick up the exponential
back-off retry delay where we left it. The
goal is to address cases where the reconnection
is successful, but we are disconnected right away.
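A small sketch of the delay policy (constants other than the 30 seconds are assumptions):

```scala
import scala.concurrent.duration._

object ReconnectDelay {
  val initialDelay: FiniteDuration = 1.second
  val maxDelay: FiniteDuration = 1.hour

  // doubled after each failed attempt
  def nextDelay(previous: FiniteDuration): FiniteDuration = (previous * 2).min(maxDelay)

  // called when a connection we had established goes down
  def delayAfterDisconnection(previousDelay: FiniteDuration, connectedFor: FiniteDuration): FiniteDuration =
    if (connectedFor < 30.seconds) previousDelay // disconnected right away: resume the back-off where we left it
    else initialDelay                            // the connection was stable: start over from the minimum delay
}
```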
* Support additional user-defined TLVs when sending a payment (both single-part and MPP)
* Allow encoding and decoding of even TLV types above the high range
* Add missing cases to PostRestart
When a channel is closed we want to remove its HTLCs from our
list of pending broken HTLCs (they are being resolved on-chain).
We should also ignore outgoing HTLCs that have already been
settled upstream (which can happen when downstream is closing).
* Watch for downstream HTLC resolved on-chain
When a downstream channel is closing, we can safely fail upstream the
HTLCs that were either timed out on-chain or not included in the
broadcast commit transaction.
Channels will not always raise events about those after a reboot, so we
need to inspect the channel state and detect such HTLCs.
* Add helper function to HTLC scripts
To extract the payment_hash or preimage from an HTLC script seen on-chain.
* Cleanup on-chain HTLC timeout handling for MPP
With MPP, it's possible that a channel contains multiple HTLCs for the
same payment hash, and potentially even for the same expiry and amount.
We add more fine-grained handling of HTLC timeouts that share the same
payment hash. This allows cleaner handling after a restart, and makes
sure we correctly detect failures that should be propagated upstream.
Without this we wouldn't be losing any money, but some channels might be
closed unnecessarily, which we can avoid.
* Handle out-of-order htlc-timeout txs
It may happen that a commit tx and some htlc-timeout txs end up in the
same block. In that case, there is no guarantee on the order we'll receive
the confirmation events.
If any tx in a local/remoteCommitPublished is confirmed, that implicitly
means that the commit tx is confirmed (because it spends from it).
So we can consider the closing type known and forward the failure upstream.
* removed the `Direction` class
* improved the non-reg test for htlcs
- check actual content instead of only success and roundtrip
- use randomized data for all fields instead of all-zero
- check the remaining data, not only the decoded value (codecs are
chained so a regression here will cause the next codec to fail)
Co-Authored-By: Bastien Teinturier <31281497+t-bast@users.noreply.github.com>
* Sort commit transaction outputs using BIP69 + CLTV as tie-breaker for offered HTLCs
* Type DirectedHtlc:
We now use a small hierarchy of classes to represent HTLC directions.
There is also a type alias for a collection of commitment output links.
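Roughly, the new shape looks like this (simplified; the real classes wrap the full `UpdateAddHtlc`):

```scala
case class UpdateAddHtlc(id: Long, amountMsat: Long, paymentHash: Array[Byte])

sealed trait DirectedHtlc { def add: UpdateAddHtlc }
case class IncomingHtlc(add: UpdateAddHtlc) extends DirectedHtlc
case class OutgoingHtlc(add: UpdateAddHtlc) extends DirectedHtlc

// Pattern matching on the type replaces checks on the removed Direction class:
def offered(htlcs: Set[DirectedHtlc]): Set[UpdateAddHtlc] =
  htlcs.collect { case OutgoingHtlc(add) => add }
```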
* front now handles ping/sync
Peer has been split in two and now handles only channel-related logic.
A new `PeerConnection` class is in charge of managing the BOLT 1 part
(init handshake, pings) and has the same lifetime as the underlying
connection.
Also, we made `TransportHandler` a child of `PeerConnection`, by making
the `remoteNodeId` an attribute of the state of `PeerConnection` instead
of a constructor argument (since we cannot be sure of the remote node id
before the auth handshake is done). Now we don't need to worry about
cleaning up the underlying `TransportHandler` if the `PeerConnection`
dies.
* remove `Authenticator`
Instead of first authenticating a connection, then passing it to the
`PeerConnection` actor, we pass the connection directly to the
`PeerConnection` and let it handle the crypto handshake, before the LN
init. This removes a central point of management and makes things easier
to reason about. As a side effect, the `TransportHandler` actor is now a
child of `PeerConnection` which gives us a guarantee that it dies when
its parent dies.
* separated connection logic from `Peer`
The `ReconnectionTask` actor handles outgoing connections to a peer. The
goal is to free the `Peer` actor from the reconnection logic and have it
just react to already established connections, independently of whether
those connections are incoming or outgoing.
The base assumption is that the `Peer` will send its state transitions
to the `ReconnectionTask` actor.
This is more complicated than it seems and there are various corner
cases to consider:
- multiple available addresses
- concurrent outgoing connections and conflict between
automated/user-requested attempts
- concurrent incoming/outgoing connections and risk of reconnection
loops
- etc.
Co-Authored-By: Bastien Teinturier <31281497+t-bast@users.noreply.github.com>
* Refactor timed out HTLC helpers: directly take a DATA_CLOSING
and extract the relevant parts.
* ClosingStateSpec: test dust HTLCs
* Improve ClosingStateSpec
* Clean up usage of AddHtlcFailed
We were abusing AddHtlcFailed in some cases where an outgoing HTLC
was correctly added, but was later settled on-chain (fulfilled, timed
out or overridden by a different commit transaction).
These cases are now specifically handled with new `Relayer.ForwardMessage`
types dedicated to on-chain settling.
* Refactor Relayer's ForwardMessages
ForwardFail and ForwardFulfill are now traits.
Handle both on-chain and remote fail/fulfills.
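A hedged sketch of the resulting message shape (simplified names, not the exact eclair definitions; the fail side follows the same pattern):

```scala
case class UpdateFulfillHtlc(channelId: String, id: Long, paymentPreimage: Array[Byte])

sealed trait ForwardFulfill { def paymentPreimage: Array[Byte] }
// fulfill received from the remote peer over the Lightning connection
case class ForwardRemoteFulfill(fulfill: UpdateFulfillHtlc) extends ForwardFulfill {
  override def paymentPreimage: Array[Byte] = fulfill.paymentPreimage
}
// preimage extracted from a transaction seen on-chain while the channel is closing
case class ForwardOnChainFulfill(paymentPreimage: Array[Byte]) extends ForwardFulfill
```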
We were allowing users to set htlc-minimum-msat to 0, which directly
contradicts the fact that we must never send an HTLC for 0 msat.
We now explicitly disallow that behavior: the minimum is 1 msat.
In case the remote side of a channel had set its htlc-minimum-msat to 0,
we would forward HTLCs with a value of 0 msat if a sender crafted such a
payment. The spec disallows that, so we now explicitly check for that
lower bound.
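The two checks, sketched (illustrative helper names):

```scala
// node configuration: htlc-minimum-msat = 0 is now rejected
def validateHtlcMinimum(htlcMinimumMsat: Long): Either[String, Long] =
  if (htlcMinimumMsat < 1) Left("htlc-minimum-msat must be at least 1 msat") else Right(htlcMinimumMsat)

// incoming HTLC: even if the peer advertised a 0 minimum, a 0 msat HTLC is rejected
def acceptIncomingHtlc(amountMsat: Long, remoteHtlcMinimumMsat: Long): Boolean =
  amountMsat >= math.max(remoteHtlcMinimumMsat, 1L)
```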
Also removed string interpolation for some of the more expensive debug
lines. It's a trade-off between performance and readability, and is
probably not worth changing for info-level logs, which will be enabled anyway.
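The logging change boils down to using the template form, which only formats the message when the level is enabled (a sketch with akka's LoggingAdapter):

```scala
import akka.actor.{Actor, ActorLogging}

class LoggingExample extends Actor with ActorLogging {
  def receive: Receive = {
    case payment =>
      // log.debug(s"relaying $payment")   // builds the string even when debug is disabled
      log.debug("relaying {}", payment)    // only formatted if the debug level is enabled
  }
}
```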
Co-Authored-By: Bastien Teinturier <31281497+t-bast@users.noreply.github.com>
See https://github.com/lightningnetwork/lightning-rfc/issues/728
Add an additional reserve on the funder to prevent them from emptying the
channel and then being stuck with an unusable channel.
As fundee, we don't verify that funders comply with that change.
We may enforce it in the future when we're confident the network as a
whole enforces that.
The previous implementation in #1317 wasn't working because of a bug in
the transition. Added a test and fixed it.
Co-authored-by: Bastien Teinturier <31281497+t-bast@users.noreply.github.com>
Refactor and improve payment metrics:
* Unify in a Monitoring object.
* Add helper functions and objects.
* Add more error metrics.
* Add more latency metrics.
* Add metrics to post-restart HTLC cleanup
* Add metrics to router path-finding
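A rough sketch of what such a Monitoring object can look like, assuming a Kamon-style API (the metric names below are invented for the example, they are not eclair's actual metric names):

```scala
import kamon.Kamon

object Monitoring {
  object Metrics {
    val PaymentAttempts = Kamon.counter("payments.attempts")
    val PaymentErrors   = Kamon.counter("payments.errors")
    val PaymentDuration = Kamon.timer("payments.duration")
  }

  // helper: record how long a payment step takes, tagged by payment type
  def timed[T](paymentType: String)(block: => T): T = {
    val timer = Metrics.PaymentDuration.withTag("type", paymentType).start()
    try block finally timer.stop()
  }

  def recordError(failureType: String): Unit =
    Metrics.PaymentErrors.withTag("failure", failureType).increment()
}
```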
As long as we receive messages from our peer, we consider it online
and don't send ping requests. If we don't hear from the
peer, we send pings and expect timely answers, otherwise we'll
close the connection.
This is implemented by scheduling a ping request every 30 seconds,
and pushing it back every time we receive a message from the peer.
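A simplified sketch of the scheduling (not the actual actor): restart a single timer on every inbound message, and only send a ping when it actually fires.

```scala
import akka.actor.{Actor, Timers}
import scala.concurrent.duration._

class PingScheduler extends Actor with Timers {
  private case object SendPing
  private val PingTimer = "ping"
  private val pingInterval = 30.seconds

  timers.startSingleTimer(PingTimer, SendPing, pingInterval)

  def receive: Receive = {
    case SendPing =>
      // send a ping and expect a timely pong, otherwise close the connection (not shown)
      timers.startSingleTimer(PingTimer, SendPing, pingInterval)
    case _ =>
      // any message from the peer is proof of life: push the next ping back
      timers.startSingleTimer(PingTimer, SendPing, pingInterval)
  }
}
```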
* Rework plugin loading:
We now require the plugin to supply a manifest entry for the "Main-Class"
attribute; this is used to load the plugin without doing illegal reflection
operations. We also get rid of the org.clapper.classutil dependency.
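A hedged sketch of manifest-based loading (simplified compared to the actual plugin interface):

```scala
import java.io.File
import java.net.URLClassLoader
import java.util.jar.JarFile

object PluginLoader {
  def loadPluginClass(jar: File): Class[_] = {
    // read the Main-Class attribute from the jar's manifest
    val manifest = Option(new JarFile(jar).getManifest)
      .getOrElse(throw new IllegalArgumentException(s"${jar.getName} has no manifest"))
    val mainClass = Option(manifest.getMainAttributes.getValue("Main-Class"))
      .getOrElse(throw new IllegalArgumentException(s"${jar.getName} has no Main-Class attribute"))
    // load the plugin entry point without any classpath scanning or reflection tricks
    val loader = new URLClassLoader(Array(jar.toURI.toURL), getClass.getClassLoader)
    loader.loadClass(mainClass)
  }
}
```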
* Support wumbo channels:
- use bits 18, 19
- compute the min depth for the funding transaction according to the channel size (see the sketch after this list)
- update routing heuristics for a wumbo world:
  - the lower bound is the 25th percentile of current channel capacity on the network
  - the upper bound is the most common capacity for wumbo channels
- add 'max-funding-satoshis' configuration key to allow setting the maximum channel size that will be accepted
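Purely as an illustration of the min-depth idea (the scaling unit and formula below are assumptions, not eclair's exact ones): the number of confirmations grows with the funding amount, with the default as a floor.

```scala
object WumboMinDepth {
  def minDepth(fundingSatoshis: Long, defaultMinDepth: Int = 3): Int = {
    // block subsidy used as a scaling unit: bigger channels wait for more confirmations,
    // since the cost of a reorg-based double-spend grows with the funding amount
    val blockRewardSatoshis = 6.25 * 100000000L
    val scaled = math.ceil(fundingSatoshis / blockRewardSatoshis).toInt
    math.max(defaultMinDepth, scaled)
  }
}
```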
Make DLP data mandatory in ChannelReestablish.
We make them mandatory to allow extending the message with TLVs.
Make upfront_shutdown_script a TLV record that we always include in
open_channel / accept_channel.
See https://github.com/lightningnetwork/lightning-rfc/pull/714.
Add an abstraction of a PaymentPart.
Remove unnecessary intermediary case classes.
This allows extending how payment parts can be received.
It's not limited to HTLCs (could be swaps, pay-to-opens, etc).
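A hedged sketch of the abstraction (field names are simplified, not the real eclair types):

```scala
sealed trait PaymentPart {
  def paymentHash: Array[Byte]
  def amountMsat: Long
  def totalAmountMsat: Long // total expected across all parts of the payment
}
// HTLCs are just one way to receive a part...
case class HtlcPart(paymentHash: Array[Byte], amountMsat: Long, totalAmountMsat: Long, htlcId: Long) extends PaymentPart
// ...other sources (swaps, pay-to-open, ...) can be added without touching the handler
case class PayToOpenPart(paymentHash: Array[Byte], amountMsat: Long, totalAmountMsat: Long) extends PaymentPart

object PaymentHandler {
  // the handler only reasons about PaymentPart, whatever the transport
  def paymentComplete(parts: Seq[PaymentPart]): Boolean =
    parts.nonEmpty && parts.map(_.amountMsat).sum >= parts.head.totalAmountMsat
}
```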
Currently, those methods throw exceptions, and we rely on `Channel` to
call them within a `Try(...)`. It puts more burden on `Channel` and
isn't very functional.
Some methods were returning an `Either[]`, which seems to play the role
of a `Try` but isn't actually used as one. It seems the idea was to not
fail the channel upon receiving a `fulfill`/`fail` for an unknown htlc,
but it is not fully wired, and isn't compliant (BOLT 2):
> A receiving node:
> if the id does not correspond to an HTLC in its current commitment
> transaction:
> MUST fail the channel.
For signature-related methods, I went with the minimal change of
encapsulating portions of the code inside a `Try {...}` to minimize risk
of regression. We could also make `CommitmentSpec` methods return
`Try[]` but I suspect that would be more complicated with little
benefit.
Note that if some part isn't covered by the `Try` and ends up throwing
an exception, that will be covered by the `handleException` handler of
`Channel`.
Fixes #1305.
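A minimal sketch of that "minimal change" approach (names are illustrative, not the real Commitments API):

```scala
import scala.util.Try

case class Commitments(nextCommitIndex: Long)

object CommitmentsExample {
  // Before: the body threw and Channel had to wrap the call in Try(...).
  // After: the throwing part is wrapped here, and the signature says so.
  def receiveCommit(commitments: Commitments, sig: Array[Byte]): Try[Commitments] = Try {
    require(sig.nonEmpty, "invalid commit_sig") // signature checks that may throw stay unchanged
    commitments.copy(nextCommitIndex = commitments.nextCommitIndex + 1)
  }
}
```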