Instead of using two separate maps (for channels and channel_updates), we now use a single map, which groups each channel with its channel_updates. This is also true for data storage, resulting in the removal of the channel_updates table.
This is the implementation of https://github.com/lightningnetwork/lightning-rfc/pull/557.
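A minimal sketch, with hypothetical simplified types, of what the single-map layout looks like: one entry groups a channel announcement with the channel_update of each direction, replacing the two separate maps (and the separate channel_updates table on disk).

```scala
case class ShortChannelId(id: Long)
case class ChannelAnnouncement(shortChannelId: ShortChannelId /* , nodeId1, nodeId2, ... */)
case class ChannelUpdate(shortChannelId: ShortChannelId /* , feeBaseMsat, cltvExpiryDelta, ... */)

case class KnownChannel(ann: ChannelAnnouncement,
                        update1Opt: Option[ChannelUpdate], // update for direction node1 -> node2
                        update2Opt: Option[ChannelUpdate]) // update for direction node2 -> node1

object RoutingTable {
  // single map indexed by short channel id, instead of separate channels/channel_updates maps
  var channels: Map[ShortChannelId, KnownChannel] = Map.empty
}
```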
* Correctly handle multiple channel_range_replies
The scheme we use to keep track of channel queries with each peer would forget about
missing data when several channel_range_replies were sent back for a single channel_range_query.
* RoutingSync: remove peer entry properly
* Remove the peer entry from our sync map only when we've received
a `reply_short_channel_ids_end` message.
* Make routing sync test more explicit
* Routing Sync: rename Sync.count to Sync.totalMissingCount
* Do not send channel queries if we don't want to sync
* Router: clean our sync state when we (re)connect to a peer
We must clean up leftovers from the previous session and start the sync process again.
* Router: reset sync state on reconnection
When we're reconnected to a peer we will start a new sync process and should reset our sync
state with that peer.
* Extended Queries: use TLV format for optional data
Optional query extensions now use TLV instead of a custom format.
Flags are encoded as varint instead of the bytes originally proposed. With the current proposal they all fit in a single byte, but will be
much easier to extend this way.
* Optional TLVs are represented as a list, not an optional list
TLVs that extend regular LN messages can be represented as a TlvStream and not an Option[TlvStream] since we don't need
to explicitly terminate the stream (either by prepending its length or using a specific terminator) as we do in Onion TLVs.
No TLVs simply means that the TLV stream is empty.
* TLV Stream: Implement a generic "get" method for TLV fields
If we have a TLV stream of type MyTLV, which is a subtype of TLV, and MyTLV1 and MyTLV2 are both
subtypes of MyTLV, then we can use stream.get[MyTLV1] to get the TLV record of type MyTLV1 (if any)
in our TLV stream (see the sketch at the end of this list).
* Use extended range queries on regtest and testnet
We will use them on mainnet as soon as https://github.com/lightningnetwork/lightning-rfc/pull/557 has been merged.
* Channel range queries: send back node announcements if requested (#1108)
This PR adds support for sending back node announcements when replying to channel range queries:
- when explicitly requested (bit is set in the optional query flag)
- when query flags are not used and a channel announcement is sent (as per the BOLTs)
A new configuration option `request-node-announcements` has been added in the `router` section. If set to true, we
will request node announcements when we receive a channel id (through channel range queries) that we don't know of.
This is a setting that we will probably turn off on mobile devices.
* Extended Channel Queries: add CL interop test
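Below is a minimal sketch, with hypothetical simplified types, of the TlvStream representation and the generic `get` accessor described in the items above. An empty stream simply means the message carried no TLVs, so no Option[TlvStream] wrapper is needed.

```scala
import scala.reflect.ClassTag

trait Tlv

// A TLV stream is scoped to a message-specific subtrait of Tlv.
case class TlvStream[T <: Tlv](records: Seq[T]) {
  // generic accessor: returns the first record of the requested subtype, if any
  def get[R <: T : ClassTag]: Option[R] = records.collectFirst { case r: R => r }
}

sealed trait MyTlv extends Tlv
case class MyTlv1(amount: Long) extends MyTlv
case class MyTlv2(label: String) extends MyTlv

object TlvStreamExample {
  val stream = TlvStream[MyTlv](Seq(MyTlv1(42)))
  val first: Option[MyTlv1] = stream.get[MyTlv1]  // Some(MyTlv1(42))
  val second: Option[MyTlv2] = stream.get[MyTlv2] // None
}
```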
Untyped cltv expiry was confusing: delta and absolute expiries really need to be handled differently.
Even variable names were sometimes misleading.
Now the compiler will help us catch errors early.
Follow up to #1082.
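A minimal sketch, with hypothetical simplified types, of the distinction the typed expiries introduce:

```scala
// Absolute expiry: a block height after which an HTLC can be timed out on-chain.
case class CltvExpiry(underlying: Long) {
  def -(delta: CltvExpiryDelta): CltvExpiry = CltvExpiry(underlying - delta.underlying)
  def <=(other: CltvExpiry): Boolean = underlying <= other.underlying
}

// Relative expiry: a number of blocks (as found in channel_update); it only becomes an
// absolute expiry once applied to a block height.
case class CltvExpiryDelta(underlying: Int) {
  def toCltvExpiry(currentBlockHeight: Long): CltvExpiry = CltvExpiry(currentBlockHeight + underlying)
}

// Mixing them up (e.g. passing a delta where an absolute expiry is expected) is now a compile error.
```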
The goal is to be able to publish transactions only after we have
persisted the state. Otherwise we may run into corner cases like [1]
where a refund tx has been published, but we haven't kept track of it
and generate a different one (with different fees) the next time.
As a side effect, we can now remove the special case that we were
doing when publishing the funding tx, and remove the `store` function.
NB: the new `calling` transition method isn't restricted to publishing
transactions but that is the only use case for now.
[1] https://github.com/ACINQ/eclair-mobile/issues/206
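A minimal sketch of the ordering this enforces (hypothetical names; the real code does this through the new `calling` transition method mentioned above):

```scala
object PersistThenPublish {
  // Persist the updated channel state first, and only then run side effects such as
  // publishing transactions: if we crash in between, a restart replays the publish from
  // the persisted state instead of deriving a different (different-fee) transaction.
  def storingThenCalling[S](newState: S)(persist: S => Unit)(publish: () => Unit): S = {
    persist(newState) // 1) write the state to the channel database
    publish()         // 2) broadcast transactions only after the write succeeded
    newState
  }
}
```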
* Route computation: fix fee check
The fee check during route computation is:
- the fee is below the maximum value
- OR the fee is below amount * maximum percentage
The second check was buggy: route computation would fail when fees were above the maximum value but below the maximum percentage of the amount being paid (see the sketch after this list).
* Type all amounts used in eclair
* Add eclair.MilliSatoshi class
* Use bitcoin-lib 0.14
* Add specialized codecs for Satoshi/MilliSatoshi
* Rename 'toSatoshi' to 'truncateToSatoshi' to highlight that it's a precision-losing conversion (illustrated in the sketch after this list)
* Use feeEstimator in NodeParams, remove all calls to Globals.feeratePerKw
* Introduce FeeConf object and config block for confirmation targets, remove unused 'smartfeeNBlocks'
* Use a custom confirmation target for commitment transaction
* Use a custom confirmation target for funding transaction
* Use custom confirmation target for mutual close transaction
* Use custom confirmation target for claim transactions
* Add confirmation target block 144
* Use block target = 12 as default for claim transactions
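Below is a combined sketch, with hypothetical simplified types and names, of the typed amounts and of the corrected fee check referenced above:

```scala
case class Satoshi(underlying: Long)

case class MilliSatoshi(underlying: Long) {
  def +(other: MilliSatoshi): MilliSatoshi = MilliSatoshi(underlying + other.underlying)
  def <=(other: MilliSatoshi): Boolean = underlying <= other.underlying
  def *(pct: Double): MilliSatoshi = MilliSatoshi((underlying * pct).toLong)
  // the name makes the precision loss explicit: 1999 msat truncates to 1 sat
  def truncateToSatoshi: Satoshi = Satoshi(underlying / 1000)
}

object RouteFees {
  // A fee is acceptable if it is below the flat maximum OR below the maximum
  // percentage of the amount being paid (the second branch was the buggy one).
  def feeOk(fee: MilliSatoshi, amount: MilliSatoshi, maxFeeFlat: MilliSatoshi, maxFeePct: Double): Boolean =
    fee <= maxFeeFlat || fee <= amount * maxFeePct
}
```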
Add support for variable-length onion payloads at the Sphinx (cryptographic) layer.
This is currently unused as we keep using the legacy format by default (this will be changed in a later commit).
This commit also heavily refactors the Sphinx file.
When we want to fulfill an HTLC but the upstream peer is unresponsive, we must close the channel if we get too close to the HTLC timeout on their side.
Otherwise we risk an on-chain race condition between our HTLC success transaction and their HTLC timeout transaction, which could result in a loss of funds.
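A minimal sketch of the safety check, with hypothetical names and parameters:

```scala
object FulfillSafety {
  // If we know the preimage but the upstream peer still hasn't acknowledged our
  // update_fulfill_htlc, we must force-close before the HTLC's absolute expiry gets too
  // close, otherwise their HTLC-timeout transaction could race our HTLC-success transaction.
  def mustForceClose(currentBlockHeight: Long, htlcExpiryBlockHeight: Long, fulfillSafetyBlocks: Int): Boolean =
    currentBlockHeight >= htlcExpiryBlockHeight - fulfillSafetyBlocks
}
```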
This PR adds support for truncated integers as defined in the spec.
The test vectors are updated to include all test vectors from Rusty's spec PR.
It also provides many changes to the tlv and tlv stream classes:
- The tlv trait doesn't need a type field, the codec should handle that
- A TLV stream should be scoped to a specific subtrait of tlv
- Stream validation is done inside the codec instead of the tlv stream: it makes it more convenient for application layers to create tlv streams and manipulate them
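A minimal sketch of the truncated (minimal-length) integer encoding, assuming scodec-bits' ByteVector:

```scala
import scodec.bits.ByteVector

object TruncatedIntegers {
  // Truncated integers are big-endian with all leading zero bytes dropped,
  // so the value 0 encodes to zero bytes.
  def encode(value: Long, maxBytes: Int): ByteVector =
    ByteVector.fromLong(value, size = maxBytes).dropWhile(_ == 0)

  // A strict decoder must also reject non-minimal encodings (leading zero bytes).
  def decode(bytes: ByteVector): Either[String, Long] =
    if (bytes.nonEmpty && bytes.head == 0) Left("non-minimal truncated integer")
    else if (bytes.isEmpty) Right(0L)
    else Right(bytes.toLong(signed = false))
}
```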
All the data contained in `node_announcement`, `channel_announcement`
and `channel_update` is to be included in the signature, including
unknown trailing fields. We were ignoring them, causing signature
verification to fail when there were unknown fields.
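A minimal sketch of the fix, with hypothetical simplified types: keep the trailing bytes we did not parse and include them in the data the signature is checked against.

```scala
import scodec.bits.ByteVector

object AnnouncementSignatures {
  // Simplified shape of a decoded announcement: coreFields are the serialized known fields
  // that follow the signature, unknownFields are any trailing bytes we didn't parse.
  case class AnnouncementLike(signature: ByteVector, coreFields: ByteVector, unknownFields: ByteVector)

  // The signed data covers everything following the signature, *including* unknown trailing
  // fields; dropping them (as we did before) makes verification fail on extended messages.
  def signedData(ann: AnnouncementLike): ByteVector = ann.coreFields ++ ann.unknownFields
}
```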
In the case of `channel_update` there is a backward compatibility issue
to handle, because when persisting channel data in state `NORMAL`, we
used to store the `channel_update` followed by other data, and without
prefixing it with size information.
To work around that we use the same trick as before, based on an
additional discriminator codec.
* connect immediately on restart, then wait
This is to allow a herd effect when we restart the app and have numerous
peers.
Also removed the unnecessary transition and cleaned up delay
computation.
* always reconnect immediately when disconnected
Whether we go to this state from startup, or after getting disconnected.
It makes the transition logic simpler, and the potential herd effect at
startup is inevitable anyway since our peers will try to reconnect too.
* add randomization when reconnecting
* randomize delay for first reconnection attempt after startup
* make some parameters configurable
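A minimal sketch of the delay computation, with hypothetical names and an arbitrary jitter strategy:

```scala
import scala.concurrent.duration._
import scala.util.Random

object ReconnectionDelays {
  // Exponential backoff capped at a maximum, with randomization so that many peers
  // (or many nodes restarting at once) don't all retry at exactly the same time.
  def nextDelay(previous: FiniteDuration, maxDelay: FiniteDuration): FiniteDuration =
    randomized((previous * 2).min(maxDelay))

  // The first attempt after startup is also randomized to spread out the initial connections.
  def randomized(delay: FiniteDuration): FiniteDuration =
    (delay.toMillis * (0.5 + Random.nextDouble())).toLong.millis // +/- 50% jitter
}
```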
If we are fundee and after 5 days the funding isn't even in the mempool,
then we give up waiting and consider the channel closed. Note that if
the funding tx stays unconfirmed forever we won't give up waiting.
If we are funder, we never give up until the funding tx is double spent,
and we periodically republish it.
This applies to states `WAIT_FOR_FUNDING_CONFIRMED` and `CLOSING` (and
also `OFFLINE`/`SYNCING` when underlying state is
`WAIT_FOR_FUNDING_CONFIRMED`).
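A minimal sketch of the fundee give-up rule, with hypothetical names:

```scala
import scala.concurrent.duration._

object FundingTimeout {
  // Fundee only: if the funding tx has never even been seen in the mempool and we have been
  // waiting for more than 5 days, consider the channel closed and forget it. The funder never
  // gives up (it keeps republishing until the funding tx is double spent).
  def fundeeShouldGiveUp(isFunder: Boolean, fundingTxSeen: Boolean, waitingSinceSeconds: Long, nowSeconds: Long): Boolean =
    !isFunder && !fundingTxSeen && (nowSeconds - waitingSinceSeconds).seconds > 5.days
}
```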
Also, added a generic way of passing context to `ElectrumClient`
requests/responses.
Fixes #1029.
TLV (tag-length-value) types and TLV streams have been defined in the following spec PR: https://github.com/lightningnetwork/lightning-rfc/pull/607
New Lightning Messages should use TLV extensively instead of ad-hoc per-message encoding. This also allows ignoring unknown odd TLV types, which lets implementers safely test new features on mainnet without impacting legacy nodes. It also allows type re-use which speeds up new features development.
Also cleaned up and refactored common codecs.
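A minimal sketch of the "unknown odd types can be ignored" rule, with hypothetical simplified types (in the real code this validation lives inside the codecs):

```scala
import scodec.bits.ByteVector

object TlvValidation {
  sealed trait DecodedRecord
  case class KnownRecord(tag: Long, value: ByteVector) extends DecodedRecord
  case class UnknownOddRecord(tag: Long, value: ByteVector) extends DecodedRecord

  // Unknown odd types are safe to ignore (the sender knows we may not understand them);
  // unknown even types mean we cannot safely process the message and must reject it.
  def validate(tag: Long, value: ByteVector, knownTags: Set[Long]): Either[String, DecodedRecord] =
    if (knownTags.contains(tag)) Right(KnownRecord(tag, value))
    else if (tag % 2 != 0) Right(UnknownOddRecord(tag, value))
    else Left(s"unknown even tlv type $tag")
}
```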
Currently balances can be obtained from the `channels` call, but this requires a lot of work on the caller side and some specific knowledge (reserves, commit tx fee, in-flight payments), so this new `balances` endpoint simply returns correct balance info for each channel.
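A rough sketch, with hypothetical names and amounts in millisatoshi, of what the per-channel computation has to account for so that callers don't have to:

```scala
object ChannelBalances {
  // Our commitment balance, minus our channel reserve, minus the commitment tx fee if we
  // are the funder, minus amounts already locked in outgoing in-flight HTLCs.
  def canSendMsat(toLocalMsat: Long,
                  channelReserveMsat: Long,
                  commitTxFeeMsat: Long,
                  weAreFunder: Boolean,
                  pendingOutgoingHtlcsMsat: Long): Long = {
    val fee = if (weAreFunder) commitTxFeeMsat else 0L // only the funder pays the commit tx fee
    math.max(0L, toLocalMsat - channelReserveMsat - fee - pendingOutgoingHtlcsMsat)
  }
}
```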
When we received an unexpected message, the `Peer` was just logging a warning and not sending an `Ack` to the `TransportHandler`. This resulted in a stuck connection, because no more data was read on the connection.
Fixes #1037.
It turns out that the performance gains of the cached codec are not that
great, and they come at the cost of significant pressure on the GC. In
other words: premature optimization.
Once the codec is removed, heap usage becomes very stable, which is much better
than hypothetical performance gains.
Fixes #1031.
* use 64B representation instead of DER for sigs
It is more compact, and as an added bonus it frees us from the
completely unrelated Bitcoin-specific `0x01` trailing sig hash.
Note that we already used the 64B representation for storage everywhere,
except in `ChannelCodecs.htlcTxAndSigsCodec`, which required a backward
compatibility codec. Added a nonreg test for this.
* Use updated secp256k1 JNI bindings
* Replace scalar with private key and point with public key
We now use the simplified/unified design proposed in bitcoin-lib where:
- there are no more specific types for scalar/point
- private and public keys are compressed unless explicitly requested
* Generate and use 32 bytes seeds (and not 33)
We used serialized random private keys, which were represented as 33 bytes (with a 01 suffix).
Using random 32-byte values is more consistent.
We must make sure that upgraded apps that already have a 33-byte seed will still generate the same secrets, which is why LocalKeyManager still uses the 01 suffix when needed.
We store `CMD_FULFILL_HTLC`/`CMD_FAIL_HTLC`/`CMD_FAIL_MALFORMED_HTLC` in
a database (see `CommandBuffer`) because we don't want to lose
preimages, or to forget to fail incoming htlcs, which would lead to
unwanted channel closings.
But we currently only clean up this database on success, and because of
the way our watcher works, in a scenario where a downstream channel has
gone to the blockchain, it may send the same command several times. Only
the first one will be acked and cleaned up by the upstream channel,
causing the remaining commands to stay forever in the "pending relay
db".
With this change we clean up the commands when they fail too.
We also clean up the pending relay db on startup.
If the closing type is known:
- there is no need to watch the funding tx because it has already
been spent and the spending tx has already reached min_depth
- there is no need to attempt to publish transactions for other
types of close.
* differentiate current/next remote close
We can still match on the trait `RemoteClose` if we don't need that
level of precision.
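A minimal sketch, with hypothetical simplified types, of the close-type hierarchy this relies on:

```scala
object ClosingTypes {
  sealed trait ClosingType
  case object MutualClose extends ClosingType
  case object LocalClose extends ClosingType
  sealed trait RemoteClose extends ClosingType
  case object CurrentRemoteClose extends RemoteClose
  case object NextRemoteClose extends RemoteClose
  case object RevokedClose extends ClosingType

  // Callers that don't care which remote commitment confirmed can still match on the trait.
  def isRemoteClose(c: ClosingType): Boolean = c match {
    case _: RemoteClose => true
    case _              => false
  }
}
```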
* Use node announcements as fallback to load peer addresses during startup
* Add NetworkDb.getNode to retrieve a node_announcement by nodeId
* When connecting to a peer use node_announcement as fallback for its IP address
* Support connection to peer via pubKey
* Increase finite max of exponential backoff time to 1h.
* Add peer disconnect API call
This only happens when we are fundee. We *could* have some funds at
stake if there was a non-zero `push_msat`, but we already allow 5 days
for the funding tx to confirm, so the best option is probably to forget
about the channel.