We must consider `nextRemoteCommit` when applicable.
This is a regression introduced in #784. The core bug only exists when we
have a pending unacked `commit_sig`, but since we only send the
`AvailableBalanceChanged` event when sending a signature (not when
receiving a revocation), actors relying on this event to know the
current available balance (e.g. the `Relayer`) will have a wrong
value in-between two outgoing sigs.
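A minimal sketch of the fix, with illustrative field names rather than eclair's actual `Commitments` API: when an unacked `commit_sig` is outstanding, the available balance must be derived from the pending next remote commitment instead of the current one.

```scala
// Illustrative types only; the real Commitments carries much more state.
case class RemoteCommit(toRemoteMsat: Long)

case class Commitments(remoteCommit: RemoteCommit, nextRemoteCommitOpt: Option[RemoteCommit]) {
  // "consider nextRemoteCommit when applicable": while a commit_sig is unacked,
  // the balance we can still send is the one in the next (pending) remote commitment.
  def availableBalanceForSendMsat: Long =
    nextRemoteCommitOpt.getOrElse(remoteCommit).toRemoteMsat
}
```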
Instead of using two separate maps (for channels and channel_updates), we now use a single map, which groups channel+channel_updates. This is also true for data storage, resulting in the removal of the channel_updates table.
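A minimal sketch of the new layout, with illustrative types: a single map keyed by `shortChannelId` groups a channel with its (optional) `channel_update`s, replacing the two separate maps (and the separate table on disk).

```scala
// Illustrative types; the real announcement/update messages carry many more fields.
case class ChannelAnnouncement(shortChannelId: Long)
case class ChannelUpdate(shortChannelId: Long, timestamp: Long)

// One entry per channel, grouping the announcement with both directions' updates.
case class PublicChannel(ann: ChannelAnnouncement,
                         update1Opt: Option[ChannelUpdate],
                         update2Opt: Option[ChannelUpdate])

val channels: Map[Long, PublicChannel] = Map.empty // replaces the channels + channel_updates maps
```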
This is the implementation of https://github.com/lightningnetwork/lightning-rfc/pull/557.
* Correctly handle multiple channel_range_replies
The scheme we use to keep track of channel queries with each peer would forget about
missing data when several channel_range_replies are sent back for a single channel_range_query.
* RoutingSync: remove peer entry properly
* Remove peer entry on our sync map only when we've received
a `reply_short_channel_ids_end` message.
* Make routing sync test more explicit
* Routing Sync: rename Sync.count to Sync.totalMissingCount
* Do not send channel queries if we don't want to sync
* Router: clean our sync state when we (re)connect to a peer
We must clean up leftovers for the previous session and start the sync process again.
* Router: reset sync state on reconnection
When we're reconnected to a peer we will start a new sync process and should reset our sync
state with that peer.
* Extended Queries: use TLV format for optional data
Optional query extensions now use TLV instead of a custom format.
Flags are encoded as varint instead of bytes as originally proposed. With the current proposal they will all fit on a single byte, but will be
much easier to extend this way.
* Optional TLVs are represented as a list, not an optional list
TLVs that extend regular LN messages can be represented as a TlvStream and not an Option[TlvStream] since we don't need
to explicitly terminate the stream (either by prepending its length or using a specific terminator) as we do in Onion TLVs.
No TLVs simply means that the TLV stream is empty.
* TLV Stream: Implement a generic "get" method for TLV fields
If we have a TLV stream of type MyTLV, which is a subtype of TLV, and MyTLV1 and MyTLV2 are both
subtypes of MyTLV, then we can use stream.get[MyTLV1] to get the TLV record of type MyTLV1 (if any)
in our TLV stream.
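A minimal sketch of such a `get`, assuming a simple `Seq`-backed stream (eclair's codec-based implementation differs):

```scala
import scala.reflect.ClassTag

sealed trait MyTLV
case class MyTLV1(value: Long) extends MyTLV
case class MyTLV2(data: String) extends MyTLV

case class TlvStream[T](records: Seq[T]) {
  // first record of subtype R, if any (the ClassTag makes the runtime type test possible)
  def get[R <: T : ClassTag]: Option[R] = records.collectFirst { case r: R => r }
}

val stream = TlvStream[MyTLV](Seq(MyTLV1(42), MyTLV2("hello")))
stream.get[MyTLV1] // Some(MyTLV1(42))
stream.get[MyTLV2] // Some(MyTLV2(hello))
```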
* Use extended range queries on regtest and testnet
We will use them on mainnet as soon as https://github.com/lightningnetwork/lightning-rfc/pull/557 has been merged.
* Channel range queries: send back node announcements if requested (#1108)
This PR adds support for sending back node announcements when replying to channel range queries:
- when explicitly requested (bit is set in the optional query flag)
- when query flags are not used and a channel announcement is sent (as per the BOLTs)
A new configuration option `request-node-announcements` has been added in the `router` section. If set to true, we
will request node announcements when we receive a channel id (through channel range queries) that we don't know of.
This is a setting that we will probably turn off on mobile devices.
* Extended Channel Queries: add CL interop test
Untyped cltv expiry was confusing: delta and absolute expiries really need to be handled differently.
Even variable names were sometimes misleading.
Now the compiler will help us catch errors early.
Follow up to #1082.
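A minimal sketch of the idea, with illustrative names (the real classes have more operators and checks): deltas and absolute expiries get distinct types, so mixing them up no longer compiles.

```scala
case class CltvExpiryDelta(underlying: Int)

case class CltvExpiry(underlying: Long) {
  def +(delta: CltvExpiryDelta): CltvExpiry = CltvExpiry(underlying + delta.underlying)
  def -(other: CltvExpiry): CltvExpiryDelta = CltvExpiryDelta((underlying - other.underlying).toInt)
}

// e.g. the expiry of an outgoing HTLC is an absolute expiry built from a delta:
def outgoingExpiry(currentBlockHeight: Long, delta: CltvExpiryDelta): CltvExpiry =
  CltvExpiry(currentBlockHeight) + delta
```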
The goal is to be able to publish transactions only after we have
persisted the state. Otherwise we may run into corner cases like [1]
where a refund tx has been published, but we haven't kept track of it
and generate a different one (with different fees) the next time.
As a side effect, we can now remove the special case that we were
doing when publishing the funding tx, and remove the `store` function.
NB: the new `calling` transition method isn't restricted to publishing
transactions but that is the only use case for now.
[1] https://github.com/ACINQ/eclair-mobile/issues/206
* Route computation: fix fee check
Fee check during route computation is:
- fee is below maximum value
- OR fee is below amount * maximum percentage
The second check was buggy and route computation would fail when fees were above the maximum value but below the maximum percentage of the amount being paid.
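A minimal sketch of the corrected check (parameter names are illustrative):

```scala
// A route's fee is acceptable if it is below the absolute maximum
// OR below the maximum percentage of the amount being paid.
def feeOk(feeMsat: Long, amountMsat: Long, maxFeeBaseMsat: Long, maxFeePct: Double): Boolean =
  feeMsat <= maxFeeBaseMsat || feeMsat <= (amountMsat * maxFeePct).toLong
```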
* Type all amounts used in eclair
* Add eclair.MilliSatoshi class
* Use bitcoin-lib 0.14
* Add specialized codecs for Satoshi/MilliSatoshi
* Rename 'toSatoshi' to 'truncateToSatoshi' to highlight it's a precision-losing conversion
* Use feeEstimator in NodeParams, remove all calls to Globals.feeratePerKw
* Introduce FeeConf object and config block for confirmation targets, remove unused 'smartfeeNBlocks'
* Use a custom confirmation target for commitment transaction
* Use a custom confirmation target for funding transaction
* Use custom confirmation target for mutual close transaction
* Use custom confirmation target for claim transactions
* Add confirmation target block 144
* Use block target = 12 as default for claim transactions
Add support for variable-length onion payloads at the Sphinx (cryptographic) layer.
This is currently unused as we keep using the legacy format by default (this will be changed in a later commit).
This commit also heavily refactors the Sphinx file.
When we want to fulfill an HTLC but the upstream peer is unresponsive, we must close the channel if we get too close to the HTLC timeout on their side.
Otherwise we risk an on-chain race condition between our HTLC success transaction and their HTLC timeout transaction, which could result in a loss of funds.
This PR adds support for truncated integers as defined in the spec.
The test vectors are updated to include all test vectors from rusty's spec PR.
It also provides many changes to the tlv and tlv stream classes:
- The tlv trait doesn't need a type field, the codec should handle that
- A TLV stream should be scoped to a specific subtrait of tlv
- Stream validation is done inside the codec instead of the tlv stream: it makes it more convenient for application layers to create tlv streams and manipulate them
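A minimal sketch of the truncated unsigned integer encoding itself, independent of eclair's scodec-based codecs: leading zero bytes are dropped, so 0 encodes to zero bytes and small values to a single byte.

```scala
def encodeTruncatedUInt(value: Long, maxBytes: Int): Array[Byte] = {
  require(value >= 0)
  val bytes = BigInt(value).toByteArray.dropWhile(_ == 0) // drop sign/leading zero bytes
  require(bytes.length <= maxBytes, s"value does not fit in $maxBytes bytes")
  bytes
}

def decodeTruncatedUInt(bytes: Array[Byte]): Long = {
  require(bytes.isEmpty || bytes.head != 0, "not minimally encoded")
  bytes.foldLeft(0L)((acc, b) => (acc << 8) | (b & 0xff))
}
```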
All the data contained in `node_announcement`, `channel_announcement`
and `channel_update` is to be included in the signature, including
unknown trailing fields. We were ignoring them, causing signature
verification to fail when there were unknown fields.
In the case of `channel_update` there is a backward compatibility issue
to handle, because when persisting channel data in state `NORMAL`, we
used to store the `channel_update` followed by other data, and without
prefixing it with size information.
To work around that we use the same trick as before, based on an
additional discriminator codec.
* connect immediately on restart, then wait
This is to allow a herd effect when we restart the app and have numerous
peers.
Also removed the unnecessary transition and cleaned up delay
computation.
* always reconnect immediately when disconnected
Whether we go to this state from startup, or after getting disconnected.
It makes the transition logic simpler, and the potential herd effect at
startup is inevitable anyway since our peers will try to reconnect too.
* add randomization when reconnecting
* randomize delay for first reconnection attempt after startup
* make some parameters configurable
If we are fundee and after 5 days the funding isn't even in the mempool,
then we give up waiting and consider the channel closed. Note that if
the funding tx stays unconfirmed forever we won't give up waiting.
If we are funder, we never give up until the funding tx is double spent,
and we periodically republish it.
This applies to states `WAIT_FOR_FUNDING_CONFIRMED` and `CLOSING` (and
also `OFFLINE`/`SYNCING` when underlying state is
`WAIT_FOR_FUNDING_CONFIRMED`).
Also, added a generic way of passing context to `ElectrumClient`
requests/responses.
Fixes #1029.
TLV (tag-length-value) types and TLV streams have been defined in the following spec PR: https://github.com/lightningnetwork/lightning-rfc/pull/607
New Lightning Messages should use TLV extensively instead of ad-hoc per-message encoding. This also allows ignoring unknown odd TLV types, which lets implementers safely test new features on mainnet without impacting legacy nodes. It also allows type re-use which speeds up new features development.
Also cleaned up and refactored common codecs.
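A minimal sketch of the "it's ok to be odd" rule applied when decoding a TLV stream: unknown even types make the message unparseable, unknown odd types are simply skipped.

```scala
def handleUnknownTlv(tlvType: Long): Either[String, Unit] =
  if (tlvType % 2 == 0) Left(s"unknown even tlv type $tlvType: cannot safely ignore")
  else Right(()) // odd type: skip the record and keep decoding
```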
Currently balances can be obtained from the `channels` call, but this requires a lot of work on the caller side and some specific knowledge (reserves, commit tx fee, in-flight payments), so this new `balances` endpoint simply returns a correct balance for each channel.
In the event when we receive an unexpected message, the `Peer` was just logging a warning and not sending an `Ack` to the `TransportHandler`. This resulted in a stuck connection, because no more data was read on the connection.
Fixes #1037.
It turns out that performance gains of the cached codec are not that
great, and they come at a cost of significant pressure on the GC. In
other words: premature optimization.
When removed, the heap usage becomes very stable, which is much better
than hypothetical performance gains.
Fixes #1031.
* use 64B representation instead of DER for sigs
It is more compact, and as an added bonus it frees us from the
completely unrelated Bitcoin-specific `0x01` trailing sig hash.
Note that we already used the 64B representation for storage everywhere,
except in `ChannelCodecs.htlcTxAndSigsCodec`, which required a backward
compatibility codec. Added a non-regression test for this.
* Use updated secp256k1 JNI bindings
* Replace scalar with private key and point with public key
We now use the simplified/unified design proposed in bitcoin-lib where:
- there are no more specific types for scalar/point
- private and public keys are compressed unless explicitly requested
* Generate and use 32 bytes seeds (and not 33)
We used serialized random private keys which were represented as 33 bytes (with a 01 suffix).
Using random 32 bytes values is more consistent.
We must make sure that upgraded apps that already have a 33-byte seed will still generate the same secrets, which is why LocalKeyManager still uses the 01 suffix when needed.
We store `CMD_FULFILL_HTLC`/`CMD_FAIL_HTLC`/`CMD_FAIL_MALFORMED_HTLC` in
a database (see `CommandBuffer`) because we don't want to lose
preimages, or to forget to fail incoming htlcs, which would lead to
unwanted channel closings.
But we currently only clean up this database on success, and because of
the way our watcher works, in a scenario where a downstream channel has
gone to the blockchain, it may send the same command several times. Only
the first one will be acked and cleaned up by the upstream channel,
causing the remaining commands to stay forever in the "pending relay
db".
With this change we clean up the commands when they fail too.
We also clean up the pending relay db on startup.
If the closing type is known:
- there is no need to watch the funding tx because it has already
been spent and the spending tx has already reached min_depth
- there is no need to attempt to publish transactions for other
types of close.
* differentiate current/next remote close
We can still match on the trait `RemoteClose` if we don't need that
level of precision.
* Use node announcements as fallback to load peer addresses during startup
* Add NetworkDb.getNode to retrieve a node_announcement by nodeId
* When connecting to a peer use node_announcement as fallback for its IP address
* Support connection to peer via pubKey
* Increase finite max of exponential backoff time to 1h.
* Add peer disconnect API call
This only happens when we are fundee. We *could* have some funds at
stake if there was a non-zero `push_msat`, but we already allow 5 days
for the funding tx to confirm, so the best option is probably to forget
about the channel.
When relaying a payment, we look in the onion to find the next
`shortChannelId`. But we may choose a different channel (to the same
node), because the requested channel may not have enough balance, or for
some other reason, as permitted by the spec.
Currently we limit ourselves to only two attempts: one with a
"preferred" channel, and one with the originally requested channel if it is
different from the preferred one. This has drawbacks, because if we have
multiple channels to the same node, we may not be able to relay a
payment if the "preferred" channel is currently unavailable (e.g.
because of an htlc in-flight value that is too high).
We now retry as many times as there are available channels, in our order
of preference, and if all fail, then we return a failure message for the
originally requested channel.
`Globals.feeratePerKB` is an atomic reference initialized to `null` and
is asynchronously set by the fee provider only once it's ready. This
means that it is possible to retrieve a null object from feeratePerKB, a
scenario that must be handled separately to prevent any issues.
This commit initializes `Globals.feeratePerKB` with the default
values set in the configuration file. This makes sure that the
feerate is always set to a meaningful value.
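A minimal sketch of the change, with illustrative type and field names: the atomic reference is seeded with the configuration defaults so reads never observe `null` before the first fee-provider update.

```scala
import java.util.concurrent.atomic.AtomicReference

case class FeeratesPerKB(blocks2: Long, blocks6: Long, blocks12: Long)

// values would come from the configuration file
val defaultsFromConfig = FeeratesPerKB(blocks2 = 200000, blocks6 = 150000, blocks12 = 100000)

val feeratePerKB = new AtomicReference[FeeratesPerKB](defaultsFromConfig)
// the fee provider later overwrites it asynchronously: feeratePerKB.set(latest)
```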
It was obsolete since
068b0bccf9.
Note that we use a signed long, but it doesn't matter since the maximum value of a
signed long (2^63 - 1 milli-satoshis) is greater than the total supply of bitcoin.
* Electrum: update mainnet and testnet servers list
* Electrum: request missing history transactions on reconnect
Upon connection/reconnection, ask for transactions that are included in our history but
which we don't have and don't have a pending request for.
* Electrum: add disconnect/reconnect tests
Simulate disconnections and check that the wallet eventually gets its history and transactions.
* Add /sendtoroute API and functionality
* Do not use extractor pattern in PaymentLifecycle::SendPaymentToRoute
* /sendtoroute: support route parameter as comma separated list
* Add test for 'sendtoroute' in EclairImplSpec
* /payinvoice: assert the override amount is being used in the invoice
* /updaterelayfee: assert the values passed through the API call are forwarded to EclairImpl
* 'receive' API: test for fallback address usage and parameters passing
* 'close' API: test for parameters handling
* 'networkFees': test for default parameters
* Add test dependency mockito-scala, rewrite a test using the mock framework
* Factor out query filter parsing in EclairImpl, Add test for networkFees/audit/allinvoice
* Move getDefaultTimestampFilters in companion object, group together EclairImpl-scoped classes
* use bitcoind fee provider first
* set default `smooth-feerate-window`=6
* Configuration: increase fee rate mismatch threshold
We will accept fee rates up to 8x bigger or smaller than our local fee rate.
LND sometimes sends a new signature without any changes, which is a
(harmless) spec violation.
Note that the test was previously not failing because it wasn't specific
enough. The test now fails and has been ignored.
* Transport: add support for encryption key logging.
This is the format the wireshark lightning-dissector uses to be able to decrypt lightning messages.
There was actually a change introduced by #944 where we used
`ClosingType.toString` instead of manually defining types, causing a
regression in the audit database.
The goal is to prevent sending a lot of updates for flappy channels.
Instead of sending a disabled `channel_update` after each disconnection,
we now wait for a payment to try to route through the channel and only
then reply with a disabled `channel_update` and broadcast it on the
network.
The reason is that in case of a disconnection, if no one cares about that
channel then there is no reason to tell everyone about its current
(disconnected) state.
In addition to that, when switching from `SYNCING`->`NORMAL`, instead
of emitting a new `channel_update` with flag=enabled right away, we wait
a little bit and send it later. We also don't send a new `channel_update` if
it is identical to the previous one (except if the previous one is outdated).
This way, if a connection to a peer is unstable and we keep getting
disconnected/reconnected, we won't spam the network.
The extra delay allows us to remove the change made in #888, which was
a workaround in case we generated `channel_update` too quickly.
Also, increased refresh interval from 7 days to 10 days. There was no
need to be so conservative.
Note that on startup we still need to re-send `channel_update` for all
channels in order to properly initialize the `Router` and the `Relayer`.
Otherwise they won't know about those channels, and e.g. the
`Relayer` will return `UnknownNextPeer` errors.
But we don't need to create new `channel_update`s in most cases, so
this should have little or no impact on gossip because our peers will
already know the updates and will filter them out.
On the other hand, if some global parameters (like relaying fees) are
changed, it will cause the creation of a new `channel_update` for all
channels.
This PR standardizes the way we compute the current time as a unix timestamp:
- Scala's Platform is used and the conversion is done via scala's concurrent.duration facilities
- Java's Instant has been replaced due to broken compatibility with android
- AuditDB events use milliseconds (fixes #970)
- PaymentDB events use milliseconds
- Query filters for AuditDB and PaymentDB use seconds
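A minimal sketch of such helpers, assuming illustrative names, using Scala's `Platform` clock and `scala.concurrent.duration` conversions:

```scala
import scala.compat.Platform
import scala.concurrent.duration._

def currentTimestampMillis: Long = Platform.currentTime                          // for AuditDB/PaymentDB events
def currentTimestampSeconds: Long = Platform.currentTime.milliseconds.toSeconds // for query filters
```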
* Remove closed channels when application starts
If the app is stopped just after a channel has transitioned from CLOSING to CLOSED, when the application starts again it will be restored as CLOSING. This commit checks channel data and removes closed channels instead of restoring them.
* Channels Database: tag closed channels but don't delete them
Instead we add a new `closed` column that we check when we restore channels.
* Document how we check and remove closed channels on startup
* Channel: Log additional data
Log local channel parameters, and our peer's open or accept message.
This should be enough to recompute keys needed to recover funds in case of unilateral close.
* Backup: explicitly specify move options
We now specify that we want to atomically overwrite the existing backup file with the new one (fixes
a potential issue on Windows).
We also publish a specific notification when the backup process has been completed.
* Backup running channel database when needed
Every time our channel database needs to be persisted, we create a backup which is always
safe to copy even when the system is busy.
* Upgrade sqlite-jdbc to 3.27.2.1
* BackupHandler: use a specific bounded mailbox
BackupHandler is now private, users have to call BackupHandler.props() which always
specifies our custom bounded mailbox.
* BackupHandler: use a specific threadpool with a single thread
* Add backup notification script
Once a new backup has been created, call an optional user defined script.
Note that this doesn't mean that we will buffer 1M objects in memory:
those are just pointers to (mostly) network announcements that already
exist in our routing table.
The routing table has recently gone over 100K elements (nodes,
announcements, updates) and this causes the connection to be closed when a
peer requests a full initial sync.
Until now, if the peer is unresponsive (typically doesn't respond to
`open_channel` or `funding_created`), we waited indefinitely, or until the
connection closed.
It translated to an API timeout for users, and uncertainty about the
state of the channel.
This PR:
- adds an optional `--openTimeoutSeconds` timeout to the `open` endpoint, that will
actively cancel the channel opening if it takes too long before reaching
state `WAIT_FOR_FUNDING_CONFIRMED`.
- makes the `ask` timeout configurable per request with a new `--timeoutSeconds`
- makes the akka http timeout slightly greater than the `ask` timeout
Ask timeout is set to 30s by default.
Locks held on utxos that are used in unpublished funding transactions should not be persisted.
If the app is stopped before the funding transaction has been published the channel is forgotten
and so should be locks on its funding tx utxos.
There is no unique identifier for payments in LN protocol. Critically,
we can't use `payment_hash` as a unique id because there is no way to
ensure unicity at the protocol level.
Also, the general case for a "payment" is to be associated with multiple
`update_add_htlc`s, because of automated retries. We also routinely
retry payments, which means that the same `payment_hash` will be
conceptually linked to a list of lists of `update_add_htlc`s.
In order to address this, we introduce a payment id, which uniquely
identifies a payment, as in a set of sequential `update_add_htlc`
managed by a single `PaymentLifecycle` that ends with a `PaymentSent` or
`PaymentFailed` outcome.
We can then query the api using either `payment_id` or `payment_hash`.
The former will return a single payment status, the latter will return a
set of payment statuses, each identified by their `payment_id`.
* Add a payment identifier
* Remove InvalidPaymentHash channel exception
* Remove unused 'close' from paymentsDb
* Introduce sent_payments in PaymentDB, bump db version
* Return the UUID of the ongoing payment in /send API
* Add api to query payments by ID
* Add 'fallbackAddress' in /receive API
* Expose /paymentinfo by paymentHash
* Add id column to audit.sent table, add test for db migration
* Add invoices to payment DB
* Add license header to ExtraDirective.scala
* Respond with HTTP 404 if the corresponding invoice/paymentHash was not found.
* Left-pad numeric bolt11 tagged fields to have a number of bits multiple of five (bech32 encoding).
* Add invoices API
* Remove CheckPayment message
* GUI: consume UUID reply from payment initiator
* API: reply with JSON encoded response if the queried element wasn't found
* Return a payment request object in /receive
* Remove limit of pending payment requests!
* Avoid printing "null" fields when serializing an invoice to json
* Add index on paymentDb.sent_payments.payment_hash
* Order results in descending order in listPaymentRequest
Our `open` API call expects an optional fee rate in satoshi/byte, which is the most widely
used unit, but we failed to convert it to satoshi/kiloweight, which is the standard in LN.
We also check that the converted fee rate cannot go below 253 satoshi/kiloweight.
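A minimal sketch of the conversion and floor (1 vbyte = 4 weight units, so sat/byte × 1000 / 4 gives sat/kiloweight):

```scala
val MinimumFeeratePerKw = 253L

def feerateByte2Kw(feeratePerByte: Long): Long =
  math.max(feeratePerByte * 1000 / 4, MinimumFeeratePerKw)
```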
There was an obscure Docker error when trying to start an Electrum
server in tests. [1]
It appears that there is a conflict between Docker and Hyper-V on some
range of ports.
A workaround is to just change the port we were using.
[1] https://github.com/docker/for-win/issues/3171
* Fix eclair cli argument passing
* Modify eclair-cli to work with equals in arguments
* Eclair-cli: show usage when wrong params are received
* Remove deprecated call from eclair-cli help message [ci skip]
* send events when htlc settle on-chain
* send events when a payment is received/relayed/sent
* send events when a payment is failed
* Add test for websocket
* Use nicer custom type hints when serializing to json (websocket)
* Fix bech32 prefix for regtest
* Separate cases for bech32 testnet and regtest for BOLT11 fallback address
* Electrum: use 3.3.4 as client name
* Electrum Pool: more specific message on disconnect
Specify whether we lost connection to our master server or to a backup server.
* Electrum: Update mainnet servers list
* Electrum: make pool address selection more readable
We connect to a random server we're not already connected to.
* Electrum Tests: increase "wait for ready" test timeout
It was a bit short and sometimes failed on travis.
* Electrum: better parsing of invalid responses
On testnet some Electrum servers are not compliant with the protocol version they advertise
and will return responses formatted with 1.0 rules.
Port the existing API functionalities over a new structure of HTTP endpoints, with the biggest difference being the usage of **named parameters** for the requests (responses are unchanged). RPC methods have become endpoints and the parameters for each are now passed via form-params (clients must use the header "Content-Type" : "multipart/form-data"), this allows for a clearer interpretation of the parameters and results in more elegant parsing code on the server side. It is possible to still use the old API version via a configuration key.
Old API can be used by setting `eclair.api.use-old-api=true`.
* Treat channels with fees=0 as if they had feeBase=1msat while we compute a route
* Add test to ensure we build the onion attaching no fees if they were not required by the channel_update
* Initialize the database outside the node param constructor
* Do not create folders during StartupSpec
* Simplify syntax for instantiating test Databases
* Rework parameter passing to database initialization
* Force UTF-8 file encoding on all platform.
* Use bitcoin-lib 0.11, which embeds libsecp256k1
* Unit tests: generate dummy sig from 32 random bytes
We now use a version of bitcoin-lib which embeds JNI bindings for libsecp256k1,
and it will only sign data that is 32 bytes long (in Bitcoin and LN you always
sign data hashes, not the actual data).
* Use maven 3.6.0 and a different mirror
* RoutingSyncSpec: don't create databases at init time
We called nodeParams which created a new in-memory sqlite database every time we created "fake" routing info
* Bitcoin tests: generate 150 blocks instead of 500
We don't need to generate 432 blocks to activate segwit but we still need to have
spendable coins and coinbase maturity is 100 blocks even on regtest.
* Electrum client: test against mainnet Electrum servers
Previous test against testnet servers was flaky because testnet Electrum
servers are unreliable. Here we test against our own server on mainnet (and
2 servers from our list for the pool test).
Bitcoin Core 0.18 is about to enter its RC cycle and should be released soon (initial target was April). It is not compatible with 0.16: some of the RPC calls that we use have been removed (they're still available in 0.17 but tagged as deprecated).
With this PR, eclair will be compatible with 0.17 and the upcoming 0.18, but not with 0.16 any more, so it will be a breaking change for some of our users. Supporting the last 2 versions is the right option and we should be ready before 0.18 is actually released.
* don't spam with channel_updates at startup
Previous logic was very simple but naive:
- every time a channel_update changed we would send it out
- we would always make a new channel_update with the disabled flag set
at startup.
In case our node was simply restarted, this resulted in us re-sending a
channel_update with the disabled flag set, then a second one with the
disabled flag unset a few seconds later, for each public channel.
On top of that, this opened way to a bug: if reconnection is very fast,
then the two successive channel_update will have the same timestamp,
causing the router to not send the second one, which means that the
channel would be considered disabled by the network, and excluded from
payments.
The new logic is as follows:
- when we do NORMAL->NORMAL or NORMAL->OFFLINE or OFFLINE->NORMAL, we
send out the new channel_update if it has changed
- in all other cases (e.g. WAIT_FOR_INIT_INTERNAL->OFFLINE) we do nothing
As a side effect, if we were connected to a peer, then we shut down
eclair, then the peer goes down, then we restart eclair: we will make a
new channel_update with the disabled flag set but we won't broadcast it.
If someone tries to make a payment to that node, we will return the
new channel_update with disabled flag set (and maybe the payer will then
broadcast that channel_update). So even in that corner case we are good.
* quick reconnection: bump channel_update timestamp
In case of a disconnection-reconnection, we first generate a
channel_update with disabled bit set, then after we reconnect we
generate a second channel_update with disabled bit not set.
If this happens very quickly, then both channel_updates will have the
same timestamp, and the second one will get ignored by the network.
A simple fix is to bump the second timestamp in this case.
* set channel_update refresh timer at reconnection
We only care about this timer when connected anyway. We also cancel it
when disconnecting.
This has several advantages:
- having a static task resulted in an unnecessary refresh if the channel
got disconnected/reconnected within the 2-week window
- better repartition of the channel_update refresh over time because at
startup all channels were generated at the same time causing all refresh
tasks to be synchronized
- less overhead for the scheduler, because we cancel the refresh task for
offline channels (minor, but still)
Use bitcoin-lib v0.10 which has finally been synced to maven central.
Fix transactions unit test (the check in the test was using the whole locktime and not
the last 24 bits).
See https://github.com/ACINQ/bitcoin-lib/pull/31.
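A minimal sketch of what the fixed check looks at: only the lower 24 bits of the commitment transaction's `nLockTime` carry (part of) the obscured commitment number.

```scala
def lockTimeLower24Bits(lockTime: Long): Long = lockTime & 0xffffffL
```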
We still have to use `Array[Byte]` for low-level cryptographic primitives, and `akka.util.ByteString` for tcp connections. In order to reduce unnecessary copies, we used `ByteVector.view(...)` as much as possible.
Took the opportunity to do a project-wide optimize imports. We might as well do it now since pretty much all files have been touched already.
NB: temporarily use bitcoin-lib 0.10.1-SNAPSHOT because maven central is very slow and we can't access the recently released 0.10 for now.
Add methods to delete channels and tag channels as pruned in batch, which is much
more efficient in sqlite.
* Network db: minor changes in unit tests
Test pruning a few thousand channels at once.
* NetworkDb API: use Iterators and not Seq
It's more consistent with our code base.
This is a simpler alternative to #867, with the following limitations:
- no `OnChainRefundsDb` and associated API calls
- `PaymentSettlingOnChain` event will be sent exactly once per payment
and have less information
- we don't touch `HtlcTimeoutTx`
- use json4s type hints instead of manual attributes to case classes
There are several separate but related changes in this PR:
(a) Fast close on scenarios where we have nothing at stake (instead of going to `CLOSING` state). The previous process was not only slower (we had to wait for confirmations), but it never resolved when the funding tx hadn't been confirmed. Note that there is still an edge case where the funding tx never gets confirmed, we are fundee and we have something at stake (`push_msat` > 0).
(b) When *fundee*: after a timeout (5 days), if the funding tx hasn't reached `min_depth`, we cancel the channel.
(c) When *funder*: there is no timeout on the funding tx; however on restart, if we detect that our funding tx was doublespent, then we cancel the channel. Just because there is a doublespend doesn't mean that something malicious is going on (e.g. the fee was too low, the tx was eventually removed from mempools and we just spent the inputs on something else).
Commits:
* set proper channelid in logs on restore
* fast close if we have nothing at stake
* added fundingTx and timestamp to DATA_WAIT_FOR_FUNDING_CONFIRMED
Also added migration codecs and tests
* implemented funding timeout for fundee
After a given delay, fundee will consider that the funding tx will never
confirm and cancels the channel.
Note that this doesn't apply to the funder, because our implementation
guarantees that we have sent out a funding tx, and the only way to be
sure that it will never be confirmed is that we double spend it. We just
can't rely on a timeout if we want to be safe.
* Electrum: detect if a wallet transaction has been double-spent
If it's in the mempool, or if it's been confirmed, then it's not double spent.
If it's not confirmed and not in the mempool, we check if we have a transaction in
our wallet that spends one of the inputs of our tx. If we find one, then it's been
double spent.
This will work with our funding txs, but not with their funding txs.
* fix regression with dataloss protection
The fast close causes a regression with dataloss protection, because
if we have nothing at stake we won't publish anything in case of
error (even if our peer asks us to).
This fixes #854.
* Add route-weight-ratios to SendPayment/RouteRequest
* Update test channel_update with real world fee values
* Add maxFeeBase and maxFeePct to SendCommand, use high fee tolerance in integration test
* Expose randomized route selection feature in SendPayment, used in integration test too
* Add maxCltv to SendPayment/RouteRequest
* Implement boundaries for graph searching with cost, cltv, and size
* Enable searching for routes with size/CLTV/fee limits
* Expose RouteParams in RouteRequest and SendPayment
* If we couldn't find a route on the first attempt, retry relaxing the restriction on the route size
* Avoid returning an empty path, collapse the route not found cases into one
* When retrying to search for a route, relax 'maxCltv'
* Group search params configurations into a block
* Use the returning edges in 'ignoredEdges' when looking for a spur path
* Log path-finding params when receiving a route request
* Enforce weight ratios to be between (0,1]
* Make path-finding heuristics optional
* Upgrade to JDK11
Eclair can be built and used on Oracle JDK 1.8 or OpenJDK 11.
JavaFX is now embedded in eclair-node-gui and does not need to be installed separately.
* Install: update java download links
OpenJDK 11 is now our recommendation. Tell users to download java from https://jdk.java.net/11
* README: Rewrite installation instructions
* Electrum: don't ask for merkle proofs for unconfirmed txs
* Electrum: clear status when we get disconnected and still have pending history transactions
When we get disconnected and have script hashes for which we still have pending connections,
clear the script hash status. When we reconnect we will ask for its history again, this way we
won't miss anything. Since we rotate keys it should not result in heavy traffic (script hashes have
few history items).
* Electrum: represent and persist block heights as 32 bits signed ints
Int.MaxValue is about 40,000 years of blocks, which should be enough, and it will fix the encoding
problem for users on testnet when there's a reorg and one of their txs has a height of -1.
Side-note: changing the wallet codec can be done easily: if we cannot read and decode persisted data
we just start with an empty wallet and retrieve all wallet data from electrum servers, and once it's
ready it will be encoded with the new codec and saved.
* Electrum persistence: include a version number
It provides a clean way, when upgrading the app, of choosing whether to keep the same version and start from the
last persisted wallet (if the persistence format has not been changed or is compatible with the old one), or to
change the version and force starting from an empty wallet and downloading all wallet items from Electrum servers.
* ElectrumClient: remove useless buffer
The GUI took a very long time to startup because nodes and channels were
stored in a javafx `ObservableList` which doesn't allow random access. We
can't easily use `ObservableMap` because `TableView` does not support
it.
As a workaround, created an `IndexedObservableList` which keeps track of
indices in `ObservableList` so that we can still have fast random
access.
Also, now grouping announcements in `NetworkEvent` instead of sending
them one by one.
* Enable searching for routes with size/CLTV/fee limits
* Expose the RouteParams in RouteRequest
* Expose the RouteParams in SendPayment
* Rename DEFAULT_ROUTE_MAX_LENGTH
* If we couldn't find a route on the first attempt, retry relaxing the restriction on the route size
* Avoid returning an empty path, collapse the route not found cases into one
* When retrying to search for a route, relax 'maxCltv'
* Move the default params for route searching in the conf, refactor together router params into a config class
* Remove max-payment-fee in favor of router.search-max-fee-pct
* Group search params configurations into a block
* Rename config keys for router path-finding
When we are disconnected and we don't have any channel with that peer,
we can stop it and remove its address from db.
We also need to handle the case where we are in `DISCONNECTED` or
`INITIALIZING` state and we are notified that the last channel we had
with that peer just got closed.
Note that this opens a race condition in the switchboard, if we receive
an incoming connection from that same peer right after we stopped it,
and before the switchboard received the `Terminated` event. If that
happens, then `Peer.Connect` or `Peer.OpenChannel` will timeout. We
could also have the switchboard listen to deadletter events, but that
seems a bit over the top.
Also, removed switchboard's map of peers. Instead, we use the actor's
children list, and access peers using the recommended `child()` method.
Now the `peers` request only returns an `Iterable` instead of a `Map`. This
removes the need to watch child actors, and thus removes the race
condition when peers were stopped. As a trade-off, peer lookup is now
in O(log(N)) instead of O(1) (with N being the number of peers), but this
seems acceptable.
Persist a partial view of our wallet to enable faster startup time.
Users will be able to create transactions as soon as we've checked that we're connected to a "good" electrum server. In the unlikely event that they were able to create and try to publish a transaction during the few seconds it took to finish syncing, and their utxo was spent (because they were trying to use an unconfirmed utxo that got double spent, or because they spent their utxo themselves from another wallet initialized with the same seed while this one was offline), publishing the tx will fail.
* Electrum: persist wallet state
Wallet state is persisted and reloaded on startup.
* Electrum wallet: fix handling of headers chunk
When we receive a tx that is older than our checkpoints, we download the
header chunk that it's in, check it and connect it to our chain.
* Electrum: advertise wallet transactions
Send notifications for all wallet transactions once we're connected and synced
* Electrum: add timestamp to wallet events
Add an optional timestamp to TransactionReceived and TransactionConfidenceChanged, which is the timestamp of the block the tx was confirmed in (if any).
* Electrum client: use a Close message
This will fix concurrency issues where handlers are called when the actor
is already stopped.
Note that balance events are logged at most once every 30s, and only when
the balance actually changes (e.g. we won't log if a payment fails).
Also, only send `AvailableBalanceChanged` when needed.
We were sending this event every time we sent a `commit_sig`, which is
incorrect because our balance doesn't change if, say, we are signing an
incoming htlc.
Note that we only send this event in `NORMAL` state, not in `SHUTDOWN`
state, because the balance is not really _available_ in the latter.
This includes support for hosting onion services, and connecting to them, which are two separate things:
- Opening an onion service implies interacting with the tor daemon controller, which requires authentication. We support both `SAFECOOKIE` and `HASHEDPASSWORD` authentication mechanisms, with a default to `SAFECOOKIE`. We support v2 and v3 services, with a default to v3 as recommended by the tor project.
- Connecting to onion services requires tunnelling through tor's local SOCKS5 proxy.
Incoming and outgoing tor connections are thus separate matters that need to be configured independently. Specific documentation has been added to guide users through these steps.
Big thanks to @rorp for doing the heavy lifting on all this!
While it makes sense to exclude from the routing table channels for
which there is a spending tx in the mempool, we shouldn't blame our
peers for considering the channel still open if the spending tx hasn't
yet been confirmed.
Also, reworked `ValidateResult` with better types. It appears that the
`NonexistingChannel` error wasn't really useful (a malicious peer would
probably point to an existing funding tx, so there is no real difference
between "txid not found" and "invalid script"), so it was replaced by
`InvalidAnnouncement` error which is a more serious offense (punished
by a disconnection, and probably a ban when we implement that sort of
things).
* Make route randomization optional (enabled by default), option is exposed in SendPayment/RouteRequest
* Fix non deterministic behavior in IntegrationTest
We previously assumed that the `gossip_timestamp_filter` only applied to
future messages, so we set `first_timestamp` to 0 in order to have a
pass-all filter.
But BOLT 7 actually says that the remote peer:
> SHOULD send all gossip messages whose timestamp is greater or equal to
first_timestamp, and less than first_timestamp plus timestamp_range
Which means that the way our filter was set, the remote peer would dump
the entire routing table on us.
By setting `first_timestamp` to the current time, we achieve what we
want. The synchronization of older messages is done by sending a
`query_channel_range` and then one or several `query_short_channel_ids`.
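A minimal sketch of the filter we now send (message fields per BOLT 7, values illustrative): `first_timestamp` is set to the current time so only new gossip passes, while the backlog is synced separately with the channel queries.

```scala
case class GossipTimestampFilter(chainHash: String, firstTimestamp: Long, timestampRange: Long)

def futureGossipOnly(chainHash: String): GossipTimestampFilter =
  GossipTimestampFilter(
    chainHash,
    firstTimestamp = System.currentTimeMillis / 1000, // "now", instead of 0
    timestampRange = 0xffffffffL)                     // up to the maximum u32 range
```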
* relay to channel with lowest possible balance
Our current channel selection is very simplistic: we relay to the
channel with the largest balance. As time goes by, this leads to all
channels having the same balance.
A better strategy is to relay to the channel which has the smallest
balance but has enough to process the payment. This way we save larger
channels for larger payments, and also on average channels get depleted
one after the other.
* added tests...
...and found bugs!
Note that there is something fishy in BOLT 4, filed a PR:
https://github.com/lightningnetwork/lightning-rfc/pull/538
Also, first try of softwaremill's quicklens lib (in scope test for now)
* minor: fixed typo (h/t @btcontract)
This is a simple mechanism to test our direct peers by sending fake
small payments to them. A probe is considered successful if the peer
replies with an `UnknownPaymentHash` error.
Probing is configurable and disabled by default.
We use Yen's algorithm to find the k-shortest (loopless) paths in a graph, and Dijkstra as the search algorithm.
We then pick a random route among the cheapest ones.
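A minimal sketch of the final selection step, with illustrative names: once the k shortest routes have been computed (Yen's algorithm on top of Dijkstra), one of the cheapest is picked at random.

```scala
import scala.util.Random

def pickRoute[R](kShortest: Seq[R], cost: R => Long, candidates: Int = 3): Option[R] = {
  val cheapest = kShortest.sortBy(cost).take(candidates)
  if (cheapest.isEmpty) None else Some(cheapest(Random.nextInt(cheapest.size)))
}
```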
* Electrum: always stop clients when there's an error
We will automatically connect to another server, and it's much cleaner
than restarting right away.
* Electrum: improve handling of server errors
* Electrum Pool: don't switch to our current master
In regtest (and probably testnet too) it may seem that our master server went
straight to current height + 2 when blocks are created very quickly, so
check first that the server is not already our master before we switch.
* Improved amounts readability (fixes #542) and added the Bits unit
denomination in the documentation
* Improved channel panel UI layout
* Added a confirmation dialog when closing a channel, with a short summary
of the channel to be closed
The balance bar has been shrunk in size so that it should not be mistaken as
a separator element between channels. The channel's balance, capacity and
peer node id are now more visible. Also added the short channel id to the
channel's pane.
fixes #690
* Added node's aggregated balance in the status bar
fixes #775
We were sending this event every time we sent a `commit_sig`, which is
incorrect because our balance doesn't change if, say, we are signing an
incoming htlc.
Note that we only send this event in `NORMAL` state, not in `SHUTDOWN`
state, because the balance is not really *available* in the latter.
* Support searching backward from the target
* Use the amount+fees with testing for min/max htlc value of edges
* Build the adjacency list graph with incoming edges instead of outgoing
* Make sure we don't find routes longer than the max allowed by the spec
* Remove default amount msat, enhance 'findroute' API
* Optimize tests for ignored edges in Dijkstra
* Enhance test for max route length, fix the length to 20 channels
* Add test for routing to a target that is not in the graph (assisted routes)
* Correctly parse short channel id
* Add test for RPC APIs
* Put akka.http.version in parent project pom
Co-Authored-By: araspitzu <a.raspitzu@protonmail.com>
* Implement "GetHeaders" RPC call
* Add checkpoints and pow verification
* Don't resolve server address too soon
* Add testnet checkpoints
* Store headers in a sqlite wallet db
* Use 1.4 protocol
Request protocol version 1.4 (this is the default setting in Electrum wallet).
Retrieve and store all headers as binary blobs in bitcoin format.
* Insert headers in batch
* Optimize headers sync and persistence
We assume that there won't be a reorg of more than 2016 blocks (which
could be handled by publishing a new checkpoint) and persist our headers
except for the last 2016 we have received: when we restart, we will ask
our server for at least 2016 headers.
* Persists transactions
Transactions are persisted only when they've been verified (i.e. we've received
a valid Merkle proof).
* Disable difficulty check on testnet and regtest
On testnet there can be difficulty adjustments even within a re-targeting window.
* Update checkpoints
* Use proper Ping message
`version` can no longer be sent as a ping as we did before.
* Don't ask for Merkle proofs for unconfirmed transactions
* Improve startup time
We now store a new checkpoint and headers up to that checkpoint as soon as our
best chain is 2016 + 500 blocks long
* Properly detect connection loss
* Update electrum mainnet servers list
Using the list from Electrum 3.3.2
* Don't open multiple connection to the same Electrum servers
We want to keep connections to 3 different servers, but when we have fewer than 3 different
addresses it's pointless to try to maintain 3 connections.
Our current channel selection is very simplistic: we relay to the
channel with the largest balance. As time goes by, this leads to all
channels having the same balance.
A better strategy is to relay to the channel which has the smallest
balance but has enough to process the payment. This way we save larger
channels for larger payments, and also on average channels get depleted
one after the other.
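A minimal sketch of the selection strategy described above, with illustrative types: among usable channels to the next node, relay through the one with the smallest balance that can still carry the payment.

```scala
case class OutgoingChannel(shortChannelId: Long, availableBalanceMsat: Long)

def selectChannel(amountMsat: Long, channels: Seq[OutgoingChannel]): Option[OutgoingChannel] =
  channels
    .filter(_.availableBalanceMsat >= amountMsat) // has enough to process the payment
    .sortBy(_.availableBalanceMsat)               // prefer the smallest such balance
    .headOption
```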
If we have stopped eclair while it was forwarding HTLCs, it is possible
that we are in a state where an incoming HTLC
was committed by both sides, but we didn't have time to send
and/or sign the corresponding HTLC to the downstream node.
In that case, if we do nothing, the incoming HTLC will
eventually expire and we won't lose money, but the channel
will get closed, which is a major inconvenience.
This check will detect this and will allow us
to fast-fail HTLCs and thus preserve channels.
The goal is to reduce attempts from other nodes in the network to use
channels that are unbalanced and can't be used to relay payments.
This leaks information about the current balance and is a privacy
tradeoff, particularly in this simplistic implementation. A better way
would be to add some kind of hysteresis in order to prevent trivial
probing of channels.
We were previously handling `UpdateFailHtlc` and
`UpdateFailMalformedHtlc` similarly to `UpdateFulfillHtlc`, but that is
wrong:
- a fulfill needs to be propagated as soon as possible, because it
allows us to pull funds from upstream
- a fail needs to be cross-signed downstream (=irrevocably confirmed)
before forwarding it upstream, because it means that we won't
be able to pull funds anymore. In other words we need to be absolutely
sure that the htlc won't be fulfilled downstream if we fail it upstream,
otherwise we risk losing money.
Also added tests.
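A minimal sketch of the asymmetry, with illustrative types: a fulfill can be relayed upstream immediately, a fail only once it has been irrevocably committed (cross-signed) downstream.

```scala
sealed trait HtlcSettlement
case object Fulfill extends HtlcSettlement
case object Fail extends HtlcSettlement

def canRelayUpstream(settlement: HtlcSettlement, crossSignedDownstream: Boolean): Boolean =
  settlement match {
    case Fulfill => true                   // relay asap to pull funds from upstream
    case Fail    => crossSignedDownstream  // wait: the htlc must no longer be fulfillable downstream
  }
```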
* Consider htlc_minimum/maximum_msat when computing a route
* Compare shortChannelIds first as it is less costly than comparing the pubkeys
* Remove export to dot functionality
* Remove dependency jgraph
* Add optimized constructor to build the graph faster
* Use fibonacci heaps from jheaps.org
* Use Set instead of Seq for extraEdges, remove redundant publishing of channel updates
* Use Set for ignored edges