When doing a unilateral close (local or remote), we previously weren't
watching htlc outputs to decide whether the close was finished or not.
This was incorrect because we didn't make sure the htlc-related
transactions had indeed been confirmed on the blockchain, making us
potentially lose money.
This is not trivial, because htlc transactions may be double-spent by the
counterparty, depending on the scenario (e.g. `htlc-timeout` vs
`claim-success`). On top of that, there may be several different kinds of
commitments in competition at the same time.
With this change, we now:
- put `WatchConfirm` watches on the commitment tx, and on all outputs that
only we control (e.g. our main output);
- put `WatchSpent` watches on the outputs that may be double-spent by the
counterparty; when such an output is spent, we put a `WatchConfirm` on
the spending transaction and keep track of all spent outpoints;
- every time a new transaction is confirmed, we check whether there are
remaining transactions waiting for confirmation, taking into account the
fact that some 2nd/3rd-stage txs may never confirm because their input
has been double-spent (see the sketch below).
We also no longer rely on dedicated `BITCOIN_CLOSE_DONE`,
`BITCOIN_LOCALCOMMIT_DONE`, ... events.
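Below is a minimal sketch of that final check, using made-up types (`Tx`, `OutPoint`, `closeDone`) rather than eclair's actual ones: a transaction we are waiting for is considered resolved once it is confirmed, or once one of its inputs has been spent by a different confirmed transaction.
```scala
// Simplified stand-in types for the sketch, not eclair's real data structures.
case class OutPoint(txid: String, index: Long)
case class Tx(txid: String, inputs: Seq[OutPoint])

def closeDone(relevantTxs: Seq[Tx],
              confirmedTxids: Set[String],
              spentBy: Map[OutPoint, String] /* outpoint -> txid of the spending tx */): Boolean =
  relevantTxs.forall { tx =>
    val confirmed = confirmedTxids.contains(tx.txid)
    // double-spent: an input was claimed by another confirmed tx, so this one can never confirm
    val doubleSpent = tx.inputs.exists { o =>
      spentBy.get(o).exists(spender => spender != tx.txid && confirmedTxids.contains(spender))
    }
    confirmed || doubleSpent
  }
```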
* properly handle new htlc requests when closing
When in NORMAL state and a `shutdown` message has already been
sent or received, then any subsequent `CMD_ADD_HTLC` fails and
the relayer is notified of the failure.
Same in SHUTDOWN state.
This fixes a possible race condition where a channel has just switched
to SHUTDOWN, and the relayer keeps sending it new htlcs before
being notified of the state change.
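A rough sketch of the rejection rule described above, with illustrative names (only `CMD_ADD_HTLC` comes from the description; the helper itself is made up): new htlcs are refused as soon as a shutdown has been sent or received, and a failure is returned so the relayer can be notified instead of the command being silently dropped.
```scala
case class CMD_ADD_HTLC(amountMsat: Long, paymentHash: Seq[Byte], expiry: Long)
case class AddHtlcFailed(cmd: CMD_ADD_HTLC, reason: String)

// Illustrative only: the same rule applies in NORMAL (once a shutdown has been
// exchanged) and in SHUTDOWN state.
def handleAddWhileClosing(cmd: CMD_ADD_HTLC,
                          shutdownSent: Boolean,
                          shutdownReceived: Boolean): Either[AddHtlcFailed, CMD_ADD_HTLC] =
  if (shutdownSent || shutdownReceived)
    Left(AddHtlcFailed(cmd, "cannot send new htlcs, channel is closing"))
  else
    Right(cmd)
```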
* renamed Htlc->DirectedHtlc + cleanup
* storing origin of htlcs in the channel state
Currently this information is handled in the relayer, which is not
persisted. As a consequence, if eclair is shut down and there are
pending (signed) incoming htlcs, those will always expire (time out
and fail) even if the corresponding outgoing htlc is fulfilled, because
we lose the lookup table (the relayer's `bindings` map).
Storing the origin in the channel (as opposed to persisting the state
of the relayer) makes sense because we want to store the origin if and
only if an outgoing htlc was successfully sent and signed in a channel.
It is also probably more performant because we only need to do one disk
operation (which we have to do at signing anyway) instead of two
distinct operations.
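A hedged sketch of what "storing the origin in the channel" could look like (the names below are illustrative, not eclair's exact types):
```scala
// Origin of an outgoing htlc: either a payment we initiated locally, or an htlc we
// are relaying from an upstream channel.
sealed trait Origin
case object Local extends Origin
case class Relayed(originChannelId: String, originHtlcId: Long) extends Origin

// The channel keeps a map from outgoing htlc id to its origin; it is persisted
// together with the rest of the channel data when we sign, so it survives a restart.
case class ChannelData(originChannels: Map[Long, Origin] /* , ... other fields ... */)
```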
* removed bindings from relayer
Instead, we rely on the origin stored in the actor state.
* preimages are now persisted and acknowledged
Upon reception of an `UpdateFulfillHtlc`, the relayer forwards it
immediately to the origin channel, *and* it stores the preimage in
a `PreimagesDb`.
When the origin channel has irrevocably committed the fulfill in a
`CommitSig`, it sends an `AckFulfillCmd` back to the relayer, which
will then remove the preimage from its database.
In addition to that, the relayer will re-send all pending fulfills
when it is notified that a channel reaches NORMAL, SHUTDOWN, or
CLOSING state. That way we make sure that the origin channel will
always get the fulfill eventually, even if it is currently OFFLINE,
for example. This fixes #146.
Also, the relayer now relies on the register to forward messages to
channels based on `channelId` or `shortChannelId`. This simplifies
the relayer but adds one hop when forwarding messages.
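A sketch of the persistence and acknowledgment flow described above, with assumed method names for the `PreimagesDb`:
```scala
// Assumed interface: store a preimage until the origin channel has irrevocably
// committed the corresponding fulfill.
trait PreimagesDb {
  def addPreimage(channelId: String, htlcId: Long, paymentPreimage: Seq[Byte]): Unit
  def removePreimage(channelId: String, htlcId: Long): Unit
  def listPreimages(channelId: String): Seq[(Long, Seq[Byte])]
}

// Relayer-side logic (simplified):
// - on UpdateFulfillHtlc: addPreimage(...) then forward the fulfill to the origin channel
// - on AckFulfillCmd (sent by the channel once the fulfill is signed): removePreimage(...)
// - when a channel reaches NORMAL/SHUTDOWN/CLOSING: listPreimages(channelId).foreach(resendFulfill)
```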
* modified `PaymentRelayed` event
Replaced `amountIn` and `feesEarned` with the more explicit `amountIn`
and `amountOut`; `feesEarned` is simply the difference between the two.
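For illustration, the reworked event might look like this (field list taken from the description above, exact shape assumed):
```scala
case class PaymentRelayed(amountIn: Long, amountOut: Long, paymentHash: Seq[Byte]) {
  // fees earned are simply the difference between what came in and what went out
  def feesEarned: Long = amountIn - amountOut
}
```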
TODO:
- when local/remote closing a channel, we currently do not wait
for htlc-related transactions; we consider the channel CLOSED when
the commitment transaction has been buried deeply enough; this is
wrong because it wouldn't leave us time to extract payment preimages
in certain cases
This mitigates issues when opening channels in parallel and having unpublished transactions reuse utxos that are already used by other txs.
Note that utxos will stay locked if the counterparty disconnects after the `funding_created` message is sent, and before a `funding_signed` is received. In that case we should release the locked utxos by calling the `lockunspent` RPC function.
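A hedged sketch of releasing such a locked utxo through bitcoind's `lockunspent` RPC (the `BitcoinJsonRpcClient` trait below is an assumption standing in for whatever JSON-RPC client is used):
```scala
import scala.concurrent.Future

// Assumed minimal JSON-RPC client interface.
trait BitcoinJsonRpcClient {
  def invoke(method: String, params: Any*): Future[Unit]
}

// `lockunspent` with `unlock = true` releases the given outpoints so they can be
// selected again for funding.
def unlockUtxo(client: BitcoinJsonRpcClient, txid: String, vout: Int): Future[Unit] =
  client.invoke("lockunspent", true, List(Map("txid" -> txid, "vout" -> vout)))
```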
* Support building of outgoing payments with optional extra hops from payment requests
* Add test for route calculation with extra hops
* Simplify pattern matching in `buildExtra`
* `buildPayment` now uses a reverse Seq[Hop] to create a Seq[ExtraHop]
Since `Router` currently stores `ChannelUpdate`s for non-public channels, it is possible to use it not only to get a route from payer to payee, but also to get a "reverse" assisted route from the payee when creating a `PaymentRequest` (see the sketch below).
In principle this could even be used to generate a full reverse path to a payer which does not have access to a routing table for some reason.
* Can create `PaymentRequest`s with `RoutingInfoTag`s
* Bugfix and update test with data from live payment testing
* Move ExtraHop to PaymentRequest.scala
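A simplified sketch of turning such a reverse route into routing hints for a payment request (field names approximate the description above, not eclair's exact types):
```scala
// A hop in a computed route, and the corresponding hint embedded in a payment request.
case class Hop(nodeId: String, shortChannelId: Long,
               feeBaseMsat: Long, feeProportionalMillionths: Long, cltvExpiryDelta: Int)
case class ExtraHop(nodeId: String, shortChannelId: Long,
                    feeBaseMsat: Long, feeProportionalMillionths: Long, cltvExpiryDelta: Int)

// The reverse route goes from the payee towards the payer; each hop becomes one hint.
def buildExtra(reverseHops: Seq[Hop]): Seq[ExtraHop] =
  reverseHops.map(h => ExtraHop(h.nodeId, h.shortChannelId,
    h.feeBaseMsat, h.feeProportionalMillionths, h.cltvExpiryDelta))
```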
Instead of only subscribing to those events when we reach certain states,
we now always subscribe to them at startup. It doesn't cost a lot because
we receive an event only when a new block appears, and it is much simpler.
Note that we previously didn't even bother unsubscribing when we weren't
interested anymore in the events, and were ignoring the messages in the
`whenUnhandled` block. So it is more consistent to have the same behavior
during the whole lifetime of the channel.
This fixes #187.
* payment request expiry encoding: add Anton's test
it shows that we don't encode/decode values which would take up more than two 5-bit values
* payment request: encode expiry as a variable-length unsigned value
fixes #186
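A minimal sketch of encoding/decoding an unsigned value as a variable number of big-endian 5-bit groups, which is what the expiry field needs when it does not fit into two 5-bit values (function names are illustrative):
```scala
// Encode a non-negative value into the smallest number of 5-bit groups.
def encode5(value: Long): List[Int] = {
  require(value >= 0)
  @annotation.tailrec
  def loop(v: Long, acc: List[Int]): List[Int] =
    if (v == 0) acc else loop(v >>> 5, (v & 31).toInt :: acc)
  if (value == 0) List(0) else loop(value, Nil)
}

// Decode a big-endian sequence of 5-bit groups back into a value.
def decode5(groups: Seq[Int]): Long =
  groups.foldLeft(0L)((acc, g) => (acc << 5) + g)
```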
The current version attempted to do everything at once, and would always
leave the NORMAL state after processing the `shutdown` message. In
addition to being overly complicated, it had bugs because it is just
not always possible to do so: for example when we have unsigned outgoing
`update_add_htlc` and are already in the process of signing older changes,
the only thing we can do is wait for the counterparty's `revoke_and_ack`
and sign again right away when we receive it. Only after that can we
send our `shutdown` message, and we can't accept new `update_add_htlc`
in the meantime.
Another issue with the current implementation was that we considered
unsigned outgoing *changes*, when only unsigned outgoing `update_add_htlc`
are relevant in the context of `shutdown` logic.
We now also fail to route htlcs in SHUTDOWN state, as recommended by BOLT 2.
This fixes #173.
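A sketch of the key rule, with illustrative types: we may only send our `shutdown` once there are no unsigned outgoing `update_add_htlc` left; other unsigned changes are not blocking.
```scala
sealed trait UpdateMessage
case class UpdateAddHtlc(id: Long) extends UpdateMessage
case class UpdateFulfillHtlc(id: Long) extends UpdateMessage
case class UpdateFailHtlc(id: Long) extends UpdateMessage

// Only unsigned outgoing `update_add_htlc` prevent us from sending `shutdown`.
def canSendShutdown(unsignedOutgoing: Seq[UpdateMessage]): Boolean =
  !unsignedOutgoing.exists(_.isInstanceOf[UpdateAddHtlc])
```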
The current trimming logic [1] as defined in the spec only takes
care of 2nd level txes, making sure that their outputs are greater
than the configured dust limit. However, it is very possible that
depending on the current `feeRatePerKw`, 3rd level transactions
(such as those claiming delayed outputs) are below the dust limit.
The current implementation wasn't taking care of that, and was happily
generating transactions with negative amounts, as reported in #164.
This fixes #164 by rejecting transactions with an amount below the
dust limit, similarly to what happens when the parent output is
trimmed for 2nd level txes.
[1] https://github.com/lightningnetwork/lightning-rfc/blob/master/03-transactions.md#trimmed-outputs
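A hedged sketch of the fix (simplified types; the real code deals with actual transactions, weights and fee rates): compute the claim amount after fees, and reject the transaction if it falls below the dust limit instead of producing a negative or dust output.
```scala
case class ClaimTx(amount: Long /* satoshis */)

def makeClaimTx(parentOutputAmount: Long, fee: Long, dustLimit: Long): Either[String, ClaimTx] = {
  val amount = parentOutputAmount - fee
  if (amount < dustLimit) Left("amount below dust limit")
  else Right(ClaimTx(amount))
}
```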
The current implementation was simplistic, which resulted in writes
being rejected when the OS buffer was full. This happened especially
right after connection, when dumping a large routing table.
It is not clear whether we need read throttling too.
The scenario was already tested at a lower level, but this is
more realistic, with a real Bitcoin Core.
Note that we currently only steal the counterparty's *main output*,
we ignore pending htlcs. From an incentive point of view, it is an
acceptable tradeoff because the amount of in-flight htlcs should
typically be far less than the main outputs (and can be capped
with `max-htlc-value-in-flight-msat`).
* handling `update_fail_malformed` messages in payment fsm
* added check on failure code for malformed htlc errors
The spec says that the `update_fail_malformed_htlc`.`failure_code`
must have the BADONION bit set.
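The check is essentially a bit test (BADONION is 0x8000 in BOLT 4); a minimal sketch:
```scala
val BADONION = 0x8000

// An `update_fail_malformed_htlc` whose failure code doesn't have the BADONION
// bit set must be rejected.
def isValidMalformedFailureCode(failureCode: Int): Boolean = (failureCode & BADONION) != 0
```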
* removed hard-coded actor names in fuzzy tests
We previously skipped the `handleSync` function when we had to re-send
`funding_locked` messages on reconnection. This didn't take into account
the fact that we might have been disconnected right after sending the
very first `commit_sig` in the channel. In that case we need to first
re-send `funding_locked`, then re-send whatever updates were included in
the lost signature, and finally re-send the same `commit_sig`.
Note that the specification doesn't require re-sending the exact same
updates and signatures on reconnection. But doing so allows for a single
commitment history and allows us not to keep track of all signatures
sent to the other party.
Closes #165
* now requiring spv nodes to be 0.13+
* properly setting bitcoinj Context
* disconnect peers which do not provide witness data
* waiting for bitcoinj to be initialized before going further in the setup
This is less performant, but our ResultSet->Iterator implementation
was buggy: Java/Scala iterators require look-ahead capabilities
when iterating over the result, which ResultSet does not support.
This is a quick fix in the meantime.
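A sketch of the quick fix: materialize the whole `ResultSet` into a collection up front instead of exposing a lazy `Iterator` (which would require look-ahead on `next()`).
```scala
import java.sql.ResultSet
import scala.collection.mutable.ListBuffer

// Read all rows eagerly; less memory-friendly, but avoids the look-ahead problem.
def readAll[T](rs: ResultSet)(read: ResultSet => T): List[T] = {
  val buffer = ListBuffer.empty[T]
  while (rs.next()) buffer += read(rs)
  buffer.toList
}
```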
When a node returns a `TemporaryChannelFailure` for its outgoing
channel, along with an unchanged `ChannelUpdate`, we temporarily
exclude this channel from route calculation.
Note that the exclusion is directed, as we expect this will mostly
happen when all funds are on one side of the channel.
The duration of the exclusion can be configured by setting the
key `eclair.channel-exclude-duration`. The default value is 60s.
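A sketch of directed, time-limited exclusion (names are illustrative): a channel is excluded in one direction only, and only until the configured duration has elapsed.
```scala
// A channel seen in a given direction (from -> to).
case class ChannelDesc(shortChannelId: Long, from: String, to: String)

case class Exclusions(excluded: Map[ChannelDesc, Long] = Map.empty /* desc -> expiry (ms) */) {
  def exclude(desc: ChannelDesc, durationMs: Long, nowMs: Long): Exclusions =
    copy(excluded = excluded + (desc -> (nowMs + durationMs)))
  def isExcluded(desc: ChannelDesc, nowMs: Long): Boolean =
    excluded.get(desc).exists(_ > nowMs)
}
```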
* Add API method to accept requests with custom amount
- can be used to send up to twice the requested amount, as allowed by BOLT 11
- should be used for payment requests without amounts
* Refactor 'send' method in API
* Add comments and description for 'send' API method
Eclair no longer stopped when two instances were started with the
same ports.
Note: we should probably go one step further and put a lock in the datadir
directory. For now we just check if the main TCP port is in use and fail fast.
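A sketch of such a fail-fast check, assuming we simply try to bind the configured TCP port at startup:
```scala
import java.net.ServerSocket
import scala.util.{Failure, Success, Try}

// Abort startup early if the port is already taken (likely another eclair instance).
def checkPortAvailable(port: Int): Unit =
  Try(new ServerSocket(port)) match {
    case Success(socket) => socket.close() // port is free, release it for the real server
    case Failure(t) => throw new RuntimeException(s"port $port is already in use, is another instance running?", t)
  }
```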
Most notably, we no longer discard previously signed updates.
Instead, we re-send them and re-send the exact same signature. For that to
work, we had to be careful to re-send rev/sig in the same order, because
that impacts whatever is signed.
NB: this breaks storage serialization backward compatibility
* reworked payment lifecycle
* fixed retry logic (infinite loop in some cases)
* check update signature
* keep track of the list of errors and routes tried
* added support for sending bolt11 payment request in the API
* updated eclair-cli and deleted deprecated TESTING.md (closes #112)
* removed useless application.conf in eclair-node
* now handling CMD_CLOSE in shutdown/negotiating/closing states
* added no-op handlers for FundingLocked and CurrentFeeRate messages
* cleaning up stale announcements on restart
* more informative/less spam logs in Channel
* (gui) Wrapping payment events to display date of event
* Also added controls to item content in cell factory overrides. This
should prevent duplicates as reported in #115