Our recovery logic didn't handle the case where our local commit is
up-to-date, but we don't know their local commit (probably because we
just lost the last state where we sent them a new `commit_sig`).
Also, process all cases in the same `channel_reestablish` handler, like we do
everywhere else.
Moved the sync tests to `Helpers` so that they are more understandable.
* get full blocks when looking for spending tx
With a verbosity of `0`, `getblock` returns the raw serialized
block. It saves us from calling `getrawtransaction` for each transaction
in the block.
Fixes #664.
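The saving can be sketched as follows (a Python sketch with a fake RPC client; only the `getblock`/`getrawtransaction` method names mirror bitcoind's JSON-RPC interface, everything else is illustrative):

```python
# Illustrative sketch (not eclair's code): a fake bitcoind JSON-RPC client
# that counts round-trips. With verbosity 0, `getblock` returns the raw
# serialized block, so transactions can be deserialized locally instead of
# being fetched one by one with `getrawtransaction`.

class FakeRpc:
    def __init__(self, raw_txs_by_txid):
        self.raw_txs = raw_txs_by_txid  # {txid: raw tx}
        self.calls = 0                  # number of RPC round-trips

    def getblock(self, block_hash, verbosity):
        self.calls += 1
        if verbosity == 0:
            return list(self.raw_txs.values())    # stands in for the raw block
        return {"tx": list(self.raw_txs.keys())}  # verbose form: txids only

    def getrawtransaction(self, txid):
        self.calls += 1
        return self.raw_txs[txid]

def block_txs_naive(rpc, block_hash):
    # 1 call for the block + 1 call per transaction
    block = rpc.getblock(block_hash, 1)
    return [rpc.getrawtransaction(txid) for txid in block["tx"]]

def block_txs_fast(rpc, block_hash):
    # a single call; parsing the raw block happens locally
    return rpc.getblock(block_hash, 0)
```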
This can happen in a unilateral close scenario, when the local commit
"wins" the race to the blockchain, and some outgoing htlcs weren't yet
signed by remote.
This fixes #649.
Added a new `AuditDb` which keeps track of:
- every single payment (received/sent/relayed)
- every single network fee paid to the miners (funding, closing, and all commit/htlc transactions)
Note that network fees are considered paid when the corresponding tx has reached `min_depth`; this makes sense and allows us to compute the fee in one single place, in the `CLOSING` handler. There is an exception for the funding tx: we consider its fee paid when the tx has successfully been published to the network. This simplifies the implementation and the tradeoff seems acceptable.
Three new functions have been added to the json-rpc api:
- `audit`: returns all individual payments, with optional filtering on timestamp.
- `networkfees`: returns every single fee paid to the miners, by type (`funding`, `mutual`, `revoked-commit`, etc.) and by channel, with optional filtering on timestamp.
- `channelstats`: maybe the most useful method; it returns a number of information per channel, including the `relayFee` (earned) and the `networkFee` (paid).
The `channels` method now returns detailed information about channels. This makes it far easier to compute aggregate information about channels using the command line.
Also added a new `ChannelFailed` event that allows e.g. the mobile app to know why a channel got closed.
Depending on whether the error is local or remote, a
`Throwable`/`wire.Error` will be attached to the event.
This allows a user of the library to implicitly pass the `ActorSystem` to the eclair node. Note that if you are running multiple eclair instances on the same machine, you need to make sure the implicitly passed `ActorSystem`s are unique.
* Implement new 'routing sync' messages
* add a new feature bit for channel queries
when we receive their init message and check their features:
- if they set `initial_routing_sync` and `channel_range_queries`, we do nothing; we should receive a
range query shortly
- if they support channel range queries we send a range query
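The decision above can be sketched as follows (Python sketch; the bit positions and the legacy full-dump branch are illustrative assumptions, not eclair's actual constants):

```python
# Illustrative feature bits (the real values live in the BOLT 9 feature registry).
INITIAL_ROUTING_SYNC = 1 << 3
CHANNEL_RANGE_QUERIES = 1 << 6

def sync_action(their_features: int) -> str:
    """Decide how to sync routing state after receiving a peer's init."""
    if their_features & CHANNEL_RANGE_QUERIES:
        if their_features & INITIAL_ROUTING_SYNC:
            return "wait"              # they will send us a range query shortly
        return "send-range-query"      # they support queries: we send one
    if their_features & INITIAL_ROUTING_SYNC:
        return "dump-routing-table"    # legacy full sync (assumed here)
    return "do-nothing"
```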
* Modify query_short_channel_id to ask for a range of ids
And not just a single id
* use `SortedMap` to store channel announcements
* don't send prune channels with channel range queries
* update range queries type to match BOLT PR
* add timestamp-based filters for gossip messages
each peer can specify a `timestamp range` to filter gossip messages against.
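A minimal sketch of such a filter, assuming the (`first_timestamp`, `timestamp_range`) pair from BOLT 7's `gossip_timestamp_filter`:

```python
def passes_filter(timestamp: int, first_timestamp: int, timestamp_range: int) -> bool:
    """Relay a gossip message only if its timestamp falls in the peer's window."""
    return first_timestamp <= timestamp < first_timestamp + timestamp_range

# A "pass-all" filter covers the whole uint32 timestamp range.
PASS_ALL = (0, 0xFFFFFFFF)
```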
* don't preserve order in `decodeShortChannelIds`
It is not needed and allows us to return a `Set`, which is better suited
to how we use the result.
* channel range queries: handle multi-message responses
Handle case where there are too many short ids to fit in a single message.
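The splitting can be sketched like this (Python; the per-message limit is a placeholder, the real bound comes from the 65535-byte message size):

```python
def split_replies(short_channel_ids, max_ids_per_message):
    """Split a list of short channel ids into several reply payloads.

    Ids are kept sorted so that each chunk covers a contiguous range.
    """
    ids = sorted(short_channel_ids)
    return [ids[i:i + max_ids_per_message]
            for i in range(0, len(ids), max_ids_per_message)]
```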
* channel range queries: use zlib instead of gzip
but detect when a message was encoded with gzip and reply with gzip in that case.
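The detection boils down to checking the two gzip magic bytes; a Python sketch of the idea (eclair itself is Scala):

```python
import zlib

GZIP_MAGIC = b"\x1f\x8b"  # gzip streams start with these two bytes

def detect_format(data: bytes) -> str:
    """Tell gzip-compressed payloads apart from raw zlib ones."""
    return "gzip" if data[:2] == GZIP_MAGIC else "zlib"

def decompress(data: bytes) -> bytes:
    # wbits=47 (32 + 15) makes zlib auto-detect gzip and zlib headers
    return zlib.decompress(data, 47)
```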
* router: add more channel range queries logs
* Channel range queries: correctly set firstBlockNum and numberOfBlocks fields
* channel range queries: properly handle case where there is no data
we will just receive one byte (the compression format)
* channel range queries: use their compression format to query channels
when we query channels with `query_short_channel_ids`, we now use the same compression
format as in their `reply_channel_range` message. So we should be able to communicate
with peers that have not implemented all compression formats.
* router: make sure that channel ids are sorted
For channel range queries to work properly, channel ids need to be sorted.
It is then much more efficient to use a sorted map in our router state.
* always use `keySet` instead of `keys`
`SortedMap`.`keySet` returns a `SortedSet`, whereas `SortedMap`.`keys`
returns an `Iterable`. This is a critical difference because channel
range queries have requirements on the ordering of channel ids.
Using the former allows us to rely on type guarantees instead of on
assumptions that the `Iterable` is sorted in `ChannelRangeQueries`.
There is no cost difference as internally the `Iterator` is actually a
`SortedSet`.
Also, explicitly specified the type instead of relying on comments in
`Router`.
* publish channel update event on router startup
* channel range queries: use uint32 for 4-byte integers (and not int32)
* channel range queries: make sure we send at least one reply to `query_channel_range`
reply to `query_channel_range` messages for which we have no matching channel ids
with a single `reply_channel_range` that contains no channel ids.
* channel range queries: handle `query_channel_range` cleanly
add an explicit test when we have no matching channel ids and send back a reply with an
empty (uncompressed) channel ids payload
* channel range queries: rename GossipTimeRange to GossipTimestampFilter
* channel range queries: add gossip filtering test
* peer: forward all routing messages to the router
and not just channel range queries. This should not break anything, and if
it does, it will reveal a problem
* peer: add remote node id to messages sent to the router
this will improve logging and debugging and will help if we implement
banning strategies for mis-behaving peers
* router: filter messages with a wrong chainHash more cleanly
* channel range queries: set a "pass-all" timestamp filter
* router: remove useless pattern match
ChannelUpdates are wrapped in a PeerRoutingMessage now
* Peer: fix typo in comment
* Peer: optimize our timestamp filter
* Router: use mdc to log remote node id when possible
* fix typos and improve log message formatting
* Peer: rephrase scala doc comment that breaks travis
* Peer: improve timestamp filtering + minor fixes
* Electrum tests: properly stop actor system at the end of the test
* Peer: filter out node announcements against our peer's timestamp
But we don't prune node announcements for which we don't have a matching
channel in the same "rebroadcast" message
* relay htlcs to channels with the highest balance
In order to reduce unnecessary back-and-forth in case an outgoing
channel doesn't have enough capacity but another one has, the relayer
can now forward a payment to a different channel than the one specified
in the onion (to the same node, of course).
If this preferred channel returns an error, we will retry with the originally
requested channel; this way, if it fails again, the sender will always receive
an error for the channel she requested.
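A Python sketch of the selection (names and the balance model are illustrative):

```python
def pick_outgoing_channel(requested_id, channels, amount_msat):
    """channels: {channel_id: (next_node_id, balance_msat)}.

    Prefer the channel with the highest balance to the same next node; if
    none can carry the amount, fall back to the requested channel so the
    sender eventually gets an error for the channel she asked for.
    """
    next_node = channels[requested_id][0]
    candidates = [(balance, cid) for cid, (node, balance) in channels.items()
                  if node == next_node and balance >= amount_msat]
    if not candidates:
        return requested_id
    return max(candidates)[1]  # highest balance wins
```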
* improved logs on sig sent/received
* put 'sent announcements' log in debug
* added logging of IN/OUT wire messages
* added mdc support to IO classes
* reduced package length to 24 chars in logs
* add basic electrum wallet test
our wallet connects to a dockerized electrumx server
* electrum: clean up tests, and add watcher docker tests
* electrum wallet: fix balance computation issue
when different keys produced the exact same confirmed + unconfirmed balances, we
would compute an invalid balance because these duplicates would be pruned.
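A minimal Python reproduction of that bug class (hypothetical numbers): deduplicating per-key balance pairs before summing silently drops keys that happen to have identical balances.

```python
# Two keys with the exact same (confirmed, unconfirmed) balances...
balances = {"key1": (50000, 0), "key2": (50000, 0), "key3": (25000, 5000)}

def total_buggy(balances):
    # set() prunes the duplicate (50000, 0) pair, so the total is under-counted
    return sum(c + u for (c, u) in set(balances.values()))

def total_correct(balances):
    return sum(c + u for (c, u) in balances.values())
```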
* electrum: rename wallet test
* electrum: add a specific test with identical outputs
* electrum: change scripthash balance logging level to debug
* electrum: make docker tests run on windows/mac
Our electrumx docker container needs to connect to the bitcoind instance that is running on the host.
On Linux we use the host network mode, which is not available on Windows/macOS.
On Windows/macOS we use host.docker.internal, which is not available on Linux; this
requires Docker 18.03 or higher.
* `ReceivePayment` now accepts additional routing info, which is useful for nodes that are not announced on the network but still want to receive funds.
* Fix scala.NotImplementedError when public-ips config parameter contains invalid values
* more comprehensive validations
* fix unit tests
This fixes #630
We should return a `FeeInsufficient` error when an incoming htlc doesn't
pay us what we require in our latest `channel_update`.
Note that the spec encourages us to be a bit more lax than that (BOLT
7):
> SHOULD accept HTLCs that pay an older fee, for some reasonable time
after sending channel_update.
> Note: this allows for any propagation delay.
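The check itself follows BOLT 7's fee formula; a Python sketch (field names mirror `channel_update`, the helper names are illustrative):

```python
def required_fee_msat(amount_msat, fee_base_msat, fee_proportional_millionths):
    """Fee we require for forwarding amount_msat, per our latest channel_update."""
    return fee_base_msat + amount_msat * fee_proportional_millionths // 1_000_000

def fee_sufficient(amount_in_msat, amount_out_msat,
                   fee_base_msat, fee_proportional_millionths):
    # the fee actually paid is the difference between incoming and outgoing htlc
    actual_fee = amount_in_msat - amount_out_msat
    return actual_fee >= required_fee_msat(amount_out_msat, fee_base_msat,
                                           fee_proportional_millionths)
```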
* add api call to update channel relay fees
* fixed bug in GUI: a channel can have different fees in each direction!
* fire transitions on `TickRefreshChannelUpdate` (fixes#621)
* make router publish `channel_update`s on startup
* (gui) Channel info fees are now optional, and the case where channels have no known fee data is now properly handled.
* rename InvalidDustLimit to DustLimitTooSmall
* make sure that our reserve is above our dust limit
* check that their accept message is valid
see BOLT 2:
- their channel reserve must be above their dust limit
- their channel reserve must be above our dust limit
- their dust limit must be below our reserve
* channel: check to_local and to_remote amounts against channel_reserve_satoshis
see BOLT 2: The receiving node MUST fail the channel if both to_local and to_remote amounts for
the initial commitment transaction are less than or equal to channel_reserve_satoshis (see BOLT 3).
* channel: check that their open.max_accepted_htlcs is valid
* Added server address in ElectrumReady object
* Assigned the remote address to a variable to improve readability
* Checking that the master address exists in the addresses map
* set a minimum feerate-per-kw of 253 (fixes#602)
why 253 and not 250, since feerate-per-kw is feerate-per-kb / 250 and the minimum relay fee rate is 1000 satoshi/Kb?
because bitcoin core uses neither the actual tx size in bytes nor the tx weight to check fees, but a "virtual size" which is (weight + 3) / 4 ...
so we want:
fee > 1000 * virtual size
feerate-per-kw * weight > 1000 * ((weight + 3) / 4)
feerate-per-kw > 250 + 3000 / (4 * weight)
with a conservative minimum weight of 400 this gives ~252; we round up to 253 to stay safely above the minimum
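The bound can be checked numerically (Python; `(weight + 3) / 4` is Bitcoin Core's virtual size):

```python
def min_feerate_per_kw(weight):
    """Smallest feerate-per-kw whose fee meets the 1000 sat/Kvb relay minimum.

    Derived from: feerate_per_kw * weight / 1000 >= (weight + 3) / 4
    """
    return 250 + 750 / weight  # 750 / weight == 3000 / (4 * weight)
```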
* set minimum fee rate to 2 satoshi/byte
users can still change it to 1 satoshi/byte
* use better weight estimations when computing fees
* test that tx fees are above min-relay-fee
* check that remote fee updates are above acceptable minimum
we need to check that their fee rate is always above our absolute minimum threshold
or we will end up with unrelayable txs
* fix ClaimHtlcSuccessTx weight computation
* channel tests: use actual minimum fee rate
test with our absolute minimum fee rate (253): it should be valid, and anything below
should be invalid and trigger a specific error
Because we keep sending them over and over.
Using `CacheBuilder.weakKeys` will cause the serialized messages to be
cleaned up when messages are garbage collected, hence there is no need
to set a maximum size.
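The same idea in Python (a `WeakKeyDictionary` is the analogue of Guava's `CacheBuilder.weakKeys`; `Message` and `serialize` are stand-ins for the real wire types and encoder):

```python
import weakref

# Entries vanish automatically once the key message is garbage collected,
# so the cache needs no maximum size.
cache = weakref.WeakKeyDictionary()

class Message:
    """Stand-in for a wire message; real messages are case classes in eclair."""

def serialize(msg):
    return b"serialized"  # placeholder for the expensive encoding

def serialized(msg):
    if msg not in cache:
        cache[msg] = serialize(msg)  # encode once per live message object
    return cache[msg]
```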
If the closing tx is already in `mutualClosePublished`, it means that we
already know about it and don't need to re-handle it again. Every time we
successfully negotiate a mutual close, we end up publishing the
closing tx ourselves, and right after that we are notified of this tx by the
watcher. We always ended up with duplicates in the
`mutualClosePublished` field.
This fixes #568.
* feerate: use satoshi/kb instead of satoshi/byte
* API fixup: convert input feerate from sat/bytes to sat/kw
* fixup: convert input feerate from sat/bytes to sat/kw
* add cleaner access to current feerate
the implementation (block targets 2, 6, 12...) is not exposed; users will call getFeerate()
* fix feerate conversions
a kilobyte is 1000 bytes, not 1024 bytes (thanks @jimpo)
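The conversions in question, sketched in Python (1 virtual byte = 4 weight units, 1 kilobyte = 1000 bytes; function names are illustrative):

```python
def feerate_byte_to_kb(feerate_per_byte):
    return feerate_per_byte * 1000   # a kilobyte is 1000 bytes, not 1024

def feerate_kb_to_kw(feerate_per_kb):
    return feerate_per_kb // 4       # one virtual byte is 4 weight units

def feerate_byte_to_kw(feerate_per_byte):
    return feerate_kb_to_kw(feerate_byte_to_kb(feerate_per_byte))
```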
* revert commit 179dadc
keep this PR focused on 1 task only
* rename FeeratesPerKb to FeeratesPerKB