* Correctly parse short channel id
* Add test for RPC APIs
* Put akka.http.version in parent project pom
Co-Authored-By: araspitzu <a.raspitzu@protonmail.com>
* updated to scalatest 3.0.5
* use scalatest runner instead of junit
Output is far more readable, and makes console (incl. travis) reports
actually usable.
Turned off test logs as error reporting is enough to figure out what
happens.
The only downside is that we can't use junit's categories to group
tests, like we did for docker-related tests. We could use nested suites,
but that seems overkill, so I just removed the categories. Users
can now only skip or run all tests.
* update scala-maven-plugin to 3.4.2
NB: This requires maven 3.5.4, which means that we currently need to
manually install maven on travis.
Also updated Docker java version to 8u181 (8u171 for compiling).
* Implement new 'routing sync' messages
* add a new feature bit for channel queries
When we receive their init message, we check their features:
- if they set `initial_routing_sync` and `channel_range_queries`, we do nothing: we should receive a
range query shortly
- if they support channel range queries, we send a range query
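A minimal sketch of that rule, with the feature bits reduced to plain booleans (the names below are illustrative, not the actual eclair feature helpers):

```scala
// Feature flags reduced to plain booleans; illustrative, not the eclair feature helpers.
final case class RemoteFeatures(initialRoutingSync: Boolean, channelRangeQueries: Boolean)

sealed trait SyncAction
case object WaitForTheirRangeQuery extends SyncAction
case object SendOurRangeQuery extends SyncAction
case object NoRangeQuerySupport extends SyncAction

object InitHandling {
  def onInit(remote: RemoteFeatures): SyncAction =
    if (remote.initialRoutingSync && remote.channelRangeQueries) WaitForTheirRangeQuery
    else if (remote.channelRangeQueries) SendOurRangeQuery
    else NoRangeQuerySupport
}
```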
* Modify query_short_channel_id to ask for a range of ids
And not just a single id
* use `SortedMap` to store channel announcements
* don't send prune channels with channel range queries
* update range queries type to match BOLT PR
* add timestamp-based filters for gossip messages
each peer can specify a `timestamp range` to filter gossip messages against.
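For illustration, the filter check boils down to a simple range predicate (field names follow the BOLT proposal; the actual eclair message type may differ):

```scala
// Field names follow the BOLT proposal; the actual eclair message type may differ.
final case class GossipTimestampFilter(firstTimestamp: Long, timestampRange: Long)

object GossipFiltering {
  // A gossip message passes the peer's filter iff its timestamp falls in the range.
  def passes(timestamp: Long, filter: GossipTimestampFilter): Boolean =
    timestamp >= filter.firstTimestamp &&
      timestamp < filter.firstTimestamp + filter.timestampRange
}
```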
* don't preserve order in `decodeShortChannelIds`
Preserving order is not needed, and dropping it allows us to return a `Set`,
which is better suited to how we use the result.
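A small sketch of the idea, decoding 8-byte ids straight into an unordered `Set`:

```scala
import java.nio.ByteBuffer

object ShortChannelIdCodec {
  // Decode a block of 8-byte big-endian short channel ids; order is deliberately
  // not preserved since callers only need membership tests.
  def decodeShortChannelIds(data: Array[Byte]): Set[Long] = {
    val buffer = ByteBuffer.wrap(data)
    val builder = Set.newBuilder[Long]
    while (buffer.remaining() >= 8) builder += buffer.getLong()
    builder.result()
  }
}
```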
* channel range queries: handle multi-message responses
Handle case where there are too many short ids to fit in a single message.
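Conceptually this is just chunking the id list before building the replies; the per-message bound below is illustrative, the real code derives it from the message size limit:

```scala
object ReplyChunking {
  // Split the ids over several replies; the per-message bound is illustrative,
  // the real code derives it from the maximum message size.
  def chunkIds(shortChannelIds: Seq[Long], maxIdsPerMessage: Int): Seq[Seq[Long]] =
    shortChannelIds.grouped(maxIdsPerMessage).toSeq
}
```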
* channel range queries: use zlib instead of gzip
but detect when a message was encoded with gzip and reply with gzip in that case.
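The detection relies on the stream headers: gzip streams start with the magic bytes `0x1f 0x8b`, zlib streams with `0x78`. A sketch:

```scala
// The compression used by the peer, detected from the first bytes of the payload:
// gzip streams start with the magic bytes 0x1f 0x8b, zlib streams with 0x78.
sealed trait Compression
case object Zlib extends Compression
case object Gzip extends Compression

object Compression {
  def detect(data: Array[Byte]): Option[Compression] =
    if (data.length >= 2 && (data(0) & 0xff) == 0x1f && (data(1) & 0xff) == 0x8b) Some(Gzip)
    else if (data.nonEmpty && (data(0) & 0xff) == 0x78) Some(Zlib)
    else None
}
```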
* router: add more channel range queries logs
* Channel range queries: correctly set firstBlockNum and numberOfBlocks fields
* channel range queries: properly handle case where there is no data
we will just receive one byte (the compression format)
* channel range queries: use their compression format to query channels
when we query channels with `query_short_channel_ids`, we now use the same compression
format as in their `reply_channel_range` message. So we should be able to communicate
with peers that have not implemented all compression formats.
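A sketch of that rule using the JDK compressors, with the peer's format reduced to a gzip/zlib boolean (the real payload also carries a leading format byte, omitted here):

```scala
import java.io.ByteArrayOutputStream
import java.nio.ByteBuffer
import java.util.zip.{DeflaterOutputStream, GZIPOutputStream}

object QueryEncoding {
  // Compress our query_short_channel_ids payload with whatever the peer used in
  // its reply_channel_range (reduced to a gzip/zlib boolean here; the real
  // payload also carries a leading format byte, omitted in this sketch).
  def encodeShortChannelIds(ids: Seq[Long], theyUsedGzip: Boolean): Array[Byte] = {
    val raw = ByteBuffer.allocate(8 * ids.size)
    ids.foreach(id => raw.putLong(id))
    val bos = new ByteArrayOutputStream()
    val out = if (theyUsedGzip) new GZIPOutputStream(bos) else new DeflaterOutputStream(bos)
    out.write(raw.array())
    out.finish()
    bos.toByteArray
  }
}
```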
* router: make sure that channel ids are sorted
For channel range queries to work properly, channel ids need to be sorted.
It is then much more efficient to use a sorted map in our router state.
* always use `keySet` instead of `keys`
`SortedMap`.`keySet` returns a `SortedSet`, whereas `SortedMap`.`keys`
returns an `Iterable`. This is a critical difference because channel
range queries have requirements on the ordering of channel ids.
Using the former allows us to rely on type guarantees instead of on
assumptions that the `Iterable` is sorted in `ChannelRangeQueries`.
There is no cost difference, as the `Iterable` returned by `keys` is actually a
`SortedSet` internally.
Also, explicitly specified the type instead of relying on comments in
`Router`.
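The type-level difference in a nutshell:

```scala
import scala.collection.immutable.{SortedMap, SortedSet}

object KeySetVsKeys {
  val channels: SortedMap[Long, String] = SortedMap(3L -> "c", 1L -> "a", 2L -> "b")
  val orderedIds: SortedSet[Long] = channels.keySet  // ordering guaranteed by the type
  val maybeOrdered: Iterable[Long] = channels.keys   // sorted in practice, but the type says nothing
}
```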
* publish channel update event on router startup
* channel range queries: use uint32 for 4-byte integers (and not int32)
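Assuming the wire codecs are built with scodec (as in eclair), the practical difference is that `uint32` decodes to a `Long` and can carry values above `Int.MaxValue`:

```scala
import scodec.codecs.uint32

object Uint32RoundTrip {
  // uint32 decodes to a Long, so 4-byte values above Int.MaxValue (e.g. 0xffffffff)
  // round-trip correctly, which a signed int32 cannot represent.
  val bits = uint32.encode(4294967295L).require
  val back: Long = uint32.decode(bits).require.value // 4294967295L
}
```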
* channel range queries: make sure we send at least one reply to `query_channel_range`
reply to `query_channel_range` messages for which we have no matching channel ids
with a single `reply_channel_range` that contains no channel ids.
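A sketch of that guarantee, with an illustrative message shape:

```scala
// Message shape is illustrative, not the actual wire type.
final case class ReplyChannelRange(firstBlockNum: Long, numberOfBlocks: Long, shortChannelIds: Seq[Long])

object ReplyBuilder {
  // Even when nothing matches the queried range we still send one reply, with an
  // empty id list, so the peer knows the query has been answered.
  def buildReplies(firstBlockNum: Long, numberOfBlocks: Long, chunks: Seq[Seq[Long]]): Seq[ReplyChannelRange] =
    if (chunks.isEmpty) Seq(ReplyChannelRange(firstBlockNum, numberOfBlocks, Nil))
    else chunks.map(ids => ReplyChannelRange(firstBlockNum, numberOfBlocks, ids))
}
```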
* channel range queries: handle `query_channel_range` cleanly
add an explicit test when we have no matching channel ids and send back a reply with an
empty (uncompressed) channel ids payload
* channel range queries: rename GossipTimeRange to GossipTimestampFilter
* channel range queries: add gossip filtering test
* peer: forward all routing messages to the router
and not just channel range queries. This should not break anything, and if
it does it would reveal an existing problem.
* peer: add remote node id to messages sent to the router
this will improve logging and debugging and will help if we implement
banning strategies for misbehaving peers
* router: filter messages with a wrong chainHash more cleanly
* channel range queries: set a "pass-all" timestamp filter
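For illustration, a "pass-all" filter just starts at 0 and spans the full uint32 range (same illustrative type as in the earlier sketch):

```scala
// Same illustrative filter type as in the earlier sketch.
final case class GossipTimestampFilter(firstTimestamp: Long, timestampRange: Long)

object PassAllFilter {
  // Start at 0 and cover the full uint32 range: the peer will relay all of its gossip.
  val passAll = GossipTimestampFilter(firstTimestamp = 0L, timestampRange = 0xffffffffL)
}
```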
* router: remove useless pattern match
ChannelUpdates are wrapped in a PeerRoutingMessage now
* Peer: fix typo in comment
* Peer: optimize our timestamp filter
* Router: use mdc to log remote node id when possible
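A sketch of the idea using Akka's `DiagnosticActorLogging`; the `PeerRoutingMessage` shape here is illustrative and the actual eclair wiring may differ:

```scala
import akka.actor.{Actor, DiagnosticActorLogging}
import akka.event.Logging.MDC

// Illustrative message shape: the router receives routing messages tagged with
// the remote node id they came from.
case class PeerRoutingMessage(remoteNodeId: String, msg: Any)

class RouterSketch extends Actor with DiagnosticActorLogging {
  // Put the remote node id into the logging MDC when the message carries one.
  override def mdc(currentMessage: Any): MDC = currentMessage match {
    case PeerRoutingMessage(remoteNodeId, _) => Map("remoteNodeId" -> remoteNodeId)
    case _                                   => Map.empty
  }

  override def receive: Receive = {
    case PeerRoutingMessage(_, msg) => log.info("received routing message {}", msg)
  }
}
```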
* fix typos and improve log message formatting
* Peer: rephrase scala doc comment that breaks travis
* Peer: improve timestamp filtering + minor fixes
* Electrum tests: properly stop actor system at the end of the test
* Peer: filter out node announcements against our peer's timestamp
But we don't prune node announcements for which we don't have a matching
channel in the same "rebroadcast" message
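A sketch of that filtering step at rebroadcast time (types are illustrative):

```scala
// Illustrative types, not the actual eclair announcement/filter classes.
final case class NodeAnnouncement(nodeId: String, timestamp: Long)
final case class TimestampFilter(firstTimestamp: Long, timestampRange: Long) {
  def contains(timestamp: Long): Boolean =
    timestamp >= firstTimestamp && timestamp < firstTimestamp + timestampRange
}

object Rebroadcast {
  // Announcements outside the peer's filter are simply not sent to that peer;
  // nothing is pruned from the router itself.
  def nodesToSend(nodes: Seq[NodeAnnouncement], peerFilter: TimestampFilter): Seq[NodeAnnouncement] =
    nodes.filter(n => peerFilter.contains(n.timestamp))
}
```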
* relay htlcs to channels with the highest balance
In order to reduce unnecessary back-and-forth in case an outgoing
channel doesn't have enough capacity but another one has, the relayer
can now forward a payment to a different channel than the one specified
in the onion (to the same node, of course).
If this preferred channel returns an error, we will retry with the originally
requested channel; this way, if it fails again, the sender will always receive
an error for the channel she requested.
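A sketch of the selection rule, with an illustrative channel model:

```scala
// Illustrative channel model, not the actual relayer data structures.
final case class OutgoingChannel(shortChannelId: Long, nextNodeId: String, availableBalanceMsat: Long)

object ChannelSelection {
  // Prefer the channel to the same next node with the highest available balance;
  // if that preferred channel later fails, the relayer retries with the channel
  // that was actually requested in the onion.
  def selectChannel(requestedShortChannelId: Long, channels: Seq[OutgoingChannel]): Option[OutgoingChannel] =
    channels.find(_.shortChannelId == requestedShortChannelId).map { requested =>
      channels
        .filter(_.nextNodeId == requested.nextNodeId) // must lead to the same node
        .maxBy(_.availableBalanceMsat)                // pick the biggest balance
    }
}
```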
* improved logs on sig sent/received
* put 'sent announcements' log in debug
* added logging of IN/OUT wire messages
* added mdc support to IO classes
* reduced package length to 24 chars in logs
* Accept bech32 addresses
Our android wallet will be able to send funds to bech32 addresses
* improve parsing of base58/bech32 addresses
Return appropriate errors when a base58 address is parseable but on the wrong chain
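A sketch of the decision logic only; the real base58/bech32 decoding comes from the bitcoin library, represented here as function parameters so no particular API is assumed:

```scala
sealed trait AddressError
case object WrongChain extends AddressError   // valid base58, but the prefix belongs to another chain
case object NotAnAddress extends AddressError // neither valid base58 nor valid bech32

object AddressParsing {
  // The decoders are passed in as functions so that no particular library API is assumed.
  def parseAddress(
      address: String,
      expectedBase58Prefixes: Set[Byte],
      decodeBase58: String => Option[(Byte, Array[Byte])], // prefix + hash, None if not base58
      decodeBech32: String => Option[Array[Byte]]          // witness program, None if not bech32
  ): Either[AddressError, Array[Byte]] =
    decodeBase58(address) match {
      case Some((prefix, hash)) if expectedBase58Prefixes.contains(prefix) => Right(hash)
      case Some(_)                                                         => Left(WrongChain)
      case None                                                            => decodeBech32(address).toRight(NotAnAddress)
    }
}
```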
* add test with invalid address (not parseable as base58 or bech32)
* fix invalid version test
Previously we were only stealing the remote's main output when they published a revoked commit, and were relying on a sufficiently high `channel_reserve` to disincentivize cheating.
In order to also steal the htlc outputs, we need to handle both cases:
- they only publish their revoked commit tx => we claim the htlc outputs directly from the commit tx
- they publish their revoked commit tx, and their 2nd-stage HtlcSuccessTx and HtlcTimeoutTx txs => we claim the outputs of these htlc txs
To do that, we need to be able to reconstruct htlc scripts (`htlcOffered` and `htlcReceived`), therefore we need to store `paymentHash` and `cltvExpiry` for each htlc we sign. Note that this won't be needed in the future when we have MAST.
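A sketch of that bookkeeping, keyed by commitment number (illustrative types, not the actual storage schema):

```scala
import scala.collection.mutable

// Illustrative types, not the actual eclair storage schema.
final case class HtlcInfo(paymentHash: Seq[Byte], cltvExpiry: Long)

class HtlcInfoDb {
  private val byCommitment = mutable.Map.empty[Long, List[HtlcInfo]]

  // Called whenever we sign a remote commitment containing this htlc.
  def addHtlcInfo(commitmentNumber: Long, info: HtlcInfo): Unit =
    byCommitment.update(commitmentNumber, info :: byCommitment.getOrElse(commitmentNumber, Nil))

  // Called when a revoked commitment tx with this number shows up on-chain: the
  // returned paymentHash/cltvExpiry pairs let us rebuild the htlcOffered /
  // htlcReceived scripts and build penalty txs for the htlc outputs.
  def listHtlcInfos(commitmentNumber: Long): List[HtlcInfo] =
    byCommitment.getOrElse(commitmentNumber, Nil)
}
```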
* store `paymentHash` and `cltvExpiry` for each signed htlc
* spend htlc outputs with penalty txs
* added full integration tests on revoked scenario
Most notably, we no longer discard previously signed updates.
Instead, we re-send them and re-send the exact same signature. For that to
work, we had to be careful to re-send rev/sig in the same order, because
that impacts what is signed.
NB: this breaks storage serialization backward compatibility