
Merge branch 'android' into android-phoenix

dpad85 2020-01-15 12:17:25 +01:00
commit caa79c2fb7
No known key found for this signature in database
GPG key ID: 574C8C6A1673E987
13 changed files with 612 additions and 51 deletions

CONTRIBUTING.md (new file, +153 lines)

@@ -0,0 +1,153 @@
# Contributing to Eclair
Eclair welcomes contributions in the form of peer review, testing and patches.
This document explains the practical process and guidelines for contributing.
While developing a Lightning implementation is an exciting project that spans many domains
(cryptography, peer-to-peer networking, databases, etc), contributors must keep in mind that this
represents real money and introducing bugs or security vulnerabilities can have far more dire
consequences than in typical projects. In the world of cryptocurrencies, even the smallest bug in
the wrong area can cost users a significant amount of money.
If you're looking for somewhere to start contributing, check out the [good first issue](https://github.com/acinq/eclair/issues?q=is%3Aopen+is%3Aissue+label%3A"good+first+issue") list.
Another way to start contributing is by adding tests or improving them.
This will help you understand the different parts of the codebase and how they work together.
## Communicating
We recommend using our Gitter [developers channel](https://gitter.im/ACINQ/developers).
Introducing yourself and explaining what you'd like to work on is always a good idea: you will get
some pointers and feedback from experienced contributors. It will also ensure that you're not
duplicating work that someone else is doing.
We use Github issues only for, well, issues (mostly bugs that need to be investigated).
You can also use Github issues for [feature requests](https://github.com/acinq/eclair/issues?q=is%3Aissue+label%3A"feature+request").
## Recommended Reading
- [Bitcoin Whitepaper](https://bitcoin.org/bitcoin.pdf)
- [Lightning Network Whitepaper](https://lightning.network/lightning-network-paper.pdf)
- [Deployable Lightning](https://github.com/ElementsProject/lightning/raw/master/doc/deployable-lightning.pdf)
- [Understanding the Lightning Network](https://bitcoinmagazine.com/articles/understanding-the-lightning-network-part-building-a-bidirectional-payment-channel-1464710791)
- [Lightning Network Specification](https://github.com/lightningnetwork/lightning-rfc)
- [High Level Lightning Network Specification](https://medium.com/@rusty_lightning/the-bitcoin-lightning-spec-part-1-8-a7720fb1b4da)
## Recommended Skillset
Eclair uses [Scala](https://www.scala-lang.org/) and [Akka](https://akka.io/).
Some experience with these technologies is required to contribute.
There are a lot of good resources online to learn about them.
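If you have not used Akka before, here is a minimal sketch (illustrative only; the `Counter` actor is not part of eclair) of the two idioms you will meet everywhere in this codebase: sending messages with `!` and switching behavior with `context become`:

```scala
import akka.actor.{Actor, ActorSystem, Props}

// A toy actor that keeps a counter as immutable state and changes behavior
// with `context become`, the same pattern used by eclair actors such as
// ElectrumWatcher's `running` state.
class Counter extends Actor {
  def counting(n: Int): Receive = {
    case "incr" => context become counting(n + 1)
    case "get" => sender ! n
  }
  override def receive: Receive = counting(0)
}

object CounterDemo extends App {
  val system = ActorSystem("demo")
  val counter = system.actorOf(Props(new Counter), "counter")
  counter ! "incr"
  counter ! "incr"
}
```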
## Contributor Workflow
To contribute a patch, the workflow is as follows:
1. [Fork repository](https://help.github.com/en/github/getting-started-with-github/fork-a-repo) (only the first time)
2. Create a topic branch
3. Add commits
4. Open a pull request
### Pull Request Philosophy
Pull requests should always be focused. For example, a pull request could add a feature, fix a bug,
or refactor code; but not a mixture.
Please also avoid super pull requests which attempt to do too much, are overly large, or overly
complex as this makes review difficult.
You should try your best to make reviewers' lives as easy as possible: a lot more time will be
spent reading your code than the time you spent writing it.
The quicker your changes are merged to master, the less time you will need to spend rebasing and
otherwise trying to keep up with the master branch.
Pull requests should always include a clean, detailed description of what they fix/improve, why,
and how.
Even if you think that it is obvious, don't be shy and add explicit details and explanations.
When fixing a bug, please start by adding a failing test that reproduces the issue.
Create a first commit containing that test without the fix: this makes it easy to verify that the
test correctly failed. You can then fix the bug in additional commits.
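For instance, a first commit might contain only a test like the following sketch (the issue number is made up; `ShortChannelId` and its string format are real parts of this codebase):

```scala
import fr.acinq.eclair.ShortChannelId
import org.scalatest.FunSuite

// Hypothetical first commit for a made-up issue: the test pins down the
// expected behavior and is meant to fail until the fix commit is added.
class Issue1234Spec extends FunSuite {
  test("short channel id is displayed as <blockHeight>x<txIndex>x<outputIndex>") {
    assert(ShortChannelId(600000, 42, 1).toString == "600000x42x1")
  }
}
```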
When adding a new feature, thought must be given to the long term technical debt and maintenance
that feature may require after inclusion. Before proposing a new feature that will require
maintenance, please consider if you are willing to maintain it (including bug fixing).
When addressing pull request comments, we recommend using [fixup commits](https://robots.thoughtbot.com/autosquashing-git-commits).
The reason for this is twofold: it makes it easier for the reviewer to see what changes have been
made between versions (since Github doesn't easily show prior versions) and it makes it easier on
the PR author as they can set it to auto-squash the fixup commits on rebase.
It's recommended to take great care in writing tests and ensuring the entire test suite has a
stable successful outcome; eclair uses continuous integration techniques and having a stable build
helps the reviewers with their job.
We don't have hard rules around code style, but we do avoid having too many conflicting styles;
look around and make sure your code fits well with the rest of the codebase.
### Signed Commits
We ask contributors to sign their commits.
You can find setup instructions [here](https://help.github.com/en/github/authenticating-to-github/signing-commits).
### Commit Message
Eclair keeps a clean commit history on the master branch with well-formed commit messages.
Here is a model Git commit message:
```text
Short (50 chars or less) summary of changes
More detailed explanatory text, if necessary. Wrap it to about 72
characters or so. In some contexts, the first line is treated as the
subject of an email and the rest of the text as the body. The blank
line separating the summary from the body is critical (unless you omit
the body entirely); tools like rebase can get confused if you run the
two together.
Write your commit message in the present tense: "Fix bug" and not
"Fixed bug". This convention matches up with commit messages generated
by commands like git merge and git revert.
Further paragraphs come after blank lines.
- Bullet points are okay, too
- Typically a hyphen or asterisk is used for the bullet, preceded by a
single space, with blank lines in between, but conventions vary here
- Use a hanging indent
```
### Dependencies
We try to minimize our dependencies (libraries and tools). Introducing new dependencies increases
package size, attack surface and cognitive overhead.
Since Eclair is [running on Android](https://github.com/acinq/eclair-mobile), we have a requirement
to be compatible with Java 7. This currently restricts the set of dependencies we can add and the
language features we use.
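As one concrete illustration (our assumption about what this restriction looks like in practice, not an official list): `java.time` is a Java 8 API, so Java 7-compatible helpers such as `scala.compat.Platform` (imported elsewhere in the codebase) are used instead:

```scala
import scala.compat.Platform

object Java7FriendlyTime extends App {
  // Fine on Java 7 / Android: epoch milliseconds via System.currentTimeMillis.
  val nowSeconds: Long = Platform.currentTime / 1000
  println(s"now=$nowSeconds")

  // Not fine for this codebase: java.time is a Java 8+ API.
  // val nowSeconds: Long = java.time.Instant.now().getEpochSecond
}
```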
If your contribution is adding a new dependency, please detail:
- why you need it
- why you chose this specific library/tool (a thorough analysis of alternatives will be
appreciated)
Contributions that add new dependencies may take longer to approve because a detailed audit of the
dependency may be required.
### IntelliJ Tips
If you're using [IntelliJ](https://www.jetbrains.com/idea/), here are some useful commands:
- Ctrl+Alt+L: format file (ensures consistency in the codebase)
- Ctrl+Alt+O: optimize imports (removes unused imports)
Note that we use IntelliJ's default formatting configuration for Scala to minimize conflicts.
### Contribution Checklist
- The code being submitted is accompanied by tests which exercise both the positive and negative
(error paths) conditions (if applicable)
- The code being submitted is correctly formatted
- The code being submitted has a clean, easy-to-follow commit history
- All commits are signed

SECURITY.md (new file, +21 lines)

@@ -0,0 +1,21 @@
# Security Policy
## Reporting a Vulnerability
To report security issues send an email to security@acinq.fr (not for support).
The following keys may be used to communicate sensitive information to developers:
| Name | Fingerprint |
|---------------------|---------------------------------------------------|
| Pierre-Marie Padiou | 6AA4 5A4C 209A 2D30 64CF 66BE E434 ED29 2E85 643A |
| Fabrice Drouin | C25A 288A 842E AF7A A5B5 303F 7A73 FE77 DE2C 4027 |
You can import these keys by running the following commands:
```sh
gpg --fetch-keys https://acinq.co/pgp/padioupm.asc
gpg --fetch-keys https://acinq.co/pgp/drouinf.asc
```
After importing, verify that the fingerprints of the imported keys match the ones listed above.

reference.conf

@@ -120,7 +120,7 @@ eclair {
   sync {
     request-node-announcements = true // if true we will ask for node announcements when we receive channel ids that we don't know
     encoding-type = zlib // encoding for short_channel_ids and timestamps in query channel sync messages; other possible value is "uncompressed"
-    channel-range-chunk-size = 2500 // max number of short_channel_ids (+ timestamps + checksums) in reply_channel_range *do not change this unless you know what you are doing*
+    channel-range-chunk-size = 1500 // max number of short_channel_ids (+ timestamps + checksums) in reply_channel_range *do not change this unless you know what you are doing*
     channel-query-chunk-size = 100 // max number of short_channel_ids in query_short_channel_ids *do not change this unless you know what you are doing*
   }

ShortChannelId.scala

@@ -28,6 +28,8 @@ case class ShortChannelId(private val id: Long) extends Ordered[ShortChannelId]
   def toLong: Long = id
+  def blockHeight = ShortChannelId.blockHeight(this)
   override def toString: String = {
     val TxCoordinates(blockHeight, txIndex, outputIndex) = ShortChannelId.coordinates(this)
     s"${blockHeight}x${txIndex}x${outputIndex}"
@@ -48,12 +50,23 @@ object ShortChannelId {
   def toShortId(blockHeight: Int, txIndex: Int, outputIndex: Int): Long = ((blockHeight & 0xFFFFFFL) << 40) | ((txIndex & 0xFFFFFFL) << 16) | (outputIndex & 0xFFFFL)
-  def coordinates(shortChannelId: ShortChannelId): TxCoordinates = TxCoordinates(((shortChannelId.id >> 40) & 0xFFFFFF).toInt, ((shortChannelId.id >> 16) & 0xFFFFFF).toInt, (shortChannelId.id & 0xFFFF).toInt)
+  @inline
+  def blockHeight(shortChannelId: ShortChannelId) = ((shortChannelId.id >> 40) & 0xFFFFFF).toInt
+  @inline
+  def txIndex(shortChannelId: ShortChannelId) = ((shortChannelId.id >> 16) & 0xFFFFFF).toInt
+  @inline
+  def outputIndex(shortChannelId: ShortChannelId) = (shortChannelId.id & 0xFFFF).toInt
+  def coordinates(shortChannelId: ShortChannelId): TxCoordinates = TxCoordinates(blockHeight(shortChannelId), txIndex(shortChannelId), outputIndex(shortChannelId))
   /**
-   * This is a trick to encode a partial hash of node id in a short channel id.
-   * We use a prefix of 0xff to make it easily distinguishable from normal short channel id.
-   */
+   * This is a trick to encode a partial hash of node id in a short channel id.
+   * We use a prefix of 0xff to make it easily distinguishable from normal short channel id.
+   *
+   * @note Phoenix only.
+   */
   def peerId(remoteNodeId: PublicKey): ShortChannelId = ShortChannelId(0xff00000000000000L | remoteNodeId.value.takeRight(7).toLong())
   def isPeerId(shortChannelId: ShortChannelId): Boolean = (shortChannelId.id & 0xff00000000000000L) == 0xff00000000000000L
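For readers unfamiliar with the layout these accessors decode: a short channel id packs its coordinates into a single 64-bit value, with the block height in the top 24 bits, the transaction index in the middle 24 bits and the output index in the low 16 bits. A minimal round-trip sketch using the functions above (the demo object is ours; the API is from this file):

```scala
import fr.acinq.eclair.ShortChannelId

object ShortChannelIdLayoutDemo extends App {
  // Pack coordinates into one 64-bit value:
  // | 24-bit blockHeight | 24-bit txIndex | 16-bit outputIndex |
  val id = ShortChannelId(ShortChannelId.toShortId(blockHeight = 600000, txIndex = 1500, outputIndex = 1))
  assert(ShortChannelId.blockHeight(id) == 600000)
  assert(ShortChannelId.txIndex(id) == 1500)
  assert(ShortChannelId.outputIndex(id) == 1)
  assert(id.toString == "600000x1500x1")
  // Peer ids built by peerId always have their top byte set to 0xff, far above
  // any realistic block height, which is what isPeerId's mask test relies on.
  println(s"round-trip ok: $id")
}
```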

ElectrumWatcher.scala

@@ -135,20 +135,23 @@ class ElectrumWatcher(blockCount: AtomicLong, client: ActorRef) extends Actor wi
             Some(w)
           }).flatten
           // this is for WatchConfirmed
-          watches.collect {
-            case w@WatchConfirmed(_, txid, _, minDepth, _) if txid == tx.txid =>
-              val txheight = item.height
-              val confirmations = height - txheight + 1
-              log.info(s"txid=$txid was confirmed at height=$txheight and now has confirmations=$confirmations (currentHeight=$height)")
-              if (confirmations >= minDepth) {
-                if (w.minDepth == 0 && w.event == BITCOIN_FUNDING_DEPTHOK) {
-                  // TODO: special case for phoenix
-                  self ! ElectrumClient.GetMerkleResponse(tx.txid, Nil, block_height = Random.nextInt(height - 10), 0, Some(tx))
-                } else {
-                  // we need to get the tx position in the block
-                  client ! ElectrumClient.GetMerkle(txid, txheight, Some(tx))
-                }
-              }
-          }
+          // don't ask for merkle proof for unconfirmed transactions
+          if (item.height > 0) {
+            watches.collect {
+              case w@WatchConfirmed(_, txid, _, minDepth, _) if txid == tx.txid =>
+                val txheight = item.height
+                val confirmations = height - txheight + 1
+                log.info(s"txid=$txid was confirmed at height=$txheight and now has confirmations=$confirmations (currentHeight=$height)")
+                if (confirmations >= minDepth) {
+                  if (w.minDepth == 0 && w.event == BITCOIN_FUNDING_DEPTHOK) {
+                    // TODO: special case for phoenix
+                    self ! ElectrumClient.GetMerkleResponse(tx.txid, Nil, block_height = Random.nextInt(height - 10), 0, Some(tx))
+                  } else {
+                    // we need to get the tx position in the block
+                    client ! ElectrumClient.GetMerkle(txid, txheight, Some(tx))
+                  }
+                }
+            }
+          }
           context become running(height, tip, watches -- watchSpentTriggered, scriptHashStatus, block2tx, sent)

Peer.scala

@@ -33,7 +33,7 @@ import fr.acinq.eclair.wire._
 import fr.acinq.eclair.{wire, _}
 import kamon.Kamon
 import scodec.Attempt
-import scodec.bits.ByteVector
+import scodec.bits.{BitVector, ByteVector}
 import scala.compat.Platform
 import scala.concurrent.duration._
@@ -102,7 +102,21 @@ class Peer(val nodeParams: NodeParams, remoteNodeId: PublicKey, authenticator: A
       context watch transport
       val localInit = nodeParams.overrideFeatures.get(remoteNodeId) match {
         case Some(f) => wire.Init(f)
-        case None => wire.Init(nodeParams.features)
+        case None =>
+          // Eclair-mobile thinks feature bit 15 (payment_secret) is gossip_queries_ex which creates issues, so we mask
+          // off basic_mpp and payment_secret. As long as they're provided in the invoice it's not an issue.
+          // We use a long enough mask to account for future features.
+          // TODO: remove that once eclair-mobile is patched.
+          val tweakedFeatures = BitVector.bits(nodeParams.features.bits.reverse.toIndexedSeq.zipWithIndex.map {
+            // we disable those bits if they are set...
+            case (true, 14) => false
+            case (true, 15) => false
+            case (true, 16) => false
+            case (true, 17) => false
+            // ... and leave the others untouched
+            case (value, _) => value
+          }).reverse.bytes.dropWhile(_ == 0)
+          wire.Init(tweakedFeatures)
       }
       log.info(s"using features=${localInit.features.toBin}")
       transport ! localInit

Router.scala

@@ -715,6 +715,13 @@ object Router {
   val shortChannelIdKey = Context.key[ShortChannelId]("shortChannelId", ShortChannelId(0))
   val remoteNodeIdKey = Context.key[String]("remoteNodeId", "unknown")
+  // maximum number of ids we can keep in a single chunk and still have an encoded reply that is smaller than 65Kb
+  // please note that:
+  // - this is based on the worst case scenario where the peer wants timestamps and checksums and the reply is not compressed
+  // - the maximum number of public channels in a single block so far is less than 300, and the maximum number of tx per block
+  //   almost never exceeds 2800, so this is not a real limitation yet
+  val MAXIMUM_CHUNK_SIZE = 3200
   def props(nodeParams: NodeParams, watcher: ActorRef, initialized: Option[Promise[Done]] = None) = Props(new Router(nodeParams, watcher, initialized))
   def toFakeUpdate(extraHop: ExtraHop, htlcMaximum: MilliSatoshi): ChannelUpdate = {
def toFakeUpdate(extraHop: ExtraHop, htlcMaximum: MilliSatoshi): ChannelUpdate = {
@@ -786,8 +793,8 @@
    * Filters channels that we want to send to nodes asking for a channel range
    */
   def keep(firstBlockNum: Long, numberOfBlocks: Long, id: ShortChannelId): Boolean = {
-    val TxCoordinates(height, _, _) = ShortChannelId.coordinates(id)
-    height >= firstBlockNum && height <= (firstBlockNum + numberOfBlocks)
+    val height = id.blockHeight
+    height >= firstBlockNum && height < (firstBlockNum + numberOfBlocks)
   }
   def shouldRequestUpdate(ourTimestamp: Long, ourChecksum: Long, theirTimestamp_opt: Option[Long], theirChecksum_opt: Option[Long]): Boolean = {
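The second change in this hunk fixes an off-by-one: per BOLT 7, a channel range query covers heights first_blocknum to first_blocknum + number_of_blocks - 1, so the upper bound must be exclusive. A quick sanity check of the new predicate (the wrapper object is ours):

```scala
import fr.acinq.eclair.ShortChannelId
import fr.acinq.eclair.router.Router

object KeepRangeCheck extends App {
  // range [100, 100 + 10) keeps heights 100..109 and rejects height 110
  assert(Router.keep(100, 10, ShortChannelId(100, 0, 0)))
  assert(Router.keep(100, 10, ShortChannelId(109, 0, 0)))
  assert(!Router.keep(100, 10, ShortChannelId(110, 0, 0)))
  println("keep() bounds behave as expected")
}
```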
@@ -972,30 +979,83 @@
     crc32c(data)
   }
-  case class ShortChannelIdsChunk(firstBlock: Long, numBlocks: Long, shortChannelIds: List[ShortChannelId])
-  /**
-   * Have to split ids because otherwise message could be too big
-   * there could be several reply_channel_range messages for a single query
-   */
-  def split(shortChannelIds: SortedSet[ShortChannelId], channelRangeChunkSize: Int): List[ShortChannelIdsChunk] = {
-    // this algorithm can split blocks (meaning that we can in theory generate several replies with the same first_block/num_blocks
-    // and a different set of short_channel_ids) but it doesn't matter
-    if (shortChannelIds.isEmpty) {
-      List(ShortChannelIdsChunk(0, 0, List.empty))
-    } else {
-      shortChannelIds
-        .grouped(channelRangeChunkSize)
-        .toList
-        .map { group =>
-          // NB: group is never empty
-          val firstBlock: Long = ShortChannelId.coordinates(group.head).blockHeight.toLong
-          val numBlocks: Long = ShortChannelId.coordinates(group.last).blockHeight.toLong - firstBlock + 1
-          ShortChannelIdsChunk(firstBlock, numBlocks, group.toList)
-        }
-    }
-  }
+  case class ShortChannelIdsChunk(firstBlock: Long, numBlocks: Long, shortChannelIds: List[ShortChannelId]) {
+    /**
+     * @param maximumSize maximum size of the short channel ids list
+     * @return a chunk with at most `maximumSize` ids
+     */
+    def enforceMaximumSize(maximumSize: Int) = {
+      if (shortChannelIds.size <= maximumSize) this else {
+        // we use a random offset here, so even if shortChannelIds.size is much bigger than maximumSize (which should
+        // not happen) peers will eventually receive info about all channels in this chunk
+        val offset = Random.nextInt(shortChannelIds.size - maximumSize + 1)
+        this.copy(shortChannelIds = this.shortChannelIds.slice(offset, offset + maximumSize))
+      }
+    }
+  }
+  /**
+   * Split short channel ids into chunks, because otherwise the message could be too big.
+   * There could be several reply_channel_range messages for a single query, but we make sure that the returned
+   * chunks fully cover the [firstBlockNum, numberOfBlocks] range that was requested.
+   *
+   * @param shortChannelIds       list of short channel ids to split
+   * @param firstBlockNum         first block height requested by our peer
+   * @param numberOfBlocks        number of blocks requested by our peer
+   * @param channelRangeChunkSize target chunk size. All ids that have the same block height will be grouped together, so
+   *                              returned chunks may still contain more than `channelRangeChunkSize` elements
+   * @return a list of short channel id chunks
+   */
+  def split(shortChannelIds: SortedSet[ShortChannelId], firstBlockNum: Long, numberOfBlocks: Long, channelRangeChunkSize: Int): List[ShortChannelIdsChunk] = {
+    // see BOLT7: MUST encode a short_channel_id for every open channel it knows in blocks first_blocknum to first_blocknum plus number_of_blocks minus one
+    val it = shortChannelIds.iterator.dropWhile(_.blockHeight < firstBlockNum).takeWhile(_.blockHeight < firstBlockNum + numberOfBlocks)
+    if (it.isEmpty) {
+      List(ShortChannelIdsChunk(firstBlockNum, numberOfBlocks, List.empty))
+    } else {
+      // we want to split ids into different chunks, with the following rules by order of priority:
+      // - ids that have the same block height must be grouped in the same chunk
+      // - a chunk should contain `channelRangeChunkSize` ids
+      @tailrec
+      def loop(currentChunk: List[ShortChannelId], acc: List[ShortChannelIdsChunk]): List[ShortChannelIdsChunk] = {
+        if (it.hasNext) {
+          val id = it.next()
+          val currentHeight = currentChunk.head.blockHeight
+          if (id.blockHeight == currentHeight) {
+            loop(id :: currentChunk, acc) // same height => always add to the current chunk
+          } else if (currentChunk.size < channelRangeChunkSize) {
+            loop(id :: currentChunk, acc) // different height but we're under the size target => add to the current chunk
+          } else {
+            // different height and over the size target => start a new chunk
+            // we always prepend because it's more efficient, so we have to reverse the current chunk
+            // for the first chunk, we make sure that we start at the requested first block
+            val first = if (acc.isEmpty) firstBlockNum else currentChunk.last.blockHeight
+            val count = currentChunk.head.blockHeight - first + 1
+            loop(id :: Nil, ShortChannelIdsChunk(first, count, currentChunk.reverse) :: acc)
+          }
+        } else {
+          // for the last chunk, we make sure that we cover the requested block range
+          val first = if (acc.isEmpty) firstBlockNum else currentChunk.last.blockHeight
+          val count = numberOfBlocks - first + firstBlockNum
+          (ShortChannelIdsChunk(first, count, currentChunk.reverse) :: acc).reverse
+        }
+      }
+      val first = it.next()
+      val chunks = loop(first :: Nil, Nil)
+      // make sure that all our chunks match our max size policy
+      enforceMaximumSize(chunks)
+    }
+  }
+  /**
+   * Enforce max-size constraints for each chunk.
+   *
+   * @param chunks list of short channel id chunks
+   * @return a processed list of chunks
+   */
+  def enforceMaximumSize(chunks: List[ShortChannelIdsChunk]): List[ShortChannelIdsChunk] = chunks.map(_.enforceMaximumSize(MAXIMUM_CHUNK_SIZE))
   def addToSync(syncMap: Map[PublicKey, Sync], remoteNodeId: PublicKey, pending: List[RoutingMessage]): (Map[PublicKey, Sync], Option[RoutingMessage]) = {
     pending match {
       case head +: rest =>

LightningMessageCodecs.scala

@@ -240,8 +240,14 @@ object LightningMessageCodecs {
   val encodedShortChannelIdsCodec: Codec[EncodedShortChannelIds] =
     discriminated[EncodedShortChannelIds].by(byte)
-      .\(0) { case a@EncodedShortChannelIds(EncodingType.UNCOMPRESSED, _) => a }((provide[EncodingType](EncodingType.UNCOMPRESSED) :: list(shortchannelid)).as[EncodedShortChannelIds])
-      .\(1) { case a@EncodedShortChannelIds(EncodingType.COMPRESSED_ZLIB, _) => a }((provide[EncodingType](EncodingType.COMPRESSED_ZLIB) :: zlib(list(shortchannelid))).as[EncodedShortChannelIds])
+      .\(0) {
+        case a@EncodedShortChannelIds(_, Nil) => a // empty list is always encoded with encoding type 'uncompressed' for compatibility with other implementations
+        case a@EncodedShortChannelIds(EncodingType.UNCOMPRESSED, _) => a
+      }((provide[EncodingType](EncodingType.UNCOMPRESSED) :: list(shortchannelid)).as[EncodedShortChannelIds])
+      .\(1) {
+        case a@EncodedShortChannelIds(EncodingType.COMPRESSED_ZLIB, _) => a
+      }((provide[EncodingType](EncodingType.COMPRESSED_ZLIB) :: zlib(list(shortchannelid))).as[EncodedShortChannelIds])
   val queryShortChannelIdsCodec: Codec[QueryShortChannelIds] = {
     Codec(

PeerSpec.scala

@@ -232,6 +232,31 @@ class PeerSpec extends TestkitBaseClass with StateTestsHelperMethods {
probe.expectTerminated(transport.ref)
}
test("masks off MPP and PaymentSecret features") { f =>
import f._
val wallet = new TestWallet
val probe = TestProbe()
val testCases = Seq(
(bin" 00000010", bin" 00000010"), // option_data_loss_protect
(bin" 0000101010001010", bin" 0000101010001010"), // option_data_loss_protect, initial_routing_sync, gossip_queries, var_onion_optin, gossip_queries_ex
(bin" 1000101010001010", bin" 0000101010001010"), // option_data_loss_protect, initial_routing_sync, gossip_queries, var_onion_optin, gossip_queries_ex, payment_secret
(bin" 0100101010001010", bin" 0000101010001010"), // option_data_loss_protect, initial_routing_sync, gossip_queries, var_onion_optin, gossip_queries_ex, payment_secret
(bin"000000101000101010001010", bin" 0000101010001010"), // option_data_loss_protect, initial_routing_sync, gossip_queries, var_onion_optin, gossip_queries_ex, payment_secret, basic_mpp
(bin"000010101000101010001010", bin"000010000000101010001010") // option_data_loss_protect, initial_routing_sync, gossip_queries, var_onion_optin, gossip_queries_ex, payment_secret, basic_mpp and 19
)
for ((configuredFeatures, sentFeatures) <- testCases) {
val nodeParams = TestConstants.Alice.nodeParams.copy(features = configuredFeatures.bytes)
val peer = TestFSMRef(new Peer(nodeParams, remoteNodeId, authenticator.ref, watcher.ref, router.ref, relayer.ref, TestProbe().ref, wallet))
probe.send(peer, Peer.Init(None, Set.empty))
authenticator.send(peer, Authenticator.Authenticated(connection.ref, transport.ref, remoteNodeId, new InetSocketAddress("1.2.3.4", 42000), outgoing = true, None))
transport.expectMsgType[TransportHandler.Listener]
val init = transport.expectMsgType[wire.Init]
assert(init.features === sentFeatures.bytes)
}
}
test("handle disconnect in status INITIALIZING") { f =>
import f._

ChannelRangeQueriesSpec.scala

@@ -17,13 +17,17 @@
 package fr.acinq.eclair.router
 import fr.acinq.bitcoin.ByteVector32
+import fr.acinq.eclair.router.Router.ShortChannelIdsChunk
 import fr.acinq.eclair.wire.ReplyChannelRangeTlv._
-import fr.acinq.eclair.{LongToBtcAmount, randomKey}
+import fr.acinq.eclair.{LongToBtcAmount, ShortChannelId, randomKey}
 import org.scalatest.FunSuite
 import scodec.bits.ByteVector
-import scala.collection.immutable.SortedMap
+import scala.annotation.tailrec
+import scala.collection.immutable.{SortedMap, SortedSet}
+import scala.collection.mutable.ArrayBuffer
 import scala.compat.Platform
+import scala.util.Random
 class ChannelRangeQueriesSpec extends FunSuite {
@@ -127,4 +131,229 @@
assert(Router.computeFlag(channels)(ef.shortChannelId, None, None, false) === (INCLUDE_CHANNEL_ANNOUNCEMENT | INCLUDE_CHANNEL_UPDATE_1 | INCLUDE_CHANNEL_UPDATE_2))
assert(Router.computeFlag(channels)(ef.shortChannelId, None, None, true) === (INCLUDE_CHANNEL_ANNOUNCEMENT | INCLUDE_CHANNEL_UPDATE_1 | INCLUDE_CHANNEL_UPDATE_2 | INCLUDE_NODE_ANNOUNCEMENT_1 | INCLUDE_NODE_ANNOUNCEMENT_2))
}
def makeShortChannelIds(height: Int, count: Int): List[ShortChannelId] = {
val output = ArrayBuffer.empty[ShortChannelId]
var txIndex = 0
var outputIndex = 0
while (output.size < count) {
if (Random.nextBoolean()) {
txIndex = txIndex + 1
outputIndex = 0
} else {
outputIndex = outputIndex + 1
}
output += ShortChannelId(height, txIndex, outputIndex)
}
output.toList
}
def validate(chunk: ShortChannelIdsChunk) = {
require(chunk.shortChannelIds.forall(Router.keep(chunk.firstBlock, chunk.numBlocks, _)))
}
// check that chunks do not overlap and contain exactly the ids they were built from
def validate(ids: SortedSet[ShortChannelId], firstBlockNum: Long, numberOfBlocks: Long, chunks: List[ShortChannelIdsChunk]): Unit = {
@tailrec
def noOverlap(chunks: List[ShortChannelIdsChunk]): Boolean = chunks match {
case Nil => true
case a :: b :: _ if b.firstBlock < a.firstBlock + a.numBlocks => false
case _ => noOverlap(chunks.tail)
}
// aggregate ids from all chunks, to check that they match our input ids exactly
val chunkIds = SortedSet.empty[ShortChannelId] ++ chunks.flatMap(_.shortChannelIds).toSet
val expected = ids.filter(Router.keep(firstBlockNum, numberOfBlocks, _))
if (expected.isEmpty) require(chunks == List(ShortChannelIdsChunk(firstBlockNum, numberOfBlocks, Nil)))
chunks.foreach(validate)
require(chunks.head.firstBlock == firstBlockNum)
require(chunks.last.firstBlock + chunks.last.numBlocks == firstBlockNum + numberOfBlocks)
require(chunkIds == expected)
require(noOverlap(chunks))
}
test("limit channel ids chunk size") {
val ids = makeShortChannelIds(1, 3)
val chunk = ShortChannelIdsChunk(0, 10, ids)
val res1 = for (_ <- 0 until 100) yield chunk.enforceMaximumSize(1).shortChannelIds
assert(res1.toSet == Set(List(ids(0)), List(ids(1)), List(ids(2))))
val res2 = for (_ <- 0 until 100) yield chunk.enforceMaximumSize(2).shortChannelIds
assert(res2.toSet == Set(List(ids(0), ids(1)), List(ids(1), ids(2))))
val res3 = for (_ <- 0 until 100) yield chunk.enforceMaximumSize(3).shortChannelIds
assert(res3.toSet == Set(List(ids(0), ids(1), ids(2))))
}
test("split short channel ids correctly (basic tests") {
def id(blockHeight: Int, txIndex: Int = 0, outputIndex: Int = 0) = ShortChannelId(blockHeight, txIndex, outputIndex)
// no ids to split
{
val ids = Nil
val firstBlockNum = 10
val numberOfBlocks = 100
val chunks = Router.split(SortedSet.empty[ShortChannelId] ++ ids, firstBlockNum, numberOfBlocks, ids.size)
assert(chunks == ShortChannelIdsChunk(firstBlockNum, numberOfBlocks, Nil) :: Nil)
}
// ids are all after the requested range
{
val ids = List(id(1000), id(1001), id(1002), id(1003), id(1004), id(1005))
val firstBlockNum = 10
val numberOfBlocks = 100
val chunks = Router.split(SortedSet.empty[ShortChannelId] ++ ids, firstBlockNum, numberOfBlocks, ids.size)
assert(chunks == ShortChannelIdsChunk(firstBlockNum, numberOfBlocks, Nil) :: Nil)
}
// ids are all before the requested range
{
val ids = List(id(1000), id(1001), id(1002), id(1003), id(1004), id(1005))
val firstBlockNum = 1100
val numberOfBlocks = 100
val chunks = Router.split(SortedSet.empty[ShortChannelId] ++ ids, firstBlockNum, numberOfBlocks, ids.size)
assert(chunks == ShortChannelIdsChunk(firstBlockNum, numberOfBlocks, Nil) :: Nil)
}
// all ids in different blocks, but they all fit in a single chunk
{
val ids = List(id(1000), id(1001), id(1002), id(1003), id(1004), id(1005))
val firstBlockNum = 900
val numberOfBlocks = 200
val chunks = Router.split(SortedSet.empty[ShortChannelId] ++ ids, firstBlockNum, numberOfBlocks, ids.size)
assert(chunks == ShortChannelIdsChunk(firstBlockNum, numberOfBlocks, ids) :: Nil)
}
// all ids in the same block, chunk size == 2
// chunk size will not be enforced and a single chunk should be created
{
val ids = List(id(1000, 0), id(1000, 1), id(1000, 2), id(1000, 3), id(1000, 4), id(1000, 5))
val firstBlockNum = 900
val numberOfBlocks = 200
val chunks = Router.split(SortedSet.empty[ShortChannelId] ++ ids, firstBlockNum, numberOfBlocks, 2)
assert(chunks == ShortChannelIdsChunk(firstBlockNum, numberOfBlocks, ids) :: Nil)
}
// all ids in different blocks, chunk size == 2
{
val ids = List(id(1000), id(1001), id(1002), id(1003), id(1004), id(1005))
val firstBlockNum = 900
val numberOfBlocks = 200
val chunks = Router.split(SortedSet.empty[ShortChannelId] ++ ids, firstBlockNum, numberOfBlocks, 2)
assert(chunks == List(
ShortChannelIdsChunk(firstBlockNum, 100 + 2, List(ids(0), ids(1))),
ShortChannelIdsChunk(1002, 2, List(ids(2), ids(3))),
ShortChannelIdsChunk(1004, numberOfBlocks - 1004 + firstBlockNum, List(ids(4), ids(5)))
))
}
// all ids in different blocks, chunk size == 2, first id outside of range
{
val ids = List(id(1000), id(1001), id(1002), id(1003), id(1004), id(1005))
val firstBlockNum = 1001
val numberOfBlocks = 200
val chunks = Router.split(SortedSet.empty[ShortChannelId] ++ ids, firstBlockNum, numberOfBlocks, 2)
assert(chunks == List(
ShortChannelIdsChunk(firstBlockNum, 2, List(ids(1), ids(2))),
ShortChannelIdsChunk(1003, 2, List(ids(3), ids(4))),
ShortChannelIdsChunk(1005, numberOfBlocks - 1005 + firstBlockNum, List(ids(5)))
))
}
// all ids in different blocks, chunk size == 2, last id outside of range
{
val ids = List(id(1000), id(1001), id(1002), id(1003), id(1004), id(1005))
val firstBlockNum = 900
val numberOfBlocks = 105
val chunks = Router.split(SortedSet.empty[ShortChannelId] ++ ids, firstBlockNum, numberOfBlocks, 2)
assert(chunks == List(
ShortChannelIdsChunk(firstBlockNum, 100 + 2, List(ids(0), ids(1))),
ShortChannelIdsChunk(1002, 2, List(ids(2), ids(3))),
ShortChannelIdsChunk(1004, numberOfBlocks - 1004 + firstBlockNum, List(ids(4)))
))
}
// all ids in different blocks, chunk size == 2, first and last id outside of range
{
val ids = List(id(1000), id(1001), id(1002), id(1003), id(1004), id(1005))
val firstBlockNum = 1001
val numberOfBlocks = 4
val chunks = Router.split(SortedSet.empty[ShortChannelId] ++ ids, firstBlockNum, numberOfBlocks, 2)
assert(chunks == List(
ShortChannelIdsChunk(firstBlockNum, 2, List(ids(1), ids(2))),
ShortChannelIdsChunk(1003, 2, List(ids(3), ids(4)))
))
}
// all ids in the same block
{
val ids = makeShortChannelIds(1000, 100)
val firstBlockNum = 900
val numberOfBlocks = 200
val chunks = Router.split(SortedSet.empty[ShortChannelId] ++ ids, firstBlockNum, numberOfBlocks, 10)
assert(chunks == ShortChannelIdsChunk(firstBlockNum, numberOfBlocks, ids) :: Nil)
}
}
test("split short channel ids correctly") {
val ids = SortedSet.empty[ShortChannelId] ++ makeShortChannelIds(42, 100) ++ makeShortChannelIds(43, 70) ++ makeShortChannelIds(44, 50) ++ makeShortChannelIds(45, 30) ++ makeShortChannelIds(50, 120)
val firstBlockNum = 0
val numberOfBlocks = 1000
validate(ids, firstBlockNum, numberOfBlocks, Router.split(ids, firstBlockNum, numberOfBlocks, 1))
validate(ids, firstBlockNum, numberOfBlocks, Router.split(ids, firstBlockNum, numberOfBlocks, 20))
validate(ids, firstBlockNum, numberOfBlocks, Router.split(ids, firstBlockNum, numberOfBlocks, 50))
validate(ids, firstBlockNum, numberOfBlocks, Router.split(ids, firstBlockNum, numberOfBlocks, 100))
validate(ids, firstBlockNum, numberOfBlocks, Router.split(ids, firstBlockNum, numberOfBlocks, 1000))
}
test("split short channel ids correctly (comprehensive tests)") {
val ids = SortedSet.empty[ShortChannelId] ++ makeShortChannelIds(42, 100) ++ makeShortChannelIds(43, 70) ++ makeShortChannelIds(44, 50) ++ makeShortChannelIds(45, 30) ++ makeShortChannelIds(50, 120)
for (firstBlockNum <- 0 to 60) {
for (numberOfBlocks <- 1 to 60) {
for (chunkSize <- 1 :: 2 :: 20 :: 50 :: 100 :: 1000 :: Nil) {
validate(ids, firstBlockNum, numberOfBlocks, Router.split(ids, firstBlockNum, numberOfBlocks, chunkSize))
}
}
}
}
test("enforce maximum size of short channel lists") {
def makeChunk(startBlock: Int, count: Int) = ShortChannelIdsChunk(startBlock, count, makeShortChannelIds(startBlock, count))
def validate(before: ShortChannelIdsChunk, after: ShortChannelIdsChunk) = {
require(before.shortChannelIds.containsSlice(after.shortChannelIds))
require(after.shortChannelIds.size <= Router.MAXIMUM_CHUNK_SIZE)
}
def validateChunks(before: List[ShortChannelIdsChunk], after: List[ShortChannelIdsChunk]): Unit = {
before.zip(after).foreach { case (b, a) => validate(b, a) }
}
// empty chunk
{
val chunks = makeChunk(0, 0) :: Nil
assert(Router.enforceMaximumSize(chunks) == chunks)
}
// chunks are just below the limit
{
val chunks = makeChunk(0, Router.MAXIMUM_CHUNK_SIZE) :: makeChunk(Router.MAXIMUM_CHUNK_SIZE, Router.MAXIMUM_CHUNK_SIZE) :: Nil
assert(Router.enforceMaximumSize(chunks) == chunks)
}
// fuzzy tests
{
val chunks = collection.mutable.ArrayBuffer.empty[ShortChannelIdsChunk]
// we select parameters to make sure that some chunks will have too many ids
for (i <- 0 until 100) chunks += makeChunk(0, Router.MAXIMUM_CHUNK_SIZE - 500 + Random.nextInt(1000))
val pruned = Router.enforceMaximumSize(chunks.toList)
validateChunks(chunks.toList, pruned)
}
}
}

RoutingSyncSpec.scala

@@ -172,7 +172,7 @@ class RoutingSyncSpec extends TestKit(ActorSystem("test")) with FunSuiteLike wit
       sender.send(bob, PeerRoutingMessage(sender.ref, charlieId, na2))
     }
     awaitCond(bob.stateData.channels.size === fakeRoutingInfo.size && countUpdates(bob.stateData.channels) === 2 * fakeRoutingInfo.size, max = 60 seconds)
-    assert(BasicSyncResult(ranges = 3, queries = 12, channels = fakeRoutingInfo.size, updates = 2 * fakeRoutingInfo.size, nodes = 2 * fakeRoutingInfo.size) === sync(alice, bob, extendedQueryFlags_opt).counts)
+    assert(BasicSyncResult(ranges = 3, queries = 13, channels = fakeRoutingInfo.size, updates = 2 * fakeRoutingInfo.size, nodes = 2 * fakeRoutingInfo.size) === sync(alice, bob, extendedQueryFlags_opt).counts)
     awaitCond(alice.stateData.channels === bob.stateData.channels, max = 60 seconds)
   }
@@ -221,7 +221,7 @@ class RoutingSyncSpec extends TestKit(ActorSystem("test")) with FunSuiteLike wit
       sender.send(bob, PeerRoutingMessage(sender.ref, charlieId, na2))
     }
     awaitCond(bob.stateData.channels.size === fakeRoutingInfo.size && countUpdates(bob.stateData.channels) === 2 * fakeRoutingInfo.size, max = 60 seconds)
-    assert(BasicSyncResult(ranges = 3, queries = 10, channels = fakeRoutingInfo.size - 10, updates = 2 * (fakeRoutingInfo.size - 10), nodes = if (requestNodeAnnouncements) 2 * (fakeRoutingInfo.size - 10) else 0) === sync(alice, bob, extendedQueryFlags_opt).counts)
+    assert(BasicSyncResult(ranges = 3, queries = 11, channels = fakeRoutingInfo.size - 10, updates = 2 * (fakeRoutingInfo.size - 10), nodes = if (requestNodeAnnouncements) 2 * (fakeRoutingInfo.size - 10) else 0) === sync(alice, bob, extendedQueryFlags_opt).counts)
     awaitCond(alice.stateData.channels === bob.stateData.channels, max = 60 seconds)
     // bump random channel_updates

ExtendedQueriesCodecsSpec.scala

@@ -22,9 +22,46 @@ import fr.acinq.eclair.wire.LightningMessageCodecs._
 import fr.acinq.eclair.wire.ReplyChannelRangeTlv._
 import fr.acinq.eclair.{CltvExpiryDelta, LongToBtcAmount, ShortChannelId, UInt64}
 import org.scalatest.FunSuite
-import scodec.bits.ByteVector
+import scodec.bits._
 class ExtendedQueriesCodecsSpec extends FunSuite {
test("encode a list of short channel ids") {
{
// encode/decode with encoding 'uncompressed'
val ids = EncodedShortChannelIds(EncodingType.UNCOMPRESSED, List(ShortChannelId(142), ShortChannelId(15465), ShortChannelId(4564676)))
val encoded = encodedShortChannelIdsCodec.encode(ids).require
val decoded = encodedShortChannelIdsCodec.decode(encoded).require.value
assert(decoded === ids)
}
{
// encode/decode with encoding 'zlib'
val ids = EncodedShortChannelIds(EncodingType.COMPRESSED_ZLIB, List(ShortChannelId(142), ShortChannelId(15465), ShortChannelId(4564676)))
val encoded = encodedShortChannelIdsCodec.encode(ids).require
val decoded = encodedShortChannelIdsCodec.decode(encoded).require.value
assert(decoded === ids)
}
{
// encode/decode empty list with encoding 'uncompressed'
val ids = EncodedShortChannelIds(EncodingType.UNCOMPRESSED, List.empty)
val encoded = encodedShortChannelIdsCodec.encode(ids).require
assert(encoded.bytes === hex"00")
val decoded = encodedShortChannelIdsCodec.decode(encoded).require.value
assert(decoded === ids)
}
{
// encode/decode empty list with encoding 'zlib'
val ids = EncodedShortChannelIds(EncodingType.COMPRESSED_ZLIB, List.empty)
val encoded = encodedShortChannelIdsCodec.encode(ids).require
assert(encoded.bytes === hex"00") // NB: empty list is always encoded with encoding type 'uncompressed'
val decoded = encodedShortChannelIdsCodec.decode(encoded).require.value
assert(decoded === EncodedShortChannelIds(EncodingType.UNCOMPRESSED, List.empty))
}
}
test("encode query_short_channel_ids (no optional data)") {
val query_short_channel_id = QueryShortChannelIds(
Block.RegtestGenesisBlock.blockId,

pom.xml

@@ -67,7 +67,7 @@
   <akka.version>2.3.14</akka.version>
   <akka.http.version>10.0.11</akka.http.version>
   <sttp.version>1.3.9</sttp.version>
-  <bitcoinlib.version>0.16</bitcoinlib.version>
+  <bitcoinlib.version>0.17</bitcoinlib.version>
   <guava.version>24.0-android</guava.version>
   <kamon.version>2.0.0</kamon.version>
 </properties>