On slow internet connections the current timeout makes it impossible to
get the initial data and therefore to use Bisq.
The timeout covers the request and response as well as the time it takes
to start up the network connection, which can also be quite slow.
In my scenario the connection took about 6-10 sec, and the request is
currently nearly 3 MB, which takes about 24 sec on a 1 Mbit/s connection
(note that a connection over Tor is slower, so if the normal speed is
3-5 Mbit/s, the speed over Tor can be considerably lower). The response
size depends on the missing data/last update but can easily be 6 MB,
which adds about another 48 sec. So one can easily hit the 90 sec limit.
There is work in progress to optimize the initial data request, but as
that is more complex and it is not clear when it will be deployed, I
recommend that we increase the current timeout to 180 sec to avoid the
critical issue of users getting "locked out".
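As a rough sanity check of those numbers, here is a back-of-the-envelope
estimate (purely illustrative; the real timeout is a constant in the P2P
request handling code and is not shown here), assuming 10 sec connection
setup and an effective 1 Mbit/s throughput over Tor:

    public class InitialDataTimeoutEstimate {
        public static void main(String[] args) {
            double connectSec = 10;          // assumed Tor connection setup time
            double linkMbitPerSec = 1.0;     // assumed effective throughput over Tor
            double requestMbit = 3 * 8;      // ~3 MB request
            double responseMbit = 6 * 8;     // ~6 MB response
            double totalSec = connectSec + (requestMbit + responseMbit) / linkMbitPerSec;
            // Prints ~82 sec: already close to the old 90 sec limit and well
            // within the proposed 180 sec limit.
            System.out.printf("Estimated worst-case round trip: %.0f sec%n", totalSec);
        }
    }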
If torControlPort is specified but neither torControlPassword nor
torControlCookieFile is specified, we have cookieFile == null in
bisq.network.p2p.network.RunningTor, but RunningTor.getTor() assumes a
cookie file has been specified and tries to check that the file exists,
causing the thread to crash. Added a check for null to fix this.
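A minimal sketch of the fix, with simplified, hypothetical names rather
than the actual RunningTor code:

    import java.io.File;

    final class RunningTorCookieCheck {
        // Validate the cookie file only if one was actually configured; a null
        // cookieFile is legal when torControlPassword is used for authentication.
        static void checkCookieFileIfPresent(File cookieFile) {
            if (cookieFile != null && !cookieFile.exists())
                throw new IllegalArgumentException(
                        "Tor control cookie file not found: " + cookieFile.getAbsolutePath());
        }
    }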
Ignore expired data when it is received and do not broadcast it.
It is unclear why we receive expired data (some of it is very old), but a
manipulated node might produce it, and as it is only removed by the batch
process that runs every minute to clean out expired data, it could still
propagate. It is also an attack vector: the network could be flooded with
outdated offers whose maker is likely not online.
Should fix https://github.com/bisq-network/bisq/issues/4026
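A sketch of the intended check (hypothetical names and an assumed TTL;
the actual logic lives in the P2P data storage code):

    import java.time.Clock;
    import java.util.concurrent.TimeUnit;

    final class ExpiredDataGuard {
        // TTL chosen only for illustration; real payload types define their own.
        private static final long TTL_MILLIS = TimeUnit.DAYS.toMillis(10);

        // Drop already-expired payloads on arrival instead of waiting for the
        // once-per-minute purge, and tell the caller not to rebroadcast them.
        static boolean shouldAcceptAndBroadcast(long creationTimeStampMillis, Clock clock) {
            boolean expired = clock.millis() - creationTimeStampMillis > TTL_MILLIS;
            return !expired;
        }
    }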
The getAllConnections() call in the while loop always returned the same
number of nodes, so the timeout of 15 sec was always triggered.
We now wait for the shutdown handlers of the connections, and once all of
them have been called we run our handler. If that takes longer than our
timeout of 3 sec, the shutdown handler gets called by the timeout.
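A simplified sketch of that shutdown flow (hypothetical names, not the
exact network-node code), assuming each connection accepts a completion
callback:

    import java.util.Set;
    import java.util.concurrent.CountDownLatch;
    import java.util.concurrent.TimeUnit;

    final class ShutDownSketch {
        interface Connection {
            void shutDown(Runnable shutDownCompleteHandler);
        }

        // Run the caller's completeHandler once every connection has reported
        // it is shut down, or after 3 seconds, whichever comes first.
        static void shutDown(Set<Connection> connections, Runnable completeHandler)
                throws InterruptedException {
            CountDownLatch latch = new CountDownLatch(connections.size());
            connections.forEach(connection -> connection.shutDown(latch::countDown));
            if (!latch.await(3, TimeUnit.SECONDS))
                System.err.println("Timed out before all connections were closed");
            completeHandler.run();
        }
    }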
The large binary objects in p2p/src/main/resources/ are updated on every
Bisq release with the latest network data to avoid the need for new Bisq
clients to download all of this information from the network, which
would easily overload seed nodes and generally bog down the client.
This approach works well enough for its purposes, but comes with the
significant downside of storing all of this binary data in Git history
forever. The current versions of these binary objects total about 65M,
and they grow with every release. In aggregate, this has caused the
total size of the repository to grow to 360M, making it cumbersome to
clone over a low-bandwidth connection, and slowing down various local Git
operations.
To avoid further exacerbating this problem, this commit sets these files
up to be tracked via Git LFS. There's nothing we can do about the 360M
of files that already exist in history, but we can ensure it doesn't
grow in this unchecked way going forward. For an understanding of how
Git LFS works, see the reference material at [1], and see also the
sample project and README at [2].
The following command was used to track the files:
$ git lfs track "p2p/src/main/resources/*BTC_MAINNET"
Tracking "p2p/src/main/resources/AccountAgeWitnessStore_BTC_MAINNET"
Tracking "p2p/src/main/resources/BlindVoteStore_BTC_MAINNET"
Tracking "p2p/src/main/resources/DaoStateStore_BTC_MAINNET"
Tracking "p2p/src/main/resources/ProposalStore_BTC_MAINNET"
Tracking "p2p/src/main/resources/SignedWitnessStore_BTC_MAINNET"
Tracking "p2p/src/main/resources/TradeStatistics2Store_BTC_MAINNET"
We are using GitHub's built-in LFS service here, and it's important to
understand that there are storage and bandwidth limits there. We have
1G total storage and 1G per month of bandwidth on the free tier. We will
certainly exceed this, and so must purchase at least one "data pack"
from GitHub, possibly two. One gets us to 50G storage and bandwidth.
In an attempt to avoid unnecessary LFS bandwidth usage, this commit also
updates the Travis CI build configuration to cache Git LFS files, such
that they are not re-downloaded on every CI build (see [3] and [4]
below). With that out of the way, the variable determining whether we
exceed the monthly limit is how many clones we have every month, and
there are many, though it's not clear how many are Travis CI and how
many are users / developers.
Tracking these files via LFS means that developers will need to have Git
LFS installed in order to properly synchronize the files. If a developer
does not have LFS installed, cloning will complete successfully and the
build will also complete successfully, but the app will fail when trying
to actually load the p2p data store files. For this reason, the build has
been updated to proactively check that the p2p data store files have
been properly synchronized via LFS, and if not, the build fails with a
helpful error message. The docs/build.md instructions have also been
updated accordingly.
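To illustrate the idea behind that check (a sketch only, not the actual
build logic): a clone made without Git LFS installed contains small text
pointer files in place of the real binaries, and such pointers are easy
to detect:

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;

    final class LfsPointerCheck {
        // An un-synchronized LFS file is a ~130 byte text pointer starting with
        // the LFS spec line, rather than the multi-megabyte binary payload.
        static boolean looksLikeLfsPointer(Path file) throws IOException {
            if (Files.size(file) > 1024)
                return false;
            try (BufferedReader reader = Files.newBufferedReader(file)) {
                String firstLine = reader.readLine();
                return firstLine != null
                        && firstLine.startsWith("version https://git-lfs.github.com/spec/v1");
            }
        }

        public static void main(String[] args) throws IOException {
            Path store = Paths.get("p2p/src/main/resources/DaoStateStore_BTC_MAINNET");
            if (looksLikeLfsPointer(store))
                throw new IllegalStateException(
                        "Git LFS files not synchronized; install Git LFS and run 'git lfs pull'");
        }
    }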
It is important that we make this change now, not only to avoid growing
the repository in the way described above as we have been doing now for
many releases, but also because we are now considering adding yet more
binary objects to the repository, as proposed at
https://github.com/bisq-network/projects/issues/25.
[1]: https://git-lfs.github.com
[2]: https://github.com/cbeams/lfs-test
[3]: https://docs-staging.travis-ci.com/user/customizing-the-build/#git-lfs
[4]: https://github.com/travis-ci/travis-ci/issues/8787#issuecomment-394202791
The former class is dead code, together with its store service, as they
were only referenced from CorePersistenceProtoResolver::fromProto, from
the binding logic, and from orphaned migration code in
AppendOnlyDataStoreService. However, migration from the old persisted
data was completed long ago and the store file is no longer being read
or written from anywhere in the codebase.
Also remove the associated PersistableEnvelope proto message type, along
with the TradeStatisticsList message type. The latter is long deprecated
and has no corresponding Java class implementing PersistableEnvelope, so
removing it won't change behaviour (outside the exception message thrown
when attempting to resolve it).
* Report HS version to pricenode
In order to evaluate progress on https://github.com/bisq-network/projects/issues/23,
the Bisq app reports its hidden service version.
This change will be undone as soon as we no longer need the info.
* Added hsversion scraper script
* Added installer/uninstaller
* Cleanup
* Fix unit name