If torControlPort is specified, but neither torControlPassword nor
torControlCookieFile is specified, we have cookieFile == null in
bisq.network.p2p.network.RunningTor, but RunningTor.getTor() assumes a
cookie file has been specified and tries to check that the file exists,
causing the thread to crash. Added a check for null to fix this.
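The shape of the fix, as a minimal sketch (field and method names here are
illustrative, not the actual RunningTor code):

    import java.io.File;

    // Hypothetical sketch of the null guard; names are illustrative only.
    class RunningTorSketch {
        // null when only torControlPort is configured, i.e. neither a
        // password nor a cookie file was given
        private final File cookieFile;

        RunningTorSketch(File cookieFile) {
            this.cookieFile = cookieFile;
        }

        void checkCookieFile() {
            // Only validate the cookie file if one was actually specified;
            // previously a null cookieFile crashed the thread here.
            if (cookieFile != null && !cookieFile.exists())
                throw new IllegalStateException("Tor control cookie file not found: " + cookieFile);
        }
    }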
Do not process expired data and do not broadcast it.
It is unclear why we receive expired data (some entries are very old),
but a manipulated node might produce it, and since expired entries are
only removed by the batch process that runs each minute, they could
still propagate. It is also an attack vector: the network could be
flooded with outdated offers whose makers are likely not online.
Should fix https://github.com/bisq-network/bisq/issues/4026
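As a hedged illustration of the added check (the real code lives in the P2P
data storage layer and uses Bisq's own payload types, not these hypothetical
names):

    import java.time.Duration;
    import java.time.Instant;

    // Illustrative sketch only.
    class ExpiryGuard {
        private static final Duration TTL = Duration.ofMinutes(10); // hypothetical TTL

        // Drop expired entries immediately instead of waiting for the
        // once-per-minute cleanup batch, so they are never re-broadcast.
        boolean shouldProcessAndBroadcast(Instant creationTime) {
            return Instant.now().isBefore(creationTime.plus(TTL));
        }
    }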
The getAllConnections() call in the while loop always returned the same
number of nodes, so the 15 second timeout was always triggered.
We now wait for the shutdown handlers of the connections and run our
handler once all of them have been called. If that takes longer than our
3 second timeout, our handler is invoked by the timeout instead.
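A rough sketch of the idea, assuming a hypothetical Connection interface
rather than Bisq's actual one:

    import java.util.List;
    import java.util.concurrent.CountDownLatch;
    import java.util.concurrent.TimeUnit;

    // Illustrative only; the real code uses Bisq's Connection class and timers.
    class ShutdownSketch {
        interface Connection {
            void shutDown(Runnable completeHandler);
        }

        void shutDownAll(List<Connection> connections, Runnable resultHandler) throws InterruptedException {
            CountDownLatch latch = new CountDownLatch(connections.size());
            connections.forEach(c -> c.shutDown(latch::countDown));

            // Run our handler once every connection has reported completion,
            // or after the 3 second timeout at the latest.
            latch.await(3, TimeUnit.SECONDS);
            resultHandler.run();
        }
    }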
The large binary objects in p2p/src/main/resources/ are updated on every
Bisq release with the latest network data to avoid the need for new Bisq
clients to download all of this information from the network, which
would easily overload seed nodes and generally bog down the client.
This approach works well enough for its purposes, but comes with the
significant downside of storing all of this binary data in Git history
forever. These binary objects currently total about 65M, and they grow
with every release. In aggregate, this has caused the total size of the
repository to grow to 360M, making it cumbersome to clone over a
low-bandwidth connection and slowing down various local Git
operations.
To avoid further exacerbating this problem, this commit sets these files
up to be tracked via Git LFS. There's nothing we can do about the 360M
of files that already exist in history, but we can ensure it doesn't
grow in this unchecked way going forward. For an understanding of how
Git LFS works, see the reference material at [1], and see also the
sample project and README at [2].
The following command was used to track the files:
$ git lfs track "p2p/src/main/resources/*BTC_MAINNET"
Tracking "p2p/src/main/resources/AccountAgeWitnessStore_BTC_MAINNET"
Tracking "p2p/src/main/resources/BlindVoteStore_BTC_MAINNET"
Tracking "p2p/src/main/resources/DaoStateStore_BTC_MAINNET"
Tracking "p2p/src/main/resources/ProposalStore_BTC_MAINNET"
Tracking "p2p/src/main/resources/SignedWitnessStore_BTC_MAINNET"
Tracking "p2p/src/main/resources/TradeStatistics2Store_BTC_MAINNET"
We are using GitHub's built-in LFS service here, and it's important to
understand that there are storage and bandwidth limits there. We have
1G total storage and 1G per month of bandwidth on the free tier. We will
certainly exceed this, and so must purchase at least one "data pack"
from GitHub, possibly two. One data pack gets us to 50G of storage and
bandwidth.
In an attempt to avoid unnecessary LFS bandwidth usage, this commit also
updates the Travis CI build configuration to cache Git LFS files, such
that they are not re-downloaded on every CI build (see [3] and [4]
below). With that out of the way, the variable determining whether we
exceed the monthly limit is how many clones we have every month, and
there are many, though it's not clear how many come from Travis CI and
how many come from users / developers.
Tracking these files via LFS means that developers will need to have Git
LFS installed in order to properly synchronize the files. If a developer
does not have LFS installed, cloning will complete successfully and the
build will complete successfully, but the app will fail when trying to
actually load the p2p data store files. For this reason, the build has
been updated to proactively check that the p2p data store files have
been properly synchronized via LFS, and if not, the build fails with a
helpful error message. The docs/build.md instructions have also been
updated accordingly.
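The gist of such a check, as a sketch (the actual verification is implemented
in the Gradle build, not with this hypothetical class): a file that was not
synchronized by Git LFS is still a small text pointer rather than the real
binary payload.

    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Path;

    // Illustrative sketch only.
    class LfsPointerCheck {
        static boolean looksLikeLfsPointer(Path file) throws IOException {
            // Real data store files are megabytes of binary data; an
            // unsynchronized LFS pointer is a tiny text file starting with
            // the LFS spec line.
            if (Files.size(file) > 1024)
                return false;
            String head = new String(Files.readAllBytes(file), StandardCharsets.UTF_8);
            return head.startsWith("version https://git-lfs.github.com/spec/");
        }
    }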
It is important that we make this change now, not only to stop growing
the repository in the way described above, as we have been doing for
many releases, but also because we are now considering adding yet more
binary objects to the repository, as proposed at
https://github.com/bisq-network/projects/issues/25.
[1]: https://git-lfs.github.com
[2]: https://github.com/cbeams/lfs-test
[3]: https://docs-staging.travis-ci.com/user/customizing-the-build/#git-lfs
[4]: https://github.com/travis-ci/travis-ci/issues/8787#issuecomment-394202791
The former class is dead code, together with its store service, as they
were only referenced from CorePersistenceProtoResolver::fromProto, from
the binding logic, and from AppendOnlyDataStoreService via orphaned
migration code. However, migration from the old persisted data was completed long
ago and the store file is no longer being read or written from anywhere
in the codebase.
Also remove the associated PersistableEnvelope proto message type, along
with the TradeStatisticsList message type. The latter is long deprecated
and has no corresponding Java class implementing PersistableEnvelope, so
removing it won't change behaviour (outside the exception message thrown
when attempting to resolve it).
* Report HS version to pricenode
In order to evaluate progress on https://github.com/bisq-network/projects/issues/23,
the Bisq app reports its hidden service version.
This change will be reverted as soon as we no longer need the
info.
* Added hsversion scraper script
* Added installer/uninstaller
* Cleanup
* Fix unit name
Here, the tor object is a member variable, and there are cases where
this member variable is not yet set.
A situation arose where a SIGTERM/SIGINT shutdown was requested and,
because the member variable was not set, Tor was left running.
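A hedged sketch of the guard (Tor here is a stand-in interface, not the
netlayer API, and the fallback is illustrative):

    // Illustrative only.
    class TorShutdownSketch {
        interface Tor {
            void shutdown();
        }

        private volatile Tor tor;               // set only after startup completes
        private static volatile Tor defaultTor; // globally registered instance, if any

        void shutDown() {
            // A SIGTERM/SIGINT can arrive before the member variable is
            // assigned; fall back to the globally registered instance so the
            // Tor process is not left running.
            Tor instance = tor != null ? tor : defaultTor;
            if (instance != null)
                instance.shutdown();
        }
    }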
Remove an unused PersistableEnvelope interface from the following five
PersistableNetworkPayload implementations:
AccountAgeWitness, BlindVotePayload, ProposalPayload,
SignedWitness, TradeStatistics2
These already have corresponding *Store envelope classes which correctly
implement the interface.
The close-connection process fired up worker threads to actually close
the connections. Yet, once all threads had been spawned, the code
proceeded on the assumption that no connections were left open, without
checking.
This led to situations where Tor had already been shut down while
connections were still open. These connections tried to close
gracefully, but without Tor that only caused a wall of exceptions.
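The fix amounts to waiting for those workers before tearing Tor down; a
minimal sketch with hypothetical names:

    import java.util.List;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    // Illustrative only; names do not match the actual Bisq classes.
    class CloseAllConnectionsSketch {
        interface Connection {
            void closeGracefully();
        }

        void closeAllThenShutDownTor(List<Connection> connections, Runnable shutDownTor)
                throws InterruptedException {
            ExecutorService workers = Executors.newCachedThreadPool();
            connections.forEach(c -> workers.submit(c::closeGracefully));

            // Previously Tor was shut down while these workers were still
            // closing connections, producing a wall of exceptions.
            workers.shutdown();
            workers.awaitTermination(3, TimeUnit.SECONDS);
            shutDownTor.run();
        }
    }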
Currently bisq desktop does not accept IPv6 addresses in the settings for
custom nodes or via the --btcNodes command line option. The separation of
address and port is handled incorrectly in core / BtcNodes::fromFullAddress.
This results in IPv6 addresses being ignored. Where Tor is enabled for
Bitcoin connections, we need to handle the IPv6 address response
from Tor DNS lookup.
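One way to separate host and port that tolerates IPv6 literals, sketched with
hypothetical names (the real logic lives in BtcNodes::fromFullAddress):

    import java.util.AbstractMap.SimpleEntry;
    import java.util.Map;

    // Illustrative sketch only.
    class BtcNodeAddressSketch {
        static Map.Entry<String, Integer> parse(String fullAddress, int defaultPort) {
            if (fullAddress.startsWith("[")) {
                // Bracketed IPv6 with optional port, e.g. "[2001:db8::1]:8333"
                int end = fullAddress.indexOf(']');
                String host = fullAddress.substring(1, end);
                String rest = fullAddress.substring(end + 1);
                int port = rest.startsWith(":") ? Integer.parseInt(rest.substring(1)) : defaultPort;
                return new SimpleEntry<>(host, port);
            }
            // IPv4, onion, or bare IPv6 without a port.
            int first = fullAddress.indexOf(':');
            int last = fullAddress.lastIndexOf(':');
            if (last < 0 || first != last)
                return new SimpleEntry<>(fullAddress, defaultPort); // no port, or bare IPv6
            return new SimpleEntry<>(fullAddress.substring(0, last),
                    Integer.parseInt(fullAddress.substring(last + 1)));
        }
    }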
Fixes #3990
Make the default toPersistableMessage() method of PersistableEnvelope
simply delegate to Proto.toProtoMessage for speed, so that stores can
explicitly implement (Threaded|UserThreadMapped)PersistableEnvelope if
they actually need concurrency control.
As part of this, make PeerList implement PersistableEnvelope directly
instead of extending PersistableList, as it is non-critical & cloned on
the user thread prior to storage anyway, so it doesn't need to be
thread-safe.
In this way, only PaymentAccountList & small DAO-related stores extend
PersistableList, so they can all be made user-thread-mapped.
After this change, the only concrete store classes not implementing
(Threaded|UserThreadMapped)PersistableEnvelope are:
AccountAgeWitness, BlindVotePayload, ProposalPayload, SignedWitness,
TradeStatistics2, NavigationPath & PeerList
The first five appear to erroneously implement PersistableEnvelope and
can be cleaned up in a separate commit. The last two are non-critical.
(Make NavigationPath.path an immutable list, for slightly better thread
safety anyway - that way it will never be observed half-constructed.)
Add toProtoMessageSynchronized() default method to PersistableEnvelope,
which performs (blocking) protobuf serialisation in the user thread,
regardless of the calling thread. This should prevent data races like
the ConcurrentModificationException observed in #3752, under the
reasonable assumption that shared persistable objects are only mutated
in the user thread.
Also add a ThreadedPersistableEnvelope sub-interface overriding the
default method above, to let objects which are expensive to serialise
(like DaoStateStore) be selectively serialised in the 'save-file-task-X'
thread as before, but directly synchronised with each mutating op. As
most objects are cheap to serialise, this avoids a noticeable perf drop
without having to track down every mutating method for each store.
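A simplified sketch of the resulting interface split (names and the
synchronisation details are illustrative; the real interfaces live in
bisq.common.proto.persistable and use Bisq's UserThread abstraction):

    // Illustrative only.
    interface EnvelopeSketch {
        interface ProtoMessage {} // stand-in for the protobuf message type

        ProtoMessage toProtoMessage();

        // Default path: cheap envelopes serialise directly on the calling
        // thread, with no hand-off and no locking.
        default ProtoMessage toPersistableMessage() {
            return toProtoMessage();
        }
    }

    // Expensive envelopes (e.g. DaoStateStore) opt in to serialising on the
    // persistence thread instead, synchronised with their mutating operations.
    interface ThreadedEnvelopeSketch extends EnvelopeSketch {
        @Override
        default EnvelopeSketch.ProtoMessage toPersistableMessage() {
            synchronized (this) {
                return toProtoMessage();
            }
        }
    }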
In all cases but one, classes implementing ThreadedPersistableEnvelope
are stores like TradeStatistics2Store, with a single ConcurrentHashMap
field. These require no further serialisation, since the map entries are
immutable, so the only mutating operations are map.put(..) calls which
are already synchronised with map reads. (Even if map.values().stream()
sees updates at different keys happen out of order, it should be benign.)
The remaining case is DaoStateStore, which is only ever reset or
modified via a single persist(..) call with a cloned DaoState instance
and hash chain from DaoStateSnapshotService, so there is no aliasing
risk from the various DAO state mutations done in DaoStateService and
elsewhere.
This should fix #3752.