We handle the connections by INITIAL_DATA_EXCHANGE, which
covers the seed nodes as well. Having a parallel routine
is risky and makes things more complex.
In the 3rd attempt we filter for
INITIAL_DATA_EXCHANGE peers.
Before, we excluded 2 types, and as PEER had
already been filtered earlier we would look up SEED_NODE.
This was only called by non-seed nodes.
Remove unnecessary setPeerType calls; ConnectionState is handling that.
Only PeerManager does the setting of isSeedNode, as we do not have the
required dependency in ConnectionState.
The numbers from the delivered response size and items did not match up, as we did not
account for the overhead of the ProtectedStorageEntry (pub key + sig) and estimated the size
by taking only the first item and multiplying it. A measurement showed about 20 ms cost
for the exact calculation (toProtoMessage().getSerializedSize() has some cost).
I guess that is acceptable to get correct metrics.
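A minimal sketch of the exact size accounting described above (collection and variable names are illustrative, not the actual Bisq code):

    // Sum the exact serialized size of each entry instead of estimating
    // from the first item and multiplying.
    long totalBytes = 0;
    for (ProtectedStorageEntry entry : entries) {
        // getSerializedSize() on the protobuf message includes the pub key + sig overhead.
        totalBytes += entry.toProtoMessage().getSerializedSize();
    }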
HistoricalDataStoreService.getMap is called.
HistoricalDataStoreService.getMap should not be used by domain
clients but rather the custom methods getMapOfAllData,
getMapOfLiveData or getMapSinceVersion.
As we have not removed the calls from ProposalService and
other domains, we return getMapOfAllData() instead of the live map.
This was prevented earlier for performance reasons. It is safer,
though, to return all data instead of live data only in case of an
illegal access.
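A hedged sketch of that fallback (the signature is simplified; this is not the exact Bisq code):

    @Override
    public Map<P2PDataStorage.ByteArray, PersistableNetworkPayload> getMap() {
        // Domain clients should use getMapOfAllData, getMapOfLiveData or getMapSinceVersion.
        // If getMap is still called, returning all data is safer than returning live data only.
        return getMapOfAllData();
    }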
p2PService.getP2PDataStorage().getAppendOnlyDataStoreMap().
p2PService.getP2PDataStorage().getAppendOnlyDataStoreMap() iterates
over all services including the historical data store service. It used the
getMap method, which should not be used on the historical data store service, as
it is not clear whether the live data or all data should be accessed.
If one starts with --daoActivated=false there is no service
set up in one of the data store services, so it never calls
the result handler and the app never starts up.
Call flush at openOfferManager shutdown.
Remove unused method.
Force the broadcaster to send out immediately, otherwise we could
have a 2 sec delay until the bundled messages are sent out.
while hasPendingRequest is true
- Throw an exception if we get a request before the previous request is
terminated (happens with priceFee at startup, on regtest, as startup is
fast, but can also happen on mainnet); see the sketch after this list
- Improve shutDown
- Improve finally clause
These are failing on the tip of release/1.5.0 currently due to extra
validation added to PersistenceManager, causing the build to fail upon
merging upstream. Add missing PersistenceManager.shutDown calls to the
tearDown methods of the affected tests to fix.
It should only be needed in case we get the historical data from resources,
but as I have seen multiple times that some nodes have duplicated entries
in the live data, I think it is safer to always clean up. If no entries are
removed the call is very cheap. Even with 60k entries to be pruned it takes
only about 20 ms.
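A hedged sketch of the pruning idea (map names are illustrative):

    // Remove live-data entries that are already contained in the historical data.
    // Cheap if nothing has to be removed; roughly 20 ms even for 60k pruned entries.
    liveDataMap.keySet().removeAll(historicalDataMap.keySet());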
As we might have the same keys in multiple maps and merge those into 1 map, we
cannot use an immutable map while merging the maps. Instead we copy our merged map
at the end into an immutable map.
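A minimal sketch of that merge (assuming Guava's ImmutableMap; variable names are illustrative):

    Map<P2PDataStorage.ByteArray, PersistableNetworkPayload> merged = new HashMap<>();
    stores.forEach(store -> merged.putAll(store.getMap()));   // duplicate keys simply overwrite
    // Only after merging do we create the immutable copy; putAll on an ImmutableMap would throw.
    ImmutableMap<P2PDataStorage.ByteArray, PersistableNetworkPayload> mapOfAllData =
            ImmutableMap.copyOf(merged);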
We used a delegate method in P2PService for calling readPersisted on p2PDataStorage and peerManager.
This was from old times when those classes were not injected. The complete handlers got
called from both p2PDataStorage and peerManager, but we counted only P2PService as host, so the
countdown completed before the last host was really completed, leading to a NullPointerException in
MainView (not always).
We have now removed the PersistedDataHost interface from P2PService and add P2PDataStorage and PeerManager to the hosts instead.
- Use fileName, not getFileName(), in readHistoricalStoreFromResources
- Use a complete handler once reading of all historical data is completed, where we
build the ImmutableMaps and complete the readFromResources method
Delay the boolean property setter, as otherwise our listener might never
get triggered if the property is set synchronously before listener registration.
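A hedged sketch of the delayed setter (assuming Bisq's UserThread helper; the property name is illustrative):

    // Setting the property on the user thread instead of synchronously gives callers
    // a chance to register their listener before the change event fires.
    UserThread.execute(() -> readCompletedProperty.set(true));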
Remove shutdown thread.
Cancel future in case tor is not created yet.
Add synchronous methods for tests. The new async methods lead to failing tests.
It could probably be fixed, but it's quite an effort... I don't like to add code just for
tests, but on the other hand those methods might be useful for other use cases as well.
Before, we used a thread in readFromResources and readAllPersisted. To avoid that client code needs to deal with
threading, we moved that to the PersistenceManager and changed the API accordingly, so it does not return the persisted object but calls a consumer once reading is completed.
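A hedged sketch of how the changed API could be used (the exact signature of readPersisted is an assumption here; handler names are illustrative):

    // Instead of returning the persisted object, the persisted result is handed to a
    // consumer once reading has completed on the PersistenceManager side.
    persistenceManager.readPersisted(persisted -> {
                applyPersisted(persisted);
                completeHandler.run();
            },
            completeHandler::run);   // called if nothing was persisted yet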
We checked in Connection for SupportedCapabilitiesMessage, and if a message was of that type we set the capability.
But encrypted messages are wrapped in a PrefixedSealedAndSignedMessage, so the payload is not visible as SupportedCapabilitiesMessage without decrypting it.
We need to call maybeHandleSupportedCapabilitiesMessage when decrypting the message. We do that only for direct messages, not for mailbox messages, as we likely do not have a connection open to the peer in that case (otherwise it would not be a mailbox msg), and as we don't have the connection available (we get it as an AddDataMessage broadcast from a peer) we could not apply it to the Connection of the sender.
This will be used for monitoring seed nodes.
Instead of requesting all data (in fact we cannot request all, as it is too large)
we request the number of items the node has.
This code will not have any impact atm. It will be triggered once a new monitor module gets added which
will send the GetInventoryRequest to the seeds.
Add DateSortedTruncatablePayload interface for TradeStatistics2
We first check if we need to truncate dateSortedTruncatablePayloads; if so, we sort by date and truncate such that the most recent data is kept. We define maxItems in the class implementing the interface (3000 for trade stats).
Later we apply the maxEntries check to the combined list, and if we need to truncate here as well (10 000), the dateSortedTruncatablePayloads have been added at the end so those get truncated with higher priority.
There is also somewhat wrong handling in the previous code, in that we check the max limits before the shouldTransmitPayloadToPeer filter. Should be fixed in another PR for master...
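A minimal sketch of the interface idea described above (member names follow the description; the actual interface may differ):

    public interface DateSortedTruncatablePayload {
        Date getDate();     // used to sort so that the most recent data is kept when truncating
        int maxItems();     // defined by the implementing class, e.g. 3000 for trade statistics
    }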
The number of objects is 24 more than with 1.3.9. It seems there are still either a few duplicates
with some diverging data which should not be different, or our old code to filter
duplicates had some issues. But a difference of 24 out of 75 000 objects can be ignored IMO.
1. We do not want the initial data request/response to use old trade statistics for excluded key hashes.
That's why we return an empty map in getMap.
2. We do not read the resource file as we have removed that.
3. We do not persist, as we convert the existing data and re-publish it as new data, or at startup we convert the old data to the new format and then delete the file.
Set the address prefix to empty bytes in case we know that the peer has the capability (updated version).
Batch process mailbox messages in a thread.
Refactor handling of mailbox messages
Rename (see the sketch below):
LOW to NETWORK
MID to PRIVATE_LOW_PRIO
HIGH to PRIVATE
Change delay of MID/PRIVATE_LOW_PRIO from 30 min to 2 hours (we had different data stores using it before, now it's only really low-prio stores)
Add a comment to each enum value
Make initializePersistenceManager in StorageService abstract to enforce that the concrete class defines the priority.
Change priorities in preparation for renaming to a different meaning: instead of priority we want to describe the category (private data, public data, ...). This will come in a next commit.
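A hedged sketch of the renamed enum values with comments (the enum name and the parameterless form are illustrative; the real enum may carry additional fields):

    public enum Source {
        NETWORK,            // was LOW: public network data, lowest priority
        PRIVATE_LOW_PRIO,   // was MID: private data with low priority; delay raised from 30 min to 2 hours
        PRIVATE             // was HIGH: private data, highest priority
    }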
Clean up logs.
isDebugEnabled() is not recommended if parameters are used. It causes more performance cost and adds boilerplate code.
See:
http://logging.apache.org/log4j/1.2/manual.html
"This will not incur the cost of parameter construction if debugging is disabled. On the other hand, if the logger is debug-enabled, it will incur twice the cost of evaluating whether the logger is enabled or not: once in debugEnabled and once in debug. This is an insignificant overhead because evaluating a logger takes about 1% of the time it takes to actually log."
The info icon next to the trade ID is then a warning icon (should be red but CSS is not my best friend), and when opening the trade details window we also color the missing txs red with a warn icon and tooltip.
When clicking the trash button a popup is displayed with detail info.
At failed trades there is an "undo" icon for reverting the trade back to pending (if the user wants to open mediation, etc.).
All the automatic handling of the failed trades and popups is removed, as it never worked well and just confused users...
In the next commits we will add more instructions on what a user should/can do for different error cases.
TradeManager:
- Remove all the failed checks at initPendingTrade.
- Remove tradesWithoutDepositTx
- Remove tradesForStatistics as it was never read
- Remove cleanUpAddressEntries
- Rename addTradeToClosedTrades to onTradeCompleted
TxIdTextField accepts null for the tx ID and then shows a red-colored N/A and a warning icon.
HyperlinkWithIcon exposes the icon so it is accessible for style changes.
DebugWindow was updated for one variation of the trade protocol (the other is still missing).
The trade details window now always shows all 4 mandatory txs.
Besides that, this commit has some cleanups and null pointer fixes (when testing error scenarios I got those NPEs).
Fix missing CSS color code xmr-orange; it was missing from dark mode.
Fix log message spelling/typo errors.
Removed 2 fixes from SellerStep3View so that chimp1984 can make
changes.
Remove the address validator from the XMR service address settings because
it does not support the https prefix.
I don't know why the tests failed, as I just added an overloaded method
and it should not have any impact. There is also one exception which
makes it even more obscure. I guess it's some test framework issue.
See the comment at the exception handling:
// If we remove the last argument (isNull()) tests fail. No idea why, as the broadcast method has an
// overloaded method with a nullable listener. Seems a test framework issue as it should not matter if the
// method with listener is called with a null argument or the other method with no listener. We removed the
// null value from all other calls but here we can't as it breaks the test.
It is important that we flush our queued requests
at shutdown and wait until broadcast is completed, as a maker needs to
remove his offers at shutdown.
- Add handling for the case that there are very few connections (as in
dev setup).
- Make BundleOfEnvelopes extend BroadcastMessage
- Add a complete handler for broadcaster shutdown in P2PService and
wait with the shutdown of other services until the broadcaster is
completed (see the sketch below).
- Remove the case of a repeated shutdown call on P2PService as it cannot
happen.
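A hedged sketch of the shutdown ordering (method names are illustrative, not the exact Bisq code):

    // Flush queued broadcast messages (e.g. remove-offer messages from a maker)
    // and only continue shutting down the other services once that is completed.
    broadcaster.shutDown(() -> {
        p2PDataStorage.shutDown();
        peerManager.shutDown();
        networkNode.shutDown(shutDownCompleteHandler);
    });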
On slow internet connections the current timeout makes it impossible to
get the initial data and therefore to use Bisq.
The timeout covers the request and response as well as the time
it takes to start up the network connection, which can also be quite
slow.
In my scenario, it took about 6-10 sec for the connection, and the
request is atm nearly 3 MB, which takes about 24 sec on a 1 Mbit/s
connection (note that over Tor the connection is slower, so if normal speed
is 3-5 Mbit/s, Tor's speed can be considerably lower). The response data
depends on the missing data/last update but can easily be 6 MB, which
adds about another 48 sec. So one can easily hit the 90 sec limit.
There is work in development for optimizing the initial data request,
but as that is more complex and it is not clear when it will be deployed, I
recommend that we increase the current timeout to 180 sec to avoid
the critical issue that users get "locked out".
I do not agree that disallowing Throwable in a catch makes the code
better. Unknown exceptions can be found more easily if there is an error log
at the code where they occurred.
I would prefer some flexibility, as is the case with the
IDEA code analysis, where one can edit and customize the suggestions.
Ignore annotations would help.
There have been several long delays as well as a wrong order in the
shutdown process (the wallet got shut down after the network shutdown).
Shutdown is now pretty fast, but depends on open offers and connections.
If torControlPort is specified but neither torControlPassword nor
torControlCookieFile is specified, we have cookieFile == null in
bisq.network.p2p.network.RunningTor, but RunningTor.getTor() assumes a
cookie file has been specified and tries to check that the file exists,
causing the thread to crash. Added a null check to fix this.
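A minimal sketch of such a guard (assuming Guava's Preconditions.checkArgument; the actual check in the fix may differ):

    // Only verify existence if a cookie file was actually specified.
    checkArgument(cookieFile == null || cookieFile.exists(),
            "Tor control cookie file does not exist: %s", cookieFile);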
and do not broadcast.
It is unclear why we receive expired data (some entries are very old), but a
manipulated node might produce that, and as it is only removed by the
batch process running each minute to clean out expired data, it could
still propagate. It is also an attack vector to flood the network with
outdated offers where the maker is likely not online.
Should fix https://github.com/bisq-network/bisq/issues/4026
The getAllConnections() call in the while loop always returned the same
number of nodes, so the 15 sec timeout was always triggered.
We now wait for the shutdown handlers of the connections, and if all are
called we run our handler. If it takes longer than our timeout of 3 sec,
the shutdown handler gets called by the timeout.
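A hedged sketch of the idea (illustrative only, not the exact Bisq code):

    // Wait for each connection's shutdown handler instead of polling getAllConnections(),
    // and fall back to the 3 sec timeout if not all handlers fire in time.
    CountDownLatch latch = new CountDownLatch(connections.size());
    connections.forEach(connection ->
            connection.shutDown(CloseConnectionReason.APP_SHUT_DOWN, latch::countDown));
    try {
        latch.await(3, TimeUnit.SECONDS);   // falls through after the timeout
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
    shutDownCompleteHandler.run();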
The large binary objects in p2p/src/main/resources/ are updated on every
Bisq release with the latest network data to avoid the need for new Bisq
clients to download all of this information from the network, which
would easily overload seed nodes and generally bog down the client.
This approach works well enough for its purposes, but comes with the
significant downside of storing all of this binary data in Git history
forever. The current version of these binary objects total about 65M,
and they grow with every release. In aggregate, this has caused the
total size of the repository to grow to 360M, making it cumbersome to
clone over a low-bandwidth connection, and slowing down various local Git
operations.
To avoid further exacerbating this problem, this commit sets these files
up to be tracked via Git LFS. There's nothing we can do about the 360M
of files that already exist in history, but we can ensure it doesn't
grow in this unchecked way going forward. For an understanding of how
Git LFS works, see the reference material at [1], and see also the
sample project and README at [2].
The following command was used to track the files:
$ git lfs track "p2p/src/main/resources/*BTC_MAINNET"
Tracking "p2p/src/main/resources/AccountAgeWitnessStore_BTC_MAINNET"
Tracking "p2p/src/main/resources/BlindVoteStore_BTC_MAINNET"
Tracking "p2p/src/main/resources/DaoStateStore_BTC_MAINNET"
Tracking "p2p/src/main/resources/ProposalStore_BTC_MAINNET"
Tracking "p2p/src/main/resources/SignedWitnessStore_BTC_MAINNET"
Tracking "p2p/src/main/resources/TradeStatistics2Store_BTC_MAINNET"
We are using GitHub's built-in LFS service here, and it's important to
understand that there are storage and bandwidth limits there. We have
1G total storage and 1G per month of bandwidth on the free tier. We will
certainly exceed this, and so must purchase at least one "data pack"
from GitHub, possibly two. One gets us to 50G storage and bandwidth.
In an attempt to avoid unnecessary LFS bandwidth usage, this commit also
updates the Travis CI build configuration to cache Git LFS files, such
that they are not re-downloaded on every CI build (see [3] and [4]
below). With that out of the way, the variable determining whether we
exceed the monthly limit is how many clones we have every month, and
there are many, though it's not clear how many are Travis CI and how
many are users / developers.
Tracking these files via LFS means that developers will need to have Git
LFS installed in order to properly synchronize the files. If a developer
does not have LFS installed, cloning will complete successfully and the
build will complete successfully, but the app will fail when trying to
actually load the p2p data store files. For this reason, the build has
been updated to proactively check that the p2p data store files have
been properly synchronized via LFS, and if not, the build fails with a
helpful error message. The docs/build.md instructions have also been
updated accordingly.
It is important that we make this change now, not only to avoid growing
the repository in the way described above as we have been doing now for
many releases, but also because we are now considering adding yet more
binary objects to the repository, as proposed at
https://github.com/bisq-network/projects/issues/25.
[1]: https://git-lfs.github.com
[2]: https://github.com/cbeams/lfs-test
[3]: https://docs-staging.travis-ci.com/user/customizing-the-build/#git-lfs
[4]: https://github.com/travis-ci/travis-ci/issues/8787#issuecomment-394202791