are still received from seed nodes and processed, but
as the services for processing the payloads are not
added, the data is processed inefficiently.
getMap returned a flattened map of all the maps in
all services, which can be quite large.
We now use a filtered map by calling canHandle
first. The put method was also optimized: its return
value indicates whether a service was found to add
the payload. If not, we do not invoke the listeners and
do not broadcast.
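A minimal sketch of the idea, with simplified illustrative types rather
than the actual Bisq store-service API:

```java
import java.util.*;
import java.util.stream.Collectors;

// Simplified store-service shape; names are illustrative, not the real API.
interface StoreService {
    boolean canHandle(Object payload);
    Map<String, Object> getMap();
    void putIfAbsent(String hash, Object payload);
}

class AppendOnlyDataStoreSketch {
    private final List<StoreService> services = new ArrayList<>();

    // Collect entries only from services that can handle the payload type,
    // instead of flattening the maps of all services.
    Map<String, Object> getFilteredMap(Object payload) {
        return services.stream()
                .filter(s -> s.canHandle(payload))
                .flatMap(s -> s.getMap().entrySet().stream())
                .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue, (a, b) -> a));
    }

    // Report whether any service accepted the payload, so the caller can
    // skip listener invocation and broadcast when none was found.
    boolean put(String hash, Object payload) {
        Optional<StoreService> handler = services.stream()
                .filter(s -> s.canHandle(payload))
                .findFirst();
        handler.ifPresent(s -> s.putIfAbsent(hash, payload));
        return handler.isPresent();
    }
}
```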
It would be better not to request the DAO P2P data at all, but I
don't see an easy way to do that, as the P2P network
is not aware of the type of data. A marker interface
could be used together with a flag in the request to the seed node
to indicate whether those types should be included, but that
feels too customized for a special use case. The
DAO P2P data is not that big either, so I think for now
this fix should be good enough.
We can cache an offer payload hash as soon as its `offerFeePaymentTxId`
is set. (The payload hash cannot be calculated until the object can
be transformed into a protobuf message, which requires a non-null
`offerFeePaymentTxId`.)
Another benefit is removal of the payload hash argument from the
`OfferBookListItem` constructor.
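Roughly, the cached hash looks like this (a sketch; `serialize()` is a
placeholder for the real protobuf conversion, which is the step that
requires a non-null `offerFeePaymentTxId`):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class OfferPayloadSketch {
    private final String offerFeePaymentTxId;
    private transient byte[] hash;  // transient: never serialized, computed once

    public OfferPayloadSketch(String offerFeePaymentTxId) {
        this.offerFeePaymentTxId = offerFeePaymentTxId;
    }

    public byte[] getHash() throws NoSuchAlgorithmException {
        // Lazily compute and cache; only possible once offerFeePaymentTxId is set.
        if (hash == null && offerFeePaymentTxId != null)
            hash = MessageDigest.getInstance("SHA-256").digest(serialize());
        return hash;
    }

    private byte[] serialize() {
        // Placeholder for toProtoMessage().toByteArray()
        return offerFeePaymentTxId.getBytes(StandardCharsets.UTF_8);
    }
}
```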
Changes include:
- `OfferPayload` Added `transient byte[] hash` field + getter method
(where hash is calculated and cached).
- `OfferBookService` Removed `P2PDataStorage.ByteArray hashOfPayload`
parameter from `OfferBookChangedListener` listener methods
`onAdded` & `onRemoved`. (Hash is cached in `OfferPayload`.)
- `P2PDataStorage` Added null check to `ByteArray` class constructor.
- `OfferBook` Adjusted for change to `OfferBookChangedListener`.
Also removed redundant payload hash null checks.
- `TakeOfferDataModel` and `MarketAlerts` Adjusted for change to
`OfferBookChangedListener`.
- `OfferBookListItem` Removed overloaded constructor with
`@Nullable P2PDataStorage.ByteArray hashOfPayload` parameter.
(Field value is set from cached offer payload hash.)
- `OfferBookViewModelTest` and `OfferMaker` Adjusted test and test fixture:
do not attempt to create offer payloads without an `offerFeePaymentTxId`.
mailboxMessageService.onBootstrapped() and onUpdatedDataReceived:
mailboxMessageService depends on p2PDataStorage, so we make sure
p2PDataStorage is updated before we update the mailboxMessageService state.
Change jsonrpc4j version from 1.5.3 to 1.6.0.bisq.1, forked to the Bisq
repo from the recent 1.6.0 release. The forked version changes the class
'com.googlecode.jsonrpc4j.HttpException' to be public, instead of (probably
mistakenly) package private, so we can avoid using reflection to catch
it and re-throw it as a 'bisq.network.http.HttpException'. Remove the now
unused constructors from the latter.
As part of this, upgrade Jackson to the latest stable (2.12.1) release,
since jsonrpc4j now depends on a newer version than the previous 2.8.10.
We cannot use a listener at RequestDataManager, as the order of
invocation would not be defined that way.
Instead we use P2PService as our controlling entity to call
the dependent clients in the correct order.
Create a new 'BitcoindClient' interface and a corresponding builder, to
replace the old 'com.neemre.btcdcli4j.core.client.BtcdClientImpl' class
from the btcdcli4j library. This is instantiated by jsonrpc4j using a
dynamic proxy. It provides only a cut-down version of the bitcoind RPC
API, exposing the methods 'getblock', 'getblockcount' & 'getblockhash',
as those are the only ones currently used by RpcService.
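For illustration, a jsonrpc4j-backed interface of that shape could look
roughly like this (a sketch, not the actual Bisq interface or builder;
auth setup and the DTO return types are omitted):

```java
import com.googlecode.jsonrpc4j.JsonRpcHttpClient;
import com.googlecode.jsonrpc4j.JsonRpcMethod;
import com.googlecode.jsonrpc4j.ProxyUtil;
import java.net.URL;

interface BitcoindClientSketch {
    @JsonRpcMethod("getblockcount")
    int getBlockCount();

    @JsonRpcMethod("getblockhash")
    String getBlockHash(int blockHeight);
}

class BitcoindClientDemo {
    static BitcoindClientSketch create(String rpcUrl) throws Exception {
        // jsonrpc4j builds a dynamic proxy that maps interface calls to
        // JSON-RPC requests against bitcoind.
        JsonRpcHttpClient httpClient = new JsonRpcHttpClient(new URL(rpcUrl));
        return ProxyUtil.createClientProxy(
                BitcoindClientSketch.class.getClassLoader(),
                BitcoindClientSketch.class,
                httpClient);
    }
}
```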
Add corresponding Jackson-annotated DTO classes to model the JSON
structures returned by bitcoind, very similar to the classes provided by
btcdcli4j. Note that we use Double instead of BigDecimal to represent
fractional fields (difficulties + coin amounts in BTC), as they have
more consistent Jackson (de)serialisation and appear to be able to
faithfully round-trip numeric fields produced by bitcoind. Also note
that doubles can faithfully represent any valid decimal BTC amount (that
is, with 8 d.p. of precision) up to 21 million: at that magnitude the
spacing between adjacent doubles is still below 10^-8, so every valid
amount maps to a distinct double.
For now, keep the old BtcdClientImpl instance used by RpcService in
place, as the btcdcli4j block notification daemon is dependent upon it
and would also need to be replaced.
Also add unit tests for BitcoindClient which test against sample regtest
responses, using a mock HttpURLConnection.
For all MailboxMessages we want to be sure they have an expiry date.
Add getTTL methods and TTL values.
We use 7 days for ChatMessages and 30 days for PrivateNotificationMessages.
All others use the default of 15 days.
We prefer to keep them explicit and not use a default method in
ExpirablePayload or MailboxMessage.
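The overrides are trivial; a sketch under the assumption that getTTL
returns milliseconds, with a simplified interface standing in for
ExpirablePayload/MailboxMessage:

```java
import java.util.concurrent.TimeUnit;

interface HasTtl {
    long getTTL();
}

class ChatMessageSketch implements HasTtl {
    @Override
    public long getTTL() {
        return TimeUnit.DAYS.toMillis(7);   // chat messages expire sooner
    }
}

class PrivateNotificationMessageSketch implements HasTtl {
    @Override
    public long getTTL() {
        return TimeUnit.DAYS.toMillis(30);  // private notifications live longest
    }
}

// All other mailbox messages return TimeUnit.DAYS.toMillis(15).
```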
The value in PrefixedSealedAndSignedMessage will not be used, as the
sender uses the TTL of the payload and passes it to the MailboxStoragePayload,
where we store it in the extraMap. That way we can use different TTLs even though
the payload message is encrypted and we would not be able to look it up.
instead of decryptedMessageWithPubKey.
Here we change the behaviour a bit: we now also check whether the
TradeMessage is a MailboxMessage. As that method is only called
from the MailboxMessageService domain, we never get a non-MailboxMessage
here anyway; anything else would have been a bug.
Remove removeMailboxMsg(DecryptedMessageWithPubKey decryptedMessageWithPubKey) as it is not used anymore.
Seed nodes republish the persisted mailbox messages after
startup in chunks of 50 items with a 2 minute delay.
Move the check for hasSequenceNrIncreased in addProtectedStorageEntry
earlier so we return sooner. This is a very common return case when
we receive outdated data (like republished mailbox data we have
already received).
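A sketch of the reordered guard, with simplified types in place of the
actual P2PDataStorage code:

```java
import java.util.HashMap;
import java.util.Map;

class SequenceGuardSketch {
    private final Map<String, Integer> sequenceNumberMap = new HashMap<>();

    boolean addProtectedStorageEntry(String hashOfPayload, int sequenceNumber) {
        // Moved up: outdated data (e.g. republished mailbox entries we have
        // already received) is the most common rejection, so we return before
        // the more expensive signature and ownership checks.
        if (!hasSequenceNrIncreased(sequenceNumber, hashOfPayload))
            return false;

        sequenceNumberMap.put(hashOfPayload, sequenceNumber);
        // ... signature checks, map update, listener notification
        return true;
    }

    private boolean hasSequenceNrIncreased(int seqNr, String hashOfPayload) {
        Integer stored = sequenceNumberMap.get(hashOfPayload);
        return stored == null || seqNr > stored;
    }
}
```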
Mark logs with ## for easier find/replace during dev testing.
Use a different TTL for lower-priority mailbox messages like AckMessages.
As we cannot add a field without breaking signatures, we
need to use the extraMap in MailboxStoragePayload.
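For example, the TTL could be read back from the extraMap like this (a
sketch; the "ttl" key name and the 15-day default are assumptions based
on the description above, not the actual constants):

```java
import java.util.Map;
import java.util.concurrent.TimeUnit;

class MailboxTtlSketch {
    // Reading the TTL from the extraMap leaves the signed payload fields
    // untouched, so existing signatures remain valid.
    static long getTTL(Map<String, String> extraMap) {
        if (extraMap != null && extraMap.containsKey("ttl")) {
            try {
                return Long.parseLong(extraMap.get("ttl"));
            } catch (NumberFormatException ignored) {
                // fall through to the default below
            }
        }
        return TimeUnit.DAYS.toMillis(15);  // default TTL
    }
}
```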
The uid is not perfect like a cryptographic hash, but it can be
considered safe enough to avoid collisions. The list got filled
with duplicates, so it should have been a HashSet anyway.
We cannot use the hash, as it is not available in the remove method.
We might refactor this in the future to get rid of the problematic uid as
key, but that will require a bit more refactoring in the client code as well,
as we do not pass around the outer envelope data but only the decrypted
data.
- Add protectedStorageEntry from persisted mailbox messages to
P2PDataStorage at startup. This ensures that we add those keys to excludedKeys,
which helps to reduce load on seed nodes at the initial data response.
- Refactoring at removeFromMapAndDataStore:
- Add trace logs
- Add more comments
We added the capability in 1.4.0 and have
enforced version 1.5.1, so no traders can use that old
version anymore and the capability check is not
needed any longer.
We handle the connections by INITIAL_DATA_EXCHANGE, which
covers the seed nodes as well. Having a parallel routine
is risky and makes things more complex.
In the 3rd attempt we filter for
INITIAL_DATA_EXCHANGE peers.
Before, we excluded 2 types, and as PEER had already been
filtered out earlier, we would effectively only look up SEED_NODE.
This was only called by non-seed nodes.
Remove unnecessary setPeerType calls; ConnectionState handles that.
Only PeerManager sets isSeedNode, as we do not have the
required dependency in ConnectionState.
The numbers for delivered response size and number of items did not match
up, as we did not account for the overhead of the ProtectedStorageEntry
(pub key + sig) and estimated the size by taking only the first item and
multiplying it. A measurement put the exact calculation at about 20 ms
(toProtoMessage().getSerializedSize() has some cost).
I guess that is acceptable to get correct metrics.
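The exact accounting is just a sum over the serialized entries; a sketch
with a stub standing in for toProtoMessage().getSerializedSize():

```java
import java.util.List;

class ResponseSizeSketch {
    interface Entry {
        int getSerializedSize();  // includes the pub key + signature overhead
    }

    // Exact sum instead of firstItemSize * count; measured at roughly 20 ms
    // per response, which is acceptable for correct metrics.
    static long exactResponseSize(List<Entry> entries) {
        return entries.stream().mapToLong(Entry::getSerializedSize).sum();
    }
}
```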
Return getMapOfAllData() in case HistoricalDataStoreService.getMap is called.
HistoricalDataStoreService.getMap should not be used by domain
clients; the custom methods getMapOfAllData,
getMapOfLiveData or getMapSinceVersion should be used instead.
As we have not removed the calls from ProposalService and
other domains, we return getMapOfAllData() instead of the live map.
This was prevented earlier for performance reasons, but in case of
an illegal access it is safer to return all data instead of
live data only.
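A sketch of the fallback with simplified types; the real service keeps
live data and read-only historical stores separately:

```java
import java.util.HashMap;
import java.util.Map;

class HistoricalStoreSketch {
    private final Map<String, Object> liveData = new HashMap<>();
    private final Map<String, Object> historicalData = new HashMap<>();

    // Domain clients should call getMapOfAllData, getMapOfLiveData or
    // getMapSinceVersion; if the generic getMap is used anyway, returning
    // all data is the safe fallback.
    Map<String, Object> getMap() {
        return getMapOfAllData();
    }

    Map<String, Object> getMapOfAllData() {
        Map<String, Object> all = new HashMap<>(historicalData);
        all.putAll(liveData);  // live entries win on key collisions
        return all;
    }
}
```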
p2PService.getP2PDataStorage().getAppendOnlyDataStoreMap().
p2PService.getP2PDataStorage().getAppendOnlyDataStoreMap() iterates
over all services, including the historical data store service. It used the
getMap method, which should not be used on the historical data store service,
as it is not clear there whether the live data or all data should be accessed.
If one starts with --daoActivated=false, there is no service
set up in one of the data store services, so it never calls
the result handler and the app never starts up.
Call flush at openOfferManager shutdown.
Remove unused method.
Force the broadcaster to send out immediately, otherwise we could
have a 2 sec delay until the bundled messages are sent out.
while hasPendingRequest is true
- Throw an exception if we get a request before the previous request has
terminated (happens with priceFee at startup on regtest, as startup is
fast, but can also happen on mainnet)
- Improve shutDown
- Improve finally clause
These are failing on the tip of release/1.5.0 currently due to extra
validation added to PersistenceManager, causing the build to fail upon
merging upstream. Add missing PersistenceManager.shutDown calls to the
tearDown methods of the affected tests to fix.
It should only be needed in case we get the historical data from resources,
but as I have seen multiple times that some nodes have duplicated entries
in the live data, I think it is safer to always clean up. If no entries are
removed the call is very cheap. Even with 60k entries to be pruned it takes
only about 20 ms.
Fix issue with immutable maps.
As we might have the same keys in multiple maps and merge those into one
map, we cannot use an immutable map while merging. Instead we copy the
merged map into an immutable map at the end.
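A sketch of the merge, assuming Guava's ImmutableMap (whose builder
rejects duplicate keys, which is exactly the problem here):

```java
import com.google.common.collect.ImmutableMap;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class MapMergeSketch {
    static ImmutableMap<String, Object> merge(List<Map<String, Object>> maps) {
        // Merge into a mutable map first so duplicate keys simply overwrite,
        // then copy the result into an immutable map at the end.
        Map<String, Object> merged = new HashMap<>();
        maps.forEach(merged::putAll);
        return ImmutableMap.copyOf(merged);
    }
}
```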
We used a delegate method in P2PService for calling readPersisted on p2PDataStorage and peerManager.
This was from old times when those classes were not yet injected. The complete handlers got
called from both p2PDataStorage and peerManager, but we counted only P2PService as host, so the
countdown completed before the last host had really completed, leading to a NullPointerException
in MainView (not always).
We have now removed the PersistedDataHost interface from P2PService and add P2PDataStorage and PeerManager to the hosts instead.
- Use fileName, not getFileName(), in readHistoricalStoreFromResources
- Use a complete handler once reading of all historical data is completed;
there we build the ImmutableMaps and complete the readFromResources method
Delay the boolean property setter, as otherwise our listener might never
get triggered if the property is set synchronously before listener registration.
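A sketch of the deferred set, with a generic Executor standing in for the
single-threaded user-thread abstraction:

```java
import java.util.concurrent.Executor;
import java.util.function.Consumer;

class DelayedSetterSketch {
    private final Executor userThread;  // single-threaded executor assumed
    private Consumer<Boolean> listener;

    DelayedSetterSketch(Executor userThread) {
        this.userThread = userThread;
    }

    void setValueDeferred(boolean newValue) {
        // Defer the set to the executor queue so a listener registered later
        // in the same call stack still gets notified.
        userThread.execute(() -> {
            if (listener != null)
                listener.accept(newValue);
        });
    }

    void addListener(Consumer<Boolean> listener) {
        this.listener = listener;
    }
}
```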
Remove the shutdown thread.
Cancel the future in case tor is not created yet.
Add synchronous methods for tests; the new async methods lead to failing tests.
They could probably be fixed, but it is quite an effort... I don't like adding code
just for tests, but on the other hand those methods might be useful for other use cases as well.
Before, we used a thread in readFromResources and readAllPersisted. To avoid client code having
to deal with threading, we moved that into the PersistenceManager and changed the API accordingly,
so it does not return the persisted object but calls a consumer once reading has completed.
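A sketch of the changed API shape (the actual PersistenceManager
signatures may differ):

```java
import java.util.function.Consumer;

class PersistenceManagerSketch<T> {
    // The manager owns the reader thread; the caller just passes a consumer
    // that is invoked once reading has completed.
    void readPersisted(Consumer<T> resultHandler, Runnable orElse) {
        new Thread(() -> {
            T persisted = readFromDisk();  // placeholder for the actual read
            if (persisted != null)
                resultHandler.accept(persisted);
            else
                orElse.run();
        }, "PersistenceManager-read").start();
    }

    private T readFromDisk() {
        return null;  // stub
    }
}
```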
We checked in Connection for SupportedCapabilitiesMessage, and if a message was of that type we set the capability.
But encrypted messages are wrapped in a PrefixedSealedAndSignedMessage, so the payload is not visible as a SupportedCapabilitiesMessage without decrypting it.
We need to call maybeHandleSupportedCapabilitiesMessage when decrypting the message. We do that only for direct messages, not for mailbox messages, as in that case we likely do not have a connection open to the peer (otherwise it would not be a mailbox message), and since we do not have the connection available (we get it as an AddDataMessage broadcast from a peer), we could not apply it to the Connection of the sender.
This will be used for monitoring seed nodes.
Instead of requesting all data (in fact we cannot request all of it, as it is too large),
we request the number of items the node has.
This code has no impact at the moment. It will be triggered once a new monitor module gets
added which will send the GetInventoryRequest to the seeds.
Add a DateSortedTruncatablePayload interface for TradeStatistics2.
We first check if we need to truncate dateSortedTruncatablePayloads; if so, we sort by date and truncate so that the most recent data is delivered. The maxItems is defined in the class implementing the interface (3000 for trade stats).
Later we apply the maxEntries check to the combined list, and if we need to truncate there as well (10 000 items), the dateSortedTruncatablePayloads were added at the end, so they get truncated with higher priority.
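A sketch of the interface and the date-based truncation, with names
following the description above but simplified types otherwise:

```java
import java.util.Comparator;
import java.util.Date;
import java.util.List;
import java.util.stream.Collectors;

interface DateSortedTruncatablePayload {
    Date getDate();
    int maxItems();  // e.g. 3000 for trade statistics
}

class TruncationSketch {
    // Keep the most recent maxItems entries by sorting on date first.
    static List<DateSortedTruncatablePayload> truncate(
            List<DateSortedTruncatablePayload> payloads, int maxItems) {
        if (payloads.size() <= maxItems)
            return payloads;
        return payloads.stream()
                .sorted(Comparator.comparing(DateSortedTruncatablePayload::getDate).reversed())
                .limit(maxItems)
                .collect(Collectors.toList());
    }
}
```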
There is also some incorrect handling in the previous code: we check the max limits before the shouldTransmitPayloadToPeer filter. That should be fixed in another PR for master...
The number of objects is 24 more than with 1.3.9. It seems there are still either a few duplicates
with some diverging data which should not differ, or our old code to filter
duplicates had some issues. But a difference of 24 out of 75 000 objects can be ignored IMO.
1. We do not want the initial data request/response to use old trade statistics for the excluded key hashes.
That is why we return an empty map in getMap.
2. We do not read the resource file, as we have removed it.
3. We do not persist, as we convert the existing data and re-publish it as new data; at startup we convert the old data to the new format and then delete the file.