Broadcasting consumes most of the threads but has lower priority than other messages being sent.
By separating its thread pool from the common sendMessage executor, we reduce the risk that a
burst of broadcasts exhausts the pool and drops sendMessage tasks.
Signed-off-by: HenrikJannsen <boilingfrog@gmx.com>
The previously used newCachedThreadPool carries a higher risk of execution exceptions when its limits are exceeded.
Originally we had only one executor with a corePoolSize of 15, a maximumPoolSize of 30 and a queueCapacity equal to the maximumPoolSize.
This was risky: once the 15 core threads were busy, new message and connection-creation tasks
queued up with potentially significant delay before being served, leading to timeouts.
Now we use (with maxConnections of 12) a corePoolSize of 24, a maximumPoolSize of 36 and a queueCapacity of 10.
This gives considerable headroom. We have also split the work across two distinct executors.
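For illustration, a minimal sketch of that sizing with a plain
java.util.concurrent.ThreadPoolExecutor (the helper and thread names here are
placeholders, not the actual Bisq code):

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    final class NetworkExecutors {
        // Sketch of the new sizing: core = 2 * maxConnections, max = 3 * maxConnections, queue = 10.
        static ThreadPoolExecutor newExecutor(String name, int maxConnections) {
            return new ThreadPoolExecutor(
                    maxConnections * 2,            // corePoolSize, e.g. 24 for maxConnections = 12
                    maxConnections * 3,            // maximumPoolSize, e.g. 36
                    60, TimeUnit.SECONDS,
                    new ArrayBlockingQueue<>(10),  // queueCapacity of 10
                    runnable -> new Thread(runnable, name));
        }
    }

    // Two distinct pools so a burst of broadcasts cannot starve sendMessage tasks:
    //   ThreadPoolExecutor sendMessageExecutor = NetworkExecutors.newExecutor("NetworkNode.sendMessage", 12);
    //   ThreadPoolExecutor broadcastExecutor   = NetworkExecutors.newExecutor("Broadcaster", 12);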
Signed-off-by: HenrikJannsen <boilingfrog@gmx.com>
Wrap nodeAddressProperty.set in UserThread.execute as it is a JavaFX API. We also call startServer inside that execute scope to maintain the order of calls.
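The pattern, roughly (a self-contained sketch; the real TorNetworkNode fields
and startServer signature differ):

    import javafx.beans.property.ObjectProperty;
    import javafx.beans.property.SimpleObjectProperty;

    class HiddenServiceReadyHandlerSketch {
        // Stand-in for the real nodeAddressProperty (which holds a NodeAddress).
        private final ObjectProperty<String> nodeAddressProperty = new SimpleObjectProperty<>();

        void onHiddenServicePublished(String nodeAddress) {
            UserThread.execute(() -> {
                // JavaFX property, so mutate it on the user thread only.
                nodeAddressProperty.set(nodeAddress);
                // Kept in the same runnable to preserve the original call order.
                startServer();
            });
        }

        private void startServer() {
            // omitted in this sketch
        }
    }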
Signed-off-by: HenrikJannsen <boilingfrog@gmx.com>
Remove torStartupFuture as it was not needed.
Make executorService private and add a shutdownNow call.
Signed-off-by: HenrikJannsen <boilingfrog@gmx.com>
Make sendMessage package-private so it is not used from client code.
Remove the stacktrace print.
The caller in NetworkNode would report onSuccess in the future callback because we do not escalate the exception but only handle it inside handleException.
Signed-off-by: HenrikJannsen <boilingfrog@gmx.com>
When an ArrayBlockingQueue is used (as is the case with Utilities.getListeningExecutorService),
the maxPoolSize has practically no effect: extra threads beyond the core pool size are only
created once the queue is full, so the pool effectively never grows past the core pool size.
Thus we have been limited to 15 threads for message sending and connection creation.
This was likely one reason why seed nodes stop accepting new connections when the pool is exhausted.
A slow message send can block a thread for 1-3 minutes.
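A small standalone demo of that ThreadPoolExecutor behaviour (plain JDK
semantics, not Bisq code):

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    public class QueueBeforeGrowthDemo {
        public static void main(String[] args) throws InterruptedException {
            // Old sizing: core 15, max 30, queueCapacity == maximumPoolSize.
            ThreadPoolExecutor pool = new ThreadPoolExecutor(
                    15, 30, 60, TimeUnit.SECONDS, new ArrayBlockingQueue<>(30));

            // 40 long-running tasks: 15 occupy the core threads, 25 wait in the queue.
            // Threads beyond the core pool size would only be created once the queue
            // is full, i.e. from the 46th concurrent task onward.
            for (int i = 0; i < 40; i++) {
                pool.execute(() -> sleep(3000));
            }
            Thread.sleep(500);
            System.out.println("pool size = " + pool.getPoolSize()); // prints 15, not 30
            pool.shutdownNow();
        }

        private static void sleep(long ms) {
            try {
                Thread.sleep(ms);
            } catch (InterruptedException ignored) {
            }
        }
    }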
Signed-off-by: HenrikJannsen <boilingfrog@gmx.com>
I want to avoid the risk of not calling error handlers/listeners in those cases,
as I am not 100% sure whether that could have unintended effects.
Signed-off-by: HenrikJannsen <boilingfrog@gmx.com>
Rename to expectedInitialDataResponses as we compare it with numInitialDataResponses.
Add a comment to make it clearer how it is used.
Let the LiteNode increment it only if getBlocks gets called (e.g. when blocks are missing).
Signed-off-by: HenrikJannsen <boilingfrog@gmx.com>
logging of: `The requester had version x.x.x. Our historical data
store has version y.y.y As the requester version is not older as our
historical store we do not add the data to the result map.`
This takes up many screenfuls of log for every client getDataRequest.
Replace 'BiFunction<T, U, Boolean>' with the primitive specialisation
'BiPredicate<T, U>' in HashCashService & FilterManager.
As part of this, replace similar predicate constructs found elsewhere.
NOTE: This touches the DAO packages (trivially @ VoteResultService).
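For illustration, the shape of the change (the verification logic here is a
placeholder, not the actual HashCashService code):

    import java.util.function.BiFunction;
    import java.util.function.BiPredicate;

    class PredicateMigrationExample {
        // Placeholder standing in for the real challenge/solution check.
        static boolean verify(byte[] challenge, byte[] solution) {
            return solution != null && solution.length > 0;
        }

        // Before: boxed Boolean result, invoked via apply(..).
        static final BiFunction<byte[], byte[], Boolean> OLD_STYLE = PredicateMigrationExample::verify;

        // After: primitive specialisation, invoked via test(..), no autoboxing.
        static final BiPredicate<byte[], byte[]> NEW_STYLE = PredicateMigrationExample::verify;
    }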
The DAO P2P data payloads are still received from seed nodes and processed,
but as the services for processing those payloads are not added, the data is
processed inefficiently.
getMap returned a flattened map of all maps in all services, which can be quite large.
We now use a filtered map, calling canHandle first. The put method was also
optimized to indicate in its return value whether a service was found to add
the payload. If not, we do not invoke the listeners and do not broadcast.
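A rough sketch of the new shape (names and generics simplified; the real
store service API differs in detail):

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Sketch of an aggregator over data store services that filters by canHandle
    // and reports from put(..) whether any service accepted the payload.
    class StoreServiceAggregator<K, V> {
        interface StoreService<K, V> {
            boolean canHandle(V payload);
            Map<K, V> getMap();
            void putIfAbsent(K key, V payload);
        }

        private final List<StoreService<K, V>> services = new ArrayList<>();

        // Filtered view instead of one flattened map over all services.
        Map<K, V> getMapForPayload(V payload) {
            Map<K, V> result = new HashMap<>();
            services.stream()
                    .filter(service -> service.canHandle(payload))
                    .forEach(service -> result.putAll(service.getMap()));
            return result;
        }

        // Returns true only if a service was found; callers skip listener
        // notification and broadcasting when this returns false.
        boolean put(K key, V payload) {
            for (StoreService<K, V> service : services) {
                if (service.canHandle(payload)) {
                    service.putIfAbsent(key, payload);
                    return true;
                }
            }
            return false;
        }
    }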
Not requesting the DAO P2P data at all would be better, but I don't see an
easy way to do that, as the P2P network is not aware of the type of data.
A marker interface could be used together with a flag in the request to the
seed node to indicate whether those types should be included, but that feels
too customized for a special use case. The DAO P2P data is not that big
anyway, so I think this fix should be good enough for now.
We can cache an offer payload hash as soon as its `offerFeePaymentTxId`
is set. (The payload hash cannot be calculated until the object can
be transformed into a protobuf message, which requires a non-null
offerFeePaymentTxId.)
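Roughly, the cached hash looks like this (a simplified sketch of the
OfferPayload change; the exact hashing call may differ):

    // Transient so it is excluded from protobuf serialisation; computed lazily.
    private transient byte[] hash;

    public byte[] getHash() {
        if (hash == null && offerFeePaymentTxId != null) {
            // toProtoMessage() requires a non-null offerFeePaymentTxId, so we can
            // only calculate and cache the hash once that field is set.
            hash = P2PDataStorage.get32ByteHash(this);
        }
        return hash;
    }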
Another benefit is removal of the payload hash argument from the
`OfferBookListItem` constructor.
Changes include:
- `OfferPayload` Added `transient byte[] hash` field + getter method
(where hash is calculated and cached).
- `OfferBookService` Removed `P2PDataStorage.ByteArray hashOfPayload`
parameter from `OfferBookChangedListener` listener methods
`onAdded` & `onRemoved`. (Hash is cached in `OfferPayload`.)
- `P2PDataStorage` Added null check to `ByteArray` class constructor.
- `OfferBook` Adjusted for change to `OfferBookChangedListener`.
Also removed redundant payload hash null checks.
- `TakeOfferDataModel` and `MarketAlerts` Adjusted for change to
`OfferBookChangedListener`.
- `OfferBookListItem` Removed overloaded constructor with
`@Nullable P2PDataStorage.ByteArray hashOfPayload` parameter.
(Field value is set from cached offer payload hash.)
- `OfferBookViewModelTest` and `OfferMaker` Adjusted test and test fixture:
do not attempt to create offer payloads without an `offerFeePaymentTxId`.
Call mailboxMessageService.onBootstrapped() and onUpdatedDataReceived() after
the corresponding p2PDataStorage calls.
mailboxMessageService depends on p2PDataStorage, so we make sure p2PDataStorage
is updated before we update the mailboxMessageService state.
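In P2PService the ordering then looks roughly like this (the callback names on
the collaborators are assumptions based on the description above):

    @Override
    public void onUpdatedDataReceived() {
        // p2PDataStorage must be brought up to date first ...
        p2PDataStorage.onBootstrapped();
        // ... before the mailbox domain, which depends on it, updates its state.
        mailboxMessageService.onUpdatedDataReceived();
    }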
Change jsonrpc4j version from 1.5.3 to 1.6.0.bisq.1, forked to the Bisq
repo from the recent 1.6.0 release. The forked version changes the class
'com.googlecode.jsonrpc4j.HttpException' to be public instead of (probably
mistakenly) package-private, so we can avoid using reflection to catch
it and re-throw it as a 'bisq.network.http.HttpException'. Remove the now
unused constructors from the latter.
As part of this, upgrade Jackson to the latest stable (2.12.1) release,
since jsonrpc4j now depends on a newer version than the previous 2.8.10.
We cannot use a listener at RequestDataManager as the order would not
be defined if we did so.
So we use P2PService as our controlling entity to call
further clients in the correct order.
Create a new 'BitcoindClient' interface and a corresponding builder, to
replace the old 'com.neemre.btcdcli4j.core.client.BtcdClientImpl' class
from the btcdcli4j library. This is instantiated by jsonrpc4j using a
dynamic proxy. It provides only a cut-down version of the bitcoind RPC
API, exposing the methods 'getblock', 'getblockcount' & 'getblockhash',
as they are the only ones currently being used by RpcService.
Add corresponding Jackson-annotated DTO classes to model the JSON
structures returned by bitcoind, very similar to the classes provided by
btcdcli4j. Note that we use Double instead of BigDecimal to represent
fractional fields (difficulties + coin amounts in BTC), as they have
more consistent Jackson (de)serialisation and appear to be able to
faithfully round-trip numeric fields produced by bitcoind. Also note
that doubles can faithfully represent any valid decimal BTC amount (that
is, with 8 d.p. of precision) up to 21 million.
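Something along these lines, combining the proxy-built interface with a
Jackson DTO (a sketch of the shape only; the real interface, annotations and
DTO fields differ):

    import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
    import com.fasterxml.jackson.annotation.JsonProperty;
    import com.googlecode.jsonrpc4j.JsonRpcHttpClient;
    import com.googlecode.jsonrpc4j.JsonRpcMethod;
    import com.googlecode.jsonrpc4j.ProxyUtil;
    import java.net.URL;
    import java.util.List;

    // Cut-down RPC surface; jsonrpc4j supplies the implementation as a dynamic proxy.
    interface BitcoindClientSketch {
        @JsonRpcMethod("getblockcount")
        Integer getBlockCount();

        @JsonRpcMethod("getblockhash")
        String getBlockHash(Integer blockHeight);

        @JsonRpcMethod("getblock")
        RawBlockSketch getBlock(String headerHash, Integer verbosity);
    }

    // Minimal Jackson-annotated DTO; the real classes model more of the JSON structure.
    @JsonIgnoreProperties(ignoreUnknown = true)
    class RawBlockSketch {
        @JsonProperty("hash")
        public String hash;
        @JsonProperty("height")
        public Integer height;
        @JsonProperty("difficulty")
        public Double difficulty;   // Double rather than BigDecimal, as noted above
        @JsonProperty("tx")
        public List<Object> txs;    // simplified; the real DTOs model the tx objects
    }

    class BitcoindClientSketchFactory {
        static BitcoindClientSketch create(String rpcUrl) throws Exception {
            JsonRpcHttpClient httpClient = new JsonRpcHttpClient(new URL(rpcUrl));
            return ProxyUtil.createClientProxy(
                    BitcoindClientSketch.class.getClassLoader(), BitcoindClientSketch.class, httpClient);
        }
    }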
For now, keep the old BtcdClientImpl instance used by RpcService in
place, as the btcdcli4j block notification daemon is dependent upon it
and would also need to be replaced.
Also add unit tests for BitcoindClient which test against sample regtest
responses, using a mock HttpURLConnection.
For all MailboxMessage implementations we want to be sure they have an expiry date.
Add getTTL methods and TTL values.
We use 7 days for ChatMessages and 30 days for PrivateNotificationMessages.
All others use the default of 15 days.
We prefer to keep them explicit and not use a default method in
ExpirablePayload or MailboxMessage.
The value in PrefixedSealedAndSignedMessage will not be used, as the sender
uses the TTL of the payload and passes it to the MailboxStoragePayload, where
we store it in the extraMap. That way we can use a different TTL even though
the payload message is encrypted and we would not be able to look it up.
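For example, a TTL declaration then looks roughly like this (a self-contained
sketch against the ChatMessage case; the real interfaces and classes differ):

    import java.util.concurrent.TimeUnit;

    interface ExpirablePayloadSketch {
        long getTTL();
    }

    class ChatMessageSketch implements ExpirablePayloadSketch {
        // 7 days for ChatMessages (PrivateNotificationMessages use 30 days, default is 15).
        public static final long TTL = TimeUnit.DAYS.toMillis(7);

        @Override
        public long getTTL() {
            return TTL;
        }
    }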
instead of decryptedMessageWithPubKey.
Here we change the behaviour a bit, as we now also check whether the
TradeMessage is a MailboxMessage. As that method is only called from the
MailboxMessageService domain we never get a non-MailboxMessage here anyway,
and if we did it would have been a bug.
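Roughly (a sketch of the added check; the surrounding code is omitted):

    // Only a MailboxMessage needs to be removed from the mailbox domain.
    if (tradeMessage instanceof MailboxMessage) {
        mailboxMessageService.removeMailboxMsg((MailboxMessage) tradeMessage);
    }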
Remove removeMailboxMsg(DecryptedMessageWithPubKey decryptedMessageWithPubKey) as it is not used anymore.