is expected as we shut down the executor immediately.
If the exception is not thrown at shutdown, or it is not a RejectedExecutionException,
we log a warning and re-throw it (to avoid changing the behaviour
of the current version). The exception is likely not handled by callers and goes up to
the uncaught exception handler.
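A minimal sketch of that handling, assuming a hypothetical submit wrapper and shutdown flag (names here are illustrative, not the actual Bisq classes):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.RejectedExecutionException;

// Illustrative sketch only; field and method names are assumptions.
class TaskSubmitter {
    private final ExecutorService executor;
    private volatile boolean shutdownInProgress;

    TaskSubmitter(ExecutorService executor) {
        this.executor = executor;
    }

    void shutdown() {
        shutdownInProgress = true;
        executor.shutdownNow();
    }

    void submit(Runnable task) {
        try {
            executor.execute(task);
        } catch (RuntimeException e) {
            // A RejectedExecutionException during shutdown is expected and ignored.
            if (shutdownInProgress && e instanceof RejectedExecutionException) {
                return;
            }
            // Otherwise log a warning and re-throw to keep the current behaviour:
            // the exception likely reaches the uncaught exception handler.
            System.err.println("Unexpected exception at task submission: " + e);
            throw e;
        }
    }
}
```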
Prevent a "URI is not hierarchical" IllegalArgumentException from the
expression, 'new File(dirUrl.toURI())', which occurs on Linux & Windows
when listing the resource directory of BSQ blocks. Adapt a solution from
StackOverflow, which uses two separate code paths depending on the
environment, into the new method 'FileUtil::listResourceDirectory'.
The issue is caused by the resource URL taking one of two forms:
file:/Users/[USER]/Java/bisq/bisq/p2p/out/production/resources/BsqBlocks_BTC_MAINNET
jar:file:...p2p.jar!/BsqBlocks_BTC_MAINNET
depending on whether the system is OSX or not.
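A rough sketch of the two code paths, loosely following the StackOverflow approach; the class and helper names here are placeholders, and the real 'FileUtil::listResourceDirectory' may differ in detail:

```java
import java.io.File;
import java.io.IOException;
import java.net.JarURLConnection;
import java.net.URISyntaxException;
import java.net.URL;
import java.util.ArrayList;
import java.util.Enumeration;
import java.util.List;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;

// Sketch only; the real FileUtil::listResourceDirectory may differ.
public class ResourceDirLister {
    public static List<String> listResourceDirectory(String directoryName)
            throws IOException, URISyntaxException {
        URL dirUrl = ResourceDirLister.class.getClassLoader().getResource(directoryName);
        if (dirUrl == null) {
            throw new IOException("Resource directory not found: " + directoryName);
        }
        List<String> names = new ArrayList<>();
        if ("file".equals(dirUrl.getProtocol())) {
            // Plain 'file:' URL (e.g. the out/production/resources path):
            // listing the directory works directly.
            File[] files = new File(dirUrl.toURI()).listFiles();
            if (files != null) {
                for (File f : files) {
                    names.add(f.getName());
                }
            }
        } else if ("jar".equals(dirUrl.getProtocol())) {
            // 'jar:' URL (resources packed inside the jar): iterate the jar
            // entries that start with the directory prefix.
            JarURLConnection connection = (JarURLConnection) dirUrl.openConnection();
            connection.setUseCaches(false);
            try (JarFile jar = connection.getJarFile()) {
                String prefix = directoryName + "/";
                Enumeration<JarEntry> entries = jar.entries();
                while (entries.hasMoreElements()) {
                    String name = entries.nextElement().getName();
                    if (name.startsWith(prefix) && !name.equals(prefix)) {
                        names.add(name.substring(prefix.length()));
                    }
                }
            }
        }
        return names;
    }
}
```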
Move the timeout before the shutdown sequence starts and use a Timer thread instead of
the UserThread, so that the timeout still gets triggered even if the UserThread is blocked.
Reduce the timeout from 20 sec. to 10 sec.
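A minimal sketch of the idea, with a hypothetical gracefulShutDown flow (not the actual Bisq shutdown code):

```java
import java.util.Timer;
import java.util.TimerTask;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch only; the real shutdown code differs.
class ShutdownExample {
    private final AtomicBoolean shutdownCompleted = new AtomicBoolean(false);

    void gracefulShutDown(Runnable onShutdownCompleted) {
        // Start the timeout on a plain Timer thread *before* the shutdown
        // sequence, so it still fires even if the UserThread is blocked.
        Timer timeoutTimer = new Timer("shutdown-timeout", true);
        timeoutTimer.schedule(new TimerTask() {
            @Override
            public void run() {
                if (shutdownCompleted.compareAndSet(false, true)) {
                    System.err.println("Graceful shutdown did not complete within 10 sec.");
                    onShutdownCompleted.run();
                }
            }
        }, TimeUnit.SECONDS.toMillis(10));

        // ... run the actual shutdown sequence here ...

        if (shutdownCompleted.compareAndSet(false, true)) {
            timeoutTimer.cancel();
            onShutdownCompleted.run();
        }
    }
}
```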
1. Reorder the PoW fields in the 'Filter' proto by field index, instead
of contextually.
2. Deduplicate expression for 'pow' & replace if-block with boolean op
to simplify 'FilterManager::isProofOfWorkValid'.
3. Avoid slightly confusing use of null char as a separator to prevent
hashing collisions in 'EquihashProofOfWorkService::getChallenge'. Use
comma separator and escape the 'itemId' & 'ownerId' arguments instead.
(based on PR #5858 review comments)
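A minimal sketch of the collision-safe challenge construction described in point 3; the helper names are assumptions and the real 'EquihashProofOfWorkService::getChallenge' may differ:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Sketch only; not the actual EquihashProofOfWorkService code.
class ChallengeExample {
    static byte[] getChallenge(String itemId, String ownerId) {
        // Escape the escape char and any commas so the concatenation of
        // (itemId, ownerId) is unambiguous, then join with a comma separator.
        String escaped = escape(itemId) + "," + escape(ownerId);
        return sha256(escaped.getBytes(StandardCharsets.UTF_8));
    }

    private static String escape(String s) {
        return s.replace("\\", "\\\\").replace(",", "\\,");
    }

    private static byte[] sha256(byte[] input) {
        try {
            return MessageDigest.getInstance("SHA-256").digest(input);
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }
}
```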
Remove all the 'challengeValidation', 'difficultyValidation' and
'testDifficulty' BiPredicate method params from 'HashCashService' &
'ProofOfWorkService', to simplify the API. These were originally
included to aid testing, but turned out to be unnecessary.
Patches committed on behalf of @chimp1984.
Change the type of the 'difficulty' field in the Filter & ProofOfWork
proto objects from int32/bytes to double and make it use a linear scale,
in place of the original logarithmic scale which counts the (effective)
number of required zeros.
This allows fine-grained difficulty control for Equihash, though for
Hashcash it simply rounds up to the nearest power of 2 internally.
NOTE: This is a breaking change to PoW & filter serialisation (unlike
the earlier PR commits), as the proto field version nums aren't updated.
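As an illustration of the scale change, a small sketch (not the actual Bisq code) of how a linear difficulty maps back to the leading-zeros count that Hashcash can realise, rounding up to the next power of 2:

```java
// Sketch of the relation between the new linear difficulty and the old
// leading-zeros count used by Hashcash; illustrative only.
class DifficultyExample {
    // Hashcash can only realise power-of-2 difficulties (each required leading
    // zero bit doubles the expected work), so a linear difficulty is rounded
    // up to the next power of 2 internally.
    static int toNumLeadingZeros(double difficulty) {
        return (int) Math.ceil(Math.log(Math.max(difficulty, 1.0)) / Math.log(2));
    }

    public static void main(String[] args) {
        System.out.println(toNumLeadingZeros(300.0));   // 9  (rounded up to 512)
        System.out.println(toNumLeadingZeros(1000.0));  // 10 (rounded up to 1024)
    }
}
```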
Add an abstract base class, 'ProofOfWorkService', for the existing PoW
implementation 'HashCashService' and a new 'EquihashProofOfWorkService'
PoW implementation based on Equihash-90-5 (which has 72 byte solutions &
5-10 MB peak memory usage). Since the current 'ProofOfWork' protobuf
object only provides a 64-bit counter field to hold the puzzle solution
(as that is all Hashcash requires), repurpose the 'payload' field to
hold the Equihash puzzle solution bytes, with the 'challenge' field
equal to the puzzle seed: the SHA256 hash of the offerId & makerAddress.
Use a difficulty scale factor of 3e-5 (derived from benchmarking) to try
to make the average Hashcash & Equihash puzzle solution times roughly
equal for any given log-difficulty/numLeadingZeros integer chosen in the
filter.
NOTE: An empty enabled-version-list in the filter defaults to Hashcash
(= version 0) only. The new Equihash-90-5 PoW scheme is version 1.
Add a numeric version field to the 'ProofOfWork' protobuf object, along
with a list of allowed version numbers, 'enabled_pow_versions', to the
filter. The versions are taken to be in order of preference from most to
least preferred when creating a PoW, with an empty list signifying use
of the default algorithm only (that is, version 0: Hashcash).
An explicit list is used instead of an upper & lower version bound, in
case a new PoW algorithm (or changed algorithm params) turns out to
provide worse resistance than an earlier version.
(The fields are unused for now, to be enabled in a later commit.)
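A small sketch of how a client might pick a version from that list once the fields are enabled; method and parameter names are illustrative, not the actual Bisq API:

```java
import java.util.List;

// Sketch only; names are illustrative.
class PowVersionSelection {
    static final int DEFAULT_VERSION = 0; // Hashcash

    // 'enabledPowVersions' is ordered from most to least preferred; an empty
    // list means only the default algorithm (version 0) is allowed.
    static int selectVersion(List<Integer> enabledPowVersions, List<Integer> supportedVersions) {
        if (enabledPowVersions.isEmpty()) {
            return DEFAULT_VERSION;
        }
        return enabledPowVersions.stream()
                .filter(supportedVersions::contains)
                .findFirst()
                .orElse(DEFAULT_VERSION);
    }
}
```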
Replace 'BiFunction<T, U, Boolean>' with the primitive specialisation
'BiPredicate<T, U>' in HashCashService & FilterManager.
As part of this, replace similar predicate constructs found elsewhere.
NOTE: This touches the DAO packages (trivially @ VoteResultService).
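A toy example of the change (not the actual Bisq predicates):

```java
import java.util.Arrays;
import java.util.function.BiFunction;
import java.util.function.BiPredicate;

// Illustrates the replacement; the real predicates in HashCashService differ.
public class PredicateExample {
    public static void main(String[] args) {
        // Before: boxed Boolean result.
        BiFunction<byte[], byte[], Boolean> oldStyle =
                (challenge, expected) -> Arrays.equals(challenge, expected);

        // After: primitive specialisation, no boxing, clearer intent.
        BiPredicate<byte[], byte[]> newStyle =
                (challenge, expected) -> Arrays.equals(challenge, expected);

        System.out.println(oldStyle.apply(new byte[]{1}, new byte[]{1}));  // true
        System.out.println(newStyle.test(new byte[]{1}, new byte[]{1}));   // true
    }
}
```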
When doing a resync from genesis the number of blocks is limited to 6000,
so the resync requires lots of requests, which increases the risk of broken
connections. Giving more tolerance for retries avoids that the user has
to restart the app.
When syncing from genesis the number of blocks is limited, so we get
`onParseBlockCompleteAfterBatchProcessing` called each time the received
blocks are processed, and as we are not at wallet height yet we keep requesting
blocks. But the new check for the BTC recipient triggers a resync-from-resources call.
We now only run that check once the wallet is synced and our block height from
the DAO state matches the wallet block height (see the sketch below).
- At genesis we use the genesis height for the request (not height + 1)
- If the wallet is not synced yet we do not call onParseBlockChainComplete (as before)
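The guard sketch referenced above, with illustrative names rather than the actual Bisq fields:

```java
// Sketch only; names are illustrative, not the actual Bisq code.
public class RecipientCheckGuard {
    // Run the BTC recipient check only once the wallet is synced and the
    // DAO state chain height matches the wallet block height.
    static boolean shouldRunRecipientCheck(boolean walletSynced,
                                           int daoStateChainHeight,
                                           int walletChainHeight) {
        return walletSynced && daoStateChainHeight == walletChainHeight;
    }

    public static void main(String[] args) {
        // Still batch-processing blocks: skip, to avoid a spurious resync from resources.
        System.out.println(shouldRunRecipientCheck(true, 710_000, 712_345)); // false
        // Fully synced and heights match: run the check.
        System.out.println(shouldRunRecipientCheck(true, 712_345, 712_345)); // true
    }
}
```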
We added 1 because with the lite monitor mode we persist the most recent block,
thus we request with the start height of the next block.
But that causes a problem for a DAO full node which has lite monitor mode set,
as then the block parsing would not be triggered.
We refactor it so that we take the chainHeight from the DAO state
directly and add 1 at the requests.
We add a check whether we are at the chain tip, and if so we skip the requests
and call onParseBlockChainComplete directly.
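A minimal sketch of the refactored request logic, with illustrative names:

```java
// Sketch only; not the actual lite node / full node code.
public class BlockRequestExample {
    interface Callbacks {
        void requestBlocks(int startBlockHeight);
        void onParseBlockChainComplete();
    }

    // Take the chain height from the DAO state directly and add 1 for the
    // request; if we are already at the chain tip, skip the request and
    // signal completion directly.
    static void startRequest(int daoStateChainHeight, int chainTipHeight, Callbacks callbacks) {
        if (daoStateChainHeight >= chainTipHeight) {
            callbacks.onParseBlockChainComplete();
        } else {
            callbacks.requestBlocks(daoStateChainHeight + 1);
        }
    }
}
```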
Add a check of 'scriptTypeId' field, against the output of the spending
tx, to the 'RawTransactionInput::validate' method. Also make the seller
as well as the buyer validate each raw BSQ/BTC input received from the
peer. This prevents either peer from claiming that any of their
non-segwit inputs are segwit in order to underpay the tx fee.
Prevent the seller from stealing the combined tx fee as change by lying
about the value of one or more of his BTC inputs, which are passed to
the buyer as raw inputs in the 'BsqSwapFinalizeTxRequest' message.
To this end, add a 'RawTransactionInput::validate' method to check the
'value' field against the output value of the respective spending tx and
run it on every seller input in 'ProcessBsqSwapFinalizeTxRequest', so
that the buyer is no longer just trusting those numbers.
Additionally, check that the spending txIds from the raw BTC inputs
supplied by the seller actually match those of his signed inputs in the
accompanying partially signed tx, thus tying the raw input values to the
seller's tx.
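A simplified sketch of the checks described in the two notes above; the types and field names only loosely follow Bisq's 'RawTransactionInput' and are not the actual implementation:

```java
import java.util.Objects;

// Sketch only; field names follow RawTransactionInput loosely.
public class RawInputValidationExample {
    static class RawInput {
        final String spendingTxId;   // txId of the tx whose output is being spent
        final long index;            // output index in that tx
        final long value;            // claimed value in satoshis
        final int scriptTypeId;      // claimed script type of the spent output

        RawInput(String spendingTxId, long index, long value, int scriptTypeId) {
            this.spendingTxId = spendingTxId;
            this.index = index;
            this.value = value;
            this.scriptTypeId = scriptTypeId;
        }
    }

    // Validate the claimed value and script type against the actual output of
    // the spending tx, so a peer can neither underpay the tx fee nor claim the
    // combined fee as change.
    static void validate(RawInput input, long actualOutputValue, int actualScriptTypeId) {
        if (input.value != actualOutputValue) {
            throw new IllegalArgumentException("Input value does not match the spending tx output value");
        }
        if (input.scriptTypeId != actualScriptTypeId) {
            throw new IllegalArgumentException("Input script type does not match the spending tx output");
        }
    }

    // Additionally tie the raw input values to the seller's tx: the spending
    // txId of each raw input must match one of the seller's signed inputs in
    // the accompanying partially signed tx.
    static void checkTxIdMatches(RawInput input, String signedInputTxId) {
        if (!Objects.equals(input.spendingTxId, signedInputTxId)) {
            throw new IllegalArgumentException("Raw input does not match any signed input of the partially signed tx");
        }
    }
}
```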
The editoffer validation bug fixes:
- A trigger-price edit forced offer.price-margin=0.00.
This needs to be checked in new apitest case asserts.
- An activate state (only) edit forced offer.isUseMarketBasedPrice=true.
The CLI does not have the offer instance, and cannot know the correct
value of the isUseMarketBasedPrice param sent in the editoffer request.
The daemon has to figure this out: if the editType parameter value
sent to the daemon is ACTIVATION_STATE_ONLY, use the current offer.isUseMarketBasedPrice.
The refactoring includes more useful and readable information in core's EditOfferValidator
and MutableOfferPayloadFields toString methods, for debugging with the daemon log, plus some
adjustments to allow edits to XMR offers.
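A minimal sketch of the daemon-side fix for the activation-state-only case; only ACTIVATION_STATE_ONLY is taken from the notes above, the other enum value and the method are illustrative:

```java
// Sketch only; not the actual daemon code.
class EditOfferExample {
    enum EditType { ACTIVATION_STATE_ONLY, OTHER_EDIT }

    // The CLI does not have the offer instance, so for an activation-state-only
    // edit the daemon keeps the offer's current isUseMarketBasedPrice value
    // instead of trusting the request parameter.
    static boolean resolveUseMarketBasedPrice(EditType editType,
                                              boolean requestUseMarketBasedPrice,
                                              boolean currentOfferUseMarketBasedPrice) {
        return editType == EditType.ACTIVATION_STATE_ONLY
                ? currentOfferUseMarketBasedPrice
                : requestUseMarketBasedPrice;
    }
}
```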
Showing **Unconfirmed** for a BSQ swap seems like something failed. **Processing** is used by some wallets for unconfirmed transactions and has no negative implications.
There are some use cases where the CLI needs to know what kind of offer
is being acted on before the request is made. For example:
There are differences between a BsqSwap 'takeoffer' request and a v1
'takeoffer' request.
A BsqSwap offer cannot be edited by an 'editoffer' request, and an
attempt should be blocked by the CLI.
- Append isMyOffer to the GetOfferCategoryRequest rpc msg def.
- Adjust daemon.grpc services for new boolean GetOfferCategoryRequest param.
- Adjust core.api for new boolean GetOfferCategoryRequest param.
- Add validation check in core.api EditOfferValidator to block attempt to
edit a BsqSwap offer.
- Refactor CoreOffersService get*offer(id) methods to optionally throw
exceptions.
PaymentMethod uses an instance of TradeLimits and expects that it
has been injected, which is the case for desktop but not for
headless apps, so we enforce injection in the app base classes
used for headless apps.
The validation of trade statistics uses a method in PaymentMethod
where that dependency is required.
The hack of how PaymentMethod uses TradeLimits is not nice, but
fixing it would require more refactoring effort.
We historically had higher trade limits, and there are assets which are not in the
currency list anymore, so we apply the filter only to data after
Nov 1st 2021.
found in the active list. Otherwise the 2 BTC default is used.
We get TradeStatistics3 objects from old retired PaymentMethods
which are not found in the active list.
The API uses reflection to figure out how to build a payment acct json
form, and the tradeCurrencies field needs to be set after the reflection
API's constructor.newInstance() is called.
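A toy example of that reflection flow; the account class and setter are placeholders, not the actual Bisq types:

```java
import java.util.List;

// Sketch only; ExamplePaymentAccount stands in for a real payment account type.
public class ReflectionExample {
    public static class ExamplePaymentAccount {
        private List<String> tradeCurrencies;

        public void setTradeCurrencies(List<String> tradeCurrencies) {
            this.tradeCurrencies = tradeCurrencies;
        }
    }

    public static void main(String[] args) throws ReflectiveOperationException {
        // The API instantiates the account type via its no-arg constructor ...
        ExamplePaymentAccount account = ExamplePaymentAccount.class
                .getDeclaredConstructor()
                .newInstance();
        // ... and only afterwards can the tradeCurrencies field be populated,
        // since the constructor does not receive it.
        account.setTradeCurrencies(List.of("EUR", "USD"));
        System.out.println("Account created and currencies set");
    }
}
```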
- Made several adjustments to the CLI's 'gettrade' output-related code
so it can show single trade details for either Bisq v1 trades or
BSQ swap trades.
- Did minor refactoring of the API's core to retrieve the # of tx confirmations
for addresses and transactions.
- Show # of tx confirmations in bsq swap trade detail.
That check is done only if:
- we are after the activation time
- a filter is available
- the value in the filter is > 0
The check compares the paid fee with the fee value from the filter
and requires that the paid fee is > 70% of the filter value.
See comments in code for more background.
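A minimal sketch of the check, assuming illustrative names and types:

```java
import java.util.Date;

// Sketch only; not the actual Bisq fee validation code.
public class FeeCheckExample {
    static boolean isFeeAcceptable(long paidFee,
                                   long feeFromFilter,      // 0 if no value set in the filter
                                   boolean filterAvailable,
                                   Date activationDate) {
        boolean checkApplies = new Date().after(activationDate)
                && filterAvailable
                && feeFromFilter > 0;
        if (!checkApplies) {
            return true; // check is skipped, fee is accepted
        }
        // Require the paid fee to be more than 70% of the filter value.
        return paidFee > feeFromFilter * 0.7;
    }
}
```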
This commit refactors the first cut of the BsqSwapTradeInfo and
TradeInfo gRPC proto defs and wrappers. The change avoids duplication
of fields between BsqSwapTradeInfo and TradeInfo, and adds a
bsqSwapTradeInfo field to the old TradeInfo proto & wrapper.
The immediate goal is moving towards getting the API's 'gettrade'
method to work for both Bisq v1 trades and BSQ swap trades: the TradeInfo
proto sent to the CLI should represent either a Bisq v1 trade or a BSQ
swap trade. A mid-term goal is to also make a new 'gettrades' method
return a List<TradeInfo> to the CLI, where items in the List<TradeInfo>
can be either v1 trades or bsq-swap trades.
- Add core api methods to help CLI determine which type of offer to
take for a given offerId. CLI's 'takeoffer' will need to determine
which gRPC/proto request type to send to the server.
- Add implementations for getBsqSwapTradeRole(), for tradeId or trade.
- Complete BsqSwapTradeInfo impl.
- Merge BsqSwapOfferInfo fields into OfferInfo and remove BsqSwapOfferInfo.
- Change all model builders to private static class Builder.
persistence is below the chain height of btc core.
With the new handling of DAO state hash blocks it can be that
we have persisted the latest block up to the chain height.
Before that change we only had the last snapshot persisted, which was at least
20 blocks back.
We want to avoid getting caught in a retry loop at parseBlocksOnHeadHeight,
so we filter out that case directly at the caller.
At parseBlocksOnHeadHeight we inline the requestChainHeadHeightAndParseBlocks
method as it is not used by other callers anymore.
The case that we request a block while our local btc core chain height is below
it might be some edge case where we had synced DAO blocks but btc core is
resyncing...
and FullNode
The custom implementations triggered repeated requests,
but we shut down the seed node in case of a reorg, and
for desktop we ask the user to shut down.
to startPeriodicTasks.
Call that from BisqExecutable.onApplicationLaunched.
We had previously created a non-UI timer for
printSystemLoadPeriodically, as that was called
before configUserThread was called (where we
define the UI timer for desktop).
Avoid that outdated donation and fee addresses are used.
In case we get an outdated donation address (RECIPIENT_BTC_ADDRESS)
we trigger a DAO resync. The user gets a popup asking for a restart
in that case.
For the fee address selection we do not fall back to the
RECIPIENT_BTC_ADDRESS in case no address from the filter is provided,
but use the hard-coded address of the current fee receiver.
Improve logging
Add BsqBlockStore to protobuf
Remove DaoStateMonitoringService field
Do not persist the blocks in daoState anymore.
This improves persistence performance and reduces memory
requirements for snapshots.
For the snapshot we create a deep clone by protobuf serialisation.
We do not need the deserialisation back to the Java object, as it is
only kept in memory for later persistence, where we need to do protobuf
serialisation again. So we can skip that cycle and save a bit of
time when creating and persisting snapshots.
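A sketch of that snapshot idea; the interfaces below stand in for the real DaoState and its generated protobuf class and are not the actual Bisq types:

```java
// Sketch only; placeholder types for DaoState and its proto message.
public class SnapshotExample {
    interface ProtoMessage {
        byte[] toByteArray();
    }

    interface DaoStateLike {
        ProtoMessage toProtoMessage();
        // Old style (dropped): deep clone by serialise + deserialise back
        // into a Java object, e.g. DaoState.fromProto(daoState.toProtoMessage()).
    }

    static class SnapshotHolder {
        private ProtoMessage snapshotCandidate;

        // Keep the proto message itself as the snapshot instead of cloning back
        // into a Java object; it is only needed for later persistence anyway.
        void takeSnapshot(DaoStateLike daoState) {
            snapshotCandidate = daoState.toProtoMessage();
        }

        // At persistence time the proto bytes are written directly, so the
        // deserialisation cycle is skipped.
        byte[] serializeForPersistence() {
            return snapshotCandidate.toByteArray();
        }
    }
}
```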