Filter offers at OfferBookViewModel when the DaoState changes.
Check at OpenOfferManager whether an offer's amount exceeds the trade limit and, if so, add it to invalidOffers.
Signed-off-by: HenrikJannsen <boilingfrog@gmx.com>
Reduce a hotspot searching the BSQ wallet for the user's votes, upon
selecting a list item of the Vote Result view, by optimising the method
'BsqWalletService.isWalletTransaction(String)'. Do this by adding a
lazily initialised Map field, 'walletTransactionsById', kept in sync
with the existing 'walletTransactions' List field, similar to the tx-by-
id cache removed from the base class in the previous commit, so that a
linear scan of that list can be avoided. Don't bother to make the cache
thread safe, however, since 'isWalletTransaction' is only called from
the user thread and wasn't thread safe to begin with -- access to
'walletTransactions' isn't synchronised, and it is updated only on the
user thread, after a 100 ms delay upon any changes to the BSQ wallet.
Also remove the unused methods 'getUnverifiedBsqTransactions()' and
'getBsqWalletTransactions()' from the class.
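A minimal sketch of the cache pattern described above (field and type names are illustrative, not the actual BsqWalletService members; access is assumed to stay on the user thread):

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    class TxByIdCacheSketch {
        interface Tx { String getId(); }  // stand-in for the wallet transaction type

        private final List<Tx> walletTransactions = new ArrayList<>();
        private Map<String, Tx> walletTransactionsById;  // lazily initialised, user thread only

        boolean isWalletTransaction(String txId) {
            if (walletTransactionsById == null) {
                walletTransactionsById = new HashMap<>();
                walletTransactions.forEach(tx -> walletTransactionsById.put(tx.getId(), tx));
            }
            return walletTransactionsById.containsKey(txId);
        }

        // Called on the user thread whenever the wallet transaction list is refreshed.
        void onWalletTransactionsChanged(List<Tx> updated) {
            walletTransactions.clear();
            walletTransactions.addAll(updated);
            walletTransactionsById = null;  // invalidate; rebuilt on the next lookup
        }
    }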
This was added in an earlier commit (57b2b4b8) to speed up the Trade
History view, via the method 'getConfidenceForTxId(String)', by
replacing a linear scan of the live wallet txs with a lookup into a
lazily initialised map. However, the delegating 'WalletService' method
'getTransaction(Sha256Hash)' already serves this purpose, with
'o.b.w.Wallet' itself maintaining a map of all the wallet txs. Use that
method instead, taking care to exclude dead txs from the lookups, to
exactly preserve the current behaviour.
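A hedged sketch of that lookup (illustrative only; the surrounding WalletService code and null handling differ in detail):

    import org.bitcoinj.core.Sha256Hash;
    import org.bitcoinj.core.Transaction;
    import org.bitcoinj.core.TransactionConfidence;
    import org.bitcoinj.wallet.Wallet;

    class ConfidenceLookupSketch {
        // Rely on the wallet's own tx-by-id map and treat DEAD txs as not found,
        // so the behaviour of the previous linear scan is preserved.
        static TransactionConfidence getConfidenceForTxId(Wallet wallet, String txId) {
            Transaction tx = wallet.getTransaction(Sha256Hash.wrap(txId));
            if (tx == null || tx.getConfidence().getConfidenceType() == TransactionConfidence.ConfidenceType.DEAD) {
                return null;
            }
            return tx.getConfidence();
        }
    }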
Also do some minor cleanup of the class, mainly to remove IDE warnings.
Alleviate a cubic time bug during the update of the bonded roles
repository, reducing it to quadratic running time. On Mainnet, this
gives a roughly ten-fold speedup and should allow better scaling in the
event that many new bonded roles are added.
Replace calls to 'BondedRolesRepository.findBondedAssetByHash' with a
lookup into a lazily initialised map of bonded assets (Roles) by hash
(reset at the start of each call to 'BondRepository.update()' to prevent
stale caching). This avoids rescanning the roles list for every pair of
roles and lockup tx outputs, thus reducing the number of steps (to
highest order) from:
#roles * #roles * #lockup-tx-outputs
to:
#roles * #lockup-tx-outputs
(The logs show 2 or 3 calls to 'BondRepository.update()' every time a
new block arrives, and while this was only taking around a second or so
on Mainnet, it could potentially grow to something problematic with
cubic scaling in the number of bonded roles.)
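A rough sketch of the caching idea (names are illustrative, not the actual BondedRolesRepository/BondRepository code; a real implementation would use a proper byte-array key such as a hex string):

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.Optional;

    class BondedAssetLookupSketch {
        interface BondedAsset { byte[] getHash(); }

        private final List<BondedAsset> bondedAssets = new ArrayList<>();
        private Map<String, BondedAsset> bondedAssetByHash;  // lazily initialised

        Optional<BondedAsset> findBondedAssetByHash(byte[] hash) {
            if (bondedAssetByHash == null) {
                bondedAssetByHash = new HashMap<>();
                bondedAssets.forEach(asset ->
                        bondedAssetByHash.put(Arrays.toString(asset.getHash()), asset));
            }
            return Optional.ofNullable(bondedAssetByHash.get(Arrays.toString(hash)));
        }

        void update() {
            bondedAssetByHash = null;  // reset, so the next lookup rebuilds from fresh DAO state
            // ... iterate over roles and lockup tx outputs, calling findBondedAssetByHash ...
        }
    }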
This restores the functionality that was removed in b5beea58. However,
this implementation utilizes the JavaCV library rather than the
webcam-capture library as discussed in #4940. As a result, this should
now provide macOS support.
3000 items are about 180.325 kB.
For nodes that have been offline for a longer time, the repeated requests consume quite some time.
With 15k items we can expect a payload of about 1 MB, which is still acceptable.
Signed-off-by: HenrikJannsen <boilingfrog@gmx.com>
This is in anticipation of speedups we wish to make, as JProfiler
reveals it to be a hotspot during new block arrivals (which are tricky
to profile, as they occur at random).
Fix the broken stubbing of 'PersistenceManager', which had gone stale as
a result of the conversion of 'Preferences' to asynchronous persistence
in commit 3f4d6e6 (2020/10/12). This caused the assertions in the
'readPersisted' continuation blocks of 3 of the 4 tests not to be
reached. Fix by stubbing the async 'persistenceManager::readPersisted'
method with a callback, instead of stubbing 'getPersisted'.
NOTE: Alternatively, we could add a testing-only 'readPersistedSync'
method to 'Preferences' for consistency, as this is how the other broken
(failing) tests resulting from 3f4d6e6 were fixed (in commit 68583d8).
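A self-contained sketch of the callback-stubbing pattern used by the fix ('AsyncStore' is a hypothetical stand-in for PersistenceManager; the real method signature and argument positions differ):

    import static org.mockito.ArgumentMatchers.any;
    import static org.mockito.Mockito.doAnswer;
    import static org.mockito.Mockito.mock;

    import java.util.function.Consumer;

    class CallbackStubbingSketch {
        interface AsyncStore {
            void readPersisted(Consumer<String> resultHandler, Runnable orElse);
        }

        static void example() {
            AsyncStore store = mock(AsyncStore.class);
            // Invoke the result handler from the stub, so that the continuation block
            // passed by the test is actually reached.
            doAnswer(invocation -> {
                Consumer<String> resultHandler = invocation.getArgument(0);
                resultHandler.accept("persisted-payload");
                return null;
            }).when(store).readPersisted(any(), any());

            store.readPersisted(result -> System.out.println("got: " + result), () -> {});
        }
    }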
Fix raw usage of the following types, all of which (apart from
Comparator) touch the DAO packages somewhere:
Comparable, Comparator, GetStateHashesResponse, NewStateHashMessage,
RequestStateHashesHandler, PersistenceManager
(Also replace 'Integer.valueOf' with the non-boxing but otherwise
identical method 'Integer.parseInt', in the class 'TxOutputKey'.)
Replace all raw uses of 'Bond<T extends BondedAsset>', mostly with
wildcards (that is, 'Bond<?>'), to prevent compiler/IDE warnings.
Also rename the 'T extends Bond<R>' & 'R extends BondedAsset' type params
of 'BondRepository<..>' to 'B' & 'T' respectively, as this is a little
less confusing.
Use the simpler & slightly more efficient 'Map::computeIfAbsent' method
in place of the common pattern:
map.putIfAbsent(key, newValue());
V value = map.get(key);
(Clean up BondRepository + some cases missed from BurningManService.)
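For illustration (not repo code), the before/after shapes of the pattern:

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    class ComputeIfAbsentSketch {
        public static void main(String[] args) {
            Map<String, List<String>> map = new HashMap<>();

            // Before: two map operations, and the new value is created unconditionally.
            map.putIfAbsent("a", new ArrayList<>());
            List<String> before = map.get("a");

            // After: one operation; the supplier only runs if the key has no mapping yet.
            List<String> after = map.computeIfAbsent("b", k -> new ArrayList<>());

            before.add("x");
            after.add("y");
            System.out.println(map);  // {a=[x], b=[y]}
        }
    }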
Remove the last 10 blocks one-by-one from the end of the internal linked
list of blocks, instead of rebuilding a truncated list from scratch.
(This all takes place within a write-lock anyway, so it's atomic.)
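A tiny illustration of the change (not the actual block-list code):

    import java.util.LinkedList;

    class TruncateTailSketch {
        // Drop the last n blocks in place instead of rebuilding a truncated copy;
        // each removeLast() is O(1) on a linked list.
        static <T> void removeLastBlocks(LinkedList<T> blocks, int n) {
            for (int i = 0; i < n && !blocks.isEmpty(); i++) {
                blocks.removeLast();
            }
        }
    }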
Add missing synchronisation to the 'toProtoMessage' method, by first
copying the internal list of blocks inside a read-lock, prior to
serialisation (still outside the lock, to maximise concurrency). Since
we only make a shallow copy, this should be fast and take no more than a
MB or so of extra memory.
This prevents a race seen to cause a ConcurrentModificationException
during store persistence, that sometimes occurred when the application
resumed from a long suspension.
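A hedged sketch of the synchronisation pattern (field and type names are illustrative, not the actual store code):

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.locks.ReadWriteLock;
    import java.util.concurrent.locks.ReentrantReadWriteLock;

    class SnapshotForSerialisationSketch {
        private final ReadWriteLock lock = new ReentrantReadWriteLock();
        private final List<Object> blocks = new ArrayList<>();

        Object toProtoMessage() {
            List<Object> snapshot;
            lock.readLock().lock();
            try {
                snapshot = new ArrayList<>(blocks);  // shallow copy only, so it is cheap
            } finally {
                lock.readLock().unlock();
            }
            return serialise(snapshot);  // done outside the lock to maximise concurrency
        }

        private Object serialise(List<Object> snapshot) {
            return snapshot;  // placeholder for the real protobuf conversion
        }
    }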
Now that 'mockito-junit-jupiter' has been added to the build test
dependencies, we may replace the manual setup & tear down of a Mockito
session with the 'MockitoExtension' JUnit 5 test extension, as the old
Mockito JUnit 4 test runner was no longer available upon upgrading to
Jupiter. This slightly simplifies the tests which use '@Mock', '@Spy',
etc.
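A minimal sketch of the resulting pattern (the test class and mocked type are illustrative):

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.mockito.Mockito.when;

    import java.util.function.Supplier;
    import org.junit.jupiter.api.Test;
    import org.junit.jupiter.api.extension.ExtendWith;
    import org.mockito.Mock;
    import org.mockito.junit.jupiter.MockitoExtension;

    @ExtendWith(MockitoExtension.class)  // initialises the @Mock fields; no manual session needed
    class MockitoExtensionSketchTest {
        @Mock
        Supplier<String> supplier;

        @Test
        void usesMock() {
            when(supplier.get()).thenReturn("stubbed");
            assertEquals("stubbed", supplier.get());
        }
    }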
Fix inspection warnings and other minor defects (unchecked casts, bad
'assert*' usage, unused assignments/declarations, missing null checks,
object equality on enum instances, needless statement lambdas, missing
final, redundant 'toString', needless visibility) in 'TxValidatorTest'.
Also, uniformise and slightly tidy resource loading in other test
classes, replacing 'Files.readAllBytes' with Java 11 'Files.readString'
(which assumes UTF-8 as desired, instead of the default charset).
Fix resource loading in 'MakerTxValidatorSanityCheckTests', which breaks
when the working directory has spaces in its absolute path, due to the
fact that the test resource paths are URL encoded.
(This pattern is already used to load resources in a few other tests, so
there hopefully won't be a regression on Windows.)
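A hedged sketch of the kind of fix described (not the exact test code): resolve the resource via its URI rather than its URL-encoded path, so spaces in the absolute path no longer break the lookup.

    import java.net.URL;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    class ResourceLoadingSketch {
        static String readResource(String name) throws Exception {
            URL url = ResourceLoadingSketch.class.getResource(name);
            // url.toURI() decodes %20 etc., unlike using the raw URL path directly.
            return Files.readString(Paths.get(url.toURI()));  // UTF-8 by default on Java 11+
        }
    }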
The `checkFeeAmountBTC` method looks up the `value` field of the
`vin[0].prevout` field without checking whether `vin[0].prevout` exists.
When the field is missing, the statement `JsonElement jsonVIn0Value =
jsonVin0.getAsJsonObject("prevout").get("value");` (line 193) throws a
NullPointerException.
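A hedged sketch of the defensive check this implies (illustrative, not the actual TxValidator code):

    import com.google.gson.JsonElement;
    import com.google.gson.JsonObject;

    class PrevoutCheckSketch {
        static JsonElement getVin0PrevoutValue(JsonObject jsonVin0) {
            JsonObject prevout = jsonVin0.getAsJsonObject("prevout");
            if (prevout == null || prevout.get("value") == null) {
                return null;  // caller treats this as an unverifiable tx instead of crashing
            }
            return prevout.get("value");
        }
    }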
The `TxValidator`'s `getVinAndVout(...)` method assumes that
`json.get("field").getAsJsonArray()` returns null when the field is not
a JSON array. However, `getAsJsonArray()` throws an `IllegalStateException`
instead, which is not caught by any caller.
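A hedged sketch of a safer lookup (illustrative, not the actual method): check the element type instead of relying on a null return.

    import java.util.Optional;

    import com.google.gson.JsonArray;
    import com.google.gson.JsonElement;
    import com.google.gson.JsonObject;

    class JsonArrayLookupSketch {
        static Optional<JsonArray> getArray(JsonObject json, String field) {
            JsonElement element = json.get(field);
            return element != null && element.isJsonArray()
                    ? Optional.of(element.getAsJsonArray())
                    : Optional.empty();
        }
    }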
The initialSanityChecks method only checks whether the JSON response is
null (HTTP request failed) or the response is empty (HTTP 200) before
parsing the JSON response. An invalid JSON response would throw a
JsonSyntaxException, which the callers do not catch.
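A hedged sketch of the missing guard (illustrative): treat a malformed JSON body the same way as a null or empty response.

    import java.util.Optional;

    import com.google.gson.JsonObject;
    import com.google.gson.JsonParser;
    import com.google.gson.JsonSyntaxException;

    class SanityCheckSketch {
        static Optional<JsonObject> parseResponse(String body) {
            if (body == null || body.isEmpty()) {
                return Optional.empty();
            }
            try {
                return Optional.of(JsonParser.parseString(body).getAsJsonObject());
            } catch (JsonSyntaxException | IllegalStateException e) {
                return Optional.empty();  // invalid JSON, or a JSON value that isn't an object
            }
        }
    }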
Previously, the onGetBlocksRequest method created an ArrayList and two
LinkedLists before creating the GetBlocksResponse. The first two lists
were never used. PR #6947 introduced the second LinkedList creation.
With this change, the GetBlocksRequestHandler only creates a single
LinkedList.
Before this change, the currentFilter was added to the
invalidFilters list after comparing both filters' creation dates and
before checking whether the new filter was created by a banned signer.
Note: The invalid filters list is only used on the security manager Bisq
instances. This list is never read by regular user clients.
Since the 'MockitoAnnotations.initMocks' method used to initialise
@Mock, @Spy, etc. fields is deprecated, use MockitoSession instead, as
suggested by the Javadoc. (Or just remove the call, if the test class is
not actually using any Mockito annotations.)
This allows strict stubbing (the default for Mockito 4), though it must
be configured to be lenient in the P2P test classes, to prevent failures
due to unused stubs.
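A minimal sketch of the manual MockitoSession lifecycle (class and field names are illustrative; strictness is only set to LENIENT where unused stubs are expected, as in the P2P tests):

    import org.junit.jupiter.api.AfterEach;
    import org.junit.jupiter.api.BeforeEach;
    import org.mockito.Mock;
    import org.mockito.Mockito;
    import org.mockito.MockitoSession;
    import org.mockito.quality.Strictness;

    class MockitoSessionSketchTest {
        @Mock
        Runnable listener;

        private MockitoSession session;

        @BeforeEach
        void setUp() {
            session = Mockito.mockitoSession()
                    .initMocks(this)
                    .strictness(Strictness.LENIENT)
                    .startMocking();
        }

        @AfterEach
        void tearDown() {
            session.finishMocking();
        }
    }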
Change the algorithm used to adjust & cap the burn share of each BM
candidate to use an unlimited number of 'rounds', as described in:
https://github.com/bisq-network/proposals/issues/412
That is, instead of capping the shares once, then distributing the
excess to the remaining BM, then capping again and giving any excess to
the Legacy Burning Man, we cap-redistribute-cap-redistribute-... an
unlimited number of times until no more candidates are capped. This has
the effect of reducing the LBM's share and increasing everyone else's,
alleviating the security risk of giving too much to the LBM (who is
necessarily uncapped).
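A direct (quadratic worst case) sketch of that cap-redistribute loop, for illustration only; the actual implementation described below is equivalent but runs in O(n log n). Shares and caps are fractions of the total burn, and an uncapped candidate such as the LBM can be modelled with a cap of 1.0.

    class CapRedistributeSketch {
        static double[] capAndRedistribute(double[] burnShare, double[] capShare) {
            double[] share = burnShare.clone();
            boolean[] capped = new boolean[share.length];
            boolean cappedThisRound = true;
            while (cappedThisRound) {
                cappedThisRound = false;
                double excess = 0;
                for (int i = 0; i < share.length; i++) {
                    if (!capped[i] && share[i] > capShare[i]) {
                        excess += share[i] - capShare[i];
                        share[i] = capShare[i];
                        capped[i] = true;
                        cappedThisRound = true;
                    }
                }
                double uncappedTotal = 0;
                for (int i = 0; i < share.length; i++) {
                    if (!capped[i]) {
                        uncappedTotal += share[i];
                    }
                }
                if (excess > 0 && uncappedTotal > 0) {
                    // Scale the remaining uncapped shares up proportionally by the excess.
                    for (int i = 0; i < share.length; i++) {
                        if (!capped[i]) {
                            share[i] += excess * share[i] / uncappedTotal;
                        }
                    }
                }
            }
            return share;
        }
    }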
Instead of implementing the new algorithm directly, we simply enlarge
the set of candidates who should be capped to include those who would
eventually be capped by the new algorithm, in order to determine how
much excess burn share should go to the remaining BM. Then we apply the
original method, 'candidate.calculateCappedAndAdjustedShares(..)', to
set each share to be equal to its respective cap or uniformly scaled
upwards from the starting amount accordingly.
To this end, the static method 'BurningManService.imposeCaps' is added,
which determines which candidates will eventually be capped, by sorting
them in descending order of burn-share/cap-share ratio, then marking all
the candidates in some suitable prefix of the list as capped, iterating
through them one-by-one & gradually increasing the virtual capping round
(starting at zero) until the end of the prefix is reached. (The method
also determines what the uncapped adjusted burn share of each BM should
be, but that only affects the BM view & burn targets.) In this way, the
new algorithm runs in guaranteed O(n * log n) time.
To prevent failed trades, the new algorithm is set to activate at time
'DelayedPayoutTxReceiverService.PROPOSAL_412_ACTIVATION_DATE', with a
placeholder value of 12am, 1st January 2024 (UTC). This simply toggles
whether the for-loop in 'imposeCaps' should stop after capping round 0,
since doing so will lead to identical behaviour to the original code
(even accounting for FP rounding errors).
Replace HashMap in 'BurningManService.getBurningManCandidatesByName'
result construction with a TreeMap, to ensure that the map values are
ordered deterministically (alphabetically by candidate name) when
computing floating point sums. The map values are streamed over in a few
places in this method and elsewhere in 'DelayedPayoutTxReceiverService',
when performing double precision summation to compute the DPT. With a
HashMap's unspecified iteration order, this introduces potential
nondeterminism due to the nonassociativity of FP addition, making the
sums dependent on the term order.
(Note that 'DoubleStream::sum' uses compensated (Kahan) summation, which
makes it effectively quad precision internally, so the chance of term
reordering causing the result to differ by even a single ULP is probably
quite low here. So there might not be much problem in practice.)
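A tiny illustration, unrelated to the Bisq code itself, of why the iteration order of the summed values matters at all: floating point addition is not associative.

    class FpOrderSketch {
        public static void main(String[] args) {
            double a = 1e16, b = -1e16, c = 1.0;
            System.out.println((a + b) + c);  // 1.0
            System.out.println(a + (b + c));  // 0.0 -- c is absorbed by the large magnitude of b
        }
    }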
Add a missing test case for burning man candidates with fully expired
compensation issuance or proofs-of-burn, or who are inactive because
they have never carried out a proof-of-burn, to prevent regressions in
the forthcoming capping algorithm code changes.
Also add missing assertions for the expected 'adjustedBurnAmountShare'
property of each candidate to the test cases. Note that these only
affect the UI, including the displayed burn targets, not the actual BM
revenue (DPT outputs or fee recipient selection), so we use approximate
equality in the test assertions as usual for floating point expressions.
Add an inner class to 'BurningManServiceTest' to test the cap shares &
(capped + uncapped) burn shares allocated to each candidate found upon
calling 'BurningManService.getBurningManCandidatesByName'. To this end,
set up mocks of 'DaoStateService' and the other injected dependencies of
'BurningManService' to return skeletal proof-of-burn & compensation
issuance txs & payloads for the test users supplied for each case.
Ensure the test cases exercise the capping algorithm fairly thoroughly,
which is to be changed in the following commits, per the proposal:
https://github.com/bisq-network/proposals/issues/412
In particular, provide test cases where more than two capping rounds
would be applied by the new algorithm, as opposed to the current
algorithm which always applies at most two. (This leads to strictly
lower shares for the Legacy Burning Man and non-strict increases in the
shares of all the other burning men.)
Use fees retrieved from getAllMarketPrices.
More frequent fee updates (was 5 minutes, now 1).
Saves one socket connection.
Saves one threadpool.
Service failover to new node works (previously did not).
Uses a POJO data transfer object instead of parsing JSON into a tree.
Code footprint is reduced.
Clients no longer need to request fee updates.
See issue 5509.
When creating or taking an offer, the user gets the choice to fund the
trade from an external wallet, for potentially better privacy (avoiding
a coin merge if inputs from the external wallet are kept separated).
From feedback it seems that most users are not using that option, yet
they still need the extra click on the "Fund from internal Bisq
wallet" button.
We now offer a way to avoid that extra step if the user chooses to use
the internal wallet. The first time, we show a popup with background
info on why funding from an external wallet could improve privacy and,
if the user prefers not to use that option, give them the choice to
skip that step in future. In the settings we could add a toggle to
re-enable it.
The review screen can be skipped if the user chooses this feature,
leading to data entry > confirm rather than
data entry > fund > review > confirm.
The request URL is appended to the thread name on each price request.
This is a leak because the size of the string grows without bound.
There's actually no benefit to modifying the thread name with
the request URL, as whatever small amount of logging is done in
the price subsystem includes the applicable URL in the message
itself. So applying the KISS rule, we no longer munge the
thread name at all.
Repurpose unused bonded roles for Bisq 2 roles.
We prefer not to add new roles, to avoid risks with DAO consensus issues.
Signed-off-by: HenrikJannsen <boilingfrog@gmx.com>