Remove the last 10 blocks one-by-one from the end of the internal linked
list of blocks, instead of rebuilding a truncated list from scratch.
(This all takes place within a write-lock anyway, so it's atomic.)
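A minimal sketch of the idea, assuming (hypothetically) a LinkedList of blocks guarded by a ReentrantReadWriteLock; 'Block' stands in for the actual block type:

    import java.util.LinkedList;
    import java.util.concurrent.locks.ReentrantReadWriteLock;

    class BlockStore {
        private final LinkedList<Block> blocks = new LinkedList<>();
        private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

        // Drop the last n blocks in place: n O(1) removals from the tail,
        // instead of an O(size) rebuild of the whole list.
        void truncateLastBlocks(int n) {
            lock.writeLock().lock();
            try {
                for (int i = 0; i < n && !blocks.isEmpty(); i++) {
                    blocks.removeLast();
                }
            } finally {
                lock.writeLock().unlock();
            }
        }
    }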
Add missing synchronisation to the 'toProtoMessage' method, by first
copying the internal list of blocks inside a read-lock, prior to
serialisation (which still happens outside the lock, to maximise
concurrency). Since we only make a shallow copy, this should be fast and
take no more than a MB or so of extra memory.
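Roughly, the pattern looks like this (hypothetical names; only the copy is guarded):

    // Copy under the read-lock; serialise outside it, so writers are not
    // blocked for the (much longer) serialisation step.
    List<Block> snapshot;
    lock.readLock().lock();
    try {
        snapshot = new ArrayList<>(blocks);  // shallow: one reference per block
    } finally {
        lock.readLock().unlock();
    }
    return serialise(snapshot);  // e.g. builds the protobuf message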
This prevents a race seen to cause a ConcurrentModificationException
during store persistence, that sometimes occurred when the application
resumed from a long suspension.
Now that 'mockito-junit-jupiter' has been added to the build test
dependencies, we may replace the manual setup & tear down of a Mockito
session with the 'MockitoExtension' JUnit 5 test extension, as the old
Mockito JUnit 4 test runner was no longer available upon upgrading to
Jupiter. This slightly simplifies the tests which use '@Mock', '@Spy',
etc.
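For reference, the new-style test class looks something like this (SomeServiceTest and SomeDependency are placeholders):

    import org.junit.jupiter.api.Test;
    import org.junit.jupiter.api.extension.ExtendWith;
    import org.mockito.Mock;
    import org.mockito.junit.jupiter.MockitoExtension;

    @ExtendWith(MockitoExtension.class)
    class SomeServiceTest {
        @Mock
        private SomeDependency dependency;  // initialised by the extension

        @Test
        void usesTheMock() {
            // ... no manual session setup or tear-down needed
        }
    }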
Fix inspection warnings and other minor defects (unchecked casts, bad
'assert*' usage, unused assignments/declarations, missing null checks,
object equality on enum instances, needless statement lambdas, missing
final, redundant 'toString', needless visibility) in 'TxValidatorTest'.
Also, uniformise and slightly tidy resource loading in other test
classes, replacing 'Files.readAllBytes' with Java 11 'Files.readString'
(which assumes UTF-8 as desired, instead of the default charset).
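That is, a one-line change per call site, along the lines of:

    // Before (pre-Java 11): decode manually; omitting the charset silently
    // falls back to the platform default.
    String oldStyle = new String(Files.readAllBytes(path), StandardCharsets.UTF_8);

    // After (Java 11+): always decodes as UTF-8.
    String newStyle = Files.readString(path);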
Fix resource loading in 'MakerTxValidatorSanityCheckTests', which breaks
when the working directory has spaces in its absolute path, due to the
fact that the test resource paths are URL encoded.
(This pattern is already used to load resources in a few other tests, so
there hopefully won't be a regression on Windows.)
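The fix amounts to going via a URI, roughly as follows (resource name hypothetical):

    URL url = getClass().getResource("/some_test_resource.json");

    // Breaks with spaces in the absolute path: getPath() keeps %20 escapes.
    // Path broken = Paths.get(url.getPath());

    // Robust: URL -> URI -> Path decodes the escapes first.
    // (toURI() throws URISyntaxException, declared by the test method.)
    Path path = Paths.get(url.toURI());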
The `checkFeeAmountBTC` method looks up the `value` field of
`vin[0].prevout` without checking whether `vin[0].prevout` exists. When
the field is missing, `JsonElement jsonVIn0Value =
jsonVin0.getAsJsonObject("prevout").get("value");` (line 193) throws a
NullPointerException.
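A guard along these lines avoids the NPE (the failure value is a placeholder for whatever checkFeeAmountBTC reports):

    JsonObject prevout = jsonVin0.getAsJsonObject("prevout");
    if (prevout == null || prevout.get("value") == null) {
        return false;  // placeholder failure result
    }
    JsonElement jsonVIn0Value = prevout.get("value");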
The `TxValidator`'s `getVinAndVout(...)` method assumes that
`json.get("field").getAsJsonArray()` returns null when the field is not
a JSON array. However, `getAsJsonArray()` throws an
`IllegalStateException` in that case, which no caller catches.
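A null-safe lookup would check the element type before casting, e.g.:

    JsonElement element = json.get("vin");  // null if the field is absent
    if (element == null || !element.isJsonArray()) {
        return Optional.empty();  // placeholder failure result
    }
    JsonArray vin = element.getAsJsonArray();  // now guaranteed not to throw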
The initialSanityChecks method only checks whether the JSON response is
null (HTTP request failed) or the response is empty (HTTP 200) before
parsing it. An invalid JSON response throws a JsonSyntaxException, which
the callers do not catch.
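A sketch of the extra guard (the result value is a placeholder):

    try {
        JsonObject json = new Gson().fromJson(jsonTxt, JsonObject.class);
        // ... continue the sanity checks on the parsed object
    } catch (JsonSyntaxException e) {
        return false;  // placeholder: treat malformed JSON like a failed request
    }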
Previously, the onGetBlocksRequest method created an ArrayList and two
LinkedLists before creating the GetBlocksResponse. The first two lists
were never used. PR #6947 introduced the second LinkedList creation.
With this change, the GetBlocksRequestHandler only creates a single
LinkedList.
Before this change, the currentFilter was added to the invalidFilters
list after comparing both filters' creation dates but before checking
whether the new filter was created by a banned signer.
Note: The invalid filters list is only used on the security manager Bisq
instances. This list is never read by regular user clients.
Since the 'MockitoAnnotations.initMocks' method used to initialise
@Mock, @Spy, etc. fields is deprecated, use MockitoSession instead, as
suggested by the Javadoc. (Or just remove the call, if the test class is
not actually using any Mockito annotations.)
This allows strict stubbing (the default for Mockito 4), though it must
be configured to be lenient in the P2P test classes, to prevent failures
due to unused stubs.
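The replacement pattern, roughly as the deprecation Javadoc suggests, shown here with the lenient strictness used in the P2P test classes:

    import org.junit.jupiter.api.AfterEach;
    import org.junit.jupiter.api.BeforeEach;
    import org.mockito.Mockito;
    import org.mockito.MockitoSession;
    import org.mockito.quality.Strictness;

    private MockitoSession session;

    @BeforeEach
    void setUp() {
        session = Mockito.mockitoSession()
                .initMocks(this)
                .strictness(Strictness.LENIENT)  // default is STRICT_STUBS
                .startMocking();
    }

    @AfterEach
    void tearDown() {
        session.finishMocking();
    }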
Change the algorithm used to adjust & cap the burn share of each BM
candidate to use an unlimited number of 'rounds', as described in:
https://github.com/bisq-network/proposals/issues/412
That is, instead of capping the shares once, then distributing the
excess to the remaining BM, then capping again and giving any excess to
the Legacy Burning Man, we cap-redistribute-cap-redistribute-... an
unlimited number of times until no more candidates are capped. This has
the effect of reducing the LBM's share and increasing everyone else's,
alleviating the security risk of giving too much to the LBM (who is
necessarily uncapped).
Instead of implementing the new algorithm directly, we simply enlarge
the set of candidates who should be capped to include those who would
eventually be capped by the new algorithm, in order to determine how
much excess burn share should go to the remaining BM. Then we apply the
original method, 'candidate.calculateCappedAndAdjustedShares(..)', to
set each share to be equal to its respective cap or uniformly scaled
upwards from the starting amount accordingly.
To this end, the static method 'BurningManService.imposeCaps' is added,
which determines which candidates will eventually be capped, by sorting
them in descending order of burn-share/cap-share ratio, then marking all
the candidates in some suitable prefix of the list as capped, iterating
through them one-by-one & gradually increasing the virtual capping round
(starting at zero) until the end of the prefix is reached. (The method
also determines what the uncapped adjusted burn share of each BM should
be, but that only affects the BM view & burn targets.) In this way, the
new algorithm runs in guaranteed O(n * log n) time.
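In outline, the prefix-capping idea looks roughly like this (a deliberate simplification with hypothetical accessors: it ignores the capping-round bookkeeping and assumes shares are positive fractions of a unit total):

    // Anyone who must be capped sorts before anyone who need not be.
    candidates.sort(Comparator.comparingDouble(
            (BurningManCandidate c) -> c.getBurnShare() / c.getCapShare()).reversed());

    double remainingBurnShare = candidates.stream()
            .mapToDouble(BurningManCandidate::getBurnShare).sum();
    double remainingCapShare = 1.0;
    for (BurningManCandidate candidate : candidates) {
        if (remainingBurnShare <= 0) {
            break;
        }
        // Scale-up factor the still-uncapped candidates would receive now:
        double scale = remainingCapShare / remainingBurnShare;
        if (candidate.getBurnShare() * scale < candidate.getCapShare()) {
            break;  // this and every later candidate stays uncapped
        }
        candidate.markCapped();
        remainingBurnShare -= candidate.getBurnShare();
        remainingCapShare -= candidate.getCapShare();
    }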
To prevent failed trades, the new algorithm is set to activate at time
'DelayedPayoutTxReceiverService.PROPOSAL_412_ACTIVATION_DATE', with a
placeholder value of 12am, 1st January 2024 (UTC). This simply toggles
whether the for-loop in 'imposeCaps' should stop after capping round 0,
since doing so will lead to identical behaviour to the original code
(even accounting for FP rounding errors).
Replace HashMap in 'BurningManService.getBurningManCandidatesByName'
result construction with a TreeMap, to ensure that the map values are
ordered deterministically (alphabetically by candidate name) when
computing floating point sums. The map values are streamed over in a few
places in this method and elsewhere in 'DelayedPayoutTxReceiverService',
when performing double precision summation to compute the DPT. This
introduces potential nondeterminism due to the nonassociativity of FP
addition, making sums dependent on the term order.
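For instance, with standard IEEE 754 doubles:

    // (0.1 + 0.2) + 0.3 == 0.6000000000000001, while 0.1 + (0.2 + 0.3) == 0.6,
    // so streaming the same map values in a different order can change the sum.
    System.out.println((0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3));  // prints false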
(Note that 'DoubleStream::sum' uses compensated (Kahan) summation, which
makes it effectively quad precision internally, so the chance of term
reordering causing the result to differ by even a single ULP is probably
quite low here. So there might not be much problem in practice.)
Add a missing test case for burning man candidates with fully expired
compensation issuance or proofs-of-burn, or who are inactive because
they have never carried out a proof-of-burn, to prevent regressions in
the forthcoming capping algorithm code changes.
Also add missing assertions for the expected 'adjustedBurnAmountShare'
property of each candidate to the test cases. Note that these only
affect the UI, including the displayed burn targets, not the actual BM
revenue (DPT outputs or fee recipient selection), so we use approximate
equality in the test assertions as usual for floating point expressions.
Add an inner class to 'BurningManServiceTest' to test the cap shares &
(capped + uncapped) burn shares allocated to each candidate found upon
calling 'BurningManService.getBurningManCandidatesByName'. To this end,
set up mocks of 'DaoStateService' and the other injected dependencies of
'BurningManService' to return skeletal proof-of-burn & compensation
issuance txs & payloads for the test users supplied for each case.
Ensure the test cases exercise the capping algorithm fairly thoroughly,
which is to be changed in the following commits, per the proposal:
https://github.com/bisq-network/proposals/issues/412
In particular, provide test cases where more than two capping rounds
would be applied by the new algorithm, as opposed to the current
algorithm which always applies at most two. (This leads to strictly
lower shares for the Legacy Burning Man and non-strict increases in the
shares of all the other burning men.)
Use fees retrieved from getAllMarketPrices.
More frequent fee updates (was 5 minutes, now 1).
Saves one socket connection.
Saves one threadpool.
Service failover to new node works (previously did not).
Uses a POJO data transfer object instead of parsing JSON into a tree.
Code footprint is reduced.
Clients no longer need to request fee updates.
See issue 5509.
When creating or taking an offer, the user gets the choice to fund the
trade from an external wallet, for potentially better privacy (avoiding
a coin merge, if inputs from the external wallet are kept separate).
Feedback suggests that most users are not using that option, yet they
still need the extra click on the "Fund from internal Bisq wallet"
button.
We offer a way to avoid that extra step if the user chooses to use the
internal wallet. The first time, we show a popup with background info
on why funding from an external wallet could improve privacy, and give
users who prefer not to use it an option to skip that step in future.
In the settings we could add a toggle to re-enable it.
The review screen can be skipped if the user chooses this feature,
leading to data entry > confirm rather than
data entry > fund > review > confirm.
The request URL is appended to the thread name on each price request.
This is a leak, because the string grows without bound.
There is actually no benefit to munging the thread name with the
request URL, as what little logging the price subsystem does already
includes the applicable URL in the message itself. So, applying the
KISS rule, we no longer modify the thread name at all.
Repurpose unused bonded roles as Bisq 2 roles.
We prefer not to add new roles, to avoid risks of DAO consensus issues.
Signed-off-by: HenrikJannsen <boilingfrog@gmx.com>
PR #6740 was a diff against master (instead of against the
release/v1.9.11 branch). The previous merge into the release branch
corrupted the accounting file.
Remove old file.
The current accounting data are missing about 10% of transactions, due to threading issues when processing the transactions in earlier versions.
The getter was called by EqualsAndHashCode, which throws an exception, as the getter is not intended to be used anymore.
Signed-off-by: HenrikJannsen <boilingfrog@gmx.com>
Implement new selection algorithm.
Add methods for accessing the receiverAddress (not used yet, but will be used in the next commits).
Signed-off-by: HenrikJannsen <boilingfrog@gmx.com>
Rename isHotfixActivated to isBugfix6699Activated, and
wasHotfixActivatedAtTradeDate to wasBugfix6699ActivatedAtTradeDate.
Signed-off-by: HenrikJannsen <boilingfrog@gmx.com>
Short circuit the BigInteger arithmetic in 'AltcoinExchangeRate' &
'org.bitcoinj.utils.ExchangeRate' (from which the former is adapted), by
using ordinary long arithmetic when it is guaranteed not to overflow due
to the two quantities to be multiplied fitting in an int. This will be
the case most of the time. Also remove duplicated logic, to ensure that
all conversions of BTC amounts to volumes happen via the 'Price'
instance methods, so that the optimisation always applies.
In particular, this speeds up the BTC -> BSQ conversions in the burning
man view, as well as the USD price calculations for the candles in the
trades charts view via 'TradeStatistics3.getTradeVolume()'.
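The fast path rests on the fact that the product of two int-sized values cannot overflow a long; a sketch (hypothetical helper, glossing over negative-division subtleties):

    import java.math.BigInteger;

    // |a|, |b| < 2^31 implies |a * b| < 2^62, which fits in a long.
    static long multiplyDivide(long a, long b, long divisor) {
        if (a == (int) a && b == (int) b) {
            return a * b / divisor;  // fast path: pure long arithmetic
        }
        return BigInteger.valueOf(a)  // slow path: exact, as before
                .multiply(BigInteger.valueOf(b))
                .divide(BigInteger.valueOf(divisor))
                .longValueExact();
    }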
Additionally, fix the lazy initialisation pattern in TradeStatistics3 to
ensure that it is thread safe (that is, it only has benign data races),
by making it of the form:
    Foo foo = this.foo;
    if (foo == null) {
        this.foo = foo = computeFoo();
    }
    return foo;
This avoids the problem that 'foo' is a nonvolatile field and can
therefore be seen to alternate any number of times between null and
nonnull from the PoV of the thread initialising it (at least when the
initialisation is racy).
Add an 'averagePricesValid' boolean field to avoid needless refilling of
the cached BSQ prices map when calling 'getAverageBsqPriceByMonth()'.
(Also skip a redundant filling of the map with non-historical data upon
startup of the service.) Since the prices are calculated from the
(observable) set of all trade statistics, add a listener to the set to
invalidate the cache whenever it changes.
This significantly speeds up the burning man view, since the getter is
called several times when activating it.
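The caching pattern, sketched with hypothetical names (the listener is installed once, e.g. in the constructor):

    private final Map<Date, Price> averageBsqPriceByMonth = new HashMap<>();
    private volatile boolean averagePricesValid;

    public Map<Date, Price> getAverageBsqPriceByMonth() {
        if (!averagePricesValid) {
            fillAverageBsqPriceByMonth();  // recompute from the trade statistics
            averagePricesValid = true;
        }
        return averageBsqPriceByMonth;
    }

    // Any change to the observable trade statistics set invalidates the cache.
    tradeStatisticsSet.addListener(
            (SetChangeListener<TradeStatistics3>) change -> averagePricesValid = false);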
Factor out duplicated logic in the 'Stream.map' lambdas to compute the
BSQ value of the BTC of each streamed ReceivedBtcBalanceEntry, returned
as an 'Optional<Long>'. Also simplify the logic slightly and return an
OptionalLong instead for greater efficiency.
(Also replace a statement lambda with an expression lambda.)
Optimise 'BurningManPresentationService.getCandidateBurnTarget' to avoid
the repeated computation of the total accumulated decayed burned amount
for every listed burning man. To this end, cache the total in a nullable
Long field, along with the method 'getAccumulatedDecayedBurnedAmount()'
to lazily initialise it. (This eliminates a minor hotspot in the burning
man view revealed by JProfiler.)
Cache enum arrays 'TickUnit.values()' & 'PaymentMethodWrapper.values()'
as the JVM makes defensive copies of them every time they are returned,
and they are both being used in tight loops. In particular, profiling
suggests this will make 'TradeStatistics3.isValid' about twice as fast.
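The caching is a one-line hoist per enum, e.g.:

    // TickUnit.values() allocates a fresh defensive copy on every call; cache
    // one copy and never mutate it.
    private static final TickUnit[] TICK_UNITS = TickUnit.values();

    // ... then iterate over TICK_UNITS instead of TickUnit.values() in loops.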
Now that the trade statistics are retrieved as a sorted set, it can be
assumed that the USD & BSQ trade lists passed to 'getUSDAverage' are
already sorted. Use this to avoid repeatedly scanning the USD trade list
for the first trade dated after each given BSQ trade, by moving two
cursors in a single pass across the respective lists simultaneously.
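The two-cursor pass looks roughly like this (hypothetical list names; both lists are sorted by date, which the TreeSet now guarantees):

    int usdIndex = 0;
    for (TradeStatistics3 bsqTrade : sortedBsqTrades) {
        // Advance the cursor past every USD trade not dated after this BSQ
        // trade; it never moves backwards, as the BSQ trades are sorted too.
        while (usdIndex < sortedUsdTrades.size()
                && !sortedUsdTrades.get(usdIndex).getDate().after(bsqTrade.getDate())) {
            usdIndex++;
        }
        // sortedUsdTrades.get(usdIndex), if in range, is now the first USD
        // trade dated after this BSQ trade: O(m + n) overall, not O(m * n).
    }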
Make TradeStatistics3 implement the previously added ComparableExt
interface and make TradeStatisticsManager hold them as a TreeSet instead
of a HashSet, to support fast retrieval of statistics in any given date
range. (Even though red-black trees are generally slower than hash
tables, this should not matter here since the set is only being iterated
over and infrequently appended, and does not benefit from O(1) lookups/
additions/removals.)
Add a 'TradeStatisticsManager.getNavigableTradeStatisticsSet' accessor,
which returns the backing TreeSet of the current ObservableSet field, so
that callers can access its NavigableSet interface where needed (as
there is no ObservableSortedSet or similar in JavaFX). Use this to
optimise 'AveragePriceUtil.getAveragePriceTuple',
'DisputeAgentSelection.getLeastUsedDisputeAgent' and
'MutableOfferDataModel.getSuggestedSecurityDeposit', to obtain a narrow
date range of trade statistics without streaming over the entire set.
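Sketch of a caller (the bound markers stand for whatever elements the ComparableExt ordering accepts as date bounds; hypothetical here):

    NavigableSet<TradeStatistics3> all =
            tradeStatisticsManager.getNavigableTradeStatisticsSet();
    // A date range becomes a view over the TreeSet, not a scan of every element:
    for (TradeStatistics3 stat : all.subSet(fromMarker, true, toMarker, false)) {
        // ... only statistics inside the range are visited
    }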
Additionally optimise & simplify the price collation in
'TradeStatisticsManager.onAllServicesInitialised', by exploiting the
fact that the statistics are now sorted in order of date (which is the
presently defined natural order).
Use a DoubleStream when streaming over 'List<Double>' method arguments
in InlierUtil, as well as a primitive array sort in 'InlierUtil.trim'
(followed by taking a sublist view), instead of calling 'Stream.sorted'.
To this end, use Guava 'Doubles.asList' to pass lists of Doubles to/from
the InlierUtil methods without incurring any boxing or unboxing costs,
since their spliterators can be simply downcast to Spliterator.OfDouble
(opportunistically), instead of needing to use 'mapToDouble' to unbox.
This was a minor hotspot when called from AveragePriceUtil (used by the
burning man and BSQ dashboard views).
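Sketched, the trim step becomes something like this (Guava's Doubles utility; index names hypothetical):

    double[] arr = Doubles.toArray(yValues);   // one unboxing pass
    Arrays.sort(arr);                          // primitive dual-pivot sort
    List<Double> sorted = Doubles.asList(arr); // boxed *view*, no copy
    List<Double> trimmed = sorted.subList(lowerIndex, upperIndex);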
Use nonstrict bounds when filtering outliers from the provided trade
statistics list, since otherwise it will always remove the outermost two
inliers from the list. This is because the dependent call to
'InlierUtil.findInlierRange' returns the min & max inlier values.
Use a Map to speed up 'PaymentMethod.getPaymentMethod', called from
'isValid', instead of scanning the payment method list every invocation.
Make the list immutable to ensure the map never goes stale, which is OK
since no code modifies it outside PaymentMethod's static initialisation.
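Sketch of the lookup table (the real getter's signature may differ):

    // Built once, after the immutable payment method list; O(1) lookups after.
    private static final Map<String, PaymentMethod> PAYMENT_METHOD_BY_ID =
            paymentMethods.stream().collect(Collectors.toMap(PaymentMethod::getId, m -> m));

    public static Optional<PaymentMethod> getPaymentMethod(String id) {
        return Optional.ofNullable(PAYMENT_METHOD_BY_ID.get(id));
    }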
Also speed up the global accessor 'TradeLimits.getMaxTradeLimit', by
caching the DAO param value in a volatile field, cleared upon each new
block arrival.
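A benign-race lazy cache, sketched with illustrative names (the listener hook is hypothetical):

    private volatile Coin cachedMaxTradeLimit;

    public Coin getMaxTradeLimit() {
        Coin limit = cachedMaxTradeLimit;
        if (limit == null) {
            cachedMaxTradeLimit = limit = lookUpDaoParamValue();  // hypothetical helper
        }
        return limit;
    }

    // Registered once with the DAO state listener mechanism: a new block may
    // change the param, so drop the cache on each block arrival.
    // onNewBlock -> cachedMaxTradeLimit = null;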
Furthermore, speed up 'TradeLimits.getFirstMonthRiskBasedTradeLimit' by
simplifying the rounding logic to avoid a pass through the (rather slow)
BigDecimal type.
Short circuit the exception control flow used in the method
'TradeStatistics3.getPaymentMethodId', which occurs whenever the payment
method code is stored directly in the 'paymentMethod' field instead of
first being converted into a numeric string. This occurs if the method
is unrecognised, as it is not listed in the 'PaymentMethodWrapper' enum.
This fixes an unnecessary slowdown of 'TradeStatistics3.isValid', which
calls the above, when scanning the list of BSQ trade statistics in
AveragePriceUtil and elsewhere, due to the fact that the 'BSQ_SWAP'
payment method has been missed out of the above enum.
Also add a (presently disabled) unit test to prevent any future payment
methods from being missed out of the enum. (Should fix the missing BSQ
swaps issue in a separate PR to make sure that the seed nodes recognise
the new payment method code before anyone else.)
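The short circuit can be as simple as a first-character check before attempting the numeric decode (a sketch, assuming the stored numeric string is the wrapper enum's ordinal, as described above):

    private String getPaymentMethodId() {
        // Non-numeric first char => the raw method code was stored directly
        // (the unrecognised-method case), so skip the doomed parse + catch.
        if (paymentMethod.isEmpty() || paymentMethod.charAt(0) > '9') {
            return paymentMethod;
        }
        return PaymentMethodWrapper.values()[Integer.parseInt(paymentMethod)].name();
    }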
Change the flow of cloning an offer:
We open the clone offer tab similarly to the duplicate/edit offer tab. When the clone button is clicked, we create and publish the cloned offer. If the clone has not changed the payment method/currency, we show a popup and deactivate the offer.
At editOffer we check whether the offer uses a shared maker fee, and if so, whether the edit resulted in the same payment method/currency. If so, we show a popup and deactivate the offer.
Signed-off-by: HenrikJannsen <boilingfrog@gmx.com>
btcWalletService.getAddressEntriesForOpenOffer() also contains OFFER_FUNDING entries.
This version minimises the change by mapping to addresses and using distinct, so that duplicate entries are not summed up.
Signed-off-by: HenrikJannsen <boilingfrog@gmx.com>
Multiple calls will invoke add on the HashSet, but as the entry is the same, it will not trigger any change to the set.
Signed-off-by: HenrikJannsen <boilingfrog@gmx.com>
The numTransactions param in getRecentTransactions delivers all transactions if it is 0, but that is not intuitive. Passing Integer.MAX_VALUE makes more sense.
Signed-off-by: HenrikJannsen <boilingfrog@gmx.com>
We use a suffix on the header title rather than a busy animation, as the busy animation is quite CPU-intensive.
Signed-off-by: HenrikJannsen <boilingfrog@gmx.com>