Do not create a new observableArrayList in filterPaymentAccounts.
The reason why the wrong account gets selected is not completely clear to me. The selection handler gets called when the ComboBox gets filled, and that overwrites the selected account from the data. It seems that the new observableArrayList in filterPaymentAccounts triggered that unexpected behaviour.
Short circuit the BigInteger arithmetic in 'AltcoinExchangeRate' &
'org.bitcoinj.utils.ExchangeRate' (from which the former is adapted), by
using ordinary long arithmetic when it is guaranteed not to overflow due
to the two quantities to be multiplied fitting in an int. This will be
the case most of the time. Also remove duplicated logic, to ensure that
all conversions of BTC amounts to volumes happen via the 'Price'
instance methods, so that the optimisation always applies.
In particular, this speeds up the BTC -> BSQ conversions in the burning
man view, as well as the USD price calculations for the candles in the
trades charts view via 'TradeStatistics3.getTradeVolume()'.
Additionally, fix the lazy initialisation pattern in TradeStatistics3 to
ensure that it is thread safe (that is, it only has benign data races),
by making it of the form:
    Foo foo = this.foo;
    if (foo == null) {
        this.foo = foo = computeFoo();
    }
    return foo;
This avoids the problem that 'foo' is a nonvolatile field and can
therefore be seen to alternate any number of times between null and
nonnull from the PoV of the thread initialising it (at least when the
initialisation is racy).
Use the previously added 'ChartDataModel.toCachedTimeIntervalFn' to
additionally speed up some of the charts in the BSQ supply view, in
particular the trade fees & total burned BSQ, via the DaoChartDataModel
methods 'getBsqTradeFeeByInterval' & 'getTotalBurnedByInterval'. (The
other changes in the BSQ supply, such as proofs of burn or issuance, are
too infrequent to benefit from the LocalDate caching.)
For this to work, the filtered BSQ txs must be streamed in chronological
order, so provide local methods 'get[Burnt|Trade]FeeTxStream()', to use
in place of the DaoStateService methods 'get[Burnt|Trade]FeeTxs()',
which return unordered HashSets.
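A minimal sketch of such a stream accessor, under the assumption that
DaoStateService exposes an unordered tx stream and that Tx has burnt-fee
and epoch-time getters (these names are assumptions, not the exact Bisq
API):

    // Stream the burnt-fee txs in chronological order, unlike the unordered
    // HashSet returned by 'DaoStateService.getBurntFeeTxs()'.
    private Stream<Tx> getBurntFeeTxStream() {
        return daoStateService.getUnorderedTxStream()           // assumed accessor
                .filter(tx -> tx.getBurntFee() > 0)             // assumed accessor
                .sorted(Comparator.comparingLong(Tx::getTime)); // assumed epoch-time accessor
    }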
Now that the trade statistics are retrieved in chronological order,
optimise the per-interval BSQ & USD price and volume calculations in
PriceChartDataModel & VolumeChartDataModel, by adding caches to avoid
relatively expensive timezone calculations in TemporalAdjusterModel,
similarly to the cache added for 'ChartCalculations.roundToTick' (as
profiling shows 'TemporalAdjusterModel.toTimeInterval' is a hotspot).
Add a cache to speed up Instant -> LocalDate mappings by storing the
unix time (Instant) range of the last seen day (LocalDate) in a tuple,
then just returning that day if the next Instant falls in range. Also
add a cache of the last temporal adjustment (start of month, week, etc.)
of that day. In this way, successive calls to 'toTimeInterval(Instant)'
with input times on the same day are sped up.
Since TemporalAdjusterModel is used by multiple threads simultaneously,
store the caches in instance fields and add a 'withCache' method which
clones the model and enables the caching, since otherwise the separate
threads keep invalidating one another's caches, making it slower than it
would be without them. (We could use ThreadLocals, but profiling
suggests they are too heavyweight to be very useful here, so instead use
unsynchronised caching with nonfinal fields and benign data races.)
Provide the method 'ChartDataModel.toCachedTimeIntervalFn' which returns
a method reference to a cloned & cache-enabled TemporalAdjusterModel, to
use in place of the delegate method 'ChartDataModel.toTimeInterval' when
the caching is beneficial.
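A hedged sketch of the cache and the 'withCache' cloning described above
(field names and exact structure are assumptions, not the precise Bisq
implementation):

    import java.time.Instant;
    import java.time.LocalDate;
    import java.time.ZoneId;
    import java.time.temporal.TemporalAdjuster;
    import java.time.temporal.TemporalAdjusters;

    public class TemporalAdjusterModel implements Cloneable {
        private static final ZoneId ZONE_ID = ZoneId.systemDefault();
        private TemporalAdjuster temporalAdjuster = TemporalAdjusters.firstDayOfMonth();

        // Unsynchronised caches with benign data races; each thread works on its
        // own clone, so the threads cannot evict one another's entries.
        private Instant cachedDayStart, cachedDayEnd;  // unix time range of the last seen day
        private long cachedTimeInterval;               // last temporal adjustment of that day
        private boolean enableCache;

        public TemporalAdjusterModel withCache() {
            try {
                TemporalAdjusterModel clone = (TemporalAdjusterModel) super.clone();
                clone.enableCache = true;
                return clone;
            } catch (CloneNotSupportedException e) {
                throw new RuntimeException(e);
            }
        }

        public long toTimeInterval(Instant instant) {
            if (enableCache && cachedDayStart != null
                    && !instant.isBefore(cachedDayStart) && instant.isBefore(cachedDayEnd)) {
                return cachedTimeInterval;  // fast path: same day as the previous call
            }
            LocalDate day = instant.atZone(ZONE_ID).toLocalDate();  // relatively expensive
            long interval = day.with(temporalAdjuster).atStartOfDay(ZONE_ID).toEpochSecond();
            if (enableCache) {
                cachedDayStart = day.atStartOfDay(ZONE_ID).toInstant();
                cachedDayEnd = day.plusDays(1).atStartOfDay(ZONE_ID).toInstant();
                cachedTimeInterval = interval;
            }
            return interval;
        }
    }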
As profiling shows a hotspot mapping the set of trade statistics to a
list of currencies to pass to 'CurrencyList.updateWithCurrencies',
attempt to speed this up with a parallel stream. For this to work
correctly, take care to use the backing set (with unmodifiable wrapper)
in place of 'tradeStatisticsManager.getObservableTradeStatisticsSet()',
as ObservableSetWrapper doesn't delegate calls to its spliterator.
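Roughly, the mapping might look like this (the accessor names are
assumptions):

    // The unmodifiable view of the backing set splits properly for parallel
    // streaming; the ObservableSetWrapper does not delegate spliterator().
    Set<TradeStatistics3> statistics = tradeStatisticsManager.getUnmodifiableTradeStatisticsSet(); // assumed
    List<String> currencyCodes = statistics.parallelStream()
            .map(TradeStatistics3::getCurrency)  // assumed extractor
            .distinct()
            .collect(Collectors.toList());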
Reduce a hotspot sorting the trade statistics table, triggered by the
'sortedList.bind(comparatorProperty)' call upon completion of the
'fillList' future. Profiling shows that repeated invocation of the cell
value factory over the entries of the sorted column is a bottleneck, so
speed this up by caching the returned cell value (given by calling
'new ReadOnlyObjectWrapper<>(listItem)') as an instance field of
TradeStatistics3ListItem.
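A hedged sketch of the cached cell value, reusing the racy single-check
idiom described earlier (the method name is an assumption):

    import javafx.beans.property.ReadOnlyObjectWrapper;

    public class TradeStatistics3ListItem {
        private ReadOnlyObjectWrapper<TradeStatistics3ListItem> cellValue;

        // Called repeatedly by the cell value factory while a column is being
        // sorted, so cache the wrapper instead of allocating one per call.
        public ReadOnlyObjectWrapper<TradeStatistics3ListItem> asCellValue() {
            ReadOnlyObjectWrapper<TradeStatistics3ListItem> cellValue = this.cellValue;
            if (cellValue == null) {
                this.cellValue = cellValue = new ReadOnlyObjectWrapper<>(this);
            }
            return cellValue;
        }
    }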
As a further significant optimisation, stream the trade statistics in
reverse chronological order, when collecting into a list wrapped by
SortedList, as this matches the default display order, reducing the
number of comparisons done by SortedList's internal mergesort to O(n).
Optimise (further) the ChartCalculations methods 'getItemsPerInterval' &
'getCandleData' by replacing HashSets in the former with sorted sets,
which avoids relatively expensive calls to 'TradeStatistics3.hashCode'
and needless subsequent re-sorting by date in 'getCandleData'. (Forming
the trade statistics into an ImmutableSortedSet, OTOH, is cheap since
they are already encountered in chronological order.)
Further optimise the latter by using a primitive array sort of the trade
prices to calculate their median, instead of needlessly boxing them and
using 'Collections.sort'.
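For illustration, the median computation via a primitive sort might look
like this (the long price accessor is an assumption):

    long[] prices = tradeStatisticsList.stream()
            .mapToLong(TradeStatistics3::getPrice)  // assumed long accessor
            .toArray();
    Arrays.sort(prices);                            // primitive sort, no boxing
    long medianPrice = prices.length > 0 ? prices[prices.length / 2] : 0;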
Avoid calculating average prices for ticks that won't ever be part of a
visible chart candle, as only the last 90 ticks can fit on the chart. To
this end, stream the trade statistics in reverse chronological order
(which requires passing them as a NavigableSet), so that once more than
MAX_TICKS ticks have been encountered for a given tick unit, the
relevant map (and all lower granularity maps) can stop being filled up.
Also add a 'PriceAccumulator' static class to save time and memory when
filling up the intermediate maps, by avoiding the addition of each trade
statistics object to (multiple) temporary lists prior to average price
calculation.
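A minimal sketch of such an accumulator (the exact fields are
assumptions):

    private static class PriceAccumulator {
        private long sum;
        private int count;

        void add(long price) {
            sum += price;
            count++;
        }

        // Average price over the accumulated trades, replacing the temporary
        // per-interval lists that were previously built up.
        long getAveragePrice() {
            return count > 0 ? sum / count : 0;
        }
    }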
Now that the trade statistics are encountered in chronological order,
speed up 'roundToTick(LocalDateTime, TickUnit)' by caching the last
calculated LocalDateTime -> Date mapping from the tick start (with one
cache entry per tick unit), as multiple successive trades will tend to
have the same tick start.
This avoids a relatively expensive '.atZone(..).toInstant()' call, which
was slowing down 'ChartCalculations.getUsdAveragePriceMapsPerTickUnit',
as it uses 'roundToTick' in a tight loop (#trades * #tick-units calls).
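A hedged sketch of the cache, with one mutable slot per tick unit
('Tuple2' is Bisq's common tuple with public 'first'/'second' fields;
the rounding helper is an assumed stand-in for the existing logic):

    // Benign data races: a stale read of a slot just recomputes the Date,
    // and the tuple itself is immutable so reads are always consistent.
    @SuppressWarnings("unchecked")
    private static final Tuple2<LocalDateTime, Date>[] TICK_CACHE =
            new Tuple2[TickUnit.values().length];

    static Date roundToTick(LocalDateTime time, TickUnit tickUnit) {
        LocalDateTime tickStart = roundToTickAsLocalDateTime(time, tickUnit);  // assumed existing rounding
        Tuple2<LocalDateTime, Date> cached = TICK_CACHE[tickUnit.ordinal()];
        if (cached != null && cached.first.equals(tickStart)) {
            return cached.second;  // fast path: same tick start as the last call
        }
        Date date = Date.from(tickStart.atZone(ZoneId.systemDefault()).toInstant());  // expensive
        TICK_CACHE[tickUnit.ordinal()] = new Tuple2<>(tickStart, date);
        return date;
    }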
Also unqualify 'TradesChartsViewModel.TickUnit' references for brevity.
Cache enum arrays 'TickUnit.values()' & 'PaymentMethodWrapper.values()'
as the JVM makes defensive copies of them every time they are returned,
and they are both being used in tight loops. In particular, profiling
suggests this will make 'TradeStatistics3.isValid' about twice as fast.
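For example:

    // Enum.values() returns a fresh defensive copy of the array on every
    // call, so cache it once per class:
    private static final TickUnit[] TICK_UNITS = TickUnit.values();
    private static final PaymentMethodWrapper[] PAYMENT_METHOD_WRAPPERS = PaymentMethodWrapper.values();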
The test was erroneously passing a candle tick start time (as a long) to
'ChartCalculations.getCandleData', which expects a tick index from 0 to
MAX_TICKS + 1 (91) inclusive. Since this is out of range, the method
skipped an 'itemsPerInterval' map lookup which would have thrown an NPE
prior to the last commit. Fix the test by making 'itemsPerInterval'
nonempty and passing 0 as the tick index. Also check the now correctly
populated 'date' field in the returned candle data.
Additionally, tidy up the class a little and avoid an unnecessary temp
directory creation.
Avoid scanning all the ticks backwards from 90 to 1 repeatedly, to find
the one with the correct date interval for each item in the
'tradeStatisticsByCurrency' list. Instead, for each item, remember the
last found tick index and move forwards if necessary, then scan
backwards from that point to find the correct tick. As the trade
statistics are now in chronological order, this is much faster (though
it will still work correctly regardless of the order of the list items).
Also, hold 'itemsPerInterval' as a 'List<Pair<..>>' instead of a
'Map<Long, Pair<..>>', since the keys are just tick indices from 0 to 91
inclusive, so it is cleaner and more efficient to use an array than a
hash table.
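A rough sketch of the scan, assuming 'itemsPerInterval' holds one
(tick start date, items) pair per tick index:

    int maxTickIndex = itemsPerInterval.size() - 1;
    int tickIndex = 0;  // last found tick index, remembered across items
    for (TradeStatistics3 item : tradeStatisticsByCurrency) {
        Date date = item.getDate();
        // Move forwards while the item belongs to a later tick...
        while (tickIndex < maxTickIndex
                && !date.before(itemsPerInterval.get(tickIndex + 1).getKey())) {
            tickIndex++;
        }
        // ...then scan backwards from that point to the containing tick.
        while (tickIndex > 0 && date.before(itemsPerInterval.get(tickIndex).getKey())) {
            tickIndex--;
        }
        itemsPerInterval.get(tickIndex).getValue().add(item);
    }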
1) Change statement lambdas to expression lambdas;
2) Replace 'Map.putIfAbsent' then 'Map.get' with 'Map.computeIfAbsent' (see sketch below);
3) Add missing @VisibleForTesting annotation or make private.
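Item 2, for illustration:

    // before:
    map.putIfAbsent(key, new ArrayList<>());
    List<Item> items = map.get(key);
    // after:
    List<Item> items = map.computeIfAbsent(key, k -> new ArrayList<>());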
Make TradeStatistics3 implement the previously added ComparableExt
interface and make TradeStatisticsManager hold them as a TreeSet instead
of a HashSet, to support fast retrieval of statistics in any given date
range. (Even though red-black trees are generally slower than hash
tables, this should not matter here since the set is only being iterated
over and infrequently appended, and does not benefit from O(1) lookups/
additions/removals.)
Add a 'TradeStatisticsManager.getNavigableTradeStatisticsSet' accessor,
which returns the backing TreeSet of the current ObservableSet field, so
that callers can access its NavigableSet interface where needed (as
there is no ObservableSortedSet or similar in JavaFX). Use this to
optimise 'AveragePriceUtil.getAveragePriceTuple',
'DisputeAgentSelection.getLeastUsedDisputeAgent' and
'MutableOfferDataModel.getSuggestedSecurityDeposit', to obtain a narrow
date range of trade statistics without streaming over the entire set.
Additionally optimise & simplify the price collation in
'TradeStatisticsManager.onAllServicesInitialised', by exploiting the
fact that the statistics are now sorted in order of date (which is the
presently defined natural order).
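A hedged sketch of the date-range retrieval this enables ('markerAt' is a
hypothetical probe factory; the ComparableExt interface is what lets such
markers participate in the natural ordering):

    NavigableSet<TradeStatistics3> statistics =
            tradeStatisticsManager.getNavigableTradeStatisticsSet();
    // O(log n) navigation to the window, instead of streaming the whole set:
    SortedSet<TradeStatistics3> window =
            statistics.subSet(markerAt(fromDate), markerAt(toDate));  // hypothetical markers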
Change flow of cloning an offer:
We open the clone offer tab similarly to the duplicate/edit offer tab. When clicking the clone button, we create and publish the cloned offer. If the clone would not have changed the payment method/currency, we show a popup and deactivate the offer.
In editOffer we check if the offer is using a shared maker fee, and if so, whether the edit left the payment method/currency unchanged. If so, we show a popup and deactivate the offer.
Signed-off-by: HenrikJannsen <boilingfrog@gmx.com>
Make sure that none of the key extractor functions passed to
'Comparator.comparing(fn)' can return null, as this results in an NPE
when the corresponding column is sorted in the UI, but has blank entries
(such as the BTC received for a BSQ burn in the balance entries table).
(Make blanks appear smallest in magnitude using 'Comparator.nullsFirst'
or by defaulting to 0 instead of null, since the entries are initially
sorted biggest to smallest, pushing them to the bottom of the table.)
Also change the default sort type of the burned BSQ column, which should
be ASCENDING since the entries are negative.
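For example, a null-safe column comparator might look like this (the
extractor name is hypothetical):

    column.setComparator(Comparator.comparing(
            BalanceEntryItem::getReceivedBtc,                    // hypothetical; may return null for blanks
            Comparator.nullsFirst(Comparator.naturalOrder())));  // blanks sort smallest in magnitude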
We use a postfix to the header title and not a busy animation, as the busy animation is quite CPU-intensive.
Signed-off-by: HenrikJannsen <boilingfrog@gmx.com>
There is a small + icon at the top right of the table which allows showing the hidden columns.
Unfortunately this does not come with column names, so it's a bit ugly.
Here is a way to adjust the context menu: https://gist.github.com/Roland09/d92829cdf5e5fee6fee9
Maybe some dev is motivated to improve that.
We do not add a column to the overview table because calculating the balance entries is expensive and doing it over all burningmen would take too long. There is headroom for performance improvements in that area...
Signed-off-by: HenrikJannsen <boilingfrog@gmx.com>
Add private method 'WalletInfoView.addAccountPaths', similar to the
method 'addXpubKeys', to iterate over the active wallet keychains,
formatting & displaying the derivation paths, instead of using the 4
constants defined in BisqKeyChainGroupStructure. Also simplify the code
slightly by updating the 'gridRow' field directly instead of passing it
as a method argument.
Add the new account path "44'/142'/1'" for segwit BSQ to the wallet info
view, which was missed from PR #5109 making the wallet & UI changes to
implement segwit BSQ. Also format the paths from the constants defined
in 'BisqKeyChainGroupStructure', instead of using string literals, so
that they are only defined in one place. (Though it is extremely
unlikely the paths would ever change.)
Move the 'confirmations' field from TransactionsListItem to the nested
'LazyFields' class, so that it is correctly lazily initialised when
'getNumConfirmations()' is called, instead of just returning 0 for
hidden list items with uninitialised tooltips + confidence indicators.
This makes the logic consistent with that in TxConfidenceListItem and
fixes a bug in the BTC transactions view CSV export, where only the
already rendered list items would have nonzero confirmation counts.
Add a 'lazyFields' volatile field to DepositListItem to detect
initialisation of the associated memoised 'LazyFields' supplier,
allowing confidence updates to be short-circuited when the corresponding
indicator has not yet been lazily loaded.
This is for consistency with the 'LazyFields' logic just added to
TxConfidenceListItem and should make the UI more responsive if a new
block arrives when the funds deposit list is in view, by preventing the
entire list of item tooltips & confidence indicators from being
force-initialised.
Add a 'LazyFields' static class and memoised supplier field to the base
class TxConfidenceListItem, similar to that added to the classes
DepositListItem & TransactionsListItem. This allows lazy loading of
tooltips & tx confidence indicators, just as for those classes.
This removes the current remaining bottleneck in the BSQ tx view load,
revealed by JProfiler. (Note that there is still a quadratic time bug
remaining due to the use of a confidence listener for each list item,
since the BSQ wallet internally uses a CopyOnWriteArrayList, whose
backing array is cloned for each listener addition or removal. However,
it isn't clear that fixing this will give a noticeable speedup unless
there are an extremely large number of BSQ txs in the wallet.)
Take care to add the tx confirmation count to LazyFields, to prevent
issues when exporting the list as a CSV. Also add a volatile
'lazyFields' field to detect lazy initialisation of the supplier and
short circuit confidence updates for uninitialised indicators.
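A hedged, simplified sketch of the pattern (the real class wires the
tooltip and confidence indicator up inside the supplier):

    private static class LazyFields {
        TxConfidenceIndicator txConfidenceIndicator;
        Tooltip tooltip;
        int confirmations;
    }

    // Guava memoised supplier: the fields are only built on first access,
    // i.e. when a row is actually rendered or exported to CSV.
    private final Supplier<LazyFields> lazyFieldsSupplier = Suppliers.memoize(() -> {
        LazyFields lazy = new LazyFields();
        // ... create indicator & tooltip, register the confidence listener ...
        return lazy;
    });
    private volatile LazyFields lazyFields;

    private LazyFields lazy() {
        return lazyFields = lazyFieldsSupplier.get();
    }

    // Short circuit updates for indicators that were never lazily loaded:
    private void updateConfidence(TransactionConfidence confidence) {
        LazyFields lazy = lazyFields;
        if (lazy == null) {
            return;
        }
        // ... update lazy.txConfidenceIndicator, lazy.tooltip & lazy.confirmations ...
    }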
Precompute and pass a map of txIds to BsqSwapTrade instances to the
BsqTxListItem constructor in 'BsqTxView.updateList()', in place of the
tradable repository, so that the tradables don't need to be repeatedly
scanned to find the optional matching BSQ swap trade for each BSQ tx.
This fixes a quadratic time bug and significantly speeds up the BSQ tx
view load for users with many past trades.
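A hedged sketch of the precomputation in 'BsqTxView.updateList()' (the
stream shape is assumed):

    // Build the lookup once, instead of scanning every tradable per BSQ tx:
    Map<String, BsqSwapTrade> bsqSwapTradeByTxId = tradableRepository.getAll().stream()
            .filter(tradable -> tradable instanceof BsqSwapTrade)
            .map(tradable -> (BsqSwapTrade) tradable)
            .filter(trade -> trade.getTxId() != null)
            .collect(Collectors.toMap(BsqSwapTrade::getTxId, trade -> trade, (a, b) -> a));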
Remove the direct call to 'updateList' in the 'BsqTxView.activate()'
method, as it is later called indirectly via 'onUpdateAnyChainHeight()'.
This nearly doubles the loading speed of the BSQ tx list (in the DAO /
BSQ Wallet tab), since 'updateList' is very slow when there are many txs
in the wallet.
In the (hopefully rare) case that the user has multiple past trades that
end in arbitration, the entire wallet tx output set was scanned once for
every such trade (via 'TransactionAwareTrade.isRefundPayoutTx' calls),
to look for any outputs matching the payout address. This potentially
causes a slowdown of the Transactions view load for each new arbitration
case added. To avoid this problem, cache the last set of recipient
address strings of the provided tx, as the next call to
'isRefundPayoutTx' is likely to be for the same tx.
Also check that there is exactly one input (the multisig input) for any
candidate delayed payout tx, to speed up 'isDelayedPayoutTx' in case the
wallet contains many unusual txs with nonzero locktime.
Eliminate a minor quadratic time bug, caused by the unnecessary addition
of a (BtcWalletService) TxConfidenceListener for each list item in the
Transactions view. (Since the confidence listeners are internally held
in a CopyOnWriteArraySet, this sadly runs in quadratic time, slowing
down the Transactions view load a little.)
The confidence listener is apparently redundant because of a set of
calls to 'TransactionsListItem.cleanup' immediately upon construction of
the item list, which removes all the listeners just added. (This code
appears to date from at least February 2016, in commit c70df86.)
(The confidence indicators are kept up to date by simply reloading the
entire list upon each wallet change event.)
Use a crude Bloom filter (of sorts) to cut down the quadratic number of
calls to 'TransactionAwareTradable.isRelatedToTransaction' (that is, one
for each tx-tradable pair) during the Transactions view load. In this
way, we may reduce the number of calls roughly 40-fold, for a Bisq
instance with similar numbers of BSQ swap trades and escrow trades.
(Sadly, profiling does not show a 40-fold reduction in the size of the
'isRelatedToTransaction' hotspot, likely due to the remaining calls
being expensive ones involving disputed trades or unusual txs with
nonzero locktime, e.g. dust attacks or funds from Electrum wallets.)
To this end, partition the wallet transactions into 64 pseudo-randomly
chosen buckets (with a dedicated bucket for txs which might be delayed
payouts, namely those with nonzero locktime). Add an interface method,
'TransactionAwareTradable.getRelatedTransactionFilter', which returns an
IntStream of all the indices of buckets where a related tx may plausibly
be found. Where this is unclear, e.g. for trades involved in a dispute,
just return everything (that is, the range 0..63 inclusive).
Add a class, 'RelatedTransactionFilterSlices', that holds a provided
list of TransactionAwareTradable instances and 64 bitsets of all the
slices through their respective filters (each realised as a 64-bit word
instead of a stream of integers). In this way, a list of tradables
plausibly related to any given tx may be quickly found by simply
selecting the appropriate bitset of the 64 (by the tx bucket index).
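A hedged sketch of the slicing (the constructor shape and bucket lookup
are assumptions; each filter is realised as a 64-bit word over bucket
indices):

    public class RelatedTransactionFilterSlices {
        private final TransactionAwareTradable[] tradables;
        private final BitSet[] slices = new BitSet[64];  // slices[i]: tradables plausibly related to bucket i

        public RelatedTransactionFilterSlices(Collection<? extends TransactionAwareTradable> tradables) {
            this.tradables = tradables.toArray(new TransactionAwareTradable[0]);
            Arrays.setAll(slices, i -> new BitSet(this.tradables.length));
            for (int j = 0; j < this.tradables.length; j++) {
                // Realise the IntStream of bucket indices as one 64-bit word:
                long filter = this.tradables[j].getRelatedTransactionFilter()
                        .mapToLong(i -> 1L << i)
                        .reduce(0L, (a, b) -> a | b);
                for (int i = 0; i < 64; i++) {
                    if ((filter & 1L << i) != 0) {
                        slices[i].set(j);
                    }
                }
            }
        }

        // All tradables plausibly related to a tx, selected by its bucket index.
        public Stream<TransactionAwareTradable> getAllRelatedTradables(int txBucketIndex) {
            return slices[txBucketIndex].stream().mapToObj(j -> tradables[j]);
        }
    }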
Inline a local variable, to eliminate another minor Sha256Hash.toString
hotspot in the Transactions view load, this time coming from
'TransactionAwareOpenOffer.isRelatedToTransaction'. This is helpful in
the case that the user has a large number of (possibly disabled) BSQ
swap offers.
Move the line,
    Set<Tradable> tradables = tradableRepository.getAll();
to the top level of 'TransactionsView.updateList', instead of needlessly
calling 'TradableRepository.getAll' (which builds a new set every
invocation) for each wallet transaction being iterated over.
This was causing a significant slowdown of the view load.
Use a mutable static tuple field to cache the last result of
'Sha256Hash.toString', which is used to get the ID string of the input
tx, when calling 'TransactionAwareTrade.isRelatedToTransaction'. In this
way, consecutive calls to 'isRelatedToTransaction' on the same input tx
(over all the past trades, as done by 'TransactionsView.updateList') are
sped up significantly, since hex encoding the txId is a bottleneck.
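A hedged sketch of the tuple cache ('Tuple2' with public 'first'/'second'
fields, as in Bisq's common utils; the helper shape is assumed):

    // Mutable static cache with a benign data race: the tuple is immutable,
    // so a stale read merely recomputes the hex string.
    private static Tuple2<Sha256Hash, String> lastTxIdString;

    private static String txIdString(Sha256Hash hash) {
        Tuple2<Sha256Hash, String> cached = lastTxIdString;
        if (cached == null || !cached.first.equals(hash)) {
            lastTxIdString = cached = new Tuple2<>(hash, hash.toString());  // hex encode once
        }
        return cached.second;
    }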
Replace the "Optional.ofNullable(...)..." constructs with more direct
code using short-circuit operators, as this is shorter and a little
faster. Also use "trade.get[Deposit|Payout]TxId()" instead of the code
"trade.get[Deposit|Payout]TxId().getTxId()", as (upon inspection of the
code) there should never be a case where the deposit/payout transaction
field of a Trade object is set but the respective txID field is null (or
set to an inconsistent value).
Also remove a redundant 'RefundManager.getDisputesAsObservableList'
method call, which was also slowing things down slightly.
The minor speedups afforded by the above are important because the
method 'TransactionAwareTrade.isRelatedToTransaction' is called a
quadratic number of times and consequently a major bottleneck when
loading the Transactions view.
This helps to avoid the legacy BM getting the remainder in case there are capped shares.
It can still happen that a candidate exceeds the cap and becomes capped by the adjustment. We take that into account, and the legacy BM would get some share in that case.
Signed-off-by: HenrikJannsen <boilingfrog@gmx.com>
The left side is the amount to burn to reach the max allowed receiver share, based on the burned amount of all BMs.
The right side is the amount to burn to reach the max allowed receiver share, based on the boosted max burn target.
Increase ISSUANCE_BOOST_FACTOR from 3 to 4.
Add help overlay to burn target table header.
Signed-off-by: HenrikJannsen <boilingfrog@gmx.com>
It is not included in the BurningManCandidate map, as that would complicate things.
We only want to show it in the UI for informational purposes.
Signed-off-by: HenrikJannsen <boilingfrog@gmx.com>
Split up into 4 service classes
- BurningManService: Common stuff
- BurningManInfoService: For displaying BurningMan data
- BtcFeeReceiverService: For getting btcFeeReceivers
- DelayedPayoutTxReceiverService: For getting delayedPayoutTxReceivers. This is the critical part where we need deterministic data and which could break trade consensus.
WIP refactoring. More to come...
Signed-off-by: HenrikJannsen <boilingfrog@gmx.com>
Add getAverageDistributionPerCycle method to BurningManService.
Show receiver address when BM is selected.
Refactor code, cleanups, UI improvements.
Signed-off-by: HenrikJannsen <boilingfrog@gmx.com>
Add burningManSelectionHeight and tradeTxFee to Dispute.
Call validateDonationAddressMatchesAnyPastParamValues and validateDonationAddress
only if the legacy BM was used.
Signed-off-by: HenrikJannsen <boilingfrog@gmx.com>
We cannot add a new field as that would break DAO consensus.
Add optional text field for burningManReceiverAddress to CompensationProposal UI.
Signed-off-by: HenrikJannsen <boilingfrog@gmx.com>
Some commits were missed from the first implementation in #6431.
The user limit is applied to offer entry via AccountAgeWitnessService.
Offers are filtered according to limits in OfferFilterService.
If the limit changed, the cache in OfferFilterService must be cleared.
Previously the BSQ fee payment was determined by parsing a raw tx
without relying on the DAO. Unfortunately this turned out to be
problematic, so with this change the BSQ fee paid is obtained from
the DAO tx, as originally preferred by chimp1984.
Unconfirmed transactions will not be able to have their BSQ fees
checked, so early requests for validation will skip the fee check.
This is not a problem, since maker fee validation is done by the
taker and in the majority of cases the fee tx will already be
confirmed; taker fee validation is done after the first confirmation
at trade step 2.
More restrictive limits will still apply based on payment method.
It is intended to prevent new users who do not fully understand
the process of a Bisq trade from causing an arbitration case with
high amounts and therefore higher risks and costs for the DAO.
Do not pay out the security deposit of the trade peer to the
arbitration case winner.
Amounts are filled out based on which option the arbitrator chooses:
- If the BTC buyer is selected as case winner, they will get the
  trade amount + buyer security deposit.
- If the BTC seller is selected as case winner, they will get the
  trade amount + seller security deposit.
- If custom payout is selected, the arbitrator can specify custom
  amounts as they wish.
In case of a `||` the statement gets removed, as it is always true.
In getUserPaymentAccounts we further refactor the stream into a HashSet constructor operation, as the filter method gets removed.
Signed-off-by: HenrikJannsen <boilingfrog@gmx.com>