Reduce a hotspot sorting the trade statistics table, triggered by the
'sortedList.bind(comparatorProperty)' call upon completion of the
'fillList' future. Profiling shows that repeated invocation of the cell
value factory over the entries of the sorted column is a bottleneck, so
speed this up by caching the returned cell value (given by calling
'new ReadOnlyObjectWrapper<>(listItem)') as an instance field of
TradeStatistics3ListItem.
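A minimal sketch of that caching, assuming field and accessor names (the
real TradeStatistics3ListItem may differ): the wrapper is created lazily
once and then reused on each repeated invocation of the cell value factory.

    import javafx.beans.property.ReadOnlyObjectWrapper;
    import javafx.beans.value.ObservableValue;

    public class TradeStatistics3ListItem {
        // created lazily once, then reused by the cell value factory
        private ReadOnlyObjectWrapper<TradeStatistics3ListItem> cellValueWrapper;

        public ObservableValue<TradeStatistics3ListItem> asCellValue() {
            if (cellValueWrapper == null) {
                cellValueWrapper = new ReadOnlyObjectWrapper<>(this);
            }
            return cellValueWrapper;
        }
    }
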
As a further significant optimisation, stream the trade statistics in
reverse chronological order, when collecting into a list wrapped by
SortedList, as this matches the default display order, reducing the
number of comparisons done by SortedList's internal mergesort to O(n).
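A hedged sketch of this reordering (accessor and constructor here are
assumptions): build the backing list newest-first so that SortedList's
mergesort sees input already in the default display order.

    import java.util.NavigableSet;
    import java.util.stream.Collectors;
    import javafx.collections.FXCollections;
    import javafx.collections.ObservableList;

    // the statistics iterate oldest -> newest, so take the descending view to
    // get reverse chronological (default display) order before wrapping
    static ObservableList<TradeStatistics3ListItem> toDisplayOrder(
            NavigableSet<TradeStatistics3> tradeStatistics) {
        return FXCollections.observableList(
                tradeStatistics.descendingSet().stream()
                        .map(TradeStatistics3ListItem::new)   // constructor assumed
                        .collect(Collectors.toList()));
    }
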
Optimise (further) the ChartCalculations methods 'getItemsPerInterval' &
'getCandleData' by replacing HashSets in the former with sorted sets,
which avoids relatively expensive calls to 'TradeStatistics3.hashCode'
and needless subsequent re-sorting by date in 'getCandleData'. (Forming
the trade statistics into an ImmutableSortedSet, on the other hand, is
cheap since
they are already encountered in chronological order.)
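Roughly the shape of that change, assuming the ordering used is by trade
date (getter name assumed, not the exact ChartCalculations code):

    import com.google.common.collect.ImmutableSortedSet;
    import java.util.Collection;
    import java.util.Comparator;

    static ImmutableSortedSet<TradeStatistics3> toSortedSet(
            Collection<TradeStatistics3> tradeStatistics) {
        // no TradeStatistics3.hashCode calls, and getCandleData receives the
        // entries already ordered by date; building this is cheap because the
        // input is already encountered in chronological order
        return ImmutableSortedSet.copyOf(
                Comparator.comparing(TradeStatistics3::getDate), tradeStatistics);
    }
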
Further optimise the latter by using a primitive array sort of the trade
prices to calculate their median, instead of needlessly boxing them and
using 'Collections.sort'.
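A minimal sketch of the median over a primitive array (not the exact
getCandleData code):

    import java.util.Arrays;

    // sort a long[] in place instead of boxing into a List<Long> and calling
    // Collections.sort, then take the middle element(s)
    static long medianPrice(long[] prices) {
        Arrays.sort(prices);
        int n = prices.length;
        return n % 2 == 0
                ? (prices[n / 2 - 1] + prices[n / 2]) / 2
                : prices[n / 2];
    }
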
Avoid calculating average prices for ticks that won't ever be part of a
visible chart candle, as only the last 90 ticks can fit on the chart. To
this end, stream the trade statistics in reverse chronological order
(which requires passing them as a NavigableSet), so that once more than
MAX_TICKS ticks have been encountered for a given tick unit, the
relevant map (and all lower granularity maps) can stop being filled up.
Also add a 'PriceAccumulator' static class to save time and memory when
filling up the intermediate maps, by avoiding the addition of each trade
statistics object to (multiple) temporary lists prior to average price
calculation.
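A minimal sketch of the PriceAccumulator idea; the real class may well
accumulate different quantities (e.g. amounts and volumes for a weighted
average), but the point is the same: sum on the fly rather than keep every
trade statistics object in per-tick temporary lists.

    static class PriceAccumulator {
        private long sum;
        private int count;

        void add(long price) {
            sum += price;
            count++;
        }

        long getAverage() {
            return count == 0 ? 0 : sum / count;
        }
    }
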
Now that the trade statistics are encountered in chronological order,
speed up 'roundToTick(LocalDateTime, TickUnit)' by caching the last
calculated LocalDateTime -> Date mapping from the tick start (with one
cache entry per tick unit), as multiple successive trades will tend to
have the same tick start.
This avoids a relatively expensive '.atZone(..).toInstant()' call, which
was slowing down 'ChartCalculations.getUsdAveragePriceMapsPerTickUnit',
as it uses 'roundToTick' in a tight loop (#trades * #tick-units calls).
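A sketch of the cache described above (field layout and the truncation
helper are assumptions, not the actual ChartCalculations code):

    import java.time.LocalDateTime;
    import java.time.ZoneId;
    import java.util.Date;

    // one cache slot per tick unit; successive trades usually share a tick start
    private static final LocalDateTime[] lastTickStarts = new LocalDateTime[TickUnit.values().length];
    private static final Date[] lastRoundedDates = new Date[TickUnit.values().length];

    static Date roundToTick(LocalDateTime localDate, TickUnit tickUnit) {
        LocalDateTime tickStart = truncateToTickStart(localDate, tickUnit);  // existing rounding logic (assumed helper)
        int i = tickUnit.ordinal();
        if (!tickStart.equals(lastTickStarts[i])) {
            lastTickStarts[i] = tickStart;
            // the expensive conversion now runs only once per distinct tick start
            lastRoundedDates[i] = Date.from(tickStart.atZone(ZoneId.systemDefault()).toInstant());
        }
        return lastRoundedDates[i];
    }
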
Also unqualify 'TradesChartsViewModel.TickUnit' references for brevity.
Cache the enum arrays 'TickUnit.values()' & 'PaymentMethodWrapper.values()',
as 'values()' returns a fresh defensive copy of the backing array on every
call, and both are used in tight loops. In particular, profiling
suggests this will make 'TradeStatistics3.isValid' about twice as fast.
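For example (constant names are illustrative):

    // values() clones the backing array on every call, so read it once into a
    // constant and index that in the hot loops instead
    private static final PaymentMethodWrapper[] PAYMENT_METHOD_WRAPPERS = PaymentMethodWrapper.values();
    private static final TickUnit[] TICK_UNITS = TickUnit.values();
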
The test was erroneously passing a candle tick start time (as a long) to
'ChartCalculations.getCandleData', which expects a tick index from 0 to
MAX_TICKS + 1 (91) inclusive. Since this is out of range, the method
skipped an 'itemsPerInterval' map lookup which would have thrown an NPE
prior to the last commit. Fix the test by making 'itemsPerInterval'
nonempty and passing 0 as the tick index. Also check the now correctly
populated 'date' field in the returned candle data.
Additionally, tidy up the class a little and avoid an unnecessary temp
directory creation.
Avoid scanning all the ticks backwards from 90 to 1 repeatedly, to find
the one with the correct date interval for each item in the
'tradeStatisticsByCurrency' list. Instead, for each item, remember the
last found tick index and move forwards if necessary, then scan
backwards from that point to find the correct tick. As the trade
statistics are now in chronological order, this is much faster (though
it will still work correctly regardless of the order of the list items).
Also, hold 'itemsPerInterval' as a 'List<Pair<..>>' instead of a
'Map<Long, Pair<..>>', since the keys are just tick indices from 0 to 91
inclusive, making it cleaner and more efficient to use an array than a
hash table.
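A hedged sketch of that lookup (getter and structure names are
assumptions): keep the last matched tick index between items, step forwards
while the next tick already starts on or before the item's date, then back
up if needed to the enclosing tick.

    import java.util.Date;
    import java.util.List;
    import java.util.Set;
    import javafx.util.Pair;

    // itemsPerInterval.get(i) holds (tick start date, trades in that tick)
    static void fillIntervals(List<TradeStatistics3> tradeStatisticsByCurrency,
                              List<Pair<Date, Set<TradeStatistics3>>> itemsPerInterval) {
        int tickIndex = 0;  // remembered across items instead of rescanning from 90
        for (TradeStatistics3 item : tradeStatisticsByCurrency) {
            Date date = item.getDate();  // getter assumed
            // items arrive in chronological order, so this is usually a short forward step
            while (tickIndex + 1 < itemsPerInterval.size()
                    && !itemsPerInterval.get(tickIndex + 1).getKey().after(date)) {
                tickIndex++;
            }
            // still correct for out-of-order items: back up to the enclosing tick
            while (tickIndex > 0 && itemsPerInterval.get(tickIndex).getKey().after(date)) {
                tickIndex--;
            }
            itemsPerInterval.get(tickIndex).getValue().add(item);
        }
    }
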
1) Change statement lambdas to expression lambdas;
2) Replace 'Map.putIfAbsent' then 'Map.get' with 'Map.computeIfAbsent'
   (sketched below);
3) Add missing @VisibleForTesting annotation or make private.
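Item 2 in a generic form (hypothetical map and helper, not the actual call
sites): a single 'computeIfAbsent' replaces the 'putIfAbsent' + 'get' pair
and avoids the second lookup.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Map;

    static <K, V> void addToBucket(Map<K, List<V>> buckets, K key, V value) {
        // before: buckets.putIfAbsent(key, new ArrayList<>()); buckets.get(key).add(value);
        buckets.computeIfAbsent(key, k -> new ArrayList<>()).add(value);
    }
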
Make TradeStatistics3 implement the previously added ComparableExt
interface and make TradeStatisticsManager hold them as a TreeSet instead
of a HashSet, to support fast retrieval of statistics in any given date
range. (Even though red-black trees are generally slower than hash
tables, this should not matter here since the set is only being iterated
over and infrequently appended, and does not benefit from O(1) lookups/
additions/removals.)
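A generic illustration of the property being relied upon (plain timestamps
stand in for TradeStatistics3; this is not Bisq code): a tree-backed
NavigableSet can hand back a whole date range as a view, which a HashSet
cannot do without a full scan.

    import java.util.NavigableSet;
    import java.util.TreeSet;

    public class DateRangeDemo {
        public static void main(String[] args) {
            NavigableSet<Long> tradeDates = new TreeSet<>();
            tradeDates.add(1_600_000_000_000L);
            tradeDates.add(1_650_000_000_000L);
            tradeDates.add(1_700_000_000_000L);

            // all trades in [from, to) without touching the rest of the set
            NavigableSet<Long> inRange =
                    tradeDates.subSet(1_600_000_000_000L, true, 1_700_000_000_000L, false);
            System.out.println(inRange);  // [1600000000000, 1650000000000]
        }
    }
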
Add a 'TradeStatisticsManager.getNavigableTradeStatisticsSet' accessor,
which returns the backing TreeSet of the current ObservableSet field, so
that callers can access its NavigableSet interface where needed (as
there is no ObservableSortedSet or similar in JavaFX). Use this to
optimise 'AveragePriceUtil.getAveragePriceTuple',
'DisputeAgentSelection.getLeastUsedDisputeAgent' and
'MutableOfferDataModel.getSuggestedSecurityDeposit', to obtain a narrow
date range of trade statistics without streaming over the entire set.
Additionally optimise & simplify the price collation in
'TradeStatisticsManager.onAllServicesInitialised', by exploiting the
fact that the statistics are now sorted in order of date (which is the
presently defined natural order).
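A hedged sketch of the simplified collation (getter names assumed): because
iteration is already in date order, the most recent price per currency is
simply the last entry seen for that currency.

    import java.util.HashMap;
    import java.util.Map;
    import java.util.NavigableSet;

    static Map<String, TradeStatistics3> latestByCurrency(NavigableSet<TradeStatistics3> statistics) {
        Map<String, TradeStatistics3> latest = new HashMap<>();
        for (TradeStatistics3 stat : statistics) {   // oldest -> newest
            latest.put(stat.getCurrency(), stat);    // later entries overwrite earlier ones
        }
        return latest;
    }
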
Change the flow of cloning an offer:
We open the clone offer tab similarly to the duplicate/edit offer tab. When
clicking the clone button we create and publish the cloned offer. If the
clone has not changed the payment method/currency, we show a popup and
deactivate the offer.
At editOffer we check if the offer is using a shared maker fee, and if so,
whether the edit resulted in the same payment method/currency. If so, we
show a popup and deactivate the offer.
Signed-off-by: HenrikJannsen <boilingfrog@gmx.com>
Make sure that none of the key extractor functions passed to
'Comparator.comparing(fn)' can return null, as this results in an NPE
when the corresponding column is sorted in the UI, but has blank entries
(such as the BTC received for a BSQ burn in the balance entries table).
(Make blanks appear smallest in magnitude using 'Comparator.nullsFirst'
or by defaulting to 0 instead of null, since the entries are initially
sorted biggest to smallest, pushing them to the bottom of the table.)
Also change the default sort type of the burned BSQ column, which should
be ASCENDING since the entries are negative.
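A minimal sketch of the null-safe comparator (item and getter names are
hypothetical): wrap the possibly-null key so blank entries compare as the
smallest values instead of triggering an NPE.

    import java.util.Comparator;

    // receivedBtc may be null for a BSQ burn entry; nullsFirst makes blanks
    // sort as smallest rather than throwing during comparison
    static final Comparator<BalanceEntryItem> BY_RECEIVED_BTC = Comparator.comparing(
            BalanceEntryItem::getReceivedBtc,
            Comparator.nullsFirst(Comparator.naturalOrder()));
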
We append a suffix to the header title instead of showing a busy animation, as the busy animation is quite CPU-intensive.
Signed-off-by: HenrikJannsen <boilingfrog@gmx.com>
There is a small '+' icon at the top right of the table which allows showing the hidden columns.
Unfortunately this does not come with column names, so it's a bit ugly.
Here is a way to adjust the context menu: https://gist.github.com/Roland09/d92829cdf5e5fee6fee9
Maybe some dev is motivated to improve that.
We do not add a column to the overview table because calculating the balance entries is expensive and doing it over all burningmen would take too long. There is headroom for performance improvements in that area...
Signed-off-by: HenrikJannsen <boilingfrog@gmx.com>
Add private method 'WalletInfoView.addAccountPaths', similar to the
method 'addXpubKeys', to iterate over the active wallet keychains,
formatting & displaying the derivation paths, instead of using the 4
constants defined in BisqKeyChainGroupStructure. Also simplify the code
slightly by updating the 'gridRow' field directly instead of passing it
as a method argument.
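A hedged sketch of the 'addAccountPaths' idea (the UI helper and field
names are assumptions, and Bisq wraps its wallets differently): iterate the
active keychains, format each derivation path and bump the 'gridRow' field
directly.

    import org.bitcoinj.crypto.HDUtils;
    import org.bitcoinj.wallet.DeterministicKeyChain;
    import org.bitcoinj.wallet.Wallet;

    private void addAccountPaths(Wallet wallet) {
        for (DeterministicKeyChain chain : wallet.getActiveKeyChains()) {
            String path = HDUtils.formatPath(chain.getAccountPath());  // e.g. "M/44H/142H/1H"
            addLabelledRow(++gridRow, "Account path", path);           // hypothetical UI helper
        }
    }
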
Add the new account path "44'/142'/1'" for segwit BSQ to the wallet info
view, which was missed from PR #5109 making the wallet & UI changes to
implement segwit BSQ. Also format the paths from the constants defined
in 'BisqKeyChainGroupStructure', instead of using string literals, so
that they are only defined in one place. (Though it is extremely
unlikely the paths would ever change.)