Avoid calculating average prices for ticks that won't ever be part of a
visible chart candle, as only the last 90 ticks can fit on the chart. To
this end, stream the trade statistics in reverse chronological order
(which requires passing them as a NavigableSet), so that once more than
MAX_TICKS ticks have been encountered for a given tick unit, the
relevant map (and all lower-granularity maps) can stop being filled up.
Also add a 'PriceAccumulator' static class to save time and memory when
filling up the intermediate maps, by avoiding the addition of each trade
statistics object to (multiple) temporary lists prior to average price
calculation.
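For illustration, a minimal sketch of such an accumulator (class shape
and accessor names assumed, not necessarily the exact ones added): it
keeps only a running sum and count per tick, so no temporary list of
trade statistics needs to be built before taking the average:

    static class PriceAccumulator {
        private long sum;   // running total of the (long-encoded) trade prices
        private int count;

        void add(TradeStatistics3 tradeStatistics) {
            sum += tradeStatistics.getPrice();  // accessor name assumed
            count++;
        }

        long getAveragePrice() {
            return count > 0 ? sum / count : 0;
        }
    }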
Now that the trade statistics are encountered in chronological order,
speed up 'roundToTick(LocalDateTime, TickUnit)' by caching the last
calculated tick-start LocalDateTime -> Date mapping (with one cache
entry per tick unit), as successive trades will tend to share the same
tick start.
This avoids a relatively expensive '.atZone(..).toInstant()' call, which
was slowing down 'ChartCalculations.getUsdAveragePriceMapsPerTickUnit',
as it uses 'roundToTick' in a tight loop (#trades * #tick-units calls).
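A sketch of the cache, using javafx.util.Pair (field and helper names
assumed; 'toTickStart' stands in for the existing truncation logic, and
the map is assumed to be touched from a single thread):

    private static final Map<TickUnit, Pair<LocalDateTime, Date>> tickCache =
            new EnumMap<>(TickUnit.class);

    static Date roundToTick(LocalDateTime localDate, TickUnit tickUnit) {
        LocalDateTime tickStart = toTickStart(localDate, tickUnit);
        Pair<LocalDateTime, Date> cached = tickCache.get(tickUnit);
        if (cached != null && tickStart.equals(cached.getKey())) {
            return cached.getValue();  // hit: skip the zone conversion below
        }
        Date date = Date.from(tickStart.atZone(ZoneId.systemDefault()).toInstant());
        tickCache.put(tickUnit, new Pair<>(tickStart, date));
        return date;
    }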
Also unqualify 'TradesChartsViewModel.TickUnit' references for brevity.
Cache the enum arrays 'TickUnit.values()' & 'PaymentMethodWrapper.values()',
since 'values()' returns a fresh defensive copy of the backing array on
every call and both are used in tight loops. In particular, profiling
suggests this will make 'TradeStatistics3.isValid' about twice as fast.
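For example (cached-field name assumed):

    // values() clones its backing array on every call, so take the copy once:
    private static final TickUnit[] TICK_UNITS = TickUnit.values();

    for (TickUnit tickUnit : TICK_UNITS) {
        // ... per-tick-unit work, now without a fresh array allocation ...
    }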
The test was erroneously passing a candle tick start time (as a long) to
'ChartCalculations.getCandleData', which expects a tick index from 0 to
MAX_TICKS + 1 (91) inclusive. Since this is out of range, the method
skipped an 'itemsPerInterval' map lookup which would have thrown an NPE
prior to the last commit. Fix the test by making 'itemsPerInterval'
nonempty and passing 0 as the tick index. Also check the now correctly
populated 'date' field in the returned candle data.
Additionally, tidy up the class a little and avoid creating an
unnecessary temp directory.
Avoid scanning all the ticks backwards from 90 to 1 repeatedly, to find
the one with the correct date interval for each item in the
'tradeStatisticsByCurrency' list. Instead, for each item, remember the
last found tick index and move forwards if necessary, then scan
backwards from that point to find the correct tick. As the trade
statistics are now in chronological order, this is much faster (though
it will still work correctly regardless of the order of the list items).
Also, hold 'itemsPerInterval' as a 'List<Pair<..>>' instead of a
'Map<Long, Pair<..>>', since the keys are just tick indices from 0 to 91
inclusive, so it is cleaner and more efficient to use an array-backed
list than a hash table.
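A rough sketch of the moving-cursor scan (the pair contents - tick
start date and the items collected so far - and the accessor names are
assumed): 'tick' persists across loop iterations, so chronological
input makes each lookup nearly O(1), while out-of-order items are
still handled by the backward scan:

    int tick = 0;  // remembered across items
    for (TradeStatistics3 item : tradeStatisticsByCurrency) {
        Date date = item.getDate();  // accessor name assumed
        // Move forwards while the next tick starts on or before this item...
        while (tick + 1 < itemsPerInterval.size()
                && !date.before(itemsPerInterval.get(tick + 1).getKey())) {
            tick++;
        }
        // ...then scan backwards to the tick whose interval contains it.
        // (Items dated before the first tick would need filtering, as before.)
        while (tick > 0 && date.before(itemsPerInterval.get(tick).getKey())) {
            tick--;
        }
        itemsPerInterval.get(tick).getValue().add(item);
    }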
1) Change statement lambdas to expression lambdas;
2) Replace 'Map.putIfAbsent' then 'Map.get' with 'Map.computeIfAbsent' (example below);
3) Add missing @VisibleForTesting annotation or make private.
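An example of change 2), with a hypothetical map of lists:

    // Before: two hash lookups per insertion.
    //   map.putIfAbsent(key, new ArrayList<>());
    //   map.get(key).add(item);
    // After: one lookup, and the list is only created when absent.
    map.computeIfAbsent(key, k -> new ArrayList<>()).add(item);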
Now that the trade statistics are retrieved as a sorted set, it can be
assumed that the USD & BSQ trade lists passed to 'getUSDAverage' are
already sorted. Use this to avoid repeatedly scanning the USD trade list
for the first trade dated after each given BSQ trade, by moving two
cursors across the respective lists in a single simultaneous pass.
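A sketch of the single-pass merge (list and accessor names assumed);
both lists are sorted by date, so the shared cursor never backs up,
giving an O(m + n) pass overall:

    int usdIndex = 0;
    for (TradeStatistics3 bsqTrade : bsqTrades) {
        // Advance to the first USD trade dated strictly after this BSQ trade.
        while (usdIndex < usdTrades.size()
                && !usdTrades.get(usdIndex).getDate().after(bsqTrade.getDate())) {
            usdIndex++;
        }
        if (usdIndex < usdTrades.size()) {
            TradeStatistics3 usdTrade = usdTrades.get(usdIndex);
            // ... use usdTrade's price to convert bsqTrade, as before ...
        }
    }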
Make TradeStatistics3 implement the previously added ComparableExt
interface and make TradeStatisticsManager hold them as a TreeSet instead
of a HashSet, to support fast retrieval of statistics in any given date
range. (Even though red-black trees are generally slower than hash
tables, this should not matter here since the set is only being iterated
over and infrequently appended, and does not benefit from O(1) lookups/
additions/removals.)
Add a 'TradeStatisticsManager.getNavigableTradeStatisticsSet' accessor,
which returns the backing TreeSet of the current ObservableSet field, so
that callers can access its NavigableSet interface where needed (as
there is no ObservableSortedSet or similar in JavaFX). Use this to
optimise 'AveragePriceUtil.getAveragePriceTuple',
'DisputeAgentSelection.getLeastUsedDisputeAgent' and
'MutableOfferDataModel.getSuggestedSecurityDeposit', to obtain a narrow
date range of trade statistics without streaming over the entire set.
Additionally optimise & simplify the price collation in
'TradeStatisticsManager.onAllServicesInitialised', by exploiting the
fact that the statistics are now sorted in order of date (which is the
presently defined natural order).
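A sketch of the accessor (field names assumed); callers can then take
date-bounded subsets - e.g. via the RangeUtils machinery described
below - instead of streaming over the whole set:

    public NavigableSet<TradeStatistics3> getNavigableTradeStatisticsSet() {
        // the TreeSet wrapped by the observableTradeStatisticsSet field
        return navigableTradeStatisticsSet;
    }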
Provide a 'RangeUtils' class for computing subsets of a navigable set
with natural element order, with each bound defined by a mathematical
filter (that is, a predicate returning true if an element is 'big' and
false if it is 'small'), instead of a specific element. This allows
the subset of all elements which map into a given range to be computed,
provided the mapping function is (strictly or non-strictly) increasing.
Provide a fluent interface for this in RangeUtils (with unit tests).
To support this, provide a Comparable sub-interface, 'ComparableExt', of
elements which may be compared with marks/delimiters instead of just
elements of the same type, to work around the limitation that sorted (&
navigable) Sets/Maps in Java do not support general binary searching
with a filter (predicate) on the keys instead of just a specific key.
This will make it possible to efficiently take subsets of objects within
a given date range, say, without having to scan the entire set, provided
it is sorted (w.r.t. a suitable natural order).
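A conceptual sketch only, with illustrative names (the real interface
and fluent API may well be shaped differently):

    public interface ComparableExt<T> extends Comparable<T> {
        // < 0 if this element is 'small' w.r.t. the mark, > 0 if it is 'big'
        int compareToMark(Mark mark);

        interface Mark {
        }
    }

    // Hypothetical fluent usage: the subset of statistics whose date lies
    // in [from, to), located by binary search rather than a full scan.
    NavigableSet<TradeStatistics3> range = RangeUtils.subSet(allStats)
            .withKey(TradeStatistics3::getDate)
            .overRange(from, to);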
Use a DoubleStream when streaming over 'List<Double>' method arguments
in InlierUtil, as well as a primitive array sort in 'InlierUtil.trim'
(followed by taking a sublist view), instead of calling 'Stream.sorted'.
To this end, use Guava 'Doubles.asList' to pass lists of Doubles to/from
the InlierUtil methods without incurring any boxing or unboxing costs,
since their spliterators can be simply downcast to Spliterator.OfDouble
(opportunistically), instead of needing to use 'mapToDouble' to unbox.
This was a minor hotspot when called from AveragePriceUtil (used by the
burning man and BSQ dashboard views).
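A sketch of the opportunistic downcast and the primitive sort (helper
and variable names assumed; Guava's 'Doubles.asList' view is backed by
a double[] and its spliterator is a Spliterator.OfDouble):

    static DoubleStream doubleStream(List<Double> list) {
        Spliterator<Double> spliterator = list.spliterator();
        return spliterator instanceof Spliterator.OfDouble
                ? StreamSupport.doubleStream((Spliterator.OfDouble) spliterator, false)
                : list.stream().mapToDouble(Double::doubleValue);  // boxed fallback
    }

    // In 'trim': sort a primitive copy, then take an unboxed sublist view.
    double[] values = Doubles.toArray(yValues);
    Arrays.sort(values);
    List<Double> trimmed = Doubles.asList(values).subList(fromIndex, toIndex);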
Use nonstrict bounds when filtering outliers from the provided trade
statistics list, since otherwise it will always remove the outermost two
inliers from the list. This is because the dependent call to
'InlierUtil.findInlierRange' returns the min & max inlier values.
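That is, the comparisons should be inclusive (a sketch, with names
assumed):

    static DoubleStream filterInliers(DoubleStream prices, double minInlier, double maxInlier) {
        // >= and <= rather than > and <, since the bounds are themselves inliers
        return prices.filter(p -> p >= minInlier && p <= maxInlier);
    }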
Use a Map to speed up 'PaymentMethod.getPaymentMethod', called from
'isValid', instead of scanning the payment method list every invocation.
Make the list immutable to ensure the map never goes stale, which is OK
since no code modifies it outside PaymentMethod's static initialisation.
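A sketch (map and fallback names assumed):

    private static final Map<String, PaymentMethod> PAYMENT_METHOD_BY_ID =
            paymentMethods.stream().collect(
                    Collectors.toMap(PaymentMethod::getId, m -> m));

    public static PaymentMethod getPaymentMethod(String id) {
        PaymentMethod paymentMethod = PAYMENT_METHOD_BY_ID.get(id);
        // on a miss, fall back to whatever the old linear scan did
        return paymentMethod != null ? paymentMethod : getDummyPaymentMethod(id);
    }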
Also speed up the global accessor 'TradeLimits.getMaxTradeLimit', by
caching the DAO param value in a volatile field, cleared upon each new
block arrival.
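A sketch of the cache (field and DAO accessor names assumed): a
volatile field avoids the DAO state lookup on every call, and clearing
it on each new block keeps it fresh without any locking:

    private volatile Coin cachedMaxTradeLimit;

    public Coin getMaxTradeLimit() {
        Coin limit = cachedMaxTradeLimit;
        if (limit == null) {
            cachedMaxTradeLimit = limit = daoStateService.getParamValueAsCoin(
                    Param.MAX_TRADE_LIMIT, daoStateService.getChainHeight());
        }
        return limit;
    }

    // on each new block arrival:
    cachedMaxTradeLimit = null;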
Furthermore, speed up 'TradeLimits.getFirstMonthRiskBasedTradeLimit' by
simplifying the rounding logic to avoid a pass through the (rather slow)
BigDecimal type.
Short-circuit the exception control flow used in the method
'TradeStatistics3.getPaymentMethodId', which occurs whenever the payment
method code is stored directly in the 'paymentMethod' field instead of
first being converted into a numeric string. This occurs if the method
is unrecognised, i.e. not listed in the 'PaymentMethodWrapper' enum.
This fixes an unnecessary slowdown of 'TradeStatistics3.isValid', which
calls the above, when scanning the list of BSQ trade statistics in
AveragePriceUtil and elsewhere, due to the fact that the 'BSQ_SWAP'
payment method has been missed out of the above enum.
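A sketch of the kind of cheap pre-check that avoids the thrown-and-
caught NumberFormatException (the length bound is assumed):

    private static boolean looksLikeNumericId(String s) {
        if (s.isEmpty() || s.length() > 3) {
            return false;  // ids are short non-negative integers, e.g. "12"
        }
        for (int i = 0; i < s.length(); i++) {
            if (!Character.isDigit(s.charAt(i))) {
                return false;  // e.g. "BSQ_SWAP" stored directly in the field
            }
        }
        return true;
    }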
Also add a (presently disabled) unit test to prevent any future payment
methods from being missed out of the enum. (Should fix the missing BSQ
swaps issue in a separate PR to make sure that the seed nodes recognise
the new payment method code before anyone else.)
Make sure that none of the key extractor functions passed to
'Comparator.comparing(fn)' can return null, as this results in an NPE
when the corresponding column is sorted in the UI, but has blank entries
(such as the BTC received for a BSQ burn in the balance entries table).
(Make blanks appear smallest in magnitude using 'Comparator.nullsFirst'
or by defaulting to 0 instead of null, since the entries are initially
sorted biggest to smallest, pushing them to the bottom of the table.)
Also change the default sort type of the burned BSQ column, which should
be ASCENDING since the entries are negative.
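A sketch of both fixes (column and getter names assumed):

    column.setComparator(Comparator.comparing(
            BalanceEntryItem::getReceivedBtc,  // may be null for blank cells
            Comparator.nullsFirst(Comparator.naturalOrder())));

    // Burned amounts are negative, so ascending order puts the biggest first.
    burnedBsqColumn.setSortType(TableColumn.SortType.ASCENDING);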
We use a suffix on the header title rather than a busy animation, as
the busy animation is quite CPU-intensive.
Signed-off-by: HenrikJannsen <boilingfrog@gmx.com>