The getter was called by EqualsAndHashCode, which throws an exception as it is not intended to be used anymore.
Signed-off-by: HenrikJannsen <boilingfrog@gmx.com>
Do not create a new observableArrayList in filterPaymentAccounts.
The reason why the wrong account gets selected is not completely clear to me. The selection handler gets called when the combobox gets filled, and that overwrites the selected account from the data. It seems that the new observableArrayList in filterPaymentAccounts triggered that unexpected behaviour.
Implement new selection algorithm.
Add methods for accessing the receiverAddress (not used yet, but will be used in subsequent commits)
Signed-off-by: HenrikJannsen <boilingfrog@gmx.com>
Rename isHotfixActivated to isBugfix6699Activated and
wasHotfixActivatedAtTradeDate to wasBugfix6699ActivatedAtTradeDate
Signed-off-by: HenrikJannsen <boilingfrog@gmx.com>
Short-circuit the BigInteger arithmetic in 'AltcoinExchangeRate' &
'org.bitcoinj.utils.ExchangeRate' (from which the former is adapted), by
using ordinary long arithmetic when it is guaranteed not to overflow due
to the two quantities to be multiplied fitting in an int. This will be
the case most of the time. Also remove duplicated logic, to ensure that
all conversions of BTC amounts to volumes happen via the 'Price'
instance methods, so that the optimisation always applies.
In particular, this speeds up the BTC -> BSQ conversions in the burning
man view, as well as the USD price calculations for the candles in the
trades charts view via 'TradeStatistics3.getTradeVolume()'.
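A minimal sketch of the short-circuit idea (the helper name and exact division are illustrative, not the actual AltcoinExchangeRate code):

    import java.math.BigInteger;

    class ShortCircuitSketch {
        // Use plain long arithmetic when both multiplicands fit in an int, so the
        // product cannot overflow a long; fall back to BigInteger otherwise.
        static long multiplyThenDivide(long priceValue, long amountValue, long divisor) {
            if (priceValue == (int) priceValue && amountValue == (int) amountValue) {
                return priceValue * amountValue / divisor;
            }
            return BigInteger.valueOf(priceValue)
                    .multiply(BigInteger.valueOf(amountValue))
                    .divide(BigInteger.valueOf(divisor))
                    .longValueExact();
        }
    }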
Additionally, fix the lazy initialisation pattern in TradeStatistics3 to
ensure that it is thread safe (that is, it only has benign data races),
by making it of the form:
    Foo foo = this.foo;
    if (foo == null) {
        this.foo = foo = computeFoo();
    }
    return foo;
This avoids the problem that 'foo' is a nonvolatile field and can
therefore be seen to alternate any number of times between null and
nonnull from the PoV of the thread initialising it (at least when the
initialisation is racy).
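For contrast, a sketch of the broken form this replaces (Foo/computeFoo are placeholders, as above):

    // Reads the nonvolatile field twice; without synchronisation the second
    // read is not guaranteed to see the value observed (or written) by the
    // first, so null can leak out even though the check passed.
    if (this.foo == null) {
        this.foo = computeFoo();
    }
    return this.foo;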
Add an 'averagePricesValid' boolean field to avoid needless refilling of
the cached BSQ prices map when calling 'getAverageBsqPriceByMonth()'.
(Also skip a redundant filling of the map with non-historical data upon
startup of the service.) Since the prices are calculated from the
(observable) set of all trade statistics, add a listener to the set to
invalidate the cache whenever it changes.
This significantly speeds up the burning man view, since the getter is
called several times when activating it.
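A simplified, self-contained sketch of the validity-flag approach (types and the listener wiring are illustrative; the real cache lives in the service holding the trade statistics):

    import java.util.HashMap;
    import java.util.Map;

    class AverageBsqPriceCacheSketch {
        private final Map<String, Long> averageBsqPriceByMonth = new HashMap<>();
        private boolean averagePricesValid;

        // Called from a listener registered on the observable trade statistics set,
        // so any change to the underlying data invalidates the cache.
        void invalidate() {
            averagePricesValid = false;
        }

        Map<String, Long> getAverageBsqPriceByMonth() {
            if (!averagePricesValid) {
                averageBsqPriceByMonth.clear();
                averageBsqPriceByMonth.putAll(computeAverageBsqPriceByMonth());
                averagePricesValid = true;
            }
            return averageBsqPriceByMonth;
        }

        private Map<String, Long> computeAverageBsqPriceByMonth() {
            return Map.of(); // placeholder for the real aggregation over all trade statistics
        }
    }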
Factor out duplicated logic in the 'Stream.map' lambdas to compute the
BSQ value of the BTC of each streamed ReceivedBtcBalanceEntry, returned
as an 'Optional<Long>'. Also simplify the logic slightly and return an
OptionalLong instead for greater efficiency.
(Also replace a statement lambda with an expression lambda.)
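In sketch form, with simplified stand-in types (the real helper operates on ReceivedBtcBalanceEntry and the monthly BSQ price data):

    import java.util.OptionalLong;
    import java.util.stream.Stream;

    class BsqValueSketch {
        record BalanceEntry(long btcSats, long bsqPerBtc, boolean hasPrice) {}

        // Single helper computing the BSQ value of an entry's BTC amount,
        // empty when no price is available.
        static OptionalLong bsqValue(BalanceEntry entry) {
            return entry.hasPrice()
                    ? OptionalLong.of(entry.btcSats() * entry.bsqPerBtc() / 100_000_000L)
                    : OptionalLong.empty();
        }

        // The stream pipelines then reuse it instead of duplicating the
        // conversion logic in each 'Stream.map' lambda.
        static long totalBsq(Stream<BalanceEntry> entries) {
            return entries.map(BsqValueSketch::bsqValue)
                    .filter(OptionalLong::isPresent)
                    .mapToLong(OptionalLong::getAsLong)
                    .sum();
        }
    }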
Optimise 'BurningManPresentationService.getCandidateBurnTarget' to avoid
the repeated computation of the total accumulated decayed burned amount
for every listed burning man. To this end, cache the total in a nullable
Long field, along with the method 'getAccumulatedDecayedBurnedAmount()'
to lazily initialise it. (This eliminates a minor hotspot in the burning
man view revealed by JProfiler.)
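In sketch form (the field and getter are named in this commit; the computation body is a stand-in for the real summation):

    // Nullable cache field plus lazy getter, so the total is computed once and
    // reused for every candidate instead of being recomputed per burning man.
    private Long accumulatedDecayedBurnedAmount;

    private long getAccumulatedDecayedBurnedAmount() {
        Long total = accumulatedDecayedBurnedAmount;
        if (total == null) {
            accumulatedDecayedBurnedAmount = total = computeAccumulatedDecayedBurnedAmount();
        }
        return total;
    }

    private long computeAccumulatedDecayedBurnedAmount() {
        return 0L; // placeholder for summing the decayed burn amounts of all candidates
    }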
Use the previously added 'ChartDataModel.toCachedTimeIntervalFn' to
additionally speed up some of the charts in the BSQ supply view, in
particular the trade fees & total burned BSQ, via the DaoChartDataModel
methods 'getBsqTradeFeeByInterval' & 'getTotalBurnedByInterval'. (The
other changes in the BSQ supply, such as proofs of burn or issuance, are
too infrequent to benefit from the LocalDate caching.)
For this to work, the filtered BSQ txs must be streamed in chronological
order, so provide local methods 'get[Burnt|Trade]FeeTxStream()', to use
in place of the DaoStateService methods 'get[Burnt|Trade]FeeTxs()',
which return unordered HashSets.
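A rough sketch of one such local method (the sort key via Tx::getTime is an assumption; the real code may order by block height or tx time):

    import java.util.Comparator;
    import java.util.stream.Stream;

    // Stream the filtered txs in chronological order instead of handing the
    // unordered HashSet from DaoStateService straight to the chart code.
    private Stream<Tx> getBurntFeeTxStream() {
        return daoStateService.getBurntFeeTxs().stream()
                .sorted(Comparator.comparingLong(Tx::getTime));
    }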
Now that the trade statistics are retrieved in chronological order,
optimise the per-interval BSQ & USD price and volume calculations in
PriceChartDataModel & VolumeChartDataModel, by adding caches to avoid
relatively expensive timezone calculations in TemporalAdjusterModel,
similarly to the cache added for 'ChartCalculations.roundToTick' (as
profiling shows 'TemporalAdjusterModel.toTimeInteval' is a hotspot).
Add a cache to speed up Instant -> LocalDate mappings by storing the
unix time (Instant) range of the last seen day (LocalDate) in a tuple,
then just returning that day if the next Instant falls in range. Also
add a cache of the last temporal adjustment (start of month, week, etc.)
of that day. In this way, successive calls to 'toTimeInteval(Instant)'
with input times on the same day are sped up.
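A simplified, self-contained sketch of that cache (field names and the epoch-second return value are assumptions, not the actual TemporalAdjusterModel fields):

    import java.time.Instant;
    import java.time.LocalDate;
    import java.time.ZoneId;
    import java.time.temporal.TemporalAdjuster;

    class DayIntervalCacheSketch {
        private final ZoneId zone = ZoneId.systemDefault();
        private final TemporalAdjuster adjuster;   // e.g. firstDayOfMonth(), for monthly candles
        private long cachedDayStartMs = Long.MAX_VALUE;
        private long cachedDayEndMs = Long.MIN_VALUE;
        private long cachedAdjustedTime;

        DayIntervalCacheSketch(TemporalAdjuster adjuster) {
            this.adjuster = adjuster;
        }

        // Successive instants falling on the last seen day reuse both the cached
        // day range and its cached temporal adjustment; anything else recomputes.
        long toTimeInterval(Instant instant) {
            long ms = instant.toEpochMilli();
            if (ms < cachedDayStartMs || ms >= cachedDayEndMs) {
                LocalDate day = instant.atZone(zone).toLocalDate();
                cachedDayStartMs = day.atStartOfDay(zone).toInstant().toEpochMilli();
                cachedDayEndMs = day.plusDays(1).atStartOfDay(zone).toInstant().toEpochMilli();
                cachedAdjustedTime = day.with(adjuster).atStartOfDay(zone).toEpochSecond();
            }
            return cachedAdjustedTime;
        }
    }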
Since TemporalAdjusterModel is used by multiple threads simultaneously,
store the caches in instance fields and add a 'withCache' method which
clones the model and enables the caching, since otherwise the separate
threads keep invalidating one another's caches, making it slower than it
would be without them. (We could use ThreadLocals, but profiling
suggests they are too heavyweight to be very useful here, so instead use
unsynchronised caching with nonfinal fields and benign data races.)
Provide the method 'ChartDataModel.toCachedTimeIntervalFn' which returns
a method reference to a cloned & cache-enabled TemporalAdjusterModel, to
use in place of the delegate method 'ChartDataModel.toTimeInterval' when
the caching is beneficial.
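The rough shape of how the pieces fit together (method names are from this commit; the bodies and the Function type are guesses, not the real classes):

    import java.time.Instant;
    import java.util.function.Function;

    class TemporalAdjusterModelSketch {
        private boolean cacheEnabled;
        // ... adjuster and cache fields as in the previous sketch ...

        // Clone-and-enable: each chart computation gets its own cache-enabled
        // copy, so parallel threads never trample one another's cache fields.
        TemporalAdjusterModelSketch withCache() {
            TemporalAdjusterModelSketch clone = new TemporalAdjusterModelSketch();
            clone.cacheEnabled = true;
            return clone;
        }

        Long toTimeInterval(Instant instant) {
            // consult the cache fields when cacheEnabled, otherwise compute directly
            return instant.getEpochSecond();
        }
    }

    class ChartDataModelSketch {
        private final TemporalAdjusterModelSketch temporalAdjusterModel = new TemporalAdjusterModelSketch();

        // Used in place of the plain 'toTimeInterval' delegate where caching pays off.
        Function<Instant, Long> toCachedTimeIntervalFn() {
            return temporalAdjusterModel.withCache()::toTimeInterval;
        }
    }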
As profiling shows a hotspot mapping the set of trade statistics to a
list of currencies to pass to 'CurrencyList.updateWithCurrencies',
attempt to speed this up with a parallel stream. For this to work
correctly, take care to use the backing set (with unmodifiable wrapper)
in place of 'tradeStatisticsManager.getObservableTradeStatisticsSet()',
as ObservableSetWrapper doesn't delegate calls to the backing set's spliterator.
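A simplified sketch of that change (stand-in types; the real code maps TradeStatistics3 entries to TradeCurrency objects for CurrencyList.updateWithCurrencies):

    import java.util.Collections;
    import java.util.List;
    import java.util.Set;
    import java.util.stream.Collectors;

    class CurrencyListSketch {
        interface TradeStat { String getCurrencyCode(); }

        // Parallel-stream the plain backing set (wrapped read-only) rather than the
        // ObservableSet itself, since ObservableSetWrapper does not delegate
        // spliterator() to the backing collection.
        static List<String> currencyCodes(Set<TradeStat> backingTradeStatisticsSet) {
            return Collections.unmodifiableSet(backingTradeStatisticsSet).parallelStream()
                    .map(TradeStat::getCurrencyCode)
                    .distinct()
                    .collect(Collectors.toList());
        }
    }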