This is in anticipation of speedups we wish to make, as JProfiler
reveals it to be a hotspot during new block arrivals (which are tricky
to profile, as they occur at random).
Fix the broken stubbing of 'PersistenceManager', which had gone stale as
a result of the conversion of 'Preferences' to asynchronous persistence
in commit 3f4d6e6 (2020/10/12). This caused the assertions in the
'readPersisted' continuation blocks of 3 of the 4 tests not to be
reached. Fix by stubbing the async 'persistenceManager::readPersisted'
method with a callback, instead of stubbing 'getPersisted'.
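For illustration, a minimal Mockito-style sketch of such a callback stub
(the exact 'readPersisted' parameters and the 'payload' fixture are
assumptions, not taken from the actual tests):

    // Static imports assumed: org.mockito.Mockito.doAnswer, org.mockito.ArgumentMatchers.*
    doAnswer(invocation -> {
        // Assumed shape: readPersisted(String fileName, Consumer<T> resultHandler, Runnable orElse)
        Consumer<PreferencesPayload> resultHandler = invocation.getArgument(1);
        resultHandler.accept(payload);   // complete the async read immediately
        return null;
    }).when(persistenceManager).readPersisted(anyString(), any(), any());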
NOTE: Alternatively, we could add a testing-only 'readPersistedSync'
method to 'Preferences' for consistency, as this is how the other broken
(failing) tests resulting from 3f4d6e6 were fixed (in commit 68583d8).
Fix raw usage of the following types, all of which (apart from
Comparator) touch the DAO packages somewhere:
Comparable, Comparator, GetStateHashesResponse, NewStateHashMessage,
RequestStateHashesHandler, PersistenceManager
(Also replace 'Integer.valueOf' with the non-boxing but otherwise
identical method 'Integer.parseInt', in the class 'TxOutputKey'.)
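For instance, the shape of these changes is roughly as follows (the
variable names are made up, not the actual code):

    Comparator<String> cmp = Comparator.naturalOrder();   // was: raw 'Comparator cmp = ...'
    int index = Integer.parseInt("42");                    // was: Integer.valueOf("42"), which boxes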
Replace all raw uses of 'Bond<T extends BondedAsset>', mostly with
wildcards (that is, 'Bond<?>'), to prevent compiler/IDE warnings.
Also rename the 'T extends Bond<R>' & 'R extends BondedAsset' type
params of 'BondRepository<..>' to 'B' & 'T' respectively, as this is a
little less confusing.
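For example (the declaration shape below is an assumption based on the
description, not the actual source):

    // was: class BondRepository<T extends Bond<R>, R extends BondedAsset>
    abstract class BondRepository<B extends Bond<T>, T extends BondedAsset> {
        // Raw 'Bond' uses elsewhere become wildcards where the asset type is
        // irrelevant, e.g. List<Bond<?>> rather than the raw List<Bond>.
    }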
Use the simpler & slightly more efficient 'Map::computeIfAbsent' method
in place of the common pattern:
map.putIfAbsent(key, newValue());
V value = map.get(key);
(Clean up BondRepository + some cases missed from BurningManService.)
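That is, with the same placeholder names as above, the pair of calls
collapses to:

    // Single lookup; newValue() is only invoked when the key is actually absent.
    V value = map.computeIfAbsent(key, k -> newValue());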
Remove the last 10 blocks one-by-one from the end of the internal linked
list of blocks, instead of rebuilding a truncated list from scratch.
(This all takes place within a write-lock anyway, so it's atomic.)
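A minimal sketch of the idea, assuming a LinkedList field and an
existing write-lock (names are made up):

    writeLock.lock();
    try {
        // Drop the last N blocks in place instead of rebuilding a truncated copy.
        for (int i = 0; i < 10 && !blocks.isEmpty(); i++) {
            blocks.removeLast();   // O(1) on a LinkedList
        }
    } finally {
        writeLock.unlock();
    }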
Add missing synchronisation to the 'toProtoMessage' method, by first
copying the internal list of blocks inside a read-lock, prior to
serialisation (still outside the lock, to maximise concurrency). Since
we only make a shallow copy, this should be fast and take no more than
a megabyte or so of extra memory.
This prevents a race seen to cause a ConcurrentModificationException
during store persistence, that sometimes occurred when the application
resumed from a long suspension.
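The pattern described above, sketched with assumed lock, field and
helper names:

    // Inside toProtoMessage(): copy under the read-lock, serialise outside it.
    List<Block> blocksCopy;
    readLock.lock();
    try {
        blocksCopy = new ArrayList<>(blocks);   // shallow copy: Block objects are shared, not cloned
    } finally {
        readLock.unlock();
    }
    return buildProtoFrom(blocksCopy);          // hypothetical serialisation step, runs unlocked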
Use 'Tx::getBurntBsq' instead of 'Tx::getBurntFee', so as not to exclude
BSQ burned by invalid txs from the supply calculations. There are no
invalid BSQ txs at present on mainchain, but accidentally burned BSQ
should definitely count as a reduction in supply, so this fixes a bug.
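For example, a supply-burn sum over transactions might change roughly as
follows (the stream and variable names are assumptions):

    long totalBurnt = txs.stream()
            .mapToLong(Tx::getBurntBsq)   // was Tx::getBurntFee, which misses BSQ destroyed by invalid txs
            .sum();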
1. Tidy up the stream pipelines which sum over time intervals, by
summing directly with a grouping collector, instead of wastefully
collecting to an intermediate map of lists (see the sketch after this
list);
2. Move duplicate 'memoize' static method to the base class;
3. Factor out 'getDateFilteredMap' static method, to replace the
repeated pattern of filtering date keys by a provided predicate and
collecting into a new map;
4. Use 'Map::replaceAll' (also sketched below) instead of the pattern:
map.entrySet().forEach(e -> e.setValue(updateFn(e.getValue())));
5. Fix a quadratic time bug in 'getBsqMarketCapByInterval' by passing an
ordered map to 'issuanceAsOfDate', so that it doesn't have to
repeatedly sort or linearly scan the entire keyset of time intervals
to find the latest one before the provided date (see the sketch
below).
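Illustrative sketches for items 1, 4 and 5 above (the bucketing helper,
maps and variable names here are assumptions, not the actual pipeline
code):

    // 1. Sum per time interval directly with a grouping collector, rather
    //    than grouping into lists and summing in a second pass:
    Map<Date, Long> burntBsqByInterval = txs.stream()
            .collect(Collectors.groupingBy(
                    tx -> toIntervalStart(tx.getTime()),        // assumed bucketing helper
                    Collectors.summingLong(Tx::getBurntBsq)));

    // 4. Update every value in place:
    burntBsqByInterval.replaceAll((date, value) -> updateFn(value));

    // 5. With an ordered (navigable) map, the latest issuance entry at or
    //    before a given date is a single floorEntry lookup instead of a scan
    //    over the whole keyset:
    NavigableMap<Date, Long> issuanceByDate = new TreeMap<>(rawIssuanceByDate);
    Map.Entry<Date, Long> latest = issuanceByDate.floorEntry(date);
    long issuanceAsOfDate = latest != null ? latest.getValue() : 0L;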