Fix raw usage of the following types, all of which (apart from
Comparator) touch the DAO packages somewhere:
Comparable, Comparator, GetStateHashesResponse, NewStateHashMessage,
RequestStateHashesHandler, PersistenceManager
(Also replace 'Integer.valueOf' with the non-boxing but otherwise
identical method 'Integer.parseInt', in the class 'TxOutputKey'.)
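As an illustration only, a minimal sketch of the two kinds of change, using
a hypothetical 'Key' record in place of TxOutputKey (the real accessors may
differ):

    import java.util.Comparator;

    class RawTypeFixSketch {
        // Hypothetical stand-in for TxOutputKey.
        record Key(String txId, int index) {}

        // Parameterised Comparator instead of the raw type, so no unchecked warnings.
        static final Comparator<Key> BY_TX_ID_THEN_INDEX =
                Comparator.comparing(Key::txId).thenComparingInt(Key::index);

        // Integer.parseInt returns a primitive int, avoiding the Integer box
        // created by Integer.valueOf.
        static int parseIndex(String s) {
            return Integer.parseInt(s);
        }
    }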
Replace all raw uses of 'Bond<T extends BondedAsset>', mostly with
wildcards (that is, 'Bond<?>'), to prevent compiler/IDE warnings.
Also rename the 'T extends Bond<R>' & 'R extends BondedAsset' type params
of 'BondRepository<..>' to 'B' & 'T' respectively, as this is a little
less confusing.
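A hedged sketch of the generics clean-up, with simplified stand-ins rather
than the exact Bisq signatures:

    import java.util.List;

    interface BondedAsset {}

    class Bond<T extends BondedAsset> {}

    // Type params renamed from 'T extends Bond<R>, R extends BondedAsset'
    // to 'B extends Bond<T>, T extends BondedAsset'.
    abstract class BondRepository<B extends Bond<T>, T extends BondedAsset> {
        // Wildcard where the concrete asset type is irrelevant, instead of the raw 'Bond'.
        abstract List<Bond<?>> getActiveBonds();
    }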
Use the simpler & slightly more efficient 'Map::computeIfAbsent' method
in place of the common pattern:
map.putIfAbsent(key, newValue());
V value = map.get(key);
(Clean up BondRepository + some cases missed from BurningManService.)
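For illustration (placeholder names, not the exact BondRepository or
BurningManService code):

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Map;

    class ComputeIfAbsentSketch {
        // Before: two map operations, plus an ArrayList allocated even when the key exists.
        static void addOld(Map<String, List<Long>> map, String key, long value) {
            map.putIfAbsent(key, new ArrayList<>());
            List<Long> list = map.get(key);
            list.add(value);
        }

        // After: a single lookup; the list is only created when the key is actually missing.
        static void addNew(Map<String, List<Long>> map, String key, long value) {
            map.computeIfAbsent(key, k -> new ArrayList<>()).add(value);
        }
    }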
Remove the last 10 blocks one-by-one from the end of the internal linked
list of blocks, instead of rebuilding a truncated list from scratch.
(This all takes place within a write-lock anyway, so it's atomic.)
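A rough sketch of the idea (assuming the blocks live in a java.util.LinkedList;
the real field and method names may differ):

    import java.util.LinkedList;

    class TruncateSketch {
        // Trim the tail in place instead of rebuilding the list; the real code
        // runs inside the DAO state write-lock, so the loop is atomic to readers.
        static <T> void removeLastBlocks(LinkedList<T> blocks, int n) {
            for (int i = 0; i < n && !blocks.isEmpty(); i++) {
                blocks.removeLast();   // O(1) per removal on a LinkedList
            }
        }
    }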
Add missing synchronisation to the 'toProtoMessage' method, by first
copying the internal list of blocks inside a read-lock, prior to
serialisation (which still takes place outside the lock, to maximise
concurrency). Since we only make a shallow copy, this should be fast and
take no more than a MB or so of extra memory.
This prevents a race, seen to cause a ConcurrentModificationException
during store persistence, which sometimes occurred when the application
resumed from a long suspension.
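A simplified sketch of the pattern (placeholder names; the real class uses
the DAO state's own lock and proto types):

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.locks.ReadWriteLock;
    import java.util.concurrent.locks.ReentrantReadWriteLock;

    class SnapshotSketch<B> {
        private final ReadWriteLock lock = new ReentrantReadWriteLock();
        private final List<B> blocks = new ArrayList<>();

        Object toProtoMessage() {
            List<B> snapshot;
            lock.readLock().lock();
            try {
                // Shallow copy: only the list structure is copied, the block objects are shared.
                snapshot = new ArrayList<>(blocks);
            } finally {
                lock.readLock().unlock();
            }
            // Serialisation of the snapshot happens outside the lock, so writers are
            // only blocked for the duration of the copy.
            return serialise(snapshot);
        }

        private Object serialise(List<B> snapshot) {
            return snapshot.toString();   // placeholder for the real proto conversion
        }
    }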
Use 'Tx::getBurntBsq' instead of 'Tx::getBurntFee', so as not to exclude
BSQ burned by invalid txs from the supply calculations. There are no
invalid BSQ txs at present on mainchain, but accidentally burned BSQ
should definitely count as a reduction in supply, so this fixes a bug.
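By way of illustration (a stand-in 'Tx' interface with the two accessors
assumed from the above):

    import java.util.Collection;

    class BurntBsqSketch {
        interface Tx {
            long getBurntFee();   // fee portion only
            long getBurntBsq();   // fee plus BSQ destroyed by invalid txs
        }

        // Summing getBurntBsq means accidentally burned BSQ also reduces the computed supply.
        static long totalBurnt(Collection<Tx> txs) {
            return txs.stream().mapToLong(Tx::getBurntBsq).sum();
        }
    }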
1. Tidy up the stream pipelines which sum over time intervals, by
summing directly with a grouping collector, instead of wastefully
collecting to an intermediate map of lists (points 1, 3, 4 & 5 are
sketched in the example after this list);
2. Move duplicate 'memoize' static method to the base class;
3. Factor out 'getDateFilteredMap' static method, to replace the
repeated pattern of filtering date keys by a provided predicate and
collecting into a new map;
4. Use 'Map::replaceAll' instead of the pattern:
map.entrySet().forEach(e -> e.setValue(updateFn(e.getValue())));
5. Fix a quadratic time bug in 'getBsqMarketCapByInterval' by passing an
ordered map to 'issuanceAsOfDate', so that it doesn't have to
repeatedly sort or linearly scan the entire keyset of time intervals,
to find the latest one before the provided date.
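The following sketch illustrates points 1, 3, 4 & 5 with simplified
stand-in types (not the exact DAO model or method signatures):

    import java.util.Collection;
    import java.util.Date;
    import java.util.Map;
    import java.util.TreeMap;
    import java.util.function.Function;
    import java.util.function.Predicate;
    import java.util.function.UnaryOperator;
    import java.util.stream.Collectors;

    class IntervalStatsSketch {
        // Stand-in for a dated BSQ amount (e.g. a burn or issuance entry).
        record Entry(Date date, long amount) {}

        // (1) Sum per interval directly with a grouping collector, instead of
        //     collecting lists per interval and summing them in a second pass.
        static Map<Long, Long> sumByInterval(Collection<Entry> entries,
                                             Function<Date, Long> toInterval) {
            return entries.stream()
                    .collect(Collectors.groupingBy(e -> toInterval.apply(e.date()),
                            Collectors.summingLong(Entry::amount)));
        }

        // (3) Filter the date keys of a map by a predicate, collecting into a new map.
        static <V> Map<Date, V> getDateFilteredMap(Map<Date, V> map, Predicate<Date> dateFilter) {
            return map.entrySet().stream()
                    .filter(e -> dateFilter.test(e.getKey()))
                    .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
        }

        // (4) Update every value in place with Map::replaceAll.
        static <K, V> void updateAll(Map<K, V> map, UnaryOperator<V> updateFn) {
            map.replaceAll((k, v) -> updateFn.apply(v));
        }

        // (5) With an ordered (navigable) map, the latest interval at or before a given
        //     one is a single floorEntry lookup, not a scan of the whole keyset per query.
        static long issuanceAsOfInterval(TreeMap<Long, Long> issuanceByInterval, long interval) {
            Map.Entry<Long, Long> floor = issuanceByInterval.floorEntry(interval);
            return floor != null ? floor.getValue() : 0L;
        }
    }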
First, the AtomicFileWriter writes data to a rolling file before
swapping the rolling file with the active file. This makes all file
writes atomic and lets Bisq recover from crashes.