Run through the 'DecryptedBallotsWithMerits' list of each cycle in
parallel, when filling the cycle list of the Vote Result view, to speed
up the signature verification of all the 'Merit' objects found in the
DAO state. Checking all the signatures is necessary to correctly compute
the total merit stake and hence the vote weight of each ballot list, and
profiling shows that it is by far the biggest bottleneck during the
initial view load (with all subsequent activations of the view skipping
'doFillCycleList()', when outside of the Vote Result DAO phase).
(Since each signature is checked only once from 'doFillCycleList()' and
skipping the checks could potentially affect the computed vote weights,
we obviously cannot do much better than parallelising the checks to
speed up this method.)
(Even though it may be a little more efficient to parallelise the outer
loop of the method, over the cycles instead of the decrypted votes of
each cycle, each individual signature check is expensive enough that it
probably wouldn't give much improvement over this one-line change.)
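A minimal sketch of the change; 'ballots' and 'getMeritStake' are
hypothetical stand-ins for the real DAO code, and only the switch to
'parallelStream()' reflects the actual edit:

    ballots.parallelStream()    // was: ballots.stream()
            .forEach(decryptedBallotsWithMerits -> {
                // The 'Merit' signature checks inside this call dominate
                // the cost, so running the entries on the common fork-join
                // pool parallelises the bottleneck.
                long meritStake = getMeritStake(decryptedBallotsWithMerits);
                // ... accumulate the vote weight of this ballot list ...
            });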
This should hopefully resolve images failing to load from resources
on Windows, where imread fails with "can't open/read
file: check file path/integrity".
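For reference, a common way to make imread cope with bundled images is
to extract the resource to a real file first; this is only a sketch
under that assumption (class and method names hypothetical), since
imread cannot read from inside a jar:

    import java.io.InputStream;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.StandardCopyOption;
    import org.bytedeco.opencv.opencv_core.Mat;
    import static org.bytedeco.opencv.global.opencv_imgcodecs.imread;

    class ResourceImages {
        static Mat loadFromResources(String name) throws Exception {
            try (InputStream in = ResourceImages.class.getResourceAsStream(name)) {
                if (in == null)
                    throw new IllegalStateException("Resource not found: " + name);
                // Copy the resource to a temp file so imread gets a plain
                // filesystem path.
                Path tmp = Files.createTempFile("img-", ".png");
                tmp.toFile().deleteOnExit();
                Files.copy(in, tmp, StandardCopyOption.REPLACE_EXISTING);
                return imread(tmp.toString());
            }
        }
    }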
This restores the functionality that was removed in b5beea58. However,
this implementation uses the JavaCV library rather than the
webcam-capture library, as discussed in #4940. As a result, it should
now provide macOS support.
In case the other seed node has not updated, the historical data is
not taken into account, so we would get more repeated requests until
all data is received. To avoid getting stuck, we increase the limit.
Signed-off-by: HenrikJannsen <boilingfrog@gmx.com>
3000 items are about 180.325 kB.
For nodes that have been offline for longer, the repeated requests
consume quite some time.
With 15k items we can expect a payload of about 1 MB, which is still
acceptable.
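(Roughly: 180 kB / 3000 ≈ 60 bytes per item, so 15,000 items come to
about 900 kB, i.e. just under 1 MB.)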
Signed-off-by: HenrikJannsen <boilingfrog@gmx.com>
Remove the last 10 blocks one-by-one from the end of the internal linked
list of blocks, instead of rebuilding a truncated list from scratch.
(This all takes place within a write-lock anyway, so it's atomic.)
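A sketch of the truncation, with hypothetical field names ('blocks' is
the internal LinkedList, 'lock' the store's ReadWriteLock):

    lock.writeLock().lock();
    try {
        for (int i = 0; i < 10 && !blocks.isEmpty(); i++)
            blocks.removeLast();    // O(1) per block from the tail
    } finally {
        lock.writeLock().unlock();
    }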
Add missing synchronisation to the 'toProtoMessage' method, by first
copying the internal list of blocks inside a read-lock, prior to
serialisation (which remains outside the lock, to maximise
concurrency). Since we only make a shallow copy, this should be fast
and take no more than a MB or so of extra memory.
This prevents a race seen to cause a ConcurrentModificationException
during store persistence, that sometimes occurred when the application
resumed from a long suspension.
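A sketch of the 'toProtoMessage' side, again with hypothetical names
('serialize' stands in for the protobuf conversion):

    List<Block> snapshot;
    lock.readLock().lock();
    try {
        snapshot = new ArrayList<>(blocks);  // shallow copy: one reference per block
    } finally {
        lock.readLock().unlock();
    }
    return serialize(snapshot);  // slow protobuf work runs outside the lock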
Use 'Tx::getBurntBsq' instead of 'Tx::getBurntFee', so as not to exclude
BSQ burned by invalid txs from the supply calculations. There are no
invalid BSQ txs at present on mainchain, but accidentally burned BSQ
should definitely count as a reduction in supply, so this fixes a bug.
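A sketch of the corrected summation (the stream shape is assumed; the
two getters are the ones named above):

    long totalBurnt = txs.stream()
            .mapToLong(Tx::getBurntBsq)  // includes BSQ burnt by invalid txs
            .sum();                      // Tx::getBurntFee would leave them out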
1. Tidy up the stream pipelines which sum over time intervals, by
summing directly with a grouping collector, instead of wastefully
collecting to an intermediate map of lists (see the first sketch
after this list);
2. Move duplicate 'memoize' static method to the base class;
3. Factor out 'getDateFilteredMap' static method, to replace the
repeated pattern of filtering date keys by a provided predicate and
collecting into a new map;
4. Use 'Map::replaceAll' (see the first sketch after this list)
instead of the pattern:
map.entrySet().forEach(e -> e.setValue(updateFn(e.getValue())));
5. Fix a quadratic time bug in 'getBsqMarketCapByInterval' by passing an
ordered map to 'issuanceAsOfDate', so that it doesn't have to
repeatedly sort or linearly scan the entire keyset of time intervals,
to find the latest one before the provided date (see the second
sketch after this list).
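First sketch, covering items 1 and 4 (element type and helper names
hypothetical):

    // Item 1: sum per time interval in one pass with a grouping collector,
    // instead of collecting to an intermediate map of lists and summing
    // each list afterwards.
    Map<Long, Long> sumPerInterval = items.stream()
            .collect(Collectors.groupingBy(item -> toInterval(item.getDate()),
                    Collectors.summingLong(Item::getValue)));

    // Item 4: the entry-set mutation pattern collapses to a one-liner.
    map.replaceAll((k, v) -> updateFn(v));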
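Second sketch, for item 5 (names hypothetical): keeping the issuance
map ordered lets 'issuanceAsOfDate' find the latest interval at or
before a given date with a single O(log n) 'floorEntry' lookup, instead
of sorting or scanning the whole keyset on every call:

    NavigableMap<LocalDate, Long> issuanceByInterval = new TreeMap<>(issuanceMap);

    long issuanceAsOfDate(LocalDate date) {
        Map.Entry<LocalDate, Long> entry = issuanceByInterval.floorEntry(date);
        return entry != null ? entry.getValue() : 0L;
    }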