Add an abstract base class, 'ProofOfWorkService', for the existing PoW
implementation 'HashCashService' and a new 'EquihashProofOfWorkService'
PoW implementation based on Equihash-90-5 (which has 72-byte solutions &
5-10 MB peak memory usage). Since the current 'ProofOfWork' protobuf
object only provides a 64-bit counter field to hold the puzzle solution
(as that is all Hashcash requires), repurpose the 'payload' field to
hold the Equihash puzzle solution bytes, with the 'challenge' field
equal to the puzzle seed: the SHA256 hash of the offerId & makerAddress.
Use a difficulty scale factor of 3e-5 (derived from benchmarking) to try
to make the average Hashcash & Equihash puzzle solution times roughly
equal for any given log-difficulty/numLeadingZeros integer chosen in the
filter.
NOTE: An empty enabled-version-list in the filter defaults to Hashcash
(= version 0) only. The new Equihash-90-5 PoW scheme is version 1.
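As a rough illustration, the new base class might look something like
this (the method names and signatures below are assumptions, not the
actual API; 'ProofOfWork' is the existing protobuf object):

    import java.util.concurrent.CompletableFuture;

    // 'HashCashService' (version 0) and 'EquihashProofOfWorkService'
    // (version 1) would each extend this sketched base class.
    public abstract class ProofOfWorkService {
        private final int version;  // 0 = Hashcash (default), 1 = Equihash-90-5

        protected ProofOfWorkService(int version) {
            this.version = version;
        }

        public int getVersion() {
            return version;
        }

        // Solve a puzzle asynchronously for the given challenge & difficulty.
        // Hashcash keeps the solution in the 64-bit counter field; Equihash
        // puts its 72-byte solution in the repurposed 'payload' field.
        public abstract CompletableFuture<ProofOfWork> mint(byte[] payload,
                                                            byte[] challenge,
                                                            double difficulty);

        public abstract boolean verify(ProofOfWork proofOfWork);
    }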
Provide a utility method, 'Equihash::adjustDifficulty', to linearise and
normalise the expected time taken to solve a puzzle, as a function of
the provided difficulty, by taking into account the fact that there
could be 0, 1, 2 or more puzzle solutions for any given nonce. (Wagner's
algorithm is supposed to give 2 solutions on average, but the observed
number is fewer, possibly due to duplicate removal.) For tractability,
assume that the solution count has a Poisson distribution, which agrees
well with the test results.
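To see where the adjustment comes from: if each individual solution
passes the difficulty test independently with probability 1/d' and the
number of solutions per nonce is Poisson(lambda), then by Poisson
thinning a nonce yields no passing solution with probability
exp(-lambda/d'). Choosing the internal difficulty d' so that a nonce
succeeds with probability 1/d, i.e. 1 - exp(-lambda/d') = 1/d, gives
d' = -lambda / ln(1 - 1/d). A minimal Java sketch (the constant lambda
and the exact signature are assumptions):

    // A sketch of the adjustment; LAMBDA (the assumed mean number of
    // solutions per nonce) and the signature are assumptions.
    private static final double LAMBDA = 2.0;  // Wagner's theoretical mean

    public static double adjustDifficulty(double realDifficulty) {
        if (realDifficulty <= 1.0) {
            return 1.0;  // at or below unit difficulty, no adjustment needed
        }
        // Solve 1 - exp(-LAMBDA / adjusted) = 1 / realDifficulty, so that
        // the expected number of nonce trials equals realDifficulty:
        return Math.max(-LAMBDA / Math.log1p(-1.0 / realDifficulty), 1.0);
    }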
Also add some (disabled) benchmarks to EquihashTest. These reveal an
Equihash-90-5 solution time of ~146ms per puzzle per unit difficulty on
a Core i3 laptop, with a verification time of ~50 microseconds.
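One such disabled benchmark might be sketched as follows, assuming
JUnit 4 and a hypothetical 'solvePuzzle' helper in place of the real
solver call:

    import org.junit.Ignore;
    import org.junit.Test;

    @Ignore  // disabled by default; enable locally to benchmark
    @Test
    public void benchmarkSolve() {
        byte[] seed = new byte[32];  // fixed puzzle seed for repeatability
        int rounds = 20;
        long t0 = System.nanoTime();
        for (int i = 0; i < rounds; i++) {
            solvePuzzle(seed, 1.0);  // hypothetical solver call, unit difficulty
        }
        System.out.printf("Average solve time: %.1f ms%n",
                (System.nanoTime() - t0) / 1e6 / rounds);
    }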
The GetBsqSwapTrade and TakeBsqSwapOffer gRPC services were a quick hack
to speed up testing of the BSQ swap feature with apitest cases. Their
gRPC service method implementations are removed here, and the GetTrade &
TakeOffer service methods are refactored to support BSQ swaps.
Add a numeric version field to the 'ProofOfWork' protobuf object, along
with a list of allowed version numbers, 'enabled_pow_versions', to the
filter. The versions are taken to be in order of preference from most to
least preferred when creating a PoW, with an empty list signifying use
of the default algorithm only (that is, version 0: Hashcash).
An explicit list is used instead of an upper & lower version bound, in
case a new PoW algorithm (or changed algorithm params) turns out to
provide worse resistance than an earlier version.
(The fields are unused for now, to be enabled in a later commit.)
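For illustration, PoW creation might select a version roughly as
follows (a sketch; 'selectPowVersion' and 'isSupported' are made-up
names, and falling back to the default is a simplification):

    import java.util.List;

    static int selectPowVersion(List<Integer> enabledPowVersions) {
        if (enabledPowVersions.isEmpty()) {
            return 0;  // empty list = default algorithm (Hashcash) only
        }
        // The list is ordered from most to least preferred, so take the
        // first version this client knows how to compute.
        return enabledPowVersions.stream()
                .filter(v -> isSupported(v))
                .findFirst()
                .orElse(0);
    }

    static boolean isSupported(int version) {
        return version == 0 || version == 1;  // Hashcash & Equihash-90-5
    }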
PaymentMethod uses an instance of TradeLimits and expects that it has
been injected, which is the case for desktop but not for headless apps,
so we enforce injection in the app base classes used for headless apps.
The validation of trade statistics uses a method in PaymentMethod where
that dependency is required.
The hack by which PaymentMethod uses TradeLimits is not nice, but fixing
it would require more refactoring effort.
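The enforcement might look roughly like this in a headless app base
class (a sketch assuming Guice; the class and method names are
illustrative):

    import com.google.inject.Injector;

    public abstract class HeadlessAppBase {
        protected void applyInjector(Injector injector) {
            // Eagerly instantiate TradeLimits so the instance that
            // PaymentMethod relies on is set up, as it already is in the
            // desktop app.
            injector.getInstance(TradeLimits.class);
        }
    }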
We historically had higher trade limits, and some assets are no longer
in the currency list, so we apply the filter only to data after
Nov 1st 2021.
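The cutoff guard reduces to something like the following sketch (names
are illustrative):

    private static final long FILTER_CUTOFF_MS = java.time.LocalDate.of(2021, 11, 1)
            .atStartOfDay(java.time.ZoneOffset.UTC).toInstant().toEpochMilli();

    static boolean requiresFiltering(long tradeDateMillis) {
        return tradeDateMillis > FILTER_CUTOFF_MS;  // only post-Nov 1st 2021 data
    }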
This change sets the Java source and class generation version targets to 11.
The Bisq distribution is built with JDK 11, but the source target had remained at 1.10 (Java 10).
Upgrading allows devs to use Java syntax features available @since 11, and it
might help anyone building the source avoid confusion over which JDK they
should use (the minimum is JDK 11).
See https://docs.gradle.org/current/userguide/java_plugin.html#sec:java-extension
Run the initial XorTable fillup in 'Equihash::computeAllHashes' in
parallel, using a parallel stream, to get an easy speedup. (The solver
spends about half its time computing BLAKE2b hashes before iteratively
building tables of partial collisions using 'Equihash::findCollisions'.)
As part of this, replace the use of 'java.nio.ByteBuffer' array wrapping
in 'Utilities::(bytesToIntsBE|intsToBytesBE)' with manual for-loops, as
profiling reveals an unexpected bottleneck in the former when used in a
multithreaded setting. (Lock contention somewhere in unsafe code?)
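For reference, the ByteBuffer-free conversions and the parallel fill-up
reduce to something like the following sketch (the actual 'Utilities'
signatures and the table API may differ; 'hashRow' stands in for the
per-index BLAKE2b hashing):

    import java.util.stream.IntStream;

    static int[] bytesToIntsBE(byte[] bytes) {
        int[] result = new int[bytes.length / 4];
        for (int i = 0, j = 0; i < result.length; i++) {
            result[i] = (bytes[j++] & 0xff) << 24 | (bytes[j++] & 0xff) << 16
                    | (bytes[j++] & 0xff) << 8 | (bytes[j++] & 0xff);
        }
        return result;
    }

    static byte[] intsToBytesBE(int[] ints) {
        byte[] result = new byte[ints.length * 4];
        for (int i = 0, j = 0; i < ints.length; i++) {
            result[j++] = (byte) (ints[i] >>> 24);
            result[j++] = (byte) (ints[i] >>> 16);
            result[j++] = (byte) (ints[i] >>> 8);
            result[j++] = (byte) ints[i];
        }
        return result;
    }

    static int[][] fillTableInParallel(int rows) {
        int[][] table = new int[rows][];
        IntStream.range(0, rows).parallel()
                .forEach(i -> table[i] = hashRow(i));  // disjoint indices, so thread-safe
        return table;
    }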