The rpc GetBsqSwapTrade and TakeBsqSwapOffer services were a quick hack
to speed up testing of BSQ swap feature development with apitest
cases. Their gRPC service method implementations are removed here and
the GetTrade & TakeOffer service methods are refactored to support BSQ
swaps.
Add a numeric version field to the 'ProofOfWork' protobuf object, along
with a list of allowed version numbers, 'enabled_pow_versions', to the
filter. The versions are taken to be in order of preference from most to
least preferred when creating a PoW, with an empty list signifying use
of the default algorithm only (that is, version 0: Hashcash).
An explicit list is used instead of an upper & lower version bound, in
case a new PoW algorithm (or changed algorithm params) turns out to
provide worse resistance than an earlier version.
(The fields are unused for now, to be enabled in a later commit.)
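A minimal sketch of how the preference could be resolved once the fields
are enabled; the class and method names below are illustrative
assumptions, not the actual Bisq API:

    import java.util.List;

    // Illustrative sketch (assumed names): choose the PoW version to use
    // when creating a proof of work, based on the filter's
    // enabled_pow_versions list.
    final class PowVersionSelector {
        static final int HASHCASH_VERSION = 0;  // version 0: the default algorithm

        // The list is ordered from most to least preferred; an empty list
        // means "default algorithm only".
        static int selectVersion(List<Integer> enabledPowVersions) {
            if (enabledPowVersions == null || enabledPowVersions.isEmpty()) {
                return HASHCASH_VERSION;
            }
            return enabledPowVersions.get(0);
        }
    }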
PaymentMethod uses an instance of TradeLimits and expects that it
has been injected, which is the case for desktop but not for
headless apps, so we enforce injection in the app base classes
used for headless apps.
The validation of trade statistics uses a method in PaymentMethod
where that dependency is required.
The way PaymentMethod uses TradeLimits is a hack and not nice, but
cleaning it up would require more refactoring effort.
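A rough sketch of what enforcing the injection in a headless app base
class could look like; TradeLimits is the real class named above, but
the base-class shape and field names here are assumptions for
illustration:

    import com.google.inject.Injector;

    // Illustrative sketch (assumed names): a headless app base class that
    // eagerly resolves TradeLimits so the instance exists before
    // PaymentMethod touches it.
    public abstract class HeadlessAppBase {
        @SuppressWarnings({"FieldCanBeLocal", "unused"})
        private TradeLimits tradeLimits;  // held only to force construction

        protected void applyInjector(Injector injector) {
            // Desktop builds TradeLimits as part of its object graph;
            // headless apps must request it explicitly so the dependency
            // is guaranteed to exist.
            tradeLimits = injector.getInstance(TradeLimits.class);
        }
    }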
We historically had higher trade limits and assets which are no longer
in the currency list, so we apply the filter only to data after
Nov 1st 2021.
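As a rough illustration of the cutoff (names and the exact filtering
call are assumptions):

    import java.time.LocalDate;
    import java.time.ZoneOffset;

    // Illustrative sketch: apply the stricter validation only to trade
    // statistics dated after Nov 1st 2021; older data may reference
    // retired assets or the historically higher trade limits.
    final class TradeStatisticsCutoff {
        private static final long CUTOFF_MS = LocalDate.of(2021, 11, 1)
                .atStartOfDay(ZoneOffset.UTC)
                .toInstant()
                .toEpochMilli();

        static boolean isFilterApplied(long tradeDateMs) {
            return tradeDateMs > CUTOFF_MS;
        }
    }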
This change sets the Java source and class generation version targets to 11.
The Bisq distribution is built with JDK 11, but the source target has remained
at 1.10. Upgrading allows devs to use some Java syntax features available
@since 11, and it might help anyone building the source avoid confusion over
which JDK they should use (minimum is JDK 11).
See https://docs.gradle.org/current/userguide/java_plugin.html#sec:java-extension
Run the initial XorTable fillup in 'Equihash::computeAllHashes' in
parallel, using a parallel stream, to get an easy speedup. (The solver
spends about half its time computing BLAKE2b hashes before iteratively
building tables of partial collisions using 'Equihash::findCollisions'.)
As part of this, replace the use of 'java.nio.ByteBuffer' array wrapping
in 'Utilities::(bytesToIntsBE|intsToBytesBE)' with manual for-loops, as
profiling reveals an unexpected bottleneck in the former when used in a
multithreaded setting. (Lock contention somewhere in unsafe code?)
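The shape of the two changes, as a hedged sketch (assumed names and
signatures, not the actual Equihash or Utilities code):

    import java.util.function.IntFunction;
    import java.util.stream.IntStream;

    // Illustrative sketch of the parallel table fill and the manual
    // big-endian conversion loop.
    final class EquihashHashingSketch {
        // Fill the table of hashes in parallel with a parallel IntStream;
        // each row is written exactly once, so the plain array writes are safe.
        static int[][] computeAllHashes(int rowCount, IntFunction<int[]> hashRow) {
            int[][] table = new int[rowCount][];
            IntStream.range(0, rowCount).parallel()
                    .forEach(i -> table[i] = hashRow.apply(i));
            return table;
        }

        // Manual big-endian byte-to-int conversion replacing the
        // ByteBuffer-based approach, which showed contention under
        // multithreaded use.
        static int[] bytesToIntsBE(byte[] bytes) {
            int[] ints = new int[bytes.length / 4];
            for (int i = 0, j = 0; i < ints.length; i++) {
                ints[i] = ((bytes[j++] & 0xff) << 24)
                        | ((bytes[j++] & 0xff) << 16)
                        | ((bytes[j++] & 0xff) << 8)
                        | (bytes[j++] & 0xff);
            }
            return ints;
        }
    }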
We get TradeStatistics3 objects from old retired PaymentMethods which
are not found in the active list; for those, the 2 BTC default trade
limit is used.
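Conceptually (method and map names assumed), the lookup falls back to a
fixed default:

    import java.util.Map;
    import java.util.Optional;

    // Illustrative sketch: look up the trade limit for a payment method by
    // id and fall back to the 2 BTC default for retired methods that are
    // no longer in the active list.
    final class RetiredPaymentMethodLimit {
        private static final long DEFAULT_TRADE_LIMIT = 2 * 100_000_000L;  // 2 BTC in satoshi

        static long maxTradeLimit(Map<String, Long> activeLimitsById, String paymentMethodId) {
            return Optional.ofNullable(activeLimitsById.get(paymentMethodId))
                    .orElse(DEFAULT_TRADE_LIMIT);
        }
    }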
Manually iterate over colliding table rows using a while-loop and a
custom 'PrimitiveIterator.OfInt' implementation, instead of a foreach
lambda called on an IntStream, in 'Equihash::findCollisions'. Profiling
shows that this results in a slight speedup.
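A schematic of the pattern (not the actual Equihash code): a hand-rolled
PrimitiveIterator.OfInt driven by a while-loop instead of a lambda
passed to IntStream.forEach.

    import java.util.PrimitiveIterator;

    // Illustrative sketch: iterate over colliding row indices with a
    // custom PrimitiveIterator.OfInt and a while-loop, avoiding boxing
    // and the per-element lambda dispatch of IntStream.forEach.
    final class CollidingRowIterator implements PrimitiveIterator.OfInt {
        private final int[] rows;   // flat array of candidate row indices
        private int cursor;

        CollidingRowIterator(int[] rows) {
            this.rows = rows;
        }

        @Override
        public boolean hasNext() {
            return cursor < rows.length;
        }

        @Override
        public int nextInt() {
            return rows[cursor++];
        }

        static long sumRows(int[] rows) {
            long sum = 0;
            PrimitiveIterator.OfInt it = new CollidingRowIterator(rows);
            while (it.hasNext()) {
                sum += it.nextInt();   // no boxing, no lambda call
            }
            return sum;
        }
    }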
Provide a (vastly cut down) drop-in replacement for the Guava multimap
instance 'indexMultimap', of type 'ListMultimap<Integer, Integer>', used
to map table row indices to block values, to detect collisions at a
given block position (that is, in a given table column).
The replacement stores (multi-)mappings from ints to ints in a flat int-
array, only spilling over to a ListMultimap if there are more than 4
values added for a given key. This vastly reduces the amount of boxing
and memory usage when running 'Equihash::findCollisions' to build up the
next table as part of Wagner's algorithm.
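A cut-down sketch of the idea (class and field names assumed): store up
to 4 values per key inline in a flat int array and only fall back to a
boxed multimap beyond that.

    import java.util.List;
    import com.google.common.collect.ArrayListMultimap;
    import com.google.common.collect.ListMultimap;

    // Illustrative sketch of an int-to-int multimap that keeps the first
    // 4 values per key in a flat array and spills additional values to a
    // Guava multimap, reducing boxing and memory churn.
    final class IntListMultimapSketch {
        private static final int CAPACITY = 4;

        private final int[] counts;   // number of inline values stored per key
        private final int[] values;   // CAPACITY slots per key, laid out flat
        private final ListMultimap<Integer, Integer> overspill = ArrayListMultimap.create();

        IntListMultimapSketch(int keyUpperBound) {
            counts = new int[keyUpperBound];
            values = new int[keyUpperBound * CAPACITY];
        }

        void put(int key, int value) {
            int n = counts[key];
            if (n < CAPACITY) {
                values[key * CAPACITY + n] = value;
                counts[key] = n + 1;
            } else {
                overspill.put(key, value);   // rare path: more than 4 values
            }
        }

        int[] get(int key) {
            int n = counts[key];
            List<Integer> spilled = overspill.get(key);
            int[] result = new int[n + spilled.size()];
            for (int i = 0; i < n; i++) {
                result[i] = values[key * CAPACITY + i];
            }
            for (int i = 0; i < spilled.size(); i++) {
                result[n + i] = spilled.get(i);
            }
            return result;
        }
    }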
Implement the Equihash (https://eprint.iacr.org/2015/946.pdf) algorithm
for solving/verifying memory-hard client-puzzles/proof-of-work problems
for ASIC-resistant DoS attack protection. The scheme is asymmetric, so
that even though solving a puzzle is slow and memory-intensive, needing
hundreds of kB to MBs of memory, the solution verification is instant.
Instead of a single 64-bit counter/nonce, as in the case of Hashcash,
Equihash solutions are larger objects ranging from tens of bytes to a
few kB, depending on the puzzle parameters used. These need to be
stored in their entirety in the proof-of-work field of each offer payload.
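The asymmetry can be pictured with a hypothetical interface of this
shape (names and signatures are assumptions, not the actual Equihash
class):

    // Hypothetical shape of an asymmetric PoW API: solving is slow and
    // memory-hungry, while verification only needs the compact solution
    // bytes and completes near-instantly.
    interface AsymmetricPuzzle {
        byte[] solve(byte[] challenge, double difficulty);
        boolean verify(byte[] challenge, double difficulty, byte[] solution);
    }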
Include logic for fine-grained difficulty control in Equihash with a
double-precision floating point number. This is based on lexicographic
comparison with a target hash, like in Bitcoin, instead of just
counting the number of leading zeros of a hash.
The code is unused at present. Also add some simple unit tests.
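One way to realize the fine-grained control, sketched under the
assumption of a 256-bit hash (not the exact Bisq arithmetic): derive a
target as 2^256 divided by the difficulty and compare the hash to it as
an unsigned big-endian integer.

    import java.math.BigDecimal;
    import java.math.BigInteger;
    import java.math.MathContext;

    // Illustrative sketch: turn a double difficulty into a 256-bit target
    // and check a hash against it as an unsigned big-endian number (for
    // fixed-length hashes this agrees with lexicographic byte comparison).
    final class DifficultyTargetSketch {
        private static final BigInteger MAX_TARGET = BigInteger.ONE.shiftLeft(256);

        static BigInteger targetFor(double difficulty) {
            // target = 2^256 / difficulty (difficulty >= 1.0 assumed)
            return new BigDecimal(MAX_TARGET)
                    .divide(BigDecimal.valueOf(difficulty), MathContext.DECIMAL128)
                    .toBigInteger();
        }

        static boolean meetsDifficulty(byte[] hash256, double difficulty) {
            return new BigInteger(1, hash256).compareTo(targetFor(difficulty)) < 0;
        }
    }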