Use 'java.nio.file.FileSystem' in place of 'java.util.jar.JarFile' to
list all the resources under a given classpath directory (on Windows &
Linux), as that is a bit neater and potentially more efficient than
scanning the entire ZIP directory structure.
Prevent a "URI is not hierarchical" IllegalArgumentException from the
expression, 'new File(dirUrl.toURI())', which occurs on Linux & Windows
when listing the resource directory of BSQ blocks. Adapt a solution from
StackOverflow which uses two separate code paths depending on the
environment, into the new method 'FileUtil::listResourceDirectory'.
The issue is caused by the resource URL taking one of the two forms:
file:/Users/[USER]/Java/bisq/bisq/p2p/out/production/resources/BsqBlocks_BTC_MAINNET
jar:file:...p2p.jar!/BsqBlocks_BTC_MAINNET
depending on whether the system is OSX or not.
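Combined with the FileSystem change above, the fix looks roughly like
the following hedged sketch (simplified; the real
'FileUtil::listResourceDirectory' signature and error handling may
differ):

    import java.io.File;
    import java.net.URL;
    import java.nio.file.FileSystem;
    import java.nio.file.FileSystems;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.Arrays;
    import java.util.Collections;
    import java.util.List;
    import java.util.stream.Collectors;
    import java.util.stream.Stream;

    class ResourceListingSketch {
        static List<String> listResourceDirectory(String directoryName) throws Exception {
            URL dirUrl = ResourceListingSketch.class.getResource("/" + directoryName);
            if ("file".equals(dirUrl.getProtocol())) {
                // Hierarchical "file:" resource URL (the OSX case above):
                // 'new File(dirUrl.toURI())' is safe here.
                return Arrays.asList(new File(dirUrl.toURI()).list());
            }
            // "jar:file:...!/..." URL (the Linux & Windows case): open the jar
            // as a zip FileSystem and list the entries under the directory,
            // instead of scanning the whole archive via JarFile.
            try (FileSystem fs = FileSystems.newFileSystem(dirUrl.toURI(), Collections.emptyMap());
                 Stream<Path> entries = Files.list(fs.getPath("/" + directoryName))) {
                return entries.map(path -> path.getFileName().toString())
                        .collect(Collectors.toList());
            }
        }
    }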
Avoid repurposing the 'ProofOfWork.payload' field for Equihash puzzle
solutions, as that may be of later use in interactive PoW schemes such
as P2P network DoS protection (where the challenge may be a random nonce
instead of derived from the offer ID). Instead, make the payload the
UTF-8 bytes of the offer ID, just as with Hashcash.
Also, make the puzzle seed the SHA-256 hash of the payload concatenated
with the challenge, instead of just the 256-bit challenge on its own, so
that the PoW is tied to a particular payload and cannot be reused for
other payloads in the case of future randomly chosen challenges.
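A minimal sketch of the described construction (method names are
illustrative, not necessarily the actual Bisq ones):

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;

    class PowSeedSketch {
        // Payload = UTF-8 bytes of the offer ID, just as with Hashcash.
        static byte[] getPayload(String offerId) {
            return offerId.getBytes(StandardCharsets.UTF_8);
        }

        // Seed = SHA-256(payload || challenge), so a solution found for one
        // payload cannot be reused for another under the same challenge.
        static byte[] getSeed(byte[] payload, byte[] challenge) throws Exception {
            MessageDigest digest = MessageDigest.getInstance("SHA-256");
            digest.update(payload);
            digest.update(challenge);
            return digest.digest();
        }
    }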
1. Reorder the PoW fields in the 'Filter' proto by field index, instead
of contextually.
2. Deduplicate expression for 'pow' & replace if-block with boolean op
to simplify 'FilterManager::isProofOfWorkValid'.
3. Avoid the slightly confusing use of a null char as a separator to
prevent hashing collisions in 'EquihashProofOfWorkService::getChallenge'.
Use a comma separator and escape the 'itemId' & 'ownerId' arguments
instead (see the sketch below).
(based on PR #5858 review comments)
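A hedged sketch of the comma-separator approach from item 3 (helper and
hashing details are illustrative, not necessarily the actual Bisq code):

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;

    class ChallengeSketch {
        // Escape backslashes and commas so that distinct (itemId, ownerId)
        // pairs can never produce the same concatenated string.
        private static String escape(String s) {
            return s.replace("\\", "\\\\").replace(",", "\\,");
        }

        static byte[] getChallenge(String itemId, String ownerId) throws Exception {
            String combined = escape(itemId) + "," + escape(ownerId);
            return MessageDigest.getInstance("SHA-256")
                    .digest(combined.getBytes(StandardCharsets.UTF_8));
        }
    }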
Fix a trivial bug in the iterator returned by 'IntListMultimap::get',
caused by mistaken use of the iterator index in place of the key when
doing lookups into the overspill map. This was causing puzzle solutions
to be invalid about 3% of the time, as well as substantially reducing
the average number of solutions found per nonce.
As the fix increases the mean solution count per nonce to the correct
value of 2.0 predicted by the paper (regardless of puzzle params k & n),
inline the affected constants to simplify 'Equihash::adjustDifficulty'.
Remove all the 'challengeValidation', 'difficultyValidation' and
'testDifficulty' BiPredicate method params from 'HashCashService' &
'ProofOfWorkService', to simplify the API. These were originally
included to aid testing, but turned out to be unnecessary.
Patches committed on behalf of @chimp1984.
Change the type of the 'difficulty' field in the Filter & ProofOfWork
proto objects from int32/bytes to double and make it use a linear scale,
in place of the original logarithmic scale which counts the (effective)
number of required zeros.
This allows fine-grained difficulty control for Equihash, though for
Hashcash it simply rounds up to the nearest power of 2 internally.
NOTE: This is a breaking change to PoW & filter serialisation (unlike
the earlier PR commits), as the proto field version nums aren't updated.
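A hedged sketch of the internal Hashcash rounding (not the actual Bisq
code): since Hashcash can only honour whole leading-zero counts, a
linear difficulty is rounded up to the next power of 2.

    class HashcashDifficultySketch {
        // E.g. difficulty 20.0 -> 5 leading zeros, i.e. an effective
        // difficulty of 32.
        static int toNumLeadingZeros(double difficulty) {
            int numLeadingZeros = 0;
            for (double d = 1.0; d < difficulty; d *= 2.0) {
                numLeadingZeros++;
            }
            return numLeadingZeros;
        }
    }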
Add an abstract base class, 'ProofOfWorkService', for the existing PoW
implementation 'HashCashService' and a new 'EquihashProofOfWorkService'
PoW implementation based on Equihash-90-5 (which has 72 byte solutions &
5-10 MB peak memory usage). Since the current 'ProofOfWork' protobuf
object only provides a 64-bit counter field to hold the puzzle solution
(as that is all Hashcash requires), repurpose the 'payload' field to
hold the Equihash puzzle solution bytes, with the 'challenge' field
equal to the puzzle seed: the SHA256 hash of the offerId & makerAddress.
Use a difficulty scale factor of 3e-5 (derived from benchmarking) to try
to make the average Hashcash & Equihash puzzle solution times roughly
equal for any given log-difficulty/numLeadingZeros integer chosen in the
filter.
NOTE: An empty enabled-version-list in the filter defaults to Hashcash
(= version 0) only. The new Equihash-90-5 PoW scheme is version 1.
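A hedged sketch of how the scale factor might be applied when turning
the filter's numLeadingZeros into an Equihash difficulty (illustrative
only; the actual conversion code may differ):

    class EquihashDifficultySketch {
        // Derived from benchmarking, so that the average Equihash-90-5 solve
        // time roughly matches Hashcash for the same numLeadingZeros.
        private static final double DIFFICULTY_SCALE_FACTOR = 3.0e-5;

        static double toEquihashDifficulty(int numLeadingZeros) {
            // Hashcash needs ~2^numLeadingZeros hash attempts on average.
            return Math.max(DIFFICULTY_SCALE_FACTOR * Math.scalb(1.0, numLeadingZeros), 1.0);
        }
    }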
Provide a utility method, 'Equihash::adjustDifficulty', to linearise and
normalise the expected time taken to solve a puzzle, as a function of
the provided difficulty, by taking into account the fact that there
could be 0, 1, 2 or more puzzle solutions for any given nonce. (Wagner's
algorithm is supposed to give 2 solutions on average, but the observed
number is fewer, possibly due to duplicate removal.) For tractability,
assume that the solution count has a Poisson distribution, which seems
to have good agreement with the tests.
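A hedged sketch of that adjustment (not necessarily the exact Bisq
formula): if the solution count per nonce is Poisson with mean m, and
each solution passes the target independently with probability
1/adjustedDifficulty, then a nonce succeeds with probability
1 - exp(-m/adjustedDifficulty). Setting that equal to 1/realDifficulty
and solving gives the adjusted value.

    class AdjustDifficultySketch {
        // Mean solution count per nonce predicted by the paper.
        private static final double MEAN_SOLUTION_COUNT_PER_NONCE = 2.0;

        // Solve 1 - exp(-mean / adjusted) = 1 / real for 'adjusted', so that
        // the expected number of nonces tried grows linearly with 'real'.
        static double adjustDifficulty(double realDifficulty) {
            if (realDifficulty <= 1.0) {
                return 1.0;
            }
            return Math.max(-MEAN_SOLUTION_COUNT_PER_NONCE /
                    Math.log1p(-1.0 / realDifficulty), 1.0);
        }
    }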
Also add some (disabled) benchmarks to EquihashTest. These reveal an
Equihash-90-5 solution time of ~146ms per puzzle per unit difficulty on
a Core i3 laptop, with a verification time of ~50 microseconds.
Add a numeric version field to the 'ProofOfWork' protobuf object, along
with a list of allowed version numbers, 'enabled_pow_versions', to the
filter. The versions are taken to be in order of preference from most to
least preferred when creating a PoW, with an empty list signifying use
of the default algorithm only (that is, version 0: Hashcash).
An explicit list is used instead of an upper & lower version bound, in
case a new PoW algorithm (or changed algorithm params) turns out to
provide worse resistance than an earlier version.
(The fields are unused for now, to be enabled in a later commit.)
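An illustrative sketch of the intended selection logic (not actual Bisq
code; the fallback behaviour when no listed version is supported is an
assumption here):

    import java.util.List;
    import java.util.Set;

    class PowVersionSketch {
        static int selectPowVersion(List<Integer> enabledPowVersions, Set<Integer> locallySupported) {
            if (enabledPowVersions.isEmpty()) {
                return 0;   // empty list = default algorithm only (Hashcash)
            }
            // The list is ordered from most to least preferred.
            return enabledPowVersions.stream()
                    .filter(locallySupported::contains)
                    .findFirst()
                    .orElse(0);
        }
    }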
Run the initial XorTable fillup in 'Equihash::computeAllHashes' in
parallel, using a parallel stream, to get an easy speed up. (The solver
spends about half its time computing BLAKE2b hashes before iteratively
building tables of partial collisions using 'Equihash::findCollisions'.)
As part of this, replace the use of 'java.nio.ByteBuffer' array wrapping
in 'Utilities::(bytesToIntsBE|intsToBytesBE)' with manual for-loops, as
profiling reveals an unexpected bottleneck in the former when used in a
multithreaded setting. (Lock contention somewhere in unsafe code?)
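A minimal sketch of the manual-loop replacement (the actual 'Utilities'
methods may differ in naming and signature):

    class ByteConversionSketch {
        // Big-endian int[] -> byte[] without ByteBuffer, avoiding the
        // observed contention when many solver threads convert concurrently.
        static byte[] intsToBytesBE(int[] ints) {
            byte[] bytes = new byte[ints.length * 4];
            for (int i = 0, j = 0; i < ints.length; i++) {
                int v = ints[i];
                bytes[j++] = (byte) (v >>> 24);
                bytes[j++] = (byte) (v >>> 16);
                bytes[j++] = (byte) (v >>> 8);
                bytes[j++] = (byte) v;
            }
            return bytes;
        }

        // Big-endian byte[] -> int[], the inverse of the above.
        static int[] bytesToIntsBE(byte[] bytes) {
            int[] ints = new int[bytes.length / 4];
            for (int i = 0, j = 0; i < ints.length; i++) {
                ints[i] = (bytes[j++] & 0xFF) << 24 | (bytes[j++] & 0xFF) << 16
                        | (bytes[j++] & 0xFF) << 8 | (bytes[j++] & 0xFF);
            }
            return ints;
        }
    }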
Manually iterate over colliding table rows using a while-loop and a
custom 'PrimitiveIterator.OfInt' implementation, instead of a foreach
lambda called on an IntStream, in 'Equihash::findCollisions'. Profiling
shows that this results in a slight speedup.
Provide a (vastly cut down) drop-in replacement for the Guava multimap
instance 'indexMultimap', of type 'ListMultimap<Integer, Integer>', used
to map table row indices to block values, to detect collisions at a
given block position (that is, in a given table column).
The replacement stores (multi-)mappings from ints to ints in a flat int
array, only spilling over to a ListMultimap if there are more than 4
values added for a given key. This vastly reduces the amount of boxing
and memory usage when running 'Equihash::findCollisions' to build up the
next table as part of Wagner's algorithm.
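A heavily simplified sketch of the idea (the real IntListMultimap
differs in detail; for brevity this version assumes nonzero values, so
a 0 slot can mark "empty"). Note that the overspill lookup is keyed by
'key', which is exactly where the iterator bug fixed above had used the
iteration index instead:

    import com.google.common.collect.ArrayListMultimap;
    import com.google.common.collect.ListMultimap;
    import java.util.Iterator;
    import java.util.PrimitiveIterator;

    class IntListMultimapSketch {
        private static final int SLOTS = 4;
        private final int[] flat;           // SLOTS entries per key, 0 = empty
        private final ListMultimap<Integer, Integer> overspill = ArrayListMultimap.create();

        IntListMultimapSketch(int keyCount) {
            flat = new int[keyCount * SLOTS];
        }

        void put(int key, int value) {
            for (int s = 0; s < SLOTS; s++) {
                if (flat[key * SLOTS + s] == 0) {
                    flat[key * SLOTS + s] = value;
                    return;
                }
            }
            overspill.put(key, value);      // more than 4 values for this key
        }

        PrimitiveIterator.OfInt get(int key) {
            return new PrimitiveIterator.OfInt() {
                int slot = 0;
                // Look up the overspill map by 'key', not by any running index.
                final Iterator<Integer> rest = overspill.get(key).iterator();

                @Override
                public boolean hasNext() {
                    return (slot < SLOTS && flat[key * SLOTS + slot] != 0) || rest.hasNext();
                }

                @Override
                public int nextInt() {
                    return slot < SLOTS && flat[key * SLOTS + slot] != 0
                            ? flat[key * SLOTS + slot++]
                            : rest.next();
                }
            };
        }
    }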
Implement the Equihash (https://eprint.iacr.org/2015/946.pdf) algorithm
for solving/verifying memory-hard client-puzzles/proof-of-work problems
for ASIC-resistant DoS attack protection. The scheme is asymmetric, so
that even though solving a puzzle is slow and memory-intensive, needing
hundreds of kB to MBs of memory, the solution verification is instant.
Instead of a single 64-bit counter/nonce, as in the case of Hashcash,
Equihash solutions are larger objects ranging from tens of bytes to a
few kB, depending on the puzzle parameters used. These need to be
stored in their entirety in the proof-of-work field of each offer
payload.
Include logic for fine-grained difficulty control in Equihash with a
double-precision floating point number. This is based on lexicographic
comparison with a target hash, like in Bitcoin, instead of just
counting the number of leading zeros of a hash.
The code is unused at present. Also add some simple unit tests.
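A hedged sketch of the target-comparison idea (not the actual Bisq
code): the fractional difficulty defines a 256-bit target, and a
candidate solution hash passes if it does not exceed that target when
read as an unsigned integer.

    import java.math.BigDecimal;
    import java.math.BigInteger;
    import java.math.MathContext;

    class DifficultyCheckSketch {
        // target = floor(2^256 / difficulty); smaller targets are harder to hit.
        static BigInteger targetFor(double difficulty) {
            BigDecimal numerator = new BigDecimal(BigInteger.ONE.shiftLeft(256));
            return numerator.divide(BigDecimal.valueOf(Math.max(difficulty, 1.0)),
                    MathContext.DECIMAL64).toBigInteger();
        }

        static boolean hashMeetsDifficulty(byte[] hash256, double difficulty) {
            return new BigInteger(1, hash256).compareTo(targetFor(difficulty)) <= 0;
        }
    }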
Replace 'BiFunction<T, U, Boolean>' with the primitive specialisation
'BiPredicate<T, U>' in HashCashService & FilterManager.
As part of this, replace similar predicate constructs found elsewhere.
NOTE: This touches the DAO packages (trivially @ VoteResultService).
to startPeriodicTasks.
Call that from BisqExecutable.onApplicationLaunched
We previously created a non-UI timer for printSystemLoadPeriodically,
as it was called before configUserThread was invoked (where we define
the UI timer for desktop).
Improve logging
Add BsqBlockStore to protobuf
Remove DaoStateMonitoringService field
Do not persist the blocks in daoState anymore.
This improves persistence performance and reduces memory
requirements for snapshots.
Increase duration for autoReleaseMemory from 60 to 120 sec.
Improve logging.
Add printing of the stack trace when in dev mode to show the caller, for debugging/tuning.
Remove inefficient GC calls (based on test runs where no reduction occurred at those calls).
Seed nodes have 4 GB allocated, so the attempt to reduce the memory
footprint is not needed in those cases. We use the fullDaoNode
prog arg as it's expected that any full node will have allocated
sufficient memory, and thus the intended reduction of memory by
calling the GC is not required and is counterproductive.
Some i18n property values can be used by the API if long strings are
wrapped before being written as comments to json payment account forms,
or written to the CLI console.
This change anticipates the addition of the more complex Swift payment
method (PR 5672). PR 5672's i18n property value for key "payment.swift.info"
will be wrapped and appended to the comments of the Swift payment account's
json form.
In certain situations we deal with infinite (or NaN) doubles. This causes issues when trying to instantiate a BigDecimal from them, which is an intermediate step in the process of rounding the double.
This commit checks for such non-finite doubles and throws a specific exception if their rounding is attempted.
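A minimal sketch of the guard (exception type and method name are
illustrative, not necessarily what Bisq uses):

    import java.math.BigDecimal;
    import java.math.RoundingMode;

    class RoundingSketch {
        static double roundDouble(double value, int precision) {
            if (!Double.isFinite(value)) {
                // Fail fast with a clear message instead of an obscure
                // NumberFormatException from BigDecimal.
                throw new IllegalArgumentException("Expected a finite double, but got " + value);
            }
            return BigDecimal.valueOf(value)
                    .setScale(precision, RoundingMode.HALF_UP)
                    .doubleValue();
        }
    }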
These changes make tradeCurrencies a required Transferwise account form field.
Addresses issue https://github.com/bisq-network/bisq/issues/5281
- Added comment to PaymentAccountForm about special 'tradeCurrencies' field handling.
- Added support for Transferwise 'tradeCurrencies' to PaymentAccountTypeAdapter.
- Added an instanceof-based isTransferwiseAccount check to PaymentAccount.
- Added methods to CurrencyUtil to convert comma delimited code list to TradeCurrency list.
- Added methods to ReflectionUtil for getting fields and methods on a class.
- Added required field check to CorePaymentAccountsService.
- Added CreatePaymentAccount test cases.
prevent write to disk before it is called.
In case the user started Bisq 2 times, the second process
would have called flushAllDataToDisk at shutdown and
overwritten data from the running app.
Provide a new 'BitcoindDaemon' block notification socket server, to
replace 'com.neemre.btcdcli4j.daemon.BtcdDaemonImpl'. This starts a
single service thread to listen for raw block hashes on localhost port
512*, sent by the specified 'blocknotify' shell/batch script, delegating
to a pool of worker threads to run the supplied BlockListener handler.
Unlike the original BtcdDaemonImpl class, a call to the 'getblock' RPC
method is not made automatically to supply a complete block to the
handler, instead requiring a separate, manual BitcoindClient.getBlock
invocation from within RpcService.
Also provide unit tests using a mock ServerSocket + Socket.
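A heavily cut-down sketch of the described server structure (port,
thread counts and names are illustrative; the real BitcoindDaemon also
handles shutdown, errors and configuration):

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.ServerSocket;
    import java.net.Socket;
    import java.nio.charset.StandardCharsets;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.function.Consumer;

    class BlockNotifyServerSketch {
        private final ExecutorService listener = Executors.newSingleThreadExecutor();
        private final ExecutorService workers = Executors.newFixedThreadPool(4);

        // 'blockHashHandler' plays the role of the block listener: it only
        // receives the raw hash; fetching the full block via the 'getblock'
        // RPC is left to the caller (RpcService).
        void start(int port, Consumer<String> blockHashHandler) throws Exception {
            ServerSocket serverSocket = new ServerSocket(port);
            listener.execute(() -> {
                while (!serverSocket.isClosed()) {
                    try (Socket socket = serverSocket.accept();
                         BufferedReader in = new BufferedReader(new InputStreamReader(
                                 socket.getInputStream(), StandardCharsets.UTF_8))) {
                        // One hash per connection, sent by the blocknotify script.
                        String blockHash = in.readLine();
                        if (blockHash != null) {
                            workers.execute(() -> blockHashHandler.accept(blockHash.trim()));
                        }
                    } catch (Exception e) {
                        // A real implementation would log and decide whether
                        // to keep listening.
                    }
                }
            });
        }
    }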
TODO: Use the new Bitcoind(Client|Daemon) implementations in RpcService,
in place of btcdcli4j Btcd(Client|Daemon)Impl & remove the old library.
It seems the persistence at shutdown is too unsafe; we got bug reports where data was missing.
https://github.com/bisq-network/bisq/issues/4806
Use millisec instead of sec for delay
Rename delayInSec to delay
(which can happen in rare cases) and add guards so that we never create multiple instances
for a given file, as well as never call initialize or other API methods after shutdown has started.