Use a separate file, delayed_payout_txs, to write all txs and their
delayed payout tx hashes as json. This will enable users to publish
locked txs when their data dir has been corrupted, as long as the trade
data is there.
* Refactoring of the and() function in ValidationResult
This function didn't have a terminating short circuit, which is
important for and-operator functions. It also gets called multiple
times since it is a validation function, hence the improvement was
needed here.
After verifying usages, the following two classes were left out:
1. PercentageNumberValidator - since it uses only two validations
2. PhoneNumberValidator - has variable inputs and this is the only place it is used.
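For illustration, a minimal sketch of a short-circuiting combinator of this
kind, assuming a simplified ValidationResult (the names below are
illustrative, not the exact Bisq API):

    import java.util.function.Function;

    // Simplified, illustrative ValidationResult; not Bisq's exact class.
    public class ValidationResult {
        public final boolean isValid;
        public final String errorMessage;

        public ValidationResult(boolean isValid, String errorMessage) {
            this.isValid = isValid;
            this.errorMessage = errorMessage;
        }

        // Apply validators in order and stop at the first failure, so later
        // (possibly expensive) validators are not evaluated needlessly.
        @SafeVarargs
        public static ValidationResult andValidation(
                String input, Function<String, ValidationResult>... validators) {
            ValidationResult result = new ValidationResult(true, null);
            for (Function<String, ValidationResult> validator : validators) {
                result = validator.apply(input);
                if (!result.isValid)
                    return result;   // terminating short circuit
            }
            return result;
        }
    }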
* Fix code format issue
Co-authored-by: Christoph Atteneder <christoph.atteneder@gmail.com>
Replace tail recursion of the play() method with an ordinary loop, to
prevent a new open JAR resource InputStream + sound file OutputStream
(which were created on every 4-minute playback) from accumulating on the
stack, closing them inside the loop instead. (This also prevents
eventual stack overflow.)
Also tidy up FileUtil.resourceToFile and put the JAR URL InputStream in
a try-with-resources block, to ensure that it doesn't leak either.
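A rough sketch of the resulting pattern (looping instead of recursing, with
the streams scoped to each iteration); the class, resource and file names
here are illustrative, not the actual Bisq player code:

    import java.io.*;

    // Illustrative sketch only; not the actual Bisq sound-player code.
    class SoundLoop {
        private volatile boolean stopped;

        void play(File soundFile) throws InterruptedException, IOException {
            while (!stopped) {
                // Open both streams per iteration and close them again before
                // the next pass, instead of leaving them open via a recursive call.
                try (InputStream in = getClass().getResourceAsStream("/prompt.wav");
                     OutputStream out = new FileOutputStream(soundFile)) {
                    in.transferTo(out);
                }
                // ... play soundFile here; the sleep stands in for the playback time ...
                Thread.sleep(4 * 60 * 1000);
            }
        }
    }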
Previously, Travis CI was failing non-deterministically due to a race
condition in which a thread was started in order to call the blocking
ServerSocket.accept() method, and sometimes the subsequent attempt by
LocalBitcoinNode.detectAndRun() to connect to that socket's port would
occur before the thread had actually called the accept() method.
This commit simplifies the approach by removing the thread entirely. As
it turns out, calling accept() is not necessary; simply constructing a
new ServerSocket() binds to and listens on the given port, such that a
subsequent attempt to connect() will succeed.
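For illustration, this standalone snippet shows why accept() is not needed
for a connection attempt to succeed (the port number is arbitrary):

    import java.net.ServerSocket;
    import java.net.Socket;

    public class BindWithoutAccept {
        public static void main(String[] args) throws Exception {
            // Constructing the ServerSocket already binds and listens on the port.
            try (ServerSocket server = new ServerSocket(18444)) {
                // No server.accept() call is made, yet connecting succeeds,
                // because the OS queues the incoming connection.
                try (Socket client = new Socket("127.0.0.1", 18444)) {
                    System.out.println("connected: " + client.isConnected());
                }
            }
        }
    }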
Previously ConfigTests constructed Config instances with string-based
options, e.g.:
Config config = new Config("--appName=My-Bisq");
The advantage here is clarity, but the downside is repetition of the
option names without any reference to their corresponding Config.*
constants.
One solution to the problem would be to format the option strings using
constants declared in the Config class, e.g.:
Config config = new Config(format("--%s=My-Bisq", APP_NAME));
but this is verbose, cumbersome to read and write and requires repeating
the '--' and '=' option syntax.
This commit introduces the Opt class and the opt() and configWithOpts()
methods to ConfigTests to make testing easier while using constant
references and preserving readability. e.g.:
Config config = configWithOpts(opt(APP_NAME, "My-Bisq"));
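A plausible shape for those helpers, in sketch form (the actual ConfigTests
code may differ in details; it assumes Config accepts String varargs as in
the examples above):

    // Sketch of the helpers; the real ConfigTests implementation may differ.
    static class Opt {
        final String name;
        final String value;

        Opt(String name, String value) {
            this.name = name;
            this.value = value;
        }
    }

    static Opt opt(String name, String value) {
        return new Opt(name, value);
    }

    static Config configWithOpts(Opt... opts) {
        String[] args = new String[opts.length];
        for (int i = 0; i < opts.length; i++)
            args[i] = String.format("--%s=%s", opts[i].name, opts[i].value);
        return new Config(args);
    }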
In the process of making these changes a bug was discovered in the
monitor submodule's P2PNetworkLoad class and that has been fixed here as
well.
This change also required introducing several option name constants that
had not previously been extracted in order to be referenced within
ConfigTests. For consistency and completeness, all additional option
names that did not previously have a constant now have one.
This new class encapsulates all functionality related to detecting a
local Bitcoin node and reporting whether or not it was detected.
Previously this functionality was spread across the Config class
(formerly BisqEnvironment) with its mutable static
isLocalBitcoinNodeRunning property and the BisqSetup class with its
checkIfLocalHostNodeIsRunning method. All of this functionality now
lives within the LocalBitcoinNode class, an instance of which is wired
up via Guice and injected wherever necessary.
Note that the code for detecting whether the node is running has been
simplified, in that it is no longer wrapped in its own dedicated Thread.
There appears to be no performance benefit from doing so, and leaving it
in place would have made testing more difficult than necessary.
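A hedged sketch of the simplified detection, i.e. a plain connection attempt
on the calling thread (the method and field names here are illustrative):

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.net.Socket;

    // Illustrative sketch; the real LocalBitcoinNode class differs in detail.
    public class LocalBitcoinNode {
        private static final int CONNECTION_TIMEOUT_MS = 5000;
        private final int port;
        private boolean detected;

        public LocalBitcoinNode(int port) {
            this.port = port;
        }

        // No dedicated thread: just try to connect and record the result.
        public boolean detect() {
            try (Socket socket = new Socket()) {
                socket.connect(new InetSocketAddress("127.0.0.1", port), CONNECTION_TIMEOUT_MS);
                detected = true;
            } catch (IOException e) {
                detected = false;
            }
            return detected;
        }

        public boolean isDetected() {
            return detected;
        }
    }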
Several methods in BisqSetup have also been refactored to accept
callbacks indicating which step should be run next. This has the effect
of clarifying when the step2()-step5() methods will be called.
Previously this static property had been managed within
BaseCurrencyNetwork itself and was accessed directly by callers. Now it
is managed within Config, made private and accessed only via the
new and well-documented baseCurrencyNetwork() method. The same goes for
baseCurrencyNetworkParameters().
It is unfortunate that we must retain these mutable static fields and
accessors, but after trying to eliminate them entirely, this approach is
the lesser of two evils; attempting to use a Config instance and
instance methods only ends up being quite cumbersome to implement,
requiring Config to be injected into many more classes than it currently
is. Getting access to the BaseCurrencyNetwork is basically a special
case, and treating it specially as a static field is in the end the most
pragmatic approach.
Prior to this commit, the way that the appDataDir and its subdirectories
were created was a haphazard process that worked but in a fragile and
non-obvious way. When Config was instantiated, an attempt to call
btcNetworkDir.mkdir() was made, but if appDataDir did not already exist,
this call would always fail because mkdir() does not create parent
directories. This problem was never detected, though, because the
KeyStorage class happened to call mkdirs() on its 'keys' subdirectory,
which, because of the plural mkdirs() call, ended up creating the whole
${appDataDir}/${btcNetworkDir}/keys hierarchy. Other btcNetworkDir
subdirectories such as tor/ and db/ then benefited from the hierarchy
already existing when they attempted to call mkdir() for their own dirs.
So the whole arrangement worked only because KeyStorage happened to make
a mkdirs() call and because that code in KeyStorage happened to get
invoked before the code that managed the other subdirectories.
This change ensures that appDataDir and all its subdirectories are
created up front, such that they are guaranteed to exist by the time
they are injected into Storage, KeyStorage, WalletsSetup and TorSetup.
The hierarchy is unchanged, structured as it always has been:
${appDataDir}
└── btc_mainnet
├── db
├── keys
├── wallet
└── tor
Note that the tor/ subdirectory actually gets deleted and re-created
within the TorSetup infrastructure regardless of whether the directory
exists beforehand.
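A minimal sketch of creating the whole hierarchy up front with mkdirs()
(the directory names follow the tree above; the helper class and method are
illustrative, not the exact Config code):

    import java.io.File;

    // Illustrative helper; the real Config code differs in details.
    class DataDirs {
        static void mkdirAll(File appDataDir, String btcNetworkDirName) {
            File btcNetworkDir = new File(appDataDir, btcNetworkDirName);
            for (String sub : new String[]{"db", "keys", "wallet", "tor"}) {
                File dir = new File(btcNetworkDir, sub);
                // mkdirs() also creates missing parent directories, unlike mkdir().
                if (!dir.exists() && !dir.mkdirs())
                    throw new IllegalStateException("Could not create directory " + dir);
            }
        }
    }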
Option handling is now the responsibility of the Config class. JOpt's
OptionParser is no longer passed down to BisqExecutable subclasses'
doExecute method, as they can now rely on the Config abstraction.
Previously, certain exceptions e.g. IllegalArgumentException would
result in the Bisq process exiting with a stack trace. This change
broadens exception handling during argument parsing such that all
Throwable subclasses are caught and the Bisq process exits gracefully
with a simple error message.
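Roughly the pattern described, in sketch form (the exact message and exit
code are assumptions, not the actual BisqExecutable wiring):

    // Sketch; the actual BisqExecutable/Config wiring differs.
    public static void main(String[] args) {
        Config config;
        try {
            config = new Config(args);
        } catch (Throwable t) {
            // Any parsing problem (e.g. IllegalArgumentException) exits gracefully
            // with a one-line message instead of a stack trace.
            System.err.println("error: " + t.getMessage());
            System.exit(1);
            return;
        }
        // ... continue startup with the parsed config ...
    }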
In previous commits, BisqEnvironment functionality has been fully ported
to the new, simpler and more type-safe Config class. This change removes
BisqEnvironment and all dependencies on the Spring Framework Environment
interface that it implements.
The one exception is the pricenode module, which is separate and apart
from the rest of the codebase in that it is a standalone, Spring-based
HTTP service.
Note that this commit includes the user-facing change of renaming the
'numConnectionForBtc' option (singular 'Connection') to
'numConnectionsForBtc' (plural 'Connections'). It is presumed that not
many users are relying on this option for day-to-day operations, and the
singular version was pretty clearly a typo / oversight.
Note that the default value of 600 advertised in BisqExecutable's option
handling was incorrect. The actual value had since become 1200 MB. This
correct default is now reflected in Config's option handling.
This was added way back in 8f8866da and has since fallen out of use
entirely.
Removing it now because it depends on BisqEnvironment.appName, which
will be moved to Config in the next commit.
NOTE: This removes entirely the old BisqExecutable.appDataDir method
implemented for the v0.5.3 hotfix that renames the data dir from 'bisq'
to 'Bisq'. See a7f3d68cb for details.
Set OptionParser.allowsUnrecognizedOptions(true) to make sure
BisqEnvironment doesn't fail while options are still being transferred
one-by-one to Config.
Prior to this commit, BisqExecutable has been responsible for parsing
command line and config file options and BisqEnvironment has been
responsible for assigning default values to those options and providing
access to option values to callers throughout the codebase.
This approach has worked, but at considerable costs in complexity,
verbosity, and lack of any type-safety in option values. BisqEnvironment
is based on the Spring Framework's Environment abstraction, which
provides a great deal of flexibility in handling command line options,
environment variables, and more, but also operates on the assumption
that such inputs have String-based values.
After having this infrastructure in place for years now, it has become
evident that using Spring's Environment abstraction was both overkill
for what we needed and limited us from getting the kind of concision and
type safety that we want. The Environment abstraction is by default
actually too flexible. For example, Bisq does not want or need to have
environment variables potentially overriding configuration file values,
as this increases our attack surface and makes our threat model more
complex. This is why we explicitly removed support for handling
environment variables quite some time ago.
The BisqEnvironment class has also organically evolved toward becoming a
kind of "God object", responsible for more than just option handling. It
is also, for example, responsible for tracking the status of the user's
local Bitcoin node, if any. It is also responsible for writing values to
the bisq.properties config file when certain ban filters arrive via the
p2p network. In the commits that follow, these unrelated functions will
be factored out appropriately in order to separate concerns.
As a solution to these problems, this commit begins the process of
eliminating BisqEnvironment in favor of a new, bespoke Config class
custom-tailored to Bisq's needs. Config removes the responsibility for
option parsing from BisqExecutable, and in the end provides "one-stop
shopping" for all option parsing and access needs.
The changes included in this commit represent a proof of concept for the
Config class, where handling of a number of options has been moved from
BisqEnvironment and BisqExecutable over to Config. Because the migration
is only partial, both Config and BisqEnvironment are injected
side-by-side into calling code that needs access to options. As the
migration is completed, BisqEnvironment will be removed entirely, and
only the Config object will remain.
An additional benefit of the elimination of BisqEnvironment is that it
will allow us to remove our dependency on the Spring Framework (with the
exception of the standalone pricenode application, which is Spring-based
by design).
Note that while this change and those that follow it are principally a
refactoring effort, certain functional changes have been introduced. For
example, Bisq now supports a `--configFile` argument at the command line
that functions very similarly to Bitcoin Core's `-conf` option.
Problem: a stack trace was being thrown during daemon startup from
BisqDaemonMain.onSetupComplete when it attempted to start a
second BisqGrpcServer and failed to bind to the already-bound port.
The first BisqGrpcServer is started in
BisqDaemonMain.onApplicationStarted much earlier in the startup process.
Solution: remove the second attempt to start the server by removing
BisqDaemonMain's implementation of onSetupComplete, and in turn remove
the now-obsolete bisqGrpcServer field as well.
This change also eliminates the BisqGrpcServer.blockUntilShutdown
method, which in turn called the underlying grpc server's
awaitTermination method. As the comment there explained, this was
thought to be necessary because grpc "does not use daemon threads by
default", but this is actually incorrect. According to the grpc Javadoc
at [1], "Grpc uses non-daemon Threads by default and thus a Server will
continue to run even after the main thread has terminated."
[1]: https://git.io/JePjn
The :grpc module will soon be renamed to :daemon. These two modules
represent two separate and equal modes of running bisq, either as a
desktop GUI or as a daemon. They are both applications, and one should
not depend on the other as it would be illogical and confusing to model
things that way. The reason for the current dependency from :desktop to
:grpc is because :grpc is the home of BisqGrpcServer. This change moves
this class up to :core, in a new bisq.core.grpc package, such that both
the :desktop and :daemon applications can pull it in cleanly.
The CoreApi 'facade' that BisqGrpcServer uses to abstract away bisq
internals has been moved from bisq.core to bisq.core.grpc as well and
for the same reasons detailed in 8b30c22d6.
This change also renames the Java package for generated grpc types from
bisq.grpc.protobuf to bisq.core.grpc (the same new package that
BisqGrpcServer and CoreApi now live in). Again, this is for reasons of
cohesion: BisqGrpcServer is the only user of these grpc-generated types,
and they should logically live in the same package (even if they
physically live in separate source dirs at the build level).
There are two structural / organizational reasons for this move:
1. References from one package to another should always be upward or
lateral, never downward, as the latter causes package cycles (aka
'tangles') which damage the suppleness and understandability of a large
codebase. Prior to this change the high-level bisq.core.CoreModule
class imported many classes from child packages like
bisq.core.{btc,dao,user,util}, etc. By moving CoreModule down into the
'.app' package, it can reference all these other packages as siblings
instead of doing so as a parent.
2. the bisq.core.desktop and bisq.core.app packages are the only
locations that reference the CoreModule class. By moving the class
into bisq.core.app, greater cohesion is achieved, again making the
codebase that much easier to read and understand.
Relevant issue thread: #3753
Currently the daily burnt BSQ chart under 'DAO -> Facts and Figures' is
distorted by outliers. This introduces a 'Zoom to inliers' toggle (off
by default), which, when toggled on, effectively zooms the chart to
inliers, thus removing the distortion. Also, a moving average is
plotted, to further improve the chart's readability.
The chart is also changed from an area chart to a line chart, on the
presumption that it was an area chart for cosmetic reasons, but now that
there are two series in it (the moving average was added) an area chart
makes less sense.
Another noteworthy change is that the other chart in the screen, monthly
issued BSQ, has its Y axis set to start at zero, so as to improve
readability. This might seem outside the scope of this commit, but the
other changes involved some refactoring, which involved cleaning up some
duplicated logic, which involved configuring both of these charts
together, which involved forcing zero to be on the axis.
This implementation mixes some plotting logic (responsible for zooming
in on inliers) into the view logic, because I opted to implement said
zooming as an external manipulation of a chart's axis. I chose this
over implementing a new Chart, because that would have required adding
multiple large classes to the code base (the relevant JavaFX classes
can't be ergonomically extended). I presumed that my chosen solution
will be easier to maintain.
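A rough sketch of what such external axis manipulation can look like in
JavaFX (the inlier-bounds calculation below is a simple placeholder, not the
exact algorithm used in the view):

    import java.util.Collection;
    import javafx.scene.chart.NumberAxis;

    // Illustrative only; the actual inlier calculation in the view differs.
    class AxisZoom {
        static void zoomToInliers(NumberAxis yAxis, Collection<Double> values) {
            double[] sorted = values.stream().mapToDouble(Double::doubleValue).sorted().toArray();
            if (sorted.length == 0)
                return;
            // Placeholder inlier range: clip the axis to the 5th..95th percentile.
            double lower = sorted[(int) (sorted.length * 0.05)];
            double upper = sorted[Math.min(sorted.length - 1, (int) (sorted.length * 0.95))];
            yAxis.setAutoRanging(false);
            yAxis.setLowerBound(Math.min(0, lower));
            yAxis.setUpperBound(upper);
        }
    }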
I am not entirely happy with this choice and can see myself introducing
some plotting-related classes to encapsulate creating charts like these,
thus unmixing plotting logic from view logic. In the meantime this is a
working solution, and I plan to continue working on these charts in the
near future.
If the user entered an invalid hostname for a custom BTC node, such as a
V3 onion address, after restarting Bisq they would be presented with an
error due to the node being unreachable, and were able neither to
continue nor to correct the config.
Now a warning message is shown in this situation, informing the user of
the invalid config; once they restart their client, they will connect to
the provided BTC nodes.
Fixes #3137
The previous commit introduces the BisqGrpcServer as a proof of concept,
but it is not yet ready for production use. This commit removes the
`--desktopWithGrpcApi` option that starts the gRPC server until such
time that it is production-ready.
This change also removes the `--desktopWithHttpApi` option for starting
an HTTP API server. The option has been in place for some time, but it
was 'false advertising' in the sense that nothing actually happened if
the user specified it, because there is in fact no HTTP API
implementation to be started.
Note that when the gRPC API option is reintroduced, it will be renamed
to `--rpcserver` or similar, following the convention in Bitcoin Core.
This commit introduces a new `grpc` module including the following key
types:
- BisqGrpcServer: The API implementation itself, along with generated
gRPC Request/Reply types defined in grpc/src/main/proto/grpc.proto.
- BisqGrpcServerMain: A 'headless' / daemon-like entry point for
running a Bisq node without the JavaFX desktop UI.
- BisqGrpcClient: A simple, repl-style client for the API that allows
the user to exercise the various endpoints as seen in the example
below.
In the `desktop` module, the BisqAppMain class has been modified to
start a BisqGrpcServer instance if the `--desktopWithGrpcApi` option has
been set to `true`.
In the `core` module, a new `CoreApi` class has been introduced
providing a kind of comprehensive facade for all Bisq functionality to
be exposed via the RPC API.
How to explore the proof of concept:
1. Run the main() method in BisqAppMain providing
`--desktopWithGrpcApi=true` as a program argument or alternatively, run
the main() method in BisqGrpcServerMain, where no special option is
required. In either case, you'll notice the following entry in the log
output:
INFO bisq.grpc.BisqGrpcServer: Server started, listening on 8888
2. Now run the main() method in BisqGrpcClient. Once it has started up
you are connected to the gRPC server started in step 1 above. To
exercise the API, type `getVersion` via stdin and hit return. You
should see the following response:
INFO bisq.grpc.BisqGrpcClient - 1.2.4
Likewise, you can type `getBalance` and you'll see the following
response:
INFO bisq.grpc.BisqGrpcClient - 0.00 BTC
and so forth for each of the implemented endpoints. For a list of
implemented endpoints, see BisqGrpcServer.start().
Note once again that the code here is merely a proof of concept and
should not be considered complete or production-ready in any way. In a
subsequent commit, the `--desktopWithGrpcApi` option will be disabled in
order to avoid any potential production use.
The content of this commit is the result of squashing a number of
commits originally authored by chimp1984 in the `chimp1984` fork's `grpc`
branch.
Co-authored-by: Chris Beams <chris@beams.io>
* Add bsq.ninja BSQ explorer operated by @wiz
* Add bsq.bisq.services BSQ explorer
* Add bsq.sqrrm.net BSQ explorer operated by @sqrrm
* Add bsq.vante.me BSQ explorer operated by @mrosseel
* Add bsq.emzy.de BSQ explorer operated by @emzy
Co-authored-by: Devin Bileck <603793+devinbileck@users.noreply.github.com>
After further testing, I realized I introduced a bug in commit
4f4b0f6ce9 because apparently the
ArrayList.contains() method does not work properly for
BlockChainExplorer objects - to fix this, I implemented a new
contains() method that uses string comparison instead
* Upgrade mempool.space to full block explorer functionality
* Add mempool.space Tor V2 block explorer
* If no valid Bitcoin block explorer is set, use the 1st block explorer
According to issue #3641, the label "Bankgiro number", as it was
hardcoded in BankUtil.java, should not be used in Sweden.
The label "Kontonummer" is the correct one to be used in Sweden and
Norway.
Account number validation is implemented for Norwegian accounts in
AccountNrValidator.java.
There is no validation implemented for Swedish accounts, so there is
nothing to change there.
If account validation for Sweden is needed, a different method would
be needed.
Fixes #3641
* Add 2 new columns to vote result
Add threshold and quorum column.
Combine name and link column.
Add sorting function for accepted column.
Adjust column width.
* Fix sorting functions
* Improve column sorting
We got a report where a "tx already known" message caused a failed
trade but the deposit tx was valid.
To avoid such false positives we only handle reject messages which
we consider clearly critical.
Use a LinkedHashMap in place of a List, for the caching CurrencyUtil
fields 'allSortedFiatCurrencies' & 'allSortedCryptoCurrencies', using
the same iteration order as before. In this way, we can avoid a linear
search in the lookup methods getFiatCurrency & getCryptoCurrency.
In particular, this speeds up the activation of TradesChartsView (and to
a lesser extent OfferBookChartView), which make a lot of calls to
CurrencyUtil.getTradeCurrency in the fillTradeCurrencies/updateChartData
methods respectively.
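A hedged sketch of the pattern (field and method names are simplified
relative to CurrencyUtil; FiatCurrency stands for the existing Bisq type):

    import java.util.LinkedHashMap;
    import java.util.Map;
    import java.util.Optional;

    // Simplified sketch; CurrencyUtil's actual fields and methods differ slightly.
    class CurrencyCache {
        // LinkedHashMap keeps the original (sorted) iteration order while
        // giving O(1) lookups by currency code.
        private static final Map<String, FiatCurrency> allSortedFiatCurrencies = new LinkedHashMap<>();

        static Optional<FiatCurrency> getFiatCurrency(String currencyCode) {
            return Optional.ofNullable(allSortedFiatCurrencies.get(currencyCode));
        }

        static Iterable<FiatCurrency> getAllSortedFiatCurrencies() {
            return allSortedFiatCurrencies.values();   // same iteration order as before
        }
    }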
At mediation we require a min. payout to the losing party to keep
incentive for the trader to accept the mediated payout. For Refund
agent cases we do not have that restriction.
According to https://github.com/bisq-network/proposals/issues/155
For buyer:
Default: 15%
Min. 15% for Altcoin, 15% for Fiat
Max. 50% for Altcoin, 50% for Fiat
Absolute min. deposit in BTC: 0.006 BTC (currently 40 USD)
For Seller:
Fixed: 15%
Absolute min. deposit in BTC: 0.006 BTC (currently 40 USD)
These are mostly injected objects that are now redundant, such as some
CoinFormatter and Preferences fields.
Also do some additional minor tidying of TradesChartsViewModel.
In case we had an unconfirmed change output we reset the
unconfirmedBsqChangeOutputList so that after an SPV resync we do not
have any dangling BSQ utxos in that list which would cause an incorrect
BSQ balance state after the SPV resync.
If a trader has a pending trade with an unconfirmed tx and is doing an
SPV resync, the deposit tx is not found. They get shown a popup asking
them to restart and, if the problem continues, to move the trade to
failed trades. In regtest testing the missing deposit tx was always
received from the network at a restart. So this commit adds another
button, as the default button, to shut down Bisq, so that the user does
that first instead of moving the trade to failed trades.
It is not guaranteed that the trader will receive the missing deposit tx
at restart, but at least it is better than the state before. We should
find a way to distinguish a valid unconfirmed tx from an invalid one,
but it is not yet clear how we can do that and whether it is feasible
with BitcoinJ alone. It might require an external service to look up
whether the tx is in the mempool, but even that will never give 100%
certainty.
* Remove unused parameters from assorted methods
Exclude abstract or default methods, as well as cases where the
parameter is currently unused but is probably intended to be used later.
* Actually use the injected Clock param of isDateInTolerance
Use 'clock.millis()' instead of "new Date().getTime()" in SignedWitness
& AccountAgeWitness, as the latter may have been left as an oversight.
Also tidy the date field of the toString() methods.
* Suppress warnings of unused method params which may be needed later
Also fix forwarding of telescoping method parameters in FormBuilder and
FormattingUtils.
* Add user preference combobox for BSQ block explorer with random default
* Remove betanet and testnet BSQ block explorers
* Always check if a valid BSQ block explorer is set
* Remove short cut for legacy arbitrator registration
* Change shortcut for reRepublishAllGovernanceData
* Use isAltOrCtrlPressed for removeFailedTrade
* Remove showStatisticsPopup
This was useful for legacy arbitrators as they received the trade fee
* Cleanup
- Remove setColumnSpan for titledGroupBg
- Fix row length
* Add list of shortcuts
* Update comment
* Change "click" to "press"
This fixes the problem where the local Bitcoin Core node is not detected by our client,
but bitcoinj is able to connect to it because of its auto-connect-to-localhost behavior.
In that case the minimum number of nodes required to broadcast a transaction will be 4 (from the provided nodes settings),
but bitcoinj will only connect to one node. The requirement of 4 nodes will never be fulfilled and
the transaction never broadcast.
Avoid mutating the Block tx list or the DaoState tx cache/index via a
Lombok getter. Instead wrap each in an unmodifiable[List|Map] & provide
specific mutator methods for use by DaoStateService to add newly parsed
transactions or load a DAO snapshot.
Also rename txMap to txCache, replace remaining use of getTxStream() in
the JSON file exporter with getUnorderedTxStream() (as this is safe) and
swap the arguments of the txCache initialisation merge function, for
exact consistency with the pre-caching behaviour.
Finally, add a missing assertDaoStateChange() and remove a potentially
harmful assertion from DaoStateService.onNewTxForLastBlock.
This is based on a suggested patch by @chimp1984 in the PR #3773 review.
Add getUnorderedTxStream() method to DaoStateService to stream directly
from the txMap cache/index wherever it is obviously safe to do so,
instead of iterating through the entire block list via getTxStream().
Also make getTxs() return a view of the txMap values in place of a copy.
This should improve efficiency slightly.
Build a HashMap of all BSQ transactions found, when loading the DaoState
from disc, and store it in a transient field which is always kept in
sync with the associated list of blocks. (The latter is only modified in
a couple of places in DaoStateService, making this straightforward.)
This is to speed up daoStateService.getTx(id), which is called from many
places and appears to be a significant bottleneck. In particular, the
initial load of the results in VoteResultView.doFillCycleList was very
slow (taking nearly a minute on a Core i3 machine) and likely to suffer
a quadratic slowdown (#cycles * #tx's) over time.
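In sketch form, the idea is a transient map kept in sync with the block list
(the class and method names below are simplified, not the real
DaoState/DaoStateService code):

    import java.util.HashMap;
    import java.util.Map;
    import java.util.Optional;

    // Simplified sketch of the caching idea; the real DaoState code differs.
    class TxIndex {
        private final transient Map<String, Tx> txCache = new HashMap<>();

        // Called when loading the DaoState from disk and whenever a block is
        // added, so the map always mirrors the list of blocks.
        void onBlockAdded(Block block) {
            block.getTxs().forEach(tx -> txCache.put(tx.getId(), tx));
        }

        // O(1) instead of scanning every tx of every block.
        Optional<Tx> getTx(String txId) {
            return Optional.ofNullable(txCache.get(txId));
        }
    }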
isDataOwner is used when deciding how many peer nodes should receive
a BroadcastMessage. If the BroadcastMessage originated
on the local node it is sent to ALL peer nodes with a small delay.
If the node is only relaying the message (it originated on a different
node) it is sent to MAX(peers.size(), 7) peers with a delay that is
twice as long.
All the information needed to determine whether or not the
BroadcastMessage originated on the local node is available at the final
broadcast site and there is no reason to have callers pass it in.
In the event that the sender address is not known during broadcast (which
is only a remote possibility due to how early the local node address
is set during startup) we can default to relay mode.
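A sketch of how the decision can be made at the broadcast site itself,
without callers passing isDataOwner in (the method shape is illustrative;
BroadcastMessage and NodeAddress refer to the existing types):

    // Illustrative sketch; the actual Broadcaster logic differs in its details.
    void broadcast(BroadcastMessage message, NodeAddress sender, NodeAddress myAddress) {
        // If the sender is unknown (rare, only very early in startup) or differs
        // from our own address, treat the message as a relay.
        boolean isDataOwner = sender != null && sender.equals(myAddress);
        if (isDataOwner) {
            // originated locally: send to all peers with a small delay
        } else {
            // relay: send to a subset of peers with a longer delay
        }
    }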
This first patch just removes the deep parameters. The next will remove
everything else. There is one real change in LiteNodeNetworkService.java
where it was using the local node when it should have been using the
peer node. This was updated to the correct behavior.
* Fix language tags so script and regional variants work correctly
* Adjust update_translations.sh language tags for script/regional variants
ACKs for top commit:
@ripcurlx:
ACK 089232716d
Fixes https://github.com/bisq-network/bisq/issues/3721
(part of the problem was that the trade ended up in failed trade)
Refactor method and add comments.
We did not handle the case of a mediated payout. isPayoutPublished() is
only reflecting non-disputed trade payouts.
With v1.2 we use 2of2 multisig for deposit tx. This commit changes the
manual payout window to reflect that.
- Remove unused code from legacy arbitration
- Fix comments
* Use strict stubbing for ReceiptValidatorTest to avoid confusion
Remove redundant stubs from the MoneyGram and Western Union tests and
ensure that all such stubs result in failure. In particular, the 'offer'
mock is never accessed directly by ReceiptValidator.
* Prevent taking of offers with unequal bank account types
Use stricter criteria when deciding which of the taker's accounts (if
any) are valid for a given offer. Specifically, prevent National Bank
accounts from being used to take Same / Specific Bank(s) offers, so the
three payment method types can never be mixed.
This prevents an error on the trading peer when the trade starts, due to
enforcement of equal maker & taker payment method IDs (except for SEPA)
in the Contract payload constructor.
This partially addresses #3602, where the erroneous peer response causes
the taker to be presented with a confusing timeout.
* [PR COMMENTS] Make maxSequenceNumberBeforePurge final
Instead of using a subclass that overrides a value, utilize Guice
to inject the real value of 10000 in the app and let the tests override
it with their own.
* [TESTS] Clean up 'Analyze Code' warnings
Remove unused imports and clean up some access modifiers now that
the final test structure is complete
* [REFACTOR] HashMapListener::onAdded/onRemoved
Previously, this interface was called each time an item was changed. This
required listeners to understand performance implications of multiple
adds or removes in a short time span.
Instead, give each listener the ability to process a list of added or
removed entries, which can help them avoid performance issues.
This patch is just a refactor. Each listener is called once for each
ProtectedStorageEntry. Future patches will change this.
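The batched listener roughly looks like this (a sketch; the real
HashMapChangedListener signatures may differ slightly):

    import java.util.Collection;

    // Sketch of the batched listener; the actual interface may differ slightly.
    public interface HashMapChangedListener {
        // One call per batch instead of one call per entry, so expensive
        // listeners can react once to many adds or removes.
        void onAdded(Collection<ProtectedStorageEntry> entries);

        void onRemoved(Collection<ProtectedStorageEntry> entries);
    }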
* [REFACTOR] removeFromMapAndDataStore can operate on Collections
Minor performance overhead for constructing MapEntry and Collections
of one element, but keeps the code cleaner and all removes can still
use the same logic to remove from map, delete from data store, signal
listeners, etc.
The MapEntry type is used instead of Pair since it will require fewer
operations when this is eventually used in the removeExpiredEntries path.
* Change removeFromMapAndDataStore to signal listeners at the end in a batch
All current users still call this one-at-a-time. But, it gives the ability
for the expire code path to remove in a batch.
* Update removeExpiredEntries to remove all items in a batch
This will cause HashMapChangedListeners to receive just one onRemoved()
call for the expire work instead of multiple onRemoved() calls for each
item.
This required a bit of updating for the remove validation in tests so
that it correctly compares onRemoved with multiple items.
* ProposalService::onProtectedDataRemoved signals listeners once on batch removes
#3143 identified an issue that tempProposals listeners were being
signaled once for each item that was removed during the P2PDataStore
operation that expired old TempProposal objects. Some of the listeners
are very expensive (ProposalListPresentation::updateLists()) which results
in large UI performance issues.
Now that the infrastructure is in place to receive updates from the
P2PDataStore in a batch, the ProposalService can apply all of the removes
received from the P2PDataStore at once. This results in only 1 onChanged()
callback for each listener.
The end result is that updateLists() is only called once and the performance
problems are reduced.
This removes the need for #3148 and those interfaces will be removed in
the next patch.
* Remove HashmapChangedListener::onBatch operations
Now that the only user of this interface has been removed, go ahead
and delete it. This is a partial revert of
f5d75c4f60 that includes the code that was
added into ProposalService that subscribed to the P2PDataStore.
* [TESTS] Regression test for #3629
Write a test that shows the incorrect behavior for #3629, the hashmap
is rebuilt from disk using the 20-byte key instead of the 32-byte key.
* [BUGFIX] Reconstruct HashMap using 32-byte key
Addresses the first half of #3629 by ensuring that the reconstructed
HashMap always has the 32-byte key for each payload.
It turns out, the TempProposalStore persists the ProtectedStorageEntrys
on-disk as a List and doesn't persist the key at all. Then, on
reconstruction, it creates the 20-byte key for its internal map.
The fix is to update the TempProposalStore to use the 32-byte key instead.
This means that all writes, reads, and reconstruction of the TempProposalStore
uses the 32-byte key which matches perfectly with the in-memory map
of the P2PDataStorage that expects 32-byte keys.
Important to note that until all seednodes receive this update, nodes
will continue to have both the 20-byte and 32-byte keys in their HashMap.
* [BUGFIX] Use 32-byte key in requestData path
Addresses the second half of #3629 by using the HashMap, not the
protectedDataStore to generate the known keys in the requestData path.
This won't have any bandwidth reduction until all seednodes have the
update and only have the 32-byte key in their HashMap.
fixes #3629
* [DEAD CODE] Remove getProtectedDataStoreMap
The only user has been migrated to getMap(). Delete it so future
development doesn't have the same 20-byte vs 32-byte key issue.
* [TESTS] Allow tests to validate SequenceNumberMap write separately
In order to implement remove-before-add behavior, we need a way to
verify that the SequenceNumberMap was the only item updated.
* Implement remove-before-add message sequence behavior
It is possible to receive a RemoveData or RemoveMailboxData message
before the relevant AddData, but the current code does not handle
it.
This results in internal state updates and signal handler's being called
when an Add is received with a lower sequence number than a previously
seen Remove.
Minor test validation changes to allow tests to specify that only the
SequenceNumberMap should be written during an operation.
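A hedged sketch of the sequence-number check this implies (simplified; the
real P2PDataStorage logic handles more cases and message types):

    import java.util.HashMap;
    import java.util.Map;

    // Simplified sketch; the real P2PDataStorage logic has more cases.
    class SequenceNumberGuard {
        // Keyed by the payload hash (a hex string here for simplicity).
        private final Map<String, Integer> sequenceNumberMap = new HashMap<>();

        // Record the sequence number of a Remove even if no Add was seen yet.
        void onRemove(String payloadHash, int sequenceNumber) {
            sequenceNumberMap.merge(payloadHash, sequenceNumber, Math::max);
        }

        // An Add is only accepted if its sequence number is greater than the
        // highest one already seen for that key (e.g. from an earlier Remove).
        boolean acceptAdd(String payloadHash, int sequenceNumber) {
            Integer seen = sequenceNumberMap.get(payloadHash);
            return seen == null || sequenceNumber > seen;
        }
    }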
* [TESTS] Allow remove() verification to be more flexible
Now that we have introduced remove-before-add, we need a way
to validate that the SequenceNumberMap was written, but nothing
else. Add this feature to the validation path.
* Broadcast remove-before-add messages to P2P network
In order to aid in propagation of remove() messages, broadcast them
in the event the remove is seen before the add.
* [TESTS] Clean up remove verification helpers
Now that there are cases where the SequenceNumberMap and Broadcast
are called, but no other internal state is updated, the existing helper
functions conflate too many decisions. Remove them in favor of explicitly
defining each state change expected.
* [BUGFIX] Fix duplicate sequence number use case (startup)
Fix a bug introduced in d484617385 that
did not properly handle a valid use case for duplicate sequence numbers.
For in-memory-only ProtectedStoragePayloads, the client nodes need a way
to reconstruct the Payloads after startup from peer and seed nodes. This
involves sending a ProtectedStorageEntry with a sequence number that
is equal to the last one the client had already seen.
This patch adds tests to confirm the bug and fix as well as the changes
necessary to allow adding of Payloads that were previously seen, but
removed during a restart.
* Clean up AtomicBoolean usage in FileManager
Although the code was correct, it was hard to understand the relationship
between the to-be-written object and the savePending flag.
Trade two dependent atomics for one and comment the code to make it more
clear for the next reader.
* [DEADCODE] Clean up FileManager.java
* [BUGFIX] Shorter delay values not taking precedence
Fix a bug in the FileManager where a saveLater called with a low delay
won't execute until the delay specified by a previous saveLater call.
The trade off here is the execution of a task that returns early vs.
losing the requested delay.
* [REFACTOR] Inline saveNowInternal
Only one caller after deadcode removal.
This prevents a scammer from using publicly known account details
(without being in control of the account) as a seller in order to get
signed by a buyer. The money received in the seller's account might
not be detected by the legitimate owner and/or the money not sent back.
30 days later the scammer could use this signed account as a seed to peer-sign other stolen accounts.