In commit 5fb4b21 ("Refine deploy target..."), the 'build' target was
made normal, i.e. non-phony. On further review, however, it does in
fact make sense to declare 'build' phony, such that it runs no matter
the status of the root-level 'build' directory, though for different
reasons than before.
Previously, we had been considering the presence of the 'build'
directory as a reasonable proxy for determining whether `./gradlew
build` had been run. If the directory was present, we considered the
'build' target
up-to-date. If not, then we would re-run `./gradlew build`. This is all
sensible enough, except for the fact that the root-level 'build'
directory has almost nothing to do with the actual output of `./gradlew
build`. Gradle does output 'build' directories, but in the respective
subdirectory for each module of the project. After `./gradlew build` has
been run, we would see a 'desktop/build' directory, a 'seednode/build'
directory and so forth. It just so happens that a root-level 'build'
directory was getting created at all due to idiosyncrasies of a
particular Kotlin plugin.
This commit updates the makefile to better respect this reality by:
- preserving the 'build' target but marking it once again as PHONY
- introducing new 'seednode/build' and 'desktop/build' targets that
trigger `./gradlew :seednode:build` and `./gradlew :desktop:build`
commands respectively.
- making 'build' depend on these two new targets
In light of this realization of flawed thinking about the root-level
build dir, this change also restores `make clean` to calling `./gradlew
clean` instead of `rm -rf build`.
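A minimal sketch of the resulting arrangement (recipe lines use hard
tabs in the real makefile; anything beyond the commands quoted above
is assumed):

    .PHONY: build
    build: seednode/build desktop/build

    seednode/build:
            ./gradlew :seednode:build

    desktop/build:
            ./gradlew :desktop:build

    .PHONY: clean
    clean:
            ./gradlew clean

Because 'seednode/build' and 'desktop/build' name real directories
that Gradle creates, those prerequisites only rebuild when missing,
while the phony 'build' target itself always runs.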
Problem: Bitcoin Core v0.19.0 changed the default value of its
'peerbloomfilters' option from 1 to 0, disabling bloom filters by
default. Bisq requires bloom filters be enabled on the Bitcoin node(s)
it communicates with, so users running >= v0.19 would get errors when
attempting to run `make bitcoind` with that target's current recipe.
Solution: This change explicitly sets the 'peerbloomfilters' option to
1, ensuring it is enabled in any case. Note that this option has
existed in Bitcoin Core since v0.12.0, so there is no real concern
about it breaking users still on 0.18.x or even much earlier.
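A hedged sketch of the updated recipe; apart from -peerbloomfilters=1,
every flag here is illustrative:

    bitcoind:
            bitcoind -regtest -peerbloomfilters=1 \
                    -datadir=.localnet/bitcoind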
Problem: Prior to this change, it was necessary to first create and
attach to a screen session and then to run `make deploy` within it. This
meant extra steps for the user and was generally error-prone.
Solution: Usage of screen has been refined such that a screen session
named 'localnet' is created on the user's behalf without any need to
attach to it. Individual node deployment targets such as `make
bitcoind`, `make alice`, et al. are issued to new windows within the
localnet screen session, and the user is free to attach or not whenever
they choose. The result is that a new user can clone the repository and
type nothing more than `make deploy` to get up and running with their
localnet.
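Under the hood this can be achieved with a detached session plus
screen's `-X` command; a plausible sketch (window titles are
illustrative):

    # create the detached 'localnet' session once
    screen -dmS localnet
    # issue each node target to its own window in that session
    screen -S localnet -X screen -t bitcoind make bitcoind
    screen -S localnet -X screen -t alice make alice
    # attach (and detach again with C-a d) whenever desired
    screen -r localnet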
This also reverts the changes in commit 97dd342e5 ("Make build target
phony") for the following reasons:
- As mentioned in that commit message, Gradle was not deleting its
'build' directory when running `gradle clean`, meaning that the
'build' target was always up-to-date, even after running `make
clean`. This made it impossible to get a correct rebuild workflow. On
analysis, however, this situation was caused by a badly behaving
Kotlin plugin not cleaning up after itself, leaving a subdirectory at
build/kotlin and preventing the build directory itself from being
deleted. To address this, the `make clean` target has been updated to
run `rm -rf build` instead of calling `gradle clean`. While it's a
workaround until we back out the Kotlin changes that caused this, it
does have the added benefit of being faster than invoking `gradle
clean`.
- By making the 'build' target PHONY, this meant that `./gradlew build`
was getting invoked every time a dependent target was called. For
example, `make alice` depends on the 'setup' target, which in turn
depends on the 'build' target. When calling such targets in
isolation, this arrangement works out fine, because the phony 'build'
target always runs, invoking `./gradlew build`, and the Gradle build
completes quickly assuming everything is up-to-date. The problem
arises when calling a number of these targets in rapid succession, as
we do when calling `make deploy` and running each individual node
target in its own screen window. This causes contention in two ways.
The first is that these multiple, simultaneous Gradle processes
compete for access to an available Gradle daemon, and because each
process needs its own, as many Gradle daemons end up getting created
as there are Bisq nodes to deploy (5 in total). This is a big waste
of time and resources. The second way it causes not only contention
but outright failure is that each of these builds is
operating in the same directory, and while most aspects of the build
are in fact up-to-date and therefore not modified in any way, there
are exceptions to this rule. The result is that build artifacts, e.g.
jars, get deleted and rebuilt from underneath competing Gradle
processes, and all manner of chaos ensues, such as
NoClassDefFoundError and much more. This change (reverting 'build'
back to a
normal, non-phony target) avoids these problems entirely. When
running `make deploy`, we run the 'build' target once as a function
of the 'deploy' target depending on it. At this point, the 'build'
directory exists, and all subsequent node deployment targets, e.g.
'alice', 'bob', etc. do not re-run the build target because it is
up-to-date. For workflows where the user definitely wants to rebuild
prior to redeploying a given node, they can either run `make
clean-build`, or drop down to issuing Gradle build commands directly,
e.g. `./gradlew :desktop:build` followed by `make desktop`.
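For contrast, a hedged sketch of the non-phony arrangement this
revert restores (only the target names come from the text above):

    build:
            ./gradlew build

    deploy: build
            # each node target then runs in its own screen window

    clean-build: clean build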
Problem: we use soft 4-space tabs throughout the Bisq codebase, and the
new makefile is an exception to this rule, due to make's default
requirement for hard tabs in recipes.
Solution: This commit updates our Editorconfig settings to reflect this
exception.
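The exception plausibly takes a form like this in .editorconfig
(exact section glob assumed):

    [Makefile]
    indent_style = tab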
For vim users, it is also recommended that you add the following entry
to your .vimrc:
au FileType make set tw=72 noet cc=72
It will ensure that you wrap (documentation) lines at 72 chars. It also
sets noexpandtab explicitly. Even though .editorconfig should already be
doing this for you when working in Bisq, this more general vim
configuration will ensure you use tabs correctly in any makefile. The
`cc=72` setting adds a visual right margin at 72 characters.
This commit also updates the existing makefile, wrapping lines of
documentation that had exceeded the 72-char margin.
This change follows up on commit 650c5894d, which:
1. Renamed the 'localdir' directory to '.localdir' to better follow
convention with how local data directories are often managed, e.g.
.git and .gradle.
2. Introduced the STATE_DIR variable to avoid duplication of the
'.localdir' string throughout the Makefile, and at least in concept to
allow this value to be customized via setting an environment variable.
The changes in (1) are preserved, while the changes in (2) have been
backed out. Rationale:
- The STATE_DIR name introduces a new concept to the reader. They must
reason about its meaning, and this works against the intention of the
Makefile, which is to maximize understandability for the uninitiated.
- The name, if we were to preserve the variable, probably should have
been something like DATA_DIR_ROOT. 'STATE_DIR' is not conceptually
incorrect, but industry convention is to refer to such directories as
"data directories", e.g. Bitcoin Core's `datadir` option, LND's
`datadir` option and Bisq's `userDataDir` and `appDataDir` options.
- The variable, whatever its name, introduces a layer of indirection,
which while convenient to the makefile maintainer, is a barrier to
comprehension for the reader / contributor. For example, if a user
wished to copy and paste the recipe for a target, say 'bob', from the
makefile, then with the variable in place they would have to figure
out its correct value and substitute it before they could use
the copied command. Like in the first note above, the idea with the
makefile is to maximize understanding for the uninitiated, i.e.
working code as executable documentation. Given this goal, it is
reasonable to increase the burden on a few maintainers in order to
ease it for the potentially many contributors.
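To illustrate the indirection argument with a hypothetical recipe
(the command line itself is invented; only the option name appears
earlier in this message):

    # with the variable (now backed out):
    bob:
            ./bisq-desktop --appDataDir=$(STATE_DIR)/bob

    # without it, the recipe can be copied and run as-is:
    bob:
            ./bisq-desktop --appDataDir=.localdir/bob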
Finally, this change follows up on the renaming of the 'localnet'
directory to '.localnet' by reflecting this change in the name of the
associated target as well. This is in order to avoid dependent targets, e.g.
'bitcoind', 'alice' or 'bob' constantly re-running the localnet target.
In turn it also adds an 'alias' target named 'localnet' (without the
leading dot) because targets with a leading dot are (I believe) treated
as "implicit targets". In any case, they do not show up in a tab
completion context, so introducing the normally-named alias fixes that.
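A sketch of the resulting pair of targets (recipe body assumed):

    .localnet:
            mkdir -p .localnet
            # ... populate node data directories ...

    localnet: .localnet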
This is a follow-up to cbeams/bisq#3.
This partially reverts commit e3a3fb5, removing the dependency from
the 'localnet' target to the 'clean-localnet' target. The reason for
this is that a number of higher level targets that deploy nodes, e.g.
the 'alice' and 'bob' targets depend on 'localnet' and, prior to this
reversion, therefore also depended on 'clean-localnet'. The effect was
that every time a node was deployed, the .localnet directory was removed
and re-created, destroying the state of any and all nodes that had been
deployed and modified thus far.
The change in the original commit that removes the temporary 'dao-setup'
directory in case of partial failures has been preserved.
This is a follow-up to cbeams/bisq#3.
Sometimes when running setup something goes wrong and the ./dao-state
dir is still hanging around, requiring manual cleanup and preventing
the user from simply re-running the command.
Problem: contributors old and new must read and follow many manual steps
spread across three documents (docs/{build,dev-setup,dao-setup}.md) in
order to get up and running with a local regtest Bisq network deployment
suitable for isolated development and end-to-end testing. This process
is not only manual, but requires considerable trial and error for most
contributors, and can amount to hours of effort. Perhaps most
detrimental is that this friction makes it much less likely that we get
"all hands on deck" to cover test scenarios at release time. Getting up
and running with what this change refers to as a "localnet" should be
among the very first things a new contributor does. It should be fast
and easy, maximizing the contributor's ability to get productive right
away.
Solution: this commit introduces a simple and well-documented makefile
to the root of the source tree. It instructs the user to issue a series
of simple `make` commands, at the end of which they'll have a fully
functional localnet deployment.
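For example, the end-to-end flow can be as short as (localnet screen
handling per the later commits in this log):

    $ make deploy        # build, set up and deploy all localnet nodes
    $ screen -r localnet # attach to the running nodes when desired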
Caveats:
- No support for Windows unless the user is running Git Bash, Cygwin or
similar. In any case, the makefile serves as clear documentation
about what a Windows user would need to do manually, i.e. without the
benefit of `make` automating it all.
- The aforementioned setup documents should be updated to point to this
makefile instead of explaining everything in prose. The dev-setup.md
and dao-setup.md documents may actually be candidates for deletion if
this new approach proves successful.
- These changes do not include passing the new -peerbloomfilters=1
  option to Bitcoin Core versions 0.19 and above. Those who have already
upgraded should take care to add that option.
Notes:
- The introduction of this makefile has no impact on Bisq's use of
Gradle as a build system. Everything there is as it has been. This
makefile is a completely optional convenience being added into the
mix. It has the added benefit of being a "friendly face" to those not
familiar with the Java / JVM ecosystem. Developers from many
different backgrounds are familiar with make and makefiles, and they
may find this one a pleasant and inviting surprise.
* Change access level for checkMaxConnections to be tested
* Refactor checkMaxConnections
Fix connection limit checks so as to prevent the following warning:
> WARN b.n.p2p.peers.PeerManager: No candidates found to remove (That
case should not be possible as we use in the last case all
connections).
* Add MockNode that allows for simulating connections
* Add PeerManagerTest
The old PeerManagerTest was located under network/p2p/routing, which is
no longer the correct location. Additionally, it was outdated so I
just removed it and added a new file under network/p2p/peers containing
tests for checkMaxConnections.
* Add testCompile dependency to core
This is necessary because bisq.network.p2p.MockNode imports
bisq.core.network.p2p.seed.DefaultSeedNodeRepository.
* Update based on review feedback
Mock the SeedNodeRepository superclass, thus eliminating the dependency
on core.
* [PR COMMENTS] Make maxSequenceNumberBeforePurge final
Instead of using a subclass that overrides a value, utilize Guice
to inject the real value of 10000 in the app and let the tests override
it with their own.
* [TESTS] Clean up 'Analyze Code' warnings
Remove unused imports and clean up some access modifiers now that
the final test structure is complete
* [REFACTOR] HashMapListener::onAdded/onRemoved
Previously, this interface was called each time an item was changed. This
required listeners to understand performance implications of multiple
adds or removes in a short time span.
Instead, give each listener the ability to process a list of added or
removed entries, which can help them avoid performance issues.
This patch is just a refactor. Each listener is called once for each
ProtectedStorageEntry. Future patches will change this.
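A hypothetical shape of the batched interface; T stands in for
ProtectedStorageEntry, and none of this is Bisq's exact code:

    import java.util.Collection;

    interface HashMapChangedListener<T> {
        void onAdded(Collection<T> added);     // one call per batch,
        void onRemoved(Collection<T> removed); // not one per entry
    }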
* [REFACTOR] removeFromMapAndDataStore can operate on Collections
Minor performance overhead for constructing MapEntry and Collections
of one element, but keeps the code cleaner and all removes can still
use the same logic to remove from map, delete from data store, signal
listeners, etc.
The MapEntry type is used instead of Pair since it will require fewer
operations when this is eventually used in the removeExpiredEntries path.
* Change removeFromMapAndDataStore to signal listeners at the end in a batch
All current users still call this one-at-a-time. But, it gives the ability
for the expire code path to remove in a batch.
* Update removeExpiredEntries to remove all items in a batch
This will cause HashMapChangedListeners to receive just one onRemoved()
call for the expire work instead of multiple onRemoved() calls for each
item.
This required a bit of updating for the remove validation in tests so
that it correctly compares onRemoved with multiple items.
* ProposalService::onProtectedDataRemoved signals listeners once on batch removes
#3143 identified an issue where tempProposals listeners were being
signaled once for each item that was removed during the P2PDataStore
operation that expired old TempProposal objects. Some of the listeners
are very expensive (ProposalListPresentation::updateLists()) which results
in large UI performance issues.
Now that the infrastructure is in place to receive updates from the
P2PDataStore in a batch, the ProposalService can apply all of the removes
received from the P2PDataStore at once. This results in only 1 onChanged()
callback for each listener.
The end result is that updateLists() is only called once and the performance
problems are reduced.
This removes the need for #3148 and those interfaces will be removed in
the next patch.
* Remove HashmapChangedListener::onBatch operations
Now that the only user of this interface has been removed, go ahead
and delete it. This is a partial revert of
f5d75c4f60 that includes the code that was
added into ProposalService that subscribed to the P2PDataStore.
* [TESTS] Regression test for #3629
Write a test that shows the incorrect behavior for #3629: the hashmap
is rebuilt from disk using the 20-byte key instead of the 32-byte key.
* [BUGFIX] Reconstruct HashMap using 32-byte key
Addresses the first half of #3629 by ensuring that the reconstructed
HashMap always has the 32-byte key for each payload.
It turns out, the TempProposalStore persists the ProtectedStorageEntrys
on-disk as a List and doesn't persist the key at all. Then, on
reconstruction, it creates the 20-byte key for its internal map.
The fix is to update the TempProposalStore to use the 32-byte key instead.
This means that all writes, reads, and reconstruction of the TempProposalStore
uses the 32-byte key which matches perfectly with the in-memory map
of the P2PDataStorage that expects 32-byte keys.
Important to note that until all seednodes receive this update, nodes
will continue to have both the 20-byte and 32-byte keys in their HashMap.
* [BUGFIX] Use 32-byte key in requestData path
Addresses the second half of #3629 by using the HashMap, not the
protectedDataStore to generate the known keys in the requestData path.
This won't have any bandwidth reduction until all seednodes have the
update and only have the 32-byte key in their HashMap.
fixes #3629
* [DEAD CODE] Remove getProtectedDataStoreMap
The only user has been migrated to getMap(). Delete it so future
development doesn't have the same 20-byte vs 32-byte key issue.
* [TESTS] Allow tests to validate SequenceNumberMap write separately
In order to implement remove-before-add behavior, we need a way to
verify that the SequenceNumberMap was the only item updated.
* Implement remove-before-add message sequence behavior
It is possible to receive a RemoveData or RemoveMailboxData message
before the relevant AddData, but the current code does not handle
it.
This results in internal state updates and signal handlers being called
when an Add is received with a lower sequence number than a previously
seen Remove.
Minor test validation changes to allow tests to specify that only the
SequenceNumberMap should be written during an operation.
* [TESTS] Allow remove() verification to be more flexible
Now that we have introduced remove-before-add, we need a way
to validate that the SequenceNumberMap was written, but nothing
else. Add this feature to the validation path.
* Broadcast remove-before-add messages to P2P network
In order to aid in propagation of remove() messages, broadcast them
in the event the remove is seen before the add.
* [TESTS] Clean up remove verification helpers
Now that there are cases where the SequenceNumberMap and Broadcast
are called, but no other internal state is updated, the existing helper
functions conflate too many decisions. Remove them in favor of explicitly
defining each state change expected.
* [BUGFIX] Fix duplicate sequence number use case (startup)
Fix a bug introduced in d484617385 that
did not properly handle a valid use case for duplicate sequence numbers.
For in-memory-only ProtectedStoragePayloads, the client nodes need a way
to reconstruct the Payloads after startup from peer and seed nodes. This
involves sending a ProtectedStorageEntry with a sequence number that
is equal to the last one the client had already seen.
This patch adds tests to confirm the bug and fix as well as the changes
necessary to allow adding of Payloads that were previously seen, but
removed during a restart.
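Pulling the last few changes together, a purely illustrative sketch
of the sequence-number bookkeeping (all names assumed; Bisq's real
rules are more involved):

    import java.util.HashMap;
    import java.util.Map;

    class SequenceNumbers {
        private final Map<String, Integer> lastSeen = new HashMap<>();

        // Remove-before-add: record the Remove's number even if no
        // Add was ever seen, so a later, lower-numbered Add is
        // ignored instead of resurrecting removed data. Fresh
        // removes are also broadcast to peers.
        boolean onRemove(String key, int seqNr) {
            Integer known = lastSeen.get(key);
            if (known != null && seqNr <= known)
                return false; // stale remove, drop it
            lastSeen.put(key, seqNr);
            return true;      // apply, persist map, broadcast
        }

        // Startup fix: an Add that merely repeats the last-seen
        // number is accepted, so in-memory-only payloads can be
        // rebuilt from peer and seed nodes after a restart.
        boolean onAdd(String key, int seqNr) {
            Integer known = lastSeen.get(key);
            if (known != null && seqNr < known)
                return false; // stale add, drop it
            lastSeen.put(key, seqNr);
            return true;
        }
    }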
* Clean up AtomicBoolean usage in FileManager
Although the code was correct, it was hard to understand the relationship
between the to-be-written object and the savePending flag.
Trade two dependent atomics for one and comment the code to make it more
clear for the next reader.
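The single-atomic pattern reads roughly like this (a sketch under
assumed names, not FileManager's actual code):

    import java.util.concurrent.atomic.AtomicReference;

    class PendingWrite<T> {
        // A non-null value *is* the "save pending" flag, so the
        // object and the flag can no longer drift out of sync.
        private final AtomicReference<T> pending = new AtomicReference<>();

        void saveLater(T toWrite) {
            pending.set(toWrite);
        }

        void flushIfPending() {
            T toWrite = pending.getAndSet(null);
            if (toWrite != null)
                writeToDisk(toWrite);
        }

        private void writeToDisk(T toWrite) {
            // actual serialization elided
        }
    }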
* [DEADCODE] Clean up FileManager.java
* [BUGFIX] Shorter delay values not taking precedence
Fix a bug in the FileManager where a saveLater called with a low delay
won't execute until the delay specified by a previous saveLater call.
The trade off here is the execution of a task that returns early vs.
losing the requested delay.
* [REFACTOR] Inline saveNowInternal
Only one caller after deadcode removal.
* [TESTS] Introduce MapStoreServiceFake
Now that we want to make changes to the MapStoreService,
it isn't sufficient to have a Fake of the ProtectedDataStoreService.
Tests now use a REAL ProtectedDataStoreService and a FAKE MapStoreService
to exercise more of the production code and allow future testing of
changes to MapStoreService.
* Persist changes to ProtectedStorageEntrys
With the addition of ProtectedStorageEntrys, there are now persistable
maps that have different payloads and the same keys. In the
ProtectedDataStoreService case, the value is the ProtectedStorageEntry
which has a createdTimeStamp, sequenceNumber, and signature that can
all change, but still contain an identical payload.
Previously, the service was only updating the on-disk representation on
the first object and never again. So, when it was recreated from disk it
would not have any of the updated metadata. This was just copied from the
append-only implementation where the value was the Payload
which was immutable.
This hasn't caused any issues to this point, but it causes strange behavior
such as always receiving seqNr==1 items from seednodes on startup. It
is good practice to keep the in-memory objects and on-disk objects in
sync and removes an unexpected failure in future dev work that expects
the same behavior as the append-only on-disk objects.
* [DEADCODE] Remove protectedDataStoreListener
There were no users.
* [DEADCODE] Remove unused methods in ProtectedDataStoreService
* Use strict stubbing for ReceiptValidatorTest to avoid confusion
Remove redundant stubs from the MoneyGram and Western Union tests and
ensure that all such stubs result in failure. In particular, the 'offer'
mock is never accessed directly by ReceiptValidator.
* Prevent taking of offers with unequal bank account types
Use stricter criteria when deciding which of the taker's accounts (if
any) are valid for a given offer. Specifically, prevent National Bank
accounts from being used to take Same / Specific Bank(s) offers, so the
three payment method types can never be mixed.
This prevents an error on the trading peer when the trade starts, due to
enforcement of equal maker & taker payment method IDs (except for SEPA)
in the Contract payload constructor.
This partially addresses #3602, where the erroneous peer response causes
the taker to be presented with a confusing timeout.
Prevent the 'arrow' of a message bubble from being sporadically anchored
to the wrong side - appearing on the left instead of the right hand side
of the bubble. This is due to the same ListCell object being reused by
JavaFX for different bubbles as the user scrolls up and down the chat
pane, which requires that the anchors of each arrow be properly cleared
between ListCell.updateItem(..) calls.
To this end, move the block of AnchorPane.clearConstraints(..) calls to
the beginning of the updateItem(..) method, as the apparent assumption
that 'updateItem(item, empty = true)' will always be called to clear the
given ListCell before reusing it as a new bubble turns out to be wrong.
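A minimal JavaFX sketch of the fix (the item type and the "which
side" test are simplified for illustration):

    import javafx.scene.control.ListCell;
    import javafx.scene.layout.AnchorPane;
    import javafx.scene.layout.Region;

    class BubbleCell extends ListCell<String> {
        private final Region arrow = new Region();

        @Override
        protected void updateItem(String item, boolean empty) {
            super.updateItem(item, empty);
            // Clear stale anchors first: this cell is reused for
            // different items, and updateItem(.., empty = true) is
            // not guaranteed to run in between.
            AnchorPane.clearConstraints(arrow);
            if (!empty && item != null) {
                if (item.startsWith("me:"))
                    AnchorPane.setRightAnchor(arrow, 0d);
                else
                    AnchorPane.setLeftAnchor(arrow, 0d);
            }
        }
    }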
This prevents a scammer from using publicly known account details
(without being in control of the account) as a seller in order to get
signed by a buyer. The money received in the seller's account might
not be noticed by the legitimate owner, and/or might not be sent back.
30 days later, the scammer could use this signed account as a seed to
peer-sign other stolen accounts.