* Change access level for checkMaxConnections to be tested
* Refactor checkMaxConnections
Fix the connection limit checks to prevent the following warning:
> WARN b.n.p2p.peers.PeerManager: No candidates found to remove (That
case should not be possible as we use in the last case all
connections).
* Add MockNode that allows for simulating connections
* Add PeerManagerTest
The old PeerManagerTest was located under network/p2p/routing, which is
no longer the correct location. Additionally, it was outdated, so I
removed it and added a new file under network/p2p/peers containing
tests for checkMaxConnections.
* Add testCompile dependency to core
This is necessary because bisq.network.p2p.MockNode imports
bisq.core.network.p2p.seed.DefaultSeedNodeRepository.
* Update based on review feedback
Mock the SeedNodeRepository superclass, thus eliminating the dependency
on core.
* [PR COMMENTS] Make maxSequenceNumberBeforePurge final
Instead of using a subclass that overrides the value, utilize Guice
to inject the real value of 10000 in the app and let the tests inject
their own.
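A rough sketch of that wiring (the annotation name and consumer class here are illustrative, not the actual Bisq bindings): the app module binds the constant, and a test can create its own injector with a smaller value.

```java
import com.google.inject.AbstractModule;
import com.google.inject.Guice;
import com.google.inject.Inject;
import com.google.inject.Injector;
import com.google.inject.name.Named;
import com.google.inject.name.Names;

public class MaxSequenceNumberSketch {
    // Hypothetical consumer of the injected constant.
    static class SequenceNumberMapCleaner {
        final int maxSequenceNumberBeforePurge;

        @Inject
        SequenceNumberMapCleaner(@Named("maxSequenceNumberBeforePurge") int max) {
            this.maxSequenceNumberBeforePurge = max;
        }
    }

    public static void main(String[] args) {
        // The app module binds the real value; a test module would bind e.g. 5 instead.
        Injector appInjector = Guice.createInjector(new AbstractModule() {
            @Override
            protected void configure() {
                bindConstant()
                        .annotatedWith(Names.named("maxSequenceNumberBeforePurge"))
                        .to(10000);
            }
        });
        System.out.println(appInjector.getInstance(SequenceNumberMapCleaner.class)
                .maxSequenceNumberBeforePurge); // prints 10000
    }
}
```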
* [TESTS] Clean up 'Analyze Code' warnings
Remove unused imports and clean up some access modifiers now that
the final test structure is complete
* [REFACTOR] HashMapListener::onAdded/onRemoved
Previously, this interface was called each time an item was changed. This
required listeners to understand performance implications of multiple
adds or removes in a short time span.
Instead, give each listener the ability to process a list of added or
removed entries, which can help them avoid performance issues.
This patch is just a refactor. Each listener is called once for each
ProtectedStorageEntry. Future patches will change this.
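A sketch of what the batched callbacks might look like (the entry type is reduced to a placeholder here):

```java
import java.util.Collection;

// Stand-in for bisq.network.p2p.storage.payload.ProtectedStorageEntry.
class ProtectedStorageEntry { }

// Each callback now carries the whole batch, so an expensive listener can
// react once per operation instead of once per entry.
interface HashMapChangedListener {
    void onAdded(Collection<ProtectedStorageEntry> addedEntries);

    void onRemoved(Collection<ProtectedStorageEntry> removedEntries);
}
```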
* [REFACTOR] removeFromMapAndDataStore can operate on Collections
Minor performance overhead for constructing MapEntry and Collections
of one element, but keeps the code cleaner and all removes can still
use the same logic to remove from map, delete from data store, signal
listeners, etc.
The MapEntry type is used instead of Pair since it will require fewer
operations when this is eventually used in the removeExpiredEntries path.
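A rough sketch of the shape this takes, with placeholder types; the one-at-a-time caller simply wraps its entry in a one-element collection and reuses the batch logic.

```java
import java.util.AbstractMap;
import java.util.Collection;
import java.util.Collections;
import java.util.Map;

class RemovalSketch {
    // Placeholders for the real Bisq key and entry types.
    static class ByteArray { }
    static class ProtectedStorageEntry { }

    // Convenience wrapper: single-entry callers reuse the batch logic.
    void removeFromMapAndDataStore(ProtectedStorageEntry entry, ByteArray hashOfPayload) {
        Map.Entry<ByteArray, ProtectedStorageEntry> mapEntry =
                new AbstractMap.SimpleEntry<>(hashOfPayload, entry);
        removeFromMapAndDataStore(Collections.singletonList(mapEntry));
    }

    void removeFromMapAndDataStore(
            Collection<Map.Entry<ByteArray, ProtectedStorageEntry>> entriesToRemove) {
        // 1. remove each key from the in-memory map
        // 2. delete persistable payloads from the data store
        // 3. signal HashMapChangedListeners once with the whole batch
    }
}
```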
* Change removeFromMapAndDataStore to signal listeners at the end in a batch
All current users still call this one at a time, but it gives the expire
code path the ability to remove entries in a batch.
* Update removeExpiredEntries to remove all items in a batch
This will cause HashMapChangedListeners to receive just one onRemoved()
call for the expire work instead of multiple onRemoved() calls for each
item.
This required some updates to the remove validation in tests so that
it correctly compares onRemoved() calls with multiple items.
* ProposalService::onProtectedDataRemoved signals listeners once on batch removes
#3143 identified an issue where tempProposals listeners were being
signaled once for each item that was removed during the P2PDataStore
operation that expired old TempProposal objects. Some of the listeners
are very expensive (ProposalListPresentation::updateLists()) which results
in large UI performance issues.
Now that the infrastructure is in place to receive updates from the
P2PDataStore in a batch, the ProposalService can apply all of the removes
received from the P2PDataStore at once. This results in only 1 onChanged()
callback for each listener.
The end result is that updateLists() is only called once and the performance
problems are reduced.
This removes the need for #3148 and those interfaces will be removed in
the next patch.
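In rough terms (simplified types, not the actual ProposalService code), the listener can now translate the whole batch into a single bulk update of the observable list, so downstream listeners such as ProposalListPresentation run once:

```java
import java.util.Collection;
import java.util.List;
import java.util.stream.Collectors;

import javafx.collections.FXCollections;
import javafx.collections.ObservableList;

class BatchRemoveSketch {
    // Placeholders for the real payload/entry types.
    static class Proposal { }
    static class TempProposalPayload {
        final Proposal proposal = new Proposal();
    }
    static class ProtectedStorageEntry {
        Object getProtectedStoragePayload() { return new TempProposalPayload(); }
    }

    private final ObservableList<Proposal> tempProposals = FXCollections.observableArrayList();

    // Called once per batch; tempProposals observers therefore get one bulk
    // removeAll() instead of one update per removed entry.
    void onRemoved(Collection<ProtectedStorageEntry> entries) {
        List<Proposal> toRemove = entries.stream()
                .map(ProtectedStorageEntry::getProtectedStoragePayload)
                .filter(payload -> payload instanceof TempProposalPayload)
                .map(payload -> ((TempProposalPayload) payload).proposal)
                .collect(Collectors.toList());
        if (!toRemove.isEmpty())
            tempProposals.removeAll(toRemove);
    }
}
```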
* Remove HashMapChangedListener::onBatch operations
Now that the only user of this interface has been removed, go ahead
and delete it. This is a partial revert of
f5d75c4f60, including the code that was
added to ProposalService to subscribe to the P2PDataStore.
* [TESTS] Regression test for #3629
Write a test that shows the incorrect behavior for #3629, the hashmap
is rebuilt from disk using the 20-byte key instead of the 32-byte key.
* [BUGFIX] Reconstruct HashMap using 32-byte key
Addresses the first half of #3629 by ensuring that the reconstructed
HashMap always has the 32-byte key for each payload.
It turns out, the TempProposalStore persists the ProtectedStorageEntrys
on-disk as a List and doesn't persist the key at all. Then, on
reconstruction, it creates the 20-byte key for its internal map.
The fix is to update the TempProposalStore to use the 32-byte key instead.
This means that all writes, reads, and reconstruction of the TempProposalStore
use the 32-byte key, which matches perfectly with the in-memory map
of the P2PDataStorage that expects 32-byte keys.
Important to note that until all seednodes receive this update, nodes
will continue to have both the 20-byte and 32-byte keys in their HashMap.
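A simplified illustration of the idea (the real code uses Bisq's P2PDataStorage.ByteArray key and protobuf serialization; here the payload is just raw bytes): the store derives its map key from the full 32-byte SHA-256 hash of the payload rather than a truncated 20-byte form.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

class KeySketch {
    // Minimal stand-in for P2PDataStorage.ByteArray: a byte[] wrapper usable as a map key.
    static final class ByteArray {
        final byte[] bytes;
        ByteArray(byte[] bytes) { this.bytes = bytes; }
        @Override public boolean equals(Object o) {
            return o instanceof ByteArray && Arrays.equals(bytes, ((ByteArray) o).bytes);
        }
        @Override public int hashCode() { return Arrays.hashCode(bytes); }
    }

    static ByteArray full32ByteKey(byte[] serializedPayload) throws NoSuchAlgorithmException {
        // 32-byte SHA-256 of the serialized payload, matching the in-memory map's keys.
        return new ByteArray(MessageDigest.getInstance("SHA-256").digest(serializedPayload));
    }

    public static void main(String[] args) throws NoSuchAlgorithmException {
        Map<ByteArray, byte[]> map = new HashMap<>();
        byte[] payload = "proposal".getBytes(StandardCharsets.UTF_8);
        // On reconstruction from disk, re-derive the same 32-byte key instead of a 20-byte one.
        map.put(full32ByteKey(payload), payload);
        System.out.println(map.size()); // 1
    }
}
```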
* [BUGFIX] Use 32-byte key in requestData path
Addresses the second half of #3629 by using the HashMap, not the
protectedDataStore, to generate the known keys in the requestData path.
This won't have any bandwidth reduction until all seednodes have the
update and only have the 32-byte key in their HashMap.
Fixes #3629
* [DEAD CODE] Remove getProtectedDataStoreMap
The only user has been migrated to getMap(). Delete it so future
development doesn't have the same 20-byte vs 32-byte key issue.
* [TESTS] Allow tests to validate SequenceNumberMap write separately
In order to implement remove-before-add behavior, we need a way to
verify that the SequenceNumberMap was the only item updated.
* Implement remove-before-add message sequence behavior
It is possible to receive a RemoveData or RemoveMailboxData message
before the relevant AddData, but the current code does not handle
it.
This results in internal state updates and signal handlers being called
when an Add is received with a lower sequence number than a previously
seen Remove.
Minor test validation changes to allow tests to specify that only the
SequenceNumberMap should be written during an operation.
* [TESTS] Allow remove() verification to be more flexible
Now that we have introduced remove-before-add, we need a way
to validate that the SequenceNumberMap was written, but nothing
else. Add this feature to the validation path.
* Broadcast remove-before-add messages to P2P network
In order to aid in propagation of remove() messages, broadcast them
in the event the remove is seen before the add.
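One plausible shape of the remove-before-add handling described above (simplified stand-in types, not the actual P2PDataStorage code): record the remove's sequence number even when the corresponding add was never seen, and still broadcast it onward.

```java
import java.util.HashMap;
import java.util.Map;

class RemoveBeforeAddSketch {
    // Simplified stand-ins: keys and entries reduced to primitives.
    private final Map<String, Integer> sequenceNumberMap = new HashMap<>(); // payload hash -> last seen seqNr
    private final Map<String, Object> map = new HashMap<>();                // payload hash -> stored entry

    // Returns true if the remove was accepted (and should be broadcast onward).
    boolean onRemoveMessage(String hashOfPayload, int sequenceNumber) {
        Integer storedSequenceNumber = sequenceNumberMap.get(hashOfPayload);
        if (storedSequenceNumber != null && sequenceNumber <= storedSequenceNumber)
            return false; // stale message, ignore

        // Record the remove even if we never saw the corresponding add, so a later
        // add with a lower sequence number won't update state or signal listeners.
        sequenceNumberMap.put(hashOfPayload, sequenceNumber);
        // (persist sequenceNumberMap here)

        if (map.remove(hashOfPayload) != null) {
            // normal path: update the data store and signal HashMapChangedListeners
        }
        // Broadcast in both cases to help the remove propagate to peers that
        // did receive the add.
        return true;
    }
}
```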
* [TESTS] Clean up remove verification helpers
Now that there are cases where the SequenceNumberMap and Broadcast
are called, but no other internal state is updated, the existing helper
functions conflate too many decisions. Remove them in favor of explicitly
defining each state change expected.
* [BUGFIX] Fix duplicate sequence number use case (startup)
Fix a bug introduced in d484617385 that
did not properly handle a valid use case for duplicate sequence numbers.
For in-memory-only ProtectedStoragePayloads, the client nodes need a way
to reconstruct the Payloads after startup from peer and seed nodes. This
involves sending a ProtectedStorageEntry with a sequence number that
is equal to the last one the client had already seen.
This patch adds tests to confirm the bug and the fix, as well as the changes
necessary to allow adding Payloads that were previously seen but
removed during a restart.
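One plausible shape of the adjusted check (simplified types; the real logic lives in P2PDataStorage and also has to respect the remove-before-add behavior above): an add with a duplicate sequence number is only rejected if the payload is still in the map.

```java
import java.util.HashMap;
import java.util.Map;

class DuplicateSeqNrSketch {
    private final Map<String, Integer> sequenceNumberMap = new HashMap<>();
    private final Map<String, Object> map = new HashMap<>();

    // Accept an add whose sequence number equals the last one seen, but only if the
    // payload is not currently in the map (i.e. we are rebuilding after a restart).
    boolean isSequenceNrValidForAdd(String hashOfPayload, int sequenceNumber) {
        Integer stored = sequenceNumberMap.get(hashOfPayload);
        if (stored == null)
            return true;                       // never seen before
        if (sequenceNumber > stored)
            return true;                       // normal increase
        return sequenceNumber == stored && !map.containsKey(hashOfPayload);
    }
}
```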
* Clean up AtomicBoolean usage in FileManager
Although the code was correct, it was hard to understand the relationship
between the to-be-written object and the savePending flag.
Trade two dependent atomics for one and comment the code to make it
clearer for the next reader.
* [DEADCODE] Clean up FileManager.java
* [BUGFIX] Shorter delay values not taking precedence
Fix a bug in the FileManager where a saveLater called with a low delay
won't execute until the delay specified by a previous saveLater call.
The trade-off here is executing a task that returns early vs. losing
the requested delay.
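Taken together, these FileManager changes might look roughly like the following sketch (illustrative, not the actual class): a single AtomicReference holds the next object to write, every saveLater() schedules its own task with its own delay, and a task that fires after the object has already been written simply returns early.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;

class FileManagerSketch<T> {
    private final ScheduledExecutorService executor = Executors.newSingleThreadScheduledExecutor();
    // Holds the object waiting to be written; null means nothing is pending.
    private final AtomicReference<T> nextWrite = new AtomicReference<>();

    void saveLater(T persistable, long delayMs) {
        nextWrite.set(persistable);
        // Always schedule with the requested delay so a shorter delay is not
        // held hostage by an earlier, longer one; a redundant task is cheap.
        executor.schedule(this::writePending, delayMs, TimeUnit.MILLISECONDS);
    }

    private void writePending() {
        T toWrite = nextWrite.getAndSet(null);
        if (toWrite == null)
            return; // already written by an earlier task
        saveToDisk(toWrite);
    }

    private void saveToDisk(T persistable) {
        // serialize and write atomically to disk
    }
}
```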
* [REFACTOR] Inline saveNowInternal
Only one caller after deadcode removal.
* [TESTS] Introduce MapStoreServiceFake
Now that we want to make changes to the MapStoreService,
it isn't sufficient to have a Fake of the ProtectedDataStoreService.
Tests now use a REAL ProtectedDataStoreService and a FAKE MapStoreService
to exercise more of the production code and allow future testing of
changes to MapStoreService.
* Persist changes to ProtectedStorageEntrys
With the addition of ProtectedStorageEntrys, there are now persistable
maps that have different payloads and the same keys. In the
ProtectedDataStoreService case, the value is the ProtectedStorageEntry
which has a createdTimeStamp, sequenceNumber, and signature that can
all change, but still contain an identical payload.
Previously, the service was only updating the on-disk representation on
the first object and never again. So, when it was recreated from disk it
would not have any of the updated metadata. This was just copied from the
append-only implementation where the value was the Payload
which was immutable.
This hasn't caused any issues to this point, but it causes strange behavior
such as always receiving seqNr==1 items from seednodes on startup. It
is good practice to keep the in-memory objects and on-disk objects in
sync, and doing so avoids an unexpected failure in future dev work that
expects the same behavior as the append-only on-disk objects.
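The gist of the change, sketched with a plain map (the real service keys on P2PDataStorage.ByteArray and persists through Bisq's storage layer): persist whenever an entry is put, not only the first time the key appears.

```java
import java.util.HashMap;
import java.util.Map;

class ProtectedDataStoreSketch {
    private final Map<String, Object> store = new HashMap<>();

    // Before: only persisted when the key was new, so later entries with an
    // updated sequence number or signature never reached disk.
    void putOld(String hash, Object protectedStorageEntry) {
        if (!store.containsKey(hash)) {
            store.put(hash, protectedStorageEntry);
            persist();
        }
    }

    // After: always overwrite and persist so the on-disk entry carries the
    // latest metadata for the same payload.
    void put(String hash, Object protectedStorageEntry) {
        store.put(hash, protectedStorageEntry);
        persist();
    }

    private void persist() {
        // write the map to disk
    }
}
```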
* [DEADCODE] Remove protectedDataStoreListener
There were no users.
* [DEADCODE] Remove unused methods in ProtectedDataStoreService
The remove code checks to ensure these fields match, but the add code
never did. This could lead to a situation where a MailboxStoragePayload
could be added, but never removed.
Previously, the expire path, the remove path, and the onDisconnect
all used separate logic for updating the map, signaling listeners, and
removing PersistablePayload objects from the data store. This led to a
bug where the onDisconnect path did not update the protectedDataStore.
Combine the three code paths to ensure that the same state is updated
regardless of the context.
The code to remove expired entries in the onDisconnect path was not
correctly removing the Entry from the protectedDataStore.
This patch adds a test that failed and fixes the bug.
* All of this work is done on the UserThread so there is no need to
clone the map.
* ArrayList objects are faster to iterate than HashSets and the data is
guaranteed to be unique since the source is a ConcurrentHashMap
* Finding all items to remove first, then removing them all, is an
  easier-to-read pattern than removing during iteration (see the sketch below).
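A sketch of that pattern over a simplified map (placeholder entry type; the real code iterates the ConcurrentHashMap of ProtectedStorageEntry objects and hands the batch to removeFromMapAndDataStore):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class ExpireSketch {
    static class Entry {
        final long createdMs;
        Entry(long createdMs) { this.createdMs = createdMs; }
        boolean isExpired(long nowMs, long ttlMs) { return nowMs - createdMs > ttlMs; }
    }

    private final Map<String, Entry> map = new ConcurrentHashMap<>();

    void removeExpiredEntries(long nowMs, long ttlMs) {
        // 1. Collect everything to remove first (an ArrayList is cheap to iterate
        //    and the keys are unique because they come from the map).
        List<Map.Entry<String, Entry>> toRemove = new ArrayList<>();
        for (Map.Entry<String, Entry> e : map.entrySet()) {
            if (e.getValue().isExpired(nowMs, ttlMs))
                toRemove.add(e);
        }
        // 2. Then remove them all in one pass (in the real code the whole batch goes
        //    through removeFromMapAndDataStore so listeners are signaled once).
        for (Map.Entry<String, Entry> e : toRemove)
            map.remove(e.getKey());
    }
}
```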
It is currently possible to construct a valid Payload object
that implements both the ProtectedStoragePayload and
PersistableNetworkPayload interfaces even though this combination is
invalid.
Instead of depending on future reviewers to catch an error, assert that
ProtectedStoragePayloads and PersistableNetworkPayloads are incompatible
as objects inside a ProtectedStorageEntry.
This allows cleanup of removeExpiredEntries that branched on this
behavior.
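A minimal sketch of such a guard (placeholder interfaces; where exactly the check sits in the real entry classes is not shown here):

```java
class PayloadGuardSketch {
    interface ProtectedStoragePayload { }
    interface PersistableNetworkPayload { }

    static class ProtectedStorageEntry {
        private final ProtectedStoragePayload payload;

        ProtectedStorageEntry(ProtectedStoragePayload payload) {
            // A payload must not implement both interfaces; fail fast instead of
            // relying on reviewers to catch the invalid combination later.
            if (payload instanceof PersistableNetworkPayload)
                throw new IllegalArgumentException(
                        "ProtectedStoragePayload must not also be a PersistableNetworkPayload");
            this.payload = payload;
        }

        ProtectedStoragePayload getProtectedStoragePayload() { return payload; }
    }
}
```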
All test callers now just ask the TestState for a SavedTestState instead
of calling the SavedTestState ctor directly. This makes more sense given the
object relationship, since SavedTestState is only used internally by TestState.
One monolithic test was useful when it was under development to reduce
code churn, but now that the tests are complete it is easier to find
and run a specific test when separated into separate test files.
This also fixes a downside of Enclosed.class, which didn't allow individual
tests to be run in IntelliJ.
Now that the unit tests cover all of the per-Entry validation,
the tests that create specific configurations of ProtectedStorageEntry
and ProtectedMailboxStorageEntry objects can be removed in favor
of mockable Entry objects.
Using mocks prior to this patch was impossible due to the relationship
between the Entry objects and the P2PDataStorage helper functions. Now
that the objects are properly abstracted and tested, real unit tests
can be written for the P2PDataStore module.
This patch leaves the tests and adds an @Ignore so the reviewer can see
which unit test now supersedes the integration test.
Use a more compact version of string formatting
in log messages
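Presumably something along the lines of SLF4J's parameterized messages; a hypothetical before/after:

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class LogFormatSketch {
    private static final Logger log = LoggerFactory.getLogger(LogFormatSketch.class);

    void report(Object hashOfPayload, int sequenceNumber) {
        // Verbose: manual string concatenation.
        log.warn("Sequence number is invalid. hash=" + hashOfPayload + ", seqNr=" + sequenceNumber);
        // More compact: let the logger do the formatting (and skip it if WARN is disabled).
        log.warn("Sequence number is invalid. hash={}, seqNr={}", hashOfPayload, sequenceNumber);
    }
}
```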
Rename isMetadataEquals to matchesRelevantPubKey, which is more
descriptive of the actual check.
Now that all the code is abstracted and tested, the remove()
and removeMailboxData() functions are identical. Combine them and update
callers appropriately.
Now callers don't need to know the difference, which removes the
sharp edge originally found in #3556.
Make the remove validation more robust by asserting that the
correct remove message is broadcast. This will provide a better
safety net when combining the remove functions.
Let the objects compare their metadata instead of doing it for them. This
allows for actual unit testing and paves the way for deduplicating the
remove code paths.
This patch also removes an unnecessary check around comparing the hash
of the stored data to the new data's hash. That check can't fail since
the hash was a requirement for the map lookup in the first place.
The current check verifies that the stored Payload.ownerPubKey == stored Entry.ownerPubKey.
This is the same check that was done when the item was originally added
and there is no reason to do it again.
This mailbox-only check can now exist inside the object for which it
belongs. This makes it easier to test and moves closer to allowing
the deduplication of the remove() methods.
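A sketch of how that can look once the check lives on the entry objects themselves (simplified fields, and exactly which keys are compared is elided; the point is that the mailbox subclass overrides the base comparison with its own receiver-key rule):

```java
import java.security.PublicKey;

class PubKeyCheckSketch {
    static class ProtectedStorageEntry {
        final PublicKey ownerPubKey;
        ProtectedStorageEntry(PublicKey ownerPubKey) { this.ownerPubKey = ownerPubKey; }

        // Base check: the entry requesting the change must carry the same owner
        // key as the entry we have stored.
        boolean matchesRelevantPubKey(ProtectedStorageEntry storedEntry) {
            return ownerPubKey.equals(storedEntry.ownerPubKey);
        }
    }

    static class ProtectedMailboxStorageEntry extends ProtectedStorageEntry {
        final PublicKey receiversPubKey;
        ProtectedMailboxStorageEntry(PublicKey ownerPubKey, PublicKey receiversPubKey) {
            super(ownerPubKey);
            this.receiversPubKey = receiversPubKey;
        }

        // Mailbox-only rule, kept inside the object it belongs to: compare the
        // receiver keys instead of the owner keys (details simplified here).
        @Override
        boolean matchesRelevantPubKey(ProtectedStorageEntry storedEntry) {
            return storedEntry instanceof ProtectedMailboxStorageEntry
                    && receiversPubKey.equals(((ProtectedMailboxStorageEntry) storedEntry).receiversPubKey);
        }
    }
}
```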