Fix CSS color code xmr-orange, which was missing from dark mode.
Fix log message spelling/typo errors.
Removed 2 fixes from SellerStep3View so that chimp1984 can make
changes.
Remove address validator from XMR service address settings because
it does not support the https prefix.
I don't know why the tests failed, as I just added an overloaded method
and it should not have any impact. There is also one exception which
makes it even more obscure. I guess it's some test framework issue.
See the comment at the exception handling:
// If we remove the last argument (isNull()) tests fail. No idea why, as the broadcast method has an
// overloaded method with a nullable listener. Seems a test framework issue, as it should not matter if the
// method with listener is called with a null argument or the other method with no listener. We removed the
// null value from all other calls but here we can't as it breaks the test.
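For context, a rough sketch of the overload pair the comment refers to
(names are illustrative, not the exact Bisq signatures):

    // The two calls below should be equivalent, but the mocking framework
    // appears to distinguish them, so the test keeps the explicit null
    // argument (matched via isNull()) rather than the two-argument overload.
    void broadcast(BroadcastMessage message, NodeAddress sender) {
        broadcast(message, sender, null);
    }

    void broadcast(BroadcastMessage message, NodeAddress sender,
                   @Nullable BroadcastHandler.Listener listener) {
        // enqueue the message for sending to peers
    }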
It is important that we flush our queued requests
at shutdown and wait until broadcasting is completed, as a maker needs
to remove their offers at shutdown.
- Add handling for the case that there are very few connections (as in
a dev setup).
- Make BundleOfEnvelopes extend BroadcastMessage
- Add a complete handler for the Broadcaster shutdown in P2PService and
wait with the shutdown of other services until the broadcaster has
completed.
- Remove the case for a repeated shutdown call on P2PService as it
cannot happen.
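A minimal sketch of the intended shutdown ordering, with assumed method
names:

    // Flush queued broadcasts first; only when the broadcaster reports
    // completion do we continue shutting down the remaining P2P services.
    public void shutDown(Runnable shutDownCompleteHandler) {
        broadcaster.shutDown(() -> {
            // queued remove-offer messages have been broadcast by now
            networkNode.shutDown(shutDownCompleteHandler);
        });
    }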
On slow internet connections the current timeout makes it impossible to
get the initial data and therefore to use Bisq.
The timeout covers the request and response as well as the time it
takes to start up the network connection, which can also be quite slow.
In my scenario, it took about 6-10 sec for the connection, and the
request is at the moment nearly 3 MB, which takes about 24 sec on a
1 Mbit/s connection (note that over Tor the connection is slower, so if
normal speed is 3-5 Mbit/s, Tor's speed can be considerably lower). The
response data depends on the missing data/last update but can easily be
6 MB, which adds about another 48 sec. So one can easily hit the 90 sec
limit.
There is work in development for optimizing the initial data request,
but as that is more complex and it is not clear when it will be
deployed, I recommend that we increase the current timeout to 180 sec
to avoid the critical issue of users getting "locked out".
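For reference, the arithmetic behind those figures (sizes and speeds as
stated above):

    // time [sec] = size [MB] * 8 [bit/byte] / bandwidth [Mbit/s]
    double requestSec  = 3 * 8 / 1.0;  // ~24 sec for the ~3 MB request at 1 Mbit/s
    double responseSec = 6 * 8 / 1.0;  // ~48 sec for a ~6 MB response
    double connectSec  = 10;           // 6-10 sec for the connection itself
    double totalSec = connectSec + requestSec + responseSec; // ~82 sec, near the 90 sec limit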
I do not agree that disallowing Throwable in a catch makes the code
better. Unknown exceptions can be found more easily if there is an
error log at the code where they occurred.
I would prefer some flexibility, as is the case with the IDEA code
analysis, where one can edit and customize the suggestions.
Ignore annotations would help.
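A minimal example of the pattern defended here, where the error log at
the catch site pinpoints where the unknown exception occurred
(processMessage and envelope are placeholders):

    try {
        processMessage(envelope);
    } catch (Throwable t) {
        // The log at this exact call site is what makes an unknown
        // exception easy to locate later.
        log.error("Unexpected error while processing message", t);
    }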
There have been several long delays as well as a wrong order in the
shutdown process (the wallet got shut down after the network shutdown).
Shutdown is now pretty fast, but depends on open offers and connections.
If torControlPort is specified, but neither torControlPassword nor
torControlCookieFile are specified, we have cookieFile == null in
bisq.network.p2p.network.RunningTor, but RunningTor.getTor() assumes a
cookie file has been specified and tries to check that the file exists,
causing the thread to crash. Added a check for null to fix this.
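A sketch of the added guard, with names assumed from the description
above:

    // Only check existence when a cookie file was actually configured;
    // password-only setups legitimately leave cookieFile == null.
    if (cookieFile != null && !cookieFile.exists())
        throw new IllegalStateException(
                "Tor control cookie file not found: " + cookieFile);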
and do not broadcast.
It is unclear why we receive expired data (some are very old), but a
manipulated node might produce that, and as it is only removed by the
batch process running each minute to clean out expired data, it could
still propagate. It is also an attack vector to flood the network with
outdated offers where the maker is likely not online.
Should fix https://github.com/bisq-network/bisq/issues/4026
The getAllConnections() call in the while loop always returned the same
number of nodes, so the timeout of 15 sec was always triggered.
We now wait for the shutdown handlers of the connections, and once all
are called we run our handler. If it takes longer than our timeout of
3 sec, the shutdown handler gets called by the timeout.
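A sketch of the new wait logic; Connection.shutDown taking a completion
handler is an assumption based on the description:

    // Run completeHandler once every connection's shutdown handler has
    // fired, or let the 3 sec fallback fire it if they take too long.
    void shutDownConnections(Collection<Connection> connections, Runnable completeHandler) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        AtomicBoolean done = new AtomicBoolean(false);
        AtomicInteger pending = new AtomicInteger(connections.size());
        Runnable completeOnce = () -> {
            if (done.compareAndSet(false, true)) {
                scheduler.shutdown();
                completeHandler.run();
            }
        };
        scheduler.schedule(completeOnce, 3, TimeUnit.SECONDS); // fallback timeout
        connections.forEach(connection ->
                connection.shutDown(() -> {
                    if (pending.decrementAndGet() == 0)
                        completeOnce.run(); // all handlers called
                }));
    }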
The large binary objects in p2p/src/main/resources/ are updated on every
Bisq release with the latest network data to avoid the need for new Bisq
clients to download all of this information from the network, which
would easily overload seed nodes and generally bog down the client.
This approach works well enough for its purposes, but comes with the
significant downside of storing all of this binary data in Git history
forever. The current versions of these binary objects total about 65M,
and they grow with every release. In aggregate, this has caused the
total size of the repository to grow to 360M, making it cumbersome to
clone over a low-bandwidth connection, and slowing down various local
Git operations.
To avoid further exacerbating this problem, this commit sets these files
up to be tracked via Git LFS. There's nothing we can do about the 360M
of files that already exist in history, but we can ensure it doesn't
grow in this unchecked way going forward. For an understanding of how
Git LFS works, see the reference material at [1], and see also the
sample project and README at [2].
The following command was used to track the files:
$ git lfs track "p2p/src/main/resources/*BTC_MAINNET"
Tracking "p2p/src/main/resources/AccountAgeWitnessStore_BTC_MAINNET"
Tracking "p2p/src/main/resources/BlindVoteStore_BTC_MAINNET"
Tracking "p2p/src/main/resources/DaoStateStore_BTC_MAINNET"
Tracking "p2p/src/main/resources/ProposalStore_BTC_MAINNET"
Tracking "p2p/src/main/resources/SignedWitnessStore_BTC_MAINNET"
Tracking "p2p/src/main/resources/TradeStatistics2Store_BTC_MAINNET"
We are using GitHub's built-in LFS service here, and it's important to
understand that there are storage and bandwidth limits there. We have
1G total storage and 1G per month of bandwidth on the free tier. We will
certainly exceed this, and so must purchase at least one "data pack"
from GitHub, possibly two. One gets us to 50G storage and bandwidth.
In an attempt to avoid unnecessary LFS bandwidth usage, this commit also
updates the Travis CI build configuration to cache Git LFS files, such
that they are not re-downloaded on every CI build (see [3] and [4]
below). With that out of the way, the variable determining whether we
exceed the monthly limit is how many clones we have every month, and
there are many, though it's not clear how many are Travis CI and how
many are users / developers.
Tracking these files via LFS means that developers will need to have Git
LFS installed in order to properly synchronize the files. If a developer
does not have LFS installed, cloning will complete successfully and the
build will complete successfully, but the app will fail when trying to
actually load the p2p data store files. For this reason, the build has
been updated to proactively check that the p2p data store files have
been properly synchronized via LFS, and if not, the build fails with a
helpful error message. The docs/build.md instructions have also been
updated accordingly.
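For illustration, one way such a check can work: an un-synchronized LFS
file is a small text pointer with a fixed header, while a properly
synchronized store file is megabytes of binary data (the actual build
check may differ):

    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Path;

    class LfsCheck {
        // A Git LFS pointer file is a tiny text file starting with this
        // version line.
        static boolean isUnsyncedLfsPointer(Path file) throws IOException {
            if (Files.size(file) > 1024)
                return false;
            String head = new String(Files.readAllBytes(file), StandardCharsets.UTF_8);
            return head.startsWith("version https://git-lfs.github.com/spec/v1");
        }
    }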
It is important that we make this change now, not only to avoid growing
the repository in the way described above as we have been doing now for
many releases, but also because we are now considering adding yet more
binary objects to the repository, as proposed at
https://github.com/bisq-network/projects/issues/25.
[1]: https://git-lfs.github.com
[2]: https://github.com/cbeams/lfs-test
[3]: https://docs-staging.travis-ci.com/user/customizing-the-build/#git-lfs
[4]: https://github.com/travis-ci/travis-ci/issues/8787#issuecomment-394202791
The former class is dead code, together with its store service, as they
were only referenced from CorePersistenceProtoResolver::fromProto, the
binding logic and from AppendOnlyDataStoreService by orphaned migration
code. However, migration from the old persisted data was completed long
ago and the store file is no longer being read or written from anywhere
in the codebase.
Also remove the associated PersistableEnvelope proto message type, along
with the TradeStatisticsList message type. The latter is long deprecated
and has no corresponding Java class implementing PersistableEnvelope, so
removing it won't change behaviour (outside the exception message thrown
when attempting to resolve it).
* Report HS version to pricenode
In order to evaluate progress on https://github.com/bisq-network/projects/issues/23,
the Bisq app reports its hidden service version.
This change is going to be undone as soon as we do not need the
info anymore.
* Added hsversion scraper script
* Added installer/uninstaller
* Cleanup
* Fix unit name
Here, the tor object is a member variable, and there are cases where
this member variable is not yet set.
A situation arose where a SIGTERM/SIGINT shutdown was requested and,
because the member variable was not set, Tor was left running.
Remove an unused PersistableEnvelope interface from the following five
PersistableNetworkPayload implementations:
AccountAgeWitness, BlindVotePayload, ProposalPayload,
SignedWitness, TradeStatistics2
These already have corresponding *Store envelope classes which correctly
implement the interface.
The close-connection process fired up worker threads to actually
close the connections. Yet, once all threads had been spawned,
the code proceeded assuming that there were no connections left open,
without checking.
This led to situations where Tor had already been shut down but
connections were still open. These connections tried to close
gracefully, but without Tor that only caused a wall of exceptions.
Currently bisq desktop does not accept IPv6 addresses in the settings for
custom nodes or via the --btcNodes command line option. The separation of
address and port is handled incorrectly in core / BtcNodes::fromFullAddress.
This results in IPv6 addresses being ignored. Where Tor is enabled for
Bitcoin connections, we need to handle the IPv6 address response
from Tor DNS lookup.
Fixes #3990
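A sketch of host/port separation that keeps IPv6 literals intact (the
actual fix in BtcNodes::fromFullAddress may differ in detail):

    // "[2001:db8::1]:8333" -> { "2001:db8::1", "8333" }
    // "1.2.3.4:8333"       -> { "1.2.3.4", "8333" }
    static String[] splitHostPort(String fullAddress) {
        if (fullAddress.startsWith("[")) {
            int end = fullAddress.indexOf(']');
            String host = fullAddress.substring(1, end);
            String port = end + 2 <= fullAddress.length()
                    ? fullAddress.substring(end + 2) : "";
            return new String[] { host, port };
        }
        int colon = fullAddress.lastIndexOf(':');
        if (colon < 0)
            return new String[] { fullAddress, "" }; // no explicit port
        return new String[] { fullAddress.substring(0, colon),
                              fullAddress.substring(colon + 1) };
    }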
Make the default toPersistableMessage() method of PersistableEnvelope
simply delegate to Proto.toProtoMessage for speed, so that stores can
explicitly implement (Threaded|UserThreadMapped)PersistableEnvelope if
they actually need concurrency control.
As part of this, make PeerList implement PersistableEnvelope directly
instead of extending PersistableList, as it is non-critical & cloned on
the user thread prior to storage anyway, so it doesn't need to be
thread-safe.
In this way, only PaymentAccountList & small DAO-related stores extend
PersistableList, so they can all be made user-thread-mapped.
After this change, the only concrete store classes not implementing
(Threaded|UserThreadMapped)PersistableEnvelope are:
AccountAgeWitness, BlindVotePayload, ProposalPayload, SignedWitness,
TradeStatistics2, NavigationPath & PeerList
The first five appear to erroneously implement PersistableEnvelope and
can be cleaned up in a separate commit. The last two are non-critical.
(Make NavigationPath.path an immutable list, for slightly better thread
safety anyway - that way it will never be observed half-constructed.)
Add toProtoMessageSynchronized() default method to PersistableEnvelope,
which performs (blocking) protobuf serialisation in the user thread,
regardless of the calling thread. This should prevent data races like
the ConcurrentModificationException observed in #3752, under the
reasonable assumption that shared persistable objects are only mutated
in the user thread.
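A sketch of such a default method, assuming Bisq's UserThread executor
and the protobuf Message type (the real interface may differ):

    default com.google.protobuf.Message toProtoMessageSynchronized() {
        // Serialize on the user thread and block the caller until done;
        // assumes the caller is not itself the user thread.
        FutureTask<com.google.protobuf.Message> task = new FutureTask<>(this::toProtoMessage);
        UserThread.execute(task);
        try {
            return task.get();
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        }
    }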
Also add a ThreadedPersistableEnvelope sub-interface overriding the
default method above, to let objects which are expensive to serialise
(like DaoStateStore) be selectively serialised in the 'save-file-task-X'
thread as before, but directly synchronised with each mutating op. As
most objects are cheap to serialise, this avoids a noticeable perf drop
without having to track down every mutating method for each store.
In all cases but one, classes implementing ThreadedPersistableEnvelope
are stores like TradeStatistics2Store, with a single ConcurrentHashMap
field. These require no further serialisation, since the map entries are
immutable, so the only mutating operations are map.put(..) calls which
are already synchronised with map reads. (Even if map.values().stream()
sees updates @ different keys happen out-of-order, it should be benign.)
The remaining case is DaoStateStore, which is only ever reset or
modified via a single persist(..) call with a cloned DaoState instance
and hash chain from DaoStateSnapshotService, so there is no aliasing
risk from the various DAO state mutations done in DaoStateService and
elsewhere.
This should fix #3752.
Minor change for consistency: narrow the signature of some remaining
such methods, which have return type 'PersistableEnvelope'.
(This excludes some other cases with return type 'NetworkEnvelope'.)
Prior to this commit, the way that the appDataDir and its subdirectories
were created was a haphazard process that worked but in a fragile and
non-obvious way. When Config was instantiated, an attempt to call
btcNetworkDir.mkdir() was made, but if appDataDir did not already exist,
this call would always fail because mkdir() does not create parent
directories. This problem was never detected, though, because the
KeyStorage class happened to call mkdirs() on its 'keys' subdirectory,
which, because of the plural mkdirs() call ended up creating the whole
${appDataDir}/${btcNetworkDir}/keys hierarchy. Other btcNetworkDir
subdirectories such as tor/ and db/ then benefited from the hierarchy
already existing when they attempted to call mkdir() for their own dirs.
So the whole arrangement worked only because KeyStorage happened to make
a mkdirs() call and because that code in KeyStorage happened to get
invoked before the code that managed the other subdirectories.
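The distinction at the heart of this is standard java.io.File behavior
(paths illustrative):

    File appDataDir = new File(System.getProperty("user.home"), ".local/share/Bisq");
    File keysDir = new File(appDataDir, "btc_mainnet/keys");
    keysDir.mkdir();  // returns false if btc_mainnet/ is missing: no parents created
    keysDir.mkdirs(); // creates appDataDir, btc_mainnet/ and keys/ as needed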
This change ensures that appDataDir and all its subdirectories are
created up front, such that they are guaranteed to exist by the time
they are injected into Storage, KeyStorage, WalletsSetup and TorSetup.
The hierarchy is unchanged, structured as it always has been:
${appDataDir}
└── btc_mainnet
├── db
├── keys
├── wallet
└── tor
Note that the tor/ subdirectory actually gets deleted and re-created
within the TorSetup infrastructure regardless of whether the directory
exists beforehand.
In previous commits, BisqEnvironment functionality has been fully ported
to the new, simpler and more type-safe Config class. This change removes
BisqEnvironment and all dependencies on the Spring Framework Environment
interface that it implements.
The one exception is the pricenode module, which is separate and apart
from the rest of the codebase in that it is a standalone, Spring-based
HTTP service.
Prior to this commit, BisqExecutable has been responsible for parsing
command line and config file options and BisqEnvironment has been
responsible for assigning default values to those options and providing
access to option values to callers throughout the codebase.
This approach has worked, but at considerable costs in complexity,
verbosity, and lack of any type-safety in option values. BisqEnvironment
is based on the Spring Framework's Environment abstraction, which
provides a great deal of flexibility in handling command line options,
environment variables, and more, but also operates on the assumption
that such inputs have String-based values.
After having this infrastructure in place for years now, it has become
evident that using Spring's Environment abstraction was both overkill
for what we needed and limited us from getting the kind of concision and
type safety that we want. The Environment abstraction is by default
actually too flexible. For example, Bisq does not want or need to have
environment variables potentially overriding configuration file values,
as this increases our attack surface and makes our threat model more
complex. This is why we explicitly removed support for handling
environment variables quite some time ago.
The BisqEnvironment class has also organically evolved toward becoming a
kind of "God object", responsible for more than just option handling. It
is also, for example, responsible for tracking the status of the user's
local Bitcoin node, if any. It is also responsible for writing values to
the bisq.properties config file when certain ban filters arrive via the
p2p network. In the commits that follow, these unrelated functions will
be factored out appropriately in order to separate concerns.
As a solution to these problems, this commit begins the process of
eliminating BisqEnvironment in favor of a new, bespoke Config class
custom-tailored to Bisq's needs. Config removes the responsibility for
option parsing from BisqExecutable, and in the end provides "one-stop
shopping" for all option parsing and access needs.
The changes included in this commit represent a proof of concept for the
Config class, where handling of a number of options has been moved from
BisqEnvironment and BisqExecutable over to Config. Because the migration
is only partial, both Config and BisqEnvironment are injected
side-by-side into calling code that needs access to options. As the
migration is completed, BisqEnvironment will be removed entirely, and
only the Config object will remain.
An additional benefit of the elimination of BisqEnvironment is that it
will allow us to remove our dependency on the Spring Framework (with the
exception of the standalone pricenode application, which is Spring-based
by design).
Note that while this change and those that follow it are principally a
refactoring effort, certain functional changes have been introduced. For
example, Bisq now supports a `--configFile` argument at the command line
that functions very similarly to Bitcoin Core's `-conf` option.
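As an illustration of the direction (Config parses with jopt-simple;
the exact option wiring here is a sketch, not the final code):

    static void parse(String[] args) {
        OptionParser parser = new OptionParser();
        OptionSpec<File> configFileOpt = parser
                .accepts("configFile", "Path to the configuration file")
                .withRequiredArg()
                .ofType(File.class)
                .defaultsTo(new File("bisq.properties"));
        OptionSet options = parser.parse(args);
        File configFile = options.valueOf(configFileOpt); // typed access, no string casting
    }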
This reverts commit 26c053dae8 because
Kotlin compilation slows down the build, was applied too broadly to all
modules instead of just the one that needed it, and most importantly
because we never actually went ahead with converting anything of
importance to Kotlin. The commit being reverted was basically a demo,
converting a single test type to show what kind of difference it would
make.
We already have a garbage collection thread that runs every minute
to clean up items. Doing it again during onDisconnect is an unnecessary
optimization that adds complexity and caused bugs.
For example, the original implementation did not handle the sequence
number map correctly and was removing entries during a stream iteration.
This also reduces the complexity of testing. There is one code path
responsible for reducing TTLs and one code path responsible for
expiring entries. Much easier to reason about.
1. Remove delete during stream iteration
2. Minimize branching w/ early returns for bad states
3. Use stream filter for readability
4. Implement additional checks that should be done when removing entries
Before refactoring the function, ensure the tests cover all cases. This
fixes a bug where the payload TTL was too low in some instances, causing
backDate to do no work when it should.
We had a small memory leak in the code base. Namely, there were some
thread pools in use but not shut down when they were no longer needed.
The result was that the threads and their parent threads were kept
alive, which led to hundreds of stale threads over the course of
several days.
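The standard remedy, shown here in generic form (not the exact Bisq
call sites):

    static void runAndShutDown(Runnable task) throws InterruptedException {
        ExecutorService pool = Executors.newCachedThreadPool();
        try {
            pool.submit(task);
        } finally {
            pool.shutdown(); // no new tasks; let queued ones finish
            if (!pool.awaitTermination(5, TimeUnit.SECONDS))
                pool.shutdownNow(); // force-stop stragglers
        }
    }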
isDataOwner is used when deciding how many peer nodes should receive
a BroadcastMessage. If the BroadcastMessage originated
on the local node it is sent to ALL peer nodes with a small delay.
If the node is only relaying the message (it originated on a different
node) it is sent to MAX(peers.size(), 7) peers with a delay that is
twice as long.
All the information needed to determine whether or not the
BroadcastMessage originated on the local node is available at the final
broadcast site and there is no reason to have callers pass it in.
In the event that the sender address is not known during broadcast (which
is only a remote possibility due to how early the local node address
is set during startup) we can default to relay mode.
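A sketch of that determination at the broadcast site (names assumed):

    NodeAddress myAddress = networkNode.getNodeAddress();
    // Default to relay mode if our own address is not yet known (early startup).
    boolean isDataOwner = myAddress != null && myAddress.equals(sender);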
This first patch just removes the deep parameters. The next will remove
everything else. There is one real change in LiteNodeNetworkService.java
where it was using the local node when it should have been using the
peer node. This was updated to the correct behavior.
Now that more callers have moved internal, the public facing API
can be cleaner and more simple. This should lead to a more maintainable
API and less sharp edges with future development work.
Now that the only user is internal, the API can be made private and the
tests can be removed. This involved adding a few test cases to
processGetDataResponse to ensure the invalid hash size condition was
still covered.
The only two users of this constructor are the fromProto path, which
now creates an empty Capabilities object similar to GetDataResponse,
and the internal usage of Capabilities.app, which is initialized to
empty.
The only two users of this constructor are the fromProto path, which
already creates an empty Capabilities object if one is not provided,
and the internal usage of Capabilities.app, which is initialized to
empty.
Remove the @Nullable so future readers aren't confused.
Checking for null creates hard-to-read code and it is simpler to just
create an empty set if we receive a pre-v0.6 GetDataResponse protobuf
message that does not have the field set.
Write a few integration tests that exercise interesting
synchronization states, including the lost-remove bug. These fail
with the proper validation, but will pass at the end of the new feature
development.
Previously, multiple handlers needed to signal off one global variable.
Now, that this check is inside the singleton P2PDataStorage, make it
non-static and private.
Now that we want to unit test the GetData path which has different
behavior w.r.t. broadcasts, the tests need a way to verify that
state was updated, but not broadcast during an add.
This patch changes all verification functions to take each state update
explicitly so the tests can do the proper verification.
Introduce a generic function that can be used to filter
Map<ByteArray, PersistableNetworkPayload> or
Map<ByteArray, ProtectedStorageEntry>.
Used to deduplicate the GetData code paths and ensure the logic is the
same between the two payload types.
Move the capability check inside the stream operation. This should
improve performance slightly, but more importantly it makes the
two filter functions almost identical so they can be combined.
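A sketch of the combined generic filter (names and the exact predicate
are assumptions):

    static <T extends NetworkPayload> Set<T> filterKnownHashes(
            Map<ByteArray, T> toFilter,
            Set<ByteArray> knownHashes,
            Predicate<T> peerSupportsCapabilities) {
        return toFilter.entrySet().stream()
                .filter(entry -> !knownHashes.contains(entry.getKey())) // peer lacks it
                .map(Map.Entry::getValue)
                .filter(peerSupportsCapabilities) // capability check in-stream
                .collect(Collectors.toSet());
    }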
The appendOnlyDataStoreService and map already have unique keys that
are based on the hash of the payload. This would catch instances
where:
PersistableNetworkPayload
- None: The key is based on ByteArray(payload.getHash()) which is the
same as this check.
ProtectedStorageEntry
- Cases where multiple PSEs contain payloads that have equivalent
hashCode(), but different data.toProtoMessage().toByteArray().
I don't think it is a good idea to keep 2 "unique" methods on
payloads. This is likely left over from a time when
Payload hashCode() needed to be different than the hash of
the payload.
Remove the dependence on the connection object by having the handler
pass in the peer's capabilities. This now allows unit testing of
buildGetDataResponse without any connection dependencies.
Move the logging that utilizes connection information into the request
handler. Now, buildGetDataResponse just returns whether or not the list
is truncated which will make it easier to test.
Changed the log to reference getDataResponse instead of getData. Now
that we might truncate the response, it isn't true that this is exactly
what the peer asked for.
These are identical test cases to the requestHandler tests, but with
far fewer dependencies. The requestHandler tests will eventually be deleted,
but they are going to remain throughout development as an extra safety
net.
As part of changing the GetData path, we want to move all creation
and processing of GetData messages inside P2PDataStorage. This will allow
easier unit testing of the behavior as well as cleaner code in the
request handlers that can just focus on nonces, connections, etc.
* Change access level for checkMaxConnections to be tested
* Refactor checkMaxConnections
Fix connection limit checks so as to prevent the following warning:
> WARN b.n.p2p.peers.PeerManager: No candidates found to remove (That
case should not be possible as we use in the last case all
connections).
* Add MockNode that allows for simulating connections
* Add PeerManagerTest
The old PeerManagerTest was located under network/p2p/routing, which is
no longer the correct location. Additionally, it was outdated so I
just removed it and added a new file under network/p2p/peers containing
tests for checkMaxConnections.
* Add testCompile dependency to core
This is necessary because bisq.network.p2p.MockNode imports
bisq.core.network.p2p.seed.DefaultSeedNodeRepository.
* Update based on review feedback
Mock the SeedNodeRepository superclass, thus eliminating the dependency
to core.
* [PR COMMENTS] Make maxSequenceNumberBeforePurge final
Instead of using a subclass that overwrites a value, utilize Guice
to inject the real value of 10000 in the app and let the tests overwrite
it with their own.
* [TESTS] Clean up 'Analyze Code' warnings
Remove unused imports and clean up some access modifiers now that
the final test structure is complete
* [REFACTOR] HashMapListener::onAdded/onRemoved
Previously, this interface was called each time an item was changed. This
required listeners to understand performance implications of multiple
adds or removes in a short time span.
Instead, give each listener the ability to process a list of added or
removed entries, which can help them avoid performance issues.
This patch is just a refactor. Each listener is called once for each
ProtectedStorageEntry. Future patches will change this.
* [REFACTOR] removeFromMapAndDataStore can operate on Collections
Minor performance overhead for constructing MapEntry and Collections
of one element, but keeps the code cleaner and all removes can still
use the same logic to remove from map, delete from data store, signal
listeners, etc.
The MapEntry type is used instead of Pair since it will require less
operations when this is eventually used in the removeExpiredEntries path.
* Change removeFromMapAndDataStore to signal listeners at the end in a batch
All current users still call this one-at-a-time. But, it gives the ability
for the expire code path to remove in a batch.
* Update removeExpiredEntries to remove all items in a batch
This will cause HashMapChangedListeners to receive just one onRemoved()
call for the expire work instead of multiple onRemoved() calls for each
item.
This required a bit of updating for the remove validation in tests so
that it correctly compares onRemoved with multiple items.
* ProposalService::onProtectedDataRemoved signals listeners once on batch removes
#3143 identified an issue that tempProposals listeners were being
signaled once for each item that was removed during the P2PDataStore
operation that expired old TempProposal objects. Some of the listeners
are very expensive (ProposalListPresentation::updateLists()) which results
in large UI performance issues.
Now that the infrastructure is in place to receive updates from the
P2PDataStore in a batch, the ProposalService can apply all of the removes
received from the P2PDataStore at once. This results in only 1 onChanged()
callback for each listener.
The end result is that updateLists() is only called once and the performance
problems are reduced.
This removes the need for #3148 and those interfaces will be removed in
the next patch.
* Remove HashmapChangedListener::onBatch operations
Now that the only user of this interface has been removed, go ahead
and delete it. This is a partial revert of
f5d75c4f60 that includes the code that was
added into ProposalService that subscribed to the P2PDataStore.
* [TESTS] Regression test for #3629
Write a test that shows the incorrect behavior for #3629, the hashmap
is rebuilt from disk using the 20-byte key instead of the 32-byte key.
* [BUGFIX] Reconstruct HashMap using 32-byte key
Addresses the first half of #3629 by ensuring that the reconstructed
HashMap always has the 32-byte key for each payload.
It turns out, the TempProposalStore persists the ProtectedStorageEntrys
on-disk as a List and doesn't persist the key at all. Then, on
reconstruction, it creates the 20-byte key for its internal map.
The fix is to update the TempProposalStore to use the 32-byte key instead.
This means that all writes, reads, and reconstruction of the
TempProposalStore use the 32-byte key, which matches perfectly with the
in-memory map of the P2PDataStorage that expects 32-byte keys.
Important to note that until all seednodes receive this update, nodes
will continue to have both the 20-byte and 32-byte keys in their HashMap.
* [BUGFIX] Use 32-byte key in requestData path
Addresses the second half of #3629 by using the HashMap, not the
protectedDataStore to generate the known keys in the requestData path.
This won't have any bandwidth reduction until all seednodes have the
update and only have the 32-byte key in their HashMap.
Fixes #3629
* [DEAD CODE] Remove getProtectedDataStoreMap
The only user has been migrated to getMap(). Delete it so future
development doesn't have the same 20-byte vs 32-byte key issue.
* [TESTS] Allow tests to validate SequenceNumberMap write separately
In order to implement remove-before-add behavior, we need a way to
verify that the SequenceNumberMap was the only item updated.
* Implement remove-before-add message sequence behavior
It is possible to receive a RemoveData or RemoveMailboxData message
before the relevant AddData, but the current code does not handle
it.
This results in internal state updates and signal handlers being called
when an Add is received with a lower sequence number than a previously
seen Remove.
Minor test validation changes to allow tests to specify that only the
SequenceNumberMap should be written during an operation.
* [TESTS] Allow remove() verification to be more flexible
Now that we have introduced remove-before-add, we need a way
to validate that the SequenceNumberMap was written, but nothing
else. Add this feature to the validation path.
* Broadcast remove-before-add messages to P2P network
In order to aid in propagation of remove() messages, broadcast them
in the event the remove is seen before the add.
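A sketch of the remove-before-add handling (names assumed from
P2PDataStorage):

    // A remove for a payload we never saw added: record its sequence
    // number so a late add with a lower number is rejected, and
    // rebroadcast so the removal keeps propagating.
    if (!map.containsKey(hashOfPayload)) {
        sequenceNumberMap.put(hashOfPayload, new MapValue(sequenceNumber, clock.millis()));
        persistSequenceNumberMap(); // only the map is written
        broadcaster.broadcast(removeDataMessage, sender);
        return false; // nothing was removed locally
    }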
* [TESTS] Clean up remove verification helpers
Now that there are cases where the SequenceNumberMap and Broadcast
are called, but no other internal state is updated, the existing helper
functions conflate too many decisions. Remove them in favor of explicitly
defining each state change expected.
* [BUGFIX] Fix duplicate sequence number use case (startup)
Fix a bug introduced in d484617385 that
did not properly handle a valid use case for duplicate sequence numbers.
For in-memory-only ProtectedStoragePayloads, the client nodes need a way
to reconstruct the Payloads after startup from peer and seed nodes. This
involves sending a ProtectedStorageEntry with a sequence number that
is equal to the last one the client had already seen.
This patch adds tests to confirm the bug and fix as well as the changes
necessary to allow adding of Payloads that were previously seen, but
removed during a restart.
* Clean up AtomicBoolean usage in FileManager
Although the code was correct, it was hard to understand the relationship
between the to-be-written object and the savePending flag.
Trade two dependent atomics for one and comment the code to make it more
clear for the next reader.
* [DEADCODE] Clean up FileManager.java
* [BUGFIX] Shorter delay values not taking precedence
Fix a bug in the FileManager where a saveLater called with a low delay
won't execute until the delay specified by a previous saveLater call.
The trade-off here is the execution of a task that returns early vs.
losing the requested delay.
* [REFACTOR] Inline saveNowInternal
Only one caller after deadcode removal.
* [TESTS] Introduce MapStoreServiceFake
Now that we want to make changes to the MapStoreService,
it isn't sufficient to have a Fake of the ProtectedDataStoreService.
Tests now use a REAL ProtectedDataStoreService and a FAKE MapStoreService
to exercise more of the production code and allow future testing of
changes to MapStoreService.
* Persist changes to ProtectedStorageEntrys
With the addition of ProtectedStorageEntrys, there are now persistable
maps that have different payloads and the same keys. In the
ProtectedDataStoreService case, the value is the ProtectedStorageEntry
which has a createdTimeStamp, sequenceNumber, and signature that can
all change, but still contain an identical payload.
Previously, the service was only updating the on-disk representation on
the first object and never again. So, when it was recreated from disk it
would not have any of the updated metadata. This was just copied from the
append-only implementation where the value was the Payload
which was immutable.
This hasn't caused any issues to this point, but it causes strange behavior
such as always receiving seqNr==1 items from seednodes on startup. It
is good practice to keep the in-memory objects and on-disk objects in
sync and removes an unexpected failure in future dev work that expects
the same behavior as the append-only on-disk objects.
* [DEADCODE] Remove protectedDataStoreListener
There were no users.
* [DEADCODE] Remove unused methods in ProtectedDataStoreService
The remove code checks to ensure these fields match, but the add code
never did. This could lead to a situation where a MailboxStoragePayload
could be added, but never removed.
Previously, the expire path, the remove path, and the onDisconnect path
all used separate logic for updating the map, signaling listeners, and
removing PersistablePayload objects from the data store. This led to a
bug where the onDisconnect path did not update the protectedDataStore.
Combine the three code paths to ensure that the same state is updated
regardless of the context.
The code to remove expired entries in the onDisconnect path was not
correctly removing the entry from the protectedDataStore.
This patch adds a test that failed and fixes the bug.
* All of this work is done on the UserThread so there is no need to
clone the map.
* ArrayList objects are faster to iterate than HashSets and the data is
guaranteed to be unique since the source is a ConcurrentHashMap
* Finding all items to remove first, then removing them all, is an
easier-to-read code pattern than removing during iteration.
It is currently possible to construct a valid Payload object
that implements both the ProtectedStoragePayload and
PersistableNetworkPayload interfaces even though this combination is
invalid.
Instead of depending on future reviewers to catch an error, assert that
ProtectedStoragePayloads and PersistableNetworkPayloads are incompatible
as objects inside a ProtectedStorageEntry.
This allows cleanup of removeExpiredEntries that branched on this
behavior.
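A sketch of the assertion (placement and message are illustrative):

    // A payload must not be both a ProtectedStoragePayload and a
    // PersistableNetworkPayload; fail fast instead of relying on reviewers.
    static void checkPayloadCompatibility(ProtectedStoragePayload payload) {
        if (payload instanceof PersistableNetworkPayload)
            throw new IllegalArgumentException(
                    "ProtectedStoragePayload must not also implement PersistableNetworkPayload");
    }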
All test callers now just ask the TestState for a SavedTestState
instead of calling the SavedTestState ctor. This makes more sense with
the object relationship, since SavedTestState is only used internally
to TestState.
One monolithic test was useful when it was under development to reduce
code churn, but now that the tests are complete it is easier to find
and run a specific test when separated into separate test files.
This also fixes a downside of Enclosed.class that didn't allow
individual tests to be run in IntelliJ.
Now that the unit tests cover all of the per-Entry validation,
the tests that create specific configuration of ProtectedStorageEntry
and ProtectedMailboxStorageEntry objects can be removed in favor
of mockable Entrys.
Using mocks prior to this patch was impossible due to the relationship
between the Entry objects and the P2PDataStorage helper functions. Now
that the objects are properly abstracted and tested, real unit tests
can be written for the P2PDataStore module.
This patch leaves the tests and adds an @Ignore so the reviewer can see
which unit test now supersedes the integration test.
Use a more compact version of string formatting
in log messages
Rename isMetadataEquals to matchesRelevantPubKey
which is more descriptive of the actual check
Now that all the code is abstracted and tested, the remove()
and removeMailboxData() functions are identical. Combine them and update
callers appropriately.
Now callers don't need to know the difference, and it removes the
sharp edge originally found in #3556.
Make the remove validation more robust by asserting that the
correct remove message is broadcast. This will provide a better
safety net when combining the remove functions.
Let the objects compare their metadata instead of doing it for them. This
allows for actual unit testing and paves the way for deduplicating the
remove code paths.
This patch also removes an unnecessary check around comparing the hash
of the stored data to the new data's hash. That check can't fail since
the hash was a requirement for the map lookup in the first place.
The current check verifies that the stored Payload.ownerPubKey == stored Entry.ownerPubKey.
This is the same check that was done when the item was originally added
and there is no reason to do it again.
This mailbox-only check can now exist inside the object for which it
belongs. This makes it easier to test and moves closer to allowing
the deduplication of the remove() methods.
Move the signature checks into the objects to clean up the calling code
and make it more testable.
The testing now has to take real hashes so some work was done in the fixtures
to create valid hashable objects.
Now that the objects can answer questions about valid conditions
for add/remove, ask them directly.
This also pushes the logging down into the ProtectedStorageEntry and
ProtectedMailboxStorageEntry and cleans up the message.
Method bodies are copied from P2PDataStore to separate refactoring
efforts and behavior changes.
Identified a bug where a ProtectedMailboxStorageEntry mailbox entry
could be added, but never removed.
The code around validating MailboxStoragePayloads is subtle when
a MailboxStoragePayload is wrapped in a ProtectedStorageEntry. Add tests
to document the current behavior.
The custom code to verify the refreshTTLMessage's signature and update
an entry isn't necessary. Just have the code construct an updated
ProtectedStorageEntry from the existing and new data, verify it,
and add it to the map.
This also allows the removal of the ProtectedStorageEntry APIs
that modify internal state.
The original test would take over 5 seconds. Allow tests to set the number
of required entries before purge to a lower value so the tests
can run faster with the same confidence.
Add tests for removing expired entries and optionally purging
the sequence number map. Now possible since these tests have
control over time with the ClockFake.
The remove validation needed to be improved since deletes through
the expire path don't signal HashMap listeners or write sequence numbers.
Reduces non-deterministic failures of the refreshTTL tests that resulted
from the uncontrollable System.currentTimeMillis().
Now, all tests have extremely fine control over the elapsed time between
calls which makes the current and future tests much better.
Switch from System.currentTimeMillis() to
Clock.millis() so dependency injection can
be used for tests that need finer control of time.
This involves attaching a Clock to the resolver
so all fromProto methods have one available when they
reconstruct a message. This uses the Injector for the app,
and a default Clock.systemDefaultZone() is used in the manual
instantiations.
Work was already done in #3037 to make this possible.
All tests still use the default system clock for now.
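The pattern in its simplest form (class and method names here are
illustrative):

    class ExpiryCheck {
        private final Clock clock; // injected: Clock.systemDefaultZone() in the app

        ExpiryCheck(Clock clock) { this.clock = clock; }

        boolean isExpired(long creationTimeStampMs, long ttlMs) {
            return clock.millis() - creationTimeStampMs > ttlMs; // was System.currentTimeMillis()
        }
    }
    // Tests pass a fully controlled clock:
    //   new ExpiryCheck(Clock.fixed(Instant.EPOCH, ZoneOffset.UTC));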
Use the DI Clock object already available in P2PDataStore, instead
of calling System.currentTimeMillis() directly. These two functions
have the same behavior and switching over allows finer control
of time in the tests.
Remove operations are now only processed if the sequence number
is greater than the last operation seen for a specific payload.
The only creator of new remove entries is the P2PService layer, which
always increments the sequence number. So this is either left over from
a time when removes needed to work with non-incrementing sequence
numbers, or just a longstanding bug.
With the completion of this patch, all operations now require increasing
sequence numbers so it should be easier to reason about the behavior in
future debugging.
Now returns false if the sequence number of the refresh matches
the last operation seen for the specified hash. This is a more expected
return value when no state change occurs.
The only callers are either P2PService users that always increment the
sequence number or the onMessage() handler which doesn't verify the return
so there will be no visible change other than the increased readability
of the code and deduplication of the code paths.
Now returns false on duplicate sequence numbers. This matches more of
the expected behavior for an add() function when the element previously exists.
The only callers are either P2PService users that always increment the
sequence number or the onMessage() handler which doesn't verify the return
so there will be no visible change other than the increased readability
of the code and deduplication of the code paths.
Removed duplicate log messages that are handled inside the various
helper methods, and print more verbose state useful for debugging.
Updated potentially misleading comments around hashing collisions.
Fix a bug where remove() was called in the addMailboxData()
failure path.
1. Senders can't remove mailbox entries. Only
the receiver can remove them, so even if the previous add() failed and
left partial state, the remove() can never succeed.
2. Even if the sender could remove, this path used remove() instead
of removeMailboxData(), so it wouldn't have succeeded anyway.
This patch cleans up the failure path as well as adds a precondition
for the remove() function to ensure future callers don't use them for
ProtectedMailboxStorageEntrys.
Change the use of "public api" to "Client API" to describe the set of
callers that use the pattern addProtectedStorageEntry(getProtectedStorageEntry())
as a contrast to the onMessage handler users or the GetData users.
* Limit max. nr. of PersistableNetworkPayload and ProtectedStorageEntry to 10000
To avoid that seed nodes get overloaded with requests for too many
PersistableNetworkPayload and ProtectedStorageEntry data we limit nr. of
entries to max 10000.
* Add peers node address to logs
* Improve logs
- Add log of size to GetBlocksResponse.toProtoNetworkEnvelope method
- Log in kb
* Log connection UID if not peer address available
* Add cleanup code for invalid objects
We have an invalid Filter object in the live network (prob. some dev
made some mistake). This code helps to clean that up.
* Add log
Instead, create a ProtectedStoragePayloadStub class which mocks out the required
protobuf Message for hashing. The hash is equal to the ownerPubKey so they are unique.
These tests create real versions of the supported Payload & Entry types
and run them through the 3 entry points (onMessage, init, and the
standard add()/remove()/refresh() calls) to verify the expected return
values, internal state changes, and external signals (listeners,
broadcasts).
The tests are involved and I am proposing future work to make many of the objects
more testable that will greatly reduce the work and tests cases needed.
This work identified a few unexpected scenarios and potential bugs that are addressed
in dependent pull requests.
Code coverage when running P2PDataStorageTest:
Before:
Line: 4%
Branch: 0%
After:
Line: 78%
Branch: 76%
* New trade protocol (#3333)
* Remove arbitration key, cleanup
* Add BuyerAsMakerProcessDepositTxAndDelayedPayoutTxMessage
* Adopt trade protocol
- Add handler for DepositTxAndDelayedPayoutTxMessage
- Change handler for DepositTxPublishedMessage
- Add MakerSetsLockTime
- Rename MakerProcessPayDepositRequest to MakerProcessPayDepositRequest
- Rename MakerSendPublishDepositTxRequest to MakerSendsProvideInputsForDepositTxMessage
- Rename DepositTxPublishedMessage to DelayedPayoutTxSignatureRequest
- Rename MakerProcessDepositTxPublishedMessage to MakerAsBuyerProcessSignDelayedPayoutTxMessage
* Remove arbitratorKey
* Add new classes
* Add new message classes
* Add new task classes
* Renamed classed (no functional change yet)
* Add lockTime
* Add delayedPayoutTxSignature field
* Add useReimbursementModel field
* Add new classes
* Add setting.preferences.useReimbursementModel
* Apply renamed classes (new classes not added yet)
* Add useReimbursementModel
* Add preferences param
* Add new methods, cleanup
* Add daoFacade param, apply renaming
* Add delayedPayoutTx, lockTime and delayedPayoutTxId
- Support daoFacade param
* Remove DirectMessage interface
* Rename emergencySignAndPublishPayoutTx method, add new one for 2of2 MS
* Apply new protocol
* Apply new protocol
* Add renaming (no functional change yet)
* Add new messages, apply renaming
* Remove unneeded P2SHMultiSigOutputScript
* Remove PREFERRED_PROJECT_CODE_STYLE
* Refactor: Rename class
* Use InputsForDepositTxRequest instead of TradeMessage in handleTakeOfferRequest
* Do not sign deposit tx if maker is seller
We change the behaviour so that the maker as seller does not send the
pre-signed deposit tx to the taker, as the seller has more to lose and
wants to control the creation process of the delayed payout tx.
* Apply new trade protocol to seller as maker version
* Apply new trade protocol
Delayed payout txs are now working for all scenarios, but we use a
small hack to get around an issue with not receiving confirmations and
the peer's tx.
We add a tiny output to both peers, so we see the tx and confirmation.
Without that, only the publisher sees the tx and confirmations are not
displayed. Needs further work to get that working without the extra
outputs.
* Set TRADE_PROTOCOL_VERSION to 2
* Add PeerPublishedDelayedPayoutTxMessage
We need to add the delayed payout tx to the wallet once the peer
publishes it. We will not see the confidence as we do not receive or
send funds from our address. The same applies to dispute payouts where
one peer does not receive anything; then the confidence is not set. It
seems that is a restriction in BitcoinJ, or it requires some extra
handling. We set the confidence indicator invisible in the dispute case,
and that might be an acceptable option here as well.
* Add refund agent domain
* Add refundAgentNodeAddress
* Apply refund domain
* Add refund views
* Apply refundAgent domain
* Support refundAgent
* Remove useReimbursementModel field
We no longer need the decision between reimbursement and arbitration
in the offer.
* Apply refundAgent payout
* Handle tx info and balances
* Remove mediation activation
* Add new tac accepted flag for v1.2.0 and adjust text
* Fix params for test classes
* Signed witness trading (#3334)
* Added basic UI for account signing for arbitrators
* Add domain layer for signed account age witnesses (credits ManfredKarrer and oscarguindzberg)
* Remove testing gridlines
* Arbitrator sign accountAgeWitnesses
Automatically filter to only sign accounts that
- have chargeback risk
- bought BTC
- was winner in dispute
* Handle chargeback risk by currency
* Check winners only for closed disputes
* Show sign status of payment accounts in AccountsView
* Rename service to accountAgeWitnessService
* Refactor: Move account signing helpers to AccountAgeWitnessService
* Refactor: rename hasSignedWitness to myHasSignedWitness
* Show if witness is signed in offerbook view
* Use witness sign age for age comparison
* Refactor: rename to isTaker... to isMyTaker...
* Allow trading with signed witnesses
* Use witness age for showing account age icon
* Move AccountAgeRestrictions into AccountAgeWitnessService
* Handle trade limit of unverified accounts as normal case
* Avoid optional as argument
* Set trade limit depending on trade direction
* Avoid optional arguments
* Add text for seller as signer
* Seller with signer privilege signs buyer witness
* Fix merge issues
* Remove explicit check for risky offers
* Remove sellers explicit account age check
* Add limit check based on common accountAgeWitness function
* Fix arbitrator key event handling
* Filter accounts on trade limit instead of maturity
* Fix test
* Buyer sign seller account
Add SIGNED_ACCOUNT_AGE_WITNESS capability
* Fix checks for signing at end of trade
Get correct valid accounts for offer
* Rename BuyerDataItem -> TraderDataItem
* Arbitrator sign both parties in a buyer payout dispute
* Only sign unsigned accountAgeWitnesses
* Remove unused code
* Add demo for material design icons
* Use different account age limits for sell/buy
* Fix signing interface for arbitrator
* Add signing state column to offer book
* Add signing state to fiat accounts overview
* Add signing state to selected fiat account
* Fix popover padding
* Add account signing state to peer info popup
* Retrieve only unsigned witnesses for arbitrator to sign
* Accounts signed by arbitrators are signers
* Disable test due to travis issues
* Improve witness handling (#3342)
* Fix comparison
* Add display strings for witness sign state
* Fix immaturity check
* Use accountAgeWitness age for non risky payment methods
* Show information about non risky account types
* Fix peer info icon account age text
* Complete new trade protocol (#3340)
* Improve handling of adding tx to wallet
* Add delayedPayoutTx to dispute
* Fix test
* Use RECIPIENT_BTC_ADDRESS from DAO for trade fee
* Set lockTime to 10 days for altcoins, 20 days others.
- Devmode uses 1 block
* Fix params
* Update text
* Update docs
* Update logging
if (log.isDebugEnabled()) only matches if the log level is DEBUG, not
if it is INFO
* Remove log
* Remove arbitrator checks
* Remove arbitrator address
- It does not work if no legacy arbitrator is registered.
We cannot remove too much from arbitration as we would risk breaking
account signing and the display of old arbitration cases.
Though if testing time permits, we should try to clean out more of the
arbitration domain that is not needed anymore.
* Use account signing state in accounts view (#3365)
* Add account signing icons to signing state in account display
* Remove unnecessary "." that caused layout issues in the past
* Add additional warning in the received payment popup for account signer
* Fix Revolut padding issues for currencies
* Hide signing icon for non-high-risk payment methods
* Add correct icon state and info text for account signing state
* Remove not implemented notification part
* Test self signing witnesses
* Change verified account limit factor to 0.5
* Account Signing: Add information popups for signing state (#3374)
* Add account signing icons to signing state in account display
* Remove not implemented notification part
* Hide time since signing column when not needed
* Remove fiat rounding popup as feature was introduced a long time ago already
* Add information popups for new signed states (only shown once for user) and minor clean-ups
* Update core/src/main/resources/i18n/displayStrings.properties
Co-Authored-By: sqrrm <sqrrm@users.noreply.github.com>
* Account Signing: Improve signed state notification (#3388)
* Remove new badge from Altcoin instant feature
* Remove new badge from percentage user deposit feature
* Fix line break issues in received payment confirmation popup
* Check if received payload fulfills signing state condition and not any personal witness
* Show additional badge for account sections to guide user to check out new signing states
* Fix account signing state in offer book (#3390)
* Account Signing: Fix verified usage (#3392)
* Rename witnessHash -> accountAgeWitnessHash
* Add enum for SignedWitness verification method
* Fix usage of isValidAccountAgeWitness
* Revert icon for signstate change
* Account signing: add signing state to payment account selection (#3403)
* Clean up dead code parts
* Add account signing state to payment account selection
* Account signing: revert dev date setting for trusted accounts (#3404)
* Revert temporary value for dev testing
* Only enable button if there are accounts to be signed
* Add trade limit exceptions (#3406)
* Remove dead code
* Add trade limit exception for accounts signed by arbitrator
* Update translations to adapt to new unified delay (#3409)
* NTP: Fix a couple of UI issues in the New Trade Protocol (#3410)
* Add badge support for refund agent (new arbitrator) tickets
* Fix translation typo
* Clean up arbitrator issues in translation
* Only show refund agent label to support staff
Every user should still see this role as arbitration
* NTP: Improve differentiation between mediation and new arbitration (#3414)
* Clean up property exposure
* Improve differentiation between mediation and arbitration cases
* Go to new refund view if it is not mediation, and do not open the mediation notification if a refund is already in progress
* Don't sign filtered accounts
* NTP: merge with master (#3420)
* Temporarily disable onion host for @KanoczTomas's BTC node
* Add Ergo (ERG) without Bouncy Castle dependency.
See #3195.
* List CTSCoin (CTSC)
* Tweak the English name of Japan Bank Transfer payment method
* Add mediator prefix to trade statistics
* List Faircoin (FAIR)
* List uPlexa (UPX)
* Remove not used private methods from BisqEnvironment
* Add onInitP2pNetwork and onInitWallet to BisqSetupListener
- Rename BisqSetupCompleteListener to BisqSetupListener
- Add onInitP2pNetwork and onInitWallet to BisqSetupListener
- make onInitP2pNetwork and onInitWallet default so no impl. required
* Start server at onInitWallet and add wallet password handler
- Add onInitWallet to HttpApiMain and start http server there
- Add onRequestWalletPassword to BisqSetupListener
- Override setupHandlers in HttpApiHeadlessApp and adjust
setRequestWalletPasswordHandler (impl. missing)
- Add onRequestWalletPassword to HttpApiMain
* Add combination (Blockstream.info + Mempool.space) block explorer
* Revert "Temporarily disable onion host for @KanoczTomas's BTC node"
This reverts commit d3335208bb.
* Temporarily disable KanoczTomas btcnode on both onion and clearnet
* Refactor BisqApp - update scene size calculation
* Refactor BisqApp - update error popup message build
* Refactor BisqApp - move icon load into ImageUtil
* Remove unused Utilities
* Increase minimum TX fee to 2 sats/vByte to fix #3106 (#3387)
* Fix mistakes in English source (#3386)
* Fix broken placeholders
* Replace non existing pending trades screen with open trades screen
* Update core/src/main/resources/i18n/displayStrings.properties
Co-Authored-By: Steve Jain <mfiver@gmail.com>
* Update message in failed trade popup
* Refactor BisqEnvironment
* Account Signing: Improve arbitrator signing flow (#3421)
* Pre-select a point of time 2 months in the past
So all arbitrator signed payment accounts will have their limits lifted completely
* Only show payment methods with high chargeback risk to be signed
* Show connected Bitcoin network peer info
* List Ndau (XND)
- Official project URL: https://ndau.io/
- Official block explorer URL: https://explorer.service.ndau.tech
* List Animecoin (ANI)
* Apply rule to not allow BSQ outputs after BTC output for regular txs (#3413)
* Apply rule to not allow BSQ outputs after BTC output for regular txs
* Enforce exactly 1 BSQ output for vote reveal tx
* Fix missing balance and button state update
* Refactor isBtcOutputOfBurnFeeTx method and add comments and TODOs
No functional change.
* Handle asset listing fee in custom method
We need to enforce a BSQ change output
As this is just tx creation code it has no consequences for the hard
fork.
* Use getPreparedBurnFeeTxForAssetListing
* Update comments to not use dust output values
* Fix missing balance and button state update
* Use same method for asset listing fee and proof of burn
Use same method for asset listing fee and proof of burn as tx structure
is same.
Update comments to be more general.
* Use getPreparedProofOfBurnTx
* Require mandatory BSQ change output for proposal fee tx.
We had stated in the doc that we require a mandatory BSQ change output,
but it was not enforced in the implementation, causing similar issues
as in asset listing and proof of burn txs.
* Add fix for not correctly handled issuance tx
* Use new method for issuance tx
// For issuance txs we also require a BSQ change output before the issuance output gets added. There was a
// minor bug in the old version where multiple inputs would have caused an exception in case there was no
// change output (e.g. inputs of 21 and 6 BSQ for a BSQ fee of 21 BSQ would have caused only 1 input to be used,
// which then caused an error as we enforced a change output). This new version handles such cases correctly.
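A hedged sketch of the fixed input selection (plain long arithmetic, not the actual tx builder code):

    // Sketch: select BSQ inputs until the fee is covered AND a change
    // output can be created. Values are in BSQ satoshis.
    static long selectInputsAndGetChange(long[] inputs, long fee) {
        long sum = 0;
        int used = 0;
        while (used < inputs.length && sum < fee)
            sum += inputs[used++];
        // Old bug: with inputs {21, 6} and fee 21 only the first input was
        // selected (sum == fee), leaving no change output although one was
        // enforced. Fix: pull in one more input in that case.
        if (sum == fee && used < inputs.length)
            sum += inputs[used++];
        return sum - fee;   // amount for the mandatory BSQ change output
    }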
* Handle all possible blind vote fee transactions
* Move check for invalid opReturn output up
* Add dust check at final sign method
* Fix incorrect comments
* Refactor
- Remove requireChangeOutput param which is always false
- Remove method which is used only by one caller
- Cleanup
* Add comment
* Fix comments, rename methods
* Move code of isBlindVoteBurnedFeeOutput to isBtcOutputOfBurnFeeTx
* Update account signing strings for v1.2 release (#3435)
* Update account signing strings for v1.2 release
* Add minor corrections from ripcurlx review
* Adjust tradeLimitDueAccountAgeRestriction string
So that it describes why an account isn't signed (in general) instead of
why it wasn't signed by an arbitrator.
* Account Signing/NTP: More improvements and fixes (#3436)
* Select the correct sub view when a dispute is created
* Require capability REFUND_AGENT to receive RefundAgent Messages
* Remove unused return type for account signing
* Add new feature popup for account signing and new trade protocol
* Return void from account signing
* Fix bug with not updating vote result table at vote result block
* NTP: improve backwards compatibility for mediation (#3439)
* Improve readability of offer update
* Add type safeguard for dispute lists
* Set missing dispute support type for clients < 1.2.0 from message support type
* Enable handling of mediation cases for old trade protocol disputes in 1.2.0 clients
* Remove unnecessary forEach
* Use correct formatter and add missing value for placeholder
* Bump version number
* Add sign all checkbox. Fix list entry display (#3450)
* Add sign all checkbox. Fix list entry display
* Add summary to log and clipboard
* Use safe version for seednodes (#3452)
* Apply shutdown and memory check again
To not risk issues with the release and seed nodes we merge back the
old code base for handling the memory check and shutdowns.
The newly added changes for cross-connecting between seed nodes cause
out of memory issues and require more work and testing before they can
be used.
* Revert code change for periodic updates between seed nodes.
The periodic updates code caused out of memory issues and requires more
work and testing before it can be used.
* Arbitrator republish signedWitnesses on startup (#3448)
* Arbitrator republish signedWitnesses on startup
* Keep republish internal to SignedWitnessService
* Improve new feature popup for ntp and account signing (#3453)
* Do not commit delayedPayoutTx to avoid publishing at restart
Fixes https://github.com/bisq-network/bisq/issues/3463
BitcoinJ publishes automatically committed transactions.
We committed it to the wallet to be able to access it later after a
restart. We stored the txId in Trade and used that to request the tx
from the wallet (as it was committed). Now we store the
bitcoin-serialized bytes of the tx and do not commit the tx before
broadcasting it (if a trader opens a refund agent ticket).
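A minimal BitcoinJ sketch of that storage change (the surrounding Trade field names are assumptions):

    import org.bitcoinj.core.NetworkParameters;
    import org.bitcoinj.core.Transaction;

    class DelayedPayoutTxStore {
        private byte[] delayedPayoutTxBytes;   // persisted with the trade

        void store(Transaction delayedPayoutTx) {
            // Do NOT commit to the wallet here; BitcoinJ publishes
            // committed transactions automatically.
            delayedPayoutTxBytes = delayedPayoutTx.bitcoinSerialize();
        }

        Transaction restore(NetworkParameters params) {
            // Rebuild the tx from the stored bytes after a restart.
            return new Transaction(params, delayedPayoutTxBytes);
        }
    }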
* [1.2.0] Update client resources (#3456)
* Update bitcoinj checkpoint file
* Update data stores
* Update translations
* [1.2.0] Improve new feature popup (#3465)
* Improve layout of new feature popup
* Extract external hyperlinks into component to make it easier to update
* Comment in necessary showAgain check
* Add Raspberry Pi to build process (#3466)
* Add Raspberry Pi to build process
* Rename deploy variable to improve readability
* Update informational prompt upon creating fiat account with account signing details (#3467)
* Update informational prompt upon creating fiat account with account signing details
* Fix wrong buyer limit for first 30 days
* Set delayedPayoutTxBytes when setting delayedPayoutTx
Fixes https://github.com/bisq-network/bisq/issues/3473
The delayedPayoutTx is not committed to the wallet as long as it is not
published. The seller who creates the delayedPayoutTx had not stored the
delayedPayoutTxBytes, which caused a null pointer after restart.
* Minor updates (#3474)
* Remove unnecessary log statement
This seems to be a leftover log statement from debugging.
* Use a small delay for MakerSetsLockTime on regtest
When testing on regtest, not in devmode, we want a relatively short
delay to be able to test the delay period.
* Clarify payment limits up to 30 days after signing
* Update RECIPIENT_BTC_ADDRESS for regtest (#3478)
Use an address that is owned by the regtest wallet in the dao-setup.zip
file. This allows for easily verifying BTC trading fees are sent to
this address correctly. Also, it helps verify spending of the time lock
payout.
* Remove btc nodes from Manfred Karrer (#3480)
* Avoid null objects (#3481)
* Avoid null objects
* Remove check for type
Historical data can be arbitration instead of mediation (arbitration
was the fallback at the last update), so we need to tolerate the incorrect
type here. This only applies to tickets from pre-1.2.
* Display appropriate account age info header
Depending on chargeback risk type, accounts should show the
accountAgeWitness age or the time since signing.
* Set amount for delayed payout tx to 0 (#3471)
We previously showed the funds spent from the deposit tx to the Bisq
donation address. But that was incorrect from the wallet perspective and
would have led to an incorrect summary of all transaction amounts. We now
set it to 0 as we are neither spending funds nor receiving any in our wallet.
* Check for result phase at activate method
Fixes https://github.com/bisq-network/bisq/issues/3487
* Only show warning for risky payment methods (#3497)
* Fix style issues with dark mode (#3495)
* Addresses issues mentioned in https://github.com/bisq-network/bisq/issues/3482#issuecomment-546812730 (#3496)
* Clean up trade statistics from duplicate entries (#3476)
* Clean up trade statistics from duplicate entries
At software updates we added new entries to the extraMap, which caused
duplicate entries (if one of the traders was on the new and the other on
the old version, or at republishing). We now mark it as JSON-excluded to
avoid that in the future, and clean up the map.
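For illustration, a Gson exclusion strategy of the kind used for such JSON-excluded fields (the annotation name mirrors the one used in Bisq; the wiring here is a generic sketch):

    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;
    import com.google.gson.ExclusionStrategy;
    import com.google.gson.FieldAttributes;
    import com.google.gson.Gson;
    import com.google.gson.GsonBuilder;

    @Retention(RetentionPolicy.RUNTIME)
    @interface JsonExclude {
    }

    class HashJson {
        static final Gson GSON = new GsonBuilder()
                .setExclusionStrategies(new ExclusionStrategy() {
                    @Override
                    public boolean shouldSkipField(FieldAttributes f) {
                        // Fields marked @JsonExclude (e.g. the extraMap) are
                        // left out of the JSON and so no longer affect hashes
                        // derived from it.
                        return f.getAnnotation(JsonExclude.class) != null;
                    }

                    @Override
                    public boolean shouldSkipClass(Class<?> clazz) {
                        return false;
                    }
                })
                .create();
    }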
* Avoid repeated calls to addPersistableNetworkPayloadFromInitialRequest
For the trade stat cleanup we don't want to apply it multiple times as it
is a bit expensive. We get the initial data response from each seed node
and would pollute our map again with the second response; and if our node
is a seed node, the seed node itself could not get into a clean state and
would continue polluting other nodes.
* Refactor
Remove not used param
Rename method
Inline method
Cleanups
* Change unsigned to N/A
* [1.2.0] Update data stores and add SignedWitnessStore (#3494)
* Update data stores and add SignedWitnessStore
* Update translations
* Update cleaned TradeStatistics2Store and changes in other stores
* VoteResultView update results on any block in result phase
Avoid updating the result more than once per result phase but make
sure it's done if activated during the result phase
* [1.2.0] Format maker fee for BTC and BSQ correctly (#3498)
* Format maker fee for BTC and BSQ correctly
* Update tests
* Only automatically open popup if result wasn't accepted and disable action button when being accepted (#3503)
* Fix tradestatistics (#3469)
* Remove delayed re-publishing of tradeStatistics
This was done earlier when only maker was publishing trade statistics.
Now both traders do it so we get already higher resilience.
* Remove unused method
Forgot to also remove the method in the previous commit.
* Remove support for TradeStatistics2.ARBITRATOR_ADDRESS
* Add comment and set ARBITRATOR_ADDRESS deprecated
* Remove setting of arbitrator data from makers side
The 2 arbitrator-related fields in Trade are only set by the maker and
not used anymore for reading, so they can be removed. The whole arbitrator
domain should be cleaned out some day, but because of backward
compatibility issues it is difficult to do it entirely at release date.
With the release after v1.2, when no old offers are out anymore, we will
be able to clean up that domain.
* Remove dev log
* Update translations
* [1.2.0] Improve dispute section (#3504)
* Improve wording for mediation summary and add specific next steps for refund agent case
* Select the first dispute case when entering the support section
* Revert to SNAPSHOT version
* Fix bug with initialRequestApplied (#3512)
* Fix resource name (#3514)
* Remove minor version number in news popup
* Fix copy SignedWitnessStore db script
* Do not show payment account details for blocked offers
* Use age of accountAgeWitness as basis for sell limits
* Bump version number
* Revert to SNAPSHOT version
* Merge v1.2.0/v1.2.1 with master (#3521)
* List Krypton (ZOD)
* Temporarily disable onion host for @KanoczTomas's BTC node
* Add Ergo (ERG) without Bouncy Castle dependency.
See #3195.
* List CTSCoin (CTSC)
* Tweak the English name of Japan Bank Transfer payment method
* List Animecoin (ANI)
* Add mediator prefix to trade statistics
* List Faircoin (FAIR)
* List uPlexa (UPX)
* Remove not used private methods from BisqEnvironment
* Add onInitP2pNetwork and onInitWallet to BisqSetupListener
- Rename BisqSetupCompleteListener to BisqSetupListener
- Add onInitP2pNetwork and onInitWallet to BisqSetupListener
- make onInitP2pNetwork and onInitWallet default so no impl. required
* Start server at onInitWallet and add wallet password handler
- Add onInitWallet to HttpApiMain and start http server there
- Add onRequestWalletPassword to BisqSetupListener
- Override setupHandlers in HttpApiHeadlessApp and adjust
setRequestWalletPasswordHandler (impl. missing)
- Add onRequestWalletPassword to HttpApiMain
* Add combination (Blockstream.info + Mempool.space) block explorer
* Revert "Temporarily disable onion host for @KanoczTomas's BTC node"
This reverts commit d3335208bb.
* Temporarily disable KanoczTomas btcnode on both onion and clearnet
* Refactor BisqApp - update scene size calculation
* Refactor BisqApp - update error popup message build
* Refactor BisqApp - move icon load into ImageUtil
* Remove unused Utilities
* Increase minimum TX fee to 2 sats/vByte to fix #3106 (#3387)
* Fix mistakes in English source (#3386)
* Fix broken placeholders
* Replace non existing pending trades screen with open trades screen
* Update core/src/main/resources/i18n/displayStrings.properties
Co-Authored-By: Steve Jain <mfiver@gmail.com>
* Update message in failed trade popup
* Refactor BisqEnvironment
* List Ndau (XND)
- Official project URL: https://ndau.io/
- Official block explorer URL: https://explorer.service.ndau.tech
* Show connected Bitcoin network peer info
* Do not show payment account details for blocked offers (#3425)
* Add GitHub issue template for user reported bugs (#3454)
* Add issue template with steps to reproduce and actual/expected behavior
* Fix typo in .github/ISSUE_TEMPLATE.md
* Fix wrong auto merge
* Add CapabilityRequiringPayload to TradeStatistics2
With v1.2.0 we changed the way the hash is created.
To not create too heavy a load on seed nodes from
requests from old nodes we use the SIGNED_ACCOUNT_AGE_WITNESS
capability to send trade statistics only to new nodes.
As trade statistics are only used for informational purposes, this will
not cause any critical issue for old nodes beside that they don't see the latest trades.
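Roughly, implementing the interface amounts to the following (the types here are minimal stand-ins for the real bisq.network classes, and the exact signature is an assumption):

    import java.util.EnumSet;
    import java.util.Set;

    // Stand-ins for the Bisq types referenced above.
    enum Capability { SIGNED_ACCOUNT_AGE_WITNESS }

    interface CapabilityRequiringPayload {
        Set<Capability> getRequiredCapabilities();
    }

    class TradeStatistics2Sketch implements CapabilityRequiringPayload {
        @Override
        public Set<Capability> getRequiredCapabilities() {
            // Only peers that announced SIGNED_ACCOUNT_AGE_WITNESS (v1.2.0+)
            // receive this payload; old nodes are spared the extra load.
            return EnumSet.of(Capability.SIGNED_ACCOUNT_AGE_WITNESS);
        }
    }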
* Fix tradestat hash issue (#3529)
* Recreate hash from protobuf data
To ensure all data are using the new hash method (excluding extraMap) we
do not use the hash field from the protobuf data but pass null, which
causes the hash to be recreated based on the new hash method.
* Add filter.toString method and log filter in case of wrong signature
We currently have an invalid filter (probably some dev published a test filter to mainnet).
* Change log level, add log
* Refactor: Move code to dump method
* Add TRADE_STATISTICS_HASH_UPDATE capability
We changed the hash method in 1.2.0 and that requires an update to 1.2.2
to handle it correctly; otherwise the seed nodes have to process too
much data.
* Add logs for size of data exchange messages
* Add more data in log
* Improve logs
* Fix wrong msg in log, change log level
* Add check for depositTxId not empty
* Remove check for duplicates
As we recreate the hash for all trade stat objects we don't need that
check anymore.
* Add logs
* Temporarily remove this part of the statistics
It prevents merging with master because auto merge duplicates this part of the code, which prevents Travis from succeeding.
* Refactored logging into subroutine
* Only do recurring UpdateDataRequest if seed node
* UpdateDataReq is run periodically
* Only store PersistablePayloads once
* Refactoring: Move arbitration package inside dispute package
* Use abstract base class DisputeResolver for arbitrator
* Refactoring: Move mediator to mediator package.
* Let Mediator inherit DisputeResolver.
* Do not use protobuf inheritance
- Do not use protobuf inheritance for Arbitrator and Mediator as it
would break backward compatibility (and protobuf inheritance sucks
anyway)
* Refactoring: Move ArbitratorModule to parent package
* Refactoring: Rename ArbitratorModule to DisputeModule
* Add mediators to Filter
* Add mediators to filter window
* Use abstract DisputeResolverService as base class for ArbitratorService
- Add common base class for ArbitratorService and MediatorService
* Fix test
* Use abstract DisputeResolverManager as base class for ArbitratorManager
- Add common base class for ArbitratorManager and MediatorManager
* Refactor: Move arbitratorregistration package inside register pkg
* Refactor: Rename arbitratorregistration package to arbitrator
* Add registration view for mediator
- With cmd+D one can open the mediator registration in the account screen.
For arbitrator it's cmd+R.
* Separate pub key list for mediator (no new keys added yet)
* Set new pubkeys for mediator registration
- Before release set new keys from maintainer who manages keys
* Set disputes @Nullable. Add null checks
* Remove pre v0.9 handling for supported arbitrators from offer
- We changed handling of arbitrator selection with v0.9 so the
supported arbitrators in the offer are not used anymore. As we
enforced v1.2 a while back for trading we can be sure no pre v0.9
clients are used anymore and we can remove the optional code part.
* Remove supported arbitrators info in offer details window
- As we do not use the supported arbitrators in the offer anymore since v0.9
we can remove that.
* Remove check for matching arbitrator languages
As we do not use the supported arbitrators from the offer since v0.9 we can
remove that check.
* Remove not used classes
* Remove checks for arbitrator and mediator in offer
We do not use those fields anymore. We still need to keep the fields
non-nullable as old clients still have the check.
* Add check if sig of proto object is not empty
In dev testing we sometimes got an empty protobuf Alert. It might be
caused by protobuf compatibility issues during development, but it is not
100% clear.
As it causes an exception and a corrupted user db file we prefer to set
it to null.
* Remove TakerSelectMediator
This is not used anymore. Currently we would get an exception in the
trade but with follow-up changes we will fix that...
Mediator handling and selection will be done the same way as for
arbitrators. The current mediator handling was a relic of earlier
partial support for mediators which never got completed. As a null
check is still in place we need to ensure backward compatibility.
* Set arbitratorNodeAddresses and mediatorNodeAddresses to deprecated
We do not use arbitratorNodeAddresses and mediatorNodeAddresses anymore,
but as there is a null check we still need to keep the field and set it
to an empty ArrayList.
* Make ArbitratorSelection generic. Add MEDIATOR_ADDRESS
We want to use the same selection algorithm for mediators as for
arbitrators, so we make ArbitratorSelection generic.
We add MEDIATOR_ADDRESS as an extraMap entry to TradeStatistics2 to be
able to track the number of trades with specific mediators.
ExtraMap is used to add new data to existing protobuf definitions, which
is also supported by not-updated clients. Adding a new protobuf field
would only be supported by new clients. As mediator support is a new
feature we could add a new field, but to keep it in the same style as
arbitrator we prefer to use the map here as well.
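A sketch of what the extraMap entry amounts to (the key name follows the constant named above; the truncation to the first 4 chars is described further down in this log, and the helper names are assumptions):

    import java.util.HashMap;
    import java.util.Map;

    class TradeStatsExtraMap {
        final Map<String, String> extraDataMap = new HashMap<>();

        void addMediatorEntry(String mediatorFullAddress) {
            // Only a short prefix is stored; enough for usage statistics.
            String prefix = mediatorFullAddress.substring(
                    0, Math.min(4, mediatorFullAddress.length()));
            extraDataMap.put("MEDIATOR_ADDRESS", prefix);
        }
    }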
* Refactor: Rename ArbitratorSelection to DisputeResolverSelection
* Add mediator to OfferAvailabilityResponse and mediatorNodeAddress to OpenOffer
WIP for supporting mediator selection the same way as arbitrators.
* Make arbitrator not nullable
We can ensure that all users are post v0.9 so we can remove the nullable
support.
* Add selectedMediator to OfferAvailabilityModel
Remove nullable support in ProcessOfferAvailabilityResponse as we can
ensure all clients are post v0.9
* Refactor: Rename method
* Add todo for using more generic keys for display strings
* Refactor: Rename method
* Fix wrong handling of registeredMediator
Fix copy/paste error
* Add mediatorNodeAddress to trade
* Handle nullable mediator in ProcessOfferAvailabilityResponse
We do not get the mediator set from old clients but we expect a non-null
value, so we use the DisputeResolverSelection in case it is null.
We need to pass mediatorManager and tradeStatisticsManager to the
OfferAvailabilityModel.
* Change log level, cleanup
* Revert changes in OfferPayload due to backward compatibility issues
Because of backward compatibility issues we needed to revert the removal
of arbitratorNodeAddresses and mediatorNodeAddresses. The signature
check for the offer would fail as an old client would send a non-empty
list but new clients would have had an empty list, so the hash
would be different, the sig check would fail and we would not accept that
offer. That is the reason why we still need to support those data even
though they are not used anymore.
This is one of the more tricky cases for backward compatibility issues.
This version now is tested between new and old clients and trade and
disputes work.
* Add checks if any mediator is available
* Cleanup classes
* Fix test
* Add mediator DisputeStates
Add isMediationDispute to Dispute class.
If a dispute opening gets requested we check if the state is
DisputeState.NO_DISPUTE and then open mediation. If the state is
DisputeState.MEDIATION_REQUESTED we open arbitration.
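As a sketch (the real DisputeState enum has more values; the open* helpers are placeholders):

    class DisputeEscalationSketch {
        enum DisputeState { NO_DISPUTE, MEDIATION_REQUESTED, ARBITRATION_REQUESTED }

        void onDisputeOpeningRequested(DisputeState state) {
            switch (state) {
                case NO_DISPUTE:
                    openMediation();       // first escalation step
                    break;
                case MEDIATION_REQUESTED:
                    openArbitration();     // mediation did not resolve it
                    break;
                default:
                    break;                 // already escalated; nothing to open
            }
        }

        void openMediation() { /* placeholder */ }
        void openArbitration() { /* placeholder */ }
    }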
* Cleanup; support isMediationDispute
* Handle mediator data in Dispute domain
- Add getConflictResolverNodeAddress method to Dispute to resolve
arbitrator or mediator address based on isMediationDispute flag.
- Rename arbitratorPubKeyRing to conflictResolverPubKeyRing in Dispute.
We cannot rename arbitratorPubKeyRing in the protobuf definition
as it would break backward compatibility.
* Add support for mediation in dispute domain
- Add isMediationDispute method to ChatSession
- Add isMediationDispute method to DisputeCommunicationMessage
- Add isMediationDispute to dispute id
- Refactor findDispute method
- Add null checks
- Cleanups
* Remove impossible case
Reserved and locked funds are used for offers and trades only.
* Fix typos
* Handle mediator and arbitrator strings
- Work in progress of adjusting correct terms.
- Cleanups
* Refactor: Rename arbitrator package to disputeresolvers
* Refactor: Rename ArbitratorDisputeView classes to DisputeResolverView
* Add support for close ticket from mediator (WIP)
In the mediator case we do not create any transaction but only send the
dispute result which contains the mediator's recommended payout
distribution. At the traders we set the disputeState in the trade to
closed. This will be used in the next commits to update the trade so
that the traders get shown the recommended payout and get asked if
they agree to it.
* Refactoring: Rename class
Rename MessageDeliveryFailedException to
DisputeMessageDeliveryFailedException
* Refactoring: Move dispute classes to dispute package
* Refactoring: Move Attachment class to dispute package
* Refactoring: Move package one level up
Move bisq.core.dispute.arbitration.messages to
bisq.core.dispute.messages
* Add todo comment
* Use ARBITRATION instead of DISPUTE
* Make DisputeManager abstract base class for ArbitrationDisputeManager
WIP for separating DisputeManager to ArbitrationDisputeManager and
MediationDisputeManager
* Add MediationDisputeManager
* Add MediationDisputeManager and ArbitrationDisputeManager to test
* Add mediationDisputeManager to relevant classes
There are some cases where only arbitrationDisputeManager is used.
Those are usually related to the payout tx. As mediators do not do a
payout we don't need it there.
* Add TradersArbitrationDisputeView and TradersMediationDisputeView
WIP for separating TraderDisputeView
* Refactor: Rename class
* Refactor: Rename support.tab.support to support.tab.mediation.support
I am aware that committing non-default translation files is not
recommended, but I think in that case it helps to avoid showing errors
for developers who use a non-English locale. The changes will be
overwritten by Transifex once it gets synced...
* Add DisputeView as common base class
Further refactor separation of diff. dispute views
* Refactor: Rename package
* Refactor: Rename DisputesView to SupportView
* Refactor: Rename package
* Add MediationDisputeManager to CorePersistedDataHost
* Add MediationDisputeList as db file, refactor DisputeList
WIP for making Dispute domain more generic. We want to separate
arbitration and mediation clearly.
* Further refactoring to split mediation and arbitration
* Further refactoring to split mediation and arbitration
Move methods used for arbitration only to ArbitrationDisputeManager
* Refactor: Rename package
Rename bisq.core.dispute to bisq.core.support
No other changes in that commit.
We want to improve the data structure with the trader chat.
Support will be the top level.
Then dispute containing arbitration and mediation.
Next to dispute will be trader chat.
bisq.core.support
bisq.core.support.dispute.arbitration
bisq.core.support.dispute.mediation
bisq.core.support.traderchat (not happy with name for that yet)
* Refactor: Move dispute domain classes into bisq.core.support.dispute package
* Refactor: Move classes
Move bisq.core.chat.ChatSession to bisq.core.support.ChatSession
Move bisq.core.chat.ChatManager to bisq.core.support.ChatManager
Move bisq.core.trade.TradeChatSession to bisq.core.support.traderchat.TradeChatSession
* Refactor: Move DisputeCommunicationMessage
* Refactor: Rename DisputeCommunicationMessage to ChatMessage
* Add comments
* Refactor: Move class
* Refactor: Rename class
* Refactor: Rename addDisputeCommunicationMessage and strings and variables
Rename disputeCommunicationMessage to chatMessage
* Refactor: Rename method
* Refactor: Rename methods and strings
* Add ArbitrationChatMessage and DisputeChatMessage
* Refactor: Rename class
* Move ChatMessage.Type to SupportType
Add the SupportType to all supportMessages so that in our chatSessions
we can filter the messages we are interested in.
* Refactor: Move classes to new package
* Refactor: Rename package
* Refactor: Move classes to new package
* Refactor: Move classes to new package
* Refactor: Rename classes
* Refactor: Rename package
* Refactor: Rename classes
* Refactor: Rename classes
* Remove empty DisputeModule
* Refactor: Rename classes
* Refactor SupportManager domain (WIP)
* Refactor SupportSession domain (WIP)
* Remove methods from SupportSession
* Don't expose p2pService in SupportManager
* Remove supportType in SupportSession
* Remove supportSession from getPeerNodeAddress method
* Remove isBuyer from supportSession
* Move creation of ChatMessage to SupportManager
* Remove isMediationDispute field in ChatMessage
* Remove chatMessage.isMediationDispute()
* Refactor: Rename trade.getCommunicationMessages()
* Move creation of ChatMessage to Chat
* Refactor: Rename class
* Refactor: Move ChatView class
* Refactor: Move PriceFeedComboBoxItem class to shared package
* Refactor: Use 'public abstract' instead of 'abstract public'
* Refactor: Use 'protected abstract' instead of 'abstract protected'
* Add traderChatManager.onAllServicesInitialized() to BisqSetup
* Remove unused param
* Refactor: Rename addChatMessage to addAndPersistChatMessage
* Fix missing check at ack msg handling
Various WIP refactorings/improvements
* Remove addAndPersistChatMessage from SupportSession
* Remove disputeManager from DisputeSession
* Fix missing getConcreteDisputeChatSession impl.
* Refactor: Rename package
* Refactor: Rename classes
Avoid "trader" as it might be confused with trader chat.
As for mediation/arbitration the agents (mediator/arbitrator) act
a bit like a server, we use the client terminology for the traders.
* Refactor: Move classes to new package
* Fix missing protobuf data
- Add missing SupportType to protobuf
- Remove is_mediation_dispute from Dispute protobuf
definition
- Add getAgentNodeAddress method
- Var. other refactorings, cleanups
* Clone list when persisting to avoid ConcurrentModificationException
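The pattern is simply to persist a snapshot instead of the live list, e.g.:

    import java.util.ArrayList;
    import java.util.List;

    class PersistenceSnapshot {
        // Serializing a copy means a concurrent add/remove on the live list
        // cannot throw ConcurrentModificationException mid-write.
        static <T> List<T> snapshotForPersistence(List<T> liveList) {
            return new ArrayList<>(liveList);
        }
    }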
* Fix order of SupportType
Old clients fall back to enum at slot 0.
* Add getDisputeState_StartedByPeer template method
* Add trade protocol tasks for mediation result tx signing and msg sending
* Complete protocol tasks for mediation
* Refactor: Remove unneeded SuppressWarnings type: "WeakerAccess"
* Complete mediation result protocol
It all works now, but is not much tested...
* Add activation date and capability
We need to make sure that not-updated users cannot cause problems once mediation is supported. We would get mixed cases where one trader has a mediation ticket and the not-updated user an arbitration ticket. To avoid that we set an activation date about 10 days from release. Until that date mediation is not supported.
Additionally we use OfferRestrictions.REQUIRE_UPDATE_DATE for hiding offers from users who have not updated (we use the fact that mediator and arbitrator were the same in the old version; in the new version they are different).
An old client cannot take an offer from a new maker as he does not have the new MEDIATION capability set. He will get a null value as AvailabilityResult as he does not have the new entry MISSING_MANDATORY_CAPABILITY.
We will also use the min version for trading in the filter, so that not-updated users get a popup telling them to update and see all offers deactivated.
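The date gate itself is trivial; a sketch (the concrete date below is made up, the text only says about 10 days from release):

    import java.util.Calendar;
    import java.util.Date;
    import java.util.GregorianCalendar;

    class MediationActivation {
        // Hypothetical date, not the actual one used in the release.
        private static final Date ACTIVATION_DATE =
                new GregorianCalendar(2019, Calendar.NOVEMBER, 1).getTime();

        static boolean isMediationActivated() {
            return new Date().after(ACTIVATION_DATE);
        }
    }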
* Various fixes
* Remove code part which does not make sense (anymore)
Maybe in older versions there was a use for openDisputes and closedDisputes,
but now it does not make sense anymore and the arbitrator never gets 4 cases
opened if offline.
* Add check if balance is > 0
* Only close trade if payout tx is set
* Add missing check if arbitrator and mediator are available
* Fix wrong key
* Improve handling of checks and popup display
For create and take offer we check certain conditions and show a
popup if not met. This commit moves that to GuiUtils.
* Rename any occurrence of DisputeResolver to DisputeAgent
* Fix handling of mediatorPubKeyRing
* Remove disputeSummaryWindow.evidence fields
* Add missing persistence for MediationResultState
* Fix tests
* Make text more compact to not exceed space
* Refactor NotificationGroup
* Improve text, add dev testing feature for popups
* Improve text
* Renamed a key and assigned a new text
* Fix states
* Do not set errorMessage
Do not set errorMessage if both peers have opened a dispute and the
agent was not online.
* Remove logs used for dev testing
* Fix getMedian method with empty list
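A sketch of a getMedian that tolerates an empty list (returning 0 for "no data" is an assumption; Bisq's actual signature may differ):

    class MathUtils {
        static long getMedian(long[] sortedValues) {
            if (sortedValues.length == 0)
                return 0L;                   // empty list: no median, return 0
            int mid = sortedValues.length / 2;
            return sortedValues.length % 2 == 1
                    ? sortedValues[mid]      // odd count: middle value
                    // even count: mean of the two middle values (integer division)
                    : (sortedValues[mid - 1] + sortedValues[mid]) / 2;
        }
    }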
* Add new methods and tests
Add fromCommaSeparatedOrdinals and toCommaSeparatedOrdinals to convert
from string representations (used for handling backward compatibility
with mediation release).
Add check if int >= 0 to fromIntList
* Move error log outside of delayed call
* Add capabilities entry to extraDataMap in offer
The previous implementation did not work for supporting updates and
hiding offers from not-updated clients.
We now use the capabilities converted to a string list and put it into
the extraDataMap. If a user with old persisted offers updates, his offers
get converted to add the capabilities. Updated clients will ignore
offers without the mediation capability set in the offer.
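A sketch of the round trip named a few entries above (fromCommaSeparatedOrdinals / toCommaSeparatedOrdinals; the exact signatures, the >= 0 check placement and the map key are assumptions):

    import java.util.List;
    import java.util.stream.Collectors;
    import java.util.stream.Stream;

    class CapabilitiesCodec {
        static String toCommaSeparatedOrdinals(List<Integer> ordinals) {
            return ordinals.stream()
                    .map(String::valueOf)
                    .collect(Collectors.joining(","));
        }

        static List<Integer> fromCommaSeparatedOrdinals(String value) {
            if (value.isEmpty())
                return List.of();
            return Stream.of(value.split(","))
                    .map(String::trim)
                    .map(Integer::parseInt)
                    .filter(i -> i >= 0)     // guard against bogus ordinals
                    .collect(Collectors.toList());
        }
    }

    // Usage sketch: store the capability ordinals in the offer's extraDataMap,
    // e.g. extraDataMap.put("capabilities",
    //         CapabilitiesCodec.toCommaSeparatedOrdinals(ordinals));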
* Rename non sync protobuf definitions
As Christoph Sturm pointed out we can rename protobuf entries.
Only the index number must not be changed.
* Fix UI state when arbitration has started
Only set mediation state if we are not in arbitration state.
* Remove restriction
* Fix typo; remove errorMessage
If both have opened a dispute and the agent was not online we don't treat
it as an error.
* Improve text
* Store full address for localhost dev testing
The arbitrator/mediator selection is based on statistics of the usage of
agents in past trades. We put the first 4 chars into the trade
statistics, but for localhost that would be the same value for 2 different nodes.
* Remove errorMessage
If both have opened a dispute and the agent was not online we don't treat
it as an error.
* Improve text
* Keep accept or reject button enabled after accept
- If the peer never accepts, the trader who has accepted first can change
to reject to open an arbitration dispute.
We could improve that by adding a new state to open arbitration
directly and show a different button text and popup. But I think for now
that's ok as well...
* Cleanups (no functional change)
- remove unused params
- remove not used code
- reformat
- clean up comments
- fix log levels
- remove redundant annotations
* Update core/src/main/resources/i18n/displayStrings.properties
Co-Authored-By: Steve Jain <mfiver@gmail.com>
* Update core/src/main/resources/i18n/displayStrings.properties
Co-Authored-By: Steve Jain <mfiver@gmail.com>
* Update core/src/main/resources/i18n/displayStrings.properties
Co-Authored-By: Steve Jain <mfiver@gmail.com>
* Update core/src/main/resources/i18n/displayStrings.properties
Co-Authored-By: Steve Jain <mfiver@gmail.com>
* Update core/src/main/resources/i18n/displayStrings.properties
Co-Authored-By: Steve Jain <mfiver@gmail.com>
* Update core/src/main/resources/i18n/displayStrings.properties
Co-Authored-By: Steve Jain <mfiver@gmail.com>
* Update core/src/main/resources/i18n/displayStrings.properties
Co-Authored-By: Steve Jain <mfiver@gmail.com>
* Update core/src/main/resources/i18n/displayStrings.properties
Co-Authored-By: Steve Jain <mfiver@gmail.com>
* Update core/src/main/resources/i18n/displayStrings.properties
Co-Authored-By: Steve Jain <mfiver@gmail.com>
* Update core/src/main/resources/i18n/displayStrings.properties
Co-Authored-By: Steve Jain <mfiver@gmail.com>
* Improve text
* Auto fill remaining amount in custom payout
If the mediator or arbitrator is doing a custom payout, we auto-fill the
counterpart field with the remaining amount, so they do not need to
calculate it.
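With bitcoinj's Coin type the derivation is one line; a sketch:

    import org.bitcoinj.core.Coin;

    class CustomPayoutHelper {
        // When the agent edits one side of a custom payout, derive the other
        // side so the two always sum to the available amount.
        static Coin counterpartAmount(Coin totalAvailable, Coin enteredAmount) {
            Coin remaining = totalAvailable.subtract(enteredAmount);
            return remaining.isNegative() ? Coin.ZERO : remaining;
        }
    }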
We do not have any old versions anymore which might not support
AckMessages, so we can remove that check. It might also fix issues where
AckMessages are not sent because the capabilities are not known yet.
- Remove processDelayedItems list as we do not delay the data items
anymore, and protectedStoragePayloads do not get extra treatment if
they are marked as LazyProcessedPayload.
- Add duration logging
- Replace checkArgument with an if check
- Apply code inspection
- Cleanup
- Add better logs and duration measurements for expensive operations
- Convert debug logs to trace to avoid flooding the output in debug log
mode.
- Cleanups
* This class is not a clock, but it watches the clock, detects standby
and runs periodic tasks.
* There is already a JDK class called Clock.
First I thought it should be called PeriodicTaskManager; now I find
ClockWatcher more fitting.
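The core idea of such a clock watcher, as a sketch (the threshold and listener shape are assumptions):

    import java.util.Timer;
    import java.util.TimerTask;

    class ClockWatcherSketch {
        private static final long TICK_MS = 1000;
        private long lastTick = System.currentTimeMillis();

        void start(Runnable onStandbyDetected) {
            new Timer("ClockWatcher", true).scheduleAtFixedRate(new TimerTask() {
                @Override
                public void run() {
                    long now = System.currentTimeMillis();
                    // A gap far beyond the tick interval means the JVM was
                    // suspended (OS standby), not merely a busy thread.
                    if (now - lastTick > 10 * TICK_MS)
                        onStandbyDetected.run();
                    lastTick = now;
                }
            }, TICK_MS, TICK_MS);
        }
    }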
When taking an offer, ack messages for
OfferAvailabilityRequest/Response failed to be sent with the
following error:
> Jun-23 22:35:38.129 [JavaFX Application Thread] ERROR
> b.c.o.a.OfferAvailabilityProtocol: AckMessage for
> OfferAvailabilityResponse failed. AckMessage=AckMessage{
> uid='8779f9ae-22e9-4f16-bbbb-4da89fe23cdf',
> senderNodeAddress=localhost:3333,
> sourceType=OFFER_MESSAGE,
> sourceMsgClassName='OfferAvailabilityResponse',
> sourceUid='df1a50c5-c6e7-4c81-8ad4-a100d622a053',
> sourceId='pexluolj-2e5e5d9f-5aca-4a3d-b66a-60b72afe3d2c-112',
> success=true,
> errorMessage='null'
> } NetworkEnvelope{
> messageVersion=12
> }, makersNodeAddress=localhost:3632, errorMessage=We did not send the
> EncryptedMessage because the peer does not support the capability.