This change provides a gRPC CallRateMeteringInterceptor to help protect
the server and network against being overloaded by CLI scripting mistakes.
An interceptor instance can be configured on a gRPC service to set
individual call rate limits on one or more of the service's methods.
For example, the GrpcOffersService could be configured with this
interceptor to set the createoffer rate limit to 5/hour and the
takeoffer rate limit to 20/day. Whenever a call rate limit is exceeded,
the gRPC call is aborted and the client receives a "rate limit exceeded"
error.
Below is a simple example showing how to set rate limits for one method
in GrpcVersionService.
import static java.util.concurrent.TimeUnit.SECONDS;

import io.grpc.ServerInterceptor;
import java.util.HashMap;

final ServerInterceptor[] interceptors() {
    return new ServerInterceptor[]{
            new CallRateMeteringInterceptor(new HashMap<>() {{
                put("getVersion", new GrpcCallRateMeter(2, SECONDS));
            }})
    };
}
This specifies that a CLI client can call getversion at most 2 times per second.
This is not a throttling mechanism; there is no blocking or locking to
slow call rates. When a call rate limit is exceeded, the call is
simply aborted.
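For the GrpcOffersService example mentioned above, the configuration
could look like the following sketch. Only the 5/hour and 20/day limits
come from the description above; the exact method-name keys
("createOffer", "takeOffer") used in the map are assumptions here.

import static java.util.concurrent.TimeUnit.DAYS;
import static java.util.concurrent.TimeUnit.HOURS;

import io.grpc.ServerInterceptor;
import java.util.HashMap;

final ServerInterceptor[] interceptors() {
    return new ServerInterceptor[]{
            new CallRateMeteringInterceptor(new HashMap<>() {{
                // Limit offer creation to 5 calls per hour.
                put("createOffer", new GrpcCallRateMeter(5, HOURS));
                // Limit offer taking to 20 calls per day.
                put("takeOffer", new GrpcCallRateMeter(20, DAYS));
            }})
    };
}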
Redesign the UI
Add import/export of payout settings
Add ability to import from mediation ticket
Mediator does not need private key
User can sign using own wallet or private key
Validation of input fields
Calculate the tx fee based on inputs
Display of the generated txid & hex so it can be checked
If one starts with --daoActivated=false, there is no service
set up in one of the data store services, so it never calls
the result handler and the app never starts up.
This change reduces gRPC service error handling duplication by moving
it into a @Singleton that encapsulates everything needed to wrap
an expected or unexpected core api exception into a gRPC
StatusRuntimeException before sending it to the client. It also
fixes some boilerplate classes where gRPC error handling was missing.
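A minimal sketch of such a singleton is shown below; the class name,
method signature, and the status mapping are illustrative assumptions,
not the actual implementation in this change.

import io.grpc.Status;
import io.grpc.StatusRuntimeException;
import io.grpc.stub.StreamObserver;
import javax.inject.Singleton;

@Singleton
class GrpcErrorHandler {

    // Wraps a core api exception in a gRPC StatusRuntimeException
    // and sends it to the client via the response observer.
    void handleException(Throwable t, StreamObserver<?> responseObserver) {
        // Assumption: expected (user-facing) errors map to INVALID_ARGUMENT,
        // everything else is reported as UNKNOWN.
        Status status = t instanceof IllegalArgumentException
                ? Status.INVALID_ARGUMENT
                : Status.UNKNOWN;
        StatusRuntimeException grpcException =
                status.withDescription(t.getMessage()).asRuntimeException();
        responseObserver.onError(grpcException);
    }
}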
Call flush at openOfferManager shutdown.
Remove unused method.
Force broadcaster to send out immediately; otherwise we could
have a 2 sec delay until the bundled messages are sent out.
As the normal maps are not using a ConcurrentHashMap, using one here is
likely unnecessary as well; I am not aware of any multi-threaded access.
But as it does not show any difference in performance,
it is likely the slightly safer option.
directly and we did not remove the item from the cache)
- Add getSignedWitnessMapValues method for access to signedWitnessMap values
- Remove Getter for signedWitnessMap
- Add removeSignedWitness method (used in test only)
- Use addToMap in test instead of direct access to map
- Remove getSignedWitnessSetCache entry matching
signedWitness.getHashAsByteArray() as well.
See comment. @sqrrm: Can you cross check?
getSignedWitnessSet is called very often and is a bit
expensive. We cache the result in that map, but we
remove the cache entry if a matching SignedWitness
gets added to the signedWitnessMap.
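A simplified sketch of that invalidation is below; the field and type
names (ByteArray, signedWitnessMap, getSignedWitnessSetCache) are
assumed for illustration and may not match the actual code exactly.

import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// Cache for getSignedWitnessSet results, keyed by witness hash.
private final Map<ByteArray, Set<SignedWitness>> getSignedWitnessSetCache = new HashMap<>();

private void addToMap(SignedWitness signedWitness) {
    signedWitnessMap.put(signedWitness.getHashAsByteArray(), signedWitness);
    // Invalidate the cached entry matching the new witness so the next
    // getSignedWitnessSet call recomputes it.
    getSignedWitnessSetCache.remove(signedWitness.getHashAsByteArray());
}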
The accountAgeWitnessMap is very large (70k items) and
access is a bit expensive. We usually only access fewer
than 100 items, those belonging to peers with offers online. So we
use a cache for a fast lookup, and only if the item is
not found there do we look it up in the accountAgeWitnessMap and
then put it into our cache.
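A simplified sketch of that lookup pattern; the field and method names
(accountAgeWitnessCache, getWitnessByHash, ByteArray) are illustrative
assumptions, not necessarily those used in the actual change.

import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Large backing map (~70k items) and a small cache for the few
// witnesses we actually touch (those with offers online).
private final Map<ByteArray, AccountAgeWitness> accountAgeWitnessMap = new HashMap<>();
private final Map<ByteArray, AccountAgeWitness> accountAgeWitnessCache = new HashMap<>();

private Optional<AccountAgeWitness> getWitnessByHash(ByteArray hash) {
    // Fast path: the small cache.
    AccountAgeWitness cached = accountAgeWitnessCache.get(hash);
    if (cached != null)
        return Optional.of(cached);

    // Slow path: the large map; put the result into the cache for next time.
    AccountAgeWitness witness = accountAgeWitnessMap.get(hash);
    if (witness != null)
        accountAgeWitnessCache.put(hash, witness);
    return Optional.ofNullable(witness);
}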