https://github.com/tkaitchuck/aHash/pull/196 bumped the MSRV of
`ahash` in a patch release, which makes it rather difficult for us
to have it as a dependency.
Further, it seems that `ahash` hasn't been particularly robust in
the past, notably
https://github.com/tkaitchuck/aHash/issues/163 and
https://github.com/tkaitchuck/aHash/issues/166.
Luckily, `core` provides `SipHasher` even on no-std (sadly it's
SipHash-2-4, unlike the SipHash-1-3 used by `std`'s
`DefaultHasher`). Thus, we drop the `ahash` dependency entirely here
and simply wrap `SipHasher` for our no-std `HashMap`s.
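A minimal sketch of what such a wrapper might look like (the type
name is hypothetical, and we assume `hashbrown` provides the
underlying map; real key material would come from an entropy
source):

```rust
#![allow(deprecated)] // core's `SipHasher` is deprecated but still available
use core::hash::{BuildHasher, SipHasher};

/// A `BuildHasher` wrapping core's `SipHasher`, seeded once with
/// per-process random keys to keep maps DoS-resistant.
#[derive(Clone, Copy)]
pub struct RandomizedSipHasherBuilder {
	k0: u64,
	k1: u64,
}

impl BuildHasher for RandomizedSipHasherBuilder {
	type Hasher = SipHasher;
	fn build_hasher(&self) -> SipHasher {
		SipHasher::new_with_keys(self.k0, self.k1)
	}
}

/// A no-std `HashMap`, backed by `hashbrown` with the wrapper above.
pub type HashMap<K, V> = hashbrown::HashMap<K, V, RandomizedSipHasherBuilder>;
```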
While this isn't expected to materially improve performance, it
does get us `ahash` 0.8, which allows us to reduce randomness
during fuzzing, making our fuzzers much happier.
Sadly, by default `ahash` no longer tries to autodetect a
randomness source, so we cannot simply rely on `hashbrown` to do
randomization for us, but rather have to also explicitly depend on
`ahash`.
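For illustration, explicitly seeding `ahash` 0.8 for a `hashbrown`
map might look something like this (seed values are placeholders;
in practice they would come from a real randomness source):

```rust
use ahash::RandomState;

fn main() {
	// `ahash` 0.8 no longer autodetects a randomness source by default,
	// so we construct a seeded `RandomState` ourselves...
	let state = RandomState::with_seeds(1, 2, 3, 4);
	// ...and hand it to `hashbrown` explicitly.
	let mut map: hashbrown::HashMap<&str, u64, RandomState> =
		hashbrown::HashMap::with_hasher(state);
	map.insert("key", 42);
}
```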
This is a good gut-check to ensure we don't end up taking a ton of
time decaying channel liquidity information.
The benchmark currently clocks in at around 1.25ms on an i7-1360P.
Our "what is the success probability of paying over a channel with
the given liquidity bounds" calculation currently assumes the
probability of where the liquidity lies in a channel is constant
across the entire capacity of a channel. This is obviously a
somewhat dubious assumption given most nodes don't materially
rebalance and flows within the network often push liquidity
"towards the edges".
Here we add an option to consider this when scoring channels during
routefinding. Specifically, if a new `linear_success_probability`
flag is unset on `ProbabilisticScoringFeeParameters`, rather than
assuming a PDF of `1` (across the channel's capacity scaled from 0
to 1), we use `(x - 0.5)^2`.
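Opting into the new behavior is then just a matter of clearing the
flag (a sketch against the scoring parameters named above; the
import path reflects LDK's layout at the time of writing):

```rust
use lightning::routing::scoring::ProbabilisticScoringFeeParameters;

fn main() {
	let mut params = ProbabilisticScoringFeeParameters::default();
	// Unset the flag to use the nonlinear (x - 0.5)^2 liquidity PDF.
	params.linear_success_probability = false;
}
```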
This assumes liquidity is likely to be near the edges of a channel,
which matches experimental results. Further, calculating the CDF
(i.e. the integral) between arbitrary points on the PDF is trivial,
which is exactly what our main scoring function needs to do.
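Concretely, since the antiderivative of `(x - 0.5)^2` is
`(x - 0.5)^3 / 3`, the success probability reduces to a ratio of
cubics. A simplified sketch (not LDK's exact internals; all values
are scaled to the channel capacity, i.e. lie in [0, 1]):

```rust
/// Probability that a payment of `amount` succeeds over a channel whose
/// liquidity is known to lie in `[min_liquidity, max_liquidity]`, under
/// the PDF (x - 0.5)^2.
fn success_probability(amount: f64, min_liquidity: f64, max_liquidity: f64) -> f64 {
	// Antiderivative of (x - 0.5)^2, up to the constant factor 1/3,
	// which cancels in the ratio below.
	let integral = |x: f64| (x - 0.5) * (x - 0.5) * (x - 0.5);
	// P(liquidity >= amount | min <= liquidity <= max)
	(integral(max_liquidity) - integral(amount))
		/ (integral(max_liquidity) - integral(min_liquidity))
}

fn main() {
	// With no liquidity information (bounds [0, 1]), sending half the
	// channel's capacity succeeds with probability 0.5:
	assert!((success_probability(0.5, 0.0, 1.0) - 0.5).abs() < 1e-9);
}
```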
While this (finally) introduces floats into our scoring, it's not
practical to exponentiate using fixed-precision arithmetic.
Benchmarks show a performance regression, but not a huge one, and
it is more than made up for by the increase in payment success
rates.
Firstly, we switch our background processor (BP) over to use
`FilesystemStore`, which also gives us test coverage and ensures
compatibility.
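For reference, constructing the new store is straightforward (a
sketch assuming the `lightning_persister::fs_store` module layout;
the wiring into the BP is elided):

```rust
use lightning_persister::fs_store::FilesystemStore;

fn main() {
	// `FilesystemStore` implements the `KVStore` trait, replacing the
	// old `FilesystemPersister`/`KVStorePersister` pair.
	let store = FilesystemStore::new("/path/to/ldk_data".into());
	let _ = store;
}
```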
Then, we remove the superseded `KVStorePersister` trait and
the `FilesystemPersister` code.
Rather than using the std benchmark framework (which is
unmaintained and unlikely to see further development), we switch to
`criterion`, which at least gives us a variable number of test runs
so our benchmarks don't take forever.
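A minimal `criterion` benchmark has this shape (the benchmark name
and body here are hypothetical):

```rust
use criterion::{criterion_group, criterion_main, Criterion};

fn bench_decay_liquidity(c: &mut Criterion) {
	// criterion picks the iteration count itself, so slow benchmarks
	// no longer run a fixed (and excessive) number of times.
	c.bench_function("decay_liquidity", |b| {
		b.iter(|| {
			// ...exercise the scorer's liquidity-decay pass here...
		})
	});
}

criterion_group!(benches, bench_decay_liquidity);
criterion_main!(benches);
```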
We also fix the RGS benchmark so that it passes now that the
snapshot file it uses is stale relative to today's date.