Exposing ChannelPhase in ChannelManager has led to verbose match
statements, which need to be modified each time a ChannelPhase is added.
Making ChannelPhase an implementation detail of Channel would help avoid
this.
As a step in this direction, define a conversion from PendingV2Channel
to ChannelPhase (to be renamed Channel).
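As a rough sketch (assuming the existing type parameters and the
UnfundedV2 variant), the conversion could look like:

    impl<SP: Deref> From<PendingV2Channel<SP>> for ChannelPhase<SP>
    where
        SP::Target: SignerProvider,
    {
        fn from(channel: PendingV2Channel<SP>) -> Self {
            // Call sites can write `channel.into()` rather than naming
            // the enum variant, keeping ChannelPhase a hidden detail.
            ChannelPhase::UnfundedV2(channel)
        }
    }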
Exposing ChannelPhase in ChannelManager has led to verbose match
statements, which need to be modified each time a ChannelPhase is added.
Making ChannelPhase an implementation detail of Channel would help avoid
this.
As a step in this direction, define a conversion from InboundV1Channel
to ChannelPhase (to be renamed Channel).
Exposing ChannelPhase in ChannelManager has led to verbose match
statements, which need to be modified each time a ChannelPhase is added.
Making ChannelPhase an implementation detail of Channel would help avoid
this.
As a step in this direction, define a conversion from OutboundV1Channel
to ChannelPhase (to be renamed Channel).
Exposing ChannelPhase in ChannelManager has led to verbose match
statements, which need to be modified each time a ChannelPhase is added.
Making ChannelPhase an implementation detail of Channel would help avoid
this.
As a step in this direction, define a conversion from Channel (to be
renamed FundedChannel) to ChannelPhase (to be renamed Channel).
Exposing ChannelPhase in ChannelManager has led to verbose match
statements, which need to be modified each time a ChannelPhase is added.
Making ChannelPhase an implementation detail of Channel would help avoid
this.
As a step in this direction, update ChannelManager methods to use
methods on ChannelPhase for obtaining the appropriate V1 channel types.
Exposing ChannelPhase in ChannelManager has led to verbose match
statements, which need to be modified each time a ChannelPhase is added.
Making ChannelPhase an implementation detail of Channel would help avoid
this.
As a step in this direction, update the convert_chan_phase_err macro to
use ChannelPhase::as_funded_mut instead of matching on each variant.
Exposing ChannelPhase in ChannelManager has led to verbose match
statements, which need to be modified each time a ChannelPhase is added.
Making ChannelPhase an implementation detail of Channel would help avoid
this.
As a step in this direction, update ChannelManager methods to use
ChannelPhase::as_unfunded_v2_mut and ChannelPhase::into_unfunded_v2
methods.
Exposing ChannelPhase in ChannelManager has led to verbose match
statements, which need to be modified each time a ChannelPhase is added.
Making ChannelPhase an implementation detail of Channel would help avoid
this.
As a step in this direction, update ChannelManager::timer_tick_occurred
to use ChannelPhase::as_funded_mut and a new
ChannelPhase::unfunded_context_mut method.
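A rough sketch of the resulting per-channel loop (details elided, exact
signatures assumed):

    // Hedged sketch: timer handling without a phase match.
    for (_, phase) in peer_state.channel_by_id.iter_mut() {
        if let Some(funded) = phase.as_funded_mut() {
            // ...existing funded-channel timer logic...
        } else if let Some(context) = phase.unfunded_context_mut() {
            // ...unfunded timeout bookkeeping shared by all pending phases...
        }
    }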
Exposing ChannelPhase in ChannelManager has led to verbose match
statements, which need to be modified each time a ChannelPhase is added.
Making ChannelPhase an implementation detail of Channel would help avoid
this.
As a step in this direction, update ChannelManager::handle_error to use
a new ChannelPhase::maybe_handle_error_without_close.
Exposing ChannelPhase in ChannelManager has led to verbose match
statements, which need to be modified each time a ChannelPhase is added.
Making ChannelPhase an implementation detail of Channel would help avoid
this.
As a step in this direction, update ChannelManager::peer_connected to
use ChannelPhase::as_funded_mut and a new
ChannelPhase::maybe_get_open_channel method.
Exposing ChannelPhase in ChannelManager has led to verbose match
statements, which need to be modified each time a ChannelPhase is added.
Making ChannelPhase an implementation detail of Channel would help avoid
this.
As a step in this direction, update ChannelManager::peer_disconnected to
use ChannelPhase::as_funded_mut and a new ChannelPhase::is_resumable
method.
Exposing ChannelPhase in ChannelManager has led to verbose match
statements, which need to be modified each time a ChannelPhase is added.
Making ChannelPhase an implementation detail of Channel would help avoid
this.
As a step in this direction, update ChannelManager::signer_unblocked to
use ChannelPhase::as_funded and a new method on ChannelPhase dispatching
to each variant's signer_maybe_unblocked method.
Exposing ChannelPhase in ChannelManager has led to verbose match
statements, which need to be modified each time a ChannelPhase is added.
Making ChannelPhase an implementation detail of Channel would help avoid
this.
As a step in this direction, update ChannelManager::internal_tx_abort to
use ChannelPhase::is_funded and a new ChannelPhase::as_unfunded_v2_mut
method.
Exposing ChannelPhase in ChannelManager has led to verbose match
statements, which need to be modified each time a ChannelPhase is added.
Making ChannelPhase an implementation detail of Channel would help avoid
this.
As a step in this direction, rewrite ChannelManager's
unfunded_channel_count method to use ChannelPhase::as_funded and a new
ChannelPhase::as_unfunded_v2 method.
Exposing ChannelPhase in ChannelManager has led to verbose match
statements, which need to be modified each time a ChannelPhase is added.
Making ChannelPhase an implementation detail of Channel would help avoid
this.
As a step in this direction, introduce ChannelPhase::is_funded for use
in ChannelManager when a Channel (to be later renamed FundedChannel)
needs to be tested for.
Lack of bindings support was seemingly because the method used to return
a slice of tuples. Now that it returns &[BlindedPaymentPath], bindings
should be possible given that they can be generated for
Bolt12Invoice::message_paths.
Exposing ChannelPhase in ChannelManager has led to verbose match
statements, which need to be modified each time a ChannelPhase is added.
Making ChannelPhase an implementation detail of Channel would help avoid
this.
As a step in this direction, introduce ChannelPhase::as_funded and
ChannelPhase::as_funded_mut for use in ChannelManager when a Channel (to
be later renamed FundedChannel) is needed.
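Conceptually, these accessors are simple variant projections; a sketch
with assumed signatures:

    impl<SP: Deref> ChannelPhase<SP>
    where
        SP::Target: SignerProvider,
    {
        pub fn as_funded(&self) -> Option<&Channel<SP>> {
            if let ChannelPhase::Funded(channel) = self { Some(channel) } else { None }
        }

        pub fn as_funded_mut(&mut self) -> Option<&mut Channel<SP>> {
            if let ChannelPhase::Funded(channel) = self { Some(channel) } else { None }
        }
    }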
Pending v2 channels will need to be broken up into separate phases for
constructing and signing the funding transaction. To avoid increasing
the number of phases, combine the InboundV2Channel and OutboundV2Channel
types so that they can be used in one phase. Whether the channel is
inbound or outbound can be inferred from the ChannelContext.
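For example (ChannelContext::is_outbound is the existing method; the
impl is a sketch):

    impl<SP: Deref> PendingV2Channel<SP>
    where
        SP::Target: SignerProvider,
    {
        // The direction is recoverable from the shared context, so no
        // separate Inbound/Outbound structs are required.
        fn is_outbound(&self) -> bool {
            self.context.is_outbound()
        }
    }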
We recently introduced `TRACE`-level logging for event handling.
However, in onion messenger we'd now log (twice, actually) every time
`process_events_async` is called, which is extremely spammy. Here we fix
this by short-circuiting to only proceed when we actually have any event
futures to poll.
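The short-circuit amounts to an early return before the handling path;
a minimal sketch, with the helper name assumed:

    // Hedged sketch: bail out before any (TRACE-logged) event handling
    // when there is nothing to poll.
    let futures = self.pending_event_futures(); // hypothetical helper
    if futures.is_empty() {
        return;
    }
    // ...poll and handle the collected futures as before...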
Previously, we would fail parsing `Offer`s if the HRP didn't match our
expected (lowercase) HRP. Here, we relax this check in accordance with
the spec to also allow all-uppercase HRPs.
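Per the bech32 rules a string is either all-lowercase or all-uppercase,
so the check can compare case-insensitively while still rejecting mixed
case. A self-contained sketch ("lno" being the offer HRP):

    // Hedged sketch: accept `LNO1...` as well as `lno1...`.
    fn check_offer_hrp(hrp: &str) -> Result<(), &'static str> {
        let has_upper = hrp.chars().any(|c| c.is_ascii_uppercase());
        let has_lower = hrp.chars().any(|c| c.is_ascii_lowercase());
        if has_upper && has_lower {
            return Err("mixed-case HRP is invalid");
        }
        if !hrp.eq_ignore_ascii_case("lno") {
            return Err("unexpected HRP for an offer");
        }
        Ok(())
    }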
Since adding support for creating static invoices from ChannelManager, it's
easier to test these failure cases that went untested when we added support for
paying static invoices.
We can't use our regular offer creation util for receiving async payments
because the recipient can't be relied on to be online to service
invoice_requests.
Therefore, add a new offer creation util that is parameterized by blinded
message paths to another node on the network that *is* always-online and can
serve static invoices on behalf of the often-offline recipient.
Also add a utility for creating static invoices corresponding to these offers.
See the new utils' docs and BOLTs PR 1149 for more info.
Utilizing the results of probes sent once a minute to a random node
in the network for a random amount (within a reasonable range), we
were able to analyze the accuracy of our resulting success
probability estimation with various PDFs across the historical and
live-bounds models.
For each candidate PDF (as well as other parameters, including the
histogram bucket weight), we used the
`min_zero_implies_no_successes` fudge factor in
`success_probability` as well as a total probability multiple fudge
factor to get both the historical success model and the a priori
model to be neither too optimistic nor too pessimistic (as measured
by the relative log-loss between succeeding and failing hops in our
sample data).
We then compared the resulting log-loss for the historical success
model and selected the candidate PDF with the lowest log-loss,
skipping a few candidates with similar resulting log-loss but with
more extreme constants (such as a power of 11 with a higher
`min_zero_implies_no_successes` penalty).
Somewhat surprisingly (to me at least), the (fairly strongly)
preferred model was one where the bucket weights in the historical
histograms are exponentiated. In the current design, the weights
are effectively squared as we multiply the minimum- and maximum-
histogram buckets together before summing the weight*probability
products.
Here we multiply the weights yet again before addition. While the
simulation runs seemed to prefer a slightly stronger weight than
the 4th power we do here, the difference wasn't substantial
(log-loss 0.5058 to 0.4941), so we do the simpler single extra
multiply here.
Note that if we did this naively we'd run out of bits in our
arithmetic operations - we have 16-bit buckets, which when raised
to the 4th can fully fill a 64-bit int. Additionally, when looking
at the 0th min-bucket we occasionally add up to 32 weights together
before multiplying by the probability, requiring an additional five
bits.
Instead, we move to using floats during our histogram walks, which
further avoids some float -> int conversions because it allows for
retaining the floats we're already using to calculate probability.
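A simplified, self-contained sketch of the reweighted walk (the real
code also special-cases the 0th min-bucket; the helper is hypothetical):

    // Hedged sketch: 4th-power bucket weights computed in floats, since
    // (2^16)^4 already fills a 64-bit int before any summation.
    fn historical_estimate(
        min_buckets: &[u16; 32],
        max_buckets: &[u16; 32],
        pair_probability: impl Fn(usize, usize) -> f64, // hypothetical
    ) -> Option<f64> {
        let (mut total_weight, mut weighted_probability) = (0.0f64, 0.0f64);
        for (min_idx, min_count) in min_buckets.iter().enumerate() {
            for (max_idx, max_count) in max_buckets.iter().enumerate() {
                // min * max is the previously-"squared" weight; squaring
                // that product again gives the 4th power described above.
                let weight = (*min_count as f64) * (*max_count as f64);
                let weight = weight * weight;
                total_weight += weight;
                weighted_probability += weight * pair_probability(min_idx, max_idx);
            }
        }
        if total_weight > 0.0 { Some(weighted_probability / total_weight) } else { None }
    }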
Across the last handful of commits, the increased pessimism more
than makes up for the increased runtime complexity, leading to a
40-45% pathfinding speedup on a Xeon Silver 4116 and a 25-45%
speedup on a Xeon E5-2687W v3.
Thanks to @twood22 for being a sounding board and helping analyze
the resulting PDF.
In the next commit we'll want to return floats or ints from
`success_probability` depending on the callsite, so instead of
duplicating the calculation logic, here we split the linear (which
always uses int math) and nonlinear (which always uses float math)
into separate methods, allowing us to write trivial
`success_probability` wrappers that return the desired type.
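A minimal sketch of the shape (signatures assumed; the actual bound
math is elided):

    // Int-only path, returning the probability as numerator/denominator.
    fn linear_success_probability(amount: u64, min: u64, max: u64) -> (u64, u64) {
        (max.saturating_sub(amount), max.saturating_sub(min).max(1))
    }

    // Float-only path; the nonlinear PDF math would live here.
    fn nonlinear_success_probability(amount: u64, min: u64, max: u64) -> f64 {
        let (num, den) = linear_success_probability(amount, min, max);
        num as f64 / den as f64 // placeholder for the PDF-based formula
    }

    // Trivial wrapper returning a float regardless of which path ran.
    fn success_probability_as_float(amount: u64, min: u64, max: u64, linear: bool) -> f64 {
        if linear {
            let (num, den) = linear_success_probability(amount, min, max);
            num as f64 / den as f64
        } else {
            nonlinear_success_probability(amount, min, max)
        }
    }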
Utilizing the results of probes sent once a minute to a random node
in the network for a random amount (within a reasonable range), we
were able to analyze the accuracy of our resulting success
probability estimation with various PDFs across the historical and
live-bounds models.
For each candidate PDF (as well as other parameters, to be tuned in
the coming commits), we used the `min_zero_implies_no_successes`
fudge factor in `success_probability` as well as a total
probability multiple fudge factor to get both the historical
success model and the a priori model to be neither too optimistic
nor too pessimistic (as measured by the relative log-loss between
succeeding and failing hops in our sample data).
Across the simulation runs, for a given PDF and other parameters,
we nearly always did better with a shorter half-life (even as short
as 1ms, i.e. only learning per-probe rather than across probes).
While this likely makes sense for nodes which do live probing, not
all nodes do, and thus we should avoid over-biasing on the dataset
we have.
While it may make sense to only learn per-payment and not across
payments, I can't fully rationalize this result and thus want to
avoid over-tuning, so here we reduce the half-life from 6 hours to
30 minutes.
If the liquidity penalty multipliers in the scoring config are both
0 (as is now the default), the corresponding liquidity penalties
will be 0. Thus, we should avoid doing the work to calculate them
if we're ultimately just going to get a value of zero anyway, which
we do here.
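The short-circuit is then a cheap early return, e.g. (using the
existing parameter names):

    // Hedged sketch: skip the liquidity penalty math entirely when both
    // multipliers are zero, as the result can only be zero.
    if score_params.liquidity_penalty_multiplier_msat == 0
        && score_params.liquidity_penalty_amount_multiplier_msat == 0
    {
        return 0;
    }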
Utilizing the results of probes sent once a minute to a random node
in the network for a random amount (within a reasonable range), we
were able to analyze the accuracy of our resulting success
probability estimation with various PDFs across the historical and
live-bounds models.
For each candidate PDF (as well as other parameters, to be tuned in
the coming commits), we used the `min_zero_implies_no_successes`
fudge factor in `success_probability` as well as a total
probability multiple fudge factor to get both the historical
success model and the a priori model to be neither too optimistic
nor too pessimistic (as measured by the relative log-loss between
succeeding and failing hops in our sample data).
We then compared the resulting log-loss for the historical success
model and selected the candidate PDF with the lowest log-loss,
skipping a few candidates with similar resulting log-loss but with
more extreme constants (such as a power of 11 with a higher
`min_zero_implies_no_successes` penalty).
In every case, the historical model performed substantially better
than the live-bounds model, so here we simply disable the
live-bounds model by default and use only the historical model.
Further, we use the calculated total probability multiple fudge
factor (0.7886892844179266) to choose the ratio between the
historical model and the per-hop penalty (as multiplying each hop's
probability by 78% is equivalent to adding a per-hop penalty of
log10(0.78) of our probabilistic penalty).
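Concretely, since the liquidity penalty is proportional to -log10 of a
hop's success probability, a constant 0.78 multiplier per hop shifts
each hop's penalty by a constant amount:

    -log10(0.78 * p) = -log10(p) - log10(0.78)
                     ≈ -log10(p) + 0.108

so scaling every hop's probability by 78% is interchangeable with a
flat per-hop penalty of roughly 0.108 times the liquidity penalty
multiplier.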
We take this opportunity to bump the penalties up a bit as well, as
anecdotally LDK users are willing to pay more than they do today to
get more successful paths.
Fixes #3040
Utilizing the results of probes sent once a minute to a random node
in the network for a random amount (within a reasonable range), we
were able to analyze the accuracy of our resulting success
probability estimation with various PDFs.
For each candidate PDF (as well as other parameters, to be tuned in
the coming commits), we used the `min_zero_implies_no_successes`
fudge factor in `success_probability` as well as a total
probability multiple fudge factor to get both the historical
success model and the a priori model to be neither too optimistic
nor too pessimistic (as measured by the relative log-loss between
succeeding and failing hops in our sample data).
We then compared the resulting log-loss for the historical success
model and selected the candidate PDF with the lowest log-loss,
skipping a few candidates with similar resulting log-loss but with
more extreme constants (such as a power of 11 with a higher
`min_zero_implies_no_successes` penalty).
This resulted in a PDF of `128 * (1/256 + 9*(x - 0.5)^8)` with a
`min_zero_implies_no_successes` probability multiplier of 64/78.
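For reference, this PDF is normalized over [0, 1] (each term integrates
to 1/256, so the 128 prefactor normalizes the total to one) and is
trivial to evaluate:

    // Hedged sketch: the selected a priori PDF over the liquidity
    // fraction x in [0, 1].
    fn apriori_pdf(x: f64) -> f64 {
        128.0 * (1.0 / 256.0 + 9.0 * (x - 0.5).powi(8))
    }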
Thanks to @twood22 for being a sounding board and helping analyze
the resulting PDF.
We should update the return types on the signing methods here as
well, but we should at least start by documenting which methods are
async and which are not.
Once we complete async support for `get_per_commitment_point`, we
can change the return types as most things in the channel signing
traits will be finalized.
bcaba29f92 started returning
pre-built `Route`s from the router in the `chanmon_consistency`
fuzzer. In doing so, it didn't properly fill in the `route_params`
field which is expected to match the requested parameters. This
causes a debug assertion when sending.
Here we fix this by setting the correct `route_params`.
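The fix amounts to echoing the requested parameters on the pre-built
route, roughly:

    // Hedged sketch: `Route::route_params` must match the requested
    // parameters or a debug assertion fires when sending.
    route.route_params = Some(route_params.clone());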
This context is included in the static invoice's blinded message paths, provided
back to us in HeldHtlcAvailable onion messages for blinded path authentication.
In future work, we will check if this context is valid and respond with a
ReleaseHeldHtlc message to release the upstream payment if so.
We also add creation methods for the HMAC used for authenticating said
blinded paths.
Now that the core features required for `async_signing` are in
place, we can go ahead and expose it publicly (rather than behind a
`cfg`-flag). We still don't have full async support for
`get_per_commitment_point`, but only one case in channel
reconnection remains. The overall logic may still have some
hiccups, but it's been in use in production at a major LDK user for
some time now. Thus, it doesn't really make sense to hide behind a
`cfg`-flag, even if the feature is only 99% complete. Further, the
new paths exposed are very restricted to signing operations that
run async, so the risk for existing users should be incredibly low.