We now have two places where we need to do that: in the root payment, after
checking whether the destination is reachable, and in any other payment,
directly in the initialization-step callback.
This was spread over the step callback, but we only need to initialize the
routehints array there; child payments can simply inherit most of the information.
This does two things: it checks whether the destination of the payment is
reachable at all without routehints, and if it is, it adds a direct attempt as
an option to the routehints in the form of a NULL routehint. It also simplifies
the selection of the routehint, since the direct case is no longer special: we
just return a NULL routehint as if it were a normal one.
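A minimal sketch of the resulting selection, assuming a hypothetical
`routehints_data` layout rather than the plugin's actual structures:

    #include <stddef.h>

    struct route_info;  /* opaque routehint, as found in the invoice */

    struct routehints_data {
        /* A NULL entry represents the "direct, no routehint" attempt. */
        struct route_info **routehints;
        size_t num_hints;
        size_t next;            /* which entry to try next */
    };

    /* Return the routehint for the next attempt. A NULL return no longer
     * needs special-casing by the caller: it simply means "attempt the
     * destination directly". */
    static struct route_info *next_routehint(struct routehints_data *d)
    {
        if (d->num_hints == 0)
            return NULL;
        return d->routehints[d->next++ % d->num_hints];
    }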
The adaptive MPP test was showing an issue with always using a routehint, even
when it wasn't necessary: we would insist on routing to the entrypoint of the
routehint, even if that route went through the actual destination. If a channel
on that loop ended up being over capacity we'd slam the estimate below 0, and
then increase it again by unapplying the route. The real solution is not to
insist on routing through a routehint, so we implement random skipping of
routehints, and we rotate them if we have multiple.
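Roughly, the selection with skipping and rotation could look like the following
sketch; `select_routehint` and its parameters are illustrative only:

    #include <stdlib.h>
    #include <stddef.h>

    struct route_info;

    /* Hypothetical selection helper: with probability 1/(num_hints + 1) we
     * return NULL, i.e., we skip the hints and try to reach the destination
     * directly; otherwise we rotate through the hints so retries don't keep
     * hammering the same entrypoint. */
    static struct route_info *select_routehint(struct route_info **hints,
                                               size_t num_hints,
                                               size_t *rotation)
    {
        if (num_hints == 0 || rand() % (num_hints + 1) == 0)
            return NULL;
        return hints[(*rotation)++ % num_hints];
    }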
As the hints get new fields added, it is easy to forget to amend one of the
places where we create them. Since we already have an update method, let's use
it to handle all additions to the array of known channel hints.
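A sketch of the idea, with stand-in types: both merging with an existing hint
and appending a new one go through a single helper, so a newly added field only
needs handling in one place.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <stdlib.h>

    /* Illustrative stand-ins for the real types. */
    struct scid_dir { uint64_t scid; int direction; };

    struct channel_hint {
        struct scid_dir scidd;
        uint64_t estimated_capacity_msat;
        unsigned int htlc_budget;   /* example of a field added later */
        bool enabled;
    };

    /* Single entry point for "learn something about a channel": if we
     * already have a hint for this channel we tighten it, otherwise we
     * append a new one. */
    static void channel_hint_update(struct channel_hint **hints, size_t *n,
                                    const struct channel_hint *newhint)
    {
        for (size_t i = 0; i < *n; i++) {
            struct channel_hint *h = &(*hints)[i];
            if (h->scidd.scid != newhint->scidd.scid ||
                h->scidd.direction != newhint->scidd.direction)
                continue;
            /* Keep the more restrictive of the two estimates. */
            if (newhint->estimated_capacity_msat < h->estimated_capacity_msat)
                h->estimated_capacity_msat = newhint->estimated_capacity_msat;
            if (newhint->htlc_budget < h->htlc_budget)
                h->htlc_budget = newhint->htlc_budget;
            h->enabled = h->enabled && newhint->enabled;
            return;
        }
        struct channel_hint *tmp = realloc(*hints, (*n + 1) * sizeof(**hints));
        if (tmp == NULL)
            return;
        *hints = tmp;
        tmp[(*n)++] = *newhint;
    }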
We were removing the current hint from the list and not inheriting the current
routehint, so we'd be forgetting a hint at each retry. Now we keep the array
unchanged in the root, and simply skip the ones that are not usable given the
current information we have about the channels (in the form of channel_hints).
Fixes #3861
This may be related to issue #3862; however, the water was muddied by it being
the wrong error to return, and the node should not expect this courtesy feature
to be present at all...
There is little point in trying to split if the resulting HTLCs exceed the
maximum number of HTLCs we can add to our channels. So abort if a split would
result in more HTLCs than our channels can support.
The presplit modifier could end up exceeding the maximum number of HTLCs we
can add to a channel right out of the gate, so we switch to a dynamic presplit
in that case. The presplit will now use at most 1/3rd of the available HTLCs on
the channels if the normal split would exceed the number of available HTLCs.
We also abort early if we don't have sufficient HTLCs available.
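The sizing logic boils down to something like this sketch (names and the exact
fallback are illustrative):

    #include <stddef.h>

    /* Decide how many parts the presplit should create. `target_parts` is
     * what the static presplit would use; `htlc_budget` is how many more
     * HTLCs our channels can currently accept. Returning 0 signals
     * "abort, not enough HTLCs available". */
    static size_t presplit_count(size_t target_parts, size_t htlc_budget)
    {
        if (htlc_budget == 0)
            return 0;                    /* abort early: no HTLC slots left */
        if (target_parts <= htlc_budget)
            return target_parts;         /* the normal split fits */
        /* Dynamic presplit: use at most a third of the available HTLCs. */
        return htlc_budget / 3 > 0 ? htlc_budget / 3 : 1;
    }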
It turns out that by aggressively splitting payments we may quickly exhaust the
number of HTLCs we can add to a channel. By tracking the number of HTLCs we can
still add, and excluding the channels to which we cannot add any more, we
increase route diversity and avoid quickly exhausting the HTLC budget.
In the next commit we'll also implement an early abort if we've exhausted all
channels, so we don't end up splitting indefinitely and we can also optimize
the initial split to not run afoul of that limit.
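A sketch of the bookkeeping, assuming a simplified local-channel view (the real
plugin derives these numbers from `listpeers`):

    #include <stddef.h>
    #include <stdint.h>

    /* Simplified view of a local channel and its HTLC headroom. */
    struct local_channel {
        uint64_t scid;
        unsigned int htlcs_in_flight;
        unsigned int max_accepted_htlcs;
    };

    /* Collect the channels that cannot take another HTLC into an exclusion
     * list for getroute; `excludes` must have room for `n` entries. */
    static size_t exclude_exhausted_channels(const struct local_channel *chans,
                                             size_t n, uint64_t *excludes)
    {
        size_t num_excluded = 0;
        for (size_t i = 0; i < n; i++)
            if (chans[i].htlcs_in_flight >= chans[i].max_accepted_htlcs)
                excludes[num_excluded++] = chans[i].scid;
        return num_excluded;
    }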
This PR includes the fix discussed in #3855. The fix was tested with the use case described in the issue and it worked.
Fixes: #3855
Changelog-None
The code had incorrect assertions, partially because it didn't clearly
distinguish between errors from the final node (which, barring blockheight
issues, mean a complete failure) and errors from intermediate nodes.
In particular, we can't trust the *values*, so we need to distinguish
these by the *sender*.
If a route is of length 2 (A, B):
- erring_index == 0 means us complaining about channel A.
- erring_index == 1 means A.node complaining about channel B.
- erring_index == 2 means the final destination node B.node.
This is particularly of note because Travis does NOT run test_pay_routeboost!
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
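To illustrate the erring_index rule above, here is a small self-contained
example (not code from the plugin):

    #include <stdio.h>

    /* For a route of length `route_len`, erring_index counts hops away from
     * us: 0 is us complaining about the first channel, route_len is the
     * final destination node itself, and anything in between is an
     * intermediate node complaining about its outgoing channel. */
    static const char *who_complained(int erring_index, int route_len)
    {
        if (erring_index == 0)
            return "us, about the first channel";
        if (erring_index < route_len)
            return "an intermediate node, about its outgoing channel";
        return "the final destination node";
    }

    int main(void)
    {
        /* Route of length 2 (A, B), as in the example above. */
        for (int i = 0; i <= 2; i++)
            printf("erring_index == %d: %s\n", i, who_complained(i, 2));
        return 0;
    }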
We were using the current constraints, including any shadow route and other
modifications, when computing the remainder that the second child should
use. Instead we should use the `start_constraints` on the parent payment,
which is a copy of `constraints` created in `payment_start` exactly for this
purpose.
Also added an assert for the invariant on the multiplier.
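A simplified sketch of the intent; the real splitter logic differs, but the key
point is which of the two constraint copies the remainder is derived from:

    #include <stdint.h>

    struct constraints {
        uint64_t fee_budget_msat;
        uint32_t cltv_budget;
    };

    /* The budget left for the second child is computed against the pristine
     * `start_constraints` snapshot taken in `payment_start`, not against
     * `constraints`, which modifiers such as the shadow route may already
     * have tightened. Assumes the first child stayed within budget. */
    static struct constraints remainder_for_second_child(
        const struct constraints *start_constraints,
        const struct constraints *used_by_first)
    {
        struct constraints c;
        c.fee_budget_msat = start_constraints->fee_budget_msat
                            - used_by_first->fee_budget_msat;
        c.cltv_budget = start_constraints->cltv_budget;
        return c;
    }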
When using MPP we need partids to always be > 0. Since we bumped the partid for
the root but not the next_id, we'd end up with partid=1 being duplicated. Not a
big problem, since we never ended up sending the root to lightningd, instead
skipping it, but it was confusing me while trying to trace a sub-payment's
ancestry.
We skip most payment steps and all sub-payments, so consolidate the skip
conditions into one if-statement. We also now use `payment_set_step` to skip
any modifiers after us once the step has changed.
We now check against both constraints on the modifier and the payment before
applying either. This "fixes" the assert that was causing the crash in #3851,
but we are still looking for the source of the inconsistency where the
modifier constraints, initialized to 1/4th of the payment, suddenly get more
permissive than the payment itself.
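A sketch of the combined check; the helper name and struct layout are
illustrative:

    #include <stdbool.h>
    #include <stdint.h>

    struct constraints {
        uint64_t fee_budget_msat;
        uint32_t cltv_budget;
    };

    /* Only deduct a route's cost if it fits both the modifier's own budget
     * and the payment's overall budget; deducting from one and then
     * tripping over the other is what the assert was catching. */
    static bool apply_route_cost(struct constraints *mod_cs,
                                 struct constraints *payment_cs,
                                 uint64_t fee_msat, uint32_t cltv_delta)
    {
        if (fee_msat > mod_cs->fee_budget_msat ||
            fee_msat > payment_cs->fee_budget_msat ||
            cltv_delta > mod_cs->cltv_budget ||
            cltv_delta > payment_cs->cltv_budget)
            return false;
        mod_cs->fee_budget_msat -= fee_msat;
        payment_cs->fee_budget_msat -= fee_msat;
        mod_cs->cltv_budget -= cltv_delta;
        payment_cs->cltv_budget -= cltv_delta;
        return true;
    }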
This was highlighted in #3851, so I added an assertion. After the rewrite in
the next commit we will simply skip if any of the constraints are not
maintained, but this assertion serves as the canary in the coal mine, so we
don't paper over the inconsistency.
While we were unsetting the `payment->cmd` in case of a success to signal that
we should not return to the JSON-RPC command twice, we were not doing that in
the case of failures. This was causing multiple responses to a single incoming
command, and `lightningd` was correctly killing the plugin. This issue was
introduced through early returns (anything setting `payment->abort=true`) and
was triggered in Rusty's case by an MPP timeout.
Fixes #3847
Reported-by: Rusty Russell <@rustyrussell>
Signed-off-by: Christian Decker <@cdecker>
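To illustrate the fix above, a sketch with a stripped-down `payment` struct;
both the success and the failure path would respond only if this returns
non-NULL:

    #include <stddef.h>

    struct command;

    struct payment {
        struct command *cmd;
        /* ... */
    };

    /* Detach the JSON-RPC command from the payment before responding, on
     * success and on failure alike. A later code path (e.g. an MPP timeout
     * setting abort) then finds NULL and stays silent instead of answering
     * the command a second time. */
    static struct command *payment_take_cmd(struct payment *p)
    {
        struct command *cmd = p->cmd;
        p->cmd = NULL;
        return cmd;
    }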
We were rather pedantically failing the plugin if we were unable to parse the
`waitsendpay` result, but we had coded all the modifiers in such a way that
they can handle a `NULL` result (verified in the code and manually by randomly
failing the parsing). So now we just log the result we failed to parse and
merrily go on our way.
Worst case is that we end up retrying the same route multiple times, since we
can't blacklist any nodes / channels without understanding the error, but that
is still in the scope of what we must handle anyway.
This modifier splits a payment that has been attempted a number of times (by a
modifier earlier in the mod chain) and has failed consistently. It splits the
amount roughly in half, with a bit of random fuzz, and then starts a new round
of attempts for the two smaller amounts.
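A sketch of the halving with fuzz; the 10% fuzz factor here is an illustrative
choice, not the modifier's exact constant:

    #include <stdint.h>
    #include <stdlib.h>

    /* Split `amount_msat` into two roughly equal parts, fuzzed by up to
     * about 10% around the midpoint so the halves don't look like exact
     * halves. The real modifier then retries both parts independently. */
    static void split_in_half(uint64_t amount_msat,
                              uint64_t *first, uint64_t *second)
    {
        uint64_t fuzz = amount_msat / 10;
        uint64_t mid = amount_msat / 2;
        if (fuzz > 0)
            mid = mid - fuzz + (uint64_t)rand() % (2 * fuzz + 1);
        *first = mid;
        *second = amount_msat - mid;
    }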
With the `presplit`-modifier we actually skip execution of the root altogether,
which results in the root not having a result at all. Instead we should use
the result returned by `payment_collect_result`.
With MPP we require that the sum of parts is equal to the `total_msat` amount
declared in the onion. Since that can't be changed once the first part arrives
we need a way to disable amount fuzzing for MPP.
Technically an API break, but nobody relies on these I hope!
Note that the feerates warning was buried inside the style object:
it should be top-level.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We were applying the fee exemption to all payments individually, which is ok
until we switch to MPP, where amounts change. Also the log entry was referring
to the total amount, and not the fee of the payment.
This was causing the state flapping test to fail, since we were yielding
control of the io_loop, waiting for the blockheight to be reached, and not
setting the status beforehand. An interim `paystatus` call would then find a
failed leaf and deduce the entire payment failed. Setting it back to the
previous state keeps the overall payment pending while we wait.
As suggested during the paymod-03 review it is better to activate the new code
right away, and give users an escape hatch to use the legacy code instead. The
way I implemented it allows using either `legacypay` or `pay` and then setting
`legacy` to switch to the other implementation.
Changelog-Added: JSON-RPC: The `pay` command now uses the new payment flow; the new `legacypay` command can be used to issue payments with the legacy code if required.
Suggested-by: Rusty Russell <@rustyrussell>
Suggested-by: ZmnSCPxj <@zmnscpxj>
It turns out that the `failcodename` doesn't get populated if the `failcode`
isn't a known error from the enum (duh...) so don't fail parsing if it's
missing.
We want to differentiate a wrong block-height from other failure reasons, such
as an unknown `payment_hash`, so we skip the `waitblockheight` if we're
already at the correct height.
Add/remove the HTLC amount from the channel hints so concurrent getroute calls
have the correct exclusions. This can sometimes underflow if we're unlucky and
call getroute too close together, but that's not a big issue: it can only
result in one failed MPP attempt too many, nothing fatal, and it'll get
corrected based on the result returned by the failing node.
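A sketch of the apply/unapply step, with a reduced `channel_hint` and a
hypothetical helper name:

    #include <stdbool.h>
    #include <stdint.h>

    struct channel_hint {
        uint64_t estimated_capacity_msat;
        /* ... */
    };

    /* Reserve the amount of an HTLC we are about to add on this channel,
     * or release it again once the attempt resolves, so concurrent
     * getroute calls see the reduced capacity. The clamp at zero covers
     * the rare underflow described above, which is harmless and corrects
     * itself with the next result. */
    static void channel_hint_htlc(struct channel_hint *h,
                                  uint64_t amount_msat, bool add)
    {
        if (add) {
            if (h->estimated_capacity_msat <= amount_msat)
                h->estimated_capacity_msat = 0;
            else
                h->estimated_capacity_msat -= amount_msat;
        } else {
            h->estimated_capacity_msat += amount_msat;
        }
    }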
These are primarily the fee and cltv constraints that we need to keep up to
date in order to give modifiers a correct view of what is and what isn't
allowed.
We could end up with multiple channel hints for the same channel, which is a
bit wasteful. We now look for an existing hint before adding a new one, and if
one exists we keep the more restrictive parameters.
Suggested-by: Lisa Neigut <@niftynei>
We're lucky that we can distinguish the severity of the failure based on the
failcode, so we bubble up the one with the maximum failcode, and let callers
inspect details if they need more information.
We can have quite detailed information about our local channels, so call
`listpeers` before the `getroute` call on the root payment, to seed that
information in the channel_hints.
We were just handwaving the partid generation, which broke some tests that
expected the first payment attempt to always have partid=0, so here we just
track the partids assigned in the payment tree, starting at 0.
The status of what started as a simple JSON-RPC call is now spread across an
entire tree of partial payments and payment attempts, so we collect the status
in a single struct in order to report success or failure back.
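Conceptually the collected status is a small aggregate filled by walking the
tree; the field names here are illustrative, not the exact ones used by the
plugin:

    #include <stddef.h>
    #include <stdint.h>

    /* Aggregate view over the whole payment tree, filled by walking the
     * tree once the root is told that everything below it has finished. */
    struct payment_tree_result {
        uint64_t sent_msat;       /* sum over successful parts, fees included */
        uint64_t delivered_msat;  /* sum over successful parts, fees excluded */
        size_t attempts;          /* leaf attempts made in total */
        size_t successes;         /* parts that came back with a preimage */
        const char *failreason;   /* representative failure, if all failed */
    };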
This is just for testing for now, TLV payload computation will come next. We
stage all the payloads in deserialized form so modifiers can modify them more
easily and serialize them only before actually calling `createonion`.
This is necessary so we can build the absolute locktimes in the next step. For
now just fetch the blockheight on each (sub-)payment, later we can reuse the
root blockheight if it ends up using too much traffic.
A payment is considered finished if it is in a final state (success or
failure) or all its sub-payments are finished. If that's the case we notify
`payment_finished` and bubble up through `payment_child_finished`, eventually
bubbling up to the root, which can then report success or failure back to the
RPC command that initiated the whole process.
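A sketch of that bubbling logic with a stripped-down payment struct; the
function names follow the ones mentioned above, but the bodies are illustrative:

    #include <stdbool.h>
    #include <stddef.h>

    enum payment_state { PAYMENT_PENDING, PAYMENT_SUCCESS, PAYMENT_FAILED };

    struct payment {
        enum payment_state state;
        struct payment *parent;
        struct payment **children;
        size_t num_children;
    };

    /* Finished means: reached a final state itself, or every sub-payment
     * is finished. A pending leaf is by definition not finished. */
    static bool payment_is_finished(const struct payment *p)
    {
        if (p->state == PAYMENT_SUCCESS || p->state == PAYMENT_FAILED)
            return true;
        if (p->num_children == 0)
            return false;
        for (size_t i = 0; i < p->num_children; i++)
            if (!payment_is_finished(p->children[i]))
                return false;
        return true;
    }

    /* Called when a child finishes: if that completes the parent too, keep
     * bubbling up; once the root is finished it reports success or failure
     * back to the JSON-RPC command. */
    static void payment_child_finished(struct payment *p)
    {
        if (!payment_is_finished(p))
            return;
        if (p->parent != NULL)
            payment_child_finished(p->parent);
        /* else: root finished, respond to the command here */
    }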
This is likely a bit of overkill for this type of functionality, but it is a
nice first use-case of how functionality can be compartmentalized into
modifiers. It makes swapping retry mechanisms in and out really simple.
This should make it easy for JSON-RPC functions and modifiers to get the
associated data for a given modifier name. Useful if a modifier needs to
access its parent's modifier data, or in other functions that need to access
modifier data, e.g., when passing destination pointers into the `param()`
call.
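A sketch of such a lookup, with minimal stand-in types rather than the plugin's
actual modifier definitions:

    #include <stddef.h>
    #include <string.h>

    /* Minimal stand-ins: a modifier descriptor and its per-payment data. */
    struct payment_modifier {
        const char *name;
        void *data;
    };

    /* Look up a modifier's data by name, e.g. so a JSON-RPC handler can
     * pass a pointer to it into param(), or so one modifier can peek at
     * another's state. */
    static void *payment_mod_get_data(struct payment_modifier *mods,
                                      size_t num_mods, const char *name)
    {
        for (size_t i = 0; i < num_mods; i++)
            if (strcmp(mods[i].name, name) == 0)
                return mods[i].data;
        return NULL;
    }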
This commit can be reverted/skipped once we have implemented all the logic and
have feature parity with the normal `pay`. Its main purpose is to expose the
unfinished functionality so we can test it, without completely breaking the
existing `pay` command.