We only ever visit each node once, so we can just use an array. This
avoids calling tal() all the time, which is *especially* slow when we're
memory tracking.
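As a rough sketch (the struct and field names are hypothetical, not the
actual askrene code), the shape of the change is one flat array of
per-node scratch state, indexed by node index and allocated once per
run:

    #include <ccan/short_types/short_types.h>
    #include <ccan/tal/tal.h>
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    struct node_scratch {
        u64 cost;       /* best cost found so far */
        u32 prev_idx;   /* predecessor, for path reconstruction */
        bool visited;   /* have we finalized this node yet? */
    };

    /* One tal_arr() per run: lookup is a plain index, and there are no
     * per-node allocations for memory tracking to follow. */
    static struct node_scratch *new_node_scratch(const tal_t *ctx,
                                                 size_t num_nodes)
    {
        struct node_scratch *s = tal_arr(ctx, struct node_scratch,
                                         num_nodes);

        for (size_t i = 0; i < num_nodes; i++) {
            s[i].cost = UINT64_MAX;
            s[i].prev_idx = UINT32_MAX;
            s[i].visited = false;
        }
        return s;
    }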
I benchmarked these changes against an old canned gossmap (in
particular, one node was unreachable, which was the slow case):
Before: 17.27 seconds
After: 5.80 seconds
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We don't actually hit the htlc_max cases, since the flow code already
constrains us to that limit.
And handling htlc_min is better done in the caller, where diagnostics
are better (basically, we should eliminate anything below the minimum,
and if that means no route, give a clear error message).
And the refinement step can handle any extra millisats from rounding.
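As a hedged sketch of that caller-side check (struct and field names
are illustrative, not the real flow API): compare each flow against the
largest htlc_minimum on its path and produce a readable error, instead
of patching amounts inside the solver:

    #include <inttypes.h>
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    struct toy_flow {
        uint64_t deliver_msat;    /* amount this flow delivers */
        uint64_t path_htlc_min;   /* largest htlc_minimum_msat on its path */
    };

    /* Returns false, with a human-readable reason, if the flow is too small. */
    static bool flow_meets_htlc_min(const struct toy_flow *f,
                                    char *why, size_t whylen)
    {
        if (f->deliver_msat >= f->path_htlc_min)
            return true;
        snprintf(why, whylen,
                 "flow of %"PRIu64" msat is below htlc_minimum %"PRIu64" msat",
                 f->deliver_msat, f->path_htlc_min);
        return false;
    }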
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This is the root cause of the problem worked around in 50949b7b9c
"askrene: hack in some padding so we don't overflow capacities."
When adding fees to flows, we didn't recheck the boundary conditions: in
renepay this is done by the routebuilder.
Fortunately, we can use our "reservations" infrastructure to temporarily
consume capacity as we process flows, so we correctly handle the cases
where the flows are not independent.
My assumption is that the resulting errors are small, so we divide
them among the remaining flows, from highest to lowest probability.
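A rough sketch of the reservation step, with a toy reservation table
standing in for the real reservation infrastructure (all names here are
illustrative): as each flow gets its fees added, reserve its amount on
every channel it crosses, so later flows sharing a channel see the
reduced remaining capacity:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    #define NUM_CHANNELS 4

    struct fee_flow {
        size_t chan_idx[2];     /* channels along this (2-hop) path */
        size_t num_chans;
        uint64_t amount_msat;   /* amount including the fees just added */
    };

    /* Reserve a flow's amount on every channel it crosses; fail (and
     * roll back) if any channel would exceed its capacity, so the
     * caller knows this set of flows still needs adjusting. */
    static bool reserve_fee_flow(uint64_t reserved[NUM_CHANNELS],
                                 const uint64_t capacity[NUM_CHANNELS],
                                 const struct fee_flow *f)
    {
        for (size_t i = 0; i < f->num_chans; i++) {
            size_t c = f->chan_idx[i];

            if (reserved[c] + f->amount_msat > capacity[c]) {
                while (i-- > 0)
                    reserved[f->chan_idx[i]] -= f->amount_msat;
                return false;
            }
            reserved[c] += f->amount_msat;
        }
        return true;
    }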
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Simply calculate it when we need it, which means we don't have to keep it
up-to-date as we tweak the flow.
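For illustration, assuming the recomputed quantity is the amount a flow
needs at its source (the names and the simplified base-plus-proportional
fee formula are mine, not the real flow code): it can simply be rederived
from the delivered amount whenever it is wanted:

    #include <stddef.h>
    #include <stdint.h>

    struct toy_hop {
        uint64_t base_fee_msat;
        uint32_t prop_fee_ppm;  /* proportional fee, parts per million */
    };

    /* Work backwards from the delivered amount, adding each hop's fee:
     * cheap to redo after every tweak, and it can never go stale. */
    static uint64_t flow_source_amount(const struct toy_hop *hops, size_t n,
                                       uint64_t delivers_msat)
    {
        uint64_t amt = delivers_msat;

        for (size_t i = n; i-- > 0;)
            amt += hops[i].base_fee_msat
                + amt * hops[i].prop_fee_ppm / 1000000;
        return amt;
    }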
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Of course, we still will, since spendable is for a single HTLC, but
this also shows why we should treat *minimum* as the incorrect answer
if the two cross.
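A minimal sketch of that rule (illustrative names): when the known
lower bound on a channel's liquidity exceeds the known upper bound,
clamp the minimum down rather than raising the maximum:

    #include <stdint.h>

    struct liquidity_bounds {
        uint64_t known_min_msat;  /* at least this much can get through */
        uint64_t known_max_msat;  /* at most this much can get through */
    };

    /* If the bounds cross, the *minimum* is the one we stop believing. */
    static void fixup_crossed_bounds(struct liquidity_bounds *b)
    {
        if (b->known_min_msat > b->known_max_msat)
            b->known_min_msat = b->known_max_msat;
    }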
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Fixes: https://github.com/ElementsProject/lightning/issues/7563
It seems we didn't handle it correctly: we need to cap the first
segment as well as the others, as far as I can tell.
Also, it can be less than the maximum capacity.
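Speculatively, the shape of the fix is something like this (purely
illustrative, not the actual linearization code): when splitting a
channel's usable amount into segments, cap every segment, including the
first, and remember the usable amount may be below the channel's
maximum capacity:

    #include <stddef.h>
    #include <stdint.h>

    #define PARTS 4

    /* Split `usable` (possibly below the channel's max capacity) into
     * PARTS segment sizes, never letting any segment, including the
     * first, exceed `cap`. */
    static void split_into_segments(uint64_t usable, uint64_t cap,
                                    uint64_t sizes[PARTS])
    {
        for (size_t i = 0; i < PARTS; i++) {
            uint64_t part = usable / (PARTS - i);

            sizes[i] = part > cap ? cap : part;
            usable -= sizes[i];
        }
    }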
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
In general, we should be using tmpctx unless there's a specific reason not to.
It's clear, and simplifies the code somewhat.
If tmpctx is not cleaned often enough, we can look at a per-MCF context, but this
seems like premature optimization.
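For illustration (the struct name is made up; I'm assuming tmpctx is
the per-loop temporary context declared in common/utils.h):

    #include <ccan/tal/tal.h>
    #include <common/utils.h>   /* tmpctx (assumed location) */

    struct scratch_state {
        int whatever;
    };

    static struct scratch_state *new_scratch_state(void)
    {
        /* Freed automatically when tmpctx is next cleaned, so no
         * explicit tal_free() is needed for short-lived objects. */
        return tal(tmpctx, struct scratch_state);
    }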
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We let the caller choose mu and iterate if necessary: that way it can
also check its own limits for fees, etc.  We rationalize mu to 0-100
inclusive for human consumption.
This means we don't loop internally, and in fact there's only one
failure mode: we cannot find enough capacity.
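A sketch of how a caller might drive this (the function names and the
toy stub are hypothetical, not the actual API): sweep mu over the 0-100
range, checking the returned routes against its own fee limit, with
"not enough capacity" as the only hard failure:

    #include <stdbool.h>
    #include <stdint.h>

    struct toy_routes {
        uint64_t total_fee_msat;
    };

    /* Stand-in for the real single-shot call: returns false only when
     * there is not enough capacity.  The fee model is a toy, just so
     * the loop below terminates. */
    static bool getroutes_once(uint32_t mu, uint64_t amount_msat,
                               struct toy_routes *out)
    {
        out->total_fee_msat = amount_msat / 1000 / (mu + 1);
        return true;
    }

    static bool routes_within_fee(uint64_t amount_msat,
                                  uint64_t fee_limit_msat,
                                  struct toy_routes *out)
    {
        /* mu in 0-100 trades off fees against probability; the caller
         * decides how hard to push it and when to give up. */
        for (uint32_t mu = 0; mu <= 100; mu += 10) {
            if (!getroutes_once(mu, amount_msat, out))
                return false;   /* not enough capacity */
            if (out->total_fee_msat <= fee_limit_msat)
                return true;
        }
        return false;   /* no mu met the caller's fee limit */
    }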
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>