Changelog-Changed: runes: named parameter restrictions no longer need their underscores removed (i.e. `pnameamount_msat` now works as expected, not just `pnameamountmsat`).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
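For illustration, a minimal pyln-client sketch of the behaviour this changelog entry describes (the socket path, invoice and amounts are made up; `createrune` and `checkrune` are real commands, but treat the exact values here as assumptions):
```
from pyln.client import LightningRpc

# Hypothetical socket path; adjust for your node.
rpc = LightningRpc("/tmp/l1-regtest/regtest/lightning-rpc")

# Restrict the rune to `pay` calls whose named `amount_msat` parameter is
# below 10,000msat. Previously the underscore had to be stripped
# ("pnameamountmsat"); now the natural spelling works.
rune = rpc.call("createrune", {
    "restrictions": [["method=pay"], ["pnameamount_msat<10000"]],
})["rune"]

# Should be permitted: amount_msat is under the limit.
rpc.call("checkrune", {
    "rune": rune,
    "method": "pay",
    "params": {"bolt11": "lnbcrt...", "amount_msat": 5000},
})
```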
Rather than speaking 'rune', we should speak English in error messages.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Reported-by: @ShahanaFarooqui
This reveals an inadequacy in our rune error reporting:
we complain that a missing parameter is "not an integer field" instead
of "not present":
```
# Rune requires amount_msat < 10,000!
> with pytest.raises(RpcError, match='Not permitted: pnameamountmsat is not present') as exc_info:
E AssertionError: Regex pattern did not match.
E Regex: 'Not permitted: pnameamountmsat is not present'
E Input: "RPC call failed: method: checkrune, payload: {'nodeid': '0266e4598d1d3c415f572a8488830b60f7e744ed9235eb0b1ba93283b315c03518', 'rune': 'b3hXuEM7Pqzk-C7HUw83xzvHOV7fmuGaWjdo-wHdfg89MCZtZXRob2Q9cGF5JnBuYW1lYW1vdW50bXNhdDwxMDAwMA==', 'method': 'pay', 'params': {'bolt11': 'lnbcrt123n1pj7flqdsp5ndqgxpwk2hf50gzm0d4ssgjnd90cwkrc8udh7lfr5x583jms7yqspp5kn5stlnkv70celgw4vmdva9m7a57drd2403vnx4whq2p5nawkh3sdq5v3jhxcmjd9c8g6t0dccsxqyjw5qcqp99qxpqysgqhrgp7wp640gyujxk0mz4l6e6dxmqp7fz8pnnpnnqjfxg2scvuzfpwlxrj332u72p5g709eqr8rwaueruce84h0qmh6kc5c2zxgg9q4qps4cu8k'}}, error: {'code': 1502, 'message': 'Not permitted: pnameamountmsat is not an integer field'}"
```
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We do this for DEBUG_SUBD already, but I wanted to debug the main lightningd.
(We rename `--debugger` to the more accurate `--dev-debug-self`.)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
a.k.a. "Pay with a friend!".
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Added: JSON-RPC: `pay` has a new parameter `partial_msat` to only pay part of an invoice (someone else presumably will pay the rest at the same time!)
Suggested-by: Calle
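A hedged sketch of how two nodes might split a single invoice with the new parameter (socket paths, the invoice string and the 60/40 split are invented for illustration):
```
from pyln.client import LightningRpc

alice = LightningRpc("/tmp/alice/regtest/lightning-rpc")
bob = LightningRpc("/tmp/bob/regtest/lightning-rpc")

# One 100,000msat invoice, paid by two friends at the same time.
invoice = "lnbcrt1u1p..."

# Alice covers 60,000msat of it...
alice.call("pay", {"bolt11": invoice, "partial_msat": 60000})
# ...and Bob pays the remaining 40,000msat. (In practice both calls need
# to be in flight concurrently; they are shown sequentially for brevity.)
bob.call("pay", {"bolt11": invoice, "partial_msat": 40000})
```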
This PR generates the doc/lightning-*.md files for RPC commands. It takes the information from the consolidated doc/schemas/lightning-*.json files and uses it to generate the markdown files for the RPC commands.
Changelog-Changed: Documentation: great documentation rewrite, all reference pages now generated from the fully-tested JSON schemas and include examples.
- Updated doc/Makefile for generating schemas/lightning-sql.json
- Read the lightning-sql template from schemas/lightning-sql-template.json
- Generate SQL table data and merge it with the JSON object read from the above template
- Save final merged object in schemas/lightning-sql.json
- Updated doc/Makefile to auto-generate lightning-sql.7.md and lightning-sql.7
- Deleted schemas/lightning-sql.json and added it to .gitignore
- Added lightning-sql specific titles in the fromschema.py script
- Fixed test_sql by changing the sequence for listpeerchannels `reestablished` field
- Ignoring lightning-sql.json for msggen's schema.json because it is auto-generated by the sql plugin and lightning-sql-template.json.
- Update `make doc-all` script
- `fromschema.py` script
- Updated to generate the whole doc/lightning-*.md via the doc/schemas/lightning-*.json file
- Updated to print request params
- Added `oneOfMany` condition for request params
- Added `pairedWith` condition for request params
- Added `dependentUpon` condition for request params
- Updated for pre and post_return_value for response
- Hiding `hidden` fields
- `descriptions` are arrays now
- Unified gaps between titles
- Added default key-value pair
- Scripts: msggen, sql-schema, listconfig and pyln-testing script updates
This does not add created markdowns for cleaner review. We will add the markdown files in a separate commit.
Merge information from `*.request.json` & `*.schema.json`. Also consolidate remaining details from the `*.md` files and create a single file in the schemas folder.
This commit removes parameter descriptions from the RPC markdown, but we will fix that in subsequent commits by reading these descriptions directly from the JSON.
- Removing parameter description text
- Adding/removing newlines for cleaner formatting
- Adding ERRORS title wherever needed
- Updating titles for consistency
- Adding resource links
Added descriptions for RPC command parameters.
This also performs the following fixes:
1. delforward parameters are compulsory (required).
2. disableinvoicerequest request added `added` field.
3. invoice request order fixed (label then description, not vice-versa!).
4. listpeers log levels are a proper enum.
5. description parameter documented for sendonion requests.
6. deprecated amount_msat removed from sendpay request.
7. sendpay request partid type fixed to u64 (was u16!).
8. sendpay request localinvreqid type tightened to hash (was hex).
9. sendpay request payment_metadata and description fields documented.
10. sendpsbt request reserve type fixed to u32 (was boolean).
11. utxopsbt request satoshi type fixed to msat_or_all (was msat).
12. withdraw request parameter satoshi is compulsory (required).
13. fundchannel_start request amount is sat, not msat_or_all.
14. openchannel_init request amount is sat, not msat.
15. openchannel_init close_to is a string, not hex.
16. invoice labels can be strings OR numbers.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We actually use this in multiple places for requests. The key
difference between this and msat is what we do when presented with a
raw number! I prefer insisting on explicit suffixes, for this reason,
but at least this will document the field correctly!
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
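To make the raw-number ambiguity concrete (the interpretations below reflect my understanding of how bare numbers are treated and are illustrative only):
```
# The same bare number means different things depending on the field's type:
raw = 10000            # 10,000msat for an msat field, 10,000sat for a sat field
as_msat = "10000msat"  # always 10,000msat
as_sat = "10000sat"    # always 10,000,000msat
as_all = "all"         # msat_or_all fields additionally accept "all"
```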
In general, results should always be an object, so we can add new fields
later (e.g. warnings!). But we violated this for "stop", and when "recover"
used the same infrastructure, it started doing the same thing.
I'm assuming nobody cares, so we don't need to do a deprecation cycle.
Changelog-Changed: JSON-RPC: `stop` and `recover` now return a JSON object (not a raw string!) like every other command does.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
The pattern of making tal-allocated copies of wally data to pass around
was made redundant (after these calls were added) by the use of
tal_wally_start/tal_wally_end to parent wally allocations. We can thus
just pass the data directly and avoid the allocations.
Removes redundant allocations when checking tx filters and computing fees.
Signed-off-by: Jon Griffiths <jon_p_griffiths@yahoo.com>
Avoids a hash160 for every key we derive to test. Callers that need the
key re-derive it without the skip flag, so there are no side effects
from this optimization.
Changelog-Changed: core: Processing blocks should now be faster
Signed-off-by: Jon Griffiths <jon_p_griffiths@yahoo.com>
Standardizes the is_xxx script functions to all take a script length, and
changes their first-level callers to pass it. This has several knock-on
benefits:
- We remove the repeated tal_count/tal_bytelen calls on the script, in
particular the redundant calls that result when we must check for multiple
types of script - which is almost all cases.
- We remove the dependency on the memory being tal-allocated (it is, in
all cases, but there's no reason we need to require that).
- We remove all cases where we create a copy of the script just to id it.
- We remove all allocations for non-interesting scripts while iterating block
txs in process_getfilteredblock_step1().
- We remove all allocations *including for potentially interesting scripts* in
topo_add_utxos().
Signed-off-by: Jon Griffiths <jon_p_griffiths@yahoo.com>
The is_xxx script functions already perform these checks for us without
allocating. They currently expect their memory to be tal-allocated,
which is why a copy is made. However:
- The memory already *is* tal-allocated since wally is configured to do so.
- The tal-dependency is artificial and is removed in a future commit in
this PR.
This reverts commit c329756723.
Signed-off-by: Jon Griffiths <jon_p_griffiths@yahoo.com>
In some weird situations it may happen that some channel along the route
has htlcmax=htlcmin, so that the supremum of htlcmin and the
infimum of htlcmax over the route are the same number. In that case there
is only one allowed amount that can go through that route.
Without this patch renepay would not handle this corner case correctly.
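A toy Python illustration of the corner case (channel values invented):
```
# The amount sent over a route must respect every channel's
# htlc_minimum_msat/htlc_maximum_msat, so the feasible interval is
# [max of the minimums, min of the maximums].
route = [
    {"htlc_min": 1_000, "htlc_max": 50_000},
    {"htlc_min": 10_000, "htlc_max": 10_000},  # htlcmax == htlcmin
    {"htlc_min": 2_000, "htlc_max": 40_000},
]

lower = max(c["htlc_min"] for c in route)  # supremum of htlcmin: 10,000
upper = min(c["htlc_max"] for c in route)  # infimum of htlcmax: 10,000

# lower == upper: exactly one amount (10,000msat) can flow through this
# route, which is the case this patch teaches renepay to handle.
assert lower == upper == 10_000
```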
Set up a simple line of channel pairs, where one should be preferred
over the other for our various reasons. Make sure this works, both
using a low-level call to Dijkstra and at a higher level as the pay
plugin does.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
I noticed that run-route-infloop chose some worse-looking paths after
routing was fixed, e.g. the second node:
Before:
Destination node, success, probability, hops, fees, cltv, scid...
02b3aa1e4ed31be83cca4bd367b2c01e39502cb25e282a9b4520ad376a1ba0a01a,1,0.991856,2,1004,40,2572260x39x0/1,2131897x45x0/0
After:
Destination node, success, probability, hops, fees, cltv, scid...
02b3aa1e4ed31be83cca4bd367b2c01e39502cb25e282a9b4520ad376a1ba0a01a,1,0.954540,3,1046,46,2570715x21x0/1,2346882x26x14/1,2131897x45x0/0
This is because although the final costs don't reflect it, routing was taking
into account local channels, and 2572260x39x0/1 has a base fee of 2970.
There's an easy fix: when the pay plugin creates localmods for our
gossip graph, add all local channels with delay and fees equal to 0.
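A conceptual Python sketch of that fix (the real implementation lives in the C pay plugin's gossmap localmods; the names here are hypothetical):
```
# When overlaying our own channels onto the gossip graph for routing,
# zero out their fees and delay: the sender never pays itself a fee or
# adds CLTV delay for the first hop, so they should not bias route choice.
def add_local_channels(localmods, our_channels):
    for chan in our_channels:
        localmods.add_channel(          # hypothetical helper
            scid=chan.scid,
            base_fee_msat=0,
            proportional_fee_ppm=0,
            cltv_delay=0,
            htlc_min=chan.htlc_min,
            htlc_max=chan.htlc_max,
        )
```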
We do the same thing in our unit test. This improves things across
the board:
Linear success probability (when found): min-max(mean +/- stddev)
Before: 0.487040-0.999543(0.952548+/-0.075)
After: 0.486985-0.999750(0.975978+/-0.053)
Hops:
Before: 1-5(2.98374+/-0.77)
After: 1-5(2.09593+/-0.63)
Fees:
Before: 0-50848(922.457+/-2.7e+03)
After: 0-50041(861.621+/-2.7e+03)
Delay (blocks):
Before: 0-196(65.8081+/-60)
After: 0-190(60.3285+/-60)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Changed: Plugins: `pay` route algorithm doesn't bias against our own "expensive" channels any more.
The "path_score" callback was supposed to evaluate the *entire path*,
but that was counter-intuitive and opened the door to a cost function
bug which caused this path cost to be less than the closer path.
In particular, the capacity bias code didn't understand this at all.
1. Rename the function to `channel_score` and remove the "distance"
parameter (always "1" since you're supposed to be evaluating a
single hop).
2. Rename "cost" to the more specific "fee": "score" is our
actual cost function result (we avoid the word "cost" as it
may get confused with satoshi amounts).
3. For capacity biassing, we do want to know the amount, but
explicitly hand that as a separate parameter "total".
4. Fix a minor bug where the total handed to the scoring function
previously included the channel fee (this is wrong: the fee is paid
before sending into the channel).
5. Remove the now-unused total_delay member from the dijkstra
struct.
Here are the results of our test now (routing 4194303 msat, which
didn't crash the old code, so we could compare). In both cases
we could find routes to 615 nodes:
Linear success probability (when found): min-max(mean +/- stddev)
Before: 0.484764-0.999750(0.9781+/-0.049)
After: 0.487040-0.999543(0.952548+/-0.075)
Hops:
Before: 1-5(2.13821+/-0.66)
After: 1-5(2.98374+/-0.77)
Fees:
Before: 0-50041(2173.75+/-5.3e+03)
After: 0-50848(922.457+/-2.7e+03)
Delay (blocks):
Before: 0-294(83.1642+/-68)
After: 0-196(65.8081+/-60)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Fixes: https://github.com/ElementsProject/lightning/issues/7092
Changelog-Fixed: Plugins: `pay` would occasionally crash on routing.
Changelog-Fixed: Plugins: `pay` route algorithm fixed and refined to balance fees and capacity far better.
It gave 0. A lot.
Firstly, rmsat was often very small (because delays are often small),
much smaller than the actual fee.
We really just want to offset the bias and multiply it.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We attempted to introduce a "capacity bias" so we would penalize
channels where we were using a large amount of their total capacity.
Unfortunately, this was both naive and applied wrongly:
-log((capmsat + 1 - amtmsat) / (capmsat + 1));
As an integer, this gives 0 up to about 65% capacity. We then applied this
by multiplying:
(cmsat * rmsat * bias) / (cmsat + rmsat + bias + 1);
giving a total score of 0 in these cases! If we're using the
arithmetic mean, we really need to add 1 to bias. We might as well
use a double the whole way through, for a slightly more fine-grained
approach.
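A small Python check of the truncation claim (numbers invented): as an integer the bias stays 0 until roughly 65% of the capacity is in use, and once the bias is 0 the multiplicative score collapses to 0 as well.
```
import math

capmsat = 1_000_000
for pct in (10, 30, 50, 63, 65, 90):
    amtmsat = capmsat * pct // 100
    bias = -math.log((capmsat + 1 - amtmsat) / (capmsat + 1))
    # int() truncation mirrors the old integer arithmetic: everything
    # below ~65% capacity collapses to 0...
    print(pct, round(bias, 3), int(bias))

# ...and a zero bias zeroes the whole (multiplicative) score:
cmsat, rmsat, bias = 1000, 50, 0
score = (cmsat * rmsat * bias) / (cmsat + rmsat + bias + 1)
assert score == 0
```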
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>