TLVs have an implicit `len` field, so allow expressions containing
that (e.g. `len-1`), but assume it means "the remainder of the
message".
This means that in most places f.size() needs a fallback for the
implicit-length case.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
They're currently called varint, but there's a proposal to call them all
bigsize. Allow both for now.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We need to hand -s to both header and body generation, or neither:
wire/gen_peer_wire.c:53:13: error: static declaration of ‘towire_channel_update_timestamps’ follows non-static declaration
In file included from wire/gen_peer_wire.c:5:
./wire/gen_peer_wire.h:78:6: note: previous declaration of ‘towire_channel_update_timestamps’ was here
We also need it for printwire, otherwise we get static unused functions for subtypes:
devtools/gen_print_wire.c:155:13: error: ‘printwire_channel_update_checksums’ defined but not used [-Werror=unused-function]
static void printwire_channel_update_checksums(const char *fieldname, const u8 **cursor, size_t *plen)
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
devtools/gen_print_wire.c:133:13: error: ‘printwire_channel_update_timestamps’ defined but not used [-Werror=unused-function]
static void printwire_channel_update_timestamps(const char *fieldname, const u8 **cursor, size_t *plen)
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
```
tools/test/enum.c: In function ‘fromwire_test_enum’:
tools/test/enum.c:11:34: error: format ‘%ld’ expects argument of type ‘long int’, but argument 2 has type ‘size_t {aka unsigned int}’ [-Werror=format=]
printf("fromwire_test_enum at %ld\n", *max);
```
and:
```
devtools/print_wire.c: In function ‘printwire_tlvs’:
devtools/print_wire.c:201:22: error: format ‘%ld’ expects argument of type ‘long int’, but argument 2 has type ‘u64 {aka long long unsigned int}’ [-Werror=format=]
printf("**TYPE #%ld UNKNOWN for TLV %s**\n", type, fieldname);
^
```
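For the record, the portable way to print these (a sketch, not the exact patch) is `%zu` for a size_t and `PRIu64` from <inttypes.h> for a u64:
```
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

typedef uint64_t u64;

int main(void)
{
	size_t max = 12;
	u64 type = 99;

	/* %zu for size_t, PRIu64 for u64: no -Wformat warnings on
	 * 32-bit or 64-bit platforms. */
	printf("fromwire_test_enum at %zu\n", max);
	printf("**TYPE #%" PRIu64 " UNKNOWN for TLV %s**\n", type, "tlv_name");
	return 0;
}
```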
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
originally done so that if any field within a subtype needed a context,
that would be reflected at the top-most layer. in reality, we
allocate subtype fields off of the instance of the subtype, so
there's no need to check beneath the 'top' layer of fields/types
in a message to determine whether or not to pass in a context.
Add parsing know-how for enum fields. This is necessary for
internally defined wire generators. Enums are denoted by prefixing
the field with an `e:`.
Ex:
msgdata,msg_name,field_name,e:enum_type,
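Presumably the generated code ends up calling to/fromwire helpers named after the enum; a rough sketch of what such handwritten helpers look like (hypothetical enum, assuming the cursor/max convention from wire/wire.h):
```
#include <stddef.h>
#include <wire/wire.h>

/* Hypothetical enum; helpers are named after it. */
enum test_enum {
	TEST_ZERO,
	TEST_ONE,
};

void towire_test_enum(u8 **pptr, const enum test_enum e)
{
	/* Fits in a byte for this sketch. */
	towire_u8(pptr, e);
}

enum test_enum fromwire_test_enum(const u8 **cursor, size_t *max)
{
	return (enum test_enum)fromwire_u8(cursor, max);
}
```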
we rely, perhaps a bit hackily, on there only being one copy of
each type object floating about. using `deepcopy` on a Message
for message extensions destroys this paradigm, which breaks
things in the case where it's a later defined subtype that contains
variable-length members.
to fix this, we modify `__deepcopy__` on the Field class, such
that it preserves the reference to the original type_obj instance.
remove extra space from the lead-in for inline comments
so that a provided comment like this
# This is a comment
will appear like this
/* This is a comment */
Add a test for checking that the bolt-gens do the right thing
for a fairly exhaustive test case set (and that it compiles).
Note that this doesn't check that we've got the memory assignment
pieces worked out.
It's got a somewhat exotic reliance on update-mocks: in order to
depend on as little of the wire/ code as possible (we only include
wire/wire.h), we include an AUTOGENERATE comment in the test_cases
CSV file, and then run update-mocks as part of the build for that
file.
we'll need this for internal wire message formats. also disambiguates
from 'bolt message optional fields', which we rename to extensions here.
example of an optional field declaration (note the ? prefixing the
type):
msgdata,msg_name,field_name,?type,count
these are handled with a false boolean if they're not present,
or a true value and then the object if they are.
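roughly, that looks like this in the parsing code (a sketch with hypothetical names, not actual generator output):
```
#include <stdbool.h>
#include <stddef.h>
#include <wire/wire.h>

struct msg_example {
	bool has_count;		/* the '?' field's presence flag */
	u16 count;		/* only meaningful if has_count is true */
};

static void fromwire_msg_example(const u8 **cursor, size_t *max,
				 struct msg_example *out)
{
	/* a false boolean means "not present"; a true one is
	 * followed by the object itself. */
	out->has_count = fromwire_bool(cursor, max);
	if (out->has_count)
		out->count = fromwire_u16(cursor, max);
}
```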
if there are any comments that aren't "attached" to a message,
print them at the top of the generated file. we need this for
the fancy auto-gen'd dependencies in the tool-wiregen tests.
The new TLV spec uses BigSize, like Bitcoin's CompactInt but
*big-endian*. So change our name for clarity, and insist that
decoding be minimal as the spec requires.
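For reference, a standalone sketch of the decoding rule (illustrative only, not the actual wire code): values below 0xfd are a single byte, and 0xfd/0xfe/0xff prefix a big-endian u16/u32/u64; anything that could have used a shorter form is rejected as non-minimal.
```
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

static bool pull_be(const uint8_t **cursor, size_t *max, size_t n,
		    uint64_t *v)
{
	if (*max < n)
		return false;
	*v = 0;
	for (size_t i = 0; i < n; i++)
		*v = (*v << 8) | (*cursor)[i];
	*cursor += n;
	*max -= n;
	return true;
}

/* Returns false on truncation or a non-minimal encoding. */
static bool bigsize_decode(const uint8_t **cursor, size_t *max, uint64_t *val)
{
	uint64_t first;

	if (!pull_be(cursor, max, 1, &first))
		return false;
	if (first < 0xfd) {
		*val = first;
		return true;
	}
	if (first == 0xfd)
		return pull_be(cursor, max, 2, val) && *val >= 0xfd;
	if (first == 0xfe)
		return pull_be(cursor, max, 4, val) && *val > 0xffff;
	return pull_be(cursor, max, 8, val) && *val > 0xffffffff;
}
```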
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
the RFC's extract-format.py is switching to a new format;
this script can correctly parse it.
mostly moves logic over from generate-wire.py, using a
Python templating library called mako, which needs to be
installed prior to running this script.
you can add it to your system with
sudo apt-get install python3-mako
Encapsulating the peer state was a win for lightningd; not surprisingly,
it's even more of a win for the other daemons, especially as we want
to add a little gossip information.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We now have a test blockchain for MCP which has the correct channels,
so this is not needed.
Also fix a benchmark script bug where 'mv "$DIR"/log
"$DIR"/log.old.$$' would fail if the log didn't exist from a previous run.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Save some overhead, plus gets us ready for giving subdaemons direct
store access. This is the first time we *upgrade* the gossip_store,
rather than just discarding.
The downside is that we need to add an extra message after each
channel_announcement, containing the channel capacity.
After:
store_load_msec:28337-30288(28975+/-7.4e+02)
vsz_kb:582304-582316(582306+/-4.8)
store_rewrite_sec:11.240000-11.800000(11.55+/-0.21)
listnodes_sec:1.800000-1.880000(1.84+/-0.028)
listchannels_sec:22.690000-26.260000(23.878+/-1.3)
routing_sec:2.280000-9.570000(6.842+/-2.8)
peer_write_all_sec:48.160000-51.480000(49.608+/-1.1)
Differences:
-vsz_kb:582320
+vsz_kb:582316
-listnodes_sec:2.100000-2.170000(2.118+/-0.026)
+listnodes_sec:1.800000-1.880000(1.84+/-0.028)
-peer_write_all_sec:51.600000-52.550000(52.188+/-0.34)
+peer_write_all_sec:48.160000-51.480000(49.608+/-1.1)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
* remove libbase58, use base58 from libwally
This removes libbase58 and uses libwally instead.
It allocates and then frees some memory; we may want to
add a function in wally that doesn't, or override
wally_operations to use tal.
Signed-off-by: Lawrence Nahum <lawrence@greenaddress.it>
Some tests require dev support, but the rest can run. We simplify
the gossip_store output so it's the same in non-dev mode too.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
It can take a while if bitcoind has the regtest chain, and grossly
distorts our benchmarks!
Reported-by: Joe Netti <jnetti@blockstream.io>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
added a sanity check to make sure the scid from the csv is the same as the scid in the gossip.
Revised style, mem allocation, and error checks
[ Minor fixups, and updated benchmark script -- RR ]
With data.tar.gz: 456609740 Apr 2 12:33
store_load_msec:35300-42354(37118.2+/-2.7e+03)
vsz_kb:582832
store_rewrite_sec:12.700000-13.430000(12.988+/-0.27)
listnodes_sec:3.000000-3.160000(3.076+/-0.057)
listchannels_sec:30.790000-31.690000(31.03+/-0.34)
routing_sec:0.00
peer_write_all_sec:63.640000-67.860000(66.294+/-1.4)
Channels have an htlc_minimum_msat of 10000, which is why we didn't
find routes.
This causes a significant speed drop:
-routing_sec:26.940000-27.990000(27.616+/-0.39)
+routing_sec:60.70
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
now we print the subtypes out when you call printwire
note that we have to reverse the order the subtypes appear in, because
1) they're static, and
2) a few of them are nested
the original version of the subtype generator emitted '$'
to designate that a field was a subtype; now it's got a different
format:
funding_type,8,num_inputs,2
funding_type,10,input_info,num_inputs*input_info
this patch updates our generator to understand this new format
This is needed so that some csv's can expose their subtype parsing
functions in their header. This gets used in a later PR where
we start replacing manually created 'subtype' definitions with
generated ones.
This doesn't result in a speedup for our benchmark, since we use the
cli tool which does the formatting.
MCP results from 5 runs, min-max(mean +/- stddev):
store_load_msec:33422-36830(35196.2+/-1.2e+03)
vsz_kb:2637488
store_rewrite_sec:36.030000-37.630000(36.794+/-0.52)
listnodes_sec:0.720000-0.950000(0.86+/-0.077)
listchannels_sec:40.300000-41.080000(40.668+/-0.29)
routing_sec:30.440000-31.030000(30.69+/-0.2)
peer_write_all_sec:50.060000-52.800000(51.416+/-0.91)
MCP notable changes from 2 patches ago (>1 stddev):
-listchannels_sec:48.560000-55.680000(52.642+/-2.7)
+listchannels_sec:40.300000-41.080000(40.668+/-0.29)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Plugins don't do it right anyway, and we're about to remove it from
lightningd. Produces same format as json_pp.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We can save significant space by combining both sides: so much that we
can reduce the WIRE_LEN_LIMIT to something sane again.
MCP results from 5 runs, min-max(mean +/- stddev):
store_load_msec:34467-36764(35517.8+/-7.7e+02)
vsz_kb:2637488
store_rewrite_sec:35.310000-36.580000(35.816+/-0.44)
listnodes_sec:1.140000-2.780000(1.596+/-0.6)
listchannels_sec:55.390000-58.110000(56.998+/-0.99)
routing_sec:30.330000-30.920000(30.642+/-0.19)
peer_write_all_sec:50.640000-53.360000(51.822+/-0.91)
MCP notable changes from previous patch (>1 stddev):
-store_rewrite_sec:34.720000-35.130000(34.94+/-0.14)
+store_rewrite_sec:35.310000-36.580000(35.816+/-0.44)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Don't turn them to/from pubkeys implicitly. This means nodeids in the store
don't get converted, but bitcoin keys still do.
MCP results from 5 runs, min-max(mean +/- stddev):
store_load_msec:33934-35251(34531.4+/-5e+02)
vsz_kb:2637488
store_rewrite_sec:34.720000-35.130000(34.94+/-0.14)
listnodes_sec:1.020000-1.290000(1.146+/-0.086)
listchannels_sec:51.110000-58.240000(54.826+/-2.5)
routing_sec:30.000000-33.320000(30.726+/-1.3)
peer_write_all_sec:50.370000-52.970000(51.646+/-1.1)
MCP notable changes from previous patch (>1 stddev):
-store_load_msec:46184-47474(46673.4+/-4.5e+02)
+store_load_msec:33934-35251(34531.4+/-5e+02)
-vsz_kb:2638880
+vsz_kb:2637488
-store_rewrite_sec:46.750000-48.280000(47.512+/-0.51)
+store_rewrite_sec:34.720000-35.130000(34.94+/-0.14)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Outputs CSV. We add some stats for load times in developer mode, so we can
easily read them out.
peer_read_all_sec doesn't work, since we seem to reject about half the
updates for having bad signatures. It's also very slow...
routing fails, for unknown reasons, so that failure is ignored in routing_sec.
Results from 5 runs, min-max(mean +/- stddev):
store_load_msec,vsz_kb,store_rewrite_sec,listnodes_sec,listchannels_sec,routing_sec,peer_write_all_sec
39275-44779(40466.8+/-2.2e+03),2899248,41.010000-44.970000(41.972+/-1.5),2.280000-2.350000(2.304+/-0.025),49.770000-63.390000(59.178+/-5),33.310000-34.260000(33.62+/-0.35),42.100000-44.080000(43.082+/-0.67)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Header from folded patch 'fixup!_tools-bench-gossipd.sh__rough_benchmark_for_gossipd_and_the_million_channels_project-2.patch':
fixup! tools/bench-gossipd.sh: rough benchmark for gossipd and the million channels project
Suggested-by: @niftynei
Header from folded patch 'fixup!_tools-bench-gossipd.sh__rough_benchmark_for_gossipd_and_the_million_channels_project-1.patch':
fixup! tools/bench-gossipd.sh: rough benchmark for gossipd and the million channels project
MCP filename change.
Header from folded patch 'tools-bench-gossipd.sh__dont_print_csv_by_default.patch':
tools/bench-gossipd.sh: don't print CSV by default.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Header from folded patch 'fixup!_tools-bench-gossipd.sh__rough_benchmark_for_gossipd_and_the_million_channels_project.patch':
fixup! tools/bench-gossipd.sh: rough benchmark for gossipd and the million channels project
Make shellcheck happy.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
fixup printing methods in devtools/decodemsg such that TLVs can
now be printed as well. here's how you'd use it:
$ ./devtools/decodemsg --tlv opening_tlv 0120001E020202020202020202020202020202020202020202020202020202020202
> WIRE_OPTION_UPFRONT_SHUTDOWN_SCRIPT (size 32):
> shutdown_scriptpubkey=[020202020202020202020202020202020202020202020202020202020202]
TLVs use var_ints for message sizes, both internally and
at the top level (yes, you really do stack a var_int inside a var_int!!)
this updates our automagick generator code to understand 'var_ints'
passing back a null TLV was crashing here, because we tried to
dereference a null pointer. instead, we put it into a temporary
struct that we can check for NULL-ness before assigning to the
passed-in pointer.
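roughly the shape of the fix (hypothetical names; the real parsing function is whatever builds the tlv):
```
#include <stdbool.h>
#include <stddef.h>
#include <ccan/short_types/short_types.h>
#include <ccan/tal/tal.h>

struct example_tlv;					/* hypothetical */
struct example_tlv *parse_example_tlv(const tal_t *ctx,	/* hypothetical */
				      const u8 **cursor, size_t *max);

static bool fromwire_example_tlv(const tal_t *ctx, const u8 **cursor,
				 size_t *max, struct example_tlv **out)
{
	/* parse into a temporary first... */
	struct example_tlv *tmp = parse_example_tlv(ctx, cursor, max);

	/* ...so a NULL result is caught before anything dereferences
	 * it or clobbers the caller's pointer. */
	if (!tmp)
		return false;
	*out = tmp;
	return true;
}
```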
let's let the fromwire_*_tlv methods allocate the tlv objects and
return them. we also want to initialize all of their underlying
messages to NULL, and fail if we discover a duplicate message type.
if parsing fails, instead of returning a struct we return NULL.
Suggested-By: @rustyrussell
Since messages in TLVs are optional, the ideal way to deal with
them is to have a 'master struct' object for every defined TLV, where
the presence or lack of a field can be determined via the presence
(or lack thereof) of a struct for each of the optional message
types.
In order to do this appropriately, we need a struct for every
TLV message. The next commit will make use of these.
Note that right now TLV message structs aren't namespaced to the
TLV they belong to, so there's the potential for collision. This
should be fixed when/where it occurs (should fail to compile).
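Sketched out with hypothetical names, the idea is simply:
```
#include <ccan/short_types/short_types.h>

/* One (hypothetical) struct per optional message type... */
struct tlv_msg_timestamps {
	u32 timestamp_1;
	u32 timestamp_2;
};

struct tlv_msg_checksums {
	u32 checksum_1;
	u32 checksum_2;
};

/* ...and a 'master struct' for the TLV itself: a NULL pointer means
 * that message wasn't present in the stream. */
struct example_tlv {
	struct tlv_msg_timestamps *timestamps;
	struct tlv_msg_checksums *checksums;
};
```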
Add tlv-messages to the general messages set so that their parsing
messages get printed out.
FIXME: figure out how to account for partial message length processing?
Version 1.1 of the lightning-rfc spec introduces TLVs for optional
data fields. This starts the process of updating our auto-gen'd
wireformat parsers to be able to understand TLV fields.
The general way to declare a new TLV field is to add a '+' to the
end of the fieldname. All field type declarations for that TLV set
should be added to a file in the same directory by the name
`gen_<field_name>_csv`.
Note that the FIXME included in this commit is difficult to fix, as
we currently pass in the csv files via stdin (so there's no easy
way to ascertain the originating directory of the file).
Otherwise we can't really return a variable-sized message with more than 65k
results. This was causing an integer overflow in `listchannels` (see #2504 for
details).
Signed-off-by: Christian Decker <decker.christian@gmail.com>
We were tarring up the build dir, not the destination dir! We did this
for 0.6.3 and nobody noticed :(
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
@cdecker reports that this gives warnings on exit; and we can't suppress
them by setting ASAN_OPTIONS within the binary itself, unfortunately.
So for 0.7, disable it by default. I'll work through the errors for 0.7.1.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
And fix a trivial typo in MAKING-RELEASES.md, and date retrieval in
build-release.sh and repro-build.sh (real git tags start with v!)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
I tried building the zipfile on a fresh clone inside KVM, and got:
1. Different times inside the zipfile, since zip seems to save *local* times.
2. A different zipfile order, since zip seems to use filesystem order.
Fix both of these. I don't know if LANG=C is necessary for git
ls-files, but it can't hurt.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
For the moment it's only Ubuntu 18.04.1.
Complete documentation is in the final commit; you can test this using
the prior commit and comparing with my intermediate files and results
at:
https://ozlabs.org/~rusty/clightning-repro
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Basically we tell it that every field ending in '_msat' is a struct
amount_msat, and 'satoshis' is an amount_sat. The exceptions are
channel_update's fee_base_msat which is a u32, and
final_incorrect_htlc_amount's incoming_htlc_amt which is also a
'struct amount_msat'.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
They're generally used pass-by-copy (unusual for C structs, but
convenient since they're basically u64), and all potentially
problematic operations return WARN_UNUSED_RESULT bool to make you
handle the over/underflow cases.
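A usage sketch (it assumes an adder following this convention, writing the result through a pointer and returning false on overflow):
```
#include <stdbool.h>
#include <common/amount.h>

static bool add_fee(struct amount_msat base, struct amount_msat proportional,
		    struct amount_msat *total)
{
	/* Silently dropping this result would be a compile warning. */
	if (!amount_msat_add(total, base, proportional))
		return false;	/* overflow */
	return true;
}
```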
The new #include in json.h means bolt11.c sees the amount.h definition
of MSAT_PER_BTC, so delete its local version.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Christian and I both unwittingly used it in the form:
*tal_arr_expand(&x) = tal(x, ...)
Since '=' isn't a sequence point, the compiler can (and does!) cache
the value of x, handing it to tal *after* tal_arr_expand() moves it
due to tal_resize().
The new version is somewhat less convenient to use, but doesn't have
this problem, since the assignment is always evaluated after the
resize.
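To illustrate (a sketch, not the literal patch; it assumes the replacement form takes the new element as an argument and lives in common/utils.h):
```
#include <ccan/tal/tal.h>
#include <common/utils.h>

struct thing { int v; };

static void append_thing(struct thing ***arrp)
{
	/* Old form: *tal_arr_expand(arrp) = tal(*arrp, struct thing);
	 * The compiler may read *arrp for the tal() call before the
	 * expansion resizes (and moves) the array, parenting the new
	 * object off freed memory. */

	/* New form: build the element first, then grow the array, so
	 * there is no window where a stale array pointer is used. */
	struct thing *t = tal(*arrp, struct thing);
	t->v = 0;
	tal_arr_expand(arrp, t);
}
```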
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This causes a compiler warning if we don't do something with the
result (hopefully return immediately!).
We use was_pending() to ignore the result in the case where we
complete a command in a callback (thus really do want to ignore
the result).
This actually fixes one bug: we didn't return after command_fail
in json_getroute with a bad seed value.
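The intended pattern, sketched with stand-in declarations (the real types and the command_fail() signature live in lightningd's jsonrpc code; treat the details here as assumptions):
```
struct command;					/* stand-in */
struct command_result;				/* stand-in */
struct command_result *command_fail(struct command *cmd, int code,
				    const char *fmt, ...);

/* A handler returns whatever command_fail() hands back, so forgetting
 * the 'return' (the json_getroute bug above) now trips a warning. */
static struct command_result *handle_bad_seed(struct command *cmd)
{
	return command_fail(cmd, -32602 /* invalid params */, "bad seed");
}
```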
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Turns out that I should have tested these with a new dependency
instead of just submitting. `sed` was missing the s command.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
This was introduced in ed268d6c, which broke the mocks
generation. This just filters out the invalid sentinel value.
Signed-off-by: Christian Decker <decker.christian@gmail.com>
There were a few reports that upgrading Ubuntu recently caused issues
because we assert that the sqlite3 library version matches the one we
were built with. 'make' doesn't fix this, because it doesn't know the
external libraries have changed.
Fix this harder, with a helper which updates a file every binary depends
on, which gets relinked every time so we detect link changes.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
If we have an array of varlen structures (which require a ctx arg), we
should make that arg the array itself (which was tal_arr()), not the
root context.
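Concretely (a sketch with a hypothetical element type and fromwire helper):
```
#include <stddef.h>
#include <ccan/short_types/short_types.h>
#include <ccan/tal/tal.h>

struct varlen_thing {			/* hypothetical */
	u8 *data;			/* varlen member */
};

void fromwire_varlen_thing(const tal_t *ctx, const u8 **cursor,	/* hypothetical */
			   size_t *max, struct varlen_thing *out);

static struct varlen_thing *fromwire_things(const tal_t *ctx,
					    const u8 **cursor, size_t *max,
					    size_t count)
{
	struct varlen_thing *arr = tal_arr(ctx, struct varlen_thing, count);

	for (size_t i = 0; i < count; i++)
		/* The ctx arg is 'arr' itself, not the caller's ctx. */
		fromwire_varlen_thing(arr, cursor, max, &arr[i]);
	return arr;
}
```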
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We do this a lot, and had boutique helpers in various places. So add
a more generic one; for convenience it returns a pointer to the new
end element.
I prefer the name tal_arr_expand to tal_arr_append, since it's up to
the caller to populate the new array entry.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
tal_count() is used where there's a type, even if it's char or u8, and
tal_bytelen() is going to replace tal_len() for clarity: it's only needed
where a pointer is void.
We shim tal_bytelen() for now.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This requires a tweak to generate-wire.py too, since it always called the
top-level routine 'print_message'.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
gossip_getnodes_entry was used by gossipd for reporting nodes, and for
reporting peers. But the local_features field is only available for peers,
and most other fields are only available from node_announcement.
Note that the connectd change actually means we get less information
about peers: gossipd used to do the node lookup for peers and include the
node_announcement information if it had it.
Since generate-wire.py can't create arrays-of-arrays, we add a 'struct
peer_features' to encapsulate the two feature arrays for each peer, and
for convenience we add it to lightningd/gossip_msg.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Well, it's generated by shachain, so technically it is a sha256, but
that's an internal detail. It's a secret.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We already work around this by using an array with a 0/1 length convention,
but better to be explicit.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This is based on @NicolasDorier's excellent proposal for a Dockerfile, sans the
writing of a config file.
Co-authored-by: Nicolas Dorier <nicolas.dorier@gmail.com>
Co-authored-by: Christian Decker <decker.christian@gmail.com>
Signed-off-by: Christian Decker <decker.christian@gmail.com>
If we change an upstream URL, all submodules break. Users would need
to run 'git submodule sync'. Note that the libbacktrace fix was merged
upstream so this is no longer necessary, but it's good for future changes.
Also, stress-testing reveals that git submodule fails locking
'.git/config' when run in parallel. It also segfaults and has other
problems.
This is my final attempt to fix submodules; I've wasted far too many
days on obscure problems it creates: I've already lost one copy of my
repo to apparently unfixable submodule problems. The next "fix" will
be to simply import the source code so it works properly.
Reported-by: @jsarenik
Fixes: #1543
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Fixes: #1221
We were using `\x0` to match NUL chars in the input (on the
assumption that NUL chars are "impossible" for decent LFS-compliant
systems).
However `\x0` is a GNUism.
Use `\n` and the newline character instead, which is supported by
(most) POSIX seds.