This is an internal type: it has no API guarantees (indeed, I'm about
to change it, which is how I discovered scb was using it).
Fortunately, for every case we care about, it is actually a wireaddr
(in theory the peer can connect over a local socket, but that is mostly
for testing and is a very strange setup, so we simply don't do scb for
those).
In this case, the wire encoding is a single byte followed by the
wireaddr, so open-code that in scb_wire.csv for compatibility.
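For illustration, a minimal sketch of that encoding (the helper name and
the literal discriminator byte are assumptions, not the actual generated
code):
```c
#include <common/wireaddr.h>
#include <wire/wire.h>

/* Illustrative sketch only: the scb-compatible form is a single
 * discriminator byte followed by a plain wireaddr. */
static void towire_scb_peer_addr(u8 **pptr, const struct wireaddr *addr)
{
	towire_u8(pptr, 1);		/* the single type byte (value assumed) */
	towire_wireaddr(pptr, addr);	/* the address itself */
}
```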
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
I need to update grpcio-tools, but it needs protobuf v4. Christian
added this restriction in a99509db36 to
"Update protobuf dependency to silence dependabot", but perhaps it's
time to actually update?
```
$ poetry update
Updating dependencies
Resolving dependencies... (1.0s)
Because pyln-testing (23.05rc2) @ file:///home/rusty/devel/cvs/lightning/contrib/pyln-testing depends on protobuf (>=3.20.3,<4)
and grpcio-tools (1.54.0) depends on protobuf (>=4.21.6,<5.0dev), pyln-testing (23.05rc2) @ file:///home/rusty/devel/cvs/lightning/contrib/pyln-testing is incompatible with grpcio-tools (1.54.0).
So, because cln-meta-project depends on both grpcio-tools (1.54.0) and pyln-testing (23.05rc2) @ file:///home/rusty/devel/cvs/lightning/contrib/pyln-testing, version solving failed.
```
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We removed the (experimental-only!) annotation output in 611795beee,
but we still loaded annotations from the db. It turns out that we were
putting bogus annotations into the db, and accessing out of range when
loading them.
Consider the following db entry in transaction_annotations:
```
CREATE TABLE transaction_annotations ( txid BLOB, idx INTEGER, location INTEGER, type INTEGER, channel INTEGER REFERENCES channels(id), UNIQUE(txid, idx));
...
INSERT INTO transaction_annotations VALUES(X'19706f9af2875508a06c7db1754ef7ecb3da745ead005992e626441e4e83465f',18,1,129,53699);
```
Here is the corresponding entry in the transactions table:
```
INSERT INTO transactions VALUES(X'19706f9af2875508a06c7db1754ef7ecb3da745ead005992e626441e4e83465f',710327,966,X'02000000000101f2add69112a1d557317826120e1f4ea3bc1cbe4674d720325695b26ecfe8355d120000000000000000013634000000000000160014dca21f104359bbb81e88ed7da985549f2cd0cbc30347304402201cdc854b76c4c7523e3ca09f38a81539d3b2f7fbd9a0de6ae10b7ceaa65ed9d402205a1770058cd1ef081c77c2fe957c07a334cb3a11bc0cc502834a29c59424fe010120589da1f809d955c7af150bf53123c27ffc0a741489b5291f6be811189863ec838576a9142f188d0d973c4ad1865a619d3748340b30746e088763ac672103b7bbcd592197ba6501e7176aabd3f908d94b126ae82ab1e7a4c58f5a789782e57c820120876475527c21029bcf62114eb36758fcb1aead7e67b6f707d32f34e67816894d5211ac9f2d6ce752ae67a9144483a115219ba65c63a3844be8445f739703bea988ac686800000000',NULL,NULL);
```
The annotation refers to output 18 of the tx, but it only has one output!
However, decoding the tx shows that it spent output 18 of a previous tx, so
that's probably where the `18` came from.
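To make the out-of-range access concrete, here's a self-contained sketch
of the bound that was being violated (the struct and helper names are
made up, not the wallet's actual types):
```c
#include <stdbool.h>
#include <stddef.h>

/* Illustrative stand-in for a decoded transaction's output count. */
struct tx_outputs {
	size_t num_outputs;
};

/* An annotation's idx must reference an output that actually exists:
 * the entry above stores idx=18 for a 1-output tx, so using it to
 * index the outputs array reads out of range. */
static bool annotation_idx_valid(const struct tx_outputs *tx, size_t idx)
{
	return idx < tx->num_outputs;
}
```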
Remove this logic: we can remove the remaining (clearly broken!)
annotation-adding code in another cleanup commit.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
CI hit an issue where fundchannel would get a tx_abort. This happens
when the dualopend we use to query the feerates has not yet exited
(it waits for the tx_abort reply) and we mistakenly reuse it.
With multi-channel support, this is wrong: just run another one and it
all Just Works.
This means we need to rework our dual_open_control.c logic, since it
would previously create an unsaved channel and then fail to clean it
up if something went wrong.
Most people will never try to negotiate opening multiple channels to
the same peer at the same time (vs. having an established channel and
opening a new one), so this case is a bit weird.
```
        rates = l1.rpc.dev_queryrates(l2.info['id'], amount, amount)
        # l1 leases a channel from l2
        l1.rpc.fundchannel(l2.info['id'], amount, request_amt=amount,
                           feerate='{}perkw'.format(feerate),
>                          compact_lease=rates['compact_lease'])
tests/test_opening.py:1611:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
contrib/pyln-client/pyln/client/lightning.py:833: in fundchannel
return self.call("fundchannel", payload)
contrib/pyln-testing/pyln/testing/utils.py:721: in call
res = LightningRpc.call(self, method, payload, cmdprefix, filter)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <pyln.testing.utils.PrettyPrintingLightningRpc object at 0x7f6cbcd97950>
method = 'fundchannel'
payload = {'amount': 500000, 'announce': True, 'compact_lease': '029a00640064000000644c4b40', 'feerate': '2000perkw', ...}
cmdprefix = None, filter = None
    def call(self, method, payload=None, cmdprefix=None, filter=None):
        """Generic call API: you can set cmdprefix here, or set self.cmdprefix
        ...
        if not isinstance(resp, dict):
            raise ValueError("Malformed response, response is not a dictionary %s." % resp)
        elif "error" in resp:
>           raise RpcError(method, payload, resp['error'])
E pyln.client.lightning.RpcError: RPC call failed: method: fundchannel, payload: {'id': '022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59', 'amount': 500000, 'feerate': '2000perkw', 'announce': True, 'request_amt': 500000, 'compact_lease': '029a00640064000000644c4b40'}, error: {'code': -1, 'message': 'Abort requested', 'data': {'id': '022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59', 'method': 'openchannel_init'}}
```
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Failure under CI:
```
> bitcoind.generate_block(1000)
tests/test_closing.py:853:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
contrib/pyln-testing/pyln/testing/utils.py:496: in generate_block
return self.rpc.generatetoaddress(numblocks, to_addr)
contrib/pyln-testing/pyln/testing/utils.py:374: in f
res = proxy._call(name, *args)
../../../.cache/pypoetry/virtualenvs/cln-meta-project-AqJ9wMix-py3.7/lib/python3.7/site-packages/bitcoin/rpc.py:246: in _call
response = self._get_response()
../../../.cache/pypoetry/virtualenvs/cln-meta-project-AqJ9wMix-py3.7/lib/python3.7/site-packages/bitcoin/rpc.py:276: in _get_response
http_response = self.__conn.getresponse()
/opt/hostedtoolcache/Python/3.7.16/x64/lib/python3.7/http/client.py:1373: in getresponse
response.begin()
/opt/hostedtoolcache/Python/3.7.16/x64/lib/python3.7/http/client.py:319: in begin
version, status, reason = self._read_status()
/opt/hostedtoolcache/Python/3.7.16/x64/lib/python3.7/http/client.py:280: in _read_status
line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <socket.SocketIO object at 0x7fa21aa5d710>
b = <memory at 0x7fa21b771390>
    def readinto(self, b):
        """Read up to len(b) bytes into the writable buffer *b* and return
        the number of bytes read. If the socket is non-blocking and no bytes
        are available, None is returned.

        If *b* is non-empty, a 0 return value indicates that the connection
        was shutdown at the other end.
        """
        self._checkClosed()
        self._checkReadable()
        if self._timeout_occurred:
            raise OSError("cannot read from timed out object")
        while True:
            try:
>               return self._sock.recv_into(b)
E socket.timeout: timed out
/opt/hostedtoolcache/Python/3.7.16/x64/lib/python3.7/socket.py:589: timeout
```
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
07413c20b9 et al. reworked how onchaind does broadcasts, so tests
needed to be updated to use the new helpers rather than searching the
logs themselves.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We were passing a max_output_len that was too small, so every call to
bech32_encode was failing. Now we set max_output_len to the full size of
bech32_str.
s/max_input_len/max_output_len
This maximum length applies to the output parameter, not the data
parameter. Thus it is more intuitive to name it max_output_len.
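A hedged sketch of the corrected usage, assuming the in-tree
bech32_encode() variant that takes the length limit as its fifth
argument (the buffer size and wrapper name below are illustrative):
```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <common/bech32.h>

#define BECH32_BUF_LEN 300	/* illustrative destination size */

/* The limit bounds the *output* string, so pass the full size of the
 * destination buffer rather than the length of the input data. */
static bool encode_to_buf(char bech32_str[BECH32_BUF_LEN], const char *hrp,
			  const uint8_t *data, size_t data_len)
{
	return bech32_encode(bech32_str, hrp, data, data_len,
			     BECH32_BUF_LEN, BECH32_ENCODING_BECH32) == 1;
}
```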
Changelog-EXPERIMENTAL: Build: all experimental features are now runtime-enabled; no more ./configure --enable-experimental-features
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This currently means the anchors tests are disabled; the PR which
implements zero-fee-htlc anchors will re-enable them.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
And no longer insist on opt_quiesce.
Changelog-EXPERIMENTAL: Config: `--experimental-upgrade-protocol` enables simple channel upgrades.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Since this was merged, `make extract-peer-csv` has been broken!
But the field names changed:
1. `tlv_update_add_tlvs` -> `tlv_update_add_htlc_tlvs`
2. `blinding` -> `blinding_point`.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
The modern style is to assert that all messages have tlvs, but many
are currently empty. In particular,
c4c5a8e5fb30b1b99fa5bb0aba7d0b6b4c831ee5 added "update_add_htlc_tlvs"
without adding an explicit field of that type to "update_add_htlc".
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
There are no feature-dependent fields left in the spec. (I've also
made a PR against the spec to remove the tooling for them there.)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We need to check that the key is valid for two reasons:
1) towire_ext_key() aborts if the key is invalid.
2) fromwire_ext_key() doesn't check the parsed key for validity.
Since bip32_key_get_fingerprint() fails if the key is invalid, we can
call it first to guarantee the key is valid before serializing.
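A minimal sketch of that check (the helper name is illustrative; it
relies on libwally's bip32_key_get_fingerprint() returning WALLY_OK
only for a valid key):
```c
#include <stdbool.h>
#include <wally_bip32.h>

/* Probe key validity before serializing, so we never hand an invalid
 * key to towire_ext_key() (which would abort). */
static bool ext_key_is_valid(struct ext_key *key)
{
	unsigned char fingerprint[BIP32_KEY_FINGERPRINT_LEN];

	return bip32_key_get_fingerprint(key, fingerprint,
					 sizeof(fingerprint)) == WALLY_OK;
}
```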
By building dependencies with fuzzer instrumentation, we get more
coverage signal during fuzzing. This is useful when the fuzzer must
figure out how to take certain branches inside a dependency.
In our case, the fuzz-bip32 target was failing to create a data buffer
that successfully passed fromwire_ext_key() parsing because the fuzzer
couldn't see what was happening inside libwally-core.