For now, hooks are treated identically to rpcmethods, with the
exception of not being returned in the `getmanifest` call. Later on we
can add typed handlers as well.
Having a list of very targeted suppressions allows us to still run the
majority of tests with valgrind checking, and not fail when Rust does
some trickery. This is for example the case with the `num_cpus` crate,
which uses `std::sync::Once` to lazily call out to the cgroups
subsystem, sometimes with a null path.
Suggested-by: Rusty Russell <@rustyrussell>
`valgrind` seems to flag some memory accesses in the Rust standard
library that are actually fine, which we can consider false positives
for our purposes:
```
Valgrind error file: valgrind-errors.69147
==69147== Syscall param statx(file_name) points to unaddressable byte(s)
==69147== at 0x4B049FE: statx (statx.c:29)
==69147== by 0x2E2DA0: std::sys::unix::fs::try_statx (weak.rs:139)
==69147== by 0x2D7BD5: <std::fs::File as std::io::Read>::read_to_string (fs.rs:784)
==69147== by 0x2632CE: num_cpus::linux::Cgroup::param (linux.rs:214)
==69147== by 0x263179: num_cpus::linux::Cgroup::quota_us (linux.rs:203)
==69147== by 0x263002: num_cpus::linux::Cgroup::cpu_quota (linux.rs:188)
==69147== by 0x262C01: num_cpus::linux::load_cgroups (linux.rs:149)
==69147== by 0x26289D: num_cpus::linux::init_cgroups (linux.rs:129)
==69147== by 0x26BD88: core::ops::function::FnOnce::call_once (function.rs:227)
==69147== by 0x26B749: std::sync::once::Once::call_once::{{closure}} (once.rs:262)
==69147== by 0x139717: std::sync::once::Once::call_inner (once.rs:419)
==69147== by 0x26B6D5: std::sync::once::Once::call_once (once.rs:262)
==69147== Address 0x0 is not stack'd, malloc'd or (recently) free'd
==69147==
==69147== Syscall param statx(buf) points to unaddressable byte(s)
==69147== at 0x4B049FE: statx (statx.c:29)
==69147== by 0x2E2DA0: std::sys::unix::fs::try_statx (weak.rs:139)
==69147== by 0x2D7BD5: <std::fs::File as std::io::Read>::read_to_string (fs.rs:784)
==69147== by 0x2632CE: num_cpus::linux::Cgroup::param (linux.rs:214)
==69147== by 0x263179: num_cpus::linux::Cgroup::quota_us (linux.rs:203)
==69147== by 0x263002: num_cpus::linux::Cgroup::cpu_quota (linux.rs:188)
==69147== by 0x262C01: num_cpus::linux::load_cgroups (linux.rs:149)
==69147== by 0x26289D: num_cpus::linux::init_cgroups (linux.rs:129)
==69147== by 0x26BD88: core::ops::function::FnOnce::call_once (function.rs:227)
==69147== by 0x26B749: std::sync::once::Once::call_once::{{closure}} (once.rs:262)
==69147== by 0x139717: std::sync::once::Once::call_inner (once.rs:419)
==69147== by 0x26B6D5: std::sync::once::Once::call_once (once.rs:262)
==69147== Address 0x0 is not stack'd, malloc'd or (recently) free'd
==69147==
```
We wrap emitted messages into a JSON-RPC notification envelope and
write them to stdout. We use an indirection over an mpsc channel in
order to avoid deadlocks if we emit logs while holding the writer lock
on stdout.
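The real implementation is in Rust, using an mpsc channel inside the
plugin crate; the following is only a rough sketch of the same pattern
in C (hypothetical names, simplified envelope, no JSON escaping), to
show why the indirection avoids the deadlock:
```
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

struct log_entry {
	char *message;
	struct log_entry *next;
};

static struct log_entry *queue_head, *queue_tail;
static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t queue_ready = PTHREAD_COND_INITIALIZER;

/* Callable from anywhere: only touches the queue, never stdout. */
static void emit_log(const char *message)
{
	struct log_entry *e = malloc(sizeof(*e));
	e->message = strdup(message);
	e->next = NULL;

	pthread_mutex_lock(&queue_lock);
	if (queue_tail)
		queue_tail->next = e;
	else
		queue_head = e;
	queue_tail = e;
	pthread_cond_signal(&queue_ready);
	pthread_mutex_unlock(&queue_lock);
}

/* The single writer: the only code that ever writes to stdout. */
static void *log_writer(void *unused)
{
	(void)unused;
	for (;;) {
		struct log_entry *e;

		pthread_mutex_lock(&queue_lock);
		while (!queue_head)
			pthread_cond_wait(&queue_ready, &queue_lock);
		e = queue_head;
		queue_head = e->next;
		if (!queue_head)
			queue_tail = NULL;
		pthread_mutex_unlock(&queue_lock);

		/* Wrap in a JSON-RPC notification envelope (no "id"). */
		printf("{\"jsonrpc\":\"2.0\",\"method\":\"log\","
		       "\"params\":{\"level\":\"debug\",\"message\":\"%s\"}}\n\n",
		       e->message);
		fflush(stdout);
		free(e->message);
		free(e);
	}
	return NULL;
}

int main(void)
{
	pthread_t writer;
	pthread_create(&writer, NULL, log_writer, NULL);
	emit_log("plugin initialized");
	emit_log("another message");
	sleep(1);	/* let the writer drain the queue */
	return 0;
}
```
Since `emit_log` never writes to stdout itself, it is safe to call from
code that already holds the stdout writer.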
Next patch re-enables runtime leak checking for dualopend, so fix those
leak reports.
In some cases, this means allocating off tmpctx or state->channel
(which gets reset on failure), not state. The problem with tmpctx is
that there are event loops in the *middle* of some functions, which
free it. So for RBF functions we use an rbf_ctx temporary (with leak
detection suppressed, like it is for tmpctx), and then we have to be
careful to free it on all exits!
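A minimal sketch of the pattern with ccan/tal (illustrative names, not
the real dualopend code; the real context is also excluded from memleak
detection, like tmpctx):
```
#include <ccan/tal/tal.h>
#include <stdbool.h>
#include <stddef.h>

/* Stand-in for the real negotiation step. */
static bool negotiate_with_peer(char *scratch)
{
	return scratch != NULL;
}

static bool do_rbf_attempt(void)
{
	/* Per-attempt allocations hang off rbf_ctx, not the long-lived
	 * state: tmpctx is unsafe here because event loops in the middle
	 * of the function can free it. */
	char *rbf_ctx = tal(NULL, char);
	char *scratch = tal_arr(rbf_ctx, char, 128);

	if (!negotiate_with_peer(scratch)) {
		/* Failure exit: free the temporary context. */
		tal_free(rbf_ctx);
		return false;
	}

	/* Success exit: free it here too, after copying anything we
	 * need onto a long-lived parent (e.g. with tal_steal). */
	tal_free(rbf_ctx);
	return true;
}

int main(void)
{
	return do_rbf_attempt() ? 0 : 1;
}
```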
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Instead of doing this weird chaining, just call them all at once and
use a reference counter.
To make it simpler, we return the subd_req so we can hang a destructor
off it which decrements the counter once the request is complete.
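A hedged sketch of that idea using a ccan/tal destructor (hypothetical
structures, not the actual subd_req code):
```
#include <ccan/tal/tal.h>
#include <stdio.h>

struct pending {
	size_t num_outstanding;
};

struct request {
	struct pending *pending;
};

static void destroy_request(struct request *req)
{
	/* Runs when the request is tal_free'd, i.e. once it completes. */
	req->pending->num_outstanding--;
	if (req->pending->num_outstanding == 0)
		printf("all requests done\n");
}

static struct request *new_request(const tal_t *ctx, struct pending *pending)
{
	struct request *req = tal(ctx, struct request);
	req->pending = pending;
	pending->num_outstanding++;
	tal_add_destructor(req, destroy_request);
	return req;
}

int main(void)
{
	struct pending pending = { 0 };
	struct request *r1 = new_request(NULL, &pending);
	struct request *r2 = new_request(NULL, &pending);

	/* Fire them all at once; completion is simulated by freeing. */
	tal_free(r1);
	tal_free(r2);	/* prints "all requests done" */
	return 0;
}
```
Freeing the request when its reply arrives is then the only thing
needed for the count to drop.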
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This limit applies both to node_announcements (which we now send one
per day) and to channel_updates; I've had reports of LND nodes going
down daily for database compaction, so they end up ratelimited.
Changelog-Protocol: We now allow two channel_updates or node_announcements per day, up from one.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We refresh our own node_announcement every 24 hours, even if nothing
has changed. Note that this is different from simply re-xmitting the
old one: the new one has a different timestamp, so others will re-xmit
it too.
This is in response to reports that node_announcement propagation
through the network is poor:
https://github.com/ElementsProject/lightning/issues/5037
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Changelog-Protocol: We now refresh our node_announcement every 24 hours, due to propagation issues.
A node_announcement has to follow at least one channel_announcement.
When channels close, if this is no longer the case, we remove the old
node_announcement and put it at the end of the gossip_store.
But we lose the "send even if they don't want it" bit when it's
our own node_announcement, so keep it.
This only happens if you haven't changed your node configuration at all
since you opened your first channel, but it's still worth fixing.
We expose force_node_announce_rexmit() for later use.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
For example, if you do:
```
./lightningd/lightningd --network=regtest --experimental-websocket-port=19846
```
Then you're trying to reuse the normal port as the websocket port, but this
only fails at *listen* time, when we activate connectd. Catch this too.
Fixes incorrect fatal() message, too.
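A sketch of the kind of early check this implies (hypothetical types,
not the actual lightningd option code):
```
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>
#include <stdlib.h>

struct bind_addr {
	const char *host;
	unsigned int port;
};

static void check_websocket_port(const struct bind_addr *addrs,
				 size_t num_addrs,
				 unsigned int websocket_port)
{
	for (size_t i = 0; i < num_addrs; i++) {
		if (addrs[i].port == websocket_port) {
			/* Fail early, with a useful message, instead of
			 * waiting for connectd's listen() to fail. */
			fprintf(stderr,
				"--experimental-websocket-port %u is already"
				" used as a listening port\n",
				websocket_port);
			exit(1);
		}
	}
}

int main(void)
{
	struct bind_addr addrs[] = { { "127.0.0.1", 19846 } };
	check_websocket_port(addrs, 1, 19846);	/* exits with an error */
	return 0;
}
```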
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
By accessing `addr` after the loop, we can end up with one which
failed, in complex scenarios.
Also gives us a chance to warn if they specify a websocket but don't
actually end up advertising it (you *must* advertise a normal addr as
well).
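As an illustrative sketch of the pitfall and the fix (hypothetical
names), remember the successfully handled address inside the loop
instead of reading the loop variable afterwards:
```
#include <stdbool.h>
#include <stddef.h>

struct wireaddr { int dummy; };

/* Stand-in for the real bind/announce logic. */
static bool try_announce(const struct wireaddr *addr)
{
	return addr != NULL;
}

static const struct wireaddr *announce_all(const struct wireaddr *addrs,
					   size_t num)
{
	const struct wireaddr *announced = NULL;

	for (size_t i = 0; i < num; i++) {
		if (try_announce(&addrs[i]))
			announced = &addrs[i];	/* remember a *successful* one */
	}

	/* Use `announced` here, never the loop variable: the last entry
	 * may have been one which failed. */
	return announced;
}

int main(void)
{
	struct wireaddr addrs[2] = { { 1 }, { 2 } };
	return announce_all(addrs, 2) ? 0 : 1;
}
```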
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
We always added to both arrays, so we might as well just keep one.
We make `mayfail` an explicit flag, rather than relying on the presence
of `errstr`, which is never NULL now.
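A tiny sketch of the resulting shape (hypothetical struct, not the real
one):
```
#include <stdbool.h>

/* An explicit flag instead of inferring "may fail" from errstr being
 * NULL, which no longer works now that errstr is always set. */
struct listen_entry {
	const char *errstr;	/* always non-NULL now */
	bool mayfail;		/* explicit, rather than "errstr present" */
};

static bool listen_failure_is_fatal(const struct listen_entry *e)
{
	/* Tolerated failures are just logged; others are fatal. */
	return !e->mayfail;
}

int main(void)
{
	struct listen_entry e = { "bind failed", true };
	return listen_failure_is_fatal(&e) ? 1 : 0;
}
```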
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This lets us give a better error message if listen fails, and also
moves the callback closer to where it's needed.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This was causing `journal_entries` to show up in the accountant plugin,
since we don't emit events for unconfirmed activity until it has
actually been confirmed onchain.
Useful for accounting for missed/historical channel opens, to figure out
what the actual sat contribution from each peer is at the UTXO level.
Changelog-Added: JSONRPC: `listpeers` now includes a `pushed_msat` value. For leased channels, this is the total lease_fee.