This functionality already exists in the Python framework; this feature
enables it for Rust plugins as well.
Changelog-Added: cln-plugin: Implement send_custom_notification to allow sending custom notifications to other plugins.
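A hedged sketch of how this might look from a handler; the exact `send_custom_notification` signature (a notification name plus a `serde_json::Value` payload) is an assumption here:

```rust
use anyhow::Error;
use cln_plugin::Plugin;
use serde_json::json;

// Hypothetical handler that forwards an event to other plugins as a custom
// notification. The exact argument types of `send_custom_notification` are
// assumed; the notification topic may also need to be declared on the Builder.
async fn on_myevent(p: Plugin<()>, _v: serde_json::Value) -> Result<serde_json::Value, Error> {
    p.send_custom_notification("mynotification".to_string(), json!({"payload": 42}))
        .await?;
    Ok(json!({"result": "notified"}))
}
```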
Under some circumstances we may not want to log to `lightningd`
directly, but rather configure the logging ourselves. This is useful
for example if we want to use `tracing` and `tracing-subscriber` to
add custom handling, or add opentelemetry span tracing.
Changelog-Changed: cln-plugin: Suppress internal logging handler via `with_logging(false)`
See: https://github.com/bitcoindevkit/bdk/issues/1047#issuecomment-1660645669
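A minimal sketch of what this enables, assuming a `tracing-subscriber` setup in place of the built-in handler (the stderr writer is just an example sink):

```rust
use cln_plugin::Builder;

#[tokio::main]
async fn main() -> Result<(), anyhow::Error> {
    // Install our own subscriber (this is where opentelemetry layers could
    // be added) instead of the built-in log-to-lightningd handler.
    tracing_subscriber::fmt().with_writer(std::io::stderr).init();

    if let Some(plugin) = Builder::new(tokio::io::stdin(), tokio::io::stdout())
        .with_logging(false) // suppress the internal logging handler
        .start(())
        .await?
    {
        plugin.join().await
    } else {
        Ok(())
    }
}
```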
In general, futures produced by most libraries in the Rust ecosystem, and the
bounds placed on users of popular runtimes like tokio and its `spawn` method,
lack `Sync` requirements. Because of this, anyone who creates a callback using
a library that returns a non-`Sync` future (which describes most libraries)
will get cryptic error messages (async error messages still leave a lot to be
desired). Removing these `Sync` requirements makes the library more useful.
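For illustration, a handler like the following holds a `Cell` across an await point, so its future is `Send` but not `Sync`; with the `Sync` bounds removed, such callbacks compile:

```rust
use anyhow::Error;
use cln_plugin::Plugin;
use serde_json::json;

// Illustration only: the `Cell` kept alive across the await point makes this
// future `Send` but not `Sync`, which the callback bounds now accept.
async fn non_sync_handler(_p: Plugin<()>, _v: serde_json::Value) -> Result<serde_json::Value, Error> {
    let counter = std::cell::Cell::new(0u64);
    tokio::task::yield_now().await; // `counter` is live across this await
    counter.set(counter.get() + 1);
    Ok(json!({ "count": counter.get() }))
}
```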
When plugins receive a "shutdown" notification, they can call this
method, which will shut down `cln_plugin`.
Then they can await `plugin.join()` and do any remaining cleanup
there.
This helps avoid a pain point where plugin authors need to handle
two separate plugin shutdown mechanisms: https://github.com/ElementsProject/lightning/issues/6040
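A rough sketch of that flow, assuming the new method is `Plugin::shutdown()`:

```rust
use anyhow::Error;
use cln_plugin::{Builder, Plugin};

// On the "shutdown" notification, ask cln_plugin to stop (method name assumed).
async fn on_shutdown(p: Plugin<()>, _v: serde_json::Value) -> Result<(), Error> {
    p.shutdown()
}

#[tokio::main]
async fn main() -> Result<(), anyhow::Error> {
    if let Some(plugin) = Builder::new(tokio::io::stdin(), tokio::io::stdout())
        .subscribe("shutdown", on_shutdown)
        .start(())
        .await?
    {
        plugin.join().await?; // returns once shutdown() has been called
        // ... remaining cleanup goes here ...
    }
    Ok(())
}
```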
There are several cases where you want to access the configuration,
and given the nature of the Rust API there is no way to access the
`configuration` field at any point after the configuration logic has run.
Suggested-by: Sergi Delgado Segura <@sr-gi>
Signed-off-by: Vincenzo Palazzo <vincenzopalazzodev@gmail.com>
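A sketch of the kind of access this enables, assuming the stored configuration is exposed through a `configuration()` accessor with fields mirroring the `init` message (`lightning_dir`, `rpc_file`):

```rust
use anyhow::Error;
use cln_plugin::Plugin;
use serde_json::json;

// Hypothetical RPC handler reading the persisted configuration; the accessor
// name and field names are assumptions.
async fn where_am_i(p: Plugin<()>, _v: serde_json::Value) -> Result<serde_json::Value, Error> {
    let config = p.configuration();
    Ok(json!({
        "lightning-dir": config.lightning_dir,
        "rpc-file": config.rpc_file,
    }))
}
```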
We filter based on the environment variable `CLN_PLUGIN_LOG`,
defaulting to `info` since that is not as noisy as `debug` or `trace`;
at least libraries will not spam us too heavily.
Changelog-Added: cln-plugin: The log level of cln-plugin plugins can be configured via the `CLN_PLUGIN_LOG` environment variable.
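For example, with `CLN_PLUGIN_LOG=debug` in the environment a `debug!` statement gets through, while the default `info` filter drops it (sketch using the `log` facade):

```rust
fn report(label: &str) {
    // Emitted only when CLN_PLUGIN_LOG allows the `debug` level
    // (e.g. CLN_PLUGIN_LOG=debug); dropped under the default `info`.
    log::debug!("handling invoice_payment for label={}", label);
    // Emitted under the default `info` filter.
    log::info!("invoice handled");
}
```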
We had a bit of a chicken-and-egg problem, where we instantiated the
`state` to be managed by the `Plugin` during the very first step when
creating the `Builder`, but then the state might depend on the
configuration we only get later. This would force developers to add
placeholders in the form of `Option` into the state, when really
they'd never be `None` after configuring.
This defers the binding until after we get the configuration and
cleans up the semantics:
- `Builder`: declare options, hooks, etc
- `ConfiguredPlugin`: we have exchanged the handshake with
`lightningd`, now we can construct the `state` accordingly
- `Plugin`: Running instance of the plugin
Changelog-Changed: cln-plugin: Moved the state binding to the plugin until after the configuration step
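A sketch of the resulting lifecycle; the `configuration()` accessor and its `lightning_dir` field are assumptions used for illustration:

```rust
use cln_plugin::Builder;

#[derive(Clone)]
struct State {
    lightning_dir: String, // example of state that depends on the configuration
}

#[tokio::main]
async fn main() -> Result<(), anyhow::Error> {
    // Builder: declare options, hooks, rpcmethods, ...
    let builder = Builder::new(tokio::io::stdin(), tokio::io::stdout());

    // ConfiguredPlugin: the handshake with lightningd is done, so the state
    // can be constructed from the configuration (accessor name assumed).
    if let Some(configured) = builder.configure().await? {
        let state = State {
            lightning_dir: configured.configuration().lightning_dir.clone(),
        };
        // Plugin: running instance, now bound to the state.
        let plugin = configured.start(state).await?;
        plugin.join().await?;
    }
    Ok(())
}
```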
This is usually a signal that lightningd is shutting down, so notify
any instance that is waiting on `plugin.join()`.
Changelog-Fixed: cln-plugin: Fixed an issue where plugins would hang indefinitely despite `lightningd` closing the connection
Represents the "configuration" part of the "init" message during
plugin initialization.
Changelog-Added: cln-plugin: Persist the configuration from the `init` message.
We now have ternary outcomes for `Builder.configure()` and
`Builder.start()`:
- `Ok(Some(p))`: we were configured correctly and can continue
  with our work normally
- `Ok(None)`: `lightningd` was invoked with `--help`, so we weren't
  configured (which is not an error, since `lightningd` implicitly told
  us to shut down) and user code should clean up and exit as well
- `Err(e)`: something went wrong; user code may report an error and exit.
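A minimal sketch of handling the three outcomes of `start()`:

```rust
use cln_plugin::Builder;

#[tokio::main]
async fn main() -> Result<(), anyhow::Error> {
    match Builder::new(tokio::io::stdin(), tokio::io::stdout()).start(()).await {
        // Configured correctly: run until we are told to stop.
        Ok(Some(plugin)) => plugin.join().await,
        // lightningd was invoked with `--help`: clean up and exit, no error.
        Ok(None) => Ok(()),
        // Something went wrong: report the error by returning it.
        Err(e) => Err(e),
    }
}
```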
Mostly comments and docs: some places are actually paths, which
I have avoided changing. We may migrate them slowly, particularly
when they're user-visible.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
For now hooks are treated identically to rpcmethods, with the
exception of not being returned in the `getmanifest` call. Later on we
can add typed handlers as well.
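A sketch of registering a hook alongside the usual builder flow (the hook name and payload shown here are examples):

```rust
use anyhow::Error;
use cln_plugin::{Builder, Plugin};
use serde_json::json;

// Hook handlers look just like rpcmethod handlers: raw JSON in, JSON result
// out. Here we simply tell lightningd to continue with the connection.
async fn on_peer_connected(_p: Plugin<()>, _v: serde_json::Value) -> Result<serde_json::Value, Error> {
    Ok(json!({ "result": "continue" }))
}

#[tokio::main]
async fn main() -> Result<(), anyhow::Error> {
    if let Some(plugin) = Builder::new(tokio::io::stdin(), tokio::io::stdout())
        .hook("peer_connected", on_peer_connected)
        .start(())
        .await?
    {
        plugin.join().await?;
    }
    Ok(())
}
```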
We wrap emitted messages into a JSON-RPC notification envelope and
write them to stdout. We use an indirection over an mpsc channel in
order to avoid deadlocks if we emit logs while holding the writer lock
on stdout.
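A rough sketch of the technique (not the actual implementation): log messages travel over an mpsc channel, and a single task wraps them into a notification envelope and writes to stdout, so the code emitting the log never takes the stdout writer lock itself. Envelope field names are illustrative.

```rust
use serde_json::json;
use tokio::io::AsyncWriteExt;
use tokio::sync::mpsc;

// Sketch: a dedicated task drains the channel, wraps each message in a
// JSON-RPC notification envelope and writes it to stdout.
async fn log_forwarder(mut rx: mpsc::UnboundedReceiver<String>) {
    let mut stdout = tokio::io::stdout();
    while let Some(message) = rx.recv().await {
        let envelope = json!({
            "jsonrpc": "2.0",
            "method": "log",
            "params": { "level": "info", "message": message }
        });
        let mut line = envelope.to_string();
        line.push('\n');
        if stdout.write_all(line.as_bytes()).await.is_err() {
            break;
        }
    }
}
```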