update new documentation and move to doc/

This commit is contained in:
Adi Shankara 2023-07-14 11:43:59 +04:00 committed by Rusty Russell
parent 5151020224
commit b2caa56c27
44 changed files with 323 additions and 442 deletions


@ -3,7 +3,7 @@ title: "Coding Style Guidelines"
slug: "coding-style-guidelines"
hidden: false
createdAt: "2023-01-25T05:34:10.822Z"
updatedAt: "2023-07-13T05:11:09.525Z"
---
Style is an individualistic thing, but working on software is group activity, so consistency is important. Generally our coding style is similar to the [Linux coding style](https://www.kernel.org/doc/html/v4.10/process/coding-style.html).
@ -31,8 +31,6 @@ We have to stop somewhere. The two tools here are extracting deeply-indented co
}
```
## Tabs and indentation
The C code uses TAB characters with a visual indentation of 8 spaces.
@ -45,8 +43,6 @@ static void subtract_received_htlcs(const struct channel *channel,
struct amount_msat *amount)
```
Note: For more details, see the `.clang-format` and `.editorconfig` files located in the project's root directory.
## Prefer Simple Statements
@ -57,8 +53,6 @@ Notice the statement above uses separate tests, rather than combining them. We
if (i->something != NULL && *i->something < 100)
```
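As a hedged sketch (the helper name below is invented for illustration), the combined condition can be split into simple, separate tests, which is the style preferred here:

```c
#include <assert.h>
#include <stddef.h>

/* Invented example: each condition gets its own simple test, so any
 * one of them can be changed or instrumented without touching the rest. */
static int something_is_small(const int *something)
{
	if (something == NULL)
		return 0;
	if (*something >= 100)
		return 0;
	return 1;
}
```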
## Use of `take()`
Some functions have parameters marked with `TAKES`, indicating that they can take lifetime ownership of a parameter which is passed using `take()`. This can be a useful optimization which allows the function to avoid making a copy, but if you hand `take(foo)` to something which doesn't support `take()` you'll probably leak memory!
@ -72,8 +66,6 @@ If you're allocating something simply to hand it via `take()` you should use NUL
enqueue_peer_msg(peer, take(msg));
```
## Use of `tmpctx`
There's a convenient temporary context which gets cleaned regularly: you should use this for throwaways rather than (as you'll see some of our older code do!) grabbing some passing object to hang your temporaries off!
@ -98,8 +90,6 @@ Avoid double-initialization of variables; it's better to set them when they're k
if (is_foo)...
```
This way the compiler will warn you if you have one path which doesn't set the variable. If you initialize with `bool is_foo = false;` then you'll simply get that value without warning when you change the code and forget to set it on one path.
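A minimal sketch of that pattern (names invented): the variable is declared without an initializer and set on every path, so a path that forgets to set it triggers a compiler warning rather than silently using a default:

```c
#include <assert.h>

/* Invented example: is_foo has no initializer; if a new code path
 * forgot to set it before use, -Wmaybe-uninitialized would warn. */
static int foo_count(int mode)
{
	int is_foo;

	if (mode == 0)
		is_foo = 1;
	else
		is_foo = 0;

	return is_foo ? 100 : 0;
}
```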
## Initialization of Memory
@ -174,7 +164,7 @@ We are maintaining a changelog in the top-level directory of this project. Howev
- `Changelog-Deprecated: ` if a feature has been marked for deprecation, but not yet removed
- `Changelog-Fixed: ` if a bug has been fixed
- `Changelog-Removed: ` if a (previously deprecated) feature has been removed
- `Changelog-Experimental: ` if it only affects experimental- config options
In case you think the pull request is small enough not to require a changelog entry, use `Changelog-None` in one of the commit messages to opt out.
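As an illustration, a hypothetical commit message using one of these trailers might look like:

```
connectd: fix crash on reconnect during shutdown

Changelog-Fixed: connectd no longer crashes when a peer reconnects during shutdown.
```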


@ -0,0 +1,184 @@
---
title: "Contributor Workflow"
slug: "contributor-workflow"
excerpt: "Learn the practical process and guidelines for contributing."
hidden: false
createdAt: "2022-12-09T09:57:57.245Z"
updatedAt: "2023-07-12T13:40:58.465Z"
---
## Build and Development
Install the following dependencies for best results:
```shell
sudo apt update
sudo apt install valgrind cppcheck shellcheck libsecp256k1-dev libpq-dev
```
Re-run `configure` and build using `make`:
```shell
./configure --enable-developer
make -j$(nproc)
```
## Debugging
You can build Core Lightning with `DEVELOPER=1` to use dev commands listed in `cli/lightning-cli help`. `./configure --enable-developer` will do that. You can log console messages with `log_info()` in `lightningd` and `status_debug()` in other subdaemons.
You can debug crashing subdaemons with the argument `--dev-debugger=channeld`, where `channeld` is the subdaemon name. It will run `gnome-terminal` by default with a gdb attached to the subdaemon when it starts. You can change the terminal used by setting the `DEBUG_TERM` environment variable, such as `DEBUG_TERM="xterm -e"` or `DEBUG_TERM="konsole -e"`.
It will also print out (to stderr) the gdb command for manual connection. The subdaemon will be stopped (it sends itself a `SIGSTOP`); you'll need to `continue` in gdb.
## Making BOLT Modifications
All of the code for marshalling/unmarshalling BOLT protocol messages is generated directly from the spec. It is pegged to the `BOLTVERSION`, as specified in the `Makefile`.
## Source code analysis
An updated version of the NCC source code analysis tool is available at
<https://github.com/bitonic-cjp/ncc>
It can be used to analyze the lightningd source code by running `make clean && make ncc`. The output (which is built in parallel with the binaries) is stored in `.nccout` files. You can browse it, for instance, with a command like `nccnav lightningd/lightningd.nccout`.
## Code Coverage
Code coverage can be measured using Clang's source-based instrumentation.
First, build with the instrumentation enabled:
```shell
make clean
./configure --enable-coverage CC=clang
make -j$(nproc)
```
Then run the test for which you want to measure coverage. By default, the raw coverage profile will be written to `./default.profraw`. You can change the output file by setting `LLVM_PROFILE_FILE`:
```shell
LLVM_PROFILE_FILE="full_channel.profraw" ./channeld/test/run-full_channel
```
Finally, generate an HTML report from the profile. We have a script to make this easier:
```shell
./contrib/clang-coverage-report.sh channeld/test/run-full_channel \
full_channel.profraw full_channel.html
firefox full_channel.html
```
For more advanced report generation options, see the [Clang coverage documentation](https://clang.llvm.org/docs/SourceBasedCodeCoverage.html).
## Subtleties
There are a few subtleties you should be aware of as you modify deeper parts of the code:
- `ccan/structeq`'s `STRUCTEQ_DEF` will define a safe comparison function `foo_eq()` for struct `foo`, failing the build if the structure has implied padding.
- `command_success`, `command_fail`, and `command_fail_detailed` will free the `cmd` you pass in.
This also means that anything you `tal`-allocated off the `cmd` will also get freed at those points and will no longer be accessible afterwards.
- When making a structure part of a list, you will embed a `struct list_node`. This has to be the _first_ field of the structure, or else the `dev-memleak` command will think your structure has leaked.
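Both of these rules can be sanity-checked with standard C; the structures below are invented stand-ins for illustration, not the actual ccan definitions:

```c
#include <assert.h>
#include <stddef.h>

/* Stand-in for ccan's struct list_node, for illustration only. */
struct list_node {
	struct list_node *next, *prev;
};

/* The list node is the FIRST field, so the memleak checker can find it. */
struct peer {
	struct list_node list;
	int id;
};

/* On typical ABIs this struct has implied padding after 'c'; a
 * STRUCTEQ_DEF-style definition refuses such structures, since a
 * memcmp-based comparison would read those indeterminate bytes. */
struct padded {
	char c;
	int i;
};
```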
## Protocol Modifications
The source tree contains CSV files extracted from the v1.0 BOLT specifications (`wire/extracted_peer_wire_csv` and `wire/extracted_onion_wire_csv`). You can regenerate these by first deleting the local copy (if any) in the `.tmp.bolts` directory, setting `BOLTDIR` and `BOLTVERSION` appropriately, and finally running `make extract-bolt-csv`. By default the bolts will be retrieved from the directory `../bolts` at a recent git version.
e.g., `make extract-bolt-csv BOLTDIR=../bolts BOLTVERSION=ee76043271f79f45b3392e629fd35e47f1268dc8`
## Release checklist
Here's a checklist for the release process.
### Leading Up To The Release
1. Talk to team about whether there are any changes which MUST go in this release which may cause delay.
2. Look through outstanding issues to identify any problems that might need fixing before the release. Good candidates are reports of the project not building on different architectures, or crashes.
3. Identify a good lead for each outstanding issue, and ask them about a fix timeline.
4. Create a milestone for the _next_ release on GitHub, and go through open issues and PRs and mark accordingly.
5. Ask (via email) the most significant contributor who has not already named a release to name the release (use `devtools/credit --verbose v<PREVIOUS-VERSION>` to find this contributor). CC previous namers and team.
### Preparing for -rc1
1. Check that `CHANGELOG.md` is well formatted, ordered in areas, covers all significant changes, and sub-ordered approximately by user impact & coolness.
2. Use `devtools/changelog.py` to collect the changelog entries from pull request commit messages and merge them into the manually maintained `CHANGELOG.md`. This does API queries to GitHub, which are severely rate-limited unless you use an API token: set the `GH_TOKEN` environment variable to a Personal Access Token from <https://github.com/settings/tokens>.
3. Create a new `CHANGELOG.md` heading for `v<VERSION>rc1`, and create a link at the bottom. Note that you should exactly copy the date and name format from a previous release, as the `build-release.sh` script relies on this.
4. Update the contrib/pyln package versions: `make update-pyln-versions NEW_VERSION=<VERSION>`
5. Create a PR with the above.
### Releasing -rc1
1. Merge the above PR.
2. Tag it `git pull && git tag -s v<VERSION>rc1`. Note that you should get a prompt to give this tag a 'message'. Make sure you fill this in.
3. Confirm that the tag will show up for builds with `git describe`
4. Push the tag to remote `git push --tags`.
5. Announce rc1 release on core-lightning's release-chat channel on Discord & [BuildOnL2](https://community.corelightning.org/c/general-questions/).
6. Use `devtools/credit --verbose v<PREVIOUS-VERSION>` to get commits, days and contributors data for release note.
7. Prepare draft release notes including information from above step, and share with the team for editing.
8. Upgrade your personal nodes to the rc1, to help testing.
9. Follow the [reproducible build](REPRODUCIBLE.md) instructions for [builder image setup](https://lightning.readthedocs.io/REPRODUCIBLE.html#builder-image-setup). This will create the builder images `cl-repro-<codename>` which are required for the next step.
10. Run the `tools/build-release.sh bin-Fedora-28-amd64 bin-Ubuntu sign` script to prepare the builds required for the release. With `bin-Fedora-28-amd64 bin-Ubuntu sign`, it will build a zipfile, a non-reproducible Fedora image, and reproducible Ubuntu images. Once done, the script will sign the release contents and create `SHA256SUMS` and `SHA256SUMS.asc` in the release folder.
11. RC images are not uploaded to Docker, so they can be removed from the target list for RC versions. Each Docker image takes approximately 90 minutes to bundle, but it is highly recommended to test the Docker setup once if you haven't done so before. Prior to building Docker images, ensure that the `multiarch/qemu-user-static` setup is working on your system as described [here](https://lightning.readthedocs.io/REPRODUCIBLE.html#setting-up-multiarch-qemu-user-static).
### Releasing -rc2, ..., -rcN
1. Change rc(N-1) to rcN in CHANGELOG.md.
2. Update the contrib/pyln package versions: `make update-pyln-versions NEW_VERSION=<VERSION>`
3. Add a PR with the rcN.
4. Tag it `git pull && git tag -s v<VERSION>rcN && git push --tags`
5. Announce tagged rc release on core-lightning's release-chat channel on Discord & [BuildOnL2](https://community.corelightning.org/c/general-questions/).
6. Upgrade your personal nodes to the rcN.
### Tagging the Release
1. Update the CHANGELOG.md; remove -rcN in both places, update the date and add title and namer.
2. Update the contrib/pyln package versions: `make update-pyln-versions NEW_VERSION=<VERSION>`
3. Add a PR with that release.
4. Merge the PR, then:
- `export VERSION=23.05`
- `git pull`
- `git tag -a -s v${VERSION} -m v${VERSION}`
- `git push --tags`
5. Run `tools/build-release.sh` to:
- Create reproducible zipfile
- Build non-reproducible Fedora image
- Build reproducible Ubuntu-v18.04, Ubuntu-v20.04, Ubuntu-v22.04 images. Follow [link](https://lightning.readthedocs.io/REPRODUCIBLE.html#building-using-the-builder-image) for manually building Ubuntu images.
- Build Docker images for amd64 and arm64v8
- Create and sign checksums. Follow [link](https://lightning.readthedocs.io/REPRODUCIBLE.html#co-signing-the-release-manifest) for manually signing the release.
6. The tarballs may be owned by root, so revert ownership if necessary:
`sudo chown ${USER}:${USER} *${VERSION}*`
7. Upload the resulting files to github and save as a draft.
(<https://github.com/ElementsProject/lightning/releases/>)
8. Send `SHA256SUMS` & `SHA256SUMS.asc` files to the rest of the team to check and sign the release.
9. Team members can verify the release with the help of `build-release.sh`:
1. Rename release captain's `SHA256SUMS` to `SHA256SUMS-v${VERSION}` and `SHA256SUMS.asc` to `SHA256SUMS-v${VERSION}.asc`.
2. Copy them in the root folder (`lightning`).
3. Run `tools/build-release.sh --verify`. It will create reproducible images, verify checksums and sign.
4. Send your signatures from `release/SHA256SUMS.new` to release captain.
5. Or follow [link](https://lightning.readthedocs.io/REPRODUCIBLE.html#verifying-a-reproducible-build) for manual verification instructions.
10. Append signatures shared by the team into the `SHA256SUMS.asc` file, verify with `gpg --verify SHA256SUMS.asc` and include the file in the draft release.
11. `make pyln-release` to upload pyln modules to pypi.org. This requires keys for each of pyln-client, pyln-proto, and pyln-testing accessible to poetry. This can be done by configuring the python keyring library along with a suitable backend. Alternatively, the key can be set as an environment variable and each of the pyln releases can be built and published independently:
- `export POETRY_PYPI_TOKEN_PYPI=<pyln-client token>`
- `make pyln-release-client`
- ... repeat for each pyln package.
### Performing the Release
1. Edit the GitHub draft and include the `SHA256SUMS.asc` file.
2. Publish the release as not a draft.
3. Announce the final release on core-lightning's release-chat channel on Discord & [BuildOnL2](https://community.corelightning.org/c/general-questions/).
4. Send a mail to c-lightning and lightning-dev mailing lists, using the same wording as the Release Notes in GitHub.
5. Write release blog, post it on [Blockstream](https://blog.blockstream.com/) and announce the release on Twitter.
### Post-release
1. Look through PRs which were delayed for release and merge them.
2. Close out the Milestone for the now-shipped release.
3. Update this file with any missing or changed instructions.


@ -4,7 +4,7 @@ slug: "testing"
excerpt: "Understand the testing and code review practices."
hidden: false
createdAt: "2022-12-09T09:58:21.295Z"
updatedAt: "2023-07-13T05:21:39.220Z"
---
# Testing
@ -16,16 +16,12 @@ VALGRIND=[0|1] - detects memory leaks during test execution but adds a signific
PYTEST_PAR=n - runs pytests in parallel
```
A modern desktop can build and run through all the tests in a couple of minutes with:
```shell Shell
make -j12 full-check PYTEST_PAR=24 DEVELOPER=1 VALGRIND=0
```
Adjust `-j` and `PYTEST_PAR` accordingly for your hardware.
There are four kinds of tests:
@ -66,8 +62,6 @@ TEST_DB_PROVIDER=[sqlite3|postgres] - Selects the database to use when running
EXPERIMENTAL_DUAL_FUND=[0|1] - Enable dual-funding tests.
```
#### Troubleshooting
##### Valgrind complains about code we don't control
@ -75,38 +69,31 @@ EXPERIMENTAL_DUAL_FUND=[0|1] - Enable dual-funding tests.
Sometimes `valgrind` will complain about code we do not control ourselves, either because it's in a library we use or it's a false positive. There are generally three ways to address these issues (in descending order of preference):
1. Add a suppression for the one specific call that is causing the issue. Upon finding an issue `valgrind` is instructed in the testing framework to print filters that'd match the issue. These can be added to the suppressions file under `tests/valgrind-suppressions.txt` in order to explicitly skip reporting these in future. This is preferred over the other solutions since it only disables reporting selectively for things that were manually checked. See the [valgrind docs](https://valgrind.org/docs/manual/manual-core.html#manual-core.suppress) for details.
2. Add the process that `valgrind` is complaining about to the `--trace-children-skip` argument in `pyln-testing`. This is used in cases of full binaries not being under our control, such as the `python3` interpreter used in tests that run plugins. Do not use this for binaries that are compiled from our code, as it tends to mask real issues.
3. Mark the test as skipped if running under `valgrind`. It's mostly used to skip tests that otherwise would take considerably too long to test on CI. We discourage this for suppressions, since it is a very blunt tool.
# Fuzz testing
Core Lightning currently supports coverage-guided fuzz testing using [LLVM's libfuzzer](https://www.llvm.org/docs/LibFuzzer.html) when built with `clang`.
The goal of fuzzing is to generate mutated (and often unexpected) inputs (`seed`s) to pass to (parts of) a program (`target`) in order to make sure the codepaths used:
- do not crash
- are valid (if combined with sanitizers)
The generated seeds can be stored and form a `corpus`, which we try to optimise (don't store two seeds that lead to the same codepath).
For more info about fuzzing see [here](https://github.com/google/fuzzing/tree/master/docs), and for more about `libfuzzer` in particular see [here](https://www.llvm.org/docs/LibFuzzer.html).
## Build the fuzz targets
In order to build the Core Lightning binaries with code coverage you will need a recent [clang](http://clang.llvm.org/). The more recent the compiler version the better.
Then you'll need to enable support at configuration time. You likely want to enable a few sanitizers for bug detection, as well as experimental features for extended coverage (neither is required, though).
```shell
./configure --enable-developer --enable-address-sanitizer --enable-ub-sanitizer --enable-fuzzing --disable-valgrind CC=clang && make
```
The targets will be built in `tests/fuzz/` as `fuzz-` binaries, with their best known seed corpora stored in `tests/fuzz/corpora/`.
You can run the fuzz targets on their seed corpora to check for regressions:
@ -115,8 +102,6 @@ You can run the fuzz targets on their seed corpora to check for regressions:
make check-fuzz
```
## Run one or more target(s)
You can run each target independently. Pass `-help=1` to see available options, for example:
@ -125,24 +110,18 @@ You can run each target independently. Pass `-help=1` to see available options,
./tests/fuzz/fuzz-addr -help=1
```
Otherwise, you can use the Python runner to either run the targets against a given seed corpus:
```
./tests/fuzz/run.py fuzz_corpus -j2
```
Or extend this corpus:
```
./tests/fuzz/run.py fuzz_corpus -j2 --generate --runs 12345
```
The latter will run all targets two by two `12345` times.
If you want to contribute new seeds, be sure to merge your corpus with the main one:
@ -152,8 +131,6 @@ If you want to contribute new seeds, be sure to merge your corpus with the main
./tests/fuzz/run.py tests/fuzz/corpora --merge_dir my_locally_extended_fuzz_corpus
```
## Improve seed corpora
If you find coverage increasing inputs while fuzzing, please create a pull request to add them into `tests/fuzz/corpora`. Be sure to minimize any additions to the corpora first.
@ -169,16 +146,12 @@ mkdir -p local_corpora/fuzz-addr
./tests/fuzz/fuzz-addr -jobs=4 local_corpora/fuzz-addr tests/fuzz/corpora/fuzz-addr/
```
After some time, libFuzzer may find some potential coverage increasing inputs and save them in `local_corpora/fuzz-addr`. We can then merge them into the seed corpora in `tests/fuzz/corpora`:
```shell
./tests/fuzz/run.py tests/fuzz/corpora --merge_dir local_corpora
```
This will copy over any inputs that improve the coverage of the existing corpus. If any new inputs were added, create a pull request to improve the upstream seed corpus:
```shell
@ -187,8 +160,6 @@ git commit
...
```
## Write new fuzzing targets
In order to write a new target:


@ -3,7 +3,7 @@ title: "A day in the life of a plugin"
slug: "a-day-in-the-life-of-a-plugin"
hidden: false
createdAt: "2023-02-03T08:32:53.431Z"
updatedAt: "2023-07-12T13:48:23.030Z"
---
A plugin may be written in any language, and communicates with `lightningd` through the plugin's `stdin` and `stdout`. JSON-RPCv2 is used as protocol on top of the two streams, with the plugin acting as server and `lightningd` acting as client. The plugin file needs to be executable (e.g. use `chmod a+x plugin_name`).
@ -21,8 +21,6 @@ plugin dirs, usually `/usr/local/libexec/c-lightning/plugins` and `~/.lightning/
lightningd --plugin=/path/to/plugin1 --plugin=path/to/plugin2
```
`lightningd` will run your plugins with the `--lightning-dir`/networkname as working directory and the environment variables `LIGHTNINGD_PLUGIN` and `LIGHTNINGD_VERSION` set; it will then write JSON-RPC requests to the plugin's `stdin` and read replies from its `stdout`. To initialise the plugin, two RPC methods are required:
- `getmanifest` asks the plugin for command line options and JSON-RPC commands that should be passed through. This can be run before `lightningd` checks that it is the sole user of the `lightning-dir` directory (for `--help`) so your plugin should not touch files at this point.
@ -46,7 +44,8 @@ The `getmanifest` method is required for all plugins and will be called on start
"type": "string",
"default": "World",
"description": "What name should I call you?",
"deprecated": false,
"dynamic": false
}
],
"rpcmethods": [
@ -87,9 +86,7 @@ The `getmanifest` method is required for all plugins and will be called on start
}
```
During startup the `options` will be added to the list of command line options that `lightningd` accepts. If any `options` "name" is already taken startup will abort. The above will add a `--greeting` option with a default value of `World` and the specified description. _Notice that currently string, integers, bool, and flag options are supported._ If an option specifies `dynamic`: `true`, then it should allow a `setvalue` call for that option after initialization.
The `rpcmethods` are methods that will be exposed via `lightningd`'s JSON-RPC over Unix-Socket interface, just like the builtin commands. Any parameters given to the JSON-RPC calls will be passed through verbatim. Notice that the `name`, `description` and `usage` fields are mandatory, while the `long_description` can be omitted (it'll be set to `description` if it was not provided). `usage` should surround optional parameter names in `[]`.
@ -106,8 +103,7 @@ The `featurebits` object allows the plugin to register featurebits that should b
The `notifications` array allows plugins to announce which custom notifications they intend to send to `lightningd`. These custom notifications can then be subscribed to by other plugins, allowing them to communicate with each other via the existing publish-subscribe mechanism and react to events that happen in other plugins, or collect information based on the notification topics.
Plugins are free to register any `name` for their `rpcmethod` as long as the name was not previously registered. This includes both built-in methods, such as `help` and `getinfo`, as well as methods registered by other plugins. If there is a conflict then `lightningd` will report an error and kill the plugin, this aborts startup if the plugin is _important_.
#### Types of Options
@ -116,9 +112,10 @@ There are currently four supported option 'types':
- string: a string
- bool: a boolean
- int: parsed as a signed integer (64-bit)
- flag: no-arg flag option. Presented as `true` if config specifies it.
In addition, string and int types can specify `"multi": true` to indicate they can be specified multiple times. These will always be represented in `init` as a (possibly empty) JSON array. "multi" flag types do not make sense.
Nota bene: if a `flag` type option is not set, it will not appear in the options set that is passed to the plugin.
@ -135,7 +132,6 @@ Here's an example option set, as sent in response to `getmanifest`
{
"name": "run-hot",
"type": "flag",
"description": "If set, overclocks plugin"
},
{
@ -160,10 +156,6 @@ Here's an example option set, as sent in response to `getmanifest`
],
```
**Note**: `lightningd` command line options are only parsed during startup and their values are not remembered when the plugin is stopped or killed. For dynamic plugins started with `plugin start`, options can be passed as extra arguments to the command [lightning-plugin](ref:lightning-plugin).
#### Custom notifications
Plugins may emit custom notifications for topics they have announced during startup. The list of notification topics declared during startup must include all topics that may be emitted, in order to verify that all topics plugins subscribe to are also emitted by some other plugin, and to warn if a plugin subscribes to a non-existent topic. If a plugin emits a notification it has not announced, the notification will be ignored and not forwarded to subscribers.
@ -181,8 +173,6 @@ When forwarding a custom notification `lightningd` will wrap the payload of the
}
```
is delivered as
```json
@ -200,8 +190,6 @@ is delivered as
```
The notification topic (`method` in the JSON-RPC message) must not match one of the internal events in order to prevent breaking subscribers that expect the existing notification format. Multiple plugins are allowed to emit notifications for the same topics, allowing things like metric aggregators where the aggregator subscribes to a common topic and other plugins publish metrics as notifications.
### The `init` method
@ -236,8 +224,6 @@ The `init` method is required so that `lightningd` can pass back the filled comm
}
```
The plugin must respond to `init` calls. The response should be a valid JSON-RPC response to the `init`, but this is not currently enforced. If the response is an object containing `result` which contains `disable`, then the plugin will be disabled and the contents of this member is the reason why.


@ -3,7 +3,7 @@ title: "Bitcoin backend"
slug: "bitcoin-backend"
hidden: false
createdAt: "2023-02-03T08:58:27.125Z"
updatedAt: "2023-07-13T05:18:59.439Z"
---
Core Lightning communicates with the Bitcoin network through a plugin. It uses the `bcli` plugin by default but you can use a custom one, multiple custom ones for different operations, or write your own for your favourite Bitcoin data source!
@ -11,7 +11,7 @@ Communication with the plugin is done through 5 JSONRPC commands, `lightningd` c
### `getchaininfo`
Called at startup, it's used to check the network `lightningd` is operating on and to get the sync status of the backend. Optionally, the plugin can use `last_height` to make sure that the Bitcoin backend is not behind Core Lightning.
The plugin must respond to `getchaininfo` with the following fields:
- `chain` (string), the network name as introduced in BIP 70
@ -23,17 +23,21 @@ The plugin must respond to `getchaininfo` with the following fields:
Polled by `lightningd` to get the current feerate, all values must be passed in sat/kVB.
The plugin must return `feerate_floor` (e.g. 1000 if mempool is empty), and an array of 0 or more `feerates`. Each element of `feerates` is an object with `blocks` and `feerate`, in ascending-blocks order, for example:
```
{
"feerate_floor": <sat per kVB>,
  "feerates": [
    { "blocks": 2, "feerate": <sat per kVB> },
    { "blocks": 6, "feerate": <sat per kVB> },
    { "blocks": 12, "feerate": <sat per kVB> },
    { "blocks": 100, "feerate": <sat per kVB> }
  ]
}
```
`lightningd` will currently linearly interpolate to estimate between the given blocks (it will not extrapolate, but will use the min/max block values).
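A rough sketch (assumed behaviour for illustration, not lightningd's actual code) of that interpolation over the `feerates` array, clamping to the first and last entries instead of extrapolating:

```c
#include <assert.h>

struct feerate_point {
	int blocks;
	int feerate; /* sat per kVB */
};

/* Hypothetical sketch: interpolate linearly between known block
 * targets (ascending order), clamping to the first/last entries
 * rather than extrapolating beyond them. */
static int estimate_feerate(const struct feerate_point *pts, int n, int blocks)
{
	int i;

	if (blocks <= pts[0].blocks)
		return pts[0].feerate;
	if (blocks >= pts[n - 1].blocks)
		return pts[n - 1].feerate;
	for (i = 1; i < n; i++) {
		if (blocks <= pts[i].blocks) {
			int db = pts[i].blocks - pts[i - 1].blocks;
			int df = pts[i].feerate - pts[i - 1].feerate;
			return pts[i - 1].feerate
				+ df * (blocks - pts[i - 1].blocks) / db;
		}
	}
	return pts[n - 1].feerate;
}
```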
### `getrawblockbyheight`


@ -3,7 +3,7 @@ title: "Event notifications"
slug: "event-notifications"
hidden: false
createdAt: "2023-02-03T08:57:15.799Z"
updatedAt: "2023-07-14T07:17:17.114Z"
---
Event notifications allow a plugin to subscribe to events in `lightningd`. `lightningd` will then send a push notification if an event matching the subscription occurred. A notification is defined in the JSON-RPC [specification][jsonrpc-spec] as an RPC call that does not include an `id` parameter:
@ -13,12 +13,9 @@ Event notifications allow a plugin to subscribe to events in `lightningd`. `ligh
Plugins subscribe by returning an array of subscriptions as part of the `getmanifest` response. The result for the `getmanifest` call above for example subscribes to the two topics `connect` and `disconnect`. The topics that are currently defined and the corresponding payloads are listed below.
### `*`
> 📘
>
> This is a way of specifying that you want to subscribe to all possible event notifications. It is not recommended, but is useful for plugins which want to provide generic infrastructure for others (in future, we may add the ability to dynamically subscribe/unsubscribe).
### `channel_opened`
@ -35,8 +32,6 @@ A notification for topic `channel_opened` is sent if a peer successfully funded
}
```
### `channel_open_failed`
A notification to indicate that a channel open attempt has been unsuccessful.
@ -50,8 +45,6 @@ Useful for cleaning up state for a v2 channel open attempt. See `plugins/funder.
}
```
### `channel_state_changed`
A notification for topic `channel_state_changed` is sent every time a channel changes its state. The notification includes the `peer_id` and `channel_id`, the old and new channel states, the type of `cause` and a `message`.
@ -71,8 +64,6 @@ A notification for topic `channel_state_changed` is sent every time a channel ch
}
```
A `cause` can have the following values:
- "unknown" Anything other than the reasons below. Should not happen.
@ -102,8 +93,6 @@ A notification for topic `connect` is sent every time a new connection to a peer
}
```
### `disconnect`
A notification for topic `disconnect` is sent every time a connection to a peer was lost.
@ -116,8 +105,6 @@ A notification for topic `disconnect` is sent every time a connection to a peer
}
```
### `invoice_payment`
A notification for topic `invoice_payment` is sent every time an invoice is paid.
@ -133,8 +120,6 @@ A notification for topic `invoice_payment` is sent every time an invoice is paid
```
### `invoice_creation`
A notification for topic `invoice_creation` is sent every time an invoice is created.
@ -149,8 +134,6 @@ A notification for topic `invoice_creation` is sent every time an invoice is cre
}
```
### `warning`
A notification for topic `warning` is sent every time a new `BROKEN`/`UNUSUAL` level (in plugins, we use `error`/`warn`) log entry is generated, which means something unusual or broken happened, such as a channel failure or a message-resolution failure.
@ -166,8 +149,6 @@ A notification for topic `warning` is sent every time a new `BROKEN`/`UNUSUAL` l
}
```
1. `level` is `warn` or `error`: `warn` means something bad seems to have happened but it is under control, though we had better check it; `error` means something extremely bad is out of control and may lead to a crash;
2. `time` is the number of seconds since epoch;
3. `source` means where the event happened; it may take the following forms:
@ -196,8 +177,6 @@ A notification for topic `forward_event` is sent every time the status of a forw
}
```
or
```json
@ -218,8 +197,6 @@ or
```
- The status includes `offered`, `settled`, `failed` and `local_failed`; they are all strings in the JSON.
- When the forward payment is valid for us, we'll set `offered` and send the forward payment to next hop to resolve;
- When the payment forwarded by us gets paid eventually, the forward payment will change the status from `offered` to `settled`;
@ -255,8 +232,6 @@ A notification for topic `sendpay_success` is sent every time a sendpay succeeds
}
```
`sendpay` doesn't wait for the result of the payment, and `waitsendpay` returns the result within the specified time or times out; but `sendpay_success` will always deliver the result whenever a sendpay succeeds, provided it was subscribed to.
### `sendpay_failure`
@ -287,8 +262,6 @@ A notification for topic `sendpay_failure` is sent every time a sendpay complete
}
```
`sendpay` doesn't wait for the result of the payment, and `waitsendpay` returns the result within the specified time or times out; but `sendpay_failure` will always deliver the result whenever a sendpay fails, provided it was subscribed to.
### `coin_movement`
@ -321,8 +294,6 @@ A notification for topic `coin_movement` is sent to record the movement of coins
}
```
`version` indicates which version of the coin movement data struct this notification adheres to.
`node_id` specifies the node issuing the coin movement.
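Since the schema is versioned, a subscriber should check `version` before relying on the payload layout; a hedged sketch (the version constant is hypothetical, for illustration only):

```python
# The schema version this handler understands (hypothetical value).
KNOWN_COIN_MOVEMENT_VERSION = 2

def handle_coin_movement(payload):
    """Skip notifications whose schema version we don't understand,
    otherwise return the issuing node's id. Sketch only."""
    if payload["version"] != KNOWN_COIN_MOVEMENT_VERSION:
        return None
    return payload["node_id"]

example = {"version": 2, "node_id": "02abc"}  # illustrative payload subset
handle_coin_movement(example)  # → "02abc"
```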
@ -422,23 +393,19 @@ Emitted after we've caught up to the chain head on first start. Lists all curren
}
```
### `block_added`
Emitted after each block is received from bitcoind, either during the initial sync or throughout the node's life as new blocks appear.
```json
{
"block_added": {
"hash": "000000000000000000034bdb3c01652a0aa8f63d32f949313d55af2509f9d245",
"height": 753304
}
}
```
### `openchannel_peer_sigs`
When opening a channel with a peer using the collaborative transaction protocol (`opt_dual_fund`), this notification is fired when the peer sends us their funding transaction signatures, `tx_signatures`. We update the in-progress PSBT and return it here, with the peer's signatures attached.
@ -452,10 +419,15 @@ When opening a channel with a peer using the collaborative transaction protocol
}
```
### `shutdown`
Sent in two situations: lightningd is (almost completely) shut down, or the plugin `stop` command has been called for this plugin. In both cases the plugin has 30 seconds to exit itself, otherwise it's killed.
In the shutdown case, plugins should not interact with lightningd except via (id-less) logging or notifications. New RPC calls will fail with error code -5 and (plugins') responses will be ignored. Because lightningd can crash or be killed, a plugin cannot rely on the shutdown notification always being sent.
```json
{
"shutdown": {
}
}
```

View file

@ -3,7 +3,7 @@ title: "Reproducible builds"
slug: "repro"
hidden: false
createdAt: "2023-01-25T10:37:03.476Z"
updatedAt: "2023-04-22T13:02:34.236Z"
updatedAt: "2023-07-12T13:26:52.005Z"
---
Reproducible builds close the final gap in the lifecycle of open-source projects by allowing maintainers to verify and certify that a given binary was indeed produced by compiling an unmodified version of the publicly available source. In particular, the maintainer certifies that the binary corresponds a) to the exact version of the published source and b) that no malicious changes have been applied before or after the compilation.
@ -59,16 +59,12 @@ for v in bionic focal jammy; do
done
```
Verify that the image corresponds to our expectation and is runnable:
```shell
sudo docker run bionic cat /etc/lsb-release
```
Which should result in the following output for `bionic`:
```shell
@ -78,8 +74,6 @@ DISTRIB_CODENAME=bionic
DISTRIB_DESCRIPTION="Ubuntu 18.04 LTS"
```
## Builder image setup
Once we have the clean base image we need to customize it to be able to build Core Lightning. This includes disabling the update repositories, downloading the build dependencies and specifying the steps required to perform the build.
@ -94,8 +88,6 @@ sudo docker build -t cl-repro-focal - < contrib/reprobuild/Dockerfile.focal
sudo docker build -t cl-repro-jammy - < contrib/reprobuild/Dockerfile.jammy
```
Since we pass the `Dockerfile` through `stdin` the build command will not create a context, i.e., the current directory is not passed to `docker` and it'll be independent of the currently checked out version. This also means that you will be able to reuse the docker image for future builds, and don't have to repeat this dance every time. Verifying the `Dockerfile` is therefore sufficient to ensure that the resulting `cl-repro-<codename>` image is reproducible.
@ -112,8 +104,6 @@ sudo docker run --rm -v $(pwd):/repo -ti cl-repro-focal
sudo docker run --rm -v $(pwd):/repo -ti cl-repro-jammy
```
The last few lines of output also contain the `sha256sum` hashes of all artifacts, so if you're just verifying the build those are the lines that are of interest to you:
```shell
@ -121,9 +111,77 @@ ee83cf4948228ab1f644dbd9d28541fd8ef7c453a3fec90462b08371a8686df8 /repo/release/
94bd77f400c332ac7571532c9f85b141a266941057e8fe1bfa04f054918d8c33 /repo/release/clightning-v0.9.0rc1.zip
```
Repeat this step for each distribution and each architecture you wish to sign. Once all the binaries are in the `release/` subdirectory we can sign the hashes.
# Setting up Docker's Buildx
Docker Buildx is an extension of Docker's build command that provides a more efficient way to create images. It is part of Docker 19.03 and can also be manually installed as a CLI plugin for older versions.
1. Enable Docker CLI experimental features
Docker CLI experimental features are required to use Buildx. Enable them by setting the `DOCKER_CLI_EXPERIMENTAL` environment variable to `enabled`.
You can do this by adding the following line to your shell profile file (`.bashrc`, `.zshrc`, etc.):
```
export DOCKER_CLI_EXPERIMENTAL=enabled
```
After adding it, source your shell profile file or restart your shell to apply the changes.
2. Create a new builder instance
By default, Docker uses the "legacy" builder. You need to create a new builder instance that uses BuildKit. To create a new builder instance, use the following command:
```
docker buildx create --use
```
The `--use` flag sets the newly created builder as the current one.
# Setting up multiarch/qemu-user-static
1. Check Buildx is working
Use the `docker buildx inspect --bootstrap` command to verify that Buildx is working correctly. The `--bootstrap` option ensures the builder instance is running before inspecting it. The output should look something like this:
```
Name: my_builder
Driver: docker-container
Last Activity: 2023-06-13 04:37:30 +0000 UTC
Nodes:
Name: my_builder0
Endpoint: unix:///var/run/docker.sock
Status: running
Buildkit: v0.11.6
Platforms: linux/amd64, linux/amd64/v2, linux/amd64/v3, linux/amd64/v4, linux/386
```
2. Install `binfmt-support` and `qemu-user-static` if not installed already.
```shell
sudo apt-get update
sudo apt-get install docker.io binfmt-support qemu-user-static
sudo systemctl restart docker
```
3. Setup QEMU to run binaries from multiple different architectures
```
docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
```
4. Confirm QEMU is working
Again, run the `docker buildx inspect --bootstrap` command to verify that `linux/arm64` is now in the list of platforms.
```
Name: my_builder
Driver: docker-container
Last Activity: 2023-06-13 04:37:30 +0000 UTC
Nodes:
Name: my_builder0
Endpoint: unix:///var/run/docker.sock
Status: running
Buildkit: v0.11.6
Platforms: linux/amd64, linux/amd64/v2, linux/amd64/v3, linux/amd64/v4, linux/386, linux/arm64, linux/riscv64, linux/ppc64, linux/ppc64le, linux/s390x, linux/mips64le, linux/mips64
```
# (Co-)Signing the release manifest
@ -137,8 +195,6 @@ sha256sum *v0.9.0* > SHA256SUMS
gpg -sb --armor SHA256SUMS
```
Co-maintainers and contributors wishing to add their own signature verify that the `SHA256SUMS` and `SHA256SUMS.asc` files created by the release captain match their binaries before also signing the manifest:
```shell
@ -148,8 +204,6 @@ sha256sum -c SHA256SUMS
cat SHA256SUMS | gpg -sb --armor > SHA256SUMS.new
```
Then send the resulting `SHA256SUMS.new` file to the release captain so it can be merged with the other signatures into `SHA256SUMS.asc`.
# Verifying a reproducible build
@ -165,8 +219,6 @@ Assuming you have downloaded the binaries, the manifest and the signatures into
gpg --verify SHA256SUMS.asc
```
And you should see a list of messages like the following:
```shell
@ -182,8 +234,6 @@ gpg: using RSA key 30DE693AE0DE9E37B3E7EB6BBFF0F67810C1EED1
gpg: Good signature from "Lisa Neigut <niftynei@gmail.com>" [full]
```
If there are any issues, `gpg` will print `Bad signature`. This might be because the signatures in `SHA256SUMS.asc` do not match the `SHA256SUMS` file, which could be the result of a filename change. If that is not the case, a failure here means that the verification failed: do not continue using the binaries, and contact the maintainers.
Next we verify that the binaries match the ones in the manifest:
@ -192,8 +242,6 @@ Next we verify that the binaries match the ones in the manifest:
sha256sum -c SHA256SUMS
```
Producing output similar to the following:
```shell
@ -204,17 +252,10 @@ clightning-v0.9.0.zip: OK
sha256sum: WARNING: 1 listed file could not be read
```
Notice that the two files we downloaded are marked as `OK`, but we're missing one file. If you didn't download that file this is to be expected, and is nothing to worry about. A failure to verify the hash would give a warning like the following:
```shell
sha256sum: WARNING: 1 computed checksum did NOT match
```
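The per-file check that `sha256sum -c` performs amounts to recomputing each digest and comparing it against the manifest line; a Python sketch (filenames and contents are illustrative):

```python
import hashlib

def verify_line(manifest_line, read_bytes):
    """Check one 'digest  filename' line from a SHA256SUMS manifest
    against file contents supplied by read_bytes(filename).
    Sketch of what `sha256sum -c` does for each listed file."""
    digest, filename = manifest_line.split(maxsplit=1)
    actual = hashlib.sha256(read_bytes(filename)).hexdigest()
    return "OK" if actual == digest else "FAILED"

# Hypothetical file contents, for illustration only.
files = {"clightning-v0.9.0.zip": b"example contents"}
line = (hashlib.sha256(files["clightning-v0.9.0.zip"]).hexdigest()
        + "  clightning-v0.9.0.zip")
verify_line(line, files.__getitem__)  # → "OK"
```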
If both the signature verification and the manifest checksum verification succeeded, then you have just successfully verified a reproducible build and, assuming you trust the maintainers, are good to install and use the binaries. Congratulations! 🎉🥳

View file

@ -4,7 +4,7 @@ slug: "installation"
excerpt: "Core lightning is available on many platforms and environments. Learn how to install on your preferred platform."
hidden: false
createdAt: "2022-11-18T14:32:02.251Z"
updatedAt: "2023-04-22T11:59:36.536Z"
updatedAt: "2023-07-13T05:08:44.966Z"
---
# Binaries
@ -18,8 +18,6 @@ sudo snap install bitcoin-core
sudo ln -s /snap/bitcoin-core/current/bin/bitcoin{d,-cli} /usr/local/bin/
```
Alternatively, you can install a pre-compiled binary from the [releases](https://github.com/ElementsProject/lightning/releases) page on GitHub. Core Lightning provides binaries for both Ubuntu and Fedora distributions.
If you're on a different distribution or OS, you can compile the source by following the instructions from [Installing from Source](<>).
@ -32,16 +30,12 @@ To install the Docker image for the latest stable release:
docker pull elementsproject/lightningd:latest
```
To install for a specific version, for example, 22.11.1:
```shell
docker pull elementsproject/lightningd:v22.11.1
```
See all of the docker images for Core Lightning on [Docker Hub](https://hub.docker.com/r/elementsproject/lightningd/tags).
# Third-party apps
@ -61,7 +55,6 @@ Core Lightning is also available on nixOS via the [nix-bitcoin](https://github.c
You will need several development libraries:
- libsqlite3: for database support.
- libgmp: for secp256k1
- zlib: for compression routines.
For actually doing development and running the tests, you will also need:
@ -80,14 +73,12 @@ Get dependencies:
```shell
sudo apt-get update
sudo apt-get install -y \
autoconf automake build-essential git libtool libsqlite3-dev \
python3 python3-pip net-tools zlib1g-dev libsodium-dev gettext
pip3 install --upgrade pip
pip3 install --user poetry
```
If you don't have Bitcoin installed locally you'll need to install that as well. It's now available via [snapd](https://snapcraft.io/bitcoin-core).
```shell
@ -98,8 +89,6 @@ sudo snap install bitcoin-core
sudo ln -s /snap/bitcoin-core/current/bin/bitcoin{d,-cli} /usr/local/bin/
```
Clone lightning:
```shell
@ -107,16 +96,12 @@ git clone https://github.com/ElementsProject/lightning.git
cd lightning
```
Checkout a release tag:
```shell
git checkout v22.11.1
```
For development or running tests, get additional dependencies:
```shell
@ -124,8 +109,6 @@ sudo apt-get install -y valgrind libpq-dev shellcheck cppcheck \
libsecp256k1-dev jq lowdown
```
If you can't install `lowdown`, a version will be built in-tree.
If you want to build the Rust plugins (currently, cln-grpc):
@ -134,8 +117,6 @@ If you want to build the Rust plugins (currently, cln-grpc):
sudo apt-get install -y cargo rustfmt protobuf-compiler
```
There are two ways to build Core Lightning, depending on how you want to use it.
To build CLN to just install a tagged or master version, you can use the following commands:
@ -148,8 +129,6 @@ make
sudo make install
```
> 📘
>
> If you want to disable Rust because you do not want to use it, or simply do not want the grpc-plugin, you can use `./configure --disable-rust`.
@ -161,8 +140,6 @@ pip3 install poetry
poetry shell
```
This will put you in a new shell to enter the following commands:
```shell
@ -172,8 +149,6 @@ make
make check VALGRIND=0
```
Optionally, add `-j$(nproc)` after `make` to speed up compilation. (e.g. `make -j$(nproc)`)
Running lightning:
@ -184,8 +159,6 @@ bitcoind &
./cli/lightning-cli help
```
## To Build on Fedora
OS version: Fedora 27 or above
@ -214,8 +187,6 @@ $ sudo dnf update -y && \
sudo dnf clean all
```
Make sure you have [bitcoind](https://github.com/bitcoin/bitcoin) available to run.
Clone lightning:
@ -225,16 +196,12 @@ $ git clone https://github.com/ElementsProject/lightning.git
$ cd lightning
```
Checkout a release tag:
```shell
$ git checkout v22.11.1
```
Build and install lightning:
```shell
@ -243,8 +210,6 @@ $lightning> make
$lightning> sudo make install
```
Running lightning (mainnet):
```shell
@ -252,8 +217,6 @@ $ bitcoind &
$ lightningd --network=bitcoin
```
Running lightning on testnet:
```shell
@ -261,28 +224,32 @@ $ bitcoind -testnet &
$ lightningd --network=testnet
```
## To Build on FreeBSD
OS version: FreeBSD 11.1-RELEASE or above
Install the dependencies, then clone and build from source:
```shell
pkg install git python py39-pip gmake libtool gmp sqlite3 postgresql13-client gettext autotools
git clone https://github.com/ElementsProject/lightning.git
cd lightning
pip install --upgrade pip
pip3 install mako
./configure
gmake -j$(nproc)
gmake install
```
Alternatively, Core Lightning is in the FreeBSD ports, so install it as any other port (dependencies are handled automatically):
```shell
# pkg install c-lightning
```
If you want to compile locally and fiddle with compile time options:
```shell
# cd /usr/ports/net-p2p/c-lightning && make install
```
See `/usr/ports/net-p2p/c-lightning/Makefile` for instructions on how to build from an arbitrary git commit, instead of the latest release tag.
> 📘
@ -302,22 +269,18 @@ Configure lightningd: copy `/usr/local/etc/lightningd-bitcoin.conf.sample` to
# lightning-cli --rpc-file /var/db/c-lightning/bitcoin/lightning-rpc --lightning-dir=/var/db/c-lightning help
```
## To Build on OpenBSD
OS version: OpenBSD 7.3
Install dependencies:
```shell
pkg_add git python gmake py3-pip libtool gettext-tools
pkg_add automake # (select highest version, automake1.16.2 at time of writing)
pkg_add autoconf # (select highest version, autoconf-2.69p2 at time of writing)
```
Install `mako` otherwise we run into build errors:
```shell
@ -325,8 +288,6 @@ pip3.7 install --user poetry
poetry install
```
Add `/home/<username>/.local/bin` to your path:
`export PATH=$PATH:/home/<username>/.local/bin`
@ -339,8 +300,6 @@ export AUTOMAKE_VERSION=1.16
./configure
```
Finally, build `c-lightning`:
`gmake`
@ -350,25 +309,21 @@ Finally, build `c-lightning`:
Use nix-shell to launch a shell with a full Core Lightning dev environment:
```shell
$ nix-shell -Q -p gdb sqlite autoconf git clang libtool sqlite autoconf \
autogen automake libsodium 'python3.withPackages (p: [p.bitcoinlib])' \
valgrind --run make
```
## To Build on macOS
Assuming you have Xcode and Homebrew installed. Install dependencies:
```shell
$ brew install autoconf automake libtool python3 gnu-sed gettext libsodium
$ ln -s /usr/local/Cellar/gettext/0.20.1/bin/xgettext /usr/local/opt
$ export PATH="/usr/local/opt:$PATH"
```
If you need SQLite (or get a SQLite mismatch build error):
```shell
@ -377,8 +332,6 @@ $ export LDFLAGS="-L/usr/local/opt/sqlite/lib"
$ export CPPFLAGS="-I/usr/local/opt/sqlite/include"
```
Some library paths are different when using `homebrew` on M1 Macs, so the following two variables need to be set on M1 machines:
```shell
@ -386,8 +339,6 @@ $ export CPATH=/opt/homebrew/include
$ export LIBRARY_PATH=/opt/homebrew/lib
```
If you need Python 3.x for mako (or get a mako build error):
```shell
@ -399,8 +350,6 @@ $ pip install --upgrade pip
$ pip install poetry
```
If you don't have bitcoind installed locally you'll need to install that as well:
```shell
@ -412,8 +361,6 @@ $ ./configure
$ make src/bitcoind src/bitcoin-cli && make install
```
Clone lightning:
```shell
@ -421,16 +368,12 @@ $ git clone https://github.com/ElementsProject/lightning.git
$ cd lightning
```
Checkout a release tag:
```shell
$ git checkout v22.11.1
```
Build lightning:
```shell
@ -439,8 +382,6 @@ $ ./configure
$ poetry run make
```
Running lightning:
> 📘
@ -453,24 +394,18 @@ bitcoind &
./cli/lightning-cli help
```
To install the built binaries into your system, you'll need to run `make install`:
```shell
make install
```
On an M1 mac you may need to use this command instead:
```shell
sudo PATH="/usr/local/opt:$PATH" LIBRARY_PATH=/opt/homebrew/lib CPATH=/opt/homebrew/include make install
```
## To Build on Arch Linux
Install dependencies:
@ -480,8 +415,6 @@ pacman --sync autoconf automake gcc git make python-pip
pip install --user poetry
```
Clone Core Lightning:
```shell
@ -489,8 +422,6 @@ $ git clone https://github.com/ElementsProject/lightning.git
$ cd lightning
```
Build Core Lightning:
```shell
@ -499,16 +430,12 @@ python -m poetry install
python -m poetry run make
```
Launch Core Lightning:
```
./lightningd/lightningd
```
## To cross-compile for Android
Make a standalone toolchain as per <https://developer.android.com/ndk/guides/standalone_toolchain.html>.
@ -528,8 +455,6 @@ export LD=$target_host-ld
export STRIP=$target_host-strip
```
Two makefile targets should not be cross-compiled so we specify a native CC:
```shell
@ -538,8 +463,6 @@ make clean -C ccan/ccan/cdump/tools \
&& make CC=clang -C ccan/ccan/cdump/tools
```
Install the `qemu-user` package.
This will allow you to properly configure the build for the target device environment.
Build with:
@ -550,8 +473,6 @@ BUILD=x86_64 MAKE_HOST=arm-linux-androideabi \
CONFIGURATOR_CC="arm-linux-androideabi-clang -static"
```
## To cross-compile for Raspberry Pi
Obtain the [official Raspberry Pi toolchains](https://github.com/raspberrypi/tools). This document assumes compilation will occur towards the Raspberry Pi 3 (arm-linux-gnueabihf as of Mar. 2018).
@ -570,8 +491,6 @@ export LD=$target_host-ld
export STRIP=$target_host-strip
```
Install the `qemu-user` package. This will allow you to properly configure the
build for the target device environment.
Configure the ARM ELF interpreter prefix:
@ -580,9 +499,7 @@ Config the arm elf interpreter prefix:
export QEMU_LD_PREFIX=/path/to/raspberry/arm-bcm2708/arm-rpi-4.9.3-linux-gnueabihf/arm-linux-gnueabihf/sysroot/
```
Obtain and install cross-compiled versions of sqlite3 and zlib:
Download and build zlib:
@ -595,8 +512,6 @@ make
make install
```
Download and build sqlite3:
```shell
@ -608,21 +523,6 @@ make
make install
```
Then, build Core Lightning with the following commands:
```
@ -630,8 +530,6 @@ Then, build Core Lightning with the following commands:
make
```
## To compile for Armbian
For all the other Pi devices out there, consider using [Armbian](https://www.armbian.com).
@ -648,11 +546,9 @@ Get dependencies:
```shell
apk update
apk add --virtual .build-deps ca-certificates alpine-sdk autoconf automake git libtool \
sqlite-dev python3 py3-mako net-tools zlib-dev libsodium gettext
```
Clone lightning:
```shell
@ -661,8 +557,6 @@ cd lightning
git submodule update --init --recursive
```
Build and install:
```shell
@ -671,8 +565,6 @@ make
make install
```
Clean up:
```shell
@ -680,10 +572,8 @@ cd .. && rm -rf lightning
apk del .build-deps
```
Install runtime dependencies:
```shell
apk add libgcc libsodium sqlite-libs zlib
```

View file

@ -1,157 +0,0 @@
---
title: "Contributor Workflow"
slug: "contributor-workflow"
excerpt: "Learn the practical process and guidelines for contributing."
hidden: false
createdAt: "2022-12-09T09:57:57.245Z"
updatedAt: "2023-04-22T13:00:38.252Z"
---
## Build and Development
Install the following dependencies for best results:
```shell
sudo apt update
sudo apt install valgrind cppcheck shellcheck libsecp256k1-dev libpq-dev
```
Re-run `configure` and build using `make`:
```shell
./configure --enable-developer
make -j$(nproc)
```
## Debugging
You can build Core Lightning with `DEVELOPER=1` to use dev commands listed in `cli/lightning-cli help`; `./configure --enable-developer` will do that. You can log console messages with `log_info()` in lightningd and `status_debug()` in other subdaemons.
You can debug crashing subdaemons with the argument `--dev-debugger=channeld`, where `channeld` is the subdaemon name. It will run `gnome-terminal` by default with a gdb attached to the subdaemon when it starts. You can change the terminal used by setting the `DEBUG_TERM` environment variable, such as `DEBUG_TERM="xterm -e"` or `DEBUG_TERM="konsole -e"`.
It will also print out (to stderr) the gdb command for manual connection. The subdaemon will be stopped (it sends itself a `SIGSTOP`); you'll need to `continue` in gdb.
## Making BOLT Modifications
All of code for marshalling/unmarshalling BOLT protocol messages is generated directly from the spec. These are pegged to the BOLTVERSION, as specified in `Makefile`.
## Source code analysis
An updated version of the NCC source code analysis tool is available at
<https://github.com/bitonic-cjp/ncc>
It can be used to analyze the lightningd source code by running `make clean && make ncc`. The output (which is built in parallel with the binaries) is stored in .nccout files. You can browse it, for instance, with a command like `nccnav lightningd/lightningd.nccout`.
## Subtleties
There are a few subtleties you should be aware of as you modify deeper parts of the code:
- `ccan/structeq`'s STRUCTEQ_DEF will define safe comparison function `foo_eq()` for struct `foo`, failing the build if the structure has implied padding.
- `command_success`, `command_fail`, and `command_fail_detailed` will free the `cmd` you pass in.
This also means that if you `tal`-allocated anything from the `cmd`, they will also get freed at those points and will no longer be accessible afterwards.
- When making a structure part of a list, you will instantiate a `struct list_node`. This has to be the _first_ field of the structure, or else the `dev-memleak` command will think your structure has leaked.
## Protocol Modifications
The source tree contains CSV files extracted from the v1.0 BOLT specifications (`wire/extracted_peer_wire_csv` and `wire/extracted_onion_wire_csv`). You can regenerate these by first deleting the local copy (if any) in the `.tmp.bolts` directory, setting `BOLTDIR` and `BOLTVERSION` appropriately, and finally running `make extract-bolt-csv`. By default the bolts will be retrieved from the directory `../bolts` and a recent git version.
e.g., `make extract-bolt-csv BOLTDIR=../bolts BOLTVERSION=ee76043271f79f45b3392e629fd35e47f1268dc8`
## Release checklist
Here's a checklist for the release process.
### Leading Up To The Release
1. Talk to team about whether there are any changes which MUST go in this release which may cause delay.
2. Look through outstanding issues, to identify any problems that might be necessary to fixup before the release. Good candidates are reports of the project not building on different architectures or crashes.
3. Identify a good lead for each outstanding issue, and ask them about a fix timeline.
4. Create a milestone for the _next_ release on Github, and go through open issues and PRs and mark them accordingly.
5. Ask (via email) the most significant contributor who has not already named a release to name the release (use devtools/credit to find this contributor). CC previous namers and team.
### Preparing for -rc1
1. Check that `CHANGELOG.md` is well formatted, ordered in areas, covers all significant changes, and sub-ordered approximately by user impact & coolness.
2. Use `devtools/changelog.py` to collect the changelog entries from pull request commit messages and merge them into the manually maintained `CHANGELOG.md`. This does API queries to GitHub, which are severely ratelimited unless you use an API token: set the `GH_TOKEN` environment variable to a Personal Access Token from <https://github.com/settings/tokens>
3. Create a new `CHANGELOG.md` heading for `v<VERSION>rc1`, and create a link at the bottom. Note that you should copy the date and name format exactly from a previous release, as the `build-release.sh` script relies on this.
4. Update the contrib/pyln package versions: `make update-pyln-versions NEW_VERSION=<VERSION>`
5. Create a PR with the above.
### Releasing -rc1
1. Merge the above PR.
2. Tag it `git pull && git tag -s v<VERSION>rc1`. Note that you should get a prompt to give this tag a 'message'. Make sure you fill this in.
3. Confirm that the tag will show up for builds with `git describe`
4. Push the tag to remote `git push --tags`.
5. Update the /topic on #c-lightning on Libera.
6. Prepare draft release notes (see devtools/credit), and share with team for editing.
7. Upgrade your personal nodes to the rc1, to help testing.
8. Test `tools/build-release.sh` to build the non-reproducible images and reproducible zipfile.
9. Use the zipfile to produce a [reproducible build](doc:repro).
### Releasing -rc2, etc
1. Change rc1 to rc2 in CHANGELOG.md.
2. Add a PR with the rc2.
3. Tag it `git pull && git tag -s v<VERSION>rc2 && git push --tags`
4. Update the /topic on #c-lightning on Libera.
5. Upgrade your personal nodes to the rc2.
### Tagging the Release
1. Update the CHANGELOG.md; remove -rcN in both places, update the date and add title and namer.
2. Update the contrib/pyln package versions: `make update-pyln-versions NEW_VERSION=<VERSION>`
3. Add a PR with that release.
4. Merge the PR, then:
1. `export VERSION=0.9.3`
2. `git pull`
3. `git tag -a -s v${VERSION} -m v${VERSION}`
4. `git push --tags`
5. Run `tools/build-release.sh` to build the non-reproducible images and reproducible zipfile.
6. Use the zipfile to produce a [reproducible build](REPRODUCIBLE.md).
7. To create and sign checksums, start by entering the release dir: `cd release`
8. Create the checksums for signing: `sha256sum * > SHA256SUMS`
9. Create the first signature with `gpg -sb --armor SHA256SUMS`
10. The tarballs may be owned by root, so revert ownership if necessary:
`sudo chown ${USER}:${USER} *${VERSION}*`
11. Upload the resulting files to github and save as a draft.
(<https://github.com/ElementsProject/lightning/releases/>)
12. Ping the rest of the team to check the SHA256SUMS file and have them send their
`gpg -sb --armor SHA256SUMS`.
13. Append the signatures into a file called `SHA256SUMS.asc`, verify
with `gpg --verify SHA256SUMS.asc` and include the file in the draft
release.
14. `make pyln-release` to upload pyln modules to pypi.org. This requires keys
for each of pyln-client, pyln-proto, and pyln-testing accessible to poetry.
This can be done by configuring the python keyring library along with a
suitable backend. Alternatively, the key can be set as an environment
variable and each of the pyln releases can be built and published
independently:
- `export POETRY_PYPI_TOKEN_PYPI=<pyln-client token>`
- `make pyln-release-client`
- ... repeat for each pyln package.
### Performing the Release
1. Edit the GitHub draft and include the `SHA256SUMS.asc` file.
2. Publish the release as not a draft.
3. Update the /topic on #c-lightning on Libera.
4. Send a mail to c-lightning and lightning-dev mailing lists, using the same wording as the Release Notes in github.
### Post-release
1. Look through PRs which were delayed for release and merge them.
2. Close out the Milestone for the now-shipped release.
3. Update this file with any missing or changed instructions.

View file

@ -4,7 +4,7 @@ slug: "faq"
excerpt: "Common issues and frequently asked questions on operating a CLN node."
hidden: false
createdAt: "2023-01-25T13:15:09.290Z"
updatedAt: "2023-02-21T13:47:49.406Z"
updatedAt: "2023-07-05T09:42:38.017Z"
---
# General questions