mirror of
https://github.com/ElementsProject/lightning.git
synced 2025-02-20 13:54:36 +01:00
doc: Add guides and GitHub workflow for doc sync
This PR:

- adds all the guides (in markdown format) that are published at https://docs.corelightning.org/docs
- adds a GitHub workflow to sync any future changes made to files inside the guides folder
- does not include the API reference (JSON-RPC commands); those will be handled in a separate PR, since they are used as manpages and will require a different GitHub workflow

Note that the guides do not exactly map to their related files in doc/, since we reorganized the overall documentation structure on ReadMe for better readability and developer experience. For example, doc/FUZZING.md and doc/HACKING.md#Testing are merged into testing.md in the new docs.

As of the creation date of this PR, content from each of the legacy documents has been synced with the new docs. Until this PR gets merged, I will continue to push any updates made to the legacy documents into the new docs.

If this looks reasonable, I will add a separate PR to clean up the legacy documents from doc/ (or mark them deprecated) to avoid redundant upkeep and maintenance.

Changelog-None
This commit is contained in:
parent
15e86f2bba
commit
e83782f5de
44 changed files with 5823 additions and 0 deletions
23 .github/workflows/rdme-docs-sync.yml vendored Normal file
@@ -0,0 +1,23 @@
```yaml
# This GitHub Actions workflow was auto-generated by the `rdme` cli on 2023-04-22T13:16:28.430Z
# You can view our full documentation here: https://docs.readme.com/docs/rdme
name: ReadMe GitHub Action 🦉

on:
  push:
    branches:
      # This workflow will run every time you push code to the following branch: `master`
      # Check out GitHub's docs for more info on configuring this:
      # https://docs.github.com/actions/using-workflows/events-that-trigger-workflows
      - master

jobs:
  rdme-docs:
    runs-on: ubuntu-latest
    steps:
      - name: Check out repo 📚
        uses: actions/checkout@v3

      - name: Run `docs` command 🚀
        uses: readmeio/rdme@v8
        with:
          rdme: docs guides/ --key=${{ secrets.README_API_KEY }} --version=23.02
```
438 doc/guides/Beginner-s Guide/backup-and-recovery.md Normal file
@@ -0,0 +1,438 @@
---
title: "Backup and recovery"
slug: "backup-and-recovery"
excerpt: "Learn the various backup and recovery options available for your Core Lightning node."
hidden: false
createdAt: "2022-11-18T16:28:17.292Z"
updatedAt: "2023-04-22T12:51:49.775Z"
---
Lightning Network channels get their scalability and privacy benefits from the very simple technique of _not telling anyone else about your in-channel activity_. This is in contrast to onchain payments, where you have to tell everyone about each and every payment and have it recorded on the blockchain, leading to scaling problems (you have to push data to everyone, and everyone needs to validate every transaction) and privacy problems (everyone knows every payment you were ever involved in).

Unfortunately, this removes a property that onchain users are so used to, they react in surprise when learning about its removal. Your onchain activity is recorded by all archival fullnodes, so if you lose all your onchain activity because your storage got fried, you can just redownload it from the nearest archival fullnode.

But in Lightning, since _you_ are the only one storing all your financial information, you **_cannot_** recover this financial information from anywhere else.

This means that on Lightning, **you have to** responsibly back up your financial information yourself, using various processes and automation.

The discussion below assumes that you know where you put your `$LIGHTNINGDIR`, and you know the directory structure within. By default your `$LIGHTNINGDIR` will be in `~/.lightning/${COIN}`. For example, if you are running `--mainnet`, it will be `~/.lightning/bitcoin`.
## `hsm_secret`

> 📘 Who should do this:
>
> Everyone.

You need a copy of the `hsm_secret` file regardless of whatever backup strategy you use.

The `hsm_secret` is created when you first create the node, and does not change. Thus, a one-time backup of `hsm_secret` is sufficient.

This is just 32 bytes, and you can do something like the below and write the hexadecimal digits a few times on a piece of paper:

```shell
cd $LIGHTNINGDIR
xxd hsm_secret
```
You can re-enter the hexdump into a text file later and use `xxd` to convert it back to a binary `hsm_secret`:

```shell
cat > hsm_secret_hex.txt <<HEX
00: 30cc f221 94e1 7f01 cd54 d68c a1ba f124
10: e1f3 1d45 d904 823c 77b7 1e18 fd93 1676
HEX
xxd -r hsm_secret_hex.txt > hsm_secret
chmod 0400 hsm_secret
```

Notice that you need to ensure that the `hsm_secret` is only readable by the user, and is not writable, as otherwise `lightningd` will refuse to start. Hence the `chmod 0400 hsm_secret` command.
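Since a single mistyped hex digit would silently produce a different secret, it is worth rehearsing the dump-and-restore round trip on a scratch file first. A minimal sketch (the filenames are illustrative, and the scratch secret is random, not your real one):

```shell
# Make a scratch 32-byte "secret" standing in for hsm_secret.
head -c 32 /dev/urandom > scratch_secret

# Dump to hex (this is what you would copy onto paper).
xxd scratch_secret > scratch_hex.txt

# Restore from the hex dump and verify it is byte-for-byte identical.
xxd -r scratch_hex.txt > scratch_restored
cmp scratch_secret scratch_restored && echo "round trip OK"
```

If `cmp` reports a difference, re-check the transcription before trusting the paper copy.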
Alternatively, if you are deploying a new node that has no funds and channels yet, you can generate BIP39 words using any process, and create the `hsm_secret` using the `hsmtool generatehsm` command. If you did `make install` then `hsmtool` is installed as [`lightning-hsmtool`](ref:lightning-hsmtool), else you can find it in the `tools/` directory of the build directory.

```shell
lightning-hsmtool generatehsm hsm_secret
```

Then enter the BIP39 words, plus an optional passphrase, and copy the resulting `hsm_secret` to `${LIGHTNINGDIR}`.

You can regenerate the same `hsm_secret` file using the same BIP39 words, which, again, you can back up on paper.

Recovery of the `hsm_secret` is sufficient to recover any onchain funds. Recovery of the `hsm_secret` is necessary, but insufficient, to recover any in-channel funds. To recover in-channel funds, you need to use one or more of the other backup strategies below.
## SQLITE3 `--wallet=${main}:${backup}` And Remote NFS Mount

> 📘 Who should do this:
>
> Casual users.

> 🚧
>
> This technique is only supported on versions newer than v0.10.2 (v0.10.2 itself does not support it).
>
> On earlier versions, the `:` character is not special and will be considered part of the path of the database file.

When using the SQLITE3 backend (the default), you can specify a second database file to replicate to, by separating the second file with a single `:` character in the `--wallet` option, after the main database filename.

For example, if the user running `lightningd` is named `user`, and you are on the Bitcoin mainnet with the default `${LIGHTNINGDIR}`, you can specify in your `config` file:

```shell
wallet=sqlite3:///home/user/.lightning/bitcoin/lightningd.sqlite3:/my/backup/lightningd.sqlite3
```
Or via command line:

```shell
lightningd --wallet=sqlite3:///home/user/.lightning/bitcoin/lightningd.sqlite3:/my/backup/lightningd.sqlite3
```

If the second database file does not exist but the directory that would contain it does exist, the file is created. If the directory of the second database file does not exist, `lightningd` will fail at startup. If the second database file already exists, on startup it will be overwritten with the main database. During operation, all database updates will be done on both databases.

The main and backup files will **not** be identical at every byte, but they will still contain the same data.

It is recommended that you use **the same filename** for both files, just in different directories.

This has the advantage over the `backup` plugin below of requiring exactly the same amount of space on both the main and backup storage; the `backup` plugin will take more space on the backup than on the main storage. It has the disadvantage that it only works with the SQLITE3 backend: it is not supported by the PostgreSQL backend, and is unlikely to be supported on any future database backends.
You can only specify _one_ replica.

It is recommended that you use a network-mounted filesystem for the backup destination. For example, if you have a NAS you can access remotely.

At the minimum, set the backup to a different storage device. This is no better than just using RAID-1 (and the RAID-1 will probably be faster), but it is easier to set up: just plug in a commodity USB flash disk (with a metal casing, since a lot of writes are done and you need to dissipate the heat quickly) and use it as the backup location, without repartitioning your OS disk, for example.

> 📘
>
> Do note that files are not stored encrypted, so you should really not do this with rented space ("cloud storage").
To recover, simply get **all** the backup database files. Note that SQLITE3 will sometimes create a `-journal` or `-wal` file, which is necessary to ensure correct recovery of the backup; you need to copy those too, with corresponding renames if you use a different filename for the backup database. For example, if you named the backup `backup.sqlite3` and when you recover you find `backup.sqlite3` and `backup.sqlite3-journal` files, rename `backup.sqlite3` to `lightningd.sqlite3` and `backup.sqlite3-journal` to `lightningd.sqlite3-journal`. The `-journal` or `-wal` file may or may not exist, but if it _does_, you _must_ recover it as well (there can be an `-shm` file as well in WAL mode, but it is unnecessary; it is only used by SQLITE3 as a hack for portable shared memory, contains no useful data, and SQLITE3 will always ignore its contents). It is recommended that you use **the same filename** for both main and backup databases (just in different directories), and put the backup in its own directory, so that you can just recover all the files in that directory without worrying about missing any needed files or renaming correctly.
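As a concrete sketch of that recovery (all paths and the `backup.sqlite3` name here are illustrative; adjust them to your own layout):

```shell
# Recover a replica that used a different filename than the main DB.
BACKUPDIR=/my/backup
LIGHTNINGDIR=~/.lightning/bitcoin

cp "$BACKUPDIR/backup.sqlite3" "$LIGHTNINGDIR/lightningd.sqlite3"

# -journal / -wal files may or may not exist, but MUST be copied if present.
for suffix in -journal -wal; do
    if [ -f "$BACKUPDIR/backup.sqlite3$suffix" ]; then
        cp "$BACKUPDIR/backup.sqlite3$suffix" \
           "$LIGHTNINGDIR/lightningd.sqlite3$suffix"
    fi
done
```

Keeping the backup in its own directory, as recommended above, makes the loop trivial: everything in `$BACKUPDIR` gets copied and renamed.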
If your backup destination is a network-mounted filesystem in a remote location, then even the loss of all hardware in one location will still allow you to recover your Lightning funds.

However, if instead you are just replicating the database on another storage device in a single location, you remain vulnerable to disasters like fire or computer confiscation.
## `backup` Plugin And Remote NFS Mount

> 📘 Who should do this:
>
> Casual users.

You can find the full source for the `backup` plugin here: <https://github.com/lightningd/plugins/tree/master/backup>

The `backup` plugin requires Python 3.

- Download the source for the plugin.
  - `git clone https://github.com/lightningd/plugins.git`
- `cd` into its directory and install requirements.
  - `cd plugins/backup`
  - `pip3 install -r requirements.txt`
- Figure out where you will put the backup files.
  - Ideally you have an NFS or other network-based mount on your system, into which you will put the backup.
- Stop your Lightning node.
- `/path/to/backup-cli init --lightning-dir ${LIGHTNINGDIR} file:///path/to/nfs/mount/file.bkp`.
  This creates an initial copy of the database at the NFS mount.
- Add these settings to your `lightningd` configuration:
  - `important-plugin=/path/to/backup.py`
- Restart your Lightning node.
It is recommended that you use a network-mounted filesystem for the backup destination. For example, if you have a NAS you can access remotely.

> 📘
>
> Do note that files are not stored encrypted, so you should really not do this with rented space ("cloud storage").

Alternately, you _could_ put it on another storage device (e.g. USB flash disk) in the same physical location.

To recover:

- Re-download the `backup` plugin and install Python 3 and the requirements of `backup`.
- `/path/to/backup-cli restore file:///path/to/nfs/mount ${LIGHTNINGDIR}`

If your backup destination is a network-mounted filesystem in a remote location, then even the loss of all hardware in one location will still allow you to recover your Lightning funds.

However, if instead you are just replicating the database on another storage device in a single location, you remain vulnerable to disasters like fire or computer confiscation.
## Filesystem Redundancy

> 📘 Who should do this:
>
> Filesystem nerds, data hoarders, home labs, enterprise users.

You can set up a RAID-1 with multiple storage devices, and point the `$LIGHTNINGDIR` to the RAID-1 setup. That way, failure of one storage device will still let you recover funds.

You can use a hardware RAID-1 setup, or just buy multiple commodity storage media you can add to your machine and use a software RAID, such as (not an exhaustive list!):

- `mdadm` to create a virtual volume which is the RAID combination of multiple physical media.
- BTRFS RAID-1 or RAID-10, a filesystem built into Linux.
- ZFS RAID-Z, a filesystem that cannot be legally distributed with the Linux kernel, but can be distributed in a BSD system, and can be installed on Linux with some extra effort; see [ZFSonLinux](https://zfsonlinux.org).

RAID-1 (whether by hardware or software) like the above protects against failure of a single storage device, but does not protect you in case of certain disasters, such as fire or computer confiscation.

You can "just" use a pair of high-quality metal-casing USB flash devices in RAID-1, if you have enough USB ports (you need metal casings since the devices will receive a lot of small writes, which cause a lot of heating that needs to dissipate very fast; otherwise the flash device firmware will internally disconnect the flash device from your computer, reducing your reliability).
### Example: BTRFS on Linux

On a Linux system, one of the simpler things you can do would be to use a BTRFS RAID-1 setup between a partition on your primary storage and a USB flash disk.

The below "should" work, but assumes you are comfortable with low-level Linux administration. If you are on a system that would make you cry if you break it, you **MUST** stop your Lightning node and back up all files before doing the below.

- Install `btrfs-progs` or `btrfs-tools` or equivalent.
- Get a 32GB USB flash disk.
- Stop your Lightning node and back up everything; do not be stupid.
- Repartition your hard disk to have a 30GB partition.
  - This is risky and may lose your data, so this is best done with a brand-new hard disk that contains no data.
- Connect the USB flash disk.
- Find the `/dev/sdXX` devices for the 30GB HDD partition and the flash disk.
  - `lsblk -o NAME,TYPE,SIZE,MODEL` should help.
- Create a RAID-1 `btrfs` filesystem.
  - `mkfs.btrfs -m raid1 -d raid1 /dev/${HDD30GB} /dev/${USB32GB}`
  - You may need to add `-f` if the USB flash disk is already formatted.
- Create a mountpoint for the `btrfs` filesystem.
- Create an `/etc/fstab` entry.
  - Use the `UUID` option instead of `/dev/sdXX`, since the exact device letter can change across boots.
  - You can get the UUID with `lsblk -o NAME,UUID`. Specifying _either_ of the devices is sufficient.
  - Add the `autodefrag` option, which tends to work better with SQLITE3 databases.
  - e.g. `UUID=${UUID} ${BTRFSMOUNTPOINT} btrfs defaults,autodefrag 0 0`
- `mount -a`, then `df` to confirm it got mounted.
- Copy the contents of the `$LIGHTNINGDIR` to the BTRFS mount point.
  - Copy the entire directory, then `chown -R` the copy to the user who will run `lightningd`.
  - If you are paranoid, run `diff -r` on both copies to check.
- Remove the existing `$LIGHTNINGDIR`.
- `ln -s ${BTRFSMOUNTPOINT}/lightningdirname ${LIGHTNINGDIR}`.
  - Make sure the `$LIGHTNINGDIR` has the same structure as what you originally had.
- Add `crontab` entries for `root` that perform regular `btrfs` maintenance tasks.
  - `0 0 * * * /usr/bin/btrfs balance start -dusage=50 -dlimit=2 -musage=50 -mlimit=4 ${BTRFSMOUNTPOINT}`
    This prevents BTRFS from running out of blocks even if it has unused space _within_ blocks, and is run at midnight every day. You may need to change the path to the `btrfs` binary.
  - `0 0 * * 0 /usr/bin/btrfs scrub start -B -c 2 -n 4 ${BTRFSMOUNTPOINT}`
    This detects bit rot (i.e. bad sectors) and auto-heals the filesystem, and is run on Sundays at midnight.
- Restart your Lightning node.
If one or the other device fails completely, shut down your computer, boot from a recovery disk or similar, then:

- Connect the surviving device.
- Mount the partition/USB flash disk in `degraded` mode:
  - `mount -o degraded /dev/sdXX /mnt/point`
- Copy the `lightningd.sqlite3` and `hsm_secret` to new media.
  - Do **not** write to the degraded `btrfs` mount!
- Start up a `lightningd` using the `hsm_secret` and `lightningd.sqlite3`, close all channels, move all funds to onchain cold storage you control, and then set up a new Lightning node.

If the device that fails is the USB flash disk, you can replace it using BTRFS commands. You should probably stop your Lightning node while doing this.

- `btrfs replace start /dev/sdOLD /dev/sdNEW ${BTRFSMOUNTPOINT}`.
  - If `/dev/sdOLD` no longer even exists because the device is really, really broken, use `btrfs filesystem show` to see the number after `devid` of the broken device, and use that number instead of `/dev/sdOLD`.
- Monitor status with `btrfs replace status ${BTRFSMOUNTPOINT}`.

More sophisticated setups with more than two devices are possible. Take note that "RAID 1" in `btrfs` means "data is copied on up to two devices", meaning only up to one device can fail. You may be interested in the `raid1c3` and `raid1c4` modes if you have three or four storage devices. BTRFS would probably work better if you were purchasing an entire set of new storage devices to set up a new node.
## PostgreSQL Cluster

> 📘 Who should do this:
>
> Enterprise users, whales.

`lightningd` may also be compiled with PostgreSQL support.

PostgreSQL is generally faster than SQLITE3, and also supports running a PostgreSQL cluster to be used by `lightningd`, with automatic replication and failover in case an entire node of the PostgreSQL cluster fails.

Setting this up, however, is more involved.

By default, `lightningd` compiles with PostgreSQL support **only** if it finds `libpq` installed when you `./configure`. To enable it, you have to install a developer version of `libpq`. On most Debian-derived systems that would be `libpq-dev`. To verify you have it properly installed on your system, check that the following command gives you a path:

```shell
pg_config --includedir
```
Versioning may also matter to you. For example, Debian Stable ("buster") as of late 2020 provides PostgreSQL 11.9 for the `libpq-dev` package, but Ubuntu LTS ("focal") of 2020 provides PostgreSQL 12.5. Debian Testing ("bullseye") uses PostgreSQL 13.0 as of this writing. PostgreSQL 12 had a non-trivial change in the way the restore operation is done for replication.

You should use the same PostgreSQL version of `libpq-dev` as what you run on your cluster, which probably means running the same distribution on your cluster.

Once you have decided on a specific version you will use throughout, refer as well to the "synchronous replication" document of PostgreSQL for the **specific version** you are using:

- [PostgreSQL 11](https://www.postgresql.org/docs/11/runtime-config-replication.html)
- [PostgreSQL 12](https://www.postgresql.org/docs/12/runtime-config-replication.html)
- [PostgreSQL 13](https://www.postgresql.org/docs/13/runtime-config-replication.html)

You then have to compile `lightningd` with PostgreSQL support.

- Clone or untar a new source tree for `lightning` and `cd` into it.
  - You _could_ just use `make clean` on an existing one, but for the avoidance of doubt (and potential bugs in our `Makefile` cleanup rules), just create a fresh source tree.
- `./configure`
  - Add any options to `configure` that you normally use as well.
- Double-check that the `config.vars` file contains `HAVE_POSTGRES=1`.
  - `grep 'HAVE_POSTGRES' config.vars`
- `make`
- If you install `lightningd`, `sudo make install`.

If you were not using PostgreSQL before but have compiled and used `lightningd` on your system, the resulting `lightningd` will still continue supporting and using your current SQLITE3 database; it just gains the option to use a PostgreSQL database as well.

If you just want to use PostgreSQL without using a cluster (for example, as an initial test without risking any significant funds), then after setting up a PostgreSQL database, you just need to add `--wallet=postgres://${USER}:${PASSWORD}@${HOST}:${PORT}/${DB}` to your `lightningd` config or invocation.

To set up a cluster for a brand-new node, follow this (external) [guide by @gabridome](https://github.com/gabridome/docs/blob/master/c-lightning_with_postgresql_reliability.md).
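For example, with those placeholders filled in, a minimal single-node setup might use a `config` line like the following (the user, password, and database name here are purely illustrative):

```shell
wallet=postgres://cln:examplepassword@localhost:5432/lightningd
```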
The above guide assumes you are setting up a new node from scratch. It is also specific to PostgreSQL 12, and setting up for other versions **will** have differences; read the PostgreSQL manuals linked above.

> 🚧
>
> If you want to continue a node that started using an SQLITE3 database, note that we do not support this. You should set up a new PostgreSQL node, move funds from the SQLITE3 node to the PostgreSQL node, then shut down the SQLITE3 node permanently.

There are also more ways to set up PostgreSQL replication. In general, you should use [synchronous replication](https://www.postgresql.org/docs/13/warm-standby.html#SYNCHRONOUS-REPLICATION), since `lightningd` assumes that once a transaction is committed, it is saved in all permanent storage. This can make remote replicas difficult to create due to latency.
## SQLite Litestream Replication

> 🚧
>
> Previous versions of this document recommended this technique, but we no longer do so. According to [issue 4857](https://github.com/ElementsProject/lightning/issues/4857), even with the 60-second timeout we added in 0.10.2, this leads to constant crashing of `lightningd` in some situations. This section will be removed completely six months after 0.10.3. Consider using `--wallet=sqlite3://${main}:${backup}` above instead.

One of the simpler things on any system is to use Litestream to replicate the SQLite database. It continuously streams SQLite changes to file or external storage (the cloud storage option should not be used). Backups/replication should not be on the same disk as the original SQLite DB.

You need to enable WAL mode on your database. To do so, first stop `lightningd`, then:

```shell
$ sqlite3 lightningd.sqlite3
sqlite3> PRAGMA journal_mode = WAL;
sqlite3> .quit
```

Then just restart `lightningd`.
`/etc/litestream.yml`:

```yaml
dbs:
  - path: /home/bitcoin/.lightning/bitcoin/lightningd.sqlite3
    replicas:
      - path: /media/storage/lightning_backup
```

Then start the service using `systemctl`:

```shell
$ sudo systemctl start litestream
```
Restore:

```shell
$ litestream restore -o /home/bitcoin/restore_lightningd.sqlite3 /media/storage/lightning_backup
```

Because Litestream only copies small changes and not the entire database (holding a read lock on the file while doing so), the 60-second timeout on locking should not be reached unless something has made your backup medium very, very slow.

Litestream has its own timer, so there is a tiny (but non-negligible) probability that `lightningd` updates the database, then irrevocably commits to the update by sending revocation keys to the counterparty, and _then_ your main storage media crashes before Litestream can replicate the update.

Treat this as a superior version of the "Database File Backups" section below, and prefer recovering via other backup methods first.
## Database File Backups

> 📘 Who should do this:
>
> Those who already have at least one of the other backup methods, those who are #reckless.

This is the least desirable backup strategy, as it _can_ lead to loss of all in-channel funds if you use it. However, having _no_ backup strategy at all _will_ lead to loss of all in-channel funds, so this is still better than nothing.

This backup method is undesirable, since it cannot recover the following channels:

- Channels with peers that do not support `option_dataloss_protect`.
  - Most nodes on the network already support `option_dataloss_protect` as of November 2020.
  - If the peer does not support `option_dataloss_protect`, then the entire channel funds will be revoked by the peer.
  - Peers can _claim_ to honestly support this, but later steal funds from you by giving obsolete state when you recover.
- Channels created _after_ the copy was made.
  - Data for those channels does not exist in the backup, so your node cannot recover them.

Because of the above, this strategy is discouraged: you _can_ potentially lose all funds in open channels.

However, again, note that a "no backups #reckless" strategy leads to _definite_ loss of funds, so you should still prefer _this_ strategy rather than having _no_ backups at all.

Even if you have one of the better options above, you might still want to do this as a worst-case fallback, as long as you:

- Attempt to recover using the other backup options above first. Any one of them will be better than this backup option.
- Recover by this method **ONLY** as a **_last_** resort.
- Recover using the most recent backup you can find. Take time to look for the most recent available backup.

Again, this strategy can lead to only **_partial_** recovery of funds, or even to complete failure to recover, so use the other methods first to recover!
### Offline Backup

While `lightningd` is not running, just copy the `lightningd.sqlite3` file in the `$LIGHTNINGDIR` to backup media somewhere.

To recover, just copy the backed-up `lightningd.sqlite3` into your new `$LIGHTNINGDIR` together with the `hsm_secret`.

You can also use any automated backup system, as long as it includes the `lightningd.sqlite3` file (and optionally `hsm_secret`, but note that it is a secret key, so thieves getting a copy of your backups may be able to steal your funds, even in-channel funds) and as long as it copies the file while `lightningd` is not running.
### Backing Up While `lightningd` Is Running

Since `sqlite3` will be writing to the file while `lightningd` is running, `cp`ing the `lightningd.sqlite3` file while `lightningd` is running may result in the file not being copied properly if `sqlite3` happens to be committing database transactions at that time, potentially leading to a corrupted backup file that cannot be recovered from.

You have to stop `lightningd` before copying the database to backup in order to ensure that backup files are not corrupted, and in particular, wait for the `lightningd` process to exit. Obviously, this is disruptive to node operations, so you might prefer to just perform the `cp` even if the backup is potentially corrupted. As long as you maintain multiple backups sampled at different times, this may be more acceptable than stopping and restarting `lightningd`; the corruption only exists in the backup, not in the original file.

If the filesystem or volume manager containing `$LIGHTNINGDIR` has a snapshot facility, you can take a snapshot of the filesystem, mount the snapshot, copy `lightningd.sqlite3`, unmount the snapshot, and then delete the snapshot. Similarly, if the filesystem supports a "reflink" feature, such as `cp -c` on APFS on MacOS, or `cp --reflink=always` on XFS or BTRFS on Linux, you can also use that, then copy the reflinked copy to a different storage medium; this is equivalent to a snapshot of a single file. This _reduces_ but does not _eliminate_ the race condition, so you should still maintain multiple backups.
You can additionally check the backup with this command:

```shell
echo 'PRAGMA integrity_check;' | sqlite3 ${BACKUPFILE}
```

This will print the string `ok` if the backup is **likely** not corrupted. If the result is anything other than `ok`, the backup is definitely corrupted and you should make another copy.
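Tying the copy and the check together, a naive while-running backup could look like this sketch (the paths are illustrative; a copy that fails the check is discarded, which is one more reason to keep multiple backups from different times):

```shell
#!/bin/sh
# Naive live backup: copy, then verify; keep only copies that pass the check.
DB=/home/user/.lightning/bitcoin/lightningd.sqlite3
DEST=/my/backup/lightningd-$(date +%Y%m%d%H%M%S).sqlite3

cp "$DB" "$DEST"

# PRAGMA integrity_check prints "ok" only if the copy is likely intact.
if [ "$(echo 'PRAGMA integrity_check;' | sqlite3 "$DEST")" = "ok" ]; then
    echo "backup kept: $DEST"
else
    echo "backup corrupted, discarding" >&2
    rm -f "$DEST"
fi
```

Run from `cron`, this accumulates timestamped, pre-checked copies; remember that even a copy that passes the check is still subject to all the caveats of this backup strategy.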
In order to make a proper uncorrupted backup of the SQLITE3 file while `lightningd` is running, we would need to have `lightningd` perform the backup itself, which, as of the version at the time of this writing, is not yet implemented.

Even if the backup is not corrupted, take note that this backup strategy should still be a last resort; recovery of all funds is still not assured with this backup strategy.

`sqlite3` has `.dump` and `VACUUM INTO` commands, but note that those lock the main database for long time periods, which will negatively affect your `lightningd` instance.
76 doc/guides/Beginner-s Guide/beginners-guide.md Normal file
@@ -0,0 +1,76 @@
---
title: "Running your node"
slug: "beginners-guide"
excerpt: "A guide to all the basics you need to get up and running immediately."
hidden: false
createdAt: "2022-11-18T14:27:50.098Z"
updatedAt: "2023-02-21T13:49:20.132Z"
---
## Starting `lightningd`

#### Regtest (local, fast-start) option

If you want to experiment with `lightningd`, there's a script to set up a `bitcoind` regtest test network of two local lightning nodes, which provides a convenient `start_ln` helper. See the notes at the top of the `startup_regtest.sh` file for details on how to use it.

```bash
. contrib/startup_regtest.sh
```

Note that your local nodeset will be much faster/more responsive if you've configured your node to expose the developer options, e.g.

```bash
./configure --enable-developer
```
#### Mainnet Option

To test with real bitcoin, you will need to have a local `bitcoind` node running:

```bash
bitcoind -daemon
```

Wait until `bitcoind` has synchronized with the network.

Make sure that you do not have `walletbroadcast=0` in your `~/.bitcoin/bitcoin.conf`, or you may run into trouble.

Note that running `lightningd` against a pruned node may cause some issues if not managed carefully; see [pruning](doc:bitcoin-core#using-a-pruned-bitcoin-core-node) for more information.

You can start `lightningd` with the following command:

```bash
lightningd --network=bitcoin --log-level=debug
```

This creates a `.lightning/` subdirectory in your home directory: see `man -l doc/lightningd.8` (or [???](???)) for more runtime options.
## Using The JSON-RPC Interface

Core Lightning exposes a [JSON-RPC 2.0](https://www.jsonrpc.org/specification) interface over a Unix Domain socket; the [`lightning-cli`](ref:lightning-cli) tool can be used to access it, or there is a [python client library](???).
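As a sketch of what that transport looks like on the wire, here is a minimal stdlib-only Python client. This is illustrative only: the socket path and the blank-line response delimiter are assumptions about the deployment, and for real use you should prefer `lightning-cli` or the python client library.

```python
import json
import socket


def lightning_call(socket_path: str, method: str, params=None):
    """Minimal JSON-RPC 2.0 call over a Unix domain socket.
    Assumes each response is terminated by a blank line (b"\n\n")."""
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.connect(socket_path)
    request = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": method,
        "params": params or {},
    }
    sock.sendall(json.dumps(request).encode())
    # Read until the response delimiter arrives.
    buf = b""
    while b"\n\n" not in buf:
        chunk = sock.recv(4096)
        if not chunk:
            break
        buf += chunk
    sock.close()
    reply = json.loads(buf.decode())
    if "error" in reply:
        raise RuntimeError(reply["error"])
    return reply["result"]
```

For example, assuming a default mainnet setup, `lightning_call(os.path.expanduser("~/.lightning/bitcoin/lightning-rpc"), "getinfo")` would return the same object `lightning-cli getinfo` prints.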
You can use [`lightning-cli`](ref:lightning-cli) `help` to print a table of RPC methods; [`lightning-cli`](ref:lightning-cli) `help <command>` will offer specific information on that command.

Useful commands:

- [lightning-newaddr](ref:lightning-newaddr): get a bitcoin address to deposit funds into your lightning node.
- [lightning-listfunds](ref:lightning-listfunds): see where your funds are.
- [lightning-connect](ref:lightning-connect): connect to another lightning node.
- [lightning-fundchannel](ref:lightning-fundchannel): create a channel to another connected node.
- [lightning-invoice](ref:lightning-invoice): create an invoice to get paid by another node.
- [lightning-pay](ref:lightning-pay): pay someone else's invoice.
- [lightning-plugin](ref:lightning-plugin): commands to control extensions.
## Care And Feeding Of Your New Lightning Node

Once you've started for the first time, there's a script called `contrib/bootstrap-node.sh` which will connect you to other nodes on the lightning network.

There are also numerous plugins available for Core Lightning which add capabilities: see the [Plugins](doc:plugins) guide, and check out the plugin collection at <https://github.com/lightningd/plugins>, including [helpme](https://github.com/lightningd/plugins/tree/master/helpme) which guides you through setting up your first channels and customising your node.

For a less reckless experience, you can encrypt the HD wallet seed: see [HD wallet encryption](doc:securing-keys).
50
doc/guides/Beginner-s Guide/opening-channels.md
Normal file
@@ -0,0 +1,50 @@
---
title: "Opening channels"
slug: "opening-channels"
hidden: false
createdAt: "2022-11-18T16:26:57.798Z"
updatedAt: "2023-01-31T15:07:08.196Z"
---
First you need to transfer some funds to `lightningd` so that it can open a channel:

```shell
# Returns an address <address>
lightning-cli newaddr
```

`lightningd` will register the funds once the transaction is confirmed.

You may need to generate a p2sh-segwit address if the faucet does not support bech32:

```shell
# Returns a p2sh-segwit address
lightning-cli newaddr p2sh-segwit
```

Confirm that `lightningd` has received the funds:

```shell
# Returns an array of on-chain funds.
lightning-cli listfunds
```
Once `lightningd` has funds, we can connect to a node and open a channel. Let's assume the **remote** node is accepting connections at `<ip>` (and optional `<port>`, if not 9735) and has the node ID `<node_id>`:

```shell
lightning-cli connect <node_id> <ip> [<port>]
lightning-cli fundchannel <node_id> <amount_in_satoshis>
```

This opens a connection and then, on top of that connection, opens a channel.

The funding transaction needs 3 confirmations in order for the channel to be usable, and 6 to be announced for others to use.

You can check the status of the channel using `lightning-cli listpeers`, which after 3 confirmations (1 on testnet) should say that `state` is `CHANNELD_NORMAL`; after 6 confirmations you can use `lightning-cli listchannels` to verify that the `public` field is now `true`.
10
doc/guides/Beginner-s Guide/securing-keys.md
Normal file
@@ -0,0 +1,10 @@
---
title: "Securing keys"
slug: "securing-keys"
hidden: false
createdAt: "2022-11-18T16:28:08.529Z"
updatedAt: "2023-01-31T13:52:27.300Z"
---
You can encrypt the `hsm_secret` content (which is used to derive the HD wallet's master key) by passing the `--encrypted-hsm` startup argument, or by using the `hsmtool` (which you can find in the `tools/` directory at the root of the [Core Lightning repository](https://github.com/ElementsProject/lightning)) with the `encrypt` method. You can decrypt an encrypted `hsm_secret` using the `hsmtool` with the `decrypt` method.

If you encrypt your `hsm_secret`, you will have to pass the `--encrypted-hsm` startup option to `lightningd`. Once your `hsm_secret` is encrypted, you **will not** be able to access your funds without your password, so manage your password carefully. Also, do not feel too safe with an encrypted `hsm_secret`: unlike `bitcoind`, where wallet encryption can restrict the usage of some RPC commands, `lightningd` always needs to access keys from the wallet, which is thus **not locked** (yet), even with an encrypted BIP32 master seed.
@@ -0,0 +1,24 @@
---
title: "Sending and receiving payments"
slug: "sending-and-receiving-payments"
hidden: false
createdAt: "2022-11-18T16:27:07.625Z"
updatedAt: "2023-01-31T15:06:02.214Z"
---
Payments in Lightning are invoice based.

The recipient creates an invoice with the expected `<amount>` in millisatoshi (or `"any"` for a donation), a unique `<label>` and a `<description>` the payer will see:

```shell
lightning-cli invoice <amount> <label> <description>
```

This returns some internal details, and a standard invoice string called `bolt11` (named after the [BOLT #11 lightning spec](https://github.com/lightning/bolts/blob/master/11-payment-encoding.md)).

The sender can feed this `bolt11` string to the `decodepay` command to see what it is, and pay it simply using the `pay` command:

```shell
lightning-cli pay <bolt11>
```

Note that there are lower-level interfaces (and more options to these interfaces) for more sophisticated use.
13
doc/guides/Beginner-s Guide/watchtowers.md
Normal file
@@ -0,0 +1,13 @@
---
title: "Watchtowers"
slug: "watchtowers"
excerpt: "Defend your node against breaches using a watchtower."
hidden: false
createdAt: "2022-11-18T16:28:27.054Z"
updatedAt: "2023-02-02T07:13:57.111Z"
---
The Lightning Network protocol assumes that a node is always online and synchronised with the network. Should your lightning node go offline for some time, it is possible that a node on the other side of your channel may attempt to force close the channel with an outdated state (also known as a revoked commitment). This may allow them to steal funds from the channel that belonged to you.

A watchtower is a third-party service that you can hire to defend your node against such breaches, whether malicious or accidental, in the event that your node goes offline. It will watch for breaches on the blockchain and punish the malicious peer by relaying a penalty transaction on your behalf.

There are a number of watchtower services available today. One of them is the [watchtower client plugin](https://github.com/talaia-labs/rust-teos/tree/master/watchtower-plugin) that works with the [Eye of Satoshi tower](https://github.com/talaia-labs/rust-teos) (or any [BOLT13](https://github.com/sr-gi/bolt13/blob/master/13-watchtowers.md) compliant watchtower).
74
doc/guides/Contribute to Core Lightning/code-generation.md
Normal file
@@ -0,0 +1,74 @@
---
title: "Code Generation"
slug: "code-generation"
hidden: true
createdAt: "2023-04-22T12:29:01.116Z"
updatedAt: "2023-04-22T12:44:47.814Z"
---
The CLN project has a multitude of interfaces, most of which are generated from an abstract schema:

- Wire format for peer-to-peer communication: this is the binary format specified by the [LN spec](https://github.com/lightning/bolts). It uses the [generate-wire.py](https://github.com/ElementsProject/lightning/blob/master/tools/generate-wire.py) script to parse the (faux) CSV files that are automatically extracted from the specification and writes C source code files that are then used internally to encode and decode messages, as well as provide print functions for the messages.

- Wire format for inter-daemon communication: CLN follows a multi-daemon architecture, making communication explicit across daemons. For this inter-daemon communication we use a slightly altered message format from the [LN spec](https://github.com/lightning/bolts). The changes are:
  1. addition of FD passing semantics to allow establishing a new connection between daemons (communication uses [socketpair](https://man7.org/linux/man-pages/man2/socketpair.2.html), so no `connect`)
  2. change of the message length prefix from `u16` to `u32`, allowing for messages larger than 65Kb. The CSV files live with the respective sub-daemon and also use [generate-wire.py](https://github.com/ElementsProject/lightning/blob/master/tools/generate-wire.py) to generate encoding, decoding and printing functions.

- We describe the JSON-RPC using [JSON Schema](https://json-schema.org/) in the [`doc/schemas`](https://github.com/ElementsProject/lightning/tree/master/doc/schemas) directory. Each method has a `.request.json` for the request message, and a `.schema.json` for the response (the mismatch is historical and will eventually be addressed). During tests the `pytest` target will verify responses, however the JSON-RPC methods are _not_ generated (yet?). We do generate various client stubs for languages, using the [`msggen`](https://github.com/ElementsProject/lightning/tree/master/contrib/msggen) tool. More on the generated stubs and utilities below.
## Man pages

The manpages are partially generated from the JSON schemas using the [`fromschema`](https://github.com/ElementsProject/lightning/blob/master/tools/fromschema.py) tool. It reads the request schema and fills in the manpage between two markers:

```markdown
[comment]: # (GENERATE-FROM-SCHEMA-START)
...
[comment]: # (GENERATE-FROM-SCHEMA-END)
```

> 📘
>
> Some of this functionality overlaps with [`msggen`](https://github.com/ElementsProject/lightning/tree/master/contrib/msggen) (parsing the Schemas) and [blockreplace.py](https://github.com/ElementsProject/lightning/blob/master/devtools/blockreplace.py) (filling in the template). It is likely that this will eventually be merged.
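The marker mechanism itself can be pictured with a small sketch (a hypothetical helper, not the actual `fromschema` or `blockreplace.py` code): everything between the two markers is discarded and replaced by freshly generated text, while the markers stay in place so the operation is repeatable.

```python
START = "[comment]: # (GENERATE-FROM-SCHEMA-START)"
END = "[comment]: # (GENERATE-FROM-SCHEMA-END)"


def fill_between_markers(page: str, generated: str) -> str:
    """Replace whatever sits between the START and END markers with
    freshly generated text, keeping the markers themselves intact."""
    before, rest = page.split(START, 1)
    _, after = rest.split(END, 1)
    return before + START + "\n" + generated + "\n" + END + after
```

Because the markers survive each run, regenerating the manpage is idempotent: running the fill twice with the same generated text produces the same document.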
## `msggen`

`msggen` is used to generate JSON-RPC client stubs, and converters between in-memory formats and the JSON format. In addition, by chaining some of these we can expose a [grpc](https://grpc.io/) interface that matches the JSON-RPC interface. This conversion chain is implemented in the [grpc-plugin](https://github.com/ElementsProject/lightning/tree/master/plugins/grpc-plugin).

[block:image]
{
  "images": [
    {
      "image": [
        "https://files.readme.io/8777cc4-image.png",
        null,
        null
      ],
      "align": "center",
      "caption": "Artifacts generated from the JSON Schemas using `msggen`"
    }
  ]
}
[/block]
### `cln-rpc`

We use `msggen` to generate the Rust bindings crate [`cln-rpc`](https://github.com/ElementsProject/lightning/tree/master/cln-rpc). These bindings contain the stubs for the JSON-RPC methods, as well as types for the request and response structs. The [generator code](https://github.com/ElementsProject/lightning/blob/master/contrib/msggen/msggen/gen/rust.py) maps each abstract JSON-RPC type to a Rust type, minimizing size (e.g., binary data is hex-decoded).

The calling pattern follows the `call(req_obj) -> resp_obj` format, and the individual arguments are not expanded. For more ergonomic handling of generic requests and responses we also define the `Request` and `Response` enumerations, so you can hand them to a generic function without having to resort to dynamic dispatch.

The remainder of the crate implements an async/await JSON-RPC client that can deal with the Unix Domain Socket [transport](ref:lightningd-rpc) used by CLN.
### `cln-grpc`

The `cln-grpc` crate is mostly used to provide the primitives to build the `grpc-plugin`. As mentioned above, the grpc functionality relies on a chain of generated parts:

- First, `msggen` is used to generate the [protobuf file](https://github.com/ElementsProject/lightning/blob/master/cln-grpc/proto/node.proto), containing the service definition with the method stubs, and the types referenced by those stubs.
- Next, it generates the `convert.rs` file, which is used to convert the structs for in-memory representation from `cln-rpc` into the corresponding protobuf structs.
- Finally, `msggen` generates the `server.rs` file, which can be bound to a grpc endpoint listening for incoming grpc requests; it converts each request and forwards it to the JSON-RPC, and upon receiving the response converts it back into a grpc response and sends it back.
@@ -0,0 +1,183 @@
---
title: "Coding Style Guidelines"
slug: "coding-style-guidelines"
hidden: false
createdAt: "2023-01-25T05:34:10.822Z"
updatedAt: "2023-01-25T05:50:05.437Z"
---
Style is an individualistic thing, but working on software is a group activity, so consistency is important. Generally our coding style is similar to the [Linux coding style](https://www.kernel.org/doc/html/v4.10/process/coding-style.html).

## Communication

We communicate with each other via code; we polish each other's code, and give nuanced feedback. Exceptions to the rules below always exist: accept them. Particularly if they're funny!

## Prefer Short Names

`num_foos` is better than `number_of_foos`, and `i` is better than `counter`. But `bool found;` is better than `bool ret;`. Be as short as you can but still descriptive.
## Prefer 80 Columns

We have to stop somewhere. The two tools here are extracting deeply-indented code into its own functions, and use of short-cuts such as early returns or continues, eg:

```c
for (i = start; i != end; i++) {
	if (i->something)
		continue;

	if (!i->something_else)
		continue;

	do_something(i);
}
```
## Tabs and indentation

The C code uses TAB characters with a visual indentation of 8 spaces.
If you submit code for review, make sure your editor knows this.

When breaking a line with more than 80 characters, align parameters and arguments like so:

```c
static void subtract_received_htlcs(const struct channel *channel,
				    struct amount_msat *amount)
```

Note: For more details, see the files `.clang-format` and `.editorconfig` in the project's root directory.
## Prefer Simple Statements

Notice the statement above uses separate tests, rather than combining them. We prefer to only combine conditionals which are fundamentally related, eg:

```c
if (i->something != NULL && *i->something < 100)
```
## Use of `take()`

Some functions have parameters marked with `TAKES`, indicating that they can take lifetime ownership of a parameter which is passed using `take()`. This can be a useful optimization which allows the function to avoid making a copy, but if you hand `take(foo)` to something which doesn't support `take()` you'll probably leak memory!

In particular, our automatically generated marshalling code doesn't support `take()`.

If you're allocating something simply to hand it via `take()` you should use NULL as the parent for clarity, eg:

```c
msg = towire_shutdown(NULL, &peer->channel_id, peer->final_scriptpubkey);
enqueue_peer_msg(peer, take(msg));
```
## Use of `tmpctx`

There's a convenient temporary context which gets cleaned regularly: you should use this for throwaways rather than (as you'll see some of our older code do!) grabbing some passing object to hang your temporaries off!
## Enums and Switch Statements

If you handle various enumerated values in a `switch`, don't use `default:` but instead mention every enumeration case-by-case. That way when a new enumeration case is added, most compilers will warn that you don't cover it. This is particularly valuable for code auto-generated from the specification!

## Initialization of Variables

Avoid double-initialization of variables; it's better to set them when they're known, eg:

```c
bool is_foo;

if (bar == foo)
	is_foo = true;
else
	is_foo = false;

...
if (is_foo)...
```

This way the compiler will warn you if you have one path which doesn't set the variable. If you initialize with `bool is_foo = false;` then you'll simply get that value without warning when you change the code and forget to set it on one path.
## Initialization of Memory

`valgrind` warns about decisions made on uninitialized memory. Prefer `tal` and `tal_arr` to `talz` and `tal_arrz` for this reason, and initialize only the fields you expect to be used.

Similarly, you can use `memcheck(mem, len)` to explicitly assert that memory should have been initialized, rather than having valgrind trigger later. We use this when placing things on queues, for example.
## Use of static and const

Everything should be declared static and const by default. Note that `tal_free()` can free a const pointer (also, that it returns `NULL`, for convenience).
## Typesafety Is Worth Some Pain

If code is typesafe, refactoring is as simple as changing a type and compiling to find where to refactor. We rely on this, so most places in the code will break if you hand the wrong type, eg `type_to_string` and `structeq`.

We have two tools to help us: one is the complicated macros in `ccan/typesafe_cb`, which allow you to create callbacks which must match the type of their argument, rather than using `void *`. The other is `ARRAY_SIZE`, a macro which won't compile if you hand it a pointer instead of an actual array.
## Use of `FIXME`

There are two cases in which you should use a `/* FIXME: */` comment: one is where an optimization is possible but it's not clear that it's yet worthwhile, and the other is to note an ugly corner case which could be improved (and may be in a following patch).

There are always compromises in code: eventually it needs to ship. `FIXME` is `grep`-fodder for yourself and others, as well as a useful warning sign if we later encounter an issue in some part of the code.

## If You Don't Know The Right Thing, Do The Simplest Thing

Sometimes the right way is unclear, so it's best not to spend time on it. It's far easier to rewrite simple code than complex code, too.

## Write For Today: Unused Code Is Buggy Code

Don't overdesign: complexity is a killer. If you need a fancy data structure, start with a brute force linked list. Once that's working, perhaps consider your fancy structure, but don't implement a generic thing. Use `/* FIXME: ...*/` to salve your conscience.

## Keep Your Patches Reviewable

Try to make a single change at a time. It's tempting to do "drive-by" fixes as you see other things, and a minimal amount is unavoidable, but you can end up shaving infinite yaks. This is a good time to drop a `/* FIXME: ...*/` comment and move on.
## Creating JSON APIs

Our JSON RPCs always return a top-level object. This allows us to add warnings (e.g. that we're still starting up) or other optional fields later.

Prefer to use JSON names which are already in use, or otherwise names from the BOLT specifications.

The same command should always return the same JSON format: this is why e.g. `listchannels` returns an array even if given an argument, so there's only zero or one entries.

All `warning` fields should have unique names which start with `warning_`, the value of which should be an explanation. This allows programs to deal with them sanely, and also perform translations.
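The `warning_` convention is what makes generic handling possible; a client can peel warnings off any response without knowing the command. A minimal sketch (hypothetical helper, not part of any CLN library):

```python
def split_warnings(response: dict):
    """Separate warning_* fields from the rest of a JSON-RPC result,
    so callers can log or translate warnings generically."""
    warnings = {k: v for k, v in response.items() if k.startswith("warning_")}
    rest = {k: v for k, v in response.items() if not k.startswith("warning_")}
    return rest, warnings
```

This works uniformly because warnings live at the same level as ordinary fields and are distinguishable purely by prefix.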
### Documenting JSON APIs

We use JSON schemas to validate that JSON-RPC returns are in the correct form, and also to generate documentation. See [Writing JSON Schemas](doc:writing-json-schemas).
## Changing JSON APIs

All JSON API changes need a Changelog line (see below).

You can always add a new output JSON field (Changelog-Added), but you cannot remove one without going through a 6-month deprecation cycle (Changelog-Deprecated). During deprecation, only output the field if `allow-deprecated-apis` is true, so users can test that their code is futureproof. After 6 months, remove it (Changelog-Removed).

Changing existing input parameters is harder, and should generally be avoided. Adding input parameters is possible, but should be done cautiously as too many parameters get unwieldy quickly.
## Github Workflows

We have adopted a number of workflows to facilitate the development of Core Lightning, and to make things more pleasant for contributors.

### Changelog Entries in Commit Messages

We maintain a changelog in the top-level directory of this project. However, since every pull request tends to touch the file and therefore create merge conflicts, we decided to derive the changelog file from the pull requests that were added between releases. In order for a pull request to show up in the changelog, at least one of its commits will have to have a line with one of the following prefixes:

- `Changelog-Added: ` if the pull request adds a new feature
- `Changelog-Changed: ` if a feature has been modified and might require changes on the user side
- `Changelog-Deprecated: ` if a feature has been marked for deprecation, but not yet removed
- `Changelog-Fixed: ` if a bug has been fixed
- `Changelog-Removed: ` if a (previously deprecated) feature has been removed
- `Changelog-Experimental: ` if it only affects `--enable-experimental-features` builds, or `experimental-` config options.

In case you think the pull request is small enough not to require a changelog entry, please use `Changelog-None` in one of the commit messages to opt out.

Under some circumstances a feature may be removed even without a deprecation warning if it was not part of a released version yet, or the removal is urgent.

In order to ensure that each pull request has the required `Changelog-*:` line for the changelog, our trusty @bitcoin-bot will check logs whenever a pull request is created or updated and search for the required line. If there is no such line, it'll mark the pull request as `pending` to call out the need for an entry.
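The check is conceptually simple: scan the commit messages for one of the known prefixes. Roughly (an illustrative sketch, not the actual bot's code):

```python
import re

# One of the documented Changelog-* prefixes, at the start of a line.
CHANGELOG_RE = re.compile(
    r"^Changelog-(Added|Changed|Deprecated|Fixed|Removed|Experimental|None)\b",
    re.MULTILINE,
)


def has_changelog_entry(commit_messages):
    """Return True if any commit message carries a Changelog-* line."""
    return any(CHANGELOG_RE.search(msg) for msg in commit_messages)
```

Note that `Changelog-None` counts as satisfying the check, since it is the explicit opt-out.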
@@ -0,0 +1,64 @@
---
title: "Writing JSON Schemas"
slug: "writing-json-schemas"
hidden: false
createdAt: "2023-01-25T05:46:43.718Z"
updatedAt: "2023-01-30T15:36:28.523Z"
---
A JSON Schema is a JSON file which defines what a structure should look like; in our case we use it in our testsuite to check that it matches command responses, and also use it to generate our documentation.

Yes, schemas are horrible to write, but they're damn useful. We can only use a subset of the full [JSON Schema Specification](https://json-schema.org/), but if you find that limiting it's probably a sign that you should simplify your JSON output.

## Updating a Schema

If you add a field, you should add it to the schema, and you must add `"added": "VERSION"` (where VERSION is the next release version!).

Similarly, if you deprecate a field, add `"deprecated": "VERSION"` (where VERSION is the next release version). They will be removed two versions later.
## How to Write a Schema

Name the schema doc/schemas/`command`.schema.json: the testsuite should pick it up and check all invocations of that command against it.

I recommend copying an existing one to start.

You will need to put the magic lines in the manual page so `make doc-all` will fill it in for you:

```markdown
[comment]: # (GENERATE-FROM-SCHEMA-START)
[comment]: # (GENERATE-FROM-SCHEMA-END)
```

If something goes wrong, try `tools/fromschema.py` doc/schemas/`command`.schema.json to see how far it got before it died.

You should always use `"additionalProperties": false`, otherwise your schema might not be covering everything. Deprecated fields simply have `"deprecated": true` in their properties, so they are allowed but omitted from the documentation.

You should always list all fields which are _always_ present in `"required"`.

We extend the basic types; see [fixtures.py](https://github.com/ElementsProject/lightning/tree/master/contrib/pyln-testing/pyln/testing/fixtures.py).

In addition, before committing a new schema or a new version of it, make sure that it is well formatted. If you don't want to do it by hand, use `make fmt-schema`, which uses jq under the hood.
### Using Conditional Fields

Sometimes one field is only sometimes present; if you can, you should make the schema know when it should (and should not!) be there.

There are two kinds of conditional fields expressible: fields which are only present if another field is present, or fields only present if another field has certain values.

To add conditional fields:

1. Do _not_ mention them in the main "properties" section.
2. Set `"additionalProperties": true` for the main "properties" section.
3. Add an `"allOf": [` array at the same height as `"properties"`. Inside this, place one `if`/`then` for each conditional field.
4. If a field simply requires another field to be present, use the pattern `"required": [ "field" ]` inside the "if".
5. If a field requires another field value, use the pattern `"properties": { "field": { "enum": [ "val1", "val2" ] } }` inside the "if".
6. Inside the "then", use `"additionalProperties": false` and place empty `{}` for all the other possible properties.
7. If you haven't covered all the possibilities with `if` statements, add an `else` with `"additionalProperties": false` which simply mentions every allowable property. This ensures that the fields can _only_ be present when conditions are met.
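Putting those steps together, here is a sketch of what such a schema fragment could look like for a hypothetical response with a `type` field and a conditional `witness` field that is only present when `type` is `"bech32"` (field names are invented purely for illustration):

```json
{
  "type": "object",
  "additionalProperties": true,
  "required": ["type"],
  "properties": {
    "type": {
      "enum": ["bech32", "p2sh-segwit"]
    }
  },
  "allOf": [
    {
      "if": {
        "properties": { "type": { "enum": ["bech32"] } }
      },
      "then": {
        "additionalProperties": false,
        "required": ["witness"],
        "properties": {
          "type": {},
          "witness": { "type": "string" }
        }
      },
      "else": {
        "additionalProperties": false,
        "properties": {
          "type": {}
        }
      }
    }
  ]
}
```

Note how the `then` and `else` branches each set `"additionalProperties": false` and enumerate every allowed property (with `{}` placeholders), so `witness` can only ever appear under the stated condition.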
### JSON Drinking Game!

1. Sip whenever you have an additional comma at the end of a sequence.
2. Sip whenever you omit a comma in a sequence because you cut & paste.
3. Skull whenever you wish JSON had comments.
@@ -0,0 +1,237 @@
---
title: "CLN Architecture"
slug: "contribute-to-core-lightning"
excerpt: "Familiarise yourself with the core components of Core Lightning."
hidden: false
createdAt: "2022-11-18T14:28:33.564Z"
updatedAt: "2023-02-21T15:12:37.888Z"
---
The Core Lightning project implements the lightning protocol as specified in [various BOLTs](https://github.com/lightning/bolts). It's broken into subdaemons, with the idea being that we can add more layers of separation between different clients and extra barriers to exploits.

To read the code, you should start from [lightningd.c](https://github.com/ElementsProject/lightning/blob/master/lightningd/lightningd.c) and hop your way through the '~' comments at the head of each daemon in the suggested order.
## The Components

Here's a list of parts, with notes:

- ccan - useful routines from <http://ccodearchive.net>
  - Use `make update-ccan` to update it.
  - Use `make update-ccan CCAN_NEW="mod1 mod2..."` to add modules.
  - Do not edit this! If you want a wrapper, add one to `common/utils.h`.

- bitcoin/ - bitcoin script, signature and transaction routines.
  - Not a complete set, but enough for our purposes.

- external/ - external libraries from other sources
  - libbacktrace - library to provide backtraces when things go wrong.
  - libsodium - encryption library (should be replaced soon with built-in)
  - libwally-core - bitcoin helper library
  - secp256k1 - bitcoin curve encryption library within libwally-core
  - jsmn - tiny JSON parsing helper

- tools/ - tools for building
  - check-bolt.c: check the source code contains correct BOLT quotes (as used by check-source)
  - generate-wire.py: generates wire marshalling/un-marshalling routines for subdaemons and BOLT specs.
  - mockup.sh / update-mocks.sh: tools to generate mock functions for unit tests.

- tests/ - blackbox tests (mainly)
  - unit tests are in tests/ subdirectories in each other directory.

- doc/ - you are here

- devtools/ - tools for developers
  - Generally for decoding our formats.

- contrib/ - python support and other stuff which doesn't belong :)

- wire/ - basic marshalling/un-marshalling for messages defined in the BOLTs

- common/ - routines needed by any two or more of the directories below

- cli/ - commandline utility to control the lightning daemon.

- lightningd/ - master daemon which controls the subdaemons and passes peer file descriptors between them.

- wallet/ - database code used by the master for tracking what's happening.

- hsmd/ - daemon which looks after the cryptographic secret, and performs commitment signing.

- gossipd/ - daemon to maintain routing information and broadcast gossip.

- connectd/ - daemon to connect to other peers, and receive incoming connections.

- openingd/ - daemon to open a channel for a single peer, and chat to a peer which doesn't have any channels.

- channeld/ - daemon to operate a single peer once the channel is operating normally.

- closingd/ - daemon to handle mutual closing negotiation with a single peer.

- onchaind/ - daemon to handle a single channel which has had its funding transaction spent.
## Database
|
||||
|
||||
Core Lightning state is persisted in `lightning-dir`. It is a sqlite database stored in the `lightningd.sqlite3` file, typically under `~/.lightning/<network>/`.
|
||||
You can run queries against this file like so:
|
||||
|
||||
```shell
|
||||
$ sqlite3 ~/.lightning/bitcoin/lightningd.sqlite3 \
|
||||
"SELECT HEX(prev_out_tx), prev_out_index, status FROM outputs"
|
||||
```
|
||||
|
||||
|
||||
|
||||
Or you can launch into the sqlite3 repl and check things out from there:
|
||||
|
||||
```shell
|
||||
$ sqlite3 ~/.lightning/bitcoin/lightningd.sqlite3
|
||||
SQLite version 3.21.0 2017-10-24 18:55:49
|
||||
Enter ".help" for usage hints.
|
||||
sqlite> .tables
|
||||
channel_configs invoices peers vars
|
||||
channel_htlcs outputs shachain_known version
|
||||
channels payments shachains
|
||||
sqlite> .schema outputs
|
||||
...
|
||||
```
|
||||
|
||||
|
||||
|
||||
Some data is stored as raw bytes, use `HEX(column)` to pretty print these.
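The same `HEX()` function is available from any SQLite client; here is a minimal sketch using Python's stdlib `sqlite3` with an in-memory stand-in table (the schema here is simplified for illustration, not the real one):

```python
import sqlite3

# In-memory database standing in for lightningd.sqlite3; `outputs` here
# is a simplified stand-in for the real table.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE outputs (prev_out_tx BLOB, prev_out_index INT)")
db.execute("INSERT INTO outputs VALUES (?, ?)", (bytes.fromhex("deadbeef"), 0))

# HEX() renders the raw bytes as an uppercase hex string.
row = db.execute("SELECT HEX(prev_out_tx), prev_out_index FROM outputs").fetchone()
print(row)  # ('DEADBEEF', 0)
```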
Make sure that Core Lightning is not running when you query the database, as some queries may lock the database and cause crashes.
#### Common variables

Table `vars` contains global variables used by the lightning node.

```shell
$ sqlite3 ~/.lightning/bitcoin/lightningd.sqlite3
SQLite version 3.21.0 2017-10-24 18:55:49
Enter ".help" for usage hints.
sqlite> .headers on
sqlite> select * from vars;
name|val
next_pay_index|2
bip32_max_index|4
...
```

Variables:

- `next_pay_index`: the next resolved invoice counter that will get assigned.
- `bip32_max_index`: the last wallet derivation counter.

Note: each time the `newaddr` command is called, the `bip32_max_index` counter is increased to the last derivation index. Addresses generated beyond `bip32_max_index` are not included as lightning funds.

# gossip_store: Direct Access To Lightning Gossip

The `lightning_gossipd` daemon stores the gossip messages, along with some internal data, in a file called the "gossip_store". Various plugins and daemons access this (in a read-only manner), and the format is documented here.

## The File Header

```
u8 version;
```

The gossmap header consists of one byte. The top 3 bits are the major version: if these are not all zero, you need to re-read this (updated) document to see what changed. The lower 5 bits are the minor version, which won't worry you: currently they will be 11.

After the file header comes a number of records.
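Decoding the version byte is a one-liner in any language; a sketch:

```python
def parse_version(byte: int) -> tuple:
    """Split the gossip_store version byte into (major, minor)."""
    major = byte >> 5       # top 3 bits
    minor = byte & 0x1f     # lower 5 bits
    return major, minor

# A store written at minor version 11 has version byte 0x0b.
major, minor = parse_version(0x0b)
assert major == 0, "unknown major version: re-read the format docs"
print(major, minor)  # 0 11
```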
## The Record Header

```
be16 flags;
be16 len;
be32 crc;
be32 timestamp;
```

Each record consists of a header and a message. The header is big-endian, containing: the flags, the length of the following body, the crc32c checksum (computed over the message, prefixed by the timestamp field of the header), and a timestamp extracted from certain messages (zero where not relevant; ignore it in those cases).

The flags currently defined are:

```
#define DELETED 0x8000
#define PUSH 0x4000
#define RATELIMIT 0x2000
```

Deleted records should be ignored: on restart, they will be removed as the gossip_store is rewritten.

The push flag indicates gossip which is generated locally: this is important for gossip timestamp filtering, where peers request gossip and we always send our own gossip messages even if the timestamp wasn't within their request.

The ratelimit flag indicates that this gossip message came too fast: we record it, but don't relay it to peers.

Other flags should be ignored.
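A reader can unpack the 12-byte header with `struct`; a minimal sketch (crc32c verification is omitted, since it needs a non-stdlib library):

```python
import struct

# Flag bits copied from the definitions above.
DELETED, PUSH, RATELIMIT = 0x8000, 0x4000, 0x2000

def parse_record_header(raw: bytes) -> dict:
    """Unpack the big-endian record header: flags, len, crc, timestamp."""
    flags, length, crc, timestamp = struct.unpack(">HHII", raw[:12])
    return {
        "deleted": bool(flags & DELETED),
        "push": bool(flags & PUSH),
        "ratelimit": bool(flags & RATELIMIT),
        "len": length,
        "crc": crc,
        "timestamp": timestamp,
    }

# A deleted record of length 136 with zero crc/timestamp:
hdr = parse_record_header(struct.pack(">HHII", 0x8000, 136, 0, 0))
print(hdr["deleted"], hdr["len"])  # True 136
```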
## The Message

Each message consists of a 16-bit big-endian "type" field (for efficiency, an implementation may read this along with the header), and optional data. Some messages are defined by the BOLT 7 gossip protocol, others are for internal use. Unknown message types should be skipped over.

### BOLT 7 Messages

These are the messages which gossipd has validated and ensured are in order.

- `channel_announcement` (256): a complete, validated channel announcement. This will always come before any `channel_update` which refers to it, or `node_announcement` which refers to a node.
- `channel_update` (258): a complete, validated channel update. Note that you can see multiple of these (old ones will be deleted as they are replaced, though).
- `node_announcement` (257): a complete, validated node announcement. Note that you can also see multiple of these (old ones will be deleted as they are replaced).

### Internal Gossip Daemon Messages

These messages contain additional data, which may be useful.

- `gossip_store_channel_amount` (4101)
  - `satoshis`: u64

  This always immediately follows `channel_announcement` messages, and contains the actual capacity of the channel.

- `gossip_store_private_channel` (4104)
  - `amount_sat`: u64
  - `len`: u16
  - `announcement`: u8[len]

  This contains information about a private (could be made public later!) channel, with the announcement in the same format as a normal `channel_announcement`, but with invalid signatures.

- `gossip_store_private_update` (4102)
  - `len`: u16
  - `update`: u8[len]

  This contains a private `channel_update` (i.e. for a channel described by `gossip_store_private_channel`).

- `gossip_store_delete_chan` (4103)
  - `scid`: u64

  This is added when a channel is deleted. You won't often see this if you're reading the file once (as the channel record header will have been marked `deleted` first), but it is useful if you are polling the file for updates.

- `gossip_store_ended` (4105)
  - `equivalent_offset`: u64

  This is only ever added as the final entry in the gossip_store. It means the file has been deleted (usually because lightningd has been restarted), and you should re-open it. As an optimization, the `equivalent_offset` in the new file reflects the point at which the new gossip_store is equivalent to this one (with deleted records removed). However, if lightningd has been restarted multiple times it is possible that this offset is not valid, so it's really only useful if you're actively monitoring the file.

- `gossip_store_chan_dying` (4106)
  - `scid`: u64
  - `blockheight`: u32

  This is placed in the gossip_store file when a funding transaction is spent. `blockheight` is set to 12 blocks beyond the block containing the spend: at this point, gossipd will delete the channel.
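As an illustration of the internal messages, the body of a `gossip_store_channel_amount` record is just the 16-bit type followed by a big-endian u64; a sketch:

```python
import struct

GOSSIP_STORE_CHANNEL_AMOUNT = 4101

def parse_channel_amount(msg: bytes) -> int:
    """Return the channel capacity in satoshis from a
    gossip_store_channel_amount message (type + u64, big-endian)."""
    msgtype, satoshis = struct.unpack(">HQ", msg)
    assert msgtype == GOSSIP_STORE_CHANNEL_AMOUNT
    return satoshis

# Fabricated example message with a 16777216 sat capacity:
print(parse_channel_amount(struct.pack(">HQ", 4101, 16777216)))  # 16777216
```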
## Using the Gossip Store File

- Always check the major version number! We will increment it if the format changes in a way that breaks readers.
- Ignore unknown flags in the header.
- Ignore message types you don't know.
- You don't need to check the messages, as they have been validated.
- It is possible to see a partially-written record at the end. Ignore it.

If you are keeping the file open to watch for changes:

- The file is append-only, so you can try reading more records using inotify (or equivalent), or simply check every few seconds.
- If you see a `gossip_store_ended` message, reopen the file.
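Putting those rules together, a polling reader might look like this sketch: it stops at a partial record and resumes from the same offset on the next poll (crc checking and `gossip_store_ended` handling are left out):

```python
import io
import struct

HDR = struct.Struct(">HHII")  # flags, len, crc, timestamp
DELETED = 0x8000

def read_records(f):
    """Yield (flags, message) for each complete, non-deleted record,
    leaving the file positioned at the start of any partial record."""
    while True:
        start = f.tell()
        hdr = f.read(HDR.size)
        if len(hdr) < HDR.size:
            f.seek(start)       # partial header: retry on next poll
            return
        flags, length, _crc, _ts = HDR.unpack(hdr)
        msg = f.read(length)
        if len(msg) < length:
            f.seek(start)       # partial body: retry on next poll
            return
        if not flags & DELETED:
            yield flags, msg

# Demo on an in-memory "store": version byte, one complete record,
# then a record whose body was truncated mid-write.
buf = io.BytesIO(b"\x0b" + HDR.pack(0, 4, 0, 0) + b"\x01\x02\x03\x04"
                 + HDR.pack(0, 10, 0, 0) + b"\x01\x02")
assert buf.read(1) == b"\x0b"   # skip the file header
records = list(read_records(buf))
print(len(records))  # 1
```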
doc/guides/Contribute to Core Lightning/contributor-workflow.md (new file, 157 lines)
---
title: "Contributor Workflow"
slug: "contributor-workflow"
excerpt: "Learn the practical process and guidelines for contributing."
hidden: false
createdAt: "2022-12-09T09:57:57.245Z"
updatedAt: "2023-04-22T13:00:38.252Z"
---
## Build and Development

Install the following dependencies for best results:

```shell
sudo apt update
sudo apt install valgrind cppcheck shellcheck libsecp256k1-dev libpq-dev
```

Re-run `configure` and build using `make`:

```shell
./configure --enable-developer
make -j$(nproc)
```

## Debugging

You can build Core Lightning with `DEVELOPER=1` to use dev commands listed in `cli/lightning-cli help`; `./configure --enable-developer` will do that. You can log console messages with `log_info()` in lightningd and `status_debug()` in other subdaemons.

You can debug crashing subdaemons with the argument `--dev-debugger=channeld`, where `channeld` is the subdaemon name. It will run `gnome-terminal` by default with a gdb attached to the subdaemon when it starts. You can change the terminal used by setting the `DEBUG_TERM` environment variable, such as `DEBUG_TERM="xterm -e"` or `DEBUG_TERM="konsole -e"`.

It will also print out (to stderr) the gdb command for manual connection. The subdaemon will be stopped (it sends itself a `SIGSTOP`); you'll need to `continue` in gdb.

## Making BOLT Modifications

All of the code for marshalling/unmarshalling BOLT protocol messages is generated directly from the spec, pegged to the `BOLTVERSION` specified in the `Makefile`.

## Source code analysis

An updated version of the NCC source code analysis tool is available at <https://github.com/bitonic-cjp/ncc>.

It can be used to analyze the lightningd source code by running `make clean && make ncc`. The output (which is built in parallel with the binaries) is stored in `.nccout` files. You can browse it, for instance, with a command like `nccnav lightningd/lightningd.nccout`.

## Subtleties

There are a few subtleties you should be aware of as you modify deeper parts of the code:

- `ccan/structeq`'s `STRUCTEQ_DEF` will define a safe comparison function `foo_eq()` for struct `foo`, failing the build if the structure has implied padding.
- `command_success`, `command_fail`, and `command_fail_detailed` will free the `cmd` you pass in. This also means that anything you `tal`-allocated from the `cmd` will also get freed at those points and will no longer be accessible afterwards.
- When making a structure part of a list, you will embed a `struct list_node`. This has to be the _first_ field of the structure, or else the `dev-memleak` command will think your structure has leaked.

## Protocol Modifications

The source tree contains CSV files extracted from the v1.0 BOLT specifications (`wire/extracted_peer_wire_csv` and `wire/extracted_onion_wire_csv`). You can regenerate these by first deleting the local copy (if any) in the `.tmp.bolts` directory, setting `BOLTDIR` and `BOLTVERSION` appropriately, and finally running `make extract-bolt-csv`. By default, the bolts will be retrieved from the `../bolts` directory at a recent git version.

e.g., `make extract-bolt-csv BOLTDIR=../bolts BOLTVERSION=ee76043271f79f45b3392e629fd35e47f1268dc8`

## Release checklist

Here's a checklist for the release process.

### Leading Up To The Release

1. Talk to the team about whether there are any changes which MUST go in this release and which may cause delay.
2. Look through outstanding issues to identify any problems that might need fixing before the release. Good candidates are reports of the project not building on different architectures, or crashes.
3. Identify a good lead for each outstanding issue, and ask them about a fix timeline.
4. Create a milestone for the _next_ release on GitHub, and go through open issues and PRs and mark them accordingly.
5. Ask (via email) the most significant contributor who has not already named a release to name the release (use `devtools/credit` to find this contributor). CC previous namers and the team.

### Preparing for -rc1

1. Check that `CHANGELOG.md` is well formatted, ordered in areas, covers all significant changes, and is sub-ordered approximately by user impact & coolness.
2. Use `devtools/changelog.py` to collect the changelog entries from pull request commit messages and merge them into the manually maintained `CHANGELOG.md`. This does API queries to GitHub, which are severely rate-limited unless you use an API token: set the `GH_TOKEN` environment variable to a Personal Access Token from <https://github.com/settings/tokens>.
3. Create a new `CHANGELOG.md` heading `v<VERSION>rc1`, and create a link at the bottom. Note that you should exactly copy the date and name format from a previous release, as the `build-release.sh` script relies on this.
4. Update the contrib/pyln package versions: `make update-pyln-versions NEW_VERSION=<VERSION>`
5. Create a PR with the above.

### Releasing -rc1

1. Merge the above PR.
2. Tag it: `git pull && git tag -s v<VERSION>rc1`. Note that you should get a prompt to give this tag a 'message'. Make sure you fill this in.
3. Confirm that the tag will show up for builds with `git describe`.
4. Push the tag to the remote: `git push --tags`.
5. Update the /topic on #c-lightning on Libera.
6. Prepare draft release notes (see `devtools/credit`), and share with the team for editing.
7. Upgrade your personal nodes to the rc1, to help testing.
8. Test `tools/build-release.sh` to build the non-reproducible images and the reproducible zipfile.
9. Use the zipfile to produce a [reproducible build](doc:repro).

### Releasing -rc2, etc.

1. Change rc1 to rc2 in `CHANGELOG.md`.
2. Add a PR with the rc2.
3. Tag it: `git pull && git tag -s v<VERSION>rc2 && git push --tags`
4. Update the /topic on #c-lightning on Libera.
5. Upgrade your personal nodes to the rc2.

### Tagging the Release

1. Update `CHANGELOG.md`: remove "-rcN" in both places, update the date, and add the title and namer.
2. Update the contrib/pyln package versions: `make update-pyln-versions NEW_VERSION=<VERSION>`
3. Add a PR with that release.
4. Merge the PR, then:
   1. `export VERSION=0.9.3`
   2. `git pull`
   3. `git tag -a -s v${VERSION} -m v${VERSION}`
   4. `git push --tags`
5. Run `tools/build-release.sh` to build the non-reproducible images and the reproducible zipfile.
6. Use the zipfile to produce a [reproducible build](REPRODUCIBLE.md).
7. To create and sign checksums, start by entering the release dir: `cd release`
8. Create the checksums for signing: `sha256sum * > SHA256SUMS`
9. Create the first signature with `gpg -sb --armor SHA256SUMS`
10. The tarballs may be owned by root, so revert ownership if necessary: `sudo chown ${USER}:${USER} *${VERSION}*`
11. Upload the resulting files to GitHub and save as a draft (<https://github.com/ElementsProject/lightning/releases/>).
12. Ping the rest of the team to check the `SHA256SUMS` file and have them send their `gpg -sb --armor SHA256SUMS`.
13. Append the signatures into a file called `SHA256SUMS.asc`, verify with `gpg --verify SHA256SUMS.asc`, and include the file in the draft release.
14. `make pyln-release` to upload pyln modules to pypi.org. This requires keys for each of pyln-client, pyln-proto, and pyln-testing accessible to poetry. This can be done by configuring the python keyring library along with a suitable backend. Alternatively, the key can be set as an environment variable and each of the pyln releases can be built and published independently:
    - `export POETRY_PYPI_TOKEN_PYPI=<pyln-client token>`
    - `make pyln-release-client`
    - ... repeat for each pyln package.

### Performing the Release

1. Edit the GitHub draft and include the `SHA256SUMS.asc` file.
2. Publish the release as not a draft.
3. Update the /topic on #c-lightning on Libera.
4. Send a mail to the c-lightning and lightning-dev mailing lists, using the same wording as the release notes on GitHub.

### Post-release

1. Look through PRs which were delayed for release and merge them.
2. Close out the milestone for the now-shipped release.
3. Update this file with any missing or changed instructions.
doc/guides/Contribute to Core Lightning/security-policy.md (new file, 29 lines)
---
title: "Security policy"
slug: "security-policy"
excerpt: "Learn how to responsibly report a security issue."
hidden: false
createdAt: "2022-12-09T09:58:38.899Z"
updatedAt: "2023-02-21T15:15:57.281Z"
---
## Supported Versions

We have a 3-month release cycle, and the last two versions are supported.

## Reporting a Vulnerability

To report security issues, send an email to rusty `at` rustcorp.com.au, or security `at` blockstream.com (not for support).

## Signatures For Releases

The following keys may be used to communicate sensitive information to developers, and to validate signatures on releases:

| Name             | Fingerprint                                       |
| ---------------- | ------------------------------------------------- |
| Rusty Russell    | 15EE 8D6C AB0E 7F0C F999 BFCB D920 0E6C D1AD B8F1 |
| Christian Decker | B731 AAC5 21B0 1385 9313 F674 A26D 6D9F E088 ED58 |
| Lisa Neigut      | 30DE 693A E0DE 9E37 B3E7 EB6B BFF0 F678 10C1 EED1 |
| Alex Myers       | 0437 4E42 789B BBA9 462E 4767 F3BF 63F2 7474 36AB |

You can import a key by running the following command with that individual's fingerprint: `gpg --keyserver hkps://keys.openpgp.org --recv-keys "<fingerprint>"`. Ensure that you put quotes around fingerprints containing spaces.
doc/guides/Contribute to Core Lightning/testing.md (new file, 200 lines)
---
title: "Testing"
slug: "testing"
excerpt: "Understand the testing and code review practices."
hidden: false
createdAt: "2022-12-09T09:58:21.295Z"
updatedAt: "2023-04-22T11:58:25.622Z"
---
# Testing

Tests are run with `make check [flags]`, where the pertinent flags are:

```shell
DEVELOPER=[0|1] - developer mode increases test coverage
VALGRIND=[0|1]  - detects memory leaks during test execution but adds a significant delay
PYTEST_PAR=n    - runs pytests in parallel
```

A modern desktop can build and run through all the tests in a couple of minutes with:

```shell
make -j12 full-check PYTEST_PAR=24 DEVELOPER=1 VALGRIND=0
```

Adjust `-j` and `PYTEST_PAR` accordingly for your hardware.

There are four kinds of tests:

- **source tests** - run by `make check-source`; these look for whitespace and header-order problems, and check the formatted quotes from the BOLTs if `BOLTDIR` exists.

- **unit tests** - standalone programs that can be run individually. You can run all of the unit tests with `make check-units`. They are `run-*.c` files in `test/` subdirectories, used to test routines inside C source files.

  When implementing a unit test, you should insert the lines:

  `/* AUTOGENERATED MOCKS START */`

  `/* AUTOGENERATED MOCKS END */`

  and `make update-mocks` will automatically generate stub functions which will allow you to link (and conveniently crash if they're called).

- **blackbox tests** - these set up a mini-regtest environment and test lightningd as a whole. They can be run individually:

  `PYTHONPATH=contrib/pylightning:contrib/pyln-client:contrib/pyln-testing:contrib/pyln-proto py.test -v tests/`

  You can also append `-k TESTNAME` to run a single test. The environment variables `DEBUG_SUBD=<subdaemon>` and `TIMEOUT=<seconds>` can be useful for debugging subdaemons on individual tests.

- **pylightning tests** - check contrib pylightning for code style and then run the tests in `contrib/pylightning/tests`:

  `make check-python`

Our GitHub Actions instance (see `.github/workflows/*.yml`) runs all of these for each pull request.

#### Additional Environment Variables

```text
TEST_CHECK_DBSTMTS=[0|1]            - When running blackbox tests, this will
                                      load a plugin that logs all compiled
                                      and expanded database statements.
                                      Note: Only SQLite3.
TEST_DB_PROVIDER=[sqlite3|postgres] - Selects the database to use when running
                                      blackbox tests.
EXPERIMENTAL_DUAL_FUND=[0|1]        - Enable dual-funding tests.
```

#### Troubleshooting

##### Valgrind complains about code we don't control

Sometimes `valgrind` will complain about code we do not control ourselves, either because it's in a library we use or it's a false positive. There are generally three ways to address these issues (in descending order of preference):

1. Add a suppression for the one specific call that is causing the issue. Upon finding an issue, `valgrind` is instructed in the testing framework to print filters that'd match the issue. These can be added to the suppressions file under `tests/valgrind-suppressions.txt` in order to explicitly skip reporting them in future. This is preferred over the other solutions since it only disables reporting selectively, for things that were manually checked. See the [valgrind docs](https://valgrind.org/docs/manual/manual-core.html#manual-core.suppress) for details.
2. Add the process that `valgrind` is complaining about to the `--trace-children-skip` argument in `pyln-testing`. This is used in cases of full binaries not being under our control, such as the `python3` interpreter used in tests that run plugins. Do not use this for binaries that are compiled from our code, as it tends to mask real issues.
3. Mark the test as skipped if running under `valgrind`. This is mostly used to skip tests that would otherwise take considerably too long on CI. We discourage this for suppressions, since it is a very blunt tool.

# Fuzz testing

Core Lightning currently supports coverage-guided fuzz testing using [LLVM's libfuzzer](https://www.llvm.org/docs/LibFuzzer.html) when built with `clang`.

The goal of fuzzing is to generate mutated (and often unexpected) inputs (`seed`s) to pass to (parts of) a program (`target`) in order to make sure the codepaths used:

- do not crash
- are valid (if combined with sanitizers)

The generated seeds can be stored and form a `corpus`, which we try to optimise (we don't store two seeds that lead to the same codepath).

For more info about fuzzing see [here](https://github.com/google/fuzzing/tree/master/docs), and for more about `libfuzzer` in particular see [here](https://www.llvm.org/docs/LibFuzzer.html).

## Build the fuzz targets

In order to build the Core Lightning binaries with code coverage you will need a recent [clang](http://clang.llvm.org/). The more recent the compiler version, the better.

Then you'll need to enable support at configuration time. You likely want to enable a few sanitizers for bug detection, as well as experimental features for extended coverage (not required, though):

```shell
DEVELOPER=1 EXPERIMENTAL_FEATURES=1 ASAN=1 UBSAN=1 VALGRIND=0 FUZZING=1 CC=clang ./configure && make
```

The targets will be built in `tests/fuzz/` as `fuzz-` binaries, with their best known seed corpora stored in `tests/fuzz/corpora/`.

You can run the fuzz targets on their seed corpora to check for regressions:

```shell
make check-fuzz
```

## Run one or more targets

You can run each target independently. Pass `-help=1` to see available options, for example:

```shell
./tests/fuzz/fuzz-addr -help=1
```

Otherwise, you can use the Python runner, either to run the targets against a given seed corpus:

```shell
./tests/fuzz/run.py fuzz_corpus -j2
```

Or to extend this corpus:

```shell
./tests/fuzz/run.py fuzz_corpus -j2 --generate --runs 12345
```

The latter will run all targets, two at a time, `12345` times.

If you want to contribute new seeds, be sure to merge your corpus with the main one:

```shell
./tests/fuzz/run.py my_locally_extended_fuzz_corpus -j2 --generate --runs 12345
./tests/fuzz/run.py tests/fuzz/corpora --merge_dir my_locally_extended_fuzz_corpus
```

## Improve seed corpora

If you find coverage-increasing inputs while fuzzing, please create a pull request to add them to `tests/fuzz/corpora`. Be sure to minimize any additions to the corpora first.

### Example

Here's an example workflow to contribute new inputs for the `fuzz-addr` target.

Create a directory for newly found corpus inputs and begin fuzzing:

```shell
mkdir -p local_corpora/fuzz-addr
./tests/fuzz/fuzz-addr -jobs=4 local_corpora/fuzz-addr tests/fuzz/corpora/fuzz-addr/
```

After some time, libFuzzer may find some potential coverage-increasing inputs and save them in `local_corpora/fuzz-addr`. We can then merge them into the seed corpora in `tests/fuzz/corpora`:

```shell
./tests/fuzz/run.py tests/fuzz/corpora --merge_dir local_corpora
```

This will copy over any inputs that improve the coverage of the existing corpus. If any new inputs were added, create a pull request to improve the upstream seed corpus:

```shell
git add tests/fuzz/corpora/fuzz-addr/*
git commit
...
```

## Write new fuzzing targets

In order to write a new target:

- include the `libfuzz.h` header
- fill in two functions: `init()` for static stuff and `run()`, which will be called repeatedly with mutated data
- read about [what makes a good fuzz target](https://github.com/google/fuzzing/blob/master/docs/good-fuzz-target.md)

A simple example is [`fuzz-addr`](https://github.com/ElementsProject/lightning/blob/master/tests/fuzz/fuzz-addr.c). It sets up the chainparams and context (wally, tmpctx, ...) in `init()`, then brute-forces the bech32 encoder in `run()`.
doc/guides/Developer-s Guide/api-reference.md (new file, 10 lines)
---
title: "API Reference"
slug: "api-reference"
excerpt: "View all API methods, attributes and responses."
hidden: false
createdAt: "2022-12-09T09:57:05.023Z"
updatedAt: "2023-01-22T18:11:50.838Z"
type: "link"
link_url: "https://docs.corelightning.org/reference"
---
doc/guides/Developer-s Guide/app-development.md (new file, 31 lines)
---
title: "App Development"
slug: "app-development"
excerpt: "Build a lightning application using Core Lightning APIs."
hidden: false
createdAt: "2022-12-09T09:56:04.704Z"
updatedAt: "2023-02-21T13:48:15.261Z"
---
There are several ways to connect and interact with a Core Lightning node in order to build a lightning app or integrate lightning in your application.

- Using **[JSON-RPC commands](doc:json-rpc)** if you're building an application on the same system as the CLN node.
- Using **[gRPC APIs](doc:grpc)** if you're building an application on a remote client and want to connect to the CLN node over a secure network.
- Using **[Commando](doc:commando)** to connect to a CLN node over the lightning network and issue commands.
- Third-party libraries that offer **[REST](doc:third-party-libraries#rest)**, **[GraphQL](doc:third-party-libraries#graphql)** or **[JSON over HTTPS](doc:third-party-libraries#json-over-https)** frameworks to connect to a CLN node remotely.

[block:image]
{
  "images": [
    {
      "image": [
        "https://files.readme.io/b8d50a6-cln-api.png",
        null,
        "A visual chart of all interface and transport protocols to interact with a CLN node."
      ],
      "align": "center",
      "border": true,
      "caption": "A visual chart of available API interfaces and transport protocols for interacting with a CLN node"
    }
  ]
}
[/block]
doc/guides/Developer-s Guide/app-development/commando.md (new file, 28 lines)
---
title: "Commando"
slug: "commando"
hidden: false
createdAt: "2023-02-08T09:54:01.784Z"
updatedAt: "2023-02-21T13:55:16.224Z"
---

> 📘
>
> Used for applications that want to connect to a CLN node over the lightning network in a secure manner.

Commando is a direct-to-node plugin that ships natively with Core Lightning. It lets you set _runes_ to create fine-grained access controls to a CLN node's RPCs, and provides access to those same RPCs via Lightning-native network connections.

The commando plugin adds two new RPC methods: `commando` and `commando-rune`.

- `commando` allows you to send an RPC request to a directly connected peer, which will run it and send the result back to you. This uses the secure connections that Lightning nodes establish with each other on connect. As arbitrary RPC execution by any connected node can be dangerous, the peer will generally only run the command if you've also provided a `rune`.
- `commando-rune` allows you to construct a base64-encoded permissions string that can be handed to peers, allowing them to use commando to query your node or ask it for things remotely. Restrictions can only ever be added to a rune, never removed, so no one can loosen a rune you've generated and handed them. These restrictions let you carefully craft which RPC commands a caller may access, how many times they may do so, how long the rune stays valid, and more.
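The append-only nature of rune restrictions can be illustrated with a small hash-chain sketch. This is a simplified illustration of the underlying idea only, not Core Lightning's actual rune encoding (real runes use a different wire format; see the `commando-rune` docs):

```python
# Simplified sketch of the idea behind runes: the authenticator is a hash
# chain seeded with a server-side secret, and each added restriction extends
# the chain. Anyone holding a rune can ADD a restriction (hash it in), but
# removing one would require inverting the hash. This is NOT Core
# Lightning's actual rune format; it only demonstrates the append-only
# property.
import base64
import hashlib


def derive(secret: bytes, restrictions: list) -> bytes:
    """Recompute the authenticator from the secret (server side only)."""
    state = hashlib.sha256(secret).digest()
    for r in restrictions:
        state = hashlib.sha256(state + r.encode()).digest()
    return state


def encode_rune(secret: bytes, restrictions: list) -> str:
    auth = derive(secret, restrictions).hex()
    body = "&".join(restrictions)
    return base64.urlsafe_b64encode(f"{auth}|{body}".encode()).decode()


def add_restriction(rune: str, restriction: str) -> str:
    """Tighten a rune without knowing the secret: extend the hash chain."""
    auth, _, body = base64.urlsafe_b64decode(rune).decode().partition("|")
    new_auth = hashlib.sha256(bytes.fromhex(auth) + restriction.encode()).hexdigest()
    new_body = f"{body}&{restriction}" if body else restriction
    return base64.urlsafe_b64encode(f"{new_auth}|{new_body}".encode()).decode()


def verify(secret: bytes, rune: str) -> bool:
    auth, _, body = base64.urlsafe_b64decode(rune).decode().partition("|")
    restrictions = body.split("&") if body else []
    return derive(secret, restrictions).hex() == auth


secret = b"node-master-secret"
rune = encode_rune(secret, ["method=listfunds"])
tighter = add_restriction(rune, "rate=60")  # holder tightens without the secret
```

Anyone can derive a more restrictive rune from one they hold, but forging a looser rune would require the node's secret.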
For more details on using runes, read through the docs for [commando](ref:lightning-commando) and [commando-rune](ref:lightning-commando-rune).

Check out [this](https://www.youtube.com/watch?v=LZLRCPNn7vA) video of William Casarin (@jb55) walking through how to create runes and connect to a Lightning node via [lnsocket](https://github.com/jb55/lnsocket).

> 📘 Pro-tip
>
> - **[lnsocket](https://github.com/jb55/lnsocket)** allows you to build a web or mobile app that talks directly to a CLN node over commando with runes. Check out [LNLink](https://lnlink.app/) - a mobile app that allows you to control a Core Lightning node over the lightning network itself!
> - **[lnmessage](https://github.com/aaronbarnardsound/lnmessage)** allows you to talk to Core Lightning nodes from the browser via a WebSocket connection and control them using commando.
131  doc/guides/Developer-s Guide/app-development/grpc.md  Normal file
@@ -0,0 +1,131 @@
---
title: "gRPC APIs"
slug: "grpc"
hidden: false
createdAt: "2023-02-07T12:52:39.665Z"
updatedAt: "2023-02-08T09:56:41.158Z"
---

> 📘
>
> Used for applications that want to connect to CLN over the network in a secure manner.

Since v0.11.0, Core Lightning ships a new interface: `cln-grpc`, a Rust-based plugin that exposes a standardized API which apps, plugins, and other tools can use to interact with Core Lightning securely.

Core Lightning has always had an exhaustive JSON-RPC API, but it was exposed only locally over a Unix-domain socket. Some plugins chose to re-expose the API over a variety of protocols, ranging from REST to gRPC, but installing them was additional work. The gRPC API is automatically generated from the existing JSON-RPC API, so it offers the same low-level and high-level access that app devs are accustomed to, but uses a more efficient binary encoding where possible and is secured via mutual TLS (mTLS) authentication.

To use it, just add the `--grpc-port` option and it'll automatically start alongside Core Lightning and generate the appropriate mTLS certificates. It will listen on the configured port, authenticate clients using mTLS certificates, and forward any request to the JSON-RPC interface, translating from protobuf to JSON and back.
## Tutorial

### Generating the certificates

The plugin only runs when `lightningd` is configured with the option `--grpc-port`. Upon starting, the plugin generates a number of files, if they don't already exist:

- `ca.pem` and `ca-key.pem`: the certificate and private key for your own certificate authority. The plugin will only accept incoming connections using certificates that are signed by this CA.
- `server.pem` and `server-key.pem`: the identity (certificate and private key) used by the plugin to authenticate itself. It is signed by the CA, and the client will verify its identity against it.
- `client.pem` and `client-key.pem`: an example identity that a client can use to connect to the plugin and issue requests. It is also signed by the CA.

These files are generated with sane defaults; however, you can generate custom certificates should you require some changes (see [below](doc:app-development#generating-custom-certificates) for details).

The client needs a valid mTLS identity in order to connect to the plugin, so copy over the `ca.pem`, `client.pem` and `client-key.pem` files from the node to your project directory.
### Generating language-specific bindings

The gRPC interface is described in the [protobuf file](https://github.com/ElementsProject/lightning/blob/master/cln-grpc/proto/node.proto), and we'll first need to generate language-specific bindings.

In this tutorial we walk through the steps for Python, but they are mostly the same for other languages. For instance, if you're developing in Rust, use [`tonic-build`](https://docs.rs/tonic-build/latest/tonic_build/) to generate the bindings. For other languages, see the official [gRPC docs](https://grpc.io/docs/languages/) on how to generate a gRPC client library for your specific language from the protobuf file.

We start by downloading the dependencies and the `protoc` compiler:

```shell
pip install grpcio-tools
```
Next we generate the bindings in the current directory:

```bash
python -m grpc_tools.protoc \
  -I path/to/cln-grpc/proto \
  path/to/cln-grpc/proto/node.proto \
  --python_out=. \
  --grpc_python_out=. \
  --experimental_allow_proto3_optional
```

This will generate two files in the current directory:

- `node_pb2.py`: the description of the protobuf messages we'll be exchanging with the server.
- `node_pb2_grpc.py`: the service and method stubs representing the server-side methods as local objects and associated methods.

### Connecting to the node

Finally we can use the generated stubs and mTLS identity to connect to the node:
```python
from pathlib import Path
import grpc  # provided by the grpcio package
from node_pb2_grpc import NodeStub
import node_pb2

p = Path(".")
cert_path = p / "client.pem"
key_path = p / "client-key.pem"
ca_cert_path = p / "ca.pem"

creds = grpc.ssl_channel_credentials(
    root_certificates=ca_cert_path.open('rb').read(),
    private_key=key_path.open('rb').read(),
    certificate_chain=cert_path.open('rb').read()
)

grpc_port = 9736  # replace with the port you passed to --grpc-port
channel = grpc.secure_channel(
    f"localhost:{grpc_port}",
    creds,
    options=(('grpc.ssl_target_name_override', 'cln'),)
)
stub = NodeStub(channel)

print(stub.Getinfo(node_pb2.GetinfoRequest()))
```
In this example, we first load the client identity as well as the CA certificate so we can verify the server's identity against it. We then create a `creds` instance using those details. Next we open a secure channel, i.e., a channel over TLS with verification of identities.

Notice that we override the expected SSL name with `cln`. This is required because the plugin does not know the domain under which it will be reachable, and therefore uses `cln` as a stand-in. See [custom certificate generation](doc:app-development#generating-custom-certificates) for how this can be changed.

We then use the channel to instantiate the `NodeStub` representing the service and its methods, so we can finally call the `Getinfo` method with default arguments.

### Generating custom certificates (optional)

The automatically generated mTLS certificate does not know about potential domains it'll be served under, and chooses a number of other parameters by default. If you'd like to generate a server certificate with a custom domain, you can use the following:
```shell
openssl genrsa -out server-key.pem 2048
```

This generates the private key. Next we create a Certificate Signing Request (CSR) that we can then process using our CA identity:

```shell
openssl req -key server-key.pem -new -out server.csr
```

You will be asked a number of questions, the most important of which is the _Common Name_, which you should set to the domain name you'll be serving the interface under. Next we can generate the actual certificate by processing the request with the CA identity:

```shell
openssl x509 -req -CA ca.pem -CAkey ca-key.pem \
  -in server.csr \
  -out server.pem \
  -days 365 -CAcreateserial
```

This will finally create the `server.pem` file, signed by the CA, allowing you to access the node through its real domain name. You can now move `server.pem` and `server-key.pem` into the lightning directory, and they should be picked up at startup.
112  doc/guides/Developer-s Guide/app-development/json-rpc.md  Normal file
@@ -0,0 +1,112 @@
---
title: "JSON-RPC commands"
slug: "json-rpc"
hidden: false
createdAt: "2023-02-07T12:53:11.917Z"
updatedAt: "2023-02-21T13:50:10.086Z"
---

> 📘
>
> Used for applications running on the same system as CLN.

## Using `lightning-cli`

Core Lightning exposes a [JSON-RPC 2.0](https://www.jsonrpc.org/specification) interface over a Unix domain socket; the [`lightning-cli`](ref:lightning-cli) tool can be used to access it, or there is a [python client library](https://github.com/ElementsProject/lightning/tree/master/contrib/pyln-client).

You can use [`lightning-cli`](ref:lightning-cli) `help` to print a table of RPC methods; `lightning-cli help <command>` will offer specific information on that command.

Useful commands:
- [lightning-newaddr](ref:lightning-newaddr): get a bitcoin address to deposit funds into your lightning node.
- [lightning-listfunds](ref:lightning-listfunds): see where your funds are.
- [lightning-connect](ref:lightning-connect): connect to another lightning node.
- [lightning-fundchannel](ref:lightning-fundchannel): create a channel to another connected node.
- [lightning-invoice](ref:lightning-invoice): create an invoice to get paid by another node.
- [lightning-pay](ref:lightning-pay): pay someone else's invoice.
- [lightning-plugin](ref:lightning-plugin): commands to control extensions.

A complete list of all JSON-RPC commands is available at [API Reference](doc:api-reference).
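Under the hood, `lightning-cli` simply writes JSON-RPC 2.0 requests to the Unix domain socket and reads the replies. A minimal stdlib-only sketch of that exchange follows; the socket path is an assumption (use the `rpc-file` in your own lightning-dir), and the response framing is simplified. Prefer the client libraries below for real applications:

```python
# Minimal sketch of what lightning-cli does under the hood: write a
# JSON-RPC 2.0 request to lightningd's Unix domain socket and read back the
# reply. Simplified for illustration; use pyln-client for real code.
import json
import socket


def build_request(method: str, params=None, req_id: int = 1) -> bytes:
    """Serialize a JSON-RPC 2.0 request object."""
    req = {"jsonrpc": "2.0", "id": req_id, "method": method,
           "params": params or {}}
    return json.dumps(req).encode()


def call(socket_path: str, method: str, params=None) -> dict:
    """Send one request over the node's rpc-file socket and decode the reply."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(socket_path)
        sock.sendall(build_request(method, params))
        buf = b""
        while True:
            chunk = sock.recv(4096)
            buf += chunk
            try:
                # Keep reading until the buffer parses as a full JSON object.
                return json.loads(buf)
            except json.JSONDecodeError:
                if not chunk:  # connection closed with an incomplete reply
                    raise


# Example (requires a running node; path is an assumption):
# call("/home/user/.lightning/bitcoin/lightning-rpc", "getinfo")
```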
## Using Python

[pyln-client](https://github.com/ElementsProject/lightning/tree/master/contrib/pyln-client) is a python client library for `lightningd` that implements the Unix socket based JSON-RPC protocol. It can be used to call arbitrary functions on the RPC interface, and serves as a basis for applications or plugins written in python.

### Installation

`pyln-client` is available on `pip`:

```shell
pip install pyln-client
```

Alternatively, you can install the development version to get access to currently unreleased features by checking out the Core Lightning source code and installing it into your python3 environment:

```shell
git clone https://github.com/ElementsProject/lightning.git
cd lightning/contrib/pyln-client
poetry install
```

This adds links to the library into your environment, so changes to the checked-out source code will also be picked up by the environment. Note, however, that unreleased versions may change the API without warning, so test thoroughly with the released version.
### Tutorials

Check out the following recipes to learn how to use pyln-client in your applications.

[block:tutorial-tile]
{
  "backgroundColor": "#dfb316",
  "emoji": "🦉",
  "id": "63dbbcd59880f6000e329079",
  "link": "https://docs.corelightning.org/v1.0/recipes/write-a-program-in-python-to-interact-with-lightningd",
  "slug": "write-a-program-in-python-to-interact-with-lightningd",
  "title": "Write a program in Python to interact with lightningd"
}
[/block]

[block:tutorial-tile]
{
  "backgroundColor": "#dfb316",
  "emoji": "🦉",
  "id": "63dbd6993ef79b07b8f399be",
  "link": "https://docs.corelightning.org/v1.0/recipes/write-a-hello-world-plugin-in-python",
  "slug": "write-a-hello-world-plugin-in-python",
  "title": "Write a hello-world plugin in Python"
}
[/block]
## Using Rust

[cln-rpc](https://crates.io/crates/cln-rpc) is a Rust crate for `lightningd` that implements the Unix socket based JSON-RPC protocol. It can be used to call arbitrary functions on the RPC interface, and serves as a basis for applications or plugins written in Rust.

### Installation

Run the following Cargo command in your project directory:

```shell
cargo add cln-rpc
```

Or add the following line to your `Cargo.toml`:

```Text Cargo.toml
cln-rpc = "0.1.2"
```

Documentation for the `cln-rpc` crate is available at <https://docs.rs/cln-rpc/>.
@@ -0,0 +1,22 @@
---
title: "Third-party libraries"
slug: "third-party-libraries"
hidden: false
createdAt: "2023-02-07T12:55:33.547Z"
updatedAt: "2023-02-21T13:55:39.216Z"
---

## REST

[C-lightning-REST](https://github.com/Ride-The-Lightning/c-lightning-REST) is a _third-party_ REST API interface for Core Lightning written in Node.js.

> 📘
>
> Official support for REST APIs in Core Lightning is coming soon!

## GraphQL

[c-lightning-graphql](https://github.com/nettijoe96/c-lightning-graphql) exposes the Core Lightning API over GraphQL.

## JSON over HTTPS

[Sparko](https://github.com/fiatjaf/sparko) offers a full-blown JSON-RPC over HTTP bridge to a CLN node with fine-grained permissions, SSE and spark-wallet support, and can be used to develop apps.
53  doc/guides/Developer-s Guide/developers-guide.md  Normal file
@@ -0,0 +1,53 @@
---
title: "Setting up a dev environment"
slug: "developers-guide"
excerpt: "Get up and running in your local environment with essential tools and libraries in your preferred programming language."
hidden: false
createdAt: "2022-11-18T14:28:23.407Z"
updatedAt: "2023-02-08T11:42:44.759Z"
---

## Using `startup_regtest.sh`

The Core Lightning project provides a script, `startup_regtest.sh`, to simulate the Lightning Network in your local dev environment. The script starts up several local nodes with bitcoind, all running on regtest, and makes it easier to test things out by hand.

Navigate to `contrib` in your Core Lightning directory:

```shell
cd contrib
```

Load the script, using `source` so it can set aliases:

```shell
source startup_regtest.sh
```

Start up the nodeset:

```shell
start_ln 3
```

Connect the nodes. The `connect a b` command connects node a to node b:

```shell
connect 1 2
```

When you're finished, stop:

```shell
stop_ln
```

Clean up the lightning directories:

```shell
destroy_ln
```

## Using Polar

[Polar](https://lightningpolar.com/) offers a one-click setup of a Lightning Network for local app development & testing.
14  doc/guides/Developer-s Guide/plugin-development.md  Normal file
@@ -0,0 +1,14 @@
---
title: "Plugin Development"
slug: "plugin-development"
excerpt: "Customise your Core Lightning node by leveraging its powerful modular architecture via plugins."
hidden: false
createdAt: "2022-12-09T09:56:22.085Z"
updatedAt: "2023-02-06T03:21:36.614Z"
---

Plugins are a simple yet powerful way to extend the functionality provided by Core Lightning. They are subprocesses that are started by the main `lightningd` daemon and can interact with `lightningd` in a variety of ways:

- **[Command line option passthrough](doc:a-day-in-the-life-of-a-plugin)** allows plugins to register their own command line options that are exposed through `lightningd` so that only the main process needs to be configured. Option values are not remembered when a plugin is stopped or killed, but can be passed as parameters to [plugin start](ref:lightning-plugin).
- **[JSON-RPC command passthrough](doc:json-rpc-passthrough)** adds a way for plugins to add their own commands to the JSON-RPC interface.
- **[Event stream subscriptions](doc:event-notifications)** provide plugins with a push-based notification mechanism about events from `lightningd`.
- **[Hooks](doc:hooks)** are a primitive that allows plugins to be notified about internal events in `lightningd` and alter its behavior or inject custom behaviors.
@@ -0,0 +1,249 @@
---
title: "A day in the life of a plugin"
slug: "a-day-in-the-life-of-a-plugin"
hidden: false
createdAt: "2023-02-03T08:32:53.431Z"
updatedAt: "2023-02-21T14:57:10.491Z"
---

A plugin may be written in any language, and communicates with `lightningd` through the plugin's `stdin` and `stdout`. JSON-RPCv2 is used as the protocol on top of the two streams, with the plugin acting as server and `lightningd` acting as client. The plugin file needs to be executable (e.g. use `chmod a+x plugin_name`).

> 🚧
>
> As noted, `lightningd` uses `stdin` as an intake mechanism. This can cause unexpected behavior if one is not careful: ensure that debug/logging statements are routed to `stderr` or directly to a file. Activities that are benign in other contexts (`println!`, `dbg!`, etc.) will cause the plugin to be killed with an error along the lines of:
>
> `UNUSUAL plugin-cln-plugin-startup: Killing plugin: JSON-RPC message does not contain "jsonrpc" field`

During startup of `lightningd` you can use the `--plugin=` option to register one or more plugins that should be started. In case you wish to start several plugins you have to use the `--plugin=` argument once for each plugin (or use `--plugin-dir`, or place them in the default plugin dirs, usually `/usr/local/libexec/c-lightning/plugins` and `~/.lightning/plugins`). An example call might look like:
```
lightningd --plugin=/path/to/plugin1 --plugin=path/to/plugin2
```

`lightningd` will run your plugins with `--lightning-dir`/<networkname> as the working directory and the environment variables `LIGHTNINGD_PLUGIN` and `LIGHTNINGD_VERSION` set, then will write JSON-RPC requests to the plugin's `stdin` and read replies from its `stdout`. To initialise the plugin, two RPC methods are required:

- `getmanifest` asks the plugin for command line options and JSON-RPC commands that should be passed through. This can be run before `lightningd` checks that it is the sole user of the `lightning-dir` directory (for `--help`), so your plugin should not touch files at this point.
- `init` is called after the command line options have been parsed and passes them through with the real values (if specified). This is also the signal that `lightningd`'s JSON-RPC over Unix Socket is now up and ready to receive incoming requests from the plugin.

Once those two methods have been called, `lightningd` will start passing through incoming JSON-RPC commands that were registered, and the plugin may interact with `lightningd` using the JSON-RPC over Unix-Socket interface.

The above is generally valid for plugins that start when `lightningd` starts. For dynamic plugins that start via the [lightning-plugin](ref:lightning-plugin) JSON-RPC command there is some difference, mainly in options passthrough (see the note in [Types of Options](doc:a-day-in-the-life-of-a-plugin#types-of-options)).

- `shutdown` (optional): if subscribed to the "shutdown" notification, a plugin can exit cleanly when `lightningd` is shutting down or when it is stopped via `plugin stop`.
### The `getmanifest` method

The `getmanifest` method is required for all plugins and will be called on startup with optional parameters (in particular, it may have `allow-deprecated-apis: false`, but you should accept, and ignore, other parameters). It MUST return a JSON object similar to this example:

```json
{
  "options": [
    {
      "name": "greeting",
      "type": "string",
      "default": "World",
      "description": "What name should I call you?",
      "deprecated": false
    }
  ],
  "rpcmethods": [
    {
      "name": "hello",
      "usage": "[name]",
      "description": "Returns a personalized greeting for {greeting} (set via options)."
    },
    {
      "name": "gettime",
      "usage": "",
      "description": "Returns the current time in {timezone}",
      "long_description": "Returns the current time in the timezone that is given as the only parameter.\nThis description may be quite long and is allowed to span multiple lines.",
      "deprecated": false
    }
  ],
  "subscriptions": [
    "connect",
    "disconnect"
  ],
  "hooks": [
    { "name": "openchannel", "before": ["another_plugin"] },
    { "name": "htlc_accepted" }
  ],
  "featurebits": {
    "node": "D0000000",
    "channel": "D0000000",
    "init": "0E000000",
    "invoice": "00AD0000"
  },
  "notifications": [
    {
      "method": "mycustomnotification"
    }
  ],
  "nonnumericids": true,
  "dynamic": true
}
```
During startup the `options` will be added to the list of command line options that `lightningd` accepts. If any option `name` is already taken, startup will abort. The above will add a `--greeting` option with a default value of `World` and the specified description. _Notice that currently string, integer, bool, and flag options are supported._

The `rpcmethods` are methods that will be exposed via `lightningd`'s JSON-RPC over Unix-Socket interface, just like the builtin commands. Any parameters given to the JSON-RPC calls will be passed through verbatim. Notice that the `name`, `description` and `usage` fields are mandatory, while the `long_description` can be omitted (it'll be set to `description` if it was not provided). `usage` should surround optional parameter names in `[]`.

`options` and `rpcmethods` can mark themselves `deprecated: true` if you plan on removing them: this will disable them if the user sets `allow-deprecated-apis` to false (which every developer should do, right?).

The `nonnumericids` field indicates that the plugin can handle string JSON request `id` fields: prior to v22.11 lightningd used numbers for these, and the change to strings broke some plugins. If not set, then strings will be used once this feature is removed after v23.05. See the [lightningd-rpc](ref:lightningd-rpc) documentation for how to handle JSON `id` fields!

The `dynamic` field indicates whether the plugin can be managed after `lightningd` has been started using the [lightning-plugin](ref:lightning-plugin) JSON-RPC command. Critical plugins that should not be stopped should set it to false. Plugin `options` can be passed to dynamic plugins as arguments to the `plugin` command.

If a `disable` member exists, the plugin will be disabled and the contents of this member is the reason why. This allows plugins to disable themselves if they are not supported in this configuration.

The `featurebits` object allows the plugin to register featurebits that should be announced in a number of places in [the protocol](https://github.com/lightning/bolts/blob/master/09-features.md). They can be used to signal support for custom protocol extensions to direct peers, remote nodes and in invoices. Custom protocol extensions can be implemented for example using the `sendcustommsg` method and the `custommsg` hook, or the `sendonion` method and the `htlc_accepted` hook. The keys in the `featurebits` object are `node` for features that should be announced via the `node_announcement` to all nodes in the network, `init` for features that should be announced to direct peers during the connection setup, `channel` for features which should apply to `channel_announcement`, and `invoice` for features that should be announced to a potential sender of a payment in the invoice. The low range of featurebits is reserved for standardized features, so please pick random, high-position bits for experiments. If you'd like to standardize your extension, please reach out to the [specification repository](https://github.com/lightning/bolts) to get a featurebit assigned.

The `notifications` array allows plugins to announce which custom notifications they intend to send to `lightningd`. These custom notifications can then be subscribed to by other plugins, allowing them to communicate with each other via the existing publish-subscribe mechanism and react to events that happen in other plugins, or collect information based on the notification topics.

Plugins are free to register any `name` for their `rpcmethod` as long as the name was not previously registered. This includes both built-in methods, such as `help` and `getinfo`, as well as methods registered by other plugins. If there is a conflict, then `lightningd` will report an error and kill the plugin; this aborts startup if the plugin is _important_.
#### Types of Options

There are currently four supported option 'types':

- string: a string
- bool: a boolean
- int: parsed as a signed integer (64-bit)
- flag: a no-argument flag option. It is a boolean under the hood, and defaults to false.

In addition, string and int types can specify `"multi": true` to indicate they can be specified multiple times. These will always be represented in `init` as a (possibly empty) JSON array.

Nota bene: if a `flag` type option is not set, it will not appear in the options set that is passed to the plugin.

Here's an example option set, as sent in response to `getmanifest`:
```json
"options": [
  {
    "name": "greeting",
    "type": "string",
    "default": "World",
    "description": "What name should I call you?"
  },
  {
    "name": "run-hot",
    "type": "flag",
    "default": null,  // defaults to false
    "description": "If set, overclocks plugin"
  },
  {
    "name": "is_online",
    "type": "bool",
    "default": false,
    "description": "Set to true if plugin can use network"
  },
  {
    "name": "service-port",
    "type": "int",
    "default": 6666,
    "description": "Port to use to connect to 3rd-party service"
  },
  {
    "name": "number",
    "type": "int",
    "default": 0,
    "description": "Another number to add",
    "multi": true
  }
],
```
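To make the option semantics above concrete, here is a sketch (our own illustration, not lightningd's actual code) of how such a manifest is turned into the filled options later passed to `init`: unset flags are omitted entirely, and `multi` options always arrive as an array:

```python
# Illustration of the option-filling rules described above. This mimics the
# observable behavior (flag absence, multi-as-array, defaults), not
# lightningd's internal implementation.
def fill_options(manifest_options, cli_values):
    filled = {}
    for opt in manifest_options:
        name = opt["name"]
        if opt["type"] == "flag":
            # A flag only appears in the filled options when it was set.
            if cli_values.get(name):
                filled[name] = True
            continue
        if opt.get("multi"):
            # Multi options are always a (possibly defaulted) JSON array.
            if name in cli_values:
                filled[name] = list(cli_values[name])
            else:
                filled[name] = [opt["default"]] if "default" in opt else []
        else:
            filled[name] = cli_values.get(name, opt.get("default"))
    return filled


manifest = [
    {"name": "greeting", "type": "string", "default": "World"},
    {"name": "run-hot", "type": "flag", "default": None},
    {"name": "number", "type": "int", "default": 0, "multi": True},
]
opts = fill_options(manifest, {"number": [1, 2]})
# "greeting" falls back to its default, "run-hot" is absent, "number" is a list.
```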
**Note**: `lightningd` command line options are only parsed during startup and their values are not remembered when the plugin is stopped or killed. For dynamic plugins started with `plugin start`, options can be passed as extra arguments to the command [lightning-plugin](ref:lightning-plugin).
#### Custom notifications

Plugins may emit custom notifications for topics they have announced during startup. The list of notification topics declared during startup must include all topics that may be emitted, in order to verify that all topics plugins subscribe to are also emitted by some other plugin, and to warn if a plugin subscribes to a non-existent topic. If a plugin emits a notification it has not announced, the notification will be ignored and not forwarded to subscribers.

When forwarding a custom notification `lightningd` will wrap the payload of the notification in an object that contains metadata about the notification. The following is an example of this transformation. The first listing is the original notification emitted by the `sender` plugin, while the second is the notification as received by the `receiver` plugin (both listings show the full [JSON-RPC](https://www.jsonrpc.org/specification) notification to illustrate the wrapping).
```json
{
  "jsonrpc": "2.0",
  "method": "mycustomnotification",
  "params": {
    "key": "value",
    "message": "Hello fellow plugin!"
  }
}
```

is delivered as
```json
{
  "jsonrpc": "2.0",
  "method": "mycustomnotification",
  "params": {
    "origin": "sender",
    "payload": {
      "key": "value",
      "message": "Hello fellow plugin!"
    }
  }
}
```
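The wrapping can be expressed as a small sketch (the helper name is ours, not part of lightningd):

```python
# Sketch of the transformation lightningd applies when forwarding a custom
# notification: the emitting plugin's payload is wrapped in an object that
# records the origin plugin. `wrap_notification` is a hypothetical helper,
# not a lightningd API.
def wrap_notification(origin: str, notification: dict) -> dict:
    return {
        "jsonrpc": "2.0",
        "method": notification["method"],
        "params": {
            "origin": origin,
            "payload": notification["params"],
        },
    }


sent = {
    "jsonrpc": "2.0",
    "method": "mycustomnotification",
    "params": {"key": "value", "message": "Hello fellow plugin!"},
}
received = wrap_notification("sender", sent)
```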
The notification topic (`method` in the JSON-RPC message) must not match one of the internal events in order to prevent breaking subscribers that expect the existing notification format. Multiple plugins are allowed to emit notifications for the same topics, allowing things like metric aggregators where the aggregator subscribes to a common topic and other plugins publish metrics as notifications.
### The `init` method

The `init` method is required so that `lightningd` can pass back the filled command line options and notify the plugin that `lightningd` is now ready to receive JSON-RPC commands. The `params` of the call are a simple JSON object containing the options:
```json
{
  "options": {
    "greeting": "World",
    "number": [0]
  },
  "configuration": {
    "lightning-dir": "/home/user/.lightning/testnet",
    "rpc-file": "lightning-rpc",
    "startup": true,
    "network": "testnet",
    "feature_set": {
      "init": "02aaa2",
      "node": "8000000002aaa2",
      "channel": "",
      "invoice": "028200"
    },
    "proxy": {
      "type": "ipv4",
      "address": "127.0.0.1",
      "port": 9050
    },
    "torv3-enabled": true,
    "always_use_proxy": false
  }
}
```
The plugin must respond to `init` calls. The response should be a valid JSON-RPC response to the `init`, but this is not currently enforced. If the response is an object containing a `result` which contains `disable`, then the plugin will be disabled and the contents of this member is the reason why.

The `startup` field allows a plugin to detect if it was started at `lightningd` startup (true), or at runtime (false).

### Timeouts

During startup ("startup" is true), the plugin has 60 seconds to return `getmanifest` and another 60 seconds to return `init`, or it gets killed.
When started dynamically via the [lightning-plugin](ref:lightning-plugin) JSON-RPC command, both `getmanifest` and `init` should be completed within 60 seconds.
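Putting the protocol together, here is a minimal plugin sketch in Python. It is hand-rolled for illustration and uses simplified line-based framing; real plugins should normally use pyln-client instead:

```python
#!/usr/bin/env python3
# Minimal sketch of the plugin wire protocol described above: JSON-RPC
# requests arrive on stdin, responses go to stdout, and anything else must
# go to stderr. Hand-rolled for illustration; real plugins should use
# pyln-client.
import json
import sys

MANIFEST = {
    "options": [],
    "rpcmethods": [{
        "name": "hello",
        "usage": "",
        "description": "Returns a static greeting."
    }],
    "subscriptions": [],
    "dynamic": True,
}


def handle(request: dict) -> dict:
    """Map a decoded JSON-RPC request to a JSON-RPC response object."""
    method = request["method"]
    if method == "getmanifest":
        result = MANIFEST
    elif method == "init":
        # request["params"]["options"] holds the filled-in option values.
        result = {}
    elif method == "hello":
        result = {"greeting": "Hello from a minimal plugin!"}
    else:
        return {"jsonrpc": "2.0", "id": request["id"],
                "error": {"code": -32601, "message": f"unknown {method}"}}
    return {"jsonrpc": "2.0", "id": request["id"], "result": result}


def main() -> None:
    """Stdio loop: one request per line (simplified framing; lightningd's
    actual framing may span multiple lines)."""
    for line in sys.stdin:
        line = line.strip()
        if not line:
            continue
        response = handle(json.loads(line))
        sys.stdout.write(json.dumps(response) + "\n\n")
        sys.stdout.flush()


# Call main() when run as a plugin under lightningd.
```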
@@ -0,0 +1,54 @@
---
title: "Tutorials"
slug: "additional-resources"
hidden: false
createdAt: "2023-02-03T08:33:51.998Z"
updatedAt: "2023-02-08T09:36:57.988Z"
---

## Writing a plugin in Python

Check out a step-by-step recipe for building a simple `helloworld.py` example plugin based on [pyln-client](https://github.com/ElementsProject/lightning/tree/master/contrib/pyln-client).
[block:tutorial-tile]
|
||||
{
|
||||
"backgroundColor": "#dfb316",
|
||||
"emoji": "🦉",
|
||||
"id": "63dbd6993ef79b07b8f399be",
|
||||
"link": "https://docs.corelightning.org/v1.0/recipes/write-a-hello-world-plugin-in-python",
|
||||
"slug": "write-a-hello-world-plugin-in-python",
|
||||
"title": "Write a hello-world plugin in Python"
|
||||
}
|
||||
[/block]
|
||||
|
||||
|
||||
|
||||
|
||||
You can also follow along the video below where Blockstream Engineer Rusty Russell walks you all the way from getting started with Core Lightning to building a plugin in Python.
|
||||
|
||||
|
||||
[block:embed]
|
||||
{
|
||||
"html": "<iframe class=\"embedly-embed\" src=\"//cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2Ffab4P3BIZxk%3Ffeature%3Doembed&display_name=YouTube&url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3Dfab4P3BIZxk&image=https%3A%2F%2Fi.ytimg.com%2Fvi%2Ffab4P3BIZxk%2Fhqdefault.jpg&key=7788cb384c9f4d5dbbdbeffd9fe4b92f&type=text%2Fhtml&schema=youtube\" width=\"854\" height=\"480\" scrolling=\"no\" title=\"YouTube embed\" frameborder=\"0\" allow=\"autoplay; fullscreen\" allowfullscreen=\"true\"></iframe>",
|
||||
"url": "https://www.youtube.com/watch?v=fab4P3BIZxk",
|
||||
"title": "Rusty Russell | Getting Started with c-lightning | July 2019",
|
||||
"favicon": "https://www.google.com/favicon.ico",
|
||||
"image": "https://i.ytimg.com/vi/fab4P3BIZxk/hqdefault.jpg",
|
||||
"provider": "youtube.com",
|
||||
"href": "https://www.youtube.com/watch?v=fab4P3BIZxk",
|
||||
"typeOfEmbed": "youtube"
|
||||
}
|
||||
[/block]
|
||||
|
||||
|
||||
|
||||
|
||||
Finally, `lightningd`'s own internal [tests](https://github.com/ElementsProject/lightning/tree/master/tests/plugins) can be a useful (and most reliable) resource.
|
||||
|
||||
## Writing a plugin in Rust
|
||||
|
||||
[`cln-plugin`](https://docs.rs/cln-plugin/) is a library that facilitates the creation of plugins in Rust, with async/await support, for low-footprint plugins.
|
||||
|
||||
## Community built plugins
|
||||
|
||||
Check out this [repository](https://github.com/lightningd/plugins#plugin-builder-resources) that has a collection of actively maintained plugins as well as plugin libraries (in your favourite language) built by the community.
|
---
title: "Bitcoin backend"
slug: "bitcoin-backend"
hidden: false
createdAt: "2023-02-03T08:58:27.125Z"
updatedAt: "2023-02-21T15:10:05.895Z"
---
Core Lightning communicates with the Bitcoin network through a plugin. It uses the `bcli` plugin by default, but you can use a custom one, multiple custom ones for different operations, or write your own for your favourite Bitcoin data source!

Communication with the plugin is done through 5 JSON-RPC commands; `lightningd` can use from 1 to 5 plugins registering these 5 commands to gather Bitcoin data. Each plugin must follow the specification below for `lightningd` to operate.

### `getchaininfo`

Called at startup, it is used to check which network `lightningd` is operating on and to get the sync status of the backend.

The plugin must respond to `getchaininfo` with the following fields:
- `chain` (string), the network name as introduced in BIP 70
- `headercount` (number), the number of fetched block headers
- `blockcount` (number), the number of fetched block bodies
- `ibd` (bool), whether the backend is performing initial block download

### `estimatefees`

Polled by `lightningd` to get the current feerates; all values must be passed in sat/kVB.

If fee estimation fails, the plugin must set all the fields to `null`.

If fee estimation succeeds, the plugin must respond with the following fields:
- `opening` (number), used for funding and also misc transactions
- `mutual_close` (number), used for the mutual close transaction
- `unilateral_close` (number), used for unilateral close (/commitment) transactions
- `delayed_to_us` (number), used for resolving our output from our unilateral close
- `htlc_resolution` (number), used for resolving HTLCs after a unilateral close
- `penalty` (number), used for resolving revoked transactions
- `min_acceptable` (number), used as the minimum acceptable feerate
- `max_acceptable` (number), used as the maximum acceptable feerate
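As a sketch (not part of any shipped plugin), a backend could derive these fields from a single base feerate in sat/kVB, falling back to `null` everywhere when estimation fails. The multipliers below are illustrative assumptions, not values mandated by the specification:

```python
FEE_FIELDS = ["opening", "mutual_close", "unilateral_close", "delayed_to_us",
              "htlc_resolution", "penalty", "min_acceptable", "max_acceptable"]

def estimatefees_response(base_sat_per_kvb):
    """Build an `estimatefees` result from one base feerate (sat/kVB).

    A real backend would query its fee estimator for each urgency level
    separately; the ratios here are illustrative only.
    """
    if base_sat_per_kvb is None:  # estimation failed: all fields null
        return {field: None for field in FEE_FIELDS}
    return {
        "opening": base_sat_per_kvb,
        "mutual_close": base_sat_per_kvb,
        "unilateral_close": base_sat_per_kvb * 2,  # commitment txs are urgent
        "delayed_to_us": base_sat_per_kvb,
        "htlc_resolution": base_sat_per_kvb * 2,
        "penalty": base_sat_per_kvb * 2,
        "min_acceptable": base_sat_per_kvb // 2,
        "max_acceptable": base_sat_per_kvb * 10,
    }
```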

### `getrawblockbyheight`

This call takes one parameter, `height`, which determines the block height of the block to fetch.

The plugin must set all fields to `null` if no block was found at the specified `height`.

The plugin must respond to `getrawblockbyheight` with the following fields:
- `blockhash` (string), the block hash as a hexadecimal string
- `block` (string), the block content as a hexadecimal string

### `getutxout`

This call takes two parameters, the `txid` (string) and the `vout` (number), identifying the UTXO we're interested in.

The plugin must set both fields to `null` if the specified TXO was spent.

The plugin must respond to `getutxout` with the following fields:
- `amount` (number), the output value in **sats**
- `script` (string), the output scriptPubKey

### `sendrawtransaction`

This call takes two parameters: a string `tx` representing a hex-encoded Bitcoin transaction, and a boolean `allowhighfees` which, if set, suppresses any high-fee check implemented in the backend, since the given transaction may legitimately have very high fees.

The plugin must broadcast it and respond with the following fields:
- `success` (boolean), which is `true` if the broadcast succeeded
- `errmsg` (string), if `success` is `false`, the reason why it failed
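A minimal sketch of the two response shapes above. `fetch_block` is a hypothetical callable standing in for the real backend query, not part of the interface:

```python
def getrawblockbyheight_response(height, fetch_block):
    """Build a `getrawblockbyheight` result.

    `fetch_block` is a hypothetical helper that returns a
    (blockhash, hex_block) tuple, or None when no block exists
    at that height yet (all fields must then be null).
    """
    found = fetch_block(height)
    if found is None:
        return {"blockhash": None, "block": None}
    blockhash, raw = found
    return {"blockhash": blockhash, "block": raw}

def sendrawtransaction_response(broadcast_error):
    """Build a `sendrawtransaction` result from an optional error string."""
    if broadcast_error is None:
        return {"success": True, "errmsg": ""}
    return {"success": False, "errmsg": broadcast_error}
```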
---
title: "Event notifications"
slug: "event-notifications"
hidden: false
createdAt: "2023-02-03T08:57:15.799Z"
updatedAt: "2023-02-21T15:00:34.233Z"
---
Event notifications allow a plugin to subscribe to events in `lightningd`. `lightningd` will then send a push notification if an event matching the subscription occurred. A notification is defined in the JSON-RPC [specification][jsonrpc-spec] as an RPC call that does not include an `id` parameter:

> A Notification is a Request object without an "id" member. A Request object that is a Notification signifies the Client's lack of interest in the corresponding Response object, and as such no Response object needs to be returned to the client. The Server MUST NOT reply to a Notification, including those that are within a batch request.
>
> Notifications are not confirmable by definition, since they do not have a Response object to be returned. As such, the Client would not be aware of any errors (like e.g. "Invalid params","Internal error").

Plugins subscribe by returning an array of subscriptions as part of the `getmanifest` response. The result for the `getmanifest` call above, for example, subscribes to the two topics `connect` and `disconnect`. The topics that are currently defined and the corresponding payloads are listed below.

### `channel_opened`

A notification for topic `channel_opened` is sent if a peer successfully funded a channel with us. It contains the peer id, the funding amount (in millisatoshis), the funding transaction id, and a boolean indicating if the funding transaction has been included into a block.

```json
{
  "channel_opened": {
    "id": "03864ef025fde8fb587d989186ce6a4a186895ee44a926bfc370e2c366597a3f8f",
    "funding_msat": 100000000,
    "funding_txid": "4a5e1e4baab89f3a32518a88c31bc87f618f76673e2cc77ab2127b7afdeda33b",
    "channel_ready": false
  }
}
```

### `channel_open_failed`

A notification to indicate that a channel open attempt has been unsuccessful. Useful for cleaning up state for a v2 channel open attempt. See `plugins/funder.c` for an example of how to use this.

```json
{
  "channel_open_failed": {
    "channel_id": "a2d0851832f0e30a0cf..."
  }
}
```

### `channel_state_changed`

A notification for topic `channel_state_changed` is sent every time a channel changes its state. The notification includes the `peer_id` and `channel_id`, the old and new channel states, the type of `cause` and a `message`.

```json
{
  "channel_state_changed": {
    "peer_id": "03bc9337c7a28bb784d67742ebedd30a93bacdf7e4ca16436ef3798000242b2251",
    "channel_id": "a2d0851832f0e30a0cf778a826d72f077ca86b69f72677e0267f23f63a0599b4",
    "short_channel_id": "561820x1020x1",
    "timestamp": "2023-01-05T18:27:12.145Z",
    "old_state": "CHANNELD_NORMAL",
    "new_state": "CHANNELD_SHUTTING_DOWN",
    "cause": "remote",
    "message": "Peer closes channel"
  }
}
```

A `cause` can have the following values:

- "unknown": anything other than the reasons below. Should not happen.
- "local": unconscious internal reasons, e.g. dev fail of a channel.
- "user": the operator or a plugin opened or closed a channel by intention.
- "remote": the remote closed or funded a channel with us by intention.
- "protocol": we need to close a channel because of bad signatures and such.
- "onchain": a channel was closed onchain, while we were offline.

Most state changes occur as a consequence of a prior state change, e.g. "_CLOSINGD\_COMPLETE_" is followed by "_FUNDING\_SPEND\_SEEN_". Because of this, the `cause` reflects the last known reason in terms of local or remote user interaction, protocol reasons, etc. More specifically, a `new_state` of "_FUNDING\_SPEND\_SEEN_" will likely _not_ have "onchain" as a `cause` but some value such as "remote" or "local", depending on who initiated the closing of the channel.

Note: If the channel is not closed or being closed yet, the `cause` will reflect which side ("remote" or "local") opened the channel.

Note: If the cause is "onchain", this was very likely a conscious decision of the remote peer, but we have been offline.
### `connect`

A notification for topic `connect` is sent every time a new connection to a peer is established. `direction` is either `"in"` or `"out"`.

```json
{
  "id": "02f6725f9c1c40333b67faea92fd211c183050f28df32cac3f9d69685fe9665432",
  "direction": "in",
  "address": "1.2.3.4:1234"
}
```

### `disconnect`

A notification for topic `disconnect` is sent every time a connection to a peer is lost.

```json
{
  "id": "02f6725f9c1c40333b67faea92fd211c183050f28df32cac3f9d69685fe9665432"
}
```

### `invoice_payment`

A notification for topic `invoice_payment` is sent every time an invoice is paid.

```json
{
  "invoice_payment": {
    "label": "unique-label-for-invoice",
    "preimage": "0000000000000000000000000000000000000000000000000000000000000000",
    "amount_msat": 10000
  }
}
```

### `invoice_creation`

A notification for topic `invoice_creation` is sent every time an invoice is created.

```json
{
  "invoice_creation": {
    "label": "unique-label-for-invoice",
    "preimage": "0000000000000000000000000000000000000000000000000000000000000000",
    "amount_msat": 10000
  }
}
```

### `warning`

A notification for topic `warning` is sent every time a new `BROKEN`/`UNUSUAL` level log entry (in plugins, we use `error`/`warn`) is generated, which means an unusual or broken thing has happened, such as a channel failure or a message resolution failure.

```json
{
  "warning": {
    "level": "warn",
    "time": "1559743608.565342521",
    "source": "lightningd(17652): 0821f80652fb840239df8dc99205792bba2e559a05469915804c08420230e23c7c chan #7854:",
    "log": "Peer permanent failure in CHANNELD_NORMAL: lightning_channeld: sent ERROR bad reestablish dataloss msg"
  }
}
```

1. `level` is `warn` or `error`: `warn` means something bad seems to have happened but is under control, though we'd better check it; `error` means something extremely bad is out of control, and may lead to a crash;
2. `time` is the seconds since epoch;
3. `source` means where the event happened; it may have the following forms:
   `<node_id> chan #<db_id_of_channel>:`, `lightningd(<lightningd_pid>):`,
   `plugin-<plugin_name>:`, `<daemon_name>(<daemon_pid>):`, `jsonrpc:`,
   `jcon fd <error_fd_to_jsonrpc>:`, `plugin-manager`;
4. `log` is the context of the original log entry.
### `forward_event`

A notification for topic `forward_event` is sent every time the status of a forward payment is set. The JSON format is the same as that of the API `listforwards`.

```json
{
  "forward_event": {
    "payment_hash": "f5a6a059a25d1e329d9b094aeeec8c2191ca037d3f5b0662e21ae850debe8ea2",
    "in_channel": "103x2x1",
    "out_channel": "103x1x1",
    "in_msat": 100001001,
    "out_msat": 100000000,
    "fee_msat": 1001,
    "status": "settled",
    "received_time": 1560696342.368,
    "resolved_time": 1560696342.556
  }
}
```

or

```json
{
  "forward_event": {
    "payment_hash": "ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff",
    "in_channel": "103x2x1",
    "out_channel": "110x1x0",
    "in_msat": 100001001,
    "out_msat": 100000000,
    "fee_msat": 1001,
    "status": "local_failed",
    "failcode": 16392,
    "failreason": "WIRE_PERMANENT_CHANNEL_FAILURE",
    "received_time": 1560696343.052
  }
}
```

- The status includes `offered`, `settled`, `failed` and `local_failed`; all of them are of string type in the JSON.
  - When the forward payment is valid for us, we'll set `offered` and send the forward payment to the next hop to resolve;
  - When the payment forwarded by us gets paid eventually, the forward payment will change the status from `offered` to `settled`;
  - If the payment fails locally (like failing to resolve locally) or the corresponding htlc with the next hop fails (like an htlc timeout), we will set the status to `local_failed`. `local_failed` may be set before or after setting `offered`. In fact, any failure whose cause we can know from the time we receive the htlc of the previous hop is treated as `local_failed`. `local_failed` only occurs locally or in the htlc between us and the next hop;
    - If `local_failed` is set before `offered`, this means we just received the htlc from the previous hop and haven't generated the htlc for the next hop. In this case, the JSON of `forward_event` sets the fields `out_msatoshi`, `out_msat`, `fee` and `out_channel` to 0;
      - Note: In fact, for this case we may not be sure if this incoming htlc represents a payment to us or a payment we need to forward. We simply treat all incoming htlcs that failed to resolve as `local_failed`.
    - Only in the `local_failed` case does the JSON include the `failcode` and `failreason` fields;
  - `failed` means the payment forwarded by us failed in a later hop; the failure isn't related to us, so we don't have access to the failure reason. `failed` must be set after `offered`.
    - The `failed` case doesn't include the `failcode` and `failreason` fields;
- `received_time` means when we received the htlc of this payment from the previous peer. It is included in all status cases;
- `resolved_time` means when the htlc of this payment between us and the next peer was resolved. The resolution may succeed or fail, so only the `settled` and `failed` cases contain `resolved_time`;
- The `failcode` and `failreason` are defined in [BOLT 4](https://github.com/lightning/bolts/blob/master/04-onion-routing.md#failure-messages).
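The relationship among the amount fields can be checked directly: the fee earned on a forward is the difference between the incoming and outgoing htlc amounts. A small sketch using the field names from the payloads above:

```python
def forward_fee_msat(event):
    """Fee earned on a forward: the difference between what came in on
    the incoming htlc and what went out on the outgoing one."""
    return event["in_msat"] - event["out_msat"]

event = {"in_msat": 100001001, "out_msat": 100000000, "fee_msat": 1001}
assert forward_fee_msat(event) == event["fee_msat"]
```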

### `sendpay_success`

A notification for topic `sendpay_success` is sent every time a sendpay succeeds (with `complete` status). The JSON is the same as the return value of the commands `sendpay`/`waitsendpay` when these commands succeed.

```json
{
  "sendpay_success": {
    "id": 1,
    "payment_hash": "5c85bf402b87d4860f4a728e2e58a2418bda92cd7aea0ce494f11670cfbfb206",
    "destination": "035d2b1192dfba134e10e540875d366ebc8bc353d5aa766b80c090b39c3a5d885d",
    "amount_msat": 100000000,
    "amount_sent_msat": 100001001,
    "created_at": 1561390572,
    "status": "complete",
    "payment_preimage": "9540d98095fd7f37687ebb7759e733934234d4f934e34433d4998a37de3733ee"
  }
}
```

`sendpay` doesn't wait for the result of the payment, and `waitsendpay` returns the result within the specified time or times out, but `sendpay_success` will always deliver the result whenever a sendpay succeeds, if it was subscribed to.
### `sendpay_failure`

A notification for topic `sendpay_failure` is sent every time a sendpay completes with `failed` status. The JSON is the same as the return value of the commands `sendpay`/`waitsendpay` when these commands fail.

```json
{
  "sendpay_failure": {
    "code": 204,
    "message": "failed: WIRE_UNKNOWN_NEXT_PEER (reply from remote)",
    "data": {
      "id": 2,
      "payment_hash": "9036e3bdbd2515f1e653cb9f22f8e4c49b73aa2c36e937c926f43e33b8db8851",
      "destination": "035d2b1192dfba134e10e540875d366ebc8bc353d5aa766b80c090b39c3a5d885d",
      "amount_msat": 100000000,
      "amount_sent_msat": 100001001,
      "created_at": 1561395134,
      "status": "failed",
      "erring_index": 1,
      "failcode": 16394,
      "failcodename": "WIRE_UNKNOWN_NEXT_PEER",
      "erring_node": "022d223620a359a47ff7f7ac447c85c46c923da53389221a0054c11c1e3ca31d59",
      "erring_channel": "103x2x1",
      "erring_direction": 0
    }
  }
}
```

`sendpay` doesn't wait for the result of the payment, and `waitsendpay` returns the result within the specified time or times out, but `sendpay_failure` will always deliver the result whenever a sendpay fails, if it was subscribed to.
### `coin_movement`

A notification for topic `coin_movement` is sent to record the movement of coins. It is only triggered by finalized ledger updates, i.e. only definitively resolved HTLCs or confirmed bitcoin transactions.

```json
{
  "coin_movement": {
    "version": 2,
    "node_id": "03a7103a2322b811f7369cbb27fb213d30bbc0b012082fed3cad7e4498da2dc56b",
    "type": "chain_mvt",
    "account_id": "wallet",
    "originating_account": "wallet", // (`chain_mvt` only, optional)
    "txid": "0159693d8f3876b4def468b208712c630309381e9d106a9836fa0a9571a28722", // (`chain_mvt` only, optional)
    "utxo_txid": "0159693d8f3876b4def468b208712c630309381e9d106a9836fa0a9571a28722", // (`chain_mvt` only)
    "vout": 1, // (`chain_mvt` only)
    "payment_hash": "xxx", // (either type, optional on both)
    "part_id": 0, // (`channel_mvt` only, optional)
    "credit_msat": 2000000000,
    "debit_msat": 0,
    "output_msat": 2000000000, // ('chain_mvt' only)
    "output_count": 2, // ('chain_mvt' only, typically only channel closes)
    "fees_msat": 382, // ('channel_mvt' only)
    "tags": ["deposit"],
    "blockheight": 102, // 'chain_mvt' only
    "timestamp": 1585948198,
    "coin_type": "bc"
  }
}
```

`version` indicates which version of the coin movement data struct this notification adheres to.

`node_id` specifies the node issuing the coin movement.

`type` marks the underlying mechanism which moved these coins. There are two 'types' of `coin_movements`:

- `channel_mvt`s, which occur as a result of htlcs being resolved, and
- `chain_mvt`s, which occur as a result of bitcoin txs being mined.

`account_id` is the name of this account. The node's wallet is named 'wallet'; all channel funds' accounts are named by the channel id.

`originating_account` is the account that this movement originated from. _Only_ tagged on external events (deposits/withdrawals to an external party).

`txid` is the transaction id of the bitcoin transaction that triggered this ledger event. `utxo_txid` and `vout` identify the bitcoin output which triggered this notification (`chain_mvt` only). Notifications tagged `journal_entry` do not have a `utxo_txid` as they're not represented in the utxo set.

`payment_hash` is the hash of the preimage used to move this payment. Only present for htlc-mediated moves (both `chain_mvt` and `channel_mvt`). A `chain_mvt` will have a `payment_hash` iff it's recording an htlc that was fulfilled onchain.

`part_id` is an identifier for parts of a multi-part payment. Useful for aggregating payments for an invoice or to indicate why a payment hash appears multiple times (`channel_mvt` only).

`credit_msat` and `debit_msat` are millisatoshi-denominated amounts of the fund movement. A credit is funds deposited into an account; a debit is funds withdrawn.

`output_msat` is the total value of the on-chain UTXO. Note that for channel opens/closes the total output value will not necessarily correspond to the amount that's credited/debited.

`output_count` is the total number of outputs to expect for a channel close. Useful for figuring out when every onchain output for a close has been resolved.

`fees_msat` is an htlc annotation for the amount of fees either paid or earned. For "invoice" tagged events, the fees are the total fees paid to send that payment. The end amount can be found by subtracting the total fees from the debited amount. For "routed" tagged events, both the debit and credit contain fees. Technically routed debits are the 'fee generating' event, however we include them on routed credits as well.

`tags` is an array of movement descriptors. Current tags are as follows:

- `deposit`: funds deposited
- `withdrawal`: funds withdrawn
- `penalty`: funds paid or gained from a penalty tx.
- `invoice`: funds paid to or received from an invoice.
- `routed`: funds routed through this node.
- `pushed`: funds pushed to peer.
- `channel_open`: channel is opened, initial channel balance
- `channel_close`: channel is closed, final channel balance
- `delayed_to_us`: on-chain output to us, spent back into our wallet
- `htlc_timeout`: on-chain htlc timeout output
- `htlc_fulfill`: on-chain htlc fulfill output
- `htlc_tx`: on-chain htlc tx has happened
- `to_wallet`: output being spent into our wallet
- `ignored`: output is being ignored
- `anchor`: an anchor output
- `to_them`: output intended for the peer's wallet
- `penalized`: output we've 'lost' due to a penalty (failed cheat attempt)
- `stolen`: output we've 'lost' due to peer's cheat
- `to_miner`: output we've burned to miner (OP_RETURN)
- `opener`: tags channel_open, we are the channel opener
- `lease_fee`: amount paid as lease fee
- `leased`: tags channel_open, channel contains leased funds

`blockheight` is the block the txid is included in. For `channel_mvt`s this will be null, as will the blockheight for withdrawals to external parties (we issue these events when we send the tx containing them, before they're included in the chain).

`timestamp` is seconds since Unix epoch of the node's machine time at the time lightningd broadcasts the notification.

`coin_type` is the BIP173 name for the coin which moved.
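Since every movement carries `credit_msat` and `debit_msat`, an account balance can be reconstructed by folding notifications per `account_id`. A minimal sketch (illustrative only; a real bookkeeper would also track tags, blockheights and snapshots):

```python
from collections import defaultdict

def apply_coin_movements(events):
    """Fold a sequence of coin_movement payloads into per-account
    balances (msat). Credits add to an account, debits subtract."""
    balances = defaultdict(int)
    for ev in events:
        mvt = ev["coin_movement"]
        balances[mvt["account_id"]] += mvt["credit_msat"] - mvt["debit_msat"]
    return dict(balances)

events = [
    {"coin_movement": {"account_id": "wallet", "credit_msat": 2000000000, "debit_msat": 0}},
    {"coin_movement": {"account_id": "wallet", "credit_msat": 0, "debit_msat": 500000000}},
]
print(apply_coin_movements(events))  # {'wallet': 1500000000}
```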

### `balance_snapshot`

Emitted after we've caught up to the chain head on first start. Lists all current accounts (`account_id` matches the `account_id` emitted from `coin_movement`). Useful for checkpointing account balances.

```json
{
  "balance_snapshots": [
    {
      "node_id": "035d2b1192dfba134e10e540875d366ebc8bc353d5aa766b80c090b39c3a5d885d",
      "blockheight": 101,
      "timestamp": 1639076327,
      "accounts": [
        {
          "account_id": "wallet",
          "balance": "0msat",
          "coin_type": "bcrt"
        }
      ]
    },
    {
      "node_id": "035d2b1192dfba134e10e540875d366ebc8bc353d5aa766b80c090b39c3a5d885d",
      "blockheight": 110,
      "timestamp": 1639076343,
      "accounts": [
        {
          "account_id": "wallet",
          "balance": "995433000msat",
          "coin_type": "bcrt"
        },
        {
          "account_id": "5b65c199ee862f49758603a5a29081912c8816a7c0243d1667489d244d3d055f",
          "balance": "500000000msat",
          "coin_type": "bcrt"
        }
      ]
    }
  ]
}
```

### `block_added`

Emitted after each block is received from bitcoind, either during the initial sync or throughout the node's life as new blocks appear.

```json
{
  "block": {
    "hash": "000000000000000000034bdb3c01652a0aa8f63d32f949313d55af2509f9d245",
    "height": 753304
  }
}
```

### `openchannel_peer_sigs`

When opening a channel with a peer using the collaborative transaction protocol (`opt_dual_fund`), this notification is fired when the peer sends us their funding transaction signatures, `tx_signatures`. We update the in-progress PSBT and return it here, with the peer's signatures attached.

```json
{
  "openchannel_peer_sigs": {
    "channel_id": "252d1b0a1e5789...",
    "signed_psbt": "cHNidP8BAKgCAAAAAQ+y+61AQAAAAD9////AzbkHAAAAAAAFgAUwsyrFxwqW+natS7EG4JYYwJMVGZQwwAAAAAAACIAIKYE2s4YZ+RON6BB5lYQESHR9cA7hDm6/maYtTzSLA0hUMMAAAAAAAAiACBbjNO5FM9nzdj6YnPJMDU902R2c0+9liECwt9TuQiAzWYAAAAAAQDfAgAAAAABARtaSZufCbC+P+/G23XVaQ8mDwZQFW1vlCsCYhLbmVrpAAAAAAD+////AvJs5ykBAAAAFgAUT6ORgb3CgFsbwSOzNLzF7jQS5s+AhB4AAAAAABepFNi369DMyAJmqX2agouvGHcDKsZkhwJHMEQCIHELIyqrqlwRjyzquEPvqiorzL2hrvdu9EBxsqppeIKiAiBykC6De/PDElnqWw49y2vTqauSJIVBgGtSc+vq5BQd+gEhAg0f8WITWvA8o4grxNKfgdrNDncqreMLeRFiteUlne+GZQAAAAEBIICEHgAAAAAAF6kU2Lfr0MzIAmapfZqCi68YdwMqxmSHAQcXFgAUAfrZCrzWZpfiWSFkci3kqV6+4WUBCGsCRzBEAiBF31wbNWECsJ0DrPel2inWla2hYpCgaxeVgPAvFEOT2AIgWiFWN0hvUaK6kEnXhED50wQ2fBqnobsRhoy1iDDKXE0BIQPXRURck2JmXyLg2W6edm8nPzJg3qOcina/oF3SaE3czwz8CWxpZ2h0bmluZwEIexhVcpJl8ugM/AlsaWdodG5pbmcCAgABAAz8CWxpZ2h0bmluZwEIR7FutlQgkSoADPwJbGlnaHRuaW5nAQhYT+HjxFBqeAAM/AlsaWdodG5pbmcBCOpQ5iiTTNQEAA=="
  }
}
```

### `shutdown`

Sent in two situations: `lightningd` is (almost completely) shut down, or the plugin `stop` command has been called for this plugin. In both cases the plugin has 30 seconds to exit itself, otherwise it's killed.

In the shutdown case, plugins should not interact with `lightningd` except via (id-less) logging or notifications. New RPC calls will fail with error code -5 and the plugin's responses will be ignored. Because `lightningd` can crash or be killed, a plugin cannot rely on the shutdown notification always being sent.
doc/guides/Developer-s Guide/plugin-development/hooks.md (new file, 618 lines)

---
title: "Hooks"
slug: "hooks"
hidden: false
createdAt: "2023-02-03T08:57:58.166Z"
updatedAt: "2023-02-21T15:08:30.254Z"
---
||||
Hooks allow a plugin to define custom behavior for `lightningd` without having to modify the Core Lightning source code itself. A plugin declares that it'd like to be consulted on what to do next for certain events in the daemon. A hook can then decide how `lightningd` should
|
||||
react to the given event.
|
||||
|
||||
When hooks are registered, they can optionally specify "before" and "after" arrays of plugin names, which control what order they will be called in. If a plugin name is unknown, it is ignored, otherwise if the hook calls cannot be ordered to satisfy the specifications of all plugin hooks, the plugin registration will fail.
|
||||
|
||||
The call semantics of the hooks, i.e., when and how hooks are called, depend on the hook type. Most hooks are currently set to `single`-mode. In this mode only a single plugin can register the hook, and that plugin will get called for each event of that type. If a second plugin attempts to register the hook it gets killed and a corresponding log entry will be added to the logs.
|
||||
|
||||
In `chain`-mode multiple plugins can register for the hook type and they are called in any order they are loaded (i.e. cmdline order first, configuration order file second: though note that the order of plugin directories is implementation-dependent), overriden only by `before` and `after` requirements the plugin's hook registrations specify. Each plugin can then handle the event or defer by returning a `continue` result like the following:
|
||||
|
||||
```json
|
||||
{
|
||||
"result": "continue"
|
||||
}
|
||||
```
|
||||
|
||||
|
||||
|
||||
The remainder of the response is ignored and if there are any more plugins that have registered the hook the next one gets called. If there are no more plugins then the internal handling is resumed as if no hook had been called. Any other result returned by a plugin is considered an exit from the chain. Upon exit no more plugin hooks are called for the current event, and
|
||||
the result is executed. Unless otherwise stated all hooks are `single`-mode.
|
||||
|
||||
Hooks and notifications are very similar, however there are a few key differences:
|
||||
|
||||
- Notifications are asynchronous, i.e., `lightningd` will send the notifications but not wait for the plugin to process them. Hooks on the other hand are synchronous, `lightningd` cannot finish processing the event until the plugin has returned.
|
||||
- Any number of plugins can subscribe to a notification topic and get notified in parallel, however only one plugin may register for `single`-mode hook types, and in all cases only one plugin may return a non-`continue` response. This avoids having multiple contradictory responses.
|
||||
|
||||
Hooks are considered to be an advanced feature due to the fact that `lightningd` relies on the plugin to tell it what to do next. Use them carefully, and make sure your plugins always return a valid response to any hook invocation.
|
||||
|
||||
As a convention, for all hooks, returning the object `{ "result" : "continue" }` results in `lightningd` behaving exactly as if no plugin is registered on the hook.
|
||||
|
||||

### `peer_connected`

This hook is called whenever a peer has connected and successfully completed the cryptographic handshake. The parameters have the following structure:

```json
{
  "peer": {
    "id": "03864ef025fde8fb587d989186ce6a4a186895ee44a926bfc370e2c366597a3f8f",
    "direction": "in",
    "addr": "34.239.230.56:9735",
    "features": ""
  }
}
```

The hook is sparse on information, since the plugin can use the JSON-RPC `listpeers` command to get additional details should they be required. `direction` is either `"in"` or `"out"`. The `addr` field shows the address that we are connected to ourselves, not the gossiped list of known addresses. In particular this means that the port for incoming connections is an ephemeral port that may not be available for reconnections.

The returned result must contain a `result` member which is either the string `disconnect` or `continue`. If `disconnect` and there's a member `error_message`, that member is sent to the peer before disconnection.

Note that `peer_connected` is a chained hook. The first plugin that decides to `disconnect`, with or without an `error_message`, will lead to the subsequent plugins not being called anymore.
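
As a sketch, a handler for this hook boils down to mapping the `peer` object to one of the two allowed responses. The blocklist below is a hypothetical example; a real plugin would register this function through its plugin framework (e.g. pyln-client's `@plugin.hook("peer_connected")`).

```python
# Hypothetical blocklist of node ids we refuse connections from.
BLOCKLIST = {"03864ef025fde8fb587d989186ce6a4a186895ee44a926bfc370e2c366597a3f8f"}

def on_peer_connected(peer: dict) -> dict:
    """Return the hook response for a newly connected peer."""
    if peer["id"] in BLOCKLIST:
        # error_message is sent to the peer before disconnection.
        return {"result": "disconnect",
                "error_message": "not accepting connections from this node"}
    # Behave as if no plugin were registered.
    return {"result": "continue"}
```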

### `commitment_revocation`

This hook is called whenever a channel state is updated, and the old state was revoked. State updates in Lightning consist of the following steps:

1. Proposal of a new state commitment in the form of a commitment transaction
2. Exchange of signatures for the agreed upon commitment transaction
3. Verification that the signatures match the commitment transaction
4. Exchange of revocation secrets that could be used to penalize an eventual misbehaving party

The `commitment_revocation` hook is used to inform the plugin about the state transition being completed, and deliver the penalty transaction. The penalty transaction could then be sent to a watchtower that automatically reacts in case one party attempts to settle using a revoked commitment.

The payload consists of the following information:

```json
{
  "commitment_txid": "58eea2cf538cfed79f4d6b809b920b40bb6b35962c4bb4cc81f5550a7728ab05",
  "penalty_tx": "02000000000101...ac00000000",
  "channel_id": "fb16398de93e8690c665873715ef590c038dfac5dd6c49a9d4b61dccfcedc2fb",
  "commitnum": 21
}
```

Notice that the `commitment_txid` could also be extracted from the sole input of the `penalty_tx`; however, it is enclosed so plugins don't have to include the logic to parse transactions.

Not included are the `htlc_success` and `htlc_failure` transactions that may also be spending `commitment_tx` outputs. This is because these transactions are much more dynamic and have a predictable timeout, allowing wallets to ensure a quick checkin when the CLTV of the HTLC is about to expire.

The `commitment_revocation` hook is a chained hook, i.e., multiple plugins can register it, and they will be called in the order they were registered in. Plugins should always return `{"result": "continue"}`, otherwise subsequent hook subscribers would not get called.
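
A minimal watchtower-style handler could simply journal each penalty transaction for later broadcast. The journal file path below is an illustrative assumption, a stand-in for uploading to an actual watchtower service.

```python
import json

# Hypothetical local journal; one JSON entry per revoked state.
JOURNAL = "penalty-journal.jsonl"

def on_commitment_revocation(payload: dict) -> dict:
    entry = {
        "channel_id": payload["channel_id"],
        "commitnum": payload["commitnum"],
        "commitment_txid": payload["commitment_txid"],
        "penalty_tx": payload["penalty_tx"],
    }
    # Append-only, so earlier revoked states remain punishable.
    with open(JOURNAL, "a") as f:
        f.write(json.dumps(entry) + "\n")
    # Chained hook: always continue so later plugins still run.
    return {"result": "continue"}
```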

### `db_write`

This hook is called whenever a change is about to be committed to the database, if you are using a SQLITE3 database (the default). This hook will be useless (the `"writes"` field will always be empty) if you are using a PostgreSQL database.

It is currently extremely restricted:

1. a plugin registering for this hook should not perform anything that may cause a db operation in response (pretty much, anything but logging).
2. a plugin registering for this hook should not register for other hooks or commands, as these may become intermingled and break rule #1.
3. the hook will be called before your plugin is initialized!

This hook, unlike all the other hooks, is also strongly synchronous: `lightningd` will stop almost all the other processing until this hook responds.

```json
{
  "data_version": 42,
  "writes": [
    "PRAGMA foreign_keys = ON"
  ]
}
```

This hook is intended for creating continuous backups. The intent is that your backup plugin maintains three pieces of information (possibly in separate files):

1. a snapshot of the database
2. a log of database queries that will bring that snapshot up-to-date
3. the previous `data_version`

`data_version` is an unsigned 32-bit number that will always increment by 1 each time `db_write` is called. Note that this will wrap around on the limit of 32-bit numbers.

`writes` is an array of strings, each string being a database query that modifies the database. If the `data_version` above is validated correctly, then you can simply append this to the log of database queries.

Your plugin **MUST** validate the `data_version`. It **MUST** keep track of the previous `data_version` it got, and:

1. If the new `data_version` is **_exactly_** one higher than the previous, then this is the ideal case: nothing bad happened and you should save this and continue.
2. If the new `data_version` is **_exactly_** the same value as the previous, then the previous set of queries was not committed. Your plugin **MAY** overwrite the previous set of queries with the current set, or it **MAY** overwrite its entire backup with a new snapshot of the database and the current `writes` array (treating this case as if `data_version` were two or more higher than the previous).
3. If the new `data_version` is **_less than_** the previous, your plugin **MUST** halt and catch fire, and have the operator inspect what exactly happened here.
4. Otherwise, some queries were lost and your plugin **SHOULD** recover by creating a new snapshot of the database: copy the database file, back up the given `writes` array, then delete (or atomically `rename` if in a POSIX filesystem) the previous backups of the database and SQL statements, or you **MAY** fail the hook to abort `lightningd`.
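
The four cases above can be sketched as a pure decision function. This is an illustrative sketch, not lightningd API: the action names are ours, and distinguishing case 3 (version went backwards) from case 4 (writes lost) across the 32-bit wraparound is done here with a simple "large delta means backwards" heuristic.

```python
WRAP = 2 ** 32  # data_version is an unsigned 32-bit counter

def classify_data_version(prev: int, new: int) -> str:
    """Map a (previous, new) data_version pair to the required backup action."""
    delta = (new - prev) % WRAP  # modulo arithmetic handles wraparound
    if delta == 1:
        return "append"          # case 1: ideal, append writes to the log
    if delta == 0:
        return "overwrite-last"  # case 2: previous writes were not committed
    if delta > WRAP // 2:
        return "halt"            # case 3: version went backwards (heuristic)
    return "re-snapshot"         # case 4: some writes were lost
```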

The "rolling up" of the database could be done periodically as well if the log of SQL statements has grown large.

Any response other than `{"result": "continue"}` will cause `lightningd` to error without committing to the database! This is the expected way to halt and catch fire.

`db_write` is a parallel-chained hook, i.e., multiple plugins can register it, and all of them will be invoked simultaneously without regard for order of registration. The hook is considered handled if all registered plugins return `{"result": "continue"}`. If any plugin returns anything else, `lightningd` will error without committing to the database.

### `invoice_payment`

This hook is called whenever a valid payment for an unpaid invoice has arrived.

```json
{
  "payment": {
    "label": "unique-label-for-invoice",
    "preimage": "0000000000000000000000000000000000000000000000000000000000000000",
    "amount_msat": 10000
  }
}
```

The hook is deliberately sparse, since the plugin can use the JSON-RPC `listinvoices` command to get additional details about this invoice. It can return a `failure_message` field as defined for final nodes in [BOLT 4](https://github.com/lightning/bolts/blob/master/04-onion-routing.md#failure-messages), a `result` field with the string
`reject` to fail it with `incorrect_or_unknown_payment_details`, or a `result` field with the string `continue` to accept the payment.
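
A handler for this hook is a small accept/reject decision. The per-label minimum below is a hypothetical policy for illustration.

```python
# Hypothetical minimum amounts (msat) keyed by invoice label.
MIN_MSAT = {"donation": 1000}

def on_invoice_payment(payment: dict) -> dict:
    minimum = MIN_MSAT.get(payment["label"], 0)
    if payment["amount_msat"] < minimum:
        # Fails the payment with incorrect_or_unknown_payment_details.
        return {"result": "reject"}
    return {"result": "continue"}
```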

### `openchannel`

This hook is called whenever a remote peer tries to fund a channel to us using the v1 protocol, and it has passed basic sanity checks:

```json
{
  "openchannel": {
    "id": "03864ef025fde8fb587d989186ce6a4a186895ee44a926bfc370e2c366597a3f8f",
    "funding_msat": 100000000,
    "push_msat": 0,
    "dust_limit_msat": 546000,
    "max_htlc_value_in_flight_msat": 18446744073709551615,
    "channel_reserve_msat": 1000000,
    "htlc_minimum_msat": 0,
    "feerate_per_kw": 7500,
    "to_self_delay": 5,
    "max_accepted_htlcs": 483,
    "channel_flags": 1
  }
}
```

There may be additional fields, such as `shutdown_scriptpubkey` (a hex string). You can see the definitions of these fields in [BOLT 2's description of the open_channel message](https://github.com/lightning/bolts/blob/master/02-peer-protocol.md#the-open_channel-message).

The returned result must contain a `result` member which is either the string `reject` or `continue`. If `reject` and there's a member `error_message`, that member is sent to the peer before disconnection.

For a 'continue'd result, you can also include a `close_to` address, which will be used as the output address for a mutual close transaction.

e.g.

```json
{
  "result": "continue",
  "close_to": "bc1qlq8srqnz64wgklmqvurv7qnr4rvtq2u96hhfg2",
  "mindepth": 0,
  "reserve": "1234sat"
}
```

Note that `close_to` must be a valid address for the current chain; an invalid address will cause the node to exit with an error.

- `mindepth` is the number of confirmations to require before making the channel usable. Notice that setting this to 0 (`zeroconf`) or some other low value might expose you to double-spending issues, so only lower this value from the default if you trust the peer not to double-spend, or you reject incoming payments, including forwards, until the funding is confirmed.

- `reserve` is an absolute value for the amount in the channel that the peer must keep on their side. This ensures that they always have something to lose, so only lower this below 1% of the funding amount if you trust the peer. The protocol requires this to be larger than the dust limit, hence it will be adjusted up to the dust limit if the specified value is below it.

Note that `openchannel` is a chained hook. Therefore `close_to` and `reserve` will only be evaluated for the first plugin that sets them. If more than one plugin tries to set a `close_to` address, an error will be logged.
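
A sketch of the decision logic follows. The minimum funding threshold, address, and reserve values are illustrative assumptions, not defaults of `lightningd`.

```python
# Hypothetical policy: reject channels below a minimum size.
MIN_FUNDING_MSAT = 10_000_000

def on_openchannel(openchannel: dict) -> dict:
    if openchannel["funding_msat"] < MIN_FUNDING_MSAT:
        return {"result": "reject", "error_message": "channel too small"}
    return {
        "result": "continue",
        # Optionally steer the mutual-close output and the peer's reserve.
        "close_to": "bc1qlq8srqnz64wgklmqvurv7qnr4rvtq2u96hhfg2",
        "reserve": "1234sat",
    }
```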

### `openchannel2`

This hook is called whenever a remote peer tries to fund a channel to us using the v2 protocol, and it has passed basic sanity checks:

```json
{
  "openchannel2": {
    "id": "03864ef025fde8fb587d989186ce6a4a186895ee44a926bfc370e2c366597a3f8f",
    "channel_id": "252d1b0a1e57895e84137f28cf19ab2c35847e284c112fefdecc7afeaa5c1de7",
    "their_funding_msat": 100000000,
    "dust_limit_msat": 546000,
    "max_htlc_value_in_flight_msat": 18446744073709551615,
    "htlc_minimum_msat": 0,
    "funding_feerate_per_kw": 7500,
    "commitment_feerate_per_kw": 7500,
    "feerate_our_max": 10000,
    "feerate_our_min": 253,
    "to_self_delay": 5,
    "max_accepted_htlcs": 483,
    "channel_flags": 1,
    "locktime": 2453,
    "channel_max_msat": 16777215000,
    "requested_lease_msat": 100000000,
    "lease_blockheight_start": 683990,
    "node_blockheight": 683990
  }
}
```

There may be additional fields, such as `shutdown_scriptpubkey`. You can see the definitions of these fields in [BOLT 2's description of the open_channel message](https://github.com/lightning/bolts/blob/master/02-peer-protocol.md#the-open_channel-message).

`requested_lease_msat`, `lease_blockheight_start`, and `node_blockheight` are only present if the opening peer has requested a funding lease, per `option_will_fund`.

The returned result must contain a `result` member which is either the string `reject` or `continue`. If `reject` and there's a member `error_message`, that member is sent to the peer before disconnection.

For a 'continue'd result, you can also include a `close_to` address, which will be used as the output address for a mutual close transaction; you can include a `psbt` and an `our_funding_msat` to contribute funds, inputs and outputs to this channel open.

Note that, like the `openchannel_init` RPC call, the `our_funding_msat` amount must NOT be accounted for in any supplied output. Change, however, should be included and should be calculated using the `funding_feerate_per_kw`.

See `plugins/funder.c` for an example of how to use this hook to contribute funds to a channel open.

e.g.

```json
{
  "result": "continue",
  "close_to": "bc1qlq8srqnz64wgklmqvurv7qnr4rvtq2u96hhfg2",
  "psbt": "cHNidP8BADMCAAAAAQ+yBipSVZrrw28Oed52hTw3N7t0HbIyZhFdcZRH3+61AQAAAAD9////AGYAAAAAAQDfAgAAAAABARtaSZufCbC+P+/G23XVaQ8mDwZQFW1vlCsCYhLbmVrpAAAAAAD+////AvJs5ykBAAAAFgAUT6ORgb3CgFsbwSOzNLzF7jQS5s+AhB4AAAAAABepFNi369DMyAJmqX2agouvGHcDKsZkhwJHMEQCIHELIyqrqlwRjyzquEPvqiorzL2hrvdu9EBxsqppeIKiAiBykC6De/PDElnqWw49y2vTqauSJIVBgGtSc+vq5BQd+gEhAg0f8WITWvA8o4grxNKfgdrNDncqreMLeRFiteUlne+GZQAAAAEBIICEHgAAAAAAF6kU2Lfr0MzIAmapfZqCi68YdwMqxmSHAQQWABQB+tkKvNZml+JZIWRyLeSpXr7hZQz8CWxpZ2h0bmluZwEIexhVcpJl8ugM/AlsaWdodG5pbmcCAgABAA==",
  "our_funding_msat": 39999000
}
```

Note that `close_to` must be a valid address for the current chain; an invalid address will cause the node to exit with an error.

Note that `openchannel2` is a chained hook. Therefore `close_to` will only be evaluated for the first plugin that sets it. If more than one plugin tries to set a `close_to` address, an error will be logged.

### `openchannel2_changed`

This hook is called when we received updates to the funding transaction from the peer.

```json
{
  "openchannel2_changed": {
    "channel_id": "252d1b0a1e57895e841...",
    "psbt": "cHNidP8BADMCAAAAAQ+yBipSVZr..."
  }
}
```

In return, we expect a `result` indicating `continue` and an updated `psbt`. If we have no updates to contribute, return the passed-in PSBT. Once no changes to the PSBT are made on either side, the transaction construction negotiation will end and commitment transactions will be exchanged.

#### Expected Return

```json
{
  "result": "continue",
  "psbt": "cHNidP8BADMCAAAAAQ+yBipSVZr..."
}
```

See `plugins/funder.c` for an example of how to use this hook to continue a v2 channel open.

### `openchannel2_sign`

This hook is called after we've gotten the commitment transactions for a channel open. It expects a `psbt` to be returned which contains signatures for our inputs to the funding transaction.

```json
{
  "openchannel2_sign": {
    "channel_id": "252d1b0a1e57895e841...",
    "psbt": "cHNidP8BADMCAAAAAQ+yBipSVZr..."
  }
}
```

In return, we expect a `result` indicating `continue` and a partially signed `psbt`.

If we have no inputs to sign, return the passed-in PSBT. Once we have also received the signatures from the peer, the funding transaction will be broadcast.

#### Expected Return

```json
{
  "result": "continue",
  "psbt": "cHNidP8BADMCAAAAAQ+yBipSVZr..."
}
```

See `plugins/funder.c` for an example of how to use this hook to sign a funding transaction.

### `rbf_channel`

Similar to `openchannel2`, the `rbf_channel` hook is called when a peer requests an RBF for a channel funding transaction.

```json
{
  "rbf_channel": {
    "id": "03864ef025fde8fb587d989186ce6a4a186895ee44a926bfc370e2c366597a3f8f",
    "channel_id": "252d1b0a1e57895e84137f28cf19ab2c35847e284c112fefdecc7afeaa5c1de7",
    "their_last_funding_msat": 100000000,
    "their_funding_msat": 100000000,
    "our_last_funding_msat": 100000000,
    "funding_feerate_per_kw": 7500,
    "feerate_our_max": 10000,
    "feerate_our_min": 253,
    "channel_max_msat": 16777215000,
    "locktime": 2453,
    "requested_lease_msat": 100000000
  }
}
```

The returned result must contain a `result` member which is either the string `reject` or `continue`. If `reject` and there's a member `error_message`, that member is sent to the peer before disconnection.

For a 'continue'd result, you can include a `psbt` and an `our_funding_msat` to contribute funds, inputs and outputs to this channel open.

Note that, like the `openchannel_init` RPC call, the `our_funding_msat` amount must NOT be accounted for in any supplied output. Change, however, should be included and should be calculated using the `funding_feerate_per_kw`.

#### Return

```json
{
  "result": "continue",
  "psbt": "cHNidP8BADMCAAAAAQ+yBipSVZrrw28Oed52hTw3N7t0HbIyZhFdcZRH3+61AQAAAAD9////AGYAAAAAAQDfAgAAAAABARtaSZufCbC+P+/G23XVaQ8mDwZQFW1vlCsCYhLbmVrpAAAAAAD+////AvJs5ykBAAAAFgAUT6ORgb3CgFsbwSOzNLzF7jQS5s+AhB4AAAAAABepFNi369DMyAJmqX2agouvGHcDKsZkhwJHMEQCIHELIyqrqlwRjyzquEPvqiorzL2hrvdu9EBxsqppeIKiAiBykC6De/PDElnqWw49y2vTqauSJIVBgGtSc+vq5BQd+gEhAg0f8WITWvA8o4grxNKfgdrNDncqreMLeRFiteUlne+GZQAAAAEBIICEHgAAAAAAF6kU2Lfr0MzIAmapfZqCi68YdwMqxmSHAQQWABQB+tkKvNZml+JZIWRyLeSpXr7hZQz8CWxpZ2h0bmluZwEIexhVcpJl8ugM/AlsaWdodG5pbmcCAgABAA==",
  "our_funding_msat": 39999000
}
```

### `htlc_accepted`

The `htlc_accepted` hook is called whenever an incoming HTLC is accepted, and its result determines how `lightningd` should treat that HTLC.

The payload of the hook call has the following format:

```json
{
  "onion": {
    "payload": "",
    "short_channel_id": "1x2x3",
    "forward_msat": 42,
    "outgoing_cltv_value": 500014,
    "shared_secret": "0000000000000000000000000000000000000000000000000000000000000000",
    "next_onion": "[1365bytes of serialized onion]"
  },
  "htlc": {
    "short_channel_id": "4x5x6",
    "id": 27,
    "amount_msat": 43,
    "cltv_expiry": 500028,
    "cltv_expiry_relative": 10,
    "payment_hash": "0000000000000000000000000000000000000000000000000000000000000000"
  },
  "forward_to": "0000000000000000000000000000000000000000000000000000000000000000"
}
```

For detailed information about each field please refer to [BOLT 04 of the specification](https://github.com/lightning/bolts/blob/master/04-onion-routing.md); the following is just a brief summary:

- `onion`:
  - `payload` contains the unparsed payload that was sent to us from the sender of the payment.
  - `short_channel_id` determines the channel that the sender is hinting should be used next. Not present if we're the final destination.
  - `forward_msat` is the amount we should be forwarding to the next hop, and should match the incoming funds in case we are the recipient.
  - `outgoing_cltv_value` determines what the CLTV value for the HTLC that we forward to the next hop should be.
  - `total_msat` specifies the total amount to pay, if present.
  - `payment_secret` specifies the payment secret (which the payer should have obtained from the invoice), if present.
  - `next_onion` is the fully processed onion that we should be sending to the next hop as part of the outgoing HTLC. Processed in this case means that we took the incoming onion, decrypted it, extracted the payload destined for us, and serialised the resulting onion again.
  - `shared_secret` is the shared secret we used to decrypt the incoming onion. It is shared with the sender that constructed the onion.
- `htlc`:
  - `short_channel_id` is the channel this payment is coming from.
  - `id` is the low-level sequential HTLC id integer as sent by the channel peer.
  - `amount_msat` is the amount that we received with the HTLC. This amount minus `forward_msat` is the fee that will stay with us.
  - `cltv_expiry` determines when the HTLC reverts back to the sender. `cltv_expiry` minus `outgoing_cltv_value` should be equal to or larger than our `cltv_delta` setting.
  - `cltv_expiry_relative` hints how much time we still have to claim the HTLC. It is the `cltv_expiry` minus the current `blockheight` and is passed along mainly to avoid the plugin having to look up the current blockheight.
  - `payment_hash` is the hash whose `payment_preimage` will unlock the funds and allow us to claim the HTLC.
- `forward_to`: if set, the channel_id we intend to forward this to (will not be present if the short_channel_id was invalid or we were the final destination).

The hook response must have one of the following formats:

```json
{
  "result": "continue"
}
```

This means that the plugin does not want to do anything special and `lightningd` should continue processing it normally, i.e., resolve the payment if we're the recipient, or attempt to forward it otherwise. Notice that the usual checks such as sufficient fees and CLTV deltas are still enforced.

It can also replace the `onion.payload` by specifying a `payload` in the response. Note that this is always a TLV-style payload, so unlike `onion.payload` there is no length prefix (and it must be at least 4 hex digits long). This will be re-parsed; it's useful for removing onion fields which a plugin doesn't want lightningd to consider.

It can also specify `forward_to` in the response, replacing the destination. This usually only makes sense if it wants to choose an alternate channel to the same next peer, but is useful if the `payload` is also replaced.

```json
{
  "result": "fail",
  "failure_message": "2002"
}
```

`fail` will tell `lightningd` to fail the HTLC with a given hex-encoded `failure_message` (please refer to the [spec](https://github.com/lightning/bolts/blob/master/04-onion-routing.md) for details: `incorrect_or_unknown_payment_details` is the most common).

```json
{
  "result": "fail",
  "failure_onion": "[serialized error packet]"
}
```

Instead of `failure_message` the response can contain a hex-encoded `failure_onion` that will be used instead (please refer to the [spec](https://github.com/lightning/bolts/blob/master/04-onion-routing.md) for details). This can be used, for example, if you're writing a bridge between two Lightning Networks. Note that `lightningd` will apply the obfuscation step to the value returned here with its own shared secret (and key type `ammag`) before returning it to the previous hop.

```json
{
  "result": "resolve",
  "payment_key": "0000000000000000000000000000000000000000000000000000000000000000"
}
```

`resolve` instructs `lightningd` to claim the HTLC by providing the preimage matching the `payment_hash` presented in the call. Notice that the plugin must ensure that the `payment_key` really matches the `payment_hash`, since `lightningd` will not check and the wrong value could result in the channel being closed.

> 🚧
>
> `lightningd` will replay the HTLCs for which it doesn't have a final verdict during startup. This means that, if the plugin response wasn't processed before the HTLC was forwarded, failed, or resolved, then the plugin may see the same HTLC again during startup. It is therefore paramount that the plugin is idempotent if it talks to an external system.

The `htlc_accepted` hook is a chained hook, i.e., multiple plugins can register it, and they will be called in the order they were registered in until the first plugin returns a result that is not `{"result": "continue"}`, after which the event is considered to be handled. After the event has been handled, the remaining plugins will be skipped.
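
As a sketch of the `resolve` path, the handler below settles HTLCs for which a preimage is known out-of-band and continues otherwise. The preimage store is illustrative; as noted above, a real plugin must guarantee the returned `payment_key` matches `payment_hash`, since `lightningd` will not check it.

```python
import hashlib

# Hypothetical out-of-band store: payment_hash (hex) -> preimage (hex).
PREIMAGES = {}

def remember_preimage(preimage_hex: str) -> str:
    """Index a preimage under its SHA256 payment hash; returns the hash."""
    h = hashlib.sha256(bytes.fromhex(preimage_hex)).hexdigest()
    PREIMAGES[h] = preimage_hex
    return h

def on_htlc_accepted(onion: dict, htlc: dict) -> dict:
    key = PREIMAGES.get(htlc["payment_hash"])
    if key is not None:
        # Claim the HTLC; the preimage provably matches the hash here.
        return {"result": "resolve", "payment_key": key}
    # Let lightningd handle it normally (forward or resolve internally).
    return {"result": "continue"}
```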

### `rpc_command`

The `rpc_command` hook allows a plugin to take over any RPC command. It sends the received JSON-RPC request (for any method!) to the registered plugin:

```json
{
  "rpc_command": {
    "id": 3,
    "method": "method_name",
    "params": {
      "param_1": [],
      "param_2": {},
      "param_n": ""
    }
  }
}
```

The plugin can, in turn:

Let `lightningd` execute the command with

```json
{
  "result": "continue"
}
```

Replace the request made to `lightningd`:

```json
{
  "replace": {
    "id": 3,
    "method": "method_name",
    "params": {
      "param_1": [],
      "param_2": {},
      "param_n": ""
    }
  }
}
```

Return a custom response to the request sender:

```json
{
  "return": {
    "result": {
    }
  }
}
```

Return a custom error to the request sender:

```json
{
  "return": {
    "error": {
    }
  }
}
```

Note: The `rpc_command` hook is chainable. If two or more plugins try to replace/result/error the same `method`, only the first plugin in the chain will be respected. Others will be ignored and a warning will be logged.
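
The three response shapes can be illustrated with one dispatcher. The `donate` method and its rewrite to `pay` are hypothetical, chosen only to show the `replace` form next to `continue`.

```python
def on_rpc_command(rpc_command: dict) -> dict:
    """Rewrite a hypothetical 'donate' call into 'pay'; pass everything else through."""
    if rpc_command["method"] == "donate":
        return {
            "replace": {
                "id": rpc_command["id"],  # keep the caller's request id
                "method": "pay",
                "params": rpc_command["params"],
            }
        }
    # Let lightningd execute the command unchanged.
    return {"result": "continue"}
```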

### `custommsg`

The `custommsg` plugin hook is the receiving counterpart to the [`sendcustommsg`](ref:lightning-sendcustommsg) RPC method and allows plugins to handle messages that are not handled internally. The goal of these two components is to allow the implementation of custom protocols or prototypes on top of a Core Lightning node, without having to change the node's implementation itself.

The payload for a call follows this format:

```json
{
  "peer_id": "02df5ffe895c778e10f7742a6c5b8a0cefbe9465df58b92fadeb883752c8107c8f",
  "payload": "1337ffffffff"
}
```

This payload would have been sent by the peer with the `node_id` matching `peer_id`, and the message has type `0x1337` and contents `ffffffff`. Notice that the messages are currently limited to odd-numbered types and must not match a type that is handled internally by Core Lightning. These limitations are in place in order to avoid conflicts with the internal state tracking and avoid disconnections or channel closures, since odd-numbered messages can be ignored by nodes (see ["it's ok to be odd" in the specification](https://github.com/lightning/bolts/blob/c74a3bbcf890799d343c62cb05fcbcdc952a1cf3/01-messaging.md#lightning-message-format) for details). The plugin must implement the parsing of the message, including the type prefix, since Core Lightning does not know how to parse the message.

Because this is a chained hook, the daemon expects the result to be `{'result': 'continue'}`. It will fail if something else is returned.
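
Parsing the type prefix is straightforward: the first two bytes of the payload are the big-endian message type and the remainder is the body. A minimal sketch:

```python
def parse_custommsg(payload_hex: str):
    """Split a custommsg payload into (type, body_hex)."""
    raw = bytes.fromhex(payload_hex)
    msg_type = int.from_bytes(raw[:2], "big")  # 2-byte big-endian type prefix
    return msg_type, raw[2:].hex()

def on_custommsg(peer_id: str, payload: str) -> dict:
    msg_type, body = parse_custommsg(payload)
    # A real plugin would dispatch on msg_type here; chained hooks
    # must return "continue" regardless.
    return {"result": "continue"}
```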

### `onion_message_recv` and `onion_message_recv_secret`

> 🚧 experimental-offers only

These two hooks are almost identical, in that they are called when an onion message is received.

`onion_message_recv` is used for unsolicited messages (where the source knows that it is sending to this node), and `onion_message_recv_secret` is used for messages which use a blinded path we supplied. The latter hook will have a `pathsecret` field, the former never will.

These hooks are separate, because replies MUST be ignored unless they use the correct path (i.e. `onion_message_recv_secret`, with the expected `pathsecret`). This avoids the source trying to probe for responses without using the designated delivery path.

The payload for a call follows this format:

```json
{
  "onion_message": {
    "pathsecret": "0000000000000000000000000000000000000000000000000000000000000000",
    "reply_first_node": "02df5ffe895c778e10f7742a6c5b8a0cefbe9465df58b92fadeb883752c8107c8f",
    "reply_blinding": "02df5ffe895c778e10f7742a6c5b8a0cefbe9465df58b92fadeb883752c8107c8f",
    "reply_path": [
      {
        "id": "02df5ffe895c778e10f7742a6c5b8a0cefbe9465df58b92fadeb883752c8107c8f",
        "encrypted_recipient_data": "0a020d0d",
        "blinding": "02df5ffe895c778e10f7742a6c5b8a0cefbe9465df58b92fadeb883752c8107c8f"
      }
    ],
    "invoice_request": "0a020d0d",
    "invoice": "0a020d0d",
    "invoice_error": "0a020d0d",
    "unknown_fields": [
      {
        "number": 12345,
        "value": "0a020d0d"
      }
    ]
  }
}
```

All fields shown here are optional.

We suggest just returning `{'result': 'continue'}`; any other result will cause the message not to be handed to any other hooks.

---
title: "JSON-RPC passthrough"
slug: "json-rpc-passthrough"
hidden: false
createdAt: "2023-02-03T08:53:50.840Z"
updatedAt: "2023-02-03T08:53:50.840Z"
---

Plugins may register their own JSON-RPC methods that are exposed through the JSON-RPC provided by `lightningd`. This provides users with a single interface to interact with, while allowing the addition of custom methods without having to modify the daemon itself.

JSON-RPC methods are registered as part of the `getmanifest` result. Each registered method must provide a `name` and a `description`. An optional `long_description` may also be provided. This information is then added to the internal dispatch table and used to return the help text when using `lightning-cli help`, and the methods can be called using the `name`.

For example, this `getmanifest` result will register two methods, called `hello` and `gettime`:

```json
...
"rpcmethods": [
  {
    "name": "hello",
    "usage": "[name]",
    "description": "Returns a personalized greeting for {greeting} (set via options)."
  },
  {
    "name": "gettime",
    "description": "Returns the current time in {timezone}",
    "usage": "",
    "long_description": "Returns the current time in the timezone that is given as the only parameter.\nThis description may be quite long and is allowed to span multiple lines."
  }
],
...
```

The RPC call will be passed through unmodified, with the exception of the JSON-RPC call `id`, which is internally remapped to a unique integer instead, in order to avoid collisions. When passing the result back, the `id` field is restored to its original value.

Note that if your `result` for an RPC call includes `"format-hint": "simple"`, then `lightning-cli` will default to printing your output in "human-readable" flat form.
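
The `id` remapping described above can be pictured with a small sketch. The function names and data structures are ours, purely to illustrate the forward-then-restore bookkeeping; they are not part of `lightningd`.

```python
import itertools

_next_id = itertools.count(1)   # fresh internal integer ids
_pending = {}                   # internal id -> caller's original id

def forward_request(request: dict) -> dict:
    """Re-issue a caller's request under a unique internal integer id."""
    internal = next(_next_id)
    _pending[internal] = request["id"]
    return {**request, "id": internal}

def restore_response(response: dict) -> dict:
    """Restore the caller's original id before passing the result back."""
    return {**response, "id": _pending.pop(response["id"])}
```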
---
title: "Plugin manager"
slug: "plugin-manager"
excerpt: "Learn how to add your plugin to the `reckless` plugin manager."
hidden: false
createdAt: "2023-02-08T13:22:17.211Z"
updatedAt: "2023-02-21T15:11:45.714Z"
---
`reckless` is a plugin manager for Core Lightning that you can use to install and uninstall [plugins](doc:plugins) with a single command.

To make your plugin compatible with reckless install:

- Choose a unique plugin name.
- The plugin entrypoint is inferred. Naming your plugin executable the same as your plugin name will allow reckless to identify it correctly (file extensions are okay).
- For python plugins, a requirements.txt is the preferred medium for python dependencies. A pyproject.toml will be used as a fallback, but test installation via `pip install -e .` - Poetry looks for additional files in the working directory, whereas with pip, any references to these will require something like `packages = [{ include = "*.py" }]` under the `[tool.poetry]` section.
- Additional repository sources may be added with `reckless source add https://my.repo.url/here`; however, <https://github.com/lightningd/plugins> is included by default. Consider adding your plugin to lightningd/plugins to make installation simpler.
- If your plugin is located in a subdirectory of your repo with a different name than your plugin, it will likely be overlooked.

> 📘
>
> As reckless needs to know how to handle and install the dependencies of a plugin, the current version only supports python plugins. We are working on broader support, e.g., for javascript, golang and other popular programming languages.
>
> Stay tuned and tell us what languages you need support for, and what features you're missing.
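Putting the naming rules above together, a reckless-installable python plugin repository might look like this (the `myplugin` name and layout are hypothetical, for illustration only):

```
myplugin/              # repository, or a subdirectory named after the plugin
├── myplugin.py        # entrypoint named after the plugin (extension is fine)
└── requirements.txt   # preferred way to declare python dependencies
```

With the plugin published in the default source or one added via `reckless source add`, `reckless install myplugin` should then be able to locate the entrypoint and its dependencies.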

10
doc/guides/Developer-s Guide/tutorials.md
Normal file

@@ -0,0 +1,10 @@
---
title: "Recipes"
slug: "tutorials"
excerpt: "Explore working tutorials for common use cases."
hidden: false
createdAt: "2022-12-09T09:56:46.172Z"
updatedAt: "2023-02-02T13:59:44.679Z"
type: "link"
link_url: "https://docs.corelightning.org/recipes"
---
7
doc/guides/Getting Started/advanced-setup.md
Normal file

@@ -0,0 +1,7 @@
---
title: "Advanced setup"
slug: "advanced-setup"
hidden: false
createdAt: "2023-01-25T10:54:00.674Z"
updatedAt: "2023-01-25T10:55:28.187Z"
---
25
doc/guides/Getting Started/advanced-setup/bitcoin-core.md
Normal file

@@ -0,0 +1,25 @@
---
title: "Bitcoin Core"
slug: "bitcoin-core"
hidden: false
createdAt: "2023-01-31T13:24:19.300Z"
updatedAt: "2023-02-21T13:30:53.906Z"
---
# Using a pruned Bitcoin Core node

Core Lightning requires JSON-RPC access to a fully synchronized `bitcoind` in order to synchronize with the Bitcoin network.

Access to ZeroMQ is not required, and `bitcoind` does not need to be run with `txindex` like other implementations.

The lightning daemon will poll `bitcoind` for new blocks that it hasn't processed yet, thus synchronizing itself with `bitcoind`.

If `bitcoind` prunes a block that Core Lightning has not processed yet, e.g., because Core Lightning was not running for a prolonged period, then `bitcoind` will not be able to serve the missing blocks. Core Lightning will then be unable to synchronize any further and will be stuck.

In order to avoid this situation you should monitor the gap between Core Lightning's blockheight, obtained via `[lightning-cli](ref:lightning-cli) getinfo`, and `bitcoind`'s blockheight, obtained via `bitcoin-cli getblockchaininfo`. If the two blockheights drift apart it might be necessary to intervene.
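This monitoring is easy to script. A small sketch follows; the `check_gap` helper name and the 6-block alert threshold are arbitrary choices of ours, not part of Core Lightning:

```shell
#!/bin/sh
# Compare the two heights and warn when lightningd falls behind bitcoind.
check_gap() {
  # $1 = bitcoind blockheight, $2 = lightningd blockheight
  gap=$(( $1 - $2 ))
  if [ "$gap" -gt 6 ]; then
    echo "WARNING: lightningd is $gap blocks behind"
  else
    echo "OK (gap=$gap)"
  fi
}

# In a real cron job the heights would come from the running daemons:
#   btc_height=$(bitcoin-cli getblockchaininfo | jq -r '.blocks')
#   cl_height=$(lightning-cli getinfo | jq -r '.blockheight')
check_gap 800000 799990
```

Running a check like this periodically lets you restart or resync before `bitcoind` prunes past the point Core Lightning has processed.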

# Connecting to Bitcoin Core remotely

You can use _trusted_ third-party plugins as bitcoin backends instead of using your own node.

- [sauron](https://github.com/lightningd/plugins/tree/master/sauron) is a bitcoin backend plugin relying on [Esplora](https://github.com/Blockstream/esplora).
- [trustedcoin](https://github.com/nbd-wtf/trustedcoin) is a plugin that uses block explorers (blockstream.info, mempool.space, blockchair.com and blockchain.info) as backends instead of your own bitcoin node.
220
doc/guides/Getting Started/advanced-setup/repro.md
Normal file

@@ -0,0 +1,220 @@
---
title: "Reproducible builds"
slug: "repro"
hidden: false
createdAt: "2023-01-25T10:37:03.476Z"
updatedAt: "2023-04-22T13:02:34.236Z"
---
Reproducible builds close the final gap in the lifecycle of open-source projects by allowing maintainers to verify and certify that a given binary was indeed produced by compiling an unmodified version of the publicly available source. In particular, the maintainer certifies a) that the binary corresponds to the exact version of the published source, and b) that no malicious changes have been applied before or after the compilation.

Core Lightning has provided a manifest of the binaries included in a release, along with signatures from the maintainers, since version 0.6.2.

The steps involved in creating reproducible builds are:

- Creation of a known environment in which to build the source code
- Removal of variance during the compilation (randomness, timestamps, etc)
- Packaging of binaries
- Creation of a manifest (`SHA256SUMS` file containing the cryptographic hashes of the binaries and packages)
- Signing of the manifest by maintainers and volunteers that have reproduced the files in the manifest starting from the source.

The bulk of these operations is handled by the [`repro-build.sh`](https://github.com/ElementsProject/lightning/blob/master/tools/repro-build.sh) script, but some manual operations are required to set up the build environment. Since a binary is built against platform-specific libraries, we also need to replicate the steps once for each OS distribution and architecture, so the majority of this guide describes how to set those up starting from a minimal trusted base. This minimal trusted base in most cases is the official installation medium from the OS provider.

Note: Since your signature certifies the integrity of the resulting binaries, please familiarize yourself with both the [`repro-build.sh`](https://github.com/ElementsProject/lightning/blob/master/tools/repro-build.sh) script and the setup instructions for the build environments before signing anything.
# Build Environment Setup

The build environments are a set of docker images that are created directly from the installation mediums and repositories of the OS provider. The following sections describe how to create those images. Don't worry, you only have to create each image once and can then reuse the images for future builds.

## Base image creation

The instructions to create a base image vary depending on the distribution that we want to build for. The following sections discuss the specific instructions for each distribution; once we have the base image, the remaining instructions are identical.

### Debian / Ubuntu and derivative OSs

For operating systems derived from Debian we can use the `debootstrap` tool to build a minimal OS image that can then be transformed into a docker image. The packages for the minimal OS image are directly downloaded from the installation repositories operated by the OS provider.

We cannot really use the `debian` and `ubuntu` images from the docker hub, mainly because they'd be yet another trusted third party, but it is also complicated by the fact that the images have some of the packages updated. The latter means that if we disable the `updates` and `security` repositories for `apt` we find ourselves in a situation where we can't install any additional packages (wrongly updated packages depend on versions not available in the non-updated repos).

The following list shows the codenames of distributions that we currently support:

- Ubuntu 18.04:
  - Distribution Version: 18.04
  - Codename: bionic
- Ubuntu 20.04:
  - Distribution Version: 20.04
  - Codename: focal
- Ubuntu 22.04:
  - Distribution Version: 22.04
  - Codename: jammy

Depending on your host OS release you might not have `debootstrap` manifests for versions newer than your host OS. Due to this we run the `debootstrap` commands in a container of the latest version itself:

```shell
for v in bionic focal jammy; do
  echo "Building base image for $v"
  sudo docker run --rm -v $(pwd):/build ubuntu:22.04 \
    bash -c "apt-get update && apt-get install -y debootstrap && debootstrap $v /build/$v"
  sudo tar -C $v -c . | sudo docker import - $v
done
```

Verify that the image corresponds to our expectation and is runnable:

```shell
sudo docker run bionic cat /etc/lsb-release
```

Which should result in the following output for `bionic`:

```shell
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=18.04
DISTRIB_CODENAME=bionic
DISTRIB_DESCRIPTION="Ubuntu 18.04 LTS"
```

## Builder image setup

Once we have the clean base image we need to customize it to be able to build Core Lightning. This includes disabling the update repositories, downloading the build dependencies and specifying the steps required to perform the build.

For this purpose we have a number of Dockerfiles in the [`contrib/reprobuild`](https://github.com/ElementsProject/lightning/tree/master/contrib/reprobuild) directory that have the specific instructions for each base image.

We can then build the builder image by calling `docker build` and passing it the `Dockerfile`:

```shell
sudo docker build -t cl-repro-bionic - < contrib/reprobuild/Dockerfile.bionic
sudo docker build -t cl-repro-focal - < contrib/reprobuild/Dockerfile.focal
sudo docker build -t cl-repro-jammy - < contrib/reprobuild/Dockerfile.jammy
```

Since we pass the `Dockerfile` through `stdin`, the build command will not create a context, i.e., the current directory is not passed to `docker` and the image will be independent of the currently checked out version. This also means that you will be able to reuse the docker image for future builds, and don't have to repeat this dance every time. Verifying the `Dockerfile` is therefore sufficient to ensure that the resulting `cl-repro-<codename>` image is reproducible.

The Dockerfiles assume that the base image has the codename as its image name.

# Building using the builder image

Finally, after this rather lengthy setup, we can perform the actual build. At this point we have a container image that has been prepared to build reproducibly. As you can see from the `Dockerfile` above, we assume the source git repository gets mounted as `/repo` in the docker container. The container will clone the repository to an internal path in order to keep the repository clean, build the artifacts there, and then copy them back to `/repo/release`. We'll need the release directory available for this, so create it now if it doesn't exist (`mkdir release`); then we can simply execute the following command inside the git repository (remember to check out the tag you are trying to build):

```bash
sudo docker run --rm -v $(pwd):/repo -ti cl-repro-bionic
sudo docker run --rm -v $(pwd):/repo -ti cl-repro-focal
sudo docker run --rm -v $(pwd):/repo -ti cl-repro-jammy
```

The last few lines of output also contain the `sha256sum` hashes of all artifacts, so if you're just verifying the build those are the lines that are of interest to you:

```shell
ee83cf4948228ab1f644dbd9d28541fd8ef7c453a3fec90462b08371a8686df8  /repo/release/clightning-v0.9.0rc1-Ubuntu-18.04.tar.xz
94bd77f400c332ac7571532c9f85b141a266941057e8fe1bfa04f054918d8c33  /repo/release/clightning-v0.9.0rc1.zip
```

Repeat this step for each distribution and each architecture you wish to sign. Once all the binaries are in the `release/` subdirectory we can sign the hashes:

# (Co-)Signing the release manifest

The release captain is in charge of creating the manifest, whereas contributors and interested bystanders may contribute their signatures to further increase trust in the binaries.

The release captain creates the manifest as follows:

```shell
cd release/
sha256sum *v0.9.0* > SHA256SUMS
gpg -sb --armor SHA256SUMS
```

Co-maintainers and contributors wishing to add their own signature verify that the `SHA256SUMS` and `SHA256SUMS.asc` files created by the release captain match their binaries before also signing the manifest:

```shell
cd release/
gpg --verify SHA256SUMS.asc
sha256sum -c SHA256SUMS
cat SHA256SUMS | gpg -sb --armor > SHA256SUMS.new
```

Then send the resulting `SHA256SUMS.new` file to the release captain so it can be merged with the other signatures into `SHA256SUMS.asc`.

# Verifying a reproducible build

You can verify the reproducible build in two ways:

- Repeating the entire reproducible build, making sure from scratch that the binaries match. Just follow the instructions above for this.
- Verifying that the downloaded binaries match the hashes in `SHA256SUMS` and that the signatures in `SHA256SUMS.asc` are valid.

Assuming you have downloaded the binaries, the manifest and the signatures into the same directory, you can verify the signatures with the following:

```shell
gpg --verify SHA256SUMS.asc
```

And you should see a list of messages like the following:

```shell
gpg: assuming signed data in 'SHA256SUMS'
gpg: Signature made Fr 08 Mai 2020 07:46:38 CEST
gpg:                using RSA key 15EE8D6CAB0E7F0CF999BFCBD9200E6CD1ADB8F1
gpg: Good signature from "Rusty Russell <rusty@rustcorp.com.au>" [full]
gpg: Signature made Fr 08 Mai 2020 12:30:10 CEST
gpg:                using RSA key B7C4BE81184FC203D52C35C51416D83DC4F0E86D
gpg: Good signature from "Christian Decker <decker.christian@gmail.com>" [ultimate]
gpg: Signature made Fr 08 Mai 2020 21:35:28 CEST
gpg:                using RSA key 30DE693AE0DE9E37B3E7EB6BBFF0F67810C1EED1
gpg: Good signature from "Lisa Neigut <niftynei@gmail.com>" [full]
```

If there are any issues, `gpg` will print `Bad signature`. This might be because the signatures in `SHA256SUMS.asc` do not match the `SHA256SUMS` file, which could be the result of a filename change; if that is not the case, a failure here means that the verification failed. Do not continue using the binaries, and contact the maintainers.

Next we verify that the binaries match the ones in the manifest:

```shell
sha256sum -c SHA256SUMS
```

Producing output similar to the following:

```shell
sha256sum: clightning-v0.9.0-Fedora-28-amd64.tar.gz: No such file or directory
clightning-v0.9.0-Fedora-28-amd64.tar.gz: FAILED open or read
clightning-v0.9.0-Ubuntu-18.04.tar.xz: OK
clightning-v0.9.0.zip: OK
sha256sum: WARNING: 1 listed file could not be read
```

Notice that the two files we downloaded are marked as `OK`, but we're missing one file. If you didn't download that file this is to be expected, and is nothing to worry about. A failure to verify the hash would give a warning like the following:

```shell
sha256sum: WARNING: 1 computed checksum did NOT match
```

If both the signature verification and the manifest checksum verification succeeded, then you have just successfully verified a reproducible build and, assuming you trust the maintainers, are good to install and use the binaries. Congratulations! 🎉🥳
399
doc/guides/Getting Started/advanced-setup/tor.md
Normal file

@@ -0,0 +1,399 @@
---
title: "Using Tor"
slug: "tor"
hidden: false
createdAt: "2023-01-25T10:55:50.059Z"
updatedAt: "2023-02-21T13:30:33.294Z"
---
To use any Tor features with Core Lightning you must have Tor installed and running.

Note that we only support Tor v3: you can check your installed Tor version with `tor --version` or `sudo tor --version`.

If Tor is not installed you can install it on Debian-based Linux systems (Ubuntu, Debian, etc) with the following command:

```shell
sudo apt install tor
```

Then start it with `/etc/init.d/tor start` or `sudo systemctl enable --now tor`, depending on your system configuration.

Most default settings should be sufficient.

To keep a safe configuration for minimal harassment (see [Tor FAQ](https://www.torproject.org/docs/faq.html.en#WhatIsTor)) just check that this line is present in the Tor config file `/etc/tor/torrc`:

`ExitPolicy reject *:* # no exits allowed`

This does not affect Core Lightning connect, listen, etc. It will only prevent your node from becoming a Tor exit node. Only enable this if you are sure about the implications.

If you don't want to create .onion addresses this should be enough.

There are several ways by which a Core Lightning node can accept or make connections over Tor. The node can be reached over Tor by connecting to its .onion address.

To provide the node with a .onion address you can:

- create a **non-persistent** address with an auto service, or
- create a **persistent** address with a hidden service.

### Quick Start On Linux

It is easy to create a single persistent Tor address and not announce a public IP. This is ideal for most setups where you have an ISP-provided router connecting your Internet to your local network and computer, as it does not require a stable public IP from your ISP (which might not give one to you for free), nor port forwarding (which can be hard to set up for random cheap router models). Tor provides NAT-traversal for free, so even if you or your ISP has a complex network between you and the Internet, as long as you can use Tor you can be connected to.
> 📘
>
> Core Lightning also supports IPv4/6 address discovery behind NAT routers.

For this to work you need to forward the default TCP port 9735 to your node. In this case you don't need Tor to punch through your firewall. IP discovery is only active if no other addresses are announced. This usually has the benefit of quicker and more stable connections, but does not offer additional privacy.

On most Linux distributions, making a standard installation of `tor` will automatically set it up to have a SOCKS5 proxy at port 9050. In addition, you have to set up the Tor Control Port. On most Linux distributions there will be commented-out settings like the following in `/etc/tor/torrc`:

```shell
ControlPort 9051
CookieAuthentication 1
CookieAuthFile /var/lib/tor/control_auth_cookie
CookieAuthFileGroupReadable 1
```

Uncomment those, then restart `tor` (usually `systemctl restart tor` or `sudo systemctl restart tor` on most SystemD-based systems, including recent Debian and Ubuntu, or just restart the entire computer if you cannot figure it out).

On some systems (such as Arch Linux), you may also need to add the following setting:

```shell
DataDirectoryGroupReadable 1
```

You also need to make your user a member of the Tor group. "Your user" here is whatever user will run `lightningd`. On Debian-derived systems, the Tor group will most likely be `debian-tor`. You can try listing all groups with the below command, and check for a `debian-tor` or `tor` group name.

```shell
getent group | cut -d: -f1 | sort
```

Alternatively, you could check the group of the cookie file directly. Usually, on most Linux systems, that would be `/run/tor/control.authcookie`:

```shell
stat -c '%G' /run/tor/control.authcookie
```

Once you have determined the `${TORGROUP}` and selected the `${LIGHTNINGUSER}` that will run `lightningd`, run this as root:

```shell
usermod -a -G ${TORGROUP} ${LIGHTNINGUSER}
```

Then restart the computer (logging out and logging in again should also work). Confirm that `${LIGHTNINGUSER}` is in `${TORGROUP}` by running the `groups` command as `${LIGHTNINGUSER}` and checking `${TORGROUP}` is listed.
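The membership check can also be scripted. A small sketch follows; the `in_group` helper name is ours, and `debian-tor` is the typical group on Debian-derived systems, so adjust it to whatever group your system uses:

```shell
#!/bin/sh
# Succeeds if the current user belongs to the named group.
in_group() {
  groups "$(id -un)" | grep -qw "$1"
}

if in_group debian-tor; then
  echo "OK: user should be able to read Tor's control cookie"
else
  echo "Not in the Tor group yet; re-run usermod and log in again"
fi
```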

If `/run/tor/control.authcookie` exists on your system, then log in as the user that will run `lightningd` and check this command:

```shell
cat /run/tor/control.authcookie > /dev/null
```

If the above prints nothing and returns, then Core Lightning "should" work with your Tor. If it prints an error, some configuration problem will likely prevent Core Lightning from working with your Tor.

Then make sure these are in your `${LIGHTNING_DIR}/config` or other Core Lightning configuration (or prepend `--` to each of them and add them to your `lightningd` invocation command line):

```shell
proxy=127.0.0.1:9050
bind-addr=127.0.0.1:9735
addr=statictor:127.0.0.1:9051
always-use-proxy=true
```

1. `proxy` informs Core Lightning that you have a SOCKS5 proxy at port 9050. Core Lightning will assume that this is a Tor proxy; port 9050 is the default in most Linux distributions. You can double-check `/etc/tor/torrc` for a `SocksPort` entry to confirm the port number.
2. `bind-addr` informs Core Lightning to bind itself to port 9735. This is needed for the subsequent `statictor` to work. 9735 is the normal Lightning Network port, so this setting may already be present. If you add a second `bind-addr=...` you may get errors, so choose this new one or keep the old one, but don't keep both. This has to appear before any `statictor:` setting.
3. `addr=statictor:` informs Core Lightning that you want to create a persistent hidden service that is based on your node private key. This informs Core Lightning as well that the Tor Control Port is 9051. You can also use `bind-addr=statictor:` instead to not announce the persistent hidden service, but if anyone wants to make a channel with you, you either have to connect to them, or you have to reveal your address to them explicitly (i.e. autopilots and the like will likely never connect to you).
4. `always-use-proxy` informs Core Lightning to always use Tor, even when connecting to nodes with public IPs. You can set this to `false` or remove it if you are not privacy-conscious **and** find Tor is too slow for you.
### Tor Browser and Orbot

It is possible to not install Tor on your computer and rely on just Tor Browser. Tor Browser will run a built-in Tor instance, but with the proxy at port 9150 and the control port at 9151 (the normal Tor has, by default, the proxy at port 9050 and the control port at 9051). The mobile Orbot uses the same defaults as Tor Browser (9150 and 9151).

You can then use these settings for Core Lightning:

```shell
proxy=127.0.0.1:9150
bind-addr=127.0.0.1:9735
addr=statictor:127.0.0.1:9151
always-use-proxy=true
```

You will have to run Core Lightning after launching Tor Browser or Orbot, and keep Tor Browser or Orbot open as long as Core Lightning is running, but this is a setup which allows others to connect and fund channels to you, anywhere (no port forwarding! works wherever Tor works!), and you do not have to do anything more complicated than download and install Tor Browser. This may be useful for operating system distributions that do not have Tor in their repositories, assuming we can ever get Core Lightning running on those.

### Detailed Discussion

#### Three Ways to Create .onion Addresses for Core Lightning

1. You can configure Tor to create an onion address for you, and tell Core Lightning to use that address.
2. You can have Core Lightning tell Tor to create a new onion address every time.
3. You can configure Core Lightning to tell Tor to create the same onion address every time it starts up.

#### Tor-Created .onion Address

Having Tor create an onion address lets you run other services (e.g. a web server) at that same address; you just tell that address to Core Lightning and it doesn't have to talk to the Tor server at all.

Put the following in your `/etc/tor/torrc` file:

```shell
HiddenServiceDir /var/lib/tor/lightningd-service_v3/
HiddenServiceVersion 3
HiddenServicePort 1234 127.0.0.1:9735
```

The hidden lightning service will be reachable at port 1234 (global port) of the .onion address, which will be created at the restart of the Tor service. Both types of addresses can coexist on the same node.

Save the file and restart the Tor service. In Linux:

`/etc/init.d/tor restart` or `sudo systemctl restart tor`, depending on the configuration of your system.

You will find the newly created address (myaddress.onion) with:

```shell
sudo cat /var/lib/tor/lightningd-service_v3/hostname
```

Now you need to tell Core Lightning to advertise that onion hostname and port, by placing `announce-addr=myaddress.onion` in your lightning config.
#### Letting Core Lightning Control Tor

To have Core Lightning control your Tor addresses, you have to tell Tor to accept control commands from Core Lightning, either by using a cookie or a password.

##### Service authenticated by cookie

This tells Tor to create a cookie file each time; lightningd will have to be in the same group as tor (e.g. debian-tor): you can look at `/run/tor/control.authcookie` to check the group name.

Add the following lines in the `/etc/tor/torrc` file:

```shell
ControlPort 9051
CookieAuthentication 1
CookieAuthFileGroupReadable 1
```

Save the file and restart the Tor service.

##### Service authenticated by password

This tells Tor to allow password access: you also need to tell lightningd what the password is.

Create a hash of your password with

```shell
tor --hash-password yourpassword
```

This returns a line like

`16:533E3963988E038560A8C4EE6BBEE8DB106B38F9C8A7F81FE38D2A3B1F`

Put these lines in the `/etc/tor/torrc` file:

```shell
ControlPort 9051
HashedControlPassword 16:533E3963988E038560A8C4EE6BBEE8DB106B38F9C8A7F81FE38D2A3B1F
```

Save the file and restart the Tor service.

Put `tor-service-password=yourpassword` (not the hash) in your lightning configuration file.

##### Core Lightning Creating Persistent Hidden Addresses

This is usually better than transient addresses, as nodes won't have to wait for gossip propagation to find out your new address each time you restart.

Once you've configured access to Tor as described above, you need to add _two_ lines in your lightningd config file:

1. A local address which lightningd can tell Tor to connect to when connections come in, e.g. `bind-addr=127.0.0.1:9735`.
2. After that, an `addr=statictor:127.0.0.1:9051` to tell Core Lightning to set up and announce a Tor onion address (and tell Tor to send connections to our real address, above).

You can use `bind-addr` if you want to set up the onion address and not announce it to the world for some reason.

You may add more `addr` lines if you want to advertise other addresses.

There is an older method, called "autotor" instead of "statictor", which creates a different Tor address on each restart; this is usually not very helpful. You need to use `lightning-cli getinfo` to see what address it is currently using, and other peers need to wait for fresh gossip messages if you announce it, before they can connect.
### What do we support
|
||||
|
||||
| Case # | IP Number | Hidden service | Incoming / Outgoing Tor |
|
||||
| ------ | ------------- | ----------------------- | ----------------------- |
|
||||
| 1 | Public | NO | Outgoing |
|
||||
| 2 | Public | FIXED BY TOR | Incoming [1] |
|
||||
| 3 | Public | FIXED BY CORE LIGHTNING | Incoming [1] |
|
||||
| 4 | Not Announced | FIXED BY TOR | Incoming [1] |
|
||||
| 5 | Not Announced | FIXED BY CORE LIGHTNING | Incoming [1] |
|
||||
|
||||
> 📘
|
||||
>
|
||||
> In all the "Incoming" use case, the node can also make "Outgoing" Tor
|
||||
> connections (connect to a .onion address) by adding the `proxy=127.0.0.1:9050` option.
|
||||
|
||||
#### Case #1: Public IP address and no Tor address, but can connect to Tor addresses
|
||||
|
||||
Without a .onion address, the node won't be reachable through Tor by other nodes but it will always be able to `connect` to a Tor enabled node (outbound connections), passing the `connect` request through the Tor service socks5 proxy. When the Tor service starts it creates a socks5 proxy which is by default at the address 127.0.0.1:9050.
|
||||
|
||||
If the node is started with the option `proxy=127.0.0.1:9050` the node will be always able to connect to nodes with .onion address through the socks5 proxy.
|
||||
|
||||
**You can always add this option, also in the other use cases, to add outgoing
|
||||
Tor capabilities.**
|
||||
|
||||
If you want to `connect` to nodes ONLY via the Tor proxy, you have to add the `always-use-proxy=true` option (though if you only advertize Tor addresses, we also assume you want to always use the proxy).
You can announce your public IP address through the usual method. If your node is in an internal network:

```shell
bind-addr=internalIPAddress:port
announce-addr=externalIpAddress
```

Or, if it has a public IP address:

```shell
addr=externalIpAddress
```

> 📘
>
> If you are unsure which of the two is suitable for you, find your internal and external address and see if they match.

In Linux:

Discover your external IP address with: `curl ipinfo.io/ip` and your internal IP address with: `ip route get 1 | awk '{print $NF;exit}'`.

If they match, you can use the `--addr` command line option.
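The decision above can be sketched as a small helper (our own function, not part of Core Lightning; discover the two addresses with the commands shown, then pass them in):

```shell
# Suggest which config option fits, given your external and internal
# addresses. Port 9735 is illustrative (the mainnet default).
suggest_addr_option() {
  external="$1"; internal="$2"
  if [ "$external" = "$internal" ]; then
    # Public IP directly on the machine: bind and announce in one option.
    echo "addr=$external:9735"
  else
    # Behind NAT: bind internally, announce the external address.
    echo "bind-addr=$internal:9735 announce-addr=$external:9735"
  fi
}

suggest_addr_option 203.0.113.7 192.168.1.10
```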
#### Case #2: Public IP address, and a fixed Tor address in torrc

Other nodes can connect to you entirely over Tor, and the Tor address doesn't change every time you restart.

You simply tell Core Lightning to advertize both addresses (you can use `sudo cat /var/lib/tor/lightningd-service_v3/hostname` to get your Tor-assigned onion address).

If you have an internal IP address:

```shell
bind-addr=yourInternalIPAddress:port
announce-addr=yourexternalIPAddress:port
announce-addr=your.onionAddress:port
```

Or an external address:

```shell
addr=yourIPAddress:port
announce-addr=your.onionAddress:port
```
#### Case #3: Public IP address, and a fixed Tor address set by Core Lightning

Other nodes can connect to you entirely over Tor, and the Tor address doesn't change every time you restart.

See "Letting Core Lightning Control Tor" for how to get Core Lightning talking to Tor.

If you have an internal IP address:

```shell
bind-addr=yourInternalIPAddress:port
announce-addr=yourexternalIPAddress:port
addr=statictor:127.0.0.1:9051
```

Or an external address:

```shell
addr=yourIPAddress:port
addr=statictor:127.0.0.1:9051
```
#### Case #4: Unannounced IP address, and a fixed Tor address in torrc

Other nodes can only connect to you over Tor.

You simply tell Core Lightning to advertize the Tor address (you can use `sudo cat /var/lib/tor/lightningd-service_v3/hostname` to get your Tor-assigned onion address).

```shell
announce-addr=your.onionAddress:port
proxy=127.0.0.1:9050
always-use-proxy=true
```
#### Case #5: Unannounced IP address, and a fixed Tor address set by Core Lightning

Other nodes can only connect to you over Tor.

See "Letting Core Lightning Control Tor" for how to get Core Lightning talking to Tor.

```shell
addr=statictor:127.0.0.1:9051
proxy=127.0.0.1:9050
always-use-proxy=true
```

## References

- [Configuring your node](doc:configuration) section (or [`lightningd-config`](ref:lightningd-config) manual page) covers the various address cases in detail.
- [The Tor project](https://www.torproject.org/)
- [Tor FAQ](https://www.torproject.org/docs/faq.html.en#WhatIsTor)
- [Tor Hidden Service](https://www.torproject.org/docs/onion-services.html.en)
- [.onion addresses version 3](https://blog.torproject.org/we-want-you-test-next-gen-onion-services)
doc/guides/Getting Started/getting-started.md (new file, 33 lines)
@@ -0,0 +1,33 @@
---
title: "Set up your node"
slug: "getting-started"
excerpt: "This guide will help you set up a Core Lightning node. You'll be up and running in a jiffy!"
hidden: false
createdAt: "2022-11-07T15:26:37.081Z"
updatedAt: "2023-02-22T06:00:15.160Z"
---
The Core Lightning implementation has been in production use on the Bitcoin mainnet since early 2018, with the launch of the [Blockstream Store](https://blockstream.com/2018/01/16/en-lightning-charge/). We recommend getting started by experimenting on `testnet` (or `regtest`), but the implementation is considered stable and can be safely used on mainnet.

The following steps will get you up and running with Core Lightning:

## 1. Prerequisites

- [x] **Operating System**

  Core Lightning is available on Linux and macOS. To run Core Lightning on Windows, consider using [docker](doc:installation#docker).
- [x] **Hardware**

  The requirements to run a Core Lightning node, at a minimum, are 4 GB RAM, ~500 GB of storage if you're running a Bitcoin Core full node, or less than 5 GB of storage if you run a pruned Bitcoin Core node or connect to Bitcoin Core remotely. Finally, a trivial amount of reliable network bandwidth is expected.

  For a thorough understanding of the best hardware setup for your usage / scenario, see guidance at [hardware considerations](doc:hardware-considerations).
- [x] **Bitcoin Core**

  Core Lightning requires a locally (or remotely) running `bitcoind` (version 0.16 or above) that is fully caught up with the network you're running on, and relays transactions (ie with `blocksonly=0`). Pruning (`prune=n` option in `bitcoin.conf`) is partially supported, see [here](doc:bitcoin-core#using-a-pruned-bitcoin-core-node) for more details. You can also connect your Core Lightning node to a remotely running Bitcoin Core, see [here](doc:bitcoin-core#connecting-to-bitcoin-core-remotely) to learn how.
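A minimal `bitcoin.conf` satisfying these requirements might look like this (a sketch; the prune target is illustrative, in MiB):

```shell
# bitcoin.conf: pruned node that still relays transactions
prune=15000
blocksonly=0
server=1
```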
## 2. [Install](doc:installation) Core Lightning

## 3. [Configure your node](doc:configuration) as per your requirements (_optional_)

## 4. **[Run your node](doc:beginners-guide)**

doc/guides/Getting Started/getting-started/configuration.md (new file, 515 lines)
@@ -0,0 +1,515 @@
---
title: "Configuring your node"
slug: "configuration"
excerpt: "Choose from a variety of configuration options as per your needs."
hidden: false
createdAt: "2022-11-18T14:32:13.821Z"
updatedAt: "2023-02-21T13:26:18.280Z"
---
`lightningd` can be configured either by passing options via the command line, or via a configuration file.

## Using a configuration file

To use a configuration file, create a file named `config` within your top-level lightning directory or network subdirectory (eg. `~/.lightning/config` or `~/.lightning/bitcoin/config`).

When `lightningd` starts up, it usually reads a general configuration file (default: `$HOME/.lightning/config`) then a network-specific configuration file (default: `$HOME/.lightning/testnet/config`). This can be changed using `--conf` and `--lightning-dir`.

> 📘
>
> General configuration files are processed first, then network-specific ones, then command line options: later options override earlier ones except _addr_ options and _log-level_ with subsystems, which accumulate.

`include` followed by a filename includes another configuration file at that point, relative to the current configuration file.

All options listed below are mirrored as commandline arguments to lightningd, so `--foo` becomes simply `foo` in the configuration file, and `--foo=bar` becomes `foo=bar` in the configuration file.

Blank lines and lines beginning with `#` are ignored.
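Putting these rules together, a small configuration file might look like this (a sketch with hypothetical values):

```shell
# ~/.lightning/config — general settings, read before network-specific ones
network=testnet
alias=MyTestNode

# pull in further options from another file, relative to this one
include tor.conf
```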
## Debugging

`--help` will show you the defaults for many options; they vary with network settings, so you can specify `--network` before `--help` to see the defaults for that network.

The [`lightning-listconfigs`](ref:lightning-listconfigs) command will output a valid configuration file using the current settings.

## Options
### General options

- **allow-deprecated-apis**=_BOOL_

  Enable deprecated options, JSONRPC commands, fields, etc. It defaults to _true_, but you should set it to _false_ when testing to ensure that an upgrade won't break your configuration.

- **help**

  Print help and exit. Not very useful inside a configuration file, but fun to put in other's config files while their computer is unattended.

- **version**

  Print version and exit. Also useless inside a configuration file, but putting this in someone's config file may convince them to read this man page.

- **database-upgrade**=_BOOL_

  Upgrades to Core Lightning often change the database: once this is done, downgrades are not generally possible. By default, Core Lightning will exit with an error rather than upgrade, unless this is an official released version. If you really want to upgrade to a non-release version, you can set this to _true_ (or _false_ to never allow a non-reversible upgrade!).
### Bitcoin control options:

- **network**=_NETWORK_

  Select the network parameters (_bitcoin_, _testnet_, _signet_, or _regtest_). This is not valid within the per-network configuration file.

- **mainnet**

  Alias for _network=bitcoin_.

- **testnet**

  Alias for _network=testnet_.

- **signet**

  Alias for _network=signet_.

- **bitcoin-cli**=_PATH_ [plugin `bcli`]

  The name of _bitcoin-cli_ executable to run.

- **bitcoin-datadir**=_DIR_ [plugin `bcli`]

  _-datadir_ argument to supply to bitcoin-cli(1).

- **bitcoin-rpcuser**=_USER_ [plugin `bcli`]

  The RPC username for talking to bitcoind(1).

- **bitcoin-rpcpassword**=_PASSWORD_ [plugin `bcli`]

  The RPC password for talking to bitcoind(1).

- **bitcoin-rpcconnect**=_HOST_ [plugin `bcli`]

  The bitcoind(1) RPC host to connect to.

- **bitcoin-rpcport**=_PORT_ [plugin `bcli`]

  The bitcoind(1) RPC port to connect to.

- **bitcoin-retry-timeout**=_SECONDS_ [plugin `bcli`]

  Number of seconds to keep trying a bitcoin-cli(1) command. If the command keeps failing after this time, exit with a fatal error.

- **rescan**=_BLOCKS_

  Number of blocks to rescan from the current head, or absolute blockheight if negative. This is only needed if something goes badly wrong.
### Lightning daemon options

- **lightning-dir**=_DIR_

  Sets the working directory. All files (except _--conf_ and _--lightning-dir_ on the command line) are relative to this. This is only valid on the command-line, or in a configuration file specified by _--conf_.

- **subdaemon**=_SUBDAEMON_:_PATH_

  Specifies an alternate subdaemon binary. Current subdaemons are _channeld_, _closingd_, _connectd_, _gossipd_, _hsmd_, _onchaind_, and _openingd_. If the supplied path is relative, the subdaemon binary is found in the working directory. This option may be specified multiple times.

  So, **subdaemon=hsmd:remote\_signer** would use a hypothetical remote signing proxy instead of the standard _lightning\_hsmd_ binary.

- **pid-file**=_PATH_

  Specify pid file to write to.

- **log-level**=_LEVEL_\[:_SUBSYSTEM_\]

  What log level to print out: options are io, debug, info, unusual, broken. If _SUBSYSTEM_ is supplied, this sets the logging level for any subsystem (or _nodeid_) containing that string. This option may be specified multiple times. Subsystems include:

  - _lightningd_: The main lightning daemon
  - _database_: The database subsystem
  - _wallet_: The wallet subsystem
  - _gossipd_: The gossip daemon
  - _plugin-manager_: The plugin subsystem
  - _plugin-P_: Each plugin, P = plugin path without directory
  - _hsmd_: The secret-holding daemon
  - _connectd_: The network connection daemon
  - _jsonrpc#FD_: Each JSONRPC connection, FD = file descriptor number

  The following subsystems exist for each channel, where N is an incrementing internal integer id assigned for the lifetime of the channel:

  - _openingd-chan#N_: Each opening / idling daemon
  - _channeld-chan#N_: Each channel management daemon
  - _closingd-chan#N_: Each closing negotiation daemon
  - _onchaind-chan#N_: Each onchain close handling daemon

  So, **log-level=debug:plugin** would set debug level logging on all plugins and the plugin manager. **log-level=io:chan#55** would set IO logging on channel number 55 (or 550, for that matter).

  **log-level=debug:024b9a1fa8** would set debug logging for that channel (or any node id containing that string).
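Because _log-level_ with subsystems accumulates, a config file can mix a quiet default with targeted verbosity. A hypothetical logging setup (the plugin name and path are illustrative):

```shell
# Keep general logging at info, but debug the pay plugin;
# write everything to a file (reopened on SIGHUP for rotation).
log-level=info
log-level=debug:plugin-pay
log-file=/var/log/lightningd.log
```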
- **log-prefix**=_PREFIX_

  Prefix for all log lines: this can be customized if you want to merge logs with multiple daemons. Usually you want to include a space at the end of _PREFIX_, as the timestamp follows immediately.

- **log-file**=_PATH_

  Log to this file (instead of stdout). If you specify this more than once, you'll get more than one log file: **-** is used to mean stdout. Sending lightningd(8) SIGHUP will cause it to reopen each file (useful for log rotation).

- **log-timestamps**=_BOOL_

  Set this to false to turn off timestamp prefixes (they will still appear in crash log files).

- **rpc-file**=_PATH_

  Set JSON-RPC socket (or /dev/tty), such as for lightning-cli.

- **rpc-file-mode**=_MODE_

  Set JSON-RPC socket file mode, as a 4-digit octal number. Default is 0600, meaning only the user that launched lightningd can command it. Set to 0660 to allow users with the same group to access the RPC as well.

- **daemon**

  Run in the background, suppress stdout and stderr. Note that you need to specify **log-file** for this case.

- **conf**=_PATH_

  Sets configuration file, and disable reading the normal general and network ones. If this is a relative path, it is relative to the starting directory, not **lightning-dir** (unlike other paths). _PATH_ must exist and be readable (we allow missing files in the default case). Using this inside a configuration file is invalid.

- **wallet**=_DSN_

  Identify the location of the wallet. This is a fully qualified data source name, including a scheme such as `sqlite3` or `postgres` followed by the connection parameters.

  The default wallet corresponds to the following DSN: `--wallet=sqlite3://$HOME/.lightning/bitcoin/lightningd.sqlite3`

  For the `sqlite3` scheme, you can specify a single backup database file by separating it with a `:` character, like so: `--wallet=sqlite3://$HOME/.lightning/bitcoin/lightningd.sqlite3:/backup/lightningd.sqlite3`

  The following is an example of a postgresql wallet DSN:

  `--wallet=postgres://user:pass@localhost:5432/db_name`

  This will connect to a DB server running on `localhost` port `5432`, authenticate with username `user` and password `pass`, and then use the database `db_name`. The database must exist, but the schema will be managed automatically by `lightningd`.

- **bookkeeper-dir**=_DIR_ [plugin `bookkeeper`]

  Directory to keep the accounts.sqlite3 database file in. Defaults to lightning-dir.

- **bookkeeper-db**=_DSN_ [plugin `bookkeeper`]

  Identify the location of the bookkeeper data. This is a fully qualified data source name, including a scheme such as `sqlite3` or `postgres` followed by the connection parameters. Defaults to `sqlite3://accounts.sqlite3` in the `bookkeeper-dir`.

- **encrypted-hsm**

  If set, you will be prompted to enter a password used to encrypt the `hsm_secret`. Note that once you encrypt the `hsm_secret`, this option will be mandatory for `lightningd` to start. If there is no `hsm_secret` yet, `lightningd` will create a new encrypted secret. If you have an unencrypted `hsm_secret` you want to encrypt on-disk, or vice versa, see [`lightning-hsmtool`](ref:lightning-hsmtool).

- **grpc-port**=_portnum_ [plugin `cln-grpc`]

  The port number for the GRPC plugin to listen for incoming connections; default is not to activate the plugin at all.
### Lightning node customization options

- **alias**=_NAME_

  Up to 32 bytes of UTF-8 characters to tag your node. Completely silly, since anyone can call their node anything they want. The default is an NSA-style codename derived from your public key, but "Peter Todd" and "VAULTERO" are good options, too.

- **rgb**=_RRGGBB_

  Your favorite color as a hex code.

- **fee-base**=_MILLISATOSHI_

  Default: 1000. The base fee to charge for every payment which passes through. Note that millisatoshis are a very, very small unit! Changing this value will only affect new channels and not existing ones. If you want to change fees for existing channels, use the RPC call [`lightning-setchannel`](ref:lightning-setchannel).

- **fee-per-satoshi**=_MILLIONTHS_

  Default: 10 (0.001%). This is the proportional fee to charge for every payment which passes through. As percentages are too coarse, it's in millionths, so 10000 is 1%, 1000 is 0.1%. Changing this value will only affect new channels and not existing ones. If you want to change fees for existing channels, use the RPC call [`lightning-setchannel`](ref:lightning-setchannel).
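The total forwarding fee is the base fee plus the proportional part. A sketch of the arithmetic (the helper function is ours; defaults match the option defaults above, and rounding details in `lightningd` may differ):

```shell
# Fee in msat for forwarding amount_msat, given fee-base (msat) and
# fee-per-satoshi (millionths): base + amount * millionths / 1000000.
routing_fee_msat() {
  amount_msat="$1"; base="${2:-1000}"; ppm="${3:-10}"
  echo $(( base + amount_msat * ppm / 1000000 ))
}

routing_fee_msat 100000000   # 100,000 sat payment at defaults: 2000 msat
```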
- **min-capacity-sat**=_SATOSHI_

  Default: 10000. This value defines the minimal effective channel capacity in satoshi to accept for channel opening requests. This will reject any opening of a channel which can't pass an HTLC of at least this value. Usually this prevents a peer opening a tiny channel, but it can also prevent a channel you open with a reasonable amount where the peer requests such a large reserve that the capacity of the channel falls below this.

- **ignore-fee-limits**=_BOOL_

  Allow nodes which establish channels to us to set any fee they want. This may result in a channel which cannot be closed, should fees increase, but makes channels far more reliable since we never close them due to unreasonable fees.

- **commit-time**=_MILLISECONDS_

  How long to wait before sending commitment messages to the peer: in theory increasing this would reduce load, but your node would have to be an extremely busy node for you to even notice.

- **force-feerates**=_VALUES_

  Networks like regtest and testnet have unreliable fee estimates: we usually treat them as the minimum (253 sats/kw) if we can't get them. This allows override of one or more of our standard feerates (see [`lightning-feerates`](ref:lightning-feerates)). Up to 5 values, separated by '/' can be provided: if fewer are provided, then the final value is used for the remainder. The values are in per-kw (roughly 1/4 of bitcoind's per-kb values), and the order is "opening", "mutual_close", "unilateral_close", "delayed_to_us", "htlc_resolution", and "penalty".

  You would usually put this option in the per-chain config file, to avoid setting it on Bitcoin mainnet! e.g. `~rusty/.lightning/regtest/config`.
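For example, a per-network override might look like this (a sketch; the feerate values are illustrative):

```shell
# In ~/.lightning/regtest/config: opening and mutual_close at 1000
# per-kw; the final value (500) applies to all remaining feerates.
force-feerates=1000/1000/500
```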
- **htlc-minimum-msat**=_MILLISATOSHI_

  Default: 0. Sets the minimal allowed HTLC value for newly created channels. If you want to change the `htlc_minimum_msat` for existing channels, use the RPC call [`lightning-setchannel`](ref:lightning-setchannel).

- **htlc-maximum-msat**=_MILLISATOSHI_

  Default: unset (no limit). Sets the maximum allowed HTLC value for newly created channels. If you want to change the `htlc_maximum_msat` for existing channels, use the RPC call [`lightning-setchannel`](ref:lightning-setchannel).

- **disable-ip-discovery**

  Turn off public IP discovery, which otherwise sends `node_announcement` updates containing the discovered IP with TCP port 9735 as announced address. If unset, and you open TCP port 9735 on your router towards your node, your node will remain connectable on changing IP addresses. Note: this is always disabled if you use 'always-use-proxy'.
### Lightning channel and HTLC options

- **large-channels**

  Removes capacity limits for channel creation. Version 1.0 of the specification limited channel sizes to 16777215 satoshi. With this option (which your node will advertize to peers), your node will accept larger incoming channels and, if the peer supports it, will open larger channels. Note: this option is spelled **large-channels** but it's pronounced **wumbo**.

- **watchtime-blocks**=_BLOCKS_

  How long we need to spot an outdated close attempt: on opening a channel we tell our peer that this is how long they'll have to wait if they perform a unilateral close.

- **max-locktime-blocks**=_BLOCKS_

  The longest our funds can be delayed (ie. the longest **watchtime-blocks** our peer can ask for, and also the longest HTLC timeout we will accept). If our peer asks for longer, we'll refuse to create a channel, and if an HTLC asks for longer, we'll refuse it.

- **funding-confirms**=_BLOCKS_

  Confirmations required for the funding transaction when the other side opens a channel before the channel is usable.

- **commit-fee**=_PERCENT_ [plugin `bcli`]

  The percentage of _estimatesmartfee 2/CONSERVATIVE_ to use for the commitment transactions: default is 100.

- **max-concurrent-htlcs**=_INTEGER_

  Number of HTLCs one channel can handle concurrently in each direction. Should be between 1 and 483 (default 30).

- **max-dust-htlc-exposure-msat**=_MILLISATOSHI_

  Option which limits the total amount of sats to be allowed as dust on a channel.

- **cltv-delta**=_BLOCKS_

  The number of blocks between incoming payments and outgoing payments: this needs to be enough to make sure that if we have to, we can close the outgoing payment before the incoming, or redeem the incoming once the outgoing is redeemed.

- **cltv-final**=_BLOCKS_

  The number of blocks to allow for payments we receive: if we have to, we might need to redeem this on-chain, so this is the number of blocks we have to do that.

- **accept-htlc-tlv-types**=_types_

  Normally HTLC onions which contain unknown even fields are rejected. This option specifies that these (comma-separated) types are to be accepted, and ignored.
### Cleanup control options:

- **autoclean-cycle**=_SECONDS_ [plugin `autoclean`]

  Perform search for things to clean every _SECONDS_ seconds (default 3600, or 1 hour, which is usually sufficient).

- **autoclean-succeededforwards-age**=_SECONDS_ [plugin `autoclean`]

  How old successful forwards (`settled` in listforwards `status`) have to be before deletion (default 0, meaning never).

- **autoclean-failedforwards-age**=_SECONDS_ [plugin `autoclean`]

  How old failed forwards (`failed` or `local_failed` in listforwards `status`) have to be before deletion (default 0, meaning never).

- **autoclean-succeededpays-age**=_SECONDS_ [plugin `autoclean`]

  How old successful payments (`complete` in listpays `status`) have to be before deletion (default 0, meaning never).

- **autoclean-failedpays-age**=_SECONDS_ [plugin `autoclean`]

  How old failed payment attempts (`failed` in listpays `status`) have to be before deletion (default 0, meaning never).

- **autoclean-paidinvoices-age**=_SECONDS_ [plugin `autoclean`]

  How old invoices which were paid (`paid` in listinvoices `status`) have to be before deletion (default 0, meaning never).

- **autoclean-expiredinvoices-age**=_SECONDS_ [plugin `autoclean`]

  How old invoices which were not paid (and cannot be) (`expired` in listinvoices `status`) have to be before deletion (default 0, meaning never).

Note: prior to v22.11, forwards for channels which were closed were not easily distinguishable. As a result, autoclean may delete more than one of these at once, and then suffer failures when it fails to delete the others.
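A retention policy built from these options might look like this (a sketch; the 90-day figure is illustrative):

```shell
# Scan hourly; delete settled forwards and paid invoices older than
# 90 days (7776000 seconds); keep everything else forever (the default).
autoclean-cycle=3600
autoclean-succeededforwards-age=7776000
autoclean-paidinvoices-age=7776000
```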
### Payment control options:

- **disable-mpp** [plugin `pay`]

  Disable the multi-part payment sending support in the `pay` plugin. By default the MPP support is enabled, but it can be desirable to disable it in situations in which each payment should result in a single HTLC being forwarded in the network.

### Networking options

Note that for simple setups, the implicit _autolisten_ option does the right thing: for the mainnet (bitcoin) network it will try to bind to port 9735 on IPv4 and IPv6, and will announce it to peers if it seems like a public address (and other default ports for other networks, as described below).

Core Lightning also supports IPv4/6 address discovery behind NAT routers. If your node detects a new public address, it will update its announcement. For this to work, you need to forward the default TCP port 9735 to your node. IP discovery is only active if no other addresses are announced.

You can instead use _addr_ to override this (eg. to change the port), or precisely control where to bind and what to announce with the _bind-addr_ and _announce-addr_ options. These will **disable** the _autolisten_ logic, so you must specify exactly what you want!

- **addr**=_\[IPADDRESS[:PORT]]|autotor:TORIPADDRESS[:SERVICEPORT][/torport=TORPORT]|statictor:TORIPADDRESS[:SERVICEPORT]\[/torport=TORPORT]\[/torblob=[blob]]|DNS[:PORT]_

  Set an IP address (v4 or v6) or automatic Tor address to listen on and (maybe) announce as our node address.

  An empty 'IPADDRESS' is a special value meaning bind to IPv4 and/or IPv6 on all interfaces, '0.0.0.0' means bind to all IPv4 interfaces, '::' means 'bind to all IPv6 interfaces' (if you want to specify an IPv6 address _and_ a port, use `[]` around the IPv6 address, like `[::]:9750`). If 'PORT' is not specified, the default port 9735 is used for mainnet (testnet: 19735, signet: 39735, regtest: 19846). If we can determine a public IP address from the resulting binding, the address is announced.

  If the argument begins with 'autotor:' then it is followed by the IPv4 or IPv6 address of the Tor control port (default port 9051), and this will be used to configure a Tor hidden service for port 9735 in case of the mainnet (bitcoin) network, whereas other networks (testnet, signet, regtest) will use the same default ports they use for non-Tor addresses (see above). The Tor hidden service will be configured to point to the first IPv4 or IPv6 address we bind to and is by default unique to your node's id.

  If the argument begins with 'statictor:' then it is followed by the IPv4 or IPv6 address of the Tor control port (default port 9051), and this will be used to configure a static Tor hidden service. You can add the text '/torblob=BLOB', followed by up to 64 bytes of text, to generate a v3 onion service address unique to the first 32 bytes of this text. You can also use a postfix '/torport=TORPORT' to select the external Tor binding. The result is that over Tor your node is accessible by a port defined by you and possibly different from your local node port assignment.
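For instance, a static onion service with a custom external Tor port could be configured like this (a sketch; the port is illustrative and assumes a local Tor control port at the default address):

```shell
# Ask the local Tor control port for a static v3 onion service that is
# reachable on Tor port 9736, whatever the local binding is.
addr=statictor:127.0.0.1:9051/torport=9736
```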
  This option can be used multiple times to add more addresses, and its use disables autolisten. If necessary, and 'always-use-proxy' is not specified, a DNS lookup may be done to resolve 'DNS' or 'TORIPADDRESS'.

  If a 'DNS' hostname was given that resolves to a local interface, the daemon will bind to that interface: if **announce-addr-dns** is true then it will also announce that as type 'DNS' (rather than announcing the IP address).

- **bind-addr**=_\[IPADDRESS[:PORT]]|SOCKETPATH|DNS[:PORT]_

  Set an IP address or UNIX domain socket to listen to, but do not announce. A UNIX domain socket is distinguished from an IP address by beginning with a _/_.

  An empty 'IPADDRESS' is a special value meaning bind to IPv4 and/or IPv6 on all interfaces, '0.0.0.0' means bind to all IPv4 interfaces, '::' means 'bind to all IPv6 interfaces'. If 'PORT' is not specified, 9735 is used.

  This option can be used multiple times to add more addresses, and its use disables autolisten. If necessary, and 'always-use-proxy' is not specified, a DNS lookup may be done to resolve 'IPADDRESS'.

  If a 'DNS' hostname was given and 'always-use-proxy' is not specified, a lookup may be done to resolve it and bind to a local interface (if found).

- **announce-addr**=_IPADDRESS\[:PORT\]|TORADDRESS.onion\[:PORT\]|DNS\[:PORT\]_

  Set an IP (v4 or v6) address or Tor address to announce; a Tor address is distinguished by ending in _.onion_. _PORT_ defaults to 9735.

  Empty or wildcard IPv4 and IPv6 addresses don't make sense here. Also, unlike the 'addr' option, there is no checking that your announced addresses are public (e.g. not localhost).

  This option can be used multiple times to add more addresses, and its use disables autolisten.

  Since v22.11 'DNS' hostnames can be used for announcement: see **announce-addr-dns**.

- **announce-addr-dns**=_BOOL_

  Set to _true_ (default is _false_) so that names given as arguments to **addr** and **announce-addr** are published in node announcement messages as names, rather than IP addresses. Please note that most mainnet nodes do not yet use, read or propagate this information correctly.

- **offline**

  Do not bind to any ports, and do not try to reconnect to any peers. This can be useful for maintenance and forensics, so is usually specified on the command line. Overrides all _addr_ and _bind-addr_ options.

- **autolisten**=_BOOL_

  By default, we bind (and maybe announce) on IPv4 and IPv6 interfaces if no _addr_, _bind-addr_ or _announce-addr_ options are specified. Setting this to _false_ disables that.

- **proxy**=_IPADDRESS\[:PORT\]_

  Set a socks proxy to use to connect to Tor nodes (or for all connections if **always-use-proxy** is set). The port defaults to 9050 if not specified.

- **always-use-proxy**=_BOOL_

  Always use the **proxy**, even to connect to normal IP addresses (you can still connect to Unix domain sockets manually). This also disables all DNS lookups, to avoid leaking information.

- **disable-dns**

  Disable the DNS bootstrapping mechanism to find a node by its node ID.

- **tor-service-password**=_PASSWORD_

  Set a Tor control password, which may be needed for _autotor:_ to authenticate to the Tor control port.
### Lightning Plugins

`lightningd` supports plugins, which offer additional configuration options and JSON-RPC methods, depending on the plugin. Some are supplied by default (usually located in **libexec/c-lightning/plugins/**). If a **plugins** directory exists under _lightning-dir_, that is searched for plugins along with any immediate subdirectories. You can specify additional paths too:

- **plugin**=_PATH_

  Specify a plugin to run as part of Core Lightning. This can be specified multiple times to add multiple plugins. Note that unless plugins themselves specify ordering requirements for being called on various hooks, plugins will be ordered by commandline, then config file.

- **plugin-dir**=_DIRECTORY_

  Specify a directory to look for plugins; all executable files not containing punctuation (other than _._, _-_ or _\__) in _DIRECTORY_ are loaded. _DIRECTORY_ must exist; this can be specified multiple times to add multiple directories. The ordering of plugins within a directory is currently unspecified.

- **clear-plugins**

  This option clears all _plugin_, _important-plugin_, and _plugin-dir_ options preceding it, including the default built-in plugin directory. You can still add _plugin-dir_, _plugin_, and _important-plugin_ options following this and they will have the normal effect.

- **disable-plugin**=_PLUGIN_

  If _PLUGIN_ contains a /, plugins with the same path as _PLUGIN_ will not be loaded at startup. Otherwise, no plugin with that base name will be loaded at startup, whatever directory it is in. This option is useful for disabling a single plugin inside a directory. You can still explicitly load plugins which have been disabled, using [lightning-plugin](ref:lightning-plugin) `start`.
|
||||
|
||||
- **important-plugin**=_PLUGIN_
|
||||
|
||||
Specify a plugin to run as part of Core Lightning.
|
||||
This can be specified multiple times to add multiple plugins.
|
||||
Plugins specified via this option are considered so important, that if the plugin stops for any reason (including via [lightning-plugin](ref:lightning-plugin) `stop`), Core Lightning will also stop running.
|
||||
This way, you can monitor crashes of important plugins by simply monitoring if Core Lightning terminates.
|
||||
Built-in plugins, which are installed with lightningd, are automatically considered important.
|
||||
|
||||
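For example, a configuration file might combine these options like so (all paths and plugin names here are hypothetical):

```
# Load everything in a custom directory, plus one extra plugin
plugin-dir=/home/user/cln-plugins
plugin=/home/user/extra/feeadjuster.py
# Stop the node if this plugin ever stops
important-plugin=/home/user/critical/watchtower-client
# Skip one plugin from the directory above
disable-plugin=summary.py
```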
### Experimental Options

Experimental options are subject to breakage between releases: they are made available for advanced users who want to test proposed features. When the build is configured _without_ `--enable-experimental-features`, the options below are available but disabled by default. Supported features can be listed with `lightningd --list-features-only`.

A build _with_ the `--enable-experimental-features` flag hard-codes some of the options below as enabled, ignoring their command line flags. It may also add support for even more features. The safest way to determine the active configuration is by checking `listconfigs` or by looking at `our_features` (bits) in `getinfo`.
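Feature support is encoded as a hex bitfield. As a small sketch, you can test whether a particular bit is set in an `our_features` hex string like this (the bitfield value and bit numbers below are illustrative, not read from a real node — consult BOLT 9 for the authoritative bit assignments):

```python
def has_feature(features_hex: str, bit: int) -> bool:
    """Return True if the given feature bit is set in a hex feature bitfield."""
    return bool(int(features_hex, 16) >> bit & 1)

# Illustrative bitfield with bits 12 and 39 set
features = format(1 << 39 | 1 << 12, "x")
print(has_feature(features, 39), has_feature(features, 40))  # True False
```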
- **experimental-onion-messages**

Specifying this enables sending, forwarding and receiving onion messages, which are in draft status in the [bolt](https://github.com/lightning/bolts) specifications (PR #759). A build with `--enable-experimental-features` usually enables this via experimental-offers, see below.

- **experimental-offers**

Specifying this enables the `offers` and `fetchinvoice` plugins and corresponding functionality, which are in draft status as [bolt](https://github.com/lightning/bolts) #798, known as [bolt12](https://github.com/rustyrussell/lightning-rfc/blob/guilt/offers/12-offer-encoding.md). A build with `--enable-experimental-features` enables this permanently and usually enables experimental-onion-messages as well.

- **fetchinvoice-noconnect**

Specifying this prevents `fetchinvoice` and `sendinvoice` from trying to connect directly to the offering node as a last resort.

- **experimental-shutdown-wrong-funding**

Specifying this allows the `wrong_funding` field in `shutdown`: if a remote node has opened a channel but claims it used the incorrect txid (and the channel hasn't been used yet at all), this allows them to negotiate a clean shutdown with the txid they offer ([#4421](https://github.com/ElementsProject/lightning/pull/4421)).

- **experimental-dual-fund**

Specifying this enables support for the dual funding protocol ([bolt](https://github.com/lightning/bolts) #851), allowing both parties to contribute funds to a channel. The decision about whether to add funds or not to a proposed channel is handled automatically by a plugin that implements the appropriate logic for your needs. The default behavior is to not contribute funds.

- **experimental-websocket-port**=_PORT_

Specifying this enables support for accepting incoming WebSocket connections on that port, on any IPv4 and IPv6 addresses you listen to ([bolt](https://github.com/lightning/bolts) #891). The normal protocol is expected to be sent over WebSocket binary frames once the connection is upgraded.
---
title: "Hardware considerations"
slug: "hardware-considerations"
excerpt: "A lightning node requires a reasonable amount of memory and storage. Learn what's suitable for your scenario."
hidden: true
createdAt: "2022-11-18T14:31:38.695Z"
updatedAt: "2023-04-01T00:09:20.148Z"
---
# Hobbyist

Options for home users include:

- Off-the-shelf consumer computers
- Single-board computers (SBCs)
- Raspberry Pi (generally not recommended; see below)

For home users, laptops are the most suitable devices for running a Lightning node. This is because they include a built-in battery that can power your node in case of a power outage that otherwise could result in data corruption. Compared to desktop machines, laptops are more energy-efficient and quiet. In practically all cases, they have higher performance than SBCs like Raspberry Pis or Rock64s, and can even be cheaper to purchase.
# Power User

More advanced users, with more demanding use cases, will need a platform better suited for their CLN nodes. We suggest the following hardware and software options to ensure high uptime and data resiliency. At a minimum, the node should have ECC memory and a storage mirror (typically RAID-1).

**ECC memory**

ECC memory protects your data from corruption due to bit flips and hardware errors. When working with sensitive Lightning related data, it's important to make sure there is no data corruption occurring, and ECC memory detects and corrects errors that happen in RAM.

**Solid State Drives**

SSDs are generally more reliable than their HDD counterparts since there are no moving parts that can degrade over time. SSDs have much better random IO performance than HDDs, consume less power, and are relatively cheap.

**Storage mirroring**

Mirroring protects your node from a storage hardware failure that could potentially cause data loss and fund loss. Data is written simultaneously to two or more independent devices (ideally SSDs) so that if a device fails, there is an operational device with your data.

**Checksumming filesystem**

A checksumming filesystem, such as BTRFS or ZFS, complements ECC memory by computing a cryptographic hash of your data before writing both the checksum and data to storage. This allows your node to verify the checksum while reading your data and correct corruption at the storage hardware level.

**Offsite replication**

Despite the data resiliency assurances we gain using ECC memory, storage mirroring, and filesystem-level checksumming, a Lightning node is still subject to other events such as fires or floods that could compromise the integrity of the node's data. Because of this, it's important to have offsite replication of your node's data.

# Commercial Grade(?)
689 doc/guides/Getting Started/getting-started/installation.md Normal file
---
title: "Installation"
slug: "installation"
excerpt: "Core Lightning is available on many platforms and environments. Learn how to install on your preferred platform."
hidden: false
createdAt: "2022-11-18T14:32:02.251Z"
updatedAt: "2023-04-22T11:59:36.536Z"
---
# Binaries

If you're on Ubuntu:

```shell
sudo apt-get install -y software-properties-common
sudo add-apt-repository -u ppa:lightningnetwork/ppa
sudo apt-get install lightningd snapd
sudo snap install bitcoin-core
sudo ln -s /snap/bitcoin-core/current/bin/bitcoin{d,-cli} /usr/local/bin/
```

Alternatively, you can install a pre-compiled binary from the [releases](https://github.com/ElementsProject/lightning/releases) page on GitHub. Core Lightning provides binaries for both Ubuntu and Fedora distributions.

If you're on a different distribution or OS, you can compile the source by following the instructions from [Installing from Source](<>).
# Docker

To install the Docker image for the latest stable release:

```shell
docker pull elementsproject/lightningd:latest
```

To install a specific version, for example 22.11.1:

```shell
docker pull elementsproject/lightningd:v22.11.1
```

See all of the docker images for Core Lightning on [Docker Hub](https://hub.docker.com/r/elementsproject/lightningd/tags).

# Third-party apps

For a GUI experience, you can install and use Core Lightning via a variety of third-party applications such as [Ride the Lightning](https://www.ridethelightning.info/), [Umbrel](https://getumbrel.com/), [BTCPayServer](https://btcpayserver.org/), [Raspiblitz](https://raspiblitz.org/) and [Embassy](https://start9.com/).

Core Lightning is also available on NixOS via the [nix-bitcoin](https://github.com/fort-nix/nix-bitcoin/) project.
# Installing from source

> 📘
>
> To build Core Lightning in a reproducible way, follow the steps at [Reproducible builds for Core Lightning](doc:repro).

## Library Requirements

You will need several development libraries:

- libsqlite3: for database support.
- libgmp: for secp256k1.
- zlib: for compression routines.

For actually doing development and running the tests, you will also need:

- pip3: to install python-bitcoinlib
- valgrind: for extra debugging checks

You will also need a version of bitcoind with segregated witness and `estimatesmartfee` with `ECONOMICAL` mode support, such as 0.16 or above.

## To Build on Ubuntu

OS version: Ubuntu 15.10 or above

Get dependencies:
```shell
sudo apt-get update
sudo apt-get install -y \
  autoconf automake build-essential git libtool libgmp-dev libsqlite3-dev \
  python3 python3-pip net-tools zlib1g-dev libsodium-dev gettext
pip3 install --upgrade pip
pip3 install --user poetry
```

If you don't have Bitcoin installed locally you'll need to install that as well. It's now available via [snapd](https://snapcraft.io/bitcoin-core).

```shell
sudo apt-get install snapd
sudo snap install bitcoin-core
# Snap does some weird things with binary names; you'll
# want to add a link to them so everything works as expected
sudo ln -s /snap/bitcoin-core/current/bin/bitcoin{d,-cli} /usr/local/bin/
```

Clone lightning:

```shell
git clone https://github.com/ElementsProject/lightning.git
cd lightning
```

Checkout a release tag:

```shell
git checkout v22.11.1
```
For development or running tests, get additional dependencies:

```shell
sudo apt-get install -y valgrind libpq-dev shellcheck cppcheck \
  libsecp256k1-dev jq lowdown
```

If you can't install `lowdown`, a version will be built in-tree.

If you want to build the Rust plugins (currently, cln-grpc):

```shell
sudo apt-get install -y cargo rustfmt protobuf-compiler
```
There are two ways to build Core Lightning, depending on how you want to use it.

To build and install a tagged or master version, use the following commands:

```shell
pip3 install --upgrade pip
pip3 install mako
./configure
make
sudo make install
```

> 📘
>
> If you want to disable Rust because you do not want to use it, or simply do not want the grpc-plugin, you can use `./configure --disable-rust`.

To build Core Lightning for development purposes, use the following commands:

```shell
pip3 install poetry
poetry shell
```

This will put you in a new shell to enter the following commands:

```shell
poetry install
./configure --enable-developer
make
make check VALGRIND=0
```

Optionally, add `-j$(nproc)` after `make` to speed up compilation (e.g. `make -j$(nproc)`).
Running lightning:

```shell
bitcoind &
./lightningd/lightningd &
./cli/lightning-cli help
```
## To Build on Fedora

OS version: Fedora 27 or above

Get dependencies:

```shell
$ sudo dnf update -y && \
        sudo dnf groupinstall -y \
                'C Development Tools and Libraries' \
                'Development Tools' && \
        sudo dnf install -y \
                clang \
                gettext \
                git \
                gmp-devel \
                libsq3-devel \
                python3-devel \
                python3-pip \
                python3-setuptools \
                net-tools \
                valgrind \
                wget \
                zlib-devel \
                libsodium-devel && \
        sudo dnf clean all
```
Make sure you have [bitcoind](https://github.com/bitcoin/bitcoin) available to run.

Clone lightning:

```shell
$ git clone https://github.com/ElementsProject/lightning.git
$ cd lightning
```

Checkout a release tag:

```shell
$ git checkout v22.11.1
```

Build and install lightning:

```shell
$lightning> ./configure
$lightning> make
$lightning> sudo make install
```

Running lightning (mainnet):

```shell
$ bitcoind &
$ lightningd --network=bitcoin
```

Running lightning on testnet:

```shell
$ bitcoind -testnet &
$ lightningd --network=testnet
```
## To Build on FreeBSD

OS version: FreeBSD 11.1-RELEASE or above

Core Lightning is in the FreeBSD ports, so install it as any other port (dependencies are handled automatically):

```shell
# pkg install c-lightning
```

If you want to compile locally and fiddle with compile time options:

```shell
# cd /usr/ports/net-p2p/c-lightning && make install
```

See `/usr/ports/net-p2p/c-lightning/Makefile` for instructions on how to build from an arbitrary git commit, instead of the latest release tag.

> 📘
>
> Make sure you've set a utf-8 locale, e.g. `export LC_CTYPE=en_US.UTF-8`, otherwise manpage installation may fail.

Running lightning:

Configure bitcoind, if not already: add `rpcuser=<foo>` and `rpcpassword=<bar>` to `/usr/local/etc/bitcoin.conf`, maybe also `testnet=1`.

Configure lightningd: copy `/usr/local/etc/lightningd-bitcoin.conf.sample` to `/usr/local/etc/lightningd-bitcoin.conf` and edit according to your needs.

```shell
# service bitcoind start
# service lightningd start
# lightning-cli --rpc-file /var/db/c-lightning/bitcoin/lightning-rpc --lightning-dir=/var/db/c-lightning help
```
## To Build on OpenBSD

OS version: OpenBSD 6.7

Install dependencies:

```shell
pkg_add git python gmake py3-pip libtool gmp
pkg_add automake # (select highest version, automake1.16.2 at time of writing)
pkg_add autoconf # (select highest version, autoconf-2.69p2 at time of writing)
```

Install `mako`, otherwise we run into build errors:

```shell
pip3.7 install --user poetry
poetry install
```

Add `/home/<username>/.local/bin` to your path:

`export PATH=$PATH:/home/<username>/.local/bin`

Needed for `configure`:

```shell
export AUTOCONF_VERSION=2.69
export AUTOMAKE_VERSION=1.16
./configure
```

Finally, build `c-lightning`:

`gmake`
## To Build on NixOS

Use nix-shell to launch a shell with a full Core Lightning dev environment:

```shell
$ nix-shell -Q -p gdb sqlite autoconf git clang libtool gmp sqlite autoconf \
  autogen automake libsodium 'python3.withPackages (p: [p.bitcoinlib])' \
  valgrind --run make
```
## To Build on macOS

Assuming you have Xcode and Homebrew installed, install dependencies:

```shell
$ brew install autoconf automake libtool python3 gmp gnu-sed gettext libsodium
$ ln -s /usr/local/Cellar/gettext/0.20.1/bin/xgettext /usr/local/opt
$ export PATH="/usr/local/opt:$PATH"
```

If you need SQLite (or get a SQLite mismatch build error):

```shell
$ brew install sqlite
$ export LDFLAGS="-L/usr/local/opt/sqlite/lib"
$ export CPPFLAGS="-I/usr/local/opt/sqlite/include"
```

Some library paths are different when using `homebrew` with M1 macs, therefore the following two variables need to be set for M1 machines:

```shell
$ export CPATH=/opt/homebrew/include
$ export LIBRARY_PATH=/opt/homebrew/lib
```

If you need Python 3.x for mako (or get a mako build error):

```shell
$ brew install pyenv
$ echo -e 'if command -v pyenv 1>/dev/null 2>&1; then\n  eval "$(pyenv init -)"\nfi' >> ~/.bash_profile
$ source ~/.bash_profile
$ pyenv install 3.8.10
$ pip install --upgrade pip
$ pip install poetry
```

If you don't have bitcoind installed locally you'll need to install that as well:

```shell
$ brew install berkeley-db4 boost miniupnpc pkg-config libevent
$ git clone https://github.com/bitcoin/bitcoin
$ cd bitcoin
$ ./autogen.sh
$ ./configure
$ make src/bitcoind src/bitcoin-cli && make install
```

Clone lightning:

```shell
$ git clone https://github.com/ElementsProject/lightning.git
$ cd lightning
```

Checkout a release tag:

```shell
$ git checkout v22.11.1
```

Build lightning:

```shell
$ poetry install
$ ./configure
$ poetry run make
```

Running lightning:

> 📘
>
> Edit your `~/Library/Application\ Support/Bitcoin/bitcoin.conf` to include `rpcuser=<foo>` and `rpcpassword=<bar>` first; you may also need to include `testnet=1`.

```shell
bitcoind &
./lightningd/lightningd &
./cli/lightning-cli help
```

To install the built binaries into your system, you'll need to run `make install`:

```shell
make install
```

On an M1 mac you may need to use this command instead:

```shell
sudo PATH="/usr/local/opt:$PATH" LIBRARY_PATH=/opt/homebrew/lib CPATH=/opt/homebrew/include make install
```
## To Build on Arch Linux

Install dependencies:

```shell
pacman --sync autoconf automake gcc git make python-pip
pip install --user poetry
```

Clone Core Lightning:

```shell
$ git clone https://github.com/ElementsProject/lightning.git
$ cd lightning
```

Build Core Lightning:

```shell
python -m poetry install
./configure
python -m poetry run make
```

Launch Core Lightning:

```
./lightningd/lightningd
```
## To cross-compile for Android

Make a standalone toolchain as per <https://developer.android.com/ndk/guides/standalone_toolchain.html>. For Core Lightning you must target an API level of 24 or higher.

Depending on your toolchain location and target arch, source env variables such as:

```shell
export PATH=$PATH:/path/to/android/toolchain/bin
# Change next line depending on target device arch
target_host=arm-linux-androideabi
export AR=$target_host-ar
export AS=$target_host-clang
export CC=$target_host-clang
export CXX=$target_host-clang++
export LD=$target_host-ld
export STRIP=$target_host-strip
```

Two makefile targets should not be cross-compiled, so we specify a native CC:

```shell
make CC=clang clean ccan/tools/configurator/configurator
make clean -C ccan/ccan/cdump/tools \
  && make CC=clang -C ccan/ccan/cdump/tools
```

Install the `qemu-user` package. This will allow you to properly configure the build for the target device environment. Build with:

```shell
BUILD=x86_64 MAKE_HOST=arm-linux-androideabi \
  make PIE=1 DEVELOPER=0 \
  CONFIGURATOR_CC="arm-linux-androideabi-clang -static"
```
## To cross-compile for Raspberry Pi

Obtain the [official Raspberry Pi toolchains](https://github.com/raspberrypi/tools). This document assumes compilation will occur towards the Raspberry Pi 3 (arm-linux-gnueabihf as of Mar. 2018).

Depending on your toolchain location and target arch, source env variables will need to be set. They can be set from the command line as such:

```shell
export PATH=$PATH:/path/to/arm-linux-gnueabihf/bin
# Change next line depending on specific Raspberry Pi device
target_host=arm-linux-gnueabihf
export AR=$target_host-ar
export AS=$target_host-as
export CC=$target_host-gcc
export CXX=$target_host-g++
export LD=$target_host-ld
export STRIP=$target_host-strip
```

Install the `qemu-user` package. This will allow you to properly configure the build for the target device environment. Config the arm elf interpreter prefix:

```shell
export QEMU_LD_PREFIX=/path/to/raspberry/arm-bcm2708/arm-rpi-4.9.3-linux-gnueabihf/arm-linux-gnueabihf/sysroot/
```

Obtain and install cross-compiled versions of sqlite3, gmp and zlib.

Download and build zlib:

```shell
wget https://zlib.net/fossils/zlib-1.2.13.tar.gz
tar xvf zlib-1.2.13.tar.gz
cd zlib-1.2.13
./configure --prefix=$QEMU_LD_PREFIX
make
make install
```

Download and build sqlite3:

```shell
wget https://www.sqlite.org/2018/sqlite-src-3260000.zip
unzip sqlite-src-3260000.zip
cd sqlite-src-3260000
./configure --enable-static --disable-readline --disable-threadsafe --disable-load-extension --host=$target_host --prefix=$QEMU_LD_PREFIX
make
make install
```

Download and build gmp:

```shell
wget https://gmplib.org/download/gmp/gmp-6.1.2.tar.xz
tar xvf gmp-6.1.2.tar.xz
cd gmp-6.1.2
./configure --disable-assembly --host=$target_host --prefix=$QEMU_LD_PREFIX
make
make install
```

Then, build Core Lightning with the following commands:

```
./configure
make
```
## To compile for Armbian

For all the other Pi devices out there, consider using [Armbian](https://www.armbian.com).

You can compile in `customize-image.sh` using the instructions for Ubuntu.

A working example that compiles both bitcoind and Core Lightning for Armbian can be found [here](https://github.com/Sjors/armbian-bitcoin-core).
## To compile for Alpine

Get dependencies:

```shell
apk update
apk add --virtual .build-deps ca-certificates alpine-sdk autoconf automake git libtool \
  gmp-dev sqlite-dev python3 py3-mako net-tools zlib-dev libsodium gettext
```

Clone lightning:

```shell
git clone https://github.com/ElementsProject/lightning.git
cd lightning
git submodule update --init --recursive
```

Build and install:

```shell
./configure
make
make install
```

Clean up:

```shell
cd .. && rm -rf lightning
apk del .build-deps
```

Install runtime dependencies:

```shell
apk add gmp libgcc libsodium sqlite-libs zlib
```
21 doc/guides/Getting Started/home.md Normal file
File diff suppressed because one or more lines are too long
15 doc/guides/Getting Started/upgrade.md Normal file
---
title: "Upgrade"
slug: "upgrade"
excerpt: "Upgrade to the latest stable releases without interruption."
hidden: false
createdAt: "2022-11-18T14:32:58.821Z"
updatedAt: "2023-01-25T10:54:43.810Z"
---
Upgrading your Core Lightning node is the same as installing it. So if you previously installed it using a release binary, download the latest binary in the same directory as before. If you previously built it from source, fetch the latest source and build it again.

> 🚧
>
> Upgrades to Core Lightning often change the database: once this is done, downgrades are not generally possible. By default, Core Lightning will exit with an error rather than upgrade, unless this is an official released version.
>
> If you really want to upgrade to a non-release version, you can set `database-upgrade=true` in your configuration file, or start `lightningd` with `--database-upgrade=true` (or set it to false to never allow a non-reversible upgrade!)
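For instance, to opt in to a one-way database upgrade when running a non-release build, your configuration file would contain:

```
# WARNING: database upgrades to non-release versions are irreversible
database-upgrade=true
```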
22 doc/guides/Node Operator-s Guide/analytics.md Normal file
---
title: "Analytics"
slug: "analytics"
excerpt: "Analyse your node data for effective node management."
hidden: false
createdAt: "2022-12-09T09:54:38.377Z"
updatedAt: "2023-02-21T13:39:32.669Z"
---
## Using SQL plugin

Since version 23.02, Core Lightning ships with a powerful SQL plugin that allows you to query your node and analyse data for channel / liquidity management, accounting and audit.

See [lightning-sql](ref:lightning-sql) for a full primer on its usage.

## Using third-party software

There are a handful of third-party GUI tools that provide analytics on top of your node, apart from helping you manage your node:

- [Ride-the-Lightning](https://www.ridethelightning.info/)
- [Umbrel](https://getumbrel.com/)
- [bolt.observer](https://bolt.observer)
- <https://lnnodeinsight.com/>
8 doc/guides/Node Operator-s Guide/channel-management.md Normal file
---
title: "Channel Management"
slug: "channel-management"
excerpt: "Manage your channels and liquidity effectively and efficiently with careful planning and active monitoring."
hidden: true
createdAt: "2022-11-18T14:28:10.211Z"
updatedAt: "2023-02-08T15:10:28.588Z"
---
219 doc/guides/Node Operator-s Guide/faq.md Normal file
---
title: "Troubleshooting & FAQ"
slug: "faq"
excerpt: "Common issues and frequently asked questions on operating a CLN node."
hidden: false
createdAt: "2023-01-25T13:15:09.290Z"
updatedAt: "2023-02-21T13:47:49.406Z"
---
# General questions

### I don't know where to start, help me!

There is a Core Lightning plugin specifically for this purpose, it's called [`helpme`](https://github.com/lightningd/plugins/tree/master/helpme).

Assuming you have followed the [installation steps](doc:installation), have `lightningd` up and running, and `lightning-cli` in your `$PATH`, you can start the plugin like so:

```shell
# Clone the plugins repository
git clone https://github.com/lightningd/plugins
# Make sure the helpme plugin is executable (git should have already handled this)
chmod +x plugins/helpme/helpme.py
# Install its dependencies (there is only one actually)
pip3 install --user -r plugins/helpme/requirements.txt
# Then just start it :)
lightning-cli plugin start $PWD/plugins/helpme/helpme.py
```

The plugin registers a new command `helpme` which will guide you through the main components of Core Lightning:

```shell
lightning-cli helpme
```
### How to get the balance of each channel?

You can use the `listfunds` command and take the ratio of `our_amount_msat` over `amount_msat`. Note that this doesn't account for the [channel reserve](https://github.com/lightning/bolts/blob/master/02-peer-protocol.md#rationale).
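As a sketch of that calculation (the JSON shape follows `listfunds` in recent releases, where these fields are integer millisatoshi values; the peer IDs and amounts below are made up for illustration):

```python
# Sample listfunds-style output (illustrative values, not from a real node).
listfunds = {
    "channels": [
        {"peer_id": "02deadbeef", "our_amount_msat": 600000000, "amount_msat": 1000000000},
        {"peer_id": "03cafebabe", "our_amount_msat": 250000000, "amount_msat": 500000000},
    ]
}

def channel_balances(funds):
    """Map each peer_id to our share of the channel capacity (0.0 - 1.0)."""
    return {
        ch["peer_id"]: ch["our_amount_msat"] / ch["amount_msat"]
        for ch in funds["channels"]
    }

print(channel_balances(listfunds))  # {'02deadbeef': 0.6, '03cafebabe': 0.5}
```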
A better option is to use the [`summary` plugin](https://github.com/lightningd/plugins/tree/master/summary) which nicely displays channel balances, along with other useful channel information.
|
||||
|
||||
### My channel is in state `STATE`, what does that mean ?
|
||||
|
||||
See the [listpeers](ref:lightning-listpeers) command.
|
||||
|
||||
### My payment is failing / all my payments are failing, why?

There are many reasons for a payment failure. The most common one is a [failure](https://github.com/lightning/bolts/blob/master/04-onion-routing.md#failure-messages) along the route from you to the payee. The best (and most common) solution to a route failure problem is to open more channels, which should increase the available routes to the recipient and lower the probability of a failure.

**Hint:** use the [`pay`](ref:lightning-pay) command, which will iterate through all possible routes, instead of the low-level `sendpay` command, which only tries the route passed in.
### How can I receive payments?

In order to receive payments you need inbound liquidity. You get inbound liquidity when another node opens a channel to you, or by successfully completing a payment out through a channel you opened.

If you need a lot of inbound liquidity, you can use a service that trustlessly swaps on-chain Bitcoin for Lightning channel capacity. There are a few online service providers that will create channels to you. A few of them charge fees for this service. Note that if you already have a channel open to them, you'll need to close it before requesting another channel.
### Are there any issues if my node changes its IP address? What happens to the channels if it does?

There is no risk to your channels if your IP address changes. Other nodes might not be able to connect to you, but your node can still connect to them. Core Lightning also has an integrated IPv4/6 address discovery mechanism: if your node detects a new public address, it will update its announcement. For this to work behind a NAT router, you need to forward the default TCP port 9735 to your node. IP discovery is only active if no other addresses are announced.

Alternatively, you can [set up a Tor hidden service](doc:tor) for your node, which will also work well behind NAT firewalls.
### Can I have two hosts with the same public key and different IP addresses, both online and operating at the same time?

No.
### Can I use a single `bitcoind` for multiple `lightningd`s?

Yes. All `bitcoind` calls are handled by the bundled `bcli` plugin. `lightningd` does not use `bitcoind`'s wallet. While on the topic, `lightningd` does not require the `-txindex` option on `bitcoind`.

If you use a single `bitcoind` for multiple `lightningd`s, be sure to raise the `bitcoind` max RPC thread limit (`-rpcthreads`): each `lightningd` can use up to 4 threads, which is `bitcoind`'s default maximum.
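For example, with two `lightningd` instances sharing one `bitcoind`, a sketch of the relevant `bitcoin.conf` line might look like this (the exact value is your choice; this just leaves headroom beyond the 2 × 4 threads the nodes can use):

```
# bitcoin.conf: two lightningd instances x 4 RPC threads each, plus headroom
rpcthreads=12
```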
### Can I use Core Lightning on mobile?

#### Remote control

[Spark-wallet](https://github.com/shesek/spark-wallet/) is the most popular remote control HTTP server for `lightningd`. Use it [behind Tor](https://github.com/shesek/spark-wallet/blob/master/doc/onion.md).

#### `lightningd` on Android

Effort has been made to get `lightningd` running on Android ([see issue #3484](https://github.com/ElementsProject/lightning/issues/3484)), but it is currently unusable.
### How to "backup my wallet"?

See [Backup and recovery](doc:backup-and-recovery) for a more comprehensive discussion of your options.

In summary: as a Bitcoin user, you may be familiar with a file or a seed (or some mnemonics) from which you can recover all your funds.

Core Lightning has an internal bitcoin wallet, which you can use to make "on-chain" transactions (see [withdraw](ref:lightning-withdraw)). These on-chain funds are backed up via the HD wallet seed, stored in byte form in `hsm_secret`.

`lightningd` also stores information about funds locked in Lightning Network channels in a database. This database is required for ongoing channel updates as well as channel closure. There is no single-seed backup for funds locked in channels.

While crucial for node operation, snapshot-style backups of the `lightningd` database are **discouraged**, as _any_ loss of state may result in permanent loss of funds. See the [penalty mechanism](https://github.com/lightning/bolts/blob/master/05-onchain.md#revoked-transaction-close-handling) for more information on why any amount of state loss results in loss of funds.

Real-time database replication is the recommended approach to backing up node data. Tools for replication are currently in active development, using the [`db write` plugin hook](doc:hooks#db_write).
# Channel Management

### How to forget about a channel?

Channels may end up stuck during funding and never confirm on-chain. There is a variety of causes, the most common ones being that the funds have been double-spent or that the funding fee was too low to be confirmed. This is unlikely to happen in normal operation, as CLN tries to use sane defaults and prevents double-spends whenever possible, but it is still possible when using custom feerates or when the bitcoin backend has no good fee estimates.

Before forgetting about a channel, it is important to ensure that the funding transaction can never be confirmed, by double-spending the funds. To do so, rescan the UTXOs using [`dev-rescan-outputs`](doc:faq#rescanning-the-blockchain-for-lost-utxos) to reset any funds that may have been used in the funding transaction, then move all the funds to a new address:

```bash
lightning-cli dev-rescan-outputs
ADDR=$(lightning-cli newaddr bech32 | jq -r .bech32)
lightning-cli withdraw $ADDR all
```

This step is not required if the funding transaction was already double-spent, however it is safe to do it anyway, just in case.

Then wait for the transaction moving the funds to confirm. This ensures any pending funding transaction can no longer be confirmed.

As an additional step you can also force-close the unconfirmed channel:

```bash
lightning-cli close $PEERID 10 # Force close after 10 seconds
```

This will store a unilateral close TX in the DB as a last-resort means of recovery should the channel unexpectedly confirm anyway.

Now you can use the `dev-forget-channel` command to remove the channel's entries from the database:

```bash
lightning-cli dev-forget-channel $NODEID
```

This will perform additional checks on whether it is safe to forget the channel, and only then remove the channel from the DB. Notice that this command is only available if CLN was compiled with `DEVELOPER=1`.
### My channel is stuck in state `CHANNELD_AWAITING_LOCKIN`

There are two root causes for this issue:

- The funding transaction isn't confirmed yet. In this case we have to wait longer, or, in the case of a transaction that'll never confirm, forget the channel safely.
- The peer hasn't sent a lockin message. This message acknowledges that the node has seen sufficiently many confirmations to consider the channel funded.

In the case of a confirmed funding transaction but a missing lockin message, a simple reconnection may be sufficient to nudge the peer into acknowledging the confirmation:

```bash
lightning-cli disconnect $PEERID true # force a disconnect
lightning-cli connect $PEERID
```

The lack of funding locked messages is a bug we are trying to debug in issue [5366](https://github.com/ElementsProject/lightning/issues/5366); if you have encountered this issue, please drop us a comment and any information that may be helpful.

If this didn't work, it could be that the peer is simply not caught up with the blockchain and hasn't seen the funding confirm yet. In this case we can either wait or force a unilateral close:

```bash
lightning-cli close $PEERID 10 # Force a unilateral close after 10 seconds
```

If the funding transaction is not confirmed, we may either wait or attempt to double-spend it. Confirmations may take a long time, especially when the fees used for the funding transaction were low. You can check whether the transaction is still going to confirm by looking up the funding transaction in a block explorer:

```bash
TXID=$(lightning-cli listpeers $PEERID | jq -r '.peers[].channels[].funding_txid')
```

This will give you the funding transaction ID that can be looked up in any explorer.

If you don't want to wait for the channel to confirm, you could forget the channel (see [How to forget about a channel?](doc:faq#how-to-forget-about-a-channel) for details). However, be careful, as that may be dangerous: you'll need to rescan and double-spend the outputs so the funding cannot confirm.
# Loss of funds

### Rescanning the blockchain for lost utxos

There are 3 types of 'rescans' you can make:

- `rescanblockchain`: A `bitcoind` RPC call which rescans the blockchain starting at the given height. This does not have an effect on Core Lightning, as `lightningd` tracks all block and wallet data independently.
- `--rescan=depth`: A `lightningd` configuration flag. This flag is read at node startup and tells `lightningd` at what depth from the current blockheight to rebuild its internal state. (You can specify an exact block to start scanning from, instead of a depth from the current height, by using a negative number.)
- `dev-rescan-outputs`: A `lightningd` RPC call, only available if your node has been configured and built in DEVELOPER mode (i.e. `./configure --enable-developer`). This will sync the state of known UTXOs in the `lightningd` wallet with `bitcoind`. As it only operates on outputs already seen on chain by the `lightningd` internal wallet, this will not find missing wallet funds.
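For example, the `--rescan` behaviour described above can also be requested from the `lightningd` config file (heights here are illustrative):

```
# In your lightningd config file (or as --rescan=... on the command line)
rescan=1000      # rebuild state from 1000 blocks below the current tip
# rescan=-700000 # negative value: rescan from absolute block height 700000
```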
### Database corruption / channel state lost

If you lose data (likely a corrupted `lightningd.sqlite3`) about a channel **with `option_static_remotekey` enabled**, you can wait for your peer to unilaterally close the channel, then use `tools/hsmtool` with the `guesstoremote` command to attempt to recover your funds from the peer's published unilateral close transaction.

If `option_static_remotekey` was not enabled, you're probably out of luck. The keys for your funds in your peer's unilateral close transaction are derived from information you lost. Fortunately, since version `0.7.3` channels are created with `option_static_remotekey` by default if your peer supports it. Which is to say that channels created after block [598000](https://blockstream.info/block/0000000000000000000dd93b8fb5c622b9c903bf6f921ef48e266f0ead7faedb) (short channel id starting with > 598000) have a high chance of supporting `option_static_remotekey`. You can verify it using the `features` field from the [`listpeers` command](ref:lightning-listpeers)'s result.

Here is an example in Python checking if [one of the `option_static_remotekey` bits](https://github.com/lightning/bolts/blob/master/09-features.md) (bit 12 or 13) is set in negotiated features of `0x02aaa2`:

```python
>>> bool(0x02aaa2 & ((1 << 12) | (1 << 13)))
True
```
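As a quick sketch, you can also check the block height encoded in a channel's `short_channel_id` (format `BLOCKxTXINDEXxOUTPUT`); the id below is made up for illustration:

```python
def scid_block_height(short_channel_id: str) -> int:
    """Return the block height encoded in a BLOCKxTXINDEXxOUTPUT short channel id."""
    return int(short_channel_id.split("x")[0])

# Channels funded after block 598000 are likely to have option_static_remotekey
print(scid_block_height("601234x17x0") > 598000)  # prints True
```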
If `option_static_remotekey` is enabled, you can attempt to recover the funds in a channel by following [this tutorial](https://github.com/mandelbit/bitcoin-tutorials/blob/master/CLightningRecoverFunds.md) on how to extract the necessary information from the network topology. If successful, the result will be a private key matching a unilaterally closed channel, which you can import into any wallet to recover the funds into that wallet.
# Technical Questions

### How do I get the `psbt` for RPC calls that need it?

A `psbt` is created and returned by a call to [`utxopsbt` with `reservedok=true`](ref:lightning-utxopsbt).
---
title: "Plugins"
slug: "plugins"
excerpt: "Leverage a plethora of plugins on Core Lightning."
hidden: false
createdAt: "2022-12-09T09:55:05.629Z"
updatedAt: "2023-02-14T12:47:46.112Z"
---
Power up your Core Lightning node and tailor it for your business needs with community-built plugins.

## Reckless plugin manager

`reckless` is a plugin manager for Core Lightning that you can use to install and uninstall plugins with a single command.

> 📘
>
> Reckless currently supports Python plugins only. Additional language support will be provided in future releases. For plugins built by the community in other languages, see the complete list of plugins [here](https://github.com/lightningd/plugins).

Typical plugin installation involves: finding the source plugin, copying, installing dependencies, testing, activating, and updating the lightningd config file. Reckless does all of these by invoking:

```shell
reckless install plugin_name
```

Reckless will exit early in the event that:

- the plugin is not found in any available source repositories
- dependencies are not successfully installed
- the plugin fails to execute

Reckless-installed plugins reside in the 'reckless' subdirectory of the user's `.lightning` folder. By default, plugins are activated on the `bitcoin` network (and use lightningd's bitcoin network config), but regtest may also be used.
Other commands include:

Disable the plugin and remove its directory:

```shell
reckless uninstall plugin_name
```

Look through all available sources for a plugin matching this name:

```shell
reckless search plugin_name
```

Dynamically enable the reckless-installed plugin and update the config to match:

```shell
reckless enable plugin_name
```

Dynamically disable the reckless-installed plugin and update the config to match:

```shell
reckless disable plugin_name
```

List available plugin repositories:

```shell
reckless source list
```

Add another plugin repo for reckless to search:

```shell
reckless source add repo_url
```

Remove a plugin repo for reckless to search:

```shell
reckless source rm repo_url
```
## Options

Available option flags:

**-d**, **--reckless-dir** _reckless\_dir_
specify an alternative data directory for reckless to use. Useful if your `.lightning` directory is protected from execution.

**-l**, **--lightning** _lightning\_data\_dir_
lightning data directory (defaults to $USER/.lightning)

**-c**, **--conf** _lightning\_config_
pass the config used by lightningd

**-r**, **--regtest**
use the regtest network and config instead of bitcoin mainnet

**-v**, **--verbose**
request additional debug output

> 📘
>
> Running reckless for the first time will prompt you that your lightningd's bitcoin config will be appended (or created) to inherit the reckless config file (this config is specific to the bitcoin network by default). Management of plugins will subsequently modify this file.
## Troubleshooting

Plugins must be executable. For Python plugins, the shebang is invoked, so **python3** should be available in your environment. This can be verified with **which python3**. The default reckless directory is $USER/.lightning/reckless and it should be possible for the lightningd user to execute files located there. If this is a problem, the option flag **reckless -d=\<my\_alternate\_dir>** may be used to relocate the reckless directory from its default. Consider creating a permanent alias in this case.