remove legacy docs

This commit is contained in:
Adi Shankara 2023-07-14 11:47:59 +04:00 committed by Rusty Russell
parent b2caa56c27
commit e4826fbf63
11 changed files with 0 additions and 5014 deletions

View file

@ -1,643 +0,0 @@
# Backing Up Your C-Lightning Node
Lightning Network channels get their scalability and privacy benefits
from the very simple technique of *not telling anyone else about your
in-channel activity*.
This is in contrast to onchain payments, where you have to tell everyone
about each and every payment and have it recorded on the blockchain,
leading to scaling problems (you have to push data to everyone, everyone
needs to validate every transaction) and privacy problems (everyone knows
every payment you were ever involved in).
Unfortunately, this removes a property that onchain users are so
accustomed to that they often react with surprise when they learn it is gone.
Your onchain activity is recorded in all archival fullnodes, so if you
forget all your onchain activity because your storage got fried, you
just go redownload the activity from the nearest archival fullnode.
But in Lightning, since *you* are the only one storing all your
financial information, you ***cannot*** recover this financial
information from anywhere else.
This means that on Lightning, **you have to** responsibly back up your
financial information yourself, using various processes and automation.
The discussion below assumes that you know where you put your
`$LIGHTNINGDIR`, and you know the directory structure within.
By default your `$LIGHTNINGDIR` will be in `~/.lightning/${COIN}`.
For example, if you are running `--mainnet`, it will be
`~/.lightning/bitcoin`.
## `hsm_secret`
!!! note
WHO SHOULD DO THIS: Everyone.
You need a copy of the `hsm_secret` file regardless of whatever backup
strategy you use.
The `hsm_secret` is created when you first create the node, and does
not change.
Thus, a one-time backup of `hsm_secret` is sufficient.
This is just 32 bytes, and you can do something like the below and
write the hexadecimal digits a few times on a piece of paper:
```bash
cd $LIGHTNINGDIR
xxd hsm_secret
```
You can re-enter the hexdump into a text file later and use `xxd` to
convert it back to a binary `hsm_secret`:
```bash
cat > hsm_secret_hex.txt <<HEX
00: 30cc f221 94e1 7f01 cd54 d68c a1ba f124
10: e1f3 1d45 d904 823c 77b7 1e18 fd93 1676
HEX
xxd -r hsm_secret_hex.txt > hsm_secret
chmod 0400 hsm_secret
```
Notice that you need to ensure that the `hsm_secret` is only readable by
the user, and is not writable, as otherwise `lightningd` will refuse to
start.
Hence the `chmod 0400 hsm_secret` command.
Alternatively, if you are deploying a new node that has no funds and
channels yet, you can generate BIP39 words using any process, and
create the `hsm_secret` using the `hsmtool generatehsm` command.
If you did `make install` then `hsmtool` is installed as
`lightning-hsmtool`, else you can find it in the `tools/` directory
of the build directory.
```bash
lightning-hsmtool generatehsm hsm_secret
```
Then enter the BIP39 words, plus an optional passphrase, and copy the
resulting `hsm_secret` to `${LIGHTNINGDIR}`.
You can regenerate the same `hsm_secret` file using the same BIP39
words, which again, you can back up on paper.
Recovery of the `hsm_secret` is sufficient to recover any onchain
funds.
Recovery of the `hsm_secret` is necessary, but insufficient, to recover
any in-channel funds.
To recover in-channel funds, you need to use one or more of the other
backup strategies below.
## SQLITE3 `--wallet=${main}:${backup}` And Remote NFS Mount
!!! note
WHO SHOULD DO THIS: Casual users.
!!! warning
This technique is only supported on versions newer than v0.10.2; the v0.10.2 release itself does not include it.
On earlier versions, the `:` character is not special and will be considered part of the path of the database file.
When using the SQLITE3 backend (the default), you can specify a
second database file to replicate to, by separating the second
file with a single `:` character in the `--wallet` option, after
the main database filename.
For example, if the user running `lightningd` is named `user`, and
you are on the Bitcoin mainnet with the default `${LIGHTNINGDIR}`, you
can specify in your `config` file:
```bash
wallet=sqlite3:///home/user/.lightning/bitcoin/lightningd.sqlite3:/my/backup/lightningd.sqlite3
```
Or via command line:
```bash
lightningd --wallet=sqlite3:///home/user/.lightning/bitcoin/lightningd.sqlite3:/my/backup/lightningd.sqlite3
```
If the second database file does not exist but the directory that would
contain it does exist, the file is created.
If the directory of the second database file does not exist, `lightningd` will
fail at startup.
If the second database file already exists, on startup it will be overwritten
with the main database.
During operation, all database updates will be done on both databases.
The main and backup files will **not** be identical at every byte, but they
will still contain the same data.
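If you want to convince yourself of that, you can compare logical dumps
rather than raw bytes (a sketch; run it only while `lightningd` is stopped,
and adjust the paths to your own setup):
```bash
sqlite3 /home/user/.lightning/bitcoin/lightningd.sqlite3 .dump | sha256sum
sqlite3 /my/backup/lightningd.sqlite3 .dump | sha256sum
```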
It is recommended that you use **the same filename** for both files, just on
different directories.
This has the advantage compared to the `backup` plugin below of requiring
exactly the same amount of space on both the main and backup storage.
The `backup` plugin will take more space on the backup than on the main
storage.
It has the disadvantage that it will only work with the SQLITE3 backend and
is not supported by the PostgreSQL backend, and is unlikely to be supported
on any future database backends.
You can only specify *one* replica.
It is recommended that you use a network-mounted filesystem for the backup
destination.
For example, if you have a NAS you can access remotely.
At the minimum, set the backup to a different storage device.
This is no better than just using RAID-1 (and the RAID-1 will probably be
faster) but this is easier to set up --- just plug in a commodity USB
flash disk (with metal casing, since a lot of writes are done and you need
to dissipate the heat quickly) and use it as the backup location, without
repartitioning your OS disk, for example.
Do note that files are not stored encrypted, so you should really not do
this with rented space ("cloud storage").
To recover, simply get **all** the backup database files.
Note that SQLITE3 will sometimes create a `-journal` or `-wal` file, which
is necessary to ensure correct recovery of the backup; you need to copy
those too, with corresponding renames if you use a different filename for
the backup database.
For example, if you named the backup `backup.sqlite3` and on recovery you
find `backup.sqlite3` and `backup.sqlite3-journal` files, rename
`backup.sqlite3` to `lightningd.sqlite3` and `backup.sqlite3-journal` to
`lightningd.sqlite3-journal`.
The `-journal` or `-wal` file may or may not exist, but if it *does*, you
*must* recover it as well.
(In WAL mode there can also be an `-shm` file, but it is unnecessary;
it is only used by SQLITE3 as a hack for portable shared memory, contains
no useful data, and SQLITE3 always ignores its contents.)
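A recovery under those assumptions might look like this (a sketch only;
the names and paths are assumptions, adjust them to your own setup):
```bash
# Assumed names: backup called backup.sqlite3, default lightningd.sqlite3 target.
cp /my/backup/backup.sqlite3 "$LIGHTNINGDIR"/lightningd.sqlite3
# Copy the -journal and -wal files too, if they exist:
for suffix in -journal -wal; do
    if [ -f /my/backup/backup.sqlite3$suffix ]; then
        cp /my/backup/backup.sqlite3$suffix "$LIGHTNINGDIR"/lightningd.sqlite3$suffix
    fi
done
```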
It is recommended that you use **the same filename** for both main and
backup databases (just on different directories), and put the backup in
its own directory, so that you can just recover all the files in that
directory without worrying about missing any needed files or correctly
renaming.
If your backup destination is a network-mounted filesystem that is in a
remote location, then even loss of all hardware in one location will allow
you to still recover your Lightning funds.
However, if instead you are just replicating the database on another
storage device in a single location, you remain vulnerable to disasters
like fire or computer confiscation.
## `backup` Plugin And Remote NFS Mount
!!! note
WHO SHOULD DO THIS: Casual users.
You can find the full source for the `backup` plugin here:
https://github.com/lightningd/plugins/tree/master/backup
The `backup` plugin requires Python 3.
* Download the source for the plugin.
* `git clone https://github.com/lightningd/plugins.git`
* `cd` into its directory and install requirements.
* `cd plugins/backup`
* `pip3 install -r requirements.txt`
* Figure out where you will put the backup files.
* Ideally you have an NFS or other network-based mount on your system,
into which you will put the backup.
* Stop your Lightning node.
* `/path/to/backup-cli init --lightning-dir ${LIGHTNINGDIR} file:///path/to/nfs/mount/file.bkp`.
This creates an initial copy of the database at the NFS mount.
* Add these settings to your `lightningd` configuration:
* `important-plugin=/path/to/backup.py`
* Restart your Lightning node.
It is recommended that you use a network-mounted filesystem for the backup
destination.
For example, if you have a NAS you can access remotely.
Do note that files are not stored encrypted, so you should really not do
this with rented space ("cloud storage").
Alternately, you *could* put it in another storage device (e.g. USB flash
disk) in the same physical location.
To recover:
* Re-download the `backup` plugin and install Python 3 and the
requirements of `backup`.
* `/path/to/backup-cli restore file:///path/to/nfs/mount ${LIGHTNINGDIR}`
If your backup destination is a network-mounted filesystem that is in a
remote location, then even loss of all hardware in one location will allow
you to still recover your Lightning funds.
However, if instead you are just replicating the database on another
storage device in a single location, you remain vulnerable to disasters
like fire or computer confiscation.
## Filesystem Redundancy
!!! note
WHO SHOULD DO THIS: Filesystem nerds, data hoarders, home labs, enterprise users.
You can set up a RAID-1 with multiple storage devices, and point the
`$LIGHTNINGDIR` to the RAID-1 setup.
That way, failure of one storage device will still let you recover
funds.
You can use a hardware RAID-1 setup, or just buy multiple commodity
storage media you can add to your machine and use a software RAID,
such as (not an exhaustive list!):
* `mdadm` to create a virtual volume which is the RAID combination
of multiple physical media (see the sketch after this list).
* BTRFS RAID-1 or RAID-10, a filesystem built into Linux.
* ZFS RAID-Z, a filesystem that cannot be legally distributed with the Linux
kernel, but can be distributed in a BSD system, and can be installed
on Linux with some extra effort, see
[ZFSonLinux](https://zfsonlinux.org).
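For instance, a minimal `mdadm` RAID-1 might be created like this (a
sketch only; the device names are assumptions, and this destroys any data
on them):
```bash
# /dev/sdb1 and /dev/sdc1 are assumed devices; substitute your own.
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
sudo mkfs.ext4 /dev/md0
sudo mount /dev/md0 /mnt/lightning-raid
```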
RAID-1 (whether by hardware, or software) like the above protects against
failure of a single storage device, but does not protect you in case of
certain disasters, such as fire or computer confiscation.
You can "just" use a pair of high-quality metal-casing USB flash devices
(you need metal-casing since the devices will have a lot of small writes,
which will cause a lot of heating, which needs to dissipate very fast,
otherwise the flash device firmware will internally disconnect the flash
device from your computer, reducing your reliability) in RAID-1, if you
have enough USB ports.
### Example: BTRFS on Linux
On a Linux system, one of the simpler things you can do would be to use
BTRFS RAID-1 setup between a partition on your primary storage and a USB
flash disk.
The below "should" work, but assumes you are comfortable with low-level
Linux administration.
If you are on a system that would make you cry if you break it, you **MUST**
stop your Lightning node and back up all files before doing the below.
* Install `btrfs-progs` or `btrfs-tools` or equivalent.
* Get a 32 GB USB flash disk.
* Stop your Lightning node and back up everything, do not be stupid.
* Repartition your hard disk to have a 30 GB partition.
* This is risky and may lose your data, so this is best done with a
brand-new hard disk that contains no data.
* Connect the USB flash disk.
* Find the `/dev/sdXX` devices for the 30 GB HDD partition and the flash disk.
* `lsblk -o NAME,TYPE,SIZE,MODEL` should help.
* Create a RAID-1 `btrfs` filesystem.
* `mkfs.btrfs -m raid1 -d raid1 /dev/${HDD30GB} /dev/${USB32GB}`
* You may need to add `-f` if the USB flash disk is already formatted.
* Create a mountpoint for the `btrfs` filesystem.
* Create a `/etc/fstab` entry.
* Use the `UUID` option instead of `/dev/sdXX` since the exact device letter
can change across boots.
* You can get the UUID by `lsblk -o NAME,UUID`.
Specifying *either* of the devices is sufficient.
* Add `autodefrag` option, which tends to work better with SQLITE3
databases.
* e.g. `UUID=${UUID} ${BTRFSMOUNTPOINT} btrfs defaults,autodefrag 0 0`
* `mount -a` then `df` to confirm it got mounted.
* Copy the contents of the `$LIGHTNINGDIR` to the BTRFS mount point.
* Copy the entire directory, then `chown -R` the copy to the user who will
run the `lightningd`.
* If you are paranoid, run `diff -r` on both copies to check.
* Remove the existing `$LIGHTNINGDIR`.
* `ln -s ${BTRFSMOUNTPOINT}/lightningdirname ${LIGHTNINGDIR}`.
* Make sure the `$LIGHTNINGDIR` has the same structure as what you
originally had.
* Add `crontab` entries for `root` that perform regular `btrfs` maintenance
tasks.
* `0 0 * * * /usr/bin/btrfs balance start -dusage=50 -dlimit=2 -musage=50 -mlimit=4 ${BTRFSMOUNTPOINT}`
This prevents BTRFS from running out of blocks even if it has unused
space *within* blocks, and is run at midnight everyday.
You may need to change the path to the `btrfs` binary.
* `0 0 * * 0 /usr/bin/btrfs scrub start -B -c 2 -n 4 ${BTRFSMOUNTPOINT}`
This detects bit rot (i.e. bad sectors) and auto-heals the filesystem,
and is run on Sundays at midnight.
* Restart your Lightning node.
If one or the other device fails completely, shut down your computer, boot
on a recovery disk or similar, then:
* Connect the surviving device.
* Mount the partition/USB flash disk in `degraded` mode:
* `mount -o degraded /dev/sdXX /mnt/point`
* Copy the `lightningd.sqlite3` and `hsm_secret` to new media.
* Do **not** write to the degraded `btrfs` mount!
* Start up a `lightningd` using the `hsm_secret` and `lightningd.sqlite3`
and close all channels and move all funds to onchain cold storage you
control, then set up a new Lightning node.
If the device that fails is the USB flash disk, you can replace it using
BTRFS commands.
You should probably stop your Lightning node while doing this.
* `btrfs replace start /dev/sdOLD /dev/sdNEW ${BTRFSMOUNTPOINT}`.
* If `/dev/sdOLD` no longer even exists because the device is really
really broken, use `btrfs filesystem show` to see the number after
`devid` of the broken device, and use that number instead of
`/dev/sdOLD`.
* Monitor status with `btrfs replace status ${BTRFSMOUNTPOINT}`.
More sophisticated setups with more than two devices are possible.
Take note that "RAID 1" in `btrfs` means "data is copied on up to two
devices", meaning only up to one device can fail.
You may be interested in `raid1c3` and `raid1c4` modes if you have
three or four storage devices.
BTRFS would probably work better if you were purchasing an entire set
of new storage devices to set up a new node.
## PostgreSQL Cluster
!!! note
WHO SHOULD DO THIS: Enterprise users, whales.
`lightningd` may also be compiled with PostgreSQL support.
PostgreSQL is generally faster than SQLITE3, and also supports running a
PostgreSQL cluster to be used by `lightningd`, with automatic replication
and failover in case an entire node of the PostgreSQL cluster fails.
Setting this up, however, is more involved.
By default, `lightningd` compiles with PostgreSQL support **only** if it
finds `libpq` installed when you `./configure`.
To enable it, you have to install a developer version of `libpq`.
On most Debian-derived systems that would be `libpq-dev`.
To verify you have it properly installed on your system, check if the
following command gives you a path:
```bash
pg_config --includedir
```
Versioning may also matter to you.
For example, Debian Stable ("buster") as of late 2020 provides PostgreSQL 11.9
for the `libpq-dev` package, but Ubuntu LTS ("focal") of 2020 provides
PostgreSQL 12.5.
Debian Testing ("bullseye") uses PostgreSQL 13.0 as of this writing.
PostgreSQL 12 had a non-trivial change in the way the restore operation is
done for replication.
You should use the same PostgreSQL version of `libpq-dev` as what you run
on your cluster, which probably means running the same distribution on
your cluster.
Once you have decided on a specific version you will use throughout, refer
as well to the "synchronous replication" document of PostgreSQL for the
**specific version** you are using:
* [PostgreSQL 11](https://www.postgresql.org/docs/11/runtime-config-replication.html)
* [PostgreSQL 12](https://www.postgresql.org/docs/12/runtime-config-replication.html)
* [PostgreSQL 13](https://www.postgresql.org/docs/13/runtime-config-replication.html)
You then have to compile `lightningd` with PostgreSQL support.
* Clone or untar a new source tree for `lightning` and `cd` into it.
* You *could* just use `make clean` on an existing one, but for the
avoidance of doubt (and potential bugs in our `Makefile` cleanup rules),
just create a fresh source tree.
* `./configure`
* Add any options to `configure` that you normally use as well.
* Double-check the `config.vars` file contains `HAVE_POSTGRES=1`.
* `grep 'HAVE_POSTGRES' config.vars`
* `make`
* If you install `lightningd`, `sudo make install`.
If you were not using PostgreSQL before but have compiled and used
`lightningd` on your system, the resulting `lightningd` will still
continue supporting and using your current SQLITE3 database;
it just gains the option to use a PostgreSQL database as well.
If you just want to use PostgreSQL without using a cluster (for
example, as an initial test without risking any significant funds),
then after setting up a PostgreSQL database, you just need to add
`--wallet=postgres://${USER}:${PASSWORD}@${HOST}:${PORT}/${DB}`
to your `lightningd` config or invocation.
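For example (a sketch only; the user, password, host, port, and database
name are placeholders, not recommended values):
```bash
lightningd --wallet=postgres://cln:s3cret@127.0.0.1:5432/lightningdb
```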
To set up a cluster for a brand new node, follow this (external)
[guide by @gabridome][gabridomeguide].
[gabridomeguide]: https://bit.ly/3KffmN3
The above guide assumes you are setting up a new node from scratch.
It is also specific to PostgreSQL 12, and setting up for other versions
**will** have differences; read the PostgreSQL manuals linked above.
If you want to continue a node that started using an SQLITE3 database,
note that we do not support this.
You should set up a new PostgreSQL node, move funds from the SQLITE3
node to the PostgreSQL node, then shut down the SQLITE3 node
permanently.
There are also more ways to set up PostgreSQL replication.
In general, you should use [synchronous replication (13)][pqsyncreplication],
since `lightningd` assumes that once a transaction is committed, it is
saved in all permanent storage.
This can make remote replicas difficult to operate, since every commit has to wait on the replica's network latency.
[pqsyncreplication]: https://www.postgresql.org/docs/13/warm-standby.html#SYNCHRONOUS-REPLICATION
## SQLite Litestream Replication
!!! warning
Previous versions of this document recommended this technique, but we no longer do so.
According to [issue 4857][], even with a 60-second timeout that we added
in 0.10.2, this leads to constant crashing of `lightningd` in some
situations.
This section will be removed completely six months after 0.10.3.
Consider using
```
--wallet=sqlite3://${main}:${backup}
```
above, instead.
[issue 4857]: https://github.com/ElementsProject/lightning/issues/4857
One of the simpler options on any system is to use Litestream to replicate the SQLite database.
It continuously streams SQLite changes to a file or to external storage (the
cloud storage option should not be used).
Backups/replication should not be on the same disk as the original SQLite DB.
You need to enable WAL mode on your database.
To do so, first stop `lightningd`, then:
```bash
$ sqlite3 lightningd.sqlite3
sqlite> PRAGMA journal_mode = WAL;
sqlite> .quit
```
Then just restart `lightningd`.
Then create `/etc/litestream.yml`:
```yaml
dbs:
  - path: /home/bitcoin/.lightning/bitcoin/lightningd.sqlite3
    replicas:
      - path: /media/storage/lightning_backup
```
and start the service using systemctl:
```bash
$ sudo systemctl start litestream
```
To restore:
```bash
$ litestream restore -o /home/bitcoin/restore_lightningd.sqlite3 /media/storage/lightning_backup
```
Because Litestream only copies small changes and not the entire
database (holding a read lock on the file while doing so), the
60-second timeout on locking should not be reached unless
something has made your backup medium very very slow.
Litestream has its own timer, so there is a tiny (but
non-negligible) probability that `lightningd` updates the
database, then irrevocably commits to the update by sending
revocation keys to the counterparty, and *then* your main
storage media crashes before Litestream can replicate the
update.
Treat this as a superior version of "Database File Backups"
section below and prefer recovering via other backup methods
first.
## Database File Backups
!!! note
WHO SHOULD DO THIS: Those who already have at least one of the
other backup methods, those who are #reckless.
This is the least desirable backup strategy, as it *can* lead to loss
of all in-channel funds if you use it.
However, having *no* backup strategy at all *will* lead to loss of all
in-channel funds, so this is still better than nothing.
This backup method is undesirable, since it cannot recover the following
channels:
* Channels with peers that do not support `option_dataloss_protect`.
* Most nodes on the network already support `option_dataloss_protect`
as of November 2020.
* If the peer does not support `option_dataloss_protect`, then the entire
channel funds will be revoked by the peer.
* Peers can *claim* to honestly support this, but later steal funds
from you by giving obsolete state when you recover.
* Channels created *after* the copy was made are not recoverable.
* Data for those channels does not exist in the backup, so your node
cannot recover them.
Because of the above, this strategy is discouraged: you *can* potentially
lose all funds in open channels.
However, again, note that a "no backups #reckless" strategy leads to
*definite* loss of funds, so you should still prefer *this* strategy rather
than having *no* backups at all.
Even if you have one of the better options above, you might still want to do
this as a worst-case fallback, as long as you:
* Attempt to recover using the other backup options above first.
Any one of them will be better than this backup option.
* Recover by this method **ONLY** as a ***last*** resort.
* Recover using the most recent backup you can find.
Take time to look for the most recent available backup.
Again, this strategy can lead to only ***partial*** recovery of funds,
or even to complete failure to recover, so use the other methods first to
recover!
### Offline Backup
While `lightningd` is not running, just copy the `lightningd.sqlite3` file
in the `$LIGHTNINGDIR` on backup media somewhere.
To recover, just copy the backed up `lightningd.sqlite3` into your new
`$LIGHTNINGDIR` together with the `hsm_secret`.
You can also use any automated backup system as long as it includes the
`lightningd.sqlite3` file (and optionally `hsm_secret`, but note that
as a secret key, thieves getting a copy of your backups may allow them
to steal your funds, even in-channel funds) and as long as it copies the
file while `lightningd` is not running.
### Backing Up While `lightningd` Is Running
Since `sqlite3` will be writing to the file while `lightningd` is running,
`cp`ing the `lightningd.sqlite3` file while `lightningd` is running may
result in the file not being copied properly if `sqlite3` happens to be
committing database transactions at that time, potentially leading to a
corrupted backup file that cannot be recovered from.
You have to stop `lightningd` before copying the database to backup in
order to ensure that backup files are not corrupted, and in particular,
wait for the `lightningd` process to exit.
Obviously, this is disruptive to node operations, so you might prefer
to just perform the `cp` even if the backup potentially is corrupted.
As long as you maintain multiple backups sampled at different times,
this may be more acceptable than stopping and restarting `lightningd`;
the corruption only exists in the backup, not in the original file.
If the filesystem or volume manager containing `$LIGHTNINGDIR` has a
snapshot facility, you can take a snapshot of the filesystem, then
mount the snapshot, copy `lightningd.sqlite3`, unmount the snapshot,
and then delete the snapshot.
Similarly, if the filesystem supports a "reflink" feature, such as
`cp -c` on an APFS on MacOS, or `cp --reflink=always` on an XFS or
BTRFS on Linux, you can also use that, then copy the reflinked copy
to a different storage medium; this is equivalent to a snapshot of
a single file.
This *reduces* but does not *eliminate* this race condition, so you
should still maintain multiple backups.
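As a sketch, the reflink approach on Linux might look like this (the paths
and remote host are assumptions; the reflink copy must stay on the same
filesystem):
```bash
# Atomic, stable snapshot on the same XFS/BTRFS filesystem:
cp --reflink=always "$LIGHTNINGDIR"/lightningd.sqlite3 "$LIGHTNINGDIR"/snapshot.sqlite3
# Copy the stable snapshot to the backup medium, then remove it:
rsync "$LIGHTNINGDIR"/snapshot.sqlite3 backup-host:/backups/lightningd.sqlite3
rm "$LIGHTNINGDIR"/snapshot.sqlite3
```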
You can additionally perform a check of the backup by this command:
```bash
echo 'PRAGMA integrity_check;' | sqlite3 ${BACKUPFILE}
```
This will result in the string `ok` being printed if the backup is
**likely** not corrupted.
If the result is anything else than `ok`, the backup is definitely
corrupted and you should make another copy.
In order to make a proper uncorrupted backup of the SQLITE3 file
while `lightningd` is running, we would need to have `lightningd`
perform the backup itself, which, as of the version at the time of
this writing, is not yet implemented.
Even if the backup is not corrupted, take note that this backup
strategy should still be a last resort; recovery of all funds is
still not assured with this backup strategy.
`sqlite3` has `.dump` and `VACUUM INTO` commands, but note that
those lock the main database for long time periods, which will
negatively affect your `lightningd` instance.
### `sqlite3` `.dump` or `VACUUM INTO` Commands
!!! warning
Previous versions of this document recommended
this technique, but we no longer do so.
According to [issue 4857][issue 4857], even with a 60-second timeout that we added
in 0.10.2, this may lead to constant crashing of `lightningd` in some
situations; this technique uses substantially the same techniques as
`litestream`.
This section will be removed completely six months after 0.10.3.
Consider using `--wallet=sqlite3://${main}:${backup}` above, instead.
Use the `sqlite3` command on the `lightningd.sqlite3` file, and
feed it with `.dump "/path/to/backup.sqlite3"` or `VACUUM INTO
"/path/to/backup.sqlite3";`.
These create a snapshot copy that, unlike the previous technique,
is assuredly uncorrupted (barring any corruption caused by your
backup media).
However, if the copying process takes a long time (approaching the
timeout of 60 seconds) then you run the risk of `lightningd`
attempting to grab a write lock, waiting up to 60 seconds, and
then failing with a "database is locked" error.
Your backup system could `.dump` to a fast `tmpfs` RAMDISK or
local media, and *then* copy to the final backup media on a remote
system accessed via slow network, for example, to reduce this
risk.
It is recommended that you use `.dump` instead of `VACUUM INTO`,
as that is assuredly faster; you can just open the backup copy
in a new `sqlite3` session and `VACUUM;` to reduce the size
of the backup.
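With the caveats above in mind, a snapshot under this technique might be
taken like so (a sketch only; `VACUUM INTO` requires SQLite 3.27 or newer,
and the paths are assumptions):
```bash
# Dump to fast local media first, then move to the slow backup medium.
sqlite3 "$LIGHTNINGDIR"/lightningd.sqlite3 \
    'VACUUM INTO "/tmp/lightningd-snapshot.sqlite3";'
mv /tmp/lightningd-snapshot.sqlite3 /media/backup/
```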

View file

@ -1,289 +0,0 @@
# Frequently Asked Questions (FAQ)
## General questions
### I don't know where to start, help me!
There is a C-lightning plugin specifically for this purpose; it's called
[`helpme`](https://github.com/lightningd/plugins/tree/master/helpme).
Assuming you have followed the [installation steps](INSTALL.md), have `lightningd`
up and running, and `lightning-cli` in your `$PATH` you can start the plugin like so:
```
# Clone the plugins repository
git clone https://github.com/lightningd/plugins
# Make sure the helpme plugin is executable (git should have already handled this)
chmod +x plugins/helpme/helpme.py
# Install its dependencies (there is only one actually)
pip3 install --user -r plugins/helpme/requirements.txt
# Then just start it :)
lightning-cli plugin start $PWD/plugins/helpme/helpme.py
```
The plugin registers a new command `helpme` which will guide you through the main
components of C-lightning:
```
lightning-cli helpme
```
### How to get the balance of each channel?
You can use the `listfunds` command and take a ratio of `our_amount_msat` over
`amount_msat`. Note that this doesn't account for the [channel reserve](https://github.com/lightning/bolts/blob/master/02-peer-protocol.md#rationale).
A better option is to use the [`summary` plugin](https://github.com/lightningd/plugins/tree/master/summary)
which nicely displays channel balances, along with other useful channel information.
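For a quick per-channel view of those two fields, something like this works
(a sketch using `jq`; exact field names can vary between releases):
```bash
lightning-cli listfunds | \
    jq '.channels[] | {peer_id, ours: .our_amount_msat, total: .amount_msat}'
```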
### My channel is in state `STATE`, what does that mean?
See the [listpeers command manpage](https://lightning.readthedocs.io/lightning-listpeers.7.html#return-value).
### My payment is failing / all my payments are failing, why?
There are many reasons for a payment failure. The most common one is a
[failure](https://github.com/lightning/bolts/blob/master/04-onion-routing.md#failure-messages)
along the route from you to the payee.
The best (and most common) solution to a route failure problem is to open more channels,
which should increase the available routes to the recipient and lower the probability of a failure.
Hint: use the [`pay`](lightning-pay.7.md) command, which will iterate through trying all possible routes,
instead of the low-level `sendpay` command, which only tries the passed-in route.
### How can I receive payments?
In order to receive payments you need inbound liquidity. You get inbound liquidity when
another node opens a channel to you or by successfully completing a payment out through a channel you opened.
If you need a lot of inbound liquidity, you can use a service that trustlessly swaps on-chain Bitcoin
for Lightning channel capacity.
There are a few online service providers that will create channels to you.
A few of them charge fees for this service.
Note that if you already have a channel open to them, you'll need to close it before requesting another channel.
### Are there any issues if my node changes its IP address? What happens to the channels if it does?
There is no risk to your channels if your IP address changes.
Other nodes might not be able to connect to you, but your node can still connect to them.
But Core Lightning also has an integrated IPv4/6 address discovery mechanism.
If your node detects a new public address, it can update its announcement.
For this to work behind a NAT router, you need to forward the default TCP port 9735 to your node.
Note: by default, and for privacy reasons, IP discovery will only be active
if no other addresses would be announced (as a kind of fallback).
You can set `--announce-addr-discovered=true` to explicitly activate it.
Your node will then update discovered IP addresses even if it also announces e.g. a TOR address.
Alternatively, you can [set up a TOR hidden service](TOR.md) for your node; that
will also work well behind NAT firewalls.
### Can I have two hosts with the same public key and different IP addresses, both online and operating at the same time?
No.
### Can I use a single `bitcoind` for multiple `lightningd`s?
Yes. All `bitcoind` calls are handled by the bundled `bcli` plugin. `lightningd` does not use
`bitcoind`'s wallet. While on the topic, `lightningd` does not require the `-txindex` option on `bitcoind`.
If you use a single `bitcoind` for multiple `lightningd`s, be sure to raise the `bitcoind`
max RPC thread limit (`-rpcthreads`): each `lightningd` can use up to 4 threads, which is
the default `bitcoind` maximum.
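For example, to serve four `lightningd` instances at up to 4 threads each,
you might set the following in `bitcoin.conf` (a sketch; the exact value is
an assumption, size it to your own setup):
```bash
# bitcoin.conf: 4 lightningd instances x 4 RPC threads each
rpcthreads=16
```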
### Can I use Core Lightning on mobile?
#### Remote control
[Spark-wallet](https://github.com/shesek/spark-wallet/) is the most popular remote control
HTTP server for `lightningd`.
**Use it [behind tor](https://github.com/shesek/spark-wallet/blob/master/doc/onion.md)**.
#### `lightningd` on Android
Effort has been made to get `lightningd` running on Android,
[see issue #3484](https://github.com/ElementsProject/lightning/issues/3484). Currently unusable.
### How to "backup my wallet" ?
See [BACKUP.md](https://lightning.readthedocs.io/BACKUP.html) for a more
comprehensive discussion of your options.
In summary: as a Bitcoin user, one may be familiar with a file or a seed
(or some mnemonics) from which
they can recover all their funds.
Core Lightning has an internal bitcoin wallet, which you can use to make "on-chain"
transactions, (see [withdraw](https://lightning.readthedocs.io/lightning-withdraw.7.html)).
These on-chain funds are backed up via the HD wallet seed, stored in byte-form in `hsm_secret`.
`lightningd` also stores information for funds locked in Lightning Network channels, which are stored
in a database. This database is required for on-going channel updates as well as channel closure.
There is no single-seed backup for funds locked in channels.
While crucial for node operation, snapshot-style backups of the `lightningd` database are **discouraged**,
as _any_ loss of state may result in permanent loss of funds.
See the [penalty mechanism](https://github.com/lightning/bolts/blob/master/05-onchain.md#revoked-transaction-close-handling)
for more information on why any amount of state-loss results in fund loss.
Real-time database replication is the recommended approach to backing up node data.
Tools for replication are currently in active development, using the `db_write`
[plugin hook](https://lightning.readthedocs.io/PLUGINS.html#db-write).
## Channel Management
### How to forget about a channel? <a name="forget-channel"></a>
Channels may end up stuck during funding and never confirm
on-chain. There is a variety of causes, the most common ones being
that the funds have been double-spent, or the funding fee was too low
to be confirmed. This is unlikely to happen in normal operation, as
CLN tries to use sane defaults and prevents double-spends whenever
possible, but it can still happen when using custom feerates or when
the bitcoin backend has no good fee estimates.
Before forgetting about a channel it is important to ensure that the
funding transaction will never be confirmable by double-spending the
funds. To do so you have to rescan the UTXOs using
[`dev-rescan-outputs`](#rescanning) to reset any funds that may have
been used in the funding transaction, then move all the funds to a new
address:
```bash
lightning-cli dev-rescan-outputs
ADDR=$(lightning-cli newaddr bech32 | jq -r .bech32)
lightning-cli withdraw $ADDR all
```
This step is not required if the funding transaction was already
double-spent, however it is safe to do it anyway, just in case.
Then wait for the transaction moving the funds to confirm. This
ensures any pending funding transaction can no longer be
confirmed.
As an additional step you can also force-close the unconfirmed channel:
```bash
lightning-cli close $PEERID 10 # Force close after 10 seconds
```
This will store a unilateral close TX in the DB as a last-resort means
of recovery should the channel unexpectedly confirm anyway.
Now you can use the `dev-forget-channel` command to remove
the DB entries from the database.
```bash
lightning-cli dev-forget-channel $NODEID
```
This will perform additional checks on whether it is safe to forget
the channel, and only then removes the channel from the DB. Notice
that this command is only available if CLN was compiled with
`DEVELOPER=1`.
### My channel is stuck in state `CHANNELD_AWAITING_LOCKIN`
There are two root causes to this issue:
- Funding transaction isn't confirmed yet. In this case we have to
wait longer, or, in the case of a transaction that'll never
confirm, forget the channel safely.
- The peer hasn't sent a lockin message. This message acknowledges
  that the node has seen sufficiently many confirmations to consider
  the channel funded.
In the case of a confirmed funding transaction but a missing lockin
message, a simple reconnection may be sufficient to nudge it to
acknowledge the confirmation:
```bash
lightning-cli disconnect $PEERID true # force a disconnect
lightning-cli connect $PEERID
```
The lack of funding locked messages is a bug we are trying to debug
in issue [#5366][5366]; if you have encountered this issue, please
drop us a comment and any information that may be helpful.
If this didn't work it could be that the peer is simply not caught up
with the blockchain and hasn't seen the funding confirm yet. In this
case we can either wait or force a unilateral close:
```bash
lightning-cli close $PEERID 10 # Force a unilateral after 10 seconds
```
If the funding transaction is not confirmed we may either wait or
attempt to double-spend it. Confirmations may take a long time,
especially when the fees used for the funding transaction were
low. You can check whether the transaction is still going to confirm by
looking up the funding transaction on a block explorer:
```bash
TXID=$(lightning-cli listpeers $PEERID | jq -r '.peers[].channels[].funding_txid')
```
This will give you the funding transaction ID that can be looked up in
any explorer.
If you don't want to wait for the channel to confirm, you could forget
the channel (see [How to forget about a channel?](#forget-channel) for
details), however be careful as that may be dangerous and you'll need
to rescan and double-spend the outputs so the funding cannot confirm.
## Loss of funds
### Rescanning the block chain for lost utxos <a name="rescanning"></a>
There are 3 types of 'rescans' you can make:
- `rescanblockchain`: A `bitcoind` RPC call which rescans the blockchain
starting at the given height. This does not have an effect on Core Lightning
as `lightningd` tracks all block and wallet data independently.
- `--rescan=depth`: A `lightningd` configuration flag. This flag is read at node startup
  and tells lightningd at what depth from current blockheight to rebuild its internal state.
  (You can specify an exact block to start scanning from, instead of depth from current height,
  by using a negative number; see the example after this list.)
- `dev-rescan-outputs`: A `lightningd` RPC call. Only available if your node has been
  configured and built in DEVELOPER mode (i.e. `./configure --enable-developer`). This
  will sync the state for known UTXOs in the `lightningd` wallet with `bitcoind`.
  As it only operates on outputs already seen on chain by the `lightningd` internal
  wallet, this will not find missing wallet funds.
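For example, the `--rescan` flag can be used in either form (a sketch; the
blockheight is an arbitrary example):
```bash
# Rebuild internal state from 15 blocks below the current blockheight:
lightningd --rescan=15
# Or rescan from an absolute blockheight (note the leading minus):
lightningd --rescan=-600000
```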
### Database corruption / channel state lost
If you lose data (likely corrupted `lightningd.sqlite3`) about a channel __with `option_static_remotekey` enabled__,
you can wait for your peer to unilaterally close the channel, then use `tools/hsmtool` with the
`guesstoremote` command to attempt to recover your funds from the peer's published unilateral close transaction.
If `option_static_remotekey` was not enabled, you're probably out of luck. The keys for your funds in your peer's
unilateral close transaction are derived from information you lost. Fortunately, since version `0.7.3`, channels
are created with `option_static_remotekey` by default if your peer supports it.
That is to say, channels created after block [598000](https://blockstream.info/block/0000000000000000000dd93b8fb5c622b9c903bf6f921ef48e266f0ead7faedb)
(i.e. with a short channel id whose block height is greater than 598000) have a high chance of supporting `option_static_remotekey`.
You can verify it using the `features` field from the [`listpeers` command](https://lightning.readthedocs.io/lightning-listpeers.7.html)'s result.
Here is an example in Python checking if [one of the `option_static_remotekey` bits][spec-features] is set in the negotiated features corresponding to `0x02aaa2`:
```python
>>> bool(0x02aaa2 & ((1 << 12) | (1 << 13)))
True
```
If `option_static_remotekey` is enabled you can attempt to recover the
funds in a channel following [this tutorial][mandelbit-recovery] on
how to extract the necessary information from the network topology. If
successful, the result will be a private key matching a unilaterally
closed channel, which you can import into any wallet, recovering the
funds into that wallet.
[spec-features]: https://github.com/lightning/bolts/blob/master/09-features.md
[mandelbit-recovery]: https://github.com/mandelbit/bitcoin-tutorials/blob/master/CLightningRecoverFunds.md
[5366]: https://github.com/ElementsProject/lightning/issues/5366
## Technical Questions
### How do I get the `psbt` for RPC calls that need it?
A `psbt` is created and returned by a call to [`utxopsbt` with `reservedok=true`](https://lightning.readthedocs.io/lightning-utxopsbt.7.html?highlight=psbt).

View file

@ -1,115 +0,0 @@
# Fuzz testing
C-lightning currently supports coverage-guided fuzz testing using [LLVM's libfuzzer](https://www.llvm.org/docs/LibFuzzer.html)
when built with `clang`.
The goal of fuzzing is to generate mutated (and often unexpected) inputs (`seed`s) to pass
to (parts of) a program (the `target`) in order to make sure the codepaths used:
- do not crash
- are valid (if combined with sanitizers)
The generated seeds can be stored and form a `corpus`, which we try to optimise (don't
store two seeds that lead to the same codepath).
For more info about fuzzing see [here](https://github.com/google/fuzzing/tree/master/docs),
and for more about `libfuzzer` in particular see [here](https://www.llvm.org/docs/LibFuzzer.html).
## Build the fuzz targets
In order to build the C-lightning binaries with code coverage you will need a recent
[clang](http://clang.llvm.org/). The more recent the compiler version the better.
Then you'll need to enable support at configuration time. You likely want to enable
a few sanitizers for bug detection, as well as experimental features for extended
coverage (not required, though).
```
./configure --enable-developer --enable-address-sanitizer --enable-ub-sanitizer --enable-fuzzing --disable-valgrind CC=clang && make
```
The targets will be built in `tests/fuzz/` as `fuzz-` binaries, with their best
known seed corpora stored in `tests/fuzz/corpora/`.
You can run the fuzz targets on their seed corpora to check for regressions:
```
make check-fuzz
```
## Run one or more target(s)
You can run each target independently. Pass `-help=1` to see available options, for
example:
```
./tests/fuzz/fuzz-addr -help=1
```
Otherwise, you can use the Python runner to either run the targets against a given seed
corpus:
```
./tests/fuzz/run.py fuzz_corpus -j2
```
Or extend this corpus:
```
./tests/fuzz/run.py fuzz_corpus -j2 --generate --runs 12345
```
The latter will run all targets, two at a time (`-j2`), `12345` times each.
If you want to contribute new seeds, be sure to merge your corpus with the main one:
```
./tests/fuzz/run.py my_locally_extended_fuzz_corpus -j2 --generate --runs 12345
./tests/fuzz/run.py tests/fuzz/corpora --merge_dir my_locally_extended_fuzz_corpus
```
## Improve seed corpora
If you find coverage increasing inputs while fuzzing, please create a pull
request to add them into `tests/fuzz/corpora`. Be sure to minimize any additions
to the corpora first.
### Example
Here's an example workflow to contribute new inputs for the `fuzz-addr` target.
Create a directory for newly found corpus inputs and begin fuzzing:
```shell
mkdir -p local_corpora/fuzz-addr
./tests/fuzz/fuzz-addr -jobs=4 local_corpora/fuzz-addr tests/fuzz/corpora/fuzz-addr/
```
After some time, libFuzzer may find some potential coverage increasing inputs
and save them in `local_corpora/fuzz-addr`. We can then merge them into the seed
corpora in `tests/fuzz/corpora`:
```shell
./tests/fuzz/run.py tests/fuzz/corpora --merge_dir local_corpora
```
This will copy over any inputs that improve the coverage of the existing corpus.
If any new inputs were added, create a pull request to improve the upstream seed
corpus:
```shell
git add tests/fuzz/corpora/fuzz-addr/*
git commit
...
```
## Write new fuzzing targets
In order to write a new target:
- include the `libfuzz.h` header
- fill in two functions: `init()` for static setup and `run()`, which will be called
  repeatedly with mutated data.
- read about [what makes a good fuzz target](https://github.com/google/fuzzing/blob/master/docs/good-fuzz-target.md).
A simple example is [`fuzz-addr`][fuzz-addr]. It sets up the
chainparams and context (wally, tmpctx, ..) in `init()`, then
brute-forces the bech32 encoder in `run()`.
[fuzz-addr]: https://github.com/ElementsProject/lightning/blob/master/tests/fuzz/fuzz-addr.c

View file

@ -1,146 +0,0 @@
# gossip_store: Direct Access To Lightning Gossip
Hi!
The lightning_gossipd daemon stores the gossip messages, along with
some internal data, in a file called the "gossip_store". Various
plugins and daemons access this (in a read-only manner), and the
format is documented here.
## The File Header
```
u8 version;
```
The gossip_store header consists of one byte. The top 3 bits are the major
version: if these are not all zero, you need to re-read this (updated)
document to see what changed. The lower 5 bits are the minor version,
which won't worry you: currently they will be 11.
After the file header comes a number of records.
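For example, you can check the version byte of a live file like this (a
sketch, assuming the default mainnet path):
```bash
# Read the single header byte; top 3 bits = major, low 5 bits = minor.
v=$((16#$(xxd -l 1 -p ~/.lightning/bitcoin/gossip_store)))
echo "major=$((v >> 5)) minor=$((v & 31))"
```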
## The Record Header
```
be16 flags;
be16 len;
be32 crc;
be32 timestamp;
```
Each record consists of a header and a message. The header is
big-endian, containing flags, the length (of the following body), the
crc32c (of the following message, starting with the timestamp field in
the header) and a timestamp extracted from certain messages (zero
where not relevant, but ignore it in those cases).
The flags currently defined are:
```
#define DELETED 0x8000
#define PUSH 0x4000
#define RATELIMIT 0x2000
```
Deleted fields should be ignored: on restart, they will be removed as
the gossip_store is rewritten.
The push flag indicates gossip which is generated locally: this is
important for gossip timestamp filtering, where peers request gossip
and we always send our own gossip messages even if the timestamp
wasn't within their request.
The ratelimit flag indicates that this gossip message came too fast:
we record it, but don't relay it to peers.
Other flags should be ignored.
## The Message
Each message consists of a 16-bit big-endian "type" field (for
efficiency, an implementation may read this along with the header),
and optional data. Some messages are defined by the BOLT 7 gossip
protocol, others are for internal use. Unknown message types should be
skipped over.
### BOLT 7 Messages
These are the messages which gossipd has validated, and ensured are in
order.
* `channel_announcement` (256): a complete, validated channel announcement. This will always come before any `channel_update` which refers to it, or `node_announcement` which refers to a node.
* `channel_update` (258): a complete, validated channel update. Note that you can see multiple of these (old ones will be deleted as they are replaced though).
* `node_announcement` (257): a complete, validated node announcement. Note that you can also see multiple of these (old ones will be deleted as they are replaced).
### Internal Gossip Daemon Messages
These messages contain additional data, which may be useful.
* `gossip_store_channel_amount` (4101)
* `satoshis`: u64
This always immediately follows `channel_announcement` messages, and
contains the actual capacity of the channel.
* `gossip_store_private_channel` (4104)
* `amount_sat`: u64
* `len`: u16
* `announcement`: u8[len]
This contains information about a private (could be made public
later!) channel, with announcement in the same format as a normal
`channel_announcement` with invalid signatures.
* `gossip_store_private_update` (4102)
* `len`: u16
* `update`: u8[len]
This contains a private `channel_update` (i.e. for a channel described
by `gossip_store_private_channel`).
* `gossip_store_delete_chan` (4103)
* `scid`: u64
This is added when a channel is deleted. You won't often see this if
you're reading the file once (as the channel's record header will have
been marked `deleted` first), but it is useful if you are polling the
file for updates.
* `gossip_store_ended` (4105)
* `equivalent_offset`: u64
This is only ever added as the final entry in the gossip_store. It
means the file has been deleted (usually because lightningd has been
restarted), and you should re-open it. As an optimization, the
`equivalent_offset` in the new file reflects the point at which the
new gossip_store is equivalent to this one (with deleted records
removed). However, if lightningd has been restarted multiple times it
is possible that this offset is not valid, so it's really only useful
if you're actively monitoring the file.
* `gossip_store_chan_dying` (4106)
* `scid`: u64
* `blockheight`: u32
This is placed in the gossip_store file when a funding transaction is
spent. `blockheight` is set to 12 blocks beyond the block containing
the spend: at this point, gossipd will delete the channel.
## Using the Gossip Store File
- Always check the major version number! We will increment it if the format
changes in a way that breaks readers.
- Ignore unknown flags in the header.
- Ignore message types you don't know.
- You don't need to check the messages, as they have been validated.
- It is possible to see a partially-written record at the end. Ignore it.
If you are keeping the file open to watch for changes:
- The file is append-only, so you can try reading more records
  using inotify (or equivalent), or simply check every few seconds.
- If you see a `gossip_store_ended` message, reopen the file.
Happy hacking!
Rusty.

View file

@ -1,361 +0,0 @@
Hacking
=======
Welcome, fellow coder!
This repository contains code to run a lightning protocol daemon.
It's broken into subdaemons, with the idea being that we can add more
layers of separation between different clients and extra barriers to
exploits.
It is designed to implement the lightning protocol as specified in
[various BOLTs](https://github.com/lightning/bolts).
Getting Started
---------------
It's in C, to encourage alternate implementations. Patches are welcome!
You should read our [Style Guide](STYLE.md).
To read the code, you should start from
[lightningd.c](https://github.com/ElementsProject/lightning/blob/master/lightningd/lightningd.c) and hop your way through
the '~' comments at the head of each daemon in the suggested
order.
The Components
--------------
Here's a list of parts, with notes:
* ccan - useful routines from http://ccodearchive.net
  - Use `make update-ccan` to update it.
  - Use `make update-ccan CCAN_NEW="mod1 mod2..."` to add modules
- Do not edit this! If you want a wrapper, add one to common/utils.h.
* bitcoin/ - bitcoin script, signature and transaction routines.
- Not a complete set, but enough for our purposes.
* external/ - external libraries from other sources
- libbacktrace - library to provide backtraces when things go wrong.
- libsodium - encryption library (should be replaced soon with built-in)
- libwally-core - bitcoin helper library
- secp256k1 - bitcoin curve encryption library within libwally-core
- jsmn - tiny JSON parsing helper
* tools/ - tools for building
- check-bolt.c: check the source code contains correct BOLT quotes
(as used by check-source)
- generate-wire.py: generates wire marshal/unmarshal-ing
routines for subdaemons and BOLT specs.
- mockup.sh / update-mocks.sh: tools to generate mock functions for
unit tests.
* tests/ - blackbox tests (mainly)
- unit tests are in tests/ subdirectories in each other directory.
* doc/ - you are here
* devtools/ - tools for developers
- Generally for decoding our formats.
* contrib/ - python support and other stuff which doesn't belong :)
* wire/ - basic marshalling/unmarshalling for messages defined in the BOLTs
* common/ - routines needed by any two or more of the directories below
* cli/ - commandline utility to control lightning daemon.
* lightningd/ - master daemon which controls the subdaemons and passes
peer file descriptors between them.
* wallet/ - database code used by master for tracking what's happening.
* hsmd/ - daemon which looks after the cryptographic secret, and performs
commitment signing.
* gossipd/ - daemon to maintain routing information and broadcast gossip.
* connectd/ - daemon to connect to other peers, and receive incoming connections.
* openingd/ - daemon to open a channel for a single peer, and chat to
  a peer which doesn't have any channels.
* channeld/ - daemon to operate a single peer once channel is operating
normally.
* closingd/ - daemon to handle mutual closing negotiation with a single peer.
* onchaind/ - daemon to handle a single channel which has had its funding
transaction spent.
Debugging
---------
You can build Core Lightning with DEVELOPER=1 to use dev commands listed in
``cli/lightning-cli help``. ``./configure --enable-developer`` will do that.
You can log console messages with log_info() in lightningd and status_debug()
in other subdaemons.
You can debug crashing subdaemons with the argument
`--dev-debugger=channeld`, where `channeld` is the subdaemon name. It
will run `gnome-terminal` by default with a gdb attached to the
subdaemon when it starts. You can change the terminal used by setting
the `DEBUG_TERM` environment variable, such as `DEBUG_TERM="xterm -e"`
or `DEBUG_TERM="konsole -e"`.
It will also print out (to stderr) the gdb command for manual connection. The
subdaemon will be stopped (it sends itself a SIGSTOP); you'll need to
`continue` in gdb.
Database
--------
Core Lightning state is persisted in `lightning-dir`.
It is a sqlite database stored in the `lightningd.sqlite3` file, typically
under `~/.lightning/<network>/`.
You can run queries against this file like so:
```bash
$ sqlite3 ~/.lightning/bitcoin/lightningd.sqlite3 \
    "SELECT HEX(prev_out_tx), prev_out_index, status FROM outputs"
```
Or you can launch into the sqlite3 repl and check things out from there:
```
$ sqlite3 ~/.lightning/bitcoin/lightningd.sqlite3
SQLite version 3.21.0 2017-10-24 18:55:49
Enter ".help" for usage hints.
sqlite> .tables
channel_configs  invoices         peers            vars
channel_htlcs    outputs          shachain_known   version
channels         payments         shachains
sqlite> .schema outputs
...
```
Some data is stored as raw bytes, use `HEX(column)` to pretty print these.
Make sure that `lightningd` is not running when you query the database,
as some queries may lock the database and cause crashes.
#### Common variables
Table `vars` contains global variables used by the lightning node.
```
$ sqlite3 ~/.lightning/bitcoin/lightningd.sqlite3
SQLite version 3.21.0 2017-10-24 18:55:49
Enter ".help" for usage hints.
sqlite> .headers on
sqlite> select * from vars;
name|val
next_pay_index|2
bip32_max_index|4
...
```
Variables:
* `next_pay_index` next resolved invoice counter that will get assigned.
* `bip32_max_index` last wallet derivation counter.
Note: each time the `newaddr` command is called, the `bip32_max_index` counter
is increased to the last derivation index.
Addresses generated beyond `bip32_max_index` are not tracked as
lightning funds.
Build and Development
---------------------
Install the following dependencies for best results:
```
sudo apt update
sudo apt install valgrind cppcheck shellcheck libsecp256k1-dev libpq-dev
```
Re-run `configure` and build using `make`:
```
./configure --enable-developer
make -j$(nproc)
```
Testing
-------
Tests are run with: `make check [flags]` where the pertinent flags are:
```
DEVELOPER=[0|1] - developer mode increases test coverage
VALGRIND=[0|1] - detects memory leaks during test execution but adds a significant delay
PYTEST_PAR=n - runs pytests in parallel
```
A modern desktop can build and run through all the tests in a couple of minutes with:
```
make -j12 full-check PYTEST_PAR=24 DEVELOPER=1 VALGRIND=0
```
Adjust `-j` and `PYTEST_PAR` accordingly for your hardware.
There are four kinds of tests:
* **source tests** - run by `make check-source`, looks for whitespace,
header order, and checks formatted quotes from BOLTs if BOLTDIR
exists.
* **unit tests** - standalone programs that can be run individually. You can
also run all of the unit tests with `make check-units`.
They are `run-*.c` files in test/ subdirectories used to test routines
inside C source files.
  When implementing a unit test, you should insert these lines:
  `/* AUTOGENERATED MOCKS START */`
  `/* AUTOGENERATED MOCKS END */`
  and `make update-mocks` will automatically generate stub functions which will
  allow you to link (and conveniently crash if they're called).
* **blackbox tests** - These tests setup a mini-regtest environment and test
lightningd as a whole. They can be run individually:
`PYTHONPATH=contrib/pylightning:contrib/pyln-client:contrib/pyln-testing:contrib/pyln-proto py.test -v tests/`
You can also append `-k TESTNAME` to run a single test. Environment variables
`DEBUG_SUBD=<subdaemon>` and `TIMEOUT=<seconds>` can be useful for debugging
subdaemons on individual tests.
* **pylightning tests** - will check contrib pylightning for codestyle and run
the tests in `contrib/pylightning/tests` afterwards:
`make check-python`
Our Github Actions instance (see `.github/workflows/*.yml`) runs all these for each
pull request.
#### Additional Environment Variables
```
TEST_CHECK_DBSTMTS=[0|1] - When running blackbox tests, this will
load a plugin that logs all compiled
and expanded database statements.
Note: Only SQLite3.
TEST_DB_PROVIDER=[sqlite3|postgres] - Selects the database to use when running
blackbox tests.
EXPERIMENTAL_DUAL_FUND=[0|1] - Enable dual-funding tests.
```
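These are ordinary environment variables, so a sketch of a combined
invocation (the particular combination is illustrative) looks like:
```
TEST_CHECK_DBSTMTS=1 TEST_DB_PROVIDER=sqlite3 make check PYTEST_PAR=4
```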
#### Troubleshooting
##### Valgrind complains about code we don't control
Sometimes `valgrind` will complain about code we do not control
ourselves, either because it's in a library we use or it's a false
positive. There are generally three ways to address these issues
(in descending order of preference):
1. Add a suppression for the one specific call that is causing the
   issue. Upon finding an issue, the testing framework instructs
   `valgrind` to print suppressions that would match the issue. These
   can be added to the suppressions file under
`tests/valgrind-suppressions.txt` in order to explicitly skip
reporting these in future. This is preferred over the other
solutions since it only disables reporting selectively for things
that were manually checked. See the [valgrind docs][vg-supp] for
details.
2. Add the process that `valgrind` is complaining about to the
`--trace-children-skip` argument in `pyln-testing`. This is used
in cases of full binaries not being under our control, such as the
`python3` interpreter used in tests that run plugins. Do not use
this for binaries that are compiled from our code, as it tends to
mask real issues.
3. Mark the test as skipped if running under `valgrind`. It's mostly
used to skip tests that otherwise would take considerably too long
   to run on CI. We discourage using this to silence warnings, since it
   is a very blunt tool.
[vg-supp]: https://valgrind.org/docs/manual/manual-core.html#manual-core.suppress
Making BOLT Modifications
-------------------------
All of the code for marshalling/unmarshalling BOLT protocol messages is
generated directly from the spec, pegged to the `BOLTVERSION` specified in
the `Makefile`.
Source code analysis
--------------------
An updated version of the NCC source code analysis tool is available at
https://github.com/bitonic-cjp/ncc
It can be used to analyze the lightningd source code by running
`make clean && make ncc`. The output (which is built in parallel with the
binaries) is stored in .nccout files. You can browse it, for instance, with
a command like `nccnav lightningd/lightningd.nccout`.
Code Coverage
-------------
Code coverage can be measured using Clang's source-based instrumentation.
First, build with the instrumentation enabled:
```shell
make clean
./configure --enable-coverage CC=clang
make -j$(nproc)
```
Then run the test for which you want to measure coverage. By default, the raw
coverage profile will be written to `./default.profraw`. You can change the
output file by setting `LLVM_PROFILE_FILE`:
```shell
LLVM_PROFILE_FILE="full_channel.profraw" ./channeld/test/run-full_channel
```
Finally, generate an HTML report from the profile. We have a script to make this
easier:
```shell
./contrib/clang-coverage-report.sh channeld/test/run-full_channel \
full_channel.profraw full_channel.html
firefox full_channel.html
```
For more advanced report generation options, see the [Clang coverage
documentation](https://clang.llvm.org/docs/SourceBasedCodeCoverage.html).
Subtleties
----------
There are a few subtleties you should be aware of as you modify deeper
parts of the code:
* `ccan/structeq`'s STRUCTEQ_DEF will define a safe comparison function foo_eq()
  for struct foo, failing the build if the structure has implied padding;
  see the sketch after this list.
* `command_success`, `command_fail`, and `command_fail_detailed` will free the
`cmd` you pass in.
This also means that if you `tal`-allocated anything from the `cmd`, they
will also get freed at those points and will no longer be accessible
afterwards.
* When making a structure part of a list, you will embed a
  `struct list_node`.
  This has to be the *first* field of the structure, or else the
  `dev-memleak` command will think your structure has leaked.
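As an illustration of the first point, here is a minimal sketch of
`STRUCTEQ_DEF` (the struct and its members are hypothetical; the second
argument is the number of padding bytes we claim the structure has, 0 here):
```C
#include <ccan/structeq/structeq.h>

struct point {
	int x;
	int y;
};
/* Defines bool point_eq(const struct point *a, const struct point *b);
 * the build fails if struct point contains padding we didn't declare. */
STRUCTEQ_DEF(point, 0, x, y);
```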
Protocol Modifications
----------------------
The source tree contains CSV files extracted from the v1.0 BOLT
specifications (wire/extracted_peer_wire_csv and
wire/extracted_onion_wire_csv). You can regenerate these by
first deleting the local copy (if any) in the `.tmp.bolts` directory,
setting `BOLTDIR` and `BOLTVERSION` appropriately, and finally running `make
extract-bolt-csv`. By default the bolts will be retrieved from the
directory `../bolts` and a recent git version.
e.g., `make extract-bolt-csv BOLTDIR=../bolts BOLTVERSION=ee76043271f79f45b3392e629fd35e47f1268dc8`
Further Information
-------------------
Feel free to ask questions on the lightning-dev mailing list, or on
`#c-lightning` on IRC, or email me at rusty@rustcorp.com.au.
Cheers!<br>
Rusty.

@ -1,485 +0,0 @@
Install
=======
- [Library Requirements](#library-requirements)
- [Ubuntu](#to-build-on-ubuntu)
- [Fedora](#to-build-on-fedora)
- [FreeBSD](#to-build-on-freebsd)
- [OpenBSD](#to-build-on-openbsd)
- [NixOS](#to-build-on-nixos)
- [macOS](#to-build-on-macos)
- [Arch Linux](#to-build-on-arch-linux)
- [Android](#to-cross-compile-for-android)
- [Raspberry Pi](#to-cross-compile-for-raspberry-pi)
- [Armbian](#to-compile-for-armbian)
- [Alpine](#to-compile-for-alpine)
- [Additional steps](#additional-steps)
Library Requirements
--------------------
You will need several development libraries:
* libsqlite3: for database support.
* zlib: for compression routines.
For actually doing development and running the tests, you will also need:
* pip3: to install python-bitcoinlib
* valgrind: for extra debugging checks
You will also need a version of bitcoind with segregated witness and `estimatesmartfee` with `ECONOMICAL` mode support, such as 0.16 or above.
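If you already have a `bitcoind` running, a quick sanity check (a sketch,
assuming the daemon is up and synced enough to estimate fees) is to ask it
for an `ECONOMICAL` fee estimate:
```
bitcoin-cli estimatesmartfee 6 ECONOMICAL
```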
To Build on Ubuntu
---------------------
OS version: Ubuntu 15.10 or above
Get dependencies:
sudo apt-get update
sudo apt-get install -y \
autoconf automake build-essential git libtool libsqlite3-dev \
python3 python3-pip net-tools zlib1g-dev libsodium-dev gettext
pip3 install --upgrade pip
pip3 install --user poetry
If you don't have Bitcoin installed locally you'll need to install that
as well. It's now available via [snapd](https://snapcraft.io/bitcoin-core).
sudo apt-get install snapd
sudo snap install bitcoin-core
# Snap does some weird things with binary names; you'll
# want to add a link to them so everything works as expected
sudo ln -s /snap/bitcoin-core/current/bin/bitcoin{d,-cli} /usr/local/bin/
Clone lightning:
git clone https://github.com/ElementsProject/lightning.git
cd lightning
Checkout a release tag:
git checkout v22.11.1
For development or running tests, get additional dependencies:
sudo apt-get install -y valgrind libpq-dev shellcheck cppcheck \
libsecp256k1-dev jq lowdown
If you can't install `lowdown`, a version will be built in-tree.
If you want to build the Rust plugins (currently, cln-grpc):
sudo apt-get install -y cargo rustfmt protobuf-compiler
There are two ways to build Core Lightning, depending on how you want to use it.
To build cln just to install a tagged or master version, you can use the following commands:
pip3 install --upgrade pip
pip3 install mako
./configure
make
sudo make install
N.B.: if you want to disable Rust because you do not want to use it, or simply do not want the grpc-plugin, you can use `./configure --disable-rust`.
To build Core Lightning for development purposes, you can use the following commands:
pip3 install poetry
poetry shell
This will put you in a new shell to enter the following commands:
poetry install
./configure --enable-developer
make
make check VALGRIND=0
Optionally, add `-j$(nproc)` after `make` to speed up compilation (e.g. `make -j$(nproc)`).
Running lightning:
bitcoind &
./lightningd/lightningd &
./cli/lightning-cli help
To Build on Fedora
---------------------
OS version: Fedora 27 or above
Get dependencies:
```
$ sudo dnf update -y && \
sudo dnf groupinstall -y \
'C Development Tools and Libraries' \
'Development Tools' && \
sudo dnf install -y \
clang \
gettext \
git \
libsq3-devel \
python3-devel \
python3-pip \
python3-setuptools \
net-tools \
valgrind \
wget \
zlib-devel \
libsodium-devel && \
sudo dnf clean all
```
Make sure you have [bitcoind](https://github.com/bitcoin/bitcoin) available to run
Clone lightning:
```
$ git clone https://github.com/ElementsProject/lightning.git
$ cd lightning
```
Checkout a release tag:
```
$ git checkout v22.11.1
```
Build and install lightning:
```
$lightning> ./configure
$lightning> make
$lightning> sudo make install
```
Running lightning (mainnet):
```
$ bitcoind &
$ lightningd --network=bitcoin
```
Running lightning on testnet:
```
$ bitcoind -testnet &
$ lightningd --network=testnet
```
To Build on FreeBSD
-------------------
OS version: FreeBSD 11.1-RELEASE or above
```
pkg install git python py39-pip gmake libtool gmp sqlite3 \
postgresql13-client gettext autotools
git clone https://github.com/ElementsProject/lightning.git
cd lightning
pip install --upgrade pip
pip3 install mako
./configure
gmake -j$(nproc)
gmake install
```
Alternately, Core Lightning is in the FreeBSD ports, so install it as any other port
(dependencies are handled automatically):
# pkg install c-lightning
for a binary, pre-compiled package. If you want to compile locally and
fiddle with compile time options:
# cd /usr/ports/net-p2p/c-lightning && make install
See `/usr/ports/net-p2p/c-lightning/Makefile` for instructions on how to
build from an arbitrary git commit, instead of the latest release tag.
**Note**: Make sure you've set a UTF-8 locale, e.g.
`export LC_CTYPE=en_US.UTF-8`, otherwise manpage installation may fail.
Running lightning:
Configure bitcoind, if not already: add `rpcuser=<foo>` and `rpcpassword=<bar>`
to `/usr/local/etc/bitcoin.conf`, maybe also `testnet=1`.
Configure lightningd: copy `/usr/local/etc/lightningd-bitcoin.conf.sample` to
`/usr/local/etc/lightningd-bitcoin.conf` and edit according to your needs.
# service bitcoind start
# service lightningd start
# lightning-cli --rpc-file /var/db/c-lightning/bitcoin/lightning-rpc --lightning-dir=/var/db/c-lightning help
To Build on OpenBSD
--------------------
OS version: OpenBSD 7.3
Install dependencies:
```
pkg_add git python gmake py3-pip libtool gettext-tools
pkg_add automake # (select highest version, automake1.16.2 at time of writing)
pkg_add autoconf # (select highest version, autoconf-2.69p2 at time of writing)
```
Install `mako` otherwise we run into build errors:
```
pip3.7 install --user poetry
poetry install
```
Add `/home/<username>/.local/bin` to your path:
`export PATH=$PATH:/home/<username>/.local/bin`
Needed for `configure`:
```
export AUTOCONF_VERSION=2.69
export AUTOMAKE_VERSION=1.16
./configure
```
Finally, build `c-lightning`:
`gmake`
To Build on NixOS
--------------------
Use nix-shell to launch a shell with a full clightning dev environment:
```
$ nix-shell -Q -p gdb sqlite autoconf git clang libtool sqlite autoconf \
autogen automake libsodium 'python3.withPackages (p: [p.bitcoinlib])' \
valgrind --run make
```
To Build on macOS
---------------------
Assuming you have Xcode and Homebrew installed, install dependencies:
$ brew install autoconf automake libtool python3 gnu-sed gettext libsodium protobuf
$ ln -s /usr/local/Cellar/gettext/0.20.1/bin/xgettext /usr/local/opt
$ export PATH="/usr/local/opt:$PATH"
If you need SQLite (or get a SQLite mismatch build error):
$ brew install sqlite
$ export LDFLAGS="-L/usr/local/opt/sqlite/lib"
$ export CPPFLAGS="-I/usr/local/opt/sqlite/include"
Some library paths are different when using `homebrew` with M1 Macs, so the following two variables need to be set on M1 machines:
$ export CPATH=/opt/homebrew/include
$ export LIBRARY_PATH=/opt/homebrew/lib
If you need Python 3.x for mako (or get a mako build error):
$ brew install pyenv
$ echo -e 'if command -v pyenv 1>/dev/null 2>&1; then\n eval "$(pyenv init -)"\nfi' >> ~/.bash_profile
$ source ~/.bash_profile
$ pyenv install 3.8.10
$ pip3 install --upgrade pip
$ pip3 install poetry
If you don't have bitcoind installed locally you'll need to install that
as well:
$ brew install berkeley-db4 boost miniupnpc pkg-config libevent
$ git clone https://github.com/bitcoin/bitcoin
$ cd bitcoin
$ ./autogen.sh
$ ./configure
$ make src/bitcoind src/bitcoin-cli && make install
Clone lightning:
$ git clone https://github.com/ElementsProject/lightning.git
$ cd lightning
Checkout a release tag:
$ git checkout v22.11.1
Build lightning:
$ poetry install
$ ./configure
$ poetry run make
Running lightning:
**Note**: Edit your `~/Library/Application\ Support/Bitcoin/bitcoin.conf`
to include `rpcuser=<foo>` and `rpcpassword=<bar>` first, you may also
need to include `testnet=1`
bitcoind &
./lightningd/lightningd &
./cli/lightning-cli help
To install the built binaries into your system, you'll need to run `sudo make install`:
sudo make install
On an M1 mac you may need to use this command instead:
sudo PATH="/usr/local/opt:$PATH" LIBRARY_PATH=/opt/homebrew/lib CPATH=/opt/homebrew/include make install
To Build on Arch Linux
---------------------
Install dependencies:
```
pacman --sync autoconf automake gcc git make python-pip
pip install --user poetry
```
Clone Core Lightning:
```
$ git clone https://github.com/ElementsProject/lightning.git
$ cd lightning
```
Build Core Lightning:
```
python -m poetry install
./configure
python -m poetry run make
```
Launch Core Lightning:
```
./lightningd/lightningd
```
To cross-compile for Android
--------------------
Make a standalone toolchain as per
https://developer.android.com/ndk/guides/standalone_toolchain.html.
For Core Lightning you must target an API level of 24 or higher.
Depending on your toolchain location and target arch, source env variables
such as:
export PATH=$PATH:/path/to/android/toolchain/bin
# Change next line depending on target device arch
target_host=arm-linux-androideabi
export AR=$target_host-ar
export AS=$target_host-clang
export CC=$target_host-clang
export CXX=$target_host-clang++
export LD=$target_host-ld
export STRIP=$target_host-strip
Two makefile targets should not be cross-compiled so we specify a native CC:
make CC=clang clean ccan/tools/configurator/configurator
make clean -C ccan/ccan/cdump/tools \
&& make CC=clang -C ccan/ccan/cdump/tools
Install the `qemu-user` package.
This will allow you to properly configure
the build for the target device environment.
Build with:
BUILD=x86_64 MAKE_HOST=arm-linux-androideabi \
make PIE=1 DEVELOPER=0 \
CONFIGURATOR_CC="arm-linux-androideabi-clang -static"
To cross-compile for Raspberry Pi
--------------------
Obtain the [official Raspberry Pi toolchains](https://github.com/raspberrypi/tools).
This document assumes compilation will occur towards the Raspberry Pi 3
(arm-linux-gnueabihf as of Mar. 2018).
Depending on your toolchain location and target arch, source env variables
will need to be set. They can be set from the command line as such:
export PATH=$PATH:/path/to/arm-linux-gnueabihf/bin
# Change next line depending on specific Raspberry Pi device
target_host=arm-linux-gnueabihf
export AR=$target_host-ar
export AS=$target_host-as
export CC=$target_host-gcc
export CXX=$target_host-g++
export LD=$target_host-ld
export STRIP=$target_host-strip
Install the `qemu-user` package. This will allow you to properly configure the
build for the target device environment.
Configure the ARM ELF interpreter prefix:
export QEMU_LD_PREFIX=/path/to/raspberry/arm-bcm2708/arm-rpi-4.9.3-linux-gnueabihf/arm-linux-gnueabihf/sysroot/
Obtain and install cross-compiled versions of sqlite3 and zlib:
Download and build zlib:
wget https://zlib.net/fossils/zlib-1.2.13.tar.gz
tar xvf zlib-1.2.13.tar.gz
cd zlib-1.2.13
./configure --prefix=$QEMU_LD_PREFIX
make
make install
Download and build sqlite3:
wget https://www.sqlite.org/2018/sqlite-src-3260000.zip
unzip sqlite-src-3260000.zip
cd sqlite-src-3260000
./configure --enable-static --disable-readline --disable-threadsafe --disable-load-extension --host=$target_host --prefix=$QEMU_LD_PREFIX
make
make install
Then, build Core Lightning with the following commands:
./configure
make
To compile for Armbian
--------------------
For all the other Pi devices out there, consider using [Armbian](https://www.armbian.com).
You can compile in `customize-image.sh` using the instructions for Ubuntu.
A working example that compiles both bitcoind and Core Lightning for Armbian can
be found [here](https://github.com/Sjors/armbian-bitcoin-core).
To compile for Alpine
---------------------
Get dependencies:
```
apk update
apk add --virtual .build-deps ca-certificates alpine-sdk autoconf automake git libtool \
sqlite-dev python3 py3-mako net-tools zlib-dev libsodium gettext
```
Clone lightning:
```
git clone https://github.com/ElementsProject/lightning.git
cd lightning
git submodule update --init --recursive
```
Build and install:
```
./configure
make
make install
```
Clean up:
```
cd .. && rm -rf lightning
apk del .build-deps
```
Install runtime dependencies:
```
apk add libgcc libsodium sqlite-libs zlib
```
Additional steps
--------------------
Go to [README](https://github.com/ElementsProject/lightning/blob/master/README.md) for more information how to create an address, add funds, connect to a node, etc.

@ -1,118 +0,0 @@
## Release checklist
Here's a checklist for the release process.
### Leading Up To The Release
1. Talk to team about whether there are any changes which MUST go in
this release which may cause delay.
2. Look through outstanding issues to identify any problems that might
need to be fixed before the release. Good candidates are reports
of the project not building on different architectures, or crashes.
3. Identify a good lead for each outstanding issue, and ask them about
a fix timeline.
4. Create a milestone for the *next* release on Github, and go through
open issues and PRs and mark accordingly.
5. Ask (via email) the most significant contributor who has not
already named a release to name the release (use
`devtools/credit --verbose v<PREVIOUS-VERSION>` to find this contributor).
CC previous namers and team.
### Preparing for -rc1
1. Check that `CHANGELOG.md` is well formatted, ordered in areas,
covers all significant changes, and sub-ordered approximately by user impact
& coolness.
2. Use `devtools/changelog.py` to collect the changelog entries from pull
request commit messages and merge them into the manually maintained
`CHANGELOG.md`. This does API queries to GitHub, which are severely
ratelimited unless you use an API token: set the `GH_TOKEN` environment
variable to a Personal Access Token from https://github.com/settings/tokens
3. Create a new CHANGELOG.md heading for `v<VERSION>rc1`, and create a link at
the bottom. Note that you should exactly copy the date and name format from
a previous release, as the `build-release.sh` script relies on this.
4. Update the contrib/pyln package versions: `make update-pyln-versions NEW_VERSION=<VERSION>`
5. Create a PR with the above.
### Releasing -rc1
1. Merge the above PR.
2. Tag it `git pull && git tag -s v<VERSION>rc1`. Note that you
should get a prompt to give this tag a 'message'. Make sure you fill this in.
3. Confirm that the tag will show up for builds with `git describe`
4. Push the tag to remote `git push --tags`.
5. Announce rc1 release on core-lightning's release-chat channel on Discord
& [BuildOnL2](https://community.corelightning.org/c/general-questions/).
6. Use `devtools/credit --verbose v<PREVIOUS-VERSION>` to get commits, days
and contributors data for release note.
7. Prepare draft release notes including information from above step, and share
with the team for editing.
8. Upgrade your personal nodes to the rc1, to help testing.
9. Follow [reproducible build](REPRODUCIBLE.md) for [Builder image setup](https://lightning.readthedocs.io/REPRODUCIBLE.html#builder-image-setup). It will create builder images `cl-repro-<codename>` which are required for the next step.
10. Run the `tools/build-release.sh bin-Fedora-28-amd64 bin-Ubuntu sign` script to prepare required builds for the release. With `bin-Fedora-28-amd64 bin-Ubuntu sign`, it will build a zipfile, a non-reproducible Fedora image, and reproducible Ubuntu images. Once it is done, the script will sign the release contents and create SHA256SUMS and SHA256SUMS.asc in the release folder.
11. RC images are not uploaded to Docker Hub, hence they can be removed from the target list for RC versions. Each docker image takes approx. 90 minutes to bundle, but it is highly recommended to test the docker setup once if you haven't done so before. Prior to building docker images, ensure that the `multiarch/qemu-user-static` setup is working on your system as described [here](https://lightning.readthedocs.io/REPRODUCIBLE.html#setting-up-multiarch-qemu-user-static).
### Releasing -rc2, ..., -rcN
1. Change rc(N-1) to rcN in CHANGELOG.md.
2. Update the contrib/pyln package versions: `make update-pyln-versions NEW_VERSION=<VERSION>`
3. Add a PR with the rcN.
4. Tag it `git pull && git tag -s v<VERSION>rcN && git push --tags`
5. Announce tagged rc release on core-lightning's release-chat channel on Discord
& [BuildOnL2](https://community.corelightning.org/c/general-questions/).
6. Upgrade your personal nodes to the rcN.
### Tagging the Release
1. Update the CHANGELOG.md; remove -rcN in both places, update the date and add title and namer.
2. Update the contrib/pyln package versions: `make update-pyln-versions NEW_VERSION=<VERSION>`
3. Add a PR with that release.
4. Merge the PR, then:
- `export VERSION=23.05`
- `git pull`
- `git tag -a -s v${VERSION} -m v${VERSION}`
- `git push --tags`
5. Run `tools/build-release.sh` to:
- Create reproducible zipfile
- Build non-reproducible Fedora image
- Build reproducible Ubuntu-v18.04, Ubuntu-v20.04, Ubuntu-v22.04 images. Follow [link](https://lightning.readthedocs.io/REPRODUCIBLE.html#building-using-the-builder-image) for manually building Ubuntu images.
- Build Docker images for amd64 and arm64v8
- Create and sign checksums. Follow [link](https://lightning.readthedocs.io/REPRODUCIBLE.html#co-signing-the-release-manifest) for manually signing the release.
6. The tarballs may be owned by root, so revert ownership if necessary:
`sudo chown ${USER}:${USER} *${VERSION}*`
7. Upload the resulting files to github and save as a draft.
(https://github.com/ElementsProject/lightning/releases/)
8. Send `SHA256SUMS` & `SHA256SUMS.asc` files to the rest of the team to check and sign the release.
9. Team members can verify the release with the help of `build-release.sh`:
9.1 Rename release captain's `SHA256SUMS` to `SHA256SUMS-v${VERSION}` and `SHA256SUMS.asc` to `SHA256SUMS-v${VERSION}.asc`.
9.2 Copy them in the root folder (`lightning`).
9.3 Run `tools/build-release.sh --verify`. It will create reproducible images, verify checksums and sign.
9.4 Send your signatures from `release/SHA256SUMS.new` to release captain.
9.5 Or follow [link](https://lightning.readthedocs.io/REPRODUCIBLE.html#verifying-a-reproducible-build) for manual verification instructions.
10. Append signatures shared by the team into the `SHA256SUMS.asc` file, verify
with `gpg --verify SHA256SUMS.asc` and include the file in the draft release.
11. `make pyln-release` to upload pyln modules to pypi.org. This requires keys
for each of pyln-client, pyln-proto, and pyln-testing accessible to poetry.
This can be done by configuring the python keyring library along with a
suitable backend. Alternatively, the key can be set as an environment
variable and each of the pyln releases can be built and published
independently:
- `export POETRY_PYPI_TOKEN_PYPI=<pyln-client token>`
- `make pyln-release-client`
- ... repeat for each pyln package.
### Performing the Release
1. Edit the GitHub draft and include the `SHA256SUMS.asc` file.
2. Publish the release as not a draft.
3. Announce the final release on core-lightning's release-chat channel on Discord
& [BuildOnL2](https://community.corelightning.org/c/general-questions/).
4. Send a mail to c-lightning and lightning-dev mailing lists, using the
same wording as the Release Notes in github.
5. Write release blog, post it on [Blockstream](https://blog.blockstream.com/) and announce the release on Twitter.
### Post-release
1. Look through PRs which were delayed for release and merge them.
2. Close out the Milestone for the now-shipped release.
3. Update this file with any missing or changed instructions.

File diff suppressed because it is too large
@ -1,335 +0,0 @@
# Reproducible builds for Core Lightning
This document describes the steps involved to build Core Lightning in a
reproducible way. Reproducible builds close the final gap in the lifecycle of
open-source projects by allowing maintainers to verify and certify that a
given binary was indeed produced by compiling an unmodified version of the
publicly available source. In particular the maintainer certifies a) that the
binary corresponds to the exact version of the publicly available source, and
b) that no malicious changes have been applied before or after the
compilation.
Core Lightning has provided a manifest of the binaries included in a release,
along with signatures from the maintainers since version 0.6.2.
The steps involved in creating reproducible builds are:
- Creation of a known environment in which to build the source code
- Removal of variance during the compilation (randomness, timestamps, etc)
- Packaging of binaries
- Creation of a manifest (`SHA256SUMS` file containing the cryptographic
hashes of the binaries and packages)
- Signing of the manifest by maintainers and volunteers that have reproduced
the files in the manifest starting from the source.
The bulk of these operations is handled by the [`repro-build.sh`][script]
script, but some manual operations are required to setup the build
environment. Since a binary is built against platform specific libraries we
also need to replicate the steps once for each OS distribution and
architecture, so the majority of this guide will describe how to set up those
starting from a minimal trusted base. This minimal trusted base in most cases
is the official installation medium from the OS provider.
Note: Since your signature certifies the integrity of the resulting binaries,
please familiarize yourself with both the [`repro-build.sh`][script] script, as
well as with the setup instructions for the build environments before signing
anything.
[script]: https://github.com/ElementsProject/lightning/blob/master/tools/repro-build.sh
# Build Environment Setup
The build environments are a set of docker images that are created directly
from the installation mediums and repositories from the OS provider. The
following sections describe how to create those images. Don't worry, you only
have to create each image once and can then reuse the images for future
builds.
## Base image creation
Depending on the distribution that we want to build for, the instructions to
create a base image can vary. In the following sections we discuss the
specific instructions for each distribution, whereas the instructions are
identical again once we have the base image.
### Debian / Ubuntu and derivative OSs
For operating systems derived from Debian we can use the `debootstrap` tool to
build a minimal OS image, that can then be transformed into a docker
image. The packages for the minimal OS image are directly downloaded from the
installation repositories operated by the OS provider.
We cannot really use the `debian` and `ubuntu` images from the docker hub,
mainly because it'd be yet another trusted third party, but it is also
complicated by the fact that the images have some of the packages updated. The
latter means that if we disable the `updates` and `security` repositories for
`apt` we find ourselves in a situation where we can't install any additional
packages (wrongly updated packages depending on the versions not available in
the non-updated repos).
The following table lists the codenames of distributions that we
currently support:
- Ubuntu 18.04:
- Distribution Version: 18.04
- Codename: bionic
- Ubuntu 20.04:
- Distribution Version: 20.04
- Codename: focal
- Ubuntu 22.04:
- Distribution Version: 22.04
- Codename: jammy
Depending on your host OS release you might not have `debootstrap`
manifests for versions newer than your host OS. Due to this we run the
`debootstrap` commands in a container of the latest version itself:
```bash
for v in bionic focal jammy; do
echo "Building base image for $v"
sudo docker run --rm -v $(pwd):/build ubuntu:22.04 \
bash -c "apt-get update && apt-get install -y debootstrap && debootstrap $v /build/$v"
sudo tar -C $v -c . | sudo docker import - $v
done
```
Verify that the image corresponds to our expectation and is runnable:
```bash
sudo docker run bionic cat /etc/lsb-release
```
Which should result in the following output for `bionic`:
```text
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=18.04
DISTRIB_CODENAME=bionic
DISTRIB_DESCRIPTION="Ubuntu 18.04 LTS"
```
## Builder image setup
Once we have the clean base image we need to customize it to be able to build
Core Lightning. This includes disabling the update repositories, downloading the
build dependencies and specifying the steps required to perform the build.
For this purpose we have a number of Dockerfiles in the
[`contrib/reprobuild`][repro-dir] directory that have the specific
instructions for each base image.
We can then build the builder image by calling `docker build` and passing it
the `Dockerfile`:
```bash
sudo docker build -t cl-repro-bionic - < contrib/reprobuild/Dockerfile.bionic
sudo docker build -t cl-repro-focal - < contrib/reprobuild/Dockerfile.focal
sudo docker build -t cl-repro-jammy - < contrib/reprobuild/Dockerfile.jammy
```
Since we pass the `Dockerfile` through `stdin` the build command will not
create a context, i.e., the current directory is not passed to `docker` and
it'll be independent of the currently checked out version. This also means
that you will be able to reuse the docker image for future builds, and don't
have to repeat this dance every time. Verifying the `Dockerfile` therefore is
sufficient to ensure that the resulting `cl-repro-<codename>` image is
reproducible.
The dockerfiles assume that the base image has the codename as its image name.
[repro-dir]: https://github.com/ElementsProject/lightning/tree/master/contrib/reprobuild
# Building using the builder image
Finally, after this rather lengthy setup we can perform the actual build. At
this point we have a container image that has been prepared to build
reproducibly. As you can see from the `Dockerfile` above we assume the source
git repository gets mounted as `/repo` in the docker container. The container
will clone the repository to an internal path, in order to keep the repository
clean, build the artifacts there, and then copy them back to `/repo/release`.
We'll need the release directory available for this, so create it now if it
doesn't exist:
`mkdir release`
Then we can simply execute the following command inside the git
repository (remember to checkout the tag you are trying to build):
```bash
sudo docker run --rm -v $(pwd):/repo -ti cl-repro-bionic
sudo docker run --rm -v $(pwd):/repo -ti cl-repro-focal
sudo docker run --rm -v $(pwd):/repo -ti cl-repro-jammy
```
The last few lines of output also contain the `sha256sum` hashes of all
artifacts, so if you're just verifying the build those are the lines that are
of interest to you:
```text
ee83cf4948228ab1f644dbd9d28541fd8ef7c453a3fec90462b08371a8686df8 /repo/release/clightning-v0.9.0rc1-Ubuntu-18.04.tar.xz
94bd77f400c332ac7571532c9f85b141a266941057e8fe1bfa04f054918d8c33 /repo/release/clightning-v0.9.0rc1.zip
```
Repeat this step for each distribution and each architecture you wish to
sign. Once all the binaries are in the `release/` subdirectory we can sign
the hashes, as described in "(Co-)Signing the release manifest" below.
# Setting up Docker's Buildx
Docker Buildx is an extension of Docker's build command that provides a more efficient way to create images. It is part of Docker 19.03 and can also be manually installed as a CLI plugin for older versions.
1: Enable Docker CLI experimental features
Docker CLI experimental features are required to use Buildx. Enable them by setting the DOCKER_CLI_EXPERIMENTAL environment variable to enabled.
You can do this by adding the following line to your shell profile file (.bashrc, .zshrc, etc.):
```
export DOCKER_CLI_EXPERIMENTAL=enabled
```
After adding it, source your shell profile file or restart your shell to apply the changes.
2: Create a new builder instance
By default, Docker uses the "legacy" builder. You need to create a new builder instance that uses BuildKit. To create a new builder instance, use the following command:
```
docker buildx create --use
```
The `--use` flag sets the newly created builder as the current one.
# Setting up multiarch/qemu-user-static
1: Check Buildx is working
Use the `docker buildx inspect --bootstrap` command to verify that Buildx is working correctly. The `--bootstrap` option ensures the builder instance is running before inspecting it. The output should look something like this:
```
Name: my_builder
Driver: docker-container
Last Activity: 2023-06-13 04:37:30 +0000 UTC
Nodes:
Name: my_builder0
Endpoint: unix:///var/run/docker.sock
Status: running
Buildkit: v0.11.6
Platforms: linux/amd64, linux/amd64/v2, linux/amd64/v3, linux/amd64/v4, linux/386
```
2: Install `binfmt-support` and `qemu-user-static`, if not already installed.
```
sudo apt-get update
sudo apt-get install docker.io binfmt-support qemu-user-static
sudo systemctl restart docker
```
3: Set up QEMU to run binaries from multiple different architectures
```
docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
```
4: Confirm QEMU is working
Again run `docker buildx inspect --bootstrap` command to verify that `linux/arm64` is in the list of platforms.
```
Name: my_builder
Driver: docker-container
Last Activity: 2023-06-13 04:37:30 +0000 UTC
Nodes:
Name: my_builder0
Endpoint: unix:///var/run/docker.sock
Status: running
Buildkit: v0.11.6
Platforms: linux/amd64, linux/amd64/v2, linux/amd64/v3, linux/amd64/v4, linux/386, linux/arm64, linux/riscv64, linux/ppc64, linux/ppc64le, linux/s390x, linux/mips64le, linux/mips64
```
# (Co-)Signing the release manifest
The release captain is in charge of creating the manifest, whereas
contributors and interested bystanders may contribute their signatures to
further increase trust in the binaries.
The release captain creates the manifest as follows:
```bash
cd release/
sha256sum *v0.9.0* > SHA256SUMS
gpg -sb --armor SHA256SUMS
```
Co-maintainers and contributors wishing to add their own signature verify that
the `SHA256SUMS` and `SHA256SUMS.asc` files created by the release captain
match their binaries before also signing the manifest:
```bash
cd release/
gpg --verify SHA256SUMS.asc
sha256sum -c SHA256SUMS
cat SHA256SUMS | gpg -sb --armor > SHA256SUMS.new
```
Then send the resulting `SHA256SUMS.new` file to the release captain so it can
be merged with the other signatures into `SHA256SUMS.asc`.
# Verifying a reproducible build
You can verify the reproducible build in two ways:
- Repeating the entire reproducible build, making sure from scratch that the
binaries match. Just follow the instructions above for this.
- Verifying that the downloaded binaries match the hashes in
`SHA256SUMS` and that the signatures in `SHA256SUMS.asc` are valid.
Assuming you have downloaded the binaries, the manifest and the signatures
into the same directory, you can verify the signatures with the following:
```bash
gpg --verify SHA256SUMS.asc
```
And you should see a list of messages like the following:
```text
gpg: assuming signed data in 'SHA256SUMS'
gpg: Signature made Fr 08 Mai 2020 07:46:38 CEST
gpg: using RSA key 15EE8D6CAB0E7F0CF999BFCBD9200E6CD1ADB8F1
gpg: Good signature from "Rusty Russell <rusty@rustcorp.com.au>" [full]
gpg: Signature made Fr 08 Mai 2020 12:30:10 CEST
gpg: using RSA key B7C4BE81184FC203D52C35C51416D83DC4F0E86D
gpg: Good signature from "Christian Decker <decker.christian@gmail.com>" [ultimate]
gpg: Signature made Fr 08 Mai 2020 21:35:28 CEST
gpg: using RSA key 30DE693AE0DE9E37B3E7EB6BBFF0F67810C1EED1
gpg: Good signature from "Lisa Neigut <niftynei@gmail.com>" [full]
```
If there are any issues `gpg` will print `Bad signature`. This might be
because the signatures in `SHA256SUMS.asc` do not match the `SHA256SUMS`
file, which could be the result of a filename change. If that is not the
case, a failure here means that the verification failed: do not continue
using the binaries, and contact the maintainers.
Next we verify that the binaries match the ones in the manifest:
```bash
sha256sum -c SHA256SUMS
```
Producing output similar to the following:
```
sha256sum: clightning-v0.9.0-Fedora-28-amd64.tar.gz: No such file or directory
clightning-v0.9.0-Fedora-28-amd64.tar.gz: FAILED open or read
clightning-v0.9.0-Ubuntu-18.04.tar.xz: OK
clightning-v0.9.0.zip: OK
sha256sum: WARNING: 1 listed file could not be read
```
Notice that the two files we downloaded are marked as `OK`, but we're missing
one file. If you didn't download that file this is to be expected, and is
nothing to worry about. A failure to verify the hash would give a warning like
the following:
```text
sha256sum: WARNING: 1 computed checksum did NOT match
```
If both the signature verification and the manifest checksum verification
succeeded, then you have just successfully verified a reproducible build and,
assuming you trust the maintainers, are good to install and use the
binaries. Congratulations! 🎉🥳

@ -1,252 +0,0 @@
# Care And Feeding of Your Fellow Coders
Style is an individualistic thing, but working on software is a group
activity, so consistency is important. Generally our coding style
is similar to the [Linux coding style][style].
[style]: https://www.kernel.org/doc/html/v4.10/process/coding-style.html
## Communication
We communicate with each other via code; we polish each other's code,
and give nuanced feedback. Exceptions to the rules below always
exist: accept them. Particularly if they're funny!
## Prefer Short Names
`num_foos` is better than `number_of_foos`, and `i` is better than
`counter`. But `bool found;` is better than `bool ret;`. Be as
short as you can but still descriptive.
## Prefer 80 Columns
We have to stop somewhere. The two tools here are extracting
deeply-indented code into its own function, and the use of short-cuts
such as early returns or continues, eg:
```C
for (i = start; i != end; i++) {
if (i->something)
continue;
if (!i->something_else)
continue;
do_something(i);
}
```
## Tabs and Indentation
The C code uses TAB characters with a visual indentation of 8 spaces.
If you submit code for review, make sure your editor knows this.
When breaking a line with more than 80 characters, align parameters and
arguments like so:
```C
static void subtract_received_htlcs(const struct channel *channel,
struct amount_msat *amount)
```
Note: For more details, the files `.clang-format` and `.editorconfig` are
located in the project's root directory.
## Prefer Simple Statements
Notice the statement above uses separate tests, rather than combining
them. We prefer to only combine conditionals which are fundamentally
related, eg:
```C
if (i->something != NULL && *i->something < 100)
```
## Use of `take()`
Some functions have parameters marked with `TAKES`, indicating that
they can take lifetime ownership of a parameter which is passed using
`take()`. This can be a useful optimization which allows the function
to avoid making a copy, but if you hand `take(foo)` to something which
doesn't support `take()` you'll probably leak memory!
In particular, our automatically generated marshalling code doesn't
support `take()`.
If you're allocating something simply to hand it via `take()` you
should use NULL as the parent for clarity, eg:
```C
msg = towire_shutdown(NULL, &peer->channel_id, peer->final_scriptpubkey);
enqueue_peer_msg(peer, take(msg));
```
## Use of `tmpctx`
There's a convenient temporary tal context which gets cleaned
regularly: you should use this for throwaways rather than (as you'll
see some of our older code do!) grabbing some passing object to hang
your temporaries off!
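A minimal sketch of the idiom (the message and variables are illustrative;
`tal_fmt` allocates the string off the given context):
```C
/* A throwaway string hung off tmpctx: it is cleaned up automatically,
 * so no explicit tal_free() is needed. */
const char *msg = tal_fmt(tmpctx, "attempt %u failed", attempt);
printf("%s\n", msg);
```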
## Enums and Switch Statements
If you handle various enumerated values in a `switch`, don't use
`default:` but instead mention every enumeration case-by-case. That
way when a new enumeration case is added, most compilers will warn that you
don't cover it. This is particularly valuable for code auto-generated
from the specification!
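An illustrative sketch (the enum here is hypothetical):
```C
enum channel_state { OPENING, NORMAL, CLOSING };

static const char *state_name(enum channel_state s)
{
	/* No default: if a new enum value is added, the compiler
	 * warns that this switch doesn't cover it. */
	switch (s) {
	case OPENING:
		return "OPENING";
	case NORMAL:
		return "NORMAL";
	case CLOSING:
		return "CLOSING";
	}
	abort();
}
```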
## Initialization of Variables
Avoid double-initialization of variables; it's better to set them when
they're known, eg:
```C
bool is_foo;
if (bar == foo)
is_foo = true;
else
is_foo = false;
...
if (is_foo)...
```
This way the compiler will warn you if you have one path which doesn't set the
variable. If you initialize with `bool is_foo = false;` then you'll
simply get that value without warning when you change the code and
forget to set it on one path.
## Initialization of Memory
`valgrind` warns about decisions made on uninitialized memory. Prefer
`tal` and `tal_arr` to `talz` and `tal_arrz` for this reason, and
initialize only the fields you expect to be used.
Similarly, you can use `memcheck(mem, len)` to explicitly assert that
memory should have been initialized, rather than having valgrind
trigger later. We use this when placing things on queues, for example.
## Use of static and const
Everything should be declared static and const by default. Note that
`tal_free()` can free a const pointer (also, that it returns `NULL`, for
convenience).
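A sketch of the resulting idiom (the struct and helper are hypothetical):
```C
static void forget_secret(const struct secret **s)
{
	/* tal_free() accepts a const pointer and returns NULL,
	 * so freeing and clearing the pointer is a one-liner. */
	*s = tal_free(*s);
}
```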
## Typesafety Is Worth Some Pain
If code is typesafe, refactoring is as simple as changing a type and
compiling to find where to refactor. We rely on this,
so most places in the code will break if you hand the wrong type, eg
`type_to_string` and `structeq`.
We have two tools to help us here. The complicated macros in
`ccan/typesafe_cb` allow you to create callbacks which must match the
type of their argument, rather than using `void *`. The other is
`ARRAY_SIZE`, a macro which won't compile if you hand it a pointer
instead of an actual array.
## Use of `FIXME`
There are two cases in which you should use a `/* FIXME: */` comment:
one is where an optimization is possible but it's not clear that it's
yet worthwhile, and the second one is to note an ugly corner case
which could be improved (and may be in a following patch).
There are always compromises in code: eventually it needs to ship.
`FIXME` is `grep`-fodder for yourself and others, as well as useful
warning signs if we later encounter an issue in some part of the code.
## If You Don't Know The Right Thing, Do The Simplest Thing
Sometimes the right way is unclear, so it's best not to spend time on
it. It's far easier to rewrite simple code than complex code, too.
## Write For Today: Unused Code Is Buggy Code
Don't overdesign: complexity is a killer. If you need a fancy data
structure, start with a brute force linked list. Once that's working,
perhaps consider your fancy structure, but don't implement a generic
thing. Use `/* FIXME: ...*/` to salve your conscience.
## Keep Your Patches Reviewable
Try to make a single change at a time. It's tempting to do "drive-by"
fixes as you see other things, and a minimal amount is unavoidable, but
you can end up shaving infinite yaks. This is a good time to drop a
`/* FIXME: ...*/` comment and move on.
## Creating JSON APIs
Our JSON RPCs always return a top-level object. This allows us to add
warnings (e.g. that we're still starting up) or other optional fields
later.
Prefer to use JSON names which are already in use, or otherwise names
from the BOLT specifications.
The same command should always return the same JSON format: this is
why e.g. `listchannels` returns an array even if given an argument, so
there are only ever zero or one entries.
All `warning` fields should have unique names which start with
`warning_`, the value of which should be an explanation. This allows
for programs to deal with them sanely, and also perform translations.
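For example, a return value might look like this (the field names are
purely illustrative):
```json
{
  "channels": [],
  "warning_gossip_sync": "Still syncing gossip; results may be incomplete"
}
```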
### Documenting JSON APIs
We use JSON schemas to validate that JSON-RPC returns are in the
correct form, and also to generate documentation. See
[writing schemas manpage](schemas/WRITING_SCHEMAS.md).
## Changing JSON APIs
All JSON API changes need a Changelog line (see below).
You can always add a new output JSON field (Changelog-Added), but you
cannot remove one without going through a 6-month deprecation cycle
(Changelog-Deprecated).
During deprecation, only output the field if `allow-deprecated-apis` is true,
so users can test that their code is futureproof. After 6 months, remove it
(Changelog-Removed).
Changing existing input parameters is harder, and should generally be
avoided. Adding input parameters is possible, but should be done
cautiously as too many parameters get unwieldy quickly.
## Github Workflows
We have adopted a number of workflows to facilitate the development of
Core Lightning, and to make things more pleasant for contributors.
### Changelog Entries in Commit Messages
We are maintaining a changelog in the top-level directory of this
project. However since every pull request has a tendency to touch the file and
therefore create merge-conflicts we decided to derive the changelog file from
the pull requests that were added between releases. In order for a pull
request to show up in the changelog at least one of its commits will have to
have a line with one of the following prefixes:
- `Changelog-Added: ` if the pull request adds a new feature
- `Changelog-Changed: ` if a feature has been modified and might require
changes on the user side
- `Changelog-Deprecated: ` if a feature has been marked for deprecation, but
not yet removed
- `Changelog-Fixed: ` if a bug has been fixed
- `Changelog-Removed: ` if a (previously deprecated) feature has been removed
- `Changelog-Experimental: ` if it only affects experimental- config options.
In case you think the pull request is small enough not to require a changelog
entry please use `Changelog-None` in one of the commit messages to opt out.
Under some circumstances a feature may be removed even without deprecation
warning if it was not part of a released version yet, or the removal is
urgent.
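For illustration, a commit message carrying such a line might look like
this (the subject and entry are hypothetical):
```
wallet: show address type in listfunds output.

Changelog-Added: JSON-RPC: `listfunds` output now shows the address type.
```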
In order to ensure that each pull request has the required `Changelog-*:` line
for the changelog our trusty @bitcoin-bot will check logs whenever a pull
request is created or updated and search for the required line. If there is no
such line it'll mark the pull request as `pending` to call out the need for an
entry.

@ -1,448 +0,0 @@
# Setting up TOR with Core Lightning
To use any Tor features with Core Lightning you must have Tor installed and running.
Note that we only support Tor v3: you can check your installed Tor version with `tor --version` or `sudo tor --version`.
If Tor is not installed you can install it on Debian based Linux systems (Ubuntu, Debian, etc) with the following command:
```bash
sudo apt install tor
```
then `/etc/init.d/tor start` or `sudo systemctl enable --now tor` depending
on your system configuration.
Most default settings should be sufficient.
To keep a safe configuration for minimal harassment (See [Tor FAQ])
just check that this line is present in the Tor config file `/etc/tor/torrc`:
`ExitPolicy reject *:* # no exits allowed`
This does not affect Core Lightning connect, listen, etc.
It will only prevent your node from becoming a Tor exit node.
Only allow exits if you are sure about the implications.
If you don't want to create .onion addresses this should be enough.
There are several ways by which a Core Lightning node can accept or make connections over Tor.
The node can be reached over Tor by connecting to its .onion address.
To provide the node with a .onion address you can:
* create a **non-persistent** address with an auto service or
* create a **persistent** address with a hidden service.
### Quick Start On Linux
It is easy to create a single persistent Tor address and not announce a public IP.
This is ideal for most setups where you have an ISP-provided router connecting your
Internet to your local network and computer, as it does not require a stable
public IP from your ISP (which might not give one to you for free), nor port
forwarding (which can be hard to set up for random cheap router models).
Tor provides NAT-traversal for free, so even if you or your ISP has a complex
network between you and the Internet, as long as you can use Tor you can
be connected to.
Note: Core Lightning also supports IPv4/6 address discovery behind NAT routers.
If your node detects a new public address, it can update its announcement.
For this to work you need to forward the TCP port 9735 on your NAT router to your node.
In this case you don't need Tor to punch through your firewall.
Note: By default, and for privacy reasons, IP discovery will only be active
if no other addresses would be announced (as a kind of fallback).
You can set `--announce-addr-discovered=true` to activate it explicitly.
Your node will then update discovered IP addresses even if it also announces e.g. a Tor address.
This usually has the benefit of quicker and more stable connections but does not
offer additional privacy.
On most Linux distributions, making a standard installation of `tor` will
automatically set it up to have a SOCKS5 proxy at port 9050.
As well, you have to set up the Tor Control Port.
On most Linux distributions there will be the following commented-out
settings in `/etc/tor/torrc`:
```
ControlPort 9051
CookieAuthentication 1
CookieAuthFile /var/lib/tor/control_auth_cookie
CookieAuthFileGroupReadable 1
```
Uncomment those, then restart `tor` (usually `systemctl restart tor` or
`sudo systemctl restart tor` on most SystemD-based systems, including recent
Debian and Ubuntu, or just restart the entire computer if you cannot figure
it out).
On some systems (such as Arch Linux), you may also need to add the following
setting:
```
DataDirectoryGroupReadable 1
```
You also need to make your user a member of the Tor group.
"Your user" here is whatever user will run `lightningd`.
On Debian-derived systems, the Tor group will most likely be `debian-tor`.
You can try listing all groups with the below command, and check for a
`debian-tor` or `tor` groupname.
```
getent group | cut -d: -f1 | sort
```
Alternately, you could check the group of the cookie file directly.
Usually, on most Linux systems, that would be `/run/tor/control.authcookie`:
```
stat -c '%G' /run/tor/control.authcookie
```
Once you have determined the `${TORGROUP}` and selected the
`${LIGHTNINGUSER}` that will run `lightningd`, run this as root:
```
usermod -a -G ${TORGROUP} ${LIGHTNINGUSER}
```
Then restart the computer (logging out and logging in again should also
work).
Confirm that `${LIGHTNINGUSER}` is in `${TORGROUP}` by running the
`groups` command as `${LIGHTNINGUSER}` and checking `${TORGROUP}` is listed.
If the `/run/tor/control.authcookie` exists in your system, then log in as
the user that will run `lightningd` and check this command:
```
cat /run/tor/control.authcookie > /dev/null
```
If the above prints nothing and returns, then Core Lightning "should" work
with your Tor.
If it prints an error, some configuration problem will likely prevent
Core Lightning from working with your Tor.
Then make sure these are in your `${LIGHTNING_DIR}/config` or other Core Lightning configuration
(or prepend `--` to each of them and add them to your `lightningd` invocation
command line):
```
proxy=127.0.0.1:9050
bind-addr=127.0.0.1:9735
addr=statictor:127.0.0.1:9051
always-use-proxy=true
```
1. `proxy` informs Core Lightning that you have a SOCKS5 proxy at port 9050.
Core Lightning will assume that this is a Tor proxy, port 9050 is the
default in most Linux distributions; you can double-check `/etc/tor/torrc`
for a `SocksPort` entry to confirm the port number.
2. `bind-addr` informs Core Lightning to bind itself to port 9735.
This is needed for the subsequent `statictor` to work.
9735 is the normal Lightning Network port, so this setting may already be present.
If you add a second `bind-addr=...` you may get errors, so choose this new one
or keep the old one, but don't keep both.
This has to appear before any `statictor:` setting.
3. `addr=statictor:` informs Core Lightning that you want to create a persistent
hidden service that is based on your node private key.
This informs Core Lightning as well that the Tor Control Port is 9051.
You can also use `bind-addr=statictor:` instead to not announce the
persistent hidden service, but if anyone wants to make a channel with
you, you either have to connect to them, or you have to reveal your
address to them explicitly (i.e. autopilots and the like will likely
never connect to you).
4. `always-use-proxy` informs Core Lightning to always use Tor even when
connecting to nodes with public IPs.
You can set this to `false` or remove it,
if you are not privacy-conscious **and** find Tor is too slow for you.
### Tor Browser and Orbot
It is possible to not install Tor on your computer, and rely on just
Tor Browser.
Tor Browser will run a built-in Tor instance, but with the proxy at port
9150 and the control port at 9151
(the normal Tor has, by default, the proxy at port 9050 and the control
port at 9051).
The mobile Orbot uses the same defaults as Tor Browser (9150 and 9151).
You can then use these settings for Core Lightning:
```
proxy=127.0.0.1:9150
bind-addr=127.0.0.1:9735
addr=statictor:127.0.0.1:9151
always-use-proxy=true
```
You will have to run Core Lightning after launching Tor Browser or Orbot,
and keep Tor Browser or Orbot open as long as Core Lightning is running,
but this is a setup which allows others to connect and fund channels
to you, anywhere (no port forwarding! works wherever Tor works!), and
you do not have to do anything more complicated than download and
install Tor Browser.
This may be useful for operating system distributions that do not have
Tor in their repositories, assuming we can ever get Core Lightning running
on those.
### Detailed Discussion
#### Three Ways to Create .onion Addresses for Core Lightning
1. You can configure Tor to create an onion address for you, and tell Core Lightning to use that address
2. You can have Core Lightning tell Tor to create a new onion address every time
3. You can configure Core Lightning to tell Tor to create the same onion address every time it starts up
#### Tor-Created .onion Address
Having Tor create an onion address lets you run other services (e.g.
a web server) at that same address, and you just tell that address to
Core Lightning and it doesn't have to talk to the Tor server at all.
Put the following in your `/etc/tor/torrc` file:
```
HiddenServiceDir /var/lib/tor/lightningd-service_v3/
HiddenServiceVersion 3
HiddenServicePort 1234 127.0.0.1:9735
```
The hidden lightning service will be reachable at port 1234 (global port)
of the .onion address, which will be created at the restart of the
Tor service. Both types of addresses can coexist on the same node.
Save the file and restart the Tor service. On Linux:
`/etc/init.d/tor restart` or `sudo systemctl restart tor` depending
on the configuration of your system.
You will find the newly created address (myaddress.onion) with:
```
sudo cat /var/lib/tor/lightningd-service_v3/hostname
```
Now you need to tell Core Lightning to advertize that onion hostname and
port, by placing `announce-addr=myaddress.onion` in your lightning
config.
#### Letting Core Lightning Control Tor
To have Core Lightning control your Tor addresses, you have to tell Tor
to accept control commands from Core Lightning, either by using a cookie,
or a password.
##### Service authenticated by cookie
This tells Tor to create a cookie file each time: lightningd will have
to be in the same group as tor (e.g. debian-tor): you can look at
`/run/tor/control.authcookie` to check the group name.
Add the following lines in the `/etc/tor/torrc` file:
```
ControlPort 9051
CookieAuthentication 1
CookieAuthFileGroupReadable 1
```
Save the file and restart the Tor service.
##### Service authenticated by password
This tells Tor to allow password access: you also need to tell lightningd
what the password is.
Create a hash of your password with
```
tor --hash-password yourpassword
```
This returns a line like
`16:533E3963988E038560A8C4EE6BBEE8DB106B38F9C8A7F81FE38D2A3B1F`
Put these lines in the `/etc/tor/torrc` file:
```
ControlPort 9051
HashedControlPassword 16:533E3963988E038560A8C4EE6BBEE8DB106B38F9C8A7F81FE38D2A3B1F
```
Save the file and restart the Tor service.
Put `tor-service-password=yourpassword` (not the hash) in your
lightning configuration file.
##### Core Lightning Creating Persistent Hidden Addresses
This is usually better than transient addresses, as nodes won't have
to wait for gossip propagation to find out your new address each time
you restart.
Once you've configured access to Tor as described above, you need
to add *two* lines in your lightningd config file:
1. A local address which lightningd can tell Tor to connect to when
connections come in, e.g. `bind-addr=127.0.0.1:9735`.
2. After that, an `addr=statictor:127.0.0.1:9051` line to tell
Core Lightning to set up and announce a Tor onion address (and tell
Tor to send connections to our real address, above), as shown in the
sketch below.
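Putting those two lines together, a minimal sketch of the Tor-related
config (using the default ports from above) is:
```
bind-addr=127.0.0.1:9735
addr=statictor:127.0.0.1:9051
```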
You can use `bind-addr=statictor:127.0.0.1:9051` instead of `addr=...`
if you want to set up the onion address but not announce it to the
world for some reason.
You may add more `addr` lines if you want to advertize other
addresses.
There is an older method, called "autotor" instead of "statictor",
which creates a different Tor address on each restart. This is usually
not very helpful: you need to use `lightning-cli getinfo` to see which
address the node is currently using, and if you announce it, other peers
must wait for fresh gossip messages before they can connect.
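For completeness, the transient variant is a one-line swap (same control
port assumed as above):
```
addr=autotor:127.0.0.1:9051
```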
### What do we support
| Case # | IP Number     | Hidden service          | Incoming / Outgoing Tor |
| ------ | ------------- | ----------------------- | ----------------------- |
| 1      | Public        | NO                      | Outgoing                |
| 2      | Public        | FIXED BY TOR            | Incoming [1]            |
| 3      | Public        | FIXED BY CORE LIGHTNING | Incoming [1]            |
| 4      | Not Announced | FIXED BY TOR            | Incoming [1]            |
| 5      | Not Announced | FIXED BY CORE LIGHTNING | Incoming [1]            |
NOTE:
1. In all the "Incoming" use cases, the node can also make "Outgoing" Tor
connections (connect to a .onion address) by adding the `proxy=127.0.0.1:9050` option.
#### Case #1: Public IP address and no Tor address, but can connect to Tor addresses
Without a .onion address, the node won't be reachable through Tor by other
nodes, but it will always be able to `connect` to a Tor-enabled node
(outbound connections) by passing the `connect` request through the Tor
service's SOCKS5 proxy. When the Tor service starts, it creates a SOCKS5
proxy, by default at the address 127.0.0.1:9050.
If the node is started with the option `proxy=127.0.0.1:9050`, it will
always be able to connect to nodes with .onion addresses through the
SOCKS5 proxy.
**You can add this option in the other use cases as well, to gain
outgoing Tor capability.**
If you want to `connect` to nodes ONLY via the Tor proxy, you have to add the
`always-use-proxy=true` option (though if you only advertize Tor addresses,
we also assume you want to always use the proxy).
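As a sketch, the Tor-related part of a Case #1 config is then just:
```
proxy=127.0.0.1:9050
# optionally, force ALL outgoing connections through Tor:
always-use-proxy=true
```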
You can announce your public IP address through the usual method: if
your node is in an internal network:
```
bind-addr=internalIPAddress:port
announce-addr=externalIpAddress
```
or if it has a public IP address:
```
addr=externalIpAddress
```
TIP: If you are unsure which of the two is suitable for you, find your
internal and external addresses and see if they match.
On Linux, discover your external IP address with `curl ipinfo.io/ip`
and your internal IP address with `ip route get 1 | awk '{print $NF;exit}'`.
If they match, you can use the `--addr` command line option.
#### Case #2: Public IP address, and a fixed Tor address in torrc
Other nodes can connect to you entirely over Tor, and the Tor address
doesn't change every time you restart.
You simply tell Core Lightning to advertize both addresses (you can use
`sudo cat /var/lib/tor/lightningd-service_v3/hostname` to get your
Tor-assigned onion address).
If you have an internal IP address:
```
bind-addr=yourInternalIPAddress:port
announce-addr=yourexternalIPAddress:port
announce-addr=your.onionAddress:port
```
Or an external address:
```
addr=yourIPAddress:port
announce-addr=your.onionAddress:port
```
#### Case #3: Public IP address, and a fixed Tor address set by Core Lightning
Other nodes can connect to you entirely over Tor, and the Tor address
doesn't change every time you restart.
See "Letting Core Lightning Control Tor" for how to get Core Lightning
talking to Tor.
If you have an internal IP address:
```
bind-addr=yourInternalIPAddress:port
announce-addr=yourexternalIPAddress:port
addr=statictor:127.0.0.1:9051
```
Or an external address:
```
addr=yourIPAddress:port
addr=statictor:127.0.0.1:9051
```
#### Case #4: Unannounced IP address, and a fixed Tor address in torrc
Other nodes can only connect to you over Tor.
You simply tell Core Lightning to advertize the Tor address (you can use
`sudo cat /var/lib/tor/lightningd-service_v3/hostname` to get your
Tor-assigned onion address).
```
announce-addr=your.onionAddress:port
proxy=127.0.0.1:9050
always-use-proxy=true
```
#### Case #5: Unannounced IP address, and a fixed Tor address set by Core Lightning
Other nodes can only connect to you over Tor.
See "Letting Core Lightning Control Tor" for how to get Core Lightning
talking to Tor.
```
addr=statictor:127.0.0.1:9051
proxy=127.0.0.1:9050
always-use-proxy=true
```
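After restarting `lightningd` with a statictor configuration, you can
confirm the onion address it set up with:
```
lightning-cli getinfo
```
and look for the .onion entry in the `address` array.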
## References
The lightningd-config manual page covers the various address cases in detail.
- [The Tor project](https://www.torproject.org/)
- [Tor FAQ: What is Tor?](https://www.torproject.org/docs/faq.html.en#WhatIsTor)
- [Tor onion (hidden) services](https://www.torproject.org/docs/onion-services.html.en)
- [.onion addresses version 3](https://blog.torproject.org/we-want-you-test-next-gen-onion-services)