Compare commits

...

507 commits

Author SHA1 Message Date
Oliver Gugger
6531d45050
Merge pull request #9458 from Crypt-iQ/banning_010072025
multi+server.go: add initial permissions for some peers
2025-03-12 08:14:31 -06:00
Eugene Siegel
6309b8a0f4
release-notes: update for 0.19.0 2025-03-11 20:42:35 -04:00
Eugene Siegel
a4acfcb0ef
sample-lnd.conf: update for num-restricted-slots 2025-03-11 20:42:35 -04:00
Eugene Siegel
d5ecad3834
itest: new test to check server access perms 2025-03-11 20:42:35 -04:00
Eugene Siegel
68ec766b61
funding+server.go: modify notifications to pass through server
This modifies the various channelnotifier notification functions
to instead hit the server and then call the notification routine.
This allows us to accurately modify the server's maps.
2025-03-11 20:42:35 -04:00
Eugene Siegel
6eb746fbba
server.go+accessman.go: introduce caches for access permissions
Here we introduce the access manager, which has caches that
determine the access control status of our peers. Peers that have
had their funding transaction confirm with us are protected. Peers
that only have pending-open channels with us have temporary access
that can be revoked. The rest of the peers are granted restricted
access.
2025-03-11 20:42:34 -04:00
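As a rough illustration of the tiering described above, here is a minimal Go sketch; the type and function names are invented for illustration and are not lnd's accessman API. Peers with a confirmed funding transaction are protected, peers with only pending-open channels get revocable temporary access, and everyone else is restricted.

```
package main

import "fmt"

// peerAccessLevel is a hypothetical enum mirroring the three access tiers
// described in the commit message; the names are illustrative, not lnd's.
type peerAccessLevel int

const (
	// restrictedAccess is the default tier for unknown peers.
	restrictedAccess peerAccessLevel = iota

	// temporaryAccess is granted to peers with only pending-open channels
	// and can be revoked.
	temporaryAccess

	// protectedAccess is granted to peers whose funding tx has confirmed.
	protectedAccess
)

// classifyPeer picks a tier based on the peer's channel state, assuming the
// caller tracks confirmed and pending-open channel counts per peer.
func classifyPeer(confirmedChans, pendingOpenChans int) peerAccessLevel {
	switch {
	case confirmedChans > 0:
		return protectedAccess
	case pendingOpenChans > 0:
		return temporaryAccess
	default:
		return restrictedAccess
	}
}

func main() {
	fmt.Println(classifyPeer(1, 0)) // 2 (protected)
	fmt.Println(classifyPeer(0, 1)) // 1 (temporary)
	fmt.Println(classifyPeer(0, 0)) // 0 (restricted)
}
```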
Eugene Siegel
4cfc92f420
channelnotifier: add FundingTimeout and NotifyFundingTimeout
This signal will be used in the server.go code to potentially
demote temporary-access peers to restricted-access peers.
2025-03-11 20:42:34 -04:00
Eugene Siegel
15f17633aa
channeldb: FetchPermAndTempPeers to load access perms on startup
We introduce a new func FetchPermAndTempPeers that returns two maps.
The first map indicates the nodes that will have "protected" access
to the server. The second map indicates the nodes that have
"temporary" access to the server. This will be used in a future
commit in the server.go code.
2025-03-11 20:42:34 -04:00
Oliver Gugger
5d8309ea6b
Merge pull request #9596 from JoeGruffins/testingbtcwalletchange
deps: Update btcwallet to v0.16.10.
2025-03-11 11:31:25 -06:00
Oliver Gugger
04c76101dd
Merge pull request #9595 from yyforyongyu/fix-gossip-syncer
multi: fix flakes and gossip syncer
2025-03-11 09:33:05 -06:00
JoeGruff
c8d032afa9 deps: Update btcwallet to v0.16.10. 2025-03-11 11:11:29 +09:00
Oliver Gugger
a673826dee
Merge pull request #9563 from yyforyongyu/fix-unit-test
chanbackup: fix test flake in `TestUpdateAndSwap`
2025-03-10 16:01:57 -06:00
Oliver Gugger
97bba57520
Merge pull request #9597 from guggero/rebase-fix
contractcourt: fix rebase issue with removed variadic option
2025-03-10 15:46:51 -06:00
Oliver Gugger
0e9b7c5fa2
contractcourt: fix rebase issue with removed variadic option
The optional variadic functional parameters probably got lost during a
rebase.
2025-03-10 21:52:02 +01:00
yyforyongyu
faf8ce161a
docs: update release notes 2025-03-10 17:03:00 +08:00
yyforyongyu
d0abfbbaff
itest: remove redundant resume action
Removed a redundant resume action found in
`testForwardInterceptorRestart`, which was likely added by mistake.
2025-03-10 16:58:16 +08:00
yyforyongyu
51daa13cd7
routerrpc: improve logging in forwardInterceptor 2025-03-10 16:58:16 +08:00
yyforyongyu
37799b95b7
discovery: fix state transition in GossipSyncer
Previously, we would set the state of the syncer after sending the msg,
which had the following flow:

1. In state `queryNewChannels`, we send the msg `QueryShortChanIDs`.
2. Once the msg is sent, we change to state `waitingQueryChanReply`.

But there's no guarantee the remote won't reply in between the two
steps. When that happens, our syncer would still be in state
`queryNewChannels`, causing the following error:
```
[ERR] DISC gossiper.go:873: Process query msg from peer [Alice] got unexpected msg *lnwire.ReplyShortChanIDsEnd received in state queryNewChannels
```

To fix it, we now make sure the state is updated before sending the msg.
2025-03-10 16:58:16 +08:00
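A minimal sketch of the reordering described above, using invented stand-in types rather than lnd's actual GossipSyncer: the state transition happens before the query is sent, so a reply that arrives immediately still finds the syncer in `waitingQueryChanReply`.

```
package main

import "fmt"

// syncerState is an illustrative stand-in for the GossipSyncer's state enum.
type syncerState int

const (
	queryNewChannels syncerState = iota
	waitingQueryChanReply
)

// toySyncer is a minimal stand-in; lnd's real GossipSyncer is more involved.
type toySyncer struct {
	state syncerState
	send  func(msg string) // a reply may arrive any time after sending
}

// queryChannels reflects the fixed ordering described above: transition to
// waitingQueryChanReply *before* the message leaves, so a fast reply always
// finds the syncer in the expected state.
func (s *toySyncer) queryChannels() {
	s.state = waitingQueryChanReply // update state first
	s.send("QueryShortChanIDs")     // then send the query
}

func main() {
	s := &toySyncer{send: func(msg string) { fmt.Println("sent:", msg) }}
	s.queryChannels()
	fmt.Println("state after send:", s.state) // waitingQueryChanReply
}
```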
yyforyongyu
4d05730c79
tor: fix msg order in TestReconnectSucceed 2025-03-10 16:58:16 +08:00
yyforyongyu
76ade177af
chanbackup: fix test flake in TestUpdateAndSwap
To make it easier to debug, we break the old test into smaller ones and
fix a flake caused by an uncleaned test dir.
2025-03-09 13:41:22 +08:00
Olaoluwa Osuntokun
b21b1e3acb
Merge pull request #9592 from guggero/tor-update
mod: bump tor submodule to v1.1.5 to fix flake
2025-03-07 16:21:03 -08:00
Oliver Gugger
342a75891c
mod: bump tor submodule to v1.1.5 to fix flake
After merging #9581, the flake in the coverage unit test should be gone.
All we have to do is update the submodule version to the fixed one
(since during unit tests the module is used, not the physical directory
on disk).
2025-03-07 19:39:00 +01:00
Oliver Gugger
d5a7e8957f
Merge pull request #9581 from yyforyongyu/fix-TestReconnectSucceed
tor: fix `TestReconnectSucceed`
2025-03-07 12:37:19 -06:00
Oliver Gugger
115b452abe
Merge pull request #9562 from NishantBansal2003/test-num-block-fund
Make MaxWaitNumBlocksFundingConf Configurable for itest
2025-03-07 12:35:30 -06:00
Oliver Gugger
ab2dc09eb7
Merge pull request #9582 from yyforyongyu/flake-doc
itest: properly document the known flakes
2025-03-07 10:37:14 -06:00
Nishant Bansal
9a9086b26f
docs: add release notes.
Signed-off-by: Nishant Bansal <nishant.bansal.282003@gmail.com>
2025-03-07 21:33:15 +05:30
Nishant Bansal
4569c07e08
multi: Add itest for funding timeout
This commit adds an integration test that
verifies the funding timeout behavior in the
funding manager when running in dev/integration test mode.
Signed-off-by: Nishant Bansal <nishant.bansal.282003@gmail.com>
2025-03-07 21:32:38 +05:30
Oliver Gugger
a5f54d1d6b
Merge pull request #9573 from ellemouton/checkUpdateStalenessBeforeRateLimit
discovery: obtain channelMtx before doing any DB calls in `handleChannelUpdate`
2025-03-07 04:17:00 -06:00
Oliver Gugger
76808a81a0
Merge pull request #9587 from guggero/mining-block-limit-configurable
lntest: make mining block limit configurable
2025-03-07 02:57:29 -06:00
Oliver Gugger
25c83104b7
lntest: make mining block limit configurable
Nudging test authors towards not mining too many blocks makes sense,
especially in lnd where we have a lot of integration tests.
But the lntest package is also used in other projects where this
restriction might lead to large refactors.
To be able to stage those refactors, we also want to make this limit
configurable when lntest is used as a library.
2025-03-07 09:22:51 +01:00
Oliver Gugger
72b570320a
Merge pull request #9586 from ellemouton/logConfig
config: only create a sub-log manager if one doesn't already exist
2025-03-07 01:30:02 -06:00
Elle Mouton
090e687144
config: only create a sub-log manager if one doesn't already exist
The ValidateConfig method needs to account for the caller already having
an initialised build.SubLoggerManager and, in that case, should not override it.
2025-03-06 18:02:25 +02:00
Oliver Gugger
7d7e1872c8
Merge pull request #9574 from ellemouton/fixWatcherPanic
lntest: wait for ChanUpdate req to be fully processed before sending another
2025-03-06 09:21:35 -06:00
yyforyongyu
95569df92e
itest: document a flake found on macOS 2025-03-06 22:46:31 +08:00
yyforyongyu
0f8f092ddd
itest: document a flake from TxNotifier 2025-03-06 09:11:30 +08:00
yyforyongyu
2f545717c9
itest: remove old TODOs
As they are fixed now.
2025-03-06 09:11:30 +08:00
yyforyongyu
f349323923
itest: move test testDisconnectingTargetPeer
Hence finishing an old TODO.
2025-03-06 09:11:30 +08:00
yyforyongyu
4755eff0a8
itest: remove time.Sleep before closing channels
Remove an old TODO, which has been fixed with the `NoWait` flag for coop
close.
2025-03-06 09:11:13 +08:00
Elle Mouton
95277bbc35
docs: update release notes 2025-03-05 14:12:56 +02:00
Elle Mouton
2e85e08556
discovery: grab channel mutex before any DB calls
In `handleChanUpdate`, make sure to grab the `channelMtx` lock before
making any DB calls so that the logic remains consistent.
2025-03-05 14:12:55 +02:00
Elle Mouton
5e35bd8328
discovery: demonstrate channel update rate limiting bug
This commit adds a test to demonstrate that if we receive two identical
updates (which can happen if we get the same update from two peers in
quick succession), then our rate limiting logic will be hit early as
both updates might be counted towards the rate limit. This will be fixed
in an upcoming commit.
2025-03-05 14:12:23 +02:00
yyforyongyu
a116eef5eb
tor: fix TestReconnectSucceed
We now make sure the proxy server is running on a unique port. In
addition, we close the old conn before making a new conn.
2025-03-05 19:09:15 +08:00
yyforyongyu
ec7c36fd6a
itest: document missing wallet UTXO 2025-03-05 18:54:13 +08:00
yyforyongyu
061b7abf76
itest: document the flake found in preimage extraction
We now use dedicated methods to properly document the flakes in *one*
place.
2025-03-05 18:54:13 +08:00
Oliver Gugger
9feb761b4e
Merge pull request #9576 from ellemouton/logConfYamlTags
build: add yaml tags to some LogConfig fields
2025-03-04 13:27:11 -06:00
Elle Mouton
1dec8ab985
build: add yaml tags to embedded LogConfig structs
For any embedded struct, the `yaml:",inline"` tag is required.
2025-03-04 13:32:22 +02:00
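A small self-contained example of why the tag matters, using gopkg.in/yaml.v3 with illustrative struct fields (not lnd's actual build.LogConfig): without `yaml:",inline"`, the embedded struct's fields would be nested under a key named after the type instead of being flattened into the parent.

```
package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

// LogConfig is an illustrative config struct; the field is an assumption,
// not lnd's actual build.LogConfig definition.
type LogConfig struct {
	Level string `yaml:"level"`
}

// Config embeds LogConfig. Without the `yaml:",inline"` tag, yaml.v3 would
// nest the embedded fields under a "logconfig" key instead of flattening
// them into the outer document.
type Config struct {
	LogConfig `yaml:",inline"`

	Network string `yaml:"network"`
}

func main() {
	out, err := yaml.Marshal(Config{
		LogConfig: LogConfig{Level: "debug"},
		Network:   "regtest",
	})
	if err != nil {
		panic(err)
	}
	// Prints flat keys: "level: debug" then "network: regtest".
	fmt.Print(string(out))
}
```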
Elle Mouton
57c6c236d8
lntest: wait for ChanUpdate req to be fully processed before sending another
Before this commit, it was possible for a request to be sent on the
`chanWatchRequests` channel in `WaitForChannelPolicyUpdate` and then for
the `ticker.C` case to select _before_ the `eventChan` select gets
triggered when the `topologyWatcher` closes the `eventChan` in its call
to `handlePolicyUpdateWatchRequest`. This could lead to a "close of a
closed channel" panic.

To fix this, this commit ensures that we only move on to the next
iteration of the select statement in `WaitForChannelPolicyUpdate` once
the request sent on `chanWatchRequests` has been fully handled.
2025-03-03 17:53:15 +02:00
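A minimal sketch of the acknowledgement pattern described above, with invented stand-in types rather than lntest's actual ones: the sender blocks on a per-request `handled` channel before re-entering the select loop, so the ticker case can no longer fire while a request is still being processed.

```
package main

import (
	"fmt"
	"time"
)

// watchRequest pairs a request with an ack channel so the sender can wait
// until the watcher has fully handled it. These are illustrative stand-ins,
// not lntest's actual chanWatchRequest.
type watchRequest struct {
	chanPoint string
	handled   chan struct{}
}

// topologyWatcher handles requests one at a time and acknowledges each one
// once its per-request bookkeeping (omitted here) is done.
func topologyWatcher(reqs <-chan watchRequest) {
	for req := range reqs {
		// ... register/close the per-request event channel here ...
		close(req.handled)
	}
}

func main() {
	reqs := make(chan watchRequest)
	go topologyWatcher(reqs)

	ticker := time.NewTicker(time.Second)
	defer ticker.Stop()

	for i := 0; i < 3; i++ {
		req := watchRequest{chanPoint: "txid:0", handled: make(chan struct{})}

		select {
		case reqs <- req:
			// Key point of the fix: block here until the watcher
			// has fully processed the request, so the ticker case
			// cannot race with the in-flight handling.
			<-req.handled
			fmt.Println("request", i, "fully handled")
		case <-ticker.C:
			fmt.Println("timed out waiting for policy update")
			return
		}
	}
}
```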
Olaoluwa Osuntokun
f744a5477f
Merge pull request #9565 from guggero/bot-typo-fix-spam
GitHub+docs: no longer accept typo fixes to fight PR spam
2025-02-28 16:18:17 -08:00
Oliver Gugger
f1182e4338
Merge pull request #9521 from guggero/coverage-fixes
unit: remove GOACC, use Go 1.20 native coverage functionality
2025-02-28 09:54:12 -06:00
Oliver Gugger
7761e37522
GitHub: remove generated files from coverage 2025-02-28 14:55:34 +01:00
Oliver Gugger
124137e31a
GitHub+make: debug failing test, use official coveralls action 2025-02-28 14:55:34 +01:00
Oliver Gugger
576da75a07
multi: remove unneeded env variables
With Go 1.23 we don't need to set any of these variables anymore, as
they're the default values now.
2025-02-28 14:55:34 +01:00
Oliver Gugger
70ac201cb8
make+tools: remove goacc, use Go 1.20 builtin functionality
Starting with Go 1.20 the -coverprofile flag does the same as GOACC
did before.
2025-02-28 14:55:33 +01:00
Oliver Gugger
dc0ba72271
Merge pull request #9566 from yyforyongyu/improve-itest
lntest+itest: change the method signature of `AssertPaymentStatus`
2025-02-28 07:52:49 -06:00
Oliver Gugger
a78f9f6c0a
GitHub+docs: no longer accept typo fixes to fight PR spam 2025-02-28 13:02:49 +01:00
yyforyongyu
2d5a2ce78a
lntest+itest: change the method signature of AssertPaymentStatus
So this can be used in other tests when we only care about the payment
hash.
2025-02-28 19:07:38 +08:00
Oliver Gugger
dfd43c972c
Merge pull request #9519 from fuyangpengqi/master
refactor: use a more straightforward return value
2025-02-28 04:41:18 -06:00
fuyangpengqi
150f72414a refactor: use a more straightforward return value
Signed-off-by: fuyangpengqi <995764973@qq.com>
2025-02-28 17:09:56 +08:00
Oliver Gugger
8532955b35
Merge pull request #9540 from ellemouton/backwardsCompat
scripts+make+GH: Add simple backwards compatibility test to the CI
2025-02-27 05:05:03 -06:00
Elle Mouton
b3133b99d4
docs: add release note entry 2025-02-27 11:33:15 +02:00
Elle Mouton
343bdff26b
make+gh: add make helper and GH action
Add a makefile helper to run the new backwards compatibility test and
then add a new GH actions job to call it.
2025-02-27 11:33:15 +02:00
Elle Mouton
f0d4ea10a2
scripts/bw-compatibility-test: add backwards compat test
In this commit, a new backwards compatibility test is added. See the
added README.md file in this commit for all the info.
2025-02-27 11:33:15 +02:00
Oliver Gugger
e3d9fcb5ac
Merge pull request #9549 from yyforyongyu/fix-bitcond-test
Fix unit test flake `TestHistoricalConfDetailsTxIndex`
2025-02-26 07:23:30 -06:00
yyforyongyu
cfa4341740
routing: fix flake in TestFilteredChainView/bitcoind_polling 2025-02-26 19:51:53 +08:00
yyforyongyu
56fa3aae34
routing/chainview: refactor TestFilteredChainView
So each test has its own miner and chainView.
2025-02-26 19:51:53 +08:00
yyforyongyu
4bfcfea2ee
lntest+chainntnfs: make sure bitcoind node is synced
This commit makes sure the bitcoind node is synced to the miner when
initialized and returned from `NewBitcoindBackend`.
2025-02-26 19:51:52 +08:00
yyforyongyu
99d49dec6a
multi: change NewBitcoindBackend to take a miner as its param
Prepares for the following commit where we assert the chain backend is
synced to the miner.
2025-02-26 19:51:52 +08:00
yyforyongyu
c725ba9f25
docs: update release notes re flake fix 2025-02-26 19:51:52 +08:00
yyforyongyu
1618d2c789
chainntnfs+lntest: fix sync to miner block flake
In this commit we document an unexpected behavior found when connecting
a bitcoind node to a btcd node. We mitigate this in our test by
reconnecting the nodes when the connection is broken. We also limit the
connection made from `bitcoind` to be v1 only.
2025-02-26 19:51:52 +08:00
yyforyongyu
175301628f
lntest/unittest: make sure miner is connected to bitcoind
We change how the `bitcoind` node connects to the miner by creating a
temp RPC client and using it to connect to the miner. In addition, we
assert that the connection is made.
2025-02-25 21:11:26 +08:00
yyforyongyu
245ea85894
lntest/unittest: assert bitcoind is shut down
Make sure the shutdown of `bitcoind` is finished without any errors.
2025-02-25 21:11:25 +08:00
yyforyongyu
7666d62a43
lntest/unittest: update config for miner and bitcoind
The config used for the miner is updated to skip banning and debug. For
bitcoind, the config is updated to use a unique port for P2P conn.
2025-02-25 21:10:40 +08:00
yyforyongyu
fa8527af09
Makefile+scripts: add unit test flake hunter
This commit adds a script to hunt for flakes in a specific unit test with
trace logs. Also renames the make commands to make it clearer whether a
target runs a unit test, an itest, or a parallelized itest.
2025-02-25 21:10:40 +08:00
Oliver Gugger
5d3680a6f6
Merge pull request #9484 from Abdulkbk/refactor-makedir
lnutils: add CreateDir util function
2025-02-25 01:00:45 -06:00
Oliver Gugger
b8c5e85821
Merge pull request #8900 from guggero/go-cc
Makefile: add GOCC variable
2025-02-25 00:59:14 -06:00
Yong
fca9fae2d8
Merge pull request #9433 from hieblmi/estimate-route-fee-fix
routerrpc: fix estimateroutefee for public route hints
2025-02-25 13:58:43 +08:00
Slyghtning
f867954a68
docs: update release notes 2025-02-24 09:56:24 +01:00
Slyghtning
6399d77c18
routerrpc: fix estimateroutefee for public route hints 2025-02-24 09:56:22 +01:00
Oliver Gugger
ad021a290d
Makefile: add GOCC variable 2025-02-23 09:48:54 +01:00
Oliver Gugger
5fe900d18d
Merge pull request #9534 from ellemouton/graph13
graph: refactor Builder network message handling
2025-02-21 08:38:35 -06:00
Elle Mouton
c89b616e7d
graph: refactor Builder network message handling
The exposed AddNode, AddEdge and UpdateEdge methods of the Builder are
currently synchronous: even though they pass messages to the network
handler, which spins off the handling in a goroutine, the public
methods still wait for a response from the handling before returning.
The only part that is actually done asynchronously is the topology
notifications.

We previously tried to simplify things in [this
commit](d757b3bcfc)
but we soon realised that there was a reason for sending the messages to
the central/synchronous network handler first: it was to ensure
consistency for topology clients, i.e. the ordering between when a new
topology client is added and when one is cancelled needs to be consistent
and handled synchronously with new network updates. So for example, if a
new update comes in right after a topology client cancels its
subscription, then it should _not_ be notified. Similarly for new
subscriptions. So
this commit was reverted soon after.

We can, however, still simplify things as is done in this commit by
noting that _only topology subscriptions and notifications_ need to be
handled separately. The actual network updates do not need to. So that
is what is done here.

This refactor will make moving the topology subscription logic to a new
subsystem later on much easier.
2025-02-21 10:39:00 -03:00
Oliver Gugger
1227eb1cce
Merge pull request #9491 from ziggie1984/closechannel-rpc
Allow coop closing a channel with HTLCs on it via lncli
2025-02-21 05:05:53 -06:00
Olaoluwa Osuntokun
27440e8957
Merge pull request #9535 from guggero/remove-caching
GitHub: remove duplicate caching
2025-02-20 16:57:55 -08:00
Olaoluwa Osuntokun
553899bffb
Merge pull request #9447 from yyforyongyu/yy-sweeper-fix
sweep: start tracking input spending status in the fee bumper
2025-02-20 16:56:45 -08:00
Oliver Gugger
dc64ea97a2
GitHub: remove duplicate caching
Turns out that actions/setup-go starting with @v4 also adds caching.
With that, our cache size on disk has almost doubled, leading to the
GitHub runner running out of space in certain situations.
We fix that by disabling the automated caching since we already have our
own, custom-tailored version.
2025-02-20 18:14:29 +01:00
ziggie
8017139df5
docs: add release-notes 2025-02-20 17:43:18 +01:00
ziggie
f994c2cb9f
htlcswitch: fix log output 2025-02-20 17:43:18 +01:00
ziggie
f458844412
lnd: add max fee rate check to closechannel rpc 2025-02-20 17:43:18 +01:00
ziggie
59443faa36
multi: coop close with active HTLCs on the channel
For the lncli cmd we now always initiate the coop close even if
there are active HTLCs on the channel. In case HTLCs are on the
channel and the coop close is initiated, LND handles the closing
flow in the background and the lncli cmd will block until the
transaction is broadcast to the mempool. In the background LND
disallows any new HTLCs and waits until all HTLCs are resolved
before kicking off the negotiation process.
Moreover, if active HTLCs are present and the no_wait param is not
set, the error msg now highlights this so the user can react
accordingly.
2025-02-20 17:43:18 +01:00
Abdullahi Yunus
45a913ee91
lnutils: add createdir util function
This utility function replaces repetitive logic patterns
throughout LND.
2025-02-20 17:08:21 +01:00
Oliver Gugger
09a4d7e224
Merge pull request #9530 from ziggie1984/fix-debug-log
multi: fix debug log
2025-02-20 09:28:14 -06:00
yyforyongyu
9f7e2bfd96
contractcourt: fix errorlint 2025-02-20 23:14:12 +08:00
ziggie
9382fcb801
multi: fix debug log 2025-02-20 10:44:19 +01:00
yyforyongyu
7ab0e15937
sweep: fix error logging 2025-02-20 14:41:52 +08:00
yyforyongyu
353f208031
sweep: refactor IsOurTx to not return an error
Before this commit, the only error returned from `IsOurTx` was when the
root bucket had not been created. In that case, we should consider the tx
to be not found in our db, since technically our db is empty.

A future PR may consider treating our wallet as the single source of
truth and querying the wallet instead to check for past sweeping txns.
2025-02-20 14:41:52 +08:00
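A toy sketch of the new semantics, using a simplified in-memory store rather than lnd's kvdb-backed sweeper store: a missing root bucket simply means the tx is not ours, so no error is needed.

```
package main

import "fmt"

// toyStore is a simplified stand-in for the kvdb-backed sweeper store; the
// nested-map layout is an assumption for illustration only.
type toyStore struct {
	buckets map[string]map[string]struct{}
}

// isOurTx reports whether the store knows about the given sweeping txid. If
// the root bucket hasn't been created yet, the store is effectively empty,
// so we report "not ours" instead of returning an error, mirroring the
// refactor described above.
func (s *toyStore) isOurTx(txid string) bool {
	rootBucket, ok := s.buckets["sweeper-txns"]
	if !ok {
		// No root bucket yet: nothing has ever been swept, so the tx
		// cannot be ours.
		return false
	}

	_, found := rootBucket[txid]
	return found
}

func main() {
	empty := &toyStore{buckets: map[string]map[string]struct{}{}}
	fmt.Println(empty.isOurTx("deadbeef")) // false, no error needed

	populated := &toyStore{buckets: map[string]map[string]struct{}{
		"sweeper-txns": {"deadbeef": {}},
	}}
	fmt.Println(populated.isOurTx("deadbeef")) // true
}
```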
yyforyongyu
8d49246a54
docs: add release notes 2025-02-20 14:41:50 +08:00
yyforyongyu
c61f781be7
itest: split up force close tests
So we can focus on testing normal flow vs persistence flow.
2025-02-20 14:40:54 +08:00
yyforyongyu
74161f0d57
sweep: make sure recovered inputs are retried
Previously, when a given input was found spent in the mempool, we'd mark
it as Published and never offer it to the fee bumper. This is dangerous
as the input will never be fee bumped. We now fix it by always
initializing the input with state Init, and only using the mempool to
check for the fee and fee rate.

This changes the current restart behavior: previously, if a sweeping tx
was broadcast and the node shut down, then when it started again the
input would be offered to the sweeper again, but not to the fee bumper,
which means the sweeping tx would stay in the mempool with the last-tried
fee rate. After this change, after a restart the input will be swept
again, and the fee bumper will monitor its status. The restart will also
behave like a fee bump if there's already an existing sweeping tx in the
mempool.
2025-02-20 14:40:54 +08:00
yyforyongyu
4bd1a344b9
sweep: signal tx in markInputFatal
This commit adds the failed tx to the result when marking the input as
fatal, which is used in the commit resolver when handling revoked
outputs.
2025-02-20 14:40:54 +08:00
yyforyongyu
b184afe227
sweep: handle missing inputs during fee bumping
This commit handles the case when the input is missing during the RBF
process, which could happen when the bumped tx has inputs being spent by
a third party. Normally we should be able to catch the spend early via
the spending notification and never attempt to fee bump the record.
However, due to the possible race between block notification and spend
notification, this cannot be guaranteed. Thus, we need to handle the
case during the RBF when seeing an `ErrMissingInputs`, which can only
happen when the inputs are spent by others.
2025-02-20 14:40:53 +08:00
yyforyongyu
4f469de18e
sweep: refactor handleInitialTxError and createAndCheckTx
This commit refactors `handleInitialTxError` and `createAndCheckTx` to
take a `monitorRecord` param, which prepares for the following commit
where we start handling missing inputs.
2025-02-20 14:40:53 +08:00
yyforyongyu
f614e7aed9
sweep: add createUnknownSpentBumpResult
A minor refactor to break the method `handleUnknownSpent` into two
steps, which prepares for the following commit where we start handling
missing inputs.
2025-02-20 14:40:53 +08:00
yyforyongyu
db8319d70b
sweep: add method handleReplacementTxError
This is a minor refactor so the `createAndPublishTx` flow becomes
clearer; it also prepares for the following commit where we start to handle
missing inputs.
2025-02-20 14:40:53 +08:00
yyforyongyu
42818949dc
sweep: retry sweeping inputs upon TxUnknownSpend
We now start handling `TxUnknownSpend` in our sweeper to make sure the
failed inputs are retried when possible.
2025-02-20 14:40:53 +08:00
yyforyongyu
2f1205a394
sweep: start tracking inputs spent by unknown tx
This commit adds a new field `InputsSpent` to the `BumpResult` so it
can be used to track inputs spent by txns not recognized by the fee
bumper.
2025-02-20 14:40:53 +08:00
yyforyongyu
388183e173
itest: add fee replacement test 2025-02-20 14:40:52 +08:00
yyforyongyu
db351e1908
sweep: rename methods for clarity
We now rename "third party" to "unknown" as the inputs can be spent via
an older sweeping tx, a third party (anchor), or a remote party (pin).
In the fee bumper we don't have the info to distinguish the above cases, and
leave them to be further handled by the sweeper as it has more context.
2025-02-20 14:40:52 +08:00
yyforyongyu
121116cff7
sweep: remove dead code and add better logging 2025-02-20 14:40:52 +08:00
yyforyongyu
50bc191feb
sweep: handle unknown spent in processRecords
This commit refactors `processRecords` to always handle the inputs
spent when processing the records. We now make sure to handle unknown
spends for all backends (previously only neutrino), and rely solely on
the spending notification to give us the onchain status of inputs.
2025-02-20 14:40:52 +08:00
yyforyongyu
61cec43951
sweep: add a new event TxUnknownSpend 2025-02-20 14:40:52 +08:00
yyforyongyu
8c9ba327cc
sweep: add method getSpentInputs
To track the input and its spending tx, which will be used later to
detect unknown spends.
2025-02-20 14:40:52 +08:00
Oliver Gugger
0e87863481
Merge pull request #9523 from ellemouton/graph9
graph/db: use View directly instead of manual DB tx creation and commit
2025-02-19 01:55:18 -06:00
Oliver Gugger
31418e4b85
Merge pull request #9525 from ellemouton/graph10
graph: Restrict interface to update channel proof instead of entire channel
2025-02-18 11:34:38 -06:00
Elle
f9d29f90cd
Merge pull request #9513 from ellemouton/graph5
graph+routing: refactor to remove `graphsession`
2025-02-18 11:54:24 -03:00
Elle Mouton
9df8773163
graph: update proof by ID
This commit restricts the graph CRUD interface such that one can only
add a proof to a channel announcement and not update any other fields.
This pattern is better suited for SQL land too.
2025-02-18 11:05:28 -03:00
Elle Mouton
e58abbf0e5
graph/db: fix edges bucket check
This commit fixes a bug where we would check whether the passed `edge`
argument of UpdateChannelEdge is nil when we should actually be checking
whether the `edges` bucket is nil.
2025-02-18 11:04:27 -03:00
Elle Mouton
03899ab1f9
graph/db: use View directly instead of manual Tx handling
In this commit, we use the available kvdb `View` method directly for
starting a graph session instead of manually creating and committing the
transaction. Note this changes behaviour since failed tx create/commit
will now error instead of just log.
2025-02-18 10:18:55 -03:00
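The shape of the change, illustrated here with bbolt directly (lnd's kvdb package wraps a similar walletdb interface); this is a sketch, not the actual graph code: `View` runs the read-only callback and surfaces transaction setup/teardown failures as returned errors instead of just logging them.

```
package main

import (
	"log"

	bolt "go.etcd.io/bbolt"
)

// readGraph performs all graph-session reads inside a single read-only
// transaction managed by View, so there is no manual begin/commit to forget.
func readGraph(db *bolt.DB) error {
	return db.View(func(tx *bolt.Tx) error {
		// ... perform all graph-session reads against tx here ...
		_ = tx.Bucket([]byte("graph-edges")) // bucket name is illustrative
		return nil
	})
}

func main() {
	db, err := bolt.Open("graph.db", 0600, nil)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	if err := readGraph(db); err != nil {
		log.Fatal(err)
	}
}
```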
Elle Mouton
f3805002ff
graph/db: unexport methods that take a transaction
Unexport and rename the methods that were previously used by the
graphsession package.
2025-02-18 10:15:55 -03:00
Elle Mouton
e004447da6
multi: remove the need for the graphsession package
In this commit, we add a `GraphSession` method to the `ChannelGraph`.
This method provides a caller with access to a `NodeTraverser`. This is
used by pathfinding to create a graph "session" over which to perform a
set of queries for a pathfinding attempt. With this refactor, we hide
details such as DB transaction creation and transaction commits from the
caller. So with this, pathfinding does not need to remember to "close
the graph session". With this commit, the `graphsession` package may be
completely removed.
2025-02-18 10:15:41 -03:00
Elle Mouton
dfe2314a2a
routing: refactor pathfinding loop
In preparation for the next commit, where we will start hiding underlying
graph details such as the fact that a graph session needs to be "closed" after
pathfinding is done with it, we refactor things here so that the main
pathfinding logic is done in a call-back.
2025-02-18 10:11:34 -03:00
Elle Mouton
99c9440520
graph/db: define a NodeTraverser interface
Which describes methods that will use the graph cache if it is available
for fast read-only calls.
2025-02-18 10:10:04 -03:00
Elle Mouton
8ec08fbfa4
multi: remove the need for NewRoutingGraph
The `graphsession.NewRoutingGraph` method was used to create a
RoutingGraph instance with no consistent read transaction across calls.
But now that the ChannelGraph directly implements this, we can remove
the NewRoutingGraph method.
2025-02-18 07:59:57 -03:00
Elle Mouton
5d5cfe36c7
routing: rename routing Graph method
In preparation for having the ChannelGraph directly implement the
`routing.Graph` interface, we rename the `ForEachNodeChannel` method to
`ForEachNodeDirectedChannel`, since the ChannelGraph already uses the
`ForEachNodeChannel` name. The new name is also more appropriate, since
the ChannelGraph currently has a `ForEachNodeDirectedChannelTx` method
which passes the same DirectedChannel type to the given call-back.
2025-02-18 07:59:57 -03:00
Elle Mouton
971832c792
graph: temporarily rename some ChannelGraph methods
Add the `Tx` suffix to both ForEachNodeDirectedChannelTx and
FetchNodeFeatures temporarily so that we free up the original names for
other use. The renamed methods will be removed or unexported in an
upcoming commit. The aim is to have no exported methods on the
ChannelGraph that accept a kvdb.RTx as a parameter.
2025-02-18 07:59:57 -03:00
Elle Mouton
9068ffcd8b
graph: let FetchNodeFeatures take an optional read tx
For consistency in the graphsession.graph interface, we let the
FetchNodeFeatures method take a read transaction just like the
ForEachNodeDirectedChannel. This is nice because then all calls in the
same pathfinding transaction use the same read transaction.
2025-02-18 07:59:57 -03:00
Oliver Gugger
b08bc99945
Merge pull request #9518 from starius/bumpfee-immediate-doc-fix
walletrpc: fix description of bumpfee.immediate
2025-02-18 02:02:36 -06:00
Olaoluwa Osuntokun
5ee81e1876
Merge pull request #9512 from Roasbeef/go-1-23
multi: update build system to Go 1.23
2025-02-17 16:45:12 -08:00
Boris Nagaev
0916f3e9b3
walletrpc: fix description of bumpfee.immediate
It waits for the next block and sends CPFP even if there are no other
inputs to form a batch.
2025-02-17 13:33:16 -03:00
Oliver Gugger
01819f0d42
Merge pull request #9516 from ellemouton/graph6
invoicesrpc: remove direct access to ChannelGraph pointer
2025-02-17 07:40:56 -06:00
Yong
65a18c5b35
Merge pull request #9515 from ellemouton/graphFixFlake
graph: ensure topology subscriber handling and network msg handling is synchronous
2025-02-17 20:48:05 +08:00
Oliver Gugger
fa10991545
multi: fix linter by updating to latest version 2025-02-17 10:40:55 +01:00
Olaoluwa Osuntokun
c40ea0cefc
invoices: fix new linter error w/ Go 1.23 2025-02-13 17:09:28 -08:00
Olaoluwa Osuntokun
bf192d292f
docs: update INSTALL.md with Go 1.23 instructions 2025-02-13 16:57:07 -08:00
Olaoluwa Osuntokun
7bc88e8360
build: update go.mod to use Go 1.23
This enables us to use the new language features that are a part of this
release.
2025-02-13 16:57:06 -08:00
Olaoluwa Osuntokun
61852fbe95
multi: update build system to Go 1.23 2025-02-13 16:57:06 -08:00
Oliver Gugger
68105be1eb
Merge pull request #9507 from saubyk/18.5-releasenotes-patch
Update release-notes-0.18.5.md
2025-02-13 13:03:23 -06:00
Suheb
5b93ab6a8f
Update release-notes-0.18.5.md and release-notes-0.19.0.md 2025-02-13 10:45:38 -08:00
Oliver Gugger
319a0ee470
Merge pull request #9493 from ziggie1984/lncli-no-replacement
For some lncli cmds we should not replace the content with other data
2025-02-13 09:52:52 -06:00
Oliver Gugger
2ec972032e
Merge pull request #9517 from yyforyongyu/fix-coverage
workflows: skip coverage error in job `finish`
2025-02-13 09:11:57 -06:00
yyforyongyu
2b0c25e31d
workflows: skip coverage error in job finish
Also updates the coverage action used.
2025-02-13 22:05:38 +08:00
Oliver Gugger
d5ac05ce87
Merge pull request #9501 from yyforyongyu/getinfo-blockheight
rpcserver: check `blockbeatDispatcher` when deciding `isSynced`
2025-02-13 04:17:45 -06:00
Elle Mouton
da43541c63
invoicesrpc: remove direct access to ChannelGraph pointer 2025-02-13 11:45:09 +02:00
Elle Mouton
f39a004662
Revert "graph: refactor Builder network message handling"
This reverts commit d757b3bcfc.
2025-02-13 11:19:07 +02:00
Yong
9c2c95d46f
Merge pull request #9478 from ellemouton/graph3
discovery+graph:  move funding tx validation to the gossiper
2025-02-13 15:00:14 +08:00
ziggie
3562767b23
lncli: only add additional info to specific cmds.
We only append the chan_id and the human readable scid
for the commands `listchannels` and `closedchannels`. This
ensures that other commands like `buildroute` are not affected
by this change so their output can be piped into other cmds.

For some cmds it is not very practical to replace the json output
because we might pipe it into other commands. For example when
creating the route we want to pipe the result of buildroute into
sendtoRoute.
2025-02-12 23:29:56 +01:00
Elle Mouton
e5db0d6314
graph+discovery: move funding tx validation to gossiper
This commit is a pure refactor. We move the transaction validation
(existence, spentness, correctness) from the `graph.Builder` to the
gossiper since this is where all protocol level checks should happen.
All tests involved are also updated/moved.
2025-02-12 15:48:08 +02:00
Elle Mouton
39bb23ea5e
discovery: lock the channelMtx before making the funding script
As we move the funding transaction validation logic out of the builder
and into the gossiper, we want to ensure that the behaviour stays
consistent with what we have today. So we should acquire this lock before
performing any expensive checks such as building the funding tx or
validating it.
2025-02-12 13:59:09 +02:00
Oliver Gugger
693b3991ee
Merge pull request #9508 from ellemouton/ignoreSendCoverageFail
.github: ignore Send coverage errors
2025-02-12 05:23:29 -06:00
Elle Mouton
7853e36488
graph+discovery: calculate funding tx script in gossiper
In preparation for an upcoming commit which will move all channel
funding tx validation to the gossiper, we first move the helper method
which helps build the expected funding transaction script based on the
fields in the channel announcement. We will still want this script later
on in the builder for updating the ChainView though, and so we pass this
field along with the ChannelEdgeInfo. With this change, we can remove
the TapscriptRoot field from the ChannelEdgeInfo since the only reason
it was there was so that the builder could reconstruct the full funding
script.
2025-02-12 13:15:54 +02:00
Elle Mouton
8a07bb0950
discovery: prepare tests for preparing the mock chain
Here, we add a new fundingTxOption modifier which will configure how we
set up expected calls to the mock Chain once we have moved funding tx
logic to the gossiper. Note that in this commit, these modifiers don't
yet do anything.
2025-02-12 13:15:54 +02:00
Elle Mouton
22e391f055
discovery: add AssumeChannelValid config option
in preparation for later on when we need to know when to skip funding
transaction validation.
2025-02-12 13:15:54 +02:00
Elle Mouton
00f5fd9b7f
graph: add IsZombieEdge method
This is in preparation for the commit where we move across all the
funding tx validation so that we can test that we are correctly updating
the zombie index.
2025-02-12 13:15:54 +02:00
Elle Mouton
870c865763
graph: export addZombieEdge and rename to MarkZombieEdge
The `graph.Builder`'s `addZombieEdge` method is currently called during
funding transaction validation for the case where the funding tx is not
found. In preparation for moving this code to the gossiper, we export
the method and add it to the ChannelGraphSource interface so that the
gossiper will be able to call it later on.
2025-02-12 13:15:53 +02:00
Elle Mouton
a67df6815c
.github: ignore Send coverage errors
Sometimes only the "Send coverage" step of a CI job will fail. This
commit turns this step into a "best effort" step instead so that it
does not block a CI job from passing.

It can, for example, often happen that a single job is re-run from the
GH UI and then passes, but the "Send coverage" step fails due to the
"Can't add a job to a build that is already closed." error, meaning that
the only way to get the CI step to pass is to re-push and retrigger a
full CI run.
2025-02-12 12:34:02 +02:00
Oliver Gugger
4dbbd837c0
Merge pull request #9496 from ellemouton/graph4
graph: remove redundant iteration through a node's persisted channels
2025-02-12 03:36:32 -06:00
Olaoluwa Osuntokun
7b294311bc
Merge pull request #9150 from yyforyongyu/fix-stuck-payment
routing+htlcswitch: fix stuck inflight payments
2025-02-11 18:02:02 -08:00
yyforyongyu
759dc2066e
docs: update release notes 2025-02-12 09:48:03 +08:00
yyforyongyu
59759f861f
rpcserver: check blockbeatDispatcher when deciding isSynced
This commit changes `GetInfo` to include `blockbeatDispatcher`'s current
state when deciding whether the system is synced to chain. Previously we
checked the best height against the wallet and the channel graph; we
should also do this for the blockbeat dispatcher to make sure the
internal consumers are synced to the best block as well.
2025-02-12 09:48:02 +08:00
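A minimal sketch of the extended check, with plain integer heights standing in for lnd's actual wallet, graph and dispatcher queries:

```
package main

import "fmt"

// isSynced sketches the check described above: only report the node as
// synced to chain when the wallet, the channel graph, and the blockbeat
// dispatcher have all caught up to the best known block height. The plain
// int32 heights are an assumption for illustration, not lnd's actual API.
func isSynced(walletHeight, graphHeight, dispatcherHeight, bestHeight int32) bool {
	return walletHeight >= bestHeight &&
		graphHeight >= bestHeight &&
		dispatcherHeight >= bestHeight
}

func main() {
	// The dispatcher lagging one block behind means GetInfo would now
	// report the node as not synced until it catches up.
	fmt.Println(isSynced(800_000, 800_000, 799_999, 800_000)) // false
	fmt.Println(isSynced(800_000, 800_000, 800_000, 800_000)) // true
}
```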
yyforyongyu
89c4a8dfd7
chainio: add method CurrentHeight
Add a new method `CurrentHeight` to query the current best height of the
dispatcher.
2025-02-12 09:48:02 +08:00
Oliver Gugger
fbc668ca53
Merge pull request #9500 from MPins/release-note-0.20.0
doc: creating release-notes-0.20.0.md
2025-02-11 02:22:38 -06:00
Elle Mouton
5c2c00e414
graph/db: remove GraphCacheNode interface
With the previous commit, the AddNode method was removed, and since that
was the only method making use of ForEachChannel on the
GraphCacheNode interface, we can remove that method too. Since the only
two methods left just expose the node's pub key and features, the
interface really is not required anymore and so the entire thing can be
removed along with its implementation.
2025-02-11 08:19:33 +02:00
MPins
097c262341 doc: creating file release-notes-0.20.0.md 2025-02-10 23:50:04 -03:00
Elle Mouton
90179b651e
graph/db: remove unnecessary AddNode method on GraphCache
The AddNode method on the GraphCache calls `AddNodeFeatures` underneath
and then iterates through all the node's persisted channels and adds
them to the cache too via `AddChannel`.

This is, however, not required since at the time the cache is populated
in `NewChannelGraph`, the cache is populated with all persisted nodes
and all persisted channels. Then, once any new channels come in via
`AddChannelEdge`, they are added to the cache via AddChannel. If any new
nodes come in via `AddLightningNode`, then currently the cache's AddNode
method is called, which both adds the node and again iterates through
all persisted channels and re-adds them to the cache. This is definitely
redundant since the initial cache population and updates via
AddChannelEdge should keep the cache fresh in terms of channels.

So we remove this for 2 reasons: 1) to remove the redundant DB calls and
2) because this requires a kvdb.RTx to be passed in to the GraphCache
calls, which will make it hard to extract the cache out of the CRUD layer
and use it more generally.

The AddNode method made sense when the cache was first added in the
code-base
[here](369c09be61 (diff-ae36bdb6670644d20c4e43f3a0ed47f71886c2bcdf3cc2937de24315da5dc072R213))
since, at that time, during graph cache population, nodes and channels would be
added to the cache in a single DB transaction. This was, however,
changed [later
on](352008a0c2)
to be done in 2 separate DB calls for efficiency reasons.
2025-02-10 17:10:53 +02:00
Oliver Gugger
d10ab03b75
Merge pull request #9480 from ellemouton/autopilotRefactor
graph+autopilot: remove `autopilot` access to raw `graphdb.ChannelGraph`
2025-02-10 09:07:47 -06:00
Oliver Gugger
2a0dca77a0
Merge pull request #9495 from ziggie1984/fix-graphbuilder-flake
fix graphbuilder flake
2025-02-10 08:43:59 -06:00
ziggie
6373d84baf
graph: fix flake in unit test 2025-02-10 14:07:04 +01:00
Oliver Gugger
6eb8f1f6e3
Merge pull request #9477 from ellemouton/graph2
discovery+graph: various preparations for moving funding tx validation to the gossiper
2025-02-10 05:58:05 -06:00
Oliver Gugger
6f312d457f
Merge pull request #9451 from Juneezee/minmax
refactor: replace min/max helpers with built-in min/max
2025-02-10 05:57:34 -06:00
Oliver Gugger
6bf6603fb8
Merge pull request #9492 from yyforyongyu/itest-flake-interceptor
itest: fix flake in `testForwardInterceptorRestart`
2025-02-10 05:50:13 -06:00
Elle Mouton
3d0ae966c8
docs: update release notes 2025-02-10 09:46:15 +02:00
Elle Mouton
e7988a2c2b
autopilot: remove access to *graphdb.ChannelGraph
Define a new GraphSource interface that describes the access required by
the autopilot server. Let its constructor take this interface instead of
a raw pointer to the graphdb.ChannelGraph.
2025-02-10 09:46:15 +02:00
Elle Mouton
9b86ee53db
graph+autopilot: let autopilot use new graph ForEachNode method
Which passes a NodeRTx to the call-back instead of a `kvdb.RTx`.
2025-02-10 09:46:15 +02:00
Elle Mouton
14cedef58e
graph/db: add NodeRTx interface and implement it
In this commit, a new NodeRTx interface is added which represents
consistent access to a persisted models.LightningNode. The
ForEachChannel method of the interface gives the caller access to the
node's channels under the same read transaction (if any) that was used
to fetch the node in the first place. The FetchNode method returns
another NodeRTx which again will have the same underlying read
transaction.

The main point of this interface is to provide this consistent access
without needing to expose the `kvdb.RTx` type as a method parameter.
This will then make it much easier in future to add new implementations
of this interface that are backed by other databases (or RPC
connections) where the `kvdb.RTx` type does not apply.

We will make use of the new interface in the `autopilot` package in
upcoming commits in order to remove the `autopilot`'s dependence on the
pointer to the `*graphdb.ChannelGraph` which it has today.
2025-02-10 08:23:58 +02:00
Elle Mouton
3e5d807773
autopilot: add testDBGraph type
Introduce a new type for testing code so the main databaseChannelGraph
type does not need to make various write calls to the
`graphdb.ChannelGraph` but the new testDBGraph type still can for tests.
2025-02-10 08:17:37 +02:00
Elle Mouton
1184c9eaf8
autopilot: move tests code to test files
This is a pure code move commit where we move any code that is only ever
used by tests to test files. Many of the calls to the
graphdb.ChannelGraph pointer are only coming from test code.
2025-02-10 08:16:34 +02:00
Elle Mouton
7cf5b5be02
graph: remove unused ForEachNode method from Builder
And from various interfaces where it is not needed.
2025-02-10 08:16:34 +02:00
yyforyongyu
e45e1f2b0e
itest: fix flake in testForwardInterceptorRestart
We need to make sure the links are recreated before calling the
interceptor, otherwise we'd get the following error, causing the test to
fail.
```
2025-02-07 15:01:38.991 [ERR] RPCS interceptor.go:624: [/routerrpc.Router/HtlcInterceptor]: fwd (Chan ID=487:3:0, HTLC ID=0) not found
```
2025-02-10 12:24:23 +08:00
yyforyongyu
58e76b726e
routing: add docs and minor refactor decideNextStep
Add verbose docs and refactor the method to exit early when `allow` is
true.
2025-02-10 11:36:18 +08:00
yyforyongyu
ca10707b26
itest+lntest: assert payment is failed once the htlc times out 2025-02-10 11:31:51 +08:00
yyforyongyu
faa3110127
docs: update release notes 2025-02-10 11:31:50 +08:00
yyforyongyu
5a62528fd7
routing: improve loggings for attempt result handling 2025-02-10 11:31:50 +08:00
yyforyongyu
1acf4d7d4d
routing: always update payment in the same goroutine
This commit refactors `collectResultAsync` such that this method is now
only responsible for collecting results from the switch. The method
`decideNextStep` is expanded to process these results in the same
goroutine where we fetch the payment from db, to make sure the lifecycle
loop always has a consistent view of a given payment.
2025-02-10 11:29:29 +08:00
yyforyongyu
e1279aab20
routing: add new method reloadPayment
To further shorten the lifecycle loop.
2025-02-10 11:28:23 +08:00
yyforyongyu
1fe2cdb765
routing: move amendFirstHopData into requestRoute 2025-02-10 11:28:23 +08:00
yyforyongyu
966cfccb94
routing: add new method reloadInflightAttempts
To shorten the method `resumePayment` and make each step more clear.
2025-02-10 11:28:23 +08:00
yyforyongyu
f96eb50ca8
channeldb+routing: cache circuit and onion blob for htlc attempt
This commit caches the creation of the sphinx circuit and onion blob to
avoid re-creating them.
2025-02-10 11:28:23 +08:00
yyforyongyu
46eb811543
routing: remove redundant GetHash 2025-02-10 11:28:18 +08:00
Olaoluwa Osuntokun
ce8cde6911
Merge pull request #9470 from ziggie1984/fix-sweepInput-bug
Make BumpFee RPC user inputs stricter.
2025-02-07 14:39:31 -08:00
Elle Mouton
d025587135
docs: update existing release notes 2025-02-07 16:29:22 +02:00
Elle Mouton
011d819315
discovery: update chanAnn creation methods to take modifier options
In preparation for adding more modifiers. We want to later add a
modifier that will tweak the errors returned by the mock chain once
funding transaction validation has been moved to the gossiper.
2025-02-07 16:29:19 +02:00
Elle Mouton
b6210632f2
discovery: prep testCtx with a mock Chain
This is in preparation for moving the funding transaction validation
code to the gossiper from the graph.Builder since then the gossiper will
start making GetBlockHash/GetBlock and GetUtxo calls.
2025-02-07 16:28:39 +02:00
Elle Mouton
c354fd8f70
lnmock: let MockChain also implement lnwallet.BlockChainIO
Note that a compile-time assertion was not added as this leads to an
import cycle.
2025-02-07 16:28:00 +02:00
ziggie
022b3583b4
docs: update release-notes 2025-02-07 15:14:05 +01:00
Elle Mouton
8f37699db3
discovery: prepare tests for shared chain state
Convert a bunch of the helper functions to instead be methods on the
testCtx type. This is in preparation for adding a mockChain to the
testCtx that these helpers can then use to add blocks and utxos to.

See `notifications_test.go` for an idea of what we are trying to emulate
here. Once the funding tx code has moved to the gossiper, then the logic
in `notifications_test.go` will be removed.
2025-02-07 15:26:34 +02:00
Elle Mouton
b117daaa3c
discovery+graph: convert errors from codes to variables
In preparation for moving funding transaction validation from the
Builder to the Gossiper in a later commit, we first convert these graph
Error Codes to normal error variables. This will help make the later
commit a pure code move.
2025-02-07 15:26:16 +02:00
Oliver Gugger
3c0350e481
Merge pull request #9476 from ellemouton/graph1
graph: refactor `graph.Builder` update handling
2025-02-07 07:23:41 -06:00
Elle Mouton
a86a5edbd5
docs: update release notes 2025-02-07 13:01:39 +02:00
Elle Mouton
6169b47d65
graph: rename routerStats to builderStats
This logic used to be handled by the router. Update the name to reflect
the new owner.
2025-02-07 13:01:39 +02:00
Elle Mouton
d757b3bcfc
graph: refactor Builder network message handling
The point of the `graph.Builder`'s `networkHandler` goroutine is to
ensure that certain requests are handled in a synchronous fashion.
However, any requests received on the `networkUpdates` channel are
currently immediately handled in a goroutine which calls
`handleNetworkUpdate` which calls `processUpdate` before doing topology
notifications. In other words, there is no reason for these
`networkUpdates` to be handled in the `networkHandler` since they are
always handled asynchronously anyway. This design is most likely due to
the fact that originally the gossiper and graph builder code lived in
the same system and so the pattern was copied across.

So in this commit, we just remove the complexity. The only part we need
to spin off in a goroutine is the topology notifications.
2025-02-07 13:01:35 +02:00
ziggie
34c4d12c71
walletrpc: add new deadline-delta param to bumpfee rpc
Add new parameter deadline-delta to the bumpfee request and only
allow it to be used when the budget value is used as well.
2025-02-07 11:28:41 +01:00
Yong
5b1eaf9978
Merge pull request #9474 from ellemouton/nodeAnnConversion
graph/db: correctly handle de(ser)ialisation of `models.LightningNode` opaque addresses
2025-02-07 18:06:10 +08:00
Elle Mouton
276b335cf5
graph: refactor announcement handling logic
In this commit, we remove the `processUpdate` method which handles each
announcement type (node, channel, channel update) in a separate switch
case. Each of these cases currently has a non-trivial amount of code.
This commit creates separate methods for each message type we want to
handle instead. This removes a level of indentation and will make things
easier to review when we start editing the code for each handler.
2025-02-07 07:30:00 +02:00
Elle Mouton
1974903fb2
multi: move node ann validation code to netann pkg
The `netann` package is a more appropriate place for this code to live.
Also, once the funding transaction code is moved out of the
`graph.Builder`, then no `lnwire` validation will occur in the `graph`
package.
2025-02-07 07:30:00 +02:00
Yong
457a245a4e
Merge pull request #9482 from yyforyongyu/itest-log-ts
lntest: log timestamp when printing errors
2025-02-06 16:16:54 +08:00
Eng Zer Jun
82a221be13
docs: add release note for #9451
Signed-off-by: Eng Zer Jun <engzerjun@gmail.com>
2025-02-06 09:46:41 +08:00
Eng Zer Jun
e56a7945be
refactor: replace math.Min and math.Max with built-in min/max
Reference: https://github.com/lightningnetwork/lnd/pull/9451#pullrequestreview-2580942691
Signed-off-by: Eng Zer Jun <engzerjun@gmail.com>
2025-02-06 09:46:40 +08:00
Eng Zer Jun
0899cee987
refactor: replace min/max helpers with built-in min/max
We can use the built-in `min` and `max` functions since Go 1.21.

Reference: https://go.dev/ref/spec#Min_and_max
Signed-off-by: Eng Zer Jun <engzerjun@gmail.com>
2025-02-06 09:45:10 +08:00
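For reference, a tiny example of the built-ins that replace the helpers (available since Go 1.21):

```
package main

import "fmt"

func main() {
	// Since Go 1.21, min and max are built-ins, so typed helper wrappers
	// around math.Min/math.Max are no longer needed and the float64
	// round trip for integer values is avoided.
	a, b := int64(21_000_000), int64(42)
	fmt.Println(min(a, b)) // 42
	fmt.Println(max(a, b)) // 21000000

	// They also work across any ordered type and accept more than two
	// arguments, e.g. float64 fee rates.
	fmt.Println(max(1.5, 2.25, 0.75)) // 2.25
}
```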
yyforyongyu
5f4716c699
lntest: log timestamp when printing errors 2025-02-05 22:51:46 +08:00
Yong
bac699df8f
Merge pull request #9446 from yyforyongyu/yy-prepare-fee-replace
sweeper: rename `Failed` to `Fatal` and minor refactor
2025-02-05 22:48:41 +08:00
yyforyongyu
b98542bd96
docs: update release notes 2025-02-05 19:49:09 +08:00
yyforyongyu
e5f39dd644
sweep: refactor storeRecord to updateRecord
To make it clear we are only updating fields, which will be handy for
the following commit where we start tracking spending notifications.
2025-02-05 19:49:09 +08:00
yyforyongyu
7eea7a7e9a
sweep: add requestID to monitorRecord
This way we can greatly simplify the method signatures, also paving the
way for the upcoming changes where we want to make it clear that, when
updating the monitorRecord, we only touch a portion of it.
2025-02-05 19:49:04 +08:00
yyforyongyu
bde5124e1b
sweep: shorten storeRecord method signature
This commit shortens the function signature of `storeRecord`, and also
makes sure we don't call `t.records.Store` directly but always use
`storeRecord` instead, so it's easier to trace record creation.
2025-02-05 19:48:18 +08:00
yyforyongyu
c68b8e8c1e
sweep: rename Failed to Fatal
This commit renames `Failed` to `Fatal` as it sounds too close to
`PublishFailed`. We also want to emphasize that inputs in this state won't
be retried.
2025-02-05 19:48:18 +08:00
Elle Mouton
16e2a48d0f
docs: update release notes 2025-02-05 12:41:50 +02:00
Elle Mouton
71b2338d53
graph/db: de(ser)ialise opaque node addrs
In this commit, we fix the bug demonstrated in the prior commit. We
correctly handle the persistence of lnwire.OpaqueAddrs.
2025-02-05 12:41:50 +02:00
Elle Mouton
d68d24d97e
graph/db: demonstrate LightningNode serialisation bug 2025-02-05 08:24:45 +02:00
Elle Mouton
b7509897d5
models: create a helper to convert wire NodeAnn to models.LNNode type
And use it in the gossiper. This helps ensure that we do this conversion
consistently.
2025-02-05 08:20:10 +02:00
Elle Mouton
c5cff4052b
lnwire_test: fix test doc string
This test started out demonstrating a bug. But that bug has since been
fixed. Fix the comment to reflect that.
2025-02-05 08:17:42 +02:00
András Bánki-Horváth
6bf895aeb9
Merge pull request #9469 from bhandras/use-sqldb-1.0.7 2025-02-03 10:31:56 +01:00
András Bánki-Horváth
327eb8d8ca
Merge pull request #9438 from bhandras/invoice-bucket-tombstone
channeldb+lnd: set invoice bucket tombstone after migration
2025-02-01 10:59:51 +01:00
Andras Banki-Horvath
c10b765fff
build: use the tagged 1.0.7 version of sqldb 2025-02-01 10:54:11 +01:00
Olaoluwa Osuntokun
e40324358a
Merge pull request #9459 from ziggie1984/amp-htlc-invoices
invoices: amp invoices bugfix.
2025-01-31 13:11:52 -06:00
Oliver Gugger
d2c0279647
Merge pull request #9456 from mohamedawnallah/deprecate-warning-sendpayment-and-sendtoroute
lnrpc+docs: deprecate warning `SendToRoute`, `SendToRouteSync`, `SendPayment`, and `SendPaymentSync` in Release 0.19
2025-01-31 11:05:27 -06:00
Mohamed Awnallah
d4044c2fb6 docs: update release-notes-0.19.0.md
In this commit, we warn users about the removal
of RPCs `SendToRoute`, `SendToRouteSync`, `SendPayment`,
and `SendPaymentSync` in the next release 0.20.
2025-01-31 16:23:43 +00:00
ziggie
715cafa59a
docs: add release-notes. 2025-01-31 13:10:03 +01:00
ziggie
118261aca4
invoices+channeldb: Fix AMP invoices behaviour.
We now cancel all HTLCs of an AMP invoice as soon as it expires.
Otherwise, because we mark the invoice as cancelled, we would not
allow accepted HTLCs to be resolved via the invoiceEventLoop.
2025-01-31 13:10:02 +01:00
Oliver Gugger
f4bf99b161
Merge pull request #9462 from guggero/go-1-22-11
Update to Go 1.22.11
2025-01-30 10:18:50 -06:00
Oliver Gugger
94efe06495
docs: add release notes 2025-01-30 16:14:56 +01:00
Oliver Gugger
191c838ad1
multi: bump Go version to v1.22.11 2025-01-30 16:13:26 +01:00
Olaoluwa Osuntokun
32cdbb43f6
Merge pull request #9454 from ziggie1984/add_custom_error_msg
Add Custom Error msg and Prioritise replayed HTLCs
2025-01-29 22:48:22 -06:00
ziggie
f4e2f2a396 docs: add release-notes. 2025-01-29 22:45:37 -06:00
ziggie
15e6e35cdb
invoices: fix log entries and add a TODO.
We need to make sure that if we cancel an AMP invoice, we also cancel
back all remaining HTLCs.
2025-01-29 18:21:41 +01:00
ziggie
46f3260924
invoices: make sure the db uses the same testTime. 2025-01-29 18:21:40 +01:00
ziggie
34e56b69e9
invoicerpc: add clarifying comment. 2025-01-29 18:21:40 +01:00
ziggie
c95d73c898
invoices: remove obsolete code for AMP invoices.
We always fetch the HTLCs for the specific setID, so there is no
need to keep this code. In earlier versions we would call the
UpdateInvoice method with `nil` for the setID, therefore we had
to look up the AMPState. However this was error prone because if
one partial payment timed out, the AMPState would change to
cancelled, which could lead to HTLCs not being resolved.
2025-01-29 18:21:40 +01:00
ziggie
0532990a04
invoices: enhance the unit test suite.
The invoiceregistry test suite now also includes unit tests for
multi-part payments, in particular payments to AMP
invoices.
2025-01-29 18:21:40 +01:00
ziggie
17e37bd7c2
multi: introduce new traffic shaper method.
We introduce a new specific fail resolution error when the
external HTLC interceptor denies the incoming HTLC. Moreover
we introduce a new traffic shaper method which moves the
implementation of asset HTLCs to the external layers.
Moreover, itests are adapted to reflect this new change.
2025-01-29 09:59:02 +01:00
ziggie
9ee12ee029
invoices: treat replayed HTLCs beforehand.
We make sure that HTLCs which have already been decided upon
are resolved before allowing the external interceptor to
potentially cancel them back. This makes the implementation for
the external HTLC interceptor more streamlined.
2025-01-29 09:59:02 +01:00
Mohamed Awnallah
8d3611a3bd lnrpc: deprecate legacy RPCs
In this commit, we deprecate `SendToRouteSync`
and `SendPaymentSync` RPC endpoints.
2025-01-29 02:48:22 +00:00
Oliver Gugger
f25e44712f
Merge pull request #9445 from yyforyongyu/itest-flake
itest: fix flake in `testAnchorThirdPartySpend`
2025-01-27 02:12:34 -06:00
Oliver Gugger
1ed76af179
Merge pull request #9442 from ellemouton/miscErrorFormats
misc: fix incorrect inclusion of nil err in various formatted strings
2025-01-26 06:49:31 -06:00
yyforyongyu
bc8c1643c6
itest: move channel force closes into one sub test var 2025-01-26 09:16:02 +08:00
yyforyongyu
e4b205cd90
itest: fix flake in testAnchorThirdPartySpend
When mining lots of blocks in the itest, the subsystems can be out of
sync in terms of the best block height. We now assert the num of pending
sweeps on Alice's node to give her more time to sync the blocks,
essentially behaving like a `time.Sleep` as in reality, the blocks would
never be generated this fast.
2025-01-26 09:15:36 +08:00
Olaoluwa Osuntokun
c3cbfd8fb2
Merge pull request #9241 from Crypt-iQ/fix_vb
discovery+graph: track job set dependencies in vb
2025-01-24 11:21:35 -08:00
Oliver Gugger
baa34b06d3
Merge pull request #9232 from Abdulkbk/archive-channel-backups
chanbackup: archive old channel backup files
2025-01-24 05:42:13 -06:00
Elle Mouton
e3b94e4578
routerrpc: only log TrackPayment error if err is not nil 2025-01-24 12:49:45 +02:00
Elle Mouton
af8c8b4bb3
lntest: remove always-nil error from formatted error 2025-01-24 12:49:41 +02:00
Abdullahi Yunus
3bf15485ce
docs: add release note 2025-01-24 10:58:42 +01:00
Abdullahi Yunus
84f039db45
lnd+chanbackup: add lnd config flag
In this commit, we add the --no-backup-archive flag with a default
of false. When set to true, the previous channel backup file will
not be archived but replaced. We also modify the TestUpdateAndSwap
test to make sure the new behaviour works as expected.
2025-01-24 10:56:15 +01:00
Abdullahi Yunus
a51dfe9a7d
chanbackup: archive old channel backups
In this commit, we first check if a previous backup file exists;
if it does, we copy it to the archive folder before replacing it with
a new backup file. We also add a test for archiving chan backups.
2025-01-24 10:56:14 +01:00
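A rough sketch of the copy-before-replace flow described above; the paths and naming scheme are illustrative assumptions, not chanbackup's actual layout.

```
package main

import (
	"fmt"
	"io"
	"os"
	"path/filepath"
	"time"
)

// archiveThenReplace sketches the flow: if an old backup exists, copy it
// into an archive directory (with a timestamped name) before writing the
// new backup in its place.
func archiveThenReplace(backupPath string, newBackup []byte) error {
	if _, err := os.Stat(backupPath); err == nil {
		archiveDir := filepath.Join(
			filepath.Dir(backupPath), "chan-backup-archives",
		)
		if err := os.MkdirAll(archiveDir, 0700); err != nil {
			return err
		}

		src, err := os.Open(backupPath)
		if err != nil {
			return err
		}
		defer src.Close()

		archiveName := fmt.Sprintf("%s-%d", filepath.Base(backupPath),
			time.Now().Unix())
		dst, err := os.Create(filepath.Join(archiveDir, archiveName))
		if err != nil {
			return err
		}
		defer dst.Close()

		if _, err := io.Copy(dst, src); err != nil {
			return err
		}
	}

	// Now replace (or create) the live backup file.
	return os.WriteFile(backupPath, newBackup, 0600)
}

func main() {
	if err := archiveThenReplace("channel.backup", []byte("new-backup-bytes")); err != nil {
		panic(err)
	}
}
```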
Eugene Siegel
323b633895 graph -> discovery: move ValidationBarrier to discovery 2025-01-23 13:04:39 -05:00
Eugene Siegel
e0e4073bcd release-notes: update for 0.19.0 2025-01-23 11:43:07 -05:00
Eugene Siegel
6a47a501c3 discovery+graph: track job set dependencies in ValidationBarrier
This commit does two things:
- removes the concept of allow / deny. Having this in place was a
  minor optimization and removing it makes the solution simpler.
- changes the job dependency tracking to track sets of abstract
  parent jobs rather than individual parent jobs.

As a note, the purpose of the ValidationBarrier is that it allows us
to launch gossip validation jobs in goroutines while still ensuring
that the validation order of these goroutines is adhered to when it
comes to validating ChannelAnnouncement _before_ ChannelUpdate and
_before_ NodeAnnouncement.
2025-01-23 11:43:07 -05:00
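A minimal, hypothetical sketch of the "sets of parent jobs" idea: child jobs (e.g. a ChannelUpdate) wait on one WaitGroup per parent set rather than on individual parent jobs. None of these names mirror the actual ValidationBarrier API.

```
// Hypothetical sketch, not the lnd ValidationBarrier API.
package barrier

import "sync"

// JobSetBarrier tracks one WaitGroup per set of abstract parent jobs
// (e.g. all ChannelAnnouncement jobs for a given channel).
type JobSetBarrier struct {
	mu   sync.Mutex
	sets map[string]*sync.WaitGroup
}

func New() *JobSetBarrier {
	return &JobSetBarrier{sets: make(map[string]*sync.WaitGroup)}
}

// AddParent registers a parent job under the given set key.
func (b *JobSetBarrier) AddParent(key string) {
	b.mu.Lock()
	defer b.mu.Unlock()

	wg, ok := b.sets[key]
	if !ok {
		wg = &sync.WaitGroup{}
		b.sets[key] = wg
	}
	wg.Add(1)
}

// ParentDone marks one parent job in the set as validated.
func (b *JobSetBarrier) ParentDone(key string) {
	b.mu.Lock()
	wg := b.sets[key]
	b.mu.Unlock()

	if wg != nil {
		wg.Done()
	}
}

// WaitForParents blocks a child job (e.g. a ChannelUpdate) until every
// parent job in its set has finished validating.
func (b *JobSetBarrier) WaitForParents(key string) {
	b.mu.Lock()
	wg := b.sets[key]
	b.mu.Unlock()

	if wg != nil {
		wg.Wait()
	}
}
```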
Eugene Siegel
2731d09a0b graph: change ValidationBarrier usage in the builder code
This omits calls to InitJobDependencies, SignalDependants, and
WaitForDependants. These changes have been made here because
the router / builder code does not actually need job dependency
management. Calls to the builder code (i.e. AddNode, AddEdge,
UpdateEdge) are all blocking in the gossiper. This, combined
with the fact that child jobs are run after parent jobs in the
gossiper, means that the calls to the router will happen in the
proper dependency order.
2025-01-23 11:43:07 -05:00
Andras Banki-Horvath
3f6f6c19c1
docs: update release-notes-0.19.0.md 2025-01-23 15:14:26 +01:00
Andras Banki-Horvath
444524a762
channeldb+lnd: set invoice bucket tombstone after migration
This commit introduces the functionality to set a tombstone key
in the invoice bucket after the migration to the native SQL
database. The tombstone prevents the user from switching back
to the KV invoice database, ensuring data consistency and
avoiding potential issues like lingering invoices or partial
state in KV tables. The tombstone is checked on startup to
block any manual overrides that attempt to revert the migration.
2025-01-23 15:06:09 +01:00
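A rough sketch of the tombstone pattern, using bbolt directly with made-up bucket and key names rather than lnd's kvdb/channeldb code.

```
// Illustrative sketch only: hypothetical bucket and key names.
package tombstone

import (
	"errors"

	"go.etcd.io/bbolt"
)

var (
	invoiceBucket = []byte("invoices")
	tombstoneKey  = []byte("sql-migration-tombstone")

	// ErrKVInvoicesDisabled is returned when the KV invoice store was
	// migrated to SQL and must not be used again.
	ErrKVInvoicesDisabled = errors.New("KV invoice store was migrated " +
		"to SQL; refusing to start with the KV backend")
)

// MarkMigrated writes the tombstone after a successful SQL migration.
func MarkMigrated(db *bbolt.DB) error {
	return db.Update(func(tx *bbolt.Tx) error {
		b, err := tx.CreateBucketIfNotExists(invoiceBucket)
		if err != nil {
			return err
		}
		return b.Put(tombstoneKey, []byte{1})
	})
}

// CheckOnStartup blocks a manual switch back to the KV invoice database.
func CheckOnStartup(db *bbolt.DB) error {
	return db.View(func(tx *bbolt.Tx) error {
		b := tx.Bucket(invoiceBucket)
		if b != nil && b.Get(tombstoneKey) != nil {
			return ErrKVInvoicesDisabled
		}
		return nil
	})
}
```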
Oliver Gugger
6cabc74c20
Merge pull request #8831 from bhandras/sql-invoice-migration
invoices: migrate KV invoices to native SQL for users of KV SQL backends
2025-01-23 05:48:25 -06:00
Oliver Gugger
49affa2dc3
Merge pull request #9424 from yyforyongyu/fix-gossip-ann
multi: fix inconsistent state in gossip syncer
2025-01-23 05:25:01 -06:00
Andras Banki-Horvath
b1a462ddba
docs: update release notes for 0.19.0 2025-01-23 09:11:03 +01:00
Andras Banki-Horvath
97c025f289
invoices: raise the number of allowed clients for the Postgres fixture 2025-01-23 09:11:02 +01:00
Andras Banki-Horvath
84598b6dc1
sqldb: ensure schema definitions are fully SQLite compatible
Previously, we applied replacements to our schema definitions
to make them compatible with both SQLite and Postgres backends,
as the files were not fully compatible with either.

With this change, the only replacement required for SQLite has
been moved to the generator script. This adjustment ensures
compatibility by enabling auto-incrementing primary keys that
are treated as 64-bit integers by sqlc.
2025-01-23 09:11:02 +01:00
Andras Banki-Horvath
ea98933317
invoices: allow migration test to work on kv sqlite channeldb 2025-01-23 09:11:02 +01:00
Andras Banki-Horvath
5e3ef3ec0c
invoices+sql: use the stored AmtPaid value instead of recalculating
Previously we'd recalculate the paid amount by summing amounts of
settled HTLCs. This approach, while correct, would stop the SQL migration
process, as some KV invoices may have incorrectly stored paid amounts.
2025-01-23 09:11:02 +01:00
Andras Banki-Horvath
0839d4ba7b
itest: remove obsolete itest 2025-01-23 09:11:02 +01:00
Andras Banki-Horvath
a29f2430c1
itest: add integration test for invoice migration 2025-01-23 09:11:01 +01:00
Andras Banki-Horvath
8d20e2a23b
lnd: run invoice migration on startup
This commit runs the invoice migration if the user has a KV SQL backend
configured.
2025-01-23 09:11:01 +01:00
Andras Banki-Horvath
94e2724a34
sqldb+invoices: Optimize invoice fetching when the reference is only a hash
The current sqlc GetInvoice query experiences incremental slowdowns during
the migration of large invoice databases, primarily due to its complex
predicate set. For this specific use case, a streamlined GetInvoiceByHash
function provides a more efficient solution, maintaining near-constant
lookup times even with extensive table sizes.
2025-01-23 09:11:01 +01:00
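A simplified sketch of the idea: a dedicated hash-only lookup with a single join instead of the general multi-predicate query. Table and column names (and the Postgres-style placeholder) are assumptions, not lnd's actual schema or sqlc output.

```
// Illustrative only; approximates the GetInvoiceByHash idea.
package invoices

import (
	"context"
	"database/sql"
)

const getInvoiceByHash = `
SELECT i.id, i.amount_paid_msat, i.state
FROM invoices i
JOIN invoice_payment_hashes h ON h.invoice_id = i.id
WHERE h.hash = $1`

type Invoice struct {
	ID          int64
	AmtPaidMsat int64
	State       int16
}

// GetInvoiceByHash avoids the multi-predicate GetInvoice query and keeps
// lookups near constant time on large tables, given an index on h.hash.
func GetInvoiceByHash(ctx context.Context, db *sql.DB,
	hash []byte) (*Invoice, error) {

	var inv Invoice
	err := db.QueryRowContext(ctx, getInvoiceByHash, hash).Scan(
		&inv.ID, &inv.AmtPaidMsat, &inv.State,
	)
	if err != nil {
		return nil, err
	}
	return &inv, nil
}
```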
Andras Banki-Horvath
b92f57e0ae
invoices: add migration code that runs a full invoice DB SQL migration 2025-01-23 09:11:01 +01:00
Andras Banki-Horvath
708bed517d
invoices: add migration code for a single invoice 2025-01-23 09:11:01 +01:00
Andras Banki-Horvath
43797d6be7
invoices: add method to create payment hash index
Certain invoices may not have a deterministic payment hash. For such
invoices we still store the payment hashes in our KV database, but we do
not have a sufficient index to retrieve them. This PR adds such an index to
the SQL database, which will be used during migration to retrieve payment
hashes.
2025-01-23 09:11:00 +01:00
Andras Banki-Horvath
be18f55ca1
invoices: extract method to create invoice insertion params 2025-01-23 09:11:00 +01:00
Andras Banki-Horvath
d65b630568
sqldb: remove unused preimage query parameter 2025-01-23 09:11:00 +01:00
Andras Banki-Horvath
b7d743929d
sqldb: add a temporary index to store KV invoice hash to ID mapping 2025-01-23 09:11:00 +01:00
Andras Banki-Horvath
3820497d7f
sqldb: set settled_at and settle_index on invoice insertion
Previously we intentionally did not set settled_at and settle_index when
inserting a new invoice, as those fields are set when we settle an
invoice through the usual invoice update. As migration requires that we
set these nullable fields, we can safely add them.
2025-01-23 09:11:00 +01:00
Andras Banki-Horvath
115f96c29a
multi: add call to directly insert an AMP sub-invoice 2025-01-23 09:10:59 +01:00
Andras Banki-Horvath
91c3e1496f
sqldb: separate migration execution from construction
This commit separates the execution of SQL and in-code migrations
from their construction. This change is necessary because,
currently, the SQL schema is migrated during the construction
phase in the lncfg package. However, migrations are typically
executed when individual stores are constructed within the
configuration builder.
2025-01-23 09:10:59 +01:00
Andras Banki-Horvath
b789fb2db3
sqldb: add support for custom in-code migrations
This commit introduces support for custom, in-code migrations, allowing
a specific Go function to be executed at a designated database version
during sqlc migrations. If the current database version surpasses the
specified version, the migration will be skipped.
2025-01-23 09:10:59 +01:00
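A minimal sketch of the version-gated in-code migration concept, with hypothetical names that do not mirror lnd's sqldb package.

```
// Hypothetical sketch, not lnd's sqldb API.
package sqldb

import "database/sql"

// MigrationConfig pairs a schema version with an optional in-code
// migration to run when the database reaches that version.
type MigrationConfig struct {
	SchemaVersion int

	// MigrationFn is the custom Go migration to run; may be nil.
	MigrationFn func(tx *sql.Tx) error
}

// applyMigrations runs each custom migration at its designated version,
// skipping those the database has already surpassed.
func applyMigrations(tx *sql.Tx, currentVersion int,
	cfgs []MigrationConfig) error {

	for _, cfg := range cfgs {
		// The database already surpassed this version: skip.
		if currentVersion > cfg.SchemaVersion {
			continue
		}

		if cfg.MigrationFn == nil {
			continue
		}

		if err := cfg.MigrationFn(tx); err != nil {
			return err
		}
	}

	return nil
}
```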
Andras Banki-Horvath
9acd06d296
sqldb: add table to track custom SQL migrations
This commit adds the migration_tracker table which we'll use to track if
a custom migration has already been done.
2025-01-23 09:10:59 +01:00
Andras Banki-Horvath
680394518f
mod: temporarily replace sqldb with local version 2025-01-23 09:10:58 +01:00
Oliver Gugger
1f20bd352f
Merge pull request #9429 from yyforyongyu/update-action
.github: update actions versions
2025-01-20 09:01:46 -06:00
yyforyongyu
29603954bd
.github: update actions versions 2025-01-20 21:55:20 +08:00
Oliver Gugger
baa3b0dea3
Merge pull request #9425 from yyforyongyu/flake-fix
itest: fix flake in `testSweepHTLCs`
2025-01-17 10:36:55 -06:00
Oliver Gugger
27df4af53e
Merge pull request #9359 from NishantBansal2003/fix-timeout
routerrpc: add a default value for timeout_seconds in SendPaymentV2
2025-01-17 09:43:38 -06:00
yyforyongyu
c24f839fbe
itest: fix flake in testSweepHTLCs
We need to make sure Carol finishes settling her invoice with Bob
before shutting down, so we make sure `AssertHTLCNotActive` on Bob
happens before shutting down node Carol.
2025-01-17 22:58:37 +08:00
Yong
fb91b04906
Merge pull request #9405 from yyforyongyu/fix-unused-params
discovery+lnd: make param `ProofMatureDelta` configurable
2025-01-17 22:57:26 +08:00
yyforyongyu
ae2bcfe3d8
docs: add release notes 2025-01-17 21:44:23 +08:00
yyforyongyu
27a05694cb
multi: make ProofMatureDelta configurable
We add a new config option to set the `ProofMatureDelta` so users
can tune their graph based on their own preference for the number of
confs required for the announcement signatures.
2025-01-17 21:44:23 +08:00
yyforyongyu
56ff6d1fe0
docs: update release notes 2025-01-17 18:58:20 +08:00
yyforyongyu
772a9d5f42
discovery: fix mocked peer in unit tests
The mocked peer used here blocks on `sendToPeer`, which is not the
behavior of `SendMessageLazy` on `lnpeer.Peer`. To reflect reality,
we now make sure `sendToPeer` is non-blocking in the tests.
2025-01-17 17:59:06 +08:00
yyforyongyu
9fecfed3b5
discovery: fix race access to syncer's state
This commit fixes the following race,
1. syncer(state=syncingChans) sends QueryChannelRange
2. remote peer replies ReplyChannelRange
3. ProcessQueryMsg fails to process the remote peer's msg as its state
   is neither waitingQueryChanReply nor waitingQueryRangeReply.
4. syncer marks its new state waitingQueryChanReply, but too late.

The historical sync will now fail, and the syncer will be stuck in this
state. Worse, it cannot forward channel announcements to other
connected peers now, as it will skip the broadcasting during the initial
graph sync.

This is now fixed to make sure the following two steps are atomic,
1. syncer(state=syncingChans) sends QueryChannelRange
2. syncer marks its new state waitingQueryChanReply.
2025-01-17 02:39:07 +08:00
yyforyongyu
4b30b09d1c
discovery: add new method handleSyncingChans
This is a pure refactor to add a dedicated handler when the gossiper is
in state syncingChans.
2025-01-17 00:22:22 +08:00
yyforyongyu
eb2b0c783f
graph: fix staticcheck suggestion
From staticcheck: QF1002 - Convert untagged switch to tagged switch.
2025-01-17 00:21:45 +08:00
yyforyongyu
001e5599b6
multi: add debug logs for edge policy flow
This commit adds more logs around the ChannelUpdate->edge policy process
flow.
2025-01-17 00:17:23 +08:00
Yong
e0a920af44
Merge pull request #9420 from yyforyongyu/assert-channel-graph
itest+lntest: make sure to assert edge in both graph db and cache
2025-01-17 00:13:02 +08:00
yyforyongyu
faa1f67480
lntest: assert channel edge in both graph db and cache
We need to make sure the channel edge has been updated in both the graph
DB and cache.
2025-01-16 23:00:22 +08:00
yyforyongyu
e576d661ef
itest: fix docs 2025-01-16 23:00:21 +08:00
yyforyongyu
848f42ea1d
itest: fix flake in graph_topology_notifications
We need to make sure to wait for the `ListPeers` to give us the latest
response.
2025-01-16 16:19:37 +08:00
Nishant Bansal
3a3002e281
docs: add release notes.
Signed-off-by: Nishant Bansal <nishant.bansal.282003@gmail.com>
2025-01-15 19:46:09 +05:30
Nishant Bansal
23efbef946
itest: update tests with timeout_seconds
Signed-off-by: Nishant Bansal <nishant.bansal.282003@gmail.com>
2025-01-15 19:46:09 +05:30
Nishant Bansal
b577ad4661
routerrpc: default timeout_seconds to 60 in SendPaymentV2
If timeout_seconds is not set or is 0, the default value
of 60 seconds will be used.

Signed-off-by: Nishant Bansal <nishant.bansal.282003@gmail.com>
2025-01-15 19:45:44 +05:30
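A tiny sketch of the defaulting rule described above; the helper and constant names are illustrative only, not the actual routerrpc code.

```
// Illustrative helper; not the actual routerrpc code.
package main

import (
	"fmt"
	"time"
)

const defaultPaymentTimeout = 60 * time.Second

// paymentTimeout maps the RPC's timeout_seconds field to a duration,
// falling back to 60 seconds when the field is unset or zero.
func paymentTimeout(timeoutSeconds int32) time.Duration {
	if timeoutSeconds <= 0 {
		return defaultPaymentTimeout
	}
	return time.Duration(timeoutSeconds) * time.Second
}

func main() {
	fmt.Println(paymentTimeout(0))  // 1m0s
	fmt.Println(paymentTimeout(30)) // 30s
}
```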
Oliver Gugger
572784a6a1
Merge pull request #9390 from NishantBansal2003/append-channel
Enhance `lncli` listchannels command with the chan_id and short_chan_id (human readable format)
2025-01-15 07:19:27 -06:00
Nishant Bansal
c2897a4c78
docs: add release notes.
Signed-off-by: Nishant Bansal <nishant.bansal.282003@gmail.com>
2025-01-15 17:58:40 +05:30
Nishant Bansal
103a194e5c
cmd/lncli: update listchannels output fields
Added scid_str as a string representation of chan_id, replaced
chan_id with scid, and included chan_id (BOLT #2)
in the lncli listchannels output.

Signed-off-by: Nishant Bansal <nishant.bansal.282003@gmail.com>
2025-01-15 17:58:06 +05:30
Nishant Bansal
7f8e89f3b4
lnwire: add 'x' separator in ShortChannelID method
Add the `AltString` method for `ShortChannelID` to produce a
human-readable format with 'x' as a separator
(block x transaction x output).

Signed-off-by: Nishant Bansal <nishant.bansal.282003@gmail.com>
2025-01-15 15:25:40 +05:30
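The human-readable form roughly works like the sketch below; the struct fields are assumed to match lnwire.ShortChannelID's components but may differ.

```
// Sketch of the "block x transaction x output" rendering.
package main

import "fmt"

// ShortChannelID mirrors the usual SCID components (assumed field names).
type ShortChannelID struct {
	BlockHeight uint32
	TxIndex     uint32
	TxPosition  uint16
}

// AltString renders the SCID with 'x' separators, e.g. "820000x1313x0".
func (c ShortChannelID) AltString() string {
	return fmt.Sprintf("%dx%dx%d", c.BlockHeight, c.TxIndex, c.TxPosition)
}

func main() {
	scid := ShortChannelID{BlockHeight: 820000, TxIndex: 1313, TxPosition: 0}
	fmt.Println(scid.AltString()) // 820000x1313x0
}
```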
Yong
4b16c2902e
Merge pull request #9383 from ziggie1984/bugfix-createmissingedge
bugfix createmissingedge
2025-01-14 16:20:15 +08:00
ziggie
e008190a85
localchans: bugfix so that we always use the correct chanID 2025-01-14 08:14:02 +01:00
Elle
e9dd01b60d
Merge pull request #9274 from ziggie1984/remove-2x-value
The sweeper subsystem now also uses the configured budget values for HTLCs
2025-01-13 19:09:11 +02:00
Oliver Gugger
b958667811
Merge pull request #9355 from Roasbeef/rapid-fuzz-htlc-blobs
contractcourt: add rapid derived fuzz test for HtlcAuxBlob
2025-01-13 10:35:53 -06:00
Oliver Gugger
1b0f41da48
Merge pull request #9378 from yyforyongyu/fix-unit-test
chainntnfs: fix test `testSingleConfirmationNotification`
2025-01-13 03:13:02 -06:00
Oliver Gugger
67a8c7cf51
Merge pull request #9391 from mohamedawnallah/choreRegisterToRegisteredInStartingDebugLogs
chore: change 'register' to 'registered' in lnrpc starting debug logs [skip ci]
2025-01-13 03:11:03 -06:00
Elle
98033f876d
Merge pull request #9344 from ellemouton/useUpdatedContextGuard
htlcswitch+go.mod: use updated fn.ContextGuard
2025-01-11 12:28:45 +02:00
Elle Mouton
950194a2da
htlcswitch+go.mod: use updated fn.ContextGuard
This commit updates the fn dep to the version containing the updates to
the ContextGuard implementation. Only the htlcswitch/link uses the guard
at the moment so this is updated to make use of the new implementation.
2025-01-11 06:17:43 +02:00
Elle
77848c402d
Merge pull request #9342 from ellemouton/slogProtofsm
protofsm: update GR Manager usage and start using structured logging
2025-01-10 19:55:18 +02:00
Elle Mouton
42ce9d639f
docs: add release notes entry 2025-01-10 18:34:26 +02:00
Elle Mouton
65c4c2c4d0
protofsm: use pointer to GoroutineManager 2025-01-10 18:25:19 +02:00
Elle Mouton
dfddeec8d4
protofsm: use structured logging 2025-01-10 18:25:19 +02:00
Elle Mouton
575ea7af83
protofsm: use prefixed logger for StateMachine
So that we don't have to remember to add the `FSM(%v)` prefix each time
we write a log line.
2025-01-10 18:25:19 +02:00
Elle Mouton
b887c1cc5d
protofsm: use updated GoroutineManager API
Update to use the latest version of the GoroutineManager which takes a
context via the `Go` method instead of the constructor.
2025-01-10 18:23:28 +02:00
Elle Mouton
4e0498faa4
go.mod: update btclog dep
This bump includes a fix which prevents attribute value quoting if the
value string contains a newline character. This is so that if we call
spew.DumpS(), the output will stay nicely formatted.

The update also includes a couple more Hex helpers which we can make use
of now.
2025-01-10 18:23:28 +02:00
Oliver Gugger
dd25e6eb22
Merge pull request #9361 from starius/optimize-context-guard
fn: optimize context guard
2025-01-10 09:33:21 -06:00
Oliver Gugger
70e7b56713
Merge pull request #9388 from chloefeal/fix
Fix some typos
2025-01-10 09:30:50 -06:00
Oliver Gugger
e449adb03a
Merge pull request #9404 from peicuiping/master
chore: fix some typos
2025-01-10 09:28:37 -06:00
Oliver Gugger
83a2100810
Merge pull request #9386 from ellemouton/updateElleKey
scripts/keys: update pub key for ellemouton
2025-01-10 09:25:37 -06:00
Oliver Gugger
1d9b30f139
Merge pull request #9411 from ellemouton/bumpUploadArtifactAction
.github: bump upload-artifact action to v4
2025-01-10 09:17:31 -06:00
Elle Mouton
a7a01f684d
.github: bump upload-artifact action to v4 2025-01-10 08:31:57 +02:00
ziggie
f36f7c1d84
docs: add release-notes 2025-01-09 07:56:40 +01:00
ziggie
5cd88f7e48
contractcourt: remove 2xamount requirement for the sweeper. 2025-01-08 08:57:50 +01:00
chloefeal
852a8d8746
chore: fix typo 2025-01-05 20:45:35 +08:00
peicuiping
06fef749a7 chore: fix some typos
Signed-off-by: peicuiping <ezc5@sina.cn>
2025-01-03 21:48:29 +08:00
Boris Nagaev
07c46680e9
fn/ContextGuard: use context.AfterFunc to wait
Simplifies context cancellation handling by using context.AfterFunc instead of a
goroutine to wait for context cancellation. This approach avoids the overhead of
a goroutine during the waiting period.

For ctxQuitUnsafe, since g.quit is closed only in the Quit method (which also
cancels all associated contexts), waiting on context cancellation ensures the
same behavior without unnecessary dependency on g.quit.

Added a test to ensure that the Create method does not launch any goroutines.
2025-01-02 10:38:26 -03:00
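The standard-library pattern the commit describes looks roughly like this (context.AfterFunc is available since Go 1.21); the cleanup body here is just a placeholder, not the ContextGuard code itself.

```
// Sketch of the pattern: context.AfterFunc registers a callback that
// runs once the context is cancelled, with no waiting goroutine.
package main

import (
	"context"
	"fmt"
	"time"
)

func main() {
	ctx, cancel := context.WithCancel(context.Background())

	done := make(chan struct{})

	// The callback fires after ctx is done; until then no goroutine
	// is blocked waiting on ctx.Done().
	stop := context.AfterFunc(ctx, func() {
		fmt.Println("context cancelled, cleaning up")
		close(done)
	})
	defer stop() // stop() deregisters the callback if it hasn't run.

	time.AfterFunc(10*time.Millisecond, cancel)
	<-done
}
```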
Boris Nagaev
e9ab603735
fn/ContextGuard: clear store of cancel funcs
If the ContextGuard lives for some time after the Quit method is called, the map
won't be collected by the GC. This is an optimization.
2025-01-02 10:38:26 -03:00
Boris Nagaev
1750aec13d
fn: remove unneeded argument of ctxBlocking
Removed the 'cancel' argument because it is called only when the context has
already expired, and the only action the cancel function performed was
cancelling the context.
2025-01-02 10:38:26 -03:00
Boris Nagaev
865da9c525
fn/ContextGuard: test cancelling blocking context
Make sure WgWait() doesn't block.
2025-01-02 10:38:26 -03:00
Mohamed Awnallah
e636c76300 chore: change register to registered [skip ci] 2024-12-28 23:06:07 +02:00
Elle Mouton
b986f57206
scripts/keys: update pub key for ellemouton 2024-12-21 08:11:38 +02:00
yyforyongyu
bafe5d009f
chainntnfs: fix test testSingleConfirmationNotification 2024-12-20 22:08:32 +08:00
Oliver Gugger
a388c1f39d
Merge pull request #9368 from lightningnetwork/yy-waiting-on-merge
Fix itest re new behaviors introduced by `blockbeat`
2024-12-20 07:44:54 -06:00
yyforyongyu
2913f6e4c9
itest: fix flake in testCoopCloseWithExternalDeliveryImpl
The response from `ClosedChannels` may not be up-to-date, so we wrap it
inside a wait closure.
2024-12-20 19:38:15 +08:00
yyforyongyu
76eeae32d6
itest: document and fix wallet UTXO flake 2024-12-20 19:38:14 +08:00
yyforyongyu
7ab4081ffd
lntest: make sure chain backend is synced to miner
We sometimes see a `timeout waiting for UTXOs` error in bitcoind-related
itests due to the chain backend not being synced to the miner. We now assert
it's synced before continuing.
2024-12-20 19:38:14 +08:00
yyforyongyu
1dec926165
workflows: increase num of tranches to 16
Keep the SQL, etcd, bitcoin rpcpolling builds and non-ubuntu builds at 8
since they are less stable.
2024-12-20 19:38:14 +08:00
yyforyongyu
31b66962d8
lntest: properly handle shutdown error
This commit removes the panic used in checking the shutdown log.
Instead, the error is returned and asserted in `shutdownAllNodes` so
it's easier to check which node failed in which test. We also catch all
the errors returned from the `StopDaemon` call to properly assess the
shutdown behavior.
2024-12-20 19:38:14 +08:00
yyforyongyu
73574d919d
lntest: add timeouts for windows
For Windows the tests run much slower so we create customized timeouts
for them.
2024-12-20 19:38:14 +08:00
yyforyongyu
d7f8fa6ab6
lntest: increase port timeout 2024-12-20 19:38:14 +08:00
yyforyongyu
33b07be8c3
itest: even out num of tests per tranche
The previous splitting logic simply put all of the remainder in the last
tranche, which could make the last tranche run significantly more test
cases. We now change it so the remainder is evened out across tranches.
2024-12-20 19:38:14 +08:00
yyforyongyu
c536bc220f
itest: add a prefix before appending a subtest case 2024-12-20 19:38:13 +08:00
yyforyongyu
686a7dd31c
docs: update release notes 2024-12-20 19:38:13 +08:00
yyforyongyu
becbdce64c
lntest: limit the num of blocks mined in each test 2024-12-20 19:38:13 +08:00
yyforyongyu
5236c05dc6
itest+lntest: add new method FundNumCoins
Most of the time we only need to fund the node with a given number of
UTXOs without caring about the amount, so we add this more efficient
funding method, as it mines a single block at the end.
2024-12-20 19:38:13 +08:00
yyforyongyu
691a6267be
workflows: use btcd for macOS
To increase the speed from 40m per run to roughly 20m per run.
2024-12-20 19:38:13 +08:00
yyforyongyu
77b2fa0271
lntest: make sure policies are populated in AssertChannelInGraph 2024-12-20 19:38:13 +08:00
yyforyongyu
c97c31a70b
lntest: increase node start timeout and payment benchmark timeout 2024-12-20 19:38:12 +08:00
yyforyongyu
efe81f2d3c
itest: track and skip flaky tests for windows
To make the CI indicative, we now start tracking the flaky tests
found when running on Windows. As a starting point, rather than ignoring
the Windows CI entirely, we now acknowledge there are cases where lnd can
be buggy when running on Windows.

We should fix the tests in the future, otherwise the Windows build
should be deleted.
2024-12-20 19:38:12 +08:00
yyforyongyu
e79ad6e5aa
itest: further reduce block mined in tests 2024-12-20 19:38:12 +08:00
yyforyongyu
6f2e7feb94
itest: breakdown testSendDirectPayment
Also fixes a wrong usage of `ht.Subtest`.
2024-12-20 19:38:12 +08:00
yyforyongyu
c7b8379602
itest: break down channel fundmax tests 2024-12-20 19:38:12 +08:00
yyforyongyu
37b8210f37
itest: break down taproot tests 2024-12-20 19:38:12 +08:00
yyforyongyu
efae8ea56f
itest: break down single hop send to route
Also removed the duplicate test cases.
2024-12-20 19:38:11 +08:00
yyforyongyu
c029f0a84f
itest: break down basic funding flow tests 2024-12-20 19:38:11 +08:00
yyforyongyu
c58fa01a66
itest: break down wallet import account tests 2024-12-20 19:38:11 +08:00
yyforyongyu
31aada65a4
itest: break down channel backup restore tests 2024-12-20 19:38:11 +08:00
yyforyongyu
7b1427a565
itest: break down payment failed tests 2024-12-20 19:38:11 +08:00
yyforyongyu
3319d0d983
itest: break down open channel fee policy 2024-12-20 19:38:11 +08:00
yyforyongyu
21c5d36812
itest: break down scid alias channel update tests 2024-12-20 19:38:10 +08:00
yyforyongyu
04a15039d7
itest: break all multihop test cases 2024-12-20 19:38:10 +08:00
yyforyongyu
a76ff79adc
itest: break down utxo selection funding tests 2024-12-20 19:38:10 +08:00
yyforyongyu
b1cb819f07
itest: break down channel restore commit types cases 2024-12-20 19:38:10 +08:00
yyforyongyu
5663edfc2f
itest: break remote signer into independent cases
So the test can run faster.
2024-12-20 19:38:10 +08:00
yyforyongyu
cca2364097
itest: optimize blocks mined in testGarbageCollectLinkNodes
There's no need to mine 80ish blocks here.
2024-12-20 19:38:10 +08:00
yyforyongyu
1950d89925
itest: document a flake found in SendToRoute 2024-12-20 19:38:10 +08:00
yyforyongyu
7e80b77535
itest: document a rare flake found in macOS 2024-12-20 19:38:09 +08:00
yyforyongyu
39104c53d4
lntest: fix flakeness in openChannelsForNodes
We now make sure the channel participants have learned about their private
channel when opening channels.
2024-12-20 19:38:09 +08:00
yyforyongyu
cfb5713cda
itest+lntest: fix flake in MPP-related tests 2024-12-20 19:38:09 +08:00
yyforyongyu
fb59669ae8
itest: document details about MPP-related tests
This is needed so we can have one place to fix the flakes found in the
MPP-related tests, which are fixed in the following commit.
2024-12-20 19:38:09 +08:00
yyforyongyu
f912f407c8
lntest: increase rpcmaxwebsockets for btcd
This has been seen in the itest, and it can lead to node startup
failure:
```
2024-11-20 18:55:15.727 [INF] RPCS: Max websocket clients exceeded [25] - disconnecting client 127.0.0.1:57224
```
2024-12-20 19:38:09 +08:00
yyforyongyu
4e85d86a2e
itest: fix flake in update_pending_open_channels 2024-12-20 19:38:09 +08:00
yyforyongyu
23edca8d19
itest: fix flake in testPrivateUpdateAlias 2024-12-20 19:38:08 +08:00
yyforyongyu
7c3564eeb6
itest: fix flake in runPsbtChanFundingWithNodes 2024-12-20 19:38:08 +08:00
yyforyongyu
7ceb9a4af5
lntest+itest: remove AssertNumActiveEdges
This is no longer needed since we don't have standby nodes; plus, it was
causing a panic in the Windows build due to `edge.Policy` being nil.
2024-12-20 19:38:08 +08:00
yyforyongyu
8f3100c984
itest: put mpp tests in one file 2024-12-20 19:38:08 +08:00
yyforyongyu
fee6b70519
itest: use ht.CreateSimpleNetwork whenever applicable
So we won't forget to assert the topology after opening a chain of
channels.
2024-12-20 19:38:08 +08:00
yyforyongyu
4eea2078fb
itest+routing: fix flake in runFeeEstimationTestCase
The test used 10s as the timeout value, which can easily cause a timeout
in a slow build so we increase it to 60s.
2024-12-20 19:38:08 +08:00
yyforyongyu
8b8f0c4eb4
itest: fix flake in testSwitchOfflineDelivery
The reconnection will happen automatically when the nodes have a
channel, so we just ensure the connection instead of reconnecting
directly.
2024-12-20 19:38:07 +08:00
yyforyongyu
66b35018b8
itest: fix flake in testRevokedCloseRetributionZeroValueRemoteOutput
We need to mine an empty block as the tx may already have entered the
mempool. This should be fixed once we start using the sweeper to handle
the justice tx.
2024-12-20 19:38:07 +08:00
yyforyongyu
c07162603d
itest: remove loop in wsTestCaseBiDirectionalSubscription
So we know which open channel operation failed.
2024-12-20 19:38:07 +08:00
yyforyongyu
782edde213
itest: fix and document flake in sweeping tests
We previously didn't see this issue because our nodes were always
over-funded.
2024-12-20 19:38:07 +08:00
yyforyongyu
9f764c25f9
itest: flatten PSBT funding test cases
So it's easier to get the logs and debug.
2024-12-20 19:38:07 +08:00
yyforyongyu
762e59d78c
itest: fix flake for neutrino backend 2024-12-20 19:38:07 +08:00
yyforyongyu
3a45492398
itest: fix spawning temp miner 2024-12-20 19:38:07 +08:00
yyforyongyu
2a9b7ec536
itest: fix flake in testSendDirectPayment
This bug was hidden because we used standby nodes before, which always
had more-than-necessary wallet UTXOs.
2024-12-20 19:38:06 +08:00
yyforyongyu
e7310ff1b6
itest: fix testOpenChannelUpdateFeePolicy
This commit fixes a misuse of `ht.Subtest`, where the nodes should
always be created inside the subtest.
2024-12-20 19:38:06 +08:00
yyforyongyu
010a4f1571
lntest: add human-readble names and check num of nodes 2024-12-20 19:38:06 +08:00
yyforyongyu
f64b2ce8f2
lntest: make sure node is properly shut down
Sometimes the node may be hanging for a while without being noticed,
causing failures in its following tests, thus making the debugging
extremely difficult. We now assert from the logs that the node has been
shut down, to verify the shutdown process behaves as expected.
2024-12-20 19:38:06 +08:00
yyforyongyu
72f3f41d41
itest: remove unnecessary channel close and node shutdown
Since we don't have standby nodes anymore, we don't need to close the
channels when the test finishes. Previously we would do so to make sure
the standby nodes have a clean state for the next test case, which is no
longer relevant.
2024-12-20 19:38:06 +08:00
yyforyongyu
00772ae281
itest+lntest: remove standby nodes
This commit removes the standby nodes Alice and Bob.
2024-12-20 19:38:06 +08:00
yyforyongyu
11c9dd5ff2
itest: remove unused method setupFourHopNetwork 2024-12-20 19:38:05 +08:00
yyforyongyu
de8f14bed2
itest: remove the use of standby nodes
This commit removes the usage of the standby nodes and uses
`CreateSimpleNetwork` when applicable. Also introduces a helper method
`NewNodeWithCoins` to quickly start a node with funds.
2024-12-20 19:38:05 +08:00
yyforyongyu
3eda87fff9
itest: remove direct references to standby nodes
Prepares for the upcoming refactor. We now never call `ht.Alice` directly;
instead, we always init `alice := ht.Alice` so it's easier to see how
the standby nodes are removed in a following commit.
2024-12-20 19:38:05 +08:00
yyforyongyu
ef167835dd
workflows: pass action ID as the shuffle seed
To make sure each run is shuffled, we use the action ID as the seed.
2024-12-20 19:38:05 +08:00
yyforyongyu
88bd0cb806
itest: shuffle test cases to even out blocks mined in tranches
This commit adds a new flag to shuffle all the test cases before running
them so tests which require lots of blocks to be mined are less likely
to be run in the same tranche.

The other benefit is that this approach provides a more efficient way to
figure out which tests are broken: since all the different backends are
running different tranches in their builds, we can identify more failed
tests in one push.
2024-12-20 19:38:05 +08:00
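Conceptually the shuffle is just a seeded permutation, as in this standalone sketch; the flag plumbing in the actual itest code and Makefile is not shown, and the helper name is made up.

```
// Sketch of a deterministic, seed-driven shuffle of test cases.
package main

import (
	"fmt"
	"math/rand"
)

// shuffleCases returns a copy of cases permuted by the given seed.
func shuffleCases(cases []string, seed int64) []string {
	r := rand.New(rand.NewSource(seed))

	shuffled := make([]string, len(cases))
	copy(shuffled, cases)

	r.Shuffle(len(shuffled), func(i, j int) {
		shuffled[i], shuffled[j] = shuffled[j], shuffled[i]
	})
	return shuffled
}

func main() {
	cases := []string{"funding", "multihop", "sweep", "backup"}

	// Passing the CI action ID as the seed makes every run's split
	// different, yet reproducible for that run.
	fmt.Println(shuffleCases(cases, 123456789))
}
```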
yyforyongyu
2c27df6c30
itest: print num of blocks for debugging 2024-12-20 19:38:05 +08:00
Oliver Gugger
fe48e65f42
Merge pull request #9381 from yyforyongyu/fix-no-space-left
workflows: fix no space left error
2024-12-20 12:21:53 +01:00
Oliver Gugger
8859dbc288
Merge pull request #9315 from lightningnetwork/yy-feature-blockbeat
Implement `blockbeat`
2024-12-20 12:20:32 +01:00
yyforyongyu
ecd82a3bcb
contractcourt: include custom records on replayed htlc
Add another case in addition to #9357.
2024-12-20 17:54:13 +08:00
yyforyongyu
a1bd8943db
lntest+itest: export DeriveFundingShim 2024-12-20 17:54:12 +08:00
yyforyongyu
c4a6abb14d
lntest+itest: remove the usage of ht.AssertActiveHtlcs
The method `AssertActiveHtlcs` is now removed because it was easy to
misuse. To assert a given HTLC, use `AssertOutgoingHTLCActive` and
`AssertIncomingHTLCActive` instead to ensure the HTLC exists in the
right direction. Often `AssertNumActiveHtlcs` would be
enough, as it implicitly checks the forwarding behavior for an
intermediate node by asserting there are always num_payment*2 HTLCs.
2024-12-20 17:54:12 +08:00
yyforyongyu
36a87ad5f4
itest: assert payment status after sending 2024-12-20 17:54:12 +08:00
yyforyongyu
425877e745
itest: remove redundant block in multiple tests 2024-12-20 17:54:12 +08:00
yyforyongyu
9c0c373b7e
itest: flatten and fix testWatchtower 2024-12-20 17:54:12 +08:00
yyforyongyu
b7feeba008
itest+lntest: fix channel force close test
Also flatten the tests to make them easier to maintain. 2024-12-20 17:54:12 +08:00
2024-12-20 17:54:12 +08:00
yyforyongyu
9d4a60d613
itest: remove redundant blocks in channel backup tests 2024-12-20 17:54:12 +08:00
yyforyongyu
a55408d4a1
itest: remove redundant block in testPsbtChanFundingWithUnstableUtxos 2024-12-20 17:54:11 +08:00
yyforyongyu
22b9350320
itest: remove redundant block mining in testChannelFundingWithUnstableUtxos 2024-12-20 17:54:11 +08:00
yyforyongyu
2494f1ed32
itest: remove redundant block mining in testFailingChannel 2024-12-20 17:54:11 +08:00
yyforyongyu
e1d5bbf171
itest: remove unnecessary force close 2024-12-20 17:54:11 +08:00
yyforyongyu
2adb356668
itest: rename file to reflect the tests 2024-12-20 17:54:11 +08:00
yyforyongyu
2f0256775e
itest: flatten testHtlcTimeoutResolverExtractPreimageRemote
Also remove unused code.
2024-12-20 17:54:11 +08:00
yyforyongyu
34951a6153
itest: flatten testHtlcTimeoutResolverExtractPreimageLocal
This commit simplifies the test: since we only test the preimage
extraction logic in the htlc timeout resolver, there's no need to test
it for all the different channel types, as the resolver is made to be
oblivious to them.
2024-12-20 17:54:10 +08:00
yyforyongyu
f95e64f084
itest: flatten testMultiHopHtlcAggregation 2024-12-20 17:54:10 +08:00
yyforyongyu
52e6fb1161
itest: flatten testMultiHopHtlcRemoteChainClaim 2024-12-20 17:54:10 +08:00
yyforyongyu
8dd73a08a9
itest: flatten testMultiHopHtlcLocalChainClaim 2024-12-20 17:54:10 +08:00
yyforyongyu
d7b2025248
lntest+itest: flatten testMultiHopRemoteForceCloseOnChainHtlcTimeout 2024-12-20 17:54:10 +08:00
yyforyongyu
bef17f16cf
lntest+itest: flatten testMultiHopLocalForceCloseOnChainHtlcTimeout 2024-12-20 17:54:10 +08:00
yyforyongyu
bc31979f7b
itest: simplify and flatten testMultiHopReceiverChainClaim 2024-12-20 17:54:10 +08:00
yyforyongyu
9ab9cd5f99
lntest+itest: start flattening the multi-hop tests
Starting from this commit, we begin the process of flattening the
multi-hop itests to make them easier to maintain. The tests are
refactored into their own test cases, with each test focusing on testing
one channel type. This is necessary to save effort for future
development.

These tests are also updated to reflect the new `blockbeat` behavior.
2024-12-20 17:54:09 +08:00
yyforyongyu
e45005b310
itest: fix testPaymentSucceededHTLCRemoteSwept 2024-12-20 17:54:09 +08:00
yyforyongyu
0778009ac2
itest: fix testBumpForceCloseFee 2024-12-20 17:54:09 +08:00
yyforyongyu
1aeea8a90f
itest: fix testSweepCommitOutputAndAnchor 2024-12-20 17:54:09 +08:00
yyforyongyu
d260a87f3b
itest: fix testSweepHTLCs 2024-12-20 17:54:09 +08:00
yyforyongyu
cacf222e11
itest: fix testSweepCPFPAnchorIncomingTimeout 2024-12-20 17:54:09 +08:00
yyforyongyu
40ac04a254
lntest+itest: fix testSweepCPFPAnchorOutgoingTimeout 2024-12-20 17:54:08 +08:00
yyforyongyu
4806b2fda7
multi: optimize logging around changes from blockbeat 2024-12-20 17:54:08 +08:00
yyforyongyu
fecd5ac735
contractcourt: make sure launchResolvers is called on new blockbeat
This is an oversight from addressing this comment:
https://github.com/lightningnetwork/lnd/pull/9277#discussion_r1882410396

where we should focus on skipping the close events but not the
resolvers.
2024-12-20 17:54:08 +08:00
yyforyongyu
bd88948264
docs: add release notes for blockbeat series 2024-12-20 17:54:08 +08:00
yyforyongyu
cc60d2b41c
chainntnfs: skip dispatched conf details
We need to check `dispatched` before sending conf details, otherwise the
channel `ntfn.Event.Confirmed` will block, which is a leftover
from #9275.
2024-12-20 17:54:08 +08:00
yyforyongyu
ea7d6a509b
contractcourt: register conf notification once and cancel when confirmed 2024-12-20 17:54:08 +08:00
yyforyongyu
a6d3a0fa99
contractcourt: process channel close event on new beat 2024-12-20 17:54:07 +08:00
yyforyongyu
c5b3033427
contractcourt: add close event handlers in ChannelArbitrator
To prepare for the next commit, where we handle the event upon
receiving a blockbeat.
2024-12-20 17:54:07 +08:00
yyforyongyu
6eb9bb1ed6
multi: add new method ChainArbitrator.RedispatchBlockbeat
This commit adds a new method to enable us to resend the blockbeat in
`ChainArbitrator`, which is needed for channel restore, as the chain
watcher and channel arbitrator are added after the start of the chain
arbitrator.
2024-12-20 17:54:07 +08:00
yyforyongyu
4d765668cc
contractcourt: use close height instead of best height
This commit adds the closing height to the logging and fixes a wrong
height used in handling the breach event.
2024-12-20 17:54:07 +08:00
yyforyongyu
3822c23833
contractcourt: notify blockbeat for chainWatcher
We now start notifying the blockbeat from the ChainArbitrator to the
chainWatcher.
2024-12-20 17:54:07 +08:00
yyforyongyu
8237598ed1
contractcourt: handle blockbeat in chainWatcher 2024-12-20 17:54:07 +08:00
yyforyongyu
4e30598263
contractcourt: add method handleCommitSpend
To prepare for the blockbeat handler.
2024-12-20 17:54:07 +08:00
yyforyongyu
c1a9390c36
contractcourt: register spend notification during init
This commit moves the creation of the spending notification from `Start`
to `newChainWatcher` so we subscribe to the spending event before handling
the block, which is needed to properly handle the blockbeat.
2024-12-20 17:54:06 +08:00
yyforyongyu
07cb3aef00
contractcourt: implement Consumer on chainWatcher 2024-12-20 17:54:06 +08:00
yyforyongyu
63aa5aa6e9
contractcourt: offer outgoing htlc one block earlier before its expiry
We need to offer the outgoing htlc one block earlier to make sure when
the expiry height hits, the sweeper will not miss sweeping it in the
same block. This also means the outgoing contest resolver now only does
one thing - watch for preimage spend till height expiry-1, which can
easily be moved into the timeout resolver instead in the future.
2024-12-20 17:54:06 +08:00
yyforyongyu
819c15fa0b
contractcourt: break launchResolvers into two steps
In this commit, we break the old `launchResolvers` into two steps - step
one is to launch the resolvers synchronously, and step two is to
actually wait for the resolvers to be resolved. This is critical as
in the following commit we will require the resolvers to be launched at
the same blockbeat when a force close event is sent by the chain watcher.
2024-12-20 17:54:06 +08:00
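A hypothetical sketch of the two-step flow: launch every resolver synchronously in the same blockbeat, then wait for resolution in the background. The interface and function here are illustrative, not lnd's contractcourt types.

```
// Hypothetical sketch, not lnd's contractcourt code.
package main

import (
	"log"
	"sync"
)

type resolver interface {
	// Launch sends the sweep requests; it must not block.
	Launch() error

	// Resolve blocks until the output is fully resolved.
	Resolve() error
}

// launchAndWait launches all resolvers first (step one), then waits for
// each of them to resolve in its own goroutine (step two).
func launchAndWait(resolvers []resolver, wg *sync.WaitGroup) error {
	// Step one: launch synchronously, within the same blockbeat.
	for _, r := range resolvers {
		if err := r.Launch(); err != nil {
			return err
		}
	}

	// Step two: wait for resolution asynchronously.
	for _, r := range resolvers {
		wg.Add(1)
		go func(r resolver) {
			defer wg.Done()
			if err := r.Resolve(); err != nil {
				log.Printf("resolver failed: %v", err)
			}
		}(r)
	}

	return nil
}

func main() {
	var wg sync.WaitGroup
	_ = launchAndWait(nil, &wg)
	wg.Wait()
}
```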
yyforyongyu
d2e81a19fd
contractcourt: fix concurrent access to launched 2024-12-20 17:54:06 +08:00
yyforyongyu
4f5ccb8650
contractcourt: fix concurrent access to resolved
This commit makes `resolved` an atomic bool to avoid data race. This
field is now defined in `contractResolverKit` to avoid code duplication.
2024-12-20 17:54:06 +08:00
yyforyongyu
47722292c5
contractcourt: add Launch method to outgoing contest resolver 2024-12-20 17:54:05 +08:00
yyforyongyu
ef98c52d10
contractcourt: add Launch method to incoming contest resolver
A minor refactor is done to support implementing `Launch`.
2024-12-20 17:54:05 +08:00
yyforyongyu
025d787fd2
invoices: exit early when the subscriber chan is nil
When calling `NotifyExitHopHtlc` it is allowed to pass a chan to
subscribe to the HTLC's resolution when it's settled. However, this
method will also return immediately if there's already a resolution,
which means it behaves like a notifier and a getter. If the caller
decides to only use the getter to do a non-blocking lookup, it can pass
a nil subscriber chan to bypass the notification.
2024-12-20 17:54:05 +08:00
yyforyongyu
71aec7bd94
contractcourt: add Launch method to htlc timeout resolver
This commit breaks the `Resolve` into two parts - the first part is
moved into a `Launch` method that handles sending sweep requests, and
the second part remains in `Resolve` which handles waiting for the
spend. Since we are using both utxo nursery and sweeper at the same
time, to make sure this change doesn't break the existing behavior, we
implement `Launch` as follows:
- zero-fee htlc - handled by the sweeper
- direct output from the remote commit - handled by the sweeper
- legacy htlc - handled by the utxo nursery
2024-12-20 17:54:05 +08:00
yyforyongyu
cf105e67f4
contractcourt: add Launch method to htlc success resolver
This commit breaks the `Resolve` into two parts - the first part is
moved into a `Launch` method that handles sending sweep requests, and
the second part remains in `Resolve` which handles waiting for the
spend. Since we are using both utxo nursery and sweeper at the same
time, to make sure this change doesn't break the existing behavior, we
implement `Launch` as follows:
- zero-fee htlc - handled by the sweeper
- direct output from the remote commit - handled by the sweeper
- legacy htlc - handled by the utxo nursery
2024-12-20 17:54:05 +08:00
yyforyongyu
913f5d4657
contractcourt: add Launch method to commit resolver 2024-12-20 17:54:05 +08:00
yyforyongyu
a98763494f
contractcourt: add Launch method to anchor/breach resolver
We will use this and its following commits to break the original
`Resolve` methods into two parts - the first part is moved to a new
method `Launch`, which handles sending a sweep request to the sweeper.
The second part remains in `Resolve`, which is mainly waiting for a
spending tx.

Breach resolver currently doesn't do anything in its `Launch` since the
sweeping of justice outputs is not handled by the sweeper yet.
2024-12-20 17:54:04 +08:00
yyforyongyu
730b605ed4
contractcourt: add resolve handlers in htlcTimeoutResolver
This commit adds more methods to handle resolving the spending of the
output based on different spending paths.
2024-12-20 17:54:04 +08:00
yyforyongyu
7083302fa0
contractcourt: add methods to checkpoint states
This commit adds checkpoint methods in `htlcTimeoutResolver`, which are
similar to those used in `htlcSuccessResolver`.
2024-12-20 17:54:04 +08:00
yyforyongyu
bfc95b8b2c
contractcourt: add sweep senders in htlcTimeoutResolver
This commit adds new methods to handle making sweep requests based on
the spending path used by the outgoing htlc output.
2024-12-20 17:54:04 +08:00
yyforyongyu
cb18940e75
contractcourt: remove redundant return value in claimCleanUp 2024-12-20 17:54:04 +08:00
yyforyongyu
c92d7f0fd0
contractcourt: add resolver handlers in htlcSuccessResolver
This commit refactors the `Resolve` method by adding two resolver
handlers to handle waiting for spending confirmations.
2024-12-20 17:54:04 +08:00
yyforyongyu
fb499bc4cc
contractcourt: add sweep senders in htlcSuccessResolver
This commit is a pure refactor which moves the sweep handling logic
into the new methods.
2024-12-20 17:54:04 +08:00
yyforyongyu
10e5a43e46
contractcourt: add spend path helpers in timeout/success resolver
This commit adds a few helper methods to decide how the htlc output
should be spent.
2024-12-20 17:54:03 +08:00
yyforyongyu
1f2cfc6a60
contractcourt: add verbose logging in resolvers
We now put the outpoint in the resolver's logging so it's easier to
debug.
2024-12-20 17:54:03 +08:00
yyforyongyu
0bab6b3419
chainio: use errgroup to limit num of goroutines 2024-12-20 17:54:03 +08:00
yyforyongyu
1d53e7d081
multi: improve logging 2024-12-20 17:54:03 +08:00
yyforyongyu
45b243c91c
contractcourt: fix linter funlen
Refactor the `Start` method to fix the linter error:
```
contractcourt/chain_arbitrator.go:568: Function 'Start' is too long (242 > 200) (funlen)
```
2024-12-20 17:54:03 +08:00
yyforyongyu
8fc9154506
lnd: start blockbeatDispatcher and register consumers 2024-12-20 17:54:03 +08:00
yyforyongyu
16a8b623b3
lnd: add new method startLowLevelServices
In this commit we start to break up the starting process into smaller
pieces, which is needed in the following commit to initialize blockbeat
consumers.
2024-12-20 17:54:02 +08:00
yyforyongyu
545cea0546
multi: start consumers with a starting blockbeat
This is needed so the consumers have an initial state about the current
block.
2024-12-20 17:54:02 +08:00
yyforyongyu
802353036e
contractcourt: start channel arbitrator with blockbeat
To avoid calling GetBestBlock again.
2024-12-20 17:54:02 +08:00
yyforyongyu
e2e59bd90c
contractcourt: remove the immediate param used in Resolve
This `immediate` flag was added as a hack so during a restart, the
pending resolvers would offer the inputs to the sweeper and ask it to
sweep them immediately. This is no longer needed due to `blockbeat`, as
now during restart, a block is always sent to all subsystems via the
flow `ChainArb` -> `ChannelArb` -> resolvers -> sweeper. Thus, when
there are pending inputs offered, they will be processed by the sweeper
immediately.
2024-12-20 17:54:02 +08:00
yyforyongyu
71295534bb
contractcourt: remove block subscription in channel arbitrator
This commit removes the block subscriptions used in `ChannelArbitrator`,
and replaces them with the blockbeat managed by `BlockbeatDispatcher`.
2024-12-20 17:54:02 +08:00
yyforyongyu
045f8432b7
contractcourt: remove block subscription in chain arbitrator
This commit removes the block subscriptions used in `ChainArbitrator`
and replaces them with the blockbeat managed by `BlockbeatDispatcher`.
2024-12-20 17:54:02 +08:00
yyforyongyu
5f9d473702
contractcourt: remove waitForHeight in resolvers
The sweeper can handle the waiting so there's no need to wait for blocks
inside the resolvers. Offering the inputs prior to their mature
heights also guarantees that inputs with the same deadline are
aggregated.
2024-12-20 17:54:02 +08:00
yyforyongyu
3ac6752a77
sweep: remove redundant notifications during shutdown
This commit removes the hack introduced in #4851. Previously we had this
issue because the chain notifier was stopped before the sweeper, which
was changed a while back and we now always stop the chain notifier last.
In addition, since we no longer subscribe to the block epoch chan
directly, this issue can no longer happen.
2024-12-20 17:54:01 +08:00
yyforyongyu
e113f39d26
sweep: remove block subscription in UtxoSweeper and TxPublisher
This commit removes the independent block subscriptions in `UtxoSweeper`
and `TxPublisher`. These subsystems now listen to the `BlockbeatChan`
for new blocks.
2024-12-20 17:54:01 +08:00
yyforyongyu
801fd6b85b
multi: implement Consumer on subsystems
This commit implements `Consumer` on `TxPublisher`, `UtxoSweeper`,
`ChainArbitrator` and `ChannelArbitrator`.
2024-12-20 17:54:01 +08:00
yyforyongyu
b5a3a27c77
chainio: add partial implementation of Consumer interface 2024-12-20 17:54:01 +08:00
yyforyongyu
4b83d87baa
chainio: add BlockbeatDispatcher to dispatch blockbeats
This commit adds a blockbeat dispatcher which handles sending new blocks
to all subscribed consumers.
2024-12-20 17:54:01 +08:00
yyforyongyu
a1eb87e280
chainio: add helper methods to dispatch beats
This commit adds two methods to handle dispatching beats. These are
exported methods so other systems can send beats to their managed
subinstances.
2024-12-20 17:54:01 +08:00
yyforyongyu
01ac713aec
chainio: implement Blockbeat
In this commit, a minimal implementation of `Blockbeat` is added to
synchronize block heights, which will be used in `ChainArb`, `Sweeper`,
and `TxPublisher` so blocks are processed sequentially among them.
2024-12-20 17:54:00 +08:00
yyforyongyu
060ff013c1
chainio: introduce chainio to handle block synchronization
This commit inits the package `chainio` and defines the interface
`Blockbeat` and `Consumer`. The `Consumer` interface must be implemented by
other subsystems if they require block epoch subscriptions.
2024-12-20 17:54:00 +08:00
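The interfaces likely look something like the following sketch; the exact method sets in lnd's chainio package may differ.

```
// Sketch of the chainio contracts; method names are approximations.
package chainio

// Blockbeat carries the information about a single new block that is
// dispatched to all consumers.
type Blockbeat interface {
	// Height returns the height of the block this beat refers to.
	Height() int32
}

// Consumer must be implemented by any subsystem that needs block epoch
// notifications delivered in a synchronized, sequential fashion.
type Consumer interface {
	// Name returns a human-readable name for logging.
	Name() string

	// ProcessBlock handles a new beat; the dispatcher waits for it to
	// finish before considering the block processed by this consumer.
	ProcessBlock(beat Blockbeat) error
}
```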
yyforyongyu
30ee450091
sweep: make sure nil tx is handled
After the previous commit, it should be clear that the tx may have failed
to be created in a `TxFailed` event. We now make sure to catch that case
to avoid a panic.
2024-12-20 17:54:00 +08:00
yyforyongyu
78ce757e7b
sweep: break initialBroadcast into two steps
Combined with the following commit, this gives us more granular
control over the bump result when handling it in the sweeper.
2024-12-20 17:54:00 +08:00
yyforyongyu
f0c4e6dba1
sweep: remove redundant loopvar assign 2024-12-20 17:54:00 +08:00
yyforyongyu
7545bbfa92
sweep: make sure defaultDeadline is derived from the mature height 2024-12-20 17:54:00 +08:00
yyforyongyu
afc08c6623
sweep: add method isMature on SweeperInput
Also updated `handlePendingSweepsReq` to skip immature inputs so the
returned results are the same as those in pre-0.18.0.
2024-12-20 17:53:59 +08:00
yyforyongyu
ba238962d6
sweep: add method handleBumpEventError and fix markInputFailed
Previously in `markInputFailed`, we'd remove all inputs under the same
group via `removeExclusiveGroup`. This is wrong: when the current
sweep fails for this input, it shouldn't affect the other inputs.
2024-12-20 17:53:59 +08:00
yyforyongyu
719ca5b229
sweep: remove redundant error from Broadcast 2024-12-20 17:53:59 +08:00
yyforyongyu
77ff2c0585
sweep: add handleInitialBroadcast to handle initial broadcast
This commit adds a new method `handleInitialBroadcast` to handle the
initial broadcast. Previously we'd broadcast immediately inside
`Broadcast`, which will no longer work once the `blockbeat` is
implemented, as the action to publish is now always triggered by a new
block. Meanwhile, we still keep the option to bypass the block trigger
so users can broadcast immediately by setting `Immediate` to true.
2024-12-20 17:53:59 +08:00
yyforyongyu
2479dc7f2e
sweep: handle inputs locally instead of relying on the tx
This commit changes how inputs are handled upon receiving a bump result.
Previously the inputs were taken from `BumpResult.Tx`; they are now
handled locally, as we remember the input set when
sending the bump request and handle that input set when a result is
received.
2024-12-20 17:53:59 +08:00
yyforyongyu
d0c7fd8aac
sweep: add new interface method Immediate
This prepares for the following commit where we let the fee bumper
decide whether to broadcast immediately or not.
2024-12-20 17:53:59 +08:00
yyforyongyu
5f64280df4
sweep: add new error ErrZeroFeeRateDelta 2024-12-20 17:53:59 +08:00
yyforyongyu
6c2e8b9a00
sweep: add new state TxFatal for erroneous sweepings
Also updated the logging. This new state will be used in the following
commit.
2024-12-20 17:53:58 +08:00
Oliver Gugger
1dfb5a0c20
Merge pull request #9382 from guggero/linter-update
lint: deprecate old linters, use new ref commit
2024-12-20 10:51:08 +01:00
Oliver Gugger
f3ddf4d8ea
.golangci.yml: speed up linter by updating start commit
With this we allow the linter to only look at recent changes, since
everything between that old commit and this most recent one has been
linted correctly anyway.
2024-12-20 10:02:24 +01:00
Oliver Gugger
ad29096aa1
.golangci.yml: turn off deprecated linters
All these linters produced a deprecation warning. They've all been
replaced by new linters, so we can safely turn them off.
2024-12-20 10:01:42 +01:00
Oliver Gugger
03eab4db64
Merge pull request #9377 from alingse/fix-nilnesserr
fix the check where node1Err != nil but a nil error value is returned
2024-12-20 09:26:38 +01:00
yyforyongyu
1167181d06
workflows: fix no space left error 2024-12-20 14:00:30 +08:00
Oliver Gugger
fd4531c751
Merge pull request #9379 from ziggie1984/release-doc-fix
Correct release notes for 18.4 and 19.0
2024-12-19 20:14:16 +01:00
ziggie
90db546b64
docs: update release-notes
Correct 18.4 release-notes for a change which will only be part
of 19.0.
2024-12-19 18:58:13 +01:00
alingse
a79fd08294 fix the check where node1Err != nil but a nil error value is returned
Signed-off-by: alingse <alingse@foxmail.com>
2024-12-19 14:57:28 +00:00
Oliver Gugger
2d629174e7
Merge pull request #9376 from yyforyongyu/remove-replace
gomod: remove replaces of `kvdb` and `sqldb`
2024-12-19 13:27:27 +01:00
yyforyongyu
785cef2a96
gomod: remove replace of sqldb and kvdb 2024-12-19 19:02:46 +08:00
Olaoluwa Osuntokun
78cbed985f
contractcourt: add rapid derived fuzz test for HtlcAuxBlob
In this commit, we add a rapid-derived fuzz test for the HtlcAuxBlob
test. This plugs rapid (randomized property testing) into Go's built-in
fuzzer. The wrapper will use the fuzz stream and pass it into
rapid, where the stream is used to make structured test inputs which are
tested against the existing properties.

This can be done more widely in the codebase; we pick a simple example
to port first before tackling others.
2024-12-12 16:57:58 +01:00
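A toy example of the rapid-to-fuzzer bridge described above, using rapid's MakeFuzz adapter on a placeholder round-trip property rather than the actual HtlcAuxBlob codec; the encode/decode helpers are stand-ins.

```
// Toy example; the real test targets lnd's HtlcAuxBlob encoding.
package fuzz

import (
	"bytes"
	"testing"

	"pgregory.net/rapid"
)

// encode/decode are placeholder stand-ins for the codec under test.
func encode(b []byte) []byte { return append([]byte(nil), b...) }
func decode(b []byte) []byte { return append([]byte(nil), b...) }

// FuzzRoundTrip feeds Go's fuzz stream into rapid, which turns it into
// structured inputs and checks the round-trip property against them.
func FuzzRoundTrip(f *testing.F) {
	f.Fuzz(rapid.MakeFuzz(func(t *rapid.T) {
		blob := rapid.SliceOf(rapid.Byte()).Draw(t, "blob")

		if !bytes.Equal(decode(encode(blob)), blob) {
			t.Fatalf("round trip mismatch for %x", blob)
		}
	}))
}
```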
364 changed files with 32120 additions and 17982 deletions

View file

@ -19,13 +19,14 @@ runs:
steps:
- name: setup go ${{ inputs.go-version }}
uses: actions/setup-go@v3
uses: actions/setup-go@v5
with:
go-version: '${{ inputs.go-version }}'
cache: 'false'
- name: go module and build cache
- name: go cache
if: ${{ inputs.use-build-cache == 'yes' }}
uses: actions/cache@v3
uses: actions/cache@v4
with:
# In order:
# * Module download cache
@ -44,7 +45,7 @@ runs:
- name: go module cache
if: ${{ inputs.use-build-cache == 'no' }}
uses: actions/cache@v3
uses: actions/cache@v4
with:
# Just the module download cache.
path: |

View file

@ -11,8 +11,9 @@ Steps for reviewers to follow to test the change.
- [ ] Bug fixes contain tests triggering the bug to prevent regressions.
### Code Style and Documentation
- [ ] The change obeys the [Code Documentation and Commenting](https://github.com/lightningnetwork/lnd/blob/master/docs/code_contribution_guidelines.md#CodeDocumentation) guidelines, and lines wrap at 80.
- [ ] Commits follow the [Ideal Git Commit Structure](https://github.com/lightningnetwork/lnd/blob/master/docs/code_contribution_guidelines.md#IdealGitCommitStructure).
- [ ] The change is not [insubstantial](https://github.com/lightningnetwork/lnd/blob/master/docs/code_contribution_guidelines.md#substantial-contributions-only). Typo fixes are not accepted to fight bot spam.
- [ ] The change obeys the [Code Documentation and Commenting](https://github.com/lightningnetwork/lnd/blob/master/docs/code_contribution_guidelines.md#code-documentation-and-commenting) guidelines, and lines wrap at 80.
- [ ] Commits follow the [Ideal Git Commit Structure](https://github.com/lightningnetwork/lnd/blob/master/docs/code_contribution_guidelines.md#ideal-git-commit-structure).
- [ ] Any new logging statements use an appropriate subsystem and logging level.
- [ ] Any new lncli commands have appropriate tags in the comments for the rpc in the proto file.
- [ ] [There is a change description in the release notes](https://github.com/lightningnetwork/lnd/tree/master/docs/release-notes), or `[skip ci]` in the commit message for small changes.

View file

@ -23,11 +23,18 @@ defaults:
env:
BITCOIN_VERSION: "28"
TRANCHES: 8
# TRANCHES defines the number of tranches used in the itests.
TRANCHES: 16
# SMALL_TRANCHES defines the number of tranches used in the less stable itest
# builds
#
# TODO(yy): remove this value and use TRANCHES.
SMALL_TRANCHES: 8
# If you change this please also update GO_VERSION in Makefile (then run
# `make lint` to see where else it needs to be updated as well).
GO_VERSION: 1.22.6
GO_VERSION: 1.23.6
jobs:
########################
@ -37,8 +44,11 @@ jobs:
name: Sqlc check
runs-on: ubuntu-latest
steps:
- name: cleanup space
run: rm -rf /opt/hostedtoolcache
- name: git checkout
uses: actions/checkout@v3
uses: actions/checkout@v4
- name: setup go ${{ env.GO_VERSION }}
uses: ./.github/actions/setup-go
@ -60,8 +70,11 @@ jobs:
name: RPC and mobile compilation check
runs-on: ubuntu-latest
steps:
- name: cleanup space
run: rm -rf /opt/hostedtoolcache
- name: git checkout
uses: actions/checkout@v3
uses: actions/checkout@v4
- name: setup go ${{ env.GO_VERSION }}
uses: ./.github/actions/setup-go
@ -88,8 +101,11 @@ jobs:
name: check commits
runs-on: ubuntu-latest
steps:
- name: cleanup space
run: rm -rf /opt/hostedtoolcache
- name: git checkout
uses: actions/checkout@v3
uses: actions/checkout@v4
with:
fetch-depth: 0
@ -111,8 +127,11 @@ jobs:
name: lint code
runs-on: ubuntu-latest
steps:
- name: cleanup space
run: rm -rf /opt/hostedtoolcache
- name: git checkout
uses: actions/checkout@v3
uses: actions/checkout@v4
with:
fetch-depth: 0
@ -151,8 +170,11 @@ jobs:
- name: arm
sys: darwin-arm64 freebsd-arm linux-armv6 linux-armv7 linux-arm64 windows-arm
steps:
- name: cleanup space
run: rm -rf /opt/hostedtoolcache
- name: git checkout
uses: actions/checkout@v3
uses: actions/checkout@v4
- name: setup go ${{ env.GO_VERSION }}
uses: ./.github/actions/setup-go
@ -171,8 +193,11 @@ jobs:
name: sample configuration check
runs-on: ubuntu-latest
steps:
- name: cleanup space
run: rm -rf /opt/hostedtoolcache
- name: git checkout
uses: actions/checkout@v3
uses: actions/checkout@v4
- name: setup go ${{ env.GO_VERSION }}
uses: ./.github/actions/setup-go
@ -193,16 +218,19 @@ jobs:
fail-fast: false
matrix:
unit_type:
- btcd unit-cover
- unit-cover
- unit tags="kvdb_etcd"
- unit tags="kvdb_postgres"
- unit tags="kvdb_sqlite"
- btcd unit-race
- unit-race
- unit-module
steps:
- name: cleanup space
run: rm -rf /opt/hostedtoolcache
- name: git checkout
uses: actions/checkout@v3
uses: actions/checkout@v4
with:
fetch-depth: 0
@ -211,7 +239,7 @@ jobs:
uses: ./.github/actions/rebase
- name: git checkout fuzzing seeds
uses: actions/checkout@v3
uses: actions/checkout@v4
with:
repository: lightninglabs/lnd-fuzz
path: lnd-fuzz
@ -231,20 +259,26 @@ jobs:
- name: run ${{ matrix.unit_type }}
run: make ${{ matrix.unit_type }}
- name: Clean coverage
run: grep -Ev '(\.pb\.go|\.pb\.json\.go|\.pb\.gw\.go)' coverage.txt > coverage-norpc.txt
if: matrix.unit_type == 'unit-cover'
- name: Send coverage
uses: ziggie1984/actions-goveralls@c440f43938a4032b627d2b03d61d4ae1a2ba2b5c
if: matrix.unit_type == 'btcd unit-cover'
uses: coverallsapp/github-action@v2
if: matrix.unit_type == 'unit-cover'
continue-on-error: true
with:
path-to-profile: coverage.txt
file: coverage-norpc.txt
flag-name: 'unit'
format: 'golang'
parallel: true
########################
# run ubuntu integration tests
# run integration tests with TRANCHES
########################
ubuntu-integration-test:
name: run ubuntu itests
basic-integration-test:
name: basic itests
runs-on: ubuntu-latest
if: '!contains(github.event.pull_request.labels.*.name, ''no-itest'')'
strategy:
@ -258,23 +292,14 @@ jobs:
args: backend=bitcoind cover=1
- name: bitcoind-notxindex
args: backend="bitcoind notxindex"
- name: bitcoind-rpcpolling
args: backend="bitcoind rpcpolling" cover=1
- name: bitcoind-etcd
args: backend=bitcoind dbbackend=etcd
- name: bitcoind-postgres
args: backend=bitcoind dbbackend=postgres
- name: bitcoind-sqlite
args: backend=bitcoind dbbackend=sqlite
- name: bitcoind-postgres-nativesql
args: backend=bitcoind dbbackend=postgres nativesql=true
- name: bitcoind-sqlite-nativesql
args: backend=bitcoind dbbackend=sqlite nativesql=true
- name: neutrino
args: backend=neutrino cover=1
steps:
- name: cleanup space
run: rm -rf /opt/hostedtoolcache
- name: git checkout
uses: actions/checkout@v3
uses: actions/checkout@v4
with:
fetch-depth: 0
@ -292,14 +317,20 @@ jobs:
run: ./scripts/install_bitcoind.sh $BITCOIN_VERSION
- name: run ${{ matrix.name }}
run: make itest-parallel tranches=${{ env.TRANCHES }} ${{ matrix.args }}
run: make itest-parallel tranches=${{ env.TRANCHES }} ${{ matrix.args }} shuffleseed=${{ github.run_id }}${{ strategy.job-index }}
- name: Clean coverage
run: grep -Ev '(\.pb\.go|\.pb\.json\.go|\.pb\.gw\.go)' coverage.txt > coverage-norpc.txt
if: ${{ contains(matrix.args, 'cover=1') }}
- name: Send coverage
if: ${{ contains(matrix.args, 'cover=1') }}
uses: ziggie1984/actions-goveralls@c440f43938a4032b627d2b03d61d4ae1a2ba2b5c
continue-on-error: true
uses: coverallsapp/github-action@v2
with:
path-to-profile: coverage.txt
file: coverage-norpc.txt
flag-name: 'itest-${{ matrix.name }}'
format: 'golang'
parallel: true
- name: Zip log files on failure
@ -308,68 +339,40 @@ jobs:
run: 7z a logs-itest-${{ matrix.name }}.zip itest/**/*.log itest/postgres.log
- name: Upload log files on failure
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
if: ${{ failure() }}
with:
name: logs-itest-${{ matrix.name }}
path: logs-itest-${{ matrix.name }}.zip
retention-days: 5
########################
# run windows integration test
# run integration tests with SMALL_TRANCHES
########################
windows-integration-test:
name: run windows itest
runs-on: windows-latest
integration-test:
name: itests
runs-on: ubuntu-latest
if: '!contains(github.event.pull_request.labels.*.name, ''no-itest'')'
strategy:
# Allow other tests in the matrix to continue if one fails.
fail-fast: false
matrix:
include:
- name: bitcoind-rpcpolling
args: backend="bitcoind rpcpolling"
- name: bitcoind-etcd
args: backend=bitcoind dbbackend=etcd
- name: bitcoind-sqlite
args: backend=bitcoind dbbackend=sqlite
- name: bitcoind-sqlite-nativesql
args: backend=bitcoind dbbackend=sqlite nativesql=true
- name: bitcoind-postgres
args: backend=bitcoind dbbackend=postgres
- name: bitcoind-postgres-nativesql
args: backend=bitcoind dbbackend=postgres nativesql=true
steps:
- name: git checkout
uses: actions/checkout@v3
with:
fetch-depth: 0
- name: fetch and rebase on ${{ github.base_ref }}
if: github.event_name == 'pull_request'
uses: ./.github/actions/rebase
- name: setup go ${{ env.GO_VERSION }}
uses: ./.github/actions/setup-go
with:
go-version: '${{ env.GO_VERSION }}'
key-prefix: integration-test
- name: run itest
run: make itest-parallel tranches=${{ env.TRANCHES }} windows=1
- name: kill any remaining lnd processes
if: ${{ failure() }}
shell: powershell
run: taskkill /IM lnd-itest.exe /T /F
- name: Zip log files on failure
if: ${{ failure() }}
timeout-minutes: 5 # timeout after 5 minute
run: 7z a logs-itest-windows.zip itest/**/*.log
- name: Upload log files on failure
uses: actions/upload-artifact@v3
if: ${{ failure() }}
with:
name: logs-itest-windows
path: logs-itest-windows.zip
retention-days: 5
########################
# run macOS integration test
########################
macos-integration-test:
name: run macOS itest
runs-on: macos-14
if: '!contains(github.event.pull_request.labels.*.name, ''no-itest'')'
steps:
- name: git checkout
uses: actions/checkout@v3
uses: actions/checkout@v4
with:
fetch-depth: 0
@ -384,13 +387,108 @@ jobs:
key-prefix: integration-test
- name: install bitcoind
run: |
wget https://bitcoincore.org/bin/bitcoin-core-${BITCOIN_VERSION}.0/bitcoin-${BITCOIN_VERSION}.0-arm64-apple-darwin.tar.gz
tar zxvf bitcoin-${BITCOIN_VERSION}.0-arm64-apple-darwin.tar.gz
mv bitcoin-${BITCOIN_VERSION}.0 /tmp/bitcoin
run: ./scripts/install_bitcoind.sh $BITCOIN_VERSION
- name: run ${{ matrix.name }}
run: make itest-parallel tranches=${{ env.SMALL_TRANCHES }} ${{ matrix.args }} shuffleseed=${{ github.run_id }}${{ strategy.job-index }}
- name: Clean coverage
run: grep -Ev '(\.pb\.go|\.pb\.json\.go|\.pb\.gw\.go)' coverage.txt > coverage-norpc.txt
if: ${{ contains(matrix.args, 'cover=1') }}
- name: Send coverage
if: ${{ contains(matrix.args, 'cover=1') }}
continue-on-error: true
uses: coverallsapp/github-action@v2
with:
file: coverage-norpc.txt
flag-name: 'itest-${{ matrix.name }}'
format: 'golang'
parallel: true
- name: Zip log files on failure
if: ${{ failure() }}
timeout-minutes: 5 # timeout after 5 minutes
run: 7z a logs-itest-${{ matrix.name }}.zip itest/**/*.log
- name: Upload log files on failure
uses: actions/upload-artifact@v4
if: ${{ failure() }}
with:
name: logs-itest-${{ matrix.name }}
path: logs-itest-${{ matrix.name }}.zip
retention-days: 5
########################
# run windows integration test
########################
windows-integration-test:
name: windows itest
runs-on: windows-latest
if: '!contains(github.event.pull_request.labels.*.name, ''no-itest'')'
steps:
- name: git checkout
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: fetch and rebase on ${{ github.base_ref }}
if: github.event_name == 'pull_request'
uses: ./.github/actions/rebase
- name: setup go ${{ env.GO_VERSION }}
uses: ./.github/actions/setup-go
with:
go-version: '${{ env.GO_VERSION }}'
key-prefix: integration-test
- name: run itest
run: PATH=$PATH:/tmp/bitcoin/bin make itest-parallel tranches=${{ env.TRANCHES }} backend=bitcoind
run: make itest-parallel tranches=${{ env.SMALL_TRANCHES }} windows=1 shuffleseed=${{ github.run_id }}
- name: kill any remaining lnd processes
if: ${{ failure() }}
shell: powershell
run: taskkill /IM lnd-itest.exe /T /F
- name: Zip log files on failure
if: ${{ failure() }}
timeout-minutes: 5 # timeout after 5 minutes
run: 7z a logs-itest-windows.zip itest/**/*.log
- name: Upload log files on failure
uses: actions/upload-artifact@v4
if: ${{ failure() }}
with:
name: logs-itest-windows
path: logs-itest-windows.zip
retention-days: 5
########################
# run macOS integration test
########################
macos-integration-test:
name: macOS itest
runs-on: macos-14
if: '!contains(github.event.pull_request.labels.*.name, ''no-itest'')'
steps:
- name: git checkout
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: fetch and rebase on ${{ github.base_ref }}
if: github.event_name == 'pull_request'
uses: ./.github/actions/rebase
- name: setup go ${{ env.GO_VERSION }}
uses: ./.github/actions/setup-go
with:
go-version: '${{ env.GO_VERSION }}'
key-prefix: integration-test
- name: run itest
run: make itest-parallel tranches=${{ env.SMALL_TRANCHES }} shuffleseed=${{ github.run_id }}
- name: Zip log files on failure
if: ${{ failure() }}
@ -398,7 +496,7 @@ jobs:
run: 7z a logs-itest-macos.zip itest/**/*.log
- name: Upload log files on failure
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
if: ${{ failure() }}
with:
name: logs-itest-macos
@ -420,8 +518,11 @@ jobs:
- github.com/golang/protobuf v1.5.3
steps:
- name: cleanup space
run: rm -rf /opt/hostedtoolcache
- name: git checkout
uses: actions/checkout@v3
uses: actions/checkout@v4
- name: ensure dependencies at correct version
run: if ! grep -q "${{ matrix.pinned_dep }}" go.mod; then echo dependency ${{ matrix.pinned_dep }} should not be altered ; exit 1 ; fi
@ -434,18 +535,39 @@ jobs:
runs-on: ubuntu-latest
if: '!contains(github.event.pull_request.labels.*.name, ''no-changelog'')'
steps:
- name: cleanup space
run: rm -rf /opt/hostedtoolcache
- name: git checkout
uses: actions/checkout@v3
uses: actions/checkout@v4
- name: release notes check
run: scripts/check-release-notes.sh
########################
# Backwards Compatibility Test
########################
backwards-compatability-test:
name: backwards compatibility test
runs-on: ubuntu-latest
steps:
- name: git checkout
uses: actions/checkout@v4
- name: 🐳 Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: 🛡️ backwards compatibility test
run: make backwards-compat-test
# Notify about the completion of all coverage collecting jobs.
finish:
if: ${{ always() }}
needs: [unit-test, ubuntu-integration-test]
needs: [unit-test, basic-integration-test]
runs-on: ubuntu-latest
steps:
- uses: ziggie1984/actions-goveralls@c440f43938a4032b627d2b03d61d4ae1a2ba2b5c
- name: Send coverage
uses: coverallsapp/github-action@v2
continue-on-error: true
with:
parallel-finished: true
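The itest jobs above now pass `shuffleseed=${{ github.run_id }}...` through to the test harness so that each CI run executes the test cases in a different, but reproducible, order. A minimal sketch of the idea (the `shuffleCases` helper is hypothetical, not lnd's actual harness code):

```go
package main

import (
	"fmt"
	"math/rand"
)

// shuffleCases returns a copy of the test case names shuffled
// deterministically by the given seed: the same seed always yields the
// same order, so a failing CI run can be replayed locally.
func shuffleCases(cases []string, seed int64) []string {
	shuffled := append([]string(nil), cases...)
	rng := rand.New(rand.NewSource(seed))
	rng.Shuffle(len(shuffled), func(i, j int) {
		shuffled[i], shuffled[j] = shuffled[j], shuffled[i]
	})
	return shuffled
}

func main() {
	cases := []string{"open_channel", "multi_hop_payment", "force_close"}
	fmt.Println(shuffleCases(cases, 42))
}
```

Reusing the seed from a failed run reproduces the same ordering of test tranches locally.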

View file

@ -12,7 +12,7 @@ defaults:
env:
# If you change this please also update GO_VERSION in Makefile (then run
# `make lint` to see where else it needs to be updated as well).
GO_VERSION: 1.22.6
GO_VERSION: 1.23.6
jobs:
main:
@ -20,12 +20,12 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: git checkout
uses: actions/checkout@v2
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: setup go ${{ env.GO_VERSION }}
uses: actions/setup-go@v2
uses: actions/setup-go@v5
with:
go-version: '${{ env.GO_VERSION }}'
@ -121,8 +121,8 @@ jobs:
```
tar -xvzf vendor.tar.gz
tar -xvzf lnd-source-${{ env.RELEASE_VERSION }}.tar.gz
GO111MODULE=on go install -v -mod=vendor -ldflags "-X github.com/lightningnetwork/lnd/build.Commit=${{ env.RELEASE_VERSION }}" ./cmd/lnd
GO111MODULE=on go install -v -mod=vendor -ldflags "-X github.com/lightningnetwork/lnd/build.Commit=${{ env.RELEASE_VERSION }}" ./cmd/lncli
go install -v -mod=vendor -ldflags "-X github.com/lightningnetwork/lnd/build.Commit=${{ env.RELEASE_VERSION }}" ./cmd/lnd
go install -v -mod=vendor -ldflags "-X github.com/lightningnetwork/lnd/build.Commit=${{ env.RELEASE_VERSION }}" ./cmd/lncli
```
The `-mod=vendor` flag tells the `go build` command that it doesn't need to fetch the dependencies, and instead, they're all enclosed in the local vendor directory.

View file

@ -1,7 +1,7 @@
run:
# If you change this please also update GO_VERSION in Makefile (then run
# `make lint` to see where else it needs to be updated as well).
go: "1.22.6"
go: "1.23.6"
# Abort after 10 minutes.
timeout: 10m
@ -38,10 +38,6 @@ linters-settings:
# Check for incorrect fmt.Errorf error wrapping.
errorf: true
govet:
# Don't report about shadowed variables
check-shadowing: false
gofmt:
# simplify code: gofmt with `-s` option, true by default
simplify: true
@ -60,6 +56,7 @@ linters-settings:
- G402 # Look for bad TLS connection settings.
- G306 # Poor file permissions used when writing to a new file.
- G601 # Implicit memory aliasing in for loop.
- G115 # Integer overflow in conversion.
staticcheck:
checks: ["-SA1019"]
@ -145,16 +142,13 @@ linters:
- unparam
- wastedassign
# Disable gofumpt as it has weird behavior regarding formatting multiple
# lines for a function which is in conflict with our contribution
# guidelines. See https://github.com/mvdan/gofumpt/issues/235.
- gofumpt
# Disable whitespace linter as it has conflict rules against our
# contribution guidelines. See https://github.com/bombsimon/wsl/issues/109.
#
# TODO(yy): bring it back when the above issue is fixed.
# contribution guidelines.
- wsl
# Allow using default empty values.
@ -183,7 +177,7 @@ linters:
- wrapcheck
# Allow dynamic errors.
- goerr113
- err113
# We use ErrXXX instead.
- errname
@ -198,7 +192,7 @@ linters:
# The linter is too aggressive and doesn't add much value since reviewers
# will also catch magic numbers that make sense to extract.
- gomnd
- mnd
# Some of the tests cannot be parallelized. On the other hand, we don't
# gain much performance with this check so we disable it for now until
@ -218,10 +212,12 @@ linters:
- intrange
- goconst
# Deprecated linters that have been replaced by newer ones.
- tenv
issues:
# Only show newly introduced problems.
new-from-rev: 77c7f776d5cbf9e147edc81d65ae5ba177a684e5
new-from-rev: 03eab4db64540aa5f789c617793e4459f4ba9e78
# Skip autogenerated files for mobile and gRPC as well as copied code for
# internal use.

View file

@ -1,6 +1,6 @@
# If you change this please also update GO_VERSION in Makefile (then run
# `make lint` to see where else it needs to be updated as well).
FROM golang:1.22.6-alpine as builder
FROM golang:1.23.6-alpine as builder
# Force Go to use the cgo based DNS resolver. This is required to ensure DNS
# queries required to connect to linked containers succeed.

View file

@ -1,19 +1,17 @@
PKG := github.com/lightningnetwork/lnd
ESCPKG := github.com\/lightningnetwork\/lnd
MOBILE_PKG := $(PKG)/mobile
TOOLS_DIR := tools
GOCC ?= go
PREFIX ?= /usr/local
BTCD_PKG := github.com/btcsuite/btcd
GOACC_PKG := github.com/ory/go-acc
GOIMPORTS_PKG := github.com/rinchsan/gosimports/cmd/gosimports
GO_BIN := ${GOPATH}/bin
BTCD_BIN := $(GO_BIN)/btcd
GOIMPORTS_BIN := $(GO_BIN)/gosimports
GOMOBILE_BIN := $(GO_BIN)/gomobile
GOACC_BIN := $(GO_BIN)/go-acc
MOBILE_BUILD_DIR :=${GOPATH}/src/$(MOBILE_PKG)/build
IOS_BUILD_DIR := $(MOBILE_BUILD_DIR)/ios
@ -24,22 +22,17 @@ ANDROID_BUILD := $(ANDROID_BUILD_DIR)/Lndmobile.aar
COMMIT := $(shell git describe --tags --dirty)
# Determine the minor version of the active Go installation.
ACTIVE_GO_VERSION := $(shell go version | sed -nre 's/^[^0-9]*(([0-9]+\.)*[0-9]+).*/\1/p')
ACTIVE_GO_VERSION := $(shell $(GOCC) version | sed -nre 's/^[^0-9]*(([0-9]+\.)*[0-9]+).*/\1/p')
ACTIVE_GO_VERSION_MINOR := $(shell echo $(ACTIVE_GO_VERSION) | cut -d. -f2)
LOOPVARFIX :=
ifeq ($(shell expr $(ACTIVE_GO_VERSION_MINOR) \>= 21), 1)
LOOPVARFIX := GOEXPERIMENT=loopvar
endif
# GO_VERSION is the Go version used for the release build, docker files, and
# GitHub Actions. This is the reference version for the project. All other Go
# versions are checked against this version.
GO_VERSION = 1.22.6
GO_VERSION = 1.23.6
GOBUILD := $(LOOPVARFIX) go build -v
GOINSTALL := $(LOOPVARFIX) go install -v
GOTEST := $(LOOPVARFIX) go test
GOBUILD := $(GOCC) build -v
GOINSTALL := $(GOCC) install -v
GOTEST := $(GOCC) test
GOFILES_NOVENDOR = $(shell find . -type f -name '*.go' -not -path "./vendor/*" -not -name "*pb.go" -not -name "*pb.gw.go" -not -name "*.pb.json.go")
@ -73,8 +66,8 @@ endif
DOCKER_TOOLS = docker run \
--rm \
-v $(shell bash -c "go env GOCACHE || (mkdir -p /tmp/go-cache; echo /tmp/go-cache)"):/tmp/build/.cache \
-v $(shell bash -c "go env GOMODCACHE || (mkdir -p /tmp/go-modcache; echo /tmp/go-modcache)"):/tmp/build/.modcache \
-v $(shell bash -c "$(GOCC) env GOCACHE || (mkdir -p /tmp/go-cache; echo /tmp/go-cache)"):/tmp/build/.cache \
-v $(shell bash -c "$(GOCC) env GOMODCACHE || (mkdir -p /tmp/go-modcache; echo /tmp/go-modcache)"):/tmp/build/.modcache \
-v $(shell bash -c "mkdir -p /tmp/go-lint-cache; echo /tmp/go-lint-cache"):/root/.cache/golangci-lint \
-v $$(pwd):/build lnd-tools
@ -91,17 +84,13 @@ all: scratch check install
# ============
# DEPENDENCIES
# ============
$(GOACC_BIN):
@$(call print, "Installing go-acc.")
cd $(TOOLS_DIR); go install -trimpath -tags=tools $(GOACC_PKG)
$(BTCD_BIN):
@$(call print, "Installing btcd.")
cd $(TOOLS_DIR); go install -trimpath $(BTCD_PKG)
cd $(TOOLS_DIR); $(GOCC) install -trimpath $(BTCD_PKG)
$(GOIMPORTS_BIN):
@$(call print, "Installing goimports.")
cd $(TOOLS_DIR); go install -trimpath $(GOIMPORTS_PKG)
cd $(TOOLS_DIR); $(GOCC) install -trimpath $(GOIMPORTS_PKG)
# ============
# INSTALLATION
@ -220,7 +209,7 @@ clean-itest-logs:
itest-only: clean-itest-logs db-instance
@$(call print, "Running integration tests with ${backend} backend.")
date
EXEC_SUFFIX=$(EXEC_SUFFIX) scripts/itest_part.sh 0 1 $(TEST_FLAGS) $(ITEST_FLAGS) -test.v
EXEC_SUFFIX=$(EXEC_SUFFIX) scripts/itest_part.sh 0 1 $(SHUFFLE_SEED) $(TEST_FLAGS) $(ITEST_FLAGS) -test.v
$(COLLECT_ITEST_COVERAGE)
#? itest: Build and run integration tests
@ -233,7 +222,7 @@ itest-race: build-itest-race itest-only
itest-parallel: clean-itest-logs build-itest db-instance
@$(call print, "Running tests")
date
EXEC_SUFFIX=$(EXEC_SUFFIX) scripts/itest_parallel.sh $(ITEST_PARALLELISM) $(NUM_ITEST_TRANCHES) $(TEST_FLAGS) $(ITEST_FLAGS)
EXEC_SUFFIX=$(EXEC_SUFFIX) scripts/itest_parallel.sh $(ITEST_PARALLELISM) $(NUM_ITEST_TRANCHES) $(SHUFFLE_SEED) $(TEST_FLAGS) $(ITEST_FLAGS)
$(COLLECT_ITEST_COVERAGE)
#? itest-clean: Kill all running itest processes
@ -257,12 +246,12 @@ unit-debug: $(BTCD_BIN)
$(UNIT_DEBUG)
#? unit-cover: Run unit tests in coverage mode
unit-cover: $(GOACC_BIN)
unit-cover: $(BTCD_BIN)
@$(call print, "Running unit coverage tests.")
$(GOACC)
$(UNIT_COVER)
#? unit-race: Run unit tests in race detector mode
unit-race:
unit-race: $(BTCD_BIN)
@$(call print, "Running unit race tests.")
env CGO_ENABLED=1 GORACE="history_size=7 halt_on_errors=1" $(UNIT_RACE)
@ -275,18 +264,18 @@ unit-bench: $(BTCD_BIN)
# FLAKE HUNTING
# =============
#? flakehunter: Run the integration tests continuously until one fails
flakehunter: build-itest
#? flakehunter-itest: Run the integration tests continuously until one fails
flakehunter-itest: build-itest
@$(call print, "Flake hunting ${backend} integration tests.")
while [ $$? -eq 0 ]; do make itest-only icase='${icase}' backend='${backend}'; done
#? flake-unit: Run the unit tests continuously until one fails
flake-unit:
@$(call print, "Flake hunting unit tests.")
while [ $$? -eq 0 ]; do GOTRACEBACK=all $(UNIT) -count=1; done
#? flakehunter-unit: Run the unit tests continuously until one fails
flakehunter-unit:
@$(call print, "Flake hunting unit test.")
scripts/unit-test-flake-hunter.sh ${pkg} ${case}
#? flakehunter-parallel: Run the integration tests continuously until one fails, running up to ITEST_PARALLELISM test tranches in parallel (default 4)
flakehunter-parallel:
#? flakehunter-itest-parallel: Run the integration tests continuously until one fails, running up to ITEST_PARALLELISM test tranches in parallel (default 4)
flakehunter-itest-parallel:
@$(call print, "Flake hunting ${backend} integration tests in parallel.")
while [ $$? -eq 0 ]; do make itest-parallel tranches=1 parallel=${ITEST_PARALLELISM} icase='${icase}' backend='${backend}'; done
@ -363,6 +352,11 @@ help: Makefile
@$(call print, "Listing commands:")
@sed -n 's/^#?//p' $< | column -t -s ':' | sort | sed -e 's/^/ /'
#? backwards-compat-test: Run basic backwards compatibility test
backwards-compat-test:
@$(call print, "Running backwards compatability test")
./scripts/bw-compatibility-test/test.sh
#? sqlc: Generate sql models and queries in Go
sqlc:
@$(call print, "Generating sql models and queries in Go")
@ -407,7 +401,7 @@ mobile-rpc:
#? vendor: Create a vendor directory with all dependencies
vendor:
@$(call print, "Re-creating vendor directory.")
rm -r vendor/; go mod vendor
rm -r vendor/; $(GOCC) mod vendor
#? apple: Build mobile RPC stubs and project template for iOS and macOS
apple: mobile-rpc

384
accessman.go Normal file
View file

@ -0,0 +1,384 @@
package lnd
import (
"fmt"
"sync"
"github.com/btcsuite/btcd/btcec/v2"
"github.com/lightningnetwork/lnd/channeldb"
)
// accessMan is responsible for managing the server's access permissions.
type accessMan struct {
cfg *accessManConfig
// banScoreMtx is used for the server's ban tracking. If the server
// mutex is also going to be locked, ensure that this is locked after
// the server mutex.
banScoreMtx sync.RWMutex
// peerCounts is a mapping from remote public key to {bool, uint64}
// where the bool indicates that we have an open/closed channel with
// the peer and where the uint64 indicates the number of pending-open
// channels we currently have with them. This mapping will be used to
// determine access permissions for the peer. The map key is the
// string-version of the serialized public key.
//
// NOTE: This MUST be accessed with the banScoreMtx held.
peerCounts map[string]channeldb.ChanCount
// peerScores stores each connected peer's access status. The map key
// is the string-version of the serialized public key.
//
// NOTE: This MUST be accessed with the banScoreMtx held.
peerScores map[string]peerSlotStatus
// numRestricted tracks the number of peers with restricted access in
// peerScores. This MUST be accessed with the banScoreMtx held.
numRestricted int64
}
type accessManConfig struct {
// initAccessPerms checks the channeldb for initial access permissions
// and then populates the peerCounts and peerScores maps.
initAccessPerms func() (map[string]channeldb.ChanCount, error)
// shouldDisconnect determines whether we should disconnect a peer or
// not.
shouldDisconnect func(*btcec.PublicKey) (bool, error)
// maxRestrictedSlots is the number of restricted slots we'll allocate.
maxRestrictedSlots int64
}
func newAccessMan(cfg *accessManConfig) (*accessMan, error) {
a := &accessMan{
cfg: cfg,
peerCounts: make(map[string]channeldb.ChanCount),
peerScores: make(map[string]peerSlotStatus),
}
counts, err := a.cfg.initAccessPerms()
if err != nil {
return nil, err
}
// We'll populate the server's peerCounts map with the counts fetched
// via initAccessPerms. Also note that we haven't yet connected to the
// peers.
for peerPub, count := range counts {
a.peerCounts[peerPub] = count
}
return a, nil
}
// assignPeerPerms assigns a new peer its permissions. This does not track the
// access in the maps. This is intentional.
func (a *accessMan) assignPeerPerms(remotePub *btcec.PublicKey) (
peerAccessStatus, error) {
// Default is restricted unless the below filters say otherwise.
access := peerStatusRestricted
shouldDisconnect, err := a.cfg.shouldDisconnect(remotePub)
if err != nil {
// Access is restricted here.
return access, err
}
if shouldDisconnect {
// Access is restricted here.
return access, ErrGossiperBan
}
peerMapKey := string(remotePub.SerializeCompressed())
// Lock banScoreMtx for reading so that we can safely read the access maps
// below.
a.banScoreMtx.RLock()
defer a.banScoreMtx.RUnlock()
if count, found := a.peerCounts[peerMapKey]; found {
if count.HasOpenOrClosedChan {
access = peerStatusProtected
} else if count.PendingOpenCount != 0 {
access = peerStatusTemporary
}
}
// If we've reached this point and access hasn't changed from
// restricted, then we need to check if we even have a slot for this
// peer.
if a.numRestricted >= a.cfg.maxRestrictedSlots &&
access == peerStatusRestricted {
return access, ErrNoMoreRestrictedAccessSlots
}
return access, nil
}
// newPendingOpenChan is called after the pending-open channel has been
// committed to the database. This may transition a restricted-access peer to a
// temporary-access peer.
func (a *accessMan) newPendingOpenChan(remotePub *btcec.PublicKey) error {
a.banScoreMtx.Lock()
defer a.banScoreMtx.Unlock()
peerMapKey := string(remotePub.SerializeCompressed())
// Fetch the peer's access status from peerScores.
status, found := a.peerScores[peerMapKey]
if !found {
// If we didn't find the peer, we'll return an error.
return ErrNoPeerScore
}
switch status.state {
case peerStatusProtected:
// If this peer's access status is protected, we don't need to
// do anything.
return nil
case peerStatusTemporary:
// If this peer's access status is temporary, we'll need to
// update the peerCounts map. The peer's access status will
// stay temporary.
peerCount, found := a.peerCounts[peerMapKey]
if !found {
// Error if we did not find any info in peerCounts.
return ErrNoPendingPeerInfo
}
// Increment the pending channel amount.
peerCount.PendingOpenCount += 1
a.peerCounts[peerMapKey] = peerCount
case peerStatusRestricted:
// If the peer's access status is restricted, then we can
// transition it to a temporary-access peer. We'll need to
// update numRestricted and also peerScores. We'll also need to
// update peerCounts.
peerCount := channeldb.ChanCount{
HasOpenOrClosedChan: false,
PendingOpenCount: 1,
}
a.peerCounts[peerMapKey] = peerCount
// A restricted-access slot has opened up.
a.numRestricted -= 1
a.peerScores[peerMapKey] = peerSlotStatus{
state: peerStatusTemporary,
}
default:
// This should not be possible.
return fmt.Errorf("invalid peer access status")
}
return nil
}
// newPendingCloseChan is called when a pending-open channel prematurely closes
// before the funding transaction has confirmed. This potentially demotes a
// temporary-access peer to a restricted-access peer. If no restricted-access
// slots are available, the peer will be disconnected.
func (a *accessMan) newPendingCloseChan(remotePub *btcec.PublicKey) error {
a.banScoreMtx.Lock()
defer a.banScoreMtx.Unlock()
peerMapKey := string(remotePub.SerializeCompressed())
// Fetch the peer's access status from peerScores.
status, found := a.peerScores[peerMapKey]
if !found {
return ErrNoPeerScore
}
switch status.state {
case peerStatusProtected:
// If this peer is protected, we don't do anything.
return nil
case peerStatusTemporary:
// If this peer is temporary, we need to check if it will
// revert to a restricted-access peer.
peerCount, found := a.peerCounts[peerMapKey]
if !found {
// Error if we did not find any info in peerCounts.
return ErrNoPendingPeerInfo
}
currentNumPending := peerCount.PendingOpenCount - 1
if currentNumPending == 0 {
// Remove the entry from peerCounts.
delete(a.peerCounts, peerMapKey)
// If this is the only pending-open channel for this
// peer and it's getting removed, attempt to demote
// this peer to a restricted peer.
if a.numRestricted == a.cfg.maxRestrictedSlots {
// There are no available restricted slots, so
// we need to disconnect this peer. We leave
// this up to the caller.
return ErrNoMoreRestrictedAccessSlots
}
// Otherwise, there is an available restricted-access
// slot, so we can demote this peer.
a.peerScores[peerMapKey] = peerSlotStatus{
state: peerStatusRestricted,
}
// Update numRestricted.
a.numRestricted++
return nil
}
// Else, we don't need to demote this peer since it has other
// pending-open channels with us.
peerCount.PendingOpenCount = currentNumPending
a.peerCounts[peerMapKey] = peerCount
return nil
case peerStatusRestricted:
// This should not be possible. This indicates an error.
return fmt.Errorf("invalid peer access state transition")
default:
// This should not be possible.
return fmt.Errorf("invalid peer access status")
}
}
// newOpenChan is called when a pending-open channel becomes an open channel
// (i.e. the funding transaction has confirmed). If the remote peer is a
// temporary-access peer, it will be promoted to a protected-access peer.
func (a *accessMan) newOpenChan(remotePub *btcec.PublicKey) error {
a.banScoreMtx.Lock()
defer a.banScoreMtx.Unlock()
peerMapKey := string(remotePub.SerializeCompressed())
// Fetch the peer's access status from peerScores.
status, found := a.peerScores[peerMapKey]
if !found {
// If we didn't find the peer, we'll return an error.
return ErrNoPeerScore
}
switch status.state {
case peerStatusProtected:
// If the peer's state is already protected, we don't need to
// do anything more.
return nil
case peerStatusTemporary:
// If the peer's state is temporary, we'll upgrade the peer to
// a protected peer.
peerCount, found := a.peerCounts[peerMapKey]
if !found {
// Error if we did not find any info in peerCounts.
return ErrNoPendingPeerInfo
}
peerCount.HasOpenOrClosedChan = true
a.peerCounts[peerMapKey] = peerCount
newStatus := peerSlotStatus{
state: peerStatusProtected,
}
a.peerScores[peerMapKey] = newStatus
return nil
case peerStatusRestricted:
// This should not be possible. For the server to receive a
// state-transition event via NewOpenChan, the server must have
// previously granted this peer "temporary" access. This
// temporary access would not have been revoked or downgraded
// without `CloseChannel` being called with the pending
// argument set to true. This means that an open-channel state
// transition would be impossible. Therefore, we can return an
// error.
return fmt.Errorf("invalid peer access status")
default:
// This should not be possible.
return fmt.Errorf("invalid peer access status")
}
}
// checkIncomingConnBanScore checks, given the remote's public key, whether we
// should accept this incoming connection or reject it outright. This does not
// assign to the server's peerScores maps. It is just an inbound filter that
// the brontide listeners use.
func (a *accessMan) checkIncomingConnBanScore(remotePub *btcec.PublicKey) (
bool, error) {
a.banScoreMtx.RLock()
defer a.banScoreMtx.RUnlock()
peerMapKey := string(remotePub.SerializeCompressed())
if _, found := a.peerCounts[peerMapKey]; !found {
// Check numRestricted to see if there is an available slot. In
// the future, it's possible to add better heuristics.
if a.numRestricted < a.cfg.maxRestrictedSlots {
// There is an available slot.
return true, nil
}
// If there are no slots left, then we reject this connection.
return false, ErrNoMoreRestrictedAccessSlots
}
// Else, the peer is either protected or temporary.
return true, nil
}
// addPeerAccess tracks a peer's access in the maps. This should be called when
// the peer has fully connected.
func (a *accessMan) addPeerAccess(remotePub *btcec.PublicKey,
access peerAccessStatus) {
// Add the remote public key to peerScores.
a.banScoreMtx.Lock()
defer a.banScoreMtx.Unlock()
peerMapKey := string(remotePub.SerializeCompressed())
a.peerScores[peerMapKey] = peerSlotStatus{state: access}
// Increment numRestricted.
if access == peerStatusRestricted {
a.numRestricted++
}
}
// removePeerAccess removes the peer's access from the maps. This should be
// called when the peer has been disconnected.
func (a *accessMan) removePeerAccess(remotePub *btcec.PublicKey) {
a.banScoreMtx.Lock()
defer a.banScoreMtx.Unlock()
peerMapKey := string(remotePub.SerializeCompressed())
status, found := a.peerScores[peerMapKey]
if !found {
return
}
if status.state == peerStatusRestricted {
// If the status is restricted, then we decrement
// numRestricted.
a.numRestricted--
}
delete(a.peerScores, peerMapKey)
}
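For orientation, the sketch below shows how the manager is constructed and how a new inbound peer flows through it: `checkIncomingConnBanScore` gates the raw connection, `assignPeerPerms` picks one of the restricted/temporary/protected states, and `addPeerAccess` records the result once the peer is fully connected. The stub closures are placeholders; the real wiring (seeding `initAccessPerms` from channeldb and `shouldDisconnect` from the gossiper's ban logic) lives in server.go and is not reproduced here.

```go
// newSketchAccessMan is a sketch only: it assumes it sits in package lnd
// next to accessman.go and uses stub closures in place of the server's
// database and gossiper hooks.
func newSketchAccessMan() (*accessMan, error) {
	return newAccessMan(&accessManConfig{
		// No known peers yet; the server seeds this from the
		// database on startup.
		initAccessPerms: func() (map[string]channeldb.ChanCount, error) {
			return make(map[string]channeldb.ChanCount), nil
		},

		// Never force a disconnect in this sketch.
		shouldDisconnect: func(*btcec.PublicKey) (bool, error) {
			return false, nil
		},

		// Allow up to 100 channel-less peers at a time.
		maxRestrictedSlots: 100,
	})
}
```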

153
accessman_test.go Normal file
View file

@ -0,0 +1,153 @@
package lnd
import (
"testing"
"github.com/btcsuite/btcd/btcec/v2"
"github.com/lightningnetwork/lnd/channeldb"
"github.com/stretchr/testify/require"
)
// assertInboundConnection asserts that we're able to accept an inbound
// connection successfully without any access permissions being violated.
func assertInboundConnection(t *testing.T, a *accessMan,
remotePub *btcec.PublicKey, status peerAccessStatus) {
remotePubSer := string(remotePub.SerializeCompressed())
isSlotAvailable, err := a.checkIncomingConnBanScore(remotePub)
require.NoError(t, err)
require.True(t, isSlotAvailable)
peerAccess, err := a.assignPeerPerms(remotePub)
require.NoError(t, err)
require.Equal(t, status, peerAccess)
a.addPeerAccess(remotePub, peerAccess)
peerScore, ok := a.peerScores[remotePubSer]
require.True(t, ok)
require.Equal(t, status, peerScore.state)
}
func assertAccessState(t *testing.T, a *accessMan, remotePub *btcec.PublicKey,
expectedStatus peerAccessStatus) {
remotePubSer := string(remotePub.SerializeCompressed())
peerScore, ok := a.peerScores[remotePubSer]
require.True(t, ok)
require.Equal(t, expectedStatus, peerScore.state)
}
// TestAccessManRestrictedSlots tests that the configurable number of
// restricted slots are properly allocated. It also tests that certain peers
// with access permissions are allowed to bypass the slot mechanism.
func TestAccessManRestrictedSlots(t *testing.T) {
t.Parallel()
// We'll pre-populate the map to mock the database fetch. We'll make
// three peers. One has an open/closed channel. One has both an open
// / closed channel and a pending channel. The last one has only a
// pending channel.
peerPriv1, err := btcec.NewPrivateKey()
require.NoError(t, err)
peerKey1 := peerPriv1.PubKey()
peerKeySer1 := string(peerKey1.SerializeCompressed())
peerPriv2, err := btcec.NewPrivateKey()
require.NoError(t, err)
peerKey2 := peerPriv2.PubKey()
peerKeySer2 := string(peerKey2.SerializeCompressed())
peerPriv3, err := btcec.NewPrivateKey()
require.NoError(t, err)
peerKey3 := peerPriv3.PubKey()
peerKeySer3 := string(peerKey3.SerializeCompressed())
initPerms := func() (map[string]channeldb.ChanCount, error) {
return map[string]channeldb.ChanCount{
peerKeySer1: {
HasOpenOrClosedChan: true,
},
peerKeySer2: {
HasOpenOrClosedChan: true,
PendingOpenCount: 1,
},
peerKeySer3: {
HasOpenOrClosedChan: false,
PendingOpenCount: 1,
},
}, nil
}
disconnect := func(*btcec.PublicKey) (bool, error) {
return false, nil
}
cfg := &accessManConfig{
initAccessPerms: initPerms,
shouldDisconnect: disconnect,
maxRestrictedSlots: 1,
}
a, err := newAccessMan(cfg)
require.NoError(t, err)
// Check that the peerCounts map is correctly populated with three
// peers.
require.Equal(t, 0, int(a.numRestricted))
require.Equal(t, 3, len(a.peerCounts))
peerCount1, ok := a.peerCounts[peerKeySer1]
require.True(t, ok)
require.True(t, peerCount1.HasOpenOrClosedChan)
require.Equal(t, 0, int(peerCount1.PendingOpenCount))
peerCount2, ok := a.peerCounts[peerKeySer2]
require.True(t, ok)
require.True(t, peerCount2.HasOpenOrClosedChan)
require.Equal(t, 1, int(peerCount2.PendingOpenCount))
peerCount3, ok := a.peerCounts[peerKeySer3]
require.True(t, ok)
require.False(t, peerCount3.HasOpenOrClosedChan)
require.Equal(t, 1, int(peerCount3.PendingOpenCount))
// We'll now start to connect the peers. We'll add a new fourth peer
// that will take up the restricted slot. The first three peers should
// be able to bypass this restricted slot mechanism.
peerPriv4, err := btcec.NewPrivateKey()
require.NoError(t, err)
peerKey4 := peerPriv4.PubKey()
// Follow the normal process of an incoming connection. We check if we
// can accommodate this peer in checkIncomingConnBanScore and then we
// assign its access permissions and then insert into the map.
assertInboundConnection(t, a, peerKey4, peerStatusRestricted)
// Connect the three peers. This should happen without any issue.
assertInboundConnection(t, a, peerKey1, peerStatusProtected)
assertInboundConnection(t, a, peerKey2, peerStatusProtected)
assertInboundConnection(t, a, peerKey3, peerStatusTemporary)
// Check that a pending-open channel promotes the restricted peer.
err = a.newPendingOpenChan(peerKey4)
require.NoError(t, err)
assertAccessState(t, a, peerKey4, peerStatusTemporary)
// Check that an open channel promotes the temporary peer.
err = a.newOpenChan(peerKey3)
require.NoError(t, err)
assertAccessState(t, a, peerKey3, peerStatusProtected)
// We should be able to accommodate a new peer.
peerPriv5, err := btcec.NewPrivateKey()
require.NoError(t, err)
peerKey5 := peerPriv5.PubKey()
assertInboundConnection(t, a, peerKey5, peerStatusRestricted)
// Check that a pending-close channel event for peer 4 demotes the
// peer.
err = a.newPendingCloseChan(peerKey4)
require.ErrorIs(t, err, ErrNoMoreRestrictedAccessSlots)
}

View file

@ -1,20 +1,15 @@
package autopilot
import (
"bytes"
"encoding/hex"
"errors"
"net"
"sort"
"sync/atomic"
"time"
"github.com/btcsuite/btcd/btcec/v2"
"github.com/btcsuite/btcd/btcec/v2/ecdsa"
"github.com/btcsuite/btcd/btcutil"
graphdb "github.com/lightningnetwork/lnd/graph/db"
"github.com/lightningnetwork/lnd/graph/db/models"
"github.com/lightningnetwork/lnd/kvdb"
"github.com/lightningnetwork/lnd/lnwire"
"github.com/lightningnetwork/lnd/routing/route"
)
@ -36,7 +31,7 @@ var (
//
// TODO(roasbeef): move impl to main package?
type databaseChannelGraph struct {
db *graphdb.ChannelGraph
db GraphSource
}
// A compile time assertion to ensure databaseChannelGraph meets the
@ -44,8 +39,8 @@ type databaseChannelGraph struct {
var _ ChannelGraph = (*databaseChannelGraph)(nil)
// ChannelGraphFromDatabase returns an instance of the autopilot.ChannelGraph
// backed by a live, open channeldb instance.
func ChannelGraphFromDatabase(db *graphdb.ChannelGraph) ChannelGraph {
// backed by a GraphSource.
func ChannelGraphFromDatabase(db GraphSource) ChannelGraph {
return &databaseChannelGraph{
db: db,
}
@ -55,11 +50,7 @@ func ChannelGraphFromDatabase(db *graphdb.ChannelGraph) ChannelGraph {
// channeldb.LightningNode. The wrapper method implement the autopilot.Node
// interface.
type dbNode struct {
db *graphdb.ChannelGraph
tx kvdb.RTx
node *models.LightningNode
tx graphdb.NodeRTx
}
// A compile time assertion to ensure dbNode meets the autopilot.Node
@ -72,7 +63,7 @@ var _ Node = (*dbNode)(nil)
//
// NOTE: Part of the autopilot.Node interface.
func (d *dbNode) PubKey() [33]byte {
return d.node.PubKeyBytes
return d.tx.Node().PubKeyBytes
}
// Addrs returns a slice of publicly reachable public TCP addresses that the
@ -80,7 +71,7 @@ func (d *dbNode) PubKey() [33]byte {
//
// NOTE: Part of the autopilot.Node interface.
func (d *dbNode) Addrs() []net.Addr {
return d.node.Addresses
return d.tx.Node().Addresses
}
// ForEachChannel is a higher-order function that will be used to iterate
@ -90,43 +81,35 @@ func (d *dbNode) Addrs() []net.Addr {
//
// NOTE: Part of the autopilot.Node interface.
func (d *dbNode) ForEachChannel(cb func(ChannelEdge) error) error {
return d.db.ForEachNodeChannelTx(d.tx, d.node.PubKeyBytes,
func(tx kvdb.RTx, ei *models.ChannelEdgeInfo, ep,
_ *models.ChannelEdgePolicy) error {
return d.tx.ForEachChannel(func(ei *models.ChannelEdgeInfo, ep,
_ *models.ChannelEdgePolicy) error {
// Skip channels for which no outgoing edge policy is
// available.
//
// TODO(joostjager): Ideally the case where channels
// have a nil policy should be supported, as autopilot
// is not looking at the policies. For now, it is not
// easily possible to get a reference to the other end
// LightningNode object without retrieving the policy.
if ep == nil {
return nil
}
// Skip channels for which no outgoing edge policy is available.
//
// TODO(joostjager): Ideally the case where channels have a nil
// policy should be supported, as autopilot is not looking at
// the policies. For now, it is not easily possible to get a
// reference to the other end LightningNode object without
// retrieving the policy.
if ep == nil {
return nil
}
node, err := d.db.FetchLightningNodeTx(
tx, ep.ToNode,
)
if err != nil {
return err
}
node, err := d.tx.FetchNode(ep.ToNode)
if err != nil {
return err
}
edge := ChannelEdge{
ChanID: lnwire.NewShortChanIDFromInt(
ep.ChannelID,
),
Capacity: ei.Capacity,
Peer: &dbNode{
tx: tx,
db: d.db,
node: node,
},
}
edge := ChannelEdge{
ChanID: lnwire.NewShortChanIDFromInt(ep.ChannelID),
Capacity: ei.Capacity,
Peer: &dbNode{
tx: node,
},
}
return cb(edge)
})
return cb(edge)
})
}
// ForEachNode is a higher-order function that should be called once for each
@ -135,353 +118,25 @@ func (d *dbNode) ForEachChannel(cb func(ChannelEdge) error) error {
//
// NOTE: Part of the autopilot.ChannelGraph interface.
func (d *databaseChannelGraph) ForEachNode(cb func(Node) error) error {
return d.db.ForEachNode(func(tx kvdb.RTx,
n *models.LightningNode) error {
return d.db.ForEachNode(func(nodeTx graphdb.NodeRTx) error {
// We'll skip over any node that doesn't have any advertised
// addresses. As we won't be able to reach them to actually
// open any channels.
if len(n.Addresses) == 0 {
if len(nodeTx.Node().Addresses) == 0 {
return nil
}
node := &dbNode{
db: d.db,
tx: tx,
node: n,
tx: nodeTx,
}
return cb(node)
})
}
// addRandChannel creates a new channel between two target nodes. This
// function is meant to aid in the generation of random graphs for use within
// test cases that exercise the autopilot package.
func (d *databaseChannelGraph) addRandChannel(node1, node2 *btcec.PublicKey,
capacity btcutil.Amount) (*ChannelEdge, *ChannelEdge, error) {
fetchNode := func(pub *btcec.PublicKey) (*models.LightningNode, error) {
if pub != nil {
vertex, err := route.NewVertexFromBytes(
pub.SerializeCompressed(),
)
if err != nil {
return nil, err
}
dbNode, err := d.db.FetchLightningNode(vertex)
switch {
case errors.Is(err, graphdb.ErrGraphNodeNotFound):
fallthrough
case errors.Is(err, graphdb.ErrGraphNotFound):
graphNode := &models.LightningNode{
HaveNodeAnnouncement: true,
Addresses: []net.Addr{
&net.TCPAddr{
IP: bytes.Repeat([]byte("a"), 16),
},
},
Features: lnwire.NewFeatureVector(
nil, lnwire.Features,
),
AuthSigBytes: testSig.Serialize(),
}
graphNode.AddPubKey(pub)
if err := d.db.AddLightningNode(graphNode); err != nil {
return nil, err
}
case err != nil:
return nil, err
}
return dbNode, nil
}
nodeKey, err := randKey()
if err != nil {
return nil, err
}
dbNode := &models.LightningNode{
HaveNodeAnnouncement: true,
Addresses: []net.Addr{
&net.TCPAddr{
IP: bytes.Repeat([]byte("a"), 16),
},
},
Features: lnwire.NewFeatureVector(
nil, lnwire.Features,
),
AuthSigBytes: testSig.Serialize(),
}
dbNode.AddPubKey(nodeKey)
if err := d.db.AddLightningNode(dbNode); err != nil {
return nil, err
}
return dbNode, nil
}
vertex1, err := fetchNode(node1)
if err != nil {
return nil, nil, err
}
vertex2, err := fetchNode(node2)
if err != nil {
return nil, nil, err
}
var lnNode1, lnNode2 *btcec.PublicKey
if bytes.Compare(vertex1.PubKeyBytes[:], vertex2.PubKeyBytes[:]) == -1 {
lnNode1, _ = vertex1.PubKey()
lnNode2, _ = vertex2.PubKey()
} else {
lnNode1, _ = vertex2.PubKey()
lnNode2, _ = vertex1.PubKey()
}
chanID := randChanID()
edge := &models.ChannelEdgeInfo{
ChannelID: chanID.ToUint64(),
Capacity: capacity,
}
edge.AddNodeKeys(lnNode1, lnNode2, lnNode1, lnNode2)
if err := d.db.AddChannelEdge(edge); err != nil {
return nil, nil, err
}
edgePolicy := &models.ChannelEdgePolicy{
SigBytes: testSig.Serialize(),
ChannelID: chanID.ToUint64(),
LastUpdate: time.Now(),
TimeLockDelta: 10,
MinHTLC: 1,
MaxHTLC: lnwire.NewMSatFromSatoshis(capacity),
FeeBaseMSat: 10,
FeeProportionalMillionths: 10000,
MessageFlags: 1,
ChannelFlags: 0,
}
if err := d.db.UpdateEdgePolicy(edgePolicy); err != nil {
return nil, nil, err
}
edgePolicy = &models.ChannelEdgePolicy{
SigBytes: testSig.Serialize(),
ChannelID: chanID.ToUint64(),
LastUpdate: time.Now(),
TimeLockDelta: 10,
MinHTLC: 1,
MaxHTLC: lnwire.NewMSatFromSatoshis(capacity),
FeeBaseMSat: 10,
FeeProportionalMillionths: 10000,
MessageFlags: 1,
ChannelFlags: 1,
}
if err := d.db.UpdateEdgePolicy(edgePolicy); err != nil {
return nil, nil, err
}
return &ChannelEdge{
ChanID: chanID,
Capacity: capacity,
Peer: &dbNode{
db: d.db,
node: vertex1,
},
},
&ChannelEdge{
ChanID: chanID,
Capacity: capacity,
Peer: &dbNode{
db: d.db,
node: vertex2,
},
},
nil
}
func (d *databaseChannelGraph) addRandNode() (*btcec.PublicKey, error) {
nodeKey, err := randKey()
if err != nil {
return nil, err
}
dbNode := &models.LightningNode{
HaveNodeAnnouncement: true,
Addresses: []net.Addr{
&net.TCPAddr{
IP: bytes.Repeat([]byte("a"), 16),
},
},
Features: lnwire.NewFeatureVector(
nil, lnwire.Features,
),
AuthSigBytes: testSig.Serialize(),
}
dbNode.AddPubKey(nodeKey)
if err := d.db.AddLightningNode(dbNode); err != nil {
return nil, err
}
return nodeKey, nil
}
// memChannelGraph is an implementation of the autopilot.ChannelGraph backed by
// an in-memory graph.
type memChannelGraph struct {
graph map[NodeID]*memNode
}
// A compile time assertion to ensure memChannelGraph meets the
// autopilot.ChannelGraph interface.
var _ ChannelGraph = (*memChannelGraph)(nil)
// newMemChannelGraph creates a new blank in-memory channel graph
// implementation.
func newMemChannelGraph() *memChannelGraph {
return &memChannelGraph{
graph: make(map[NodeID]*memNode),
}
}
// ForEachNode is a higher-order function that should be called once for each
// connected node within the channel graph. If the passed callback returns an
// error, then execution should be terminated.
//
// NOTE: Part of the autopilot.ChannelGraph interface.
func (m memChannelGraph) ForEachNode(cb func(Node) error) error {
for _, node := range m.graph {
if err := cb(node); err != nil {
return err
}
}
return nil
}
// randChanID generates a new random channel ID.
func randChanID() lnwire.ShortChannelID {
id := atomic.AddUint64(&chanIDCounter, 1)
return lnwire.NewShortChanIDFromInt(id)
}
// randKey returns a random public key.
func randKey() (*btcec.PublicKey, error) {
priv, err := btcec.NewPrivateKey()
if err != nil {
return nil, err
}
return priv.PubKey(), nil
}
// addRandChannel creates a new channel between two target nodes. This
// function is meant to aid in the generation of random graphs for use within
// test cases that exercise the autopilot package.
func (m *memChannelGraph) addRandChannel(node1, node2 *btcec.PublicKey,
capacity btcutil.Amount) (*ChannelEdge, *ChannelEdge, error) {
var (
vertex1, vertex2 *memNode
ok bool
)
if node1 != nil {
vertex1, ok = m.graph[NewNodeID(node1)]
if !ok {
vertex1 = &memNode{
pub: node1,
addrs: []net.Addr{
&net.TCPAddr{
IP: bytes.Repeat([]byte("a"), 16),
},
},
}
}
} else {
newPub, err := randKey()
if err != nil {
return nil, nil, err
}
vertex1 = &memNode{
pub: newPub,
addrs: []net.Addr{
&net.TCPAddr{
IP: bytes.Repeat([]byte("a"), 16),
},
},
}
}
if node2 != nil {
vertex2, ok = m.graph[NewNodeID(node2)]
if !ok {
vertex2 = &memNode{
pub: node2,
addrs: []net.Addr{
&net.TCPAddr{
IP: bytes.Repeat([]byte("a"), 16),
},
},
}
}
} else {
newPub, err := randKey()
if err != nil {
return nil, nil, err
}
vertex2 = &memNode{
pub: newPub,
addrs: []net.Addr{
&net.TCPAddr{
IP: bytes.Repeat([]byte("a"), 16),
},
},
}
}
edge1 := ChannelEdge{
ChanID: randChanID(),
Capacity: capacity,
Peer: vertex2,
}
vertex1.chans = append(vertex1.chans, edge1)
edge2 := ChannelEdge{
ChanID: randChanID(),
Capacity: capacity,
Peer: vertex1,
}
vertex2.chans = append(vertex2.chans, edge2)
m.graph[NewNodeID(vertex1.pub)] = vertex1
m.graph[NewNodeID(vertex2.pub)] = vertex2
return &edge1, &edge2, nil
}
func (m *memChannelGraph) addRandNode() (*btcec.PublicKey, error) {
newPub, err := randKey()
if err != nil {
return nil, err
}
vertex := &memNode{
pub: newPub,
addrs: []net.Addr{
&net.TCPAddr{
IP: bytes.Repeat([]byte("a"), 16),
},
},
}
m.graph[NewNodeID(newPub)] = vertex
return newPub, nil
}
// databaseChannelGraphCached wraps a channeldb.ChannelGraph instance with the
// necessary API to properly implement the autopilot.ChannelGraph interface.
type databaseChannelGraphCached struct {
db *graphdb.ChannelGraph
db GraphSource
}
// A compile time assertion to ensure databaseChannelGraphCached meets the
@ -490,7 +145,7 @@ var _ ChannelGraph = (*databaseChannelGraphCached)(nil)
// ChannelGraphFromCachedDatabase returns an instance of the
// autopilot.ChannelGraph backed by a live, open channeldb instance.
func ChannelGraphFromCachedDatabase(db *graphdb.ChannelGraph) ChannelGraph {
func ChannelGraphFromCachedDatabase(db GraphSource) ChannelGraph {
return &databaseChannelGraphCached{
db: db,
}

View file

@ -6,7 +6,9 @@ import (
"github.com/btcsuite/btcd/btcec/v2"
"github.com/btcsuite/btcd/btcutil"
"github.com/btcsuite/btcd/wire"
graphdb "github.com/lightningnetwork/lnd/graph/db"
"github.com/lightningnetwork/lnd/lnwire"
"github.com/lightningnetwork/lnd/routing/route"
)
// DefaultConfTarget is the default confirmation target for autopilot channels.
@ -216,3 +218,20 @@ type ChannelController interface {
// TODO(roasbeef): add force option?
CloseChannel(chanPoint *wire.OutPoint) error
}
// GraphSource represents read access to the channel graph.
type GraphSource interface {
// ForEachNode iterates through all the stored vertices/nodes in the
// graph, executing the passed callback with each node encountered. If
// the callback returns an error, then the transaction is aborted and
// the iteration stops early. Any operations performed on the NodeTx
// passed to the call-back are executed under the same read transaction.
ForEachNode(func(graphdb.NodeRTx) error) error
// ForEachNodeCached is similar to ForEachNode, but it utilizes the
// channel graph cache if one is available. It is less consistent than
// ForEachNode since any further calls are made across multiple
// transactions.
ForEachNodeCached(cb func(node route.Vertex,
chans map[uint64]*graphdb.DirectedChannel) error) error
}
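A short sketch of a `GraphSource` consumer (hypothetical, assumed to live in the autopilot package) that applies the same reachability filter as `databaseChannelGraph.ForEachNode` shown earlier: it only counts nodes that advertise at least one address.

```go
// countReachableNodes is a sketch (not part of lnd) of a GraphSource
// consumer: it counts only the nodes that advertise at least one
// address, mirroring the filter applied by
// databaseChannelGraph.ForEachNode.
func countReachableNodes(g GraphSource) (int, error) {
	var count int
	err := g.ForEachNode(func(nodeTx graphdb.NodeRTx) error {
		if len(nodeTx.Node().Addresses) > 0 {
			count++
		}
		return nil
	})

	return count, err
}
```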

View file

@ -2,14 +2,20 @@ package autopilot
import (
"bytes"
"errors"
prand "math/rand"
"net"
"sync/atomic"
"testing"
"time"
"github.com/btcsuite/btcd/btcec/v2"
"github.com/btcsuite/btcd/btcutil"
graphdb "github.com/lightningnetwork/lnd/graph/db"
"github.com/lightningnetwork/lnd/graph/db/models"
"github.com/lightningnetwork/lnd/kvdb"
"github.com/lightningnetwork/lnd/lnwire"
"github.com/lightningnetwork/lnd/routing/route"
"github.com/stretchr/testify/require"
)
@ -24,6 +30,11 @@ type testGraph interface {
addRandNode() (*btcec.PublicKey, error)
}
type testDBGraph struct {
db *graphdb.ChannelGraph
databaseChannelGraph
}
func newDiskChanGraph(t *testing.T) (testGraph, error) {
backend, err := kvdb.GetBoltBackend(&kvdb.BoltBackendConfig{
DBPath: t.TempDir(),
@ -38,12 +49,15 @@ func newDiskChanGraph(t *testing.T) (testGraph, error) {
graphDB, err := graphdb.NewChannelGraph(backend)
require.NoError(t, err)
return &databaseChannelGraph{
return &testDBGraph{
db: graphDB,
databaseChannelGraph: databaseChannelGraph{
db: graphDB,
},
}, nil
}
var _ testGraph = (*databaseChannelGraph)(nil)
var _ testGraph = (*testDBGraph)(nil)
func newMemChanGraph(_ *testing.T) (testGraph, error) {
return newMemChannelGraph(), nil
@ -368,3 +382,357 @@ func TestPrefAttachmentSelectSkipNodes(t *testing.T) {
}
}
}
// addRandChannel creates a new channel between two target nodes. This
// function is meant to aid in the generation of random graphs for use within
// test cases that exercise the autopilot package.
func (d *testDBGraph) addRandChannel(node1, node2 *btcec.PublicKey,
capacity btcutil.Amount) (*ChannelEdge, *ChannelEdge, error) {
fetchNode := func(pub *btcec.PublicKey) (*models.LightningNode, error) {
if pub != nil {
vertex, err := route.NewVertexFromBytes(
pub.SerializeCompressed(),
)
if err != nil {
return nil, err
}
dbNode, err := d.db.FetchLightningNode(vertex)
switch {
case errors.Is(err, graphdb.ErrGraphNodeNotFound):
fallthrough
case errors.Is(err, graphdb.ErrGraphNotFound):
graphNode := &models.LightningNode{
HaveNodeAnnouncement: true,
Addresses: []net.Addr{&net.TCPAddr{
IP: bytes.Repeat(
[]byte("a"), 16,
),
}},
Features: lnwire.NewFeatureVector(
nil, lnwire.Features,
),
AuthSigBytes: testSig.Serialize(),
}
graphNode.AddPubKey(pub)
err := d.db.AddLightningNode(graphNode)
if err != nil {
return nil, err
}
case err != nil:
return nil, err
}
return dbNode, nil
}
nodeKey, err := randKey()
if err != nil {
return nil, err
}
dbNode := &models.LightningNode{
HaveNodeAnnouncement: true,
Addresses: []net.Addr{
&net.TCPAddr{
IP: bytes.Repeat([]byte("a"), 16),
},
},
Features: lnwire.NewFeatureVector(
nil, lnwire.Features,
),
AuthSigBytes: testSig.Serialize(),
}
dbNode.AddPubKey(nodeKey)
if err := d.db.AddLightningNode(dbNode); err != nil {
return nil, err
}
return dbNode, nil
}
vertex1, err := fetchNode(node1)
if err != nil {
return nil, nil, err
}
vertex2, err := fetchNode(node2)
if err != nil {
return nil, nil, err
}
var lnNode1, lnNode2 *btcec.PublicKey
if bytes.Compare(vertex1.PubKeyBytes[:], vertex2.PubKeyBytes[:]) == -1 {
lnNode1, _ = vertex1.PubKey()
lnNode2, _ = vertex2.PubKey()
} else {
lnNode1, _ = vertex2.PubKey()
lnNode2, _ = vertex1.PubKey()
}
chanID := randChanID()
edge := &models.ChannelEdgeInfo{
ChannelID: chanID.ToUint64(),
Capacity: capacity,
}
edge.AddNodeKeys(lnNode1, lnNode2, lnNode1, lnNode2)
if err := d.db.AddChannelEdge(edge); err != nil {
return nil, nil, err
}
edgePolicy := &models.ChannelEdgePolicy{
SigBytes: testSig.Serialize(),
ChannelID: chanID.ToUint64(),
LastUpdate: time.Now(),
TimeLockDelta: 10,
MinHTLC: 1,
MaxHTLC: lnwire.NewMSatFromSatoshis(capacity),
FeeBaseMSat: 10,
FeeProportionalMillionths: 10000,
MessageFlags: 1,
ChannelFlags: 0,
}
if err := d.db.UpdateEdgePolicy(edgePolicy); err != nil {
return nil, nil, err
}
edgePolicy = &models.ChannelEdgePolicy{
SigBytes: testSig.Serialize(),
ChannelID: chanID.ToUint64(),
LastUpdate: time.Now(),
TimeLockDelta: 10,
MinHTLC: 1,
MaxHTLC: lnwire.NewMSatFromSatoshis(capacity),
FeeBaseMSat: 10,
FeeProportionalMillionths: 10000,
MessageFlags: 1,
ChannelFlags: 1,
}
if err := d.db.UpdateEdgePolicy(edgePolicy); err != nil {
return nil, nil, err
}
return &ChannelEdge{
ChanID: chanID,
Capacity: capacity,
Peer: &dbNode{tx: &testNodeTx{
db: d,
node: vertex1,
}},
},
&ChannelEdge{
ChanID: chanID,
Capacity: capacity,
Peer: &dbNode{tx: &testNodeTx{
db: d,
node: vertex2,
}},
},
nil
}
func (d *testDBGraph) addRandNode() (*btcec.PublicKey, error) {
nodeKey, err := randKey()
if err != nil {
return nil, err
}
dbNode := &models.LightningNode{
HaveNodeAnnouncement: true,
Addresses: []net.Addr{
&net.TCPAddr{
IP: bytes.Repeat([]byte("a"), 16),
},
},
Features: lnwire.NewFeatureVector(
nil, lnwire.Features,
),
AuthSigBytes: testSig.Serialize(),
}
dbNode.AddPubKey(nodeKey)
if err := d.db.AddLightningNode(dbNode); err != nil {
return nil, err
}
return nodeKey, nil
}
// memChannelGraph is an implementation of the autopilot.ChannelGraph backed by
// an in-memory graph.
type memChannelGraph struct {
graph map[NodeID]*memNode
}
// A compile time assertion to ensure memChannelGraph meets the
// autopilot.ChannelGraph interface.
var _ ChannelGraph = (*memChannelGraph)(nil)
// newMemChannelGraph creates a new blank in-memory channel graph
// implementation.
func newMemChannelGraph() *memChannelGraph {
return &memChannelGraph{
graph: make(map[NodeID]*memNode),
}
}
// ForEachNode is a higher-order function that should be called once for each
// connected node within the channel graph. If the passed callback returns an
// error, then execution should be terminated.
//
// NOTE: Part of the autopilot.ChannelGraph interface.
func (m *memChannelGraph) ForEachNode(cb func(Node) error) error {
for _, node := range m.graph {
if err := cb(node); err != nil {
return err
}
}
return nil
}
// randChanID generates a new random channel ID.
func randChanID() lnwire.ShortChannelID {
id := atomic.AddUint64(&chanIDCounter, 1)
return lnwire.NewShortChanIDFromInt(id)
}
// randKey returns a random public key.
func randKey() (*btcec.PublicKey, error) {
priv, err := btcec.NewPrivateKey()
if err != nil {
return nil, err
}
return priv.PubKey(), nil
}
// addRandChannel creates a new channel between two target nodes. This
// function is meant to aid in the generation of random graphs for use within
// test cases that exercise the autopilot package.
func (m *memChannelGraph) addRandChannel(node1, node2 *btcec.PublicKey,
capacity btcutil.Amount) (*ChannelEdge, *ChannelEdge, error) {
var (
vertex1, vertex2 *memNode
ok bool
)
if node1 != nil {
vertex1, ok = m.graph[NewNodeID(node1)]
if !ok {
vertex1 = &memNode{
pub: node1,
addrs: []net.Addr{&net.TCPAddr{
IP: bytes.Repeat([]byte("a"), 16),
}},
}
}
} else {
newPub, err := randKey()
if err != nil {
return nil, nil, err
}
vertex1 = &memNode{
pub: newPub,
addrs: []net.Addr{
&net.TCPAddr{
IP: bytes.Repeat([]byte("a"), 16),
},
},
}
}
if node2 != nil {
vertex2, ok = m.graph[NewNodeID(node2)]
if !ok {
vertex2 = &memNode{
pub: node2,
addrs: []net.Addr{&net.TCPAddr{
IP: bytes.Repeat([]byte("a"), 16),
}},
}
}
} else {
newPub, err := randKey()
if err != nil {
return nil, nil, err
}
vertex2 = &memNode{
pub: newPub,
addrs: []net.Addr{
&net.TCPAddr{
IP: bytes.Repeat([]byte("a"), 16),
},
},
}
}
edge1 := ChannelEdge{
ChanID: randChanID(),
Capacity: capacity,
Peer: vertex2,
}
vertex1.chans = append(vertex1.chans, edge1)
edge2 := ChannelEdge{
ChanID: randChanID(),
Capacity: capacity,
Peer: vertex1,
}
vertex2.chans = append(vertex2.chans, edge2)
m.graph[NewNodeID(vertex1.pub)] = vertex1
m.graph[NewNodeID(vertex2.pub)] = vertex2
return &edge1, &edge2, nil
}
func (m *memChannelGraph) addRandNode() (*btcec.PublicKey, error) {
newPub, err := randKey()
if err != nil {
return nil, err
}
vertex := &memNode{
pub: newPub,
addrs: []net.Addr{
&net.TCPAddr{
IP: bytes.Repeat([]byte("a"), 16),
},
},
}
m.graph[NewNodeID(newPub)] = vertex
return newPub, nil
}
type testNodeTx struct {
db *testDBGraph
node *models.LightningNode
}
func (t *testNodeTx) Node() *models.LightningNode {
return t.node
}
func (t *testNodeTx) ForEachChannel(f func(*models.ChannelEdgeInfo,
*models.ChannelEdgePolicy, *models.ChannelEdgePolicy) error) error {
return t.db.db.ForEachNodeChannel(t.node.PubKeyBytes, func(_ kvdb.RTx,
edge *models.ChannelEdgeInfo, policy1,
policy2 *models.ChannelEdgePolicy) error {
return f(edge, policy1, policy2)
})
}
func (t *testNodeTx) FetchNode(pub route.Vertex) (graphdb.NodeRTx, error) {
node, err := t.db.db.FetchLightningNode(pub)
if err != nil {
return nil, err
}
return &testNodeTx{
db: t.db,
node: node,
}, nil
}
var _ graphdb.NodeRTx = (*testNodeTx)(nil)

View file

@ -85,9 +85,7 @@ func NewSimpleGraph(g ChannelGraph) (*SimpleGraph, error) {
func maxVal(mapping map[int]uint32) uint32 {
maxValue := uint32(0)
for _, value := range mapping {
if maxValue < value {
maxValue = value
}
maxValue = max(maxValue, value)
}
return maxValue
}
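The manual comparison branch here is replaced with Go's built-in generic `max`, available since Go 1.21 and well within the 1.23.6 toolchain this change moves to. A trivial standalone illustration:

```go
package main

import "fmt"

func main() {
	values := map[int]uint32{1: 7, 2: 42, 3: 13}

	// max is a language built-in since Go 1.21, so no helper function
	// or explicit if-branch is needed.
	maxValue := uint32(0)
	for _, v := range values {
		maxValue = max(maxValue, v)
	}

	fmt.Println(maxValue) // 42
}
```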

View file

@ -7,12 +7,13 @@ import (
"net"
"time"
"github.com/btcsuite/btcd/btcec/v2"
"github.com/lightningnetwork/lnd/keychain"
)
// defaultHandshakes is the maximum number of handshakes that can be done in
// parallel.
const defaultHandshakes = 1000
const defaultHandshakes = 50
// Listener is an implementation of a net.Conn which executes an authenticated
// key exchange and message encryption protocol dubbed "Machine" after
@ -24,6 +25,10 @@ type Listener struct {
tcp *net.TCPListener
// shouldAccept is a closure that determines if we should accept the
// incoming connection or not based on its public key.
shouldAccept func(*btcec.PublicKey) (bool, error)
handshakeSema chan struct{}
conns chan maybeConn
quit chan struct{}
@ -34,8 +39,8 @@ var _ net.Listener = (*Listener)(nil)
// NewListener returns a new net.Listener which enforces the Brontide scheme
// during both initial connection establishment and data transfer.
func NewListener(localStatic keychain.SingleKeyECDH,
listenAddr string) (*Listener, error) {
func NewListener(localStatic keychain.SingleKeyECDH, listenAddr string,
shouldAccept func(*btcec.PublicKey) (bool, error)) (*Listener, error) {
addr, err := net.ResolveTCPAddr("tcp", listenAddr)
if err != nil {
@ -50,6 +55,7 @@ func NewListener(localStatic keychain.SingleKeyECDH,
brontideListener := &Listener{
localStatic: localStatic,
tcp: l,
shouldAccept: shouldAccept,
handshakeSema: make(chan struct{}, defaultHandshakes),
conns: make(chan maybeConn),
quit: make(chan struct{}),
@ -193,6 +199,28 @@ func (l *Listener) doHandshake(conn net.Conn) {
return
}
// Call the shouldAccept closure to see if the remote node's public key
// is allowed according to our banning heuristic. This is here because
// we do not learn the remote node's public static key until we've
// received and validated Act 3.
remoteKey := brontideConn.RemotePub()
if remoteKey == nil {
connErr := fmt.Errorf("no remote pubkey")
brontideConn.conn.Close()
l.rejectConn(rejectedConnErr(connErr, remoteAddr))
return
}
accepted, acceptErr := l.shouldAccept(remoteKey)
if !accepted {
// Reject the connection.
brontideConn.conn.Close()
l.rejectConn(rejectedConnErr(acceptErr, remoteAddr))
return
}
l.acceptConn(brontideConn)
}
@ -255,3 +283,9 @@ func (l *Listener) Close() error {
func (l *Listener) Addr() net.Addr {
return l.tcp.Addr()
}
// DisabledBanClosure is used in places where NewListener is invoked to bypass
// the ban-scoring.
func DisabledBanClosure(p *btcec.PublicKey) (bool, error) {
return true, nil
}
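
For illustration, the following is a minimal sketch of wiring a custom acceptance closure into the new `NewListener` signature from this change. The `bannedPeers` map and `nodeKeyECDH` value are hypothetical stand-ins for whatever ban-scoring state and identity key the caller holds; in `lnd` itself this decision is made by the server's access manager.

```go
// bannedPeers is a toy, in-memory ban list keyed by the remote peer's
// compressed public key.
bannedPeers := make(map[[33]byte]struct{})

// isPeerAccepted has the closure shape NewListener expects. It is only
// called after Act 3, once the remote static key is known.
isPeerAccepted := func(pub *btcec.PublicKey) (bool, error) {
    var key [33]byte
    copy(key[:], pub.SerializeCompressed())

    if _, banned := bannedPeers[key]; banned {
        return false, fmt.Errorf("peer %x is banned", key)
    }

    return true, nil
}

// nodeKeyECDH is assumed to be the node's keychain.SingleKeyECDH
// identity key.
listener, err := brontide.NewListener(
    nodeKeyECDH, "0.0.0.0:9735", isPeerAccepted,
)
if err != nil {
    return err
}
defer listener.Close()
```

Tests and tools that don't care about banning can keep the old behaviour by passing `DisabledBanClosure`, as the updated `makeListener` helper below does.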

View file

@ -35,7 +35,7 @@ func makeListener() (*Listener, *lnwire.NetAddress, error) {
addr := "localhost:0"
// Our listener will be local, and the connection remote.
listener, err := NewListener(localKeyECDH, addr)
listener, err := NewListener(localKeyECDH, addr, DisabledBanClosure)
if err != nil {
return nil, nil, err
}
@ -85,12 +85,12 @@ func establishTestConnection(t testing.TB) (net.Conn, net.Conn, error) {
remote := <-remoteConnChan
if remote.err != nil {
return nil, nil, err
return nil, nil, remote.err
}
local := <-localConnChan
if local.err != nil {
return nil, nil, err
return nil, nil, local.err
}
t.Cleanup(func() {

View file

@ -57,7 +57,7 @@ func DefaultLogConfig() *LogConfig {
Compressor: defaultLogCompressor,
MaxLogFiles: DefaultMaxLogFiles,
MaxLogFileSize: DefaultMaxLogFileSize,
LoggerConfig: LoggerConfig{
LoggerConfig: &LoggerConfig{
CallSite: callSiteOff,
},
},
@ -92,7 +92,7 @@ func (cfg *LoggerConfig) HandlerOptions() []btclog.HandlerOption {
//
//nolint:ll
type FileLoggerConfig struct {
LoggerConfig
*LoggerConfig `yaml:",inline"`
Compressor string `long:"compressor" description:"Compression algorithm to use when rotating logs." choice:"gzip" choice:"zstd"`
MaxLogFiles int `long:"max-files" description:"Maximum logfiles to keep (0 for no rotation)"`
MaxLogFileSize int `long:"max-file-size" description:"Maximum logfile size in MB"`

View file

@ -24,15 +24,15 @@ const (
//
//nolint:ll
type consoleLoggerCfg struct {
LoggerConfig
Style bool `long:"style" description:"If set, the output will be styled with color and fonts"`
*LoggerConfig `yaml:",inline"`
Style bool `long:"style" description:"If set, the output will be styled with color and fonts"`
}
// defaultConsoleLoggerCfg returns the default consoleLoggerCfg for the dev
// console logger.
func defaultConsoleLoggerCfg() *consoleLoggerCfg {
return &consoleLoggerCfg{
LoggerConfig: LoggerConfig{
LoggerConfig: &LoggerConfig{
CallSite: callSiteShort,
},
}

View file

@ -8,14 +8,14 @@ package build
//
//nolint:ll
type consoleLoggerCfg struct {
LoggerConfig
*LoggerConfig `yaml:",inline"`
}
// defaultConsoleLoggerCfg returns the default consoleLoggerCfg for the prod
// console logger.
func defaultConsoleLoggerCfg() *consoleLoggerCfg {
return &consoleLoggerCfg{
LoggerConfig: LoggerConfig{
LoggerConfig: &LoggerConfig{
CallSite: callSiteOff,
},
}

View file

@ -11,7 +11,7 @@ import (
// SubLogCreator can be used to create a new logger for a particular subsystem.
type SubLogCreator interface {
// Logger returns a new logger for a particular subsytem.
// Logger returns a new logger for a particular subsystem.
Logger(subsystemTag string) btclog.Logger
}

View file

@ -122,11 +122,5 @@ func WithBuildInfo(ctx context.Context, cfg *LogConfig) (context.Context,
return nil, fmt.Errorf("unable to decode commit hash: %w", err)
}
// Include the first 3 bytes of the commit hash in the context as an
// slog attribute.
if len(commitHash) > 3 {
commitHash = commitHash[:3]
}
return btclog.WithCtx(ctx, btclog.Hex("rev", commitHash)), nil
return btclog.WithCtx(ctx, btclog.Hex3("rev", commitHash)), nil
}

152
chainio/README.md Normal file
View file

@ -0,0 +1,152 @@
# Chainio
`chainio` is a package designed to provide blockchain data access to various
subsystems within `lnd`. When a new block is received, it is encapsulated in a
`Blockbeat` object and disseminated to all registered consumers. Consumers may
receive these updates either concurrently or sequentially, based on their
registration configuration, ensuring that each subsystem maintains a
synchronized view of the current block state.
The main components include:
- `Blockbeat`: An interface that provides information about the block.
- `Consumer`: An interface that specifies how subsystems handle the blockbeat.
- `BlockbeatDispatcher`: The core service responsible for receiving each block
and distributing it to all consumers.
Additionally, the `BeatConsumer` struct provides a partial implementation of
the `Consumer` interface. This struct helps reduce code duplication, allowing
subsystems to avoid re-implementing the `ProcessBlock` method and giving them a
commonly used `NotifyBlockProcessed` method.
### Register a Consumer
Consumers within the same queue are notified **sequentially**, while all queues
are notified **concurrently**. A queue consists of a slice of consumers, which
are notified in left-to-right order. Developers are responsible for determining
dependencies in block consumption across subsystems: independent subsystems
should be notified concurrently, whereas dependent subsystems should be
notified sequentially.
To notify the consumers concurrently, put them in different queues,
```go
// consumer1 and consumer2 will be notified concurrently.
queue1 := []chainio.Consumer{consumer1}
blockbeatDispatcher.RegisterQueue(queue1)
queue2 := []chainio.Consumer{consumer2}
blockbeatDispatcher.RegisterQueue(queue2)
```
To notify the consumers sequentially, put them in the same queue,
```go
// consumers will be notified sequentially via,
// consumer1 -> consumer2 -> consumer3
queue := []chainio.Consumer{
consumer1,
consumer2,
consumer3,
}
blockbeatDispatcher.RegisterQueue(queue)
```
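Putting the two registration styles together, the sketch below shows the full dispatcher lifecycle; `notifier` is assumed to be an already-constructed `chainntnfs.ChainNotifier`, and the consumers are the same placeholders used above.
```go
// Create the dispatcher that receives block epochs from the chain
// notifier and fans them out to the registered queues.
blockbeatDispatcher := chainio.NewBlockbeatDispatcher(notifier)

// Register the queues before starting; Start refuses to run if no
// consumers are registered.
blockbeatDispatcher.RegisterQueue([]chainio.Consumer{consumer1})
blockbeatDispatcher.RegisterQueue([]chainio.Consumer{consumer2, consumer3})

// Start dispatching blocks, and stop the dispatcher on shutdown.
if err := blockbeatDispatcher.Start(); err != nil {
    return err
}
defer blockbeatDispatcher.Stop()
```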
### Implement the `Consumer` Interface
Implementing the `Consumer` interface is straightforward. Below is an example
of how
[`sweep.TxPublisher`](https://github.com/lightningnetwork/lnd/blob/5cec466fad44c582a64cfaeb91f6d5fd302fcf85/sweep/fee_bumper.go#L310)
implements this interface.
To start, embed the partial implementation `chainio.BeatConsumer`, which
already provides the `ProcessBlock` implementation and commonly used
`NotifyBlockProcessed` method, and exposes `BlockbeatChan` for the consumer to
receive blockbeats.
```go
type TxPublisher struct {
started atomic.Bool
stopped atomic.Bool
chainio.BeatConsumer
...
```
We should also remember to initialize this `BeatConsumer`,
```go
...
// Mount the block consumer.
tp.BeatConsumer = chainio.NewBeatConsumer(tp.quit, tp.Name())
```
Finally, in the main event loop, read from `BlockbeatChan`, process the
received blockbeat, and, crucially, call `tp.NotifyBlockProcessed` to inform
the blockbeat dispatcher that processing is complete.
```go
for {
select {
case beat := <-tp.BlockbeatChan:
// Consume this blockbeat, usually it means updating the subsystem
// using the new block data.
// Notify we've processed the block.
tp.NotifyBlockProcessed(beat, nil)
...
```
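Taken together, a minimal consumer might look like the following sketch. The `myConsumer` type is purely illustrative and not part of `lnd`; it only demonstrates the embedding, initialization, and notification pattern described above.
```go
type myConsumer struct {
    // Embedding BeatConsumer provides ProcessBlock, BlockbeatChan and
    // NotifyBlockProcessed.
    chainio.BeatConsumer

    quit chan struct{}
}

// Name returns a human-readable string for this subsystem.
func (c *myConsumer) Name() string {
    return "myConsumer"
}

func newMyConsumer() *myConsumer {
    c := &myConsumer{quit: make(chan struct{})}

    // Mount the block consumer, reusing the subsystem's quit channel.
    c.BeatConsumer = chainio.NewBeatConsumer(c.quit, c.Name())

    return c
}

// eventLoop reads beats from BlockbeatChan and acknowledges each one.
func (c *myConsumer) eventLoop() {
    for {
        select {
        case beat := <-c.BlockbeatChan:
            // React to the new block here, e.g. update the
            // subsystem's view of the best height.
            _ = beat.Height()

            // Signal the dispatcher that the block was processed.
            c.NotifyBlockProcessed(beat, nil)

        case <-c.quit:
            return
        }
    }
}
```
A compile-time assertion such as `var _ chainio.Consumer = (*myConsumer)(nil)` helps verify that the embedding actually satisfies the interface.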
### Existing Queues
Currently, we have a single queue of consumers dedicated to handling force
closures. This queue includes `ChainArbitrator`, `UtxoSweeper`, and
`TxPublisher`, with `ChainArbitrator` managing two internal consumers:
`chainWatcher` and `ChannelArbitrator`. The blockbeat flows sequentially
through the chain as follows: `ChainArbitrator => chainWatcher =>
ChannelArbitrator => UtxoSweeper => TxPublisher`. The following diagram
illustrates the flow within the public subsystems.
```mermaid
sequenceDiagram
autonumber
participant bb as BlockBeat
participant cc as ChainArb
participant us as UtxoSweeper
participant tp as TxPublisher
note left of bb: 0. received block x,<br>dispatching...
note over bb,cc: 1. send block x to ChainArb,<br>wait for its done signal
bb->>cc: block x
rect rgba(165, 0, 85, 0.8)
critical signal processed
cc->>bb: processed block
option Process error or timeout
bb->>bb: error and exit
end
end
note over bb,us: 2. send block x to UtxoSweeper, wait for its done signal
bb->>us: block x
rect rgba(165, 0, 85, 0.8)
critical signal processed
us->>bb: processed block
option Process error or timeout
bb->>bb: error and exit
end
end
note over bb,tp: 3. send block x to TxPublisher, wait for its done signal
bb->>tp: block x
rect rgba(165, 0, 85, 0.8)
critical signal processed
tp->>bb: processed block
option Process error or timeout
bb->>bb: error and exit
end
end
```

54
chainio/blockbeat.go Normal file
View file

@ -0,0 +1,54 @@
package chainio
import (
"fmt"
"github.com/btcsuite/btclog/v2"
"github.com/lightningnetwork/lnd/chainntnfs"
)
// Beat implements the Blockbeat interface. It contains the block epoch and a
// customized logger.
//
// TODO(yy): extend this to check for confirmation status - which serves as the
// single source of truth, to avoid the potential race between receiving blocks
// and `GetTransactionDetails/RegisterSpendNtfn/RegisterConfirmationsNtfn`.
type Beat struct {
// epoch is the current block epoch the blockbeat is aware of.
epoch chainntnfs.BlockEpoch
// log is the customized logger for the blockbeat which prints the
// block height.
log btclog.Logger
}
// Compile-time check to ensure Beat satisfies the Blockbeat interface.
var _ Blockbeat = (*Beat)(nil)
// NewBeat creates a new beat with the specified block epoch and a customized
// logger.
func NewBeat(epoch chainntnfs.BlockEpoch) *Beat {
b := &Beat{
epoch: epoch,
}
// Create a customized logger for the blockbeat.
logPrefix := fmt.Sprintf("Height[%6d]:", b.Height())
b.log = clog.WithPrefix(logPrefix)
return b
}
// Height returns the height of the block epoch.
//
// NOTE: Part of the Blockbeat interface.
func (b *Beat) Height() int32 {
return b.epoch.Height
}
// logger returns the logger for the blockbeat.
//
// NOTE: Part of the private blockbeat interface.
func (b *Beat) logger() btclog.Logger {
return b.log
}

28
chainio/blockbeat_test.go Normal file
View file

@ -0,0 +1,28 @@
package chainio
import (
"errors"
"testing"
"github.com/lightningnetwork/lnd/chainntnfs"
"github.com/stretchr/testify/require"
)
var errDummy = errors.New("dummy error")
// TestNewBeat tests the NewBeat and Height functions.
func TestNewBeat(t *testing.T) {
t.Parallel()
// Create a testing epoch.
epoch := chainntnfs.BlockEpoch{
Height: 1,
}
// Create the beat and check the internal state.
beat := NewBeat(epoch)
require.Equal(t, epoch, beat.epoch)
// Check the height function.
require.Equal(t, epoch.Height, beat.Height())
}

113
chainio/consumer.go Normal file
View file

@ -0,0 +1,113 @@
package chainio
// BeatConsumer defines a supplementary component that should be used by
// subsystems which implement the `Consumer` interface. It partially implements
// the `Consumer` interface by providing the method `ProcessBlock` such that
// subsystems don't need to re-implement it.
//
// While inheritance is not commonly used in Go, subsystems embedding this
// struct cannot pass the interface check for `Consumer` because the `Name`
// method is not implemented, which gives us a "mortise and tenon" structure.
// In addition to reducing code duplication, this design allows `ProcessBlock`
// to work on the concrete type `Beat` to access its internal states.
type BeatConsumer struct {
// BlockbeatChan is a channel to receive blocks from Blockbeat. The
// received block contains the best known height and the txns confirmed
// in this block.
BlockbeatChan chan Blockbeat
// name is the name of the consumer which embeds the BeatConsumer.
name string
// quit is a channel that closes when the BeatConsumer is shutting
// down.
//
// NOTE: this quit channel should be mounted to the same quit channel
// used by the subsystem.
quit chan struct{}
// errChan is a buffered chan that receives an error returned from
// processing this block.
errChan chan error
}
// NewBeatConsumer creates a new BeatConsumer.
func NewBeatConsumer(quit chan struct{}, name string) BeatConsumer {
// Refuse to start `lnd` if the quit channel is not initialized. We
// treat this case as if we are facing a nil pointer dereference, as
// there's no point in returning an error here; the node would fail to
// start anyway.
if quit == nil {
panic("quit channel is nil")
}
b := BeatConsumer{
BlockbeatChan: make(chan Blockbeat),
name: name,
errChan: make(chan error, 1),
quit: quit,
}
return b
}
// ProcessBlock takes a blockbeat and sends it to the consumer's blockbeat
// channel. It will send it to the subsystem's BlockbeatChan, and block until
// the processed result is received from the subsystem. The subsystem must call
// `NotifyBlockProcessed` after it has finished processing the block.
//
// NOTE: part of the `chainio.Consumer` interface.
func (b *BeatConsumer) ProcessBlock(beat Blockbeat) error {
// Update the current height.
beat.logger().Tracef("set current height for [%s]", b.name)
select {
// Send the beat to the blockbeat channel. It's expected that the
// consumer will read from this channel and process the block. Once
// processed, it should return the error or nil to the beat.Err chan.
case b.BlockbeatChan <- beat:
beat.logger().Tracef("Sent blockbeat to [%s]", b.name)
case <-b.quit:
beat.logger().Debugf("[%s] received shutdown before sending "+
"beat", b.name)
return nil
}
// Check the consumer's err chan. We expect the consumer to call
// `beat.NotifyBlockProcessed` to send the error back here.
select {
case err := <-b.errChan:
beat.logger().Tracef("[%s] processed beat: err=%v", b.name, err)
return err
case <-b.quit:
beat.logger().Debugf("[%s] received shutdown", b.name)
}
return nil
}
// NotifyBlockProcessed signals that the block has been processed. It takes the
// blockbeat being processed and an error resulting from processing it. This
// error is then sent back to the consumer's err chan to unblock
// `ProcessBlock`.
//
// NOTE: This method must be called by the subsystem after it has finished
// processing the block.
func (b *BeatConsumer) NotifyBlockProcessed(beat Blockbeat, err error) {
// Send the processing result back to unblock `ProcessBlock`.
beat.logger().Tracef("[%s]: notifying beat processed", b.name)
select {
case b.errChan <- err:
beat.logger().Tracef("[%s]: notified beat processed, err=%v",
b.name, err)
case <-b.quit:
beat.logger().Debugf("[%s] received shutdown before notifying "+
"beat processed", b.name)
}
}

202
chainio/consumer_test.go Normal file
View file

@ -0,0 +1,202 @@
package chainio
import (
"testing"
"time"
"github.com/lightningnetwork/lnd/fn/v2"
"github.com/stretchr/testify/require"
)
// TestNewBeatConsumer tests the NewBeatConsumer function.
func TestNewBeatConsumer(t *testing.T) {
t.Parallel()
quitChan := make(chan struct{})
name := "test"
// Test the NewBeatConsumer function.
b := NewBeatConsumer(quitChan, name)
// Assert the state.
require.Equal(t, quitChan, b.quit)
require.Equal(t, name, b.name)
require.NotNil(t, b.BlockbeatChan)
}
// TestProcessBlockSuccess tests when the block is processed successfully, no
// error is returned.
func TestProcessBlockSuccess(t *testing.T) {
t.Parallel()
// Create a test consumer.
quitChan := make(chan struct{})
b := NewBeatConsumer(quitChan, "test")
// Create a mock beat.
mockBeat := &MockBlockbeat{}
defer mockBeat.AssertExpectations(t)
mockBeat.On("logger").Return(clog)
// Mock the consumer's err chan.
consumerErrChan := make(chan error, 1)
b.errChan = consumerErrChan
// Call the method under test.
resultChan := make(chan error, 1)
go func() {
resultChan <- b.ProcessBlock(mockBeat)
}()
// Assert the beat is sent to the blockbeat channel.
beat, err := fn.RecvOrTimeout(b.BlockbeatChan, time.Second)
require.NoError(t, err)
require.Equal(t, mockBeat, beat)
// Send nil to the consumer's error channel.
consumerErrChan <- nil
// Assert the result of ProcessBlock is nil.
result, err := fn.RecvOrTimeout(resultChan, time.Second)
require.NoError(t, err)
require.Nil(t, result)
}
// TestProcessBlockConsumerQuitBeforeSend tests when the consumer is quit
// before sending the beat, the method returns immediately.
func TestProcessBlockConsumerQuitBeforeSend(t *testing.T) {
t.Parallel()
// Create a test consumer.
quitChan := make(chan struct{})
b := NewBeatConsumer(quitChan, "test")
// Create a mock beat.
mockBeat := &MockBlockbeat{}
defer mockBeat.AssertExpectations(t)
mockBeat.On("logger").Return(clog)
// Call the method under test.
resultChan := make(chan error, 1)
go func() {
resultChan <- b.ProcessBlock(mockBeat)
}()
// Instead of reading the BlockbeatChan, close the quit channel.
close(quitChan)
// Assert ProcessBlock returned nil.
result, err := fn.RecvOrTimeout(resultChan, time.Second)
require.NoError(t, err)
require.Nil(t, result)
}
// TestProcessBlockConsumerQuitAfterSend tests when the consumer is quit after
// sending the beat, the method returns immediately.
func TestProcessBlockConsumerQuitAfterSend(t *testing.T) {
t.Parallel()
// Create a test consumer.
quitChan := make(chan struct{})
b := NewBeatConsumer(quitChan, "test")
// Create a mock beat.
mockBeat := &MockBlockbeat{}
defer mockBeat.AssertExpectations(t)
mockBeat.On("logger").Return(clog)
// Mock the consumer's err chan.
consumerErrChan := make(chan error, 1)
b.errChan = consumerErrChan
// Call the method under test.
resultChan := make(chan error, 1)
go func() {
resultChan <- b.ProcessBlock(mockBeat)
}()
// Assert the beat is sent to the blockbeat channel.
beat, err := fn.RecvOrTimeout(b.BlockbeatChan, time.Second)
require.NoError(t, err)
require.Equal(t, mockBeat, beat)
// Instead of sending nil to the consumer's error channel, close the
// quit channel.
close(quitChan)
// Assert ProcessBlock returned nil.
result, err := fn.RecvOrTimeout(resultChan, time.Second)
require.NoError(t, err)
require.Nil(t, result)
}
// TestNotifyBlockProcessedSendErr asserts the error can be sent and read by
// the beat via NotifyBlockProcessed.
func TestNotifyBlockProcessedSendErr(t *testing.T) {
t.Parallel()
// Create a test consumer.
quitChan := make(chan struct{})
b := NewBeatConsumer(quitChan, "test")
// Create a mock beat.
mockBeat := &MockBlockbeat{}
defer mockBeat.AssertExpectations(t)
mockBeat.On("logger").Return(clog)
// Mock the consumer's err chan.
consumerErrChan := make(chan error, 1)
b.errChan = consumerErrChan
// Call the method under test.
done := make(chan error)
go func() {
defer close(done)
b.NotifyBlockProcessed(mockBeat, errDummy)
}()
// Assert the error is sent to the beat's err chan.
result, err := fn.RecvOrTimeout(consumerErrChan, time.Second)
require.NoError(t, err)
require.ErrorIs(t, result, errDummy)
// Assert the done channel is closed.
result, err = fn.RecvOrTimeout(done, time.Second)
require.NoError(t, err)
require.Nil(t, result)
}
// TestNotifyBlockProcessedOnQuit asserts NotifyBlockProcessed exits
// immediately when the quit channel is closed.
func TestNotifyBlockProcessedOnQuit(t *testing.T) {
t.Parallel()
// Create a test consumer.
quitChan := make(chan struct{})
b := NewBeatConsumer(quitChan, "test")
// Create a mock beat.
mockBeat := &MockBlockbeat{}
defer mockBeat.AssertExpectations(t)
mockBeat.On("logger").Return(clog)
// Mock the consumer's err chan - we don't buffer it so it will block
// on sending the error.
consumerErrChan := make(chan error)
b.errChan = consumerErrChan
// Call the method under test.
done := make(chan error)
go func() {
defer close(done)
b.NotifyBlockProcessed(mockBeat, errDummy)
}()
// Close the quit channel so the method will return.
close(b.quit)
// Assert the done channel is closed.
result, err := fn.RecvOrTimeout(done, time.Second)
require.NoError(t, err)
require.Nil(t, result)
}

358
chainio/dispatcher.go Normal file
View file

@ -0,0 +1,358 @@
package chainio
import (
"errors"
"fmt"
"sync"
"sync/atomic"
"time"
"github.com/btcsuite/btclog/v2"
"github.com/lightningnetwork/lnd/chainntnfs"
"github.com/lightningnetwork/lnd/lnutils"
"golang.org/x/sync/errgroup"
)
// DefaultProcessBlockTimeout is the timeout value used when waiting for one
// consumer to finish processing the new block epoch.
var DefaultProcessBlockTimeout = 60 * time.Second
// ErrProcessBlockTimeout is the error returned when a consumer takes too long
// to process the block.
var ErrProcessBlockTimeout = errors.New("process block timeout")
// BlockbeatDispatcher is a service that handles dispatching new blocks to
// `lnd`'s subsystems. During startup, subsystems that are block-driven should
// implement the `Consumer` interface and register themselves via
// `RegisterQueue`. When two subsystems are independent of each other, they
// should be registered in different queues so blocks are notified concurrently.
// Otherwise, when living in the same queue, the subsystems are notified of the
// new blocks sequentially, which means it's critical to understand the
// relationship of these systems to properly handle the order.
type BlockbeatDispatcher struct {
wg sync.WaitGroup
// notifier is used to receive new block epochs.
notifier chainntnfs.ChainNotifier
// beat is the latest blockbeat received.
beat Blockbeat
// consumerQueues is a map of consumers that will receive blocks. Its
// key is a unique counter and its value is a queue of consumers. Each
// queue is notified concurrently, and consumers in the same queue are
// notified sequentially.
consumerQueues map[uint32][]Consumer
// counter is used to assign a unique id to each queue.
counter atomic.Uint32
// quit is used to signal the BlockbeatDispatcher to stop.
quit chan struct{}
// queryHeightChan is used to receive queries on the current height of
// the dispatcher.
queryHeightChan chan *query
}
// query is used to fetch the internal state of the dispatcher.
type query struct {
// respChan is used to send the current height back to the caller.
//
// NOTE: This channel must be buffered.
respChan chan int32
}
// newQuery creates a query to be used to fetch the internal state of the
// dispatcher.
func newQuery() *query {
return &query{
respChan: make(chan int32, 1),
}
}
// NewBlockbeatDispatcher returns a new blockbeat dispatcher instance.
func NewBlockbeatDispatcher(n chainntnfs.ChainNotifier) *BlockbeatDispatcher {
return &BlockbeatDispatcher{
notifier: n,
quit: make(chan struct{}),
consumerQueues: make(map[uint32][]Consumer),
queryHeightChan: make(chan *query, 1),
}
}
// RegisterQueue takes a list of consumers and registers them in the same
// queue.
//
// NOTE: these consumers are notified sequentially.
func (b *BlockbeatDispatcher) RegisterQueue(consumers []Consumer) {
qid := b.counter.Add(1)
b.consumerQueues[qid] = append(b.consumerQueues[qid], consumers...)
clog.Infof("Registered queue=%d with %d blockbeat consumers", qid,
len(consumers))
for _, c := range consumers {
clog.Debugf("Consumer [%s] registered in queue %d", c.Name(),
qid)
}
}
// Start starts the blockbeat dispatcher - it registers a block notification
// and monitors and dispatches new blocks in a goroutine. It will refuse to
// start if there are no registered consumers.
func (b *BlockbeatDispatcher) Start() error {
// Make sure consumers are registered.
if len(b.consumerQueues) == 0 {
return fmt.Errorf("no consumers registered")
}
// Start listening to new block epochs. We should get a notification
// with the current best block immediately.
blockEpochs, err := b.notifier.RegisterBlockEpochNtfn(nil)
if err != nil {
return fmt.Errorf("register block epoch ntfn: %w", err)
}
clog.Infof("BlockbeatDispatcher is starting with %d consumer queues",
len(b.consumerQueues))
defer clog.Debug("BlockbeatDispatcher started")
b.wg.Add(1)
go b.dispatchBlocks(blockEpochs)
return nil
}
// Stop shuts down the blockbeat dispatcher.
func (b *BlockbeatDispatcher) Stop() {
clog.Info("BlockbeatDispatcher is stopping")
defer clog.Debug("BlockbeatDispatcher stopped")
// Signal the dispatchBlocks goroutine to stop.
close(b.quit)
b.wg.Wait()
}
func (b *BlockbeatDispatcher) log() btclog.Logger {
return b.beat.logger()
}
// dispatchBlocks listens for new block epochs and dispatches them to all the
// consumers. Each queue is notified concurrently, and the consumers in the
// same queue are notified sequentially.
//
// NOTE: Must be run as a goroutine.
func (b *BlockbeatDispatcher) dispatchBlocks(
blockEpochs *chainntnfs.BlockEpochEvent) {
defer b.wg.Done()
defer blockEpochs.Cancel()
for {
select {
case blockEpoch, ok := <-blockEpochs.Epochs:
if !ok {
clog.Debugf("Block epoch channel closed")
return
}
// Log a separator so it's easier to identify when a
// new block arrives for subsystems.
clog.Debugf("%v", lnutils.NewSeparatorClosure())
clog.Infof("Received new block %v at height %d, "+
"notifying consumers...", blockEpoch.Hash,
blockEpoch.Height)
// Record the time it takes the consumer to process
// this block.
start := time.Now()
// Update the current block epoch.
b.beat = NewBeat(*blockEpoch)
// Notify all consumers.
err := b.notifyQueues()
if err != nil {
b.log().Errorf("Notify block failed: %v", err)
}
b.log().Infof("Notified all consumers on new block "+
"in %v", time.Since(start))
// A query has been made to fetch the current height; we now
// send the height from the current beat.
case query := <-b.queryHeightChan:
// The beat may not be set yet, e.g., during startup the
// query may arrive before the first block epoch is sent.
height := int32(0)
if b.beat != nil {
height = b.beat.Height()
}
query.respChan <- height
case <-b.quit:
b.log().Debugf("BlockbeatDispatcher quit signal " +
"received")
return
}
}
}
// CurrentHeight returns the current best height known to the dispatcher. 0 is
// returned if the dispatcher is shutting down.
func (b *BlockbeatDispatcher) CurrentHeight() int32 {
query := newQuery()
select {
case b.queryHeightChan <- query:
case <-b.quit:
clog.Debugf("BlockbeatDispatcher quit before query")
return 0
}
select {
case height := <-query.respChan:
clog.Debugf("Responded current height: %v", height)
return height
case <-b.quit:
clog.Debugf("BlockbeatDispatcher quit before response")
return 0
}
}
// notifyQueues notifies each queue concurrently about the latest block epoch.
func (b *BlockbeatDispatcher) notifyQueues() error {
// errChans is a map of channels that will be used to receive errors
// returned from notifying the consumers.
errChans := make(map[uint32]chan error, len(b.consumerQueues))
// Notify each queue in goroutines.
for qid, consumers := range b.consumerQueues {
b.log().Debugf("Notifying queue=%d with %d consumers", qid,
len(consumers))
// Create a signal chan.
errChan := make(chan error, 1)
errChans[qid] = errChan
// Notify each queue concurrently.
go func(qid uint32, c []Consumer, beat Blockbeat) {
// Notify each consumer in this queue sequentially.
errChan <- DispatchSequential(beat, c)
}(qid, consumers, b.beat)
}
// Wait for all consumers in each queue to finish.
for qid, errChan := range errChans {
select {
case err := <-errChan:
if err != nil {
return fmt.Errorf("queue=%d got err: %w", qid,
err)
}
b.log().Debugf("Notified queue=%d", qid)
case <-b.quit:
b.log().Debugf("BlockbeatDispatcher quit signal " +
"received, exit notifyQueues")
return nil
}
}
return nil
}
// DispatchSequential takes a list of consumers and notifies them about the new
// epoch sequentially. It requires each consumer to finish processing the block
// within the specified time, otherwise a timeout error is returned.
func DispatchSequential(b Blockbeat, consumers []Consumer) error {
for _, c := range consumers {
// Send the beat to the consumer.
err := notifyAndWait(b, c, DefaultProcessBlockTimeout)
if err != nil {
b.logger().Errorf("Failed to process block: %v", err)
return err
}
}
return nil
}
// DispatchConcurrent notifies each consumer concurrently about the blockbeat.
// It requires the consumer to finish processing the block within the specified
// time, otherwise a timeout error is returned.
func DispatchConcurrent(b Blockbeat, consumers []Consumer) error {
eg := &errgroup.Group{}
// Notify each consumer in its own goroutine.
for _, c := range consumers {
// Notify each consumer concurrently.
eg.Go(func() error {
// Send the beat to the consumer.
err := notifyAndWait(b, c, DefaultProcessBlockTimeout)
// Exit early if there's no error.
if err == nil {
return nil
}
b.logger().Errorf("Consumer=%v failed to process "+
"block: %v", c.Name(), err)
return err
})
}
// Wait for all consumers in each queue to finish.
if err := eg.Wait(); err != nil {
return err
}
return nil
}
// notifyAndWait sends the blockbeat to the specified consumer. It requires the
// consumer to finish processing the block within the specified time, otherwise
// a timeout error is returned.
func notifyAndWait(b Blockbeat, c Consumer, timeout time.Duration) error {
b.logger().Debugf("Waiting for consumer[%s] to process it", c.Name())
// Record the time it takes the consumer to process this block.
start := time.Now()
errChan := make(chan error, 1)
go func() {
errChan <- c.ProcessBlock(b)
}()
// We expect the consumer to finish processing this block within the
// given timeout, otherwise a timeout error is returned.
select {
case err := <-errChan:
if err == nil {
break
}
return fmt.Errorf("%s got err in ProcessBlock: %w", c.Name(),
err)
case <-time.After(timeout):
return fmt.Errorf("consumer %s: %w", c.Name(),
ErrProcessBlockTimeout)
}
b.logger().Debugf("Consumer[%s] processed block in %v", c.Name(),
time.Since(start))
return nil
}
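As a usage note, `DispatchSequential` and `DispatchConcurrent` are exported, so a caller can also fan a beat out to its own set of consumers without going through the dispatcher's queues. A hedged sketch, where `beat` is the current `Blockbeat` and `consumerA`/`consumerB` are assumed to be independent `Consumer` implementations:

```go
// The two consumers do not depend on each other, so the same beat can
// be processed by both concurrently. Any consumer error, or a timeout
// (ErrProcessBlockTimeout), is surfaced to the caller.
consumers := []chainio.Consumer{consumerA, consumerB}
if err := chainio.DispatchConcurrent(beat, consumers); err != nil {
    return fmt.Errorf("blockbeat dispatch failed: %w", err)
}
```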

440
chainio/dispatcher_test.go Normal file
View file

@ -0,0 +1,440 @@
package chainio
import (
"testing"
"time"
"github.com/lightningnetwork/lnd/chainntnfs"
"github.com/lightningnetwork/lnd/fn/v2"
"github.com/stretchr/testify/mock"
"github.com/stretchr/testify/require"
)
// TestNotifyAndWaitOnConsumerErr asserts when the consumer returns an error,
// it's returned by notifyAndWait.
func TestNotifyAndWaitOnConsumerErr(t *testing.T) {
t.Parallel()
// Create a mock consumer.
consumer := &MockConsumer{}
defer consumer.AssertExpectations(t)
consumer.On("Name").Return("mocker")
// Create a mock beat.
mockBeat := &MockBlockbeat{}
defer mockBeat.AssertExpectations(t)
mockBeat.On("logger").Return(clog)
// Mock ProcessBlock to return an error.
consumer.On("ProcessBlock", mockBeat).Return(errDummy).Once()
// Call the method under test.
err := notifyAndWait(mockBeat, consumer, DefaultProcessBlockTimeout)
// We expect the error to be returned.
require.ErrorIs(t, err, errDummy)
}
// TestNotifyAndWaitOnConsumerSuccess asserts that when the consumer
// successfully processes the beat, no error is returned.
func TestNotifyAndWaitOnConsumerSuccess(t *testing.T) {
t.Parallel()
// Create a mock consumer.
consumer := &MockConsumer{}
defer consumer.AssertExpectations(t)
consumer.On("Name").Return("mocker")
// Create a mock beat.
mockBeat := &MockBlockbeat{}
defer mockBeat.AssertExpectations(t)
mockBeat.On("logger").Return(clog)
// Mock ProcessBlock to return nil.
consumer.On("ProcessBlock", mockBeat).Return(nil).Once()
// Call the method under test.
err := notifyAndWait(mockBeat, consumer, DefaultProcessBlockTimeout)
// We expect a nil error to be returned.
require.NoError(t, err)
}
// TestNotifyAndWaitOnConsumerTimeout asserts when the consumer times out
// processing the block, the timeout error is returned.
func TestNotifyAndWaitOnConsumerTimeout(t *testing.T) {
t.Parallel()
// Set timeout to be 10ms.
processBlockTimeout := 10 * time.Millisecond
// Create a mock consumer.
consumer := &MockConsumer{}
defer consumer.AssertExpectations(t)
consumer.On("Name").Return("mocker")
// Create a mock beat.
mockBeat := &MockBlockbeat{}
defer mockBeat.AssertExpectations(t)
mockBeat.On("logger").Return(clog)
// Mock ProcessBlock to return nil but blocks on returning.
consumer.On("ProcessBlock", mockBeat).Return(nil).Run(
func(args mock.Arguments) {
// Sleep one second to block on the method.
time.Sleep(processBlockTimeout * 100)
}).Once()
// Call the method under test.
err := notifyAndWait(mockBeat, consumer, processBlockTimeout)
// We expect a timeout error to be returned.
require.ErrorIs(t, err, ErrProcessBlockTimeout)
}
// TestDispatchSequential checks that the beat is sent to the consumers
// sequentially.
func TestDispatchSequential(t *testing.T) {
t.Parallel()
// Create three mock consumers.
consumer1 := &MockConsumer{}
defer consumer1.AssertExpectations(t)
consumer1.On("Name").Return("mocker1")
consumer2 := &MockConsumer{}
defer consumer2.AssertExpectations(t)
consumer2.On("Name").Return("mocker2")
consumer3 := &MockConsumer{}
defer consumer3.AssertExpectations(t)
consumer3.On("Name").Return("mocker3")
consumers := []Consumer{consumer1, consumer2, consumer3}
// Create a mock beat.
mockBeat := &MockBlockbeat{}
defer mockBeat.AssertExpectations(t)
mockBeat.On("logger").Return(clog)
// prevConsumer specifies the previous consumer that was called.
var prevConsumer string
// Mock the ProcessBlock on consumers to return immediately.
consumer1.On("ProcessBlock", mockBeat).Return(nil).Run(
func(args mock.Arguments) {
// Check the order of the consumers.
//
// The first consumer should have no previous consumer.
require.Empty(t, prevConsumer)
// Set the consumer as the previous consumer.
prevConsumer = consumer1.Name()
}).Once()
consumer2.On("ProcessBlock", mockBeat).Return(nil).Run(
func(args mock.Arguments) {
// Check the order of the consumers.
//
// The second consumer should see consumer1.
require.Equal(t, consumer1.Name(), prevConsumer)
// Set the consumer as the previous consumer.
prevConsumer = consumer2.Name()
}).Once()
consumer3.On("ProcessBlock", mockBeat).Return(nil).Run(
func(args mock.Arguments) {
// Check the order of the consumers.
//
// The third consumer should see consumer2.
require.Equal(t, consumer2.Name(), prevConsumer)
// Set the consumer as the previous consumer.
prevConsumer = consumer3.Name()
}).Once()
// Call the method under test.
err := DispatchSequential(mockBeat, consumers)
require.NoError(t, err)
// Check the previous consumer is the last consumer.
require.Equal(t, consumer3.Name(), prevConsumer)
}
// TestRegisterQueue tests the RegisterQueue function.
func TestRegisterQueue(t *testing.T) {
t.Parallel()
// Create two mock consumers.
consumer1 := &MockConsumer{}
defer consumer1.AssertExpectations(t)
consumer1.On("Name").Return("mocker1")
consumer2 := &MockConsumer{}
defer consumer2.AssertExpectations(t)
consumer2.On("Name").Return("mocker2")
consumers := []Consumer{consumer1, consumer2}
// Create a mock chain notifier.
mockNotifier := &chainntnfs.MockChainNotifier{}
defer mockNotifier.AssertExpectations(t)
// Create a new dispatcher.
b := NewBlockbeatDispatcher(mockNotifier)
// Register the consumers.
b.RegisterQueue(consumers)
// Assert that the consumers have been registered.
//
// We should have one queue.
require.Len(t, b.consumerQueues, 1)
// The queue should have two consumers.
queue, ok := b.consumerQueues[1]
require.True(t, ok)
require.Len(t, queue, 2)
}
// TestStartDispatcher tests the Start method.
func TestStartDispatcher(t *testing.T) {
t.Parallel()
// Create a mock chain notifier.
mockNotifier := &chainntnfs.MockChainNotifier{}
defer mockNotifier.AssertExpectations(t)
// Create a new dispatcher.
b := NewBlockbeatDispatcher(mockNotifier)
// Start the dispatcher without consumers should return an error.
err := b.Start()
require.Error(t, err)
// Create a consumer and register it.
consumer := &MockConsumer{}
defer consumer.AssertExpectations(t)
consumer.On("Name").Return("mocker1")
b.RegisterQueue([]Consumer{consumer})
// Mock the chain notifier to return an error.
mockNotifier.On("RegisterBlockEpochNtfn",
mock.Anything).Return(nil, errDummy).Once()
// Start the dispatcher now should return the error.
err = b.Start()
require.ErrorIs(t, err, errDummy)
// Mock the chain notifier to return a valid notifier.
blockEpochs := &chainntnfs.BlockEpochEvent{}
mockNotifier.On("RegisterBlockEpochNtfn",
mock.Anything).Return(blockEpochs, nil).Once()
// Start the dispatcher now should not return an error.
err = b.Start()
require.NoError(t, err)
}
// TestDispatchBlocks asserts the blocks are properly dispatched to the queues.
func TestDispatchBlocks(t *testing.T) {
t.Parallel()
// Create a mock chain notifier.
mockNotifier := &chainntnfs.MockChainNotifier{}
defer mockNotifier.AssertExpectations(t)
// Create a new dispatcher.
b := NewBlockbeatDispatcher(mockNotifier)
// Create the beat and attach it to the dispatcher.
epoch := chainntnfs.BlockEpoch{Height: 1}
beat := NewBeat(epoch)
b.beat = beat
// Create a consumer and register it.
consumer := &MockConsumer{}
defer consumer.AssertExpectations(t)
consumer.On("Name").Return("mocker1")
b.RegisterQueue([]Consumer{consumer})
// Mock the consumer to return nil error on ProcessBlock. This
// implicitly asserts that the step `notifyQueues` is successfully
// reached in the `dispatchBlocks` method.
consumer.On("ProcessBlock", mock.Anything).Return(nil).Once()
// Create a test epoch chan.
epochChan := make(chan *chainntnfs.BlockEpoch, 1)
blockEpochs := &chainntnfs.BlockEpochEvent{
Epochs: epochChan,
Cancel: func() {},
}
// Call the method in a goroutine.
done := make(chan struct{})
b.wg.Add(1)
go func() {
defer close(done)
b.dispatchBlocks(blockEpochs)
}()
// Send an epoch.
epoch = chainntnfs.BlockEpoch{Height: 2}
epochChan <- &epoch
// Wait for the dispatcher to process the epoch.
time.Sleep(100 * time.Millisecond)
// Stop the dispatcher.
b.Stop()
// We expect the dispatcher to stop immediately.
_, err := fn.RecvOrTimeout(done, time.Second)
require.NoError(t, err)
}
// TestNotifyQueuesSuccess checks when the dispatcher successfully notifies all
// the queues, no error is returned.
func TestNotifyQueuesSuccess(t *testing.T) {
t.Parallel()
// Create two mock consumers.
consumer1 := &MockConsumer{}
defer consumer1.AssertExpectations(t)
consumer1.On("Name").Return("mocker1")
consumer2 := &MockConsumer{}
defer consumer2.AssertExpectations(t)
consumer2.On("Name").Return("mocker2")
// Create two queues.
queue1 := []Consumer{consumer1}
queue2 := []Consumer{consumer2}
// Create a mock chain notifier.
mockNotifier := &chainntnfs.MockChainNotifier{}
defer mockNotifier.AssertExpectations(t)
// Create a mock beat.
mockBeat := &MockBlockbeat{}
defer mockBeat.AssertExpectations(t)
mockBeat.On("logger").Return(clog)
// Create a new dispatcher.
b := NewBlockbeatDispatcher(mockNotifier)
// Register the queues.
b.RegisterQueue(queue1)
b.RegisterQueue(queue2)
// Attach the blockbeat.
b.beat = mockBeat
// Mock the consumers to return nil error on ProcessBlock for
// both calls.
consumer1.On("ProcessBlock", mockBeat).Return(nil).Once()
consumer2.On("ProcessBlock", mockBeat).Return(nil).Once()
// Notify the queues. The mockers will be asserted in the end to
// validate the calls.
err := b.notifyQueues()
require.NoError(t, err)
}
// TestNotifyQueuesError checks that when one of the queues returns an error,
// this error is returned by the method.
func TestNotifyQueuesError(t *testing.T) {
t.Parallel()
// Create a mock consumer.
consumer := &MockConsumer{}
defer consumer.AssertExpectations(t)
consumer.On("Name").Return("mocker1")
// Create one queue.
queue := []Consumer{consumer}
// Create a mock chain notifier.
mockNotifier := &chainntnfs.MockChainNotifier{}
defer mockNotifier.AssertExpectations(t)
// Create a mock beat.
mockBeat := &MockBlockbeat{}
defer mockBeat.AssertExpectations(t)
mockBeat.On("logger").Return(clog)
// Create a new dispatcher.
b := NewBlockbeatDispatcher(mockNotifier)
// Register the queues.
b.RegisterQueue(queue)
// Attach the blockbeat.
b.beat = mockBeat
// Mock the consumer to return an error on ProcessBlock.
consumer.On("ProcessBlock", mockBeat).Return(errDummy).Once()
// Notify the queues. The mockers will be asserted in the end to
// validate the calls.
err := b.notifyQueues()
require.ErrorIs(t, err, errDummy)
}
// TestCurrentHeight asserts `CurrentHeight` returns the expected block height.
func TestCurrentHeight(t *testing.T) {
t.Parallel()
testHeight := int32(1000)
// Create a mock chain notifier.
mockNotifier := &chainntnfs.MockChainNotifier{}
defer mockNotifier.AssertExpectations(t)
// Create a mock beat.
mockBeat := &MockBlockbeat{}
defer mockBeat.AssertExpectations(t)
mockBeat.On("logger").Return(clog)
mockBeat.On("Height").Return(testHeight).Once()
// Create a mock consumer.
consumer := &MockConsumer{}
defer consumer.AssertExpectations(t)
consumer.On("Name").Return("mocker1")
// Create one queue.
queue := []Consumer{consumer}
// Create a new dispatcher.
b := NewBlockbeatDispatcher(mockNotifier)
// Register the queues.
b.RegisterQueue(queue)
// Attach the blockbeat.
b.beat = mockBeat
// Mock the chain notifier to return a valid notifier.
blockEpochs := &chainntnfs.BlockEpochEvent{
Cancel: func() {},
}
mockNotifier.On("RegisterBlockEpochNtfn",
mock.Anything).Return(blockEpochs, nil).Once()
// Start the dispatcher now should not return an error.
err := b.Start()
require.NoError(t, err)
// Make a query on the current height and assert it equals to
// testHeight.
height := b.CurrentHeight()
require.Equal(t, testHeight, height)
// Stop the dispatcher.
b.Stop()
// Make a query on the current height and assert it equals to 0.
height = b.CurrentHeight()
require.Zero(t, height)
}

53
chainio/interface.go Normal file
View file

@ -0,0 +1,53 @@
package chainio
import "github.com/btcsuite/btclog/v2"
// Blockbeat defines an interface that can be used by subsystems to retrieve
// block data. It is sent by the BlockbeatDispatcher to all the registered
// consumers whenever a new block is received. Once the consumer finishes
// processing the block, it must signal it by calling `NotifyBlockProcessed`.
//
// The blockchain is a state machine - whenever there's a state change, it's
// manifested in a block. The blockbeat is a way to notify subsystems of this
// state change, and to provide them with the data they need to process it. In
// other words, subsystems must react to this state change and should consider
// being driven by the blockbeat in their own state machines.
type Blockbeat interface {
// blockbeat is a private interface that's only used in this package.
blockbeat
// Height returns the current block height.
Height() int32
}
// blockbeat defines a set of private methods used in this package to make
// interaction with the blockbeat easier.
type blockbeat interface {
// logger returns the internal logger used by the blockbeat which has a
// block height prefix.
logger() btclog.Logger
}
// Consumer defines a blockbeat consumer interface. Subsystems that need block
// info must implement it.
type Consumer interface {
// TODO(yy): We should also define the start methods used by the
// consumers such that when implementing the interface, the consumer
// will always be started with a blockbeat. This cannot be enforced at
// the moment as we need to refactor all the start methods to only take a
// beat.
//
// Start(beat Blockbeat) error
// Name returns a human-readable string for this subsystem.
Name() string
// ProcessBlock takes a blockbeat and processes it. It should not
// return until the subsystem has updated its state based on the block
// data.
//
// NOTE: The consumer must try its best to NOT return an error. If an
// error is returned from processing the block, it means the subsystem
// cannot react to onchain state changes and lnd will shut down.
ProcessBlock(b Blockbeat) error
}

32
chainio/log.go Normal file
View file

@ -0,0 +1,32 @@
package chainio
import (
"github.com/btcsuite/btclog/v2"
"github.com/lightningnetwork/lnd/build"
)
// Subsystem defines the logging code for this subsystem.
const Subsystem = "CHIO"
// clog is a logger that is initialized with no output filters. This means the
// package will not perform any logging by default until the caller requests
// it.
var clog btclog.Logger
// The default amount of logging is none.
func init() {
UseLogger(build.NewSubLogger(Subsystem, nil))
}
// DisableLog disables all library log output. Logging output is disabled by
// default until UseLogger is called.
func DisableLog() {
UseLogger(btclog.Disabled)
}
// UseLogger uses a specified Logger to output package logging info. This
// should be used in preference to SetLogWriter if the caller is also using
// btclog.
func UseLogger(logger btclog.Logger) {
clog = logger
}

50
chainio/mocks.go Normal file
View file

@ -0,0 +1,50 @@
package chainio
import (
"github.com/btcsuite/btclog/v2"
"github.com/stretchr/testify/mock"
)
// MockConsumer is a mock implementation of the Consumer interface.
type MockConsumer struct {
mock.Mock
}
// Compile-time constraint to ensure MockConsumer implements Consumer.
var _ Consumer = (*MockConsumer)(nil)
// Name returns a human-readable string for this subsystem.
func (m *MockConsumer) Name() string {
args := m.Called()
return args.String(0)
}
// ProcessBlock takes a blockbeat and processes it, returning the mocked
// error, if any.
func (m *MockConsumer) ProcessBlock(b Blockbeat) error {
args := m.Called(b)
return args.Error(0)
}
// MockBlockbeat is a mock implementation of the Blockbeat interface.
type MockBlockbeat struct {
mock.Mock
}
// Compile-time constraint to ensure MockBlockbeat implements Blockbeat.
var _ Blockbeat = (*MockBlockbeat)(nil)
// Height returns the current block height.
func (m *MockBlockbeat) Height() int32 {
args := m.Called()
return args.Get(0).(int32)
}
// logger returns the logger for the blockbeat.
func (m *MockBlockbeat) logger() btclog.Logger {
args := m.Called()
return args.Get(0).(btclog.Logger)
}

View file

@ -491,7 +491,7 @@ out:
func (b *BitcoindNotifier) handleRelevantTx(tx *btcutil.Tx,
mempool bool, height uint32) {
// If this is a mempool spend, we'll ask the mempool notifier to hanlde
// If this is a mempool spend, we'll ask the mempool notifier to handle
// it.
if mempool {
err := b.memNotifier.ProcessRelevantSpendTx(tx)

View file

@ -11,6 +11,7 @@ import (
"github.com/btcsuite/btcd/chaincfg/chainhash"
"github.com/btcsuite/btcd/integration/rpctest"
"github.com/btcsuite/btcd/rpcclient"
"github.com/btcsuite/btcwallet/chain"
"github.com/lightningnetwork/lnd/blockcache"
"github.com/lightningnetwork/lnd/chainntnfs"
@ -103,6 +104,54 @@ func syncNotifierWithMiner(t *testing.T, notifier *BitcoindNotifier,
"err=%v, minerHeight=%v, bitcoindHeight=%v",
err, minerHeight, bitcoindHeight)
}
// Get the num of connections the miner has. We expect it to
// have at least one connection with the chain backend.
count, err := miner.Client.GetConnectionCount()
require.NoError(t, err)
if count != 0 {
continue
}
// Reconnect the miner and the chain backend.
//
// NOTE: The connection should have been made before we perform
// the `syncNotifierWithMiner`. However, due to an unknown
// reason, the miner may refuse to process the inbound
// connection made by the bitcoind node, causing the connection
// to fail. It's possible there's a bug in the handshake between
// the two nodes.
//
// In the normal flow, bitcoind starts a v2 handshake, which btcd
// fails and then disconnects. Upon seeing this disconnection,
// bitcoind retries with a v1 handshake and succeeds. In the
// failing flow, upon seeing the v2 handshake, btcd doesn't seem
// to perform the disconnect; instead an EOF websocket error is
// observed.
//
// TODO(yy): Fix the above bug in `btcd`. This can be reproduced
// using `make flakehunter-unit pkg=$pkg case=$case`, with,
// `case=TestHistoricalConfDetailsNoTxIndex/rpc_polling_enabled`
// `pkg=chainntnfs/bitcoindnotify`.
// Also need to modify the temp dir logic so we can save the
// debug logs.
// This bug is likely to be fixed when we implement the
// encrypted p2p conn, or when we properly fix the shutdown
// issues in all our RPC conns.
t.Log("Expected to the chain backend to have one conn with " +
"the miner, instead it's disconnected!")
// We now ask the miner to add the chain backend back.
host := fmt.Sprintf(
"127.0.0.1:%s", notifier.chainParams.DefaultPort,
)
// NOTE: AddNode must take a host that has the format
// `host:port`, otherwise the default port will be used. Check
// `normalizeAddress` in btcd for details.
err = miner.Client.AddNode(host, rpcclient.ANAdd)
require.NoError(t, err, "Failed to connect miner to the chain "+
"backend")
}
}
@ -130,7 +179,7 @@ func testHistoricalConfDetailsTxIndex(t *testing.T, rpcPolling bool) {
)
bitcoindConn := unittest.NewBitcoindBackend(
t, unittest.NetParams, miner.P2PAddress(), true, rpcPolling,
t, unittest.NetParams, miner, true, rpcPolling,
)
hintCache := initHintCache(t)
@ -140,8 +189,6 @@ func testHistoricalConfDetailsTxIndex(t *testing.T, rpcPolling bool) {
t, bitcoindConn, hintCache, hintCache, blockCache,
)
syncNotifierWithMiner(t, notifier, miner)
// A transaction unknown to the node should not be found within the
// txindex even if it is enabled, so we should not proceed with any
// fallback methods.
@ -230,13 +277,15 @@ func testHistoricalConfDetailsNoTxIndex(t *testing.T, rpcpolling bool) {
miner := unittest.NewMiner(t, unittest.NetParams, nil, true, 25)
bitcoindConn := unittest.NewBitcoindBackend(
t, unittest.NetParams, miner.P2PAddress(), false, rpcpolling,
t, unittest.NetParams, miner, false, rpcpolling,
)
hintCache := initHintCache(t)
blockCache := blockcache.NewBlockCache(10000)
notifier := setUpNotifier(t, bitcoindConn, hintCache, hintCache, blockCache)
notifier := setUpNotifier(
t, bitcoindConn, hintCache, hintCache, blockCache,
)
// Since the node has its txindex disabled, we fall back to scanning the
// chain manually. A transaction unknown to the network should not be
@ -245,7 +294,11 @@ func testHistoricalConfDetailsNoTxIndex(t *testing.T, rpcpolling bool) {
copy(unknownHash[:], bytes.Repeat([]byte{0x10}, 32))
unknownConfReq, err := chainntnfs.NewConfRequest(&unknownHash, testScript)
require.NoError(t, err, "unable to create conf request")
broadcastHeight := syncNotifierWithMiner(t, notifier, miner)
// Get the current best height.
_, broadcastHeight, err := miner.Client.GetBestBlock()
require.NoError(t, err, "unable to retrieve miner's current height")
_, txStatus, err := notifier.historicalConfDetails(
unknownConfReq, uint32(broadcastHeight), uint32(broadcastHeight),
)

View file

@ -539,7 +539,7 @@ out:
func (b *BtcdNotifier) handleRelevantTx(tx *btcutil.Tx,
mempool bool, height uint32) {
// If this is a mempool spend, we'll ask the mempool notifier to hanlde
// If this is a mempool spend, we'll ask the mempool notifier to handle
// it.
if mempool {
err := b.memNotifier.ProcessRelevantSpendTx(tx)

View file

@ -211,7 +211,7 @@ func (m *MempoolNotifier) findRelevantInputs(tx *btcutil.Tx) (inputsWithTx,
// If found, save it to watchedInputs to notify the
// subscriber later.
Log.Infof("Found input %s, spent in %s", op, txid)
Log.Debugf("Found input %s, spent in %s", op, txid)
// Construct the spend details.
details := &SpendDetail{

View file

@ -41,9 +41,8 @@ func testSingleConfirmationNotification(miner *rpctest.Harness,
// function.
txid, pkScript, err := chainntnfs.GetTestTxidAndScript(miner)
require.NoError(t, err, "unable to create test tx")
if err := chainntnfs.WaitForMempoolTx(miner, txid); err != nil {
t.Fatalf("tx not relayed to miner: %v", err)
}
err = chainntnfs.WaitForMempoolTx(miner, txid)
require.NoError(t, err, "tx not relayed to miner")
_, currentHeight, err := miner.Client.GetBestBlock()
require.NoError(t, err, "unable to get current height")
@ -68,6 +67,11 @@ func testSingleConfirmationNotification(miner *rpctest.Harness,
blockHash, err := miner.Client.Generate(1)
require.NoError(t, err, "unable to generate single block")
// Assert the above tx is mined in the block.
block, err := miner.Client.GetBlock(blockHash[0])
require.NoError(t, err)
require.Len(t, block.Transactions, 2, "block does not contain tx")
select {
case confInfo := <-confIntent.Confirmed:
if !confInfo.BlockHash.IsEqual(blockHash[0]) {
@ -1928,7 +1932,7 @@ func TestInterfaces(t *testing.T, targetBackEnd string) {
case "bitcoind":
var bitcoindConn *chain.BitcoindConn
bitcoindConn = unittest.NewBitcoindBackend(
t, unittest.NetParams, p2pAddr, true, false,
t, unittest.NetParams, miner, true, false,
)
newNotifier = func() (chainntnfs.TestChainNotifier, error) {
return bitcoindnotify.New(
@ -1940,7 +1944,7 @@ func TestInterfaces(t *testing.T, targetBackEnd string) {
case "bitcoind-rpc-polling":
var bitcoindConn *chain.BitcoindConn
bitcoindConn = unittest.NewBitcoindBackend(
t, unittest.NetParams, p2pAddr, true, true,
t, unittest.NetParams, miner, true, true,
)
newNotifier = func() (chainntnfs.TestChainNotifier, error) {
return bitcoindnotify.New(

View file

@ -1757,10 +1757,6 @@ func (n *TxNotifier) NotifyHeight(height uint32) error {
for ntfn := range n.ntfnsByConfirmHeight[height] {
confSet := n.confNotifications[ntfn.ConfRequest]
Log.Debugf("Dispatching %v confirmation notification for "+
"conf_id=%v, %v", ntfn.NumConfirmations, ntfn.ConfID,
ntfn.ConfRequest)
// The default notification we assigned above includes the
// block along with the rest of the details. However not all
// clients want the block, so we make a copy here w/o the block
@ -1770,6 +1766,20 @@ func (n *TxNotifier) NotifyHeight(height uint32) error {
confDetails.Block = nil
}
// If the `confDetails` has already been sent before, we'll
// skip it and continue processing the next one.
if ntfn.dispatched {
Log.Debugf("Skipped dispatched conf details for "+
"request %v conf_id=%v", ntfn.ConfRequest,
ntfn.ConfID)
continue
}
Log.Debugf("Dispatching %v confirmation notification for "+
"conf_id=%v, %v", ntfn.NumConfirmations, ntfn.ConfID,
ntfn.ConfRequest)
select {
case ntfn.Event.Confirmed <- &confDetails:
ntfn.dispatched = true

View file

@ -2,10 +2,13 @@ package chanbackup
import (
"fmt"
"io"
"os"
"path/filepath"
"time"
"github.com/lightningnetwork/lnd/keychain"
"github.com/lightningnetwork/lnd/lnrpc"
)
const (
@ -17,6 +20,10 @@ const (
// file that we'll use to atomically update the primary back up file
// when new channels are detected.
DefaultTempBackupFileName = "temp-dont-use.backup"
// DefaultChanBackupArchiveDirName is the default name of the directory
// that we'll use to store old channel backups.
DefaultChanBackupArchiveDirName = "chan-backup-archives"
)
var (
@ -44,28 +51,40 @@ type MultiFile struct {
// tempFile is an open handle to the temp back up file.
tempFile *os.File
// archiveDir is the directory where we'll store old channel backups.
archiveDir string
// noBackupArchive indicates whether old backups should be deleted
// rather than archived.
noBackupArchive bool
}
// NewMultiFile creates a new multi-file instance at the target location on the
// file system.
func NewMultiFile(fileName string) *MultiFile {
func NewMultiFile(fileName string, noBackupArchive bool) *MultiFile {
// We'll store our temporary backup file in the very same directory as
// the main backup file.
backupFileDir := filepath.Dir(fileName)
tempFileName := filepath.Join(
backupFileDir, DefaultTempBackupFileName,
)
archiveDir := filepath.Join(
backupFileDir, DefaultChanBackupArchiveDirName,
)
return &MultiFile{
fileName: fileName,
tempFileName: tempFileName,
fileName: fileName,
tempFileName: tempFileName,
archiveDir: archiveDir,
noBackupArchive: noBackupArchive,
}
}
// UpdateAndSwap will attempt to write a new temporary backup file to disk with
// the newBackup encoded, then atomically swap (via rename) the old file for
// the new file by updating the name of the new file to the old.
// the new file by updating the name of the new file to the old. It also checks
// if the old file should be archived first before swapping it.
func (b *MultiFile) UpdateAndSwap(newBackup PackedMulti) error {
// If the main backup file isn't set, then we can't proceed.
if b.fileName == "" {
@ -117,6 +136,12 @@ func (b *MultiFile) UpdateAndSwap(newBackup PackedMulti) error {
return fmt.Errorf("unable to close file: %w", err)
}
// Archive the old channel backup file before replacing.
if err := b.createArchiveFile(); err != nil {
return fmt.Errorf("unable to archive old channel "+
"backup file: %w", err)
}
// Finally, we'll attempt to atomically rename the temporary file to
// the main back up file. If this succeeds, then we'll only have a
// single file on disk once this method exits.
@ -147,3 +172,74 @@ func (b *MultiFile) ExtractMulti(keyChain keychain.KeyRing) (*Multi, error) {
packedMulti := PackedMulti(multiBytes)
return packedMulti.Unpack(keyChain)
}
// createArchiveFile creates an archive file with a timestamped name in the
// specified archive directory, and copies the contents of the main backup file
// to the new archive file.
func (b *MultiFile) createArchiveFile() error {
// User can skip archiving of old backup files to save disk space.
if b.noBackupArchive {
log.Debug("Skipping archive of old backup file as configured")
return nil
}
// Check for old channel backup file.
oldFileExists := lnrpc.FileExists(b.fileName)
if !oldFileExists {
log.Debug("No old channel backup file to archive")
return nil
}
log.Infof("Archiving old channel backup to %v", b.archiveDir)
// Generate archive file path with timestamped name.
baseFileName := filepath.Base(b.fileName)
timestamp := time.Now().Format("2006-01-02-15-04-05")
archiveFileName := fmt.Sprintf("%s-%s", baseFileName, timestamp)
archiveFilePath := filepath.Join(b.archiveDir, archiveFileName)
oldBackupFile, err := os.Open(b.fileName)
if err != nil {
return fmt.Errorf("unable to open old channel backup file: "+
"%w", err)
}
defer func() {
err := oldBackupFile.Close()
if err != nil {
log.Errorf("unable to close old channel backup file: "+
"%v", err)
}
}()
// Ensure the archive directory exists. If it doesn't we create it.
const archiveDirPermissions = 0o700
err = os.MkdirAll(b.archiveDir, archiveDirPermissions)
if err != nil {
return fmt.Errorf("unable to create archive directory: %w", err)
}
// Create new archive file.
archiveFile, err := os.Create(archiveFilePath)
if err != nil {
return fmt.Errorf("unable to create archive file: %w", err)
}
defer func() {
err := archiveFile.Close()
if err != nil {
log.Errorf("unable to close archive file: %v", err)
}
}()
// Copy the contents of the old backup to the newly created archive file.
_, err = io.Copy(archiveFile, oldBackupFile)
if err != nil {
return fmt.Errorf("unable to copy to archive file: %w", err)
}
err = archiveFile.Sync()
if err != nil {
return fmt.Errorf("unable to sync archive file: %w", err)
}
return nil
}
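As an aside, here is a minimal sketch of how the archive behavior added above could be exercised by a caller; the backup path is hypothetical and error handling is reduced for brevity. With noBackupArchive set to false, every successful UpdateAndSwap first copies the previous backup into the chan-backup-archives directory under a timestamped name before the atomic rename.
package main
import (
    "log"
    "os"
    "github.com/lightningnetwork/lnd/chanbackup"
)
func main() {
    // Hypothetical path; in lnd this comes from --backupfilepath.
    const backupDir = "/tmp/lnd-sketch"
    if err := os.MkdirAll(backupDir, 0o700); err != nil {
        log.Fatalf("unable to create backup dir: %v", err)
    }
    // noBackupArchive=false keeps archiving enabled, so the old backup is
    // copied into chan-backup-archives before being replaced.
    backupFile := chanbackup.NewMultiFile(backupDir+"/channel.backup", false)
    // A real PackedMulti comes from packing the current channel set; raw
    // bytes stand in for it here, mirroring the tests below.
    packed := chanbackup.PackedMulti([]byte("packed multi bytes"))
    if err := backupFile.UpdateAndSwap(packed); err != nil {
        log.Fatalf("unable to swap backup: %v", err)
    }
}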

View file

@ -45,6 +45,121 @@ func assertFileDeleted(t *testing.T, filePath string) {
}
}
// TestUpdateAndSwapWithArchive tests that we're able to properly swap out old
// backups on disk with new ones. In addition, we check for noBackupArchive to
// ensure that the archive file is created when it's set to false, and not
// created when it's set to true.
func TestUpdateAndSwapWithArchive(t *testing.T) {
t.Parallel()
testCases := []struct {
name string
noBackupArchive bool
}{
// Test with noBackupArchive set to true - should not create
// archive.
{
name: "no archive file",
noBackupArchive: true,
},
// Test with noBackupArchive set to false - should create
// archive.
{
name: "with archive file",
noBackupArchive: false,
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
tempTestDir := t.TempDir()
fileName := filepath.Join(
tempTestDir, DefaultBackupFileName,
)
tempFileName := filepath.Join(
tempTestDir, DefaultTempBackupFileName,
)
backupFile := NewMultiFile(fileName, tc.noBackupArchive)
// To start with, we'll make a random byte slice that'll
// pose as our packed multi backup.
newPackedMulti, err := makeFakePackedMulti()
require.NoError(t, err)
// With our backup created, we'll now attempt to swap
// out this backup, for the old one.
err = backupFile.UpdateAndSwap(newPackedMulti)
require.NoError(t, err)
// If we read out the file on disk, then it should match
// exactly what we wrote. The temp backup file should
// also be gone.
assertBackupMatches(t, fileName, newPackedMulti)
assertFileDeleted(t, tempFileName)
// Now that we know this is a valid test case, we'll
// make a new packed multi to swap out this current one.
newPackedMulti2, err := makeFakePackedMulti()
require.NoError(t, err)
// We'll then attempt to swap the old version for this
// new one.
err = backupFile.UpdateAndSwap(newPackedMulti2)
require.NoError(t, err)
// Once again, the file written on disk should have been
// properly swapped out with the new instance.
assertBackupMatches(t, fileName, newPackedMulti2)
// Additionally, we shouldn't be able to find the temp
// backup file on disk, as it should be deleted each
// time.
assertFileDeleted(t, tempFileName)
// Now check if archive was created when noBackupArchive
// is false.
archiveDir := filepath.Join(
filepath.Dir(fileName),
DefaultChanBackupArchiveDirName,
)
// When noBackupArchive is true, no new archive file
// should be created.
//
// NOTE: In a real environment, the archive directory
// might exist with older backups before the feature is
// disabled, but for test simplicity (since we clean up
// the directory between test cases), we verify the
// directory doesn't exist at all.
if tc.noBackupArchive {
require.NoDirExists(t, archiveDir)
return
}
// Otherwise we expect an archive to be created.
files, err := os.ReadDir(archiveDir)
require.NoError(t, err)
require.Len(t, files, 1)
// Verify the archive contents match the previous
// backup.
archiveFile := filepath.Join(
archiveDir, files[0].Name(),
)
// The archived content should match the previous backup
// (newPackedMulti) that was just swapped out.
assertBackupMatches(t, archiveFile, newPackedMulti)
// Clean up the archive directory.
os.RemoveAll(archiveDir)
})
}
}
// TestUpdateAndSwap tests that we're able to properly swap out old backups on
// disk with new ones. Additionally, after a swap operation succeeds, then each
// time we should only have the main backup file on disk, as the temporary file
@ -52,114 +167,112 @@ func assertFileDeleted(t *testing.T, filePath string) {
func TestUpdateAndSwap(t *testing.T) {
t.Parallel()
tempTestDir := t.TempDir()
// Check that when the main file name is blank, an error is returned.
backupFile := NewMultiFile("", false)
err := backupFile.UpdateAndSwap(PackedMulti(nil))
require.ErrorIs(t, err, ErrNoBackupFileExists)
testCases := []struct {
fileName string
tempFileName string
name string
oldTempExists bool
valid bool
}{
// Main file name is blank, should fail.
{
fileName: "",
valid: false,
},
// Old temporary file still exists, should be removed. Only one
// file should remain.
{
fileName: filepath.Join(
tempTestDir, DefaultBackupFileName,
),
tempFileName: filepath.Join(
tempTestDir, DefaultTempBackupFileName,
),
name: "remove old temp file",
oldTempExists: true,
valid: true,
},
// Old temp doesn't exist, should swap out file, only a single
// file remains.
{
fileName: filepath.Join(
tempTestDir, DefaultBackupFileName,
),
tempFileName: filepath.Join(
tempTestDir, DefaultTempBackupFileName,
),
valid: true,
name: "swap out file",
oldTempExists: false,
},
}
for i, testCase := range testCases {
backupFile := NewMultiFile(testCase.fileName)
// To start with, we'll make a random byte slice that'll pose
// as our packed multi backup.
newPackedMulti, err := makeFakePackedMulti()
if err != nil {
t.Fatalf("unable to make test backup: %v", err)
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
tempTestDir := t.TempDir()
// If the old temporary file is meant to exist, then we'll
// create it now as an empty file.
if testCase.oldTempExists {
f, err := os.Create(testCase.tempFileName)
if err != nil {
t.Fatalf("unable to create temp file: %v", err)
fileName := filepath.Join(
tempTestDir, DefaultBackupFileName,
)
tempFileName := filepath.Join(
tempTestDir, DefaultTempBackupFileName,
)
backupFile := NewMultiFile(fileName, false)
// To start with, we'll make a random byte slice that'll
// pose as our packed multi backup.
newPackedMulti, err := makeFakePackedMulti()
require.NoError(t, err)
// If the old temporary file is meant to exist, then
// we'll create it now as an empty file.
if tc.oldTempExists {
f, err := os.Create(tempFileName)
require.NoError(t, err)
require.NoError(t, f.Close())
// TODO(roasbeef): mock out fs calls?
}
require.NoError(t, f.Close())
// TODO(roasbeef): mock out fs calls?
}
// With our backup created, we'll now attempt to swap
// out this backup, for the old one.
err = backupFile.UpdateAndSwap(newPackedMulti)
require.NoError(t, err)
// With our backup created, we'll now attempt to swap out this
// backup, for the old one.
err = backupFile.UpdateAndSwap(PackedMulti(newPackedMulti))
switch {
// If this is a valid test case, and we failed, then we'll
// return an error.
case err != nil && testCase.valid:
t.Fatalf("#%v, unable to swap file: %v", i, err)
// If we read out the file on disk, then it should match
// exactly what we wrote. The temp backup file should
// also be gone.
assertBackupMatches(t, fileName, newPackedMulti)
assertFileDeleted(t, tempFileName)
// If this is an invalid test case, and we passed it, then
// we'll return an error.
case err == nil && !testCase.valid:
t.Fatalf("#%v file swap should have failed: %v", i, err)
}
// Now that we know this is a valid test case, we'll
// make a new packed multi to swap out this current one.
newPackedMulti2, err := makeFakePackedMulti()
require.NoError(t, err)
if !testCase.valid {
continue
}
// We'll then attempt to swap the old version for this
// new one.
err = backupFile.UpdateAndSwap(newPackedMulti2)
require.NoError(t, err)
// If we read out the file on disk, then it should match
// exactly what we wrote. The temp backup file should also be
// gone.
assertBackupMatches(t, testCase.fileName, newPackedMulti)
assertFileDeleted(t, testCase.tempFileName)
// Once again, the file written on disk should have been
// properly swapped out with the new instance.
assertBackupMatches(t, fileName, newPackedMulti2)
// Now that we know this is a valid test case, we'll make a new
// packed multi to swap out this current one.
newPackedMulti2, err := makeFakePackedMulti()
if err != nil {
t.Fatalf("unable to make test backup: %v", err)
}
// Additionally, we shouldn't be able to find the temp
// backup file on disk, as it should be deleted each
// time.
assertFileDeleted(t, tempFileName)
// We'll then attempt to swap the old version for this new one.
err = backupFile.UpdateAndSwap(PackedMulti(newPackedMulti2))
if err != nil {
t.Fatalf("unable to swap file: %v", err)
}
// Now check if archive was created when noBackupArchive
// is false.
archiveDir := filepath.Join(
filepath.Dir(fileName),
DefaultChanBackupArchiveDirName,
)
files, err := os.ReadDir(archiveDir)
require.NoError(t, err)
require.Len(t, files, 1)
// Once again, the file written on disk should have been
// properly swapped out with the new instance.
assertBackupMatches(t, testCase.fileName, newPackedMulti2)
// Verify the archive contents match the previous
// backup.
archiveFile := filepath.Join(
archiveDir, files[0].Name(),
)
// Additionally, we shouldn't be able to find the temp backup
// file on disk, as it should be deleted each time.
assertFileDeleted(t, testCase.tempFileName)
// The archived content should match the previous backup
// (newPackedMulti) that was just swapped out.
assertBackupMatches(t, archiveFile, newPackedMulti)
// Clean up the archive directory.
os.RemoveAll(archiveDir)
})
}
}
@ -238,7 +351,7 @@ func TestExtractMulti(t *testing.T) {
}
for i, testCase := range testCases {
// First, we'll make our backup file with the specified name.
backupFile := NewMultiFile(testCase.fileName)
backupFile := NewMultiFile(testCase.fileName, false)
// With our file made, we'll now attempt to read out the
// multi-file.
@ -274,3 +387,86 @@ func TestExtractMulti(t *testing.T) {
assertMultiEqual(t, &unpackedMulti, freshUnpackedMulti)
}
}
// TestCreateArchiveFile tests that we're able to create an archive file
// with a timestamped name in the specified archive directory, and copy the
// contents of the main backup file to the new archive file.
func TestCreateArchiveFile(t *testing.T) {
t.Parallel()
// First, we'll create a temporary directory for our test files.
tempDir := t.TempDir()
archiveDir := filepath.Join(tempDir, DefaultChanBackupArchiveDirName)
// Next, we'll create a test backup file and write some content to it.
backupFile := filepath.Join(tempDir, DefaultBackupFileName)
testContent := []byte("test backup content")
err := os.WriteFile(backupFile, testContent, 0644)
require.NoError(t, err)
tests := []struct {
name string
setup func()
noBackupArchive bool
wantError bool
}{
{
name: "successful archive",
noBackupArchive: false,
},
{
name: "skip archive when disabled",
noBackupArchive: true,
},
{
name: "invalid archive directory permissions",
setup: func() {
// Create dir with no write permissions.
err := os.MkdirAll(archiveDir, 0500)
require.NoError(t, err)
},
noBackupArchive: false,
wantError: true,
},
}
for _, tc := range tests {
tc := tc
t.Run(tc.name, func(t *testing.T) {
defer os.RemoveAll(archiveDir)
if tc.setup != nil {
tc.setup()
}
multiFile := NewMultiFile(
backupFile, tc.noBackupArchive,
)
err := multiFile.createArchiveFile()
if tc.wantError {
require.Error(t, err)
return
}
require.NoError(t, err)
// If archiving is disabled, verify no archive was
// created.
if tc.noBackupArchive {
require.NoDirExists(t, archiveDir)
return
}
// Verify archive exists and content matches.
files, err := os.ReadDir(archiveDir)
require.NoError(t, err)
require.Len(t, files, 1)
archivedContent, err := os.ReadFile(
filepath.Join(archiveDir, files[0].Name()),
)
require.NoError(t, err)
assertBackupMatches(t, backupFile, archivedContent)
})
}
}

View file

@ -3146,7 +3146,7 @@ func (c *OpenChannel) RemoteCommitChainTip() (*CommitDiff, error) {
return nil, err
}
return cd, err
return cd, nil
}
// UnsignedAckedUpdates retrieves the persisted unsigned acked remote log

View file

@ -107,6 +107,10 @@ type testChannelParams struct {
// openChannel is set to true if the channel should be fully marked as
// open. If this is false, the channel will be left in pending state.
openChannel bool
// closedChannel is set to true if the channel should be marked as
// closed after opening it.
closedChannel bool
}
// testChannelOption is a functional option which can be used to alter the
@ -129,6 +133,21 @@ func openChannelOption() testChannelOption {
}
}
// closedChannelOption is an option which can be used to create a test channel
// that is closed.
func closedChannelOption() testChannelOption {
return func(params *testChannelParams) {
params.closedChannel = true
}
}
// pubKeyOption is an option which can be used to set the remote's pubkey.
func pubKeyOption(pubKey *btcec.PublicKey) testChannelOption {
return func(params *testChannelParams) {
params.channel.IdentityPub = pubKey
}
}
// localHtlcsOption is an option which allows setting of htlcs on the local
// commitment.
func localHtlcsOption(htlcs []HTLC) testChannelOption {
@ -231,6 +250,17 @@ func createTestChannel(t *testing.T, cdb *ChannelStateDB,
err = params.channel.MarkAsOpen(params.channel.ShortChannelID)
require.NoError(t, err, "unable to mark channel open")
if params.closedChannel {
// Set the other public keys so that serialization doesn't
// panic.
err = params.channel.CloseChannel(&ChannelCloseSummary{
RemotePub: params.channel.IdentityPub,
RemoteCurrentRevocation: params.channel.IdentityPub,
RemoteNextRevocation: params.channel.IdentityPub,
})
require.NoError(t, err, "unable to close channel")
}
return params.channel
}

View file

@ -730,6 +730,194 @@ func (c *ChannelStateDB) FetchChannelByID(tx kvdb.RTx, id lnwire.ChannelID) (
return c.channelScanner(tx, selector)
}
// ChanCount is used by the server in determining access control.
type ChanCount struct {
HasOpenOrClosedChan bool
PendingOpenCount uint64
}
// FetchPermAndTempPeers returns a map where the key is the remote node's
// public key and the value is a struct that has a tally of the pending-open
// channels and whether the peer has an open or closed channel with us.
func (c *ChannelStateDB) FetchPermAndTempPeers(
chainHash []byte) (map[string]ChanCount, error) {
peerCounts := make(map[string]ChanCount)
err := kvdb.View(c.backend, func(tx kvdb.RTx) error {
openChanBucket := tx.ReadBucket(openChannelBucket)
if openChanBucket == nil {
return ErrNoChanDBExists
}
openChanErr := openChanBucket.ForEach(func(nodePub,
v []byte) error {
// If there is a value, this is not a bucket.
if v != nil {
return nil
}
nodeChanBucket := openChanBucket.NestedReadBucket(
nodePub,
)
if nodeChanBucket == nil {
return nil
}
chainBucket := nodeChanBucket.NestedReadBucket(
chainHash,
)
if chainBucket == nil {
return fmt.Errorf("no chain bucket exists")
}
var isPermPeer bool
var pendingOpenCount uint64
internalErr := chainBucket.ForEach(func(chanPoint,
val []byte) error {
// If there is a value, this is not a bucket.
if val != nil {
return nil
}
chanBucket := chainBucket.NestedReadBucket(
chanPoint,
)
if chanBucket == nil {
return nil
}
var op wire.OutPoint
readErr := graphdb.ReadOutpoint(
bytes.NewReader(chanPoint), &op,
)
if readErr != nil {
return readErr
}
// We need to go through each channel and look
// at the IsPending status.
openChan, err := fetchOpenChannel(
chanBucket, &op,
)
if err != nil {
return err
}
if openChan.IsPending {
// Add to the pending-open count since
// this is a temp peer.
pendingOpenCount++
return nil
}
// Since IsPending is false, this is a perm
// peer.
isPermPeer = true
return nil
})
if internalErr != nil {
return internalErr
}
peerCount := ChanCount{
HasOpenOrClosedChan: isPermPeer,
PendingOpenCount: pendingOpenCount,
}
peerCounts[string(nodePub)] = peerCount
return nil
})
if openChanErr != nil {
return openChanErr
}
// Now check the closed channel bucket.
historicalChanBucket := tx.ReadBucket(historicalChannelBucket)
if historicalChanBucket == nil {
return ErrNoHistoricalBucket
}
historicalErr := historicalChanBucket.ForEach(func(chanPoint,
v []byte) error {
// Parse each nested bucket and the chanInfoKey to get
// the IsPending bool. This determines whether the
// peer is protected or not.
if v != nil {
// This is not a bucket. This is currently not
// possible.
return nil
}
chanBucket := historicalChanBucket.NestedReadBucket(
chanPoint,
)
if chanBucket == nil {
// This is not possible.
return fmt.Errorf("no historical channel " +
"bucket exists")
}
var op wire.OutPoint
readErr := graphdb.ReadOutpoint(
bytes.NewReader(chanPoint), &op,
)
if readErr != nil {
return readErr
}
// This channel is closed, but the structure of the
// historical bucket is the same. This is by design,
// which means we can call fetchOpenChannel.
channel, fetchErr := fetchOpenChannel(chanBucket, &op)
if fetchErr != nil {
return fetchErr
}
// Only include this peer in the protected class if
// the closing transaction confirmed. Note that
// CloseChannel can be called in the funding manager
// while IsPending is true which is why we need this
// special-casing to not count premature funding
// manager calls to CloseChannel.
if !channel.IsPending {
// Fetch the public key of the remote node. We
// need to use the string-ified serialized,
// compressed bytes as the key.
remotePub := channel.IdentityPub
remoteSer := remotePub.SerializeCompressed()
remoteKey := string(remoteSer)
count, exists := peerCounts[remoteKey]
if exists {
count.HasOpenOrClosedChan = true
peerCounts[remoteKey] = count
} else {
peerCount := ChanCount{
HasOpenOrClosedChan: true,
}
peerCounts[remoteKey] = peerCount
}
}
return nil
})
if historicalErr != nil {
return historicalErr
}
return nil
}, func() {
clear(peerCounts)
})
return peerCounts, err
}
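To make the intended consumption of this map concrete, here is a small illustrative sketch (not the actual server.go logic) of folding the ChanCount entries into the access tiers referenced in the surrounding comments: protected, temporary, and restricted. Peers that have no channels at all never appear in the map and would default to restricted access.
package accesssketch
import "github.com/lightningnetwork/lnd/channeldb"
// classifyPeers folds the result of FetchPermAndTempPeers into access tiers.
// The tier names mirror the surrounding comments; the real server keeps this
// state in dedicated caches rather than a plain map.
func classifyPeers(counts map[string]channeldb.ChanCount) map[string]string {
    access := make(map[string]string, len(counts))
    for pubKeyStr, count := range counts {
        switch {
        // A confirmed open (or closed) channel grants protected access.
        case count.HasOpenOrClosedChan:
            access[pubKeyStr] = "protected"
        // Only pending-open channels grant temporary access, which can
        // later be revoked, e.g. on a funding timeout.
        case count.PendingOpenCount > 0:
            access[pubKeyStr] = "temporary"
        // Anything else would fall back to restricted access.
        default:
            access[pubKeyStr] = "restricted"
        }
    }
    return access
}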
// channelSelector describes a function that takes a chain-hash bucket from
// within the open-channel DB and returns the wanted channel point bytes, and
// channel point. It must return the ErrChannelNotFound error if the wanted

View file

@ -721,6 +721,91 @@ func TestFetchHistoricalChannel(t *testing.T) {
}
}
// TestFetchPermTempPeer tests that we're able to call FetchPermAndTempPeers
// successfully.
func TestFetchPermTempPeer(t *testing.T) {
t.Parallel()
fullDB, err := MakeTestDB(t)
require.NoError(t, err, "unable to make test database")
cdb := fullDB.ChannelStateDB()
// Create an open channel.
privKey1, err := btcec.NewPrivateKey()
require.NoError(t, err, "unable to generate new private key")
pubKey1 := privKey1.PubKey()
channelState1 := createTestChannel(
t, cdb, openChannelOption(), pubKeyOption(pubKey1),
)
// Next, assert that the channel exists in the database.
_, err = cdb.FetchChannel(channelState1.FundingOutpoint)
require.NoError(t, err, "unable to fetch channel")
// Create a pending channel.
privKey2, err := btcec.NewPrivateKey()
require.NoError(t, err, "unable to generate private key")
pubKey2 := privKey2.PubKey()
channelState2 := createTestChannel(t, cdb, pubKeyOption(pubKey2))
// Assert that the channel exists in the database.
_, err = cdb.FetchChannel(channelState2.FundingOutpoint)
require.NoError(t, err, "unable to fetch channel")
// Create a closed channel.
privKey3, err := btcec.NewPrivateKey()
require.NoError(t, err, "unable to generate new private key")
pubKey3 := privKey3.PubKey()
_ = createTestChannel(
t, cdb, pubKeyOption(pubKey3), openChannelOption(),
closedChannelOption(),
)
// Fetch the ChanCount for our peers.
peerCounts, err := cdb.FetchPermAndTempPeers(key[:])
require.NoError(t, err, "unable to fetch perm and temp peers")
// There should only be three entries.
require.Len(t, peerCounts, 3)
// The first entry should have HasOpenOrClosedChan set to true and
// PendingOpenCount set to 0.
count1, found := peerCounts[string(pubKey1.SerializeCompressed())]
require.True(t, found, "unable to find peer 1 in peerCounts")
require.True(
t, count1.HasOpenOrClosedChan,
"couldn't find peer 1's channels",
)
require.Zero(
t, count1.PendingOpenCount,
"peer 1 doesn't have 0 pending-open",
)
count2, found := peerCounts[string(pubKey2.SerializeCompressed())]
require.True(t, found, "unable to find peer 2 in peerCounts")
require.False(
t, count2.HasOpenOrClosedChan, "found erroneous channels",
)
require.Equal(t, uint64(1), count2.PendingOpenCount)
count3, found := peerCounts[string(pubKey3.SerializeCompressed())]
require.True(t, found, "unable to find peer 3 in peerCounts")
require.True(
t, count3.HasOpenOrClosedChan,
"couldn't find peer 3's channels",
)
require.Zero(
t, count3.PendingOpenCount,
"peer 3 doesn't have 0 pending-open",
)
}
func createLightningNode(priv *btcec.PrivateKey) *models.LightningNode {
updateTime := rand.Int63()

View file

@ -103,3 +103,28 @@ func TestEncodeDecodeAmpInvoiceState(t *testing.T) {
// The two states should match.
require.Equal(t, ampState, ampState2)
}
// TestInvoiceBucketTombstone tests the behavior of setting and checking the
// invoice bucket tombstone. It verifies that the tombstone can be set correctly
// and detected when present in the database.
func TestInvoiceBucketTombstone(t *testing.T) {
t.Parallel()
// Initialize a test database.
db, err := MakeTestDB(t)
require.NoError(t, err, "unable to initialize db")
// Ensure the tombstone doesn't exist initially.
tombstoneExists, err := db.GetInvoiceBucketTombstone()
require.NoError(t, err)
require.False(t, tombstoneExists)
// Set the tombstone.
err = db.SetInvoiceBucketTombstone()
require.NoError(t, err)
// Verify that the tombstone exists after setting it.
tombstoneExists, err = db.GetInvoiceBucketTombstone()
require.NoError(t, err)
require.True(t, tombstoneExists)
}

View file

@ -80,6 +80,13 @@ var (
//
// settleIndexNo => invoiceKey
settleIndexBucket = []byte("invoice-settle-index")
// invoiceBucketTombstone is a special key that indicates the invoice
// bucket has been permanently closed. Its purpose is to prevent the
// invoice bucket from being reopened in the future. A key use case for
// the tombstone is to ensure users cannot switch back to the KV invoice
// database after migrating to the native SQL database.
invoiceBucketTombstone = []byte("invoice-tombstone")
)
const (
@ -650,18 +657,13 @@ func (d *DB) UpdateInvoice(_ context.Context, ref invpkg.InvoiceRef,
return err
}
// If the set ID hint is non-nil, then we'll use that to filter
// out the HTLCs for AMP invoice so we don't need to read them
// all out to satisfy the invoice callback below. If it's nil,
// then we pass in the zero set ID which means no HTLCs will be
// read out.
var invSetID invpkg.SetID
if setIDHint != nil {
invSetID = *setIDHint
}
// setIDHint can also be nil here, which means all the HTLCs
// for AMP invoices are fetched. If the blank setID is passed
// in, then no HTLCs are fetched for the AMP invoice. If a
// specific setID is passed in, then only the HTLCs for that
// setID are fetched for a particular sub-AMP invoice.
invoice, err := fetchInvoice(
invoiceNum, invoices, []*invpkg.SetID{&invSetID}, false,
invoiceNum, invoices, []*invpkg.SetID{setIDHint}, false,
)
if err != nil {
return err
@ -691,7 +693,7 @@ func (d *DB) UpdateInvoice(_ context.Context, ref invpkg.InvoiceRef,
// If this is an AMP update, then limit the returned AMP state
// to only the requested set ID.
if setIDHint != nil {
filterInvoiceAMPState(updatedInvoice, &invSetID)
filterInvoiceAMPState(updatedInvoice, setIDHint)
}
return nil
@ -848,7 +850,10 @@ func (k *kvInvoiceUpdater) Finalize(updateType invpkg.UpdateType) error {
return k.storeSettleHodlInvoiceUpdate()
case invpkg.CancelInvoiceUpdate:
return k.serializeAndStoreInvoice()
// Persist all changes which were made when cancelling the
// invoice. All HTLCs which were accepted are now canceled, so
// we persist this state.
return k.storeCancelHtlcsUpdate()
}
return fmt.Errorf("unknown update type: %v", updateType)
@ -2402,3 +2407,49 @@ func (d *DB) DeleteInvoice(_ context.Context,
return err
}
// SetInvoiceBucketTombstone sets the tombstone key in the invoice bucket to
// mark the bucket as permanently closed. This prevents it from being reopened
// in the future.
func (d *DB) SetInvoiceBucketTombstone() error {
return kvdb.Update(d, func(tx kvdb.RwTx) error {
// Access the top-level invoice bucket.
invoices := tx.ReadWriteBucket(invoiceBucket)
if invoices == nil {
return fmt.Errorf("invoice bucket does not exist")
}
// Add the tombstone key to the invoice bucket.
err := invoices.Put(invoiceBucketTombstone, []byte("1"))
if err != nil {
return fmt.Errorf("failed to set tombstone: %w", err)
}
return nil
}, func() {})
}
// GetInvoiceBucketTombstone checks if the tombstone key exists in the invoice
// bucket. It returns true if the tombstone is present and false otherwise.
func (d *DB) GetInvoiceBucketTombstone() (bool, error) {
var tombstoneExists bool
err := kvdb.View(d, func(tx kvdb.RTx) error {
// Access the top-level invoice bucket.
invoices := tx.ReadBucket(invoiceBucket)
if invoices == nil {
return fmt.Errorf("invoice bucket does not exist")
}
// Check if the tombstone key exists.
tombstone := invoices.Get(invoiceBucketTombstone)
tombstoneExists = tombstone != nil
return nil
}, func() {})
if err != nil {
return false, err
}
return tombstoneExists, nil
}
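For orientation, a brief sketch of how these two helpers are meant to be used around a KV-to-SQL invoice migration; the guard function below is illustrative and not part of the diff.
package migrationsketch
import (
    "fmt"
    "github.com/lightningnetwork/lnd/channeldb"
)
// markKVInvoicesClosed would be called once invoices have been migrated to
// the native SQL store, so the KV bucket can never be silently reused.
func markKVInvoicesClosed(db *channeldb.DB) error {
    return db.SetInvoiceBucketTombstone()
}
// guardKVInvoices is an illustrative startup check: once the tombstone is
// set, continuing to use the KV invoice database should be refused.
func guardKVInvoices(db *channeldb.DB) error {
    tombstoneSet, err := db.GetInvoiceBucketTombstone()
    if err != nil {
        return err
    }
    if tombstoneSet {
        return fmt.Errorf("kv invoice bucket is tombstoned; use the " +
            "native SQL invoice store instead")
    }
    return nil
}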

View file

@ -738,7 +738,7 @@ func serializeNewResult(rp *paymentResultNew) ([]byte, []byte, error) {
return nil, nil, err
}
return key, buff.Bytes(), err
return key, buff.Bytes(), nil
}
// getResultKeyNew returns a byte slice representing a unique key for this

View file

@ -10,7 +10,10 @@ import (
"github.com/btcsuite/btcd/btcec/v2"
"github.com/btcsuite/btcd/wire"
"github.com/davecgh/go-spew/spew"
sphinx "github.com/lightningnetwork/lightning-onion"
"github.com/lightningnetwork/lnd/lntypes"
"github.com/lightningnetwork/lnd/lnutils"
"github.com/lightningnetwork/lnd/lnwire"
"github.com/lightningnetwork/lnd/routing/route"
)
@ -45,12 +48,19 @@ type HTLCAttemptInfo struct {
// in which the payment's PaymentHash in the PaymentCreationInfo should
// be used.
Hash *lntypes.Hash
// onionBlob is the cached value for onion blob created from the sphinx
// construction.
onionBlob [lnwire.OnionPacketSize]byte
// circuit is the cached value for sphinx circuit.
circuit *sphinx.Circuit
}
// NewHtlcAttempt creates a htlc attempt.
func NewHtlcAttempt(attemptID uint64, sessionKey *btcec.PrivateKey,
route route.Route, attemptTime time.Time,
hash *lntypes.Hash) *HTLCAttempt {
hash *lntypes.Hash) (*HTLCAttempt, error) {
var scratch [btcec.PrivKeyBytesLen]byte
copy(scratch[:], sessionKey.Serialize())
@ -64,7 +74,11 @@ func NewHtlcAttempt(attemptID uint64, sessionKey *btcec.PrivateKey,
Hash: hash,
}
return &HTLCAttempt{HTLCAttemptInfo: info}
if err := info.attachOnionBlobAndCircuit(); err != nil {
return nil, err
}
return &HTLCAttempt{HTLCAttemptInfo: info}, nil
}
// SessionKey returns the ephemeral key used for a htlc attempt. This function
@ -79,6 +93,45 @@ func (h *HTLCAttemptInfo) SessionKey() *btcec.PrivateKey {
return h.cachedSessionKey
}
// OnionBlob returns the onion blob created from the sphinx construction.
func (h *HTLCAttemptInfo) OnionBlob() ([lnwire.OnionPacketSize]byte, error) {
var zeroBytes [lnwire.OnionPacketSize]byte
if h.onionBlob == zeroBytes {
if err := h.attachOnionBlobAndCircuit(); err != nil {
return zeroBytes, err
}
}
return h.onionBlob, nil
}
// Circuit returns the sphinx circuit for this attempt.
func (h *HTLCAttemptInfo) Circuit() (*sphinx.Circuit, error) {
if h.circuit == nil {
if err := h.attachOnionBlobAndCircuit(); err != nil {
return nil, err
}
}
return h.circuit, nil
}
// attachOnionBlobAndCircuit creates a sphinx packet and caches the onion blob
// and circuit for this attempt.
func (h *HTLCAttemptInfo) attachOnionBlobAndCircuit() error {
onionBlob, circuit, err := generateSphinxPacket(
&h.Route, h.Hash[:], h.SessionKey(),
)
if err != nil {
return err
}
copy(h.onionBlob[:], onionBlob)
h.circuit = circuit
return nil
}
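To summarize the API change, here is a short sketch of the new call pattern, assuming the channeldb package path used in the surrounding diff: NewHtlcAttempt now returns an error because the sphinx packet is built and cached eagerly, while the OnionBlob and Circuit accessors re-derive the cached values on demand, for example after an attempt is loaded from disk.
package attemptsketch
import (
    "time"
    "github.com/btcsuite/btcd/btcec/v2"
    "github.com/lightningnetwork/lnd/channeldb"
    "github.com/lightningnetwork/lnd/lntypes"
    "github.com/lightningnetwork/lnd/routing/route"
)
// buildAttempt shows the updated constructor and the lazy accessors. The
// route, session key and payment hash are supplied by the caller; a route
// without hops would make the sphinx construction fail.
func buildAttempt(id uint64, sessionKey *btcec.PrivateKey, rt route.Route,
    hash lntypes.Hash) (*channeldb.HTLCAttempt, error) {
    attempt, err := channeldb.NewHtlcAttempt(
        id, sessionKey, rt, time.Now(), &hash,
    )
    if err != nil {
        return nil, err
    }
    // Both accessors rebuild the cached sphinx artifacts if needed, e.g.
    // after deserialization.
    if _, err := attempt.OnionBlob(); err != nil {
        return nil, err
    }
    if _, err := attempt.Circuit(); err != nil {
        return nil, err
    }
    return attempt, nil
}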
// HTLCAttempt contains information about a specific HTLC attempt for a given
// payment. It contains the HTLCAttemptInfo used to send the HTLC, as well
// as a timestamp and any known outcome of the attempt.
@ -629,3 +682,69 @@ func serializeTime(w io.Writer, t time.Time) error {
_, err := w.Write(scratch[:])
return err
}
// generateSphinxPacket generates then encodes a sphinx packet which encodes
// the onion route specified by the passed layer 3 route. The blob returned
// from this function can immediately be included within an HTLC add packet to
// be sent to the first hop within the route.
func generateSphinxPacket(rt *route.Route, paymentHash []byte,
sessionKey *btcec.PrivateKey) ([]byte, *sphinx.Circuit, error) {
// Now that we know we have an actual route, we'll map the route into a
// sphinx payment path which includes per-hop payloads for each hop
// that give each node within the route the necessary information
// (fees, CLTV value, etc.) to properly forward the payment.
sphinxPath, err := rt.ToSphinxPath()
if err != nil {
return nil, nil, err
}
log.Tracef("Constructed per-hop payloads for payment_hash=%x: %v",
paymentHash, lnutils.NewLogClosure(func() string {
path := make(
[]sphinx.OnionHop, sphinxPath.TrueRouteLength(),
)
for i := range path {
hopCopy := sphinxPath[i]
path[i] = hopCopy
}
return spew.Sdump(path)
}),
)
// Next generate the onion routing packet which allows us to perform
// privacy preserving source routing across the network.
sphinxPacket, err := sphinx.NewOnionPacket(
sphinxPath, sessionKey, paymentHash,
sphinx.DeterministicPacketFiller,
)
if err != nil {
return nil, nil, err
}
// Finally, encode Sphinx packet using its wire representation to be
// included within the HTLC add packet.
var onionBlob bytes.Buffer
if err := sphinxPacket.Encode(&onionBlob); err != nil {
return nil, nil, err
}
log.Tracef("Generated sphinx packet: %v",
lnutils.NewLogClosure(func() string {
// We make a copy of the ephemeral key and unset the
// internal curve here in order to keep the logs from
// getting noisy.
key := *sphinxPacket.EphemeralKey
packetCopy := *sphinxPacket
packetCopy.EphemeralKey = &key
return spew.Sdump(packetCopy)
}),
)
return onionBlob.Bytes(), &sphinx.Circuit{
SessionKey: sessionKey,
PaymentPath: sphinxPath.NodeKeys(),
}, nil
}

View file

@ -5,12 +5,22 @@ import (
"fmt"
"testing"
"github.com/btcsuite/btcd/btcec/v2"
"github.com/lightningnetwork/lnd/lntypes"
"github.com/lightningnetwork/lnd/lnwire"
"github.com/lightningnetwork/lnd/routing/route"
"github.com/stretchr/testify/require"
)
var (
testHash = [32]byte{
0xb7, 0x94, 0x38, 0x5f, 0x2d, 0x1e, 0xf7, 0xab,
0x4d, 0x92, 0x73, 0xd1, 0x90, 0x63, 0x81, 0xb4,
0x4f, 0x2f, 0x6f, 0x25, 0x88, 0xa3, 0xef, 0xb9,
0x6a, 0x49, 0x18, 0x83, 0x31, 0x98, 0x47, 0x53,
}
)
// TestLazySessionKeyDeserialize tests that we can read htlc attempt session
// keys that were previously serialized as a private key as raw bytes.
func TestLazySessionKeyDeserialize(t *testing.T) {
@ -578,3 +588,15 @@ func makeAttemptInfo(total, amtForwarded int) HTLCAttemptInfo {
},
}
}
// TestEmptyRoutesGenerateSphinxPacket tests that the generateSphinxPacket
// function is able to gracefully handle being passed a nil set of hops for the
// route by the caller.
func TestEmptyRoutesGenerateSphinxPacket(t *testing.T) {
t.Parallel()
sessionKey, _ := btcec.NewPrivateKey()
emptyRoute := &route.Route{}
_, _, err := generateSphinxPacket(emptyRoute, testHash[:], sessionKey)
require.ErrorIs(t, err, route.ErrNoRouteHopsProvided)
}

View file

@ -28,7 +28,7 @@ func genPreimage() ([32]byte, error) {
return preimage, nil
}
func genInfo() (*PaymentCreationInfo, *HTLCAttemptInfo,
func genInfo(t *testing.T) (*PaymentCreationInfo, *HTLCAttemptInfo,
lntypes.Preimage, error) {
preimage, err := genPreimage()
@ -38,9 +38,14 @@ func genInfo() (*PaymentCreationInfo, *HTLCAttemptInfo,
}
rhash := sha256.Sum256(preimage[:])
attempt := NewHtlcAttempt(
0, priv, *testRoute.Copy(), time.Time{}, nil,
var hash lntypes.Hash
copy(hash[:], rhash[:])
attempt, err := NewHtlcAttempt(
0, priv, *testRoute.Copy(), time.Time{}, &hash,
)
require.NoError(t, err)
return &PaymentCreationInfo{
PaymentIdentifier: rhash,
Value: testRoute.ReceiverAmt(),
@ -60,7 +65,7 @@ func TestPaymentControlSwitchFail(t *testing.T) {
pControl := NewPaymentControl(db)
info, attempt, preimg, err := genInfo()
info, attempt, preimg, err := genInfo(t)
require.NoError(t, err, "unable to generate htlc message")
// Sends base htlc message which initiate StatusInFlight.
@ -196,7 +201,7 @@ func TestPaymentControlSwitchDoubleSend(t *testing.T) {
pControl := NewPaymentControl(db)
info, attempt, preimg, err := genInfo()
info, attempt, preimg, err := genInfo(t)
require.NoError(t, err, "unable to generate htlc message")
// Sends base htlc message which initiate base status and move it to
@ -266,7 +271,7 @@ func TestPaymentControlSuccessesWithoutInFlight(t *testing.T) {
pControl := NewPaymentControl(db)
info, _, preimg, err := genInfo()
info, _, preimg, err := genInfo(t)
require.NoError(t, err, "unable to generate htlc message")
// Attempt to complete the payment should fail.
@ -291,7 +296,7 @@ func TestPaymentControlFailsWithoutInFlight(t *testing.T) {
pControl := NewPaymentControl(db)
info, _, _, err := genInfo()
info, _, _, err := genInfo(t)
require.NoError(t, err, "unable to generate htlc message")
// Calling Fail should return an error.
@ -346,7 +351,7 @@ func TestPaymentControlDeleteNonInFlight(t *testing.T) {
var numSuccess, numInflight int
for _, p := range payments {
info, attempt, preimg, err := genInfo()
info, attempt, preimg, err := genInfo(t)
if err != nil {
t.Fatalf("unable to generate htlc message: %v", err)
}
@ -684,7 +689,7 @@ func TestPaymentControlMultiShard(t *testing.T) {
pControl := NewPaymentControl(db)
info, attempt, preimg, err := genInfo()
info, attempt, preimg, err := genInfo(t)
if err != nil {
t.Fatalf("unable to generate htlc message: %v", err)
}
@ -948,7 +953,7 @@ func TestPaymentControlMPPRecordValidation(t *testing.T) {
pControl := NewPaymentControl(db)
info, attempt, _, err := genInfo()
info, attempt, _, err := genInfo(t)
require.NoError(t, err, "unable to generate htlc message")
// Init the payment.
@ -997,7 +1002,7 @@ func TestPaymentControlMPPRecordValidation(t *testing.T) {
// Create and init a new payment. This time we'll check that we cannot
// register an MPP attempt if we already registered a non-MPP one.
info, attempt, _, err = genInfo()
info, attempt, _, err = genInfo(t)
require.NoError(t, err, "unable to generate htlc message")
err = pControl.InitPayment(info.PaymentIdentifier, info)
@ -1271,7 +1276,7 @@ func createTestPayments(t *testing.T, p *PaymentControl, payments []*payment) {
attemptID := uint64(0)
for i := 0; i < len(payments); i++ {
info, attempt, preimg, err := genInfo()
info, attempt, preimg, err := genInfo(t)
require.NoError(t, err, "unable to generate htlc message")
// Set the payment id accordingly in the payments slice.

View file

@ -64,7 +64,6 @@ var (
TotalAmount: 1234567,
SourcePubKey: vertex,
Hops: []*route.Hop{
testHop3,
testHop2,
testHop1,
},
@ -98,7 +97,7 @@ var (
}
)
func makeFakeInfo() (*PaymentCreationInfo, *HTLCAttemptInfo) {
func makeFakeInfo(t *testing.T) (*PaymentCreationInfo, *HTLCAttemptInfo) {
var preimg lntypes.Preimage
copy(preimg[:], rev[:])
@ -113,9 +112,10 @@ func makeFakeInfo() (*PaymentCreationInfo, *HTLCAttemptInfo) {
PaymentRequest: []byte("test"),
}
a := NewHtlcAttempt(
a, err := NewHtlcAttempt(
44, priv, testRoute, time.Unix(100, 0), &hash,
)
require.NoError(t, err)
return c, &a.HTLCAttemptInfo
}
@ -123,7 +123,7 @@ func makeFakeInfo() (*PaymentCreationInfo, *HTLCAttemptInfo) {
func TestSentPaymentSerialization(t *testing.T) {
t.Parallel()
c, s := makeFakeInfo()
c, s := makeFakeInfo(t)
var b bytes.Buffer
require.NoError(t, serializePaymentCreationInfo(&b, c), "serialize")
@ -174,6 +174,9 @@ func TestSentPaymentSerialization(t *testing.T) {
require.NoError(t, err, "deserialize")
require.Equal(t, s.Route, newWireInfo.Route)
err = newWireInfo.attachOnionBlobAndCircuit()
require.NoError(t, err)
// Clear routes to allow DeepEqual to compare the remaining fields.
newWireInfo.Route = route.Route{}
s.Route = route.Route{}
@ -517,7 +520,7 @@ func TestQueryPayments(t *testing.T) {
for i := 0; i < nonDuplicatePayments; i++ {
// Generate a test payment.
info, _, preimg, err := genInfo()
info, _, preimg, err := genInfo(t)
if err != nil {
t.Fatalf("unable to create test "+
"payment: %v", err)
@ -618,7 +621,7 @@ func TestFetchPaymentWithSequenceNumber(t *testing.T) {
pControl := NewPaymentControl(db)
// Generate a test payment which does not have duplicates.
noDuplicates, _, _, err := genInfo()
noDuplicates, _, _, err := genInfo(t)
require.NoError(t, err)
// Create a new payment entry in the database.
@ -632,7 +635,7 @@ func TestFetchPaymentWithSequenceNumber(t *testing.T) {
require.NoError(t, err)
// Generate a test payment which we will add duplicates to.
hasDuplicates, _, preimg, err := genInfo()
hasDuplicates, _, preimg, err := genInfo(t)
require.NoError(t, err)
// Create a new payment entry in the database.
@ -783,7 +786,7 @@ func putDuplicatePayment(t *testing.T, duplicateBucket kvdb.RwBucket,
require.NoError(t, err)
// Generate fake information for the duplicate payment.
info, _, _, err := genInfo()
info, _, _, err := genInfo(t)
require.NoError(t, err)
// Write the payment info to disk under the creation info key. This code

View file

@ -565,7 +565,7 @@ func TestPutRevocationLog(t *testing.T) {
},
{
// Test dust htlc is not saved.
name: "dust htlc not saved with amout data",
name: "dust htlc not saved with amount data",
commit: testCommitDust,
ourIndex: 0,
theirIndex: 1,

View file

@ -80,6 +80,14 @@ type FullyResolvedChannelEvent struct {
ChannelPoint *wire.OutPoint
}
// FundingTimeoutEvent represents a new event where a pending-open channel has
// timed out from the PoV of the funding manager because the funding tx
// has not confirmed in the allotted time.
type FundingTimeoutEvent struct {
// ChannelPoint is the channelpoint for the newly inactive channel.
ChannelPoint *wire.OutPoint
}
// New creates a new channel notifier. The ChannelNotifier gets channel
// events from peers and from the chain arbitrator, and dispatches them to
// its clients.
@ -184,6 +192,17 @@ func (c *ChannelNotifier) NotifyFullyResolvedChannelEvent(
}
}
// NotifyFundingTimeout notifies the channelEventNotifier goroutine that a
// funding timeout has occurred for a certain channel point.
func (c *ChannelNotifier) NotifyFundingTimeout(chanPoint wire.OutPoint) {
// Send this event to all channel event subscribers.
event := FundingTimeoutEvent{ChannelPoint: &chanPoint}
if err := c.ntfnServer.SendUpdate(event); err != nil {
log.Warnf("Unable to send funding timeout update: %v for "+
"ChanPoint(%v)", err, chanPoint)
}
}
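For completeness, a sketch of how a subscriber might react to the new event type; the updates channel is assumed to be fed by the notifier's subscription server (not shown), and only the FundingTimeoutEvent case reflects the code above.
package notifysketch
import (
    "log"
    "github.com/lightningnetwork/lnd/channelnotifier"
)
// handleChannelEvents consumes channel events and reacts to funding
// timeouts. In the server this is the point where a temporary-access peer
// could be demoted back to restricted access.
func handleChannelEvents(updates <-chan interface{}) {
    for event := range updates {
        switch e := event.(type) {
        case channelnotifier.FundingTimeoutEvent:
            log.Printf("funding timed out for channel %v",
                e.ChannelPoint)
        default:
            // Other channel events are out of scope for this sketch.
        }
    }
}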
// NotifyActiveLinkEvent notifies the channelEventNotifier goroutine that a
// link has been added to the switch.
func (c *ChannelNotifier) NotifyActiveLinkEvent(chanPoint wire.OutPoint) {

View file

@ -8,6 +8,7 @@ import (
"github.com/btcsuite/btcd/btcec/v2"
"github.com/btcsuite/btcd/chaincfg"
"github.com/btcsuite/btcd/chaincfg/chainhash"
"github.com/btcsuite/btcd/wire"
"github.com/lightningnetwork/lnd/chanbackup"
"github.com/lightningnetwork/lnd/channeldb"
"github.com/lightningnetwork/lnd/contractcourt"
@ -286,6 +287,9 @@ func (c *chanDBRestorer) RestoreChansFromSingles(backups ...chanbackup.Single) e
ltndLog.Infof("Informing chain watchers of new restored channels")
// Create a slice of channel points.
chanPoints := make([]wire.OutPoint, 0, len(channelShells))
// Finally, we'll need to inform the chain arbitrator of these new
// channels so we'll properly watch for their ultimate closure on chain
// and sweep them via the DLP.
@ -294,8 +298,15 @@ func (c *chanDBRestorer) RestoreChansFromSingles(backups ...chanbackup.Single) e
if err != nil {
return err
}
chanPoints = append(
chanPoints, restoredChannel.Chan.FundingOutpoint,
)
}
// With all the channels restored, we'll now re-send the blockbeat.
c.chainArb.RedispatchBlockbeat(chanPoints)
return nil
}
@ -314,7 +325,7 @@ func (s *server) ConnectPeer(nodePub *btcec.PublicKey, addrs []net.Addr) error {
// to ensure the new connection is created after this new link/channel
// is known.
if err := s.DisconnectPeer(nodePub); err != nil {
ltndLog.Infof("Peer(%v) is already connected, proceeding "+
ltndLog.Infof("Peer(%x) is already connected, proceeding "+
"with chan restore", nodePub.SerializeCompressed())
}

View file

@ -1254,6 +1254,7 @@ func queryRoutes(ctx *cli.Context) error {
}
printRespJSON(route)
return nil
}

View file

@ -21,6 +21,7 @@ import (
"github.com/jessevdk/go-flags"
"github.com/lightningnetwork/lnd"
"github.com/lightningnetwork/lnd/lnrpc"
"github.com/lightningnetwork/lnd/lnwire"
"github.com/lightningnetwork/lnd/routing"
"github.com/lightningnetwork/lnd/routing/route"
"github.com/lightningnetwork/lnd/signal"
@ -50,6 +51,14 @@ var (
customDataPattern = regexp.MustCompile(
`"custom_channel_data":\s*"([0-9a-f]+)"`,
)
chanIDPattern = regexp.MustCompile(
`"chan_id":\s*"(\d+)"`,
)
channelPointPattern = regexp.MustCompile(
`"channel_point":\s*"([0-9a-fA-F]+:[0-9]+)"`,
)
)
// replaceCustomData replaces the custom channel data hex string with the
@ -86,6 +95,96 @@ func replaceCustomData(jsonBytes []byte) []byte {
return buf.Bytes()
}
// replaceAndAppendScid replaces the chan_id with scid and appends the human
// readable string representation of scid.
func replaceAndAppendScid(jsonBytes []byte) []byte {
// If there's nothing to replace, return the original JSON.
if !chanIDPattern.Match(jsonBytes) {
return jsonBytes
}
replacedBytes := chanIDPattern.ReplaceAllFunc(
jsonBytes, func(match []byte) []byte {
// Extract the captured scid group from the match.
chanID := chanIDPattern.FindStringSubmatch(
string(match),
)[1]
scid, err := strconv.ParseUint(chanID, 10, 64)
if err != nil {
return match
}
// Format a new JSON field for the scid (chan_id),
// including both its numeric representation and its
// string representation (scid_str).
scidStr := lnwire.NewShortChanIDFromInt(scid).
AltString()
updatedField := fmt.Sprintf(
`"scid": "%d", "scid_str": "%s"`, scid, scidStr,
)
// Replace the entire match with the new structure.
return []byte(updatedField)
},
)
var buf bytes.Buffer
err := json.Indent(&buf, replacedBytes, "", " ")
if err != nil {
// If we can't indent the JSON, it likely means the replacement
// data wasn't correct, so we return the original JSON.
return jsonBytes
}
return buf.Bytes()
}
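As a point of reference for the scid and scid_str fields produced above: a short channel ID packs the funding block height, transaction index and output index into a single uint64 (blockHeight<<40 | txIndex<<16 | outputIndex), which is why 829031767408640 renders as 754x1x0 in the tests below. A minimal decoding sketch:
package main
import (
    "fmt"
    "github.com/lightningnetwork/lnd/lnwire"
)
func main() {
    scid := lnwire.NewShortChanIDFromInt(829031767408640)
    // Prints the components: block height 754, tx index 1, output 0.
    fmt.Println(scid.BlockHeight, scid.TxIndex, scid.TxPosition)
    // AltString renders the same value in the x-delimited form used for
    // the scid_str field, i.e. "754x1x0".
    fmt.Println(scid.AltString())
}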
// appendChanID appends the chan_id, which is computed from the outpoint of
// the funding transaction (the txid and output index).
func appendChanID(jsonBytes []byte) []byte {
// If there's nothing to replace, return the original JSON.
if !channelPointPattern.Match(jsonBytes) {
return jsonBytes
}
replacedBytes := channelPointPattern.ReplaceAllFunc(
jsonBytes, func(match []byte) []byte {
chanPoint := channelPointPattern.FindStringSubmatch(
string(match),
)[1]
chanOutpoint, err := wire.NewOutPointFromString(
chanPoint,
)
if err != nil {
return match
}
// Format a new JSON field computed from the
// channel_point (chan_id).
chanID := lnwire.NewChanIDFromOutPoint(*chanOutpoint)
updatedField := fmt.Sprintf(
`"channel_point": "%s", "chan_id": "%s"`,
chanPoint, chanID.String(),
)
// Replace the entire match with the new structure.
return []byte(updatedField)
},
)
var buf bytes.Buffer
err := json.Indent(&buf, replacedBytes, "", " ")
if err != nil {
// If we can't indent the JSON, it likely means the replacement
// data wasn't correct, so we return the original JSON.
return jsonBytes
}
return buf.Bytes()
}
func getContext() context.Context {
shutdownInterceptor, err := signal.Intercept()
if err != nil {
@ -113,6 +212,7 @@ func printJSON(resp interface{}) {
_, _ = out.WriteTo(os.Stdout)
}
// printRespJSON prints the response in JSON format.
func printRespJSON(resp proto.Message) {
jsonBytes, err := lnrpc.ProtoJSONMarshalOpts.Marshal(resp)
if err != nil {
@ -120,11 +220,33 @@ func printRespJSON(resp proto.Message) {
return
}
// Make the custom data human readable.
jsonBytesReplaced := replaceCustomData(jsonBytes)
fmt.Printf("%s\n", jsonBytesReplaced)
}
// printModifiedProtoJSON prints the response with some additional formatting
// and replacements.
func printModifiedProtoJSON(resp proto.Message) {
jsonBytes, err := lnrpc.ProtoJSONMarshalOpts.Marshal(resp)
if err != nil {
fmt.Println("unable to decode response: ", err)
return
}
// Replace custom_channel_data in the JSON.
jsonBytesReplaced := replaceCustomData(jsonBytes)
// Replace chan_id with scid, and append scid_str and scid fields.
jsonBytesReplaced = replaceAndAppendScid(jsonBytesReplaced)
// Append the chan_id field to the JSON.
jsonBytesReplaced = appendChanID(jsonBytesReplaced)
fmt.Printf("%s\n", jsonBytesReplaced)
}
// actionDecorator is used to add additional information and error handling
// to command actions.
func actionDecorator(f func(*cli.Context) error) func(*cli.Context) error {
@ -889,6 +1011,11 @@ var closeChannelCommand = cli.Command{
comparison is the end boundary of the fee negotiation, if not specified
it's always x3 of the starting value. Increasing this value increases
the chance of a successful negotiation.
Moreover, if the channel has active HTLCs on it, the coop close will
wait until all HTLCs are resolved and will not allow any new HTLCs on
the channel. The channel will appear as disabled in the listchannels
output. In that case the command will block until the channel close tx
is broadcast.
In the case of a cooperative closure, one can manually set the address
to deliver funds to upon closure. This is optional, and may only be used
@ -920,8 +1047,10 @@ var closeChannelCommand = cli.Command{
Usage: "attempt an uncooperative closure",
},
cli.BoolFlag{
Name: "block",
Usage: "block until the channel is closed",
Name: "block",
Usage: `block will wait for the channel to be closed,
meaning that it will wait for the channel close tx to
get 1 confirmation.`,
},
cli.Int64Flag{
Name: "conf_target",
@ -995,6 +1124,9 @@ func closeChannel(ctx *cli.Context) error {
SatPerVbyte: ctx.Uint64(feeRateFlag),
DeliveryAddress: ctx.String("delivery_addr"),
MaxFeePerVbyte: ctx.Uint64("max_fee_rate"),
// This makes sure that a coop close will also be executed if
// active HTLCs are present on the channel.
NoWait: true,
}
// After parsing the request, we'll spin up a goroutine that will
@ -1032,7 +1164,9 @@ func closeChannel(ctx *cli.Context) error {
// executeChannelClose attempts to close the channel from a request. The closing
// transaction ID is sent through `txidChan` as soon as it is broadcasted to the
// network. The block boolean is used to determine if we should block until the
// closing transaction receives all of its required confirmations.
// closing transaction receives one confirmation. The logging output is sent
// to stderr to avoid conflicts with the JSON output of the command and
// potential workflows which depend on proper JSON output.
func executeChannelClose(ctxc context.Context, client lnrpc.LightningClient,
req *lnrpc.CloseChannelRequest, txidChan chan<- string, block bool) error {
@ -1051,9 +1185,17 @@ func executeChannelClose(ctxc context.Context, client lnrpc.LightningClient,
switch update := resp.Update.(type) {
case *lnrpc.CloseStatusUpdate_CloseInstant:
if req.NoWait {
return nil
fmt.Fprintln(os.Stderr, "Channel close successfully "+
"initiated")
pendingHtlcs := update.CloseInstant.NumPendingHtlcs
if pendingHtlcs > 0 {
fmt.Fprintf(os.Stderr, "Cooperative channel "+
"close waiting for %d HTLCs to be "+
"resolved before the close process "+
"can kick off\n", pendingHtlcs)
}
case *lnrpc.CloseStatusUpdate_ClosePending:
closingHash := update.ClosePending.Txid
txid, err := chainhash.NewHash(closingHash)
@ -1061,12 +1203,22 @@ func executeChannelClose(ctxc context.Context, client lnrpc.LightningClient,
return err
}
fmt.Fprintf(os.Stderr, "Channel close transaction "+
"broadcasted: %v\n", txid)
txidChan <- txid.String()
if !block {
return nil
}
fmt.Fprintln(os.Stderr, "Waiting for channel close "+
"confirmation ...")
case *lnrpc.CloseStatusUpdate_ChanClose:
fmt.Fprintln(os.Stderr, "Channel close successfully "+
"confirmed")
return nil
}
}
@ -1747,7 +1899,7 @@ func ListChannels(ctx *cli.Context) error {
return err
}
printRespJSON(resp)
printModifiedProtoJSON(resp)
return nil
}
@ -1809,7 +1961,7 @@ func closedChannels(ctx *cli.Context) error {
return err
}
printRespJSON(resp)
printModifiedProtoJSON(resp)
return nil
}

View file

@ -127,10 +127,9 @@ func TestReplaceCustomData(t *testing.T) {
t.Parallel()
testCases := []struct {
name string
data string
replaceData string
expected string
name string
data string
expected string
}{
{
name: "no replacement necessary",
@ -139,10 +138,10 @@ func TestReplaceCustomData(t *testing.T) {
},
{
name: "valid json with replacement",
data: "{\"foo\":\"bar\",\"custom_channel_data\":\"" +
data: `{"foo":"bar","custom_channel_data":"` +
hex.EncodeToString([]byte(
"{\"bar\":\"baz\"}",
)) + "\"}",
`{"bar":"baz"}`,
)) + `"}`,
expected: `{
"foo": "bar",
"custom_channel_data": {
@ -152,10 +151,10 @@ func TestReplaceCustomData(t *testing.T) {
},
{
name: "valid json with replacement and space",
data: "{\"foo\":\"bar\",\"custom_channel_data\": \"" +
data: `{"foo":"bar","custom_channel_data": "` +
hex.EncodeToString([]byte(
"{\"bar\":\"baz\"}",
)) + "\"}",
`{"bar":"baz"}`,
)) + `"}`,
expected: `{
"foo": "bar",
"custom_channel_data": {
@ -178,9 +177,11 @@ func TestReplaceCustomData(t *testing.T) {
"\"custom_channel_data\":\"a\"",
},
{
name: "valid json, invalid hex, just formatted",
data: "{\"custom_channel_data\":\"f\"}",
expected: "{\n \"custom_channel_data\": \"f\"\n}",
name: "valid json, invalid hex, just formatted",
data: `{"custom_channel_data":"f"}`,
expected: `{
"custom_channel_data": "f"
}`,
},
}
@ -191,3 +192,139 @@ func TestReplaceCustomData(t *testing.T) {
})
}
}
// TestReplaceAndAppendScid tests whether chan_id is replaced with scid and
// scid_str in the JSON console output.
func TestReplaceAndAppendScid(t *testing.T) {
t.Parallel()
testCases := []struct {
name string
data string
expected string
}{
{
name: "no replacement necessary",
data: "foo",
expected: "foo",
},
{
name: "valid json with replacement",
data: `{"foo":"bar","chan_id":"829031767408640"}`,
expected: `{
"foo": "bar",
"scid": "829031767408640",
"scid_str": "754x1x0"
}`,
},
{
name: "valid json with replacement and space",
data: `{"foo":"bar","chan_id": "829031767408640"}`,
expected: `{
"foo": "bar",
"scid": "829031767408640",
"scid_str": "754x1x0"
}`,
},
{
name: "doesn't match pattern, returned identical",
data: "this ain't even json, and no chan_id " +
"either",
expected: "this ain't even json, and no chan_id " +
"either",
},
{
name: "invalid json",
data: "this ain't json, " +
"\"chan_id\":\"18446744073709551616\"",
expected: "this ain't json, " +
"\"chan_id\":\"18446744073709551616\"",
},
{
name: "valid json, invalid uint, just formatted",
data: `{"chan_id":"18446744073709551616"}`,
expected: `{
"chan_id": "18446744073709551616"
}`,
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
result := replaceAndAppendScid([]byte(tc.data))
require.Equal(t, tc.expected, string(result))
})
}
}
// TestAppendChanID tests whether chan_id (BOLT02) is appended
// to the JSON console output.
func TestAppendChanID(t *testing.T) {
t.Parallel()
testCases := []struct {
name string
data string
expected string
}{
{
name: "no amendment necessary",
data: "foo",
expected: "foo",
},
{
name: "valid json with amendment",
data: `{"foo":"bar","channel_point":"6ab312e3b744e` +
`1b80a33a6541697df88766515c31c08e839bf11dc` +
`9fcc036a19:0"}`,
expected: `{
"foo": "bar",
"channel_point": "6ab312e3b744e1b80a33a6541697df88766515c31c` +
`08e839bf11dc9fcc036a19:0",
"chan_id": "196a03cc9fdc11bf39e8081cc315657688df971654a` +
`6330ab8e144b7e312b36a"
}`,
},
{
name: "valid json with amendment and space",
data: `{"foo":"bar","channel_point": "6ab312e3b744e` +
`1b80a33a6541697df88766515c31c08e839bf11dc` +
`9fcc036a19:0"}`,
expected: `{
"foo": "bar",
"channel_point": "6ab312e3b744e1b80a33a6541697df88766515c31c` +
`08e839bf11dc9fcc036a19:0",
"chan_id": "196a03cc9fdc11bf39e8081cc315657688df971654a` +
`6330ab8e144b7e312b36a"
}`,
},
{
name: "doesn't match pattern, returned identical",
data: "this ain't even json, and no channel_point " +
"either",
expected: "this ain't even json, and no channel_point" +
" either",
},
{
name: "invalid json",
data: "this ain't json, " +
"\"channel_point\":\"f:0\"",
expected: "this ain't json, " +
"\"channel_point\":\"f:0\"",
},
{
name: "valid json with invalid outpoint, formatted",
data: `{"channel_point":"f:0"}`,
expected: `{
"channel_point": "f:0"
}`,
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
result := appendChanID([]byte(tc.data))
require.Equal(t, tc.expected, string(result))
})
}
}

View file

@ -268,9 +268,10 @@ var bumpFeeCommand = cli.Command{
cli.Uint64Flag{
Name: "conf_target",
Usage: `
The deadline in number of blocks that the input should be spent within.
When not set, for new inputs, the default value (1008) is used; for
exiting inputs, their current values will be retained.`,
The conf target is the confirmation target, in blocks, used to derive the
starting fee rate of the fee function. Instead of using sat_per_vbyte, the
conf target can be specified and LND will query its fee estimator for the
current fee rate at that target.`,
},
cli.Uint64Flag{
Name: "sat_per_byte",
@ -307,6 +308,14 @@ var bumpFeeCommand = cli.Command{
the budget for fee bumping; for existing inputs, their current budgets
will be retained.`,
},
cli.Uint64Flag{
Name: "deadline_delta",
Usage: `
The deadline delta in number of blocks that this input should be spent
within to bump the transaction. When specified, a budget value is also
required. When the deadline is reached, ALL the budget will be spent as
fees.`,
},
},
Action: actionDecorator(bumpFee),
}
@ -344,11 +353,12 @@ func bumpFee(ctx *cli.Context) error {
}
resp, err := client.BumpFee(ctxc, &walletrpc.BumpFeeRequest{
Outpoint: protoOutPoint,
TargetConf: uint32(ctx.Uint64("conf_target")),
Immediate: immediate,
Budget: ctx.Uint64("budget"),
SatPerVbyte: ctx.Uint64("sat_per_vbyte"),
Outpoint: protoOutPoint,
TargetConf: uint32(ctx.Uint64("conf_target")),
Immediate: immediate,
Budget: ctx.Uint64("budget"),
SatPerVbyte: ctx.Uint64("sat_per_vbyte"),
DeadlineDelta: uint32(ctx.Uint64("deadline_delta")),
})
if err != nil {
return err
@ -377,9 +387,10 @@ var bumpCloseFeeCommand = cli.Command{
cli.Uint64Flag{
Name: "conf_target",
Usage: `
The deadline in number of blocks that the input should be spent within.
When not set, for new inputs, the default value (1008) is used; for
exiting inputs, their current values will be retained.`,
The conf target is the confirmation target, in blocks, used to derive the
starting fee rate of the fee function. Instead of using sat_per_vbyte, the
conf target can be specified and LND will query its fee estimator for the
current fee rate at that target.`,
},
cli.Uint64Flag{
Name: "sat_per_byte",
@ -435,8 +446,17 @@ var bumpForceCloseFeeCommand = cli.Command{
cli.Uint64Flag{
Name: "conf_target",
Usage: `
The deadline in number of blocks that the anchor output should be spent
within to bump the closing transaction.`,
The conf target is the confirmation target, in blocks, used to derive the
starting fee rate of the fee function. Instead of using sat_per_vbyte, the
conf target can be specified and LND will query its fee estimator for the
current fee rate at that target.`,
},
cli.Uint64Flag{
Name: "deadline_delta",
Usage: `
The deadline delta in number of blocks that the anchor output should
be spent within to bump the closing transaction. When the deadline is
reached, ALL the budget will be spent as fees.`,
},
cli.Uint64Flag{
Name: "sat_per_byte",
@ -513,10 +533,11 @@ func bumpForceCloseFee(ctx *cli.Context) error {
resp, err := walletClient.BumpForceCloseFee(
ctxc, &walletrpc.BumpForceCloseFeeRequest{
ChanPoint: rpcChannelPoint,
DeadlineDelta: uint32(ctx.Uint64("conf_target")),
Budget: ctx.Uint64("budget"),
Immediate: immediate,
StartingFeerate: ctx.Uint64("sat_per_vbyte"),
TargetConf: uint32(ctx.Uint64("conf_target")),
DeadlineDelta: uint32(ctx.Uint64("deadline_delta")),
})
if err != nil {
return err

View file

@ -238,6 +238,10 @@ const (
// defaultHTTPHeaderTimeout is the default timeout for HTTP requests.
DefaultHTTPHeaderTimeout = 5 * time.Second
// DefaultNumRestrictedSlots is the default number of restricted slots
// we'll allocate in the server.
DefaultNumRestrictedSlots = 30
// BitcoinChainName is a string that represents the Bitcoin blockchain.
BitcoinChainName = "bitcoin"
@ -360,6 +364,8 @@ type Config struct {
MaxPendingChannels int `long:"maxpendingchannels" description:"The maximum number of incoming pending channels permitted per peer."`
BackupFilePath string `long:"backupfilepath" description:"The target location of the channel backup file"`
NoBackupArchive bool `long:"no-backup-archive" description:"If set to true, channel backups will be deleted or replaced rather than being archived to a separate location."`
FeeURL string `long:"feeurl" description:"DEPRECATED: Use 'fee.url' option. Optional URL for external fee estimation. If no URL is specified, the method for fee estimation will depend on the chosen backend and network. Must be set for neutrino on mainnet." hidden:"true"`
Bitcoin *lncfg.Chain `group:"Bitcoin" namespace:"bitcoin"`
@ -516,6 +522,10 @@ type Config struct {
// HTTPHeaderTimeout is the maximum duration that the server will wait
// before timing out reading the headers of an HTTP request.
HTTPHeaderTimeout time.Duration `long:"http-header-timeout" description:"The maximum duration that the server will wait before timing out reading the headers of an HTTP request."`
// NumRestrictedSlots is the number of restricted slots we'll allocate
// in the server.
NumRestrictedSlots uint64 `long:"num-restricted-slots" description:"The number of restricted slots we'll allocate in the server."`
}
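
For context on how a `long:"..."` tag like the one above becomes a CLI/config option, here is a minimal hedged sketch assuming the jessevdk/go-flags parser that these struct tags suggest. The demoConfig type and the default of 30 are illustrative only.

package main

import (
	"fmt"

	flags "github.com/jessevdk/go-flags"
)

// demoConfig mirrors the shape of the new option: a uint64 exposed through a
// `long` tag so it can be set via --num-restricted-slots or a config file.
type demoConfig struct {
	NumRestrictedSlots uint64 `long:"num-restricted-slots" description:"Number of restricted slots."`
}

func main() {
	// Start from the default and let the parser override it if the flag
	// was supplied on the command line.
	cfg := demoConfig{NumRestrictedSlots: 30}
	if _, err := flags.Parse(&cfg); err != nil {
		fmt.Println("parse error:", err)
		return
	}

	fmt.Println("restricted slots:", cfg.NumRestrictedSlots)
}
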
// GRPCConfig holds the configuration options for the gRPC server.
@ -694,6 +704,7 @@ func DefaultConfig() Config {
MaxChannelUpdateBurst: discovery.DefaultMaxChannelUpdateBurst,
ChannelUpdateInterval: discovery.DefaultChannelUpdateInterval,
SubBatchDelay: discovery.DefaultSubBatchDelay,
AnnouncementConf: discovery.DefaultProofMatureDelta,
},
Invoices: &lncfg.Invoices{
HoldExpiryDelta: lncfg.DefaultHoldInvoiceExpiryDelta,
@ -732,9 +743,10 @@ func DefaultConfig() Config {
ServerPingTimeout: defaultGrpcServerPingTimeout,
ClientPingMinWait: defaultGrpcClientPingMinWait,
},
LogConfig: build.DefaultLogConfig(),
WtClient: lncfg.DefaultWtClientCfg(),
HTTPHeaderTimeout: DefaultHTTPHeaderTimeout,
LogConfig: build.DefaultLogConfig(),
WtClient: lncfg.DefaultWtClientCfg(),
HTTPHeaderTimeout: DefaultHTTPHeaderTimeout,
NumRestrictedSlots: DefaultNumRestrictedSlots,
}
}
@ -1413,9 +1425,15 @@ func ValidateConfig(cfg Config, interceptor signal.Interceptor, fileParser,
return nil, mkErr("error validating logging config: %w", err)
}
cfg.SubLogMgr = build.NewSubLoggerManager(build.NewDefaultLogHandlers(
cfg.LogConfig, cfg.LogRotator,
)...)
// If a sub-log manager was not already created, then we'll create one
// now using the default log handlers.
if cfg.SubLogMgr == nil {
cfg.SubLogMgr = build.NewSubLoggerManager(
build.NewDefaultLogHandlers(
cfg.LogConfig, cfg.LogRotator,
)...,
)
}
// Initialize logging at the default logging level.
SetupLoggers(cfg.SubLogMgr, interceptor)
@ -1754,6 +1772,7 @@ func ValidateConfig(cfg Config, interceptor signal.Interceptor, fileParser,
cfg.Invoices,
cfg.Routing,
cfg.Pprof,
cfg.Gossip,
)
if err != nil {
return nil, err

View file

@ -51,6 +51,7 @@ import (
"github.com/lightningnetwork/lnd/rpcperms"
"github.com/lightningnetwork/lnd/signal"
"github.com/lightningnetwork/lnd/sqldb"
"github.com/lightningnetwork/lnd/sqldb/sqlc"
"github.com/lightningnetwork/lnd/sweep"
"github.com/lightningnetwork/lnd/walletunlocker"
"github.com/lightningnetwork/lnd/watchtower"
@ -60,6 +61,16 @@ import (
"gopkg.in/macaroon-bakery.v2/bakery"
)
const (
// invoiceMigrationBatchSize is the number of invoices that will be
// migrated in a single batch.
invoiceMigrationBatchSize = 1000
// invoiceMigration is the version of the migration that will be used to
// migrate invoices from the kvdb to the sql database.
invoiceMigration = 7
)
// GrpcRegistrar is an interface that must be satisfied by an external subserver
// that wants to be able to register its own gRPC server onto lnd's main
// grpc.Server instance.
@ -932,10 +943,10 @@ type DatabaseInstances struct {
// the btcwallet's loader.
WalletDB btcwallet.LoaderOption
// NativeSQLStore is a pointer to a native SQL store that can be used
// for native SQL queries for tables that already support it. This may
// be nil if the use-native-sql flag was not set.
NativeSQLStore *sqldb.BaseDB
// NativeSQLStore holds a reference to the native SQL store that can
// be used for native SQL queries for tables that already support it.
// This may be nil if the use-native-sql flag was not set.
NativeSQLStore sqldb.DB
}
// DefaultDatabaseBuilder is a type that builds the default database backends
@ -1038,7 +1049,7 @@ func (d *DefaultDatabaseBuilder) BuildDatabase(
if err != nil {
cleanUp()
err := fmt.Errorf("unable to open graph DB: %w", err)
err = fmt.Errorf("unable to open graph DB: %w", err)
d.logger.Error(err)
return nil, nil, err
@ -1072,52 +1083,103 @@ func (d *DefaultDatabaseBuilder) BuildDatabase(
case err != nil:
cleanUp()
err := fmt.Errorf("unable to open graph DB: %w", err)
err = fmt.Errorf("unable to open graph DB: %w", err)
d.logger.Error(err)
return nil, nil, err
}
// Instantiate a native SQL invoice store if the flag is set.
// Instantiate a native SQL store if the flag is set.
if d.cfg.DB.UseNativeSQL {
// KV invoice db resides in the same database as the channel
// state DB. Let's query the database to see if we have any
// invoices there. If we do, we won't allow the user to start
// lnd with native SQL enabled, as we don't currently migrate
// the invoices to the new database schema.
invoiceSlice, err := dbs.ChanStateDB.QueryInvoices(
ctx, invoices.InvoiceQuery{
NumMaxInvoices: 1,
},
)
if err != nil {
cleanUp()
d.logger.Errorf("Unable to query KV invoice DB: %v",
err)
migrations := sqldb.GetMigrations()
return nil, nil, err
// If the user has not explicitly disabled the SQL invoice
// migration, attach the custom migration function to invoice
// migration (version 7). Even if this custom migration is
// disabled, the regular native SQL store migrations will still
// run. If the database version is already above this custom
// migration's version (7), it will be skipped permanently,
// regardless of the flag.
if !d.cfg.DB.SkipSQLInvoiceMigration {
migrationFn := func(tx *sqlc.Queries) error {
err := invoices.MigrateInvoicesToSQL(
ctx, dbs.ChanStateDB.Backend,
dbs.ChanStateDB, tx,
invoiceMigrationBatchSize,
)
if err != nil {
return fmt.Errorf("failed to migrate "+
"invoices to SQL: %w", err)
}
// Set the invoice bucket tombstone to indicate
// that the migration has been completed.
d.logger.Debugf("Setting invoice bucket " +
"tombstone")
return dbs.ChanStateDB.SetInvoiceBucketTombstone() //nolint:ll
}
// Make sure we attach the custom migration function to
// the correct migration version.
for i := 0; i < len(migrations); i++ {
if migrations[i].Version != invoiceMigration {
continue
}
migrations[i].MigrationFn = migrationFn
}
}
if len(invoiceSlice.Invoices) > 0 {
// We need to apply all migrations to the native SQL store
// before we can use it.
err = dbs.NativeSQLStore.ApplyAllMigrations(ctx, migrations)
if err != nil {
cleanUp()
err := fmt.Errorf("found invoices in the KV invoice " +
"DB, migration to native SQL is not yet " +
"supported")
err = fmt.Errorf("faild to run migrations for the "+
"native SQL store: %w", err)
d.logger.Error(err)
return nil, nil, err
}
// With the DB ready and migrations applied, we can now create
// the base DB and transaction executor for the native SQL
// invoice store.
baseDB := dbs.NativeSQLStore.GetBaseDB()
executor := sqldb.NewTransactionExecutor(
dbs.NativeSQLStore,
func(tx *sql.Tx) invoices.SQLInvoiceQueries {
return dbs.NativeSQLStore.WithTx(tx)
baseDB, func(tx *sql.Tx) invoices.SQLInvoiceQueries {
return baseDB.WithTx(tx)
},
)
dbs.InvoiceDB = invoices.NewSQLStore(
sqlInvoiceDB := invoices.NewSQLStore(
executor, clock.NewDefaultClock(),
)
dbs.InvoiceDB = sqlInvoiceDB
} else {
// Check if the invoice bucket tombstone is set. If it is, we
// need to return and ask the user to switch back to using the
// native SQL store.
ripInvoices, err := dbs.ChanStateDB.GetInvoiceBucketTombstone()
d.logger.Debugf("Invoice bucket tombstone set to: %v",
ripInvoices)
if err != nil {
err = fmt.Errorf("unable to check invoice bucket "+
"tombstone: %w", err)
d.logger.Error(err)
return nil, nil, err
}
if ripInvoices {
err = fmt.Errorf("invoices bucket tombstoned, please " +
"switch back to native SQL")
d.logger.Error(err)
return nil, nil, err
}
dbs.InvoiceDB = dbs.ChanStateDB
}
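
The hook-a-custom-migration-onto-one-version pattern above can be summarized in a small self-contained sketch. The Migration type and applyAll runner below are hypothetical stand-ins rather than lnd's sqldb API, and version 7 mirrors invoiceMigration for illustration only.

package main

import (
	"context"
	"fmt"
)

// Migration pairs a schema version with an optional data-migration callback.
type Migration struct {
	Version     int
	MigrationFn func(ctx context.Context) error
}

// applyAll runs every migration that has a callback attached, in order.
func applyAll(ctx context.Context, migs []Migration) error {
	for _, m := range migs {
		if m.MigrationFn == nil {
			continue
		}
		if err := m.MigrationFn(ctx); err != nil {
			return fmt.Errorf("migration %d failed: %w",
				m.Version, err)
		}
	}
	return nil
}

func main() {
	const invoiceMigration = 7

	migs := []Migration{{Version: 6}, {Version: 7}, {Version: 8}}

	// Attach the custom data migration only to the matching version; a
	// real runner would skip it if the database is already past it.
	for i := range migs {
		if migs[i].Version != invoiceMigration {
			continue
		}
		migs[i].MigrationFn = func(ctx context.Context) error {
			fmt.Println("migrating invoices in batches...")
			return nil
		}
	}

	if err := applyAll(context.Background(), migs); err != nil {
		fmt.Println(err)
	}
}
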
@ -1129,7 +1191,7 @@ func (d *DefaultDatabaseBuilder) BuildDatabase(
if err != nil {
cleanUp()
err := fmt.Errorf("unable to open %s database: %w",
err = fmt.Errorf("unable to open %s database: %w",
lncfg.NSTowerClientDB, err)
d.logger.Error(err)
return nil, nil, err
@ -1144,7 +1206,7 @@ func (d *DefaultDatabaseBuilder) BuildDatabase(
if err != nil {
cleanUp()
err := fmt.Errorf("unable to open %s database: %w",
err = fmt.Errorf("unable to open %s database: %w",
lncfg.NSTowerServerDB, err)
d.logger.Error(err)
return nil, nil, err

View file

@ -2,6 +2,7 @@ package contractcourt
import (
"errors"
"fmt"
"io"
"sync"
@ -23,9 +24,6 @@ type anchorResolver struct {
// anchor is the outpoint on the commitment transaction.
anchor wire.OutPoint
// resolved reflects if the contract has been fully resolved or not.
resolved bool
// broadcastHeight is the height that the original contract was
// broadcast to the main-chain at. We'll use this value to bound any
// historical queries to the chain for spends/confirmations.
@ -71,7 +69,7 @@ func newAnchorResolver(anchorSignDescriptor input.SignDescriptor,
currentReport: report,
}
r.initLogger(r)
r.initLogger(fmt.Sprintf("%T(%v)", r, r.anchor))
return r
}
@ -83,8 +81,121 @@ func (c *anchorResolver) ResolverKey() []byte {
return nil
}
// Resolve offers the anchor output to the sweeper and waits for it to be swept.
func (c *anchorResolver) Resolve(_ bool) (ContractResolver, error) {
// Resolve waits for the output to be swept.
//
// NOTE: Part of the ContractResolver interface.
func (c *anchorResolver) Resolve() (ContractResolver, error) {
// If we're already resolved, then we can exit early.
if c.IsResolved() {
c.log.Errorf("already resolved")
return nil, nil
}
var (
outcome channeldb.ResolverOutcome
spendTx *chainhash.Hash
)
select {
case sweepRes := <-c.sweepResultChan:
err := sweepRes.Err
switch {
// Anchor was swept successfully.
case err == nil:
sweepTxID := sweepRes.Tx.TxHash()
spendTx = &sweepTxID
outcome = channeldb.ResolverOutcomeClaimed
// Anchor was swept by someone else. This is possible after the
// 16 block csv lock.
case errors.Is(err, sweep.ErrRemoteSpend),
errors.Is(err, sweep.ErrInputMissing):
c.log.Warnf("our anchor spent by someone else")
outcome = channeldb.ResolverOutcomeUnclaimed
// An unexpected error occurred.
default:
c.log.Errorf("unable to sweep anchor: %v", sweepRes.Err)
return nil, sweepRes.Err
}
case <-c.quit:
return nil, errResolverShuttingDown
}
c.log.Infof("resolved in tx %v", spendTx)
// Update report to reflect that funds are no longer in limbo.
c.reportLock.Lock()
if outcome == channeldb.ResolverOutcomeClaimed {
c.currentReport.RecoveredBalance = c.currentReport.LimboBalance
}
c.currentReport.LimboBalance = 0
report := c.currentReport.resolverReport(
spendTx, channeldb.ResolverTypeAnchor, outcome,
)
c.reportLock.Unlock()
c.markResolved()
return nil, c.PutResolverReport(nil, report)
}
// Stop signals the resolver to cancel any current resolution processes, and
// suspend.
//
// NOTE: Part of the ContractResolver interface.
func (c *anchorResolver) Stop() {
c.log.Debugf("stopping...")
defer c.log.Debugf("stopped")
close(c.quit)
}
// SupplementState allows the user of a ContractResolver to supplement it with
// state required for the proper resolution of a contract.
//
// NOTE: Part of the ContractResolver interface.
func (c *anchorResolver) SupplementState(state *channeldb.OpenChannel) {
c.chanType = state.ChanType
}
// report returns a report on the resolution state of the contract.
func (c *anchorResolver) report() *ContractReport {
c.reportLock.Lock()
defer c.reportLock.Unlock()
reportCopy := c.currentReport
return &reportCopy
}
func (c *anchorResolver) Encode(w io.Writer) error {
return errors.New("serialization not supported")
}
// A compile time assertion to ensure anchorResolver meets the
// ContractResolver interface.
var _ ContractResolver = (*anchorResolver)(nil)
// Launch offers the anchor output to the sweeper.
func (c *anchorResolver) Launch() error {
if c.isLaunched() {
c.log.Tracef("already launched")
return nil
}
c.log.Debugf("launching resolver...")
c.markLaunched()
// If we're already resolved, then we can exit early.
if c.IsResolved() {
c.log.Errorf("already resolved")
return nil
}
// Attempt to update the sweep parameters to the post-confirmation
// situation. We don't want to force sweep anymore, because the anchor
// lost its special purpose to get the commitment confirmed. It is just
@ -124,94 +235,12 @@ func (c *anchorResolver) Resolve(_ bool) (ContractResolver, error) {
DeadlineHeight: fn.None[int32](),
},
)
if err != nil {
return nil, err
return err
}
var (
outcome channeldb.ResolverOutcome
spendTx *chainhash.Hash
)
c.sweepResultChan = resultChan
select {
case sweepRes := <-resultChan:
switch sweepRes.Err {
// Anchor was swept successfully.
case nil:
sweepTxID := sweepRes.Tx.TxHash()
spendTx = &sweepTxID
outcome = channeldb.ResolverOutcomeClaimed
// Anchor was swept by someone else. This is possible after the
// 16 block csv lock.
case sweep.ErrRemoteSpend:
c.log.Warnf("our anchor spent by someone else")
outcome = channeldb.ResolverOutcomeUnclaimed
// An unexpected error occurred.
default:
c.log.Errorf("unable to sweep anchor: %v", sweepRes.Err)
return nil, sweepRes.Err
}
case <-c.quit:
return nil, errResolverShuttingDown
}
// Update report to reflect that funds are no longer in limbo.
c.reportLock.Lock()
if outcome == channeldb.ResolverOutcomeClaimed {
c.currentReport.RecoveredBalance = c.currentReport.LimboBalance
}
c.currentReport.LimboBalance = 0
report := c.currentReport.resolverReport(
spendTx, channeldb.ResolverTypeAnchor, outcome,
)
c.reportLock.Unlock()
c.resolved = true
return nil, c.PutResolverReport(nil, report)
return nil
}
// Stop signals the resolver to cancel any current resolution processes, and
// suspend.
//
// NOTE: Part of the ContractResolver interface.
func (c *anchorResolver) Stop() {
close(c.quit)
}
// IsResolved returns true if the stored state in the resolve is fully
// resolved. In this case the target output can be forgotten.
//
// NOTE: Part of the ContractResolver interface.
func (c *anchorResolver) IsResolved() bool {
return c.resolved
}
// SupplementState allows the user of a ContractResolver to supplement it with
// state required for the proper resolution of a contract.
//
// NOTE: Part of the ContractResolver interface.
func (c *anchorResolver) SupplementState(state *channeldb.OpenChannel) {
c.chanType = state.ChanType
}
// report returns a report on the resolution state of the contract.
func (c *anchorResolver) report() *ContractReport {
c.reportLock.Lock()
defer c.reportLock.Unlock()
reportCopy := c.currentReport
return &reportCopy
}
func (c *anchorResolver) Encode(w io.Writer) error {
return errors.New("serialization not supported")
}
// A compile time assertion to ensure anchorResolver meets the
// ContractResolver interface.
var _ ContractResolver = (*anchorResolver)(nil)
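
To make the new two-phase flow easier to follow, here is a compact hypothetical sketch of the same split: Launch hands the output to a sweeper and stores the result channel, while Resolve blocks on that channel or on quit. The miniResolver type and the offer callback are illustrative, not the contractcourt interfaces.

package main

import (
	"errors"
	"fmt"
	"sync/atomic"
)

type sweepResult struct{ err error }

type miniResolver struct {
	launched atomic.Bool
	resolved atomic.Bool

	resultChan chan sweepResult
	quit       chan struct{}
}

// Launch is idempotent: it offers the output once and remembers the channel
// on which the sweep result will arrive.
func (r *miniResolver) Launch(offer func() chan sweepResult) {
	if !r.launched.CompareAndSwap(false, true) {
		return // already launched.
	}
	r.resultChan = offer()
}

// Resolve blocks until the sweep result arrives or the resolver is told to
// quit, mirroring the wait-only Resolve above.
func (r *miniResolver) Resolve() error {
	if r.resolved.Load() {
		return nil
	}

	select {
	case res := <-r.resultChan:
		if res.err != nil {
			return res.err
		}
		r.resolved.Store(true)
		return nil

	case <-r.quit:
		return errors.New("resolver shutting down")
	}
}

func main() {
	r := &miniResolver{quit: make(chan struct{})}

	r.Launch(func() chan sweepResult {
		ch := make(chan sweepResult, 1)
		ch <- sweepResult{} // pretend the sweep succeeded.
		return ch
	})

	fmt.Println("resolve err:", r.Resolve())
}
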

View file

@ -1719,7 +1719,7 @@ func NewRetributionStore(db kvdb.Backend) *RetributionStore {
}
// taprootBriefcaseFromRetInfo creates a taprootBriefcase from a retribution
// info struct. This stores all the tap tweak informatoin we need to inrder to
// info struct. This stores all the tap tweak information we need in order to
// be able to handle breaches after a restart.
func taprootBriefcaseFromRetInfo(retInfo *retributionInfo) *taprootBriefcase {
tapCase := newTaprootBriefcase()
@ -1776,7 +1776,7 @@ func taprootBriefcaseFromRetInfo(retInfo *retributionInfo) *taprootBriefcase {
return tapCase
}
// applyTaprootRetInfo attaches the taproot specific inforamtion in the tapCase
// applyTaprootRetInfo attaches the taproot specific information in the tapCase
// to the passed retInfo struct.
func applyTaprootRetInfo(tapCase *taprootBriefcase,
retInfo *retributionInfo) error {

View file

@ -36,7 +36,7 @@ import (
)
var (
defaultTimeout = 30 * time.Second
defaultTimeout = 10 * time.Second
breachOutPoints = []wire.OutPoint{
{

View file

@ -2,6 +2,7 @@ package contractcourt
import (
"encoding/binary"
"fmt"
"io"
"github.com/lightningnetwork/lnd/channeldb"
@ -11,9 +12,6 @@ import (
// future, this will likely take over the duties the current BreachArbitrator
// has.
type breachResolver struct {
// resolved reflects if the contract has been fully resolved or not.
resolved bool
// subscribed denotes whether or not the breach resolver has subscribed
// to the BreachArbitrator for breach resolution.
subscribed bool
@ -32,7 +30,7 @@ func newBreachResolver(resCfg ResolverConfig) *breachResolver {
replyChan: make(chan struct{}),
}
r.initLogger(r)
r.initLogger(fmt.Sprintf("%T(%v)", r, r.ChanPoint))
return r
}
@ -46,8 +44,10 @@ func (b *breachResolver) ResolverKey() []byte {
// Resolve queries the BreachArbitrator to see if the justice transaction has
// been broadcast.
//
// NOTE: Part of the ContractResolver interface.
//
// TODO(yy): let sweeper handle the breach inputs.
func (b *breachResolver) Resolve(_ bool) (ContractResolver, error) {
func (b *breachResolver) Resolve() (ContractResolver, error) {
if !b.subscribed {
complete, err := b.SubscribeBreachComplete(
&b.ChanPoint, b.replyChan,
@ -59,7 +59,7 @@ func (b *breachResolver) Resolve(_ bool) (ContractResolver, error) {
// If the breach resolution process is already complete, then
// we can cleanup and checkpoint the resolved state.
if complete {
b.resolved = true
b.markResolved()
return nil, b.Checkpoint(b)
}
@ -72,8 +72,9 @@ func (b *breachResolver) Resolve(_ bool) (ContractResolver, error) {
// The replyChan has been closed, signalling that the breach
// has been fully resolved. Checkpoint the resolved state and
// exit.
b.resolved = true
b.markResolved()
return nil, b.Checkpoint(b)
case <-b.quit:
}
@ -82,22 +83,17 @@ func (b *breachResolver) Resolve(_ bool) (ContractResolver, error) {
// Stop signals the breachResolver to stop.
func (b *breachResolver) Stop() {
b.log.Debugf("stopping...")
close(b.quit)
}
// IsResolved returns true if the breachResolver is fully resolved and cleanup
// can occur.
func (b *breachResolver) IsResolved() bool {
return b.resolved
}
// SupplementState adds additional state to the breachResolver.
func (b *breachResolver) SupplementState(_ *channeldb.OpenChannel) {
}
// Encode encodes the breachResolver to the passed writer.
func (b *breachResolver) Encode(w io.Writer) error {
return binary.Write(w, endian, b.resolved)
return binary.Write(w, endian, b.IsResolved())
}
// newBreachResolverFromReader attempts to decode an encoded breachResolver
@ -110,11 +106,15 @@ func newBreachResolverFromReader(r io.Reader, resCfg ResolverConfig) (
replyChan: make(chan struct{}),
}
if err := binary.Read(r, endian, &b.resolved); err != nil {
var resolved bool
if err := binary.Read(r, endian, &resolved); err != nil {
return nil, err
}
if resolved {
b.markResolved()
}
b.initLogger(b)
b.initLogger(fmt.Sprintf("%T(%v)", b, b.ChanPoint))
return b, nil
}
@ -122,3 +122,21 @@ func newBreachResolverFromReader(r io.Reader, resCfg ResolverConfig) (
// A compile time assertion to ensure breachResolver meets the ContractResolver
// interface.
var _ ContractResolver = (*breachResolver)(nil)
// Launch offers the breach outputs to the sweeper - currently it's a NOOP as
// the outputs here are not offered to the sweeper.
//
// NOTE: Part of the ContractResolver interface.
//
// TODO(yy): implement it once the outputs are offered to the sweeper.
func (b *breachResolver) Launch() error {
if b.isLaunched() {
b.log.Tracef("already launched")
return nil
}
b.log.Debugf("launching resolver...")
b.markLaunched()
return nil
}
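
Since the resolved flag is now read through IsResolved() and restored via markResolved(), a tiny hedged sketch of the underlying round trip may help. It uses encoding/binary directly, with BigEndian chosen purely for illustration; the real endian value is defined elsewhere in the package.

package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
)

func main() {
	var buf bytes.Buffer

	// Encode: persist whether the resolver has already been resolved.
	resolved := true
	if err := binary.Write(&buf, binary.BigEndian, resolved); err != nil {
		panic(err)
	}

	// Decode: read the flag back; a restored resolver would then call
	// markResolved() when this is true.
	var restored bool
	if err := binary.Read(&buf, binary.BigEndian, &restored); err != nil {
		panic(err)
	}
	fmt.Println("restored resolved flag:", restored)
}
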

View file

@ -249,6 +249,15 @@ func (a ArbitratorState) String() string {
}
}
// IsContractClosed returns a bool to indicate whether the closing/breaching tx
// has been confirmed onchain. If the state is StateContractClosed,
// StateWaitingFullResolution, or StateFullyResolved, it means the contract has
// been closed and all related contracts have been launched.
func (a ArbitratorState) IsContractClosed() bool {
return a == StateContractClosed || a == StateWaitingFullResolution ||
a == StateFullyResolved
}
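
A minimal usage sketch of the new helper, with the state constants stubbed locally (their real values and the full state set live elsewhere in this package):

package main

import "fmt"

type ArbitratorState uint8

// Illustrative subset of the arbitrator states, not the real enum values.
const (
	StateDefault ArbitratorState = iota
	StateContractClosed
	StateWaitingFullResolution
	StateFullyResolved
)

func (a ArbitratorState) IsContractClosed() bool {
	return a == StateContractClosed || a == StateWaitingFullResolution ||
		a == StateFullyResolved
}

func main() {
	for _, s := range []ArbitratorState{
		StateDefault, StateContractClosed,
		StateWaitingFullResolution, StateFullyResolved,
	} {
		fmt.Printf("state %d closed=%v\n", s, s.IsContractClosed())
	}
}
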
// resolverType is an enum that enumerates the various types of resolvers. When
// writing resolvers to disk, we prepend this to the raw bytes stored. This
// allows us to properly decode the resolver into the proper type.

View file

@ -206,8 +206,8 @@ func assertResolversEqual(t *testing.T, originalResolver ContractResolver,
ogRes.outputIncubating, diskRes.outputIncubating)
}
if ogRes.resolved != diskRes.resolved {
t.Fatalf("expected %v, got %v", ogRes.resolved,
diskRes.resolved)
t.Fatalf("expected %v, got %v", ogRes.resolved.Load(),
diskRes.resolved.Load())
}
if ogRes.broadcastHeight != diskRes.broadcastHeight {
t.Fatalf("expected %v, got %v",
@ -229,8 +229,8 @@ func assertResolversEqual(t *testing.T, originalResolver ContractResolver,
ogRes.outputIncubating, diskRes.outputIncubating)
}
if ogRes.resolved != diskRes.resolved {
t.Fatalf("expected %v, got %v", ogRes.resolved,
diskRes.resolved)
t.Fatalf("expected %v, got %v", ogRes.resolved.Load(),
diskRes.resolved.Load())
}
if ogRes.broadcastHeight != diskRes.broadcastHeight {
t.Fatalf("expected %v, got %v",
@ -275,8 +275,8 @@ func assertResolversEqual(t *testing.T, originalResolver ContractResolver,
ogRes.commitResolution, diskRes.commitResolution)
}
if ogRes.resolved != diskRes.resolved {
t.Fatalf("expected %v, got %v", ogRes.resolved,
diskRes.resolved)
t.Fatalf("expected %v, got %v", ogRes.resolved.Load(),
diskRes.resolved.Load())
}
if ogRes.broadcastHeight != diskRes.broadcastHeight {
t.Fatalf("expected %v, got %v",
@ -312,13 +312,14 @@ func TestContractInsertionRetrieval(t *testing.T) {
SweepSignDesc: testSignDesc,
},
outputIncubating: true,
resolved: true,
broadcastHeight: 102,
htlc: channeldb.HTLC{
HtlcIndex: 12,
},
}
successResolver := htlcSuccessResolver{
timeoutResolver.resolved.Store(true)
successResolver := &htlcSuccessResolver{
htlcResolution: lnwallet.IncomingHtlcResolution{
Preimage: testPreimage,
SignedSuccessTx: nil,
@ -327,40 +328,49 @@ func TestContractInsertionRetrieval(t *testing.T) {
SweepSignDesc: testSignDesc,
},
outputIncubating: true,
resolved: true,
broadcastHeight: 109,
htlc: channeldb.HTLC{
RHash: testPreimage,
},
}
resolvers := []ContractResolver{
&timeoutResolver,
&successResolver,
&commitSweepResolver{
commitResolution: lnwallet.CommitOutputResolution{
SelfOutPoint: testChanPoint2,
SelfOutputSignDesc: testSignDesc,
MaturityDelay: 99,
},
resolved: false,
broadcastHeight: 109,
chanPoint: testChanPoint1,
successResolver.resolved.Store(true)
commitResolver := &commitSweepResolver{
commitResolution: lnwallet.CommitOutputResolution{
SelfOutPoint: testChanPoint2,
SelfOutputSignDesc: testSignDesc,
MaturityDelay: 99,
},
broadcastHeight: 109,
chanPoint: testChanPoint1,
}
commitResolver.resolved.Store(false)
resolvers := []ContractResolver{
&timeoutResolver, successResolver, commitResolver,
}
// All resolvers require a unique ResolverKey() output. To achieve this
// for the composite resolvers, we'll mutate the underlying resolver
// with a new outpoint.
contestTimeout := timeoutResolver
contestTimeout.htlcResolution.ClaimOutpoint = randOutPoint()
contestTimeout := htlcTimeoutResolver{
htlcResolution: lnwallet.OutgoingHtlcResolution{
ClaimOutpoint: randOutPoint(),
SweepSignDesc: testSignDesc,
},
}
resolvers = append(resolvers, &htlcOutgoingContestResolver{
htlcTimeoutResolver: &contestTimeout,
})
contestSuccess := successResolver
contestSuccess.htlcResolution.ClaimOutpoint = randOutPoint()
contestSuccess := &htlcSuccessResolver{
htlcResolution: lnwallet.IncomingHtlcResolution{
ClaimOutpoint: randOutPoint(),
SweepSignDesc: testSignDesc,
},
}
resolvers = append(resolvers, &htlcIncomingContestResolver{
htlcExpiry: 100,
htlcSuccessResolver: &contestSuccess,
htlcSuccessResolver: contestSuccess,
})
// For quick lookup during the test, we'll create this map which allow
@ -438,12 +448,12 @@ func TestContractResolution(t *testing.T) {
SweepSignDesc: testSignDesc,
},
outputIncubating: true,
resolved: true,
broadcastHeight: 192,
htlc: channeldb.HTLC{
HtlcIndex: 9912,
},
}
timeoutResolver.resolved.Store(true)
// First, we'll insert the resolver into the database and ensure that
// we get the same resolver out the other side. We do not need to apply
@ -491,12 +501,13 @@ func TestContractSwapping(t *testing.T) {
SweepSignDesc: testSignDesc,
},
outputIncubating: true,
resolved: true,
broadcastHeight: 102,
htlc: channeldb.HTLC{
HtlcIndex: 12,
},
}
timeoutResolver.resolved.Store(true)
contestResolver := &htlcOutgoingContestResolver{
htlcTimeoutResolver: timeoutResolver,
}

View file

@ -11,6 +11,7 @@ import (
"github.com/btcsuite/btcd/chaincfg/chainhash"
"github.com/btcsuite/btcd/wire"
"github.com/btcsuite/btcwallet/walletdb"
"github.com/lightningnetwork/lnd/chainio"
"github.com/lightningnetwork/lnd/chainntnfs"
"github.com/lightningnetwork/lnd/channeldb"
"github.com/lightningnetwork/lnd/clock"
@ -244,6 +245,10 @@ type ChainArbitrator struct {
started int32 // To be used atomically.
stopped int32 // To be used atomically.
// Embed the blockbeat consumer struct to get access to the method
// `NotifyBlockProcessed` and the `BlockbeatChan`.
chainio.BeatConsumer
sync.Mutex
// activeChannels is a map of all the active contracts that are still
@ -262,6 +267,9 @@ type ChainArbitrator struct {
// active channels that it must still watch over.
chanSource *channeldb.DB
// beat is the current best known blockbeat.
beat chainio.Blockbeat
quit chan struct{}
wg sync.WaitGroup
@ -272,15 +280,23 @@ type ChainArbitrator struct {
func NewChainArbitrator(cfg ChainArbitratorConfig,
db *channeldb.DB) *ChainArbitrator {
return &ChainArbitrator{
c := &ChainArbitrator{
cfg: cfg,
activeChannels: make(map[wire.OutPoint]*ChannelArbitrator),
activeWatchers: make(map[wire.OutPoint]*chainWatcher),
chanSource: db,
quit: make(chan struct{}),
}
// Mount the block consumer.
c.BeatConsumer = chainio.NewBeatConsumer(c.quit, c.Name())
return c
}
// Compile-time check for the chainio.Consumer interface.
var _ chainio.Consumer = (*ChainArbitrator)(nil)
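
The two idioms used here, embedding a helper that supplies part of an interface and asserting the interface at compile time, can be shown in isolation. The Consumer, BeatConsumer, and Arbitrator names below are illustrative stand-ins, not the chainio types.

package main

import "fmt"

type Consumer interface {
	Name() string
	NotifyBlockProcessed(height int32, err error)
}

// BeatConsumer supplies NotifyBlockProcessed for anyone embedding it.
type BeatConsumer struct{}

func (b BeatConsumer) NotifyBlockProcessed(height int32, err error) {
	fmt.Printf("block %d processed, err=%v\n", height, err)
}

type Arbitrator struct {
	BeatConsumer // embedded: inherits NotifyBlockProcessed.
}

func (a *Arbitrator) Name() string { return "Arbitrator" }

// Compile-time check that Arbitrator implements Consumer.
var _ Consumer = (*Arbitrator)(nil)

func main() {
	a := &Arbitrator{}
	a.NotifyBlockProcessed(100, nil)
	fmt.Println(a.Name())
}
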
// arbChannel is a wrapper around an open channel that channel arbitrators
// interact with.
type arbChannel struct {
@ -554,147 +570,27 @@ func (c *ChainArbitrator) ResolveContract(chanPoint wire.OutPoint) error {
}
// Start launches all goroutines that the ChainArbitrator needs to operate.
func (c *ChainArbitrator) Start() error {
func (c *ChainArbitrator) Start(beat chainio.Blockbeat) error {
if !atomic.CompareAndSwapInt32(&c.started, 0, 1) {
return nil
}
log.Infof("ChainArbitrator starting with config: budget=[%v]",
&c.cfg.Budget)
// Set the current beat.
c.beat = beat
// First, we'll fetch all the channels that are still open, in order to
// collect them within our set of active contracts.
openChannels, err := c.chanSource.ChannelStateDB().FetchAllChannels()
if err != nil {
if err := c.loadOpenChannels(); err != nil {
return err
}
if len(openChannels) > 0 {
log.Infof("Creating ChannelArbitrators for %v active channels",
len(openChannels))
}
// For each open channel, we'll configure then launch a corresponding
// ChannelArbitrator.
for _, channel := range openChannels {
chanPoint := channel.FundingOutpoint
channel := channel
// First, we'll create an active chainWatcher for this channel
// to ensure that we detect any relevant on chain events.
breachClosure := func(ret *lnwallet.BreachRetribution) error {
return c.cfg.ContractBreach(chanPoint, ret)
}
chainWatcher, err := newChainWatcher(
chainWatcherConfig{
chanState: channel,
notifier: c.cfg.Notifier,
signer: c.cfg.Signer,
isOurAddr: c.cfg.IsOurAddress,
contractBreach: breachClosure,
extractStateNumHint: lnwallet.GetStateNumHint,
auxLeafStore: c.cfg.AuxLeafStore,
auxResolver: c.cfg.AuxResolver,
},
)
if err != nil {
return err
}
c.activeWatchers[chanPoint] = chainWatcher
channelArb, err := newActiveChannelArbitrator(
channel, c, chainWatcher.SubscribeChannelEvents(),
)
if err != nil {
return err
}
c.activeChannels[chanPoint] = channelArb
// Republish any closing transactions for this channel.
err = c.republishClosingTxs(channel)
if err != nil {
log.Errorf("Failed to republish closing txs for "+
"channel %v", chanPoint)
}
}
// In addition to the channels that we know to be open, we'll also
// launch arbitrators to finishing resolving any channels that are in
// the pending close state.
closingChannels, err := c.chanSource.ChannelStateDB().FetchClosedChannels(
true,
)
if err != nil {
if err := c.loadPendingCloseChannels(); err != nil {
return err
}
if len(closingChannels) > 0 {
log.Infof("Creating ChannelArbitrators for %v closing channels",
len(closingChannels))
}
// Next, for each channel is the closing state, we'll launch a
// corresponding more restricted resolver, as we don't have to watch
// the chain any longer, only resolve the contracts on the confirmed
// commitment.
//nolint:ll
for _, closeChanInfo := range closingChannels {
// We can leave off the CloseContract and ForceCloseChan
// methods as the channel is already closed at this point.
chanPoint := closeChanInfo.ChanPoint
arbCfg := ChannelArbitratorConfig{
ChanPoint: chanPoint,
ShortChanID: closeChanInfo.ShortChanID,
ChainArbitratorConfig: c.cfg,
ChainEvents: &ChainEventSubscription{},
IsPendingClose: true,
ClosingHeight: closeChanInfo.CloseHeight,
CloseType: closeChanInfo.CloseType,
PutResolverReport: func(tx kvdb.RwTx,
report *channeldb.ResolverReport) error {
return c.chanSource.PutResolverReport(
tx, c.cfg.ChainHash, &chanPoint, report,
)
},
FetchHistoricalChannel: func() (*channeldb.OpenChannel, error) {
chanStateDB := c.chanSource.ChannelStateDB()
return chanStateDB.FetchHistoricalChannel(&chanPoint)
},
FindOutgoingHTLCDeadline: func(
htlc channeldb.HTLC) fn.Option[int32] {
return c.FindOutgoingHTLCDeadline(
closeChanInfo.ShortChanID, htlc,
)
},
}
chanLog, err := newBoltArbitratorLog(
c.chanSource.Backend, arbCfg, c.cfg.ChainHash, chanPoint,
)
if err != nil {
return err
}
arbCfg.MarkChannelResolved = func() error {
if c.cfg.NotifyFullyResolvedChannel != nil {
c.cfg.NotifyFullyResolvedChannel(chanPoint)
}
return c.ResolveContract(chanPoint)
}
// We create an empty map of HTLC's here since it's possible
// that the channel is in StateDefault and updateActiveHTLCs is
// called. We want to avoid writing to an empty map. Since the
// channel is already in the process of being resolved, no new
// HTLCs will be added.
c.activeChannels[chanPoint] = NewChannelArbitrator(
arbCfg, make(map[HtlcSetKey]htlcSet), chanLog,
)
}
// Now, we'll start all chain watchers in parallel to shorten start up
// duration. In neutrino mode, this allows spend registrations to take
// advantage of batch spend reporting, instead of doing a single rescan
@ -746,7 +642,7 @@ func (c *ChainArbitrator) Start() error {
// transaction.
var startStates map[wire.OutPoint]*chanArbStartState
err = kvdb.View(c.chanSource, func(tx walletdb.ReadTx) error {
err := kvdb.View(c.chanSource, func(tx walletdb.ReadTx) error {
for _, arbitrator := range c.activeChannels {
startState, err := arbitrator.getStartState(tx)
if err != nil {
@ -778,119 +674,45 @@ func (c *ChainArbitrator) Start() error {
arbitrator.cfg.ChanPoint)
}
if err := arbitrator.Start(startState); err != nil {
if err := arbitrator.Start(startState, c.beat); err != nil {
stopAndLog()
return err
}
}
// Subscribe to a single stream of block epoch notifications that we
// will dispatch to all active arbitrators.
blockEpoch, err := c.cfg.Notifier.RegisterBlockEpochNtfn(nil)
if err != nil {
return err
}
// Start our goroutine which will dispatch blocks to each arbitrator.
c.wg.Add(1)
go func() {
defer c.wg.Done()
c.dispatchBlocks(blockEpoch)
c.dispatchBlocks()
}()
log.Infof("ChainArbitrator starting at height %d with %d chain "+
"watchers, %d channel arbitrators, and budget config=[%v]",
c.beat.Height(), len(c.activeWatchers), len(c.activeChannels),
&c.cfg.Budget)
// TODO(roasbeef): eventually move all breach watching here
return nil
}
// blockRecipient contains the information we need to dispatch a block to a
// channel arbitrator.
type blockRecipient struct {
// chanPoint is the funding outpoint of the channel.
chanPoint wire.OutPoint
// blocks is the channel that new block heights are sent into. This
// channel should be sufficiently buffered as to not block the sender.
blocks chan<- int32
// quit is closed if the receiving entity is shutting down.
quit chan struct{}
}
// dispatchBlocks consumes a block epoch notification stream and dispatches
// blocks to each of the chain arb's active channel arbitrators. This function
// must be run in a goroutine.
func (c *ChainArbitrator) dispatchBlocks(
blockEpoch *chainntnfs.BlockEpochEvent) {
// getRecipients is a helper function which acquires the chain arb
// lock and returns a set of block recipients which can be used to
// dispatch blocks.
getRecipients := func() []blockRecipient {
c.Lock()
blocks := make([]blockRecipient, 0, len(c.activeChannels))
for _, channel := range c.activeChannels {
blocks = append(blocks, blockRecipient{
chanPoint: channel.cfg.ChanPoint,
blocks: channel.blocks,
quit: channel.quit,
})
}
c.Unlock()
return blocks
}
// On exit, cancel our blocks subscription and close each block channel
// so that the arbitrators know they will no longer be receiving blocks.
defer func() {
blockEpoch.Cancel()
recipients := getRecipients()
for _, recipient := range recipients {
close(recipient.blocks)
}
}()
func (c *ChainArbitrator) dispatchBlocks() {
// Consume block epochs until we receive the instruction to shutdown.
for {
select {
// Consume block epochs, exiting if our subscription is
// terminated.
case block, ok := <-blockEpoch.Epochs:
if !ok {
log.Trace("dispatchBlocks block epoch " +
"cancelled")
return
}
case beat := <-c.BlockbeatChan:
// Set the current blockbeat.
c.beat = beat
// Get the set of currently active channels block
// subscription channels and dispatch the block to
// each.
for _, recipient := range getRecipients() {
select {
// Deliver the block to the arbitrator.
case recipient.blocks <- block.Height:
// If the recipient is shutting down, exit
// without delivering the block. This may be
// the case when two blocks are mined in quick
// succession, and the arbitrator resolves
// after the first block, and does not need to
// consume the second block.
case <-recipient.quit:
log.Debugf("channel: %v exit without "+
"receiving block: %v",
recipient.chanPoint,
block.Height)
// If the chain arb is shutting down, we don't
// need to deliver any more blocks (everything
// will be shutting down).
case <-c.quit:
return
}
}
// Send this blockbeat to all the active channels and
// wait for them to finish processing it.
c.handleBlockbeat(beat)
// Exit if the chain arbitrator is shutting down.
case <-c.quit:
@ -899,6 +721,47 @@ func (c *ChainArbitrator) dispatchBlocks(
}
}
// handleBlockbeat sends the blockbeat to all active channel arbitrators in
// parallel and waits for them to finish processing it.
func (c *ChainArbitrator) handleBlockbeat(beat chainio.Blockbeat) {
// Read the active channels in a lock.
c.Lock()
// Create a slice to record active channel arbitrator.
channels := make([]chainio.Consumer, 0, len(c.activeChannels))
watchers := make([]chainio.Consumer, 0, len(c.activeWatchers))
// Copy the active channels to the slice.
for _, channel := range c.activeChannels {
channels = append(channels, channel)
}
for _, watcher := range c.activeWatchers {
watchers = append(watchers, watcher)
}
c.Unlock()
// Iterate all the copied watchers and send the blockbeat to them.
err := chainio.DispatchConcurrent(beat, watchers)
if err != nil {
log.Errorf("Notify blockbeat for chainWatcher failed: %v", err)
}
// Iterate all the copied channels and send the blockbeat to them.
//
// NOTE: This method will time out if the subsystems take longer than
// 60s to process the block.
err = chainio.DispatchConcurrent(beat, channels)
if err != nil {
log.Errorf("Notify blockbeat for ChannelArbitrator failed: %v",
err)
}
// Notify the chain arbitrator has processed the block.
c.NotifyBlockProcessed(beat, err)
}
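
As a rough picture of what the concurrent dispatch above does, here is a simplified, hypothetical stand-in for chainio.DispatchConcurrent: fan the beat out to every consumer, wait for all of them, and surface the first error. The Beat, Consumer, and dummyArb types are illustrative only.

package main

import (
	"fmt"
	"sync"
)

// Beat is a stand-in for a blockbeat: just the height here.
type Beat struct{ Height int32 }

// Consumer is a hypothetical reduction of the chainio consumer: anything
// that can process a new block.
type Consumer interface {
	Name() string
	ProcessBlock(Beat) error
}

type dummyArb struct{ name string }

func (d *dummyArb) Name() string { return d.name }

func (d *dummyArb) ProcessBlock(b Beat) error {
	fmt.Printf("%s handled block %d\n", d.name, b.Height)
	return nil
}

// dispatchConcurrent sends the beat to every consumer in parallel and waits
// for all of them before returning the first error seen.
func dispatchConcurrent(beat Beat, consumers []Consumer) error {
	var (
		wg       sync.WaitGroup
		mu       sync.Mutex
		firstErr error
	)

	for _, c := range consumers {
		wg.Add(1)
		go func(c Consumer) {
			defer wg.Done()

			if err := c.ProcessBlock(beat); err != nil {
				mu.Lock()
				if firstErr == nil {
					firstErr = fmt.Errorf("%s: %w",
						c.Name(), err)
				}
				mu.Unlock()
			}
		}(c)
	}

	wg.Wait()

	return firstErr
}

func main() {
	consumers := []Consumer{&dummyArb{"chanArb-1"}, &dummyArb{"watcher-1"}}
	if err := dispatchConcurrent(Beat{Height: 800_000}, consumers); err != nil {
		fmt.Println("dispatch failed:", err)
	}
}
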
// republishClosingTxs will load any stored cooperative or unilateral closing
// transactions and republish them. This helps ensure propagation of the
// transactions in the event that prior publications failed.
@ -1200,8 +1063,8 @@ func (c *ChainArbitrator) WatchNewChannel(newChan *channeldb.OpenChannel) error
chanPoint := newChan.FundingOutpoint
log.Infof("Creating new ChannelArbitrator for ChannelPoint(%v)",
chanPoint)
log.Infof("Creating new chainWatcher and ChannelArbitrator for "+
"ChannelPoint(%v)", chanPoint)
// If we're already watching this channel, then we'll ignore this
// request.
@ -1248,7 +1111,7 @@ func (c *ChainArbitrator) WatchNewChannel(newChan *channeldb.OpenChannel) error
// arbitrators, then launch it.
c.activeChannels[chanPoint] = channelArb
if err := channelArb.Start(nil); err != nil {
if err := channelArb.Start(nil, c.beat); err != nil {
return err
}
@ -1361,3 +1224,192 @@ func (c *ChainArbitrator) FindOutgoingHTLCDeadline(scid lnwire.ShortChannelID,
// TODO(roasbeef): arbitration reports
// * types: contested, waiting for success conf, etc
// NOTE: part of the `chainio.Consumer` interface.
func (c *ChainArbitrator) Name() string {
return "ChainArbitrator"
}
// loadOpenChannels loads all channels that are currently open in the database
// and registers them with the chainWatcher for future notification.
func (c *ChainArbitrator) loadOpenChannels() error {
openChannels, err := c.chanSource.ChannelStateDB().FetchAllChannels()
if err != nil {
return err
}
if len(openChannels) == 0 {
return nil
}
log.Infof("Creating ChannelArbitrators for %v active channels",
len(openChannels))
// For each open channel, we'll configure then launch a corresponding
// ChannelArbitrator.
for _, channel := range openChannels {
chanPoint := channel.FundingOutpoint
channel := channel
// First, we'll create an active chainWatcher for this channel
// to ensure that we detect any relevant on chain events.
breachClosure := func(ret *lnwallet.BreachRetribution) error {
return c.cfg.ContractBreach(chanPoint, ret)
}
chainWatcher, err := newChainWatcher(
chainWatcherConfig{
chanState: channel,
notifier: c.cfg.Notifier,
signer: c.cfg.Signer,
isOurAddr: c.cfg.IsOurAddress,
contractBreach: breachClosure,
extractStateNumHint: lnwallet.GetStateNumHint,
auxLeafStore: c.cfg.AuxLeafStore,
auxResolver: c.cfg.AuxResolver,
},
)
if err != nil {
return err
}
c.activeWatchers[chanPoint] = chainWatcher
channelArb, err := newActiveChannelArbitrator(
channel, c, chainWatcher.SubscribeChannelEvents(),
)
if err != nil {
return err
}
c.activeChannels[chanPoint] = channelArb
// Republish any closing transactions for this channel.
err = c.republishClosingTxs(channel)
if err != nil {
log.Errorf("Failed to republish closing txs for "+
"channel %v", chanPoint)
}
}
return nil
}
// loadPendingCloseChannels loads all channels that are currently pending
// closure in the database and registers them with the ChannelArbitrator to
// continue the resolution process.
func (c *ChainArbitrator) loadPendingCloseChannels() error {
chanStateDB := c.chanSource.ChannelStateDB()
closingChannels, err := chanStateDB.FetchClosedChannels(true)
if err != nil {
return err
}
if len(closingChannels) == 0 {
return nil
}
log.Infof("Creating ChannelArbitrators for %v closing channels",
len(closingChannels))
// Next, for each channel in the closing state, we'll launch a
// corresponding more restricted resolver, as we don't have to watch
// the chain any longer, only resolve the contracts on the confirmed
// commitment.
//nolint:ll
for _, closeChanInfo := range closingChannels {
// We can leave off the CloseContract and ForceCloseChan
// methods as the channel is already closed at this point.
chanPoint := closeChanInfo.ChanPoint
arbCfg := ChannelArbitratorConfig{
ChanPoint: chanPoint,
ShortChanID: closeChanInfo.ShortChanID,
ChainArbitratorConfig: c.cfg,
ChainEvents: &ChainEventSubscription{},
IsPendingClose: true,
ClosingHeight: closeChanInfo.CloseHeight,
CloseType: closeChanInfo.CloseType,
PutResolverReport: func(tx kvdb.RwTx,
report *channeldb.ResolverReport) error {
return c.chanSource.PutResolverReport(
tx, c.cfg.ChainHash, &chanPoint, report,
)
},
FetchHistoricalChannel: func() (*channeldb.OpenChannel, error) {
return chanStateDB.FetchHistoricalChannel(&chanPoint)
},
FindOutgoingHTLCDeadline: func(
htlc channeldb.HTLC) fn.Option[int32] {
return c.FindOutgoingHTLCDeadline(
closeChanInfo.ShortChanID, htlc,
)
},
}
chanLog, err := newBoltArbitratorLog(
c.chanSource.Backend, arbCfg, c.cfg.ChainHash, chanPoint,
)
if err != nil {
return err
}
arbCfg.MarkChannelResolved = func() error {
if c.cfg.NotifyFullyResolvedChannel != nil {
c.cfg.NotifyFullyResolvedChannel(chanPoint)
}
return c.ResolveContract(chanPoint)
}
// We create an empty map of HTLC's here since it's possible
// that the channel is in StateDefault and updateActiveHTLCs is
// called. We want to avoid writing to an empty map. Since the
// channel is already in the process of being resolved, no new
// HTLCs will be added.
c.activeChannels[chanPoint] = NewChannelArbitrator(
arbCfg, make(map[HtlcSetKey]htlcSet), chanLog,
)
}
return nil
}
// RedispatchBlockbeat resends the current blockbeat to the channels specified
// by the chanPoints. It is used when a channel is added to the chain
// arbitrator after it has been started, e.g., during the channel restore
// process.
func (c *ChainArbitrator) RedispatchBlockbeat(chanPoints []wire.OutPoint) {
// Get the current blockbeat.
beat := c.beat
// Prepare two sets of consumers.
channels := make([]chainio.Consumer, 0, len(chanPoints))
watchers := make([]chainio.Consumer, 0, len(chanPoints))
// Read the active channels in a lock.
c.Lock()
for _, op := range chanPoints {
if channel, ok := c.activeChannels[op]; ok {
channels = append(channels, channel)
}
if watcher, ok := c.activeWatchers[op]; ok {
watchers = append(watchers, watcher)
}
}
c.Unlock()
// Iterate all the copied watchers and send the blockbeat to them.
err := chainio.DispatchConcurrent(beat, watchers)
if err != nil {
log.Errorf("Notify blockbeat for chainWatcher failed: %v", err)
}
// Iterate all the copied channels and send the blockbeat to them.
err = chainio.DispatchConcurrent(beat, channels)
if err != nil {
// Shutdown lnd if there's an error processing the block.
log.Errorf("Notify blockbeat for ChannelArbitrator failed: %v",
err)
}
}

View file

@ -77,7 +77,6 @@ func TestChainArbitratorRepublishCloses(t *testing.T) {
ChainIO: &mock.ChainIO{},
Notifier: &mock.ChainNotifier{
SpendChan: make(chan *chainntnfs.SpendDetail),
EpochChan: make(chan *chainntnfs.BlockEpoch),
ConfChan: make(chan *chainntnfs.TxConfirmation),
},
PublishTx: func(tx *wire.MsgTx, _ string) error {
@ -91,7 +90,8 @@ func TestChainArbitratorRepublishCloses(t *testing.T) {
chainArbCfg, db,
)
if err := chainArb.Start(); err != nil {
beat := newBeatFromHeight(0)
if err := chainArb.Start(beat); err != nil {
t.Fatal(err)
}
t.Cleanup(func() {
@ -158,7 +158,6 @@ func TestResolveContract(t *testing.T) {
ChainIO: &mock.ChainIO{},
Notifier: &mock.ChainNotifier{
SpendChan: make(chan *chainntnfs.SpendDetail),
EpochChan: make(chan *chainntnfs.BlockEpoch),
ConfChan: make(chan *chainntnfs.TxConfirmation),
},
PublishTx: func(tx *wire.MsgTx, _ string) error {
@ -175,7 +174,8 @@ func TestResolveContract(t *testing.T) {
chainArb := NewChainArbitrator(
chainArbCfg, db,
)
if err := chainArb.Start(); err != nil {
beat := newBeatFromHeight(0)
if err := chainArb.Start(beat); err != nil {
t.Fatal(err)
}
t.Cleanup(func() {

View file

@ -16,6 +16,7 @@ import (
"github.com/btcsuite/btcd/txscript"
"github.com/btcsuite/btcd/wire"
"github.com/davecgh/go-spew/spew"
"github.com/lightningnetwork/lnd/chainio"
"github.com/lightningnetwork/lnd/chainntnfs"
"github.com/lightningnetwork/lnd/channeldb"
"github.com/lightningnetwork/lnd/fn/v2"
@ -210,6 +211,10 @@ type chainWatcher struct {
started int32 // To be used atomically.
stopped int32 // To be used atomically.
// Embed the blockbeat consumer struct to get access to the method
// `NotifyBlockProcessed` and the `BlockbeatChan`.
chainio.BeatConsumer
quit chan struct{}
wg sync.WaitGroup
@ -219,13 +224,6 @@ type chainWatcher struct {
// the current state number on the commitment transactions.
stateHintObfuscator [lnwallet.StateHintSize]byte
// fundingPkScript is the pkScript of the funding output.
fundingPkScript []byte
// heightHint is the height hint used to checkpoint scans on chain for
// conf/spend events.
heightHint uint32
// All the fields below are protected by this mutex.
sync.Mutex
@ -236,6 +234,22 @@ type chainWatcher struct {
// clientSubscriptions is a map that keeps track of all the active
// client subscriptions for events related to this channel.
clientSubscriptions map[uint64]*ChainEventSubscription
// fundingSpendNtfn is the spending notification subscription for the
// funding outpoint.
fundingSpendNtfn *chainntnfs.SpendEvent
// fundingConfirmedNtfn is the confirmation notification subscription
// for the funding outpoint. This is only created if the channel is
// both taproot and pending confirmation.
//
// For taproot pkscripts, `RegisterSpendNtfn` will only notify on the
// outpoint being spent and not the outpoint+pkscript due to
// `ComputePkScript` being unable to compute the pkscript if a key
// spend is used. We need to add a `RegisterConfirmationsNtfn` here to
// ensure that the outpoint+pkscript pair is confirmed before calling
// `RegisterSpendNtfn`.
fundingConfirmedNtfn *chainntnfs.ConfirmationEvent
}
// newChainWatcher returns a new instance of a chainWatcher for a channel given
@ -260,12 +274,61 @@ func newChainWatcher(cfg chainWatcherConfig) (*chainWatcher, error) {
)
}
return &chainWatcher{
// Get the witness script for the funding output.
fundingPkScript, err := deriveFundingPkScript(chanState)
if err != nil {
return nil, err
}
// Get the channel opening block height.
heightHint := deriveHeightHint(chanState)
// We'll register for a notification to be dispatched if the funding
// output is spent.
spendNtfn, err := cfg.notifier.RegisterSpendNtfn(
&chanState.FundingOutpoint, fundingPkScript, heightHint,
)
if err != nil {
return nil, err
}
c := &chainWatcher{
cfg: cfg,
stateHintObfuscator: stateHint,
quit: make(chan struct{}),
clientSubscriptions: make(map[uint64]*ChainEventSubscription),
}, nil
fundingSpendNtfn: spendNtfn,
}
// If this is a pending taproot channel, we need to register for a
// confirmation notification of the funding tx. Check the docs in
// `fundingConfirmedNtfn` for details.
if c.cfg.chanState.IsPending && c.cfg.chanState.ChanType.IsTaproot() {
confNtfn, err := cfg.notifier.RegisterConfirmationsNtfn(
&chanState.FundingOutpoint.Hash, fundingPkScript, 1,
heightHint,
)
if err != nil {
return nil, err
}
c.fundingConfirmedNtfn = confNtfn
}
// Mount the block consumer.
c.BeatConsumer = chainio.NewBeatConsumer(c.quit, c.Name())
return c, nil
}
// Compile-time check for the chainio.Consumer interface.
var _ chainio.Consumer = (*chainWatcher)(nil)
// Name returns the name of the watcher.
//
// NOTE: part of the `chainio.Consumer` interface.
func (c *chainWatcher) Name() string {
return fmt.Sprintf("ChainWatcher(%v)", c.cfg.chanState.FundingOutpoint)
}
// Start starts all goroutines that the chainWatcher needs to perform its
@ -275,75 +338,11 @@ func (c *chainWatcher) Start() error {
return nil
}
chanState := c.cfg.chanState
log.Debugf("Starting chain watcher for ChannelPoint(%v)",
chanState.FundingOutpoint)
c.cfg.chanState.FundingOutpoint)
// First, we'll register for a notification to be dispatched if the
// funding output is spent.
fundingOut := &chanState.FundingOutpoint
// As a height hint, we'll try to use the opening height, but if the
// channel isn't yet open, then we'll use the height it was broadcast
// at. This may be an unconfirmed zero-conf channel.
c.heightHint = c.cfg.chanState.ShortChanID().BlockHeight
if c.heightHint == 0 {
c.heightHint = chanState.BroadcastHeight()
}
// Since no zero-conf state is stored in a channel backup, the below
// logic will not be triggered for restored, zero-conf channels. Set
// the height hint for zero-conf channels.
if chanState.IsZeroConf() {
if chanState.ZeroConfConfirmed() {
// If the zero-conf channel is confirmed, we'll use the
// confirmed SCID's block height.
c.heightHint = chanState.ZeroConfRealScid().BlockHeight
} else {
// The zero-conf channel is unconfirmed. We'll need to
// use the FundingBroadcastHeight.
c.heightHint = chanState.BroadcastHeight()
}
}
localKey := chanState.LocalChanCfg.MultiSigKey.PubKey
remoteKey := chanState.RemoteChanCfg.MultiSigKey.PubKey
var (
err error
)
if chanState.ChanType.IsTaproot() {
c.fundingPkScript, _, err = input.GenTaprootFundingScript(
localKey, remoteKey, 0, chanState.TapscriptRoot,
)
if err != nil {
return err
}
} else {
multiSigScript, err := input.GenMultiSigScript(
localKey.SerializeCompressed(),
remoteKey.SerializeCompressed(),
)
if err != nil {
return err
}
c.fundingPkScript, err = input.WitnessScriptHash(multiSigScript)
if err != nil {
return err
}
}
spendNtfn, err := c.cfg.notifier.RegisterSpendNtfn(
fundingOut, c.fundingPkScript, c.heightHint,
)
if err != nil {
return err
}
// With the spend notification obtained, we'll now dispatch the
// closeObserver which will properly react to any changes.
c.wg.Add(1)
go c.closeObserver(spendNtfn)
go c.closeObserver()
return nil
}
@ -555,7 +554,7 @@ func newChainSet(chanState *channeldb.OpenChannel) (*chainSet, error) {
localCommit, remoteCommit, err := chanState.LatestCommitments()
if err != nil {
return nil, fmt.Errorf("unable to fetch channel state for "+
"chan_point=%v", chanState.FundingOutpoint)
"chan_point=%v: %v", chanState.FundingOutpoint, err)
}
log.Tracef("ChannelPoint(%v): local_commit_type=%v, local_commit=%v",
@ -622,168 +621,50 @@ func newChainSet(chanState *channeldb.OpenChannel) (*chainSet, error) {
// close observer will assemble the proper materials required to claim the
// funds of the channel on-chain (if required), then dispatch these as
// notifications to all subscribers.
func (c *chainWatcher) closeObserver(spendNtfn *chainntnfs.SpendEvent) {
func (c *chainWatcher) closeObserver() {
defer c.wg.Done()
defer c.fundingSpendNtfn.Cancel()
log.Infof("Close observer for ChannelPoint(%v) active",
c.cfg.chanState.FundingOutpoint)
// If this is a taproot channel, before we proceed, we want to ensure
// that the expected funding output has confirmed on chain.
if c.cfg.chanState.ChanType.IsTaproot() {
fundingPoint := c.cfg.chanState.FundingOutpoint
confNtfn, err := c.cfg.notifier.RegisterConfirmationsNtfn(
&fundingPoint.Hash, c.fundingPkScript, 1, c.heightHint,
)
if err != nil {
log.Warnf("unable to register for conf: %v", err)
}
log.Infof("Waiting for taproot ChannelPoint(%v) to confirm...",
c.cfg.chanState.FundingOutpoint)
for {
select {
case _, ok := <-confNtfn.Confirmed:
// A new block is received, we will check whether this block
// contains a spending tx that we are interested in.
case beat := <-c.BlockbeatChan:
log.Debugf("ChainWatcher(%v) received blockbeat %v",
c.cfg.chanState.FundingOutpoint, beat.Height())
// Process the block.
c.handleBlockbeat(beat)
// If the funding outpoint is spent, we now go ahead and handle
// it. Note that we cannot rely solely on the `block` event
// above to trigger a close event, as deep down, the receiving
// of block notifications and the receiving of spending
// notifications are done in two different goroutines, so the
// expected order: [receive block -> receive spend] is not
// guaranteed.
case spend, ok := <-c.fundingSpendNtfn.Spend:
// If the channel was closed, then this means that the
// notifier exited, so we will as well.
if !ok {
return
}
err := c.handleCommitSpend(spend)
if err != nil {
log.Errorf("Failed to handle commit spend: %v",
err)
}
// The chainWatcher has been signalled to exit, so we'll do so
// now.
case <-c.quit:
return
}
}
select {
// We've detected a spend of the channel onchain! Depending on the type
// of spend, we'll act accordingly, so we'll examine the spending
// transaction to determine what we should do.
//
// TODO(Roasbeef): need to be able to ensure this only triggers
// on confirmation, to ensure if multiple txns are broadcast, we
// act on the one that's timestamped
case commitSpend, ok := <-spendNtfn.Spend:
// If the channel was closed, then this means that the notifier
// exited, so we will as well.
if !ok {
return
}
// Otherwise, the remote party might have broadcast a prior
// revoked state...!!!
commitTxBroadcast := commitSpend.SpendingTx
// First, we'll construct the chainset which includes all the
// data we need to dispatch an event to our subscribers about
// this possible channel close event.
chainSet, err := newChainSet(c.cfg.chanState)
if err != nil {
log.Errorf("unable to create commit set: %v", err)
return
}
// Decode the state hint encoded within the commitment
// transaction to determine if this is a revoked state or not.
obfuscator := c.stateHintObfuscator
broadcastStateNum := c.cfg.extractStateNumHint(
commitTxBroadcast, obfuscator,
)
// We'll go on to check whether it could be our own commitment
// that was published and know is confirmed.
ok, err = c.handleKnownLocalState(
commitSpend, broadcastStateNum, chainSet,
)
if err != nil {
log.Errorf("Unable to handle known local state: %v",
err)
return
}
if ok {
return
}
// Now that we know it is neither a non-cooperative closure nor
// a local close with the latest state, we check if it is the
// remote that closed with any prior or current state.
ok, err = c.handleKnownRemoteState(
commitSpend, broadcastStateNum, chainSet,
)
if err != nil {
log.Errorf("Unable to handle known remote state: %v",
err)
return
}
if ok {
return
}
// Next, we'll check to see if this is a cooperative channel
// closure or not. This is characterized by having an input
// sequence number that's finalized. This won't happen with
// regular commitment transactions due to the state hint
// encoding scheme.
switch commitTxBroadcast.TxIn[0].Sequence {
case wire.MaxTxInSequenceNum:
fallthrough
case mempool.MaxRBFSequence:
// TODO(roasbeef): rare but possible, need itest case
// for
err := c.dispatchCooperativeClose(commitSpend)
if err != nil {
log.Errorf("unable to handle co op close: %v", err)
}
return
}
log.Warnf("Unknown commitment broadcast for "+
"ChannelPoint(%v) ", c.cfg.chanState.FundingOutpoint)
// We'll try to recover as best as possible from losing state.
// We first check if this was a local unknown state. This could
// happen if we force close, then lose state or attempt
// recovery before the commitment confirms.
ok, err = c.handleUnknownLocalState(
commitSpend, broadcastStateNum, chainSet,
)
if err != nil {
log.Errorf("Unable to handle known local state: %v",
err)
return
}
if ok {
return
}
// Since it was neither a known remote state, nor a local state
// that was published, it most likely mean we lost state and
// the remote node closed. In this case we must start the DLP
// protocol in hope of getting our money back.
ok, err = c.handleUnknownRemoteState(
commitSpend, broadcastStateNum, chainSet,
)
if err != nil {
log.Errorf("Unable to handle unknown remote state: %v",
err)
return
}
if ok {
return
}
log.Warnf("Unable to handle spending tx %v of channel point %v",
commitTxBroadcast.TxHash(), c.cfg.chanState.FundingOutpoint)
return
// The chainWatcher has been signalled to exit, so we'll do so now.
case <-c.quit:
return
}
}
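
The ordering concern called out in the comments above (block and spend notifications arrive on independent goroutines, so their order is not guaranteed) boils down to a select loop like the following hypothetical sketch; the beat, spend, and channel plumbing are illustrative only.

package main

import "fmt"

type beat struct{ height int32 }
type spendEvent struct{ txid string }

// observe reacts to whichever arrives first, a new blockbeat or the funding
// spend, and exits on quit or when the spend stream is closed.
func observe(beats <-chan beat, spends <-chan spendEvent, quit <-chan struct{}) {
	for {
		select {
		// A new block arrived; check it for a spend we care about.
		case b := <-beats:
			fmt.Println("processing block", b.height)

		// The funding outpoint was spent; handle the close directly
		// rather than waiting for the block that contains it.
		case s, ok := <-spends:
			if !ok {
				return
			}
			fmt.Println("funding spent in", s.txid)

		case <-quit:
			return
		}
	}
}

func main() {
	beats := make(chan beat, 1)
	spends := make(chan spendEvent, 1)
	quit := make(chan struct{})

	beats <- beat{height: 800_001}
	spends <- spendEvent{txid: "deadbeef"}
	close(quit)

	observe(beats, spends, quit)
}
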
// handleKnownLocalState checks whether the passed spend is a local state that
@ -1412,3 +1293,263 @@ func (c *chainWatcher) waitForCommitmentPoint() *btcec.PublicKey {
}
}
}
// deriveFundingPkScript derives the script used in the funding output.
func deriveFundingPkScript(chanState *channeldb.OpenChannel) ([]byte, error) {
localKey := chanState.LocalChanCfg.MultiSigKey.PubKey
remoteKey := chanState.RemoteChanCfg.MultiSigKey.PubKey
var (
err error
fundingPkScript []byte
)
if chanState.ChanType.IsTaproot() {
fundingPkScript, _, err = input.GenTaprootFundingScript(
localKey, remoteKey, 0, chanState.TapscriptRoot,
)
if err != nil {
return nil, err
}
} else {
multiSigScript, err := input.GenMultiSigScript(
localKey.SerializeCompressed(),
remoteKey.SerializeCompressed(),
)
if err != nil {
return nil, err
}
fundingPkScript, err = input.WitnessScriptHash(multiSigScript)
if err != nil {
return nil, err
}
}
return fundingPkScript, nil
}
// deriveHeightHint derives the block height for the channel opening.
func deriveHeightHint(chanState *channeldb.OpenChannel) uint32 {
// As a height hint, we'll try to use the opening height, but if the
// channel isn't yet open, then we'll use the height it was broadcast
// at. This may be an unconfirmed zero-conf channel.
heightHint := chanState.ShortChanID().BlockHeight
if heightHint == 0 {
heightHint = chanState.BroadcastHeight()
}
// Since no zero-conf state is stored in a channel backup, the below
// logic will not be triggered for restored, zero-conf channels. Set
// the height hint for zero-conf channels.
if chanState.IsZeroConf() {
if chanState.ZeroConfConfirmed() {
// If the zero-conf channel is confirmed, we'll use the
// confirmed SCID's block height.
heightHint = chanState.ZeroConfRealScid().BlockHeight
} else {
// The zero-conf channel is unconfirmed. We'll need to
// use the FundingBroadcastHeight.
heightHint = chanState.BroadcastHeight()
}
}
return heightHint
}
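For context, here is a minimal sketch of how these two helpers could be combined when subscribing to a spend of the funding output. The `registerFundingSpend` wrapper is hypothetical and is assumed to live in the same package as the helpers above; the `RegisterSpendNtfn` signature is assumed from the `chainntnfs.ChainNotifier` interface and may differ between versions.
func registerFundingSpend(notifier chainntnfs.ChainNotifier,
    chanState *channeldb.OpenChannel) (*chainntnfs.SpendEvent, error) {

    // Derive the script the funding output pays to (taproot or P2WSH,
    // depending on the channel type).
    fundingPkScript, err := deriveFundingPkScript(chanState)
    if err != nil {
        return nil, err
    }

    // Use the opening (or broadcast) height to bound the rescan.
    heightHint := deriveHeightHint(chanState)

    // Ask the notifier to signal us once the funding outpoint is spent.
    op := chanState.FundingOutpoint

    return notifier.RegisterSpendNtfn(&op, fundingPkScript, heightHint)
}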
// handleCommitSpend takes a spending tx of the funding output and handles the
// channel close based on the closure type.
func (c *chainWatcher) handleCommitSpend(
commitSpend *chainntnfs.SpendDetail) error {
commitTxBroadcast := commitSpend.SpendingTx
// First, we'll construct the chainset which includes all the data we
// need to dispatch an event to our subscribers about this possible
// channel close event.
chainSet, err := newChainSet(c.cfg.chanState)
if err != nil {
return fmt.Errorf("create commit set: %w", err)
}
// Decode the state hint encoded within the commitment transaction to
// determine if this is a revoked state or not.
obfuscator := c.stateHintObfuscator
broadcastStateNum := c.cfg.extractStateNumHint(
commitTxBroadcast, obfuscator,
)
// We'll go on to check whether it could be our own commitment that was
// published and now is confirmed.
ok, err := c.handleKnownLocalState(
commitSpend, broadcastStateNum, chainSet,
)
if err != nil {
return fmt.Errorf("handle known local state: %w", err)
}
if ok {
return nil
}
// Now that we know it is neither a non-cooperative closure nor a local
// close with the latest state, we check if it is the remote that
// closed with any prior or current state.
ok, err = c.handleKnownRemoteState(
commitSpend, broadcastStateNum, chainSet,
)
if err != nil {
return fmt.Errorf("handle known remote state: %w", err)
}
if ok {
return nil
}
// Next, we'll check to see if this is a cooperative channel closure or
// not. This is characterized by having an input sequence number that's
// finalized. This won't happen with regular commitment transactions
// due to the state hint encoding scheme.
switch commitTxBroadcast.TxIn[0].Sequence {
case wire.MaxTxInSequenceNum:
fallthrough
case mempool.MaxRBFSequence:
// TODO(roasbeef): rare but possible, need itest case for
err := c.dispatchCooperativeClose(commitSpend)
if err != nil {
return fmt.Errorf("handle coop close: %w", err)
}
return nil
}
log.Warnf("Unknown commitment broadcast for ChannelPoint(%v) ",
c.cfg.chanState.FundingOutpoint)
// We'll try to recover as best as possible from losing state. We
// first check if this was a local unknown state. This could happen if
// we force close, then lose state or attempt recovery before the
// commitment confirms.
ok, err = c.handleUnknownLocalState(
commitSpend, broadcastStateNum, chainSet,
)
if err != nil {
return fmt.Errorf("handle known local state: %w", err)
}
if ok {
return nil
}
// Since it was neither a known remote state, nor a local state that
// was published, it most likely means we lost state and the remote node
// closed. In this case we must start the DLP protocol in the hope of
// getting our money back.
ok, err = c.handleUnknownRemoteState(
commitSpend, broadcastStateNum, chainSet,
)
if err != nil {
return fmt.Errorf("handle unknown remote state: %w", err)
}
if ok {
return nil
}
log.Errorf("Unable to handle spending tx %v of channel point %v",
commitTxBroadcast.TxHash(), c.cfg.chanState.FundingOutpoint)
return nil
}
// checkFundingSpend performs a non-blocking read on the spendNtfn channel to
// check whether there's a commit spend already. Returns the spend details if
// found.
func (c *chainWatcher) checkFundingSpend() *chainntnfs.SpendDetail {
select {
// We've detected a spend of the channel onchain! Depending on the type
// of spend, we'll act accordingly, so we'll examine the spending
// transaction to determine what we should do.
//
// TODO(Roasbeef): need to be able to ensure this only triggers
// on confirmation, to ensure if multiple txns are broadcast, we
// act on the one that's timestamped
case spend, ok := <-c.fundingSpendNtfn.Spend:
// If the channel was closed, then this means that the notifier
// exited, so we will as well.
if !ok {
return nil
}
log.Debugf("Found spend details for funding output: %v",
spend.SpenderTxHash)
return spend
default:
}
return nil
}
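The helper above relies on Go's non-blocking read idiom: a select with a default branch. A tiny standalone illustration of that idiom follows (generic channel and names, nothing lnd-specific):
package main

import "fmt"

// tryRecv performs a non-blocking read on ch: it returns the value and true
// if something is ready, or the zero value and false if the channel is empty
// or closed.
func tryRecv(ch <-chan int) (int, bool) {
    select {
    case v, ok := <-ch:
        if !ok {
            // Channel closed; treat as "nothing to read".
            return 0, false
        }
        return v, true
    default:
        // No value ready right now; don't block.
        return 0, false
    }
}

func main() {
    ch := make(chan int, 1)
    if _, ok := tryRecv(ch); !ok {
        fmt.Println("empty, as expected")
    }
    ch <- 42
    if v, ok := tryRecv(ch); ok {
        fmt.Println("got", v)
    }
}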
// chanPointConfirmed checks whether the given channel point has confirmed.
// This is used to ensure that the funding output has confirmed on chain before
// we proceed with the rest of the close observer logic for taproot channels.
// Check the docs in `fundingConfirmedNtfn` for details.
func (c *chainWatcher) chanPointConfirmed() bool {
op := c.cfg.chanState.FundingOutpoint
select {
case _, ok := <-c.fundingConfirmedNtfn.Confirmed:
// If the channel was closed, then this means that the notifier
// exited, so we will as well.
if !ok {
return false
}
log.Debugf("Taproot ChannelPoint(%v) confirmed", op)
// The channel point has confirmed on chain. We now cancel the
// subscription.
c.fundingConfirmedNtfn.Cancel()
return true
default:
log.Infof("Taproot ChannelPoint(%v) not confirmed yet", op)
return false
}
}
// handleBlockbeat takes a blockbeat and queries for a spending tx for the
// funding output. If the spending tx is found, it will be handled based on the
// closure type.
func (c *chainWatcher) handleBlockbeat(beat chainio.Blockbeat) {
// Notify that the chain watcher has processed the block.
defer c.NotifyBlockProcessed(beat, nil)
// If we have a fundingConfirmedNtfn, it means this is a taproot
// channel that is pending, before we proceed, we want to ensure that
// the expected funding output has confirmed on chain. Check the docs
// in `fundingConfirmedNtfn` for details.
if c.fundingConfirmedNtfn != nil {
// If the funding output hasn't confirmed in this block, we
// will check it again in the next block.
if !c.chanPointConfirmed() {
return
}
}
// Perform a non-blocking read to check whether the funding output was
// spent.
spend := c.checkFundingSpend()
if spend == nil {
log.Tracef("No spend found for ChannelPoint(%v) in block %v",
c.cfg.chanState.FundingOutpoint, beat.Height())
return
}
// The funding output was spent, we now handle it by sending a close
// event to the channel arbitrator.
err := c.handleCommitSpend(spend)
if err != nil {
log.Errorf("Failed to handle commit spend: %v", err)
}
}

View file

@ -9,10 +9,11 @@ import (
"time"
"github.com/btcsuite/btcd/wire"
"github.com/lightningnetwork/lnd/chainio"
"github.com/lightningnetwork/lnd/chainntnfs"
"github.com/lightningnetwork/lnd/channeldb"
"github.com/lightningnetwork/lnd/input"
"github.com/lightningnetwork/lnd/lntest/mock"
lnmock "github.com/lightningnetwork/lnd/lntest/mock"
"github.com/lightningnetwork/lnd/lnwallet"
"github.com/lightningnetwork/lnd/lnwire"
"github.com/stretchr/testify/require"
@ -33,8 +34,8 @@ func TestChainWatcherRemoteUnilateralClose(t *testing.T) {
// With the channels created, we'll now create a chain watcher instance
// which will be watching for any closes of Alice's channel.
aliceNotifier := &mock.ChainNotifier{
SpendChan: make(chan *chainntnfs.SpendDetail),
aliceNotifier := &lnmock.ChainNotifier{
SpendChan: make(chan *chainntnfs.SpendDetail, 1),
EpochChan: make(chan *chainntnfs.BlockEpoch),
ConfChan: make(chan *chainntnfs.TxConfirmation),
}
@ -49,6 +50,20 @@ func TestChainWatcherRemoteUnilateralClose(t *testing.T) {
require.NoError(t, err, "unable to start chain watcher")
defer aliceChainWatcher.Stop()
// Create a mock blockbeat and send it to Alice's BlockbeatChan.
mockBeat := &chainio.MockBlockbeat{}
// Mock the logger. We don't care how many times it's called as it's
// not critical.
mockBeat.On("logger").Return(log)
// Mock a fake block height - this is called based on the debuglevel.
mockBeat.On("Height").Return(int32(1)).Maybe()
// Mock `NotifyBlockProcessed` to be called once.
mockBeat.On("NotifyBlockProcessed",
nil, aliceChainWatcher.quit).Return().Once()
// We'll request a new channel event subscription from Alice's chain
// watcher.
chanEvents := aliceChainWatcher.SubscribeChannelEvents()
@ -61,7 +76,19 @@ func TestChainWatcherRemoteUnilateralClose(t *testing.T) {
SpenderTxHash: &bobTxHash,
SpendingTx: bobCommit,
}
aliceNotifier.SpendChan <- bobSpend
// Here we mock the behavior of a restart.
select {
case aliceNotifier.SpendChan <- bobSpend:
case <-time.After(1 * time.Second):
t.Fatalf("unable to send spend details")
}
select {
case aliceChainWatcher.BlockbeatChan <- mockBeat:
case <-time.After(time.Second * 1):
t.Fatalf("unable to send blockbeat")
}
// We should get a new spend event over the remote unilateral close
// event channel.
@ -117,7 +144,7 @@ func TestChainWatcherRemoteUnilateralClosePendingCommit(t *testing.T) {
// With the channels created, we'll now create a chain watcher instance
// which will be watching for any closes of Alice's channel.
aliceNotifier := &mock.ChainNotifier{
aliceNotifier := &lnmock.ChainNotifier{
SpendChan: make(chan *chainntnfs.SpendDetail),
EpochChan: make(chan *chainntnfs.BlockEpoch),
ConfChan: make(chan *chainntnfs.TxConfirmation),
@ -165,7 +192,32 @@ func TestChainWatcherRemoteUnilateralClosePendingCommit(t *testing.T) {
SpenderTxHash: &bobTxHash,
SpendingTx: bobCommit,
}
aliceNotifier.SpendChan <- bobSpend
// Create a mock blockbeat and send it to Alice's BlockbeatChan.
mockBeat := &chainio.MockBlockbeat{}
// Mock the logger. We don't care how many times it's called as it's
// not critical.
mockBeat.On("logger").Return(log)
// Mock a fake block height - this is called based on the debuglevel.
mockBeat.On("Height").Return(int32(1)).Maybe()
// Mock `NotifyBlockProcessed` to be called once.
mockBeat.On("NotifyBlockProcessed",
nil, aliceChainWatcher.quit).Return().Once()
select {
case aliceNotifier.SpendChan <- bobSpend:
case <-time.After(1 * time.Second):
t.Fatalf("unable to send spend details")
}
select {
case aliceChainWatcher.BlockbeatChan <- mockBeat:
case <-time.After(time.Second * 1):
t.Fatalf("unable to send blockbeat")
}
// We should get a new spend event over the remote unilateral close
// event channel.
@ -279,7 +331,7 @@ func TestChainWatcherDataLossProtect(t *testing.T) {
// With the channels created, we'll now create a chain watcher
// instance which will be watching for any closes of Alice's
// channel.
aliceNotifier := &mock.ChainNotifier{
aliceNotifier := &lnmock.ChainNotifier{
SpendChan: make(chan *chainntnfs.SpendDetail),
EpochChan: make(chan *chainntnfs.BlockEpoch),
ConfChan: make(chan *chainntnfs.TxConfirmation),
@ -326,7 +378,34 @@ func TestChainWatcherDataLossProtect(t *testing.T) {
SpenderTxHash: &bobTxHash,
SpendingTx: bobCommit,
}
aliceNotifier.SpendChan <- bobSpend
// Create a mock blockbeat and send it to Alice's
// BlockbeatChan.
mockBeat := &chainio.MockBlockbeat{}
// Mock the logger. We don't care how many times it's called as
// it's not critical.
mockBeat.On("logger").Return(log)
// Mock a fake block height - this is called based on the
// debuglevel.
mockBeat.On("Height").Return(int32(1)).Maybe()
// Mock `NotifyBlockProcessed` to be called once.
mockBeat.On("NotifyBlockProcessed",
nil, aliceChainWatcher.quit).Return().Once()
select {
case aliceNotifier.SpendChan <- bobSpend:
case <-time.After(time.Second * 1):
t.Fatalf("failed to send spend notification")
}
select {
case aliceChainWatcher.BlockbeatChan <- mockBeat:
case <-time.After(time.Second * 1):
t.Fatalf("unable to send blockbeat")
}
// We should get a new uni close resolution that indicates we
// processed the DLP scenario.
@ -453,7 +532,7 @@ func TestChainWatcherLocalForceCloseDetect(t *testing.T) {
// With the channels created, we'll now create a chain watcher
// instance which will be watching for any closes of Alice's
// channel.
aliceNotifier := &mock.ChainNotifier{
aliceNotifier := &lnmock.ChainNotifier{
SpendChan: make(chan *chainntnfs.SpendDetail),
EpochChan: make(chan *chainntnfs.BlockEpoch),
ConfChan: make(chan *chainntnfs.TxConfirmation),
@ -497,7 +576,33 @@ func TestChainWatcherLocalForceCloseDetect(t *testing.T) {
SpenderTxHash: &aliceTxHash,
SpendingTx: aliceCommit,
}
aliceNotifier.SpendChan <- aliceSpend
// Create a mock blockbeat and send it to Alice's
// BlockbeatChan.
mockBeat := &chainio.MockBlockbeat{}
// Mock the logger. We don't care how many times it's called as
// it's not critical.
mockBeat.On("logger").Return(log)
// Mock a fake block height - this is called based on the
// debuglevel.
mockBeat.On("Height").Return(int32(1)).Maybe()
// Mock `NotifyBlockProcessed` to be called once.
mockBeat.On("NotifyBlockProcessed",
nil, aliceChainWatcher.quit).Return().Once()
select {
case aliceNotifier.SpendChan <- aliceSpend:
case <-time.After(time.Second * 1):
t.Fatalf("unable to send spend notification")
}
select {
case aliceChainWatcher.BlockbeatChan <- mockBeat:
case <-time.After(time.Second * 1):
t.Fatalf("unable to send blockbeat")
}
// We should get a local force close event from Alice as she
// should be able to detect the close based on the commitment

View file

@ -14,6 +14,7 @@ import (
"github.com/btcsuite/btcd/chaincfg/chainhash"
"github.com/btcsuite/btcd/txscript"
"github.com/btcsuite/btcd/wire"
"github.com/lightningnetwork/lnd/chainio"
"github.com/lightningnetwork/lnd/channeldb"
"github.com/lightningnetwork/lnd/fn/v2"
"github.com/lightningnetwork/lnd/graph/db/models"
@ -330,6 +331,10 @@ type ChannelArbitrator struct {
started int32 // To be used atomically.
stopped int32 // To be used atomically.
// Embed the blockbeat consumer struct to get access to the method
// `NotifyBlockProcessed` and the `BlockbeatChan`.
chainio.BeatConsumer
// startTimestamp is the time when this ChannelArbitrator was started.
startTimestamp time.Time
@ -352,11 +357,6 @@ type ChannelArbitrator struct {
// to do its duty.
cfg ChannelArbitratorConfig
// blocks is a channel that the arbitrator will receive new blocks on.
// This channel should be buffered so that it does not block the
// sender.
blocks chan int32
// signalUpdates is a channel that any new live signals for the channel
// we're watching over will be sent.
signalUpdates chan *signalUpdateMsg
@ -404,9 +404,8 @@ func NewChannelArbitrator(cfg ChannelArbitratorConfig,
unmerged[RemotePendingHtlcSet] = htlcSets[RemotePendingHtlcSet]
}
return &ChannelArbitrator{
c := &ChannelArbitrator{
log: log,
blocks: make(chan int32, arbitratorBlockBufferSize),
signalUpdates: make(chan *signalUpdateMsg),
resolutionSignal: make(chan struct{}),
forceCloseReqs: make(chan *forceCloseReq),
@ -415,8 +414,16 @@ func NewChannelArbitrator(cfg ChannelArbitratorConfig,
cfg: cfg,
quit: make(chan struct{}),
}
// Mount the block consumer.
c.BeatConsumer = chainio.NewBeatConsumer(c.quit, c.Name())
return c
}
// Compile-time check for the chainio.Consumer interface.
var _ chainio.Consumer = (*ChannelArbitrator)(nil)
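The `var _ chainio.Consumer = (*ChannelArbitrator)(nil)` line is the standard Go compile-time interface assertion. A self-contained illustration of the idiom, using made-up types rather than lnd ones, looks like this:
package main

import "fmt"

// Greeter is a small interface used only to illustrate the idiom.
type Greeter interface {
    Greet() string
}

// englishGreeter is a concrete type we want to keep compatible with Greeter.
type englishGreeter struct{}

func (englishGreeter) Greet() string { return "hello" }

// Compile-time check: if englishGreeter ever stops satisfying Greeter, this
// declaration fails to build, so the regression is caught without any tests.
var _ Greeter = (*englishGreeter)(nil)

func main() {
    var g Greeter = englishGreeter{}
    fmt.Println(g.Greet())
}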
// chanArbStartState contains the information from disk that we need to start
// up a channel arbitrator.
type chanArbStartState struct {
@ -455,7 +462,9 @@ func (c *ChannelArbitrator) getStartState(tx kvdb.RTx) (*chanArbStartState,
// Start starts all the goroutines that the ChannelArbitrator needs to operate.
// It takes a start state, which will be looked up on disk if it is not
// provided.
func (c *ChannelArbitrator) Start(state *chanArbStartState) error {
func (c *ChannelArbitrator) Start(state *chanArbStartState,
beat chainio.Blockbeat) error {
if !atomic.CompareAndSwapInt32(&c.started, 0, 1) {
return nil
}
@ -470,17 +479,15 @@ func (c *ChannelArbitrator) Start(state *chanArbStartState) error {
}
}
log.Debugf("Starting ChannelArbitrator(%v), htlc_set=%v, state=%v",
log.Tracef("Starting ChannelArbitrator(%v), htlc_set=%v, state=%v",
c.cfg.ChanPoint, lnutils.SpewLogClosure(c.activeHTLCs),
state.currentState)
// Set our state from our starting state.
c.state = state.currentState
_, bestHeight, err := c.cfg.ChainIO.GetBestBlock()
if err != nil {
return err
}
// Get the starting height.
bestHeight := beat.Height()
c.wg.Add(1)
go c.channelAttendant(bestHeight, state.commitSet)
@ -809,7 +816,7 @@ func (c *ChannelArbitrator) relaunchResolvers(commitSet *CommitSet,
// TODO(roasbeef): this isn't re-launched?
}
c.launchResolvers(unresolvedContracts, true)
c.resolveContracts(unresolvedContracts)
return nil
}
@ -1348,7 +1355,7 @@ func (c *ChannelArbitrator) stateStep(
// Finally, we'll launch all the required contract resolvers.
// Once they're all resolved, we're no longer needed.
c.launchResolvers(resolvers, false)
c.resolveContracts(resolvers)
nextState = StateWaitingFullResolution
@ -1571,17 +1578,72 @@ func (c *ChannelArbitrator) findCommitmentDeadlineAndValue(heightHint uint32,
return fn.Some(int32(deadline)), valueLeft, nil
}
// launchResolvers updates the activeResolvers list and starts the resolvers.
func (c *ChannelArbitrator) launchResolvers(resolvers []ContractResolver,
immediate bool) {
// resolveContracts updates the activeResolvers list, launches the resolvers,
// and then starts to resolve each contract concurrently.
func (c *ChannelArbitrator) resolveContracts(resolvers []ContractResolver) {
c.activeResolversLock.Lock()
defer c.activeResolversLock.Unlock()
c.activeResolvers = resolvers
c.activeResolversLock.Unlock()
// Launch all resolvers.
c.launchResolvers()
for _, contract := range resolvers {
c.wg.Add(1)
go c.resolveContract(contract, immediate)
go c.resolveContract(contract)
}
}
// launchResolvers launches all the active resolvers concurrently.
func (c *ChannelArbitrator) launchResolvers() {
c.activeResolversLock.Lock()
resolvers := c.activeResolvers
c.activeResolversLock.Unlock()
// errChans is a map of channels that will be used to receive errors
// returned from launching the resolvers.
errChans := make(map[ContractResolver]chan error, len(resolvers))
// Launch each resolver in goroutines.
for _, r := range resolvers {
// If the contract is already resolved, there's no need to
// launch it again.
if r.IsResolved() {
log.Debugf("ChannelArbitrator(%v): skipping resolver "+
"%T as it's already resolved", c.cfg.ChanPoint,
r)
continue
}
// Create a signal chan.
errChan := make(chan error, 1)
errChans[r] = errChan
go func() {
err := r.Launch()
errChan <- err
}()
}
// Wait for all resolvers to finish launching.
for r, errChan := range errChans {
select {
case err := <-errChan:
if err == nil {
continue
}
log.Errorf("ChannelArbitrator(%v): unable to launch "+
"contract resolver(%T): %v", c.cfg.ChanPoint, r,
err)
case <-c.quit:
log.Debugf("ChannelArbitrator quit signal received, " +
"exit launchResolvers")
return
}
}
}
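launchResolvers follows a common fan-out/fan-in shape: one goroutine and one buffered error channel per resolver, then a collection loop that also honours the quit channel. A generic, self-contained sketch of the same pattern (hypothetical `launchAll` helper, not lnd code):
package main

import (
    "errors"
    "fmt"
)

// launchAll starts fn for every item concurrently, then collects exactly one
// result per item, bailing out early if quit is closed.
func launchAll(items []string, fn func(string) error, quit <-chan struct{}) {
    errChans := make(map[string]chan error, len(items))

    // Fan out: one goroutine and one buffered error channel per item.
    for _, it := range items {
        errChan := make(chan error, 1)
        errChans[it] = errChan

        go func(it string) {
            errChan <- fn(it)
        }(it)
    }

    // Fan in: wait for each result, or stop early on quit.
    for it, errChan := range errChans {
        select {
        case err := <-errChan:
            if err != nil {
                fmt.Printf("item %q failed: %v\n", it, err)
            }
        case <-quit:
            fmt.Println("quit received, stop waiting")
            return
        }
    }
}

func main() {
    quit := make(chan struct{})
    launchAll([]string{"a", "b"}, func(s string) error {
        if s == "b" {
            return errors.New("boom")
        }
        return nil
    }, quit)
}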
@ -1605,8 +1667,8 @@ func (c *ChannelArbitrator) advanceState(
for {
priorState = c.state
log.Debugf("ChannelArbitrator(%v): attempting state step with "+
"trigger=%v from state=%v", c.cfg.ChanPoint, trigger,
priorState)
"trigger=%v from state=%v at height=%v",
c.cfg.ChanPoint, trigger, priorState, triggerHeight)
nextState, closeTx, err := c.stateStep(
triggerHeight, trigger, confCommitSet,
@ -2553,19 +2615,17 @@ func (c *ChannelArbitrator) replaceResolver(oldResolver,
// contracts.
//
// NOTE: This MUST be run as a goroutine.
func (c *ChannelArbitrator) resolveContract(currentContract ContractResolver,
immediate bool) {
func (c *ChannelArbitrator) resolveContract(currentContract ContractResolver) {
defer c.wg.Done()
log.Debugf("ChannelArbitrator(%v): attempting to resolve %T",
log.Tracef("ChannelArbitrator(%v): attempting to resolve %T",
c.cfg.ChanPoint, currentContract)
// Until the contract is fully resolved, we'll continue to iteratively
// resolve the contract one step at a time.
for !currentContract.IsResolved() {
log.Debugf("ChannelArbitrator(%v): contract %T not yet resolved",
c.cfg.ChanPoint, currentContract)
log.Tracef("ChannelArbitrator(%v): contract %T not yet "+
"resolved", c.cfg.ChanPoint, currentContract)
select {
@ -2576,7 +2636,7 @@ func (c *ChannelArbitrator) resolveContract(currentContract ContractResolver,
default:
// Otherwise, we'll attempt to resolve the current
// contract.
nextContract, err := currentContract.Resolve(immediate)
nextContract, err := currentContract.Resolve()
if err != nil {
if err == errResolverShuttingDown {
return
@ -2625,6 +2685,13 @@ func (c *ChannelArbitrator) resolveContract(currentContract ContractResolver,
// loop.
currentContract = nextContract
// Launch the new contract.
err = currentContract.Launch()
if err != nil {
log.Errorf("Failed to launch %T: %v",
currentContract, err)
}
// If this contract is actually fully resolved, then
// we'll mark it as such within the database.
case currentContract.IsResolved():
@ -2728,8 +2795,6 @@ func (c *ChannelArbitrator) updateActiveHTLCs() {
// Nursery for incubation, and ultimate sweeping.
//
// NOTE: This MUST be run as a goroutine.
//
//nolint:funlen
func (c *ChannelArbitrator) channelAttendant(bestHeight int32,
commitSet *CommitSet) {
@ -2756,31 +2821,21 @@ func (c *ChannelArbitrator) channelAttendant(bestHeight int32,
// A new block has arrived, we'll examine all the active HTLC's
// to see if any of them have expired, and also update our
// track of the best current height.
case blockHeight, ok := <-c.blocks:
if !ok {
return
}
bestHeight = blockHeight
case beat := <-c.BlockbeatChan:
bestHeight = beat.Height()
// If we're not in the default state, then we can
// ignore this signal as we're waiting for contract
// resolution.
if c.state != StateDefault {
continue
}
log.Debugf("ChannelArbitrator(%v): new block height=%v",
c.cfg.ChanPoint, bestHeight)
// Now that a new block has arrived, we'll attempt to
// advance our state forward.
nextState, _, err := c.advanceState(
uint32(bestHeight), chainTrigger, nil,
)
err := c.handleBlockbeat(beat)
if err != nil {
log.Errorf("Unable to advance state: %v", err)
log.Errorf("Handle block=%v got err: %v",
bestHeight, err)
}
// If as a result of this trigger, the contract is
// fully resolved, then we'll exit.
if nextState == StateFullyResolved {
if c.state == StateFullyResolved {
return
}
@ -2803,255 +2858,55 @@ func (c *ChannelArbitrator) channelAttendant(bestHeight int32,
// We've cooperatively closed the channel, so we're no longer
// needed. We'll mark the channel as resolved and exit.
case closeInfo := <-c.cfg.ChainEvents.CooperativeClosure:
log.Infof("ChannelArbitrator(%v) marking channel "+
"cooperatively closed", c.cfg.ChanPoint)
err := c.cfg.MarkChannelClosed(
closeInfo.ChannelCloseSummary,
channeldb.ChanStatusCoopBroadcasted,
)
err := c.handleCoopCloseEvent(closeInfo)
if err != nil {
log.Errorf("Unable to mark channel closed: "+
"%v", err)
return
}
log.Errorf("Failed to handle coop close: %v",
err)
// We'll now advance our state machine until it reaches
// a terminal state, and the channel is marked resolved.
_, _, err = c.advanceState(
closeInfo.CloseHeight, coopCloseTrigger, nil,
)
if err != nil {
log.Errorf("Unable to advance state: %v", err)
return
}
// We have broadcasted our commitment, and it is now confirmed
// on-chain.
case closeInfo := <-c.cfg.ChainEvents.LocalUnilateralClosure:
log.Infof("ChannelArbitrator(%v): local on-chain "+
"channel close", c.cfg.ChanPoint)
if c.state != StateCommitmentBroadcasted {
log.Errorf("ChannelArbitrator(%v): unexpected "+
"local on-chain channel close",
c.cfg.ChanPoint)
}
closeTx := closeInfo.CloseTx
resolutions, err := closeInfo.ContractResolutions.
UnwrapOrErr(
fmt.Errorf("resolutions not found"),
)
err := c.handleLocalForceCloseEvent(closeInfo)
if err != nil {
log.Errorf("ChannelArbitrator(%v): unable to "+
"get resolutions: %v", c.cfg.ChanPoint,
err)
log.Errorf("Failed to handle local force "+
"close: %v", err)
return
}
// We make sure that the htlc resolutions are present
// otherwise we would panic dereferencing the pointer.
//
// TODO(ziggie): Refactor ContractResolutions to use
// options.
if resolutions.HtlcResolutions == nil {
log.Errorf("ChannelArbitrator(%v): htlc "+
"resolutions not found",
c.cfg.ChanPoint)
return
}
contractRes := &ContractResolutions{
CommitHash: closeTx.TxHash(),
CommitResolution: resolutions.CommitResolution,
HtlcResolutions: *resolutions.HtlcResolutions,
AnchorResolution: resolutions.AnchorResolution,
}
// When processing a unilateral close event, we'll
// transition to the ContractClosed state. We'll log
// out the set of resolutions such that they are
// available to fetch in that state, we'll also write
// the commit set so we can reconstruct our chain
// actions on restart.
err = c.log.LogContractResolutions(contractRes)
if err != nil {
log.Errorf("Unable to write resolutions: %v",
err)
return
}
err = c.log.InsertConfirmedCommitSet(
&closeInfo.CommitSet,
)
if err != nil {
log.Errorf("Unable to write commit set: %v",
err)
return
}
// After the set of resolutions are successfully
// logged, we can safely close the channel. After this
// succeeds we won't be getting chain events anymore,
// so we must make sure we can recover on restart after
// it is marked closed. If the next state transition
// fails, we'll start up in the prior state again, and
// we will no longer be getting chain events. In this
// case we must manually re-trigger the state
// transition into StateContractClosed based on the
// close status of the channel.
err = c.cfg.MarkChannelClosed(
closeInfo.ChannelCloseSummary,
channeldb.ChanStatusLocalCloseInitiator,
)
if err != nil {
log.Errorf("Unable to mark "+
"channel closed: %v", err)
return
}
// We'll now advance our state machine until it reaches
// a terminal state.
_, _, err = c.advanceState(
uint32(closeInfo.SpendingHeight),
localCloseTrigger, &closeInfo.CommitSet,
)
if err != nil {
log.Errorf("Unable to advance state: %v", err)
}
// The remote party has broadcast the commitment on-chain.
// We'll examine our state to determine if we need to act at
// all.
case uniClosure := <-c.cfg.ChainEvents.RemoteUnilateralClosure:
log.Infof("ChannelArbitrator(%v): remote party has "+
"closed channel out on-chain", c.cfg.ChanPoint)
// If we don't have a self output, and there are no
// active HTLC's, then we can immediately mark the
// contract as fully resolved and exit.
contractRes := &ContractResolutions{
CommitHash: *uniClosure.SpenderTxHash,
CommitResolution: uniClosure.CommitResolution,
HtlcResolutions: *uniClosure.HtlcResolutions,
AnchorResolution: uniClosure.AnchorResolution,
}
// When processing a unilateral close event, we'll
// transition to the ContractClosed state. We'll log
// out the set of resolutions such that they are
// available to fetch in that state, we'll also write
// the commit set so we can reconstruct our chain
// actions on restart.
err := c.log.LogContractResolutions(contractRes)
err := c.handleRemoteForceCloseEvent(uniClosure)
if err != nil {
log.Errorf("Unable to write resolutions: %v",
err)
log.Errorf("Failed to handle remote force "+
"close: %v", err)
return
}
err = c.log.InsertConfirmedCommitSet(
&uniClosure.CommitSet,
)
if err != nil {
log.Errorf("Unable to write commit set: %v",
err)
return
}
// After the set of resolutions are successfully
// logged, we can safely close the channel. After this
// succeeds we won't be getting chain events anymore,
// so we must make sure we can recover on restart after
// it is marked closed. If the next state transition
// fails, we'll start up in the prior state again, and
// we will no longer be getting chain events. In this
// case we must manually re-trigger the state
// transition into StateContractClosed based on the
// close status of the channel.
closeSummary := &uniClosure.ChannelCloseSummary
err = c.cfg.MarkChannelClosed(
closeSummary,
channeldb.ChanStatusRemoteCloseInitiator,
)
if err != nil {
log.Errorf("Unable to mark channel closed: %v",
err)
return
}
// We'll now advance our state machine until it reaches
// a terminal state.
_, _, err = c.advanceState(
uint32(uniClosure.SpendingHeight),
remoteCloseTrigger, &uniClosure.CommitSet,
)
if err != nil {
log.Errorf("Unable to advance state: %v", err)
}
// The remote has breached the channel. As this is handled by
// the ChainWatcher and BreachArbitrator, we don't have to do
// anything in particular, so just advance our state and
// gracefully exit.
case breachInfo := <-c.cfg.ChainEvents.ContractBreach:
log.Infof("ChannelArbitrator(%v): remote party has "+
"breached channel!", c.cfg.ChanPoint)
// In the breach case, we'll only have anchor and
// breach resolutions.
contractRes := &ContractResolutions{
CommitHash: breachInfo.CommitHash,
BreachResolution: breachInfo.BreachResolution,
AnchorResolution: breachInfo.AnchorResolution,
}
// We'll transition to the ContractClosed state and log
// the set of resolutions such that they can be turned
// into resolvers later on. We'll also insert the
// CommitSet of the latest set of commitments.
err := c.log.LogContractResolutions(contractRes)
err := c.handleContractBreach(breachInfo)
if err != nil {
log.Errorf("Unable to write resolutions: %v",
err)
log.Errorf("Failed to handle contract breach: "+
"%v", err)
return
}
err = c.log.InsertConfirmedCommitSet(
&breachInfo.CommitSet,
)
if err != nil {
log.Errorf("Unable to write commit set: %v",
err)
return
}
// The channel is finally marked pending closed here as
// the BreachArbitrator and channel arbitrator have
// persisted the relevant states.
closeSummary := &breachInfo.CloseSummary
err = c.cfg.MarkChannelClosed(
closeSummary,
channeldb.ChanStatusRemoteCloseInitiator,
)
if err != nil {
log.Errorf("Unable to mark channel closed: %v",
err)
return
}
log.Infof("Breached channel=%v marked pending-closed",
breachInfo.BreachResolution.FundingOutPoint)
// We'll advance our state machine until it reaches a
// terminal state.
_, _, err = c.advanceState(
uint32(bestHeight), breachCloseTrigger,
&breachInfo.CommitSet,
)
if err != nil {
log.Errorf("Unable to advance state: %v", err)
}
// A new contract has just been resolved, we'll now check our
// log to see if all contracts have been resolved. If so, then
@ -3131,6 +2986,113 @@ func (c *ChannelArbitrator) channelAttendant(bestHeight int32,
}
}
// handleBlockbeat processes a newly received blockbeat by advancing the
// arbitrator's internal state using the received block height.
func (c *ChannelArbitrator) handleBlockbeat(beat chainio.Blockbeat) error {
// Notify we've processed the block.
defer c.NotifyBlockProcessed(beat, nil)
// If the state is StateContractClosed, StateWaitingFullResolution, or
// StateFullyResolved, there's no need to read the close event channel
// since the arbitrator can only get to this state after processing a
// previous close event and launched all its resolvers.
if c.state.IsContractClosed() {
log.Infof("ChannelArbitrator(%v): skipping reading close "+
"events in state=%v", c.cfg.ChanPoint, c.state)
// Launch all active resolvers when a new blockbeat is
// received. Even when the contract is closed, we still need
// this as the resolvers may transform into new ones. For
// already launched resolvers this will be a NOOP as they
// track their own `launched` states.
c.launchResolvers()
return nil
}
// Perform a non-blocking read on the close events in case the channel
// is closed in this blockbeat.
c.receiveAndProcessCloseEvent()
// Try to advance the state if we are in StateDefault.
if c.state == StateDefault {
// Now that a new block has arrived, we'll attempt to advance
// our state forward.
_, _, err := c.advanceState(
uint32(beat.Height()), chainTrigger, nil,
)
if err != nil {
return fmt.Errorf("unable to advance state: %w", err)
}
}
// Launch all active resolvers when a new blockbeat is received.
c.launchResolvers()
return nil
}
// receiveAndProcessCloseEvent does a non-blocking read on all the channel
// close event channels. If an event is received, it will be further processed.
func (c *ChannelArbitrator) receiveAndProcessCloseEvent() {
select {
// Received a coop close event, we now mark the channel as resolved and
// exit.
case closeInfo := <-c.cfg.ChainEvents.CooperativeClosure:
err := c.handleCoopCloseEvent(closeInfo)
if err != nil {
log.Errorf("Failed to handle coop close: %v", err)
return
}
// We have broadcast our commitment, and it is now confirmed onchain.
case closeInfo := <-c.cfg.ChainEvents.LocalUnilateralClosure:
if c.state != StateCommitmentBroadcasted {
log.Errorf("ChannelArbitrator(%v): unexpected "+
"local on-chain channel close", c.cfg.ChanPoint)
}
err := c.handleLocalForceCloseEvent(closeInfo)
if err != nil {
log.Errorf("Failed to handle local force close: %v",
err)
return
}
// The remote party has broadcast the commitment. We'll examine our
// state to determine if we need to act at all.
case uniClosure := <-c.cfg.ChainEvents.RemoteUnilateralClosure:
err := c.handleRemoteForceCloseEvent(uniClosure)
if err != nil {
log.Errorf("Failed to handle remote force close: %v",
err)
return
}
// The remote has breached the channel! We now launch the breach
// contract resolvers.
case breachInfo := <-c.cfg.ChainEvents.ContractBreach:
err := c.handleContractBreach(breachInfo)
if err != nil {
log.Errorf("Failed to handle contract breach: %v", err)
return
}
default:
log.Infof("ChannelArbitrator(%v) no close event",
c.cfg.ChanPoint)
}
}
// Name returns a human-readable string for this subsystem.
//
// NOTE: Part of chainio.Consumer interface.
func (c *ChannelArbitrator) Name() string {
return fmt.Sprintf("ChannelArbitrator(%v)", c.cfg.ChanPoint)
}
// checkLegacyBreach returns StateFullyResolved if the channel was closed with
// a breach transaction before the channel arbitrator launched its own breach
// resolver. StateContractClosed is returned if this is a modern breach close
@ -3416,3 +3378,226 @@ func (c *ChannelArbitrator) abandonForwards(htlcs fn.Set[uint64]) error {
return nil
}
// handleCoopCloseEvent takes a coop close event from ChainEvents, marks the
// channel as closed and advances the state.
func (c *ChannelArbitrator) handleCoopCloseEvent(
closeInfo *CooperativeCloseInfo) error {
log.Infof("ChannelArbitrator(%v) marking channel cooperatively closed "+
"at height %v", c.cfg.ChanPoint, closeInfo.CloseHeight)
err := c.cfg.MarkChannelClosed(
closeInfo.ChannelCloseSummary,
channeldb.ChanStatusCoopBroadcasted,
)
if err != nil {
return fmt.Errorf("unable to mark channel closed: %w", err)
}
// We'll now advance our state machine until it reaches a terminal
// state, and the channel is marked resolved.
_, _, err = c.advanceState(closeInfo.CloseHeight, coopCloseTrigger, nil)
if err != nil {
log.Errorf("Unable to advance state: %v", err)
}
return nil
}
// handleLocalForceCloseEvent takes a local force close event from ChainEvents,
// saves the contract resolutions to disk, marks the channel as closed and
// advances the state.
func (c *ChannelArbitrator) handleLocalForceCloseEvent(
closeInfo *LocalUnilateralCloseInfo) error {
closeTx := closeInfo.CloseTx
resolutions, err := closeInfo.ContractResolutions.
UnwrapOrErr(
fmt.Errorf("resolutions not found"),
)
if err != nil {
return fmt.Errorf("unable to get resolutions: %w", err)
}
// We make sure that the htlc resolutions are present,
// otherwise we would panic when dereferencing the pointer.
//
// TODO(ziggie): Refactor ContractResolutions to use
// options.
if resolutions.HtlcResolutions == nil {
return fmt.Errorf("htlc resolutions is nil")
}
log.Infof("ChannelArbitrator(%v): local force close tx=%v confirmed",
c.cfg.ChanPoint, closeTx.TxHash())
contractRes := &ContractResolutions{
CommitHash: closeTx.TxHash(),
CommitResolution: resolutions.CommitResolution,
HtlcResolutions: *resolutions.HtlcResolutions,
AnchorResolution: resolutions.AnchorResolution,
}
// When processing a unilateral close event, we'll transition to the
// ContractClosed state. We'll log out the set of resolutions such that
// they are available to fetch in that state, we'll also write the
// commit set so we can reconstruct our chain actions on restart.
err = c.log.LogContractResolutions(contractRes)
if err != nil {
return fmt.Errorf("unable to write resolutions: %w", err)
}
err = c.log.InsertConfirmedCommitSet(&closeInfo.CommitSet)
if err != nil {
return fmt.Errorf("unable to write commit set: %w", err)
}
// After the set of resolutions are successfully logged, we can safely
// close the channel. After this succeeds we won't be getting chain
// events anymore, so we must make sure we can recover on restart after
// it is marked closed. If the next state transition fails, we'll start
// up in the prior state again, and we will no longer be getting chain
// events. In this case we must manually re-trigger the state
// transition into StateContractClosed based on the close status of the
// channel.
err = c.cfg.MarkChannelClosed(
closeInfo.ChannelCloseSummary,
channeldb.ChanStatusLocalCloseInitiator,
)
if err != nil {
return fmt.Errorf("unable to mark channel closed: %w", err)
}
// We'll now advance our state machine until it reaches a terminal
// state.
_, _, err = c.advanceState(
uint32(closeInfo.SpendingHeight),
localCloseTrigger, &closeInfo.CommitSet,
)
if err != nil {
log.Errorf("Unable to advance state: %v", err)
}
return nil
}
// handleRemoteForceCloseEvent takes a remote force close event from
// ChainEvents, saves the contract resolutions to disk, marks the channel as
// closed and advances the state.
func (c *ChannelArbitrator) handleRemoteForceCloseEvent(
closeInfo *RemoteUnilateralCloseInfo) error {
log.Infof("ChannelArbitrator(%v): remote party has force closed "+
"channel at height %v", c.cfg.ChanPoint,
closeInfo.SpendingHeight)
// If we don't have a self output, and there are no active HTLC's, then
// we can immediately mark the contract as fully resolved and exit.
contractRes := &ContractResolutions{
CommitHash: *closeInfo.SpenderTxHash,
CommitResolution: closeInfo.CommitResolution,
HtlcResolutions: *closeInfo.HtlcResolutions,
AnchorResolution: closeInfo.AnchorResolution,
}
// When processing a unilateral close event, we'll transition to the
// ContractClosed state. We'll log out the set of resolutions such that
// they are available to fetch in that state, we'll also write the
// commit set so we can reconstruct our chain actions on restart.
err := c.log.LogContractResolutions(contractRes)
if err != nil {
return fmt.Errorf("unable to write resolutions: %w", err)
}
err = c.log.InsertConfirmedCommitSet(&closeInfo.CommitSet)
if err != nil {
return fmt.Errorf("unable to write commit set: %w", err)
}
// After the set of resolutions are successfully logged, we can safely
// close the channel. After this succeeds we won't be getting chain
// events anymore, so we must make sure we can recover on restart after
// it is marked closed. If the next state transition fails, we'll start
// up in the prior state again, and we will no longer be getting chain
// events. In this case we must manually re-trigger the state
// transition into StateContractClosed based on the close status of the
// channel.
closeSummary := &closeInfo.ChannelCloseSummary
err = c.cfg.MarkChannelClosed(
closeSummary,
channeldb.ChanStatusRemoteCloseInitiator,
)
if err != nil {
return fmt.Errorf("unable to mark channel closed: %w", err)
}
// We'll now advance our state machine until it reaches a terminal
// state.
_, _, err = c.advanceState(
uint32(closeInfo.SpendingHeight),
remoteCloseTrigger, &closeInfo.CommitSet,
)
if err != nil {
log.Errorf("Unable to advance state: %v", err)
}
return nil
}
// handleContractBreach takes a breach close event from ChainEvents, saves the
// contract resolutions to disk, marks the channel as closed and advances the
// state.
func (c *ChannelArbitrator) handleContractBreach(
breachInfo *BreachCloseInfo) error {
closeSummary := &breachInfo.CloseSummary
log.Infof("ChannelArbitrator(%v): remote party has breached channel "+
"at height %v!", c.cfg.ChanPoint, closeSummary.CloseHeight)
// In the breach case, we'll only have anchor and breach resolutions.
contractRes := &ContractResolutions{
CommitHash: breachInfo.CommitHash,
BreachResolution: breachInfo.BreachResolution,
AnchorResolution: breachInfo.AnchorResolution,
}
// We'll transition to the ContractClosed state and log the set of
// resolutions such that they can be turned into resolvers later on.
// We'll also insert the CommitSet of the latest set of commitments.
err := c.log.LogContractResolutions(contractRes)
if err != nil {
return fmt.Errorf("unable to write resolutions: %w", err)
}
err = c.log.InsertConfirmedCommitSet(&breachInfo.CommitSet)
if err != nil {
return fmt.Errorf("unable to write commit set: %w", err)
}
// The channel is finally marked pending closed here as the
// BreachArbitrator and channel arbitrator have persisted the relevant
// states.
err = c.cfg.MarkChannelClosed(
closeSummary, channeldb.ChanStatusRemoteCloseInitiator,
)
if err != nil {
return fmt.Errorf("unable to mark channel closed: %w", err)
}
log.Infof("Breached channel=%v marked pending-closed",
breachInfo.BreachResolution.FundingOutPoint)
// We'll advance our state machine until it reaches a terminal state.
_, _, err = c.advanceState(
closeSummary.CloseHeight, breachCloseTrigger,
&breachInfo.CommitSet,
)
if err != nil {
log.Errorf("Unable to advance state: %v", err)
}
return nil
}

View file

@ -13,6 +13,8 @@ import (
"github.com/btcsuite/btcd/btcutil"
"github.com/btcsuite/btcd/chaincfg/chainhash"
"github.com/btcsuite/btcd/wire"
"github.com/davecgh/go-spew/spew"
"github.com/lightningnetwork/lnd/chainio"
"github.com/lightningnetwork/lnd/chainntnfs"
"github.com/lightningnetwork/lnd/channeldb"
"github.com/lightningnetwork/lnd/clock"
@ -227,6 +229,15 @@ func (c *chanArbTestCtx) CleanUp() {
}
}
// receiveBlockbeat mocks the behavior of a blockbeat being sent by the
// BlockbeatDispatcher, which essentially mocks the method `ProcessBlock`.
func (c *chanArbTestCtx) receiveBlockbeat(height int) {
go func() {
beat := newBeatFromHeight(int32(height))
c.chanArb.BlockbeatChan <- beat
}()
}
// AssertStateTransitions asserts that the state machine steps through the
// passed states in order.
func (c *chanArbTestCtx) AssertStateTransitions(expectedStates ...ArbitratorState) {
@ -286,7 +297,8 @@ func (c *chanArbTestCtx) Restart(restartClosure func(*chanArbTestCtx)) (*chanArb
restartClosure(newCtx)
}
if err := newCtx.chanArb.Start(nil); err != nil {
beat := newBeatFromHeight(0)
if err := newCtx.chanArb.Start(nil, beat); err != nil {
return nil, err
}
@ -513,7 +525,8 @@ func TestChannelArbitratorCooperativeClose(t *testing.T) {
chanArbCtx, err := createTestChannelArbitrator(t, log)
require.NoError(t, err, "unable to create ChannelArbitrator")
if err := chanArbCtx.chanArb.Start(nil); err != nil {
beat := newBeatFromHeight(0)
if err := chanArbCtx.chanArb.Start(nil, beat); err != nil {
t.Fatalf("unable to start ChannelArbitrator: %v", err)
}
t.Cleanup(func() {
@ -571,7 +584,8 @@ func TestChannelArbitratorRemoteForceClose(t *testing.T) {
require.NoError(t, err, "unable to create ChannelArbitrator")
chanArb := chanArbCtx.chanArb
if err := chanArb.Start(nil); err != nil {
beat := newBeatFromHeight(0)
if err := chanArb.Start(nil, beat); err != nil {
t.Fatalf("unable to start ChannelArbitrator: %v", err)
}
defer chanArb.Stop()
@ -624,7 +638,8 @@ func TestChannelArbitratorLocalForceClose(t *testing.T) {
require.NoError(t, err, "unable to create ChannelArbitrator")
chanArb := chanArbCtx.chanArb
if err := chanArb.Start(nil); err != nil {
beat := newBeatFromHeight(0)
if err := chanArb.Start(nil, beat); err != nil {
t.Fatalf("unable to start ChannelArbitrator: %v", err)
}
defer chanArb.Stop()
@ -736,7 +751,8 @@ func TestChannelArbitratorBreachClose(t *testing.T) {
chanArb.cfg.PreimageDB = newMockWitnessBeacon()
chanArb.cfg.Registry = &mockRegistry{}
if err := chanArb.Start(nil); err != nil {
beat := newBeatFromHeight(0)
if err := chanArb.Start(nil, beat); err != nil {
t.Fatalf("unable to start ChannelArbitrator: %v", err)
}
t.Cleanup(func() {
@ -863,7 +879,8 @@ func TestChannelArbitratorLocalForceClosePendingHtlc(t *testing.T) {
chanArb.cfg.PreimageDB = newMockWitnessBeacon()
chanArb.cfg.Registry = &mockRegistry{}
if err := chanArb.Start(nil); err != nil {
beat := newBeatFromHeight(0)
if err := chanArb.Start(nil, beat); err != nil {
t.Fatalf("unable to start ChannelArbitrator: %v", err)
}
defer chanArb.Stop()
@ -966,6 +983,7 @@ func TestChannelArbitratorLocalForceClosePendingHtlc(t *testing.T) {
},
},
}
closeTxid := closeTx.TxHash()
htlcOp := wire.OutPoint{
Hash: closeTx.TxHash(),
@ -1037,7 +1055,7 @@ func TestChannelArbitratorLocalForceClosePendingHtlc(t *testing.T) {
}
require.Equal(t, expectedFinalHtlcs, chanArbCtx.finalHtlcs)
// We'll no re-create the resolver, notice that we use the existing
// We'll now re-create the resolver, notice that we use the existing
// arbLog so it carries over the same on-disk state.
chanArbCtxNew, err := chanArbCtx.Restart(nil)
require.NoError(t, err, "unable to create ChannelArbitrator")
@ -1096,7 +1114,11 @@ func TestChannelArbitratorLocalForceClosePendingHtlc(t *testing.T) {
// Notify resolver that the HTLC output of the commitment has been
// spent.
oldNotifier.SpendChan <- &chainntnfs.SpendDetail{SpendingTx: closeTx}
oldNotifier.SpendChan <- &chainntnfs.SpendDetail{
SpendingTx: closeTx,
SpentOutPoint: &wire.OutPoint{},
SpenderTxHash: &closeTxid,
}
// Finally, we should also receive a resolution message instructing the
// switch to cancel back the HTLC.
@ -1123,8 +1145,12 @@ func TestChannelArbitratorLocalForceClosePendingHtlc(t *testing.T) {
default:
}
// Notify resolver that the second level transaction is spent.
oldNotifier.SpendChan <- &chainntnfs.SpendDetail{SpendingTx: closeTx}
// Notify resolver that the output of the timeout tx has been spent.
oldNotifier.SpendChan <- &chainntnfs.SpendDetail{
SpendingTx: closeTx,
SpentOutPoint: &wire.OutPoint{},
SpenderTxHash: &closeTxid,
}
// At this point channel should be marked as resolved.
chanArbCtxNew.AssertStateTransitions(StateFullyResolved)
@ -1148,7 +1174,8 @@ func TestChannelArbitratorLocalForceCloseRemoteConfirmed(t *testing.T) {
require.NoError(t, err, "unable to create ChannelArbitrator")
chanArb := chanArbCtx.chanArb
if err := chanArb.Start(nil); err != nil {
beat := newBeatFromHeight(0)
if err := chanArb.Start(nil, beat); err != nil {
t.Fatalf("unable to start ChannelArbitrator: %v", err)
}
defer chanArb.Stop()
@ -1255,7 +1282,8 @@ func TestChannelArbitratorLocalForceDoubleSpend(t *testing.T) {
require.NoError(t, err, "unable to create ChannelArbitrator")
chanArb := chanArbCtx.chanArb
if err := chanArb.Start(nil); err != nil {
beat := newBeatFromHeight(0)
if err := chanArb.Start(nil, beat); err != nil {
t.Fatalf("unable to start ChannelArbitrator: %v", err)
}
defer chanArb.Stop()
@ -1361,7 +1389,8 @@ func TestChannelArbitratorPersistence(t *testing.T) {
require.NoError(t, err, "unable to create ChannelArbitrator")
chanArb := chanArbCtx.chanArb
if err := chanArb.Start(nil); err != nil {
beat := newBeatFromHeight(0)
if err := chanArb.Start(nil, beat); err != nil {
t.Fatalf("unable to start ChannelArbitrator: %v", err)
}
@ -1479,7 +1508,8 @@ func TestChannelArbitratorForceCloseBreachedChannel(t *testing.T) {
require.NoError(t, err, "unable to create ChannelArbitrator")
chanArb := chanArbCtx.chanArb
if err := chanArb.Start(nil); err != nil {
beat := newBeatFromHeight(0)
if err := chanArb.Start(nil, beat); err != nil {
t.Fatalf("unable to start ChannelArbitrator: %v", err)
}
@ -1666,7 +1696,8 @@ func TestChannelArbitratorCommitFailure(t *testing.T) {
}
chanArb := chanArbCtx.chanArb
if err := chanArb.Start(nil); err != nil {
beat := newBeatFromHeight(0)
if err := chanArb.Start(nil, beat); err != nil {
t.Fatalf("unable to start ChannelArbitrator: %v", err)
}
@ -1750,7 +1781,8 @@ func TestChannelArbitratorEmptyResolutions(t *testing.T) {
chanArb.cfg.ClosingHeight = 100
chanArb.cfg.CloseType = channeldb.RemoteForceClose
if err := chanArb.Start(nil); err != nil {
beat := newBeatFromHeight(100)
if err := chanArb.Start(nil, beat); err != nil {
t.Fatalf("unable to start ChannelArbitrator: %v", err)
}
@ -1780,7 +1812,8 @@ func TestChannelArbitratorAlreadyForceClosed(t *testing.T) {
chanArbCtx, err := createTestChannelArbitrator(t, log)
require.NoError(t, err, "unable to create ChannelArbitrator")
chanArb := chanArbCtx.chanArb
if err := chanArb.Start(nil); err != nil {
beat := newBeatFromHeight(0)
if err := chanArb.Start(nil, beat); err != nil {
t.Fatalf("unable to start ChannelArbitrator: %v", err)
}
defer chanArb.Stop()
@ -1878,9 +1911,10 @@ func TestChannelArbitratorDanglingCommitForceClose(t *testing.T) {
t.Fatalf("unable to create ChannelArbitrator: %v", err)
}
chanArb := chanArbCtx.chanArb
if err := chanArb.Start(nil); err != nil {
t.Fatalf("unable to start ChannelArbitrator: %v", err)
}
beat := newBeatFromHeight(0)
err = chanArb.Start(nil, beat)
require.NoError(t, err)
defer chanArb.Stop()
// Now that our channel arb has started, we'll set up
@ -1924,7 +1958,8 @@ func TestChannelArbitratorDanglingCommitForceClose(t *testing.T) {
// now mine a block (height 5), which is 5 blocks away
// (our grace delta) from the expiry of that HTLC.
case testCase.htlcExpired:
chanArbCtx.chanArb.blocks <- 5
beat := newBeatFromHeight(5)
chanArbCtx.chanArb.BlockbeatChan <- beat
// Otherwise, we'll just trigger a regular force close
// request.
@ -2036,8 +2071,7 @@ func TestChannelArbitratorDanglingCommitForceClose(t *testing.T) {
// so instead, we'll mine another block which'll cause
// it to re-examine its state and realize there're no
// more HTLCs.
chanArbCtx.chanArb.blocks <- 6
chanArbCtx.AssertStateTransitions(StateFullyResolved)
chanArbCtx.receiveBlockbeat(6)
})
}
}
@ -2074,7 +2108,8 @@ func TestChannelArbitratorPendingExpiredHTLC(t *testing.T) {
return false
}
if err := chanArb.Start(nil); err != nil {
beat := newBeatFromHeight(0)
if err := chanArb.Start(nil, beat); err != nil {
t.Fatalf("unable to start ChannelArbitrator: %v", err)
}
t.Cleanup(func() {
@ -2108,13 +2143,15 @@ func TestChannelArbitratorPendingExpiredHTLC(t *testing.T) {
// We will advance the uptime to 10 seconds which should be still within
// the grace period and should not trigger going to chain.
testClock.SetTime(startTime.Add(time.Second * 10))
chanArbCtx.chanArb.blocks <- 5
beat = newBeatFromHeight(5)
chanArbCtx.chanArb.BlockbeatChan <- beat
chanArbCtx.AssertState(StateDefault)
// We will advance the uptime to 16 seconds which should trigger going
// to chain.
testClock.SetTime(startTime.Add(time.Second * 16))
chanArbCtx.chanArb.blocks <- 6
beat = newBeatFromHeight(6)
chanArbCtx.chanArb.BlockbeatChan <- beat
chanArbCtx.AssertStateTransitions(
StateBroadcastCommit,
StateCommitmentBroadcasted,
@ -2227,8 +2264,8 @@ func TestRemoteCloseInitiator(t *testing.T) {
"ChannelArbitrator: %v", err)
}
chanArb := chanArbCtx.chanArb
if err := chanArb.Start(nil); err != nil {
beat := newBeatFromHeight(0)
if err := chanArb.Start(nil, beat); err != nil {
t.Fatalf("unable to start "+
"ChannelArbitrator: %v", err)
}
@ -2482,7 +2519,7 @@ func TestSweepAnchors(t *testing.T) {
// Set current block height.
heightHint := uint32(1000)
chanArbCtx.chanArb.blocks <- int32(heightHint)
chanArbCtx.receiveBlockbeat(int(heightHint))
htlcIndexBase := uint64(99)
deadlineDelta := uint32(10)
@ -2645,7 +2682,7 @@ func TestSweepLocalAnchor(t *testing.T) {
// Set current block height.
heightHint := uint32(1000)
chanArbCtx.chanArb.blocks <- int32(heightHint)
chanArbCtx.receiveBlockbeat(int(heightHint))
htlcIndex := uint64(99)
deadlineDelta := uint32(10)
@ -2779,7 +2816,9 @@ func TestChannelArbitratorAnchors(t *testing.T) {
},
}
if err := chanArb.Start(nil); err != nil {
heightHint := uint32(1000)
beat := newBeatFromHeight(int32(heightHint))
if err := chanArb.Start(nil, beat); err != nil {
t.Fatalf("unable to start ChannelArbitrator: %v", err)
}
t.Cleanup(func() {
@ -2791,27 +2830,28 @@ func TestChannelArbitratorAnchors(t *testing.T) {
}
chanArb.UpdateContractSignals(signals)
// Set current block height.
heightHint := uint32(1000)
chanArbCtx.chanArb.blocks <- int32(heightHint)
htlcAmt := lnwire.MilliSatoshi(1_000_000)
// Create testing HTLCs.
deadlineDelta := uint32(10)
deadlinePreimageDelta := deadlineDelta + 2
spendingHeight := uint32(beat.Height())
deadlineDelta := uint32(100)
deadlinePreimageDelta := deadlineDelta
htlcWithPreimage := channeldb.HTLC{
HtlcIndex: 99,
RefundTimeout: heightHint + deadlinePreimageDelta,
HtlcIndex: 99,
// RefundTimeout is 1100 (spendingHeight + deadlinePreimageDelta).
RefundTimeout: spendingHeight + deadlinePreimageDelta,
RHash: rHash,
Incoming: true,
Amt: htlcAmt,
}
expectedDeadline := deadlineDelta/2 + spendingHeight
deadlineHTLCdelta := deadlineDelta + 3
deadlineHTLCdelta := deadlineDelta + 40
htlc := channeldb.HTLC{
HtlcIndex: 100,
RefundTimeout: heightHint + deadlineHTLCdelta,
HtlcIndex: 100,
// RefundTimeout is 1140 (spendingHeight + deadlineHTLCdelta).
RefundTimeout: spendingHeight + deadlineHTLCdelta,
Amt: htlcAmt,
}
@ -2896,7 +2936,9 @@ func TestChannelArbitratorAnchors(t *testing.T) {
//nolint:ll
chanArb.cfg.ChainEvents.LocalUnilateralClosure <- &LocalUnilateralCloseInfo{
SpendDetail: &chainntnfs.SpendDetail{},
SpendDetail: &chainntnfs.SpendDetail{
SpendingHeight: int32(spendingHeight),
},
LocalForceCloseSummary: &lnwallet.LocalForceCloseSummary{
CloseTx: closeTx,
ContractResolutions: fn.Some(lnwallet.ContractResolutions{
@ -2960,12 +3002,14 @@ func TestChannelArbitratorAnchors(t *testing.T) {
// to htlcWithPreimage's CLTV.
require.Equal(t, 2, len(chanArbCtx.sweeper.deadlines))
require.EqualValues(t,
heightHint+deadlinePreimageDelta/2,
chanArbCtx.sweeper.deadlines[0],
expectedDeadline,
chanArbCtx.sweeper.deadlines[0], "want %d, got %d",
expectedDeadline, chanArbCtx.sweeper.deadlines[0],
)
require.EqualValues(t,
heightHint+deadlinePreimageDelta/2,
chanArbCtx.sweeper.deadlines[1],
expectedDeadline,
chanArbCtx.sweeper.deadlines[1], "want %d, got %d",
expectedDeadline, chanArbCtx.sweeper.deadlines[1],
)
}
@ -3067,7 +3111,8 @@ func TestChannelArbitratorStartForceCloseFail(t *testing.T) {
return test.broadcastErr
}
err = chanArb.Start(nil)
beat := newBeatFromHeight(0)
err = chanArb.Start(nil, beat)
if !test.expectedStartup {
require.ErrorIs(t, err, test.broadcastErr)
@ -3115,7 +3160,8 @@ func assertResolverReport(t *testing.T, reports chan *channeldb.ResolverReport,
select {
case report := <-reports:
if !reflect.DeepEqual(report, expected) {
t.Fatalf("expected: %v, got: %v", expected, report)
t.Fatalf("expected: %v, got: %v", spew.Sdump(expected),
spew.Sdump(report))
}
case <-time.After(defaultTimeout):
@ -3146,3 +3192,11 @@ func (m *mockChannel) ForceCloseChan() (*wire.MsgTx, error) {
return &wire.MsgTx{}, nil
}
func newBeatFromHeight(height int32) *chainio.Beat {
epoch := chainntnfs.BlockEpoch{
Height: height,
}
return chainio.NewBeat(epoch)
}

View file

@ -4,7 +4,6 @@ import (
"encoding/binary"
"fmt"
"io"
"math"
"sync"
"github.com/btcsuite/btcd/btcutil"
@ -39,9 +38,6 @@ type commitSweepResolver struct {
// this HTLC on-chain.
commitResolution lnwallet.CommitOutputResolution
// resolved reflects if the contract has been fully resolved or not.
resolved bool
// broadcastHeight is the height that the original contract was
// broadcast to the main-chain at. We'll use this value to bound any
// historical queries to the chain for spends/confirmations.
@ -88,7 +84,7 @@ func newCommitSweepResolver(res lnwallet.CommitOutputResolution,
chanPoint: chanPoint,
}
r.initLogger(r)
r.initLogger(fmt.Sprintf("%T(%v)", r, r.commitResolution.SelfOutPoint))
r.initReport()
return r
@ -101,36 +97,6 @@ func (c *commitSweepResolver) ResolverKey() []byte {
return key[:]
}
// waitForHeight registers for block notifications and waits for the provided
// block height to be reached.
func waitForHeight(waitHeight uint32, notifier chainntnfs.ChainNotifier,
quit <-chan struct{}) error {
// Register for block epochs. After registration, the current height
// will be sent on the channel immediately.
blockEpochs, err := notifier.RegisterBlockEpochNtfn(nil)
if err != nil {
return err
}
defer blockEpochs.Cancel()
for {
select {
case newBlock, ok := <-blockEpochs.Epochs:
if !ok {
return errResolverShuttingDown
}
height := newBlock.Height
if height >= int32(waitHeight) {
return nil
}
case <-quit:
return errResolverShuttingDown
}
}
}
// waitForSpend waits for the given outpoint to be spent, and returns the
// details of the spending tx.
func waitForSpend(op *wire.OutPoint, pkScript []byte, heightHint uint32,
@ -195,73 +161,310 @@ func (c *commitSweepResolver) getCommitTxConfHeight() (uint32, error) {
// returned.
//
// NOTE: This function MUST be run as a goroutine.
// TODO(yy): fix the funlen in the next PR.
//
//nolint:funlen
func (c *commitSweepResolver) Resolve(_ bool) (ContractResolver, error) {
func (c *commitSweepResolver) Resolve() (ContractResolver, error) {
// If we're already resolved, then we can exit early.
if c.resolved {
if c.IsResolved() {
c.log.Errorf("already resolved")
return nil, nil
}
var sweepTxID chainhash.Hash
// Sweeper is going to join this input with other inputs if possible
// and publish the sweep tx. When the sweep tx confirms, it signals us
// through the result channel with the outcome. Wait for this to
// happen.
outcome := channeldb.ResolverOutcomeClaimed
select {
case sweepResult := <-c.sweepResultChan:
switch sweepResult.Err {
// If the remote party was able to sweep this output it's
// likely what we sent was actually a revoked commitment.
// Report the error and continue to wrap up the contract.
case sweep.ErrRemoteSpend:
c.log.Warnf("local commitment output was swept by "+
"remote party via %v", sweepResult.Tx.TxHash())
outcome = channeldb.ResolverOutcomeUnclaimed
// No errors, therefore continue processing.
case nil:
c.log.Infof("local commitment output fully resolved by "+
"sweep tx: %v", sweepResult.Tx.TxHash())
// Unknown errors.
default:
c.log.Errorf("unable to sweep input: %v",
sweepResult.Err)
return nil, sweepResult.Err
}
sweepTxID = sweepResult.Tx.TxHash()
case <-c.quit:
return nil, errResolverShuttingDown
}
// Funds have been swept and balance is no longer in limbo.
c.reportLock.Lock()
if outcome == channeldb.ResolverOutcomeClaimed {
// We only record the balance as recovered if it actually came
// back to us.
c.currentReport.RecoveredBalance = c.currentReport.LimboBalance
}
c.currentReport.LimboBalance = 0
c.reportLock.Unlock()
report := c.currentReport.resolverReport(
&sweepTxID, channeldb.ResolverTypeCommit, outcome,
)
c.markResolved()
// Checkpoint the resolver with a closure that will write the outcome
// of the resolver and its sweep transaction to disk.
return nil, c.Checkpoint(c, report)
}
// Stop signals the resolver to cancel any current resolution processes, and
// suspend.
//
// NOTE: Part of the ContractResolver interface.
func (c *commitSweepResolver) Stop() {
c.log.Debugf("stopping...")
defer c.log.Debugf("stopped")
close(c.quit)
}
// SupplementState allows the user of a ContractResolver to supplement it with
// state required for the proper resolution of a contract.
//
// NOTE: Part of the ContractResolver interface.
func (c *commitSweepResolver) SupplementState(state *channeldb.OpenChannel) {
if state.ChanType.HasLeaseExpiration() {
c.leaseExpiry = state.ThawHeight
}
c.localChanCfg = state.LocalChanCfg
c.channelInitiator = state.IsInitiator
c.chanType = state.ChanType
}
// hasCLTV denotes whether the resolver must wait for an additional CLTV to
// expire before resolving the contract.
func (c *commitSweepResolver) hasCLTV() bool {
return c.channelInitiator && c.leaseExpiry > 0
}
// Encode writes an encoded version of the ContractResolver into the passed
// Writer.
//
// NOTE: Part of the ContractResolver interface.
func (c *commitSweepResolver) Encode(w io.Writer) error {
if err := encodeCommitResolution(w, &c.commitResolution); err != nil {
return err
}
if err := binary.Write(w, endian, c.IsResolved()); err != nil {
return err
}
if err := binary.Write(w, endian, c.broadcastHeight); err != nil {
return err
}
if _, err := w.Write(c.chanPoint.Hash[:]); err != nil {
return err
}
err := binary.Write(w, endian, c.chanPoint.Index)
if err != nil {
return err
}
// Previously a sweep tx was serialized at this point. Refactoring
// removed this, but keep in mind that this data may still be present in
// the database.
return nil
}
// newCommitSweepResolverFromReader attempts to decode an encoded
// ContractResolver from the passed Reader instance, returning an active
// ContractResolver instance.
func newCommitSweepResolverFromReader(r io.Reader, resCfg ResolverConfig) (
*commitSweepResolver, error) {
c := &commitSweepResolver{
contractResolverKit: *newContractResolverKit(resCfg),
}
if err := decodeCommitResolution(r, &c.commitResolution); err != nil {
return nil, err
}
var resolved bool
if err := binary.Read(r, endian, &resolved); err != nil {
return nil, err
}
if resolved {
c.markResolved()
}
if err := binary.Read(r, endian, &c.broadcastHeight); err != nil {
return nil, err
}
_, err := io.ReadFull(r, c.chanPoint.Hash[:])
if err != nil {
return nil, err
}
err = binary.Read(r, endian, &c.chanPoint.Index)
if err != nil {
return nil, err
}
// Previously a sweep tx was deserialized at this point. Refactoring
// removed this, but keep in mind that this data may still be present in
// the database.
c.initLogger(fmt.Sprintf("%T(%v)", c, c.commitResolution.SelfOutPoint))
c.initReport()
return c, nil
}
// report returns a report on the resolution state of the contract.
func (c *commitSweepResolver) report() *ContractReport {
c.reportLock.Lock()
defer c.reportLock.Unlock()
cpy := c.currentReport
return &cpy
}
// initReport initializes the pending channels report for this resolver.
func (c *commitSweepResolver) initReport() {
amt := btcutil.Amount(
c.commitResolution.SelfOutputSignDesc.Output.Value,
)
// Set the initial report. All fields are filled in, except for the
// maturity height which remains 0 until Resolve() is executed.
//
// TODO(joostjager): Resolvers only activate after the commit tx
// confirms. With more refactoring in channel arbitrator, it would be
// possible to make the confirmation height part of ResolverConfig and
// populate MaturityHeight here.
c.currentReport = ContractReport{
Outpoint: c.commitResolution.SelfOutPoint,
Type: ReportOutputUnencumbered,
Amount: amt,
LimboBalance: amt,
RecoveredBalance: 0,
}
}
// A compile time assertion to ensure commitSweepResolver meets the
// ContractResolver interface.
var _ reportingContractResolver = (*commitSweepResolver)(nil)
// Launch constructs a commit input and offers it to the sweeper.
func (c *commitSweepResolver) Launch() error {
if c.isLaunched() {
c.log.Tracef("already launched")
return nil
}
c.log.Debugf("launching resolver...")
c.markLaunched()
// If we're already resolved, then we can exit early.
if c.IsResolved() {
c.log.Errorf("already resolved")
return nil
}
confHeight, err := c.getCommitTxConfHeight()
if err != nil {
return nil, err
return err
}
// Wait up until the CSV expires, unless we also have a CLTV that
// expires after.
unlockHeight := confHeight + c.commitResolution.MaturityDelay
if c.hasCLTV() {
unlockHeight = uint32(math.Max(
float64(unlockHeight), float64(c.leaseExpiry),
))
unlockHeight = max(unlockHeight, c.leaseExpiry)
}
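// Note: the generic max builtin (Go 1.21+) replaces the previous
// math.Max round-trip through float64, which is also why the "math"
// import is dropped at the top of this file; both forms compute the
// same value:
//
//	uint32(math.Max(float64(unlockHeight), float64(c.leaseExpiry)))
//	max(unlockHeight, c.leaseExpiry)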
c.log.Debugf("commit conf_height=%v, unlock_height=%v",
confHeight, unlockHeight)
// Update report now that we learned the confirmation height.
c.reportLock.Lock()
c.currentReport.MaturityHeight = unlockHeight
c.reportLock.Unlock()
// If there is a csv/cltv lock, we'll wait for that.
if c.commitResolution.MaturityDelay > 0 || c.hasCLTV() {
// Determine what height we should wait until for the locks to
// expire.
var waitHeight uint32
switch {
// If we have both a csv and cltv lock, we'll need to look at
// both and see which expires later.
case c.commitResolution.MaturityDelay > 0 && c.hasCLTV():
c.log.Debugf("waiting for CSV and CLTV lock to expire "+
"at height %v", unlockHeight)
// If the CSV expires after the CLTV, or there is no
// CLTV, then we can broadcast a sweep a block before.
// Otherwise, we need to broadcast at our expected
// unlock height.
waitHeight = uint32(math.Max(
float64(unlockHeight-1), float64(c.leaseExpiry),
))
// If we only have a csv lock, wait for the height before the
// lock expires as the spend path should be unlocked by then.
case c.commitResolution.MaturityDelay > 0:
c.log.Debugf("waiting for CSV lock to expire at "+
"height %v", unlockHeight)
waitHeight = unlockHeight - 1
}
err := waitForHeight(waitHeight, c.Notifier, c.quit)
if err != nil {
return nil, err
}
// Derive the witness type for this input.
witnessType, err := c.decideWitnessType()
if err != nil {
return err
}
// We'll craft an input with all the information required for the
// sweeper to create a fully valid sweeping transaction to recover
// these coins.
var inp *input.BaseInput
if c.hasCLTV() {
inp = input.NewCsvInputWithCltv(
&c.commitResolution.SelfOutPoint, witnessType,
&c.commitResolution.SelfOutputSignDesc,
c.broadcastHeight, c.commitResolution.MaturityDelay,
c.leaseExpiry, input.WithResolutionBlob(
c.commitResolution.ResolutionBlob,
),
)
} else {
inp = input.NewCsvInput(
&c.commitResolution.SelfOutPoint, witnessType,
&c.commitResolution.SelfOutputSignDesc,
c.broadcastHeight, c.commitResolution.MaturityDelay,
input.WithResolutionBlob(
c.commitResolution.ResolutionBlob,
),
)
}
// TODO(roasbeef): instead of adding ctrl block to the sign desc, make
// new input type, have sweeper set it?
// Calculate the budget for sweeping this input.
budget := calculateBudget(
btcutil.Amount(inp.SignDesc().Output.Value),
c.Budget.ToLocalRatio, c.Budget.ToLocal,
)
c.log.Infof("sweeping commit output %v using budget=%v", witnessType,
budget)
// With our input constructed, we'll now offer it to the sweeper.
resultChan, err := c.Sweeper.SweepInput(
inp, sweep.Params{
Budget: budget,
// Specify a nil deadline here as there's no time
// pressure.
DeadlineHeight: fn.None[int32](),
},
)
if err != nil {
c.log.Errorf("unable to sweep input: %v", err)
return err
}
c.sweepResultChan = resultChan
return nil
}
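The handshake between the two halves is easy to miss in a diff this size: Launch stores the sweeper's result channel on the resolver kit, and the rewritten Resolve above blocks on that same channel. A sketch of the pairing, using only fields and names shown in this diff:

// In Launch(), once SweepInput succeeds:
c.sweepResultChan = resultChan

// In Resolve(), waiting for the sweep outcome:
select {
case sweepResult := <-c.sweepResultChan:
	// Inspect sweepResult.Err and sweepResult.Tx as shown above.
case <-c.quit:
	return nil, errResolverShuttingDown
}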
// decideWitnessType returns the witness type for the input.
func (c *commitSweepResolver) decideWitnessType() (input.WitnessType, error) {
var (
isLocalCommitTx bool
signDesc = c.commitResolution.SelfOutputSignDesc
signDesc = c.commitResolution.SelfOutputSignDesc
)
switch {
@ -290,6 +493,7 @@ func (c *commitSweepResolver) Resolve(_ bool) (ContractResolver, error) {
default:
isLocalCommitTx = signDesc.WitnessScript[0] == txscript.OP_IF
}
isDelayedOutput := c.commitResolution.MaturityDelay != 0
c.log.Debugf("isDelayedOutput=%v, isLocalCommitTx=%v", isDelayedOutput,
@ -339,249 +543,5 @@ func (c *commitSweepResolver) Resolve(_ bool) (ContractResolver, error) {
witnessType = input.CommitmentNoDelay
}
c.log.Infof("Sweeping with witness type: %v", witnessType)
// We'll craft an input with all the information required for the
// sweeper to create a fully valid sweeping transaction to recover
// these coins.
var inp *input.BaseInput
if c.hasCLTV() {
inp = input.NewCsvInputWithCltv(
&c.commitResolution.SelfOutPoint, witnessType,
&c.commitResolution.SelfOutputSignDesc,
c.broadcastHeight, c.commitResolution.MaturityDelay,
c.leaseExpiry,
input.WithResolutionBlob(
c.commitResolution.ResolutionBlob,
),
)
} else {
inp = input.NewCsvInput(
&c.commitResolution.SelfOutPoint, witnessType,
&c.commitResolution.SelfOutputSignDesc,
c.broadcastHeight, c.commitResolution.MaturityDelay,
input.WithResolutionBlob(
c.commitResolution.ResolutionBlob,
),
)
}
// TODO(roasbeef): instead of adding ctrl block to the sign desc, make
// new input type, have sweeper set it?
// Calculate the budget for sweeping this input.
budget := calculateBudget(
btcutil.Amount(inp.SignDesc().Output.Value),
c.Budget.ToLocalRatio, c.Budget.ToLocal,
)
c.log.Infof("Sweeping commit output using budget=%v", budget)
// With our input constructed, we'll now offer it to the sweeper.
resultChan, err := c.Sweeper.SweepInput(
inp, sweep.Params{
Budget: budget,
// Specify a nil deadline here as there's no time
// pressure.
DeadlineHeight: fn.None[int32](),
},
)
if err != nil {
c.log.Errorf("unable to sweep input: %v", err)
return nil, err
}
var sweepTxID chainhash.Hash
// Sweeper is going to join this input with other inputs if possible
// and publish the sweep tx. When the sweep tx confirms, it signals us
// through the result channel with the outcome. Wait for this to
// happen.
outcome := channeldb.ResolverOutcomeClaimed
select {
case sweepResult := <-resultChan:
switch sweepResult.Err {
// If the remote party was able to sweep this output it's
// likely what we sent was actually a revoked commitment.
// Report the error and continue to wrap up the contract.
case sweep.ErrRemoteSpend:
c.log.Warnf("local commitment output was swept by "+
"remote party via %v", sweepResult.Tx.TxHash())
outcome = channeldb.ResolverOutcomeUnclaimed
// No errors, therefore continue processing.
case nil:
c.log.Infof("local commitment output fully resolved by "+
"sweep tx: %v", sweepResult.Tx.TxHash())
// Unknown errors.
default:
c.log.Errorf("unable to sweep input: %v",
sweepResult.Err)
return nil, sweepResult.Err
}
sweepTxID = sweepResult.Tx.TxHash()
case <-c.quit:
return nil, errResolverShuttingDown
}
// Funds have been swept and balance is no longer in limbo.
c.reportLock.Lock()
if outcome == channeldb.ResolverOutcomeClaimed {
// We only record the balance as recovered if it actually came
// back to us.
c.currentReport.RecoveredBalance = c.currentReport.LimboBalance
}
c.currentReport.LimboBalance = 0
c.reportLock.Unlock()
report := c.currentReport.resolverReport(
&sweepTxID, channeldb.ResolverTypeCommit, outcome,
)
c.resolved = true
// Checkpoint the resolver with a closure that will write the outcome
// of the resolver and its sweep transaction to disk.
return nil, c.Checkpoint(c, report)
return witnessType, nil
}
// Stop signals the resolver to cancel any current resolution processes, and
// suspend.
//
// NOTE: Part of the ContractResolver interface.
func (c *commitSweepResolver) Stop() {
close(c.quit)
}
// IsResolved returns true if the stored state in the resolve is fully
// resolved. In this case the target output can be forgotten.
//
// NOTE: Part of the ContractResolver interface.
func (c *commitSweepResolver) IsResolved() bool {
return c.resolved
}
// SupplementState allows the user of a ContractResolver to supplement it with
// state required for the proper resolution of a contract.
//
// NOTE: Part of the ContractResolver interface.
func (c *commitSweepResolver) SupplementState(state *channeldb.OpenChannel) {
if state.ChanType.HasLeaseExpiration() {
c.leaseExpiry = state.ThawHeight
}
c.localChanCfg = state.LocalChanCfg
c.channelInitiator = state.IsInitiator
c.chanType = state.ChanType
}
// hasCLTV denotes whether the resolver must wait for an additional CLTV to
// expire before resolving the contract.
func (c *commitSweepResolver) hasCLTV() bool {
return c.channelInitiator && c.leaseExpiry > 0
}
// Encode writes an encoded version of the ContractResolver into the passed
// Writer.
//
// NOTE: Part of the ContractResolver interface.
func (c *commitSweepResolver) Encode(w io.Writer) error {
if err := encodeCommitResolution(w, &c.commitResolution); err != nil {
return err
}
if err := binary.Write(w, endian, c.resolved); err != nil {
return err
}
if err := binary.Write(w, endian, c.broadcastHeight); err != nil {
return err
}
if _, err := w.Write(c.chanPoint.Hash[:]); err != nil {
return err
}
err := binary.Write(w, endian, c.chanPoint.Index)
if err != nil {
return err
}
// Previously a sweep tx was serialized at this point. Refactoring
// removed this, but keep in mind that this data may still be present in
// the database.
return nil
}
// newCommitSweepResolverFromReader attempts to decode an encoded
// ContractResolver from the passed Reader instance, returning an active
// ContractResolver instance.
func newCommitSweepResolverFromReader(r io.Reader, resCfg ResolverConfig) (
*commitSweepResolver, error) {
c := &commitSweepResolver{
contractResolverKit: *newContractResolverKit(resCfg),
}
if err := decodeCommitResolution(r, &c.commitResolution); err != nil {
return nil, err
}
if err := binary.Read(r, endian, &c.resolved); err != nil {
return nil, err
}
if err := binary.Read(r, endian, &c.broadcastHeight); err != nil {
return nil, err
}
_, err := io.ReadFull(r, c.chanPoint.Hash[:])
if err != nil {
return nil, err
}
err = binary.Read(r, endian, &c.chanPoint.Index)
if err != nil {
return nil, err
}
// Previously a sweep tx was deserialized at this point. Refactoring
// removed this, but keep in mind that this data may still be present in
// the database.
c.initLogger(c)
c.initReport()
return c, nil
}
// report returns a report on the resolution state of the contract.
func (c *commitSweepResolver) report() *ContractReport {
c.reportLock.Lock()
defer c.reportLock.Unlock()
cpy := c.currentReport
return &cpy
}
// initReport initializes the pending channels report for this resolver.
func (c *commitSweepResolver) initReport() {
amt := btcutil.Amount(
c.commitResolution.SelfOutputSignDesc.Output.Value,
)
// Set the initial report. All fields are filled in, except for the
// maturity height which remains 0 until Resolve() is executed.
//
// TODO(joostjager): Resolvers only activate after the commit tx
// confirms. With more refactoring in channel arbitrator, it would be
// possible to make the confirmation height part of ResolverConfig and
// populate MaturityHeight here.
c.currentReport = ContractReport{
Outpoint: c.commitResolution.SelfOutPoint,
Type: ReportOutputUnencumbered,
Amount: amt,
LimboBalance: amt,
RecoveredBalance: 0,
}
}
// A compile time assertion to ensure commitSweepResolver meets the
// ContractResolver interface.
var _ reportingContractResolver = (*commitSweepResolver)(nil)


@ -15,6 +15,7 @@ import (
"github.com/lightningnetwork/lnd/lnwallet"
"github.com/lightningnetwork/lnd/lnwallet/chainfee"
"github.com/lightningnetwork/lnd/sweep"
"github.com/stretchr/testify/require"
)
type commitSweepResolverTestContext struct {
@ -82,7 +83,10 @@ func (i *commitSweepResolverTestContext) resolve() {
// Start resolver.
i.resolverResultChan = make(chan resolveResult, 1)
go func() {
nextResolver, err := i.resolver.Resolve(false)
err := i.resolver.Launch()
require.NoError(i.t, err)
nextResolver, err := i.resolver.Resolve()
i.resolverResultChan <- resolveResult{
nextResolver: nextResolver,
err: err,
@ -90,12 +94,6 @@ func (i *commitSweepResolverTestContext) resolve() {
}()
}
func (i *commitSweepResolverTestContext) notifyEpoch(height int32) {
i.notifier.EpochChan <- &chainntnfs.BlockEpoch{
Height: height,
}
}
func (i *commitSweepResolverTestContext) waitForResult() {
i.t.Helper()
@ -292,22 +290,10 @@ func testCommitSweepResolverDelay(t *testing.T, sweepErr error) {
t.Fatal("report maturity height incorrect")
}
// Notify initial block height. The csv lock is still in effect, so we
// don't expect any sweep to happen yet.
ctx.notifyEpoch(testInitialBlockHeight)
select {
case <-ctx.sweeper.sweptInputs:
t.Fatal("no sweep expected")
case <-time.After(sweepProcessInterval):
}
// A new block arrives. The commit tx confirmed at height -1 and the csv
// is 3, so a spend will be valid in the first block after height +1.
ctx.notifyEpoch(testInitialBlockHeight + 1)
<-ctx.sweeper.sweptInputs
// Notify initial block height. Although the csv lock is still in
// effect, we expect the input to be sent to the sweeper before the csv
// lock expires.
//
// Set the resolution report outcome based on whether our sweep
// succeeded.
outcome := channeldb.ResolverOutcomeClaimed


@ -5,11 +5,13 @@ import (
"errors"
"fmt"
"io"
"sync/atomic"
"github.com/btcsuite/btcd/wire"
"github.com/btcsuite/btclog/v2"
"github.com/lightningnetwork/lnd/channeldb"
"github.com/lightningnetwork/lnd/fn/v2"
"github.com/lightningnetwork/lnd/sweep"
)
var (
@ -35,6 +37,17 @@ type ContractResolver interface {
// resides within.
ResolverKey() []byte
// Launch starts the resolver by constructing an input and offering it
// to the sweeper. Once offered, it's expected to monitor the sweeping
// result in a goroutine invoked by calling Resolve.
//
// NOTE: We can call `Resolve` inside a goroutine at the end of this
// method to avoid calling it in the ChannelArbitrator. However, there
// are some DB-related operations such as SwapContract/ResolveContract
// which need to be done inside the resolvers instead, which needs a
// deeper refactoring.
Launch() error
// Resolve instructs the contract resolver to resolve the output
// on-chain. Once the output has been *fully* resolved, the function
// should return immediately with a nil ContractResolver value for the
@ -42,7 +55,7 @@ type ContractResolver interface {
// resolution, then another resolve is returned.
//
// NOTE: This function MUST be run as a goroutine.
Resolve(immediate bool) (ContractResolver, error)
Resolve() (ContractResolver, error)
// SupplementState allows the user of a ContractResolver to supplement
// it with state required for the proper resolution of a contract.
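Putting the new interface together: a caller launches a resolver first (non-blocking, it only offers the input to the sweeper) and then runs Resolve in its own goroutine to wait for the outcome, which is how the updated unit tests later in this diff drive their resolvers. A minimal sketch of that call pattern (resolver stands for any ContractResolver; error handling abbreviated):

if err := resolver.Launch(); err != nil {
	return err
}

go func() {
	nextResolver, err := resolver.Resolve()
	if err != nil {
		log.Errorf("unable to resolve contract: %v", err)
		return
	}

	// A non-nil nextResolver means the contract needs another
	// resolution stage.
	_ = nextResolver
}()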
@ -109,6 +122,21 @@ type contractResolverKit struct {
log btclog.Logger
quit chan struct{}
// sweepResultChan is the result chan returned from calling
// `SweepInput`. It should be mounted to the specific resolver once the
// input has been offered to the sweeper.
sweepResultChan chan sweep.Result
// launched specifies whether the resolver has been launched. Calling
// `Launch` will be a no-op if this is true. This value is not saved to
// db, as it's fine to relaunch a resolver after a restart. It's only
// used to avoid resending requests to the sweeper when a new blockbeat
// is received.
launched atomic.Bool
// resolved reflects if the contract has been fully resolved or not.
resolved atomic.Bool
}
// newContractResolverKit instantiates the mix-in struct.
@ -120,11 +148,36 @@ func newContractResolverKit(cfg ResolverConfig) *contractResolverKit {
}
// initLogger initializes the resolver-specific logger.
func (r *contractResolverKit) initLogger(resolver ContractResolver) {
logPrefix := fmt.Sprintf("%T(%v):", resolver, r.ChanPoint)
func (r *contractResolverKit) initLogger(prefix string) {
logPrefix := fmt.Sprintf("ChannelArbitrator(%v): %s:", r.ChanPoint,
prefix)
r.log = log.WithPrefix(logPrefix)
}
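With the call sites above updated to pass a descriptive prefix such as fmt.Sprintf("%T(%v)", r, r.commitResolution.SelfOutPoint), a resolver's log lines now identify both the owning channel and the specific contract, roughly of the form (illustrative placeholder values):

ChannelArbitrator(<chan_point>): *contractcourt.commitSweepResolver(<outpoint>): sweeping commit output ...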
// IsResolved returns true if the stored state in the resolve is fully
// resolved. In this case the target output can be forgotten.
//
// NOTE: Part of the ContractResolver interface.
func (r *contractResolverKit) IsResolved() bool {
return r.resolved.Load()
}
// markResolved marks the resolver as resolved.
func (r *contractResolverKit) markResolved() {
r.resolved.Store(true)
}
// isLaunched returns true if the resolver has been launched.
func (r *contractResolverKit) isLaunched() bool {
return r.launched.Load()
}
// markLaunched marks the resolver as launched.
func (r *contractResolverKit) markLaunched() {
r.launched.Store(true)
}
var (
// errResolverShuttingDown is returned when the resolver stops
// progressing because it received the quit signal.


@ -78,6 +78,37 @@ func (h *htlcIncomingContestResolver) processFinalHtlcFail() error {
return nil
}
// Launch will call the inner resolver's launch method if the preimage can be
// found, otherwise it's a no-op.
func (h *htlcIncomingContestResolver) Launch() error {
// NOTE: we don't mark this resolver as launched as the inner resolver
// will set it when it's launched.
if h.isLaunched() {
h.log.Tracef("already launched")
return nil
}
h.log.Debugf("launching contest resolver...")
// Query the preimage and apply it if we already know it.
applied, err := h.findAndapplyPreimage()
if err != nil {
return err
}
// No preimage found, leave it to be handled by the resolver.
if !applied {
return nil
}
h.log.Debugf("found preimage for htlc=%x, transforming into success "+
"resolver and launching it", h.htlc.RHash)
// Once we've applied the preimage, we'll launch the inner resolver to
// attempt to claim the HTLC.
return h.htlcSuccessResolver.Launch()
}
// Resolve attempts to resolve this contract. As we don't yet know of the
// preimage for the contract, we'll wait for one of two things to happen:
//
@ -90,12 +121,11 @@ func (h *htlcIncomingContestResolver) processFinalHtlcFail() error {
// as we have no remaining actions left at our disposal.
//
// NOTE: Part of the ContractResolver interface.
func (h *htlcIncomingContestResolver) Resolve(
_ bool) (ContractResolver, error) {
func (h *htlcIncomingContestResolver) Resolve() (ContractResolver, error) {
// If we're already full resolved, then we don't have anything further
// to do.
if h.resolved {
if h.IsResolved() {
h.log.Errorf("already resolved")
return nil, nil
}
@ -103,15 +133,14 @@ func (h *htlcIncomingContestResolver) Resolve(
// now.
payload, nextHopOnionBlob, err := h.decodePayload()
if err != nil {
log.Debugf("ChannelArbitrator(%v): cannot decode payload of "+
"htlc %v", h.ChanPoint, h.HtlcPoint())
h.log.Debugf("cannot decode payload of htlc %v", h.HtlcPoint())
// If we've locked in an htlc with an invalid payload on our
// commitment tx, we don't need to resolve it. The other party
// will time it out and get their funds back. This situation
// can present itself when we crash before processRemoteAdds in
// the link has run.
h.resolved = true
h.markResolved()
if err := h.processFinalHtlcFail(); err != nil {
return nil, err
@ -164,7 +193,7 @@ func (h *htlcIncomingContestResolver) Resolve(
log.Infof("%T(%v): HTLC has timed out (expiry=%v, height=%v), "+
"abandoning", h, h.htlcResolution.ClaimOutpoint,
h.htlcExpiry, currentHeight)
h.resolved = true
h.markResolved()
if err := h.processFinalHtlcFail(); err != nil {
return nil, err
@ -179,65 +208,6 @@ func (h *htlcIncomingContestResolver) Resolve(
return nil, h.Checkpoint(h, report)
}
// applyPreimage is a helper function that will populate our internal
// resolver with the preimage we learn of. This should be called once
// the preimage is revealed so the inner resolver can properly complete
// its duties. The error return value indicates whether the preimage
// was properly applied.
applyPreimage := func(preimage lntypes.Preimage) error {
// Sanity check to see if this preimage matches our htlc. At
// this point it should never happen that it does not match.
if !preimage.Matches(h.htlc.RHash) {
return errors.New("preimage does not match hash")
}
// Update htlcResolution with the matching preimage.
h.htlcResolution.Preimage = preimage
log.Infof("%T(%v): applied preimage=%v", h,
h.htlcResolution.ClaimOutpoint, preimage)
isSecondLevel := h.htlcResolution.SignedSuccessTx != nil
// If we didn't have to go to the second level to claim (this
// is the remote commitment transaction), then we don't need to
// modify our canned witness.
if !isSecondLevel {
return nil
}
isTaproot := txscript.IsPayToTaproot(
h.htlcResolution.SignedSuccessTx.TxOut[0].PkScript,
)
// If this is our commitment transaction, then we'll need to
// populate the witness for the second-level HTLC transaction.
switch {
// For taproot channels, the witness for sweeping with success
// looks like:
// - <sender sig> <receiver sig> <preimage> <success_script>
// <control_block>
//
// So we'll insert it at the 3rd index of the witness.
case isTaproot:
//nolint:ll
h.htlcResolution.SignedSuccessTx.TxIn[0].Witness[2] = preimage[:]
// Within the witness for the success transaction, the
// preimage is the 4th element as it looks like:
//
// * <0> <sender sig> <recvr sig> <preimage> <witness script>
//
// We'll populate it within the witness, as since this
// was a "contest" resolver, we didn't yet know of the
// preimage.
case !isTaproot:
h.htlcResolution.SignedSuccessTx.TxIn[0].Witness[3] = preimage[:]
}
return nil
}
// Define a closure to process htlc resolutions either directly or
// triggered by future notifications.
processHtlcResolution := func(e invoices.HtlcResolution) (
@ -249,7 +219,7 @@ func (h *htlcIncomingContestResolver) Resolve(
// If the htlc resolution was a settle, apply the
// preimage and return a success resolver.
case *invoices.HtlcSettleResolution:
err := applyPreimage(resolution.Preimage)
err := h.applyPreimage(resolution.Preimage)
if err != nil {
return nil, err
}
@ -264,7 +234,7 @@ func (h *htlcIncomingContestResolver) Resolve(
h.htlcResolution.ClaimOutpoint,
h.htlcExpiry, currentHeight)
h.resolved = true
h.markResolved()
if err := h.processFinalHtlcFail(); err != nil {
return nil, err
@ -315,6 +285,9 @@ func (h *htlcIncomingContestResolver) Resolve(
return nil, err
}
h.log.Debugf("received resolution from registry: %v",
resolution)
defer func() {
h.Registry.HodlUnsubscribeAll(hodlQueue.ChanIn())
@ -372,7 +345,9 @@ func (h *htlcIncomingContestResolver) Resolve(
// However, we don't know how to ourselves, so we'll
// return our inner resolver which has the knowledge to
// do so.
if err := applyPreimage(preimage); err != nil {
h.log.Debugf("Found preimage for htlc=%x", h.htlc.RHash)
if err := h.applyPreimage(preimage); err != nil {
return nil, err
}
@ -391,7 +366,10 @@ func (h *htlcIncomingContestResolver) Resolve(
continue
}
if err := applyPreimage(preimage); err != nil {
h.log.Debugf("Received preimage for htlc=%x",
h.htlc.RHash)
if err := h.applyPreimage(preimage); err != nil {
return nil, err
}
@ -418,7 +396,8 @@ func (h *htlcIncomingContestResolver) Resolve(
"(expiry=%v, height=%v), abandoning", h,
h.htlcResolution.ClaimOutpoint,
h.htlcExpiry, currentHeight)
h.resolved = true
h.markResolved()
if err := h.processFinalHtlcFail(); err != nil {
return nil, err
@ -438,6 +417,76 @@ func (h *htlcIncomingContestResolver) Resolve(
}
}
// applyPreimage is a helper function that will populate our internal resolver
// with the preimage we learn of. This should be called once the preimage is
// revealed so the inner resolver can properly complete its duties. The error
// return value indicates whether the preimage was properly applied.
func (h *htlcIncomingContestResolver) applyPreimage(
preimage lntypes.Preimage) error {
// Sanity check to see if this preimage matches our htlc. At this point
// it should never happen that it does not match.
if !preimage.Matches(h.htlc.RHash) {
return errors.New("preimage does not match hash")
}
// We may already have the preimage since both the `Launch` and
// `Resolve` methods will look for it.
if h.htlcResolution.Preimage != lntypes.ZeroHash {
h.log.Debugf("already applied preimage for htlc=%x",
h.htlc.RHash)
return nil
}
// Update htlcResolution with the matching preimage.
h.htlcResolution.Preimage = preimage
log.Infof("%T(%v): applied preimage=%v", h,
h.htlcResolution.ClaimOutpoint, preimage)
isSecondLevel := h.htlcResolution.SignedSuccessTx != nil
// If we didn't have to go to the second level to claim (this
// is the remote commitment transaction), then we don't need to
// modify our canned witness.
if !isSecondLevel {
return nil
}
isTaproot := txscript.IsPayToTaproot(
h.htlcResolution.SignedSuccessTx.TxOut[0].PkScript,
)
// If this is our commitment transaction, then we'll need to
// populate the witness for the second-level HTLC transaction.
switch {
// For taproot channels, the witness for sweeping with success
// looks like:
// - <sender sig> <receiver sig> <preimage> <success_script>
// <control_block>
//
// So we'll insert it at the 3rd index of the witness.
case isTaproot:
//nolint:ll
h.htlcResolution.SignedSuccessTx.TxIn[0].Witness[2] = preimage[:]
// Within the witness for the success transaction, the
// preimage is the 4th element as it looks like:
//
// * <0> <sender sig> <recvr sig> <preimage> <witness script>
//
// We'll populate it within the witness, as since this
// was a "contest" resolver, we didn't yet know of the
// preimage.
case !isTaproot:
//nolint:ll
h.htlcResolution.SignedSuccessTx.TxIn[0].Witness[3] = preimage[:]
}
return nil
}
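As a compact reference for the two branches above (witness layouts as quoted in the comments), the preimage slot differs by channel type:

// Taproot:     <sender sig> <receiver sig> <preimage> <success_script> <control_block> -> Witness[2]
// Pre-taproot: <0> <sender sig> <recvr sig> <preimage> <witness script>                -> Witness[3]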
// report returns a report on the resolution state of the contract.
func (h *htlcIncomingContestResolver) report() *ContractReport {
// No locking needed as these values are read-only.
@ -464,17 +513,11 @@ func (h *htlcIncomingContestResolver) report() *ContractReport {
//
// NOTE: Part of the ContractResolver interface.
func (h *htlcIncomingContestResolver) Stop() {
h.log.Debugf("stopping...")
defer h.log.Debugf("stopped")
close(h.quit)
}
// IsResolved returns true if the stored state in the resolve is fully
// resolved. In this case the target output can be forgotten.
//
// NOTE: Part of the ContractResolver interface.
func (h *htlcIncomingContestResolver) IsResolved() bool {
return h.resolved
}
// Encode writes an encoded version of the ContractResolver into the passed
// Writer.
//
@ -563,3 +606,82 @@ func (h *htlcIncomingContestResolver) decodePayload() (*hop.Payload,
// A compile time assertion to ensure htlcIncomingContestResolver meets the
// ContractResolver interface.
var _ htlcContractResolver = (*htlcIncomingContestResolver)(nil)
// findAndapplyPreimage performs a non-blocking read to find the preimage for
// the incoming HTLC. If found, it will be applied to the resolver. This method
// is used for the resolver to decide whether it wants to transform into a
// success resolver during launching.
//
// NOTE: Since we have two places to query the preimage, we need to check both
// the preimage db and the invoice db to look up the preimage.
func (h *htlcIncomingContestResolver) findAndapplyPreimage() (bool, error) {
// Query to see if we already know the preimage.
preimage, ok := h.PreimageDB.LookupPreimage(h.htlc.RHash)
// If the preimage is known, we'll apply it.
if ok {
if err := h.applyPreimage(preimage); err != nil {
return false, err
}
// Successfully applied the preimage, we can now return.
return true, nil
}
// First try to parse the payload.
payload, _, err := h.decodePayload()
if err != nil {
h.log.Errorf("Cannot decode payload of htlc %v", h.HtlcPoint())
// If we cannot decode the payload, we will return a nil error
// and let it be handled in `Resolve`.
return false, nil
}
// Exit early if this is not the exit hop, which means we are not the
// payment receiver and don't have the preimage.
if payload.FwdInfo.NextHop != hop.Exit {
return false, nil
}
// Notify registry that we are potentially resolving as an exit hop
// on-chain. If this HTLC indeed pays to an existing invoice, the
// invoice registry will tell us what to do with the HTLC. This is
// identical to HTLC resolution in the link.
circuitKey := models.CircuitKey{
ChanID: h.ShortChanID,
HtlcID: h.htlc.HtlcIndex,
}
// Try to get the resolution - if it doesn't give us a resolution
// immediately, we'll assume we don't know it yet and let the `Resolve`
// handle the waiting.
//
// NOTE: we use a nil subscriber here and a zero current height as we
// are only interested in the settle resolution.
//
// TODO(yy): move this logic to link and let the preimage be accessed
// via the preimage beacon.
resolution, err := h.Registry.NotifyExitHopHtlc(
h.htlc.RHash, h.htlc.Amt, h.htlcExpiry, 0,
circuitKey, nil, h.htlc.CustomRecords, payload,
)
if err != nil {
return false, err
}
res, ok := resolution.(*invoices.HtlcSettleResolution)
// Exit early if it's not a settle resolution.
if !ok {
return false, nil
}
// Otherwise we have a settle resolution, apply the preimage.
err = h.applyPreimage(res.Preimage)
if err != nil {
return false, err
}
return true, nil
}


@ -5,11 +5,13 @@ import (
"io"
"testing"
"github.com/btcsuite/btcd/wire"
sphinx "github.com/lightningnetwork/lightning-onion"
"github.com/lightningnetwork/lnd/chainntnfs"
"github.com/lightningnetwork/lnd/channeldb"
"github.com/lightningnetwork/lnd/graph/db/models"
"github.com/lightningnetwork/lnd/htlcswitch/hop"
"github.com/lightningnetwork/lnd/input"
"github.com/lightningnetwork/lnd/invoices"
"github.com/lightningnetwork/lnd/kvdb"
"github.com/lightningnetwork/lnd/lnmock"
@ -356,6 +358,7 @@ func newIncomingResolverTestContext(t *testing.T, isExit bool) *incomingResolver
return nil
},
Sweeper: newMockSweeper(),
},
PutResolverReport: func(_ kvdb.RwTx,
_ *channeldb.ResolverReport) error {
@ -374,10 +377,16 @@ func newIncomingResolverTestContext(t *testing.T, isExit bool) *incomingResolver
},
}
res := lnwallet.IncomingHtlcResolution{
SweepSignDesc: input.SignDescriptor{
Output: &wire.TxOut{},
},
}
c.resolver = &htlcIncomingContestResolver{
htlcSuccessResolver: &htlcSuccessResolver{
contractResolverKit: *newContractResolverKit(cfg),
htlcResolution: lnwallet.IncomingHtlcResolution{},
htlcResolution: res,
htlc: channeldb.HTLC{
Amt: lnwire.MilliSatoshi(testHtlcAmount),
RHash: testResHash,
@ -386,6 +395,7 @@ func newIncomingResolverTestContext(t *testing.T, isExit bool) *incomingResolver
},
htlcExpiry: testHtlcExpiry,
}
c.resolver.initLogger("htlcIncomingContestResolver")
return c
}
@ -395,7 +405,11 @@ func (i *incomingResolverTestContext) resolve() {
i.resolveErr = make(chan error, 1)
go func() {
var err error
i.nextResolver, err = i.resolver.Resolve(false)
err = i.resolver.Launch()
require.NoError(i.t, err)
i.nextResolver, err = i.resolver.Resolve()
i.resolveErr <- err
}()


@ -1,8 +1,6 @@
package contractcourt
import (
"math"
"github.com/btcsuite/btcd/wire"
"github.com/lightningnetwork/lnd/chainntnfs"
"github.com/lightningnetwork/lnd/channeldb"
@ -42,9 +40,7 @@ func (h *htlcLeaseResolver) deriveWaitHeight(csvDelay uint32,
waitHeight := uint32(commitSpend.SpendingHeight) + csvDelay - 1
if h.hasCLTV() {
waitHeight = uint32(math.Max(
float64(waitHeight), float64(h.leaseExpiry),
))
waitHeight = max(waitHeight, h.leaseExpiry)
}
return waitHeight
@ -57,15 +53,13 @@ func (h *htlcLeaseResolver) makeSweepInput(op *wire.OutPoint,
signDesc *input.SignDescriptor, csvDelay, broadcastHeight uint32,
payHash [32]byte, resBlob fn.Option[tlv.Blob]) *input.BaseInput {
if h.hasCLTV() {
log.Infof("%T(%x): CSV and CLTV locks expired, offering "+
"second-layer output to sweeper: %v", h, payHash, op)
log.Infof("%T(%x): offering second-layer output to sweeper: %v", h,
payHash, op)
if h.hasCLTV() {
return input.NewCsvInputWithCltv(
op, cltvWtype, signDesc,
broadcastHeight, csvDelay,
h.leaseExpiry,
input.WithResolutionBlob(resBlob),
op, cltvWtype, signDesc, broadcastHeight, csvDelay,
h.leaseExpiry, input.WithResolutionBlob(resBlob),
)
}


@ -1,7 +1,6 @@
package contractcourt
import (
"fmt"
"io"
"github.com/btcsuite/btcd/btcutil"
@ -36,6 +35,37 @@ func newOutgoingContestResolver(res lnwallet.OutgoingHtlcResolution,
}
}
// Launch will call the inner resolver's launch method if the expiry height has
// been reached, otherwise it's a no-op.
func (h *htlcOutgoingContestResolver) Launch() error {
// NOTE: we don't mark this resolver as launched as the inner resolver
// will set it when it's launched.
if h.isLaunched() {
h.log.Tracef("already launched")
return nil
}
h.log.Debugf("launching contest resolver...")
_, bestHeight, err := h.ChainIO.GetBestBlock()
if err != nil {
return err
}
if uint32(bestHeight) < h.htlcResolution.Expiry {
return nil
}
// If the current height is >= expiry, then a timeout path spend will
// be valid to be included in the next block, and we can immediately
// return the resolver.
h.log.Infof("expired (height=%v, expiry=%v), transforming into "+
"timeout resolver and launching it", bestHeight,
h.htlcResolution.Expiry)
return h.htlcTimeoutResolver.Launch()
}
// Resolve commences the resolution of this contract. As this contract hasn't
// yet timed out, we'll wait for one of two things to happen
//
@ -49,12 +79,11 @@ func newOutgoingContestResolver(res lnwallet.OutgoingHtlcResolution,
// When either of these two things happens, we'll create a new resolver which
// is able to handle the final resolution of the contract. We're only the pivot
// point.
func (h *htlcOutgoingContestResolver) Resolve(
_ bool) (ContractResolver, error) {
func (h *htlcOutgoingContestResolver) Resolve() (ContractResolver, error) {
// If we're already full resolved, then we don't have anything further
// to do.
if h.resolved {
if h.IsResolved() {
h.log.Errorf("already resolved")
return nil, nil
}
@ -88,8 +117,7 @@ func (h *htlcOutgoingContestResolver) Resolve(
return nil, errResolverShuttingDown
}
// TODO(roasbeef): Checkpoint?
return h.claimCleanUp(commitSpend)
return nil, h.claimCleanUp(commitSpend)
// If it hasn't, then we'll watch for both the expiration, and the
// sweeping out this output.
@ -126,12 +154,20 @@ func (h *htlcOutgoingContestResolver) Resolve(
// finalized` will be returned and the broadcast will
// fail.
newHeight := uint32(newBlock.Height)
if newHeight >= h.htlcResolution.Expiry {
log.Infof("%T(%v): HTLC has expired "+
expiry := h.htlcResolution.Expiry
// Check if the expiry height is about to be reached.
// We offer this HTLC one block earlier to make sure
// when the next block arrives, the sweeper will pick
// up this input and sweep it immediately. The sweeper
// will handle the waiting for the one last block till
// expiry.
if newHeight >= expiry-1 {
h.log.Infof("HTLC about to expire "+
"(height=%v, expiry=%v), transforming "+
"into timeout resolver", h,
h.htlcResolution.ClaimOutpoint,
newHeight, h.htlcResolution.Expiry)
"into timeout resolver", newHeight,
h.htlcResolution.Expiry)
return h.htlcTimeoutResolver, nil
}
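// To make the off-by-one concrete: with Expiry = 100 the handoff to the
// timeout resolver happens once a block at height 99 is seen; the
// sweeper then waits out the one remaining block and can broadcast the
// timeout spend as soon as it becomes valid.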
@ -146,10 +182,10 @@ func (h *htlcOutgoingContestResolver) Resolve(
// party is by revealing the preimage. So we'll perform
// our duties to clean up the contract once it has been
// claimed.
return h.claimCleanUp(commitSpend)
return nil, h.claimCleanUp(commitSpend)
case <-h.quit:
return nil, fmt.Errorf("resolver canceled")
return nil, errResolverShuttingDown
}
}
}
@ -180,17 +216,11 @@ func (h *htlcOutgoingContestResolver) report() *ContractReport {
//
// NOTE: Part of the ContractResolver interface.
func (h *htlcOutgoingContestResolver) Stop() {
h.log.Debugf("stopping...")
defer h.log.Debugf("stopped")
close(h.quit)
}
// IsResolved returns true if the stored state in the resolve is fully
// resolved. In this case the target output can be forgotten.
//
// NOTE: Part of the ContractResolver interface.
func (h *htlcOutgoingContestResolver) IsResolved() bool {
return h.resolved
}
// Encode writes an encoded version of the ContractResolver into the passed
// Writer.
//


@ -15,6 +15,7 @@ import (
"github.com/lightningnetwork/lnd/lntypes"
"github.com/lightningnetwork/lnd/lnwallet"
"github.com/lightningnetwork/lnd/lnwire"
"github.com/stretchr/testify/require"
)
const (
@ -159,6 +160,7 @@ func newOutgoingResolverTestContext(t *testing.T) *outgoingResolverTestContext {
return nil
},
ChainIO: &mock.ChainIO{},
},
PutResolverReport: func(_ kvdb.RwTx,
_ *channeldb.ResolverReport) error {
@ -195,6 +197,7 @@ func newOutgoingResolverTestContext(t *testing.T) *outgoingResolverTestContext {
},
},
}
resolver.initLogger("htlcOutgoingContestResolver")
return &outgoingResolverTestContext{
resolver: resolver,
@ -209,7 +212,10 @@ func (i *outgoingResolverTestContext) resolve() {
// Start resolver.
i.resolverResultChan = make(chan resolveResult, 1)
go func() {
nextResolver, err := i.resolver.Resolve(false)
err := i.resolver.Launch()
require.NoError(i.t, err)
nextResolver, err := i.resolver.Resolve()
i.resolverResultChan <- resolveResult{
nextResolver: nextResolver,
err: err,


@ -2,6 +2,7 @@ package contractcourt
import (
"encoding/binary"
"fmt"
"io"
"sync"
@ -9,8 +10,6 @@ import (
"github.com/btcsuite/btcd/chaincfg/chainhash"
"github.com/btcsuite/btcd/txscript"
"github.com/btcsuite/btcd/wire"
"github.com/davecgh/go-spew/spew"
"github.com/lightningnetwork/lnd/chainntnfs"
"github.com/lightningnetwork/lnd/channeldb"
"github.com/lightningnetwork/lnd/fn/v2"
"github.com/lightningnetwork/lnd/graph/db/models"
@ -43,9 +42,6 @@ type htlcSuccessResolver struct {
// second-level output (true).
outputIncubating bool
// resolved reflects if the contract has been fully resolved or not.
resolved bool
// broadcastHeight is the height that the original contract was
// broadcast to the main-chain at. We'll use this value to bound any
// historical queries to the chain for spends/confirmations.
@ -81,27 +77,30 @@ func newSuccessResolver(res lnwallet.IncomingHtlcResolution,
}
h.initReport()
h.initLogger(fmt.Sprintf("%T(%v)", h, h.outpoint()))
return h
}
// outpoint returns the outpoint of the HTLC output we're attempting to sweep.
func (h *htlcSuccessResolver) outpoint() wire.OutPoint {
// The primary key for this resolver will be the outpoint of the HTLC
// on the commitment transaction itself. If this is our commitment,
// then the output can be found within the signed success tx,
// otherwise, it's just the ClaimOutpoint.
if h.htlcResolution.SignedSuccessTx != nil {
return h.htlcResolution.SignedSuccessTx.TxIn[0].PreviousOutPoint
}
return h.htlcResolution.ClaimOutpoint
}
// ResolverKey returns an identifier which should be globally unique for this
// particular resolver within the chain the original contract resides within.
//
// NOTE: Part of the ContractResolver interface.
func (h *htlcSuccessResolver) ResolverKey() []byte {
// The primary key for this resolver will be the outpoint of the HTLC
// on the commitment transaction itself. If this is our commitment,
// then the output can be found within the signed success tx,
// otherwise, it's just the ClaimOutpoint.
var op wire.OutPoint
if h.htlcResolution.SignedSuccessTx != nil {
op = h.htlcResolution.SignedSuccessTx.TxIn[0].PreviousOutPoint
} else {
op = h.htlcResolution.ClaimOutpoint
}
key := newResolverID(op)
key := newResolverID(h.outpoint())
return key[:]
}
@ -112,423 +111,66 @@ func (h *htlcSuccessResolver) ResolverKey() []byte {
// anymore. Every HTLC has already passed through the incoming contest resolver
// and in there the invoice was already marked as settled.
//
// TODO(roasbeef): create multi to batch
//
// NOTE: Part of the ContractResolver interface.
func (h *htlcSuccessResolver) Resolve(
immediate bool) (ContractResolver, error) {
// If we're already resolved, then we can exit early.
if h.resolved {
return nil, nil
}
// If we don't have a success transaction, then this means that this is
// an output on the remote party's commitment transaction.
if h.htlcResolution.SignedSuccessTx == nil {
return h.resolveRemoteCommitOutput(immediate)
}
// Otherwise this an output on our own commitment, and we must start by
// broadcasting the second-level success transaction.
secondLevelOutpoint, err := h.broadcastSuccessTx(immediate)
if err != nil {
return nil, err
}
// To wrap this up, we'll wait until the second-level transaction has
// been spent, then fully resolve the contract.
log.Infof("%T(%x): waiting for second-level HTLC output to be spent "+
"after csv_delay=%v", h, h.htlc.RHash[:], h.htlcResolution.CsvDelay)
spend, err := waitForSpend(
secondLevelOutpoint,
h.htlcResolution.SweepSignDesc.Output.PkScript,
h.broadcastHeight, h.Notifier, h.quit,
)
if err != nil {
return nil, err
}
h.reportLock.Lock()
h.currentReport.RecoveredBalance = h.currentReport.LimboBalance
h.currentReport.LimboBalance = 0
h.reportLock.Unlock()
h.resolved = true
return nil, h.checkpointClaim(
spend.SpenderTxHash, channeldb.ResolverOutcomeClaimed,
)
}
// broadcastSuccessTx handles an HTLC output on our local commitment by
// broadcasting the second-level success transaction. It returns the ultimate
// outpoint of the second-level tx, that we must wait to be spent for the
// resolver to be fully resolved.
func (h *htlcSuccessResolver) broadcastSuccessTx(
immediate bool) (*wire.OutPoint, error) {
// If we have non-nil SignDetails, this means that have a 2nd level
// HTLC transaction that is signed using sighash SINGLE|ANYONECANPAY
// (the case for anchor type channels). In this case we can re-sign it
// and attach fees at will. We let the sweeper handle this job. We use
// the checkpointed outputIncubating field to determine if we already
// swept the HTLC output into the second level transaction.
if h.htlcResolution.SignDetails != nil {
return h.broadcastReSignedSuccessTx(immediate)
}
// Otherwise we'll publish the second-level transaction directly and
// offer the resolution to the nursery to handle.
log.Infof("%T(%x): broadcasting second-layer transition tx: %v",
h, h.htlc.RHash[:], spew.Sdump(h.htlcResolution.SignedSuccessTx))
// We'll now broadcast the second layer transaction so we can kick off
// the claiming process.
//
// TODO(roasbeef): after changing sighashes send to tx bundler
label := labels.MakeLabel(
labels.LabelTypeChannelClose, &h.ShortChanID,
)
err := h.PublishTx(h.htlcResolution.SignedSuccessTx, label)
if err != nil {
return nil, err
}
// Otherwise, this is an output on our commitment transaction. In this
// case, we'll send it to the incubator, but only if we haven't already
// done so.
if !h.outputIncubating {
log.Infof("%T(%x): incubating incoming htlc output",
h, h.htlc.RHash[:])
err := h.IncubateOutputs(
h.ChanPoint, fn.None[lnwallet.OutgoingHtlcResolution](),
fn.Some(h.htlcResolution),
h.broadcastHeight, fn.Some(int32(h.htlc.RefundTimeout)),
)
if err != nil {
return nil, err
}
h.outputIncubating = true
if err := h.Checkpoint(h); err != nil {
log.Errorf("unable to Checkpoint: %v", err)
return nil, err
}
}
return &h.htlcResolution.ClaimOutpoint, nil
}
// broadcastReSignedSuccessTx handles the case where we have non-nil
// SignDetails, and offers the second level transaction to the Sweeper, that
// will re-sign it and attach fees at will.
//
//nolint:funlen
func (h *htlcSuccessResolver) broadcastReSignedSuccessTx(immediate bool) (
*wire.OutPoint, error) {
// TODO(yy): refactor the interface method to return an error only.
func (h *htlcSuccessResolver) Resolve() (ContractResolver, error) {
var err error
// Keep track of the tx spending the HTLC output on the commitment, as
// this will be the confirmed second-level tx we'll ultimately sweep.
var commitSpend *chainntnfs.SpendDetail
switch {
// If we're already resolved, then we can exit early.
case h.IsResolved():
h.log.Errorf("already resolved")
// We will have to let the sweeper re-sign the success tx and wait for
// it to confirm, if we haven't already.
isTaproot := txscript.IsPayToTaproot(
h.htlcResolution.SweepSignDesc.Output.PkScript,
)
if !h.outputIncubating {
var secondLevelInput input.HtlcSecondLevelAnchorInput
if isTaproot {
//nolint:ll
secondLevelInput = input.MakeHtlcSecondLevelSuccessTaprootInput(
h.htlcResolution.SignedSuccessTx,
h.htlcResolution.SignDetails, h.htlcResolution.Preimage,
h.broadcastHeight,
input.WithResolutionBlob(
h.htlcResolution.ResolutionBlob,
),
)
} else {
//nolint:ll
secondLevelInput = input.MakeHtlcSecondLevelSuccessAnchorInput(
h.htlcResolution.SignedSuccessTx,
h.htlcResolution.SignDetails, h.htlcResolution.Preimage,
h.broadcastHeight,
)
}
// If this is an output on the remote party's commitment transaction,
// use the direct-spend path to sweep the htlc.
case h.isRemoteCommitOutput():
err = h.resolveRemoteCommitOutput()
// Calculate the budget for this sweep.
value := btcutil.Amount(
secondLevelInput.SignDesc().Output.Value,
)
budget := calculateBudget(
value, h.Budget.DeadlineHTLCRatio,
h.Budget.DeadlineHTLC,
)
// If this is an output on our commitment transaction using post-anchor
// channel type, it will be handled by the sweeper.
case h.isZeroFeeOutput():
err = h.resolveSuccessTx()
// The deadline would be the CLTV in this HTLC output. If we
// are the initiator of this force close, with the default
// `IncomingBroadcastDelta`, it means we have 10 blocks left
// when going onchain. Given we need to mine one block to
// confirm the force close tx, and one more block to trigger
// the sweep, we have 8 blocks left to sweep the HTLC.
deadline := fn.Some(int32(h.htlc.RefundTimeout))
log.Infof("%T(%x): offering second-level HTLC success tx to "+
"sweeper with deadline=%v, budget=%v", h,
h.htlc.RHash[:], h.htlc.RefundTimeout, budget)
// We'll now offer the second-level transaction to the sweeper.
_, err := h.Sweeper.SweepInput(
&secondLevelInput,
sweep.Params{
Budget: budget,
DeadlineHeight: deadline,
Immediate: immediate,
},
)
if err != nil {
return nil, err
}
log.Infof("%T(%x): waiting for second-level HTLC success "+
"transaction to confirm", h, h.htlc.RHash[:])
// Wait for the second level transaction to confirm.
commitSpend, err = waitForSpend(
&h.htlcResolution.SignedSuccessTx.TxIn[0].PreviousOutPoint,
h.htlcResolution.SignDetails.SignDesc.Output.PkScript,
h.broadcastHeight, h.Notifier, h.quit,
)
if err != nil {
return nil, err
}
// Now that the second-level transaction has confirmed, we
// checkpoint the state so we'll go to the next stage in case
// of restarts.
h.outputIncubating = true
if err := h.Checkpoint(h); err != nil {
log.Errorf("unable to Checkpoint: %v", err)
return nil, err
}
log.Infof("%T(%x): second-level HTLC success transaction "+
"confirmed!", h, h.htlc.RHash[:])
// If this is an output on our own commitment using pre-anchor channel
// type, we will publish the success tx and offer the output to the
// nursery.
default:
err = h.resolveLegacySuccessTx()
}
// If we ended up here after a restart, we must again get the
// spend notification.
if commitSpend == nil {
var err error
commitSpend, err = waitForSpend(
&h.htlcResolution.SignedSuccessTx.TxIn[0].PreviousOutPoint,
h.htlcResolution.SignDetails.SignDesc.Output.PkScript,
h.broadcastHeight, h.Notifier, h.quit,
)
if err != nil {
return nil, err
}
}
// The HTLC success tx has a CSV lock that we must wait for, and if
// this is a lease enforced channel and we're the initiator, we may need
// to wait for longer.
waitHeight := h.deriveWaitHeight(
h.htlcResolution.CsvDelay, commitSpend,
)
// Now that the sweeper has broadcasted the second-level transaction,
// it has confirmed, and we have checkpointed our state, we'll sweep
// the second level output. We report the resolver has moved to the
// next stage.
h.reportLock.Lock()
h.currentReport.Stage = 2
h.currentReport.MaturityHeight = waitHeight
h.reportLock.Unlock()
if h.hasCLTV() {
log.Infof("%T(%x): waiting for CSV and CLTV lock to "+
"expire at height %v", h, h.htlc.RHash[:],
waitHeight)
} else {
log.Infof("%T(%x): waiting for CSV lock to expire at "+
"height %v", h, h.htlc.RHash[:], waitHeight)
}
// Deduct one block so this input is offered to the sweeper one block
// earlier since the sweeper will wait for one block to trigger the
// sweeping.
//
// TODO(yy): this is done so the outputs can be aggregated
// properly. Suppose CSV locks of five 2nd-level outputs all
// expire at height 840000, there is a race in block digestion
// between contractcourt and sweeper:
// - G1: block 840000 received in contractcourt, it now offers
// the outputs to the sweeper.
// - G2: block 840000 received in sweeper, it now starts to
// sweep the received outputs - there's no guarantee all
// five have been received.
// To solve this, we either offer the outputs earlier, or
// implement `blockbeat`, and force contractcourt and sweeper
// to consume each block sequentially.
waitHeight--
// TODO(yy): let sweeper handles the wait?
err := waitForHeight(waitHeight, h.Notifier, h.quit)
if err != nil {
return nil, err
}
// We'll use this input index to determine the second-level output
// index on the transaction, as the signatures requires the indexes to
// be the same. We don't look for the second-level output script
// directly, as there might be more than one HTLC output to the same
// pkScript.
op := &wire.OutPoint{
Hash: *commitSpend.SpenderTxHash,
Index: commitSpend.SpenderInputIndex,
}
// Let the sweeper sweep the second-level output now that the
// CSV/CLTV locks have expired.
var witType input.StandardWitnessType
if isTaproot {
witType = input.TaprootHtlcAcceptedSuccessSecondLevel
} else {
witType = input.HtlcAcceptedSuccessSecondLevel
}
inp := h.makeSweepInput(
op, witType,
input.LeaseHtlcAcceptedSuccessSecondLevel,
&h.htlcResolution.SweepSignDesc,
h.htlcResolution.CsvDelay, uint32(commitSpend.SpendingHeight),
h.htlc.RHash, h.htlcResolution.ResolutionBlob,
)
// Calculate the budget for this sweep.
budget := calculateBudget(
btcutil.Amount(inp.SignDesc().Output.Value),
h.Budget.NoDeadlineHTLCRatio,
h.Budget.NoDeadlineHTLC,
)
log.Infof("%T(%x): offering second-level success tx output to sweeper "+
"with no deadline and budget=%v at height=%v", h,
h.htlc.RHash[:], budget, waitHeight)
// TODO(roasbeef): need to update above for leased types
_, err = h.Sweeper.SweepInput(
inp,
sweep.Params{
Budget: budget,
// For second level success tx, there's no rush to get
// it confirmed, so we use a nil deadline.
DeadlineHeight: fn.None[int32](),
},
)
if err != nil {
return nil, err
}
// Will return this outpoint, when this is spent the resolver is fully
// resolved.
return op, nil
return nil, err
}
// resolveRemoteCommitOutput handles sweeping an HTLC output on the remote
// commitment with the preimage. In this case we can sweep the output directly,
// and don't have to broadcast a second-level transaction.
func (h *htlcSuccessResolver) resolveRemoteCommitOutput(immediate bool) (
ContractResolver, error) {
isTaproot := txscript.IsPayToTaproot(
h.htlcResolution.SweepSignDesc.Output.PkScript,
)
// Before we can craft our sweeping transaction, we need to
// create an input which contains all the items required to add
// this input to a sweeping transaction, and generate a
// witness.
var inp input.Input
if isTaproot {
inp = lnutils.Ptr(input.MakeTaprootHtlcSucceedInput(
&h.htlcResolution.ClaimOutpoint,
&h.htlcResolution.SweepSignDesc,
h.htlcResolution.Preimage[:],
h.broadcastHeight,
h.htlcResolution.CsvDelay,
input.WithResolutionBlob(
h.htlcResolution.ResolutionBlob,
),
))
} else {
inp = lnutils.Ptr(input.MakeHtlcSucceedInput(
&h.htlcResolution.ClaimOutpoint,
&h.htlcResolution.SweepSignDesc,
h.htlcResolution.Preimage[:],
h.broadcastHeight,
h.htlcResolution.CsvDelay,
))
}
// Calculate the budget for this sweep.
budget := calculateBudget(
btcutil.Amount(inp.SignDesc().Output.Value),
h.Budget.DeadlineHTLCRatio,
h.Budget.DeadlineHTLC,
)
deadline := fn.Some(int32(h.htlc.RefundTimeout))
log.Infof("%T(%x): offering direct-preimage HTLC output to sweeper "+
"with deadline=%v, budget=%v", h, h.htlc.RHash[:],
h.htlc.RefundTimeout, budget)
// We'll now offer the direct preimage HTLC to the sweeper.
_, err := h.Sweeper.SweepInput(
inp,
sweep.Params{
Budget: budget,
DeadlineHeight: deadline,
Immediate: immediate,
},
)
if err != nil {
return nil, err
}
func (h *htlcSuccessResolver) resolveRemoteCommitOutput() error {
h.log.Info("waiting for direct-preimage spend of the htlc to confirm")
// Wait for the direct-preimage HTLC sweep tx to confirm.
//
// TODO(yy): use the result chan returned from `SweepInput`.
sweepTxDetails, err := waitForSpend(
&h.htlcResolution.ClaimOutpoint,
h.htlcResolution.SweepSignDesc.Output.PkScript,
h.broadcastHeight, h.Notifier, h.quit,
)
if err != nil {
return err
}
// Once the transaction has received a sufficient number of
// confirmations, we'll mark ourselves as fully resolved and exit.
h.resolved = true
// TODO(yy): should also update the `RecoveredBalance` and
// `LimboBalance` like other paths?
// Checkpoint the resolver, and write the outcome to disk.
return h.checkpointClaim(sweepTxDetails.SpenderTxHash)
}
// checkpointClaim checkpoints the success resolver with the reports it needs.
// If this htlc was claimed in two stages, it will write reports for both
// stages, otherwise it will just write one report for the single htlc claim.
func (h *htlcSuccessResolver) checkpointClaim(spendTx *chainhash.Hash) error {
// Mark the htlc as final settled.
err := h.ChainArbitratorConfig.PutFinalHtlcOutcome(
h.ChannelArbitratorConfig.ShortChanID, h.htlc.HtlcIndex, true,
@ -556,7 +198,7 @@ func (h *htlcSuccessResolver) checkpointClaim(spendTx *chainhash.Hash,
OutPoint: h.htlcResolution.ClaimOutpoint,
Amount: amt,
ResolverType: channeldb.ResolverTypeIncomingHtlc,
ResolverOutcome: channeldb.ResolverOutcomeClaimed,
SpendTxID: spendTx,
},
}
@ -581,6 +223,7 @@ func (h *htlcSuccessResolver) checkpointClaim(spendTx *chainhash.Hash,
}
// Finally, we checkpoint the resolver with our report(s).
h.markResolved()
return h.Checkpoint(h, reports...)
}
@ -589,15 +232,10 @@ func (h *htlcSuccessResolver) checkpointClaim(spendTx *chainhash.Hash,
//
// NOTE: Part of the ContractResolver interface.
func (h *htlcSuccessResolver) Stop() {
h.log.Debugf("stopping...")
defer h.log.Debugf("stopped")
close(h.quit)
}
// IsResolved returns true if the stored state in the resolver is fully
// resolved. In this case the target output can be forgotten.
//
// NOTE: Part of the ContractResolver interface.
func (h *htlcSuccessResolver) IsResolved() bool {
return h.resolved
}
// report returns a report on the resolution state of the contract.
@ -649,7 +287,7 @@ func (h *htlcSuccessResolver) Encode(w io.Writer) error {
if err := binary.Write(w, endian, h.outputIncubating); err != nil {
return err
}
if err := binary.Write(w, endian, h.IsResolved()); err != nil {
return err
}
if err := binary.Write(w, endian, h.broadcastHeight); err != nil {
@ -688,9 +326,15 @@ func newSuccessResolverFromReader(r io.Reader, resCfg ResolverConfig) (
if err := binary.Read(r, endian, &h.outputIncubating); err != nil {
return nil, err
}
var resolved bool
if err := binary.Read(r, endian, &resolved); err != nil {
return nil, err
}
if resolved {
h.markResolved()
}
if err := binary.Read(r, endian, &h.broadcastHeight); err != nil {
return nil, err
}
@ -709,6 +353,7 @@ func newSuccessResolverFromReader(r io.Reader, resCfg ResolverConfig) (
}
h.initReport()
h.initLogger(fmt.Sprintf("%T(%v)", h, h.outpoint()))
return h, nil
}
@ -737,3 +382,391 @@ func (h *htlcSuccessResolver) SupplementDeadline(_ fn.Option[int32]) {
// A compile time assertion to ensure htlcSuccessResolver meets the
// ContractResolver interface.
var _ htlcContractResolver = (*htlcSuccessResolver)(nil)
// isRemoteCommitOutput returns a bool to indicate whether the htlc output is
// on the remote commitment.
func (h *htlcSuccessResolver) isRemoteCommitOutput() bool {
// If we don't have a success transaction, then this means that this is
// an output on the remote party's commitment transaction.
return h.htlcResolution.SignedSuccessTx == nil
}
// isZeroFeeOutput returns a boolean indicating whether the htlc output is from
// an anchor-enabled channel, which uses the sighash SINGLE|ANYONECANPAY.
func (h *htlcSuccessResolver) isZeroFeeOutput() bool {
// If we have non-nil SignDetails, this means it has a 2nd level HTLC
// transaction that is signed using sighash SINGLE|ANYONECANPAY (the
// case for anchor type channels). In this case we can re-sign it and
// attach fees at will.
return h.htlcResolution.SignedSuccessTx != nil &&
h.htlcResolution.SignDetails != nil
}
// isTaproot returns true if the resolver is for a taproot output.
func (h *htlcSuccessResolver) isTaproot() bool {
return txscript.IsPayToTaproot(
h.htlcResolution.SweepSignDesc.Output.PkScript,
)
}
// sweepRemoteCommitOutput creates a sweep request to sweep the HTLC output on
// the remote commitment via the direct preimage-spend.
func (h *htlcSuccessResolver) sweepRemoteCommitOutput() error {
// Before we can craft our sweeping transaction, we need to create an
// input which contains all the items required to add this input to a
// sweeping transaction, and generate a witness.
var inp input.Input
if h.isTaproot() {
inp = lnutils.Ptr(input.MakeTaprootHtlcSucceedInput(
&h.htlcResolution.ClaimOutpoint,
&h.htlcResolution.SweepSignDesc,
h.htlcResolution.Preimage[:],
h.broadcastHeight,
h.htlcResolution.CsvDelay,
input.WithResolutionBlob(
h.htlcResolution.ResolutionBlob,
),
))
} else {
inp = lnutils.Ptr(input.MakeHtlcSucceedInput(
&h.htlcResolution.ClaimOutpoint,
&h.htlcResolution.SweepSignDesc,
h.htlcResolution.Preimage[:],
h.broadcastHeight,
h.htlcResolution.CsvDelay,
))
}
// Calculate the budget for this sweep.
budget := calculateBudget(
btcutil.Amount(inp.SignDesc().Output.Value),
h.Budget.DeadlineHTLCRatio,
h.Budget.DeadlineHTLC,
)
deadline := fn.Some(int32(h.htlc.RefundTimeout))
log.Infof("%T(%x): offering direct-preimage HTLC output to sweeper "+
"with deadline=%v, budget=%v", h, h.htlc.RHash[:],
h.htlc.RefundTimeout, budget)
// We'll now offer the direct preimage HTLC to the sweeper.
_, err := h.Sweeper.SweepInput(
inp,
sweep.Params{
Budget: budget,
DeadlineHeight: deadline,
},
)
return err
}
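calculateBudget itself is not part of this diff but is called throughout the resolver with an output value, a ratio, and an absolute budget from the config. A minimal sketch of the assumed behaviour (a non-zero absolute budget wins, otherwise the ratio is applied to the output value); the helper name and the override rule here are assumptions, not the actual lnd implementation:
// calculateBudgetSketch is a hypothetical stand-in for the unexported
// calculateBudget helper used above.
func calculateBudgetSketch(value btcutil.Amount, ratio float64,
    absoluteBudget btcutil.Amount) btcutil.Amount {

    // Assumption: an explicitly configured budget overrides the
    // ratio-derived value.
    if absoluteBudget != 0 {
        return absoluteBudget
    }

    return btcutil.Amount(float64(value) * ratio)
}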
// sweepSuccessTx attempts to sweep the second level success tx.
func (h *htlcSuccessResolver) sweepSuccessTx() error {
var secondLevelInput input.HtlcSecondLevelAnchorInput
if h.isTaproot() {
secondLevelInput = input.MakeHtlcSecondLevelSuccessTaprootInput(
h.htlcResolution.SignedSuccessTx,
h.htlcResolution.SignDetails, h.htlcResolution.Preimage,
h.broadcastHeight, input.WithResolutionBlob(
h.htlcResolution.ResolutionBlob,
),
)
} else {
secondLevelInput = input.MakeHtlcSecondLevelSuccessAnchorInput(
h.htlcResolution.SignedSuccessTx,
h.htlcResolution.SignDetails, h.htlcResolution.Preimage,
h.broadcastHeight,
)
}
// Calculate the budget for this sweep.
value := btcutil.Amount(secondLevelInput.SignDesc().Output.Value)
budget := calculateBudget(
value, h.Budget.DeadlineHTLCRatio, h.Budget.DeadlineHTLC,
)
// The deadline would be the CLTV in this HTLC output. If we are the
// initiator of this force close, with the default
// `IncomingBroadcastDelta`, it means we have 10 blocks left when going
// onchain.
deadline := fn.Some(int32(h.htlc.RefundTimeout))
h.log.Infof("offering second-level HTLC success tx to sweeper with "+
"deadline=%v, budget=%v", h.htlc.RefundTimeout, budget)
// We'll now offer the second-level transaction to the sweeper.
_, err := h.Sweeper.SweepInput(
&secondLevelInput,
sweep.Params{
Budget: budget,
DeadlineHeight: deadline,
},
)
return err
}
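To make the deadline comment above concrete: the deadline handed to the sweeper is the HTLC's CLTV expiry (RefundTimeout), and the head start over it comes from going on chain IncomingBroadcastDelta blocks early. A worked example with illustrative numbers (the delta of 10 matches the default mentioned in the comment; the concrete heights are made up):
// Illustrative arithmetic only: why the comment above says we have
// roughly 10 blocks left when going on chain.
func deadlineHeadStart() int32 {
    const (
        refundTimeout          int32 = 800_010 // CLTV expiry (example)
        incomingBroadcastDelta int32 = 10      // assumed default delta
    )

    // We go on chain once the expiry is delta blocks away.
    broadcastHeight := refundTimeout - incomingBroadcastDelta

    // Blocks left for the second-level success tx to confirm before
    // the deadline height (fn.Some(RefundTimeout) above).
    return refundTimeout - broadcastHeight // = 10
}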
// sweepSuccessTxOutput attempts to sweep the output of the second level
// success tx.
func (h *htlcSuccessResolver) sweepSuccessTxOutput() error {
h.log.Debugf("sweeping output %v from 2nd-level HTLC success tx",
h.htlcResolution.ClaimOutpoint)
// This should be non-blocking as we will only attempt to sweep the
// output when the second level tx has already been confirmed. In other
// words, waitForSpend will return immediately.
commitSpend, err := waitForSpend(
&h.htlcResolution.SignedSuccessTx.TxIn[0].PreviousOutPoint,
h.htlcResolution.SignDetails.SignDesc.Output.PkScript,
h.broadcastHeight, h.Notifier, h.quit,
)
if err != nil {
return err
}
// The HTLC success tx has a CSV lock that we must wait for, and if
// this is a lease enforced channel and we're the initiator, we may need
// to wait longer.
waitHeight := h.deriveWaitHeight(h.htlcResolution.CsvDelay, commitSpend)
// Now that the sweeper has broadcast the second-level transaction and it
// has confirmed, and we have checkpointed our state, we'll sweep the
// second-level output. We report that the resolver has moved to the next
// stage.
h.reportLock.Lock()
h.currentReport.Stage = 2
h.currentReport.MaturityHeight = waitHeight
h.reportLock.Unlock()
if h.hasCLTV() {
log.Infof("%T(%x): waiting for CSV and CLTV lock to expire at "+
"height %v", h, h.htlc.RHash[:], waitHeight)
} else {
log.Infof("%T(%x): waiting for CSV lock to expire at height %v",
h, h.htlc.RHash[:], waitHeight)
}
// We'll use this input index to determine the second-level output
// index on the transaction, as the signatures require the indexes to
// be the same. We don't look for the second-level output script
// directly, as there might be more than one HTLC output to the same
// pkScript.
op := &wire.OutPoint{
Hash: *commitSpend.SpenderTxHash,
Index: commitSpend.SpenderInputIndex,
}
// Let the sweeper sweep the second-level output now that the
// CSV/CLTV locks have expired.
var witType input.StandardWitnessType
if h.isTaproot() {
witType = input.TaprootHtlcAcceptedSuccessSecondLevel
} else {
witType = input.HtlcAcceptedSuccessSecondLevel
}
inp := h.makeSweepInput(
op, witType,
input.LeaseHtlcAcceptedSuccessSecondLevel,
&h.htlcResolution.SweepSignDesc,
h.htlcResolution.CsvDelay, uint32(commitSpend.SpendingHeight),
h.htlc.RHash, h.htlcResolution.ResolutionBlob,
)
// Calculate the budget for this sweep.
budget := calculateBudget(
btcutil.Amount(inp.SignDesc().Output.Value),
h.Budget.NoDeadlineHTLCRatio,
h.Budget.NoDeadlineHTLC,
)
log.Infof("%T(%x): offering second-level success tx output to sweeper "+
"with no deadline and budget=%v at height=%v", h,
h.htlc.RHash[:], budget, waitHeight)
// TODO(yy): use the result chan returned from SweepInput.
_, err = h.Sweeper.SweepInput(
inp,
sweep.Params{
Budget: budget,
// For second level success tx, there's no rush to get
// it confirmed, so we use a nil deadline.
DeadlineHeight: fn.None[int32](),
},
)
return err
}
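deriveWaitHeight and hasCLTV are defined elsewhere in the package and do not appear in this diff. The behaviour assumed by the code above is roughly: maturity is the spend height plus the CSV delay, and for lease-enforced (CLTV-gated) channels the wait extends to the lease expiry if that is later. A hedged sketch of that rule, not the actual implementation:
// deriveWaitHeightSketch is a hypothetical illustration of what the
// helper used above is assumed to compute.
func deriveWaitHeightSketch(csvDelay, spendHeight, leaseExpiry uint32,
    hasCLTV bool) uint32 {

    waitHeight := spendHeight + csvDelay

    // For lease-enforced channels the output additionally cannot be
    // swept before the lease (CLTV) expiry.
    if hasCLTV && leaseExpiry > waitHeight {
        waitHeight = leaseExpiry
    }

    return waitHeight
}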
// resolveLegacySuccessTx handles an HTLC output from a pre-anchor type channel
// by broadcasting the second-level success transaction.
func (h *htlcSuccessResolver) resolveLegacySuccessTx() error {
// We'll publish the second-level transaction directly and
// offer the resolution to the nursery to handle.
h.log.Infof("broadcasting legacy second-level success tx: %v",
h.htlcResolution.SignedSuccessTx.TxHash())
// We'll now broadcast the second layer transaction so we can kick off
// the claiming process.
//
// TODO(yy): offer it to the sweeper instead.
label := labels.MakeLabel(
labels.LabelTypeChannelClose, &h.ShortChanID,
)
err := h.PublishTx(h.htlcResolution.SignedSuccessTx, label)
if err != nil {
return err
}
// Fast-forward to resolve the output from the success tx if it has
// already been sent to the UtxoNursery.
if h.outputIncubating {
return h.resolveSuccessTxOutput(h.htlcResolution.ClaimOutpoint)
}
h.log.Infof("incubating incoming htlc output")
// Send the output to the incubator.
err = h.IncubateOutputs(
h.ChanPoint, fn.None[lnwallet.OutgoingHtlcResolution](),
fn.Some(h.htlcResolution),
h.broadcastHeight, fn.Some(int32(h.htlc.RefundTimeout)),
)
if err != nil {
return err
}
// Mark the output as incubating and checkpoint it.
h.outputIncubating = true
if err := h.Checkpoint(h); err != nil {
return err
}
// Move to resolve the output.
return h.resolveSuccessTxOutput(h.htlcResolution.ClaimOutpoint)
}
// resolveSuccessTx waits for the sweeping tx of the second-level success tx to
// confirm and offers the output from the success tx to the sweeper.
func (h *htlcSuccessResolver) resolveSuccessTx() error {
h.log.Infof("waiting for 2nd-level HTLC success transaction to confirm")
// Create aliases to make the code more readable.
outpoint := h.htlcResolution.SignedSuccessTx.TxIn[0].PreviousOutPoint
pkScript := h.htlcResolution.SignDetails.SignDesc.Output.PkScript
// Wait for the second level transaction to confirm.
commitSpend, err := waitForSpend(
&outpoint, pkScript, h.broadcastHeight, h.Notifier, h.quit,
)
if err != nil {
return err
}
// We'll use this input index to determine the second-level output
// index on the transaction, as the signatures require the indexes to
// be the same. We don't look for the second-level output script
// directly, as there might be more than one HTLC output to the same
// pkScript.
op := wire.OutPoint{
Hash: *commitSpend.SpenderTxHash,
Index: commitSpend.SpenderInputIndex,
}
// If the 2nd-stage sweeping has already been started, we can
// fast-forward to start the resolving process for the stage two
// output.
if h.outputIncubating {
return h.resolveSuccessTxOutput(op)
}
// Now that the second-level transaction has confirmed, we checkpoint
// the state so we'll go to the next stage in case of restarts.
h.outputIncubating = true
if err := h.Checkpoint(h); err != nil {
log.Errorf("unable to Checkpoint: %v", err)
return err
}
h.log.Infof("2nd-level HTLC success tx=%v confirmed",
commitSpend.SpenderTxHash)
// Send the sweep request for the output from the success tx.
if err := h.sweepSuccessTxOutput(); err != nil {
return err
}
return h.resolveSuccessTxOutput(op)
}
// resolveSuccessTxOutput waits for the spend of the output from the 2nd-level
// success tx.
func (h *htlcSuccessResolver) resolveSuccessTxOutput(op wire.OutPoint) error {
// To wrap this up, we'll wait until the second-level transaction has
// been spent, then fully resolve the contract.
log.Infof("%T(%x): waiting for second-level HTLC output to be spent "+
"after csv_delay=%v", h, h.htlc.RHash[:],
h.htlcResolution.CsvDelay)
spend, err := waitForSpend(
&op, h.htlcResolution.SweepSignDesc.Output.PkScript,
h.broadcastHeight, h.Notifier, h.quit,
)
if err != nil {
return err
}
h.reportLock.Lock()
h.currentReport.RecoveredBalance = h.currentReport.LimboBalance
h.currentReport.LimboBalance = 0
h.reportLock.Unlock()
return h.checkpointClaim(spend.SpenderTxHash)
}
// Launch creates an input based on the details of the incoming htlc resolution
// and offers it to the sweeper.
func (h *htlcSuccessResolver) Launch() error {
if h.isLaunched() {
h.log.Tracef("already launched")
return nil
}
h.log.Debugf("launching resolver...")
h.markLaunched()
switch {
// If we're already resolved, then we can exit early.
case h.IsResolved():
h.log.Errorf("already resolved")
return nil
// If this is an output on the remote party's commitment transaction,
// use the direct-spend path.
case h.isRemoteCommitOutput():
return h.sweepRemoteCommitOutput()
// If this is an anchor type channel, we now sweep either the
// second-level success tx or the output from the second-level success
// tx.
case h.isZeroFeeOutput():
// If the second-level success tx has already been swept, we
// can go ahead and sweep its output.
if h.outputIncubating {
return h.sweepSuccessTxOutput()
}
// Otherwise, sweep the second level tx.
return h.sweepSuccessTx()
// If this is a legacy channel type, the output is handled by the
// nursery via the Resolve so we do nothing here.
//
// TODO(yy): handle the legacy output by offering it to the sweeper.
default:
return nil
}
}
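For orientation, the Launch/Resolve split introduced here means a caller first launches the resolver (non-blocking sweep requests) and then resolves it (blocking waits). A caller-side sketch using only the methods visible in this diff; in lnd the channel arbitrator plays this role, so the wiring below is illustrative rather than the arbitrator's actual code:
// driveResolver is a hypothetical caller-side sketch of the new
// Launch/Resolve flow; real error handling and shutdown are omitted.
func driveResolver(r ContractResolver) error {
    // Hand the relevant inputs to the sweeper without blocking.
    if err := r.Launch(); err != nil {
        return err
    }

    // Block until this stage is resolved; a non-nil resolver means
    // there is a follow-up stage to drive.
    next, err := r.Resolve()
    if err != nil {
        return err
    }
    if next != nil {
        return driveResolver(next)
    }

    return nil
}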


@ -5,6 +5,7 @@ import (
"fmt"
"reflect"
"testing"
"time"
"github.com/btcsuite/btcd/btcutil"
"github.com/btcsuite/btcd/chaincfg/chainhash"
@ -20,6 +21,7 @@ import (
"github.com/lightningnetwork/lnd/lntest/mock"
"github.com/lightningnetwork/lnd/lnwallet"
"github.com/lightningnetwork/lnd/lnwire"
"github.com/stretchr/testify/require"
)
var testHtlcAmt = lnwire.MilliSatoshi(200000)
@ -39,6 +41,15 @@ type htlcResolverTestContext struct {
t *testing.T
}
func newHtlcResolverTestContextFromReader(t *testing.T,
newResolver func(htlc channeldb.HTLC,
cfg ResolverConfig) ContractResolver) *htlcResolverTestContext {
ctx := newHtlcResolverTestContext(t, newResolver)
return ctx
}
func newHtlcResolverTestContext(t *testing.T,
newResolver func(htlc channeldb.HTLC,
cfg ResolverConfig) ContractResolver) *htlcResolverTestContext {
@ -133,8 +144,12 @@ func newHtlcResolverTestContext(t *testing.T,
func (i *htlcResolverTestContext) resolve() {
// Start resolver.
i.resolverResultChan = make(chan resolveResult, 1)
go func() {
nextResolver, err := i.resolver.Resolve(false)
err := i.resolver.Launch()
require.NoError(i.t, err)
nextResolver, err := i.resolver.Resolve()
i.resolverResultChan <- resolveResult{
nextResolver: nextResolver,
err: err,
@ -192,6 +207,7 @@ func TestHtlcSuccessSingleStage(t *testing.T) {
// sweeper.
details := &chainntnfs.SpendDetail{
SpendingTx: sweepTx,
SpentOutPoint: &htlcOutpoint,
SpenderTxHash: &sweepTxid,
}
ctx.notifier.SpendChan <- details
@ -215,8 +231,8 @@ func TestHtlcSuccessSingleStage(t *testing.T) {
)
}
// TestSecondStageResolution tests successful sweep of a second stage htlc
// claim, going through the Nursery.
// TestHtlcSuccessSecondStageResolution tests successful sweep of a second
// stage htlc claim, going through the Nursery.
func TestHtlcSuccessSecondStageResolution(t *testing.T) {
commitOutpoint := wire.OutPoint{Index: 2}
htlcOutpoint := wire.OutPoint{Index: 3}
@ -279,6 +295,7 @@ func TestHtlcSuccessSecondStageResolution(t *testing.T) {
ctx.notifier.SpendChan <- &chainntnfs.SpendDetail{
SpendingTx: sweepTx,
SpentOutPoint: &htlcOutpoint,
SpenderTxHash: &sweepHash,
}
@ -302,6 +319,8 @@ func TestHtlcSuccessSecondStageResolution(t *testing.T) {
// TestHtlcSuccessSecondStageResolutionSweeper test that a resolver with
// non-nil SignDetails will offer the second-level transaction to the sweeper
// for re-signing.
//
//nolint:ll
func TestHtlcSuccessSecondStageResolutionSweeper(t *testing.T) {
commitOutpoint := wire.OutPoint{Index: 2}
htlcOutpoint := wire.OutPoint{Index: 3}
@ -399,7 +418,20 @@ func TestHtlcSuccessSecondStageResolutionSweeper(t *testing.T) {
_ bool) error {
resolver := ctx.resolver.(*htlcSuccessResolver)
inp := <-resolver.Sweeper.(*mockSweeper).sweptInputs
var (
inp input.Input
ok bool
)
select {
case inp, ok = <-resolver.Sweeper.(*mockSweeper).sweptInputs:
require.True(t, ok)
case <-time.After(1 * time.Second):
t.Fatal("expected input to be swept")
}
op := inp.OutPoint()
if op != commitOutpoint {
return fmt.Errorf("outpoint %v swept, "+
@ -412,6 +444,7 @@ func TestHtlcSuccessSecondStageResolutionSweeper(t *testing.T) {
SpenderTxHash: &reSignedHash,
SpenderInputIndex: 1,
SpendingHeight: 10,
SpentOutPoint: &commitOutpoint,
}
return nil
},
@ -434,17 +467,37 @@ func TestHtlcSuccessSecondStageResolutionSweeper(t *testing.T) {
SpenderTxHash: &reSignedHash,
SpenderInputIndex: 1,
SpendingHeight: 10,
SpentOutPoint: &commitOutpoint,
}
}
ctx.notifier.EpochChan <- &chainntnfs.BlockEpoch{
Height: 13,
}
// We expect it to sweep the second-level
// transaction we notified about above.
resolver := ctx.resolver.(*htlcSuccessResolver)
inp := <-resolver.Sweeper.(*mockSweeper).sweptInputs
// Mock `waitForSpend` to return the commit
// spend.
ctx.notifier.SpendChan <- &chainntnfs.SpendDetail{
SpendingTx: reSignedSuccessTx,
SpenderTxHash: &reSignedHash,
SpenderInputIndex: 1,
SpendingHeight: 10,
SpentOutPoint: &commitOutpoint,
}
var (
inp input.Input
ok bool
)
select {
case inp, ok = <-resolver.Sweeper.(*mockSweeper).sweptInputs:
require.True(t, ok)
case <-time.After(1 * time.Second):
t.Fatal("expected input to be swept")
}
op := inp.OutPoint()
exp := wire.OutPoint{
Hash: reSignedHash,
@ -461,6 +514,7 @@ func TestHtlcSuccessSecondStageResolutionSweeper(t *testing.T) {
SpendingTx: sweepTx,
SpenderTxHash: &sweepHash,
SpendingHeight: 14,
SpentOutPoint: &op,
}
return nil
@ -508,11 +562,14 @@ func testHtlcSuccess(t *testing.T, resolution lnwallet.IncomingHtlcResolution,
// for the next portion of the test.
ctx := newHtlcResolverTestContext(t,
func(htlc channeldb.HTLC, cfg ResolverConfig) ContractResolver {
return &htlcSuccessResolver{
r := &htlcSuccessResolver{
contractResolverKit: *newContractResolverKit(cfg),
htlc: htlc,
htlcResolution: resolution,
}
r.initLogger("htlcSuccessResolver")
return r
},
)
@ -562,11 +619,11 @@ func runFromCheckpoint(t *testing.T, ctx *htlcResolverTestContext,
var resolved, incubating bool
if h, ok := resolver.(*htlcSuccessResolver); ok {
resolved = h.resolved.Load()
incubating = h.outputIncubating
}
if h, ok := resolver.(*htlcTimeoutResolver); ok {
resolved = h.resolved
resolved = h.resolved.Load()
incubating = h.outputIncubating
}
@ -610,7 +667,12 @@ func runFromCheckpoint(t *testing.T, ctx *htlcResolverTestContext,
checkpointedState = append(checkpointedState, b.Bytes())
nextCheckpoint++
checkpointChan <- struct{}{}
select {
case checkpointChan <- struct{}{}:
case <-time.After(1 * time.Second):
t.Fatal("checkpoint timeout")
}
return nil
}
@ -621,6 +683,8 @@ func runFromCheckpoint(t *testing.T, ctx *htlcResolverTestContext,
// preCheckpoint logic if needed.
resumed := true
for i, cp := range expectedCheckpoints {
t.Logf("Running checkpoint %d", i)
if cp.preCheckpoint != nil {
if err := cp.preCheckpoint(ctx, resumed); err != nil {
t.Fatalf("failure at stage %d: %v", i, err)
@ -629,15 +693,15 @@ func runFromCheckpoint(t *testing.T, ctx *htlcResolverTestContext,
resumed = false
// Wait for the resolver to have checkpointed its state.
<-checkpointChan
select {
case <-checkpointChan:
case <-time.After(1 * time.Second):
t.Fatalf("resolver did not checkpoint at stage %d", i)
}
}
// Wait for the resolver to fully complete.
ctx.waitForResult()
if nextCheckpoint < len(expectedCheckpoints) {
t.Fatalf("not all checkpoints hit")
}
return checkpointedState
}
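The harness above replays a resolver from every persisted checkpoint. For readability, a test case is essentially a slice of expected checkpoints; a minimal sketch of that shape, using only the field names that appear in the tests above (the concrete values are illustrative):
// exampleCheckpoints is a hypothetical test-case shape, not one of the
// actual cases in this file.
var exampleCheckpoints = []checkpoint{
    {
        // Feed mocked spend/epoch notifications here, then expect
        // the resolver to checkpoint with the flags below.
        preCheckpoint: func(ctx *htlcResolverTestContext,
            resumed bool) error {

            return nil
        },
        incubating: true,
    },
    {
        incubating: true,
        resolved:   true,
    },
}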

File diff suppressed because it is too large


@ -40,7 +40,7 @@ type mockWitnessBeacon struct {
func newMockWitnessBeacon() *mockWitnessBeacon {
return &mockWitnessBeacon{
preImageUpdates: make(chan lntypes.Preimage, 1),
newPreimages: make(chan []lntypes.Preimage),
newPreimages: make(chan []lntypes.Preimage, 1),
lookupPreimage: make(map[lntypes.Hash]lntypes.Preimage),
}
}
@ -280,7 +280,7 @@ func testHtlcTimeoutResolver(t *testing.T, testCase htlcTimeoutTestCase) {
notifier := &mock.ChainNotifier{
EpochChan: make(chan *chainntnfs.BlockEpoch),
SpendChan: make(chan *chainntnfs.SpendDetail),
SpendChan: make(chan *chainntnfs.SpendDetail, 1),
ConfChan: make(chan *chainntnfs.TxConfirmation),
}
@ -321,6 +321,7 @@ func testHtlcTimeoutResolver(t *testing.T, testCase htlcTimeoutTestCase) {
return nil
},
HtlcNotifier: &mockHTLCNotifier{},
},
PutResolverReport: func(_ kvdb.RwTx,
_ *channeldb.ResolverReport) error {
@ -356,6 +357,7 @@ func testHtlcTimeoutResolver(t *testing.T, testCase htlcTimeoutTestCase) {
Amt: testHtlcAmt,
},
}
resolver.initLogger("timeoutResolver")
var reports []*channeldb.ResolverReport
@ -390,7 +392,12 @@ func testHtlcTimeoutResolver(t *testing.T, testCase htlcTimeoutTestCase) {
go func() {
defer wg.Done()
_, err := resolver.Resolve(false)
err := resolver.Launch()
if err != nil {
resolveErr <- err
}
_, err = resolver.Resolve()
if err != nil {
resolveErr <- err
}
@ -406,8 +413,7 @@ func testHtlcTimeoutResolver(t *testing.T, testCase htlcTimeoutTestCase) {
sweepChan = mockSweeper.sweptInputs
}
// The output should be offered to either the sweeper or
// the nursery.
// The output should be offered to either the sweeper or the nursery.
select {
case <-incubateChan:
case <-sweepChan:
@ -431,6 +437,7 @@ func testHtlcTimeoutResolver(t *testing.T, testCase htlcTimeoutTestCase) {
case notifier.SpendChan <- &chainntnfs.SpendDetail{
SpendingTx: spendingTx,
SpenderTxHash: &spendTxHash,
SpentOutPoint: &testChanPoint2,
}:
case <-time.After(time.Second * 5):
t.Fatalf("failed to request spend ntfn")
@ -487,6 +494,7 @@ func testHtlcTimeoutResolver(t *testing.T, testCase htlcTimeoutTestCase) {
case notifier.SpendChan <- &chainntnfs.SpendDetail{
SpendingTx: spendingTx,
SpenderTxHash: &spendTxHash,
SpentOutPoint: &testChanPoint2,
}:
case <-time.After(time.Second * 5):
t.Fatalf("failed to request spend ntfn")
@ -524,7 +532,7 @@ func testHtlcTimeoutResolver(t *testing.T, testCase htlcTimeoutTestCase) {
wg.Wait()
// Finally, the resolver should be marked as resolved.
if !resolver.resolved.Load() {
t.Fatalf("resolver should be marked as resolved")
}
}
@ -549,6 +557,8 @@ func TestHtlcTimeoutResolver(t *testing.T) {
// TestHtlcTimeoutSingleStage tests a remote commitment confirming, and the
// local node sweeping the HTLC output directly after timeout.
//
//nolint:ll
func TestHtlcTimeoutSingleStage(t *testing.T) {
commitOutpoint := wire.OutPoint{Index: 3}
@ -573,6 +583,12 @@ func TestHtlcTimeoutSingleStage(t *testing.T) {
SpendTxID: &sweepTxid,
}
sweepSpend := &chainntnfs.SpendDetail{
SpendingTx: sweepTx,
SpentOutPoint: &commitOutpoint,
SpenderTxHash: &sweepTxid,
}
checkpoints := []checkpoint{
{
// We send a confirmation the sweep tx from published
@ -582,9 +598,10 @@ func TestHtlcTimeoutSingleStage(t *testing.T) {
// The nursery will create and publish a sweep
// tx.
select {
case ctx.notifier.SpendChan <- sweepSpend:
case <-time.After(time.Second * 5):
t.Fatalf("failed to send spend ntfn")
}
// The resolver should deliver a failure
@ -620,7 +637,9 @@ func TestHtlcTimeoutSingleStage(t *testing.T) {
// TestHtlcTimeoutSecondStage tests a local commitment being confirmed, and the
// local node claiming the HTLC output using the second-level timeout tx.
func TestHtlcTimeoutSecondStage(t *testing.T) {
//
//nolint:ll
func TestHtlcTimeoutSecondStagex(t *testing.T) {
commitOutpoint := wire.OutPoint{Index: 2}
htlcOutpoint := wire.OutPoint{Index: 3}
@ -678,23 +697,57 @@ func TestHtlcTimeoutSecondStage(t *testing.T) {
SpendTxID: &sweepHash,
}
timeoutSpend := &chainntnfs.SpendDetail{
SpendingTx: timeoutTx,
SpentOutPoint: &commitOutpoint,
SpenderTxHash: &timeoutTxid,
}
sweepSpend := &chainntnfs.SpendDetail{
SpendingTx: sweepTx,
SpentOutPoint: &htlcOutpoint,
SpenderTxHash: &sweepHash,
}
checkpoints := []checkpoint{
{
preCheckpoint: func(ctx *htlcResolverTestContext,
_ bool) error {
// Deliver spend of timeout tx.
ctx.notifier.SpendChan <- timeoutSpend
return nil
},
// Output should be handed off to the nursery.
incubating: true,
reports: []*channeldb.ResolverReport{
firstStage,
},
},
{
// We send a confirmation for our sweep tx to indicate
// that our sweep succeeded.
preCheckpoint: func(ctx *htlcResolverTestContext,
resumed bool) error {
// The nursery will publish the timeout tx.
ctx.notifier.SpendChan <- &chainntnfs.SpendDetail{
SpendingTx: timeoutTx,
SpenderTxHash: &timeoutTxid,
// When it's reloaded from disk, we need to
// re-send the notification to mock the first
// `watchHtlcSpend`.
if resumed {
// Deliver spend of timeout tx.
ctx.notifier.SpendChan <- timeoutSpend
// Deliver spend of timeout tx output.
ctx.notifier.SpendChan <- sweepSpend
return nil
}
// Deliver spend of timeout tx output.
ctx.notifier.SpendChan <- sweepSpend
// The resolver should deliver a failure
// resolution message (indicating we
// successfully timed out the HTLC).
@ -707,12 +760,6 @@ func TestHtlcTimeoutSecondStage(t *testing.T) {
t.Fatalf("resolution not sent")
}
// Deliver spend of timeout tx.
ctx.notifier.SpendChan <- &chainntnfs.SpendDetail{
SpendingTx: sweepTx,
SpenderTxHash: &sweepHash,
}
return nil
},
@ -722,7 +769,7 @@ func TestHtlcTimeoutSecondStage(t *testing.T) {
incubating: true,
resolved: true,
reports: []*channeldb.ResolverReport{
firstStage, secondState,
secondState,
},
},
}
@ -796,10 +843,6 @@ func TestHtlcTimeoutSingleStageRemoteSpend(t *testing.T) {
}
checkpoints := []checkpoint{
{
// Output should be handed off to the nursery.
incubating: true,
},
{
// We send a spend notification for a remote spend with
// the preimage.
@ -812,6 +855,7 @@ func TestHtlcTimeoutSingleStageRemoteSpend(t *testing.T) {
// the preimage.
ctx.notifier.SpendChan <- &chainntnfs.SpendDetail{
SpendingTx: spendTx,
SpentOutPoint: &commitOutpoint,
SpenderTxHash: &spendTxHash,
}
@ -847,7 +891,7 @@ func TestHtlcTimeoutSingleStageRemoteSpend(t *testing.T) {
// After the success tx has confirmed, we expect the
// checkpoint to be resolved, and with the above
// report.
incubating: true,
incubating: false,
resolved: true,
reports: []*channeldb.ResolverReport{
claim,
@ -914,6 +958,7 @@ func TestHtlcTimeoutSecondStageRemoteSpend(t *testing.T) {
ctx.notifier.SpendChan <- &chainntnfs.SpendDetail{
SpendingTx: remoteSuccessTx,
SpentOutPoint: &commitOutpoint,
SpenderTxHash: &successTxid,
}
@ -967,20 +1012,15 @@ func TestHtlcTimeoutSecondStageRemoteSpend(t *testing.T) {
// TestHtlcTimeoutSecondStageSweeper tests that for anchor channels, when a
// local commitment confirms, the timeout tx is handed to the sweeper to claim
// the HTLC output.
//
//nolint:ll
func TestHtlcTimeoutSecondStageSweeper(t *testing.T) {
commitOutpoint := wire.OutPoint{Index: 2}
htlcOutpoint := wire.OutPoint{Index: 3}
sweepTx := &wire.MsgTx{
TxIn: []*wire.TxIn{{}},
TxOut: []*wire.TxOut{{}},
}
sweepHash := sweepTx.TxHash()
timeoutTx := &wire.MsgTx{
TxIn: []*wire.TxIn{
{
PreviousOutPoint: commitOutpoint,
PreviousOutPoint: htlcOutpoint,
},
},
TxOut: []*wire.TxOut{
@ -1027,11 +1067,16 @@ func TestHtlcTimeoutSecondStageSweeper(t *testing.T) {
},
}
reSignedHash := reSignedTimeoutTx.TxHash()
reSignedOutPoint := wire.OutPoint{
timeoutTxOutpoint := wire.OutPoint{
Hash: reSignedHash,
Index: 1,
}
// Make a copy so `isPreimageSpend` can easily pass.
sweepTx := reSignedTimeoutTx.Copy()
sweepHash := sweepTx.TxHash()
// twoStageResolution is a resolution for a htlc on the local
// party's commitment, where the timeout tx can be re-signed.
twoStageResolution := lnwallet.OutgoingHtlcResolution{
@ -1045,7 +1090,7 @@ func TestHtlcTimeoutSecondStageSweeper(t *testing.T) {
}
firstStage := &channeldb.ResolverReport{
OutPoint: commitOutpoint,
OutPoint: htlcOutpoint,
Amount: testHtlcAmt.ToSatoshis(),
ResolverType: channeldb.ResolverTypeOutgoingHtlc,
ResolverOutcome: channeldb.ResolverOutcomeFirstStage,
@ -1053,12 +1098,45 @@ func TestHtlcTimeoutSecondStageSweeper(t *testing.T) {
}
secondState := &channeldb.ResolverReport{
OutPoint: reSignedOutPoint,
OutPoint: timeoutTxOutpoint,
Amount: btcutil.Amount(testSignDesc.Output.Value),
ResolverType: channeldb.ResolverTypeOutgoingHtlc,
ResolverOutcome: channeldb.ResolverOutcomeTimeout,
SpendTxID: &sweepHash,
}
// mockTimeoutTxSpend is a helper closure to mock `waitForSpend` to
// return the commit spend in `sweepTimeoutTxOutput`.
mockTimeoutTxSpend := func(ctx *htlcResolverTestContext) {
select {
case ctx.notifier.SpendChan <- &chainntnfs.SpendDetail{
SpendingTx: reSignedTimeoutTx,
SpenderInputIndex: 1,
SpenderTxHash: &reSignedHash,
SpendingHeight: 10,
SpentOutPoint: &htlcOutpoint,
}:
case <-time.After(time.Second * 1):
t.Fatalf("spend not sent")
}
}
// mockSweepTxSpend is a helper closure to mock `waitForSpend` to
// return the sweep tx spend of the timeout tx output.
mockSweepTxSpend := func(ctx *htlcResolverTestContext) {
select {
case ctx.notifier.SpendChan <- &chainntnfs.SpendDetail{
SpendingTx: sweepTx,
SpenderInputIndex: 1,
SpenderTxHash: &sweepHash,
SpendingHeight: 10,
SpentOutPoint: &timeoutTxOutpoint,
}:
case <-time.After(time.Second * 1):
t.Fatalf("spend not sent")
}
}
checkpoints := []checkpoint{
{
@ -1067,28 +1145,40 @@ func TestHtlcTimeoutSecondStageSweeper(t *testing.T) {
_ bool) error {
resolver := ctx.resolver.(*htlcTimeoutResolver)
inp := <-resolver.Sweeper.(*mockSweeper).sweptInputs
op := inp.OutPoint()
if op != commitOutpoint {
return fmt.Errorf("outpoint %v swept, "+
"expected %v", op,
commitOutpoint)
var (
inp input.Input
ok bool
)
select {
case inp, ok = <-resolver.Sweeper.(*mockSweeper).sweptInputs:
require.True(t, ok)
case <-time.After(1 * time.Second):
t.Fatal("expected input to be swept")
}
// Emulate the sweeper spending using the
// re-signed timeout tx.
ctx.notifier.SpendChan <- &chainntnfs.SpendDetail{
SpendingTx: reSignedTimeoutTx,
SpenderInputIndex: 1,
SpenderTxHash: &reSignedHash,
SpendingHeight: 10,
op := inp.OutPoint()
if op != htlcOutpoint {
return fmt.Errorf("outpoint %v swept, "+
"expected %v", op, htlcOutpoint)
}
// Mock `waitForSpend` twice, called in,
// - `resolveReSignedTimeoutTx`
// - `sweepTimeoutTxOutput`.
mockTimeoutTxSpend(ctx)
mockTimeoutTxSpend(ctx)
return nil
},
// incubating=true is used to signal that the
// second-level transaction was confirmed.
incubating: true,
reports: []*channeldb.ResolverReport{
firstStage,
},
},
{
// We send a confirmation for our sweep tx to indicate
@ -1096,18 +1186,18 @@ func TestHtlcTimeoutSecondStageSweeper(t *testing.T) {
preCheckpoint: func(ctx *htlcResolverTestContext,
resumed bool) error {
// If we are resuming from a checkpoint, we
// expect the resolver to re-subscribe to a
// spend, hence we must resend it.
// Mock `waitForSpend` to return the commit
// spend.
if resumed {
ctx.notifier.SpendChan <- &chainntnfs.SpendDetail{
SpendingTx: reSignedTimeoutTx,
SpenderInputIndex: 1,
SpenderTxHash: &reSignedHash,
SpendingHeight: 10,
}
mockTimeoutTxSpend(ctx)
mockTimeoutTxSpend(ctx)
mockSweepTxSpend(ctx)
return nil
}
mockSweepTxSpend(ctx)
// The resolver should deliver a failure
// resolution message (indicating we
// successfully timed out the HTLC).
@ -1120,15 +1210,23 @@ func TestHtlcTimeoutSecondStageSweeper(t *testing.T) {
t.Fatalf("resolution not sent")
}
// Mimic CSV lock expiring.
ctx.notifier.EpochChan <- &chainntnfs.BlockEpoch{
Height: 13,
}
// The timeout tx output should now be given to
// the sweeper.
resolver := ctx.resolver.(*htlcTimeoutResolver)
inp := <-resolver.Sweeper.(*mockSweeper).sweptInputs
var (
inp input.Input
ok bool
)
select {
case inp, ok = <-resolver.Sweeper.(*mockSweeper).sweptInputs:
require.True(t, ok)
case <-time.After(1 * time.Second):
t.Fatal("expected input to be swept")
}
op := inp.OutPoint()
exp := wire.OutPoint{
Hash: reSignedHash,
@ -1138,14 +1236,6 @@ func TestHtlcTimeoutSecondStageSweeper(t *testing.T) {
return fmt.Errorf("wrong outpoint swept")
}
// Notify about the spend, which should resolve
// the resolver.
ctx.notifier.SpendChan <- &chainntnfs.SpendDetail{
SpendingTx: sweepTx,
SpenderTxHash: &sweepHash,
SpendingHeight: 14,
}
return nil
},
@ -1155,7 +1245,6 @@ func TestHtlcTimeoutSecondStageSweeper(t *testing.T) {
incubating: true,
resolved: true,
reports: []*channeldb.ResolverReport{
firstStage,
secondState,
},
},
@ -1236,33 +1325,6 @@ func TestHtlcTimeoutSecondStageSweeperRemoteSpend(t *testing.T) {
}
checkpoints := []checkpoint{
{
// The output should be given to the sweeper.
preCheckpoint: func(ctx *htlcResolverTestContext,
_ bool) error {
resolver := ctx.resolver.(*htlcTimeoutResolver)
inp := <-resolver.Sweeper.(*mockSweeper).sweptInputs
op := inp.OutPoint()
if op != commitOutpoint {
return fmt.Errorf("outpoint %v swept, "+
"expected %v", op,
commitOutpoint)
}
// Emulate the remote party sweeping the output with
// the preimage.
ctx.notifier.SpendChan <- &chainntnfs.SpendDetail{
SpendingTx: spendTx,
SpenderTxHash: &spendTxHash,
}
return nil
},
// incubating=true is used to signal that the
// second-level transaction was confirmed.
incubating: true,
},
{
// We send a confirmation for our sweep tx to indicate
// that our sweep succeeded.
@ -1277,6 +1339,7 @@ func TestHtlcTimeoutSecondStageSweeperRemoteSpend(t *testing.T) {
ctx.notifier.SpendChan <- &chainntnfs.SpendDetail{
SpendingTx: spendTx,
SpenderTxHash: &spendTxHash,
SpentOutPoint: &commitOutpoint,
}
}
@ -1314,7 +1377,7 @@ func TestHtlcTimeoutSecondStageSweeperRemoteSpend(t *testing.T) {
// After the sweep has confirmed, we expect the
// checkpoint to be resolved, and with the above
// reports.
incubating: true,
incubating: false,
resolved: true,
reports: []*channeldb.ResolverReport{
claim,
@ -1339,21 +1402,26 @@ func testHtlcTimeout(t *testing.T, resolution lnwallet.OutgoingHtlcResolution,
// for the next portion of the test.
ctx := newHtlcResolverTestContext(t,
func(htlc channeldb.HTLC, cfg ResolverConfig) ContractResolver {
return &htlcTimeoutResolver{
r := &htlcTimeoutResolver{
contractResolverKit: *newContractResolverKit(cfg),
htlc: htlc,
htlcResolution: resolution,
}
r.initLogger("htlcTimeoutResolver")
return r
},
)
checkpointedState := runFromCheckpoint(t, ctx, checkpoints)
t.Log("Running resolver to completion after restart")
// Now, from every checkpoint created, we re-create the resolver, and
// run the test from that checkpoint.
for i := range checkpointedState {
cp := bytes.NewReader(checkpointedState[i])
ctx := newHtlcResolverTestContext(t,
ctx := newHtlcResolverTestContextFromReader(t,
func(htlc channeldb.HTLC, cfg ResolverConfig) ContractResolver {
resolver, err := newTimeoutResolverFromReader(cp, cfg)
if err != nil {
@ -1361,7 +1429,8 @@ func testHtlcTimeout(t *testing.T, resolution lnwallet.OutgoingHtlcResolution,
}
resolver.Supplement(htlc)
resolver.htlcResolution = resolution
resolver.initLogger("htlcTimeoutResolver")
return resolver
},
)


@ -29,6 +29,11 @@ func (r *mockRegistry) NotifyExitHopHtlc(payHash lntypes.Hash,
wireCustomRecords lnwire.CustomRecords,
payload invoices.Payload) (invoices.HtlcResolution, error) {
// Exit early if the notification channel is nil.
if hodlChan == nil {
return r.notifyResolution, r.notifyErr
}
r.notifyChan <- notifyExitHopData{
hodlChan: hodlChan,
payHash: payHash,


@ -127,20 +127,30 @@ func TestTaprootBriefcase(t *testing.T) {
require.Equal(t, testCase, &decodedCase)
}
// testHtlcAuxBlobProperties is a rapid property that verifies the encoding and
// decoding of the HTLC aux blobs.
func testHtlcAuxBlobProperties(t *rapid.T) {
htlcBlobs := rapid.Make[htlcAuxBlobs]().Draw(t, "htlcAuxBlobs")
var b bytes.Buffer
require.NoError(t, htlcBlobs.Encode(&b))
decodedBlobs := newAuxHtlcBlobs()
require.NoError(t, decodedBlobs.Decode(&b))
require.Equal(t, htlcBlobs, decodedBlobs)
}
// TestHtlcAuxBlobEncodeDecode tests the encode/decode methods of the HTLC aux
// blobs.
func TestHtlcAuxBlobEncodeDecode(t *testing.T) {
t.Parallel()
rapid.Check(t, func(t *rapid.T) {
htlcBlobs := rapid.Make[htlcAuxBlobs]().Draw(t, "htlcAuxBlobs")
var b bytes.Buffer
require.NoError(t, htlcBlobs.Encode(&b))
decodedBlobs := newAuxHtlcBlobs()
require.NoError(t, decodedBlobs.Decode(&b))
require.Equal(t, htlcBlobs, decodedBlobs)
})
rapid.Check(t, testHtlcAuxBlobProperties)
}
// FuzzHtlcAuxBlobEncodeDecode tests the encode/decode methods of the HTLC
// aux blobs using the rapid derived fuzzer.
func FuzzHtlcAuxBlobEncodeDecode(f *testing.F) {
f.Fuzz(rapid.MakeFuzz(testHtlcAuxBlobProperties))
}
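With the property extracted into testHtlcAuxBlobProperties, the same check can be driven either by rapid's shrinker (the rapid.Check call above) or by Go's native fuzzing engine via rapid.MakeFuzz. Assuming the file lives in the contractcourt package (an assumption based on the surrounding tests), the fuzz target is run with the standard Go tooling:
go test -run=^$ -fuzz=FuzzHtlcAuxBlobEncodeDecode ./contractcourt/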


@ -794,7 +794,7 @@ func (u *UtxoNursery) graduateClass(classHeight uint32) error {
return err
}
utxnLog.Infof("Attempting to graduate height=%v: num_kids=%v, "+
utxnLog.Debugf("Attempting to graduate height=%v: num_kids=%v, "+
"num_babies=%v", classHeight, len(kgtnOutputs), len(cribOutputs))
// Offer the outputs to the sweeper and set up notifications that will

View file

@ -1,6 +1,6 @@
# If you change this please also update GO_VERSION in Makefile (then run
# `make lint` to see where else it needs to be updated as well).
FROM golang:1.22.6-alpine as builder
FROM golang:1.23.6-alpine as builder
LABEL maintainer="Olaoluwa Osuntokun <laolu@lightning.engineering>"


@ -4,6 +4,7 @@ import (
"bytes"
"errors"
"fmt"
"strings"
"sync"
"sync/atomic"
"time"
@ -23,10 +24,13 @@ import (
"github.com/lightningnetwork/lnd/graph"
graphdb "github.com/lightningnetwork/lnd/graph/db"
"github.com/lightningnetwork/lnd/graph/db/models"
"github.com/lightningnetwork/lnd/input"
"github.com/lightningnetwork/lnd/keychain"
"github.com/lightningnetwork/lnd/lnpeer"
"github.com/lightningnetwork/lnd/lnutils"
"github.com/lightningnetwork/lnd/lnwallet"
"github.com/lightningnetwork/lnd/lnwallet/btcwallet"
"github.com/lightningnetwork/lnd/lnwallet/chanvalidate"
"github.com/lightningnetwork/lnd/lnwire"
"github.com/lightningnetwork/lnd/multimutex"
"github.com/lightningnetwork/lnd/netann"
@ -62,6 +66,11 @@ const (
// we'll maintain. This is the global size across all peers. We'll
// allocate ~3 MB max to the cache.
maxRejectedUpdates = 10_000
// DefaultProofMatureDelta specifies the default value used for
// ProofMatureDelta, which is the number of confirmations needed before
// processing the announcement signatures.
DefaultProofMatureDelta = 6
)
var (
@ -74,6 +83,23 @@ var (
// the remote peer.
ErrGossipSyncerNotFound = errors.New("gossip syncer not found")
// ErrNoFundingTransaction is returned when we are unable to find the
// funding transaction described by the short channel ID on chain.
ErrNoFundingTransaction = errors.New(
"unable to find the funding transaction",
)
// ErrInvalidFundingOutput is returned if the channel funding output
// fails validation.
ErrInvalidFundingOutput = errors.New(
"channel funding output validation failed",
)
// ErrChannelSpent is returned when we go to validate a channel, but
// the purported funding output has actually already been spent on
// chain.
ErrChannelSpent = errors.New("channel output has been spent")
// emptyPubkey is used to compare compressed pubkeys against an empty
// byte array.
emptyPubkey [33]byte
@ -359,6 +385,11 @@ type Config struct {
// updates for a channel and returns true if the channel should be
// considered a zombie based on these timestamps.
IsStillZombieChannel func(time.Time, time.Time) bool
// AssumeChannelValid toggles whether the gossiper will check for
// spent-ness of channel outpoints. For neutrino, this saves long
// rescans from blocking initial usage of the daemon.
AssumeChannelValid bool
}
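For operators, this field is assumed to be wired to the existing routing.assumechanvalid setting (an assumption; the config plumbing is outside this diff). Under that assumption, a neutrino node that wants to skip the spent-ness checks would start lnd with:
lnd --routing.assumechanvalid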
// processedNetworkMsg is a wrapper around networkMsg and a boolean. It is
@ -512,6 +543,9 @@ type AuthenticatedGossiper struct {
// AuthenticatedGossiper lock.
chanUpdateRateLimiter map[uint64][2]*rate.Limiter
// vb is used to enforce job dependency ordering of gossip messages.
vb *ValidationBarrier
sync.Mutex
}
@ -537,6 +571,8 @@ func New(cfg Config, selfKeyDesc *keychain.KeyDescriptor) *AuthenticatedGossiper
banman: newBanman(),
}
gossiper.vb = NewValidationBarrier(1000, gossiper.quit)
gossiper.syncMgr = newSyncManager(&SyncManagerCfg{
ChainHash: cfg.ChainHash,
ChanSeries: cfg.ChanSeries,
@ -808,6 +844,8 @@ func (d *AuthenticatedGossiper) stop() {
func (d *AuthenticatedGossiper) ProcessRemoteAnnouncement(msg lnwire.Message,
peer lnpeer.Peer) chan error {
log.Debugf("Processing remote msg %T from peer=%x", msg, peer.PubKey())
errChan := make(chan error, 1)
// For messages in the known set of channel series queries, we'll
@ -830,9 +868,13 @@ func (d *AuthenticatedGossiper) ProcessRemoteAnnouncement(msg lnwire.Message,
// If we've found the message target, then we'll dispatch the
// message directly to it.
syncer.ProcessQueryMsg(m, peer.QuitSignal())
err := syncer.ProcessQueryMsg(m, peer.QuitSignal())
if err != nil {
log.Errorf("Process query msg from peer %x got %v",
peer.PubKey(), err)
}
errChan <- nil
errChan <- err
return errChan
// If a peer is updating its current update horizon, then we'll dispatch
@ -1398,10 +1440,6 @@ func (d *AuthenticatedGossiper) networkHandler() {
log.Errorf("Unable to rebroadcast stale announcements: %v", err)
}
// We'll use this validation to ensure that we process jobs in their
// dependency order during parallel validation.
validationBarrier := graph.NewValidationBarrier(1000, d.quit)
for {
select {
// A new policy update has arrived. We'll commit it to the
@ -1470,11 +1508,17 @@ func (d *AuthenticatedGossiper) networkHandler() {
// We'll set up any dependents, and wait until a free
// slot for this job opens up; this allows us to not
// have thousands of goroutines active.
validationBarrier.InitJobDependencies(announcement.msg)
annJobID, err := d.vb.InitJobDependencies(
announcement.msg,
)
if err != nil {
announcement.err <- err
continue
}
d.wg.Add(1)
go d.handleNetworkMessages(
announcement, &announcements, validationBarrier,
announcement, &announcements, annJobID,
)
// The trickle timer has ticked, which indicates we should
@ -1525,10 +1569,10 @@ func (d *AuthenticatedGossiper) networkHandler() {
//
// NOTE: must be run as a goroutine.
func (d *AuthenticatedGossiper) handleNetworkMessages(nMsg *networkMsg,
deDuped *deDupedAnnouncements, vb *graph.ValidationBarrier) {
deDuped *deDupedAnnouncements, jobID JobID) {
defer d.wg.Done()
defer vb.CompleteJob()
defer d.vb.CompleteJob()
// We should only broadcast this message forward if it originated from
// us or it wasn't received as part of our initial historical sync.
@ -1536,17 +1580,12 @@ func (d *AuthenticatedGossiper) handleNetworkMessages(nMsg *networkMsg,
// If this message has an existing dependency, then we'll wait until
// that has been fully validated before we proceed.
err := vb.WaitForDependants(nMsg.msg)
err := d.vb.WaitForParents(jobID, nMsg.msg)
if err != nil {
log.Debugf("Validating network message %s got err: %v",
nMsg.msg.MsgType(), err)
if !graph.IsError(
err,
graph.ErrVBarrierShuttingDown,
graph.ErrParentValidationFailed,
) {
if errors.Is(err, ErrVBarrierShuttingDown) {
log.Warnf("unexpected error during validation "+
"barrier shutdown: %v", err)
}
@ -1566,7 +1605,16 @@ func (d *AuthenticatedGossiper) handleNetworkMessages(nMsg *networkMsg,
// If this message had any dependencies, then we can now signal them to
// continue.
vb.SignalDependants(nMsg.msg, allow)
err = d.vb.SignalDependents(nMsg.msg, jobID)
if err != nil {
// Something is wrong if SignalDependents returns an error.
log.Errorf("SignalDependents returned error for msg=%v with "+
"JobID=%v", spew.Sdump(nMsg.msg), jobID)
nMsg.err <- err
return
}
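Taken together, the barrier calls used in this function follow a register, wait, work, signal pattern. A hedged sketch of that flow, using only the ValidationBarrier methods that appear in this diff (whether a ChannelUpdate's job actually registers its ChannelAnnouncement's job as a parent is an assumption about the barrier's internals):
// processWithBarrier is a hypothetical flow, not actual gossiper code:
// register a job, wait on its parents, validate, then release dependents.
func processWithBarrier(vb *ValidationBarrier, msg lnwire.Message) error {
    jobID, err := vb.InitJobDependencies(msg)
    if err != nil {
        return err
    }
    defer vb.CompleteJob()

    // Block until any parent job, e.g. the ChannelAnnouncement that a
    // ChannelUpdate depends on, has finished validating.
    if err := vb.WaitForParents(jobID, msg); err != nil {
        return err
    }

    // ... validate msg here ...

    // Unblock any jobs that registered this one as a parent.
    return vb.SignalDependents(msg, jobID)
}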
// If the announcement was accepted, then add the emitted announcements
// to our announce batch to be broadcast once the trickle timer ticks
@ -1955,26 +2003,12 @@ func (d *AuthenticatedGossiper) fetchPKScript(chanID *lnwire.ShortChannelID) (
func (d *AuthenticatedGossiper) addNode(msg *lnwire.NodeAnnouncement,
op ...batch.SchedulerOption) error {
if err := graph.ValidateNodeAnn(msg); err != nil {
if err := netann.ValidateNodeAnn(msg); err != nil {
return fmt.Errorf("unable to validate node announcement: %w",
err)
}
timestamp := time.Unix(int64(msg.Timestamp), 0)
features := lnwire.NewFeatureVector(msg.Features, lnwire.Features)
node := &models.LightningNode{
HaveNodeAnnouncement: true,
LastUpdate: timestamp,
Addresses: msg.Addresses,
PubKeyBytes: msg.NodeID,
Alias: msg.Alias.String(),
AuthSigBytes: msg.Signature.ToSignatureBytes(),
Features: features,
Color: msg.RGBColor,
ExtraOpaqueData: msg.ExtraOpaqueData,
}
return d.cfg.Graph.AddNode(node, op...)
return d.cfg.Graph.AddNode(models.NodeFromWireAnnouncement(msg), op...)
}
// isPremature decides whether a given network message has a block height+delta
@ -1984,8 +2018,14 @@ func (d *AuthenticatedGossiper) addNode(msg *lnwire.NodeAnnouncement,
// NOTE: must be used inside a lock.
func (d *AuthenticatedGossiper) isPremature(chanID lnwire.ShortChannelID,
delta uint32, msg *networkMsg) bool {
// TODO(roasbeef) make height delta 6
// * or configurable
// The channel is already confirmed at chanID.BlockHeight so we subtract
// one block. For instance, if the required confirmation for this
// channel announcement is 6, we then only need to wait for 5 more
// blocks once the funding tx is confirmed.
if delta > 0 {
delta--
}
msgHeight := chanID.BlockHeight + delta
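A worked example of the adjustment above, with illustrative numbers: for a channel whose SCID encodes block height 800,000 and a required delta of 6 confirmations, the message stops being premature at height 800,005 rather than 800,006, because the confirmation at the funding height itself already counts:
// Illustrative arithmetic for the delta adjustment above.
func prematureExampleHeight() uint32 {
    blockHeight := uint32(800_000) // height encoded in the SCID
    delta := uint32(6)             // required confirmations

    if delta > 0 {
        delta--
    }

    return blockHeight + delta // 800_005
}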
@ -2058,7 +2098,7 @@ func (d *AuthenticatedGossiper) processNetworkAnnouncement(
// the existence of a channel and not yet the routing policies in
// either direction of the channel.
case *lnwire.ChannelAnnouncement1:
return d.handleChanAnnouncement(nMsg, msg, schedulerOp)
return d.handleChanAnnouncement(nMsg, msg, schedulerOp...)
// A new authenticated channel edge update has arrived. This indicates
// that the directional information for an already known channel has
@ -2371,7 +2411,8 @@ func (d *AuthenticatedGossiper) handleNodeAnnouncement(nMsg *networkMsg,
timestamp := time.Unix(int64(nodeAnn.Timestamp), 0)
log.Debugf("Processing NodeAnnouncement: peer=%v, timestamp=%v, "+
"node=%x", nMsg.peer, timestamp, nodeAnn.NodeID)
"node=%x, source=%x", nMsg.peer, timestamp, nodeAnn.NodeID,
nMsg.source.SerializeCompressed())
// We'll quickly ask the router if it already has a newer update for
// this node so we can skip validating signatures if not required.
@ -2389,7 +2430,6 @@ func (d *AuthenticatedGossiper) handleNodeAnnouncement(nMsg *networkMsg,
err,
graph.ErrOutdated,
graph.ErrIgnored,
graph.ErrVBarrierShuttingDown,
) {
log.Error(err)
@ -2430,7 +2470,8 @@ func (d *AuthenticatedGossiper) handleNodeAnnouncement(nMsg *networkMsg,
// TODO(roasbeef): get rid of the above
log.Debugf("Processed NodeAnnouncement: peer=%v, timestamp=%v, "+
"node=%x", nMsg.peer, timestamp, nodeAnn.NodeID)
"node=%x, source=%x", nMsg.peer, timestamp, nodeAnn.NodeID,
nMsg.source.SerializeCompressed())
return announcements, true
}
@ -2438,7 +2479,7 @@ func (d *AuthenticatedGossiper) handleNodeAnnouncement(nMsg *networkMsg,
// handleChanAnnouncement processes a new channel announcement.
func (d *AuthenticatedGossiper) handleChanAnnouncement(nMsg *networkMsg,
ann *lnwire.ChannelAnnouncement1,
ops []batch.SchedulerOption) ([]networkMsg, bool) {
ops ...batch.SchedulerOption) ([]networkMsg, bool) {
scid := ann.ShortChannelID
@ -2599,6 +2640,7 @@ func (d *AuthenticatedGossiper) handleChanAnnouncement(nMsg *networkMsg,
// If there were any optional message fields provided, we'll include
// them in its serialized disk representation now.
var tapscriptRoot fn.Option[chainhash.Hash]
if nMsg.optionalMsgFields != nil {
if nMsg.optionalMsgFields.capacity != nil {
edge.Capacity = *nMsg.optionalMsgFields.capacity
@ -2609,7 +2651,127 @@ func (d *AuthenticatedGossiper) handleChanAnnouncement(nMsg *networkMsg,
}
// Optional tapscript root for custom channels.
edge.TapscriptRoot = nMsg.optionalMsgFields.tapscriptRoot
tapscriptRoot = nMsg.optionalMsgFields.tapscriptRoot
}
// Before we start validation or add the edge to the database, we obtain
// the mutex for this channel ID. We do this to ensure no other
// goroutine has read the database and is now making decisions based on
// this DB state, before it writes to the DB. It also ensures that we
// don't perform the expensive validation check on the same channel
// announcement at the same time.
d.channelMtx.Lock(scid.ToUint64())
// If AssumeChannelValid is present, then we are unable to perform any
// of the expensive checks below, so we'll short-circuit our path
// straight to adding the edge to our graph. If the passed
// ShortChannelID is an alias, then we'll skip validation as it will
// not map to a legitimate tx. This is not a DoS vector as only we can
// add an alias ChannelAnnouncement from the gossiper.
if !(d.cfg.AssumeChannelValid || d.cfg.IsAlias(scid)) { //nolint:nestif
op, capacity, script, err := d.validateFundingTransaction(
ann, tapscriptRoot,
)
if err != nil {
defer d.channelMtx.Unlock(scid.ToUint64())
switch {
case errors.Is(err, ErrNoFundingTransaction),
errors.Is(err, ErrInvalidFundingOutput):
key := newRejectCacheKey(
scid.ToUint64(),
sourceToPub(nMsg.source),
)
_, _ = d.recentRejects.Put(
key, &cachedReject{},
)
// Increment the peer's ban score. We check
// isRemote so we don't actually ban the peer in
// case of a local bug.
if nMsg.isRemote {
d.banman.incrementBanScore(
nMsg.peer.PubKey(),
)
}
case errors.Is(err, ErrChannelSpent):
key := newRejectCacheKey(
scid.ToUint64(),
sourceToPub(nMsg.source),
)
_, _ = d.recentRejects.Put(key, &cachedReject{})
// Since this channel has already been closed,
// we'll add it to the graph's closed channel
// index such that we won't attempt to do
// expensive validation checks on it again.
// TODO: Populate the ScidCloser by using closed
// channel notifications.
dbErr := d.cfg.ScidCloser.PutClosedScid(scid)
if dbErr != nil {
log.Errorf("failed to mark scid(%v) "+
"as closed: %v", scid, dbErr)
nMsg.err <- dbErr
return nil, false
}
// Increment the peer's ban score. We check
// isRemote so we don't accidentally ban
// ourselves in case of a bug.
if nMsg.isRemote {
d.banman.incrementBanScore(
nMsg.peer.PubKey(),
)
}
default:
// Otherwise, this is just a regular rejected
// edge.
key := newRejectCacheKey(
scid.ToUint64(),
sourceToPub(nMsg.source),
)
_, _ = d.recentRejects.Put(key, &cachedReject{})
}
if !nMsg.isRemote {
log.Errorf("failed to add edge for local "+
"channel: %v", err)
nMsg.err <- err
return nil, false
}
shouldDc, dcErr := d.ShouldDisconnect(
nMsg.peer.IdentityKey(),
)
if dcErr != nil {
log.Errorf("failed to check if we should "+
"disconnect peer: %v", dcErr)
nMsg.err <- dcErr
return nil, false
}
if shouldDc {
nMsg.peer.Disconnect(ErrPeerBanned)
}
nMsg.err <- err
return nil, false
}
edge.FundingScript = fn.Some(script)
// TODO(roasbeef): this is a hack, needs to be removed after
// commitment fees are dynamic.
edge.Capacity = capacity
edge.ChannelPoint = op
}
log.Debugf("Adding edge for short_chan_id: %v", scid.ToUint64())
@ -2617,12 +2779,6 @@ func (d *AuthenticatedGossiper) handleChanAnnouncement(nMsg *networkMsg,
// We will add the edge to the channel router. If the nodes present in
// this channel are not present in the database, a partial node will be
// added to represent each node while we wait for a node announcement.
//
// Before we add the edge to the database, we obtain the mutex for this
// channel ID. We do this to ensure no other goroutine has read the
// database and is now making decisions based on this DB state, before
// it writes to the DB.
d.channelMtx.Lock(scid.ToUint64())
err = d.cfg.Graph.AddEdge(edge, ops...)
if err != nil {
log.Debugf("Graph rejected edge for short_chan_id(%v): %v",
@ -2633,8 +2789,7 @@ func (d *AuthenticatedGossiper) handleChanAnnouncement(nMsg *networkMsg,
// If the edge was rejected due to already being known, then it
// may be the case that this new message has a fresh channel
// proof, so we'll check.
switch {
case graph.IsError(err, graph.ErrIgnored):
if graph.IsError(err, graph.ErrIgnored) {
// Attempt to process the rejected message to see if we
// get any new announcements.
anns, rErr := d.processRejectedEdge(ann, proof)
@ -2647,6 +2802,7 @@ func (d *AuthenticatedGossiper) handleChanAnnouncement(nMsg *networkMsg,
_, _ = d.recentRejects.Put(key, cr)
nMsg.err <- rErr
return nil, false
}
@ -2662,63 +2818,15 @@ func (d *AuthenticatedGossiper) handleChanAnnouncement(nMsg *networkMsg,
nMsg.err <- nil
return anns, true
case graph.IsError(
err, graph.ErrNoFundingTransaction,
graph.ErrInvalidFundingOutput,
):
key := newRejectCacheKey(
scid.ToUint64(),
sourceToPub(nMsg.source),
)
_, _ = d.recentRejects.Put(key, &cachedReject{})
// Increment the peer's ban score. We check isRemote
// so we don't actually ban the peer in case of a local
// bug.
if nMsg.isRemote {
d.banman.incrementBanScore(nMsg.peer.PubKey())
}
case graph.IsError(err, graph.ErrChannelSpent):
key := newRejectCacheKey(
scid.ToUint64(),
sourceToPub(nMsg.source),
)
_, _ = d.recentRejects.Put(key, &cachedReject{})
// Since this channel has already been closed, we'll
// add it to the graph's closed channel index such that
// we won't attempt to do expensive validation checks
// on it again.
// TODO: Populate the ScidCloser by using closed
// channel notifications.
dbErr := d.cfg.ScidCloser.PutClosedScid(scid)
if dbErr != nil {
log.Errorf("failed to mark scid(%v) as "+
"closed: %v", scid, dbErr)
nMsg.err <- dbErr
return nil, false
}
// Increment the peer's ban score. We check isRemote
// so we don't accidentally ban ourselves in case of a
// bug.
if nMsg.isRemote {
d.banman.incrementBanScore(nMsg.peer.PubKey())
}
default:
// Otherwise, this is just a regular rejected edge.
key := newRejectCacheKey(
scid.ToUint64(),
sourceToPub(nMsg.source),
)
_, _ = d.recentRejects.Put(key, &cachedReject{})
}
// Otherwise, this is just a regular rejected edge.
key := newRejectCacheKey(
scid.ToUint64(),
sourceToPub(nMsg.source),
)
_, _ = d.recentRejects.Put(key, &cachedReject{})
if !nMsg.isRemote {
log.Errorf("failed to add edge for local channel: %v",
err)
@ -2889,6 +2997,12 @@ func (d *AuthenticatedGossiper) handleChanUpdate(nMsg *networkMsg,
graphScid = upd.ShortChannelID
}
// We make sure to obtain the mutex for this channel ID before we access
// the database. This ensures the state we read from the database has
// not changed between this point and when we call UpdateEdge() later.
d.channelMtx.Lock(graphScid.ToUint64())
defer d.channelMtx.Unlock(graphScid.ToUint64())
if d.cfg.Graph.IsStaleEdgePolicy(
graphScid, timestamp, upd.ChannelFlags,
) {
@ -2921,14 +3035,6 @@ func (d *AuthenticatedGossiper) handleChanUpdate(nMsg *networkMsg,
// Get the node pub key since we don't have it in the channel
// update announcement message. We'll need this to properly verify the
// message's signature.
//
// We make sure to obtain the mutex for this channel ID before we
// access the database. This ensures the state we read from the
// database has not changed between this point and when we call
// UpdateEdge() later.
d.channelMtx.Lock(graphScid.ToUint64())
defer d.channelMtx.Unlock(graphScid.ToUint64())
chanInfo, e1, e2, err := d.cfg.Graph.GetChannelByID(graphScid)
switch {
// No error, break.
@ -3034,9 +3140,9 @@ func (d *AuthenticatedGossiper) handleChanUpdate(nMsg *networkMsg,
edgeToUpdate = e2
}
log.Debugf("Validating ChannelUpdate: channel=%v, from node=%x, has "+
"edge=%v", chanInfo.ChannelID, pubKey.SerializeCompressed(),
edgeToUpdate != nil)
log.Debugf("Validating ChannelUpdate: channel=%v, for node=%x, has "+
"edge policy=%v", chanInfo.ChannelID,
pubKey.SerializeCompressed(), edgeToUpdate != nil)
// Validate the channel announcement with the expected public key and
// channel capacity. In the case of an invalid channel update, we'll
@ -3129,7 +3235,6 @@ func (d *AuthenticatedGossiper) handleChanUpdate(nMsg *networkMsg,
if graph.IsError(
err, graph.ErrOutdated,
graph.ErrIgnored,
graph.ErrVBarrierShuttingDown,
) {
log.Debugf("Update edge for short_chan_id(%v) got: %v",
@ -3580,3 +3685,165 @@ func (d *AuthenticatedGossiper) ShouldDisconnect(pubkey *btcec.PublicKey) (
return false, nil
}
// validateFundingTransaction fetches the channel announcement's claimed funding
// transaction from the chain to ensure that it exists, is not spent and matches
// the channel announcement proof. The transaction's outpoint and value are
// returned if we can glean them from the work done in this method.
func (d *AuthenticatedGossiper) validateFundingTransaction(
ann *lnwire.ChannelAnnouncement1,
tapscriptRoot fn.Option[chainhash.Hash]) (wire.OutPoint, btcutil.Amount,
[]byte, error) {
scid := ann.ShortChannelID
// Before we can add the channel to the channel graph, we need to obtain
// the full funding outpoint that's encoded within the channel ID.
fundingTx, err := lnwallet.FetchFundingTxWrapper(
d.cfg.ChainIO, &scid, d.quit,
)
if err != nil {
//nolint:ll
//
// In order to ensure we don't erroneously mark a channel as a
// zombie due to an RPC failure, we'll attempt to string match
// for the relevant errors.
//
// * btcd:
// * https://github.com/btcsuite/btcd/blob/master/rpcserver.go#L1316
// * https://github.com/btcsuite/btcd/blob/master/rpcserver.go#L1086
// * bitcoind:
// * https://github.com/bitcoin/bitcoin/blob/7fcf53f7b4524572d1d0c9a5fdc388e87eb02416/src/rpc/blockchain.cpp#L770
// * https://github.com/bitcoin/bitcoin/blob/7fcf53f7b4524572d1d0c9a5fdc388e87eb02416/src/rpc/blockchain.cpp#L954
switch {
case strings.Contains(err.Error(), "not found"):
fallthrough
case strings.Contains(err.Error(), "out of range"):
// If the funding transaction isn't found at all, then
// we'll mark the edge itself as a zombie so we don't
// continue to request it. We use the "zero key" for
// both node pubkeys so this edge can't be resurrected.
zErr := d.cfg.Graph.MarkZombieEdge(scid.ToUint64())
if zErr != nil {
return wire.OutPoint{}, 0, nil, zErr
}
default:
}
return wire.OutPoint{}, 0, nil, fmt.Errorf("%w: %w",
ErrNoFundingTransaction, err)
}
// Recreate the witness output to be sure that the bitcoin keys and channel
// value declared in the channel edge correspond to reality.
fundingPkScript, err := makeFundingScript(
ann.BitcoinKey1[:], ann.BitcoinKey2[:], ann.Features,
tapscriptRoot,
)
if err != nil {
return wire.OutPoint{}, 0, nil, err
}
// Next we'll validate that this channel is actually well formed. If
// this check fails, then this channel either doesn't exist, or isn't
// the one that was meant to be created according to the passed channel
// proofs.
fundingPoint, err := chanvalidate.Validate(
&chanvalidate.Context{
Locator: &chanvalidate.ShortChanIDChanLocator{
ID: scid,
},
MultiSigPkScript: fundingPkScript,
FundingTx: fundingTx,
},
)
if err != nil {
// Mark the edge as a zombie so we won't try to re-validate it
// on start up.
zErr := d.cfg.Graph.MarkZombieEdge(scid.ToUint64())
if zErr != nil {
return wire.OutPoint{}, 0, nil, zErr
}
return wire.OutPoint{}, 0, nil, fmt.Errorf("%w: %w",
ErrInvalidFundingOutput, err)
}
// Now that we have the funding outpoint of the channel, ensure
// that it hasn't yet been spent. If so, then this channel has
// been closed so we'll ignore it.
chanUtxo, err := d.cfg.ChainIO.GetUtxo(
fundingPoint, fundingPkScript, scid.BlockHeight, d.quit,
)
if err != nil {
if errors.Is(err, btcwallet.ErrOutputSpent) {
zErr := d.cfg.Graph.MarkZombieEdge(scid.ToUint64())
if zErr != nil {
return wire.OutPoint{}, 0, nil, zErr
}
}
return wire.OutPoint{}, 0, nil, fmt.Errorf("%w: unable to "+
"fetch utxo for chan_id=%v, chan_point=%v: %w",
ErrChannelSpent, scid.ToUint64(), fundingPoint, err)
}
return *fundingPoint, btcutil.Amount(chanUtxo.Value), fundingPkScript,
nil
}
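
For orientation, here is a minimal sketch of how the returned outpoint, capacity and pkScript feed back into the channel edge inside `handleChanAnnouncement`, mirroring the hunk earlier in this diff. The surrounding variables (`edge`, `nMsg`, `ann`, `tapscriptRoot`) come from that caller and the error handling is simplified; this is not part of the diff itself.

```go
// Sketch: consuming validateFundingTransaction's results. Reject caching,
// ban scoring and zombie marking are handled by the real caller and are
// omitted here.
op, capacity, script, err := d.validateFundingTransaction(ann, tapscriptRoot)
if err != nil {
	// The error wraps one of the sentinels above (e.g.
	// ErrNoFundingTransaction, ErrInvalidFundingOutput or
	// ErrChannelSpent), which the caller can match with errors.Is.
	nMsg.err <- err
	return nil, false
}

// Record what we learned from the chain on the edge before handing it to
// the graph database.
edge.FundingScript = fn.Some(script)
edge.Capacity = capacity
edge.ChannelPoint = op
```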
// makeFundingScript is used to make the funding script for both segwit v0 and
// segwit v1 (taproot) channels.
func makeFundingScript(bitcoinKey1, bitcoinKey2 []byte,
features *lnwire.RawFeatureVector,
tapscriptRoot fn.Option[chainhash.Hash]) ([]byte, error) {
legacyFundingScript := func() ([]byte, error) {
witnessScript, err := input.GenMultiSigScript(
bitcoinKey1, bitcoinKey2,
)
if err != nil {
return nil, err
}
pkScript, err := input.WitnessScriptHash(witnessScript)
if err != nil {
return nil, err
}
return pkScript, nil
}
if features.IsEmpty() {
return legacyFundingScript()
}
chanFeatureBits := lnwire.NewFeatureVector(features, lnwire.Features)
if chanFeatureBits.HasFeature(
lnwire.SimpleTaprootChannelsOptionalStaging,
) {
pubKey1, err := btcec.ParsePubKey(bitcoinKey1)
if err != nil {
return nil, err
}
pubKey2, err := btcec.ParsePubKey(bitcoinKey2)
if err != nil {
return nil, err
}
fundingScript, _, err := input.GenTaprootFundingScript(
pubKey1, pubKey2, 0, tapscriptRoot,
)
if err != nil {
return nil, err
}
// TODO(roasbeef): add tapscript root to gossip v1.5
return fundingScript, nil
}
return legacyFundingScript()
}
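
As a quick usage sketch (not part of the diff), the helper produces two script flavours depending on the announced features. The `bitcoinKey1`/`bitcoinKey2` variables below are placeholders for 33-byte compressed keys taken from an announcement.

```go
// An empty feature vector yields the legacy P2WSH 2-of-2 multisig output.
legacyScript, err := makeFundingScript(
	bitcoinKey1, bitcoinKey2, lnwire.NewRawFeatureVector(),
	fn.None[chainhash.Hash](),
)
if err != nil {
	return err
}

// Advertising the staging taproot bit yields the MuSig2-based P2TR funding
// output instead; no tapscript root is supplied here, matching plain
// taproot channels.
taprootFeatures := lnwire.NewRawFeatureVector(
	lnwire.SimpleTaprootChannelsOptionalStaging,
)
taprootScript, err := makeFundingScript(
	bitcoinKey1, bitcoinKey2, taprootFeatures,
	fn.None[chainhash.Hash](),
)
if err != nil {
	return err
}

// Either script is what chanvalidate.Validate compares against the output
// found in the funding transaction.
_, _ = legacyScript, taprootScript
```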

File diff suppressed because it is too large


@ -529,8 +529,8 @@ func (m *SyncManager) createGossipSyncer(peer lnpeer.Peer) *GossipSyncer {
s.setSyncState(chansSynced)
s.setSyncType(PassiveSync)
log.Debugf("Created new GossipSyncer[state=%s type=%s] for peer=%v",
s.syncState(), s.SyncType(), peer)
log.Debugf("Created new GossipSyncer[state=%s type=%s] for peer=%x",
s.syncState(), s.SyncType(), peer.PubKey())
return s
}


@ -28,7 +28,7 @@ func randPeer(t *testing.T, quit chan struct{}) *mockPeer {
func peerWithPubkey(pk *btcec.PublicKey, quit chan struct{}) *mockPeer {
return &mockPeer{
pk: pk,
sentMsgs: make(chan lnwire.Message),
sentMsgs: make(chan lnwire.Message, 1),
quit: quit,
}
}
@ -483,7 +483,9 @@ func TestSyncManagerWaitUntilInitialHistoricalSync(t *testing.T) {
// transition it to chansSynced to ensure the remaining syncers
// aren't started as active.
if i == 0 {
assertSyncerStatus(t, s, syncingChans, PassiveSync)
assertSyncerStatus(
t, s, waitingQueryRangeReply, PassiveSync,
)
continue
}


@ -181,6 +181,9 @@ const (
// requestBatchSize is the maximum number of channels we will query the
// remote peer for in a QueryShortChanIDs message.
requestBatchSize = 500
// syncerBufferSize is the size of the syncer's buffers.
syncerBufferSize = 5
)
var (
@ -436,8 +439,8 @@ func newGossipSyncer(cfg gossipSyncerCfg, sema chan struct{}) *GossipSyncer {
rateLimiter: rateLimiter,
syncTransitionReqs: make(chan *syncTransitionReq),
historicalSyncReqs: make(chan *historicalSyncReq),
gossipMsgs: make(chan lnwire.Message, 100),
queryMsgs: make(chan lnwire.Message, 100),
gossipMsgs: make(chan lnwire.Message, syncerBufferSize),
queryMsgs: make(chan lnwire.Message, syncerBufferSize),
syncerSema: sema,
quit: make(chan struct{}),
}
@ -475,6 +478,39 @@ func (g *GossipSyncer) Stop() {
})
}
// handleSyncingChans handles the state syncingChans for the GossipSyncer. When
// in this state, we will send a QueryChannelRange msg to our peer and advance
// the syncer's state to waitingQueryRangeReply.
func (g *GossipSyncer) handleSyncingChans() {
// Prepare the query msg.
queryRangeMsg, err := g.genChanRangeQuery(g.genHistoricalChanRangeQuery)
if err != nil {
log.Errorf("Unable to gen chan range query: %v", err)
return
}
// Acquire a lock so the following state transition is atomic.
//
// NOTE: We must lock the following steps as it's possible we get an
// immediate response (ReplyChannelRange) after sending the query msg.
// The response is handled in ProcessQueryMsg, which requires the
// current state to be waitingQueryRangeReply.
g.Lock()
defer g.Unlock()
// Send the msg to the remote peer, which is non-blocking as
// `sendToPeer` only queues the msg in Brontide.
err = g.cfg.sendToPeer(queryRangeMsg)
if err != nil {
log.Errorf("Unable to send chan range query: %v", err)
return
}
// With the message sent successfully, we'll transition into the next
// state where we wait for their reply.
g.setSyncState(waitingQueryRangeReply)
}
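
To make the race explicit, here is a toy model of the two goroutines involved (illustrative names only, not lnd APIs, assuming the `fmt` and `sync` imports): because the reply path reads the state under the same mutex that guards the send and the transition, an immediate reply can never observe the pre-transition state.

```go
// toySyncer models the locking scheme used by handleSyncingChans and
// ProcessQueryMsg: the sender holds the mutex across "send + state change",
// so a concurrent reply handler is guaranteed to see the new state.
type toySyncer struct {
	mu    sync.Mutex
	state string
	send  func(string) error
}

func (t *toySyncer) sendQuery() error {
	t.mu.Lock()
	defer t.mu.Unlock()

	if err := t.send("QueryChannelRange"); err != nil {
		return err
	}
	t.state = "waitingQueryRangeReply"

	return nil
}

func (t *toySyncer) handleReply(msg string) error {
	t.mu.Lock()
	state := t.state
	t.mu.Unlock()

	if state != "waitingQueryRangeReply" {
		return fmt.Errorf("unexpected %s in state %s", msg, state)
	}

	// ...process the reply...
	return nil
}
```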
// channelGraphSyncer is the main goroutine responsible for ensuring that we
// properly sync channel graph state with the remote peer, and also that we only
// send them messages which actually pass their defined update horizon.
@ -495,27 +531,7 @@ func (g *GossipSyncer) channelGraphSyncer() {
// understand, as well as responding to any other queries by
// them.
case syncingChans:
// If we're in this state, then we'll send the remote
// peer our opening QueryChannelRange message.
queryRangeMsg, err := g.genChanRangeQuery(
g.genHistoricalChanRangeQuery,
)
if err != nil {
log.Errorf("Unable to gen chan range "+
"query: %v", err)
return
}
err = g.cfg.sendToPeer(queryRangeMsg)
if err != nil {
log.Errorf("Unable to send chan range "+
"query: %v", err)
return
}
// With the message sent successfully, we'll transition
// into the next state where we wait for their reply.
g.setSyncState(waitingQueryRangeReply)
g.handleSyncingChans()
// In this state, we've sent out our initial channel range
// query and are waiting for the final response from the remote
@ -558,15 +574,11 @@ func (g *GossipSyncer) channelGraphSyncer() {
// First, we'll attempt to continue our channel
// synchronization by continuing to send off another
// query chunk.
done, err := g.synchronizeChanIDs()
if err != nil {
log.Errorf("Unable to sync chan IDs: %v", err)
}
done := g.synchronizeChanIDs()
// If this wasn't our last query, then we'll need to
// transition to our waiting state.
if !done {
g.setSyncState(waitingQueryChanReply)
continue
}
@ -723,14 +735,15 @@ func (g *GossipSyncer) sendGossipTimestampRange(firstTimestamp time.Time,
// been queried for with a response received. We'll chunk our requests as
// required to ensure they fit into a single message. We may re-enter this
// state in the case that chunking is required.
func (g *GossipSyncer) synchronizeChanIDs() (bool, error) {
func (g *GossipSyncer) synchronizeChanIDs() bool {
// If we're in this state yet there are no more new channels to query
// for, then we'll transition to our final synced state and return true
// to signal that we're fully synchronized.
if len(g.newChansToQuery) == 0 {
log.Infof("GossipSyncer(%x): no more chans to query",
g.cfg.peerPub[:])
return true, nil
return true
}
// Otherwise, we'll issue our next chunked query to receive replies
@ -754,6 +767,9 @@ func (g *GossipSyncer) synchronizeChanIDs() (bool, error) {
log.Infof("GossipSyncer(%x): querying for %v new channels",
g.cfg.peerPub[:], len(queryChunk))
// Change the state before sending the query msg.
g.setSyncState(waitingQueryChanReply)
// With our chunk obtained, we'll send over our next query, then return
// false indicating that we're not yet fully synced.
err := g.cfg.sendToPeer(&lnwire.QueryShortChanIDs{
@ -761,8 +777,11 @@ func (g *GossipSyncer) synchronizeChanIDs() (bool, error) {
EncodingType: lnwire.EncodingSortedPlain,
ShortChanIDs: queryChunk,
})
if err != nil {
log.Errorf("Unable to sync chan IDs: %v", err)
}
return false, err
return false
}
// isLegacyReplyChannelRange determines whether a ReplyChannelRange message is
@ -1342,9 +1361,9 @@ func (g *GossipSyncer) ApplyGossipFilter(filter *lnwire.GossipTimestampRange) er
return err
}
log.Infof("GossipSyncer(%x): applying new update horizon: start=%v, "+
"end=%v, backlog_size=%v", g.cfg.peerPub[:], startTime, endTime,
len(newUpdatestoSend))
log.Infof("GossipSyncer(%x): applying new remote update horizon: "+
"start=%v, end=%v, backlog_size=%v", g.cfg.peerPub[:],
startTime, endTime, len(newUpdatestoSend))
// If we don't have any to send, then we can return early.
if len(newUpdatestoSend) == 0 {
@ -1515,12 +1534,15 @@ func (g *GossipSyncer) ProcessQueryMsg(msg lnwire.Message, peerQuit <-chan struc
// Reply messages should only be expected in states where we're waiting
// for a reply.
case *lnwire.ReplyChannelRange, *lnwire.ReplyShortChanIDsEnd:
g.Lock()
syncState := g.syncState()
g.Unlock()
if syncState != waitingQueryRangeReply &&
syncState != waitingQueryChanReply {
return fmt.Errorf("received unexpected query reply "+
"message %T", msg)
return fmt.Errorf("unexpected msg %T received in "+
"state %v", msg, syncState)
}
msgChan = g.gossipMsgs


@ -1478,10 +1478,7 @@ func TestGossipSyncerSynchronizeChanIDs(t *testing.T) {
for i := 0; i < chunkSize*2; i += 2 {
// With our set up complete, we'll request a sync of chan ID's.
done, err := syncer.synchronizeChanIDs()
if err != nil {
t.Fatalf("unable to sync chan IDs: %v", err)
}
done := syncer.synchronizeChanIDs()
// At this point, we shouldn't yet be done as only 2 items
// should have been queried for.
@ -1528,8 +1525,7 @@ func TestGossipSyncerSynchronizeChanIDs(t *testing.T) {
}
// If we issue another query, the syncer should tell us that it's done.
done, err := syncer.synchronizeChanIDs()
require.NoError(t, err, "unable to sync chan IDs")
done := syncer.synchronizeChanIDs()
if done {
t.Fatalf("syncer should be finished!")
}


@ -0,0 +1,464 @@
package discovery
import (
"fmt"
"sync"
"sync/atomic"
"github.com/go-errors/errors"
"github.com/lightningnetwork/lnd/fn/v2"
"github.com/lightningnetwork/lnd/lnwire"
"github.com/lightningnetwork/lnd/routing/route"
)
var (
// ErrVBarrierShuttingDown signals that the barrier has been requested
// to shut down, and that the caller should not treat the wait condition
// as fulfilled.
ErrVBarrierShuttingDown = errors.New("ValidationBarrier shutting down")
)
// JobID identifies an active job in the validation barrier. It is large so
// that we don't need to worry about overflows.
type JobID uint64
// jobInfo stores job dependency info for a set of dependent gossip messages.
type jobInfo struct {
// activeParentJobIDs is the set of active parent job ids.
activeParentJobIDs fn.Set[JobID]
// activeDependentJobs is the set of active dependent job ids.
activeDependentJobs fn.Set[JobID]
}
// ValidationBarrier is a barrier used to enforce a strict validation order
// while concurrently validating other updates for channel edges. It uses a set
// of maps to track validation dependencies. This is needed in practice because
// gossip messages for a given channel may arrive in order, but then due to
// scheduling in different goroutines, may be validated in the wrong order.
// With the ValidationBarrier, the dependent update will wait until the parent
// update completes.
type ValidationBarrier struct {
// validationSemaphore is a channel of structs which is used as a
// semaphore. Initially we'll fill this with a buffered channel of the
// size of the number of active requests. Each new job will consume
// from this channel, then restore the value upon completion.
validationSemaphore chan struct{}
// jobInfoMap stores the set of job ids for each channel.
// NOTE: This MUST be used with the mutex.
// NOTE: This currently stores string representations of
// lnwire.ShortChannelID and route.Vertex. Since these are of different
// lengths, collision cannot occur in their string representations.
// N.B.: Check that any new string-converted types don't collide with
// existing string-converted types.
jobInfoMap map[string]*jobInfo
// jobDependencies is a mapping from a child's JobID to the set of
// parent JobID that it depends on.
// NOTE: This MUST be used with the mutex.
jobDependencies map[JobID]fn.Set[JobID]
// childJobChans stores the notification channel that each child job
// listens on for parent job completions.
// NOTE: This MUST be used with the mutex.
childJobChans map[JobID]chan struct{}
// idCtr is an atomic integer that is used to assign JobIDs.
idCtr atomic.Uint64
quit chan struct{}
sync.Mutex
}
// NewValidationBarrier creates a new instance of a validation barrier given
// the total number of active requests, and a quit channel which will be used
// to know when to kill pending, but unfilled jobs.
func NewValidationBarrier(numActiveReqs int,
quitChan chan struct{}) *ValidationBarrier {
v := &ValidationBarrier{
jobInfoMap: make(map[string]*jobInfo),
jobDependencies: make(map[JobID]fn.Set[JobID]),
childJobChans: make(map[JobID]chan struct{}),
quit: quitChan,
}
// We'll first initialize a set of semaphores to limit our concurrency
// when validating incoming requests in parallel.
v.validationSemaphore = make(chan struct{}, numActiveReqs)
for i := 0; i < numActiveReqs; i++ {
v.validationSemaphore <- struct{}{}
}
return v
}
// InitJobDependencies will wait for a new job slot to become open, and then
// set up any dependent signals/triggers for the new job.
func (v *ValidationBarrier) InitJobDependencies(job interface{}) (JobID,
error) {
// We'll wait for either a new slot to become open, or for the quit
// channel to be closed.
select {
case <-v.validationSemaphore:
case <-v.quit:
}
v.Lock()
defer v.Unlock()
// updateOrCreateJobInfo modifies the set of activeParentJobs for this
// annID and updates jobInfoMap.
updateOrCreateJobInfo := func(annID string, annJobID JobID) {
info, ok := v.jobInfoMap[annID]
if ok {
// If an entry already exists for annID, then a job
// related to it is being validated. Add to the set of
// parent job ids. This addition will only affect
// _later_, _child_ jobs for the annID.
info.activeParentJobIDs.Add(annJobID)
return
}
// No entry exists for annID, meaning that we should create
// one.
parentJobSet := fn.NewSet(annJobID)
info = &jobInfo{
activeParentJobIDs: parentJobSet,
activeDependentJobs: fn.NewSet[JobID](),
}
v.jobInfoMap[annID] = info
}
// populateDependencies populates the job dependency mappings (i.e.
// which should complete after another) for the (annID, childJobID)
// tuple.
populateDependencies := func(annID string, childJobID JobID) {
// If there is no entry in the jobInfoMap, we don't have to
// wait on any parent jobs to finish.
info, ok := v.jobInfoMap[annID]
if !ok {
return
}
// We want to see a snapshot of active parent jobs for this
// annID that are already registered in activeParentJobIDs. The
// child job identified by childJobID can only run after these
// parent jobs have run. After grabbing the snapshot, we then
// want to persist a slice of these jobs.
// Create the notification chan that parent jobs will send (or
// close) on when they complete.
jobChan := make(chan struct{})
// Add to set of activeDependentJobs for this annID.
info.activeDependentJobs.Add(childJobID)
// Store in childJobChans. The parent jobs will fetch this chan
// to notify on. The child job will later fetch this chan to
// listen on when WaitForParents is called.
v.childJobChans[childJobID] = jobChan
// Copy over the parent job IDs at this moment for this annID.
// This job must be processed AFTER those parent IDs.
parentJobs := info.activeParentJobIDs.Copy()
// Populate the jobDependencies mapping.
v.jobDependencies[childJobID] = parentJobs
}
// Once a slot is open, we'll examine the message of the job, to see if
// there need to be any dependent barriers set up.
switch msg := job.(type) {
case *lnwire.ChannelAnnouncement1:
id := JobID(v.idCtr.Add(1))
updateOrCreateJobInfo(msg.ShortChannelID.String(), id)
updateOrCreateJobInfo(route.Vertex(msg.NodeID1).String(), id)
updateOrCreateJobInfo(route.Vertex(msg.NodeID2).String(), id)
return id, nil
// Populate the dependency mappings for the below child jobs.
case *lnwire.ChannelUpdate1:
childJobID := JobID(v.idCtr.Add(1))
populateDependencies(msg.ShortChannelID.String(), childJobID)
return childJobID, nil
case *lnwire.NodeAnnouncement:
childJobID := JobID(v.idCtr.Add(1))
populateDependencies(
route.Vertex(msg.NodeID).String(), childJobID,
)
return childJobID, nil
case *lnwire.AnnounceSignatures1:
// TODO(roasbeef): need to wait on chan ann?
// - We can do the above by calling populateDependencies. For
// now, while we evaluate potential side effects, don't do
// anything with childJobID and just return it.
childJobID := JobID(v.idCtr.Add(1))
return childJobID, nil
default:
// An invalid message was passed into InitJobDependencies.
// Return an error.
return JobID(0), errors.New("invalid message")
}
}
// CompleteJob returns a free slot to the set of available job slots. This
// should be called once a job has been fully completed. Otherwise, slots may
// not be returned to the internal scheduling, causing a deadlock when a new
// overflow job is attempted.
func (v *ValidationBarrier) CompleteJob() {
select {
case v.validationSemaphore <- struct{}{}:
case <-v.quit:
}
}
// WaitForParents will block until all parent job dependencies have gone
// through the validation pipeline. This gives us a graceful way to run jobs
// in goroutines and still have strict ordering guarantees. If this job doesn't
// have any parent job dependencies, then this function will return
// immediately.
func (v *ValidationBarrier) WaitForParents(childJobID JobID,
job interface{}) error {
var (
ok bool
jobDesc string
parentJobIDs fn.Set[JobID]
annID string
jobChan chan struct{}
)
// Acquire a lock to read ValidationBarrier.
v.Lock()
switch msg := job.(type) {
// Any ChannelUpdate or NodeAnnouncement jobs will need to wait on the
// completion of any active ChannelAnnouncement jobs related to them.
case *lnwire.ChannelUpdate1:
annID = msg.ShortChannelID.String()
parentJobIDs, ok = v.jobDependencies[childJobID]
if !ok {
// If ok is false, it means that this child job never
// had any parent jobs to wait on.
v.Unlock()
return nil
}
jobDesc = fmt.Sprintf("job=lnwire.ChannelUpdate, scid=%v",
msg.ShortChannelID.ToUint64())
case *lnwire.NodeAnnouncement:
annID = route.Vertex(msg.NodeID).String()
parentJobIDs, ok = v.jobDependencies[childJobID]
if !ok {
// If ok is false, it means that this child job never
// had any parent jobs to wait on.
v.Unlock()
return nil
}
jobDesc = fmt.Sprintf("job=lnwire.NodeAnnouncement, pub=%s",
route.Vertex(msg.NodeID))
// Other types of jobs can be executed immediately, so we'll just
// return directly.
case *lnwire.AnnounceSignatures1:
// TODO(roasbeef): need to wait on chan ann?
v.Unlock()
return nil
case *lnwire.ChannelAnnouncement1:
v.Unlock()
return nil
}
// Release the lock once the above read is finished.
v.Unlock()
log.Debugf("Waiting for dependent on %s", jobDesc)
v.Lock()
jobChan, ok = v.childJobChans[childJobID]
if !ok {
v.Unlock()
// The entry may not exist because this job does not depend on
// any parent jobs.
return nil
}
v.Unlock()
for {
select {
case <-v.quit:
return ErrVBarrierShuttingDown
case <-jobChan:
// Every time this is sent on or if it's closed, a
// parent job has finished. The parent jobs have to
// also potentially close the channel because if all
// the parent jobs finish and call SignalDependents
// before the goroutine running WaitForParents has a
// chance to grab the notification chan from
// childJobChans, then the running goroutine will wait
// here for a notification forever. By having the last
// parent job close the notification chan, we avoid
// this issue.
// Check and see if we have any parent jobs left. If we
// don't, we can finish up.
v.Lock()
info, found := v.jobInfoMap[annID]
if !found {
v.Unlock()
// No parent job info found, proceed with
// validation.
return nil
}
x := parentJobIDs.Intersect(info.activeParentJobIDs)
v.Unlock()
if x.IsEmpty() {
// The parent jobs have all completed. We can
// proceed with validation.
return nil
}
// If we've reached this point, we are still waiting on
// a parent job to complete.
}
}
}
// SignalDependents signals to any child jobs that this parent job has
// finished.
func (v *ValidationBarrier) SignalDependents(job interface{}, id JobID) error {
v.Lock()
defer v.Unlock()
// removeJob either removes a child job or a parent job. If it is
// removing a child job, then it removes the child's JobID from the set
// of dependent jobs for the announcement ID. If this is removing a
// parent job, then it removes the parentJobID from the set of active
// parent jobs and notifies the child jobs that it has finished
// validating.
removeJob := func(annID string, id JobID, child bool) error {
if child {
// If we're removing a child job, check jobInfoMap and
// remove this job from activeDependentJobs.
info, ok := v.jobInfoMap[annID]
if ok {
info.activeDependentJobs.Remove(id)
}
// Remove the notification chan from childJobChans.
delete(v.childJobChans, id)
// Remove this job's dependency mapping.
delete(v.jobDependencies, id)
return nil
}
// Otherwise, we are removing a parent job.
jobInfo, found := v.jobInfoMap[annID]
if !found {
// NOTE: Some sort of consistency guarantee has been
// broken.
return fmt.Errorf("no job info found for "+
"identifier(%v)", id)
}
jobInfo.activeParentJobIDs.Remove(id)
lastJob := jobInfo.activeParentJobIDs.IsEmpty()
// Notify all dependent jobs that a parent job has completed.
for child := range jobInfo.activeDependentJobs {
notifyChan, ok := v.childJobChans[child]
if !ok {
// NOTE: Some sort of consistency guarantee has
// been broken.
return fmt.Errorf("no job info found for "+
"identifier(%v)", id)
}
// We don't want to block when sending out the signal.
select {
case notifyChan <- struct{}{}:
default:
}
// If this is the last parent job for this annID, also
// close the channel. This is needed because it's
// possible that the parent job cleans up the job
// mappings before the goroutine handling the child job
// has a chance to call WaitForParents and catch the
// signal sent above. We are allowed to close because
// no other parent job will be able to send along the
// channel (or close) as we're removing the entry from
// the jobInfoMap below.
if lastJob {
close(notifyChan)
}
}
// Remove from jobInfoMap if last job.
if lastJob {
delete(v.jobInfoMap, annID)
}
return nil
}
switch msg := job.(type) {
case *lnwire.ChannelAnnouncement1:
// Signal to the child jobs that parent validation has
// finished. We have to call removeJob for each annID
// that this ChannelAnnouncement can be associated with.
err := removeJob(msg.ShortChannelID.String(), id, false)
if err != nil {
return err
}
err = removeJob(route.Vertex(msg.NodeID1).String(), id, false)
if err != nil {
return err
}
err = removeJob(route.Vertex(msg.NodeID2).String(), id, false)
if err != nil {
return err
}
return nil
case *lnwire.NodeAnnouncement:
// Remove child job info.
return removeJob(route.Vertex(msg.NodeID).String(), id, true)
case *lnwire.ChannelUpdate1:
// Remove child job info.
return removeJob(msg.ShortChannelID.String(), id, true)
case *lnwire.AnnounceSignatures1:
// No dependency mappings are stored for AnnounceSignatures1,
// so do nothing.
return nil
}
return errors.New("invalid message - no job dependencies")
}
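
Putting the pieces together, a consumer of this barrier is expected to follow roughly this sequence per incoming message. This is a sketch only: `processGossipMsg` and its `validate` callback are illustrative and not part of the diff, and the real gossiper also signals dependents and cleans up on its failure paths.

```go
// processGossipMsg shows the intended lifecycle of a single job: reserve a
// slot, wait for parents, validate, then signal children and release the
// slot.
func processGossipMsg(vb *ValidationBarrier, msg lnwire.Message,
	validate func(lnwire.Message) error) error {

	// Blocks until a semaphore slot is free, then records any
	// parent/child relationships for this message (e.g. a
	// ChannelUpdate1 depends on an in-flight ChannelAnnouncement1 with
	// the same SCID).
	jobID, err := vb.InitJobDependencies(msg)
	if err != nil {
		return err
	}

	// Always hand the slot back, otherwise queued jobs deadlock.
	defer vb.CompleteJob()

	// Returns immediately if this job has no parents, otherwise blocks
	// until every parent has called SignalDependents (or the barrier
	// shuts down, yielding ErrVBarrierShuttingDown).
	if err := vb.WaitForParents(jobID, msg); err != nil {
		return err
	}

	if err := validate(msg); err != nil {
		return err
	}

	// Wake any child jobs that registered this message as a parent.
	return vb.SignalDependents(msg, jobID)
}
```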


@ -0,0 +1,315 @@
package discovery
import (
"encoding/binary"
"errors"
"sync"
"testing"
"time"
"github.com/lightningnetwork/lnd/lnwire"
"github.com/stretchr/testify/require"
)
// TestValidationBarrierSemaphore checks basic properties of the validation
// barrier's semaphore wrt. enqueuing/dequeuing.
func TestValidationBarrierSemaphore(t *testing.T) {
t.Parallel()
const (
numTasks = 8
numPendingTasks = 8
timeout = 50 * time.Millisecond
)
quit := make(chan struct{})
barrier := NewValidationBarrier(numTasks, quit)
var scidMtx sync.RWMutex
currentScid := lnwire.ShortChannelID{}
// Saturate the semaphore with jobs.
for i := 0; i < numTasks; i++ {
scidMtx.Lock()
dummyUpdate := &lnwire.ChannelUpdate1{
ShortChannelID: currentScid,
}
currentScid.TxIndex++
scidMtx.Unlock()
_, err := barrier.InitJobDependencies(dummyUpdate)
require.NoError(t, err)
}
// Spawn additional tasks that will signal completion when added.
jobAdded := make(chan struct{})
for i := 0; i < numPendingTasks; i++ {
go func() {
scidMtx.Lock()
dummyUpdate := &lnwire.ChannelUpdate1{
ShortChannelID: currentScid,
}
currentScid.TxIndex++
scidMtx.Unlock()
_, err := barrier.InitJobDependencies(dummyUpdate)
require.NoError(t, err)
jobAdded <- struct{}{}
}()
}
// Check that no jobs are added while semaphore is full.
select {
case <-time.After(timeout):
// Expected since no slots open.
case <-jobAdded:
t.Fatalf("job should not have been added")
}
// Complete jobs one at a time and verify that they get added.
for i := 0; i < numPendingTasks; i++ {
barrier.CompleteJob()
select {
case <-time.After(timeout):
t.Fatalf("timeout waiting for job to be added")
case <-jobAdded:
// Expected since one slot opened up.
}
}
}
// TestValidationBarrierQuit checks that pending validation tasks will return an
// error from WaitForParents if the barrier's quit channel is closed.
func TestValidationBarrierQuit(t *testing.T) {
t.Parallel()
const (
numTasks = 8
timeout = 50 * time.Millisecond
)
quit := make(chan struct{})
barrier := NewValidationBarrier(2*numTasks, quit)
// Create a set of unique channel announcements that we will prep for
// validation.
anns := make([]*lnwire.ChannelAnnouncement1, 0, numTasks)
parentJobIDs := make([]JobID, 0, numTasks)
for i := 0; i < numTasks; i++ {
anns = append(anns, &lnwire.ChannelAnnouncement1{
ShortChannelID: lnwire.NewShortChanIDFromInt(uint64(i)),
NodeID1: nodeIDFromInt(uint64(2 * i)),
NodeID2: nodeIDFromInt(uint64(2*i + 1)),
})
parentJobID, err := barrier.InitJobDependencies(anns[i])
require.NoError(t, err)
parentJobIDs = append(parentJobIDs, parentJobID)
}
// Create a set of channel updates, that must wait until their
// associated channel announcement has been verified.
chanUpds := make([]*lnwire.ChannelUpdate1, 0, numTasks)
childJobIDs := make([]JobID, 0, numTasks)
for i := 0; i < numTasks; i++ {
chanUpds = append(chanUpds, &lnwire.ChannelUpdate1{
ShortChannelID: lnwire.NewShortChanIDFromInt(uint64(i)),
})
childJob, err := barrier.InitJobDependencies(chanUpds[i])
require.NoError(t, err)
childJobIDs = append(childJobIDs, childJob)
}
// Spawn additional tasks that will send the error returned after
// waiting for the announcements to finish. In the background, we will
// iteratively queue the channel updates, which will send back the error
// returned from waiting.
jobErrs := make(chan error)
for i := 0; i < numTasks; i++ {
go func(ii int) {
jobErrs <- barrier.WaitForParents(
childJobIDs[ii], chanUpds[ii],
)
}(i)
}
// Check that no jobs are added while semaphore is full.
select {
case <-time.After(timeout):
// Expected since no slots open.
case <-jobErrs:
t.Fatalf("job should not have been signaled")
}
// Complete the first half of jobs, one at a time, verifying that they
// get signaled. Then, quit the barrier and check that all others exit
// with the correct error.
for i := 0; i < numTasks; i++ {
switch {
case i < numTasks/2:
err := barrier.SignalDependents(
anns[i], parentJobIDs[i],
)
require.NoError(t, err)
barrier.CompleteJob()
// At midpoint, quit the validation barrier.
case i == numTasks/2:
close(quit)
}
var err error
select {
case <-time.After(timeout):
t.Fatalf("timeout waiting for job to be signaled")
case err = <-jobErrs:
}
switch {
// First half should return without failure.
case i < numTasks/2 && err != nil:
t.Fatalf("unexpected failure while waiting: %v", err)
// Last half should return the shutdown error.
case i >= numTasks/2 && !errors.Is(
err, ErrVBarrierShuttingDown,
):
t.Fatalf("expected failure after quitting: want %v, "+
"got %v", ErrVBarrierShuttingDown, err)
}
}
}
// TestValidationBarrierParentJobsClear tests that creating two parent jobs for
// ChannelUpdate / NodeAnnouncement will pause child jobs until the set of
// parent jobs has cleared.
func TestValidationBarrierParentJobsClear(t *testing.T) {
t.Parallel()
const (
numTasks = 8
timeout = time.Second
)
quit := make(chan struct{})
barrier := NewValidationBarrier(numTasks, quit)
sharedScid := lnwire.NewShortChanIDFromInt(0)
sharedNodeID := nodeIDFromInt(0)
// Create a set of gossip messages that depend on each other. ann1 and
// ann2 share the ShortChannelID field. ann1 and ann3 share both the
// ShortChannelID field and the NodeID1 field. These shared values let
// us test the "set" properties of the ValidationBarrier.
ann1 := &lnwire.ChannelAnnouncement1{
ShortChannelID: sharedScid,
NodeID1: sharedNodeID,
NodeID2: nodeIDFromInt(1),
}
parentID1, err := barrier.InitJobDependencies(ann1)
require.NoError(t, err)
ann2 := &lnwire.ChannelAnnouncement1{
ShortChannelID: sharedScid,
NodeID1: nodeIDFromInt(2),
NodeID2: nodeIDFromInt(3),
}
parentID2, err := barrier.InitJobDependencies(ann2)
require.NoError(t, err)
ann3 := &lnwire.ChannelAnnouncement1{
ShortChannelID: sharedScid,
NodeID1: sharedNodeID,
NodeID2: nodeIDFromInt(10),
}
parentID3, err := barrier.InitJobDependencies(ann3)
require.NoError(t, err)
// Create the ChannelUpdate & NodeAnnouncement messages.
upd1 := &lnwire.ChannelUpdate1{
ShortChannelID: sharedScid,
}
childID1, err := barrier.InitJobDependencies(upd1)
require.NoError(t, err)
node1 := &lnwire.NodeAnnouncement{
NodeID: sharedNodeID,
}
childID2, err := barrier.InitJobDependencies(node1)
require.NoError(t, err)
run := func(vb *ValidationBarrier, childJobID JobID, job interface{},
resp chan error, start chan error) {
close(start)
err := vb.WaitForParents(childJobID, job)
resp <- err
}
errChan := make(chan error, 2)
startChan1 := make(chan error, 1)
startChan2 := make(chan error, 1)
go run(barrier, childID1, upd1, errChan, startChan1)
go run(barrier, childID2, node1, errChan, startChan2)
// Wait for the start signal since we are testing the case where the
// parent jobs only complete _after_ the child jobs have called. Note
// that there is technically an edge case where we receive the start
// signal and call SignalDependents before WaitForParents can actually
// be called in the goroutine launched above. In this case, which
// arises due to our inability to control precisely when these VB
// methods are scheduled (as they are in different goroutines), the
// test should still pass as we want to test that validation jobs are
// completing and not stalling. In other words, this issue with the
// test itself is good as it actually randomizes some of the ordering,
// occasionally. This tests that the VB is robust against ordering /
// concurrency issues.
select {
case <-startChan1:
case <-time.After(timeout):
t.Fatal("timed out waiting for startChan1")
}
select {
case <-startChan2:
case <-time.After(timeout):
t.Fatal("timed out waiting for startChan2")
}
// Now we can call SignalDependents for our parent jobs.
err = barrier.SignalDependents(ann1, parentID1)
require.NoError(t, err)
err = barrier.SignalDependents(ann2, parentID2)
require.NoError(t, err)
err = barrier.SignalDependents(ann3, parentID3)
require.NoError(t, err)
select {
case <-errChan:
case <-time.After(timeout):
t.Fatal("unexpected timeout waiting for first error signal")
}
select {
case <-errChan:
case <-time.After(timeout):
t.Fatal("unexpected timeout waiting for second error signal")
}
}
// nodeIDFromInt creates a node ID by writing a uint64 to the first 8 bytes.
func nodeIDFromInt(i uint64) [33]byte {
var nodeID [33]byte
binary.BigEndian.PutUint64(nodeID[:8], i)
return nodeID
}


@ -1,6 +1,6 @@
# If you change this please also update GO_VERSION in Makefile (then run
# `make lint` to see where else it needs to be updated as well).
FROM golang:1.22.6-alpine as builder
FROM golang:1.23.6-alpine as builder
LABEL maintainer="Olaoluwa Osuntokun <laolu@lightning.engineering>"


@ -93,7 +93,7 @@ following build dependencies are required:
### Installing Go
`lnd` is written in Go, with a minimum version of `1.22.6` (or, in case this
`lnd` is written in Go, with a minimum version of `1.23.6` (or, in case this
document gets out of date, whatever the Go version in the main `go.mod` file
requires). To install, run one of the following commands for your OS:
@ -101,16 +101,16 @@ requires). To install, run one of the following commands for your OS:
<summary>Linux (x86-64)</summary>
```
wget https://dl.google.com/go/go1.22.6.linux-amd64.tar.gz
sha256sum go1.22.6.linux-amd64.tar.gz | awk -F " " '{ print $1 }'
wget https://dl.google.com/go/go1.23.6.linux-amd64.tar.gz
sha256sum go1.23.6.linux-amd64.tar.gz | awk -F " " '{ print $1 }'
```
The final output of the command above should be
`999805bed7d9039ec3da1a53bfbcafc13e367da52aa823cb60b68ba22d44c616`. If it
`9379441ea310de000f33a4dc767bd966e72ab2826270e038e78b2c53c2e7802d`. If it
isn't, then the target REPO HAS BEEN MODIFIED, and you shouldn't install
this version of Go. If it matches, then proceed to install Go:
```
sudo rm -rf /usr/local/go && sudo tar -C /usr/local -xzf go1.22.6.linux-amd64.tar.gz
sudo rm -rf /usr/local/go && sudo tar -C /usr/local -xzf go1.23.6.linux-amd64.tar.gz
export PATH=$PATH:/usr/local/go/bin
```
</details>
@ -119,16 +119,16 @@ requires). To install, run one of the following commands for your OS:
<summary>Linux (ARMv6)</summary>
```
wget https://dl.google.com/go/go1.22.6.linux-armv6l.tar.gz
sha256sum go1.22.6.linux-armv6l.tar.gz | awk -F " " '{ print $1 }'
wget https://dl.google.com/go/go1.23.6.linux-armv6l.tar.gz
sha256sum go1.23.6.linux-armv6l.tar.gz | awk -F " " '{ print $1 }'
```
The final output of the command above should be
`b566484fe89a54c525dd1a4cbfec903c1f6e8f0b7b3dbaf94c79bc9145391083`. If it
`27a4611010c16b8c4f37ade3aada55bd5781998f02f348b164302fd5eea4eb74`. If it
isn't, then the target REPO HAS BEEN MODIFIED, and you shouldn't install
this version of Go. If it matches, then proceed to install Go:
```
sudo rm -rf /usr/local/go && tar -C /usr/local -xzf go1.22.6.linux-armv6l.tar.gz
sudo rm -rf /usr/local/go && tar -C /usr/local -xzf go1.23.6.linux-armv6l.tar.gz
export PATH=$PATH:/usr/local/go/bin
```


@ -98,6 +98,20 @@ Once the specification is finalized, it will be the most up-to-date
comprehensive document explaining the Lightning Network. As a result, it will
be recommended for newcomers to read first in order to get up to speed.
# Substantial contributions only
Due to the prevalence of automated analysis and pull request authoring tools
and online competitions that incentivize creating commits in popular
repositories, the maintainers of this project are flooded with trivial pull
requests that only change some typos or other insubstantial content (e.g. the
year in the license file).
If you are an honest user who wants to contribute to this project, please
consider that every pull request takes precious time from the maintainers to
review and assess the impact of the changes, time that could be spent writing
features or fixing bugs.
If you really want to contribute, consider reviewing and testing other users'
pull requests instead. Or add value to the project by writing unit tests.
# Development Practices
Developers are expected to work in their own trees and submit pull requests when
@ -337,7 +351,7 @@ Examples of common patterns w.r.t commit structures within the project:
small scale, fix typos, or any changes that do not modify the code, the
commit message of the HEAD commit of the PR should end with `[skip ci]` to
skip the CI checks. When pushing to such an existing PR, the latest commit
being pushed should end with `[skip ci]` as to not inadvertantly trigger the
being pushed should end with `[skip ci]` as to not inadvertently trigger the
CI checks.
## Sign your git commits

View file

@ -78,7 +78,7 @@ prefix is overwritten by the `LNWL` subsystem.
Moreover when using the `lncli` command the return value will provide the
updated list of all subsystems and their associated logging levels. This makes
it easy to get an overview of the corrent logging level for the whole system.
it easy to get an overview of the current logging level for the whole system.
Example:

View file

@ -91,17 +91,12 @@ types in a series of changes:
## lncli Additions
* [`updatechanpolicy`](https://github.com/lightningnetwork/lnd/pull/8805) will
now update the channel policy if the edge was not found in the graph
database if the `create_missing_edge` flag is set.
# Improvements
## Functional Updates
## RPC Updates
## lncli Updates
## Code Health
## Breaking Changes
## Performance Improvements
@ -124,5 +119,4 @@ types in a series of changes:
* George Tsagkarelis
* Olaoluwa Osuntokun
* Oliver Gugger
* Ziggie
* Ziggie


@ -0,0 +1,79 @@
# Release Notes
- [Bug Fixes](#bug-fixes)
- [New Features](#new-features)
- [Functional Enhancements](#functional-enhancements)
- [RPC Additions](#rpc-additions)
- [lncli Additions](#lncli-additions)
- [Improvements](#improvements)
- [Functional Updates](#functional-updates)
- [RPC Updates](#rpc-updates)
- [lncli Updates](#lncli-updates)
- [Breaking Changes](#breaking-changes)
- [Performance Improvements](#performance-improvements)
- [Technical and Architectural Updates](#technical-and-architectural-updates)
- [BOLT Spec Updates](#bolt-spec-updates)
- [Testing](#testing)
- [Database](#database)
- [Code Health](#code-health)
- [Tooling and Documentation](#tooling-and-documentation)
# Bug Fixes
* [Fixed a bug](https://github.com/lightningnetwork/lnd/pull/9459) where we
would not cancel accepted HTLCs on AMP invoices if the whole invoice was
canceled.
# New Features
## Functional Enhancements
## RPC Additions
## lncli Additions
* [`updatechanpolicy`](https://github.com/lightningnetwork/lnd/pull/8805) will
now update the channel policy if the edge was not found in the graph
database if the `create_missing_edge` flag is set.
# Improvements
## Functional Updates
## RPC Updates
## lncli Updates
## Code Health
## Breaking Changes
## Performance Improvements
# Technical and Architectural Updates
## BOLT Spec Updates
## Testing
## Database
* [Remove global application level lock for
Postgres](https://github.com/lightningnetwork/lnd/pull/9242) so multiple DB
transactions can run at once, increasing efficiency. Includes several bugfixes
to allow this to work properly.
## Code Health
* [Golang was updated to
`v1.22.11`](https://github.com/lightningnetwork/lnd/pull/9462).
* [Improved user experience](https://github.com/lightningnetwork/lnd/pull/9454)
by returning a custom error code when an HTLC carries incorrect custom records.
* [Make input validation stricter](https://github.com/lightningnetwork/lnd/pull/9470)
when using the `BumpFee`, `BumpCloseFee` (deprecated) and `BumpForceCloseFee`
RPCs. The `BumpFee` RPC gains the new `deadline_delta` param, and the
`BumpForceCloseFee` RPC gains a `conf_target` param. The meaning of
`conf_target` has changed for all RPCs that already had it: it is now used to
estimate the starting fee rate instead of being treated as the deadline, and it
cannot be set together with `StartingFeeRate`. Moreover, if the user specifies
the `deadline_delta` param, the budget value has to be set as well.
## Tooling and Documentation
# Contributors (Alphabetical Order)
* Ziggie
* Jesse de Wit
* Alex Akselrod
* Konstantin Nick

Some files were not shown because too many files have changed in this diff