For older nodes, this bucket was never created, so we'll get an error if
we try to query it. In this commit, we catch this error just as we do
when a given channel doesn't have the information (but the bucket
actually exists).
Fixes #6155
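A minimal sketch of the pattern in Go, with FetchHistoricalChannel,
ErrNoHistoricalBucket, and ErrChannelNotFound as assumed names based on
the description above:

```go
// Sketch only: treat a missing historical bucket the same as missing
// per-channel data, so startup can proceed on older nodes.
histChan, err := c.cfg.FetchHistoricalChannel(&c.cfg.ChanPoint)
switch err {
case nil:
	// Historical state found; supplement the resolvers below.

case channeldb.ErrNoHistoricalBucket, channeldb.ErrChannelNotFound:
	// Older nodes never created the bucket, or the channel was
	// closed before we stored historical state: carry on without it.
	histChan = nil

default:
	return err
}
```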
In this commit, we fix a bug that would cause newly updated nodes to be
unable to start up, if they have an older channel that was closed before
we started to store all the historical state for each channel.
The issue is that we started to write the complete state to disk, but
older channels don't have it, so when we try to supplement the
resolvers, we run into this error.
Ultimately, we only need this new supplemented information for script
enforcement channels. Ideally we would check the channel type there
instead, but it doesn't appear to be available in this context as is,
without further changes.
Fixes https://github.com/lightningnetwork/lnd/issues/6001.
In order to sweep the commitment and HTLC outputs belonging to a
script-enforced leased channel, each resolver must know whether the
additional CLTV clause on the channel initiator applies to them. To do
so, we retrieve the historical channel state stored within the database
and supplement it to the resolvers to provide them with what's needed in
order to sweep the necessary outputs and resolve their respective
contracts.
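A short sketch of the supplement step, with SupplementState as an
assumed name for the resolver hook:

```go
// Hand the historical channel state to each resolver so it can tell
// whether the lease CLTV clause applies to its output.
if histChan != nil {
	for _, resolver := range resolvers {
		resolver.SupplementState(histChan)
	}
}
```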
This commit modifies the channel state machine to be able to derive the
proper commitment and second-level HTLC output scripts required by the
new script-enforced leased channel commitment type.
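As a rough illustration, the to_local delay path of such a commitment
gains a CLTV clause on top of the usual CSV delay. The opcode layout
below approximates the idea using btcd's txscript builder and is not
lnd's exact script:

```go
import "github.com/btcsuite/btcd/txscript"

// toLocalLeaseScript sketches a to_local script where the initiator's
// funds are additionally time-locked until the lease expiry.
func toLocalLeaseScript(revokeKey, delayKey []byte, csvDelay,
	leaseExpiry uint32) ([]byte, error) {

	builder := txscript.NewScriptBuilder()
	builder.AddOp(txscript.OP_IF)
	// Revocation path: spendable with the revocation key.
	builder.AddData(revokeKey)
	builder.AddOp(txscript.OP_ELSE)
	// Delay path: locked by the lease expiry (CLTV) in addition to
	// the usual relative delay (CSV).
	builder.AddInt64(int64(leaseExpiry))
	builder.AddOp(txscript.OP_CHECKLOCKTIMEVERIFY)
	builder.AddOp(txscript.OP_DROP)
	builder.AddInt64(int64(csvDelay))
	builder.AddOp(txscript.OP_CHECKSEQUENCEVERIFY)
	builder.AddOp(txscript.OP_DROP)
	builder.AddData(delayKey)
	builder.AddOp(txscript.OP_ENDIF)
	builder.AddOp(txscript.OP_CHECKSIG)

	return builder.Script()
}
```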
In this commit, we take an initial step towards converting the existing
breach arbiter and utxo nursery logic into contract resolvers by moving
the files as is into the `contractcourt` package.
This commit is primarily a move-only change, though we had to massage
some interfaces and config names along the way to make things compile
and the tests run properly.
Adds an optional tx parameter to ForAllOutgoingChannels and FetchChannel
so that data can be queried within the context of an existing database
transaction.
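A sketch of the pattern, assuming lnd's kvdb.RTx/kvdb.View helpers;
fetchChanByPoint stands in for the actual bucket lookup:

```go
// FetchChannel reuses the caller's read transaction when one is
// supplied, and opens its own otherwise.
func (d *DB) FetchChannel(tx kvdb.RTx,
	chanPoint wire.OutPoint) (*OpenChannel, error) {

	var channel *OpenChannel
	fetch := func(tx kvdb.RTx) error {
		var err error
		channel, err = fetchChanByPoint(tx, chanPoint)
		return err
	}

	// Reuse the caller's transaction if there is one.
	if tx != nil {
		if err := fetch(tx); err != nil {
			return nil, err
		}
		return channel, nil
	}

	err := kvdb.View(d, fetch, func() { channel = nil })
	return channel, err
}
```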
We might want to react to a channel being fully resolved after being
involved in a force close. For this we add a new callback and invoke it
where appropriate.
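Sketched as a config hook; the field name here is an assumption:

```go
type ChannelArbitratorConfig struct {
	// NotifyFullyResolved is invoked once every contract belonging
	// to the force-closed channel has been resolved on chain.
	NotifyFullyResolved func(chanPoint wire.OutPoint)
}
```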
This commit adds two tests to check that a) the correct deadline is used
given different HTLC sets and b) the correct deadlines are used when
sweeping anchors.
This commit adds a deadline field to mockSweeper that can be used to
track the customized conf target (deadline) used for sweeping anchors.
The relevant test, TestChannelArbitratorAnchors, is updated to reflect
that the deadlines are indeed taking effect.
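A minimal sketch of the mock, with field and parameter shapes assumed:

```go
// mockSweeper records the conf target (deadline) of every sweep
// request so the test can assert that anchors use the HTLC deadline.
type mockSweeper struct {
	deadlines chan int
}

func (m *mockSweeper) SweepInput(in input.Input,
	params sweep.Params) (chan sweep.Result, error) {

	m.deadlines <- int(params.Fee.ConfTarget)

	return make(chan sweep.Result, 1), nil
}
```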
In this commit, we make anchor sweeping for the commitment transactions
aware of the deadline derived from each commitment's HTLC set. It's very
likely we will use a much larger conf target from now on, saving us some
sats.
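The deadline itself can be sketched as the closest expiry in the HTLC
set (helper name hypothetical):

```go
// anchorSweepDeadline returns the closest CLTV expiry among the HTLCs,
// and false when the set is empty and no deadline applies.
func anchorSweepDeadline(htlcs []channeldb.HTLC) (uint32, bool) {
	var deadline uint32
	for _, htlc := range htlcs {
		if deadline == 0 || htlc.RefundTimeout < deadline {
			deadline = htlc.RefundTimeout
		}
	}

	return deadline, deadline != 0
}
```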
This commit adds a new struct AnchorResolutions which wraps the anchor
resolutions for local/remote/pending remote commitment transactions. It
is then returned from NewAnchorResolutions. Thus the caller knows how to
retrieve a certain anchor resolution.
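A sketch of the wrapper as described:

```go
// AnchorResolutions groups the anchor resolutions for each possible
// commitment at the time of close.
type AnchorResolutions struct {
	// Local is the anchor resolution for our commitment.
	Local *lnwallet.AnchorResolution

	// Remote is the anchor resolution for the remote commitment.
	Remote *lnwallet.AnchorResolution

	// RemotePending is the anchor resolution for the remote pending
	// commitment, if any.
	RemotePending *lnwallet.AnchorResolution
}
```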
This commit makes the handoff procedure between the breacharbiter and
chainwatcher use a function closure to mark the channel pending closed
in the DB. Doing it this way, we know that the channel has been marked
pending closed in the DB when ProcessACK returns.
The reason we do this is that we really need a "two-way ACK" to have the
breacharbiter know it can go on with the breach handling. Earlier it
would just send the ACK on the channel and continue. This led to a race
where breach handling could finish before the chain watcher had marked
the channel pending closed in the database, which in turn led to the
breacharbiter failing to mark the channel fully closed.
We saw this causing flakes during itests.
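A hedged sketch of the two-way handshake; the shape of
ContractBreachEvent and the markChannelClosed helper are assumptions:

```go
// The chain watcher hands the breach arbiter a closure; ProcessACK
// only returns once the channel is marked pending closed in the DB.
event := &ContractBreachEvent{
	ProcessACK: func(brarErr error) error {
		if brarErr != nil {
			return brarErr
		}

		// The breach arbiter has persisted the retribution, so
		// it is now safe to mark the channel pending closed.
		return markChannelClosed(closeSummary)
	},
}
```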
This commit moves the contract breach event dispatch to after the
channel close summary has been added to the database. This is important,
as otherwise we might attempt to mark the channel fully closed while the
channel close summary is not yet serialized.
This commit moves the logic for sweeping the confirmed second-level
timeout transaction into its own method.
We make a small change to the logic: when setting the spending tx in the
report, we use the detected commitspend instead of the presigned timeout
tx. This is to prepare for the coming change where the spending
transaction might actually be a re-signed timeout tx, and will therefore
have a different txid.
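The report change, sketched with approximate field names:

```go
// Record the actual spending transaction, which after the upcoming
// change may be a re-signed timeout tx with a different txid than the
// presigned one.
h.reportLock.Lock()
h.currentReport.SpendTxID = commitSpend.SpenderTxHash
h.reportLock.Unlock()
```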
This commit makes HTLC resolutions that have non-nil SignDetails
(meaning we can re-sign the second-level transactions) go through the
sweeper. They will be offered to the sweeper, which will cluster them
and arrange them on its sweep transaction. When that is done, we will
further sweep the output on this sweep transaction like any other
second-level tx.
In this commit we do this for the HTLC success resolver and the
accompanying HTLC success transaction.
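Sketched as a branch in the resolver (names approximate):

```go
// Resolutions with SignDetails can be re-signed, so the second-level
// input is offered to the sweeper instead of being broadcast as-is.
if h.htlcResolution.SignDetails != nil {
	_, err := h.Sweeper.SweepInput(secondLevelInput, sweep.Params{})
	if err != nil {
		return err
	}
}
```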
To make the linter happy, use a pointer to the inner resolver. Otherwise
the linter would complain with
copylocks: literal copies lock value
since we'll add a mutex to the resolver in following commits.
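In short:

```go
// Taking a pointer avoids copying the struct (and, soon, its embedded
// mutex), which is what the copylocks check flags.
resolver := &htlcSuccessResolver{} // rather than htlcSuccessResolver{}
```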
* mod: bump btcwallet version to accept db timeout
* btcwallet: add DBTimeOut in config
* kvdb: add database timeout option for bbolt
This commit adds a DBTimeout option to the bbolt config. The relevant
functions walletdb.Open/Create are updated to use this config. In
addition, the bolt compactor also applies the new timeout option (see
the bbolt sketch after this list).
* channeldb: add DBTimeout in db options
This commit adds the DBTimeout option for channeldb. A new unit
test file is created to test the default options. In addition,
the params used in kvdb.Create inside channeldb_test are updated
with a DefaultDBTimeout value.
* contractcourt+routing: use DBTimeout in kvdb
This commit touches multiple test files in contractcourt and routing.
The calls to kvdb.Create and kvdb.Open are now updated with the new
param DBTimeout, using the default value kvdb.DefaultDBTimeout.
* lncfg: add DBTimeout option in db config
The DBTimeout option is added to the db config. A new unit test is
added to check that the default DB config is created as expected.
* migration: add DBTimeout param in kvdb.Create/kvdb.Open
* keychain: update tests to use DBTimeout param
* htlcswitch+chainreg: add DBTimeout option
* macaroons: support DBTimeout config in creation
This commit adds the DBTimeout during the creation of macaroons.db.
The usage of kvdb.Create and kvdb.Open in its tests is updated with
a timeout value using kvdb.DefaultDBTimeout.
* walletunlocker: add dbTimeout option in UnlockerService
This commit adds a new param, dbTimeout, during the creation of
UnlockerService. This param is then passed to wallet.NewLoader
inside various service calls, specifying a timeout value to be
used when opening the bbolt database. In addition, the
macaroonService is also called with this dbTimeout param.
* watchtower/wtdb: add dbTimeout param during creation
This commit adds the dbTimeout param for the creation of both
watchtower.db and wtclient.db.
* multi: add db timeout param for walletdb.Create
This commit adds the db timeout param for the function call
walletdb.Create. It touches only the test files found in chainntnfs,
lnwallet, and routing.
* lnd: pass DBTimeout config to relevant services
This commit enables lnd to pass the DBTimeout config to the following
services/configs/functions:
- chainControlConfig
- walletunlocker
- wallet.NewLoader
- macaroons
- watchtower
In addition, the usage of wallet.Create is updated too.
* sample-config: add dbtimeout option
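For reference, this is roughly how the timeout surfaces at the bbolt
layer; the lnd options above are plumbing around bbolt's own
Options.Timeout:

```go
package main

import (
	"log"
	"time"

	bolt "go.etcd.io/bbolt"
)

func main() {
	// With a non-zero Timeout, Open returns an error instead of
	// blocking indefinitely when another process holds the lock.
	db, err := bolt.Open("wallet.db", 0600, &bolt.Options{
		Timeout: 60 * time.Second,
	})
	if err != nil {
		log.Fatalf("unable to open db: %v", err)
	}
	defer db.Close()
}
```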
This adds a scenario to the local force close test cases where a node
force closes one of its channels, then loses state (or does recovery)
before the commitment is confirmed. Without the previous commit this
would go undetected.
This commit fixes a bug that would cause us to not sweep our local
output in case we force closed, then lost state or attempted recovery.
The reason is that we would use our local commit height when deriving
our scripts, which would be incorrect. Instead we use the extracted
state number to derive the correct scripts, allowing us to sweep the
output.
Although this is an unlikely scenario, we would leave money on chain in
this case without any warning (since we would just end up with an empty
delay script) and forget about the spend.
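A hedged sketch of the fix using btcd/lnd primitives (the exact call
sites differ):

```go
// Use the state number recovered from the confirmed commitment rather
// than our possibly stale local commit height.
stateNum := lnwallet.GetStateNumHint(commitTx, obfuscator)

commitSecret, err := chanState.RevocationProducer.AtIndex(stateNum)
if err != nil {
	return err
}
commitPoint := input.ComputeCommitmentPoint(commitSecret[:])
// ...derive the to_local delay script from commitPoint as usual.
```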
Similar to what we did for the local state handling, we extract handling
all known remote states we can act on (breach, current, pending state)
into its own method.
Since we want to handle the case where we lost state (for both local
and remote closes) last, we don't rely on the remote state number to
determine which commit we are looking at, but match on TXIDs directly.
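Sketched as a txid match (variable names assumed):

```go
// Classify the confirmed commitment by txid rather than by the remote
// state number.
switch *commitSpend.SpenderTxHash {
case remoteCommit.CommitTx.TxHash():
	// Current remote state confirmed.

case pendingRemoteCommit.CommitTx.TxHash():
	// Pending (unrevoked) remote state confirmed.

default:
	// Not a state we know: a revoked state (breach) or, if we lost
	// state, a commitment ahead of ours; handled last.
}
```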
The tests didn't really roll back the channel state, so we would only
rely on the state number to determine whether we had lost state. Now we
properly roll back the channel to a previous state, in preparation for
upcoming changes.
To allow us to grab all of the information we need for our channel
arbitrators in a more efficient way on startup, we add an optional tx
to the lookup functions required on start.
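For example (sketch, assuming the kvdb.View helper):

```go
// Share one read transaction across all startup lookups instead of
// opening a fresh one per channel.
err := kvdb.View(db, func(tx kvdb.RTx) error {
	for _, chanPoint := range chanPoints {
		channel, err := db.FetchChannel(tx, chanPoint)
		if err != nil {
			return err
		}

		// ...seed the channel arbitrator from channel.
		_ = channel
	}

	return nil
}, func() {})
```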