The AMP struct in a hop was never being set when deserialized. Also,
the AMP TLV record was not being added when the hop was serialized.
This sets the TLV record when serializing and correctly sets the
AMP struct on the hop when that record is present.
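A rough sketch of the shape of the fix, assuming lnd's `record` and `tlv` packages; the `Hop` type and method names here are illustrative stand-ins, not lnd's actual ones:

```go
package hop

import (
	"bytes"

	"github.com/lightningnetwork/lnd/record"
	"github.com/lightningnetwork/lnd/tlv"
)

// Hop is a stand-in for the real hop type; only the AMP field matters
// for this sketch.
type Hop struct {
	AMP *record.AMP
}

// serialize now adds the AMP TLV record whenever the struct is set.
func (h *Hop) serialize(w *bytes.Buffer) error {
	var records []tlv.Record
	if h.AMP != nil {
		records = append(records, h.AMP.Record())
	}
	stream, err := tlv.NewStream(records...)
	if err != nil {
		return err
	}
	return stream.Encode(w)
}

// deserialize sets h.AMP only when the AMP record was actually present
// in the stream, instead of leaving it nil unconditionally.
func (h *Hop) deserialize(r *bytes.Buffer) error {
	amp := &record.AMP{}
	stream, err := tlv.NewStream(amp.Record())
	if err != nil {
		return err
	}
	parsed, err := stream.DecodeWithParsedTypes(r)
	if err != nil {
		return err
	}
	if _, ok := parsed[record.AMPOnionType]; ok {
		h.AMP = amp
	}
	return nil
}
```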
Co-authored-by: BitcoinCoderBob <90647227+BitcoinCoderBob@users.noreply.github.com>
Co-authored-by: Tee8z <tee8z@protonmail.com>
We've only ever created macaroons with the v2 version, so we should
explicitly reject any that aren't actually v2. We add a basic test
along the way, and also add a similar check for the version encoded in
the macaroon ID.
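A minimal sketch of the version check, using gopkg.in/macaroon.v2; the helper name and error text are illustrative:

```go
package macaroons

import (
	"fmt"

	macaroon "gopkg.in/macaroon.v2"
)

// checkVersion deserializes a macaroon and rejects anything that isn't
// actually a v2 macaroon.
func checkVersion(macBytes []byte) (*macaroon.Macaroon, error) {
	mac := &macaroon.Macaroon{}
	if err := mac.UnmarshalBinary(macBytes); err != nil {
		return nil, err
	}
	if mac.Version() != macaroon.V2 {
		return nil, fmt.Errorf("unsupported macaroon version %v, "+
			"expected v2", mac.Version())
	}
	return mac, nil
}
```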
* sweep: use longer variable name for clarity in `addToState`
* sweeper: add more docs and debug logs
* sweep: prioritize smaller inputs when adding wallet UTXOs
This commit sorts wallet UTXOs by their values when using them for
sweeping inputs. This way we avoid locking large UTXOs when sweeping
inputs and also get an opportunity to aggregate wallet UTXOs.
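A sketch of the ordering, with an illustrative UTXO type: sorting ascending by value means smaller wallet UTXOs are consumed first, leaving larger ones unlocked.

```go
package sweep

import "sort"

// Utxo is a stand-in for the wallet UTXO type used here.
type Utxo struct {
	Value int64 // in satoshis
}

// sortUtxosByValue orders wallet UTXOs ascending by value, so smaller
// UTXOs are consumed first and larger ones stay unlocked.
func sortUtxosByValue(utxos []Utxo) {
	sort.Slice(utxos, func(i, j int) bool {
		return utxos[i].Value < utxos[j].Value
	})
}
```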
* contractcourt+itest: relax anchor sweeping for CPFP purpose
This commit changes anchor sweeping for a local force close from always
being attempted to only being attempted when there is actual time
pressure. After this change, a forced anchor sweep will only be
attempted when the deadline is less than 144 blocks.
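An illustrative reduction of the new decision logic; the names are assumptions rather than lnd's, and the fee-rate condition comes from the test update noted below:

```go
package contractcourt

// cpfpDeadlineThreshold is the deadline (in blocks) below which we
// consider there to be real time pressure.
const cpfpDeadlineThreshold = 144

// needsCPFP reports whether the anchor should be force-swept to bump
// the commitment tx's fee: only when the deadline is close and the
// sweep would actually pay a higher fee rate.
func needsCPFP(deadline uint32, sweepFeeRate, commitFeeRate uint64) bool {
	return deadline < cpfpDeadlineThreshold && sweepFeeRate > commitFeeRate
}
```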
* docs: update release notes
* itest: update test `testMultiHopHtlcLocalChainClaim` to skip CPFP
Since we now only perform CPFP when both the fee rate is higher and the
deadline is less than 144 blocks, we need to update the test to reflect
that Bob will not CPFP the force close tx for the channel Alice->Bob.
* itest: fix `testMultiHopRemoteForceCloseOnChainHtlcTimeout`
* itest: update related tests to reflect anchor sweeping
This commit updates all related tests to reflect the latest anchor
sweeping behavior. Previously, anchor sweeping was always attempted as
CPFP when a force close was broadcast, while now it only happens when
the deadline is less than 144 blocks. Non-CPFP anchor sweeping happens
one block after the force close transaction confirms, as the anchor is
then resent to the sweeper with a floor fee rate, making it economical
to sweep.
* multi: extend InvoiceDB methods with a context argument
This commit adds a context to InvoiceDB's methods. Along with this refactor
we also extend InvoiceRegistry methods with contexts where it makes
sense. This change is essential to be able to provide kvdb and sqldb
implementations for InvoiceDB.
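An illustrative before/after of the shape of this change; the method shown is just one example, not the full InvoiceDB surface:

```go
package invoices

import "context"

type Invoice struct{}
type InvoiceRef struct{}

// Before: no way to plumb cancellation or a db transaction scope
// through the call.
type invoiceDBOld interface {
	LookupInvoice(ref InvoiceRef) (Invoice, error)
}

// After: every method takes a context, which both the kvdb and sqldb
// implementations can make use of.
type InvoiceDB interface {
	LookupInvoice(ctx context.Context, ref InvoiceRef) (Invoice, error)
}
```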
* channeldb: restrict invoice tests to only use an InvoiceDB instance
* docs: update release notes for 0.18.0
* htlcswitch/hop: use InvalidOnionVersion for replayed packets
The link will send an update_fail_malformed_htlc, so we need to set
the BADONION bit. Since there isn't a replay-specific error, we set the
failure code to InvalidOnionVersion, which has the BADONION bit set.
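A sketch of why InvalidOnionVersion fits here; the flag values are the BOLT 4 failure-code bits:

```go
package hop

// Failure code flags, per BOLT 4.
const (
	FlagBadOnion uint16 = 0x8000
	FlagPerm     uint16 = 0x4000

	// invalid_onion_version = BADONION|PERM|4.
	CodeInvalidOnionVersion = FlagBadOnion | FlagPerm | 4
)

// hasBadOnionBit reports whether a failure code is valid for use in an
// update_fail_malformed_htlc message.
func hasBadOnionBit(code uint16) bool {
	return code&FlagBadOnion != 0
}
```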
* release-notes: update for 0.17.1
* Add instructions to generate type hints
Use mypy to generate .pyi files as well. These files are useful for type hinting in IDEs.
* Update python.md
fix lines that got swapped in copy-paste
* remove mypy
mypy is not a dependency
In this commit, we modify the incoming contest resolver to use a
concurrent queue. This is meant to ensure that the invoice registry
subscription loop never blocks. This change is meant to be minimal and
implements option `5` as outlined here:
https://github.com/lightningnetwork/lnd/issues/8023.
With this change, the inner loop of the subscription dispatch method in
the invoice registry will no longer block, as the concurrent queue uses
a fixed-size buffer, then overflows into an internal overflow queue when
that buffer fills up.
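A sketch of the pattern, assuming the API of lnd's `queue.ConcurrentQueue`: sends on ChanIn never block, since items spill into an internal overflow list once the fixed buffer fills.

```go
package main

import (
	"fmt"

	"github.com/lightningnetwork/lnd/queue"
)

func main() {
	// Fixed buffer of 10 items; beyond that, items overflow rather
	// than blocking the producer.
	q := queue.NewConcurrentQueue(10)
	q.Start()
	defer q.Stop()

	// The registry's dispatch loop plays the producer role and never
	// blocks on this send.
	q.ChanIn() <- "invoice update"

	// The resolver drains the consumer side at its own pace.
	fmt.Println(<-q.ChanOut())
}
```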
Fixes https://github.com/lightningnetwork/lnd/issues/7917
In this commit, we add the ability to obtain blocking and mutex
profiles. The blocking profile will show which goroutines are
consistently blocked on synchronization primitives like channels or
I/O. The mutex profile will show which mutexes are heavily contended.
The blocking profile can be enabled with a new arg: `--blockingprofile`.
The mutex profile can be enabled with a new arg: `--mutexprofile`. These
are both ignored if the profile port isn't set.
Activating these profiles requires the caller to pass in a sampling
rate. For now I've set it just to `1` to test things out. Unfortunately
documentation is rather scarce, so there aren't any good guides
regarding what these values should be set to. AFAICT, these add more
overhead than the other profiling options, so they shouldn't necessarily
be enabled persistently in production.
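A minimal sketch of how such profiles are wired up with the Go runtime and pprof; the port here is just an example:

```go
package main

import (
	"net/http"
	_ "net/http/pprof" // registers the /debug/pprof handlers
	"runtime"
)

func main() {
	// A rate of 1 samples every blocking event and every contended
	// mutex, which is also why these add noticeable overhead.
	runtime.SetBlockProfileRate(1)
	runtime.SetMutexProfileFraction(1)

	// The profiles are then available at /debug/pprof/block and
	// /debug/pprof/mutex on the profile port.
	_ = http.ListenAndServe("localhost:6060", nil)
}
```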
In this commit, we update the set of protos to accept the local secret
nonces over RPC. This is actually a 97-byte value, as it includes the
two 32-byte nonces, as well as the 33-byte value of the public key of
the signer.
This is needed in order to be able to open taproot channels over the RPC
interface.
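The arithmetic, spelled out as constants; the names are illustrative:

```go
package lnrpc

const (
	secretNonceSize = 32 // each of the two secret nonces
	pubKeySize      = 33 // the signer's public key

	// 32 + 32 + 33 = 97 bytes on the wire.
	localNonceSize = 2*secretNonceSize + pubKeySize
)
```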
In this commit, we increase the max message size for the ws proxy. We
have a similar setting for the normal gRPC server which was tuned to be
able to support decoding `GetNetworkInfo` as the channel graph got
larger. We keep the default buffer size of 64 KB, but allow that to be
expanded to up to 4 MB (current value) to decode larger messages.
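A sketch of that tuning on a `bufio.Scanner`, which is the standard way to keep a small initial buffer while allowing growth up to a cap:

```go
package main

import (
	"bufio"
	"os"
)

func main() {
	scanner := bufio.NewScanner(os.Stdin)

	// Keep the 64 KB default initial allocation, but allow the buffer
	// to grow up to 4 MB for larger messages.
	scanner.Buffer(make([]byte, 64*1024), 4*1024*1024)

	for scanner.Scan() {
		_ = scanner.Bytes()
	}
}
```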
One alternative would be to modify the `Split` function to break up
larger lines into smaller ones. We'd need to double check that the
libraries at a higher level of abstraction can handle the chunks. The
scan function would look something like:
```go
splitFunc := func(data []byte, atEOF bool) (int, []byte, error) {
	// Emit an oversized line in chunkSize-sized pieces rather than
	// growing the buffer without bound.
	if len(data) >= chunkSize {
		return chunkSize, data[:chunkSize], nil
	}

	// Otherwise, fall back to the standard line splitting.
	return bufio.ScanLines(data, atEOF)
}
scanner.Split(splitFunc)
```
In this commit, we update the start-up logic to gracefully handle a
seemingly rare case. In this case, a peer detects local data loss with a
set of active HTLCs. These HTLCs then eventually expire (they may or may
not actually "exist"), causing a force close decision. Before this PR,
this attempt would fail with a fatal error that could impede start-up.
To better handle such a scenario, we'll now catch the error when we fail
to force close due to entering the DLP and instead terminate the state
machine at the broadcast state. When a commitment transaction eventually
confirms, we'll play it as normal.
Fixes https://github.com/lightningnetwork/lnd/issues/7984
* lnwallet: fix log output msg
The log message is off by one.
* htlcswitch: fail channel when revoking it fails.
When the revocation of a channel state fails after receiving a new
CommitmentSigned msg, we have to fail the channel, otherwise we
continue with an unclean state.
* docs: update release-docs
* htlcswitch: tear down connection if revocation processing fails
If we couldn't revoke due to a DB error, then we want to also tear down
the connection, as we don't want the other party to continue to send
updates. That may lead to de-sync'd state and an eventual force close.
With the connection torn down, the database might be able to recover
come the next reconnection attempt.
* kvdb: use sql.LevelSerializable for all backends
In this commit, we modify the default isolation level to be
`sql.LevelSerializable`. This is the strictest isolation level for
postgres. For sqlite, there's only ever a single writer, so this doesn't
apply directly.
* kvdb/sqlbase: add randomized exponential backoff for serialization failures
In this commit, we add randomized exponential backoff for serialization
failures. For postgres, we'll hit this any time a transaction set fails
to be linearized. For sqlite, we'll hit this if we have many writers
trying to grab the write lock at the same time, manifesting as a
`SQLITE_BUSY` error code.
As is, we'll retry up to 10 times, waiting a minimum of 50 milliseconds
between each attempt, up to a maximum of 5 seconds. For
sqlite, this is also bounded by the busy timeout set, which applies on
top of this retry logic (block for busy timeout seconds, then apply this
backoff logic).
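A sketch of the retry loop under those parameters; the helper and its arguments are illustrative, not lnd's actual API:

```go
package sqlbase

import (
	"math/rand"
	"time"
)

const (
	maxRetries = 10
	minDelay   = 50 * time.Millisecond
	maxDelay   = 5 * time.Second
)

// withRetry re-runs txn on serialization failures, backing off
// exponentially with jitter between attempts.
func withRetry(txn func() error, isSerializationErr func(error) bool) error {
	var err error
	for attempt := 0; attempt < maxRetries; attempt++ {
		if err = txn(); err == nil || !isSerializationErr(err) {
			return err
		}

		// Double the delay each attempt, capped at the maximum.
		delay := minDelay << uint(attempt)
		if delay > maxDelay {
			delay = maxDelay
		}

		// Sleep somewhere in [delay/2, delay) so retrying writers
		// don't stampede in lockstep.
		jitter := time.Duration(rand.Int63n(int64(delay / 2)))
		time.Sleep(delay/2 + jitter)
	}
	return err
}
```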
* docs/release-notes: add entry for sqlite/postgres tx retry
---------
Co-authored-by: ziggie <ziggie1984@protonmail.com>
In this commit, we make sure that all the `wg.Add(1)` calls succeed
before we attempt to wait on the shutdown of all the goroutines. Under
rare scheduling scenarios, if both `Start` and `Disconnect` are called
concurrently, then this internal race error can be hit, causing the
panic to occur.
Fixes https://github.com/lightningnetwork/lnd/issues/7853
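An illustrative reduction of the fix: all `wg.Add` calls happen before any goroutine is spawned, so a concurrent `Disconnect` can never reach `wg.Wait` while adds are still pending. The types here are stand-ins.

```go
package peer

import "sync"

type peer struct {
	wg   sync.WaitGroup
	quit chan struct{}
}

func newPeer() *peer {
	return &peer{quit: make(chan struct{})}
}

func (p *peer) Start() {
	// Register both goroutines up front: calling Add after a
	// concurrent Wait has started (with the counter at zero) is the
	// internal race that triggers the panic.
	p.wg.Add(2)
	go p.readLoop()
	go p.writeLoop()
}

func (p *peer) Disconnect() {
	close(p.quit)
	p.wg.Wait()
}

func (p *peer) readLoop()  { defer p.wg.Done(); <-p.quit }
func (p *peer) writeLoop() { defer p.wg.Done(); <-p.quit }
```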
In this commit, we add support for two more custom parameters in the
main entrypoint script for the `lnd` Docker image (`start-lnd.sh`).
The two parameters are:
* RPCHOST to set a custom endpoint for the RPC server
* RPCCRTPATH to set a custom location for the certificate used when
communicating with the RPC server
Verify that the addresses we're decoding when sending coins onchain are
for the correct network. Without this check, we'll convert the user's
addresses to their equivalent on other networks, which is a gross
violation of the principle of least astonishment.
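A sketch of the check using btcd's btcutil: decoding alone can succeed for an address from another network (testnet and regtest share address prefixes, for example), so an explicit IsForNet check is needed. The helper name is illustrative.

```go
package lnd

import (
	"fmt"

	"github.com/btcsuite/btcd/btcutil"
	"github.com/btcsuite/btcd/chaincfg"
)

// parseAddr decodes an address and rejects it if it isn't valid for the
// network we're running on.
func parseAddr(addr string, params *chaincfg.Params) (btcutil.Address, error) {
	a, err := btcutil.DecodeAddress(addr, params)
	if err != nil {
		return nil, err
	}
	if !a.IsForNet(params) {
		return nil, fmt.Errorf("address %v is not for network %v",
			addr, params.Name)
	}
	return a, nil
}
```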
In this commit, we add a new LinkFailureDisconnect action that'll be
used if we detect that the remote party hasn't sent a revoke and ack
when it actually should.
Before this commit, we would log our action, tear down the link, but
then not actually force a connection recycle, as we assumed that if the
TCP connection was actually stale, then the read/write timeout would
expire.
In practice this doesn't always seem to be the case, so we take a
stronger action here and actually force a disconnection in hopes that
either side will reconnect and keep the good times rollin' 🕺.
This modifies the `genMacaroons` logic to independently check for each
of the three default macaroons (admin, readonly, invoice) and generate
whichever are missing. Previously, this was an all-or-nothing routine.
In other words, either all three didn't exist on disk and all three were
created, or no macaroons were created. Although that works for the first
run of a new node, it can result in inconsistent states if only one or
two of the macaroons are deleted.
See https://github.com/lightningnetwork/lnd/discussions/7566.
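A sketch of the per-macaroon check; `bakeMacaroon` stands in for whatever actually derives and writes the file:

```go
package lnd

import "os"

// genMacaroons checks each default macaroon independently and bakes
// only the ones that are missing from disk.
func genMacaroons(bakeMacaroon func(path string) error) error {
	paths := []string{
		"admin.macaroon",
		"readonly.macaroon",
		"invoice.macaroon",
	}
	for _, path := range paths {
		if _, err := os.Stat(path); err == nil {
			// Already on disk; leave it alone.
			continue
		}
		if err := bakeMacaroon(path); err != nil {
			return err
		}
	}
	return nil
}
```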
This commit adds a new `MarshalOutPoint` helper in the `lnrpc` package
that can be used to convert a `wire.OutPoint` to an `lnrpc.OutPoint`.
By using this helper, we are less likely to forget to populate both the
string and byte form of the TXID.
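A sketch of what such a helper looks like; the `OutPoint` message fields here follow what the commit describes, but the exact names are assumptions:

```go
package lnrpc

import "github.com/btcsuite/btcd/wire"

// OutPoint mirrors the lnrpc message described above; the field names
// are assumptions for this sketch.
type OutPoint struct {
	TxidBytes   []byte
	TxidStr     string
	OutputIndex uint32
}

// MarshalOutPoint populates both the byte and string forms of the TXID,
// so callers can't forget one of them.
func MarshalOutPoint(op *wire.OutPoint) *OutPoint {
	return &OutPoint{
		TxidBytes:   op.Hash[:],
		TxidStr:     op.Hash.String(),
		OutputIndex: op.Index,
	}
}
```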
In this commit, we fix an existing gap in our rebroadcast handling logic. As
is, if we're trying to sweep a transaction and a conflicting transaction
is mined (timeout lands on chain, anchor swept), then we'll continue to
try to rebroadcast the tx in the background.
To resolve this, we give the sweeper a new closure function that it can
use to mark conflicted transactions as no longer requiring rebroadcast.
In this commit, we increase the default CLTV value to 80 blocks.
Initially this was set to 144 blocks in the early days, but then was
lowered to 40 blocks as the lnd implementation matured. By setting this
to a higher value, we increase the safety window (MTTR) when it comes to
node downtime, and also add some buffer room around time locks which may
become more stressed in the future assuming the current mempool load
remains persistent.
In this commit, a bug is fixed in the funding manager that could result
in the funding process erroring out if the persisted initial forwarding
policy is not found. This could occur if a node restarts after opening a
channel that is not yet fully confirmed and also upgrades
from a pre-0.16 version to 0.16, since the values are only expected to be
persisted after 0.16.
With this commit we update the docs according to the latest changes that
were necessary to support loop and pool (which requires all 255
internally used accounts to be imported at wallet creation time).
Fixes #7567.
In this commit, a small migration is added to the watchtower client DB
to ensure that there is an entry in the towerID-to-sessionID index for
all towers in the db regardless of whether they have sessions or not. This is
required as a follow up to migration 1 since that migration only created
entries in the index for towers that had associated sessions which would
lead to "tower not found" errors on start up.
We make the capacity factor configurable via an lnd.conf routerrpc
apriori parameter. The capacity factor trades off increased success
probability with a reduced set of channel candidates, which may lead to
increased fees. To let users choose whether the factor is active or not,
we add a config setting where a capacity fraction of 1.0 disables the
factor. We limit the capacity fraction to values between 0.75 and 1.0.
Lower values may discard too many channels.
We require channel updates to have the max HTLC message flag set.
Several flows need to pass that check before channel updates are
forwarded to peers:
* after channel funding: `addToRouterGraph`
* after receiving channel updates from a peer:
`ProcessRemoteAnnouncement`
* after we update channel policies: `PropagateChanPolicyUpdate`
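A sketch of the check those flows share; per BOLT 7, bit 0 of a channel update's message_flags signals that htlc_maximum_msat is present. The constant name is illustrative:

```go
package discovery

// chanUpdateOptionMaxHtlc is bit 0 of message_flags (BOLT 7),
// indicating the update carries an htlc_maximum_msat value.
const chanUpdateOptionMaxHtlc uint8 = 1 << 0

// hasMaxHtlc reports whether a channel update sets the max HTLC flag,
// a requirement before the update is forwarded to peers.
func hasMaxHtlc(messageFlags uint8) bool {
	return messageFlags&chanUpdateOptionMaxHtlc != 0
}
```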