This will prevent the subservers from writing macaroons to disk
when the stateless_init flag is set to true. It accomplishes
this by storing the StatelessInit value in the Macaroon Service.
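A minimal sketch of the mechanism, with illustrative names rather
than the exact lnd types:

    // Sketch of a macaroon service that remembers the requested
    // initialization mode. Type and method names are illustrative.
    package macaroons

    import "os"

    type Service struct {
        // StatelessInit, if true, means no macaroon may ever be
        // written to disk; it must be handed out in-memory instead
        // (e.g. via the wallet unlocker RPC).
        StatelessInit bool
    }

    // GenMacaroonFile persists a baked macaroon, unless stateless
    // initialization was requested.
    func (s *Service) GenMacaroonFile(path string, mac []byte) error {
        if s.StatelessInit {
            // Stateless init: skip the disk write entirely.
            return nil
        }
        return os.WriteFile(path, mac, 0600)
    }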
Because we'll need to return the macaroon through the wallet unlocker,
we cannot shut down its service before we have done so; otherwise
we'll end up in a deadlock. That's why we collect all shutdown
tasks and return them as a function that can be called after we've
initialized the macaroon service.
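The shape of that pattern, as a standalone sketch with stand-in
services rather than the real lnd wiring:

    package main

    import "fmt"

    // service is a stand-in for anything needing an orderly shutdown.
    type service struct{ name string }

    func (s *service) Stop() { fmt.Println("stopping", s.name) }

    // startServices starts everything that needs cleanup and returns
    // one combined shutdown function instead of stopping anything
    // early. The caller invokes it only after the macaroon service is
    // initialized and the macaroon has been returned through the
    // wallet unlocker, which avoids the deadlock.
    func startServices() (shutdown func(), err error) {
        var tasks []func()
        for _, name := range []string{"chain control", "wallet unlocker"} {
            svc := &service{name: name}
            // Collect the shutdown task rather than running it now.
            tasks = append(tasks, svc.Stop)
        }
        return func() {
            // Run the collected tasks in reverse start order.
            for i := len(tasks) - 1; i >= 0; i-- {
                tasks[i]()
            }
        }, nil
    }

    func main() {
        shutdown, err := startServices()
        if err != nil {
            panic(err)
        }
        // ... init macaroon service, hand out the macaroon ...
        shutdown()
    }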
In preparation for the next commit, where we add proper wallet
unlocker shutdown handling, we move the calls that require cleanup
to after the creation of the wallet unlocker service itself.
To make sure no macaroons are created anywhere if stateless
initialization was requested, we keep the requested initialization
mode in the memory of the macaroon service.
This commit adds the --stateless_init flag to all three wallet unlocker
operations. Once you initialize a wallet statelessly, you need to set
this flag for every further wallet unlocker operation. Otherwise you
risk leaking unencrypted macaroon information to the underlying
system.
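For example, a gRPC client unlocking a statelessly initialized wallet
would set the flag on the request. A sketch against the lnrpc
walletunlocker service; the endpoint and credentials are placeholders,
and a real client would use TLS and proper error handling:

    package main

    import (
        "context"
        "fmt"

        "github.com/lightningnetwork/lnd/lnrpc"
        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
    )

    func main() {
        // Placeholder connection; real clients must use TLS.
        conn, err := grpc.Dial("localhost:10009",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        client := lnrpc.NewWalletUnlockerClient(conn)

        // StatelessInit must be set on this and every subsequent
        // wallet unlocker call, otherwise macaroon information may
        // end up on disk.
        resp, err := client.UnlockWallet(context.Background(),
            &lnrpc.UnlockWalletRequest{
                WalletPassword: []byte("wallet-password"),
                StatelessInit:  true,
            })
        if err != nil {
            panic(err)
        }
        fmt.Println(resp)
    }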
As with kvdb.View, this commit adds a reset closure to the
kvdb.Update call so that external state can be reset if the
underlying DB backend needs to retry the transaction.
This commit adds a reset() closure to the kvdb.View function, which is
called before every attempt (including the first) of the view
transaction. The reset() closure can be used to reset external state
(e.g. slices or maps) in which the view closure accumulates
intermediate results.
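For example, a caller that collects results in a slice can hand
kvdb.View a reset closure that clears the slice, so a retried
transaction doesn't double-count. A sketch; the bucket name is made
up and the kvdb import path may differ by lnd version:

    import "github.com/lightningnetwork/lnd/kvdb"

    // fetchInvoices collects raw values from an illustrative
    // "invoices" bucket, resetting the slice before every attempt
    // so a retried transaction starts from a clean slate.
    func fetchInvoices(db kvdb.Backend) ([][]byte, error) {
        var invoices [][]byte
        err := kvdb.View(db, func(tx kvdb.RTx) error {
            bucket := tx.ReadBucket([]byte("invoices"))
            if bucket == nil {
                return nil
            }
            return bucket.ForEach(func(k, v []byte) error {
                // Intermediate results live in external state.
                invoices = append(invoices, v)
                return nil
            })
        }, func() {
            // reset() runs before each attempt, including the first.
            invoices = nil
        })
        return invoices, err
    }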
This addresses a potential panic when a tower has one of its candidate
sessions chosen, but its only reachable address was removed beforehand
by a user-initiated RPC.
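Conceptually, the fix is to surface an error instead of indexing into
an emptied address list. A simplified sketch with hypothetical types
and names:

    package main

    import (
        "errors"
        "fmt"
        "net"
    )

    // Tower is a stand-in for a watchtower candidate.
    type Tower struct {
        Addresses []net.Addr
    }

    // errNoTowerAddrs signals that a chosen session's tower has no
    // reachable addresses left (e.g. removed over RPC).
    var errNoTowerAddrs = errors.New("tower has no addresses")

    // firstAddr returns an address to dial, failing gracefully
    // instead of panicking on an out-of-range index.
    func firstAddr(t *Tower) (net.Addr, error) {
        if len(t.Addresses) == 0 {
            return nil, errNoTowerAddrs
        }
        return t.Addresses[0], nil
    }

    func main() {
        _, err := firstAddr(&Tower{})
        fmt.Println(err) // tower has no addresses
    }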
Without this change, the high-fee logic is never exercised because the
case is instead caught by the dust-output logic. This change uses a
higher fee rate to ensure an output value above the dust limit, while
still spending 20% on fees.
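As a made-up numeric illustration of why the higher fee rate matters
(these are not the values from the test):

    package main

    import "fmt"

    func main() {
        const (
            input     = 100_000 // satoshis funding the spend
            fee       = 20_000  // satoshis, i.e. 20% spent on fees
            dustLimit = 546     // satoshis
        )
        output := input - fee

        fmt.Println(fee*5 == input)     // true: fee is 20% of input
        fmt.Println(output > dustLimit) // true: the high-fee check is
                                        // reached, not the dust check
    }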
To allow nodes more control over the amount of time that their funds
will be locked up, we add a MaxLocalCSVDelay option which sets the
maximum CSV delay we will accept for all channels. We default to the
existing constant of 10000, and set a sane minimum on this value so
that clients cannot configure an unreasonably low maximum CSV delay,
which would result in their node rejecting all channels.
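The validation roughly takes this shape; the default of 10000 is from
this change, while the minimum value and all names here are
illustrative:

    package main

    import (
        "errors"
        "fmt"
    )

    const (
        // defaultMaxLocalCSVDelay mirrors the existing constant.
        defaultMaxLocalCSVDelay = 10000

        // minMaxLocalCSVDelay is an illustrative sane floor; anything
        // below it would reject essentially all channels.
        minMaxLocalCSVDelay = 144
    )

    // validateMaxLocalCSVDelay fills in the default and rejects
    // unreasonably low values.
    func validateMaxLocalCSVDelay(delay uint16) (uint16, error) {
        switch {
        case delay == 0:
            return defaultMaxLocalCSVDelay, nil
        case delay < minMaxLocalCSVDelay:
            return 0, errors.New("max local CSV delay too low")
        default:
            return delay, nil
        }
    }

    func main() {
        d, err := validateMaxLocalCSVDelay(0)
        fmt.Println(d, err) // 10000 <nil>
    }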
To make sure the test that takes the longest overall time is always
started first, independent of the number of test tranches we run, we
move it to the beginning of the list. Because that test involves a lot
of waiting, running it first lets us experiment with the number of
tranches more efficiently.
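In code terms this is just a reordering of the test slice; a small
sketch with stand-in names:

    package main

    import "fmt"

    // testCase is a stand-in for an integration test entry.
    type testCase struct{ name string }

    // moveToFront returns the list with the named (longest-running)
    // test first, so the first tranche always picks it up no matter
    // how the list is later split.
    func moveToFront(tests []*testCase, name string) []*testCase {
        ordered := make([]*testCase, 0, len(tests))
        for _, tc := range tests {
            if tc.name == name {
                ordered = append([]*testCase{tc}, ordered...)
            } else {
                ordered = append(ordered, tc)
            }
        }
        return ordered
    }

    func main() {
        tests := []*testCase{{"a"}, {"long waiting test"}, {"b"}}
        for _, tc := range moveToFront(tests, "long waiting test") {
            fmt.Println(tc.name) // long waiting test, a, b
        }
    }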