In preparation for no longer keeping a local and a remote version of the
database around, we rename the variables to reflect their actual
function. In the case of the RPC server we even use the channel graph
directly instead of the DB instance. This should allow us to extract the
channel graph into its own, separate database (perhaps with better
access characteristics) in the future.
The logger string used to identify the wtclient and wtclientrpc loggers
was the same, making it impossible to modify the log level of the
wtclient logger, as it would be overwritten by the wtclientrpc one.
To simplify things, we use the existing RPC logger for wtclientrpc
instead.
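To illustrate, here is a minimal Go sketch of the collision in a
btclog-style subsystem registry; the registry, helper, and tag names are
illustrative, not lnd's exact code:

```go
// A minimal sketch of how a duplicate subsystem tag clobbers an earlier
// logger registration. The registry and tags are illustrative only.
package main

import "fmt"

// subsystemLoggers maps a subsystem tag to the package that owns it.
var subsystemLoggers = make(map[string]string)

// addSubLogger registers a logger under the given tag. A second
// registration under the same tag silently replaces the first, so
// log-level changes only ever reach one of the two packages.
func addSubLogger(tag, owner string) {
	subsystemLoggers[tag] = owner
}

func main() {
	// Before: both packages registered under the same tag, so the
	// wtclient entry was overwritten by the wtclientrpc one.
	addSubLogger("WTCL", "wtclient")
	addSubLogger("WTCL", "wtclientrpc")
	fmt.Println(subsystemLoggers["WTCL"]) // wtclientrpc

	// After: wtclientrpc reuses the existing RPC logger's tag, leaving
	// the wtclient tag to the wtclient package alone.
	addSubLogger("WTCL", "wtclient")
	addSubLogger("RPCS", "wtclientrpc")
	fmt.Println(subsystemLoggers["WTCL"]) // wtclient
}
```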
In this commit, we remove the restriction surrounding the largest
invoice that we'll allow a user to create. After #3967 has landed, users
will be able to send, in _aggregate_, a payment larger than the current
max HTLC size limit in the network. As a result, we can treat that value
as the system's MTU and allow users to request payments in multiples of
that MTU value.
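To make the MTU analogy concrete, here is a minimal sketch of how an
aggregate payment above the single-HTLC cap could be split into
HTLC-sized shards; the constant and helper are illustrative, not the
actual pathfinding code:

```go
// A minimal sketch of splitting an aggregate payment into HTLC-sized
// shards. The constant and helper are illustrative, not lnd's code.
package main

import "fmt"

// maxHTLCMsat plays the role of the network's MTU: the largest single
// HTLC we expect to be routable, here the pre-wumbo protocol ceiling of
// 2^32 - 1 millisatoshis (roughly 0.043 BTC).
const maxHTLCMsat = uint64(1<<32 - 1)

// splitPayment shards an invoice amount into pieces no larger than the
// MTU, mirroring how an aggregate payment can exceed the per-HTLC limit.
func splitPayment(amtMsat uint64) []uint64 {
	var shards []uint64
	for amtMsat > 0 {
		shard := amtMsat
		if shard > maxHTLCMsat {
			shard = maxHTLCMsat
		}
		shards = append(shards, shard)
		amtMsat -= shard
	}
	return shards
}

func main() {
	// A 0.05 BTC invoice (5,000,000,000 msat) exceeds the single-HTLC
	// cap and is carried by two shards.
	fmt.Println(splitPayment(5_000_000_000))
}
```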
A follow-up to this PR at a later time will also allow wumbo _channels_.
However, that requires us to tweak the way we scale CSV values since,
post wumbo, there is no true channel size limit, only the _local_ limit
of a given node. We also need to implement a way for nodes to signal
their accepted max channel size to other nodes.
In this commit, we address an issue that would cause us to scan from the
genesis block for the spend of an output that we wish to use to raise
the fee of a transaction through CPFP. This was caused by setting a
height hint of 0 when constructing the input required by the sweeper,
and was discovered thanks to the recently added validation checks at the
chain notifier level. We'll now use the current height as the height
hint instead, since the sweeper will end up creating a new transaction
that spends the input.
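A minimal sketch of the fix; the `Input` type and field names below are
illustrative, not the sweeper's exact API:

```go
// A minimal sketch of the fix. The Input type and fields here are
// illustrative, not the sweeper's exact API.
package main

import "fmt"

// Input describes an output handed to the sweeper. HeightHint tells the
// chain notifier the earliest block to scan for the output's spend.
type Input struct {
	OutPoint   string
	HeightHint uint32
}

// newCPFPInput builds the input used to bump a transaction's fee. Since
// the CPFP child transaction is created only now, its spend can confirm
// no earlier than the current height, so that height is a valid hint.
// A hint of 0, by contrast, forces a rescan from the genesis block.
func newCPFPInput(outPoint string, currentHeight uint32) Input {
	return Input{
		OutPoint:   outPoint,
		HeightHint: currentHeight,
	}
}

func main() {
	fmt.Println(newCPFPInput("txid:0", 634000))
}
```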
In this commit, we add the glue infrastructure to make the sub RPC
server system work properly. Our high level goal is the following: using
only the lnrpc package (with no visibility into the sub RPC servers),
the RPC server is able to find, create, run, and manage the entire set
of present and future sub RPC servers. In order to achieve this, we use
the reflect package and build tags heavily to permit a loosely coupled
configuration parsing system for the sub RPC servers.
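The discovery side can be pictured as a driver registry living in
`lnrpc`: each sub-server registers a constructor under its name from an
`init()` that only runs when its build tag is set, and the main server
simply enumerates the registry. A minimal sketch of that pattern
follows; the names are illustrative, not `lnrpc`'s exact API:

```go
// A minimal sketch of a driver registry, showing how the main RPC server
// can find and create sub-servers without compile-time visibility into
// them. The names are illustrative, not lnrpc's exact API.
package lnrpcsketch

import "fmt"

// SubServer is the minimal behavior the main server needs to run and
// manage a sub RPC server.
type SubServer interface {
	Name() string
	Start() error
	Stop() error
}

// SubServerDriver pairs a sub-server's name with a constructor for it.
type SubServerDriver struct {
	SubServerName string
	New           func(configRegistry interface{}) (SubServer, error)
}

var drivers = make(map[string]*SubServerDriver)

// RegisterSubServer is called from each sub-server's init(), which only
// runs when the corresponding build tag compiles the package in.
func RegisterSubServer(d *SubServerDriver) error {
	if _, ok := drivers[d.SubServerName]; ok {
		return fmt.Errorf("driver %v already registered", d.SubServerName)
	}
	drivers[d.SubServerName] = d

	return nil
}

// RegisteredSubServers lets the main server enumerate whatever sub RPC
// servers were compiled in, without importing their packages.
func RegisteredSubServers() []*SubServerDriver {
	servers := make([]*SubServerDriver, 0, len(drivers))
	for _, d := range drivers {
		servers = append(servers, d)
	}
	return servers
}
```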
We start with a new `subRpcServerConfigs` struct which is _always_
present. This struct has its own flag group and houses a series of
sub-configs, one for each sub RPC server. Each sub-config is gated
behind a build flag, and can be used to allow users on the command line
or in the config file to specify arguments related to the sub-server. If
the config isn't present, then we don't attempt to parse it at all; if
it is, then that means the sub RPC server has been registered, and we
should parse the contents of its config.
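The gating pattern looks roughly like the following sketch; the file
layout, build tags, and fields are illustrative, not lnd's exact code:

```go
// Sketch of the build-flag gating, spread across three files.

// --- signrpc_config_active.go (built with -tags signrpc) ---

//go:build signrpc

package main

// With the tag on, the sub-config carries real fields that the flag
// parser can populate from the command line or the config file.
type signRPCConfig struct {
	MacaroonPath string `long:"macaroonpath" description:"path to the signer macaroon"`
}

// --- signrpc_config_default.go (built without the tag) ---

//go:build !signrpc

package main

// With the tag off, the type still exists so subRpcServerConfigs always
// compiles, but it has no fields and nothing is parsed for it.
type signRPCConfig struct{}

// --- config.go (always present) ---

package main

// subRpcServerConfigs is always compiled in and houses one gated
// sub-config per sub RPC server, each in its own flag group.
type subRpcServerConfigs struct {
	SignRPC *signRPCConfig `group:"signrpc" namespace:"signrpc"`
}
```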
The `subRpcServerConfigs` struct has two main methods:
`PopulateDependencies` and `FetchConfig`. `PopulateDependencies` is used
to dynamically locate and set the config fields for each new sub-server.
As the config may not actually have any fields (if the build flag is
off), we use the reflect package to determine whether things are
compiled in or not, and if so, we dynamically set each of the config
parameters. The `FetchConfig` method implements the
`lnrpc.SubServerConfigDispatcher` interface. Our goal is to allow
sub-servers to look up their actual config in this main config struct.
We achieve this by using reflect to look up the target field _as if it
were a key in a map_. If the field is found, then we check if it has any
actual attributes (it won't if the build flag is off); if it does, then
we return it, as we expect it to be populated already.
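A minimal sketch of that reflect-based lookup, assuming the sub-server's
name matches its config field name case-insensitively; the names and
matching rules are illustrative:

```go
// A minimal sketch of the reflect-based config lookup. The field naming
// and matching rules here are illustrative, not lnd's exact code.
package main

import (
	"fmt"
	"reflect"
	"strings"
)

type signRPCConfig struct {
	MacaroonPath string
}

type subRpcServerConfigs struct {
	SignRPC *signRPCConfig
}

// FetchConfig treats the struct's fields as entries in a map keyed by
// sub-server name, returning the matching config only if it carries
// real fields (i.e. its build flag was on).
func (s *subRpcServerConfigs) FetchConfig(subServerName string) (interface{}, bool) {
	selfVal := reflect.ValueOf(s).Elem()
	selfType := selfVal.Type()

	for i := 0; i < selfType.NumField(); i++ {
		// Match the field name against the sub-server name, as if
		// the struct were a map keyed by sub-server.
		if !strings.EqualFold(selfType.Field(i).Name, subServerName) {
			continue
		}

		cfg := selfVal.Field(i)
		if cfg.IsNil() {
			return nil, false
		}

		// If the build flag was off, the config type has no fields,
		// so there is nothing useful to hand back.
		if cfg.Elem().NumField() == 0 {
			return nil, false
		}

		return cfg.Interface(), true
	}

	return nil, false
}

func main() {
	cfgs := &subRpcServerConfigs{
		SignRPC: &signRPCConfig{MacaroonPath: "/tmp/signer.macaroon"},
	}

	cfg, ok := cfgs.FetchConfig("signrpc")
	fmt.Println(ok, cfg.(*signRPCConfig).MacaroonPath)
}
```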