To avoid the problem where multiple jobs try to create the same cache
entry, we add the GitHub job ID to the cache key.
The two restore keys make it possible for a job to also restore a cache
entry from another job if none exists yet for the current job.
With this commit we make the Go version that is set up configurable
rather than dependent on a specific environment variable. This
will allow us to eventually extract the action into a tooling
repository.
For some reason we used to override the GOCACHE and GOPATH variables,
which now causes the updated cache action not to pick up any caches. As
the overrides shouldn't be needed anymore, we remove them.
Our OpenChannel RPC was accepting invalid values for the closing address
field. If we were able to decode the address, we would use it in the
script even if the address was for another Bitcoin network.
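A minimal sketch of the kind of check this implies, using btcutil's
`DecodeAddress` and `IsForNet`; the helper name and error messages are
assumptions, not the actual lnd code:

```go
package rpcserver

import (
	"fmt"

	"github.com/btcsuite/btcd/btcutil"
	"github.com/btcsuite/btcd/chaincfg"
)

// validateClosingAddr is a hypothetical helper: decode the closing
// address and reject it if it is not valid for the active network.
func validateClosingAddr(addr string,
	net *chaincfg.Params) (btcutil.Address, error) {

	decoded, err := btcutil.DecodeAddress(addr, net)
	if err != nil {
		return nil, fmt.Errorf("unable to decode address: %w", err)
	}

	// Decoding alone is not enough: some address encodings for other
	// Bitcoin networks still decode, so check the network explicitly.
	if !decoded.IsForNet(net) {
		return nil, fmt.Errorf("address %v is not for network %v",
			addr, net.Name)
	}

	return decoded, nil
}
```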
Fixes a bug where channel update data is read until the end of the stream
rather than stopping after the specified length. This is problematic
when failure message TLV data is present, because this data is interpreted
as channel update TLV data.
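A sketch of the bounded-read pattern the fix amounts to; the function
name and types are illustrative, not lnd's actual decoding code:

```go
package lnwire

import "io"

// readChanUpdate reads exactly length bytes of channel update data
// from r instead of draining the stream. Any bytes after that (such
// as failure message TLV records) are left for the caller to parse.
func readChanUpdate(r io.Reader, length uint16) ([]byte, error) {
	b := make([]byte, length)
	if _, err := io.ReadFull(r, b); err != nil {
		return nil, err
	}

	return b, nil
}
```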
In this commit, the error returned from `getInitialFwdingPolicy` is
checked in order to avoid a nil pointer dereference panic during the
TestFundingManagerCustomChannelParameters test.
In this commit, we modify our gossip broadcast logic to ensure that we
will always send out our own gossip messages regardless of the
filtering/feature policies of the peer.
Before this commit, it was possible that when we went to broadcast an
announcement, none of our peers actually had us as a syncer peer (lnd
terminology). In this case, the FilterGossipMsg function wouldn't do
anything, as they don't have an active timestamp filter set. When we
then went to merge the syncer map, we'd add all these peers we didn't
send to, meaning we would skip them when it came to broadcast time.
In this commit, we now split things into two phases: we'll broadcast
_our_ own announcements to all our peers, but then do the normal
filtering and chunking for the announcements we got from a remote peer.
Fixes https://github.com/lightningnetwork/lnd/issues/6531
Fixes https://github.com/lightningnetwork/lnd/issues/7223
Fixes https://github.com/lightningnetwork/lnd/issues/7073
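A hedged sketch of the two-phase split; the types and method names
below are stand-ins for the gossiper's actual internals:

```go
package discovery

// gossipPeer is a stand-in for a gossip syncer peer; the real lnd
// types and method names differ.
type gossipPeer interface {
	// SendMessage sends a message unconditionally.
	SendMessage(msg any)

	// FilterGossipMsgs applies the peer's timestamp filter and only
	// forwards messages that pass it (a no-op without an active
	// filter).
	FilterGossipMsgs(msgs ...any)
}

// broadcast splits the broadcast into two phases: our own
// announcements go out to every peer unconditionally, while remotely
// learned announcements go through the usual filtering.
func broadcast(peers []gossipPeer, ownMsgs, remoteMsgs []any) {
	for _, p := range peers {
		// Phase 1: always send our own gossip.
		for _, msg := range ownMsgs {
			p.SendMessage(msg)
		}

		// Phase 2: remote announcements are subject to the
		// peer's gossip filter as before.
		p.FilterGossipMsgs(remoteMsgs...)
	}
}
```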
We start with a simple Map function that can be useful for transforming
objects on the fly.
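A minimal generic sketch of such a Map; the exact signature in
chanutils may differ:

```go
package chanutils

// Map returns a new slice containing the result of applying f to each
// element of s.
func Map[T, U any](s []T, f func(T) U) []U {
	out := make([]U, 0, len(s))
	for _, v := range s {
		out = append(out, f(v))
	}

	return out
}
```

For example, `Map([]int{1, 2, 3}, func(i int) int { return i * i })`
yields `[]int{1, 4, 9}`.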
We may want to eventually make this Taro package into a module so we can
just have everything in one place:
https://github.com/lightninglabs/taro/tree/main/chanutils.
Ensure active nodes are synced to the latest block mined.
There are two scenarios where they might not be synced to the correct
block even when SyncedToChain is true:
1. The backend may have rejected a newly mined block (e.g., see
#7241).
2. The backend might not have fully processed the new blocks yet.
In either case, SyncedToChain will (correctly) be true, since the node
is indeed fully synced to the backend. This commit makes sure we detect
case 1 above, while continuing to wait in case 2.
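A hedged sketch of the check, with a hypothetical `node` interface
standing in for the itest harness: being synced to the backend is not
enough, the node's tip must also match the block we mined:

```go
package itest

import (
	"errors"
	"time"
)

// node is a hypothetical view of a running node: SyncedToChain mirrors
// the RPC field, BestBlockHash the node's current tip.
type node interface {
	SyncedToChain() bool
	BestBlockHash() string
}

// waitForBlock polls until the node is synced AND its tip matches the
// block we just mined. A node that reports synced but sits on a
// different hash until the timeout covers case 1 (backend rejected the
// block); a node still processing blocks (case 2) simply keeps us
// waiting until it catches up.
func waitForBlock(n node, wantHash string, timeout time.Duration) error {
	deadline := time.After(timeout)
	ticker := time.NewTicker(100 * time.Millisecond)
	defer ticker.Stop()

	for {
		select {
		case <-ticker.C:
			if n.SyncedToChain() &&
				n.BestBlockHash() == wantHash {

				return nil
			}

		case <-deadline:
			return errors.New("node not synced to expected block")
		}
	}
}
```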
We deprecate `QueryProbability`, as it displays the same information as
`QueryMissionControl`, less the probability. `QueryRoutes` still contains
the total probability of a route.
We multiply the apriori probability with a factor to take capacity into
account:
P *= 1 - 1 / [1 + exp(-(amount - cutoff)/smearing)]
The factor takes values between 1 (small amounts) and 0 (high amounts).
The zero limit may not be reached exactly, depending on the smearing and
cutoff combination. The function is a logistic function mirrored about
the y-axis. The cutoff determines the amount at which a significant
reduction in probability takes place, and the smearing parameter defines
how smooth the transition from 1 to 0 is. Both the cutoff and smearing
parameters are defined in terms of fixed fractions of the capacity.
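A sketch of the factor in Go; the concrete cutoff and smearing
fractions below are placeholder values, not necessarily the defaults:

```go
package routing

import "math"

// capacityFactor implements the logistic factor above:
//
//	1 - 1/(1 + exp(-(amt-cutoff)/smearing))
//
// It is ~1 for amounts well below the cutoff and approaches 0 as the
// amount nears the capacity.
func capacityFactor(amt, capacity float64) float64 {
	// Hypothetical fixed fractions of the capacity; the real values
	// are tuning parameters.
	const (
		cutoffFraction   = 0.75
		smearingFraction = 0.1
	)

	cutoff := cutoffFraction * capacity
	smearing := smearingFraction * capacity

	return 1 - 1/(1+math.Exp(-(amt-cutoff)/smearing))
}
```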
Extends the pathfinder with a capacity argument for later usage.
In tests, the inserted testCapacity has no effect, but it will later be
used to estimate reduced probabilities.
This commit adds the maximal capacity between two nodes to the unified
edge data. We use MaxHTLC as a replacement if the channel capacity is
not available.
In tests we use larger maxHTLC values so that they convert to a
non-zero capacity in satoshis.
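A sketch of the fallback, assuming a millisatoshi max HTLC as in
lnwire; the helper name is illustrative:

```go
package routing

import (
	"github.com/btcsuite/btcd/btcutil"
	"github.com/lightningnetwork/lnd/lnwire"
)

// edgeCapacity returns the channel capacity, falling back to the
// policy's max HTLC (converted from millisatoshis) when the capacity
// is not available.
func edgeCapacity(capacity btcutil.Amount,
	maxHTLC lnwire.MilliSatoshi) btcutil.Amount {

	if capacity != 0 {
		return capacity
	}

	// A max HTLC below 1000 msat would convert to zero sats, which
	// is why tests use larger values.
	return maxHTLC.ToSatoshis()
}
```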
This commit refactors the semantics of unified policies to unified
edges. The main changes are the following renamings:
* unifiedPolicies -> nodeEdgeUnifier
* unifiedPolicy -> edgeUnifier
* unifiedPolicyEdge -> unifiedEdge
Comments and shortened variable names are changed to reflect the new
semantics.
We encapsulate the capacity inside a unifiedPolicyEdge for later usage.
The meaning of "policy" has changed now, which will be refactored in the
next commit.
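A hedged sketch of the resulting shape; the policy type here is a
placeholder and the field layout an assumption:

```go
package routing

import "github.com/btcsuite/btcd/btcutil"

// routingPolicy is a placeholder for the real channel routing policy
// type, for illustration only.
type routingPolicy struct {
	FeeBaseMSat   int64
	TimeLockDelta uint16
}

// unifiedPolicyEdge carries the capacity alongside the policy so that
// later commits can feed it into probability estimation.
type unifiedPolicyEdge struct {
	policy   *routingPolicy
	capacity btcutil.Amount
}
```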