Add a test to demonstrate that if a NodeAnnouncement includes an address
with an unknown type, then we incorrectly return an error. This will be
fixed in the following commit.
This commit was previously split into the following parts to ease
review:
- 2d746f68: replace imports
- 4008f0fd: use ecdsa.Signature
- 849e33d1: remove btcec.S256()
- b8f6ebbd: use v2 library correctly
- fa80bca9: bump go modules
This was not properly enforced and would be a spec violation on the
peer's end. We also re-use a pong buffer to save on heap allocations if
there are a lot of peers. The pong buffer is only read from, so this
is safe for concurrent use.
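A minimal sketch of the shared-buffer idea in Go (names and the buffer size are illustrative, not lnd's exact identifiers):
```
package peersketch

// pongBuf is a single read-only buffer shared by every peer when
// replying to pings. It is never mutated after initialization, so
// slicing it per reply is safe for concurrent use and avoids a heap
// allocation per pong.
var pongBuf = make([]byte, 65531)

// pongPayload returns the first numPongBytes bytes of the shared buffer
// to embed in the outgoing pong.
func pongPayload(numPongBytes uint16) []byte {
	return pongBuf[:numPongBytes]
}
```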
To simplify the message signing API even further, we refactor the
lnwallet.MessageSigner interface to use a key locator instead of the
public key to identify which key to sign with.
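A rough sketch of the interface change; the method signatures are approximations rather than lnd's exact API:
```
package lnwalletsketch

import (
	"github.com/btcsuite/btcd/btcec/v2"
	"github.com/btcsuite/btcd/btcec/v2/ecdsa"
	"github.com/lightningnetwork/lnd/keychain"
)

// Before: the signing key was identified by its public key, forcing the
// wallet to map the key back to a derivation path.
type messageSignerByPubKey interface {
	SignMessage(pubKey *btcec.PublicKey, msg []byte) (*ecdsa.Signature, error)
}

// After: a key locator (family + index) names the key directly.
type messageSignerByLocator interface {
	SignMessage(keyLoc keychain.KeyLocator, msg []byte) (*ecdsa.Signature, error)
}
```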
If these bits are present, then both sides can examine the new
CommitmentType TLV field that's present and use this in place of the
existing implicit commitment type negotiation. With this change, it's now
possible to actually deprecate old unsupported commitment types
properly.
In this commit, we add a new ChannelType field as a new TLV record to
the OpenChannel message. During this change, we make a few tweaks to the
generic TLV encode/decode methods for the ExtraOpaqueData struct to have
it work on the level of tlv.RecordProducer instead of tlv.Record, as
this reduces line noise a bit.
We also partially undo existing logic that would attempt to "prepend"
any new TLV records to the end of the ExtraOpaqueData if one was already
present within the struct. This is based on the assumption that if we've
read a message from disk in order to re-send/transmit it, then the
ExtraOpaqueData is fully populated so we'll write that as is. Otherwise,
a message is being encoded for the first time, and we expect all fields
that are known TLV fields to be specified within the struct itself.
This change required the unit tests to be modified slightly, as we'll
always encode a fresh set of TLV records if none was already specified
within the struct.
In this commit, we add a new TLV record that's intended to be used as an
explicit channel commitment type for a new form of funding negotiation,
and later on a dynamic commitment upgrade protocol. As defined, we have
3 channel types: base (the OG), tweakless, and anchors w/ zero fee
HTLCs. We omit the original variant of anchors as it was never truly
deployed from the PoV of lnd.
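A hedged illustration of the three explicit types as Go constants; in lnd the ChannelType TLV record is expressed as a feature-bit vector rather than a plain enum, so the names and values below are purely illustrative:
```
package lnwiresketch

// ChannelType is carried as a TLV record in the funding messages so that
// the commitment format is negotiated explicitly rather than implied by
// feature bits.
type ChannelType uint8

const (
	// ChannelTypeBase is the original ("OG") commitment format.
	ChannelTypeBase ChannelType = iota

	// ChannelTypeTweakless uses static (untweaked) remote keys.
	ChannelTypeTweakless

	// ChannelTypeAnchorsZeroFeeHTLC uses anchor outputs with zero-fee
	// HTLC transactions.
	ChannelTypeAnchorsZeroFeeHTLC
)
```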
This commit refactors the remaining usage of WriteElements. By
replacing the interface types with concrete types for the params used in
the methods, most of the encoding of the messages now takes zero heap
allocations.
This commit changes the WriteElement and WriteElements methods to take a
write buffer instead of io.Writer. The corresponding Encode methods are
changed to use the write buffer.
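A simplified sketch of the pattern: taking a *bytes.Buffer and concrete integer types lets the compiler skip the interface boxing that an io.Writer-based WriteElement incurs (the helper name is illustrative):
```
package lnwiresketch

import (
	"bytes"
	"encoding/binary"
)

// writeUint16 appends a big-endian uint16 to the write buffer. The small
// scratch array lives on the stack, so encoding the field costs no heap
// allocations.
func writeUint16(buf *bytes.Buffer, n uint16) error {
	var b [2]byte
	binary.BigEndian.PutUint16(b[:], n)
	_, err := buf.Write(b[:])
	return err
}
```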
This commit changes the method WriteMessage to use bytes.Buffer to save
heap allocations. A unit test is added to check the method is
implemented as expected.
Removes the MaxPayloadLength function from the Message interface
and checks that each message payload is not greater than MaxMsgBody.
Since all messages are now allowed to be 65535 bytes in size, the
MaxPayloadLength is no longer needed.
In this commit, we convert the delivery address in the open and accept
channel methods to be a TLV type. This works as an "empty" delivery
address is encoded using two zero bytes (a uint16 length of zero), and a
TLV record of type 0 with zero length is encoded in the same manner (one
byte for the type, one byte for the zero length). This change allows us
to easily extend these messages in
the future, in a uniform manner.
When decoding the message we snip the bytes from the read TLV data.
Similarly, when encoding we concatenate the TLV record for the shutdown
script with the rest of the TLV data.
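A tiny sketch of the encoding equivalence described above, which is what makes the conversion backwards compatible:
```
package lnwiresketch

import "bytes"

// An "empty" delivery address (a uint16 length of zero) and a zero-length
// TLV record of type 0 serialize to the same two zero bytes, so older and
// newer encodings of the field are indistinguishable on the wire.
var (
	emptyAddrLegacy = []byte{0x00, 0x00} // uint16 length = 0
	emptyAddrAsTLV  = []byte{0x00, 0x00} // TLV type 0, length 0
)

func encodingsMatch() bool {
	return bytes.Equal(emptyAddrLegacy, emptyAddrAsTLV)
}
```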
Messages:
- UpdateFulfillHTLC
- UpdateFee
- UpdateFailMalformedHTLC
- UpdateFailHTLC
- UpdateAddHTLC
- Shutdown
- RevokeAndAck
- ReplyShortChanIDsEnd
- ReplyChannelRange
- QueryShortChanIDs
- QueryChannelRange
- NodeAnnouncement
- Init
- GossipTimestampRange
- FundingSigned
- FundingLocked
- FundingCreated
- CommitSig
- ClosingSigned
- ChannelUpdate
- ChannelReestablish
- ChannelAnnouncement
- AnnounceSignatures
lnwire: update quickcheck tests, use constant for Error
multi: update unit tests to pass deep equal assertions with messages
In this commit, we update a series of unit tests in the code base to now
pass due to the new wire message encode/decode logic. In many instances,
we'll now manually set the extra bytes to an empty byte slice to avoid
comparisons that fail due to one message having an empty byte slice and
the other having a nil slice.
In this commit, we create a new `ExtraOpaqueData` based on the field
with the same name that's present in all the announcement related
messages. In later commits, we'll embed this new type in each message,
so we'll have a generic way to add/parse TLV extensions from messages.
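A minimal sketch of such a type, assuming a pared-down method set (lnd's version adds TLV pack/extract helpers on top of this):
```
package lnwiresketch

import (
	"bytes"
	"io"
)

// ExtraOpaqueData holds any unparsed bytes found at the end of a wire
// message. Keeping them intact lets us re-serialize the message, and
// verify its signatures, even when it carries TLV fields we don't know.
type ExtraOpaqueData []byte

// Encode writes the raw opaque bytes back out exactly as they were read.
func (e ExtraOpaqueData) Encode(w io.Writer) error {
	_, err := w.Write(e)
	return err
}

// Decode reads whatever remains on the wire into the opaque blob.
func (e *ExtraOpaqueData) Decode(r io.Reader) error {
	var buf bytes.Buffer
	if _, err := buf.ReadFrom(r); err != nil {
		return err
	}
	*e = buf.Bytes()
	return nil
}
```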
In order to prep for allowing TLV extensions for the `ReplyChannelRange`
and `QueryChannelRange` messages, we'll need to remove the struct
embedding as is. If we don't remove this, then we'll attempt to decode
TLV extensions from both the embedded and outer struct.
All relevant call sites have been updated to reflect this minor change.
In this commit, we add a new RequiresFeature method to the feature
vector struct. This method allows us to check if the set of features
we're examining *require* that the even portion of a bit pair be set.
This can be used to check if new behavior should be allowed (after we
flip new bits to be required) for existing contexts.
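A hedged sketch of the check, using a simplified stand-in for the feature vector; under the even/odd convention the even bit of a pair is the required one:
```
package featuresketch

// featureVector is a simplified stand-in for lnwire's feature vector.
type featureVector struct {
	bits map[uint16]struct{}
}

func (fv *featureVector) isSet(bit uint16) bool {
	_, ok := fv.bits[bit]
	return ok
}

// RequiresFeature reports whether the vector *requires* the feature by
// normalizing the queried bit to the even member of its pair and checking
// whether that bit is set.
func (fv *featureVector) RequiresFeature(bit uint16) bool {
	return fv.isSet(bit &^ 1)
}
```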
This change was largely motivated by a sharp increase in disk usage as a
result of channel update spam. With an in-memory graph, this would've
gone mostly undetected except for the increased bandwidth usage, which
this doesn't aim to solve yet. To minimize the impact on disk, we begin
to rate limit channel updates in two ways. Keep-alive updates, those
which only bump their timestamps to signal liveness, are now limited to
one per lnd's rebroadcast interval (current default of 24h).
Non-keep-alive updates are now limited to one per block per direction.
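A rough illustration of the two limits; the function and parameter names are hypothetical:
```
package gossipsketch

import "time"

// rebroadcastInterval mirrors lnd's default keep-alive cadence of 24h.
const rebroadcastInterval = 24 * time.Hour

// shouldRateLimit reports whether a fresh channel update should be
// dropped: keep-alive updates (timestamp-only bumps) are allowed once per
// rebroadcast interval, while all other updates are allowed once per
// block per direction.
func shouldRateLimit(isKeepAlive bool, lastUpdate time.Time,
	lastUpdateHeight, bestHeight uint32) bool {

	if isKeepAlive {
		return time.Since(lastUpdate) < rebroadcastInterval
	}
	return lastUpdateHeight == bestHeight
}
```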
This fixes a decoding error when the list of short channel ids within a
QueryShortChanIDs message started with a zero sid.
BOLT-0007 specifies that lists of short channel ids should be sorted in
ascending order. Previously, this was checked within lnwire by comparing
two consecutive sids in the list, starting at the empty (zero) sid.
This meant that a list that started with a zero sid couldn't be decoded
since the first element would _not_ be greater than the last one
(namely: also zero).
Given that one can only check for ordering starting at the second
element, we add a check to ensure the proper behavior.
A unit test is also added to ensure no future regressions on this
behavior.
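A small sketch of the corrected check, comparing each element to its predecessor starting from the second entry so a leading zero sid no longer fails validation:
```
package lnwiresketch

import "fmt"

// verifySortedSIDs checks that short channel IDs arrive in ascending
// order. Starting the comparison at index 1 means a list whose first
// element is zero is still accepted.
func verifySortedSIDs(sids []uint64) error {
	for i := 1; i < len(sids); i++ {
		if sids[i] <= sids[i-1] {
			return fmt.Errorf("short channel IDs not ascending "+
				"at index %d", i)
		}
	}
	return nil
}
```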
Before this commit, both writing and reading an encoded empty set of
short channel IDs from the wire would fail: we treated decoding an
empty set as a caller error, and failed to write out the zlib encoding
of an empty set in a way that we and other implementations were able to
read.
To fix this, rather than giving zlib an empty buffer to write out (which
results in an encoding with the zlib header data and the rest), we just
write a blank slice. When decoding, if we have an empty query body, then
we'll return a `nil` slice.
With the above changes, we'll now always write out an empty short
channel ID set as:
```
0001 (1 byte follows) || <encoding_type>
```
A new test has also been added to exercise this case for both known
encoding types.
The number and the name will be separate at the RPC level, so we remove
the feature bit from the string. Currently this method is unused apart
from a few rare logging instances.
This commit removes an unnecessarily large 32 byte buffer in favor of
a small 2 byte buffer and cleans up type conversion between uint16
and uint32 values.
This commit adds the feature bit and additional fields
required in `open_channel` and `accept_channel` wire
messages for `option_upfront_shutdown_script`.
This commit introduces a feature.Manager, which derives feature vectors
for various contexts within the daemon. The sets can be described via a
statically compiled format, with any runtime adjustments to the feature
sets applied when the manager is initialized.
In this commit we change path finding to no longer consider all channels
between a pair of nodes individually. We assume that nodes forward
non-strictly, and when we attempt a connection between two nodes, we
don't want to try multiple channels because their policies may not be
identical. Having distinct policies for channels to the same peer is
against the recommendation in the spec, but it happens in the wild,
especially since we recently changed the default cltv delta value.
What this commit introduces is a unified policy. This can be looked upon
as the greatest common denominator of all policies and should maximize
the probability of getting the payment forwarded.
Extends the invalid payment details failure with the new accept height
field. This allows the sender to distinguish between a genuine invalid
details situation and a delay caused by intermediate nodes.
Align naming better with the lightning spec. The full name of the
failure (FailIncorrectOrUnknownPaymentDetails) is not used, because it
would cause too many long lines in the code.
Methods on failure message types used to be defined on value receivers.
This allowed assignment of a failure message to ForwardingError both as
a value and as a pointer. This is error-prone, especially when using a
type switch.
In this commit the failure message methods are changed so that they
target pointer receivers.
Two instances where a value was assigned instead of a reference are
fixed.
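A condensed illustration of the hazard, with hypothetical type names: with value receivers both the value and pointer forms satisfy the interface, so a type switch written against one form silently misses the other; pointer receivers leave only one assignable form.
```
package failuresketch

import "fmt"

type failureMessage interface {
	Code() uint16
}

type failTemporaryFailure struct{}

// With a pointer receiver, only *failTemporaryFailure implements the
// interface, so every case in a type switch can safely match the pointer
// form.
func (f *failTemporaryFailure) Code() uint16 { return 0x1007 }

func describe(f failureMessage) string {
	switch f.(type) {
	case *failTemporaryFailure:
		return "temporary failure"
	default:
		return fmt.Sprintf("unknown failure code %d", f.Code())
	}
}
```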
In this commit, we modify the decoding of the FailUnknownPaymentHash
message to ensure we're able to fully decode the legacy serialization of
the onion error. We do this by catching the `io.EOF` error as it's
returned when _no_ bytes are read. If this is the case, then only the
error type was serialized and not also the optional amount.
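A sketch of the decoding pattern, assuming a simplified layout in which an optional amount follows the failure code:
```
package failuresketch

import (
	"encoding/binary"
	"io"
)

// decodeOptionalAmount reads the amount that newer senders append after
// the failure code. binary.Read returns io.EOF when no bytes were read at
// all, which identifies the legacy serialization; in that case the amount
// simply stays zero instead of being treated as a decode error.
func decodeOptionalAmount(r io.Reader) (amtMsat uint64, err error) {
	err = binary.Read(r, binary.BigEndian, &amtMsat)
	if err == io.EOF {
		return 0, nil
	}
	return amtMsat, err
}
```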
In this commit, we fix a bug in the way we defined our even/odd features
for a particular feature. The check for whether a feature bit is part of
a pair assumes that the paired bit has the exact same name as the bit
being queried. The way we defined our feature map didn't take this
assumption into account; as a result, any attempt to require a new bit
moving from optional to required would fail since the bit would be
found, but the names differed.
In this commit, we deprecate the `IncorrectHtlcAmount` onion error.
We'll still decode this error to use when retrying paths, but we'll no
longer send this ourselves. The `UnknownPaymentHash` error has been
amended to also include the value of the payment as well. This allows us
to worry about one less error.
In this commit, we add a field to the ChannelUpdate
denoting the maximum HTLC we support sending over
this channel, a field which was recently added to the
spec.
This field serves multiple purposes. In the short
term, it enables nodes to signal the largest HTLC
they're willing to carry, allows light clients who
don't verify channel existence to have some guidance
when routing HTLCs, and finally may allow nodes to
preserve a portion of bandwidth at all times.
In the long term, this field can be used by
implementations of AMP to guide payment splitting,
as it becomes apparent to a node the largest possible
HTLC one can route over a particular channel.
This PR was made possible by the merge of #1825,
which enables older nodes to properly retain and
verify signatures on updates that include new fields
(like this new max HTLC field) that they haven't yet
been updated to recognize.
In addition, the new ChannelUpdate fields are added to
the lnwire fuzzing tests.
Co-authored-by: Johan T. Halseth <johanth@gmail.com>
In this commit, we fix the problem where it's annoying to parse a
bitfield printed out in decimal by writing a String method for the
ChanUpdate[Chan|Msg]Flags bitfield.
Co-authored-by: Johan T. Halseth <johanth@gmail.com>
In this commit:
* we partition lnwire.ChanUpdateFlag into two (ChanUpdateChanFlags and
ChanUpdateMsgFlags), going from a uint16 to a pair of uint8s
* we rename the ChannelUpdate.Flags to ChannelFlags and add an
additional MessageFlags field, which will be used to indicate the
presence of the optional field HtlcMaximumMsat within the ChannelUpdate.
* we partition ChannelEdgePolicy.Flags into message and channel flags.
This change corresponds to the partitioning of the ChannelUpdate's Flags
field into MessageFlags and ChannelFlags.
Co-authored-by: Johan T. Halseth <johanth@gmail.com>
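A hedged sketch of the partitioned types; the bit assignments mirror the spec's message/channel flag layout, but the exact constant set here is illustrative:
```
package lnwiresketch

// ChanUpdateMsgFlags signals which optional fields follow in the
// ChannelUpdate, while ChanUpdateChanFlags describes the channel itself.
// Together they replace the single uint16 Flags field.
type (
	ChanUpdateMsgFlags  uint8
	ChanUpdateChanFlags uint8
)

const (
	// ChanUpdateOptionMaxHtlc marks the presence of HtlcMaximumMsat.
	ChanUpdateOptionMaxHtlc ChanUpdateMsgFlags = 1 << 0

	// ChanUpdateDirection indicates which end of the channel produced
	// the update.
	ChanUpdateDirection ChanUpdateChanFlags = 1 << 0

	// ChanUpdateDisabled flags the channel as disabled.
	ChanUpdateDisabled ChanUpdateChanFlags = 1 << 1
)
```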
In this commit we fix a compatibility issue with other implementations.
Before this commit, when writing out an onion error that includes a
`ChannelUpdate` we would use the `MaxPayloadLength` to get the length to
encode. However, a recent update has modified that to be the max
`brontide` payload length as it's possible to pad out the message with
optional fields we're unaware of. As a result, we would always write out
a length of 65KB or so. This didn't affect our parser as we ignore the
length and decode the channel update directly as we don't need the
length to do that. However, other implementations depended on the length
rather than just reading the channel update, meaning that they weren't
able to decode our onion errors that had channel updates.
In this commit we fix that by introducing a new
`writeOnionErrorChanUpdate` which will write out the precise length
instead of using the max payload size.
Fixes #2450.
In this commit, we ensure that when we read node aliases from the wire,
we ensure that they're valid. Before this commit, we would read the raw
bytes without checking for validity, which could result in us writing
an invalid node alias to disk. We've fixed this, and also updated the
quickcheck tests to generate valid strings.
In this commit, we export the ReadElements and WriteElements functions.
We do this as exporting these functions makes it possible for outside
packages to define serializations which use the BOLT 1.0 wire format.
In this commit we add a check to HtlcSatifiesPolicy to verify that the
time lock for the outgoing htlc that is requested in the onion packet
isn't too far in the future.
Without this check, anyone could force an unreasonably long time lock on
the forwarding node.
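A rough sketch of the added bound, with the constant and names hypothetical:
```
package routingsketch

import "fmt"

// maxOutgoingCltvExpiry caps how far beyond the current height a requested
// outgoing time lock may reach (the concrete value is illustrative).
const maxOutgoingCltvExpiry = 2016

// checkOutgoingTimeLock rejects onion payloads that would force us to
// lock funds for an unreasonably long time.
func checkOutgoingTimeLock(outgoingTimeout, currentHeight uint32) error {
	if outgoingTimeout > currentHeight+maxOutgoingCltvExpiry {
		return fmt.Errorf("outgoing time lock %d too far beyond "+
			"height %d", outgoingTimeout, currentHeight)
	}
	return nil
}
```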
This commit adds the required feature name to our
set of local known features. This will allow other
peers connecting to us to set the required gossip
queries feature bit. This is required for the
subsequent commits, which instruct the server to
set the bit depending on user configured preferences.
In this commit, we add a new field to all the existing gossip messages:
ExtraOpaqueData. We do this as, before this commit, if we came across a
ChannelUpdate message with a set of optional fields, then we wouldn't be
able to properly parse the signatures related to the message. If we
never corrected this behavior, then we would violate the
forward-compatibility principle we use when parsing existing messages.
As these messages can now be padded out to the max message size, we've
increased the MaxPayloadLength value for all of these messages.
Fixes #1814.
In this commit, we add a compatibility mode for older versions of
c-lightning to ensure that we're able to properly parse all their
channel updates. An older version of c-lightning would send out
encapsulated onion error messages with an additional type byte. This
would throw off our parsing as we didn't expect the type byte, so we
were always 2 bytes off. In order to ensure that we're able to parse
these messages and make adjustments to our path finding, we'll first
check to see if the type byte is there; if so, then we'll snip off two
bytes from the front and continue with parsing. If the byte isn't found,
then we can proceed as normal and parse the request.
In this commit, we alter the behavior of the regular
short channel id encoding, such that it returns a nil
slice if the decoded number of elements is 0. This is
done so that it matches the behavior of the zlib
decompression, allowing us to test both in using the
same corpus.
Modifies the behavior of the quick test for
MsgQueryShortChanIDs, such that the generated
slice of expected short chan ids is always nil
if no elements are returned. This mimics the
behavior of the zlib decompression, where
elements are appended to the slice instead of
being assigned to a preallocated slice.
In this commit, we add a new package level mutex. Each time we decode a
new set of chan IDs w/ zlib, we also grab this mutex. The purpose here
is to ensure that we only EVER allocate the maxZlibBufSize globally
across all peers. Otherwise, it may be possible for us to allocate up to
64 MB for _each_ peer, exposing an easy OOM attack vector.
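A minimal sketch of the guard, with hypothetical names:
```
package lnwiresketch

import "sync"

// zlibDecodeMtx serializes zlib decoding across all peers so that at most
// one maxZlibBufSize buffer is ever outstanding at a time, rather than
// one per peer.
var zlibDecodeMtx sync.Mutex

// decodeZlibChanIDs runs the supplied decode step while holding the
// package-level mutex.
func decodeZlibChanIDs(decode func() error) error {
	zlibDecodeMtx.Lock()
	defer zlibDecodeMtx.Unlock()
	return decode()
}
```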
In this commit, we implement zlib encoding and decoding for the channel
range queries. Notably, we utilize an io.LimitedReader to ensure that we
can enforce a hard cap on the total number of bytes we'll ever allocate
in a decoding attempt.
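A sketch of bounding the decompressed size with an io.LimitedReader; the buffer cap and names are illustrative:
```
package lnwiresketch

import (
	"bytes"
	"compress/zlib"
	"fmt"
	"io"
)

// maxZlibBufSize caps the number of decompressed bytes we'll ever buffer
// for a single query.
const maxZlibBufSize = 4 << 20

// decompressChanIDs inflates the payload but refuses to allocate more
// than maxZlibBufSize bytes, defusing decompression-bomb attacks.
func decompressChanIDs(compressed []byte) ([]byte, error) {
	zr, err := zlib.NewReader(bytes.NewReader(compressed))
	if err != nil {
		return nil, err
	}
	defer zr.Close()

	// Read one extra byte so we can tell "exactly at the cap" apart
	// from "over the cap".
	limited := io.LimitReader(zr, maxZlibBufSize+1)
	out, err := io.ReadAll(limited)
	if err != nil {
		return nil, err
	}
	if len(out) > maxZlibBufSize {
		return nil, fmt.Errorf("decompressed payload exceeds %d bytes",
			maxZlibBufSize)
	}
	return out, nil
}
```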
In this commit, we fix a slight bug in the parsing of encoded short
channel ID's. Before this commit, we would always assume that the remote
peer was sending us the sorted+encoded variant of the short channel
ID's. In the case that they weren't (as there isn't yet a feature bit),
we would assert this check and fail early, as at the moment we don't
support any sort of compression.
In this commit, we add recognition of the data loss protected feature
bit. We already implement the full feature set, but never added the
bit to our set of known features.