Mirror of https://github.com/lightning/bolts.git, synced 2025-03-10 09:10:07 +01:00
Complete the Fundamental Types. (#778)
* Rename all the 'varint' to 'bigsize'.

  Having both is confusing; we chose the name bigsize, so use it explicitly.

  Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>

* BOLT 7: use `byte` instead of `u8`.

  `u8` isn't a type; see BOLT #1 "Fundamental Types".

  Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>

* BOLT 1: promote bigsize to a Fundamental Type.

  Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
This commit is contained in:
parent 5322c2b8ce
commit 9e8e29af9b
4 changed files with 27 additions and 34 deletions
@@ -355,7 +355,6 @@ verifier
verifiers
EOF
monotonicity
varint
optimizations
structs
CompactSize
@@ -367,8 +366,6 @@ namespaces
tlvs
fips
rfc
varint
CompactSize
multipath
mpp
tlvs
@@ -110,20 +110,15 @@ the backwards-compatible addition of new fields to existing message types.
 
 A `tlv_record` represents a single field, encoded in the form:
 
-* [`varint`: `type`]
-* [`varint`: `length`]
+* [`bigsize`: `type`]
+* [`bigsize`: `length`]
 * [`length`: `value`]
 
-A `varint` is a variable-length, unsigned integer encoding using the
-[BigSize](#appendix-a-bigsize-test-vectors) format, which resembles the bitcoin
-CompactSize encoding but uses big-endian for multi-byte values rather than
-little-endian.
-
 A `tlv_stream` is a series of (possibly zero) `tlv_record`s, represented as the
 concatenation of the encoded `tlv_record`s. When used to extend existing
 messages, a `tlv_stream` is typically placed after all currently defined fields.
 
-The `type` is a varint encoded using the BigSize format. It functions as a
+The `type` is encoded using the BigSize format. It functions as a
 message-specific, 64-bit identifier for the `tlv_record` determining how the
 contents of `value` should be decoded. `type` identifiers below 2^16 are
 reserved for use in this specification. `type` identifiers greater than or equal
@@ -131,7 +126,7 @@ to 2^16 are available for custom records. Any record not defined in this
 specification is considered a custom record. This includes experimental and
 application-specific messages.
 
-The `length` is a varint encoded using the BigSize format signaling the size of
+The `length` is encoded using the BigSize format signaling the size of
 `value` in bytes.
 
 The `value` depends entirely on the `type`, and should be encoded or decoded
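For readers new to the format: BigSize follows Bitcoin's CompactSize layout but writes multi-byte values big-endian, and decoders must reject non-minimal encodings (see the appendix test vectors further down). The following is only a sketch of such an encoder and decoder, assuming the rules stated in this BOLT and using the standard library's `encoding/binary`, `io`, and `errors` packages; the function names and error text are illustrative and are not the `tlv` package referenced by the appendix tests.

```golang
// errNotCanonical mirrors the "not canonical" failures in the appendix
// test vectors: the value was encoded in more bytes than necessary.
var errNotCanonical = errors.New("decoded bigsize is not canonical")

// writeBigSize encodes v in the BigSize format: CompactSize-style
// discriminator bytes, but multi-byte values are big-endian.
func writeBigSize(w io.Writer, v uint64) error {
    var buf [9]byte
    var n int
    switch {
    case v < 0xfd:
        buf[0] = byte(v)
        n = 1
    case v <= 0xffff:
        buf[0] = 0xfd
        binary.BigEndian.PutUint16(buf[1:3], uint16(v))
        n = 3
    case v <= 0xffffffff:
        buf[0] = 0xfe
        binary.BigEndian.PutUint32(buf[1:5], uint32(v))
        n = 5
    default:
        buf[0] = 0xff
        binary.BigEndian.PutUint64(buf[1:9], v)
        n = 9
    }
    _, err := w.Write(buf[:n])
    return err
}

// readBigSize decodes a BigSize value and rejects non-minimal encodings.
func readBigSize(r io.Reader) (uint64, error) {
    var b [8]byte
    if _, err := io.ReadFull(r, b[:1]); err != nil {
        return 0, err
    }
    switch b[0] {
    case 0xfd:
        if _, err := io.ReadFull(r, b[:2]); err != nil {
            return 0, err
        }
        v := uint64(binary.BigEndian.Uint16(b[:2]))
        if v < 0xfd {
            return 0, errNotCanonical
        }
        return v, nil
    case 0xfe:
        if _, err := io.ReadFull(r, b[:4]); err != nil {
            return 0, err
        }
        v := uint64(binary.BigEndian.Uint32(b[:4]))
        if v <= 0xffff {
            return 0, errNotCanonical
        }
        return v, nil
    case 0xff:
        if _, err := io.ReadFull(r, b[:8]); err != nil {
            return 0, err
        }
        v := binary.BigEndian.Uint64(b[:8])
        if v <= 0xffffffff {
            return 0, errNotCanonical
        }
        return v, nil
    default:
        return uint64(b[0]), nil
    }
}
```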
@@ -192,7 +187,7 @@ things, enables the following optimizations:
 - variable-size fields can reserve their expected size up front, rather than
   appending elements sequentially and incurring double-and-copy overhead.
 
-The use of a varint for `type` and `length` permits a space savings for small
+The use of a bigsize for `type` and `length` permits a space savings for small
 `type`s or short `value`s. This potentially leaves more space for application
 data over the wire or in an onion payload.
 
@@ -236,6 +231,7 @@ The following convenience types are also defined:
 * `signature`: a 64-byte bitcoin Elliptic Curve signature
 * `point`: a 33-byte Elliptic Curve point (compressed encoding as per [SEC 1 standard](http://www.secg.org/sec1-v2.pdf#subsubsection.2.3.3))
 * `short_channel_id`: an 8 byte value identifying a channel (see [BOLT #7](07-routing-gossip.md#definition-of-short-channel-id))
+* `bigsize`: a variable-length, unsigned integer similar to Bitcoin's CompactSize encoding, but big-endian. Described in [BigSize](#appendix-a-bigsize-test-vectors).
 
 ## Setup Messages
 
@@ -474,10 +470,10 @@ decoded with BigSize should be checked to ensure they are minimally encoded.
 
 The following is an example of how to execute the BigSize decoding tests.
 ```golang
-func testReadVarInt(t *testing.T, test varIntTest) {
+func testReadBigSize(t *testing.T, test bigSizeTest) {
     var buf [8]byte
     r := bytes.NewReader(test.Bytes)
-    val, err := tlv.ReadVarInt(r, &buf)
+    val, err := tlv.ReadBigSize(r, &buf)
     if err != nil && err.Error() != test.ExpErr {
         t.Fatalf("expected decoding error: %v, got: %v",
             test.ExpErr, err)
@@ -541,19 +537,19 @@ A correct implementation should pass against these test vectors:
         "name": "two byte not canonical",
         "value": 0,
         "bytes": "fd00fc",
-        "exp_error": "decoded varint is not canonical"
+        "exp_error": "decoded bigsize is not canonical"
     },
     {
         "name": "four byte not canonical",
         "value": 0,
         "bytes": "fe0000ffff",
-        "exp_error": "decoded varint is not canonical"
+        "exp_error": "decoded bigsize is not canonical"
     },
     {
         "name": "eight byte not canonical",
         "value": 0,
         "bytes": "ff00000000ffffffff",
-        "exp_error": "decoded varint is not canonical"
+        "exp_error": "decoded bigsize is not canonical"
     },
     {
         "name": "two byte short read",
@@ -604,14 +600,14 @@ A correct implementation should pass against these test vectors:
 
 The following is an example of how to execute the BigSize encoding tests.
 ```golang
-func testWriteVarInt(t *testing.T, test varIntTest) {
+func testWriteBigSize(t *testing.T, test bigSizeTest) {
     var (
         w   bytes.Buffer
         buf [8]byte
     )
-    err := tlv.WriteVarInt(&w, test.Value, &buf)
+    err := tlv.WriteBigSize(&w, test.Value, &buf)
     if err != nil {
-        t.Fatalf("unable to encode %d as varint: %v",
+        t.Fatalf("unable to encode %d as bigsize: %v",
             test.Value, err)
     }
 
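The appendix snippets above consume one `bigSizeTest` value per JSON vector. The spec does not define the surrounding harness, so the following is only a sketch of how the vectors might be loaded and driven (using `encoding/hex`, `encoding/json`, and `testing`); the struct fields, JSON tags, and the `decodingVectorsJSON` variable are assumptions, not part of the BOLT.

```golang
// rawVector matches the JSON layout of the appendix vectors; the field
// tags are an assumption based on the keys shown in the vectors.
type rawVector struct {
    Name   string `json:"name"`
    Value  uint64 `json:"value"`
    Bytes  string `json:"bytes"`
    ExpErr string `json:"exp_error"`
}

// bigSizeTest is the decoded form consumed by the appendix snippets
// (testReadBigSize / testWriteBigSize); field names are illustrative.
type bigSizeTest struct {
    Name   string
    Value  uint64
    Bytes  []byte
    ExpErr string
}

// decodingVectorsJSON embeds two of the appendix decoding vectors purely
// for illustration; a real harness would load the full set.
var decodingVectorsJSON = []byte(`[
 {"name": "two byte not canonical", "value": 0, "bytes": "fd00fc",
  "exp_error": "decoded bigsize is not canonical"},
 {"name": "four byte not canonical", "value": 0, "bytes": "fe0000ffff",
  "exp_error": "decoded bigsize is not canonical"}
]`)

// TestBigSizeDecodeVectors hex-decodes each vector and drives the
// decoding snippet shown above over it.
func TestBigSizeDecodeVectors(t *testing.T) {
    var raw []rawVector
    if err := json.Unmarshal(decodingVectorsJSON, &raw); err != nil {
        t.Fatalf("unable to parse test vectors: %v", err)
    }
    for _, rv := range raw {
        b, err := hex.DecodeString(rv.Bytes)
        if err != nil {
            t.Fatalf("invalid hex in vector %q: %v", rv.Name, err)
        }
        test := bigSizeTest{Name: rv.Name, Value: rv.Value, Bytes: b, ExpErr: rv.ExpErr}
        t.Run(test.Name, func(t *testing.T) {
            testReadBigSize(t, test)
        })
    }
}
```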
@@ -95,7 +95,7 @@ There are a number of conventions adhered to throughout this document:
 - Each hop in the route has a variable length `hop_payload`, or a fixed-size
   legacy `hop_data` payload.
 - The legacy `hop_data` is identified by a single `0x00`-byte prefix
-- The variable length `hop_payload` is prefixed with a `varint` encoding
+- The variable length `hop_payload` is prefixed with a `bigsize` encoding
   the length in bytes, excluding the prefix and the trailing HMAC.
 
 # Key Generation
@@ -163,7 +163,7 @@ It is 1300 bytes long and has the following structure:
 
 1. type: `hop_payloads`
 2. data:
-   * [`varint`:`length`]
+   * [`bigsize`:`length`]
    * [`hop_payload_length`:`hop_payload`]
    * [`32*byte`:`hmac`]
    * ...
@@ -523,10 +523,10 @@ For each hop in the route, in reverse order, the sender applies the
 following operations:
 
 - The _rho_-key and _mu_-key are generated using the hop's shared secret.
-- `shift_size` is defined as the length of the `hop_payload` plus the varint encoding of the length and the length of that HMAC. Thus if the payload length is `l` then the `shift_size` is `1 + l + 32` for `l < 253`, otherwise `3 + l + 32` due to the varint encoding of `l`.
+- `shift_size` is defined as the length of the `hop_payload` plus the bigsize encoding of the length and the length of that HMAC. Thus if the payload length is `l` then the `shift_size` is `1 + l + 32` for `l < 253`, otherwise `3 + l + 32` due to the bigsize encoding of `l`.
 - The `hop_payload` field is right-shifted by `shift_size` bytes, discarding the last `shift_size`
   bytes that exceed its 1300-byte size.
-- The varint-serialized length, serialized `hop_payload` and `hmac` are copied into the following `shift_size` bytes.
+- The bigsize-serialized length, serialized `hop_payload` and `hmac` are copied into the following `shift_size` bytes.
 - The _rho_-key is used to generate 1300 bytes of pseudo-random byte stream
   which is then applied, with `XOR`, to the `hop_payloads` field.
 - If this is the last hop, i.e. the first iteration, then the tail of the
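Since a `hop_payload` always fits within the 1300-byte `hop_payloads` field, the bigsize length prefix is either 1 byte (`l < 253`) or 3 bytes (`0xfd` plus a big-endian `u16`), which is exactly the rule stated in the hunk above. A small illustrative helper (the name is not from the spec):

```golang
// shiftSize returns the number of bytes one hop's entry occupies in
// hop_payloads: the bigsize-encoded length prefix, the payload itself,
// and the trailing 32-byte HMAC.
func shiftSize(payloadLen int) int {
    prefix := 1 // bigsize(l) is a single byte for l < 253
    if payloadLen >= 253 {
        prefix = 3 // 0xfd marker plus big-endian u16; payloads never exceed 1300 bytes
    }
    return prefix + payloadLen + 32
}
```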
@@ -662,7 +662,7 @@ The routing information is then deobfuscated, and the information about the
 next hop is extracted.
 To do so, the processing node copies the `hop_payloads` field, appends 1300 `0x00`-bytes,
 generates `2*1300` pseudo-random bytes (using the _rho_-key), and applies the result, using `XOR`, to the copy of the `hop_payloads`.
-The first few bytes correspond to the varint-encoded length `l` of the `hop_payload`, followed by `l` bytes of the resulting routing information become the `hop_payload`, and the 32 byte HMAC.
+The first few bytes correspond to the bigsize-encoded length `l` of the `hop_payload`, followed by `l` bytes of the resulting routing information become the `hop_payload`, and the 32 byte HMAC.
 The next 1300 bytes are the `hop_payloads` for the outgoing packet.
 
 A special `hmac` value of 32 `0x00`-bytes indicates that the currently processing hop is the intended recipient and that the packet should not be forwarded.
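A sketch of the parsing step just described, operating on the de-obfuscated `2*1300`-byte buffer; `readBigSize` stands in for a decoder like the one sketched earlier, and the helper name and error handling are illustrative rather than spec-defined:

```golang
// parseHopPayload splits the de-obfuscated hop_payloads copy (1300 bytes
// plus the appended 1300 zero bytes, already XORed with the rho-key
// stream) into this hop's payload, its HMAC, and the 1300-byte
// hop_payloads of the outgoing packet.
func parseHopPayload(deobfuscated []byte) (payload, hmac, nextHopPayloads []byte, err error) {
    r := bytes.NewReader(deobfuscated)
    l, err := readBigSize(r)
    if err != nil {
        return nil, nil, nil, err
    }
    prefixLen := len(deobfuscated) - r.Len() // bytes consumed by the bigsize length

    end := prefixLen + int(l) + 32
    if end+1300 > len(deobfuscated) {
        return nil, nil, nil, errors.New("hop payload overruns packet")
    }
    payload = deobfuscated[prefixLen : prefixLen+int(l)]
    hmac = deobfuscated[prefixLen+int(l) : end]
    nextHopPayloads = deobfuscated[end : end+1300]
    return payload, hmac, nextHopPayloads, nil
}
```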
@@ -1000,7 +1000,7 @@ The CLTV expiry in the HTLC is too far in the future.
 
 1. type: PERM|22 (`invalid_onion_payload`)
 2. data:
-   * [`varint`:`type`]
+   * [`bigsize`:`type`]
   * [`u16`:`offset`]
 
 The decrypted onion per-hop payload was not understood by the processing node
@@ -602,10 +602,10 @@ Nodes can signal that they support extended gossip queries with the `gossip_quer
 2. types:
     1. type: 1 (`query_flags`)
     2. data:
-        * [`u8`:`encoding_type`]
+        * [`byte`:`encoding_type`]
         * [`...*byte`:`encoded_query_flags`]
 
-`encoded_query_flags` is an array of bitfields, one varint per bitfield, one bitfield for each `short_channel_id`. Bits have the following meaning:
+`encoded_query_flags` is an array of bitfields, one bigsize per bitfield, one bitfield for each `short_channel_id`. Bits have the following meaning:
 
 | Bit Position  | Meaning                                  |
 | ------------- | ---------------------------------------- |
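As the hunk above states, `encoded_query_flags` is simply a concatenation of minimally-encoded bigsize bitfields, one per queried `short_channel_id`. The following sketch splits them, assuming the `encoding_type` byte has already been handled (and any compression undone) and reusing a bigsize decoder like the one sketched earlier; the helper name and the trailing-bytes check are illustrative, not spec requirements:

```golang
// parseQueryFlags splits encoded_query_flags into one bigsize bitfield
// per short_channel_id in the accompanying query.
func parseQueryFlags(encoded []byte, numSCIDs int) ([]uint64, error) {
    r := bytes.NewReader(encoded)
    flags := make([]uint64, 0, numSCIDs)
    for i := 0; i < numSCIDs; i++ {
        f, err := readBigSize(r) // each flag must be minimally encoded
        if err != nil {
            return nil, err
        }
        flags = append(flags, f)
    }
    if r.Len() != 0 {
        return nil, errors.New("trailing bytes after query flags")
    }
    return flags, nil
}
```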
@@ -642,7 +642,7 @@ The sender:
 - SHOULD NOT send this if the channel referred to is not an unspent output.
 - MAY include an optional `query_flags`. If so:
   - MUST set `encoding_type`, as for `encoded_short_ids`.
-  - Each query flag is a minimally-encoded varint.
+  - Each query flag is a minimally-encoded bigsize.
   - MUST encode one query flag per `short_channel_id`.
 
 The receiver:
@@ -663,7 +663,7 @@ The receiver:
     - MUST follow with any `node_announcement`s for each `channel_announcement`
  - otherwise:
    - We define `query_flag` for the Nth `short_channel_id` in
-      `encoded_short_ids` to be the Nth varint of the decoded
+      `encoded_short_ids` to be the Nth bigsize of the decoded
      `encoded_query_flags`.
    - if bit 0 of `query_flag` is set:
      - MUST reply with a `channel_announcement`
@@ -707,9 +707,9 @@ timeouts. It also causes a natural ratelimiting of queries.
 2. types:
     1. type: 1 (`query_option`)
     2. data:
-        * [`varint`:`query_option_flags`]
+        * [`bigsize`:`query_option_flags`]
 
-`query_option_flags` is a bitfield represented as a minimally-encoded varint. Bits have the following meaning:
+`query_option_flags` is a bitfield represented as a minimally-encoded bigsize. Bits have the following meaning:
 
 | Bit Position  | Meaning                 |
 | ------------- | ----------------------- |
@@ -732,7 +732,7 @@ Though it is possible, it would not be very useful to ask for checksums without
 2. types:
     1. type: 1 (`timestamps_tlv`)
     2. data:
-        * [`u8`:`encoding_type`]
+        * [`byte`:`encoding_type`]
         * [`...*byte`:`encoded_timestamps`]
     1. type: 3 (`checksums_tlv`)
     2. data: