Meeting Notes 2022-10-03

Releases

  • 0.0.111
    • Elias is fixing a compilation error here; waiting on some CI coverage, but it's close
    • Matt: question: should we do a faster 0.0.112 if someone's trying to use the “futures” feature? Because it's not working right now, specifically for the Rust async stuff
    • The actual Future object can be used without the Rust async stuff, it just does a callback, so that works fine for bindings (see the callback sketch after this list)
    • Bindings not done; the C bindings should be done, but Matt is still working on the Java bindings
    • The PR for the Java bindings is mostly done; would be nice if someone took a look at it
    • Swift side: the Swift code itself technically works, and running it in Xcode works, but compilation of the bindings is not working. Trying to find a combo of macOS, Xcode, etc. that will allow Arik to compile for Catalyst
    • Jurvis: I can pair with you, Arik
  • 0.0.112 (https://github.com/lightningdevkit/rust-lightning/milestone/29)
    • Maybe we should push this out quickly if someone's waiting on it
    • But if no one's using it… it was kinda added for Sensei and…
    • John Cantrell: not a huge blocker for me (BDK has been blocking for two releases), but I would use it
    • The Lexe client may also want it, but unless they complain…
  • 0.1 (https://github.com/lightningdevkit/rust-lightning/milestone/1)
    • Matt: my thinking is that once we're happy with the stability of all the features we have, we just call it 0.1. The biggest blocker is the async persist stuff, which kinda needs a big chunk rewritten; one PR has landed and a number are left to go
    • Also because we feel pretty comfortable with how stable the library is overall. We certainly have a bunch of people using it in prod, and it seems to work
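
Since it came up above: a minimal sketch of the callback pattern being described, where the same notifier can either wake an async task (behind a cargo feature) or simply fire a registered callback, which is the path language bindings use. The `Notifier` type here is purely illustrative, not LDK's actual API:

```rust
use std::sync::Mutex;

// Illustrative stand-in for a future/notifier that works without an
// async runtime: bindings just register a plain callback.
struct Notifier {
    callbacks: Mutex<Vec<Box<dyn FnOnce() + Send>>>,
}

impl Notifier {
    fn new() -> Self {
        Notifier { callbacks: Mutex::new(Vec::new()) }
    }

    // Bindings-friendly path: no async runtime required.
    fn register_callback(&self, cb: Box<dyn FnOnce() + Send>) {
        self.callbacks.lock().unwrap().push(cb);
    }

    // Fired when there is work to do (e.g. something to persist).
    fn notify(&self) {
        for cb in self.callbacks.lock().unwrap().drain(..) {
            cb();
        }
    }
}

fn main() {
    let notifier = Notifier::new();
    notifier.register_callback(Box::new(|| println!("time to persist")));
    notifier.notify();
}
```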

Roadmap Progress

  • Developer support
    • Conor: nothing crazy. RGS (Rapid Gossip Sync) livestream last week, so if anyone asks about it you can point them to it and the blog post
    • TabConf is next week, with dedicated space on builders day for people who want to contribute to LDK and for LDK users to get in-person support from Spiral devs etc.
    • Matt and Val are speaking/on panels
    • Adding some visuals from Jeff/Arik from btc++
  • Payment protocols
    • Onion messages blog post tomorrow, hopefully
    • Async payments are gonna be a focus soon
    • Things are moving on the offers encoding
    • Mostly a review bottleneck now
    • Val and Jeff to touch base on next steps for offers/payment protocols and dividing up the work
    • Onion message pathfinding is coming along, but it's low priority since v1 always connects directly to the recipient
    • Supporting custom onion messages has a PR open, which tees us up for offers messaging and async payments
  • Language bindings
    • Discussed above
    • May be inspired by BDK's approach using UniFFI, but it's unclear how far we want to go down that path, because we don't get some languages that way
  • Taproot support
    • Arik: not much new on this, except that HTLC sigs are now also working
      • Halfway into last week I had to switch to working on Swift
    • Thankfully taproot is now in a pretty good state
    • Momentum with LND is going well too
    • BOLTs and specs need to move forward; waiting on some responses from Laolu
  • Anchor outputs
    • Wilmer: the PR is up, still waiting on review. I think now that Matt's back it should be getting some eyes soon
    • Ariard: almost good here, IMO
  • LSP
    • John Carvalho: as before, we're making progress on the marketplace API we've been working on
    • Think we'll have a first version, probably after the next meeting, though it will be delayed a few weeks due to a conference
    • A bit of headway from LL (Lightning Labs) being there, so we're trying to resolve whether this is something that would be supported with Pool and such
    • Still need to talk to people about liquidity ads and how that fits in
    • Have meeting notes if anyone wants to dig through them
    • Cdecker tends to be in attendance; zizek attends too but doesn't say much. Would be nice to have Lisa there
    • Also been working on VASP regulations, talking to lawyers about how this may relate to LSPs, because our LSP is deeply integrated into our upcoming wallet, so it's kind of a minefield
    • Tricky because the wording is very broad
    • Steve: I'm chatting with Block about that
  • WSP (Wallet Storage Provider)
    • Gursharan: synced with devrandom on this
    • I think we're thinking of defining the number of items per transaction at the implementation level, so each backend can have its own limits
    • E.g. one wallet provider wants to use a transaction limit of 1000 with a Postgres backend; another might want some other KV database with a transaction limit of 100-500
    • I will sync up more with devrandom about that
    • Matt: that's gonna be really awkward. How much of the goal of the project is the ability to swap out storage providers, and how much is defining some common code that an operator of a wallet vendor can use to store data on behalf of their users? If it's done at the implementation level and not at the standard/API level, then it's only useful as “wallet vendor runs a service to back up data for its users”. If you do it at the API level, you can kinda say “I can connect to any WSP server, or even run my own, or…”
    • G: right now the initial scope is first-party WSP, as in the wallet provider or somebody is providing that storage. Synced with Steve, and the initial goal is first-party support; then maybe we can see what we need to get to third-party support
    • G: right now the limitation is the number of items in a transaction, e.g. if you want to do a transactional write of 1000 items
    • Devrandom: we have a more normalized schema: each payment is a separate row, and when you commit a commitment tx you update all the payments affected by that transaction in the same atomic DB transaction (the two write patterns are sketched after this list)
    • Since theoretically a commitment tx can fulfill 483 or whatever HTLCs and create 483 new ones, a max of 1000-ish payments can be affected
    • So because we have a normalized schema, the number of items we touch in a transaction is up to 1000-ish
    • LDK's a bit different, because you have larger objects like the ChannelMonitor and ChannelManager, which hold all the payments and then get updated all at once
    • So that's why we have that requirement for a larger number of items in a DB transaction
    • But you guys have the requirement to put more into one row/cell of a DB
    • Which may also be a problem on Amazon DynamoDB (item size limits)
    • I think y'all might want to switch to inserting historical payment hashes one transaction at a time instead of storing them in one object in one go, because the ChannelMonitor has unbounded storage in general. There are a bunch of details; not sure we can cover them right now
    • Discussing whether it's possible to have an API that can work with DynamoDB in some use cases (maybe not VLS, or maybe VLS with a specific config) and then have another config that's backed by Postgres and doesn't have any such limitation
    • Matt's point is also that we want to support a storage service that works for a variety of use cases, so you don't have to worry about who you connect to
    • Matt: it's also awkward that it then becomes no longer a standard, just an API, and you can't really swap out the server side; it becomes tightly coupled with the client side. If that's how it has to be, it is what it is, but it seems awkward, and it would be very nice if we could avoid that
    • Matt: sounds like Postgres is the lowest common denominator; that may end up being the thing
    • Ariard: I think you'll care about latency if you want to support routing nodes; with a slow DB you'll be out of the market
    • Matt: if we're talking about routing nodes, you probably shouldn't be talking about remote server storage, so… I don't know if that's in scope here
    • Devrandom: Postgres has a few milliseconds of latency; not sure it'll impact perf that much. You only see high latency when doing hundreds of transactions at once, which is far from the normal case
    • Ariard: anyprevout should solve latency; you'd just have to load-balance channel storage between peers or so. So we may not need to over-optimize right now
    • G: there are issues with SQL or Postgres: one is latency, the other is that scaling is not that predictable. For any SQL backend we will generally have to scale vertically, whereas there are KV stores that support horizontal scaling and guarantee the same performance at 100 users as at 1 million. For SQL that's simply not true; there are strict limits we will hit if we want to go in the direction of third-party storage providers. So I don't know if SQL can support that scale for a large wallet or a third-party storage provider with millions of users
    • Devrandom: I think it can scale horizontally, because it can shard by client (each LN node)
    • Matt: you can automate that with normal Postgres sharding stuff, per key (see the sharding sketch after this list)
    • G: that is operational burden and a manual thing
    • Matt: but I assume if you run a wallet for 100 million users, you can probably stand the operational overhead
    • G: I don't expect a wallet or storage provider to be running a datacenter and managing their own storage, because it is unsafe if you lose the datacenter (funds loss). So we want cross-datacenter redundancy out of the box
  • LDK Lite
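
A hedged sketch of the two write patterns from the WSP discussion above. Everything here (the trait, names, and limits) is a hypothetical illustration, not a VLS or LDK API: a VLS-style normalized schema touches up to ~1000 small rows in one atomic transaction, while an LDK-style layout writes one large serialized object per update.

```rust
use std::sync::Mutex;

// Hypothetical transactional KV store with a per-implementation item
// limit, as discussed (e.g. 1000 for Postgres, 100-500 for some KV DBs).
trait WspBackend {
    // Maximum number of items accepted in one atomic transaction.
    fn max_items_per_tx(&self) -> usize;
    // Atomically write all (key, value) pairs, or none of them.
    fn write_atomic(&self, items: &[(String, Vec<u8>)]) -> Result<(), String>;
}

// VLS-style normalized write: one small row per affected payment, all
// committed in a single transaction (up to ~1000 items per commitment tx).
fn persist_commitment_normalized(
    db: &dyn WspBackend,
    channel_id: &str,
    affected_payments: &[(u64, Vec<u8>)], // (payment index, serialized state)
) -> Result<(), String> {
    let items: Vec<(String, Vec<u8>)> = affected_payments
        .iter()
        .map(|(idx, state)| (format!("{channel_id}/payment/{idx}"), state.clone()))
        .collect();
    if items.len() > db.max_items_per_tx() {
        return Err("backend transaction item limit exceeded".into());
    }
    db.write_atomic(&items)
}

// LDK-style write: the whole monitor is one serialized object, so it is a
// single item per transaction, but that one row/cell can grow very large
// (the DynamoDB-style item size concern mentioned above).
fn persist_monitor_blob(
    db: &dyn WspBackend,
    channel_id: &str,
    serialized_monitor: Vec<u8>,
) -> Result<(), String> {
    db.write_atomic(&[(format!("{channel_id}/monitor"), serialized_monitor)])
}

// Minimal in-memory mock so the sketch runs end to end.
struct MemBackend {
    rows: Mutex<Vec<(String, Vec<u8>)>>,
    limit: usize,
}

impl WspBackend for MemBackend {
    fn max_items_per_tx(&self) -> usize { self.limit }
    fn write_atomic(&self, items: &[(String, Vec<u8>)]) -> Result<(), String> {
        self.rows.lock().unwrap().extend_from_slice(items);
        Ok(())
    }
}

fn main() {
    let db = MemBackend { rows: Mutex::new(Vec::new()), limit: 1000 };
    persist_commitment_normalized(&db, "chan-1", &[(0, vec![0]), (1, vec![1])]).unwrap();
    persist_monitor_blob(&db, "chan-1", vec![0u8; 64]).unwrap();
    println!("rows written: {}", db.rows.lock().unwrap().len());
}
```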
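
And a minimal sketch of the “shard by client” idea: each LN node's data is independent, so a deterministic hash of the node ID can pick the shard; standard Postgres sharding tooling automates the same thing. Purely illustrative:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Pick a shard for a client by hashing its node ID, so each LN node's
// data lives entirely on one shard and shards can scale horizontally.
fn shard_for_node(node_id: &[u8; 33], num_shards: u64) -> u64 {
    let mut hasher = DefaultHasher::new();
    node_id.hash(&mut hasher);
    hasher.finish() % num_shards
}

fn main() {
    let node_id = [2u8; 33]; // stand-in for a compressed secp256k1 pubkey
    println!("shard {}", shard_for_node(&node_id, 16));
}
```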

Dependent Projects

  • VLS (https://gitlab.com/lightning-signer/validating-lightning-signer)

    • Kensedgwick: working on the STM32 demo, specifically the invoice approval layout. Really small screen; trying to fit the pertinent info about an invoice on it so users can approve/decline
    • Found and fixed a controller reset bug
    • Experimenting with an activity display on the STM32. What can we do with a small amount of screen area to show what's going on in a node, to help users debug / show what's happening?
    • Devrandom: moving along with the Postgres backend; should have something working end-to-end in a few days
  • Sensei (https://github.com/L2-Technology/sensei)

  • Synonym (https://github.com/synonymdev/ldk-node-js)

    • Cinnamon lol
    • J and Cory are gearing up for a mainnet test on Friday
    • We do app testing on Fridays
    • Been testing on regtest for a while; gonna try testing on mainnet now
    • App launch in October
    • No progress on combining Synonym's rn-ldk with BlueWallet's rn-ldk; probably not happening

Spec

  • 2022/09/26 (https://github.com/lightning/bolts/issues/1028)
    • Viktor has a PR related to this, dropping support for the legacy onion payload format (see the sketch after this list)
    • Been updating the test vectors. Made a PR to drop the legacy enums; haven't had time to actually code the removal of everything regarding the legacy onion, but perhaps it can be merged without that part, or in the same PR
    • When constructing an onion packet with our utils, it doesn't match the test vectors even though our inputs are the same
    • Jeff: please add me as a reviewer; we can discuss offline
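
For context on the formats in question, a hedged sketch: field names follow BOLT 4, but the enum itself is illustrative, not Viktor's PR or LDK code.

```rust
// The two BOLT 4 per-hop onion payload formats (illustrative only).
enum HopPayload {
    // Legacy 65-byte fixed payload (realm 0x00): 1 realm byte, the three
    // fields below, 12 zero padding bytes, and a 32-byte HMAC.
    Legacy {
        short_channel_id: u64,
        amt_to_forward: u64,
        outgoing_cltv_value: u32,
    },
    // Variable-length TLV payload, the only format going forward.
    Tlv(Vec<(u64, Vec<u8>)>), // (tlv record type, value) pairs
}

fn main() {
    // A legacy payload of the kind the PR would stop supporting:
    let _legacy = HopPayload::Legacy {
        short_channel_id: 0,
        amt_to_forward: 1_000,
        outgoing_cltv_value: 144,
    };
    // The TLV equivalent of amt_to_forward = 1000 (BOLT 4 type 2, tu64):
    let _tlv = HopPayload::Tlv(vec![(2, vec![0x03, 0xe8])]);
}
```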

Misc

  • Review begs?