dedbox
2017-11-25 08:16:57

An agent is a link between authorities. A transport is a reliable stream between agents. A protocol is a scheme-specific family of resources accessible via transport.
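
A rough sketch of how that split could look as structs, purely to pin down the shapes (every name and field here is made up, nothing is actual net2 API):

```racket
#lang racket
;; Illustration only: hypothetical shapes for the three concepts above.

;; An agent links two authorities, e.g. a local and a remote host+port.
(struct agent (local-authority remote-authority) #:transparent)

;; A transport is a reliable byte stream between agents: an input port and
;; an output port tied to one agent.
(struct transport (agent in out) #:transparent)

;; A protocol is a scheme-specific family of resources reachable over a
;; transport; `access` would map a transport plus a URI to a resource.
(struct protocol (scheme access) #:transparent)
```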


notjack
2017-11-25 09:04:02

I think defining the pile of concepts in net2 like that is a very good idea


notjack
2017-11-25 09:04:49

this is also making me wonder if there should be some explicit “network” type provided by net2 that does something like what the Agent API you described does


notjack
2017-11-25 09:29:50

more thoughts:

  1. The URI structure and the types for its substructures are probably the most important and “core” part of net2. However the docs end up splitting up the concepts, the split should take that into account.
  2. I think listeners ought to operate in terms of sets of authorities, since there’s value in being able to accept connections from “all localhost addresses” or “all ports in this range for this address” (turns out the X11 protocol does the latter quite a bit). Maybe the URI data structures for net2 should also specify a “reg-namespace” struct representing some single value, used for routing, that defines a subset of the namespace. It would basically generalize how CIDR ranges work for IP addresses. This would work nicely with TLS certificates for non-DNS hosts too, because a wildcard certificate is essentially using the wildcard to represent a particular namespace of DNS addresses. With a general struct, TLS certs could be issued for an IP range like 192.168.123.0/24, using a single cert for a whole range of private IPs. (Rough sketch of this after the list.)
  3. I’m not sure what the protocol API’s relationship to transports should be. This kind of code would be really well suited to a Rust-like language with a powerful Haskell-ish type system and an emphasis on purity, but with the ability to express control over allocation and deallocation. It’s basically a whole bunch of parsing and pure functions for shuffling values into bytes. I think the best way to figure out how this should work is by paying attention to what Rust’s Tokio libraries do and what Haskell’s parsing libraries do, especially attoparsec. I’ve also been pondering this part a lot, and it might work best in terms of general streams / sequences of values and parsing combinators, but where there’s a way for a protocol to guarantee a maximum number of stream elements consumed or produced in individual serialization and deserialization operations. Otherwise there’s nowhere to “commit” parsing and you can’t throw away old parse info, since you might need it for backtracking. But sticking commit / maximum-length semantics in the mix might give a way to get attoparsec-like performance without sacrificing parsec-like abstraction power. (Toy sketch of the limit idea after the list.)
  4. I think disposable needs some sort of “closing” type that represents a value that’s been allocated and will be deallocated eventually (e.g. by custodian shutdown), but that you can close early if you want, using an explicit “close” function. Otherwise I don’t know how to express that listeners offer an “accept connection” event that returns a transport, because there isn’t a good place to stick transport closing logic. (Sketch of that below too.)
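
Rough sketch of what 2’s “reg-namespace” could look like (struct and helper names are made up, none of this is actual net2 API):

```racket
#lang racket

;; A reg-namespace wraps a predicate deciding whether a concrete name in some
;; registry (an IP address, a DNS name, ...) falls inside the namespace.
(struct reg-namespace (contains?) #:transparent)

;; CIDR-style namespace over IPv4 addresses represented as 32-bit integers,
;; e.g. (cidr-namespace #xC0A87B00 24) for 192.168.123.0/24.
(define (cidr-namespace network prefix-length)
  (define mask
    (arithmetic-shift (sub1 (expt 2 prefix-length)) (- 32 prefix-length)))
  (reg-namespace
   (lambda (address)
     (= (bitwise-and address mask) (bitwise-and network mask)))))

;; Wildcard-style namespace over DNS names, e.g. every subdomain of
;; "example.com" (i.e. the set a wildcard TLS cert effectively names).
(define (dns-wildcard-namespace parent-domain)
  (reg-namespace
   (lambda (hostname)
     (string-suffix? hostname (string-append "." parent-domain)))))

;; A listener could then accept from a namespace rather than one authority:
;; ((reg-namespace-contains? (cidr-namespace #xC0A87B00 24)) #xC0A87B05) ; #t
```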
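
And a toy reading of 3’s commit / maximum-length idea, nowhere near attoparsec-level machinery, just enough to show where a consumption limit would fit:

```racket
#lang racket

;; A parser here is (bytes position -> (values result next-position)),
;; returning (values #f #f) on failure. Purely illustrative.

;; Example primitive: one line terminated by a newline byte.
(define (parse-line input start)
  (define end
    (for/first ([i (in-range start (bytes-length input))]
                #:when (= (bytes-ref input i) (char->integer #\newline)))
      i))
  (if end
      (values (subbytes input start end) (add1 end))
      (values #f #f)))

;; MTU-like cap: fail if the wrapped parser consumes more than max-bytes.
;; If every step of a protocol carries a bound like this, the stream only
;; ever needs to buffer that many bytes for potential backtracking.
(define ((with-consumption-limit max-bytes parser) input start)
  (define-values (result next) (parser input start))
  (if (and result (<= (- next start) max-bytes))
      (values result next)
      (values #f #f)))

;; ((with-consumption-limit 16 parse-line) #"GET / HTTP/1.1\nrest" 0)
;; => (values #"GET / HTTP/1.1" 15)
```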
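
And for 4, the “closing” type might be as small as this (hypothetical, not the actual disposable API):

```racket
#lang racket

;; A closing wraps an already-allocated value together with the procedure
;; that releases it. The value will be released eventually regardless (e.g.
;; at custodian shutdown), but callers also get an explicit early close.
(struct closing (value closer [closed? #:auto #:mutable]))

;; Close early. Idempotent: calling it twice only runs the closer once.
(define (closing-close! c)
  (unless (closing-closed? c)
    ((closing-closer c) (closing-value c))
    (set-closing-closed?! c #t)))

;; A listener's accept event could then produce (closing transport shutdown)
;; values, which gives transport closing logic a place to live.
```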

notjack
2017-11-25 09:30:48

oh also: Google FOSS legal is finally getting through their backlog and policy tweaks, and racket-net2 is now officially not Google copyright :)


notjack
2017-11-25 09:36:38

Oh and I really want a standard struct for representing media type names and “content” (bytes + media type), since that’s a really crucial concept for internet computation
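
something as small as this would probably cover it (hypothetical struct names, just to pin down the shape):

```racket
#lang racket

;; A media type is a type/subtype pair plus parameters; content is bytes
;; tagged with the media type they claim to be. Hypothetical shapes only.
(struct media-type (type subtype parameters) #:transparent)
(struct content (media-type bytes) #:transparent)

;; Example: a UTF-8 HTML payload.
(define text/html-utf8
  (media-type "text" "html" (hash "charset" "utf-8")))
(define hello-page
  (content text/html-utf8 #"<p>hello</p>"))
```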


notjack
2017-11-25 09:43:08

huh, I guess that “parsing with a maximum on the number of produced/consumed elements” is basically just adding an MTU to an abstract stream


notjack
2017-11-25 09:43:21

definitely seems like the right direction to explore