
@notjack working on protocols. It looks like this API is also a stack.


The codec layer is pretty messy.

I don’t think a driver-style API will work.

Not like it did with the Agent API.

encode : * -> *
decode : * -> *

These signatures are too generic to reason about usefully.

So instead, we can devise a “primitive” set of codecs and prescribe rules for making new ones. I’ll call this an axiomatic approach.
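To make the axiomatic idea concrete, here's a rough sketch in TypeScript-style notation: a couple of primitive codecs plus one rule (composition) for deriving new ones. The `Codec` interface and the `utf8`/`lines`/`compose` names are all illustrative assumptions, not net2's actual API.

```typescript
// Hypothetical sketch of an "axiomatic" codec layer: a small set of
// primitive codecs plus a single composition rule for deriving new ones.

interface Codec<A, B> {
  encode(value: A): B;
  decode(value: B): A;
}

// Primitive codec: UTF-8 text <-> bytes.
const utf8: Codec<string, Uint8Array> = {
  encode: (s) => new TextEncoder().encode(s),
  decode: (bs) => new TextDecoder().decode(bs),
};

// Primitive codec: list of lines <-> newline-joined text.
const lines: Codec<string[], string> = {
  encode: (ls) => ls.join("\n"),
  decode: (s) => s.split("\n"),
};

// The one rule for making new codecs: compose an outer codec's
// low-level format with an inner codec's high-level format.
function compose<A, B, C>(outer: Codec<A, B>, inner: Codec<B, C>): Codec<A, C> {
  return {
    encode: (a) => inner.encode(outer.encode(a)),
    decode: (c) => outer.decode(inner.decode(c)),
  };
}

// Derived codec: list of lines <-> bytes, built without writing
// any new encode/decode logic by hand.
const linesToBytes = compose(lines, utf8);
```

With only composition as the extension rule, every derived codec inherits whatever properties the primitives have, which is what makes the approach tractable to reason about.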

The roadmap suggests restructuring and serialization.

That’s still pretty generic.

The restructuring side can be axiomatic as well.

Same for serialization.

In some sense, serialization is more “universal” than restructuring.

If net2 can serialize Racket data structures, we can model and analyze those behaviors.

Then future protocol implementers will get a practical foundation (the code) as well as a theoretical one (the model).

But I can’t find anything that general to say about restructuring.

I can say interesting protocol-specific things about restructuring.

Suppose I have a messenger that prints alists by joining keys and values with a fixed string.

Say I’m building an HTTP protocol that presents requests as http-request objects.

Then I can say the protocol restructures http-request objects as alists.
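A sketch of that protocol-specific restructuring, assuming a hypothetical http-request shape and the alist-printing messenger described above (all names here are made up for illustration):

```typescript
// An alist: a list of key/value pairs.
type Alist = Array<[string, string]>;

// Hypothetical http-request object shape.
interface HttpRequest {
  method: string;
  target: string;
  headers: Alist;
}

// Restructuring step: http-request -> alist.
function requestToAlist(req: HttpRequest): Alist {
  return [["method", req.method], ["target", req.target], ...req.headers];
}

// The messenger's printer: join each key and value with a fixed string.
function printAlist(alist: Alist, sep: string): string {
  return alist.map(([k, v]) => k + sep + v).join("\n");
}
```

For example, `printAlist(requestToAlist(req), ": ")` renders a request one field per line, which is exactly the kind of protocol-specific statement that doesn't generalize beyond HTTP.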

But that’s not generally useful.

I mean, restructuring http-request objects into alists is not generally useful.

It wouldn’t belong in a standard library.

Maybe if we included codecs for converting between Racket data structures, but that smells like scope creep.

Or, we could just catalog the collection of restructuring codecs used in whatever protocols net2 ships with.

So codecs must be composable. Must they also be invertible?

The theory is simpler if codecs are composable and invertible. That means a simpler API, stronger theoretical guarantees, and fewer bugs.
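Invertibility here means the two round-trip laws hold: decoding an encoded value gives back the original, and vice versa. A minimal sketch checking those laws on a toy codec (the `Codec` type and `hexByte` codec are hypothetical, not net2's API):

```typescript
interface Codec<A, B> {
  encode(value: A): B;
  decode(value: B): A;
}

// A small invertible codec: byte value <-> two-digit hex string.
const hexByte: Codec<number, string> = {
  encode: (n) => n.toString(16).padStart(2, "0"),
  decode: (s) => parseInt(s, 16),
};

// Law 1: decode after encode is the identity on the high-level type.
console.assert(hexByte.decode(hexByte.encode(0x2a)) === 0x2a);

// Law 2: encode after decode is the identity on the low-level type.
console.assert(hexByte.encode(hexByte.decode("ff")) === "ff");
```

Because composition preserves both laws, any codec built from invertible primitives is automatically invertible too.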

@dedbox That API stack diagram looks great; that’s precisely what I’ve been shooting for. Wonderful job :)

and I’m thinking along the same lines with codecs

composable and defined in a way that’s somehow “symmetric” seems ideal and makes things much simpler

also, keep in mind this is for byte-level serialization protocols, not parsing user-written source code, so error messages are less of a concern than things like a way to guarantee an upper bound on memory consumption

I agree with you about not being 100% sure where structs and (de)serializers for http types should go

I’m thinking maybe a net2/codec module that defines a whole pile of generic ways to extend and compose codecs? Like, a generic way to do “message framing” where you turn a Codec A B value (meaning a codec with high-level format of type A and low-level (e.g. binary) format of type B) and a function telling you the size of values of type A into a Codec A (Frame B) value, where the Frame type prefixes each unit with size information. Or something like that. Basically, something that generalizes over how Ethernet frames, IP packets, TCP packets, and HTTP messages all have a size header that tells you how far to read for more input. That sort of thing would belong in net2/codec

and then ideally, http codecs should require relatively little code to implement and just be gluing together various generic bits of net2/codec. That sort of stuff seems alright in a net2/http module