For all those little papers scattered across your desk
A small amount of server-side Racket and client-side JavaScript gives me a passable version of a reactive front-end.
In my talk for the 14th RacketCon I mentioned that the Frosthaven Manager can spawn a web-server for my friends and players to interact with the app on their mobile devices. I run the entire program on my machine, so all the state is stored in-process in one place. Edits in the desktop GUI are propagated to my players’ web pages live, and their actions translate back to the GUI in turn.
There’s no JavaScript framework on either the back-end or front-end. So how does it all work?
There are 3 pieces to the puzzle. The first is fetch calls in onclick handlers that send POST requests back to the server, which the server translates into actions the rest of the program knows how to handle (but which, as mentioned in the talk, are shunted back to the GUI execution loop rather than executed in the concurrent web-server handler threads). So while my players mostly see the rendered HTML content returned by the server’s primary route, it actually supports a limited kind of URLSearchParams-backed API. If you know what routes to hit, you could write your own client to trigger game events. I’ve done so with hurl when playing with new features, just to try it.

This post focuses on the server-sent event implementation, or primarily the latter 2 pieces.
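Before turning to the SSE pieces, here is a rough sketch of what that first, client-to-server direction might look like. The player-hp route and its parameter names are invented for illustration; they are not the application’s real API.

// Hypothetical sketch of the client-to-server direction: POST a
// form-encoded update that the server turns into a game action.
// The route and parameter names here are made up.
function setPlayerHp(playerId, hp) {
  fetch("player-hp", {
    method: "POST",
    body: new URLSearchParams({ id: playerId, hp: hp }),
  });
}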
Note: Rather than embed the full code in this post, I’m going to link to the implementation as it was at time of writing. Follow the links to get the full details.
With server-sent events, it’s possible for a server to send new data to a web page at any time, by pushing messages to the web page. (MDN)
SSEs are one-way connections from server to client. Clients point a standard JavaScript API, EventSource, at a URL that will produce an SSE-compatible response and then attach event listeners. These listeners can operate over generic events or named events. Messages can have arbitrary data fields which the client must parse to decide how to use.
The server implements SSEs by responding with the correct MIME type and raw response format.
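Concretely, a response served with the text/event-stream MIME type carries a stream of events, each a short run of field: value lines terminated by a blank line. A named event with one data line looks like this (the name and payload are placeholders):

: lines starting with a colon are comments and are ignored by the client
event: some-name
data: payload for the client's "some-name" listeners to parse

A blank line ends each event; events without an event field are delivered to the generic message listeners rather than to named ones.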
Before we look at implementation details, let’s get a grasp on the fundamentals of the protocol my web-server uses atop SSEs.
- Event data is JSON, carried in the data fields. Racket is capable of emitting standard JSON and JavaScript of parsing it, so this simplifies communication.
- Events name the HTML node they target (by its id attribute).
- Events carry complete replacement content, which the client writes into the node’s innerHTML as needed.

The last point is important: it avoids performing duplicate calculations in the client (when a player’s HP changes, we send the new number, not an event requesting that the HP be incremented or decremented), which makes keeping the state in sync more reliable.
The simplest events in my protocol are number and text: they send an id and a number or string that should replace the named node’s innerHTML. They actually have nearly identical client implementations and server implementations.
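On the wire, a number event therefore looks roughly like this; the id and the exact field names are placeholders rather than the real protocol’s:

event: number
data: {"id": "player-1-hp", "value": 7}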
Other events are more complicated and outside the scope of this article. As an example, the player event is triggered when a player object changes: ignoring the summon data, it receives an HTML id, a mapping of sub-components to HTML strings, and a complete HTML node. The complete node is used if the player doesn’t already exist, allowing it to be inserted wholesale into the display. Otherwise, we update the sub-nodes based on the mapping of HTML strings. The monster-group event is similar.
As we said earlier, the client opens a new event source and attaches event handlers. We include the script on the main page. Sending events is the server’s responsibility.
const evtSource = new EventSource("events");
evtSource.addEventListener("name", handler);
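As a sketch, a named handler for the number event described above might look like the following, assuming the JSON payload carries id and value fields (the real field names may differ):

evtSource.addEventListener("number", (event) => {
  // event.data is the string from the server's data field; parse the JSON payload.
  const { id, value } = JSON.parse(event.data);
  const node = document.getElementById(id);
  if (node) {
    node.innerHTML = value;
  }
});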
The server subscribes to the GUI observables1: when they change, the subscribers place structured data in a multicast channel (example). We need a multicast channel because we create one per server (usually just one), but each client request handler needs to be able to read from it (usually one per event source connection).
(define ch (make-multicast-channel))
(obs-observe! @state
(λ (state)
(multicast-channel-put ch (state-event state))))
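The multicast part is what lets one state change fan out to every connected client: each receiver created from the channel sees every subsequent message. A rough illustration using only the operations already shown, not code from the application:

(define r1 (make-multicast-receiver ch))
(define r2 (make-multicast-receiver ch))
(multicast-channel-put ch "hello")
(sync r1) ; => "hello"
(sync r2) ; => "hello", both receivers observe the same message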
Then, the server establishes a route which implements the SSEs. This is the same path that forms part of the URL that the client will connect to. The route’s implementation responds with appropriate headers. It also gets an output port it can use to write to the client.
(define ((event-source ch) _req)
(define receiver (make-multicast-receiver ch))
(response/output
#:headers (list (header #"Cache-Control" #"no-store")
(header #"Content-Type" #"text/event-stream")
;; Don't use Connection in HTTP/2 or HTTP/3, but Racket's
;; web-server is HTTP/1.1 as confirmed by
;; `curl -vso /dev/null --http2 <addr>`.
(header #"Connection" #"keep-alive")
;; Pairs with Connection; since our event source sends data
;; every 5 seconds at minimum, this 10s timeout should be
;; sufficient.
(header #"Keep-Alive" #"timeout=10"))
(sse-output receiver)))
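For completeness, a handler like this can be mounted at the events path the client connects to. The sketch below uses web-server/dispatch and serve/servlet for illustration; it is not how the Frosthaven Manager actually wires up its server:

(require web-server/dispatch
         web-server/servlet-env)

(define serve-events (event-source ch))

(define-values (app _make-url)
  (dispatch-rules
   [("events") serve-events]))

;; serve/servlet only routes /servlets/* by default; this regexp sends every path to app.
(serve/servlet app #:servlet-regexp #rx"")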
The response handler’s main loop waits for the channel to produce data: when it does, a separate function transforms that data into SSE format and shoves it through the port. If we don’t get a message in time, we send a comment to prevent a connection timeout.
(define (sse-output receiver)
(λ (out)
(let loop ()
(cond
[(sync/timeout 5 receiver) => (event-stream out)]
[else (displayln ":" out)])
(loop))))
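The transforming function itself isn’t shown above. As a rough sketch, it only has to write the event name and JSON data in the wire format from earlier and flush the port; the event-name and event-data accessors below are stand-ins for however the application actually represents its structured events:

(require json)

;; Sketch only: write one event in text/event-stream format and flush it,
;; assuming event-data returns a jsexpr (hash, list, string, number, ...).
(define ((event-stream out) evt)
  (fprintf out "event: ~a\n" (event-name evt))
  (fprintf out "data: ~a\n" (jsexpr->string (event-data evt)))
  (newline out)        ; the blank line tells the client the event is complete
  (flush-output out))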
That’s all there is to it! I’m hoping to find a way to extract the two pieces (client-side code and server-side implementation) into a library for other Racket applications to use to implement server-sent events more easily. Ideally it will handle the basics of SSEs while remaining agnostic to how the application generates and handles events. We might not be able to be concurrency-agnostic, though: while Racket’s sync is generic, most applications probably need a single-producer multi-consumer channel. Still, allowing any event that produces data a consumer can transform into SSE data might work and allow other patterns.
For performance reasons, some subscribers spawn a thread that sends the message. Since GUI Easy subscribers execute serially, getting expensive work out of the main loop quickly helps avoid bottlenecks. ↩