
All, does anybody know what’s the granularity of the racket cache? By looking into .racket it seems it’s either <version> or snapshot. So if I grab HEAD today and a different HEAD tomorrow, the same cache will be used. Is this the case?

The package-download cache is shared across all Racket versions and variants. It’s keyed on the package checksum, so two variants of Racket would use the same cache entry only when they want exactly the same package content.

But with regards to deinprogramm: I clone two HEADs, one today and one tomorrow. It’s fine if they both use the same download cache. But then the package is built. Will the files built today be overwritten by those built tomorrow? Also, how does CS fit into this? I assume the built files with CS must be put someplace else?

A Racket build from a Git repo checkout uses installation scope by default, so the installation is not written to “~/.racket”. The two builds will have separate installations of the package. A make cs build configures racketcs to put “.zo” files in a subdirectory of “compiled”, such as “compiled/ta6le”, so that’s why racket and racketcs can coexist within a build. If you install anything specifically in user scope, then both checkout builds would use it, and that’s generally not good. Avoid that problem by not installing into user scope, or by changing one of the installations (using raco pkg config ...) to have a name other than “development”.

Thanks for the clarification.

Hello! I am new to Racket, and naturally have some questions … My first ones, since I like static type systems, are:
What is the goal of Typed Racket? Is its main goal to be a workbench for type systems research? What does Typed Racket offer that the type systems of OCaml or Haskell do not?

I have a list of identifiers that I’d like to use as input to define-values. Is it possible to go from '(a b c) to (values a b c)?

(apply values '(a b c))

tyvm!

there’s also match-define: (match-define (list a-var b-var c-var) '(a b c))

needs (require racket/match) though, but match-define is pretty powerful
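For the record, both approaches bind the same names. A minimal sketch (using a made-up list of numbers rather than the original '(a b c)):

```racket
#lang racket
(require racket/match)

;; apply spreads the list out as multiple values, which define-values binds:
(define-values (a b c) (apply values '(1 2 3)))

;; match-define destructures the list positionally instead:
(match-define (list x y z) '(1 2 3))

(list a b c x y z) ;=> '(1 2 3 1 2 3)
```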

Neat. The trick for me is my list is coming out of hash-values, so I need to make sure it expands before match-define or define-values. I don’t need hash-values itself redefined. :slightly_smiling_face:

“expands before” ??

Sorry for my clunky terminology. Basically I’m trying to do something like
(define val-list (hash-keys h))
(define-values (values val-list) (hash-values h))
but it’s trying to literally redefine val-list

It’s not clear what you expect it to do

Correct me if I’m wrong: If h is (hash 'a 1 'b 2 'c 3)

Oh, wait: you want to define a variable per k/v pair in the hash?

My first response is: you do not want to do that.

Then do you want something that will define a as 1, b as 2, and c as 3?

Yes, exactly.

Are the keys of the hash table only known/stored at runtime?

Yeah. Unfortunately, the way the actual data is created, it’s outside the definition context.

If you had a magic thing that did what you want called definitions-from-hash

That looked like (definitions-from-hash h), which took h as a runtime value, say (hash 'a 1 'b 2 'c 3), and generated those definitions

Yup that’s the dream.

What should the compiler do if it sees:
(define (f h)
  (definitions-from-hash h)
  a)
?

Based on my limited understanding I would expect it to create those definitions local to the scope of f, and thus to return 1 if you called it with the example hash.

What benefit are you hoping to derive from this? Since the hash keys aren’t known until runtime, how would you refer to any of the defined variables?

How would you know that they aren’t messing with the definitions of things like +?

I’m trying to create a new package in the catalog, and I’m getting an error message saying “Save failed.” I’m not seeing any other details about what’s wrong. Is something amiss with the server? (@jeapostrophe?)

My use case is that I have a situation where I’m creating a tree of nested items that I would like to be able to refer to by name later because we may need to export multiple references to the same instance of data. This would be wrapped behind macros and not something generally available to our scripters.

To make what @jaz is saying more concrete, what should the compiler do if it sees:
(define (f h)
  (definitions-from-hash h)
  (+ a 10))
?
And then you call f with a hash table that (accidentally of course) contains the key '+:
(f (hash 'a "a~vtion" '+ printf))
?

Also:
(define (f h)
  (definitions-from-hash h)
  a)
(f (hash))

In that case it would redefine + to printf. Is the argument that because such a thing is possible then this function shouldn’t exist? Because I can right now do
(define + printf)
(+ "foo ~a" 1)

What about (f (hash 'definitions-from-hash +))?

I’m not working on a publicly available API or anything here. This is an internal library to be used by our own employees who are all reasonably professional. :slightly_smiling_face:

Okay

How about you modify it slightly: (definitions-from-hash h [a b c]). That way a, b, and c are known at compile time?

Definitions and bindings need to be determined at compile time; that’s why.

So the net effect of that would be every instance of this data would require the scripter to also maintain a list of all known instances in a separate location. I’d say we’d go maybe 48 hours before that process broke down. While everyone is reasonably professional most of them are non-technical by nature.

I guess I still don’t understand this part: if the keys aren’t known until runtime, how can any of the people using this refer to them? How/why would they expect a to be bound?

If they are known ahead of time, then you don’t need to do this. If they aren’t, I don’t see how it’s useful to do this.

Or are you assuming that the programmers will be able to refer to unbound variables and get back some kind of undefined value à la JavaScript?

The runtime in this case is an exporter which generates a data file to be used by the game. So if our exporter can refer to data at a later point in time we can do more useful things with that data. Say we are able to export the tree of data as authored by the scripters but then automatically generate a list of all known tree nodes which also gets written out.

Sorry, I don’t follow. I guess I still don’t have a good idea of how this code would be used.

But… assuming that whoever is going to be referring to these variables is also responsible for making sure that the hash has the relevant keys (which I think is what you’re saying), maybe this could be done as a kind of staged approach? Where you’d macro-generate the code per dataset. Does that make any sense?

An alternative would be to embed an interpreter in your code.

If someone writes:
(define (f h)
  (definitions-from-hash h)
  eeeeeee)
And then f is called on a hash that doesn’t have that key, like (hash 'a 1 'b 2 'c 3), what do you want to happen? 1. Error at compile time, somehow 2. Error at run time 3. Some kind of “undefined” value, but no error

I think so, I believe we do similar macro generated code/datasets pretty frequently if I take your meaning correctly.

And who should be “blamed” for this error?

Runtime error @alexknauth

The person who wrote f in a way that referred to a key that didn’t exist?

Or the person who called f without giving it the keys that it expected?

I would expect the “blame” to be on the author of f in this case

So when f is written, is there some list of known valid keys somewhere?

And a, b, and c are in that list, but eeeeeee is not?

Unfortunately that’s what the hash table is supposed to be, the list of known things.

But the list of known valid keys is known when f is written, which is before f is called, right?

So, I’ve never done this before, but maybe you could do something like:
- create a new namespace
- iterate through the hash pairs, calling namespace-set-variable-value! on each
- use the populated namespace to eval the user code
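A minimal sketch of those steps, assuming the hash keys are symbols (the helper name is made up for illustration):

```racket
#lang racket

;; Evaluate body-expr with each key of h bound as a top-level
;; variable in a fresh namespace.
(define (eval-with-hash h body-expr)
  (define ns (make-base-namespace))
  (for ([(k v) (in-hash h)])
    (namespace-set-variable-value! k v #t ns))
  (eval body-expr ns))

(eval-with-hash (hash 'a 1 'b 2 'c 3) '(+ a b c)) ;=> 6
```

This is probably not the best way to go about it, but it shows the shape of the idea.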

(This is a simplified version of what you’d need to do to set up a useful environment for eval-ing code.)

There are probably much better ways than eval, even for potentially dynamic behavior like this

And quite possibly not the best way to go about this.

I’m trying to figure out how dynamic it needs to be

Should information be known when f is defined, or only when f is called?

When f is written, there is no list of known keys.

Then how is it right to blame the author of f for writing eeeeeee when the list of keys is not known when they wrote it?

Let me see if I can conjure up what this might look like to a scripter

(new-game-data example
  (new-node foo 1 2 3)
  (new-node bar 4 5 6))
;; foo is undefined
(magic)
;; foo is now bound to a new data node
(do-something-with foo bar)

new-node in this example is a macro that internally updates the hash table behind the scenes.

Currently the new-node macro updates the hash at run time?

Could you have it update a set of known keys at compile time as well?

How might I do that?

What if the information about foo and bar being valid keys was stored at compile time in the example identifier? Then when you invoke magic you pass in example, like this: (magic example). And then foo and bar are defined

The part I’m stuck on is associating foo and bar with example at compile time. I think it would require new-game-data having significant visibility into the format of the data nodes.

But I admit that is particular to our own macro structure.

Assuming the new-game-data macro had that information, what would the process of compile-time association look like?

One possible way would look something like this:

#lang racket
(require syntax/parse/define)

(define-syntax new-node #f)

(define-syntax-parser new-game-data
  #:literals [new-node]
  [(new-game-data name:id (new-node key:id) ...)
   #'(define-syntax name '(key ...))])

(define-syntax-parser magic
  [(magic name:id)
   (define keys (syntax-local-value #'name))
   (define keys-stx (datum->syntax #'name keys))
   #`(define-values #,keys-stx (apply values (range #,(length keys))))])

(new-game-data example
  (new-node foo)
  (new-node bar))

(magic example)
foo ;=> 0
bar ;=> 1

define-syntax creates an association at compile time

and syntax-local-value looks up the compile-time value associated with the identifier

It’s also possible to have example tied to both a compile-time set of keys and a run-time hash table, by creating a compile-time struct that refers to a runtime identifier

Or if there might be multiple run-time hash-tables for the same set of keys, it might be better to keep those separate.

Interesting! I’ve not seen define-syntax-parser before.

Unfortunately I have to step away for a bit, if I don’t get a chance to follow up later I just want to say thank you for working this out with me.

@greg at the moment in frog, to create the website, I do a raco frog -b, which puts everything in a site subfolder, but then I need to manually copy in the css, img and js folders so I can sync it to the destination. Is there a way to specify assets to have automatically copied to the site folder?

@pocmatos Can you just locate the css / img / js in the site subfolder in the first place? (raco frog --clean doesn’t rm *; it cleans only files it produced)

but those are not in the site subfolder.

I mean, for my own frog blog, I have the source and output all mixed in one tree, which effectively gets bitblt-ed to GitHub Pages: https://github.com/greghendershott/greghendershott.github.com including the _src tree. It happens that GitHub Pages doesn’t serve stuff under _src, so e.g. https://www.greghendershott.com/_src/About.md doesn’t show you the source.

But even if it showed the source markdown I’m like ¯\_(ツ)_/¯

So at other people’s request frog has options to build to some other dir. But I never use it. I am supremely bored by non-trivial deployments of anything. :slightly_smiling_face:

hummm, yes that works. but I wanted a cleaner solution where only the files required for the website are put in site/.

Would you accept a PR to add assets to the target folder?

I guess so, but I’m still confused: can’t you simply have the assets under site/ where you want them to be, and then run frog and it “fills in” the output HTML etc. alongside them?

but site doesn’t actually exist. I could of course create it beforehand, but I generally just point to a temp dir for frog to generate the site.

I at the moment have a makefile that does that: create temp dir, copy css, js, img, and then run raco frog -b. But it would be cooler if I could ditch the makefile and instead have in frog.rkt something like (assets (list "css/" "img/" "js/")), or something of the sort.

OK well if it’s a simple change I’m open to a PR.

ok, will give it a go. thanks.

I guess my reluctance is a feeling that, in hindsight, Frog should probably be exploded into pieces that can be driven by a makefile or other script, and then people can do all sorts of flows.

A lot of frog is what I call “path math”, plus an application of Greenspun’s rule to makefiles. :slightly_smiling_face:

really? i would think going on a path to ditch makefiles would be better. :slightly_smiling_face: anyhow, your call.

But that’s just my reluctance, hindsight. The horse is well out of the barn, so some more PRs are inevitable. :slightly_smiling_face:

lets turn this frog into bullfrog…

(i am now left wondering if the bullfrog is indeed as large as it sounds…)

:confused:

One way to put it: I wish Frog, instead of an application and a “framework”, were more just a set of libraries.

Whether you drive it with a makefile, or your own little Racket program, or shell script, whatever.

Tangentially, have either of you used the Racket make module? (https://docs.racket-lang.org/make/) I haven’t really tried it yet, but I do run into some situations where I want to bridge the gap between raco setup’s notion of dependency tracking and external things that lend themselves more to makefiles, like rebuilding various static files.
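For what it’s worth, the library side of it is make/proc, which takes a spec of target/dependency/command lines. A rough sketch from my reading of the docs (the file names are invented, so treat the details as an assumption):

```racket
#lang racket
(require make)

;; Hypothetical rule: rebuild "out.css" whenever "in.scss" is newer.
(make/proc
 (list (list "out.css"
             (list "in.scss")
             (lambda ()
               ;; the command thunk runs only when the target is out of date
               (printf "recompiling out.css...\n"))))
 #("out.css"))
```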

The main value Frog has, as an app, is any “just-works” element. But once people want it to work unique ways, it kind of falls down fast, and starts to go down the configurability slippery slope.

@philip.mcgrath I haven’t looked at it for a while. I thought of it recently wrt Racket projects working on Windows, too. But I haven’t done much with the thought, yet.

@greg for what it’s worth, I really like how frog works. :slightly_smiling_face:

@philip.mcgrath that’s a pretty old module. I haven’t looked at it in a decade, but I did use it back in the PLT Scheme days.

but I haven’t looked at it since, so it might have been refactored.

geee… what have i written there. time to bed.

I wish the racket make thing included a language, so I could easily port regular makefiles to racket ones just by adding #lang make to the top of each makefile

Never mind: I was pasting the package’s name, and it turns out I’d pasted some invisible leading whitespace. Though maybe the site should use string-trim and/or give a more descriptive error message?