
Matthew Flatt will give a talk about Racket-on-Chez in 20 minutes, watch the livestream here: https://ventotene.conf.meetecho.com/icfp/

Is it ongoing

? I can’t see anybody

It is about to start. Briefly got sound.

FWIW the streaming site doesn’t work on iPad.

Nice talk :racket-flat:

Is it available to those who missed it?

I hope

@mflatt I have a question, but my guess is it is probably not in scope to have asked on Slido, so consider this me asking in the hallway. :) You touched briefly upon three aspects of Racket-on-Chez’s compilation model in your talk: linklets and the way they interact with the expander, the ability to load/compile code “on the fly,” and the relatively AOT compilation model of Racket-on-Chez.
Independently, all those things make sense to me. But together, they seem more complicated, and I don’t have a great model in my head of how they interact. I realize the on-the-fly compilation support is necessary to support eval/namespaces, and my understanding is that that is important for the implementation of the expander, since it needs to perform evaluation during expansion due to things like let-syntax, but what does that mean on Chez? Are individual expressions compiled on demand via the AOT compiler every time the expander expands let-syntax?
More generally, does this mean that a language with a You Want It When module system needs to take a top-level approach to compilation and code loading, and that the traditional C approach of “compile object files and link them separately” is incompatible? Or, to put it a different way: does You Want It When require that the host system support eval, or is it an implementation detail?

Any language with macros needs eval, in some form. If you look at GHC or Rust, they do this by requiring more stratification so that they can still use their normal compilation model, but that precludes things like local macros/let-syntax.
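
For example (a minimal sketch), the RHS of a let-syntax has to be evaluated while the surrounding expression is still being compiled, and the resulting procedure is applied immediately by the expander:

```racket
#lang racket
(require (for-syntax racket/base))

;; The transformer expression below is evaluated at expansion time;
;; the expander applies the resulting procedure to rewrite (twice 21)
;; into (+ 21 21) before the program ever runs.
(define result
  (let-syntax ([twice (lambda (stx)
                        (syntax-case stx ()
                          [(_ e) #'(+ e e)]))])
    (twice 21)))

(displayln result) ; prints 42
```

A stratified compiler can handle top-level macro definitions by compiling them in a separate earlier unit, but a local macro like this forces evaluation in the middle of compiling a single expression.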

Yes, it will be up online later

@samth I think my question boils down to whether eval needs to be in some way evaluating in the same “environment”/running system as the expander itself (in order to tie some kind of knot I’m not thinking of) or if it could be doing something like “generate new objects on the fly and dlopen them” rather than using an explicitly reflective, top-level environment.

I don’t completely understand how the knot gets tied in either Racket or GHC (for Template Haskell), but my fuzzy understanding is that they do things pretty differently. I’d like to understand the details a lot better.

The Racket expander doesn’t use eval in that sense — the eval needed for macros is really just instantiate-linklet, which doesn’t have the weird top level

Alright, my existing intuition was wrong then, but in that case, how can I get to someplace less wrong? I’ve spent a nontrivial amount of time reading the expander/namespace source code and related modules, and I’ve invested some effort into trying to understand the bootstrapping process on both Racket 3m and Racket CS, but I always get lost somewhere in the details.

One of the main reasons I think I tend to get lost is that I have a hard time understanding which pieces of the expander belong exclusively to the host system, which pieces affect or belong to the dynamic evaluation environment of the module being compiled, and which pieces are shared. My understanding is that some work is done to tie certain knots so that references to certain bindings in racket/base in the expander get redirected to primitives in the host system, but then certain values in the expander clearly do flow between the expander’s “namespace” and the namespace of a module being compiled (via the stuff that sets up #%kernel and the other primitive modules, for example), and I am just clueless as to how any of that actually works.

Linklets refer to primitives directly, so that’s essentially how racket/base leads to primitive references: the expander compiles certain references to direct primitive uses.

Everything is linklets. So the underlying Scheme needs only to support linklets that sometimes refer to primitives.
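
To make that concrete, a linklet is roughly an s-expression of this shape (a sketch of the idea, not the exact grammar):

```racket
;; A linklet lists its imports (grouped by source), its exports, and
;; a body of definitions. Free references like * below are resolved
;; directly to host primitives, so the host Scheme never needs a
;; reflective top-level environment to run one.
(linklet ()          ; imports: none
         (two-pi)    ; exports
  (define two-pi (* 2 3.141592653589793)))
```

If I remember the module name right, you can experiment with this via compile-linklet and instantiate-linklet from racket/linklet.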

It can sometimes be easier to understand the pycket implementation because the different languages make the layers clearer (in my opinion)

The expander juggles all linklets, linking, and instantiation.

Also, the expander does not share the implementation of + with the program being expanded — instead it creates a module which contains bindings for + and other things found in #%kernel

It happens that that module is defined to use the + obtained from the host system, which is also used to implement + in the expander, but that’s not in principle necessary

Thanks for the tip. @spdegabrielle It was mostly the talk from RacketCon.

A linklet is an s-expression, though, right? A linklet doesn’t have, say, closures inside of it… or does it? Here’s what I’m getting at: when the expander invokes a macro transformer, it just calls it as a procedure—it doesn’t have to do some special incantation like (apply-in-compile-time-environment <special-compiled-linklet-reference>). So even if the expander generates a linklet for the RHS of let-syntax, it still gets handed back a procedure that it understands, one that can be applied to a syntax object it defines, and to me that feels almost magical… how can a value in the expander be the same as a value in the module’s environment if there isn’t some kind of shared dynamic evaluation environment?

Indeed linklets do not have closures in them

Let me put this another way: my understanding of the GHC implementation of Template Haskell is that it uses GHCi, which is basically a Haskell interpreter. I know very little about how GHCi actually works, but it is my understanding that when GHC calls into GHCi to apply a Template Haskell splice, values don’t magically bridge the gap between GHC-land and the interpreted environment created by GHCi. Rather, there’s a GHCi-specific representation of “a Haskell value” that has to be converted (essentially via serialization) to and from the GHC representations that have been compiled to native (usually x86) code.

This is clearly not how the Racket expander works. But I do not understand the secret sauce that makes the Racket expander not have to do that.

Perhaps it would help to imagine that instantiate-linklet runs an interpreter that is also provided by the host system

But then how is a lambda created inside the interpreter the same type of value as a lambda created outside it (i.e. in the expander)?

If I naïvely wrote a Racket interpreter in Racket, assuming I don’t do some kind of HOAS-y thing, I would probably end up with a “procedure” being an s-expression, and “applying” that “procedure” would do substitution. Maybe I’d do it in Redex. My interpreter’s procedure is clearly not the same kind of thing as a procedure in the host language.
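
Something like this toy evaluator, say, where a “closure” is just a tagged s-expression and application is done by the interpreter rather than by the host:

```racket
#lang racket

;; A deliberately naive evaluator: its "procedures" are tagged
;; s-expressions, a different kind of value from host closures.
(define (ev expr env)
  (match expr
    [(? number?) expr]
    [(? symbol?) (hash-ref env expr)]
    [`(lambda (,x) ,body) `(closure ,x ,body ,env)]
    [`(+ ,a ,b) (+ (ev a env) (ev b env))]
    [`(,f ,arg)
     (match-let ([`(closure ,x ,body ,cenv) (ev f env)])
       (ev body (hash-set cenv x (ev arg env))))]))

(ev '((lambda (x) (+ x x)) 21) (hash)) ; evaluates to 42
```

Notice that (ev '(lambda (x) x) (hash)) is a list, not something the host’s procedure? recognizes.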

You just have to make them callable, but calling them could run redex or your interpreter or something else

Okay, that does help—so you’re saying they really are (or at least could be) different kinds of things, and the interface just hides that detail. But how is that possible when the expander is defined in terms of itself? I realize that’s the whole reason the bootstrapping process needs to exist, but I don’t understand how it actually works. The metaphor I just used assumes there is, in fact, a host Racket with its own implementation of procedures, but the expander’s implementation of procedures is presumably not different from the implementation of procedures in the compilation environment, right?

Maybe a better comparison for this would be to start contrasting cify versus the bootstrapping process that uses the JIT. I think I can grok how cify works—you rewrite all the expander’s primitives into simpler things directly, then set things up such that things like “function application” do the right thing. But I’m confused about the JITting version that, IIUC, uses real Racket bytecode for the expander?

In reality, they’re all the same values, and roughly you have to run things in such a way that that is the case, but it isn’t enforced

I don’t think I understand what you mean.

If that’s not helpful then don’t worry, I’ll go from what you asked

I think looking at this function is helpful here: https://github.com/pycket/pycket/blob/master/pycket/racket_entry.py#L427

I think some kind of understanding is slowly sinking in. Your suggestion to think about Pycket instead of 3m/CS has been helpful to clarify some of my thoughts.

So: Pycket presumably has a linklet-to-RPython compiler, right? And, starting from a copy of the expander that has been turned into a linklet by some existing Racket implementation, it can use that to transform the expander into RPython code, which just happens to refer to the Pycket primitives for compiling linklets to RPython.

That function calls initiate-boot-sequence which calls compile-linklet on the extracted source of the expander

Yes exactly

And that works out because the notion of what a value is inside the expander has been implemented by the same code path as the code that the expander then also hands to compile-linklet as part of module expansion.

Yes

Okay, that makes sense to me! But now I guess I’m curious about how all this works on RacketCS, specifically, which relates back to my original question. Since RacketCS has a comparatively AOT compilation model, what does compile-linklet do on RacketCS? Which, to be fair, I guess is equivalent to asking what eval does on Chez Scheme, right?

Right

instantiate-linklet (which is really what does evaluation; compile-linklet just calls schemify) mostly just calls Chez’s eval

So does (eval '(lambda () 42)) on Chez Scheme genuinely just run the whole AOT compiler, generate x86 code, stick it in some executable address space, and jump to it?

And there are ways in which eval is harder in Chez, but basically it does what you say.
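
That is, at a Chez REPL you get back an ordinary closure backed by freshly generated machine code (a sketch, assuming nothing exotic in the interaction environment):

```scheme
;; Chez's eval runs the full compiler on the form and returns a
;; genuine closure; there is no separate "interpreted value" kind.
(define f (eval '(lambda () 42)))
(procedure? f)  ; => #t
(f)             ; => 42
```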

Alright, given what you’ve explained to me, I guess I’ve realized my remaining questions are probably more about how to implement eval in a language with a compilation model like Chez’s, which aren’t really about Racket anymore.

And when you have a compiled file in Chez, it reads it in, puts it in various parts of the Chez heap, and then calls the executable code

In some ways the answers are similar to “how does dynamic linking and loading work” but it’s easier in Chez than for general Unix

Yes, that makes sense. Do you say it’s easier in Chez because it can do its own thing instead of having to deal with the existing protocols, or is there some more fundamental reason?

Is it practical/painless to run multiple versions of Racket on the same machine?

Yes.

Which one ends up as the default? (venv) Stefans-MacBook-Pro:emissions_test stefan$ file `which racket`
/usr/local/bin/racket: Mach-O 64-bit executable x86_64
(venv) Stefans-MacBook-Pro:emissions_test stefan$

I install using the installers from https://download.racket-lang.org. The installers put each version into a subfolder in /Applications. The installer doesn’t affect the PATH environment variable. In DrRacket (in recent versions?) there is a menu option that updates the path using /etc/paths.d

@stefan.kruger You can use “in-place” installations instead of “Unix-style” installations to create Racket installations that live in their own directory (instead of being installed into some global location).

Using one of the installers from https://download.racket-lang.org should not disturb your other installation in /usr/local/bin. (How did you install?)

I think that one came via homebrew…

Yes, I believe the Linux installers let you choose (I don’t remember if they default to one or the other), but the macOS installers all create “in-place” installations.

Homebrew is an unofficial installation channel, and they do a Unix-style installation with a custom prefix.

Homebrew Cask uses the official macOS installers.

Racketwise, I had no idea what I was doing at the time I installed it (and that’s likely still true)…

So there are means to instantiate a racket environment per project/directory?

Depends on what you mean.

If the goal is just “I want to install a self-contained Racket installation,” the answer is “sure”: download a Racket source code distribution (either from https://download.racket-lang.org or by cloning the git repository) and run make. It will build an installation that is completely local to that directory.

If the goal is “I want to share the same racket executable between multiple projects, but I want each project to have its own set of packages,” the answer is “it’s theoretically possible but harder and has more pitfalls than you would probably like.”

I understand, thank you.

I think there are two approaches: one is JVM-like, like you mean

the other is the Linux way, with shared things

In blue-skies terms, I’d like pipenv-for-racket :slightly_smiling_face: https://docs.pipenv.org/en/latest/

So a way of picking a specific binary from many installed and have directory-local packages.

If you just need “in this directory, use Racket version X when I write racket,” then you can use .env files in bash.

Racket doesn’t quite suffer from the problem that pipenv solves for Python: a proliferation of versions both of the platform and of packages. I’ve not had to dive into raco enough to understand how package versioning actually works.

Raco handles different versions on the same computer, so usually it just works.

The easiest way of trying different versions is simply to use the absolute path when you invoke racket. I have never needed anything more advanced.

From a position of ignorance, is there a way of making a source-code-distributed application, specifying specific versions of packages X, Y and Z? I can obviously compile into a self-contained binary bundle.

You cannot depend on a package version in Racket, only on a package. raco always downloads the latest version of the package. You can say “I need at least version X of this package,” but that doesn’t have any impact on what actually gets installed; it only causes raco to report an error if the version it knows about (which, again, is the only version in the package catalog) isn’t high enough.
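
Concretely, a package’s info.rkt can state that lower bound like this (hypothetical package name); it changes error reporting, not what gets installed:

```racket
#lang info
;; A dependency may carry a #:version lower bound. raco pkg still
;; installs whatever the catalog currently points at, but errors
;; if that version is older than the bound.
(define deps '("base"
               ("some-pkg" #:version "1.2")))
```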

That said, raco pkg catalog-archive might be closest to what you’re looking for.

(Possibly combined with raco pkg archive.)

I’m nowhere near needing this kind of functionality in Racket, but in my day job writing code in boring-er languages I am up against this every day: being able to repeatedly and reliably create exact builds of things. But Racket’s packages seem to have a much lower rate of change, and the platform itself seems to change very slowly. So maybe it’s not a problem here?

lexi.lambda is correct that packages haven’t got a version. But @sorawee showed me that an info file can refer to a specific version of a git repository. A package source can be: git://github.com/‹user›/‹repo›[.git][/][?path=‹path›][#‹rev›]
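
So while the package itself is unversioned, a dependency can be pinned to an exact revision with a git package source, e.g. (hypothetical repo and commit hash):

```racket
#lang info
;; The #‹rev› fragment may be a branch, tag, or commit hash; a
;; commit hash effectively freezes the dependency at that revision.
(define deps
  '("base"
    "git://github.com/example-user/example-repo.git#0abc123"))
```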

@stefan.kruger I am a noted package system detractor, so for the sake of those weary of my whining, I won’t rehash my opinions again here… but if you’re interested, this is probably the most complete, up-to-date statement I’ve made on the topic: https://www.reddit.com/r/Racket/comments/9rn5ms/raco_package_versioning/e8jx6ws/

Nice.

So I can certainly agree with the sentiment of the comment someone made there: I’d much rather retain a culture of stable packages, than the move-fast-and-break-stuff culture of Python and JS.
Those languages need a lot of this kind of scaffolding to manage the dependency hell they have become.

but….

(I use both)

Arguably, that is where you end up if popularity explodes.

Is there a guide / example of how to use raco pkg archive (or catalog-copy) anywhere? ( @samth )

@stefan.kruger I think versioning is useful, and I think a system without it is user-hostile. I currently write Haskell for a living, and I published a package a couple weeks ago. After about a week, I realized the API could be better if I made some changes to it, and those changes were backwards-incompatible. But they were backwards-incompatible in small ways, so upgrading is basically just a mechanical process of fixing all the type errors by making minor changes at each use site until the code compiles.
The Haskell package system let me do that by bumping the major version number and being very confident nobody would complain, even if people were already using my package. They have the option to upgrade when they want to, or they can keep using the old version. But in Racket, I can never change any of my packages’ APIs unless I make an entirely new package, and in this case, the improvement was small enough that I probably wouldn’t have bothered. But those small things accumulate over time.
An advantage of the Racket approach is that lots of old Racket (or even PLT Scheme) programs still run on modern Racket with zero changes. And that’s cool, I guess, but I can compile old Haskell programs by downloading an old version of GHC and using old libraries (which are both easy to do using modern Haskell build tools). IMO, having to sometimes make small changes to your program to accommodate new APIs is a small price to pay for libraries with nice, consistent interfaces, but hey—that’s just my opinion. In any case, I do think the changes I outlined in that comment would likely be accepted if someone did the engineering effort, so the issue is probably more practical than philosophical.

Couldn’t agree more.

Just doing its own thing

For an alternative perspective, I just gave up on the idea of preserving backwards compatibility in perpetuity. Instead I only promise not to break anyone who I can actually verify is using my package and whose tests I can run. So if your closed source project is using my thing in a way that nobody else is, and you don’t tell me, then you get no promises. It’s not ideal but realistically it’s the best I can do as a hobbyist maintainer with limited free time.

In simpler terms: compatibility is a two-way street, and if you don’t want me to break you, you have to work with me.

That’s fair — but isn’t this problem exactly what lockable dependencies would solve? No promises from the package creator to maintain backwards compatibility, but a way for the user to put a stake in the ground as to the exact versions which are known to work in a given application?

@stefan.kruger one additional issue is that if you want to ensure that things work in the future, you need to archive the code yourself, not just the version, since pkgs.racket-lang.org does not host anything. And once you do that, you’ve handled the version issue as well.

Sure — most package management systems I’m familiar with have a central package repository.

@stefan.kruger Yeah, better package system features like that would help a lot. Not entirely, because of clients that have “API dependencies” on your package, meaning their public API exposes stuff from your package, so two clients depending on different versions is much more likely to cause problems.
The other part of my position is that if I do need to break people in order to improve an API, I’ll send them pull requests myself and do it in stages.

Like if I’m just renaming something because the current name is subpar, it’s easy for me to automate sending pull requests to everyone else.

The pkg-build server (which updates package docs and test results) isn’t currently responding. I don’t have a way to reset it remotely, so it may stay down until Thursday.

Thanks! I hope people find it useful

Ah, that’s unfortunate. Thanks for the update!

@bvdbogae has joined the channel

Does anybody know something about the Racket mirroring here? I am the server administrator for Infogroep at the VUB in Belgium; we are hosting a mirror for Racket. However, it seems that the rsync server we are using is not responding anymore, and the syncing is currently not working as expected.

@bvdbogae I’ll look it up, but I guess it’s @samth or @mflatt you need.

I think @mflatt corresponded with Nils Van Geele, so I was close.

I think it’s actually @jbclements that you need but I might be wrong

Is the pkg-build server involved? (it’s down until Thursday).

No it should not be involved

It tries to sync with mirror.racket-lang.org/racket-installers. It is possible that it stopped working some time ago, but we just noticed it now with the new Racket release.

I dunno if some of you know this, but in some languages, like Elixir and Clojure and others?

There is a bot on Twitter which retweets things about what people are doing

with the language. Do we have something similar in Racket? If not, we could perhaps implement it in Racket, and I wish the community could somehow manage the bot. In my idealist world the bot could run on some computer cluster from the community, but that isn’t a requirement; it would indeed be a nice thing :racket-flat:

I was looking for this post the other day and I thought it was on the mailing list, but it was on reddit. Thank you for linking it back in here, since related questions have been coming up more frequently of late.

@mflatt no rush, but when you get a chance, I’m curious why this (not being able to remotely reset) is so. I assume the server is not with a typical provider such as Amazon EC2. Is it running on a university server?

@samth if someone would be able to further assist us to resolve this issue, please contact us at . Thanks.

@bvdbogae: not sure what you are looking for, but I found https://mirror.racket-lang.org/installers/ and https://mirror.racket-lang.org/installers/recent/

Can you use any of these instead?

@bvdbogae so good to hear from you! I’ve been sending mail to research@infogroep.be. Looks like I should direct all future email to server@infogroep.be instead?

@bvdbogae In response to your question: yes, rsync is down. Since our “did absolutely everything” machine melted, we’ve distributed its roles, and the machine that is currently hosting our bundles does not support rsync. I’ll send an e-mail with this information to server@infogroep.be. If mirroring using https is not possible, we can investigate the possibility of restoring rsync access, probably to a different hostname.

Using an old machine on the university network seemed cheap (i.e., free) and easy, so that’s what I set up. I agree that it would be better as a virtual machine somewhere.