
Today, I learned that:
> (equal? 'Π  ; \Pi
          '∏) ; \prod
#f
> (equal? 'Σ  ; \Sigma
          '∑) ; \sum
#f
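(For the record, the lookalikes really are different code points; char->integer makes the difference visible:)
> (char->integer #\Π) ; GREEK CAPITAL LETTER PI, U+03A0
928
> (char->integer #\∏) ; N-ARY PRODUCT, U+220F
8719
> (char->integer #\Σ) ; GREEK CAPITAL LETTER SIGMA, U+03A3
931
> (char->integer #\∑) ; N-ARY SUMMATION, U+2211
8721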

Moreover, when searching-and-replacing Σ → ∑ in DrRacket, it’s smart enough to match σ (lowercase Σ) as well :slightly_smiling_face:


:fearful:


At racket summer school I was asking about racket hashlangs vs nanopass, and Matt or Sriram (I don’t remember at this point, but I think it was the latter) advocated for racket hashlangs in the style of nanopass. There are pros and cons to both. With hashlangs, you lose the ability to insert/rearrange/remove layers as easily as nanopass… But I’m not getting something about this approach. I spent an inordinate amount of time figuring out how to instantiate a module in order to implement run-with-lang for beautiful racket (so you can run via ./basic blahblah.bas)… it is clearly not meant to be an easy thing…

so… if I were to implement MANY racket hashlangs for a nanopass-style architecture… how do I go from layer1 to layer2, etc.? I want to have completely separate passes for desugaring, ANF transformation, etc. (and at some point, I’d LOVE to disconnect entirely from racket as a backend and produce machine or VM code)… am I thinking about this wrong?

(the more I type this, the more I suspect I’m thinking top-down and should be thinking bottom up… but I’m (possibly) even more confused by that approach. Designing languages starts with the language, not the bottom layer mechanics)

I guess part of the pitch of the Racket approach, as opposed to the Nanopass one, is that you don’t have to disconnect from Racket to generate VM code, because the Racket compiler is doing that for you.

absolutely. you get a TON for free… and I want that, for sure… ignore my parenthetical about eventually disconnecting from racket as a backend for now.

that’s a success problem. :slightly_smiling_face:

(I love questions like “what if we get too much traffic for Heroku??” to which I answer “congratulations?”… but I’m not concerned with finishing, I’m concerned with starting)

In my experience, the Nanopass approach is more appropriate than the Racket approach when I’m building a transpiler. In that case, I’m not targeting a machine, so linguistic reuse doesn’t apply. For example, I built CSS-expressions (https://docs.racket-lang.org/css-expr/index.html) using Nanopass, and I don’t see how I could’ve used the Racket way. (Though, from having talked to Jay, I believe it’s possible, it’s just beyond my reach.)

oh god… the domain in that doco… that’s amazing

My other point is, unless you want to directly write code in the languages of every compiler pass, it’s not worth having ‘#lang’s for them. Let them just be a collection of macros expanding to other macros. Or, at most, module languages, as opposed to full ‘#lang’s with custom readers.
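To make that concrete, here’s a rough sketch of the module-language route (the file and pass names are made up): each layer’s #%module-begin runs its pass over the body and then defers to the layer below.

#lang racket/base
;; layer2.rkt (hypothetical): a module language whose #%module-begin
;; runs one pass over the body, then hands the result to the next
;; layer down. A real layer would also re-provide whatever bindings
;; the body forms need.
(require (for-syntax racket/base)
         (prefix-in layer1: "layer1.rkt"))
(provide (rename-out [layer2-module-begin #%module-begin]))

;; the actual pass (desugaring, ANF, ...) would rewrite `form` here;
;; this sketch just passes each form through unchanged
(define-for-syntax (desugar form) form)

(define-syntax (layer2-module-begin stx)
  (syntax-case stx ()
    [(_ form ...)
     (with-syntax ([(form* ...) (map desugar (syntax->list #'(form ...)))])
       #'(layer1:#%module-begin form* ...))]))

Since each layer is just a module, going from layer1 to layer2 is ordinary require/provide plumbing rather than anything reader-level.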

Are you referring to ‘http://bettermotherfuckingwebsite.com’?

@zenspider another idea is for hashlangs to provide more extension points, so that things that seem like they would require another language layer can instead hook into those extension points and thus compose with other extensions

@leafac yes… and I’m learning actual CSS from it. :stuck_out_tongue:

@notjack ummm… I’m not following that

is that basically the same as @leafac’s comment?

@zenspider things like #%app and #%datum can let you override parts of #lang racket by importing modules instead of changing the language’s reader. I think racket langs should move towards doing more of that
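For example, a tiny (hypothetical) traced-app.rkt that swaps in a logging #%app; requiring it from a #lang racket/base module shadows the language’s #%app without touching any reader:

#lang racket/base
;; traced-app.rkt (made-up name): provides an #%app that logs each
;; application before performing it. Inside this module, #%app still
;; means racket/base's, so the expansion doesn't loop.
(provide (rename-out [traced-app #%app]))

(define-syntax-rule (traced-app proc arg ...)
  (let ([p proc])
    (printf "applying ~v\n" p)
    (p arg ...)))

Then (require "traced-app.rkt") in an ordinary module is enough to change how every application in that module expands.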

I’d really like a way for requiring a module to somehow wrap all forms after the require statement in a macro defined by the required module

that’d be interesting but possibly scary. :slightly_smiling_face:

absolutely :) but if langs don’t provide it, people do it anyway with metalanguages that operate by chaining readers together and that’s way easier to get wrong

I think I’d want it to be opt-in tho… like (require (inject-in somerequire form1 form2))

that’s a good idea, and means it might be possible to implement it as a regular require transformer instead of a wrapper on top of require

I like a name like module-body-in better tho :p. inject-in makes me think it’s for some dependency injection framework

I’d want to combine this with making #%app / #%datum / etc. syntax parameters instead of magic identifiers inserted by the reader; that way a module-body-in form could override #%app by wrapping the module body in a syntax-parameterize form
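A minimal sketch of that mechanism, with an ordinary syntax parameter standing in for #%app (my-app, noisy-app, and with-noisy-apps are all made-up names; #%app itself isn’t a syntax parameter today):

#lang racket/base
(require racket/stxparam
         (for-syntax racket/base))

;; stand-in for #%app as a syntax parameter (hypothetical)
(define-syntax-parameter my-app
  (syntax-rules ()
    [(_ f x ...) (f x ...)]))

;; a transformer that announces each application before doing it
(define-for-syntax noisy-app
  (syntax-rules ()
    [(_ f x ...) (begin (displayln "app!") (f x ...))]))

;; what a module-body-in-style form could do: wrap the body and swap
;; the parameter's meaning for just that body
(define-syntax-rule (with-noisy-apps body ...)
  (syntax-parameterize ([my-app noisy-app])
    body ...))

(with-noisy-apps
 (my-app + 1 2))  ; prints "app!", evaluates to 3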

How can one create syntax for a keyword argument? I want to do something like #'(lambda (a #:b c) ...) where #:b c is generated from some identifiers in a macro.

@cfinegan you might be interested in #,@ or using ...

@cfinegan (with-syntax ([foo-kw (make-keyword-dynamically ...)]) #'(lambda (a foo-kw c) ...)) ought to work

maybe

see this part of the docs for with-syntax in particular:
> However, if any individual stx-expr produces a non-syntax object, then it is converted to one using datum->syntax and the lexical context and source location of the individual stx-expr.

the basics:
> (define val 'z)
> (define key '#:y)
> #`(lambda (x #,key #,val) 42)
#<syntax:12:2 (lambda (x #:y z) 42)>

^ and that example is using quasisyntax and unsyntax, and unsyntax performs the same conversion that with-syntax does

But what if the y part of #:y is something I want to be generated from an identifier? Can I unquote just the part that is the name?

@cfinegan You can use string->keyword to produce a keyword dynamically from a string
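For example (names made up):

;; build the keyword from an identifier, then splice it in
(define name-id #'speed)
(define kw (string->keyword (symbol->string (syntax-e name-id))))
#`(lambda (a #,kw c) c)  ; => syntax for (lambda (a #:speed c) c)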

awesome that sounds like exactly what I need, i’ll look into it. Thanks!

:+1:

does it still count as 3D syntax if a syntax object contains a non-prefab struct, but that struct has an implementation of gen:custom-write?

yes

why?

custom-write converts things to strings. perhaps the better question is about serializable structs?

more pointedly, custom-write does not have any notion of reading the result. 3D syntax is about things that aren’t roundtrippable.

No, I mean custom write specifically. The case I’m thinking of is where a module is read with a reader that produces syntax objects containing structs to represent certain special kinds of syntactic constructs (as opposed to using tagged s-exps), and those structs implement custom write in such a way that the read-write contract holds but only if the custom reader is used

so it would round-trip as long as the extended reader was used

the notion of the reader is long gone by the time syntax objects are marshalled to bytecode

that would mean bytecode loading would need to call a reader, which is a string parser

that sounds like a very bad idea to me

fully expanded syntax objects are the only syntax objects that are marshalled to bytecode, correct? this use of syntax objects wouldn’t persist that far

I don’t really understand what you’re asking. 3D syntax is only a problem when syntax objects are marshalled to bytecode; that’s the whole reason the notion of 2D/3D syntax exists, IIUC.

If your syntax isn’t getting marshalled to bytecode, you can put whatever values you want in your syntax.

The way this relates to marshalling is: if I did have a reader that did this and expected the lang it was used with to override #%datum in order to detect these special structs, that would work fine and these syntax objects wouldn’t exist after expansion. But using the reader with a lang that uses the default definition of #%datum would expand it into (quote <stx object>), and that would result in a 3D syntax object surviving to a fully expanded module.
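For reference, a rough sketch of the lang-side override I mean (every name here is hypothetical, and the node struct would have to live in a module shared between the reader and this module’s for-syntax code so the struct types actually match, which is exactly the phase question below):

#lang racket/base
;; node-lang.rkt (hypothetical): a #%datum that recognizes the reader's
;; special struct and compiles it away, so nothing 3D survives expansion.
(require (for-syntax racket/base "node.rkt"))  ; "node.rkt" defines (struct node (payload))
(provide (rename-out [node-datum #%datum]))

(define-syntax (node-datum stx)
  (syntax-case stx ()
    [(_ . d)
     (node? (syntax-e #'d))
     ;; turn the struct's payload into ordinary quoted data
     (with-syntax ([p (node-payload (syntax-e #'d))])
       #'(quote p))]
    [(_ . d)
     ;; anything else gets racket/base's #%datum as usual
     #'(#%datum . d)]))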

So I guess my question is: should the default #%datum form raise an error when given a 3D syntax object?

Should it be an error? I don’t think it should be an error… not currently, at any rate. Whether or not it will do what you want is a separate question. I’m not entirely sure what “phase” the reader runs at (since it doesn’t really have one, currently), but I would imagine it is different from macroexpansion phases, so your struct might end up being from no phases at all… and it would at least need to be cross-phase persistent to be defined behavior.

I think the reader and the expander share namespaces

@robby Are the indentation settings for racket text% editors defined in framework?

If so could you point me to it? (Searching the docs appears to have failed me.)

I don’t know what you mean by indentation settings exactly

What purpose are you trying to achieve?

Checking indentation, specifically:

At the moment I have an object that satisfies racket:text<%>; I can call one of the tabify functions on it to indent it.

However, I also want the ability to change some of the tabbing rules, much like in the DrRacket preferences window.

In fact, exactly like in the DrRacket preferences window.

I’m just looking for a programmatic way to do that. :wink:

If you know how I could do that that would be awesome. :smile: (If not thanks anyway though.)

preferences:get and preferences:set

You can avoid updating your own preferences file if you use:

preferences:low-level-put-preferences and the getter

(set up a hash table and avoid writing to the file)
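A sketch of that redirect, assuming you just want everything to live in a hash for the session (untested):

#lang racket/base
(require framework)

;; keep preference reads/writes in memory instead of the prefs file
(define prefs (make-hash))

(preferences:low-level-put-preferences
 (λ (names vals)
   (for ([n (in-list names)] [v (in-list vals)])
     (hash-set! prefs n v))))

(preferences:low-level-get-preference
 (λ (name [fail (λ () #f)])
   (hash-ref prefs name fail)))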

You can look at the difference between the files before and after you make the modification via the drracket gui

and you probably also want to look in framework/private/main, I believe.

hth

Ok, cool

So basically try it in drracket and check the preferences file to see the exact preference? Cool, much appreciated. :smile:

or read the code I pointed you to, in order to see what the settings are.

or both, I suppose

Cool, thanks.

Although hmm…the preferences file doesn’t seem to be in my ~/Library/Racket folder.

See find-system-path

facepalm ah, thanks.

~/Library/Preferences
…ah, okay. Thanks. :slightly_smiling_face:

@robby Ah, okay, plt:framework-pref:framework:tabify, thanks. :slightly_smiling_face:

when you call preferences:set you don’t pass the plt:framework-pref part

just framework:tabify? Makes sense. Thanks a lot for the help.
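So for my purposes, something like this (sketch; I still need to check what shape the stored tabify value actually has):

(require framework)
(define tabify-settings (preferences:get 'framework:tabify))
;; ... tweak the tabbing rules in tabify-settings here ...
(preferences:set 'framework:tabify tabify-settings)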

@robby or @leif or anyone who’s worked with images in DrRacket: is there a reason make-bitmap requires its width/height arguments to be strictly positive? Tonight, two students pasted into their code the rendered output of empty-image, i.e. a 0x0 image, and my Racket->text converter choked on it because make-bitmap threw an error.

I guess one of the underlying bitmap construction operations on one of the platforms won’t create a zero-sized bitmap

The image library has a bunch of special cases for this reason.

(I don’t actually know why tho. But that’s my guess.)

gotcha. So in my code, I can use a workaround with (equal? snip 2htdp:empty-image) and short-circuit the problem, but it seemed kinda hackish to me :wink:

@blerner Ya, fwiw, I suspect Matthew has done the most with that particular bit of the code (bitmap%)

Although can I ask why you are using bitmap% in the first place?

(I mean, don’t get me wrong, it’s frequently the right tool for the job, but pict is usually much nicer when it comes to silly edge cases like this. :wink: )

I’m using neither; I’m walking the wxme tree of snips, finding the convertibles, and converting them to pngs

do you happen to know how to ask a convertible what its width will be, if I try to convert it?

assuming I don’t know what type of snip it is, merely that it’s convertible?

Oh wow, if you’re using convert to convert to a png and you’re getting an error, that’s probably a bug.

So short answer, no, I don’t think you can do that. Long answer, that’s probably a bug…can you give me a sample?

so, (convert empty-image 'png-bytes) will trigger the contract error

You can rig it with some image snip if you want, but that’s simple enough as it is

@blerner Interesting, this is certainly a bug.

also, really annoyingly: (convert empty-image 'png-bytes 'fallback) doesn’t give me 'fallback; it dies with the contract error first.
As per the docs…

yup

which is why it’s a bug. :wink:

Either: A) It should not say it’s convertible?, or B) It should return fallback (which is by default an error, iirc).

Although… it’s probably a bug in the 2htdp library.

I’d rather B: I don’t want image snips to not claim that they’re convertible things, but I do want a decent error-handling mechanism

Ya, this is almost certainly a bug in 2htdp; for example, when trying to convert blank, I get:

> (convert (blank 0) 'png-bytes)
#"\211PNG\r\n\32\n\0\0\0\rIHDR\0\0\0\1\0\0\0\1\b\6\0\0\0\37\25\304\211\0\0\0\rIDAT\b\231c\370\377\377?\3\0\b\374\2\376\205\315\2534\0\0\0\0IEND\256B`\202"

I ‘think’ @matthias manages that library? I’ll talk to him about it tomorrow morning if that works for you?

(Unless you need something sooner.)

well, I can workaround it with a with-handlers on the exn:fail:contract error, but it’s ugly. I’d like a cleaner long-term solution

@blerner Okay. Fwiw, something like this will solve your immediate problem, I’ll talk to Matthias tomorrow about getting it fixed properly:
#lang racket
(require (prefix-in file: file/convertible)
         pict)
(define (convert v request [default #f])
  (with-handlers ([exn:fail:contract?
                   (λ (e)
                     (case request
                       [(png-bytes) (convert (blank 0) 'png-bytes default)]
                       [else (raise e)]))])
    (file:convert v request default)))

Thanks. For now, I’m using a similar handler, but I’m not outputting the empty image itself. In a minor sense, it’s better that I handle the error, because a 0-pixel image is very hard to see on a monitor… :slightly_smiling_face:

LOL, very true. Which is why I always (aka never) use zero-width space characters in my racket variables. :wink:

Anyway, I need to be up early in the morning, so if that works for the moment I’m off to bed. :slightly_smiling_face:

thanks

Thank you for finding the bug.

don’t thank me; thank students with wonky submissions!

Found another bug too (filed in git) about indentation and highlighting; not urgent, just weird
