
How come literal fx/flvectors are not allowed in read-syntax mode?

I am trying to understand what happens when one does racket -l errortrace -t prog.rkt; however, I can’t seem to find the source code for errortrace in the racket source. Where is this located?

@notjack i was looking at syntax-warn. My main use case is to develop something like what I discussed in: https://groups.google.com/d/msg/racket-users/UtS3tov0u40/Xg0eOr3BAQAJ

@notjack the idea is to have a config file set for my project such that it becomes part of CI to check if the commit follows the guidelines and if it doesn’t, the commit gets reverted. Or is never committed if you use gerrit or a pre-commit hook.

It looks like a lot of the stuff is in place, but I cannot understand where to start to set a project-wide file with the syntax warnings I want.

> I can’t seem to find the source code for errortrace in the racket source. Where is this located?
1. Do raco pkg install raco-find-collection to install @asumu’s great utility.
2. Thereafter, do e.g. raco fc errortrace to see where it is on your system. (In racket-mode, after doing 1, you can M-x racket-find-collection.)
OR
In DrRacket try Open Require Path on the File menu. In racket-mode try M-x racket-open-require-path. These are incremental-search style.

Oops didn’t @pocmatos

@greg awesome, was unaware of this. thanks.

answer is here in a flash: racket-6.12/share/pkgs/errortrace-lib/errortrace
thx

I didn’t know about errortrace - where does it shine in comparison to DrRacket debugging?

Not 100% sure but will take a stab at this. errortrace is a library that takes a Racket program and rewrites it to be one that is “instrumented” for better stack traces, profiling, and/or coverage

DrRacket error reporting uses errortrace

When using the racket command line, just doing racket -l errortrace -t ‹prog› will get you the better stack traces, as described here.


As @lexi.lambda mentioned and I was about to say :smile:, DrRacket also uses the library to annotate programs.

To insert a “break?” function call (essentially) at every single step-able point.
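Very roughly, and purely as an illustration of the idea (not errortrace’s or the debugger’s actual output; break-here! is a made-up stand-in for the real hook), the rewrite is something like:
#lang racket/base
;; break-here! is hypothetical: a stand-in for the debugger's real hook
(define (break-here! where)
  (printf "reached step point: ~a\n" where))
;; original: (define (f x) (* x 2))
;; the instrumented version conceptually becomes:
(define (f x)
  (break-here! 'entering-f)
  (* x 2))
(f 21)  ; prints the step point, then returns 42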

OK, so it is both part of and complementary to DrRacket rather than an alternative

Ergo the step debugger on your program is really rewriting your program to be a step-debuggable version of itself.

Also ergo: Much slower. :slightly_smiling_face: This isn’t hardware breakpoints.

I will give it a try then, thank you both!

In DrRacket, you can try it indirectly by choosing one of the profiling or debugging levels.

See Language | Choose Language, the radio buttons in the upper-right corner

ahh thanks, wouldn’t have found it myself!


it’s a pretty weird place for debugging properties… methinks

Generally the whole menu location and operation of the Choose Language dialog is weird if you ask me ¯\_(ツ)_/¯

But that is where the pot of gold is hidden. Now you know. :slightly_smiling_face:

wonders why we don’t have a Now you Know! emoji

while we are at this - are there any resources (guides) that cover the racket debugging process? I wonder how many more gems I wasn’t aware of!

So maybe this deserves a blog post, and it would be different among us, but: I haven’t used the DrRacket step debugger in, idk, years. For me, it’s sufficient to have better stack traces (via DrRacket or the racket-mode option enabling that via errortrace). Plus the REPL. Plus print. :slightly_smiling_face: Seriously. When I wrote C and C++, I was zealous about “the first time you run new code, step through it in a debugger to see what it does”. But with Racket, I build things up in the REPL, and that is my “step debugging”.

Also for things like a long-running web server, print ~= using Racket’s logging with a reasonably well-thought-out approach to what logger names and levels to use.
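A minimal sketch of what I mean (the logger name, the handle-request function, and the levels here are just illustrative, not from any real project):
#lang racket/base
(define-logger wiki)  ; defines wiki-logger plus log-wiki-debug, log-wiki-info, log-wiki-warning, ...
(define (handle-request path)
  (log-wiki-debug "handling request for ~a" path)
  (when (string=? path "/missing")
    (log-wiki-warning "no page at ~a" path))
  "response body")
(handle-request "/index")
;; run with e.g. PLTSTDERR="debug@wiki" racket prog.rkt to see the debug-level messages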

I don’t have any other “gems” to suggest, unless I’m forgetting something, but maybe other people do.

As a p.s.: the macro stepper — absolutely. If you don’t have to use it, that’s wonderful. But when you do need it, it’s awesome.

coming from .NET that’s what I got used to (stepping through the code in VS); with Racket I mostly do print, so I wondered if I might be missing something :smiley:

Oh, right. I swung by here actually to mention something else. Strange Loop and RacketCon (and ICFP IIRC?) are at the same time this year. Call for presentations has opened: https://thestrangeloop.com/cfp.html Why not propose to give a talk at both? :slightly_smiling_face:

Also there’s an opportunity grant: https://thestrangeloop.com/opportunity.html

@tetsumi It wasn’t obvious whether a shallow reading of fx/flvectors (i.e., content as numbers, as opposed to syntax objects) would be a good idea. If there’s demand for a shallow reading, and unless there’s some other obstacle I forget, we could change the reader to allow it.

@mflatt I’ve been spending some effort trying to clarify some of the docs around first-class definition contexts, and one of the things I’ve run into is how to talk about the binding environment. Is there any place in the reference in which the binding environment is discussed? As far as I can tell, the Syntax Model section describes two things: scope sets and the global binding table. But the global binding table is distinct from the local binding environment, which, unless I’m overlooking something, seems to be ignored.
I’m also a bit curious about the phrasing around “lexical information”. The documentation describes lexical information as a distinct concept from scope sets, specifically by stating that “The lexical information of a syntax object is its scope set combined with the portion of the global table of bindings that is relevant to the syntax object’s set of scopes.” However, the following statement that “The lexical information in a syntax object is independent of the rest of the syntax object, and it can be copied to a new syntax object in combination with an arbitrary other Racket value” seems incongruous to me: the scope set can be copied from syntax object to syntax object, but surely not the global binding table? It’s global. I realize that, in practice, the binding table is not actually global and is instead distributed across the individual scopes, but this seems inconsistent with the abstraction provided by the documentation, which treats the binding table as truly global and defines binding lookups entirely in terms of scope subset operations.

Maybe this seems like nitpicking, and I imagine the distinction isn’t very important for the average user of the macro system, but I do want as much precision as I can manage, since it matters for my own understanding of the macro system.

Ideally, the docs would not talk about the expand-time environment, which seems like an implementation term. I would try to phrase it as the “expansion context”, or something like that, but I don’t know whether that would be the right choice. “Lexical context” is a similar attempt to avoid committing to an implementation, which paid off in the renames->sets transition. But I agree that it’s not all that clear, and it may be better to commit to scope sets instead of trying to hold “lexical context” abstract.

@pocmatos You can’t set automatic project-wide defaults with a file yet, unfortunately. Tracking issue: https://github.com/jackfirth/syntax-warn/issues/46

However you could reduce the overhead by defining your common warning config in a module and requiring it

Each of your modules would then look something like this:
#lang racket/base
(module warning-config racket/base
  (require my/project/warning-config)
  (define config my-warning-config)
  (provide config))
... module implementation ...

@mflatt Wrt the expand-time environment, I see why it would be nice to avoid talking about it, but the fact that the docs don’t talk about it has been very confusing to me in the past. It means there is no precise definition of what it means for an identifier to be used out of context, no precise explanation for why syntax-local-value might sometimes report an identifier as unbound when identifier-binding says it is bound, and no precise explanation for why providing a first-class definition context to local-expand is necessary in addition to adding the scopes with internal-definition-context-introduce. Those have all been big sources of confusion for me.
Also, I’m fine with using the “lexical information” term, since I agree that it makes sense to try and maintain some sort of abstraction from the implementation details of the expander when possible. It just seems like maybe it should be defined as precisely equivalent to the syntax object’s scope set, alongside a perhaps higher-level description of the meaning decoupled from the operational details of scope sets (which can be described as the low-level implementation of lexical information).

(In some sense, I think what I’m getting at is that the desire to have an abstraction makes sense, and that abstraction works well for most macro users, but once you get down into the details of first-class definition contexts and manipulating individual scopes with syntax introducers, the abstractions get pretty leaky.)

There are various tricks you could do to make adding that to every module less annoying. For instance, you could define a macro in a helper module that expands to a definition of such a module:
(define-syntax-rule (warning-config)
  (module warning-config racket/base
    (require my/project/warning-config)
    (define config my-warning-config)
    (provide config)))
Then your modules would look like this:
#lang racket/base
(require ...)
(warning-config)
... implementation ...

Could also make a small custom #lang that wraps racket/base, like #lang my/project/racket/base, and automatically add the module that way
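A rough, untested sketch of what that wrapper module might look like (the my/project/... paths are the hypothetical ones from above, and a real #lang my/project/racket/base would also need a lang/reader submodule, e.g. via syntax/module-reader):
;; my/project/racket/base.rkt  (hypothetical path)
#lang racket/base
(provide (except-out (all-from-out racket/base) #%module-begin)
         (rename-out [config-module-begin #%module-begin]))
;; a #%module-begin that splices the warning-config submodule into every module
(define-syntax-rule (config-module-begin form ...)
  (#%module-begin
   (module warning-config racket/base
     (require my/project/warning-config)
     (define config my-warning-config)
     (provide config))
   form ...))
Modules could then use it right away via #lang s-exp with that module path, or via the #lang shorthand once the reader submodule exists.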

personally I’d really like it if the syntax APIs just committed wholesale to the existence of scopes and scope sets, with an exposed scope? predicate and functions like make-scope instead of make-syntax-introducer

I find that easier to learn and I especially find it easier to remember

I haven’t worked on this project since the RacketCon I presented it at and I’ve forgotten some of these details. For instance, I said the other day that there’s no way to configure things - that was totally wrong, there’s submodule config but you can’t configure things at the command line or through project-wide defaults.

I agree that I would personally find such an API easier to use, but I think there are two significant downsides to such an API: it is pretty low-level, so it makes it harder to understand for casual macro users who don’t need to understand the implementation details of the Racket macro system, and it would have caused significant problems during the marks+renames->scope sets change, as Matthew mentioned.

One thing I have thought about/discussed with people in the past is the usefulness of splitting the macro system API into two levels of abstraction, which addresses the first problem but not the second.

huh, to me it feels high-level compared to the current api

The idea of names like syntax-local-introduce and make-syntax-introducer is that they describe what the functions are for, not how they are implemented.

Otherwise they would be called syntax-local-flip-use-site/macro-introduction-scopes and make-scope.
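For concreteness, a tiny sketch of the current API; the “fresh scope” framing in the comments is the scope-sets mental model, not official terminology:
#lang racket/base
;; make-syntax-introducer ~ "make one fresh scope"
(define intro (make-syntax-introducer))
(define stx #'here)
(intro stx 'add)   ; the same identifier with the fresh scope added
(intro stx 'flip)  ; 'flip (the default) also adds it here, since stx lacked the scope
(intro (intro stx 'add) 'remove)  ; adding then removing gets back the original scope set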

I do think that the notion of a “scope” is much easier to think about abstractly compared to marks… users can understand what a “scope” is in terms of their understanding of lexical scope in other languages without macro systems.

So leaking the scope terminology into the docs and the API is probably not quite so bad. But my understanding of syntax-local-introduce is that it was originally intended to be a lighter way to break hygiene without resorting to heavier things like datum->syntax, hence the name. That said, I get the sense it’s mostly failed at that purpose.

I understand that the idea of the current names is/was to not leak implementation details

Thanks, makes sense. I have been thinking in the past few days that the community is large enough to have a weekly newsletter that would highlight new packages going into the repo, interesting packages being used by people, Racket blog posts, etc. It sounds like it could really help highlight what the racket community is doing, working on, etc. I love newsletters like the Rust weekly newsletter, so my opinion might be biased.

Thanks. I will take a look into that. It can’t be that hard to add a project-wide config file instead of having a per-module config.

They didn’t feel high level to me though because the implementation is/was extremely difficult to intuit

I didn’t have a good way of understanding what the general model even was

What do you think of using a module instead of a command-line tool to check and fix warnings? That would make a project-wide config much, much easier to implement

I mostly agree with you, which is why I said earlier the abstractions get pretty leaky at that point. I’m just trying to be understanding about the aspirational intent of the existing names. :)

@mflatt @cadr and I have a question about the interaction of prompts and continuation marks and parameters. In particular, it seems like there’s always a parameterization associated with the parameterization-key continuation mark, even in situations where the semantics suggest that nothing would be there. Here’s a program that shows the issue — I would expect all the calls to print #f, but 2 and 4 show a parameterization:
#lang racket
(require '#%paramz)
(define t (make-continuation-prompt-tag))
(define (test k tag)
  (call-with-continuation-prompt
   (λ ()
     (print
      (continuation-mark-set-first #f k #f tag)))
   tag
   (λ _ (error 'fail))))
(test 'p (default-continuation-prompt-tag)) (newline)
(test parameterization-key (default-continuation-prompt-tag)) (newline)
(test 'p t) (newline)
(test parameterization-key t) (newline)

a scopes API feels high level to me because I can look at a function like syntax-local-introduce and guess that it’s implemented with something like (syntax-scopes-flip stx (syntax-local-scopes))

(assuming a documented and public syntax-local-scopes function)

@samth @cadr The correct name for parameterization-key would be unsafe-parameterization-key, but it predates the convention. The unchecked constraint that makes parameterization-key unsafe is that it should be used only with a particular internal prompt tag. Based on that unchecked assumption, there are some special cases in the Racket/Rumble layer to produce a parameterization-key value under all circumstances (because that simplifies the implementation of parameters).

For now, looking at “src/cs/rumble/control.ss” is the best way to find out how parameterization-key is handled. In principle, that would be part of a Rumble spec.

@mflatt Right, I understand that it’s internal but we’re trying to get the implementation right in Pycket. It seems that there’s either a “look past a prompt” behavior when looking for the continuation mark associated with parameterization-key, or every prompt installs a parameterization associated with that key when it’s created. Is one of those the right behavior?

Conceptually, every thread starts with a prompt for the-root-continuation-prompt-tag and (within that prompt) a mapping for parameterization-key. And the value for parameterization-key is always retrieved using the-root-continuation-prompt-tag.

ok, that makes sense (and was not one of the possibilities I had thought of) so thanks

you already have raco warn in place with a suitable command-line switch --config-submod. Why not have a raco warn with --mod, which is a module with the configuration that can default to warnings.rkt in the project root or something.

@lexi.lambda @notjack I’m all for improvements to the documentation, terminology, and API along those lines – and hoping that you’ll get to them before I would!

I’m currently interested in improving the documentation! I’m working on a PR as we speak. I’m just feeling like I need to mention the expand-time environment somehow. If I added a subsection to the Syntax Model section that discussed it in an appropriately-abstract way, would that be a reasonable thing to do?

Yes, that sounds right

Okay, I’ll give that a shot. Thank you.

@mflatt eventually! :)

Yup that works too. My thinking is that switches like that usually get used in CI scripts; they’re not often typed out by a human trying to use the tool interactively. And if it’s being used in a script - which is a program in and of itself - why not just do it in a racket module with a language where you get editor support and more safety?

@mflatt thanks for replying (about literal fxvectors) but i fail to understand. would you please give an example where a literal fxvector would be problematic?

this really should be a section in the guide… (debugging)

@mflatt Would you prefer the terminology “binding environment”, “binding context”, or something else for the expander’s environment that maps bindings to transformers?

I can’t really use “expansion context” because that’s currently used to mean top-level/module-begin/module/expression/internal-definition.

@mflatt @robby @jeapostrophe This one’s a doozy. Apparently: installed-packages + DrRacket + symlinks = :disappointed:. https://github.com/racket/racket/issues/2050

I’m not even sure where to begin digging for this one. :confused:

Is there any way to access the bytes of a string instead of doing a deep copy?

Strings are semantically sequences of unicode code points, not bytes. Why do you want the bytes/what do you want to do with them?

crc32 checksum

something like (immutable-string->immutable-bytes …) would be comfy

What encoding are you using?

utf8

Why isn’t string->bytes/utf-8 good enough? You don’t want to copy the data?

i would prefer not

Are you worried about the memory footprint of duplicating the data in memory or the cost of reencoding?

both

How big is the data in question?

@tetsumi Can I ask how big the strings are that you plan on dealing with?

@lexi.lambda lol. :slightly_smiling_face:

I’d ask a slightly different question: have you measured this to objectively show that it is actually a problem?

@tetsumi Basically, depending on what you’re doing, this can be a bad idea.

i don’t really know, i am implementing a small wiki

@tetsumi you can do it, but it’s very unsafe.

i may have found a solution by using string-utf-8-length with unsafe-string-ref

@zenspider I was trying to be a little more socratic, but that’s more or less what I was getting at.

md5 function doco pointed me at (hex-string->bytes str)

it’d be nice if it didn’t require an input-port, but that seems easy enough.

what shall be the result of (char=? (string-ref "≠" 0) #\≠)?

Why don’t you run it and see?

it gave me #f

It gives me #t. How are you running it?

racket-mode repl in emacs

maybe due to how emacs encodes the input

it gives me #t in Emacs racket-mode repl

check your Emacs encoding

What does the (string-ref ...) call return for you?

it does return #`

Ah, okay, so it seems like for the rest of us it’s #\≠

Where are you located?

(Namely what language are you using, also what OS are you using?)

@tetsumi try C-h v buffer-file-coding-system and see what its reported value is

i am from western europe, my os is archlinux and the language is english. @abmclin Its value is ‘utf-8’

hmm, not sure why you’re not getting #\≠ then, sorry am not of much help

sounds like it could be an OS-level setting, instead of emacs, since the racket repl is communicating back to a Racket process running under archlinux

Can you try running:
(char-utf-8-length (string-ref "≠" 0))

As well as:
(char-utf-8-length #\≠)

@abmclin no problem, thanks anyway. @leif 1 and 3

Ya… something’s up here. Can you run:

(current-locale)

gives ""

For me it’s just the empty string

Hmm… okay. So I’m inclined to think it’s not Racket proper, and it’s not an emacs thing based on @abmclin…

so, can you try the experiment again in DrRacket, just to make sure?

@abmclin The racket-mode REPL does do an explicit (set-process-coding-system (get-buffer-process racket--repl-buffer-name) 'utf-8 'utf-8), which I think should cover it. The length of this chapter is one of the few things that makes me sad about Emacs: https://www.gnu.org/software/emacs/manual/html_node/emacs/International.html#International

the only settings about encoding in my emacs init file are:
;; enable UTF-8
(set-language-environment "UTF-8")
(prefer-coding-system 'utf-8)
(set-default-coding-systems 'utf-8)
(set-terminal-coding-system 'utf-8)
(set-keyboard-coding-system 'utf-8)
(setq default-buffer-file-coding-system 'utf-8
      x-select-request-type '(UTF8_STRING COMPOUND_TEXT TEXT STRING))

@tetsumi if you put point on the ≠ character in the REPL, then do C-x C-u =
, what do you get? I get: position: 199587 of 199610 (100%), column: 30
character: ≠ (displayed as ≠) (codepoint 8800, #o21140, #x2260)
preferred charset: unicode (Unicode (ISO10646))
code point in charset: 0x2260
script: symbol
syntax: . which means: punctuation
category: .:Base, c:Chinese, h:Korean, j:Japanese
to input: type "C-x 8 RET 2260" or "C-x 8 RET NOT EQUAL TO"
buffer code: #xE2 #x89 #xA0
file code: #xE2 #x89 #xA0 (encoded by coding system utf-8-unix)
display: by this font (glyph code)
mac-ct:-*-Menlo-normal-normal-normal-*-12-*-*-*-m-0-iso10646-1 (#x7F8)
Character code properties: customize what to show
name: NOT EQUAL TO
general-category: Sm (Symbol, Math)
decomposition: (61 824) ('=' '̸')
There are text properties here:
face font-lock-string-face
fontified t
front-sticky t

drract gives the correct result

drracket*

Okay ya. In that case I won’t be of much help. But it’s probably either something with your emacs installation, or the pipe that racket-mode uses… sooo… @greg? :slightly_smiling_face:

defers the problem to Richard Stallman

@greg C-x C-u opens a prompt saying “You have typed C-x C-u, invoking disabled command upcase-region. …”

Ugg sorry I meant C-u C-x =

position: 2377 of 2435 (98%), column: 37
character: ≠ (displayed as ≠) (codepoint 8800, #o21140, #x2260)
preferred charset: unicode-bmp (Unicode Basic Multilingual Plane (U+0000..U+FFFF))
code point in charset: 0x2260
script: symbol
syntax: . which means: punctuation
category: .:Base, c:Chinese, h:Korean, j:Japanese
to input: type "C-x 8 RET 2260" or "C-x 8 RET NOT EQUAL TO"
buffer code: #xE2 #x89 #xA0
file code: #xE2 #x89 #xA0 (encoded by coding system utf-8)
display: by this font (glyph code)
xft:-PfEd-Unifont-normal-normal-normal-*-16-*-*-*-d-0-iso10646-1 (#x2263)
Character code properties: customize what to show
name: NOT EQUAL TO
general-category: Sm (Symbol, Math)
decomposition: (61 824) ('=' '̸')
There are text properties here:
fontified t
front-sticky t

okay it’s fixed, thank you all.

@tetsumi Oh. Good. How?

i am embarrassed, it’s my fault. a few hours ago, i typed in the repl
(require (filtered-in (λ (name)
                         (regexp-replace #rx"unsafe-" name ""))
                      racket/unsafe/ops))

completely forgot

Oh and (char=? (unsafe-string-ref "≠" 0) #\≠) is #f.

oh so you were actually using an unsafe version of string-ref and corrupted something in the Racket process?

ah I see cool

yes unsafe-string-ref was aliased as string-ref

Oh and the doc says: “The unsafe-string-ref procedure can be used only when the result will be a Latin-1 character.” Huh.

Well, to take a step back, I think Lexi and Leif had some good suggestions. For a small wiki, it might make sense to start off not prioritizing speed and space? I mean it’s up to you, but I might try the thing where I make it work, first, then if/as necessary make it faster, later.

On the other hand, you’re discovering nooks and crannies of Racket I never knew about, so if that’s your goal, that’s awesome. :slightly_smiling_face:

i am implementing crc32-c, i should probably use the safe operations during development and only turn on the unsafe operations after it’s done. http://pasterack.org/pastes/58907
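something like this maybe, as a rough sketch of the toggle idea (the flag and the aliases are just illustrative names, not from the paste above):
#lang racket/base
;; develop against the safe ops, flip one flag once the implementation is known correct
(require racket/unsafe/ops)
(define use-unsafe? #f)
(define bref (if use-unsafe? unsafe-bytes-ref bytes-ref))
(define bxor (if use-unsafe? unsafe-fxxor bitwise-xor))
;; the crc inner loop would then call bref/bxor instead of bytes-ref/bitwise-xor directly
(bref #"abc" 1)  ; => 98 either way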

Oh cool. Yeah, myself, I guess the years have worn down the rough edges. Nowadays, edge cases mostly just make me… edgy. :slightly_smiling_face:


hello, I’m really having problems getting this to type.

I have the impression that the (polymorphic) hash-ref function should have a few more cases than the type-checker actually considers.

Or maybe I’m just missing something important.

My usual approach is to spam the code with anns and asserts. To no effect on this one.

@joergen7 Try using (hash-ref my-hash "blub" (λ () '())) instead of (hash-ref my-hash "blub" '()). TR can’t cope with an arbitrary value for the failure-thunk argument.
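A minimal sketch of the difference (my-hash and its type here are just illustrative stand-ins for yours):
#lang typed/racket
(define my-hash : (HashTable String (Listof Integer))
  ((inst make-immutable-hash String (Listof Integer)) '()))
;; type-checks: the failure argument is a thunk
(hash-ref my-hash "blub" (λ () '()))
;; the version from above that TR couldn't cope with: a bare (non-thunk) default value
;; (hash-ref my-hash "blub" '())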

interesting.

it works.

I was sure the type spec said it should also work without a thunk.

no, you’re right, I’m just blind. Thanks a lot.

I ran into a similar misunderstanding with typed Racket’s log function. But I guess I’ll just have to accept that Racket and typed Racket are two completely separate languages.

is the wiki meant to be a web app / website of some sort? as a general rule of thumb, network-based programs will spend most of their time waiting on other servers and on bytes getting shuffled over the network / filesystem, so I think you won’t see any speedup at all with unsafe string operations

(also, cool project!)

@gregor.kiczales has joined the channel

I’m trying to implement metadata tags that will allow our HtDP students to make more formalized annotations about their design. This will also allow us to implement some grading support technology we’re working on. As a small example, we want students to write a simple function design as follows:
(@Problem 4)
(@HtDF fraction)
;; Number Number -> Number
;; produce result of dividing smaller number by larger
(check-expect (fraction 3 4) .75)
(check-expect (fraction 4 3) .75)
(check-expect (fraction 2 2) 1)
(@template Number add-param)
(define (fraction x y)
  (if (< x y)
      (/ x y)
      (/ y x)))
I’m working on providing basic syntactic checking for the @tags. What I’m stuck on is arranging for a tag like (@HtDF fraction) to raise an error if the function fraction is in fact not defined.
I was hoping to find some kind of generic module evaluation cleanup phase that @HtDF could leave work behind for. I sort of managed to beat this into the test engine, but it doesn’t work well. I’m reluctantly starting to think that I’m going to have to do something like rewrite-module does (htdp/htdp-lib/lang/run-teaching-program.rkt), in terms of adding my own cleanup phase, so I can check to be sure the functions referenced by @HtDF have been defined. But ugh. That seems like a lot of work.
Is there some game I can play with phases to check the existence of a definition after all the definitions are processed? Here’s what I have now, which is just playing around and doesn’t really work. The use of check-expect-maker was just an experiment. It ends up achieving the right testing, but it creates a test that doesn’t really exist.
(define-syntax (@HtDF stx)
  (syntax-case stx ()
    [(_ id ...)
     (let ([id-stxs (syntax-e #'(id ...))])
       (when (empty? id-stxs)
         (raise-syntax-error #f "expected at least one function name after @HtDF" stx))
       (for ([i id-stxs])
         (unless (symbol? (syntax-e i))
           (raise-syntax-error #f "expected a function name" stx i)))
       ;; experiment 1 (commented out): record a "wish" with the test engine
       #;
       #'(for ([id-stx (list 'id ...)])
           (send (send (get-test-engine) get-info) add-wish id-stx))
       ;; experiment 2 (commented out): piggyback on check-expect-maker; it checks the
       ;; bindings at run time but creates a phantom test
       #;
       (check-expect-maker stx #'check
                           #'(list (identifier-binding #'id 0 #t) ...)
                           (list #'(list #'id ...))
                           'comes-from-@HtDF)
       #'(values))]))
(define (check tst actual src engine)
  (for ([bdg (tst)]
        [stx actual])
    (unless bdg
      (raise-syntax-error #f "No function definition" tst stx))))
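
One possible direction, as an untested sketch only (check-defined is a name I’m making up here, and the hygiene/scope details of lifted forms may need adjusting, e.g. with syntax-local-introduce): lift the check to the end of the module with syntax-local-lift-module-end-declaration, so it is expanded after all of the module’s definitions have been seen.
#lang racket/base
(require (for-syntax racket/base))
(define-syntax (@HtDF stx)
  (syntax-case stx ()
    [(_ id ...)
     (begin
       ;; defer the "is it actually defined?" check to the end of the module
       (syntax-local-lift-module-end-declaration #'(check-defined id ...))
       #'(begin))]))
;; expanded at module end, after all definitions are registered
(define-syntax (check-defined stx)
  (syntax-case stx ()
    [(_ id ...)
     (begin
       (for ([i (in-list (syntax->list #'(id ...)))])
         (unless (identifier-binding i)
           (raise-syntax-error '@HtDF "No function definition" i)))
       #'(begin))]))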