
But the runtime is still quadratic, right?

Assuming, say, window-size = list-length / 2

Cuz into-sum is invoked O(n) times, each time on data of size O(n)

the runtime is O(n * window-size), yeah
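A minimal sketch of that complexity argument (hypothetical `into-sum` and `windowed-sums`, not the actual code from the conversation): summing each window from scratch does O(window-size) work at each of O(n) window positions, so O(n * window-size) overall.

```racket
#lang racket
;; Hypothetical into-sum: sums one window, O(window-size) per call.
(define (into-sum window) (apply + window))

;; O(n) window positions, each calling into-sum on a window,
;; giving O(n * window-size) total work.
(define (windowed-sums lst window-size)
  (for/list ([i (in-range (add1 (- (length lst) window-size)))])
    (into-sum (take (drop lst i) window-size))))

(windowed-sums '(1 2 3 4 5) 2) ; => '(3 5 7 9)
```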

https://download.racket-lang.org/v7.3.html says the json reader was improved, but I don’t know which one? I’m answering a question for someone else on Discord


Thanks @soegaard2

I tried using ~do and ~undo to simulate parameterize in syntax-parse patterns, but when the subsequent match fails after a cut, the error message simply becomes “syntax-parse: illegal use of cut”. I am not clear on what it means. I guess it means “the undo thunks cannot be called since they are cut”, so the question is how to specify “this ~undo is safe to eliminate when it gets cut”?

example code:
#lang racket
(require syntax/parse)
(define p (make-parameter 0))
(define (f a)
  (syntax-parse a
    [((~and (~do (define old (p))
                 (p (add1 old)))
            (~undo (p old))
            _:nat
            (~do (p old))
            (~undo (p (add1 old))))
      ~! _:boolean)
     #t]
    [_ #f]))
(f #'(1 1))

I generally agree with looking at the type signature, but I think it gets a bit fuzzy. A lot of opaque types are as expressive of computation as functions are, and even simple data like strings can be thought of as source code or an index into a table of complex behaviors. Conversely, total functions over finitely enumerable domains like (Boolean -> Integer)
can be serialized like data types.
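A quick illustration of that last point (names are made up for this sketch): a total function over a finite domain like (Boolean -> Integer) can be tabulated into plain data and turned back into a function.

```racket
#lang racket
;; Tabulate a (Boolean -> Integer) function into a two-element list,
;; then rebuild an equivalent function from that data.
(define (tabulate f) (list (f #t) (f #f)))
(define ((untabulate tbl) b) (if b (first tbl) (second tbl)))

(define f (lambda (b) (if b 1 0)))
(define g (untabulate (tabulate f)))
(list (g #t) (g #f)) ; => '(1 0)
```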
A big part of the connotation of “higher-order function” seems to be whether the argument or result is something the programmer thinks of as an abstraction. In particular, I get the impression some programmers approach higher-order functions as a new and unfamiliar thing even if they’re already accustomed to passing around objects that carry methods that perform arbitrary behaviors. Objects often have a narrative to them that treats them not as means to an end but as domain entities in their own right, and this can make it easy to tell them apart from other abstraction layers.
In more formalized connotations, I think “higher-order” tends to refer to one of these three inflection points in a language’s expressiveness:
Second-order: Can you write an abstraction-abstraction (order 2) that abstracts over an abstraction (order 1)? Now you’ve opened up a can of worms; why stop at 2? But if you do stop here, you have two clearly distinct kinds of abstraction, and one kind of abstraction has a higher order than the other.
Arbitrarily (but predicatively) high order: No matter what abstractions you write (order N), can you write an abstraction that abstracts over the ones you’ve written before (order M for some M greater than N)? Now you can basically live in the language, abstracting over everything you want to without having to make use of an external metalanguage.
Impredicatively high order: Can you write an abstraction that abstracts over itself (order N for some N “greater than” N)? If so, now you can probably write recursive programs with the Y combinator. Using that, you can express Turing-complete systems, and we could say the distinguishing quality of a Turing-complete system is that users can feed all-new abstractions into the system at run time. But now you might stop caring to talk about whether one abstraction has a “higher order” than another, because you can get by perfectly well if all your abstractions have the same all-encompassing “order.”
Within the context of any of these higher-order languages, any abstraction of order greater than 1 can be called “higher-order.” In particular, functions of impredicative order like the Y combinator can hardly be called anything else. They’re of higher order than second-order functions, higher order than third-order functions, etc. Since in the context of that language we might not care to distinguish the different orders anyway, “higher-order function” usually says enough.
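To make the self-application point concrete, here’s a call-by-value Y combinator (the Z combinator) in Racket, expressing recursion without any recursive definition:

```racket
#lang racket
;; Z combinator: a function that abstracts over itself, eta-expanded
;; so it works under call-by-value evaluation.
(define Z
  (lambda (f)
    ((lambda (x) (f (lambda (v) ((x x) v))))
     (lambda (x) (f (lambda (v) ((x x) v)))))))

;; Factorial defined with no recursive reference to itself.
(define fact
  (Z (lambda (self)
       (lambda (n) (if (zero? n) 1 (* n (self (sub1 n))))))))

(fact 5) ; => 120
```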

anyone familiar with honu? trying to understand the paper, and there’s one part of the design i don’t get, which is the combine function. what’s the purpose of using that to build the AST? i wrote a toy parser with the same core idea (use a pratt parser to allow language extension), but the AST is built by giving control to functions associated with each token, which re-invoke the parser, rather than using an explicit combine function for composition. is this just so that enforest doesn’t ever yield control?

As I understood it (I read the paper a few years ago; I just skimmed the description of the enforest function in the paper just now), combine is a way to make a term with a hole in it. Like if you had already read the terms 10 +, the combine argument could look like (lambda (term) #`(+ 10 #,term))
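A tiny runnable illustration of that “term with a hole” idea: a combine function that plugs a later-parsed term into the waiting 10 + _ expression.

```racket
#lang racket
;; combine receives the eventually-parsed right-hand side and fills
;; the hole left after reading `10 +`.
(define combine (lambda (term) #`(+ 10 #,term)))

(syntax->datum (combine #'5)) ; => '(+ 10 5)
```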

err, my question is more about why rather than what

Oops, sorry. I have a guess, but it’s pure speculation.

the approach i took would be that e.g. the infix * token has an associated function along the lines of
(lambda (enforest-ctx token lhs)
  #`(* #,lhs #,(enforest (with-precedence enforest-ctx '*))))
where the * symbol names a precedence group in the precedence table in enforest-ctx

so my question is basically what the benefit is of using an explicit stack over just using the call stack you already have
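A hypothetical toy version of the call-stack approach being described (not Honu’s actual implementation, and the names here are invented): each infix operator’s handler re-invokes the parser at its own precedence, so the AST is built on the ordinary call stack with no explicit combine/operator stacks.

```racket
#lang racket
;; Toy precedence table: * binds tighter than +.
(define prec (hash '+ 1 '* 2))

;; enforest : (listof token) integer -> (values ast remaining-tokens)
;; Numbers are atoms; operators recursively call enforest for their
;; right-hand side, using the call stack instead of an explicit one.
(define (enforest toks min-prec)
  (let loop ([lhs (first toks)] [toks (rest toks)])
    (cond
      [(and (pair? toks) (hash-ref prec (first toks) #f))
       (define op (first toks))
       (define p (hash-ref prec op))
       (cond
         [(> p min-prec)
          (define-values (rhs more) (enforest (rest toks) p))
          (loop (list op lhs rhs) more)]
         [else (values lhs toks)])]
      [else (values lhs toks)])))

(define-values (ast leftover) (enforest '(1 + 2 * 3 + 4) 0))
ast ; => '(+ (+ 1 (* 2 3)) 4)
```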

One of my students is reporting that read-json can segfault, in pure Racket. Is this plausible? Known? How hard should we push to make a small reproduction?

> in pure Racket. Is this plausible?
Racket’s written in C (as is Chez), so yes. :slightly_smiling_face: Do they have a repro case?

@mflatt would you be able to explain the reasoning here?

I would think that any “pure” (non-ffi) Racket program should not segfault.

It’s not known, and definitely a bug. A big reproduction sooner is probably better than a small one later.


I am very rusty on these details, but I think the explicit stack makes it easier to inspect. I can imagine getting the same effect using the continuation as the parsing stack, but then you need to set a mark to let the called parser decide whether/when to return. Or probably a hybrid works, where the combiner half of the stack is in the continuation, but the precedence half of the stack is passed to called parsers.

[ffi] how do i access a c-struct (defined in racket) through its pointer? so i can use the id-field-id accessors
PS: I found out about ptr-ref, but trying to use a cstruct as the type crashes racket. (Unless I’m not doing it right)
Edit2: here it is in a gist: https://gist.github.com/kurinoku/3361da78d2d6fb1763eea99696ba40dc
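For reference, a minimal sketch of the usual pattern (struct name and fields are made up; this assumes the raw pointer really points at valid memory of the struct’s layout): define-cstruct generates make-posn, the posn-x/posn-y accessors, and the _posn-pointer type, and ptr-ref with the struct type reinterprets a raw pointer so the accessors work on it.

```racket
#lang racket
(require ffi/unsafe)

;; Hypothetical cstruct with two int fields.
(define-cstruct _posn ([x _int] [y _int]))

(define p (make-posn 3 4))
;; Pretend all we have is an untyped pointer to the same memory.
(define raw (cast p _posn-pointer _pointer))
;; Reinterpret the raw pointer as a _posn so the accessors apply.
(define q (ptr-ref raw _posn))
(posn-x q) ; => 3
(posn-y q) ; => 4
```

If ptr-ref crashes, the pointer being dereferenced is usually not actually pointing at memory with that struct’s layout.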

i don’t think that a continuation mark or an explicit precedence stack is necessary, since i have a toy parser doing something very similar to honu which uses neither. inspection was the only reason i could think of, but i couldn’t come up with a use-case for it outside of maybe debugging the enforester itself. was there a specific one you had in mind?