@lexi.lambda just reproduced your steps and I have a working Racket Heroku app :smile: thanks. I’ve destroyed the Heroku app I couldn’t get working, so I can’t say for sure, but I think the problem was that I had RACKET_VERSION set to ‘current’, à la “It may either be set to a specific version (such as 6.2) or to either current or HEAD, which will use nightly, bleeding-edge snapshots.”
I’ll experiment in a little bit and see if I can reproduce the setup, but in the meantime I can get hacking! Thanks a bunch :slightly_smiling_face:
Sorry, having to rethink… this is really boggling my mind, haha.
I know you deleted your message, but I think that’s right, isn’t it? BEQ
is “branch on equal”, so you want to take the branch (do the jump) when the Z
flag is set.
Let me post it again, if that’s okay. I think my code should have been BNE, as in, keep looping until X is zero.
Another two thoughts: (1) CPSing your code is a very reasonable way to implement jumps, but you could also use continuations if you wanted to avoid having to do that (though it would probably be slower), and (2) it’s more idiomatic to use a struct instead of a dictionary for storing processor flags.
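On thought (1), a minimal sketch of the continuation-based alternative (hypothetical code, not from this thread: the `run-loop` name and the mutable counter are illustration only) — a captured continuation plays the role of a label, and invoking it is the jump:

```racket
#lang racket

;; Sketch: a captured continuation as a "label", instead of CPS-ing
;; every instruction. Uses mutation for brevity.
(define (run-loop x0)
  (define x x0)
  (define times 0)
  ;; "define-label 'loop": capture the continuation at this point
  (define loop (call/cc (λ (k) k)))
  (displayln "looped")
  (set! times (add1 times))
  (set! x (sub1 x))
  (unless (zero? x)
    (loop loop)) ; "BNE loop": jump back by invoking the continuation
  times)

(run-loop 3) ; prints "looped" three times and returns 3
```

This only supports backward jumps to a point already executed, which is exactly the limitation discussed further down.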
#lang racket

(define (LDX val flags mem regs k)
  (let* ([flags (if (zero? val)
                    (dict-set flags 'z #t)
                    (dict-set flags 'z #f))]
         [regs (dict-set regs 'x val)])
    (k flags mem regs)))

(define (DEX flags mem regs k)
  (let* ([regs (dict-update regs 'x sub1)]
         [flags (if (zero? (dict-ref regs 'x))
                    (dict-set flags 'z #t)
                    (dict-set flags 'z #f))])
    (k flags mem regs)))

(define (BEQ label flags mem regs k)
  (if (dict-ref flags 'z)
      (k flags mem regs)
      ((dict-ref labels label) flags mem regs)))

(define init.flags (hash 'z #f))
(define init.mem (make-vector 10 0))
(define init.regs (hash 'a 0 'x 0))

(define labels (make-hash))

(define (define-label name flags mem regs k)
  (dict-set! labels name k)
  (k flags mem regs))

((λ (flags mem regs)
   (LDX #x03 flags mem regs
        (λ (flags mem regs)
          (define-label 'loop flags mem regs
            (λ (flags mem regs)
              (displayln "looped")
              (DEX flags mem regs
                   (λ (flags mem regs)
                     (BEQ 'loop flags mem regs
                          (λ (flags mem regs)
                            42)))))))))
 init.flags
 init.mem
 init.regs)
Yup, I was trying to mimic:

LDX #$03
loop:
  DEX
  BEQ loop

return 42 (not assembly, just to test whether the branch has been passed)
but what I was really thinking was BNE, or, keep branching back to the loop until X is zero.
Thanks for the tips! Could you clarify a bit on using a struct instead of a dict for the flags please?
I thought about using call/cc, but I’m doing everything manually as a sort of exercise; I’m new to continuations, as you’ve no doubt guessed :wink:
Racket (unlike, say, JavaScript or Clojure) does not idiomatically use dictionaries for records, only for key/value data where the keys are not known at compile-time.
Since you know the keys at compile-time, you could use a struct with struct-copy
instead of using a dictionary.
That is likely to be marginally faster, but that probably isn’t very important here; the more important point is that it makes lots of nonsensical states unrepresentable by making it impossible to accidentally set some flag 'not-a-real-flag, for example.
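A minimal sketch of what that could look like (the field names here are the usual 6502 flags, assumed rather than taken from the thread):

```racket
#lang racket

;; Flags as a struct instead of a hash; the field set is fixed at
;; compile time, so a typo like 'not-a-real-flag is an error rather
;; than a silently added key.
(struct flags (z n c v) #:transparent)

(define init-flags (flags #f #f #f #f))

;; Functional update via struct-copy, analogous to dict-set:
(define (set-zero f val)
  (struct-copy flags f [z val]))

(flags-z (set-zero init-flags #t)) ; => #t
```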
Ah, I see, I’ll change them over. Thanks for explaining, that makes sense now :slightly_smiling_face:
Do you think it makes sense to pass the registers/memory/flags via arguments, and store the labels in a global variable as I’ve done? I’m not sure why I did it that way, I just gravitated towards it, but now I’m wondering if it makes any sense to segregate them in that way.
Probably not. I would avoid the global variable.
Thanks :slightly_smiling_face: I plan on using a state monad to avoid the explicit passing around, but I’m still just not quite comfortable enough with it yet. I’ve implemented some basic functions like length with one, but my brain still struggles a bit following the binds even there, haha.
A nice way to represent things would be to avoid the label name indirection entirely and compile things in a way that a reference to a label is just a lexical reference to the captured “continuation” (even if the continuation is just a function produced by the CPSing rather than a real Racket continuation), but it would involve a different compilation strategy, since you’d need to support references to the label before the labeled code is executed.
Though, thinking on it, I think the current compilation strategy won’t work quite right, anyway, since it also doesn’t support forward jumps.
You need all the labels to be known before execution starts.
Somehow (somehow!) I hadn’t thought of that. Sadly, the suggestion you made is a bit over my head I’m afraid.
Yes, I know it’s not phrased super well. Let me try to phrase it better.
That is very much appreciated, thanks a bunch :slightly_smiling_face:
@sydney.lambda Currently, you are implementing labels by indirecting through your labels hash table, but I don’t think this is actually necessary: you can make labels direct variable references, so you can just take advantage of Racket’s lexical scope.
Here’s what I mean: currently, your define-label form takes a continuation and stores it in a hash table, then your branch instructions look up keys in that hash table. But what if you compiled labeled instructions to ordinary function definitions, and you passed those functions to your branch instructions directly, like this:

(define (START state)
  (LDX state #x03 loop))

(define (loop state)
  (DEX state
       (λ (state*)
         (BNE state* loop
              (λ (state**)
                42))))) ; done
Since assembly is aggressively first-order, this transformation is easy to do. It doesn’t support self-modifying code or computed jumps, but to be fair, neither does your original approach.
If you wanted to support those, you’d probably need to write a full virtual machine rather than compiling the assembly to Racket code, since you’d need the code to be mapped into some virtual (writable) address space and use a plain old program counter, etc.
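For contrast, a tiny sketch of that “real VM” shape (instruction names and encoding are assumptions for illustration, not the thread’s API): code lives in a vector and a program counter indexes into it, so a jump is just arithmetic on the pc.

```racket
#lang racket

;; Code is data in an addressable space; the pc walks it.
(define code (vector '(ldx 3)   ; 0
                     '(dex)     ; 1
                     '(bne 1)   ; 2: jump back to address 1 while x ≠ 0
                     '(halt)))  ; 3

(define (run pc x)
  (match (vector-ref code pc)
    [`(ldx ,v) (run (add1 pc) v)]
    [`(dex)    (run (add1 pc) (sub1 x))]
    [`(bne ,t) (if (zero? x) (run (add1 pc) x) (run t x))]
    [`(halt)   x]))

(run 0 0) ; loops until x reaches 0, then returns it
```

Because a jump target is just a number, computed jumps (and, with a mutable code vector, self-modifying code) fall out naturally.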
Sorry for the delayed response, my mind is chugging away trying to comprehend this, haha.
Would that approach work for things like:

; if A is zero, increment X. Otherwise, continue without doing so.
  LDA #$00
  BEQ then
  INX
then:
  ; rest of code

?
I think I understand to some degree - instead of storing the labels in a hash table, just use define and let Racket take care of it. The trouble with the current method is that I’m unable to reference the continuations by name as they’re λs, and I’m unable to forward-reference them.
Yes, you’d compile that to something like this:

(define (START state)
  (LDA state #x00
       (λ (state*)
         (BEQ state* then
              (λ (state**)
                (INX state** then))))))

(define (then state)
  #| rest of code ... |#)
The idea is that you rewrite each block of labeled instructions into a top-level function definition, and the previous block’s final continuation just becomes a reference to the labeled block.
I’ll keep reading over your replies for the rest of the evening; the more I mull it over, the clearer it becomes. Can’t thank you enough for your help with this, as well as your help with my other questions, like the currying interpreter. Thanks so much, it is very much appreciated! :slightly_smiling_face:
Would nesting the defines within each other rather than them being separate top-level definitions cause issues?
Yes, since the labels need to be visible everywhere to allow both forward and backward jumps.
Ah yes (duh!) of course. Thank you.
and each definition lasts/continues until the next (if any) label is encountered, is that correct? Edit: We’re, in a way, just replacing define-label with regular ol’ define. I think this is starting to click - I’m essentially creating my own (clumsier, less powerful) version of define for no reason.
Yeah, I’ve been writing an example implementation, which is almost done, I think…
@sydney.lambda Here’s a tiny implementation of what I had in mind. I added a TRACE
instruction that just prints out the current processor state just to validate that it’s actually doing the right thing.
The program prints this as output:

(processor-state (registers 0 3 0) (flags #f #f #f #f) #"\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0")
(processor-state (registers 0 2 0) (flags #f #f #f #f) #"\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0")
(processor-state (registers 0 1 0) (flags #f #f #f #f) #"\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0")
(processor-state (registers 0 0 0) (flags #f #f #t #f) #"\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0")
If you use the Macro Stepper with macro hiding enabled, you can see how the program and processor-do macros work.
Wow! Thank you so much for taking the time to flesh that out for me to study. I owe you one (two, actually, for the help with the currying interpreter).
also, the #:keyword-like way you defined the labels is very cool, I’d have never thought of that.
It’s fine; I’m currently in between jobs and bored, and this is a fun project. :)
What industry do you work in? I assume you have been using Racket professionally.
I get the feeling you’ve dabbled in 6502 before? I notice you added the /i to LDX for immediate :wink:
Mostly Haskell, actually. I just spent a year writing Racket for a research project in academia, but that’s the only Racket I’ve written for money. Before that I was doing some relatively simple web stuff in Haskell, and now I’m probably going to be doing some more of that (though hopefully slightly more interesting this time around).
Technically 65c816, but it’s basically the same architecture scaled up to 16 bits—I dabbled in SNES programming when I was younger. :)
Very cool. Well best of luck with finding your next job!
Cool! You’ve probably already seen, or already know most of this, but I found this quite an interesting read: http://wilsonminesco.com/stacks/
Probably just me, but Forth implementations on 65… processors are really cool.
I hadn’t seen it, thanks for the link!
I’ll admit, I’ve never really “gotten” Forth, or concatenative languages in general. A lot of people seem to think they’re super cool, but the programming model has always seemed esoteric in an uninteresting way to me. But I’ve never really written any Forth, so maybe I just don’t understand.
I remember asking a while ago about how square x = x * x becomes square = join (*)
After trying to define the same thing in Racket and failing (without using a power/expt kind of function), it was like magic! Someone replied mentioning that it’s part of the function monad ((->) r), and that it in some way relates to Forth’s “dup” stack-manipulation operator. Sadly, I never did quite wrap my head around it, but it’s stirred up a continuing interest in Forth, SKI, and the function monad.
Anyway, sorry, I’m rambling off-topic! :zipper_mouth_face:
It’s fine, talking about different models of computation probably isn’t really off-topic in a Racket context.
In any case, writing that join in Racket is not hard; it’s just (define ((join f) x) (f x x)), though this is not really the same as what Forth is doing, since Forth manipulates the global stack, while this join is just a function combinator.
Using that combinator, you can easily write (define square (join *)).
I’m not sure if you can explain this in layman’s terms, but, would you mind trying to explain how is it that join for the function monad “peels off” a layer, in the same way (Just (Just x)) becomes (Just x) and [[1,2,3]] becomes [1,2,3]? Those seem rather obvious and intuitive, but the function version not so much. (other than them all being >>= id)
The key to understanding the function monad is that a -> b in Haskell is infix syntactic sugar for the prefix type (->) a b, and (->) can be partially applied like any other type. So (->) r is a type of kind * -> *, which is the right kind to be a member of the Monad class.
The type of join is m (m a) -> m a, so if we substitute (->) r for m, we get (->) r ((->) r a) -> ((->) r a), and if we rewrite that type to use infix notation for all the ->s, we get:

(r -> r -> a) -> (r -> a)

Now it’s clear that this type only has one implementation, which is join f x = f x x, since we need to call f with two rs, and the only r we have is x.
But this isn’t a very intuitive explanation, it just involves playing with the types, so I can also try to give a more informal explanation.
Functors and monads are functions on types, so given the function monad (->) r, we can apply it to some type a to get a new type r -> a. In this sense, the function monad parameterizes a value of type a over a value of type r. It says “if you have an r, I can give you an a”.
When you have two “layers” of the function monad, you have a function of type r -> r -> a. This says “if you have two rs, I can give you an a”. But we can come up with a strategy for “flattening” this type by saying “well, if I have one r, I can give you two rs just by using the same value twice”. So join “peels off” a layer at the type level by using the same value twice at the value level.
However, note that, in general, join only does this “peeling off” at the type level. Maybe a and [a] are containers that hold as, so join also “peels off” containers at the value level, too. But r -> a is not a container that holds any as, it’s just a function, so the metaphor breaks down a little bit.
However, we can view a function as a container: we can think of r -> a as being a lot like Map r a. In that sense, r -> r -> a is like Map r (Map r a). So join on functions is sort of like flattening those nested maps.
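A small sketch of that analogy in Racket (the `hash-join` name and the sample data are mine, for illustration): a nested hash “joins” by looking up the same key at both levels, just as (join f) applies f to the same argument twice.

```racket
#lang racket

;; join for "Map r (Map r a)", viewing the result as a lookup function:
;; ((join m) k) looks up k at both levels, like (join f) x = (f x x).
(define (hash-join h)
  (λ (k) (hash-ref (hash-ref h k) k)))

(define nested (hash 'a (hash 'a 1 'b 2)
                     'b (hash 'a 3 'b 4)))

((hash-join nested) 'b) ; => 4
```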
Yes! That seems to be what my mind has been trying to grasp at, but you summed it up perfectly. I think I make the error sometimes of confusing things going on at the type level with actual implementation. This just tells us the type, it doesn’t tell us how that is achieved. Although, I suppose the type tells you that it has to be the same r passed for both arguments, as we don’t have enough information to fabricate a new one.
and, if I remember correctly, the state monad plays the same partial-application trick, hence the newtype definition. Could be way off, but I think that’s right.
Thanks for taking the time to explain :slightly_smiling_face:
Right, the state monad State s a is representationally equivalent to s -> (s, a).
Its join is rather different, though, since State s (State s a) is representationally equivalent to s -> (s, s -> (s, a)), so its join threads the new state through the “inner” function rather than just passing the original state twice.
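That threading can be sketched in Racket (my names and representation choice, assuming State s a is a function s -> (values s a), matching the (s, a) ordering above):

```racket
#lang racket

;; join for the state monad: run the outer computation to get a new
;; state and an inner computation, then thread the new state into it.
(define ((state-join mm) s)
  (let-values ([(s* m) (mm s)])
    (m s*)))

;; Tiny usage example: the outer computation increments the state and
;; returns an inner computation that doubles it.
(define step
  (λ (s) (values (add1 s)
                 (λ (s*) (values (* 2 s*) 'done)))))

((state-join step) 10) ; => (values 22 'done)
```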
Out of curiosity, have you ever had occasion to use join for the state monad? Or, I suppose what I’m really asking is, have you ever had occasion to use two state monads, with one “inside” of the other? I’m struggling to even comprehend it, to be honest :sweat_smile:
Admittedly, no, I always use >>=. But join is really more “essential” than >>= (>>= is just join + fmap), so I think it’s easier to think about monads in terms of join than >>=.
Night all. Thanks again for all the help today @lexi.lambda