
I ended up mainly golfing a bit, but <https://github.com/lojic/LearningRacket/blob/master/advent-of-code-2020/day22.rkt|my part 2> is 0.8 seconds, and I didn’t think about optimizing it.

doh, i didn’t think to use a named let. I just used do loops for my solution

Yours looks much nicer i think

Thanks to looking at @mbutterick’s solution (after I created mine), I was able to collapse parts 1 & 2 by passing a recur? flag - that was a nice touch of his :)

Knocked it down to 21 sloc.

nice

I’d be curious to find out where some of the time loss above is coming from.

In general, I find that a named let w/ tail recursion almost always performs really well.
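For anyone unfamiliar with the idiom, here’s a minimal sketch (my own illustration, not code from the thread) of a named let: the loop call is in tail position, so Racket compiles it to a plain jump with no stack growth.

```racket
#lang racket

;; sum-to is a hypothetical example: sums 1..n with a named let.
;; Each recursive (loop ...) call is a tail call, so this runs as
;; an iterative loop rather than growing the stack.
(define (sum-to n)
  (let loop ([i 1] [acc 0])
    (if (> i n)
        acc
        (loop (add1 i) (+ acc i)))))

(sum-to 10) ; => 55
```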

Yeah, yours probably performs much better because it uses tail recursion better

@haskal what form did you use to make the loop in your part 2?

I had ~4.5 seconds for p2 w/o optimization, using functional queues (double-list based, I think). Really curious if I can get sub-second, but I don’t have time to spend cranking on it.

I just used (append (cdr deck1) (list card1 card2)) to put the two cards on the bottom of the deck. Are the functional queues really slow @ben.knoble?
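To make the list-based approach concrete, here’s a hypothetical play-round helper (my sketch, assuming decks are plain lists of numbers) showing that update: the winner’s card goes on the bottom first, then the loser’s. Note that append is O(n) in the deck length, versus O(1) amortized for a functional queue.

```racket
#lang racket

;; Hypothetical sketch of one Combat round with plain lists.
;; Returns the two updated decks as multiple values.
(define (play-round deck1 deck2)
  (define card1 (car deck1))
  (define card2 (car deck2))
  (if (> card1 card2)
      ;; player 1 wins: both cards go to the bottom of deck1
      (values (append (cdr deck1) (list card1 card2))
              (cdr deck2))
      ;; player 2 wins: both cards go to the bottom of deck2
      (values (cdr deck1)
              (append (cdr deck2) (list card2 card1)))))
```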

They’re supposed to be amortized O(1), but maybe after dinner I’ll try a version using pure lists and time it.
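For reference, a minimal two-list (banker’s) queue looks something like this sketch (my own illustration; the actual queue library in use may differ). Enqueue conses onto the back list; dequeue pops the front list, reversing the back only when the front runs out, which is where the amortized O(1) bound comes from.

```racket
#lang racket

;; Hypothetical two-list functional queue sketch.
(struct queue (front back) #:transparent)

(define empty-queue (queue '() '()))

;; O(1): cons onto the back list.
(define (enqueue q x)
  (queue (queue-front q) (cons x (queue-back q))))

;; Amortized O(1): only reverses the back when the front is empty.
(define (dequeue q)
  (match q
    [(queue '() '()) (error 'dequeue "empty queue")]
    [(queue '() back) (dequeue (queue (reverse back) '()))]
    [(queue (cons x front) back) (values x (queue front back))]))
```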

FWIW I’m using Racket CS 7.9 on a (16-inch, 2019) MacBook Pro.

Yeah, we’re all using different hardware and different inputs. I don’t know that there’s much to be gained from these kinds of comparisons.

Good point. I had counters for game & round while I was debugging, but I removed them. Knowing how many sub-games & rounds were used would help in comparing. I hadn’t thought about the possibility that different data might cause an 8x slowdown.

@jaz are you posting your code to github? I could run it on my input and compare.

no, my advent code is complete dogshit

I can’t even bring myself to save the buffers in DrRacket. That’s how ugly it is.

lol - most of mine is also, but I still publish it :) I often start a puzzle with the idea of coding it really well (i.e. as I might do professionally), but then I inevitably end up golfing a bit for no reason.

There’s a big line of “Untitled” tabs across my screen

Well, if anyone’s curious about the performance of named let w/ simple lists vs. queues, etc., you can grab <https://github.com/lojic/LearningRacket/blob/master/advent-of-code-2020/day22.rkt|day22.rkt> and my input file <https://github.com/lojic/LearningRacket/blob/master/advent-of-code-2020/day22.txt|day22.txt> and compare.

input runtimes may vary wildly, like i added memoization to mine and it now runs in 10 seconds (BC, 2016 laptop) on my input but in 5 seconds on @jonathanhwchan’s input


I just ran on my input (0.74 sec) and Jonathan Chan’s input (0.44 sec) and Matthew Butterick’s input (1.2 sec) and Graham’s input (1.2 sec). So yeah, very data dependent, which makes sense.

but we can still compare orders of magnitude, even if not directly. My code takes 12 seconds to run, so i know you’re doing something better than me

you could also try running all those inputs on my code if you want

day 23 spoilers in thread

https://git.lain.faith/haskal/aoc2020/src/branch/aoc2020/23.rkt this ends up being reasonably fast so i am happy with it

so here’s the interesting thing… if i replace all the mutable cons operations with unsafe- variants the execution time actually doubles
why is that? how come unsafe is 2x slower??

update: this happens when i redefine the mutable ops like
(define mcar unsafe-mcar)
(define mcdr unsafe-mcdr)
(define set-mcar! unsafe-set-mcar!)
(define set-mcdr! unsafe-set-mcdr!)
if i instead rewrite using a macro
(define-simple-macro (mcar x) (unsafe-mcar x))
(define-simple-macro (mcdr x) (unsafe-mcdr x))
(define-simple-macro (set-mcar! l x) (unsafe-set-mcar! l x))
(define-simple-macro (set-mcdr! l x) (unsafe-set-mcdr! l x))
then there’s no slowdown
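A plausible explanation (my assumption, not verified against the compiler): (define mcar unsafe-mcar) rebinds mcar to an ordinary variable, so each call becomes an indirect call through that binding and the compiler can no longer inline the primitive, whereas the macro expands to a direct unsafe-mcar application at every call site. A hypothetical micro-benchmark along those lines:

```racket
#lang racket
(require racket/unsafe/ops)

;; Aliasing the primitive through a variable: each call goes through
;; an indirect reference, which can defeat inlining.
(define slow-mcar unsafe-mcar)

;; Macro version: expands to a direct unsafe-mcar call at each site.
(define-syntax-rule (fast-mcar p) (unsafe-mcar p))

(define p (mcons 1 2))
(time (for ([i (in-range 10000000)]) (slow-mcar p)))
(time (for ([i (in-range 10000000)]) (fast-mcar p)))
```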