
@rokitna has joined the channel

Hello everyone! I have been using Racket in toy projects for a couple of years. In the last months, I became interested in comparing Racket with Common Lisp. Just for fun, I compared the web-server performance with woo, a server written in CL (with libuv underneath), and I added my numbers to the benchmarks shown in the woo repository: https://github.com/fukamachi/woo/pull/73

The numbers are not so important, but I wanted to ask for someone with more experience to review my implementation there. I just wrote a naive, straightforward port of what the CL code seemed to be doing.

(if someone cares, of course :slightly_smiling_face: )

Do you time startup time too?

Did you compile to bytecode before running the benchmark?

I think you may also want to disable continuations

I used the benchmark script in the repository: https://github.com/fukamachi/woo/blob/master/benchmark/run-benchmark

and no, I do not think that I did any bytecode compilation before running on my machine
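(For anyone following along: compiling ahead of time is just a `raco make` before launching the server. The file name below is hypothetical, a placeholder for whatever module the benchmark's "init command" runs.)

```shell
# Compile the server module (and its dependencies) to bytecode first;
# the .zo files land in a compiled/ directory next to the source.
raco make server.rkt

# Then start the server as usual -- racket picks up the compiled
# bytecode automatically, so startup skips on-the-fly compilation.
racket server.rkt
```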

(sorry if I am not super precise, I did this several months ago … )

I doubt bytecode compilation will matter there

and that benchmark doesn’t capture any continuations

right

So the web-server is started, and then the timings are made?

(just checking whether I understand the script)

i do get significantly better numbers with compiling the file first

@soegaard2 I understand the same from the benchmark script (I just provided an “init command” that is passed to that script)

still noticeably behind tornado

also, an important note: I provided numbers in that PR only for the Racket server, and asked the author to run it again in a comparable setup to all other servers … but just got the PR merged with no comment

So the numbers are from two different servers?

that would explain a lot of things x)

as always with benchmarks, they’re not often meaningful :stuck_out_tongue:

no, what I mean is that, if you open the benchmark page, there are numbers for Tornado, Woo, etc … those numbers have been obtained by the author previously in a setup unknown to me … in my PR, I just provided numbers for the Racket server, without overwriting the previous results with those that I could measure on my own setup


yeah, so they’re currently meaningless until they restart a full benchmark

@jerome.martin.dev completely agree, this is just an exercise for fun … benchmarks can be very misleading :slightly_smiling_face:

I’m curious to see the results though

one can run all the benchmarks locally and compare, of course

yep

i ran it locally for some of them and the results are in line with that page

maybe racket-on-chez will make a difference :stuck_out_tongue:

ah yes, I did this in June and it was Racket 6.12 at the time

Anyway, I think we have better things to do right now than work on that kind of optimization, but it’s still a good indicator that there’s more work to do before the web-server becomes a real game changer in terms of performance. That’s assuming it’s one of our objectives, of course, but I’m not even sure it is for now.

But thanks for adding racket to the benchmark :wink:

racketcs is about the same, maybe a bit slower

yep, performance is not the be-all and end-all — I just thought I could bring my effort to the attention of the community just in case someone knows better than me :slightly_smiling_face:

Can anyone in this channel comment on the design of the class system? Maybe @mflatt or @robby? I’m trying to understand the behavior I’ve just reported in https://github.com/racket/racket/issues/2395. (I’m willing to fix it myself, but I don’t know what the fix is—it could be a documentation change or an implementation change.)

I expect that it will have to be a documentation improvement, but I haven’t looked closely – and won’t be able to look closely until later today

Okay, no worries.

I’ll operate under the assumption I’ll have to work around the behavior, then. :)

Is there any way to time a require?

@macocio use dynamic-require
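(A minimal sketch of that trick: `dynamic-require` loads a module at runtime, so you can wrap it in `time` to see how long loading takes. `racket/match` below is just an example module path.)

```racket
#lang racket/base

;; `time` prints cpu/real/gc milliseconds for the expression it wraps.
;; Passing #f as the second argument to dynamic-require instantiates
;; the module without extracting any particular export.
(time (dynamic-require 'racket/match #f))
```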