laurent.orseau
2020-11-28 09:27:54

@wanpeebaw If you find some cool/non-trivial ways to optimize your code, please share :slightly_smiling_face:


wanpeebaw
2020-11-28 13:32:18

Sham looks similar to what I’m looking for, though it’s based on LLVM rather than the Racket VM. The benchmark results seem promising, but it isn’t listed on the Racket package catalog and I can’t find any documentation either. https://arxiv.org/pdf/2005.09028.pdf


massung
2020-11-28 15:24:32

@wanpeebaw Not “low level”, but when the big-O is already the best you can do, I’ve found the next best optimization in dynamic, GC’d languages is some form of object pooling: pre-allocate the objects you’ll need, pre-size arrays/buffers, etc. Avoid as much allocating, consing, and GC pausing as possible. It’s quite amazing how much of a performance win this can end up being (depending on the language).
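
Roughly what I mean, as a Racket sketch (the names and sizes are just illustrative):

```
#lang racket
;; Pre-allocate one buffer up front and reuse it, instead of allocating a
;; fresh vector on every call.
(define scratch (make-vector 1024 0))

(define (sum-squares n)                  ; assumes n <= 1024 for this sketch
  (for ([i (in-range n)])
    (vector-set! scratch i (* i i)))     ; fill in place, no per-call allocation
  (for/sum ([i (in-range n)])
    (vector-ref scratch i)))
```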

Not sure if your particular problem lends itself to that, but it’s a suggestion.


laurent.orseau
2020-11-28 16:37:56

Usually, in my programs, I find that the GC takes about 10% of the time, so it’s not really worth the trouble
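
(For what it’s worth, Racket’s `time` form reports that share directly; the workload below is just a stand-in:)

```
#lang racket
;; `time` prints cpu time, real time, and gc time for the expression.
(define (workload)                      ; stand-in allocation-heavy loop
  (for/fold ([acc '()]) ([i (in-range 1000000)])
    (cons i acc)))

(time (void (workload)))
;; => cpu time: ... real time: ... gc time: ...
```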


laurent.orseau
2020-11-28 16:43:03

But I don’t care too much about the GC pauses


racket192
2020-11-28 18:11:09

Both the standard and Chez Scheme versions error out immediately on the MacBook Pro 13 with M1. Can’t build Racket from MacPorts either.


racket192
2020-11-28 18:14:47

But I see mflatt is working hard on it, so hopefully soon. FWIW, a lot of other dev tools don’t work either. I was hoping to use DrRacket for AoC again this year. Maybe it’ll still happen!


badkins
2020-11-28 19:54:57

Has anyone worked through Peter Norvig’s <https://www.amazon.com/Paradigms-Artificial-Intelligence-Programming-Studies/dp/1558601910/ref=sr_1_2?crid=2IODC2BPFSSOI&dchild=1&keywords=paradigms+of+artificial+intelligence+programming&qid=1606593078&sprefix=paradigms+of+ar%2Caps%2C173&sr=8-2|"Paradigms of Artificial Intelligence Programming: Case Studies in Common Lisp"> ? I’m considering doing so with Racket, and I’m curious how easy or difficult translating the CL code to Racket might be. It seems like it shouldn’t be a big deal to do so.
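
Simple definitions look like they’d translate almost mechanically, e.g. the `mappend` utility from the book’s early chapters (the Racket version here is just my sketch):

```
#lang racket
;; CL (from PAIP): (defun mappend (fn the-list)
;;                   (apply #'append (mapcar fn the-list)))
(define (mappend fn the-list)
  (apply append (map fn the-list)))
```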


soegaard2
2020-11-28 19:56:19

I have - some years ago. I can’t remember anything problematic.


spdegabrielle
2020-11-28 19:57:54

Sounds like a fun project


soegaard2
2020-11-28 19:58:06

Maybe I skipped chapter 13 :slightly_smiling_face: But there is always Swindle.


massung
2020-11-28 21:58:13

GC is only part of the equation. Allocation & initialization can take a large chunk of time as well depending on the application and the memory scheme implemented by the language.

A copying collector will be wicked fast on allocation but slower to collect, while mark/sweep may be slower to allocate (cons) but quicker to collect. Obviously there are many flavors of GC/allocation in between.

If you’re doing lots of short-lived string/vector building, it may be much more performant to allocate one large string/vector buffer up front and never think about it again.

Another common piece of advice is to “farm out to C”, but oftentimes that can be much slower than doing it in the high-level language. Python, for example, can be pretty bad at marshalling floating-point values to/from C, so putting vector/matrix operations in C while accessing the components from Python frequently can really slow things down overall (meaning you “solved” one problem only to introduce a new, worse one).
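
(The same boundary cost shows up with Racket’s FFI; a rough sketch, assuming libm’s `sin` is visible through `(ffi-lib #f)`:)

```
#lang racket
(require ffi/unsafe)

;; One FFI crossing per element vs. staying inside the Racket runtime.
(define c-sin (get-ffi-obj "sin" (ffi-lib #f) (_fun _double -> _double)))
(define xs (build-list 100000 exact->inexact))

(time (for ([x (in-list xs)]) (c-sin x)))  ; pays the marshalling cost per call
(time (for ([x (in-list xs)]) (sin x)))    ; no boundary crossing
```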

As with all things, profile, then solve the problem at hand. But, without knowing the details of this particular problem and what profiling has been done already, it’s hard to give specific advice.
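
(In Racket the bundled profile library gets you that; `heavy-computation` below is just a placeholder:)

```
#lang racket
(require profile)

(define (heavy-computation)            ; placeholder workload
  (for/sum ([i (in-range 1000000)])
    (expt (modulo i 7) 3)))

(profile (heavy-computation))          ; prints a sampled call profile
```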


darren_minsoo_kim
2020-11-29 01:12:23

@darren_minsoo_kim has joined the channel


joshibharathiramana
2020-11-29 02:29:05

Funny you should ask this, I’m running a reading group at my college on this book for the winter break :slightly_smiling_face:


notjack
2020-11-29 02:31:34

Also I’m pretty sure that when the Racket Slack was first created, Discord didn’t exist (or at least was way, way less popular than it is now)


samth
2020-11-29 03:35:32

Yeah, @jsyeo created this slack in July 2015, about 2 months after Discord’s initial release and before it had a stable version. Prior to that, some people had an internal PLT Slack for about a year.


wanpeebaw
2020-11-29 05:09:39

@massung Thanks for the GC/memory tips. I come from the Java world, so I’m pretty familiar with GC behavior. In my current case, I just allocate one large byte array up front and never allocate or deallocate during the whole run, so the GC time is essentially zero.
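
Roughly this pattern (the size and names here are just illustrative):

```
#lang racket
(define buf (make-bytes (* 64 1024 1024) 0))  ; one big allocation, up front

(define (store! offset value)
  (bytes-set! buf offset value))              ; in-place writes, nothing else allocated
```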