perhaps `chess-game` as the main struct with a field named `board` instead? :P
`chess-game-board`: is that
• a `game-board` field on `chess`?
• a `board` field on `chess-game`?
there’s no escape! that’s why I wish we had a better convention for accessors (even like the dot in struct++)
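For the record, the two readings look like this in code (a minimal sketch; the struct names are just the ones from the joke above):

```racket
#lang racket

;; Reading 1: a `board` field on a struct named chess-game
(struct chess-game (board))       ; accessor: chess-game-board

;; Reading 2: a `game-board` field on a struct named chess
;; (struct chess (game-board))    ; accessor: also chess-game-board!
;; Commented out because both shapes would define the same accessor
;; name, so they can't coexist in one module.

(define g (chess-game 'empty))
(chess-game-board g) ; => 'empty
```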
I think that particular problem is overstated. The documentation for `chess-game-board` clears up the confusion immediately when it specifies whether the input is a `chess-game?` or a `chess?`
I would like dotted accessors though
yea it’s just a silly nitpick!
Ooh, fancy. I hadn’t heard of `define-module-boundary-contract`, and a lot of my macros should be using it. :sweat_smile:
Does your code there report the name `variant` when there’s an error? It looks like it might report `contracted:variant` instead. If that’s the case, this looks like something that could be tweaked using the `#:name-for-blame` argument to `define-module-boundary-contract`.
oh wait, you do use that argument, I just didn’t see it the first time I looked! :D
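For anyone following along, a minimal sketch of what `#:name-for-blame` does (the `variant` identifiers here are made up, not from the code being discussed):

```racket
#lang racket
(require racket/contract)

(define (variant-impl x) (add1 x))

;; Without #:name-for-blame, contract errors mention the defined
;; identifier (here `contracted:variant`); with it, they mention
;; the nicer name `variant` instead.
(define-module-boundary-contract contracted:variant
  variant-impl
  (-> integer? integer?)
  #:name-for-blame variant)

(contracted:variant 1)        ; => 2
;; (contracted:variant "no") ; contract violation, blamed as `variant`
```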
@jmhimara has joined the channel
Hi everyone. New here. With mostly scientific/numerical computing in mind, is it possible to interface Racket with Fortran code? Are there any performance penalties doing this (e.g. from communication overhead)?
And is massive parallelism possible?
I know there’s a C FFI? Presumably that provides a path, or there’s a way to reach Fortran through C. Not sure of perf overhead. Parallelism is possible (e.g. for/async, https://docs.racket-lang.org/reference/futures.html), not sure what “massive” means here. (Not really an expert in Racket or numerics to answer, just wanted to leave some pointers.)
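To illustrate the single-machine side of that: futures give shared-memory parallelism across cores. A minimal sketch:

```racket
#lang racket
(require racket/future)

;; A deliberately CPU-bound function.
(define (fib n)
  (if (< n 2) n (+ (fib (- n 1)) (fib (- n 2)))))

;; Spawn several futures; each may run on its own core in parallel.
(define fs
  (for/list ([i (in-range 4)])
    (future (lambda () (fib 25)))))

;; `touch` waits for a future and returns its result.
(map touch fs)
```

Note that futures only run in parallel for "safe" operations; anything else blocks until touched, so numeric inner loops are the best fit.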
“Massive” refers to parallelism across multiple nodes/computers, not just multiple cores in a single CPU (there’s probably a better term for it, maybe “distributed”?). I think futures only do the latter, though I could be wrong about that. There seems to be an MPI implementation for Racket (https://pkgs.racket-lang.org/package/openmpi), but it seems it hasn’t been updated in years (FYI, MPI is the tool often used for this kind of scientific computing, mostly in combination with Fortran and C).
Perhaps you are looking for https://docs.racket-lang.org/distributed-places/index.html?
Hmm, that might be something to look at. Admittedly, I mostly use tools like MPI as a black box for parallelism, and I don’t understand the underlying concepts well enough to know if it is equivalent to distributed places. Still, on the surface it looks like it’s doing similar things
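For context, distributed places reuse the same place-channel API as ordinary single-machine places, just stretched across nodes, so the message-passing model is broadly MPI-like. A minimal single-machine sketch:

```racket
#lang racket
(provide start-worker)

;; A place runs in a separate instance of the Racket runtime and
;; communicates only over place channels (message passing).
(define (start-worker)
  (place ch
    ;; worker: receive a number, send back its double
    (place-channel-put ch (* 2 (place-channel-get ch)))))

(module+ main
  (define p (start-worker))
  (place-channel-put p 21)
  (displayln (place-channel-get p)))
```

With distributed places, roughly the same channel operations apply, but the place is spawned on a remote node.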
As for the Fortran interface, apparently someone has already done an implementation with BLAS/LAPACK, which are written in Fortran. I guess I can check that code to see how they do it.
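The usual trick is that Fortran compilers like gfortran export `ddot` as `ddot_` and pass every argument by reference, so a BLAS call through the C FFI might look like this (a sketch; the library name "libblas" and its exact mangling vary by platform and compiler):

```racket
#lang racket
(require ffi/unsafe ffi/unsafe/define)

;; Assumption: a reference BLAS shared library is installed as "libblas".
(define-ffi-definer define-blas (ffi-lib "libblas"))

;; ddot(n, x, incx, y, incy): dot product of two double vectors.
;; Fortran convention: trailing underscore, all args by reference.
(define-blas ddot_
  (_fun (n    : (_ptr i _int))
        (x    : (_list i _double))   ; Racket list marshalled to a C array
        (incx : (_ptr i _int))
        (y    : (_list i _double))
        (incy : (_ptr i _int))
        -> _double))

(ddot_ 3 '(1.0 2.0 3.0) 1 '(4.0 5.0 6.0) 1) ; 4 + 10 + 18 = 32.0
```

So the FFI overhead is one marshalling step per call; for big arrays you’d pass `_f64vector`s or raw pointers instead of lists to avoid copying.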