Interestingly, all implementations of a generic interface gen:X will provide an X? predicate to determine if a value implements the generic methods. All but gen:equal+hash. Why is this? How can I determine if a value implements the equal-proc, hash-proc and hash2-proc without an equal+hash? predicate?
Any suggestions from generics users? :slightly_smiling_face:
@pocmatos every value implements that interface
similarly, every value can be written
@samth what about my own values?
i.e. structs
every value can be hashed and compared for equality and written
don’t I need to implement those methods on my structs?
you can customize that behavior
but (equal? a b) always works, as does (equal-hash-code a), (write a), etc
…as in, always works, properly recursing with the equality procedure?
yes
that depends on the inspector for your structs
(I’d recommend glancing at #:transparent in the struct docs)
ah, so it won’t work for non-transparent structs?
I think “work” is not a good word to use here
ok, it won’t give an error… but what I expect from equal? is to tell me whether the values are equal, not whether they are ‘the same’.
it won’t "[recurse with] the equality procedure" if #:transparent is not specified
(struct opaque (a b c))
(define x (opaque 1 2 3))
(define y (opaque 1 2 3))
(equal? x y) ;; => #f
you’ll get pointer equality
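For contrast, a quick sketch of what #:transparent buys you (the struct name here is made up):

```racket
;; With #:transparent, equal? recurs into the fields automatically.
(struct open (a b c) #:transparent)

(define x (open 1 2 3))
(define y (open 1 2 3))

(equal? x y) ;; => #t
```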
@greg that’s what I expected.
à la Java, if you don’t implement a class’s comparison functions
and that’s why it would be good to know if opaque implements ‘proper’ equality through gen:equal+hash
@pocmatos the point I’m making is that there is no such thing as “proper” equality
This is because I want to force the clients of my library to implement it, but I can’t do so if I have no predicate
what the X? predicate indicates is “will calling this generic method succeed”, and that’s true for every value for equal?, equal-hash-code, write, etc
for other generic methods, sometimes you’d get a runtime error because the value didn’t implement that generic
that never happens with gen:equal+hash
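For reference, here’s a rough sketch of customizing that behavior on an opaque struct via gen:equal+hash (the struct and field names are made up):

```racket
;; An opaque struct that still compares field-wise, because it
;; implements the gen:equal+hash methods itself.
(struct insn (opcode args)
  #:methods gen:equal+hash
  [(define (equal-proc a b recur)
     (and (recur (insn-opcode a) (insn-opcode b))
          (recur (insn-args a) (insn-args b))))
   (define (hash-proc a recur)
     (recur (list (insn-opcode a) (insn-args a))))
   (define (hash2-proc a recur)
     (recur (list (insn-opcode a) (insn-args a))))])

(equal? (insn 'add '(r0 r1)) (insn 'add '(r0 r1))) ;; => #t
```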
ok, my interpretation of X? is incorrect… but I still think that, fundamentally, it would be great to know whether customers of my library have customized the equality procedures of the values they pass into my functions. It would give me more assurance than knowing nothing.
@pocmatos what library is this for? how will that assurance help you?
so, I am reimplementing some parts of my superoptimizer using generics. I define an interface using generics that customers of the library need to implement.
I am specifically looking at generics for a machine instruction.
a machine instruction can have instruction packets, each packet will have operands, args, conditional flags etc depending on the architecture.
However, my library doesn’t care about most of this; it just needs to be able to do equal? between two instructions and get the right answer.
my suggestion would be just to call equal?
I can write something like this for an instruction contract that the client has to implement: (define mach-insn? (and/c (sequenceof insn-packet?) serializable? custom-write?)).
I think checking for custom-write? is a mistake
However, if I can’t force the customer to define an equality and hash procedure on their implementation of a machine instruction, then I can’t tell them: hey, I can’t accept this, because without it your insn most likely won’t behave correctly under equal?
in particular, a struct that is #:transparent won’t implement those properties but will work just fine in your code
you’re talking specifically about which properties? custom-write?
oh… maybe even serializable?, I guess.
#:transparent structs are not serializable?, but #:prefab structs are
Would it suffice to check for non-#:transparent structs? Because you can use struct-info for that. Something like:
(struct opaque (a b))
(struct transparent (a b) #:transparent)
(struct-info (opaque 1 2))      ;=> (values #f #t)
(struct-info (transparent 1 2)) ;=> (values #<struct-type:transparent> #f)
;; So...
(define (instance-of-transparent-struct? v)
  (and (struct? v)
       (let-values ([(st _) (struct-info v)])
         (and st #t))))
(instance-of-transparent-struct? (opaque 1 2))      ;=> #f
(instance-of-transparent-struct? (transparent 1 2)) ;=> #t
#:transparent structs, however, write the way you (maybe) expect
I think the point I am trying to make with this design is that I want to force the user to implement all the required machinery I need, instead of accepting any type of value and hoping nothing is forgotten. I can write “implement gen:equal+hash” in the docs, but it would be better to enforce this at runtime.
(“suffice” is maybe the wrong word. I mean, is that a way to prevent at least one common source of error)
or, something that just occurred to me: instead of relying on existing generic interfaces to force the user to write certain methods, I create a gen:mach-insn, which the user has to implement with all the methods I require, like serialize, deserialize, equal?, etc.
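A minimal sketch of what that dedicated interface might look like with define-generics (all names here are hypothetical):

```racket
#lang racket/base
(require racket/generic)

;; The library defines the interface it actually needs.
(define-generics mach-insn
  (insn-equal? mach-insn other)
  (insn-serialize mach-insn))

;; A client implements it on their own representation:
(struct arm-insn (opcode operands)
  #:methods gen:mach-insn
  [(define (insn-equal? a b)
     (and (arm-insn? b)
          (equal? (arm-insn-opcode a) (arm-insn-opcode b))
          (equal? (arm-insn-operands a) (arm-insn-operands b))))
   (define (insn-serialize a)
     (list 'arm-insn (arm-insn-opcode a) (arm-insn-operands a)))])

;; define-generics also gives the library a real predicate:
(mach-insn? (arm-insn 'add '(r0 r1))) ;; => #t
(mach-insn? 42)                       ;; => #f
```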
That’s a good point. To work well, you require the values to have a certain interface. So you express that.
one other option would be to just require all these structures to be prefabs
another thing is that this forces the user to use structs to define their mach-insn. Maybe I should instead ask the user to fill a struct mach-insn-info with slots for these procedures and use those. This would allow the user to define mach-insn as a class and fill in the slots to redirect the calls into their class.
@samth yes, that would be another option, but again, I would force the user to use a struct to implement a value I don’t care about. I just care about the interface. Maybe the user has a whole hierarchy of machine instructions and wants to use classes.
my recommendation then would be to trust the user of your library and just call equal? etc
ok.
I will have a think, this discussion certainly helped. Thanks all for your time.
I was really keen on mach-insn-info, but suddenly I feel like I am reinventing the wheel with generics. argh!
In #beginners someone asked about visiting definitions. This reminded me to try something. displayln is implemented in Racket:
$ /Applications/Racket_v6.90.0.30/bin/racket
Welcome to Racket v6.90.0.30.
> (identifier-binding #'displayln)
'(#<module-path-index:"misc.rkt" "pre-base.rkt" "private/base.rkt" racket/base>
displayln
#<module-path-index:(lib "racket/init")>
displayln
0
0
0)
Whereas in Racket < 6.9, display is implemented in C in #%kernel. Let’s see about 6.90:
> (identifier-binding #'display)
'(#<module-path-index:'#%runtime>
display
#<module-path-index:(lib "racket/init")>
display
0
0
0)
This is now #%runtime (instead of #%kernel), but it remains “opaque”. From my (sketchy) understanding, I thought this might lead to some .rkt source? Did I just pick a bad example, i.e. is display one of the (fewer) things still implemented in C?
@greg yes, display is implemented in C
note that the expander functions implemented in racket now will still have that same behavior
ie
> (identifier-binding #'expand)
'(#<module-path-index:'#%main>
expand
#<module-path-index:(lib "racket/init")>
expand
0
0
0)
when running on Chez, display is implemented in Racket, though
Is that “opaque” #<module-path-index:'#%main>, instead of a module path to a .rkt source, intentional? As in, we don’t want to expose that to a visit-definition feature in DrRacket or racket-mode, for reasons? Or not intentional?
it’s not intentional, but it would be not-easy to support
linklets etc?
in particular, all the source information/distinctions between modules/concept of modules gets erased before it gets built into the binary
I can see how that would make it harder. :slightly_smiling_face: OK.
you could potentially try some tricks to make it work
for example, require the expander/main.rkt file, and then look where that name is bound
[samth@huor:~/sw/plt (kw-le) plt] r
Welcome to Racket v7.0.0.1.
> (identifier-binding #'expand)
'(#<module-path-index:'#%main>
expand
#<module-path-index:(lib "racket/init")>
expand
0
0
0)
> (require (prefix-in e: "racket/src/expander/main.rkt"))
> (identifier-binding #'e:expand)
'(#<module-path-index:"eval/main.rkt" "racket/src/expander/main.rkt">
expand
#<module-path-index:"racket/src/expander/main.rkt">
expand
0
0
0)
it would be really cool to have that work in racket-mode
I have some code that is some combination of useful and horrifying: https://github.com/greghendershott/racket-mode/blob/master/defn.rkt It’s not a package because I want to deliver Elisp + Racket in one install from MELPA. But maybe I should make it into a Racket package (I could still “inline”/"vendor" a copy, I suppose, for MELPA). And it could get more eyeballs and PRs and be an actually good example.
that seems right to me
also, presumably you can look at the debugging symbols and find the c code that implements something :slightly_smiling_face:
(gdb) info functions ^scheme_display$
All functions matching regular expression "^scheme_display$":
File ../../../racket/gc2/../src/print.c:
void scheme_display(Scheme_Object *, Scheme_Object *);
@lexi.lambda Hey I have exactly the same issue but can’t seem to find the mail in the mailing-list. Care to share it here?
In the Q&A for my racketCon talk, someone asked if I could add that. I said I’d do it if it would help e.g. Matthew, but maybe the C would be rewritten in Racket soon. Matthias yelled “YES!” So. :simple_smile:
Thanks :smile:
Of course that’s the same Matthias who walked out during it. :smile: I really need to make a GIF out of these few seconds: https://www.youtube.com/watch?v=QWiteH8PARQ&feature=youtu.be&t=2m23s
there will still be some C code for a while, it seems
This is kind of a naive question, but is there something that allows you to macro-rewrite variable references? E.g., #%variable-reference is obviously not that, but the name might allude to the idea that variable refs would macro-expand to applications of it (of course, they don’t). My reason is that I’m implementing a #lang that needs to treat variable refs in a slightly special way.
@krismicinski no, there’s not something like that
ah ok, thanks for letting me know. In our little language we can probably just have a special deref form, I suppose.
I think that’s probably the cleanest way of handling this
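A minimal sketch of such an explicit deref form (the printf is a stand-in for whatever special treatment the #lang actually needs; the names are made up):

```racket
;; A surface form that wraps a variable reference so the language
;; can do extra work at each access site.
(define-syntax-rule (deref x)
  (begin
    (printf "accessing ~a\n" 'x) ;; placeholder for the "special" treatment
    x))

(define counter 41)
(deref counter) ;; prints "accessing counter", then evaluates to 41
```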
lol
@mflatt Happy to report that all seems to be fixed with the new snapshot. Thank you so much for the quick fix. Of course, now I miss the turtle…:grinning:
@greg That’s awesome.
@jeapostrophe Is there any way to specify an or in a package’s build deps? Like, it needs to have either package a installed, or package b installed, but both are not required.
Looking for some Windows expertise: It has been clear for a while that files on Windows get opened in the background by things like the search indexer and virus checkers. It has also been clear that a file cannot be fully deleted while it’s open; you can delete-file the file, but the file name remains occupied. You can rename a file that is open, though. So, it’s apparently well known that to delete a directory tree, you need to move each file to the temp directory and delete it there, so the temp name can stick around for a while while you carry on deleting the original enclosing directory. I didn’t really understand that until now, but now I know. (What happens if the tree you want to delete isn’t on the same drive as a known temp directory? Good luck finding a writable place on the right drive.) With further investigation, it seems that a directory cannot be renamed if a file within the directory is open. And since files randomly get opened in the background, that means any attempt to rename a non-empty directory can simply fail. In other words, any Racket program that uses rename-file-or-directory on a non-empty directory can randomly fail on Windows. Is that really true? And is the only way to make renaming work reliably to move each item within the directory to the temp directory, then rename, then move each item back?
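The move-to-temp-then-delete dance described above might look roughly like this in Racket (a sketch; the function name is made up, and it assumes the temp dir is on the same drive as path, which, as noted, may not hold):

```racket
;; Delete a file "fully" on Windows by first renaming it into the
;; temp directory, so the original name is freed immediately even
;; if some background process still holds the file open.
(define (delete-via-temp path)
  (define tmp (build-path (find-system-path 'temp-dir)
                          (symbol->string (gensym "deleting-"))))
  (rename-file-or-directory path tmp)
  (delete-file tmp))
```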
@mflatt that unfortunately sounds plausible. I haven’t looked through the windows apis in a while, but it is possible to enumerate the handles for a given file, though, and figure out which processes own them, and wait for those processes to stop being so selfish…
@leif nope
:disappointed:
Oh well, thanks anyway
If it is really necessary, then I think the thing to do is have two different sources out there in the world that provide the same package name
A little bit unsatisfying, I know
Hehe…ya it is.
Oh well, thanks though.
I just feel bad for all of the native packages I have on the package server…but I really don’t see a better way to do it.
@jbclements How hard would it be for you to update the portaudio binary (for os x), that your Racket package uses?
@mflatt @blerner You can monitor and process USN records (https://msdn.microsoft.com/en-us/library/aa363798(v=vs.85).aspx) related to the folder you are interested in to see if files are in use, and respond when they are closed (USN_REASON_CLOSE). It isn’t pretty, but in the long run it will be less frustrating, more stable, and kinder on system resources than monitoring file handles.
Though consider: the possibility that an operation on a non-empty folder in Windows could fail at any time is normal, and it is up to the developer to account for it. How far the developer goes should be determined by how critical the operation is, and whether it is appropriate to continue trying if something goes wrong. Handling it automatically would make a nice option, but I would not want it as default behavior.
If you assume an aggressive posture on file/folder delete/rename operations to maximize success, it will lead to confusion on a good day, and destruction on a bad day. If, for example, the wrong service/driver locks a file because it can’t gracefully handle its non-existence, and you delete or rename that file during the millisecond window in which the file is closed and reopened, even when the developer was only making a casual or accidental attempt, it could cause a bluescreen. If the service/driver then cannot be started on next boot because that file is missing, it could cause a startup repair. If the startup repair fails to return the system to a bootable state, as can happen when some FDE products are in use, the machine often will end up in a startup repair loop. I had this scenario occur on thousands of devices in my environment a couple of years ago. It was expensive.