
@samth: From your help, I managed to implement the variable elimination and constraint generation parts in Haskell to help with my own understanding. The one thing I’m confused about so far is what the σ represents in CG-Fun, which I don’t see mentioned elsewhere.

I guess my main question right now is “How do you debug this stuff”? lol

Does anyone have recommendations for debugging FFI stuff? I assume the answer is something like “read the docs back and forth, do a debug-build of racket and re-learn gdb”.

lexi.lambda: sigma is a substitution


Alright… I’m still not clear where it comes from in that particular rule.

in the App-Inf-Alg rule, σ comes from σ ∈ ⋀C↓R

the last premise of the rule

how to compute that is defined on the previous page, where it says “the minimal substitution σ_CR can be defined …”

Yes, I see that, but I’m not sure what it means within the CG-Fun rule, since my understanding is that the substitution is computed from the calculated constraints.

I also don’t know where σ is coming from in that rule

that looks like a typo to me

Hmm. Safe to ignore it, then?

Yes, I think so

ok, thanks

Probably there’s some other disjointness condition that should be there, but I couldn’t figure out what it should be

@jerryj Uhh…do you just mean debugging in general?

If you can be a bit more precise that would be helpful, otherwise ya, all I got is read the manuals, and squint at your code harder.

@leif I’m having trouble debugging some FFI wrappers I wrote. I’m not getting expected results; it smells like I’m not dereferencing a pointer right or something. I don’t know if there is a reasonable way to poke around in memory after that FFI call to see if I can discover some clues about what I did wrong.

I’ve had good success with a debug build of racket + gdb/lldb

Like, I imagine a gdb debug session might look like: 1) breakpoint after the troublesome FFI call, 2) peek at the pointer that’s giving me garbage, 3) inspect what that pointer points to and see if it has the correct data / etc. But that seems very difficult. I am willing to do difficult work, but I am just trying to get a sense of whether that’s how a real expert would do things.

minor gotcha is that your ffi libs are loaded dynamically, so you’ll need to use file name + line to set breakpoints, or wait until after the symbols are loaded

the underlying scheme object format in memory is relatively simple, but can require a bit of spelunking through header files to figure out

@thinkmoore awesome, thanks!

I’m OK with spelunking; it sounds like I can rely on existing skills to do some of this work. I was worried that I might be getting in way over my head or something.

@jerryj there are some gotchas that might trip you up, if for example the problem is racket moving things on you… does your FFI pass racket values back and forth?

yes :grin:


wow it was a lot easier to build racket than I thought it would be. you all are amazing :heart_eyes:

I’m now taking a look at actually generating substitutions from the calculated constraints, and once again, I think I’m getting tripped up over the notation. Section 3.5 states that “for each (S <: X_i <: T) ∈ C”, we can calculate σ_CR(X_i).
I think I understand a few things. R is the return type of the function, and variance can be calculated just by recursively inspecting the type. However, what I don’t understand is the text immediately following that: is it saying that we can prove σ_CR is just σ? What is σ′?

Additionally, the section that describes how σ_CR is calculated describes cases for covariance, contravariance, and invariance, but it also leaves an additional else branch. When might a type be none of those things?

Oh, nevermind that last bit; I missed that it requires invariance and S = T.

first, σ is the minimal substitution, and we prove that σ_CR is minimal and unique

second, the last case is sometimes called “bivariant” and it means the variable isn’t referenced at all
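
Since we’re on the variance cases: here’s roughly how that case split might look in a toy Haskell encoding, in case it helps anyone else implementing this. The `Type`/`Variance` representations below are invented for illustration (they’re not the paper’s), and having the constant/bivariant case pick the lower bound is my reading of the minimal-substitution definition.

```haskell
import qualified Data.Map as Map

data Type = TInt | TString | TTop | TBot | TVar String | TFun [Type] Type
  deriving (Eq, Show)

data Variance = Covariant | Contravariant | Invariant | Constant
  deriving (Eq, Show)

-- Bounds S <: X <: T, keyed by the variable name X.
type Constraints = Map.Map String (Type, Type)

-- Does X occur in a type?
occurs :: String -> Type -> Bool
occurs x (TVar y)    = x == y
occurs x (TFun as r) = any (occurs x) as || occurs x r
occurs _ _           = False

-- Variance of X in R: argument positions flip variance, and mixed
-- occurrences collapse to Invariant.
variance :: String -> Type -> Variance
variance x t
  | not (occurs x t) = Constant
  | otherwise        = go t
  where
    go (TFun as r) = foldr1 joinV
      ( [flipV (variance x a) | a <- as, occurs x a]
     ++ [variance x r | occurs x r] )
    go _ = Covariant   -- only reachable for TVar x itself
    flipV Covariant     = Contravariant
    flipV Contravariant = Covariant
    flipV v             = v
    joinV a b | a == b    = a
              | otherwise = Invariant

-- σ_CR(X): choose the bound dictated by X’s variance in R.
minimalSubst :: Type -> Constraints -> Maybe (Map.Map String Type)
minimalSubst r = Map.traverseWithKey pick
  where
    pick x (s, t) = case variance x r of
      Constant      -> Just s          -- X unreferenced in R: lower bound
      Covariant     -> Just s
      Contravariant -> Just t
      Invariant     -> if s == t then Just s else Nothing
```

So for R = X with Int <: X <: Top you get X↦Int, and the invariant case with unequal bounds is where this returns Nothing.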

Alright, that makes sense. The proofs are pretty much over my head at this point, but I trust that they are correct. ;)

yeah, you don’t need to pay attention to most of them

you could always come to OPLSS or the Racket Summer School, though :wink:

Maybe I can figure out how to find the time away from work. ;)

Does this algorithm guarantee that σ_CR(X_i) is unique? That is, I don’t have to worry about producing conflicting substitutions?

Yes, that’s what the proof proves

ok, I figured as much, but I couldn’t glean that from trying (and failing) to understand it.

hmm… I’ve implemented this, but I may have implemented it wrong. my implementation seems to sometimes generate inconsistent substitutions for ill-typed expressions.

When I try and apply a function of type All(X, Y)(X, X, Y)→X to arguments of type Int, String, and String, it should produce an error, but it happily produces a substitution including both X↦Int and X↦String.

I would imagine the constraint generation part is correct, since it generates the constraints Int <: X <: Top and String <: X <: Top, which seems right.

you need to combine the two constraint sets together before generating the substitution

that will fail, giving you your type error

that’s the big ⋀ in App-Inf-Alg
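
For concreteness, that constraint-combining operation might be sketched like this in toy Haskell: per variable, join the lower bounds and meet the upper bounds. The lattice here is made up for illustration, and I’ve deliberately left the join of unrelated base types undefined (no unions), so that the Int-vs-String case fails the way described above.

```haskell
import qualified Data.Map as Map
import Data.Map (Map)

data Type = TInt | TString | TTop | TBot | TVar String
  deriving (Eq, Show)

-- Toy subtyping: Bot and Top, everything else related only to itself.
subtypeOf :: Type -> Type -> Bool
subtypeOf TBot _ = True
subtypeOf _ TTop = True
subtypeOf s t    = s == t

-- Partial join/meet: Nothing for unrelated types, since this toy
-- lattice has no unions.
joinT, meetT :: Type -> Type -> Maybe Type
joinT s t
  | s `subtypeOf` t = Just t
  | t `subtypeOf` s = Just s
  | otherwise       = Nothing
meetT s t
  | s `subtypeOf` t = Just s
  | t `subtypeOf` s = Just t
  | otherwise       = Nothing

-- X ↦ (lower bound, upper bound)
type Constraints = Map String (Type, Type)

-- C ∧ D: per variable, join the lower bounds and meet the upper bounds,
-- then check that each resulting interval is still satisfiable.
meetC :: Constraints -> Constraints -> Maybe Constraints
meetC c d = do
  combined <- sequence (Map.unionWith both (fmap Just c) (fmap Just d))
  if all (\(s, t) -> s `subtypeOf` t) combined
    then Just combined
    else Nothing
  where
    both mx my = do
      (s1, t1) <- mx
      (s2, t2) <- my
      s <- joinT s1 s2
      t <- meetT t1 t2
      pure (s, t)
```

With this, combining Int <: X <: Top with String <: X <: Top gives Nothing, which is the type error in question (a system with union types would presumably behave differently here).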

ah, right, that makes sense. and that can simply use the subtyping relation to determine if the constraints are compatible?

how are type errors produced by that process usually formatted? it seems like the resulting errors could be somewhat confusing.

C ∧ D is defined at the top of section 3.3

yes, I’m looking at that right now

well, TR just says it can’t infer the type, and gives you the type of the function and of the arguments

yeah. that seems somewhat worse than unification error messages, though perhaps less likely to produce misleading messages (since it simply doesn’t try to provide anything more precise).

If you have ideas for including more information, I’d be interested :slightly_smiling_face:

I have no idea :) if that’s the best people have figured out how to do with this inference technique, I doubt I’ll be able to come up with anything better. but I’ll probably start thinking about it once I internalize what’s going on better.

for some reason I think I expected local type inference to be a simpler process than HM, but it seems to be significantly more complicated in practice.

The actual algorithm W isn’t that simple either

I suppose not, but I’m not sure I’ve ever actually implemented the “real” algorithm W

the simplicity of a single constraint generation pass combined with an extremely straightforward constraint solving pass is pretty easy to understand at a high level, though

right, the tricky part is (a) when you want to make it efficient and (b) when you add generalization

yeah, I’ve peeked through OutsideIn(X) and it’s terrifying

I just mean generalization in the let-polymorphism sense
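
In case a concrete picture helps, the let-generalization step under discussion is essentially “quantify the type variables free in the inferred type but not free in the environment”. This is a toy encoding (the environment is reduced to just the set of type variables it mentions), not any real implementation:

```haskell
import qualified Data.Set as Set
import Data.Set (Set)

data Type   = TInt | TVar String | TFun Type Type deriving (Eq, Show)
data Scheme = Forall [String] Type deriving (Eq, Show)

-- Type variables occurring free in a type.
freeVars :: Type -> Set String
freeVars TInt       = Set.empty
freeVars (TVar a)   = Set.singleton a
freeVars (TFun s t) = freeVars s `Set.union` freeVars t

-- Generalize at a let: quantify over the variables free in the type
-- but not mentioned by the environment.
generalize :: Set String -> Type -> Scheme
generalize env t =
  Forall (Set.toList (freeVars t `Set.difference` env)) t
```

The subtlety in W is that `env` must track the variables still free in the enclosing context, which is exactly what local bindings complicate.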

oh, sure. I didn’t realize algorithm w didn’t include that.

though I think I buy the argument against let generalization

I write a lot of haskell with MonoLocalBinds implicitly enabled and it never really comes up in practice

no, it does include that

clearly I’m not communicating well

oh, I see. you’re saying algorithm w is tricky because it does those two things.

Are the Scheme_Type values meant to be human readable? Is there a listing somewhere of what types the integer values correspond to?


!! Thanks!

alright, looks like I got my haskell impl working. thanks so much for your help! :)

one remaining question I have… this paper doesn’t mention alpha renaming anywhere. is it something I ever need to worry about? that is, does it assume <:, ∧, and ∨ check for alpha equivalence?