
I fixed the issue with the beta func, because I was not adding more than two numbers :disappointed: but now it gives me this:
(realbetaFunc 1645.0 357.0)
printing: 1645.0
printing: 357.0
gsl: exp.c:545: ERROR: underflow
Default GSL error handler invoked.
Process Racket REPL aborted (core dumped)
where realbetaFunc just takes reals and gives reals, no logfloat here

Done

@carette has joined the channel

and joined

That underflow is ‘correct’, in the sense that (realbetaFunc 1645.0 357.0) is ~~ 3.387 10^(–209)

Oops, typo, that exponent should be –409.

So the question becomes: how to get underflow to go to 0 ?

Another question would be: why is this not in logFloat? Then the (base e) number would be –940.53, which is perfectly fine.

As I said in the email, this is inside superpose, which converts from the log domain to real again, so I removed the back-and-forth conversion for efficiency.

Okay, I looked at the implementation of beta in GSL, which does what you said: first beta_ln, then exp; but the exp gives underflow. hmm,

I guess I can do this:
probability-defines.rkt:test> (betaFuncreal 1645.0 357.0)
-940.5371673608024
probability-defines.rkt:test> (c-prob2real (betaFuncreal 1645.0 357.0))
0.0
probability-defines.rkt:test>

here betaFuncreal takes real and gives prob

and my prob2real doesn’t underflow, so I will use this then
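(A minimal Python sketch of the situation above, for reference. The function names `log_beta` and `prob2real` are illustrative, not the actual Racket/GSL bindings; the lgamma identity is the same one GSL's beta_ln uses.)

```python
from math import lgamma, exp

def log_beta(a, b):
    # log Beta(a, b) = log Gamma(a) + log Gamma(b) - log Gamma(a + b),
    # the same identity GSL's beta_ln computes.
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def prob2real(log_p):
    # Going from the log domain back to a real: math.exp underflows
    # silently to 0.0 for very negative inputs, rather than erroring
    # the way GSL's default error handler does.
    return exp(log_p)

x = log_beta(1645.0, 357.0)  # about -940.537, matching the REPL value above
prob2real(x)                 # 0.0, instead of an underflow abort
```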

hmm, if you look at the stack.yaml, I include the hakaru from local folder. I wonder how the versioning works here. Are you using cabal or stack?

I think for stack it might just be picking the local without any versioning, I am not sure.

okay the new way works. :smiley:

made a change to the output file format: separate each sweep (I think that’s what we named it) with a tab instead of putting them in parens and separating by spaces. This will make parsing a lot easier, as you can just split a line on tabs


If someone thinks this is not a good idea we can change back, as I think for now nobody has generated output data, other than me.
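(A hypothetical parsing sketch for the new format: the thread only says sweeps are tab-separated, so the assumption here that values inside a sweep are space-separated is mine, not confirmed.)

```python
def parse_line(line):
    # Each tab-separated field is one sweep; values within a sweep
    # are assumed to be space-separated.
    return [sweep.split() for sweep in line.rstrip("\n").split("\t")]

parse_line("1 2 3\t4 5 6")  # [['1', '2', '3'], ['4', '5', '6']]
```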

Benchmark output thread!

While generating the clinical trial benchmark, the reported time for each trial is 0 ms, as each trial is very small. But adding up the overall times for different N gives us a real sense of performance. I wonder what we can do about this?


Try timing with (current-inexact-milliseconds)

That will give higher resolution
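(The Racket fix is (current-inexact-milliseconds); here is an analogous sketch in Python, where a float-valued clock likewise keeps very short runs from rounding down to 0 ms. The helper name `time_ms` is illustrative.)

```python
import time

def time_ms(thunk):
    # time.perf_counter() is a float-valued high-resolution clock,
    # so sub-millisecond runs yield a nonzero fractional result
    # instead of a 0 ms reading from an integer-millisecond timer.
    start = time.perf_counter()
    thunk()
    return (time.perf_counter() - start) * 1000.0

elapsed = time_ms(lambda: sum(range(10000)))
```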

let me try

I coded up how to compute accuracy for GmmGibbs. It’s in runners/accuracy/GmmGibbs.hs right now, and contains some simple code for parsing/printing our shared file format that would be useful for other runners and accuracy computers coded in Haskell. Where should this useful-to-share code go?

I think runners/hk is a stack package for generating executables; if you want to make it compile with stack you can move it there, in some specific folder.

something that takes command line arguments for the benchmark and the input file name and generates the accuracy would be nice for me to run and compare

@rjnw That’s what I wrote for GmmGibbs, except it takes two command-line arguments: the input file name, then the log file name (which can be “/dev/stdin”). I moved it under runners/hk/GmmGibbs, and I tried to add it to runners/hk/Makefile, but I don’t understand that Makefile…

My laptop boots again now. I’ll probably focus on writing tonight.

I just wrote an abstract

hopefully it’s an accurate one — perhaps @ccshan or @carette can confirm?

@ccshan runners/hk has a cabal file; make runs stack build and install

other than that it just copies prog from test code into each folder

Looks good (well, it needs proofreading). Thanks. By the way, is there “per-execution” LLVM optimization currently?

I don’t believe so (@rjnw can confirm) but the compilation is definitely “per execution”

Also, “per execution” was an awkward phrase, but I couldn’t come up with a better one

I am currently working on it

it’s per input file

yeah, I was just trying to come up with a way to describe that in the abstract

yeah I can’t think of a better word

per test maybe

depending on what we call running the code on all rows from one input file

Maybe it belongs to a level of detail that need not go into the abstract.

Please don’t let me stop you from writing.
Here’s a global writing issue: how to notate Hakaru programs in this paper? The notation in Figure 4 (“Key informal typing rules”) works for discussing simplification, and I propose that this figure be introduced in Section 2 and collect all constructions of Hakaru that we need to discuss in this paper. So for example, to talk about histogram and compilation, we should add bucket and let to this figure. Do we need to change the current notation?

currently working on writing in the Compilation section