rjnw
2017-11-12 10:10:53

I fixed the issue with the beta func (I was not adding more than two numbers :disappointed:), but now it gives me this:
(realbetaFunc 1645.0 357.0)
printing: 1645.0
printing: 357.0
gsl: exp.c:545: ERROR: underflow
Default GSL error handler invoked.
Process Racket REPL aborted (core dumped)
where realbetaFunc just takes reals and gives reals, no logfloat here


samth
2017-11-12 15:42:04

Done


carette
2017-11-12 15:59:10

@carette has joined the channel


carette
2017-11-12 16:02:30

and joined


carette
2017-11-12 16:06:22

That underflow is ‘correct’, in the sense that (realbetaFunc 1645.0 357.0) ≈ 3.387 × 10^(–209)


carette
2017-11-12 16:08:07

Oops, typo, that exponent should be –409.


carette
2017-11-12 16:08:38

So the question becomes: how do we get underflow to go to 0?


carette
2017-11-12 16:09:37

Another question would be: why is this not in logFloat? Then the (base e) number would be –940.53, which is perfectly fine.
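(A minimal Python sketch of the arithmetic being discussed, assuming the standard Beta function B(a, b) = Γ(a)Γ(b)/Γ(a+b); `log_beta` is an illustrative name, not project code. The log-domain value is a perfectly ordinary double, but the real-domain value is smaller than any subnormal double, so exponentiating must underflow.)

```python
# Sketch, not project code: compute log B(a, b) via math.lgamma,
# using B(a, b) = Gamma(a) * Gamma(b) / Gamma(a + b).
import math

def log_beta(a, b):
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

lb = log_beta(1645.0, 357.0)
print(lb)            # roughly -940.54: fine as a log-domain double
print(math.exp(lb))  # 0.0: the real value ~3.4e-409 is below the
                     # smallest subnormal double (~4.9e-324)
```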


rjnw
2017-11-12 17:09:46

as I said in the email, this is inside superpose, which converts from the log domain back to real, so I removed the back-and-forth conversion for efficiency.


rjnw
2017-11-12 17:13:45

okay, I looked at the implementation of beta in GSL, which does what you said: first beta_ln, then exp. But exp gives underflow. hmm,


rjnw
2017-11-12 17:14:16

I guess I can do this:
probability-defines.rkt:test> (betaFuncreal 1645.0 357.0)
-940.5371673608024
probability-defines.rkt:test> (c-prob2real (betaFuncreal 1645.0 357.0))
0.0


rjnw
2017-11-12 17:14:28

here betaFuncreal takes a real and gives a prob


rjnw
2017-11-12 17:14:51

and my prob2real doesn't underflow, so I will use this then
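(A hedged Python sketch of that last step: most languages' plain exp, unlike GSL's gsl_sf_exp, silently flushes underflow to 0.0, which is exactly the behavior wanted here. `prob2real`'s contract is taken from the messages above; its body is an assumption.)

```python
import math

# Assumed analogue of c-prob2real: convert a log-domain probability
# back to the real domain. math.exp flushes underflow to 0.0 rather
# than signalling an error the way GSL's exp does.
def prob2real(log_p):
    return math.exp(log_p)

print(prob2real(-940.5371673608024))  # 0.0
```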


rjnw
2017-11-12 17:18:25

hmm, if you look at the stack.yaml, I include hakaru from a local folder. I wonder how the versioning works here. Are you using cabal or stack?


rjnw
2017-11-12 17:18:52

I think for stack it might just be picking the local copy without any versioning; I am not sure.


rjnw
2017-11-12 17:23:56

okay the new way works. :smiley:


rjnw
2017-11-12 17:50:35

made a change to the output file format: separate each sweep (I think that's what we named it) with a tab instead of putting them in parens and separating by spaces. This will make parsing a lot easier, as you can just split a line on tabs
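(To illustrate the parsing win, a small Python sketch; the field values and the exact layout of a line are made up, not from the actual benchmark output.)

```python
# One sweep per tab-separated field: a line splits directly into
# sweeps, no paren-matching needed. Values here are illustrative.
line = "0.12\t0.34\t0.56\n"
sweeps = [float(field) for field in line.strip().split("\t")]
print(sweeps)  # [0.12, 0.34, 0.56]
```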



rjnw
2017-11-12 17:53:23

If someone thinks this is not a good idea we can change it back; I think nobody has generated output data yet, other than me.


rjnw
2017-11-12 17:53:43

Benchmark output thread!


rjnw
2017-11-12 17:55:00

While generating ClinicalTrial, the reported time for each trial is 0 ms, as it is a very small trial. But for the overall time for different N, adding them up gives us a real sense of performance. I wonder what we can do about this?



samth
2017-11-12 18:11:02

Try timing with (current-inexact-milliseconds)


samth
2017-11-12 18:11:15

That will give higher resolution
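(A Python analogue of the suggestion, since the same trick applies outside Racket: use a high-resolution monotonic clock and amortize many repetitions of a sub-millisecond trial. `time_ms` and the rep count are illustrative, not project code.)

```python
import time

# Time `reps` calls with the high-resolution monotonic clock and
# report fractional milliseconds per call, so trials far under 1 ms
# still get a meaningful nonzero reading.
def time_ms(thunk, reps=1000):
    start = time.perf_counter()
    for _ in range(reps):
        thunk()
    return (time.perf_counter() - start) * 1000.0 / reps

print(time_ms(lambda: sum(range(100))) > 0.0)  # True
```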


rjnw
2017-11-12 18:19:18

let me try


ccshan
2017-11-12 22:46:11

I coded up how to compute accuracy for GmmGibbs. It's in runners/accuracy/GmmGibbs.hs right now, and contains some simple code for parsing/printing our shared file format that would be useful for other runners and accuracy computers coded in Haskell. Where should this useful-to-share code go?


rjnw
2017-11-13 00:32:56

I think runners/hk is a stack package for generating executables; if you want to make it compile with stack you can move it there, into some specific folder.


rjnw
2017-11-13 00:34:07

something that takes command-line arguments for the benchmark and the input file name and generates the accuracy would be nice for me to run and compare


ccshan
2017-11-13 01:59:39

@rjnw That's what I wrote for GmmGibbs, except it takes two command-line arguments: the input file name, then the log file name (which can be "/dev/stdin"). I moved it under runners/hk/GmmGibbs, and I tried to add it to runners/hk/Makefile, but I don't understand that Makefile…


ccshan
2017-11-13 02:00:09

My laptop boots again now. I’ll probably focus on writing tonight.


samth
2017-11-13 02:00:17

I just wrote an abstract


samth
2017-11-13 02:00:43

hopefully it’s an accurate one — perhaps @ccshan or @carette can confirm?


rjnw
2017-11-13 02:01:30

@ccshan runners/hk has a cabal file; make does stack build and install


rjnw
2017-11-13 02:02:18

other than that it just copies the prog from the test code into each folder


ccshan
2017-11-13 02:05:22

Looks good (well, it needs proofreading). Thanks. By the way, is there “per-execution” LLVM optimization currently?


samth
2017-11-13 02:07:05

I don’t believe so (@rjnw can confirm) but the compilation is definitely “per execution”


samth
2017-11-13 02:07:26

Also, “per execution” was an awkward phrase, but I couldn’t come up with a better one


rjnw
2017-11-13 02:07:37

I am currently working on it


rjnw
2017-11-13 02:07:59

it's per input file


samth
2017-11-13 02:08:19

yeah, I was just trying to come up with a way to describe that in the abstract


rjnw
2017-11-13 02:08:31

yeah I can’t think of a better word


rjnw
2017-11-13 02:08:37

per test maybe


rjnw
2017-11-13 02:09:01

depending on what we call running the code on all rows from one input file


ccshan
2017-11-13 02:18:48

Maybe it belongs to a level of detail that need not go into the abstract.


ccshan
2017-11-13 02:23:02

Please don’t let me stop you from writing.

Here’s a global writing issue: how to notate Hakaru programs in this paper? The notation in Figure 4 (“Key informal typing rules”) works for discussing simplification, and I propose that this figure be introduced in Section 2 and collect all constructions of Hakaru that we need to discuss in this paper. So for example, to talk about histogram and compilation, we should add bucket and let to this figure. Do we need to change the current notation?


samth
2017-11-13 02:54:42

currently working on writing in the Compilation section