rjnw
2018-2-1 14:59:34

for the first one, Racket 20 and Haskell 10; for the second one, both 10


ccshan
2018-2-1 14:59:59

20 what? 10 what?


rjnw
2018-2-1 15:00:05

sorry, data sets


rjnw
2018-2-1 15:00:16

one initialization per data set


rjnw
2018-2-1 15:00:33

I will double-check if Haskell is doing the same


rjnw
2018-2-1 15:01:35

yes, once per data set for 10 data sets, and averaged across all of them


ccshan
2018-2-1 15:49:03

I see, so I would try two things (which can be done in parallel): average more runs, and make sure that Haskell and Racket are computing the same numbers up to rounding error
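
(A minimal Haskell sketch of the second suggestion: dump the numbers each backend computes and compare them element-wise within a small tolerance. The file names, dump format, and the 1e-12 tolerance here are illustrative assumptions, not details from the conversation.)

```haskell
-- Sketch: compare two whitespace-separated dumps of doubles within a tolerance.
-- File names, dump format, and the 1e-12 tolerance are illustrative assumptions.
import System.Environment (getArgs)

approxEq :: Double -> Double -> Bool
approxEq x y = abs (x - y) <= 1e-12 * max 1 (max (abs x) (abs y))

main :: IO ()
main = do
  [rktDump, hsDump] <- getArgs
  xs <- map read . words <$> readFile rktDump
  ys <- map read . words <$> readFile hsDump
  let mismatches = [ (i, x, y)
                   | (i, x, y) <- zip3 [0 :: Int ..] xs ys
                   , not (approxEq x y) ]
  if null mismatches
    then putStrLn "backends agree up to rounding error"
    else mapM_ print (take 10 mismatches)
```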


rjnw
2018-2-1 19:17:38

(message content missing from the export)

rjnw
2018-2-1 19:18:24

I verified rkt and hs without categorical; they are exactly the same up to 0.0000000000000001


rjnw
2018-2-1 19:20:21

I verified categorical separately by running it 10 million times and comparing probabilities; next I will compare the outer loop of Gibbs
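
(For reference, a rough Haskell sketch of that kind of check: draw 10 million samples from a categorical sampler and compare the empirical frequencies against the normalized weights. The inverse-CDF sampler, RNG, and weights below are stand-ins, not the actual rkt/hs code.)

```haskell
-- Sketch: empirical test of a categorical sampler.
-- Draw n samples and compare observed frequencies with the normalized weights.
-- The weights and the inverse-CDF sampler are illustrative stand-ins.
import System.Random (randomRIO)
import qualified Data.Map.Strict as M
import Control.Monad (foldM)

categorical :: [Double] -> IO Int
categorical ws = do
  u <- randomRIO (0, sum ws)
  return (pick u 0 ws)
  where
    pick _ i [_]      = i        -- last bucket absorbs rounding error
    pick u i (w:rest) = if u <= w then i else pick (u - w) (i + 1) rest
    pick _ i []       = i

main :: IO ()
main = do
  let ws = [0.1, 0.2, 0.3, 0.4]             -- made-up weights
      n  = 10 * 1000 * 1000 :: Int          -- "10 million" draws, as above
  counts <- foldM (\acc _ -> do
                     i <- categorical ws
                     return $! M.insertWith (+) i (1 :: Int) acc)
                  M.empty [1 .. n]
  let freq i = fromIntegral (M.findWithDefault 0 i counts) / fromIntegral n
  mapM_ (\i -> print (freq i, ws !! i / sum ws)) [0 .. length ws - 1]
```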


rjnw
2018-2-1 22:26:50

(message content missing from the export)

ccshan
2018-2-1 23:38:25

@rjnw Hmm, so what changed between your figures at 2:17PM and 5:26PM, other than JAGS being gone? Why is Racket at 5:26PM no longer doing worse?


ccshan
2018-2-1 23:38:34

(Thanks for checking numerical equality.)


rjnw
2018-2-1 23:40:05

I will add JAGS later today, as this was done outside Docker. There was a bug in normalizing the array before calling categorical in my code.


rjnw
2018-2-1 23:41:19

I had checks for everything except that :persevere:


ccshan
2018-2-1 23:42:05

Yay, so did you do the “divide (i.e., subtract) by max” thing in categorical?
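
(For anyone reading along: assuming the weights live in log space, as is usual inside a Gibbs sweep, the trick referred to is to subtract the maximum log-weight before exponentiating, which is the same as dividing the raw weights by their maximum and keeps exp in a safe range. A minimal sketch, not the code from either backend:)

```haskell
-- Sketch of the "divide (i.e., subtract) by max" trick for log-weights.
-- Subtracting the maximum log-weight before exponentiating leaves the
-- relative weights (hence the categorical distribution) unchanged, but
-- prevents exp from under- or overflowing.
stableWeights :: [Double] -> [Double]
stableWeights lws = map (\lw -> exp (lw - m)) lws
  where m = maximum lws          -- the largest weight becomes exactly 1

-- e.g. stableWeights [-1000, -1001] = [1.0, 0.3678...],
-- whereas map exp [-1000, -1001] underflows to [0.0, 0.0].
```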


ccshan
2018-2-2 00:57:54

The latest plot is promising.