
Can you put that command somewhere in the directory?

we are only 4–5x faster than linear regression in Haskell, which is similar to the difference on the clinical trial benchmark

ok, that makes more sense

Sorry for the total silence.

I have kept up to date with all that’s been done though. Happy with it all.

How should Section 6.3 change?

And I can’t “git pull”:

MacBook-Air-2:ppaml carette$ git pull
error: cannot lock ref 'refs/tags/clich?-lathered-stiffener-muckiest': Unable to create '/Users/carette/ppaml/.git/refs/tags/clich?-lathered-stiffener-muckiest.lock': Illegal byte sequence
From <https://github.iu.edu/ccshan/ppaml>
! [new tag] clich?-lathered-stiffener-muckiest -> clich?-lathered-stiffener-muckiest (unable to update local ref)

Any ideas?

--no-tags
seems to do the trick.
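For future reference, here is a minimal self-contained sketch of the workaround (the repo paths, `sometag`, and the temp directory are made up for illustration; it only assumes `--no-tags`, which both `git clone` and `git pull` accept in modern git — fetching with `--no-tags` skips `refs/tags/*` entirely, so the ref with the illegal byte sequence is never written locally):

```shell
# Demonstrate pulling while skipping all tags, in a throwaway repo pair.
set -e
tmp=$(mktemp -d)
git init -q "$tmp/origin"
git -C "$tmp/origin" -c user.email=a@b -c user.name=a \
    commit -q --allow-empty -m init
git -C "$tmp/origin" tag sometag          # a tag the clone should never see
git clone -q --no-tags "$tmp/origin" "$tmp/clone"
git -C "$tmp/clone" pull -q --no-tags     # fetches commits, skips refs/tags/*
git -C "$tmp/clone" tag                   # prints nothing: no tags fetched
```

Alternatively, setting `git config remote.origin.tagOpt --no-tags` in the clone makes the behavior permanent for that remote.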

avg time for linear regression for hakaru 0.00036337s

Ok, but that's 363μs, about 11x slower than the 33μs

Sorry, I compared against a linear regression run without runtime specializations; I can't run linear regression with runtime specializations right now due to a bug

without runtime specializations it was around 100μs

Ok but this is the same machine on which you previously got 33μs with runtime specialization, no?

yes

Ok I think I’ll just write 11x now.

that sounds good to me.
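For the record, the 11x figure is just the ratio of the two timings quoted above (a one-line sanity check, nothing from the paper):

```shell
# 0.00036337 s (Hakaru) divided by 33 microseconds ≈ 11
awk 'BEGIN { printf "%.1f\n", 0.00036337 / 0.000033 }'   # → 11.0
```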

I think the main item that I'd like to change is our list of contributions at the end of the introduction. As it stands, it seriously undersells what we've done. And the previous referees really dug into that.

Go ahead and revise (or add TODO listing what you want to mention or what referee comments you want to respond to) and I’ll react

I’ll revise - that’s more constructive than just adding TODOs.

After dinner.

Both revising and adding TODOs can clarify your intent.

Part of the intent is to emphasize that even though we make it all seem straightforward, it wasn't: lots of design was needed. Symbolic arbitrary-dimensional integrals, for example, are very hard to do on a computer, even though they seem just fine in paper math. I think the same can be said of Histogram.

Ok, I’ve rewritten the contributions part to my satisfaction. I’ve amped up (and added to) all our contributions. And I believe what I’ve written too.

a few things:

- Hoffman and Gelman is published in Journal of Machine Learning Research 15 (2014)

- We should cite Rao and Blackwell
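For context (my gloss, not from the chat): the Rao–Blackwell theorem presumably being cited says that conditioning an estimator $\delta(X)$ on a statistic $T$ never increases variance, which follows from the law of total variance:

```latex
\operatorname{Var}[\delta(X)]
  = \mathbb{E}\!\left[\operatorname{Var}[\delta(X)\mid T]\right]
  + \operatorname{Var}\!\left[\mathbb{E}[\delta(X)\mid T]\right]
  \;\ge\; \operatorname{Var}\!\left[\mathbb{E}[\delta(X)\mid T]\right].
```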

I believe the only remaining thing we’ve talked about is benchmarking “exact” inference in some other systems

Well, does anyone read Russian, or are we just going to cite Kolmogorov without reading it?

Should I ask Katie to help read it?

But “Wikipedia says Kolmogorov had idea X” is good enough for me, although that's also possibly true for most X


I created a submission

I don’t think trying to read that paper will be helpful

@carette You believe the histogram transformation hoists code?

@carette You wrote “on top of our scientific contributions, there are significant engineering contributions that should not be overlooked” so what are those scientific contributions and what are those engineering contributions? It seems that there should be two lists, or each contribution should be classified as either scientific or engineering?

@carette You wrote that unproduct is “modular” - what modularity do you have in mind?

I did not change @pravnar’s affiliation since that seemed like asking for trouble

Re: histogram - the first two rules in Fig. 7 push Fanout and Split out of an arbitrary context. Maybe ‘hoist’ is not the best word, but it sure moves code around a lot.

Re: scientific vs engineering. I would much rather not try to pin that down in the paper itself. If forced to, I would say that (1) and (2) are more science, (3) is a combination, and (4), (5), (6) are engineering. Many reviewers downplay engineering contributions too much.

It is modular in the sense that it (1) works for many products, not just one; (2) has no new code for doing distribution inference [it relies on the dimension-1 code]; and (3) does not work just at the top level of an expression.