rjnw
2019-4-29 15:32:04

I don’t know how to make sense of this, but this is the output of PSI on the linear regression benchmark (../../testcode/psisrc/lr50.psi):

    p(a,b,invNoise) = 2.7228147365903951e+15·[-invNoise≤0]·[invNoise≠0]·e^((-1.3823361949553033e+04)·a·invNoise+(-2.3241504108563555e+03)·invNoise+(-3.7136498654593869e+02)·b·invNoise+(-1275)·a·b·invNoise+(-21463)·a²·invNoise+(-503/20)·b²·invNoise)·invNoise²⁶·⅟π·√(666941/5)

for the program

    def main(){
      dataX := readCSV("data/LinearRegression/dataX.csv");
      dataY := readCSV("data/LinearRegression/dataY.csv");
      n := dataX.length;
      assert(n==1000);
      assert(n==dataY.length);
      n = 50;
      invNoise := gamma(1,1);
      a := gauss(0,1/invNoise);
      b := gauss(5,10/3/invNoise);
      for i in [0..n){
        y := gauss(a*dataX[i]+b,1/invNoise);
        cobserve(y,dataY[i]);
      }
      return (a,b,invNoise);
    }

comparing to the counterexample in the review comment:

    $ cat counterexample.psi
    def main(n){
      x := array(n,0);
      x[uniformInt(0,n-1)] = uniform(0,1);
      x[uniformInt(0,n-1)] = 2*x[uniformInt(0,n-1)];
      return x[uniformInt(0,n-1)];
    }
    $ psi counterexample.psi
    p(xᵢ|n) = ((-n²+n³)·[-1+xᵢ≤0]·[-xᵢ≤0]+(-n³+n⁴)·δ(0)[xᵢ]+1/2·[-2+xᵢ≤0]·[-xᵢ≤0]·n²)·[-n+1≤0]·[-⌊n⌋+n=0]·⅟n⁴
    Pr[error|n] = [-1+n≤0]·[-n+1≠0]·[-⌊n⌋+n=0]+[-⌊n⌋+n≠0]
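(For reference, a forward-simulation sketch in Python of the generative model that lr50.psi encodes. This only samples from the prior and simulates y; it does not do the exact conditioning that cobserve performs, and it assumes PSI's gauss(μ,v) takes a variance and gamma(1,1) means shape 1, rate 1 — the function name prior_sample is mine.)

```python
import math
import random

def prior_sample(data_x, rng=random):
    """Forward simulation of the prior in lr50.psi (sketch, not PSI semantics).

    Assumptions: gamma(1,1) is shape=1, rate=1 (mean 1), and the second
    argument of gauss is a variance, so sqrt is taken before rng.gauss.
    The cobserve conditioning step is NOT modeled here.
    """
    inv_noise = rng.gammavariate(1, 1)               # invNoise := gamma(1,1)
    a = rng.gauss(0, math.sqrt(1 / inv_noise))       # a := gauss(0, 1/invNoise)
    b = rng.gauss(5, math.sqrt(10 / 3 / inv_noise))  # b := gauss(5, 10/3/invNoise)
    # y_i := gauss(a*dataX[i]+b, 1/invNoise), one per data point
    ys = [rng.gauss(a * x + b, math.sqrt(1 / inv_noise)) for x in data_x]
    return a, b, inv_noise, ys
```

Note that the number of random choices here grows with the data (one y per row), which is relevant to the unrolling discussion below.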


ccshan
2019-4-29 16:24:08

IIRC, PSI achieves exact inference on Bayesian linear regression, but takes a long time to do it (see a figure in our submitted paper).

What the counterexample in the new Review E shows is that PSI can simplify arrays without unrolling them as long as the arrays require a bounded number of random choices. In other words, it would be more accurate for us to say that PSI needs to unroll random choices in an array in order to simplify them.
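(To make the contrast concrete, here is a Monte Carlo translation of the Review E counterexample into Python — the name counterexample is mine. It makes exactly four random choices no matter how large n is, and PSI's exact density above puts mass (n⁴−n³)/n⁴ = (n−1)/n on 0, which the simulation reproduces.)

```python
import random

def counterexample(n, rng=random):
    """Monte Carlo sketch of counterexample.psi from Review E.

    Exactly four random choices are drawn regardless of n, which is why
    PSI can simplify the array without unrolling all n cells.
    """
    x = [0.0] * n
    x[rng.randrange(n)] = rng.uniform(0, 1)        # x[uniformInt(0,n-1)] = uniform(0,1)
    x[rng.randrange(n)] = 2 * x[rng.randrange(n)]  # x[uniformInt] = 2*x[uniformInt]
    return x[rng.randrange(n)]                     # return x[uniformInt(0,n-1)]
```

For n = 4 the empirical mass at 0 comes out near 3/4, matching the δ(0) coefficient in PSI's exact answer.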


ccshan
2019-4-29 17:37:39

Today I presented the histogram optimization to WG2.11. Jimmy Koppel pointed me to this possibly related paper: https://dl.acm.org/citation.cfm?doid=3152284.3133898


samth
2019-4-29 17:44:12

Cool!


samth
2019-4-29 17:44:43

Having looked quickly, I think they don’t do our optimization, but it is very related and cites lots of other interesting work


samth
2019-4-29 17:44:59

How did the talk go? And can you share the slides?



samth
2019-4-29 21:02:50

I grouped and organized some things there


samth
2019-4-29 21:22:52

also it should now be accessible to @carette


carette
2019-4-29 21:34:16

I can confirm that I can see and edit. But I will not be able to do that until tomorrow - my slides for my morning talk are still not ready!


ccshan
2019-4-30 01:52:31

@samth I tried to strengthen our response to “This actually expresses a probabilistic model, not an inference algorithm.” What do you think?

I think this draft responds to everything we should, except @rjnw is checking “Did you use ./build-release.sh and passed the --nocheck flag?”


samth
2019-4-30 01:56:19

I made some more comments


ccshan
2019-4-30 01:57:01

I resolved one comment from you


samth
2019-4-30 01:58:45

On that one, I was thinking of a more specific reference


samth
2019-4-30 01:59:12

ie where in one of those papers the precise issue is discussed


ccshan
2019-4-30 02:03:37

Well… I mean, one of those papers is titled “Probabilistic Inference by Program Transformation in Hakaru (System Description)” and another is titled “Composing Inference Algorithms as Program Transformations”. And two other papers describe a particular transformation.


samth
2019-4-30 02:08:51

Sure, but is there a specific section we can point to that answers the reviewer’s question without saying “we wrote a whole paper on this”?


samth
2019-4-30 02:09:06

I recognize that we did indeed do that


ccshan
2019-4-30 02:09:29

So do you think it’s better to just refer to the first sentence of the abstract of http://homes.soic.indiana.edu/ccshan/rational/system.pdf and the first section of http://auai.org/uai2017/proceedings/papers/163.pdf


ccshan
2019-4-30 02:09:31

?