
I don’t know how to make sense of this, but this is the output of psi on the linear-regression program ../../testcode/psisrc/lr50.psi:
p(a,b,invNoise) = 2.7228147365903951e+15·[-invNoise≤0]·[invNoise≠0]·e^((-1.3823361949553033e+04)·a·invNoise+(-2.3241504108563555e+03)·invNoise+(-3.7136498654593869e+02)·b·invNoise+-1275·a·b·invNoise+-21463·a²·invNoise+-503/20·b²·invNoise)·invNoise²⁶·⅟π·√̅6̅6̅6̅9̅4̅1̅/̅5
for this program:

def main(){
    dataX := readCSV("data/LinearRegression/dataX.csv");
    dataY := readCSV("data/LinearRegression/dataY.csv");
    n := dataX.length;
    assert(n==1000);
    assert(n==dataY.length);
    n = 50; // keep only the first 50 observations (hence lr50)
    invNoise := gamma(1,1);
    a := gauss(0,1/invNoise);
    b := gauss(5,10/3/invNoise);
    for i in [0..n){
        y := gauss(a*dataX[i]+b,1/invNoise);
        cobserve(y,dataY[i]); // condition on the observed response
    }
    return (a,b,invNoise);
}
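
Just to sanity-check the shape of that output (my reading, assuming gauss(mean, variance) as we use it above, and writing λ for invNoise and x_i, y_i for the first 50 CSV rows), the program denotes the unnormalized posterior

    p(a, b, \lambda \mid y_{1:50})
      \;\propto\;
      \underbrace{e^{-\lambda}}_{\lambda \sim \mathrm{gamma}(1,1)}
      \cdot \underbrace{\lambda^{1/2} e^{-\lambda a^{2}/2}}_{a \sim \mathcal{N}(0,\,1/\lambda)}
      \cdot \underbrace{\lambda^{1/2} e^{-3\lambda (b-5)^{2}/20}}_{b \sim \mathcal{N}(5,\,10/(3\lambda))}
      \cdot \prod_{i=1}^{50} \lambda^{1/2} e^{-\lambda (y_i - a x_i - b)^{2}/2}.

Collecting powers of λ gives λ^(1/2 + 1/2 + 50/2) = λ^26, which matches the invNoise²⁶ factor, and the exponent collapses to λ times a quadratic form in (a,b), which matches the long e^((…)·invNoise) term. As a spot check, the b² coefficient should be -(3/20 + 50/2) = -503/20, which is exactly what PSI printed. The huge leading constant and the [-invNoise≤0]·[invNoise≠0] brackets are presumably just the normalizer and the support of the gamma prior.
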
comparing to the comment in the review:

$ cat counterexample.psi
def main(n){
    x := array(n,0);
    x[uniformInt(0,n-1)] = uniform(0,1);
    x[uniformInt(0,n-1)] = 2*x[uniformInt(0,n-1)];
    return x[uniformInt(0,n-1)];
}
$ psi counterexample.psi
p(xᵢ|n) =
((-n²+n³)·[-1+xᵢ≤0]·[-xᵢ≤0]+(-n³+n⁴)·δ(0)[xᵢ]+1/2·[-2+xᵢ≤0]·[-xᵢ≤0]·n²)·[-n+1≤0]·[-⌊n⌋+n=0]·⅟n⁴
Pr[error|n] = [-1+n≤0]·[-n+1≠0]·[-⌊n⌋+n=0]+[-⌊n⌋+n≠0]
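
If I’m reading that density right (my own decomposition, not something PSI printed), then for integer n ≥ 1 it is the three-component mixture

    p(x_i \mid n)
      \;=\; \Big(1 - \tfrac{1}{n}\Big)\, \delta_0(x_i)
      \;+\; \tfrac{n-1}{n^{2}}\, \mathrm{Uniform}(0,1)(x_i)
      \;+\; \tfrac{1}{n^{2}}\, \mathrm{Uniform}(0,2)(x_i),

with weights summing to 1, which is what a case analysis over the four uniformInt index draws gives. The notable part is that PSI got there symbolically in n, without ever unrolling the array.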

IIRC, PSI achieves exact inference on Bayesian linear regression, but takes a long time to do it (see a figure in our submitted paper).
What the counterexample in the new Review E shows is that PSI can simplify programs over arrays without unrolling the arrays, as long as the program makes only a bounded number of random choices. In other words, it would be more accurate for us to say that PSI needs to unroll the random choices made inside an array in order to simplify them; see the sketch below.
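
If it helps the response, here is a hypothetical contrasting sketch (untested, just to illustrate the distinction): the same array, but with one random choice per element, so the number of choices grows with n and PSI would presumably have to unroll the loop, i.e. know n concretely, before it could simplify.

def main(n){
    x := array(n,0);
    // one uniform draw per element: the number of random choices is no longer
    // bounded, so simplification would require unrolling the loop for a concrete n
    for i in [0..n){
        x[i] = uniform(0,1);
    }
    return x[uniformInt(0,n-1)];
}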

Today I presented the histogram optimization to WG2.11. Jimmy Koppel pointed me to this possibly related paper: https://dl.acm.org/citation.cfm?doid=3152284.3133898

Cool!

Having looked quickly, I think they don’t do our optimization, but it is closely related and cites lots of other interesting work

How did the talk go? And can you share the slides?


I have not written any response at https://docs.google.com/document/d/16FiTwJW-xlHEbE2n5jrIhktUNjB5TAN80MISymf_bpI/edit?usp=sharing

I grouped and organized some things there

also it should now be accessible to @carette

I can confirm that I can see and edit. But I will not be able to do that until tomorrow - my slides for my morning talk are still not ready!

@samth I tried to strengthen our response to “This actually expresses a probabilistic model, not an inference algorithm.” What do you think?
I think this draft responds to everything we should, except @rjnw is checking “Did you use ./build-release.sh and passed the --nocheck flag?”

I made some more comments

I resolved one comment from you

On that one, I was thinking of a more specific reference

i.e. where in one of those papers the precise issue is discussed

Well… I mean, one of those papers is titled “Probabilistic Inference by Program Transformation in Hakaru (System Description)” and another is titled “Composing Inference Algorithms as Program Transformations”. And two other papers describe a particular transformation.

Sure, but is there a specific section we can point to that answers the reviewer’s question without saying “we wrote a whole paper on this”?

I recognize that we did indeed do that

So do you think it’s better to just refer to the first sentence of the abstract of http://homes.soic.indiana.edu/ccshan/rational/system.pdf and the first section of http://auai.org/uai2017/proceedings/papers/163.pdf

?