
I’ve added my comments. In particular, the response regarding “exact inference” was… not exact? I have changed it to: “We do use exact numbers for exact inference during both the simplification and histogram phases — no floating point is used there. When we obtain a closed form from just those phases, it is indeed ‘exact’. In later stages, we agree that what we call ‘exact inference’ is only exact under the pretense that floating-point computations on real numbers are exact.”

I have also added ‘symbolically sized’ to
“algorithms whose idiomatic expression requires symbolically sized random array variables that are latent or whose likelihood is conjugate”

Lastly: do we want to start our reply with ‘Wow!’ ? I agree with the sentiment, but if the response will be given ‘in context’ (i.e. as currently embedded), it doesn’t read well, as the referee just made a statement of fact [which we agree with] and we start with ‘Wow!’. I found it jarring.

I revised some more. @rjnw Any news?

Sorry, I was away. I did not use the build-release flag; I have changed the scripts and will run again.

I’m happy with the review as it now stands [modulo the missing bit, of course]

I just ran the clinical trial model on data size 70: the time improved from 450s to 200s. For linear regression with size 900, it was 498s earlier and is now 227s.

Thanks. With both build-release.sh and --nocheck, right?

yes

Would you mind trying two more problem sizes for each of the two models, such as data sizes 50 and 90 for clinical trial and data sizes 300 and 600 for linear regression? Then we can respond that the running time has indeed dropped to about 45% of what it was, but that the apparently superlinear time complexity persists.

ct50: 47s
ct70: 204s
ct90: 651s

lr300: 9s
lr600: 64s
lr900: 230s
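For what it’s worth, a quick log-log fit of the timings above (just a sanity-check sketch, assuming a power-law model t ≈ c·n^k; the helper function is ours, not part of the benchmark scripts) puts both models well above linear:

```python
import math

def fit_exponent(sizes, times):
    """Least-squares slope of log(time) vs log(size),
    i.e. the exponent k in t ~ c * n**k."""
    xs = [math.log(n) for n in sizes]
    ys = [math.log(t) for t in times]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Timings reported above
ct = fit_exponent([50, 70, 90], [47, 204, 651])    # clinical trial
lr = fit_exponent([300, 600, 900], [9, 64, 230])   # linear regression
print(f"clinical trial exponent ~ {ct:.1f}")       # roughly 4.5
print(f"linear regression exponent ~ {lr:.1f}")    # roughly 2.9
```

So the speedup is a constant factor; the growth rate itself looks unchanged, which supports the “superlinear complexity persists” wording in the reply.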

Great! Thanks. I finished writing the response. I’ll send it to Francois in a couple of hours.